EA - Why we may expect our successors not to care about suffering by Jim Buhler

The Nonlinear Library: EA Forum - A Podcast by The Nonlinear Fund


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why we may expect our successors not to care about suffering, published by Jim Buhler on July 11, 2023 on The Effective Altruism Forum.

(Probably the most important post of this sequence.)

Summary: Some values are less adapted to the “biggest potential futures” than others (see my previous post), in the sense that they may constrain how one should go about colonizing space, making them less competitive in a space-expansion race. The preference for reducing suffering is one example of a preference that seems particularly likely to be maladapted and selected against. It forces suffering-concerned agents to make trade-offs between preventing suffering and increasing their ability to create more of what they value. Meanwhile, those who don’t care about suffering don’t face this trade-off and can focus on optimizing for what they value without worrying about the suffering they might (in)directly cause. Therefore, we should - all else equal - expect the “grabbiest” civilizations/agents to have relatively low levels of concern for suffering, including humanity (if it becomes grabby).

Call this the Upside-focused Colonist Curse (UCC). In this post, I explain this UCC dynamic in more detail using an example. Then, I argue that the more significant this dynamic is (relative to competing others), the more we should prioritize s-risks over other long-term risks, and soon.

The humane values, the positive utilitarians, and the disvalue penalty

Consider the concept of a disvalue penalty: the (subjective) amount of disvalue a given agent would have to be responsible for in order to bring about the highest (subjective) amount of value they can. The story below should make the idea more intuitive.

Say there are only two types of agents:
- those endorsing “humane values” (the HVs), who disvalue suffering and value things like pleasure;
- the “positive utilitarians” (the PUs), who value things like pleasure but disvalue nothing.

These two groups are in competition to control their shared planet, or solar system, or light cone, or whatever.

The HVs estimate that they could colonize a maximum of [some high number] of stars and fill those with a maximum of [some high number] units of value. However, they also know that increasing their civilization’s ability to create value also increases s-risks (in absolute terms). They therefore face a trade-off between maximizing value and preventing suffering, which incentivizes them to be cautious about how they colonize space. If they were to purely optimize for more value, without watching for the suffering they might (directly or indirectly) become responsible for, they predict they would cause x units of suffering for every 10 units of value they create. This is the HVs’ disvalue penalty: x/10 (a ratio; a high ratio means a heavy penalty).

The PUs, however, do not care about the suffering they might be responsible for. They don’t face the trade-off the HVs face and have no incentive to be cautious like them. They can - right away - start colonizing as many stars as possible to eventually fill them with value, without worrying about anything else.
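To restate the concept compactly, here is a minimal formalization (mine, not the post’s; the symbols DP, D, and V_max are introduced purely for illustration):

```latex
% Illustrative formalization of the disvalue penalty (not from the original post).
% V_max(A): the most value agent A could bring about if it optimized for value alone.
% D(A):     the disvalue A would thereby be responsible for.
\[
  \mathrm{DP}(A) \;=\; \frac{D(A)}{V_{\max}(A)},
  \qquad\text{so, in the story above,}\qquad
  \mathrm{DP}(\mathrm{HV}) \;=\; \frac{x}{10}.
\]
```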
The PUs’ disvalue penalty is 0.

[Image 1: Niander Wallace, a character from Blade Runner 2049 who can be thought of as a particularly baddy PU.]

Because the HVs have a higher disvalue penalty (incentivizing them to be more cautious), humane values are less “grabby” than those of the PUs. While the PUs can happily spread without fearing any downside, the HVs would want to spend some time and resources thinking about how to avoid causing too much suffering while colonizing space (and about whether it’s worth colonizing at all), since suffering would hurt their total utility. This means, according to the Grabby Values Selection Thesis, that we should - all else equal - expect PU-ish values to be s...
