EA - Effective altruism is no longer the right name for the movement, by ParthThaya

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruism is no longer the right name for the movement, published by ParthThaya on August 31, 2022 on The Effective Altruism Forum.

TL;DR

- As some have already argued, the EA movement would be more effective in convincing people to take existential risks seriously by focusing on how these risks will kill them and everyone they know, rather than on how they need to care about future people.
- Trying to prevent humanity from going extinct does not match people’s commonsense definition of altruism.
- This mismatch causes EA to filter out two groups of people: 1) people who are motivated to prevent existential risks for reasons other than caring about future people; 2) altruistically motivated people who want to help those less fortunate, but are repelled by EA’s focus on longtermism.
- We need an existential risk prevention movement that people can join without having to rethink their moral ideas to include future people, and we need an effective altruism movement that people can join without being told that the most altruistic endeavor is to try to minimize existential risks.

Addressing existential risk is not an altruistic endeavor

In what is currently the fourth highest-voted EA Forum post of all time, Scott Alexander proposes that EA could talk about existential risk without first bringing up the philosophical ideas of longtermism:

"If you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know.

"As a pitch to get people to care about something, this is a pretty strong one. But right now, a lot of EA discussion about this goes through an argument that starts with 'did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?' Regardless of whether these statements are true, or whether you could eventually convince someone of them, they're not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know.

"The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like 'the hinge of history', 'the most important century' and 'the precipice' all point to the idea that existential risk is concentrated in the relatively near future - probably before 2100. The average biosecurity project being funded by the Long-Term Future Fund or the FTX Future Fund is aimed at preventing pandemics in the next 10 or 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they're good is that if there's a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know."

I agree with Scott here. Based on the reaction on the forum, a lot of others do as well. So, let’s read that last sentence again: “if there's a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know”.
Notice that this is not an altruistic concern – it is a concern of survival and well-being. I mean, sure, you could make the case that not wanting the world to end is altruistic because you care about the billions of people currently living and the potential trillions of people who could exist in the future. But chances are, if you’re worried about the world ending, what’s actually driving you is a basic human desire for you and your loved ones to live and flourish. I share the longtermists’ concerns about bio-risk, un...
