EA - Future Matters #5: supervolcanoes, AI takeover, and What We Owe the Future by Pablo
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #5: supervolcanoes, AI takeover, and What We Owe the Future, published by Pablo on September 14, 2022 on The Effective Altruism Forum.

Even if we think the prior existence view is more plausible than the total view, we should recognize that we could be mistaken about this and therefore give some value to the life of a possible future person. The number of human beings who will come into existence only if we can avoid extinction is so huge that even with that relatively low value, reducing the risk of human extinction will often be a highly cost-effective strategy for maximizing utility, as long as we have some understanding of what will reduce that risk.
Katarzyna de Lazari-Radek & Peter Singer

Future Matters is a newsletter about longtermism. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform, and follow on Twitter. Future Matters is also available in Spanish.

Research

William MacAskill's What We Owe the Future was published, reaching the New York Times bestseller list in its first week and generating a deluge of media coverage of longtermism. We strongly encourage readers to get a copy of the book, which is filled with new research, ideas, and framings, even for people already familiar with the terrain. In the next section, we provide an overview of the coverage the book has received so far.

In Samotsvety's AI risk forecasts, Eli Lifland summarizes recent predictions by a group of seasoned forecasters on AI takeover, AI timelines, and transformative AI. In aggregate, the group places 38% on AI existential catastrophe conditional on AGI being developed by 2070, and 25% on existential catastrophe via misaligned AI takeover by 2100. Roughly four fifths of their overall AI risk comes from AI takeover. They put 32% on AGI being developed in the next 20 years.

John Halstead released a book-length report on climate change and longtermism and published a summary of it on the EA Forum. The report offers an up-to-date analysis of the existential risk posed by global warming. One of the most important takeaways is that extreme warming seems significantly less likely than previously thought: the probability of >6°C warming was thought to be 10% a few years ago, whereas it now looks
