EA - Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk by Pablo
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #6: FTX collapse, value lock-in, and counterarguments to AI x-risk, published by Pablo on December 30, 2022 on The Effective Altruism Forum.

[T]he sun with all the planets will in time grow too cold for life, unless indeed some great body dashes into the sun and thus gives it fresh life. Believing as I do that man in the distant future will be a far more perfect creature than he now is, it is an intolerable thought that he and all other sentient beings are doomed to complete annihilation after such long-continued slow progress.

Charles Darwin

Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform, and follow on Twitter. Future Matters is also available in Spanish.

A message to our readers

Welcome back to Future Matters. We took a break during the autumn, but will now be returning to our previous monthly schedule. Future Matters would like to wish all our readers a happy new year!

The most significant development during our hiatus was the collapse of FTX and the fall of Sam Bankman-Fried, until then one of the largest and most prominent supporters of longtermist causes. We were shocked and saddened by these revelations, and appalled by the allegations and admissions of fraud, deceit, and misappropriation of customer funds. As others have stated, fraud in the service of effective altruism is unacceptable, and we condemn these actions unequivocally and support authorities’ efforts to investigate and prosecute any crimes that may have been committed.

Research

A classic argument for existential risk from superintelligent AI goes something like this: (1) superintelligent AIs will be goal-directed; (2) goal-directed superintelligent AIs will likely pursue outcomes that we regard as extremely bad; therefore (3) if we build superintelligent AIs, the future will likely be extremely bad. Katja Grace’s Counterarguments to the basic AI x-risk case identifies a number of weak points in each of the premises of the argument. We refer interested readers to our conversation with Katja below for more discussion of this post, as well as to Erik Jenner and Johannes Treutlein’s Responses to Katja Grace’s AI x-risk counterarguments.

The key driver of AI risk is that we are rapidly developing more and more powerful AI systems while making relatively little progress in ensuring they are safe. Katja Grace’s Let’s think about slowing down AI argues that the AI risk community should consider advocating for slowing down AI progress. She rebuts some of the objections commonly levelled against this strategy: e.g. to the charge of infeasibility, she points out that many technologies (human gene editing, nuclear energy) have been halted or drastically curtailed due to ethical and/or safety concerns.
In the comments, Carl Shulman argues that there is not currently enough buy-in from governments or the public to take more modest safety and governance interventions, so it doesn’t seem wise to advocate for such a dramatic and costly policy: “It's like climate activists in 1950 responding to difficulties passing funds for renewable energy R&D or a carbon tax by proposing that the sale of automobiles be banned immediately. It took a lot of scientific data, solidification of scientific consensus, and communication/movement-building over time to get current measures on climate change.”

We enjoyed Kelsey Piper's review of What We Owe the Future, not necessarily because we agree with her criticisms, but because we thought the review managed to identify, and articulate very clearly, what we take to be the main c...
