EA - Debate series: should we push for a pause on the development of AI? by Ben West
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the rationalist and EA communities into audio. This is: Debate series: should we push for a pause on the development of AI?, published by Ben West on September 8, 2023 on The Effective Altruism Forum.

In March of this year, 30,000 people, including leading AI figures like Yoshua Bengio and Stuart Russell, signed a letter calling on AI labs to pause the training of AI systems. While it seems unlikely that this letter will succeed in pausing the development of AI, it did draw substantial attention to slowing AI as a strategy for reducing existential risk.

While initial work has been done on this topic (this sequence links to some relevant work), many areas of uncertainty remain. I've asked a group of participants to discuss and debate various aspects of the value of advocating for a pause on the development of AI on the EA Forum, in a format loosely inspired by Cato Unbound.

On September 16, we will launch with three posts:
David Manheim will share a post giving an overview of what a pause would include, how a pause would work, and some possible concrete steps forward
Nora Belrose will post outlining some of the risks of a pause
Thomas Larsen will post a concrete policy proposal

After this, we will release one post per day, each from a different author. Many of the participants will also be commenting on each other's work. Responses from Forum users are encouraged; you can share your own posts on this topic or comment on the posts from participants.
You'll be able to find the posts by looking at this tag (remember that you can subscribe to tags to be notified of new posts). I think it is unlikely that this debate will result in a consensus agreement, but I hope that it will clarify the space of policy options, why those options may be beneficial or harmful, and what future work is needed.

People who have agreed to participate

These are in random order, and they're participating as individuals, not representing any institution:
David Manheim (Technion Israel)
Matthew Barnett (Epoch AI)
Zach Stein-Perlman (AI Impacts)
Holly Elmore (AI pause advocate)
Buck Shlegeris (Redwood Research)
Anonymous researcher (Major AI lab)
Anonymous professor (Major university)
Rob Bensinger (Machine Intelligence Research Institute)
Nora Belrose (EleutherAI)
Thomas Larsen (Center for AI Policy)
Quintin Pope (Oregon State University)

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org