EA - Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest by Jason Schukraft
The Nonlinear Library: EA Forum - A Podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on September 30, 2023 on The Effective Altruism Forum.

Introduction

In March 2023, we launched the Open Philanthropy AI Worldviews Contest. The goal of the contest was to surface novel considerations that could affect our views on the timeline to transformative AI and the level of catastrophic risk that transformative AI systems could pose. We received 135 submissions. Today we are excited to share the winners of the contest.

But first: We continue to be interested in challenges to the worldview that informs our AI-related grantmaking. To that end, we are awarding a separate $75,000 prize to the Forecasting Research Institute (FRI) for their recently published writeup of the 2022 Existential Risk Persuasion Tournament (XPT). This award falls outside the confines of the AI Worldviews Contest, but the recognition is motivated by the same principles that motivated the contest. We believe that the results from the XPT constitute the best recent challenge to our AI worldview.

FRI Prize ($75k)

Existential Risk Persuasion Tournament by the Forecasting Research Institute

AI Worldviews Contest Winners

First Prizes ($50k)

AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years by Basil Halperin, Zachary Mazlish, and Trevor Chow

Evolution provides no evidence for the sharp left turn by Quintin Pope (see the LessWrong version to view comments)

Second Prizes ($37.5k)

Deceptive Alignment is