EA - Safety timelines: How long will it take to solve alignment? by Esben Kran
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Safety timelines: How long will it take to solve alignment?, published by Esben Kran on September 19, 2022 on The Effective Altruism Forum.

TL;DR: Everyone discusses when AGI doom will arrive, but no one discusses how far along we are towards safe AGI. We want to readjust this focus so that we can measure progress, guide and facilitate research, and evaluate AI safety projects for impact. We also ask you to add your views to this survey.

Prelude

It was a hot London summer day, and fourteen people had gathered to discuss the future of AI safety. To ascertain the mission criticality of our endeavor, we drew everyone's AGI timelines on a whiteboard. The product of our estimates was close to the Metaculus estimate, and some expressed concern at the stress of impending doom. We then drew our expected timelines for when alignment would be solved, and found that very few had ever analyzed this question. Those who did express a timeline placed its end at 15 years hence, in 2037. And I thought this was a remarkably hopeful message!

A case for hope

Let us be optimistic and take the median arrival of the solution to alignment to be 2037, modeled as a Gaussian with a standard deviation of 10 years. By sampling from the probability mass of the Metaculus forecast for AGI and from our "safety timeline" (or "alignment solution timeline"), we can estimate the probability that alignment will be solved before an AGI is released: P(solution=75/85/95% > Will a panel of 15 recognized experts in AI safety expect a pro...
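For the curious, here is a minimal Monte Carlo sketch of that calculation in Python. The safety-timeline parameters (median 2037, standard deviation 10 years) come from the post; the Gaussian stand-in for the Metaculus AGI forecast, including its median and spread, is purely illustrative and not the actual community forecast.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Safety timeline from the post: median solution year 2037, sd 10 years.
solution_year = rng.normal(loc=2037, scale=10, size=n)

# AGI timeline: illustrative Gaussian stand-in for the Metaculus forecast;
# the median (2040) and spread (12 years) are placeholder values only.
agi_year = rng.normal(loc=2040, scale=12, size=n)

# Monte Carlo estimate of P(alignment solved before AGI is released).
p_solution_first = (solution_year < agi_year).mean()
print(f"P(solution before AGI) ~= {p_solution_first:.2f}")

With these placeholder parameters the estimate comes out near a coin flip; plugging in the real Metaculus distribution would shift it accordingly.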
