EA - Why I think it's important to work on AI forecasting by Matthew Barnett
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I think it's important to work on AI forecasting, published by Matthew Barnett on February 27, 2023 on The Effective Altruism Forum.

Note: this post is a transcript of a talk I gave at EA Global: Bay Area 2023.

These days, a lot of effective altruists are working on trying to make sure AI goes well. But I often worry that, as a community, we don't yet have a clear picture of what we're really working on.

The key problem is that predicting the future is very difficult, and in general, if you don't know what the future will look like, it's usually hard to be sure that any intervention we make now will turn out to be highly valuable in hindsight.

When EAs imagine the future of AI, I think a lot of us tend to have something like the following picture in our heads. At some point, maybe 5, 15, or 30 years from now, some AI lab somewhere is going to build AGI. This AGI is going to be very powerful in a lot of ways. Either we're going to succeed in aligning it, and the future will turn out to be bright and wonderful, or we'll fail, and the AGI will make humanity go extinct; it's not yet clear which of these two outcomes will happen.

Alright, so that's an oversimplified picture. There's lots of disagreement in our community about specific details in this story. For example, we sometimes talk about whether there will be one AGI or several, or about whether there will be a fast takeoff or a slow takeoff. But even if you're confident about some of these details, I think there are plausibly some huge open questions about the future of AI that perhaps no one understands very well.

Take the question of what AGI will look like once it's developed. If you had asked an informed observer in 2013 what AGI would look like, I think it's somewhat likely they'd have guessed it would be an agent that we'll program directly to search through a tree of possible future actions and select the one that maximizes expected utility, except using some very clever heuristics that allow it to do this in the real world. In 2018, if you had asked EAs what AGI would look like, a decent number of people would have told you that it will be created using some very clever deep reinforcement learning trained in a really complex and diverse environment. And these days, in 2023, if you ask EAs what they expect AGI to look like, a fairly high fraction of people will say that it will look like a large language model: something like ChatGPT but scaled up dramatically, trained on more than one modality, and using a much better architecture.

That's just my impression of how people's views have changed over time. Maybe I'm completely wrong about this. But the rough sense I've gotten while in this community is that people will often cling to a model of what future AI will be like, a model which frequently changes over time. And at any particular time, people will often be quite overconfident in their exact picture of AGI.

In fact, I think the state of affairs is even worse than how I've described it so far. I'm not even sure if this particular question about AGI is coherent. The term "AGI" makes it sound like there will be some natural class of computer programs called "general AIs" that are sharply distinguished from this other class of programs called "narrow AIs", and that at some point – in fact, on a particular date – we will create the "first" AGI.
I'm not really sure that story makes much sense.

The question of what future AI will look like is a huge one, and getting it wrong could make the difference between a successful research program and one that never goes anywhere. And yet it seems to me that, as of 2023, we still don't have very strong reasons to think that the way we think about future AI will end up being right on many of the basic details. In general, I think that uncertainty about the future of ...
