EA - Longtermists Should Work on AI - There is No "AI Neutral" Scenario by simeon c

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Longtermists Should Work on AI - There is No "AI Neutral" Scenario, published by simeon c on August 7, 2022 on The Effective Altruism Forum.

Summary: If you're a longtermist (i.e. you believe that most of the moral value lies in the future), and you want to prioritize impact in your career choice, you should strongly consider either working on AI directly, or working on things that will positively influence the development of AI.

Epistemic Status: The claim is strong, but I'm fairly confident (>75%) about it. I've spent 3 months working as a SERI fellow thinking about whether bio risks could kill humanity (including info-hazardy stuff) and how that risk profile compares with the AI safety one, which I think is the biggest crux of this post. I've spent at least a year thinking about advanced AIs and their implications for everything, including much of today's decision-making. I've reoriented my career towards AI based on these thoughts.

The Case for Working on AI

If you care a lot about the very far future, you probably want two things to happen: first, you want to ensure that humanity survives at all; second, you want to increase the growth rate of the good things that matter to humanity - for example, wealth, happiness, knowledge, or anything else that we value. If we increase the growth rate earlier and by more, this will have massive ripple effects on the very longterm future. A minor increase in the growth rate now means a huge difference later. Consider the spread of Covid: minor differences in the R-number had huge effects on how fast the virus could spread and how many people eventually caught it. So if you are a longtermist, you should want to increase the growth rate of whatever you care about as early as possible, and as much as possible. For example, if you think that every additional happy life in the universe is good, then you should want the number of happy humans in the universe to grow as fast as possible. AGI is likely to be able to help with this, since it could create a state of abundance and enable humanity to quickly spread across the universe through much faster technological progress.

AI is directly relevant to both longterm survival and longterm growth. When we create a superintelligence, there are three possibilities. Either:

The superintelligence is misaligned and it kills us all
The superintelligence is misaligned with our own objectives but is benign
The superintelligence is aligned, and therefore can help us increase the growth rate of whatever we care about.

Longtermists should, of course, be eager to prevent the development of a destructive misaligned superintelligence. But they should also be strongly motivated to bring about the development of an aligned, benevolent superintelligence, because increasing the growth rate of whatever we value (knowledge, wealth, resources...) will have huge effects into the longterm future. Some AI researchers focus more on the 'carrot' of aligned benevolent AI, others on the 'stick' of existential risk. But the point is, AI will likely either be extremely good or extremely bad - it's difficult to be AI-neutral.

I want to emphasize that my argument only applies to people who want to strongly prioritize impact. It's fine for longtermists to choose not to work on AI for personal reasons. Most people value things other than impact, and big career transitions can be extremely costly. I just think that if longtermists really want to prioritize impact above everything else, then AI-related work is the best thing for (most of) them to do; and if they want to work on other things for personal reasons, they shouldn't let motivated reasoning tempt them into believing that they are working on the most impactful thing.

Objections

Here are some reasons why you might be unconvinced by this argument, along with reasons why I find th...
