EA - Project ideas for making transformative AI go well, other than by working on alignment, by Lukas Finnveden

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project ideas for making transformative AI go well, other than by working on alignment, published by Lukas Finnveden on January 4, 2024 on The Effective Altruism Forum.

This is a series of posts with lists of projects that it could be valuable for someone to work on. The unifying theme is that they are projects that:

- Would be especially valuable if transformative AI is coming in the next 10 years or so.
- Are not primarily about controlling AI or aligning AI to human intentions.[1] Most of the projects would be valuable even if we were guaranteed to get aligned AI. Some of the projects would be especially valuable if we were inevitably going to get misaligned AI.

The posts contain some discussion of how important it is to work on these topics, but not a lot. For previous discussion (especially of the objection "Why not leave these issues to future AI systems?"), see the section "How ITN are these issues?" from my previous memo on some neglected topics.

The lists are definitely not exhaustive. Failure to include an idea doesn't necessarily mean I wouldn't like it. (Similarly, although I've made some attempts to link to previous writings when appropriate, I'm sure to have missed a lot of good previous content.)

There's a lot of variation in how sketched out the projects are. Most of the projects just have some informal notes and would require more thought before someone could start executing. If you're potentially interested in working on any of them and you could benefit from more discussion, I'd be excited if you reached out to me![2]

There's also a lot of variation in the skills needed for the projects. If you're looking for projects that are especially suited to your talents, you can search the posts for any of the following tags (including brackets): [ML] [Empirical research] [Philosophical/conceptual] [survey/interview] [Advocacy] [Governance] [Writing] [Forecasting]

The projects are organized into the following categories (which are in separate posts). Feel free to skip to whatever you're most interested in.

Governance during explosive technological growth

It's plausible that AI will lead to explosive economic and technological growth. Our current methods of governance can barely keep up with today's technological advances. Speeding up the rate of technological growth by 30x+ would cause huge problems and could lead to rapid, destabilizing changes in power. This section is about trying to prepare the world for this: either generating policy solutions to problems we expect to appear, or addressing the meta-level problem of how we can coordinate to tackle this in a better and less rushed manner. A favorite direction is to develop norms/proposals for how states and labs should act under the possibility of an intelligence explosion.

Epistemics

This is about helping humanity get better at reaching correct and well-considered beliefs on important issues. If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. I think we have an opportunity to affect how it's used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs. A couple of favorite projects are: create an organization that gets started with using AI for investigating important questions, or develop & advocate for legislation against bad persuasion.

Sentience and rights of digital minds

It's plausible that there will soon be digital minds that are sentient and deserving of rights. This raises several important issues that we don't know how to deal with. It seems tractable both to make progress in understanding these issues and in implementing policies that reflect this understanding. A favorite direction is to take existing ideas for what labs could be doing and spell out...