EA - Concrete projects for reducing existential risk by Buhl

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concrete projects for reducing existential risk, published by Buhl on June 21, 2023 on The Effective Altruism Forum.

This is a blog post, not a research report, meaning it was produced quickly and does not meet Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.

Super quick summary

This is a list of twenty projects that we (Rethink Priorities' Existential Security Team) think might be especially promising for reducing existential risk, based on our very preliminary and high-level research to identify and compare projects.

You can see an overview of the full list here. Here are five ideas we (tentatively) think seem especially promising:

- Improving info/cybersec at top AI labs
- AI lab coordination
- Field building for AI policy
- Facilitating people's transition from AI capabilities research to AI safety research
- Finding market opportunities for biodefence-relevant technologies

Introduction

What is this list?

This is a list of projects that we (the Existential Security Team at Rethink Priorities) think are plausible candidates for being top projects for substantially reducing existential risk.

The list was generated based on a wide search (resulting in an initial list of around 300 ideas, most of which we did not come up with ourselves) and a shallow, high-level prioritization process (spending between a few minutes and an hour per idea). The process took about 100 total hours of work, spread across three researchers. More details on our research process can be found in the appendix. Note that some of the ideas we considered most promising were excluded from this list due to being confidential, sensitive or particularly high-risk.

We're planning to prioritize projects on this list (as well as other non-public ideas) for further research, as candidates for projects we might eventually incubate. We're planning to focus exclusively on projects aiming to reduce AI existential risk in 2023, but have included project ideas in other cause areas on this list, as we still think those ideas are promising and would be excited about others working on them. More on our team's strategy here.

We'd be potentially excited about others researching, pursuing and supporting the projects on this list, although we don't think this is a be-all-end-all list of promising existential-risk-reducing projects, and there are important limitations to it (see "Key limitations of this list").

Why are we sharing this list and who is it for?

By sharing this list, we're hoping to:

- Give a sense of what kinds of projects we're considering incubating and be transparent about our research process and results.
- Provide inspiration for projects others could consider working on.
- Contribute to community discussion about existential security entrepreneurship – we're excited to receive feedback on the list, additional project suggestions, and information about the project areas we highlight (for example, existing projects we may have missed, top ideas not on this list, or reasons that some of our ideas may be worse than we think).

You might be interested in looking at this list if you're:

- Considering being a founder or early employee of a new project. This list can give you some inspiration for potential project areas to look into.
If you’re interested in being a (co-)founder or early employee for one of the projects on this list, feel free to reach out to Marie Buhl at [email protected] so we can potentially provide you with additional resources or contacts when we have them. Note that our plan for 2023 is to zoom in on just a few particularly promising projects targeting AI existential risk. This means that we’ll have limited bandwidth to provide ad hoc feedback and support for projects that aren’t our main focus, and that we might not be able to respond to ev...