EA - Aptitudes for AI governance work by Sam Clarke

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Aptitudes for AI governance work, published by Sam Clarke on June 14, 2023 on The Effective Altruism Forum.

I outline 8 “aptitudes” for AI governance work. For each, I give examples of existing work that draws on the aptitude, and a more detailed breakdown of the skills I think are useful for excelling at the aptitude.

How this might be helpful:
- For orienting to the kinds of work you might be best suited to
- For thinking through your skill gaps for those kinds of work
- Offering an abstraction which might help those thinking about field-building/talent pipeline strategy

Epistemic status:
- I've spent ~3 years doing full-time AI governance work. Of that, I spent ~6 months FTE working on questions related to the AI governance talent pipeline, with GovAI.
- My work has mostly been fairly foundational research, so my views about aptitudes for research-y work (i.e. the first four aptitudes in this post) are more confident than for more applied or practical work (i.e. the latter three aptitudes in this post).
- I've spent ~5 hours talking with people hiring in AI governance about the talent needs they have. See this post for a write-up of that work. I've spent many more hours talking with AI governance researchers about their work (not focused specifically on talent needs).
- This post should be read as just one framework that might help you orient to AI governance work, rather than as making strong claims about which skills are most useful.

Some AI governance-relevant aptitudes

Macrostrategy

What this is: investigating foundational topics that bear on more applied or concrete AI governance questions. Some key characteristics of this kind of work include:
- The questions are often not neatly scoped, such that generating or clarifying questions is part of the work.
- It involves balancing an unusually wide or open-ended range of considerations.
- A high level of abstraction is involved in reasoning.
- The methodology is often not very clear, such that you can’t just plug-and-play with some standard methodology from a particular field.

Examples:
- Descriptive work on estimating certain ‘key variables’, e.g. reports on AI timelines and takeoff speeds.
- Prescriptive work on what ‘intermediate goals’ to aim for, e.g. analysis of the impact of US govt 2022 export controls.
- Conceptual work on developing frameworks, taxonomies, models, etc. that could be useful for structuring future analysis, e.g. The Vulnerable World Hypothesis.

Useful skills:
- Generating, structuring, and weighing considerations. Being able to generate lots of different considerations for a given question and weigh up these considerations appropriately. For example, there are a lot of considerations that bear on the question “Would it reduce AI risk if the US government enacted antitrust regulation that prevents big tech companies from buying AI startups?” Some examples of considerations are: “How much could this accelerate or slow down AI progress?”, “How much could this increase or decrease Western AI leadership relative to China?”, “How much harder or easier would this make it for the US government to enact safety-focused regulations?”, “How would this affect the likelihood that a given company (e.g., Alphabet) plays a leading role in transformative AI development?”, etc. Each of these considerations is also linked to various other considerations.
For instance, the consideration about the pace of AI progress links to the higher-level consideration “How does the pace of AI progress affect the level of AI risk?” and the lower-level consideration “How does market structure affect the pace of AI progress?” That lower-level consideration can then be linked to even lower levels, like “What are the respective roles of compute-scaling and new ideas in driving AI progress?” and “Would spreading researchers out across a larger number of startups ...
