EA - AGI x Animal Welfare: A High-EV Outreach Opportunity? by simeon c

The Nonlinear Library: EA Forum - A Podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI x Animal Welfare: A High-EV Outreach Opportunity?, published by simeon c on June 29, 2023 on The Effective Altruism Forum.

Epistemic status: Very quickly written, on a thought I've been holding for a year and that I haven't read elsewhere.

I believe that within this decade, there could be AGIs (Artificial General Intelligences) powerful enough that the values they pursue might have a value lock-in effect, at least partially. This means they could have a long-lasting impact on the future values and trajectory of our civilization (assuming we survive).

This brief post aims to share the idea that if your primary focus and concern is animal welfare (or digital sentience), you may want to consider engaging in targeted outreach on those topics towards those who will most likely shape the values of the first AGIs. This group likely includes executives and employees at top AGI labs (e.g. OpenAI, DeepMind, Anthropic), the broader US tech community, as well as policymakers from major countries.

Due to the risk of lock-in effects, I believe that the values of relatively small groups of individuals like the ones I mentioned (fewer than 3,000 people in top AGI labs) might have a disproportionately large impact on AGI, and consequently, on the future values and trajectory of our civilization.
My impression is that, generally speaking, these people currently:

a) don't prioritize animal welfare significantly, and
b) don't show substantial concern for the sentience of digital minds.

Hence, if you believe those things are very important (which I do), and you think that AGI might come in the next few decades (which a majority of people in the field believe), you might want to consider this intervention. Even more so if you believe, as I do along with many software engineers in top AGI labs, that it could happen this decade.

Feel free to reach out if you want to chat more about this, either here or via my contact details, which you can find here.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.