EA - Announcement: You can now listen to the “AI Safety Fundamentals” courses by peterhartree

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement: You can now listen to the “AI Safety Fundamentals” courses, published by peterhartree on June 9, 2023 on The Effective Altruism Forum.

The AI Safety Fundamentals courses are one of the best ways to learn about AI safety and prepare to work in the field. BlueDot Impact facilitates the courses several times per year, and the curricula are available online for anyone to read. The “Alignment” curriculum is created and maintained by Richard Ngo (OpenAI), and the “Governance” curriculum was developed in collaboration with a wide range of stakeholders.

You can now listen to most of the core readings from both courses:

AI Safety Fundamentals: Alignment
Gain a high-level understanding of the AI alignment problem and some of the key research directions which aim to solve it.
Listen online or subscribe: Apple Podcasts | Google Podcasts | Spotify | RSS

AI Safety Fundamentals: Governance
Gain foundational knowledge for doing research or policy work on the governance of transformative AI.
Listen online or subscribe: Apple Podcasts | Google Podcasts | Spotify | RSS

We've also made narrations for some readings from the advanced “Alignment 201” course, and we may record more later this year:

AI Safety Fundamentals: Alignment 201
Gain enough knowledge about alignment to understand the frontier of current research discussions.
Listen online or subscribe: Apple Podcasts | Google Podcasts | Spotify | RSS

Apply to join the “AI Safety Fundamentals Governance Course” July cohort!
Gain foundational knowledge for doing research or policy work on the governance of transformative AI. Successful applicants will participate in the AI Governance course with weekly virtual classes, and join the AI Safety Fundamentals community. Apply before 26th June 2023!

Thoughts, feedback, suggestions?
These narrations were created by Perrin Walker (TYPE III AUDIO) and Lukas Berglund on behalf of BlueDot Impact. We would love to hear your feedback. Do you find the narrations helpful? How could they be improved? What other AI safety material would you like to listen to? Please comment below, complete our feedback form, or write to [email protected].

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
