Why AI Alignment Could Be Hard With Modern Deep Learning

AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact

Why would we program AI that wants to harm us? Because we might not know how to do otherwise.

Source: https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/

Crossposted from the Cold Takes Audio podcast.

A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.