Thought Experiments Provide a Third Anchor

AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact

Previously, I argued that we should expect future ML systems to often exhibit "emergent" behavior, where they acquire new capabilities that were not explicitly designed or intended, simply as a result of scaling. This was a special case of a general phenomenon in the physical sciences called More Is Different. I care about this because I think AI will have a huge impact on society, and I want to forecast what future systems will be like so that I can steer things to be better. To that end, I ...
