Goal Misgeneralisation: Why Correct Specifications Aren’t Enough for Correct Goals

AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact

As we build increasingly advanced AI systems, we want to make sure they don't pursue undesired goals. This is the primary concern of the AI alignment community. Undesired behaviour in an AI agent is often the result of specification gaming: the AI exploits an incorrectly specified reward. However, if we take the perspective of the agent we're training, we see other reasons it might pursue undesired goals, even when trained with a correct specification. Imagine that you are the agent (...
