Measuring Progress on Scalable Oversight for Large Language Models

AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact

Abstract: Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on ways it can be studied empirically. We first present an experimental design centered on tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat (a trivial baseline strategy for scalable oversight) substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models and bode well for the Debate proposal for scalable oversight.