Where I Agree and Disagree with Eliezer
AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact
(Partially in response to AGI Ruin: A list of Lethalities. Written in the same rambling style. Not exhaustive.)

Agreements

1. Powerful AI systems have a good chance of deliberately and irreversibly disempowering humanity. This is a much easier failure mode than killing everyone with destructive physical technologies.
2. Catastrophically risky AI systems could plausibly exist soon, and there likely won't be a strong consensus about this fact until such systems pose a meaningful existential risk per year...