Do AI Models Lie on Purpose? Scheming, Deception, and Alignment with Marius Hobbhahn of Apollo Research
About this audio
Marius Hobbhahn is the CEO and co-founder of Apollo Research. Through a joint research project with OpenAI, his team discovered that as models become more capable, they are developing the ability to hide their true reasoning from human oversight.
Jeffrey Ladish, Executive Director of Palisade Research, talks with Marius about this work. They discuss the difference between hallucination and deliberate deception and the urgent challenge of aligning increasingly capable AI systems.
Links:
Marius’ Twitter: https://twitter.com/mariushobbhahn
Apollo Research Twitter: https://twitter.com/apolloaievals
Apollo Research: https://www.apolloresearch.ai
Palisade Research: https://palisaderesearch.org/
Twitter/X: https://x.com/PalisadeAI
Anti-Scheming Project: https://www.antischeming.ai
Research paper “Stress Testing Deliberative Alignment for Anti-Scheming Training”: https://www.arxiv.org/pdf/2509.15541
Blog post from OpenAI: https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/
Blog post from Apollo: https://www.apolloresearch.ai/research/stress-testing-deliberative-alignment-for-anti-scheming-training/