Do AI Models Lie on Purpose? Scheming, Deception, and Alignment with Marius Hobbhahn of Apollo Research

About this episode

Marius Hobbhahn is the CEO and co-founder of Apollo Research. Through a joint research project with OpenAI, his team found evidence that as models become more capable, they are developing the ability to hide their true reasoning from human oversight.

Jeffrey Ladish, Executive Director of Palisade Research, talks with Marius about this work. They discuss the difference between hallucination and deliberate deception, and the urgent challenge of aligning increasingly capable AI systems.

Links:

Marius Hobbhahn's Twitter: https://twitter.com/mariushobbhahn

Apollo Research Twitter: https://twitter.com/apolloaievals

Apollo Research: https://www.apolloresearch.ai

Palisade Research: https://palisaderesearch.org/

Palisade Research Twitter/X: https://x.com/PalisadeAI

Anti-Scheming Project: https://www.antischeming.ai

Research paper “Stress Testing Deliberative Alignment for Anti-Scheming Training”: https://www.arxiv.org/pdf/2509.15541

Blog post from OpenAI: https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/

Blog post from Apollo Research: https://www.apolloresearch.ai/research/stress-testing-deliberative-alignment-for-anti-scheming-training/
