Am I?

From: The AI Risk Network

About this audio content

The AI consciousness podcast, hosted by AI safety researcher Cameron Berg and philosopher Milo Reed.

theairisknetwork.substack.com
The AI Risk Network
Social Sciences
    Episodes
    • People Won’t Believe AI Is Conscious | AM I? #21
      Jan 8 2026

      What happens when AI systems become human-like — but people still refuse to believe they could ever be conscious? In this episode of Am I?, Cam and Milo sit down with Lucius Caviola, Assistant Professor at the University of Cambridge, whose research focuses on how people assign moral status to non-human minds — including animals, digital minds, and future AI systems.

      Lucius walks us through a series of empirical studies that reveal a deeply unsettling result: even when people imagine extremely advanced, emotionally rich, human-level AIs — even whole-brain digital copies — most still judge them as less morally significant than an ant. Expert consensus helps, but only marginally. Emotional bonding helps, but not enough. The public and expert trajectories may be fundamentally misaligned.

      We explore what this means for AI governance, moral risk, public intuition, and the possibility that AI consciousness could become one of the most important — and most divisive — moral issues in human history.

      This conversation isn’t about declaring answers. It’s about confronting a future where we cannot avoid deciding, even while deeply uncertain.

      💜 Support the documentary

      Get early research, unreleased conversations, and behind-the-scenes footage:

      🔎 Learn more about Lucius’s work

      🗨️ Join the Conversation:

      When we don’t know what consciousness is, how should society decide who deserves moral consideration?

      Comment below.



      This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
      56 min
    • Anthropic Tried to Give AI a Soul | Am I? After Dark | EP 20
      Dec 18 2025

      In this After Dark episode of Am I?, Cam and Milo dig into one of the strangest AI leaks to date: Anthropic’s internal “soul document” — an 11,000-word text reportedly used to shape Claude’s identity, values, and self-conception.

      What begins as a discussion about alignment quickly becomes something deeper: a conversation about power, moral formation, and what it means to bake values into an alien intelligence while deploying it to hundreds of millions of people.

      Is this responsible stewardship — or a contradiction no amount of careful language can resolve?

      🔎 We explore:

      * What the leaked Anthropic “soul document” actually is

      * How post-training has shifted from rules to identity formation

      * Why care, values, and profit collide

      * The parental framing of AI alignment

      * Why “least bad” is not the same as “good”

      * Whether superintelligence is already here

      * AI, work, and the coming meaning crisis

      * Why alignment failures may mirror human misalignment

      * A vision for decentralized value-setting in AI

      🗨️ Join the Conversation:

      Are we able to ingrain human values into an alien mind?

      Who decides what values we impart?

      Comment below.



      47 min
    • Lawmaker Explains Why He Wants to Outlaw AI Consciousness | Am I? #19
      Dec 11 2025

      Today on Am I?, Cam and Milo sit down with someone at the center of one of the most surprising developments in AI policy: Ohio State Representative Thad Claggett, author of House Bill 469 — the first U.S. legislation to formally declare AI “non-sentient” and ineligible for any form of personhood.

      This conversation is unlike anything we’ve done: a live, candid exchange between frontier AI researchers and a lawmaker who believes the line between human and machine must be drawn now — in law, in metaphysics, and in morality.

      We dig into why he believes AI can never be conscious, why moral agency must remain exclusively human, how liability interacts with emerging technologies, and what it means to legislate metaphysical claims before the science is settled.

      It’s part philosophy, part civic reality check, and part glimpse into how the political world will shape AI’s future long before the research community reaches consensus.

      🔎 We explore:

      * Why Ohio wants to preemptively ban AI consciousness and personhood

      * How lawmakers think about liability, criminal misuse, and moral agency

      * The distinction between consciousness and responsible agency

      * Whether future AI could have experiences even if not “human”

      * How theology, morality, and metaphysics are informing early AI law

      * Whether legislation can (or should) define what consciousness is

      * The deeper fear: locking in the wrong moral framework for future minds

      🗨️ Join the Conversation:

      Should lawmakers be deciding what counts as “conscious”?

      Comment below.



      43 min