People Won’t Believe AI Is Conscious | AM I? #21
What happens when AI systems become human-like — but people still refuse to believe they could ever be conscious? In this episode of Am I?, Cam and Milo sit down with Lucius Caviola, Assistant Professor at the University of Cambridge, whose research focuses on how people assign moral status to non-human minds — including animals, digital minds, and future AI systems.

Lucius walks us through a series of empirical studies that reveal a deeply unsettling result: even when people imagine extremely advanced, emotionally rich, human-level AIs — even whole-brain digital copies — most still judge them as less morally significant than an ant. Expert consensus helps, but only marginally. Emotional bonding helps, but not enough. The public and expert trajectories may be fundamentally misaligned.

We explore what this means for AI governance, moral risk, public intuition, and the possibility that AI consciousness could become one of the most important — and most divisive — moral issues in human history.
This conversation isn’t about declaring answers. It’s about confronting a future where we cannot avoid deciding, even while deeply uncertain.
💜 Support the documentary
Get early research, unreleased conversations, and behind-the-scenes footage:
🔎 Learn more about Lucius’s work
🗨️ Join the Conversation:
When we don’t know what consciousness is, how should society decide who deserves moral consideration?
Comment below.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com