Computer Says No.

About this episode

What happens when the most powerful government in the world tells an AI company to remove its ethics — and the company says no?

In our Season 2 opener, Stephen and Lauren dig into Claude's Constitution: a 23,000-word document — three times longer than the US Constitution — that Anthropic built directly into its AI model to govern how it thinks, behaves, and refuses. They explore the philosophy behind it, the fascinating Scottish philosopher from Dundee who helped write it (and may end up with more influence on the world than David Hume or Adam Smith), and how Isaac Asimov saw most of this coming back in 1942.

Then they get to the part that isn't getting nearly enough press: the Pentagon has approached Anthropic demanding blanket permission to use Claude for autonomous weapons and mass surveillance of American citizens. Anthropic said no. Now the US Department of War is threatening to blacklist them — a move that could be an existential threat to the company.

OpenAI and Google have already agreed to strip their military safeguards. Anthropic is the last holdout. And the outcome of this standoff may echo for decades.

This one's a lot. But it matters.

Subscribe wherever you listen, and share it with someone who needs to hear it.

Interview with Amanda Askell on YouTube: https://www.youtube.com/watch?v=HDfr8PvfoOw