
For Humanity: An AI Risk Podcast

By: The AI Risk Network

About this podcast

For Humanity: An AI Risk Podcast is the AI risk podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within 2-10 years. This podcast is solely about the threat of human extinction from AGI. We'll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.

theairisknetwork.substack.com
The AI Risk Network
Social Sciences
Episodes
  • The Filmmaker Who Sat Across From Sam Altman - And Walked Away With Nothing
    Apr 14 2026

    In this episode of For Humanity, John sits down with Daniel Roher - Oscar-nominated documentary filmmaker and director of The Apocaloptimist, a new feature-length film designed as what Roher calls “a first date with AI” for people who haven’t been following the technology closely.

    Roher brings a career in high-profile documentary filmmaking and a willingness to confront uncomfortable truths. Now he’s turned that lens on AI - and what he found shook him.

    The central question: what happens when you sit across from the most powerful people building AI, ask them the hard questions, and get nothing back?

    Together, they explore:

    * Why Roher describes making this film as “a suicide run” - an impossible task no viewer would ever feel was done perfectly

    * What it was like to interview Sam Altman - and why Roher describes an “energetic misalignment” that left both of them frustrated

    * How speaking to both Eliezer Yudkowsky and Peter Diamandis made Roher feel like he was losing his mind - because both are brilliant, both are convincing, and they can't both be right

    * The meaning behind “apocaloptimist” - not a binary between doom and utopia, but a call to hold both promise and peril at the same time

    * Why Roher believes rejecting cynicism and nihilism is essential - and that public pressure and collective action still matter

    * John’s thought experiment: if curiosity is at the core of intelligence, why would a system a million times smarter than us tolerate being controlled by us?

    * Roher’s pushback: if it’s that smart, couldn’t it equally become a benevolent guide? And why he prefers to focus on what can be done now rather than speculate about superintelligence

    * The historical parallel to nuclear weapons - and why AI may demand similar international institutional responses

    * John’s P(doom) of 75-80% on a two-to-five-year timeline - and how, paradoxically, he says he’s in the best mental state of his life

    * Why most people already understand the risk (polling shows roughly 80% agreement) but feel powerless to act - and why that sense of agency is the missing piece

    What stood out

    One of the most striking moments comes when Roher describes the experience of interviewing AI CEOs. He says there is “no interior life” to access - just polished talking points stacked on top of each other. John adds that the “fake earnestness” of these leaders shields what he sees as deeper evasion. Together, they paint a picture of an industry that asks for regulation publicly while lobbying against it privately.

    But the conversation isn’t just about frustration. Roher’s thesis - the apocaloptimist worldview - is ultimately about refusing to give up. He argues that burying your head in the sand is “probably the only wrong thing to do.” He believes the technology feels inevitable, but the trajectory does not. And he’s betting on the idea that enough people, caring enough, can still bend the arc.

    John’s own reflection near the end is equally powerful. Despite holding an 80% probability of catastrophic outcomes, he describes walking around the Baltimore Harbor feeling more present and appreciative of life than ever before. It’s a reminder that engaging with existential risk doesn’t have to mean despair - it can mean living with more intention, more gratitude, and more purpose.

    If you’ve ever wondered what it’s like to look directly at this issue and still choose to act, this conversation is for you.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the threat and find a path forward.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    39 min
  • How to Talk About AI Risk Without Scaring People Away (With Philip Trippenbach) | For Humanity 82
    Mar 28 2026

    In this episode of For Humanity, John sits down with Philip Trippenbach, Strategy Director at the Seismic Foundation, a team of veteran advertising, PR, and communications professionals who have turned their expertise toward one of the most urgent challenges of our time: getting the public to actually care about AI risk.

    Philip brings a decade in journalism at the CBC and BBC, and another decade in strategic communications for global brands. Now he's applying all of it to the AI safety movement, and what he has to say should change the way the movement thinks about messaging.

    The central question: why has one of the most important issues in human history failed to break through... and what would it actually take to fix that?

    Together, they explore:

    * Why the AI safety world has historically rejected advertising, marketing, and PR — and why that's a problem

    * Audience segmentation: why you can't say the same thing to everyone

    * What Google Trends data reveals about how public interest in AI risk is actually shifting

    * The surprising finding: AI extinction searches are being eclipsed by AI jobs, AI and children, and AI suicide

    * Why "this isn't fair" may be a more powerful message than "we're all going to die"

    * The case for creating friction across many AI harms as a path to slowing things down

    * How public demand drives policy — and what $400K/day in tech lobbying means for the movement

    * Why Seismic exists: raising the salience of AI risk through targeted, professional communications

    * What it looks like to run a real, orchestrated public awareness campaign on AI

    If you've ever felt like the AI safety movement is brilliant at research and terrible at talking to regular people, this episode is required viewing.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    1 hr 36 min
  • We Debated the Future of AI Safety in Brussels — Here's What Happened
    Mar 15 2026

    In this episode of For Humanity, John travels to Brussels, Belgium for PauseCon — the global gathering of Pause AI volunteers and advocates — joined by board member and author Louis Berman and filmmaker Beau Kershaw.

    The goal: train activists to be more effective in the fight against AI risk. What unfolded was one of the most honest conversations in the AI safety movement about why, despite 80% public support, almost nobody is actually showing up.

    John didn’t pull punches. Nothing is working. Not fast enough. Not at the scale we need. But the energy is out there — and this episode is about where to find it and how to channel it.

    The centerpiece is a live debate between John and Max Winga of Control AI on one of the most divisive strategic questions in the movement:

    Should we talk about extinction risk directly — or meet people where they are with the harms happening right now?

    Together, they explore:

    * Why 80% public support hasn’t translated into mass mobilization

    * The case for leading with existential risk vs. “mundane” AI harms

    * Data centers, community opposition, and financial pain as a strategy

    * Why John believes laws and treaties alone won’t save us

    * The winning state: making unsafe AI bad for business

    * What’s actually moving the needle in the US right now

    * How to talk to someone about AI risk without losing them

    * The “yes and” approach vs. the AI safety world’s love of “no but”

    If you've ever wondered why the AI safety movement struggles to break through despite overwhelming public agreement — this episode is required viewing.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    1 hr 41 min