In this episode of For Humanity, John sits down with Daniel Roher - Oscar-nominated documentary filmmaker and director of The Apocaloptimist, a new feature-length film designed as what Roher calls “a first date with AI” for people who haven’t been following the technology closely.
Roher brings a career in high-profile documentary filmmaking and a willingness to confront uncomfortable truths. Now he’s turned that lens on AI - and what he found shook him.
The central question: what happens when you sit across from the most powerful people building AI, ask them the hard questions, and get nothing back?
Together, they explore:
* Why Roher describes making this film as “a suicide run” - an impossible task that no viewer could ever feel was done perfectly
* What it was like to interview Sam Altman - and why Roher describes an “energetic misalignment” that left both of them frustrated
* How speaking to both Eliezer Yudkowsky and Peter Diamandis made Roher feel like he was losing his mind - because both are brilliant, both are convincing, and they can’t both be right
* The meaning behind “apocaloptimist” - not a binary between doom and utopia, but a call to hold both promise and peril at the same time
* Why Roher believes rejecting cynicism and nihilism is essential - and that public pressure and collective action still matter
* John’s thought experiment: if curiosity is at the core of intelligence, why would a system a million times smarter than us tolerate being controlled by us?
* Roher’s pushback: if it’s that smart, couldn’t it equally become a benevolent guide? And why he prefers to focus on what can be done now rather than speculate about superintelligence
* The historical parallel to nuclear weapons - and why AI may demand similar international institutional responses
* John’s P(doom) of 75-80% on a two-to-five-year timeline - and how, paradoxically, he says he’s in the best mental state of his life
* Why most people already understand the risk (polling shows roughly 80% agreement) but feel powerless to act - and why that sense of agency is the missing piece
What stood out
One of the most striking moments comes when Roher describes the experience of interviewing AI CEOs. He says there is “no interior life” to access - just polished talking points stacked on top of each other. John adds that the “fake earnestness” of these leaders masks what he sees as a deeper evasion. Together, they paint a picture of an industry that asks for regulation publicly while lobbying against it privately.
But the conversation isn’t just about frustration. Roher’s thesis - the apocaloptimist worldview - is ultimately about refusing to give up. He argues that burying your head in the sand is “probably the only wrong thing to do.” He believes the technology may be inevitable, but its trajectory is not. And he’s betting on the idea that enough people, caring enough, can still bend the arc.
John’s own reflection near the end is equally powerful. Despite holding an 80% probability of catastrophic outcomes, he describes walking around the Baltimore Harbor feeling more present and appreciative of life than ever before. It’s a reminder that engaging with existential risk doesn’t have to mean despair - it can mean living with more intention, more gratitude, and more purpose.
If you’ve ever wondered what it’s like to look directly at this issue and still choose to act, this conversation is for you.
📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the threat and find a path forward.
Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe