Episodes

  • The Case for a Global Ban on Superintelligence (with Andrea Miotti)
    Feb 20 2026

Andrea Miotti is the founder and CEO of ControlAI, a nonprofit. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmart and overpower humans. We also discuss informing lawmakers and the public, and concrete actions listeners can take.

    LINKS:

• ControlAI

• ControlAI global action page

• ControlAI's lawmaker contact tools

• Open roles at ControlAI

• ControlAI's theory of change

    CHAPTERS:

    (00:00) Episode Preview

    (00:52) Extinction risk and lobbying

    (08:59) Progress toward superintelligence

    (16:26) Building political awareness

    (24:27) Global regulation strategy

(33:06) Race dynamics and the public

    (42:36) Vision and key safeguards

    (51:18) Recursive self-improvement controls

    (58:13) Power concentration and action

    PRODUCED BY:

    https://aipodcast.ing

    SOCIAL LINKS:

    Website: https://podcast.futureoflife.org

    Twitter (FLI): https://x.com/FLI_org

    Twitter (Gus): https://x.com/gusdocker

    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP


1 hr 7 min
  • Can AI Do Our Alignment Homework? (with Ryan Kidd)
    Feb 6 2026

    Ryan Kidd is a co-executive director at MATS. This episode is a cross-post from "The Cognitive Revolution", hosted by Nathan Labenz. In this conversation, they discuss AGI timelines, model deception risks, and whether safety work can avoid boosting capabilities. Ryan outlines MATS research tracks, key researcher archetypes, hiring needs, and advice for applicants considering a career in AI safety. Learn more about Ryan's work and MATS at: https://matsprogram.org

    CHAPTERS:

    (00:00) Episode Preview

    (00:20) Introductions and AGI timelines

    (10:13) Deception, values, and control

    (23:20) Dual use and alignment

    (32:22) Frontier labs and governance

    (44:12) MATS tracks and mentors

    (58:14) Talent archetypes and demand

    (01:12:30) Applicant profiles and selection

    (01:20:04) Applications, breadth, and growth

    (01:29:44) Careers, resources, and ideas

    (01:45:49) Final thanks and wrap

1 hr 47 min
  • How to Rebuild the Social Contract After AGI (with Deric Cheng)
    Jan 27 2026

    Deric Cheng is Director of Research at the Windfall Trust. He joins the podcast to discuss how AI could reshape the social contract and global economy. The conversation examines labor displacement, superstar firms, and extreme wealth concentration, and asks how policy can keep workers empowered. We discuss resilient job types, new tax and welfare systems, global coordination, and a long-term vision where economic security is decoupled from work.

    LINKS:

    • Deric Cheng personal website
    • AGI Social Contract project site
    • Guiding society through the AI economic transition

    CHAPTERS:

    (00:00) Episode Preview

(01:01) Introducing Deric and AGI

    (04:09) Automation, power, and inequality

    (08:55) Inequality, unrest, and time

    (13:46) Bridging futurists and economists

    (20:35) Future of work scenarios

    (27:22) Jobs resisting AI automation

    (36:57) Luxury, land, and inequality

    (43:32) Designing and testing solutions

    (51:23) Taxation in an AI economy

    (59:10) Envisioning a post-AGI society

1 hr 5 min
  • How AI Can Help Humanity Reason Better (with Oly Sourbut)
    Jan 20 2026

    Oly Sourbut is a researcher at the Future of Life Foundation. He joins the podcast to discuss AI for human reasoning. We examine tools that use AI to strengthen human judgment, from collective fact-checking and scenario planning to standards for honest AI reasoning and better coordination. We also discuss how we can keep humans central as AI scales, and what it would take to build trustworthy, society-wide sensemaking.

    LINKS:

    • FLF organization site
    • Oly Sourbut personal site

    CHAPTERS:

    (00:00) Episode Preview

    (01:03) FLF and human reasoning

    (08:21) Agents and epistemic virtues

    (22:16) Human use and atrophy

    (35:41) Abstraction and legible AI

    (47:03) Demand, trust and Wikipedia

    (57:21) Map of human reasoning

    (01:04:30) Negotiation, institutions and vision

    (01:15:42) How to get involved

1 hr 18 min
  • How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)
    Jan 7 2026

    Nora Ammann is a technical specialist at the Advanced Research and Invention Agency in the UK. She joins the podcast to discuss how to steer a slow AI takeoff toward resilient and cooperative futures. We examine risks of rogue AI and runaway competition, and how scalable oversight, formal guarantees and secure code could support AI-enabled R&D and critical infrastructure. Nora also explains AI-supported bargaining and public goods for stability.

    LINKS:

    • Nora Ammann site
    • ARIA safeguarded AI program page
    • AI Resilience official site
    • Gradual Disempowerment website

    CHAPTERS:

    (00:00) Episode Preview

    (01:00) Slow takeoff expectations

    (08:13) Domination versus chaos

    (17:18) Human-AI coalitions vision

    (28:14) Scaling oversight and agents

    (38:45) Formal specs and guarantees

    (51:10) Resilience in AI era

    (01:02:21) Defense-favored cyber systems

    (01:10:37) AI-enabled bargaining and trade

1 hr 20 min
  • How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)
    Dec 23 2025

    David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require.

    LINKS:

    • David Duvenaud academic homepage
    • Gradual Disempowerment
    • The Post-AGI Workshop
    • Post-AGI Studies Discord

    CHAPTERS:

    (00:00) Episode Preview

    (01:05) Introducing gradual disempowerment

    (06:06) Obsolete labor and UBI

    (14:29) Property, power, and control

    (23:38) Culture shifts toward AIs

    (34:34) States misalign without people

    (44:15) Competition and preservation tradeoffs

    (53:03) Building post-AGI studies

    (01:02:29) Forecasting and coordination tools

    (01:10:26) Human values and futures

1 hr 19 min
  • Why the AI Race Undermines Safety (with Steven Adler)
    Dec 12 2025

Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models.

    LINKS:

    • Steven Adler's Substack: https://stevenadler.substack.com

    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Race Dynamics And Safety
    (18:03) Chatbots And Mental Health
    (30:42) Models Outsmart Safety Tests
    (41:01) AI Swarms And Work
    (54:21) Human Bottlenecks And Oversight
    (01:06:23) Animals And Superintelligence
(01:19:24) Safety Capabilities And Governance

1 hr 29 min
  • Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)
    Nov 27 2025

    Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies.

    LINKS:

    • The Midas Project Website
    • Tyler Johnston's LinkedIn Profile

    CHAPTERS:

    (00:00) Episode Preview
    (01:06) Introducing the Midas Project
    (05:01) Shining a Light on AI
    (08:36) Industry Lockdown and Transparency
    (13:45) The OpenAI Files
    (20:55) Subpoenaed by OpenAI
    (29:10) Responding to the Subpoena
    (37:41) The Case for Transparency
    (44:30) Pricing Risk and Regulation
    (52:15) Measuring Transparency and Auditing
(57:50) Hope for the Future

1 hr 1 min