Future of Life Institute Podcast

By: Future of Life Institute

About this podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

All rights reserved
    Episodes
    • How to Rebuild the Social Contract After AGI (with Deric Cheng)
      Jan 27 2026

      Deric Cheng is Director of Research at the Windfall Trust. He joins the podcast to discuss how AI could reshape the social contract and global economy. The conversation examines labor displacement, superstar firms, and extreme wealth concentration, and asks how policy can keep workers empowered. We discuss resilient job types, new tax and welfare systems, global coordination, and a long-term vision where economic security is decoupled from work.

      LINKS:

      • Deric Cheng personal website

      • AGI Social Contract project site

      • Guiding society through the AI economic transition

      CHAPTERS:

      (00:00) Episode Preview

      (01:01) Introducing Deric and AGI

      (04:09) Automation, power, and inequality

      (08:55) Inequality, unrest, and time

      (13:46) Bridging futurists and economists

      (20:35) Future of work scenarios

      (27:22) Jobs resisting AI automation

      (36:57) Luxury, land, and inequality

      (43:32) Designing and testing solutions

      (51:23) Taxation in an AI economy

      (59:10) Envisioning a post-AGI society

      PRODUCED BY:

      https://aipodcast.ing

      SOCIAL LINKS:

      Website: https://podcast.futureoflife.org

      Twitter (FLI): https://x.com/FLI_org

      Twitter (Gus): https://x.com/gusdocker

      LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

      YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

      Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

      Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP


      1 hr 5 min
    • How AI Can Help Humanity Reason Better (with Oly Sourbut)
      Jan 20 2026

      Oly Sourbut is a researcher at the Future of Life Foundation. He joins the podcast to discuss AI for human reasoning. We examine tools that use AI to strengthen human judgment, from collective fact-checking and scenario planning to standards for honest AI reasoning and better coordination. We also discuss how we can keep humans central as AI scales, and what it would take to build trustworthy, society-wide sensemaking.

      LINKS:

      • FLF organization site
      • Oly Sourbut personal site

      CHAPTERS:

      (00:00) Episode Preview

      (01:03) FLF and human reasoning

      (08:21) Agents and epistemic virtues

      (22:16) Human use and atrophy

      (35:41) Abstraction and legible AI

      (47:03) Demand, trust and Wikipedia

      (57:21) Map of human reasoning

      (01:04:30) Negotiation, institutions and vision

      (01:15:42) How to get involved

      PRODUCED BY:

      https://aipodcast.ing

      SOCIAL LINKS:

      Website: https://podcast.futureoflife.org

      Twitter (FLI): https://x.com/FLI_org

      Twitter (Gus): https://x.com/gusdocker

      LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

      YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

      Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

      Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

      1 hr 18 min
    • How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)
      Jan 7 2026

      Nora Ammann is a technical specialist at the Advanced Research and Invention Agency in the UK. She joins the podcast to discuss how to steer a slow AI takeoff toward resilient and cooperative futures. We examine risks of rogue AI and runaway competition, and how scalable oversight, formal guarantees and secure code could support AI-enabled R&D and critical infrastructure. Nora also explains AI-supported bargaining and public goods for stability.

      LINKS:

      • Nora Ammann site
      • ARIA safeguarded AI program page
      • AI Resilience official site
      • Gradual Disempowerment website

      CHAPTERS:

      (00:00) Episode Preview

      (01:00) Slow takeoff expectations

      (08:13) Domination versus chaos

      (17:18) Human-AI coalitions vision

      (28:14) Scaling oversight and agents

      (38:45) Formal specs and guarantees

      (51:10) Resilience in AI era

      (01:02:21) Defense-favored cyber systems

      (01:10:37) AI-enabled bargaining and trade

      PRODUCED BY:

      https://aipodcast.ing

      SOCIAL LINKS:

      Website: https://podcast.futureoflife.org

      Twitter (FLI): https://x.com/FLI_org

      Twitter (Gus): https://x.com/gusdocker

      LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

      YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

      Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

      Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

      1 hr 20 min