Episodes

  • Abstraction & Idealization: AI's Plato Problem [Mazviita Chirimuuta]
    Jan 23 2026

    Professor Mazviita Chirimuuta joins us for a fascinating deep dive into the philosophy of neuroscience and what it really means to understand the mind. *What can neuroscience actually tell us about how the mind works?* In this thought-provoking conversation, we explore the hidden assumptions behind computational theories of the brain, the limits of scientific abstraction, and why the question of machine consciousness might be more complicated than AI researchers assume.

    Mazviita, author of *The Brain Abstracted*, brings a unique perspective shaped by her background in both neuroscience research and philosophy. She challenges us to think critically about the metaphors we use to understand cognition — from the reflex theory of the late 19th century to today's dominant view of the brain as a computer.

    *Key topics explored:*

    *The problem of oversimplification* — Why scientific models necessarily leave things out, and how this can sometimes lead entire fields astray. The cautionary tale of reflex theory shows how elegant explanations can blind us to biological complexity.

    *Is the brain really a computer?* — Mazviita unpacks the philosophical assumptions behind computational neuroscience and asks: if we can model anything computationally, what makes brains special? The answer might challenge everything you thought you knew about AI.

    *Haptic realism* — A fresh way of thinking about scientific knowledge that emphasizes interaction over passive observation. Knowledge isn't about reading the "source code of the universe" — it's something we actively construct through engagement with the world.

    *Why embodiment matters for understanding* — Can a disembodied language model truly understand? Mazviita makes a compelling case that human cognition is deeply entangled with our sensory-motor engagement and biological existence in ways that can't simply be abstracted away.

    *Technology and human finitude* — Drawing on Heidegger, we discuss how the dream of transcending our physical limitations through technology might reflect a fundamental misunderstanding of what it means to be a knower.

    This conversation is essential viewing for anyone interested in AI, consciousness, philosophy of mind, or the future of cognitive science. Whether you're skeptical of strong AI claims or a true believer in machine consciousness, Mazviita's careful philosophical analysis will give you new tools for thinking through these profound questions.

    ---

    TIMESTAMPS:

    00:00:00 The Problem of Generalizing Neuroscience

    00:02:51 Abstraction vs. Idealization: The "Kaleidoscope"

    00:05:39 Platonism in AI: Discovering or Inventing Patterns?

    00:09:42 When Simplification Fails: The Reflex Theory

    00:12:23 Behaviorism and the "Black Box" Trap

    00:14:20 Haptic Realism: Knowledge Through Interaction

    00:20:23 Is Nature Protean? The Myth of Converging Truth

    00:23:23 The Computational Theory of Mind: A Useful Fiction?

    00:27:25 Biological Constraints: Why Brains Aren't Just Neural Nets

    00:31:01 Agency, Distal Causes, and Dennett's Stances

    00:37:13 Searle's Challenge: Causal Powers and Understanding

    00:41:58 Heidegger's Warning & The Experiment on Children

    ---

    REFERENCES:

    Book:

    [00:01:28] The Brain Abstracted

    https://mitpress.mit.edu/9780262548045/the-brain-abstracted/

    [00:11:05] The Integrative Action of the Nervous System

    https://www.amazon.sg/integrative-action-nervous-system/dp/9354179029

    [00:18:15] The Quest for Certainty (Dewey)

    https://www.amazon.com/Quest-Certainty-Relation-Knowledge-Lectures/dp/0399501916

    [00:19:45] Realism for Realistic People (Chang)

    https://www.cambridge.org/core/books/realism-for-realistic-people/ACC93A7F03B15AA4D6F3A466E3FC5AB7

    ---

    RESCRIPT: https://app.rescript.info/public/share/A6cZ1TY35p8ORMmYCWNBI0no9ChU3-Kx7dPXGJURvZ0

    PDF Transcript: https://app.rescript.info/api/public/sessions/0fb7767e066cf712/pdf

    54 min
  • Why Every Brain Metaphor in History Has Been Wrong [SPECIAL EDITION]
    Jan 23 2026

    What if everything we think we know about the brain is just a really good metaphor that we forgot was a metaphor?

    This episode takes you on a journey through the history of scientific simplification, from a young Karl Friston watching wood lice in his garden to the bold claims that your mind is literally software running on biological hardware.

    We bring together some of the most brilliant minds we've interviewed — Professor Mazviita Chirimuuta, Francois Chollet, Joscha Bach, Professor Luciano Floridi, Professor Noam Chomsky, Nobel laureate John Jumper, and more — to wrestle with a deceptively simple question: *When scientists simplify reality to study it, what gets captured and what gets lost?*

    *Key ideas explored:*

    *The Spherical Cow Problem* — Science requires simplification. We're limited creatures trying to understand systems far more complex than our working memory can hold. But when does a useful model become a dangerous illusion?

    *The Kaleidoscope Hypothesis* — Francois Chollet's beautiful idea that beneath all the apparent chaos of reality lies simple, repeating patterns — like bits of colored glass in a kaleidoscope creating infinite complexity. Is this profound truth or Platonic wishful thinking?

    *Is Software Really Spirit?* — Joscha Bach makes the provocative claim that software is literally spirit, not metaphorically. We push back on this, asking whether the "sameness" we see across different computers running the same program exists in nature or only in our descriptions.

    *The Cultural Illusion of AGI* — Why does artificial general intelligence seem so inevitable to people in Silicon Valley? Professor Chirimuuta suggests we might be caught in a "cultural historical illusion" — our mechanistic assumptions about minds making AI seem like destiny when it might just be a bet.

    *Prediction vs. Understanding* — Nobel Prize winner John Jumper: AI can predict and control, but understanding requires a human in the loop.

    Throughout history, we've described the brain as hydraulic pumps, telegraph networks, telephone switchboards, and now computers. Each metaphor felt obviously true at the time. This episode asks: what will we think was naive about our current assumptions in fifty years?

    Featuring insights from *The Brain Abstracted* by Mazviita Chirimuuta — possibly the most influential book on how we think about thinking in 2025.

    ---

    TIMESTAMPS:

    00:00:00 The Wood Louse & The Spherical Cow

    00:02:04 The Necessity of Abstraction

    00:04:42 Simplicius vs. Ignorantio: The Boxing Match

    00:06:39 The Kaleidoscope Hypothesis

    00:08:40 Is the Mind Software?

    00:13:15 Critique of Causal Patterns

    00:14:40 Temperature is Not a Thing

    00:18:24 The Ship of Theseus & Ontology

    00:23:45 Metaphors Hardening into Reality

    00:25:41 The Illusion of AGI Inevitability

    00:27:45 Prediction vs. Understanding

    00:32:00 Climbing the Mountain vs. The Helicopter

    00:34:53 Haptic Realism & The Limits of Knowledge

    ---

    REFERENCES:

    Person:

    [00:00:00] Karl Friston (UCL)

    https://profiles.ucl.ac.uk/1236-karl-friston

    [00:06:30] Francois Chollet

    https://fchollet.com/

    [00:14:41] Cesar Hidalgo, MLST interview

    https://www.youtube.com/watch?v=vzpFOJRteeI

    [00:30:30] Terence Tao's Blog

    https://terrytao.wordpress.com/

    Book:

    [00:02:25] The Brain Abstracted

    https://mitpress.mit.edu/9780262548045/the-brain-abstracted/

    [00:06:00] On Learned Ignorance

    https://www.amazon.com/Nicholas-Cusa-learned-ignorance-translation/dp/0938060236

    [00:24:15] Science and the Modern World

    https://amazon.com/dp/0684836394


    RESCRIPT: https://app.rescript.info/public/share/CYy0ex2M2kvcVRdMnSUky5O7H7hB7v2u_nVhoUiuKD4

    PDF Transcript: https://app.rescript.info/api/public/sessions/6c44c41e1e0fa6dd/pdf

    Thank you to Dr. Maxwell Ramstead (PhD student of Friston) for early script work on this show; the woodlice story came from him!

    42 min
  • Bayesian Brain, Scientific Method, and Models [Dr. Jeff Beck]
    Dec 31 2025

    Dr. Jeff Beck, mathematician turned computational neuroscientist, joins us for a fascinating deep dive into why the future of AI might look less like ChatGPT and more like your own brain.


    **SPONSOR MESSAGES START**

    Prolific - Quality data. From real people. For faster breakthroughs.

    https://www.prolific.com/?utm_source=mlst

    **END**


    *What if the key to building truly intelligent machines isn't bigger models, but smarter ones?*


    In this conversation, Jeff makes a compelling case that we've been building AI backwards. While the tech industry races to scale up transformers and language models, Jeff argues we're missing something fundamental: the brain doesn't work like a giant prediction engine. It works like a scientist, constantly testing hypotheses about a world made of *objects* that interact through *forces* — not pixels and tokens.


    *The Bayesian Brain* — Jeff explains how your brain is essentially running the scientific method on autopilot. When you combine what you see with what you hear, you're doing optimal Bayesian inference without even knowing it. This isn't just philosophy — it's backed by decades of behavioral experiments showing humans are surprisingly efficient at handling uncertainty.
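
    To make the "optimal Bayesian inference" claim concrete, here is a minimal sketch (our illustration, not code from the episode) of the classic cue-combination result: two noisy Gaussian estimates of the same quantity are fused by weighting each one by its precision (inverse variance), which is the behaviour those behavioral experiments measure.

    ```python
    def fuse_cues(mu_v, var_v, mu_a, var_a):
        """Optimally combine a visual and an auditory estimate of the same
        quantity, assuming independent Gaussian noise on each cue.
        The fused mean is a precision-weighted average, and the fused
        variance is smaller than either cue's variance alone."""
        w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)   # weight on vision
        mu = w_v * mu_v + (1.0 - w_v) * mu_a                # fused estimate
        var = 1.0 / (1.0 / var_v + 1.0 / var_a)             # fused uncertainty
        return mu, var

    # Example: vision says the target is at 10 degrees (reliable),
    # audition says 14 degrees (noisy). The fused estimate sits near vision.
    print(fuse_cues(mu_v=10.0, var_v=1.0, mu_a=14.0, var_a=4.0))  # (10.8, 0.8)
    ```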


    *AutoGrad Changed Everything* — Forget transformers for a moment. Jeff argues the real hero of the AI boom was automatic differentiation, which turned AI from a math problem into an engineering problem. But in the process, we lost sight of what actually makes intelligence work.
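
    Since the credit here goes to automatic differentiation itself, a toy forward-mode example (dual numbers) shows the core trick: derivatives are propagated mechanically through ordinary code rather than derived by hand. This is an illustration of the idea only, not how production frameworks such as PyTorch implement it (they use reverse mode).

    ```python
    class Dual:
        """A number carrying its own derivative: (value, d/dx of value)."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.dot + other.dot)

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # product rule applied automatically
            return Dual(self.val * other.val,
                        self.dot * other.val + self.val * other.dot)

    def f(x):
        return x * x * x + x * 2 + 1   # f(x) = x^3 + 2x + 1

    x = Dual(3.0, dot=1.0)             # seed dx/dx = 1
    y = f(x)
    print(y.val, y.dot)                # 34.0 and f'(3) = 3*9 + 2 = 29.0
    ```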


    *The Cat in the Warehouse Problem* — Here's where it gets practical. Imagine a warehouse robot that's never seen a cat. Current AI would either crash or make something up. Jeff's approach? Build models that *know what they don't know*, can phone a friend to download new object models on the fly, and keep learning continuously. It's like giving robots the ability to say "wait, what IS that?" instead of confidently being wrong.
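
    A rough sketch of the "wait, what IS that?" behaviour (our illustration; the object models, the threshold, and the `library_lookup` helper are hypothetical placeholders, not Jeff's system): score the observation under every known object model, and if even the best fit is implausible, fetch a new model instead of guessing.

    ```python
    # Hypothetical per-object models: each returns a log-likelihood-style score
    # for an observation. In a real system these would be learned generative models.
    object_models = {
        "box":    lambda obs: -((obs["height"] - 0.40) ** 2) * 200,
        "pallet": lambda obs: -((obs["height"] - 0.15) ** 2) * 200,
    }

    NOVELTY_THRESHOLD = -2.0   # below this, nothing we know explains the data

    def classify_or_ask(obs, library_lookup):
        scores = {name: model(obs) for name, model in object_models.items()}
        best_name, best_score = max(scores.items(), key=lambda kv: kv[1])
        if best_score >= NOVELTY_THRESHOLD:
            return best_name
        # "Phone a friend": download a new object model and keep learning.
        new_name, new_model = library_lookup(obs)
        object_models[new_name] = new_model
        return new_name

    # A cat-shaped observation that no warehouse model explains well:
    cat_obs = {"height": 0.28}
    print(classify_or_ask(cat_obs, library_lookup=lambda o: ("cat", lambda obs: 0.0)))
    ```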


    *Why Language is a Terrible Model for Thought* — In a provocative twist, Jeff argues that grounding AI in language (like we do with LLMs) is fundamentally misguided. Self-report is the least reliable data in psychology — people routinely explain their own behavior incorrectly. We should be grounding AI in physics, not words.


    *The Future is Lots of Little Models* — Instead of one massive neural network, Jeff envisions AI systems built like video game engines: thousands of small, modular object models that can be combined, swapped, and updated independently. It's more efficient, more flexible, and much closer to how we actually think.
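
    And a toy version of the "video game engine" framing (again our illustration, not Jeff's architecture): the world model is just a registry of small per-object models, so a newly acquired model can be hot-swapped in without retraining anything else.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ObjectModel:
        """A tiny self-contained dynamics model for one kind of object."""
        name: str
        velocity: float                      # extremely simplified "physics"

        def step(self, position: float) -> float:
            return position + self.velocity

    # The "engine": a scene is a registry of object models plus their states.
    scene_models = {"forklift": ObjectModel("forklift", 0.5),
                    "pallet":   ObjectModel("pallet", 0.0)}
    scene_state = {"forklift": 0.0, "pallet": 3.0}

    def simulate(steps: int):
        for _ in range(steps):
            for name, model in scene_models.items():
                scene_state[name] = model.step(scene_state[name])

    # Hot-swap in a newly downloaded model without touching the others:
    scene_models["cat"] = ObjectModel("cat", 1.2)
    scene_state["cat"] = -2.0
    simulate(3)
    print(scene_state)   # {'forklift': 1.5, 'pallet': 3.0, 'cat': 1.6}
    ```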


    Rescript: https://app.rescript.info/public/share/D-b494t8DIV-KRGYONJghvg-aelMmxSDjKthjGdYqsE


    ---

    TIMESTAMPS:

    00:00:00 Introduction & The Bayesian Brain

    00:01:25 Bayesian Inference & Information Processing

    00:05:17 The Brain Metaphor: From Levers to Computers

    00:10:13 Micro vs. Macro Causation & Instrumentalism

    00:16:59 The Active Inference Community & AutoGrad

    00:22:54 Object-Centered Models & The Grounding Problem

    00:35:50 Scaling Bayesian Inference & Architecture Design

    00:48:05 The Cat in the Warehouse: Solving Generalization

    00:58:17 Alignment via Belief Exchange

    01:05:24 Deception, Emergence & Cellular Automata


    ---

    REFERENCES:

    Paper:

    [00:00:24] Zoubin Ghahramani (Google DeepMind)

    https://pmc.ncbi.nlm.nih.gov/articles/PMC3538441/pdf/rsta201

    [00:19:20] Mamba: Linear-Time Sequence Modeling

    https://arxiv.org/abs/2312.00752

    [00:27:36] xLSTM: Extended Long Short-Term Memory

    https://arxiv.org/abs/2405.04517

    [00:41:12] 3D Gaussian Splatting

    https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

    [01:07:09] Lenia: Biology of Artificial Life

    https://arxiv.org/abs/1812.05433

    [01:08:20] Growing Neural Cellular Automata

    https://distill.pub/2020/growing-ca/

    [01:14:05] DreamCoder

    https://arxiv.org/abs/2006.08381

    [01:14:58] The Genomic Bottleneck

    https://www.nature.com/articles/s41467-019-11786-6

    Person:

    [00:16:42] Karl Friston (UCL)

    https://www.youtube.com/watch?v=PNYWi996Beg

    1 h 17 min
  • Your Brain is Running a Simulation Right Now [Max Bennett]
    Dec 30 2025

    Tim sits down with Max Bennett to explore how our brains evolved over 600 million years—and what that means for understanding both human intelligence and AI.


    Max isn't a neuroscientist by training. He's a tech entrepreneur who got curious, started reading, and ended up weaving together three fields that rarely talk to each other: comparative psychology (what different animals can actually do), evolutionary neuroscience (how brains changed over time), and AI (what actually works in practice).


    *Your Brain Is a Guessing Machine*

    You don't actually "see" the world. Your brain builds a simulation of what it *thinks* is out there and just uses your eyes to check if it's right. That's why optical illusions work—your brain is filling in a triangle that isn't there, or can't decide if it's looking at a duck or a rabbit.
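
    As a toy version of that guess-then-check loop (our sketch, not Max's model), the "percept" below is the brain's running guess, nudged by only a fraction of each prediction error, which is why one odd sample barely changes what you "see".

    ```python
    def perceive(sensory_stream, gain=0.2):
        """Minimal predictive loop: the percept is the running internal guess,
        updated by a fraction of the prediction error, not by the raw input."""
        guess = 0.0
        percepts = []
        for sample in sensory_stream:
            error = sample - guess      # how wrong was the prediction?
            guess += gain * error       # correct the internal model a little
            percepts.append(guess)
        return percepts

    # Noisy input around a true value of 1.0: the percept settles near 1.0
    # and barely reacts to the single outlier (one reason illusions persist).
    stream = [1.1, 0.9, 1.0, 5.0, 1.05, 0.95, 1.0]
    print([round(p, 2) for p in perceive(stream)])
    ```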


    *Rats Have Regrets*

    *Chimps Are Machiavellian*

    *Language Is the Human Superpower*

    *Does ChatGPT Think?*


    (truncated description, more on rescript)


    Understanding how the brain evolved isn't just about the past. It gives us clues about:

    - What's actually different between human intelligence and AI

    - Why we're so easily fooled by status games and tribal thinking

    - What features we might want to build into—or leave out of—future AI systems


    Get Max's book:

    https://www.amazon.com/Brief-History-Intelligence-Humans-Breakthroughs/dp/0063286343


    Rescript: https://app.rescript.info/public/share/R234b7AXyDXZusqQ_43KMGsUSvJ2TpSz2I3emnI6j9A


    ---

    TIMESTAMPS:

    00:00:00 Introduction: Outsider's Advantage & Neocortex Theories

    00:11:34 Perception as Inference: The Filling-In Machine

    00:19:11 Understanding, Recognition & Generative Models

    00:36:39 How Mice Plan: Vicarious Trial & Error

    00:46:15 Evolution of Self: The Layer 4 Mystery

    00:58:31 Ancient Minds & The Social Brain: Machiavellian Apes

    01:19:36 AI Alignment, Instrumental Convergence & Status Games

    01:33:07 Metacognition & The IQ Paradox

    01:48:40 Does GPT Have Theory of Mind?

    02:00:40 Memes, Language Singularity & Brain Size Myths

    02:16:44 Communication, Language & The Cyborg Future

    02:44:25 Shared Fictions, World Models & The Reality Gap


    ---

    REFERENCES:

    Person:

    [00:00:05] Karl Friston (UCL)

    https://www.youtube.com/watch?v=PNYWi996Beg

    [00:00:06] Jeff Hawkins

    https://www.youtube.com/watch?v=6VQILbDqaI4

    [00:12:19] Hermann von Helmholtz

    https://plato.stanford.edu/entries/hermann-helmholtz/

    [00:38:34] David Redish (U. Minnesota)

    https://redishlab.umn.edu/

    [01:10:19] Robin Dunbar

    https://www.psy.ox.ac.uk/people/robin-dunbar

    [01:15:04] Emil Menzel

    https://www.sciencedirect.com/bookseries/behavior-of-nonhuman-primates/vol/5/suppl/C

    [01:19:49] Nick Bostrom

    https://nickbostrom.com/

    [02:28:25] Noam Chomsky

    https://linguistics.mit.edu/user/chomsky/

    [03:01:22] Judea Pearl

    https://samueli.ucla.edu/people/judea-pearl/

    Concept/Framework:

    [00:05:04] Active Inference

    https://www.youtube.com/watch?v=KkR24ieh5Ow

    Paper:

    [00:35:59] Predictions not commands [Rick A Adams]

    https://pubmed.ncbi.nlm.nih.gov/23129312/

    Book:

    [01:25:42] The Elephant in the Brain

    https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995

    [01:28:27] The Status Game

    https://www.goodreads.com/book/show/58642436-the-status-game

    [02:00:40] The Selfish Gene

    https://amazon.com/dp/0198788606

    [02:14:25] The Language Game

    https://www.amazon.com/Language-Game-Improvisation-Created-Changed/dp/1541674987

    [02:54:40] The Evolution of Language

    https://www.amazon.com/Evolution-Language-Approaches/dp/052167736X

    [03:09:37] The Three-Body Problem

    https://amazon.com/dp/0765377063

    3 h 17 min
  • The 3 Laws of Knowledge [César Hidalgo]
    Dec 27 2025

    César Hidalgo has spent years trying to answer a deceptively simple question: What is knowledge, and why is it so hard to move around?


    We all have this intuition that knowledge is just... information. Write it down in a book, upload it to GitHub, train an AI on it—done. But César argues that's completely wrong. Knowledge isn't a thing you can copy and paste. It's more like a living organism that needs the right environment, the right people, and constant exercise to survive.


    Guest: César Hidalgo, Director of the Center for Collective Learning


    1. Knowledge Follows Laws (Like Physics)

    2. You Can't Download Expertise

    3. Why Big Companies Fail to Adapt

    4. The "Infinite Alphabet" of Economies


    If you think AI can just "copy" human knowledge, or that development is just about throwing money at poor countries, or that writing things down preserves them forever—this conversation will change your mind. Knowledge is fragile, specific, and collective. It decays fast if you don't use it.


    The Infinite Alphabet [César A. Hidalgo]

    https://www.penguin.co.uk/books/458054/the-infinite-alphabet-by-hidalgo-cesar-a/9780241655672

    https://x.com/cesifoti


    Rescript link.

    https://app.rescript.info/public/share/eaBHbEo9xamwbwpxzcVVm4NQjMh7lsOQKeWwNxmw0JQ


    ---

    TIMESTAMPS:

    00:00:00 The Three Laws of Knowledge

    00:02:28 Rival vs. Non-Rival: The Economics of Ideas

    00:05:43 Why You Can't Just 'Download' Knowledge

    00:08:11 The Detective Novel Analogy

    00:11:54 Collective Learning & Organizational Networks

    00:16:27 Architectural Innovation: Amazon vs. Barnes & Noble

    00:19:15 The First Law: Learning Curves

    00:23:05 The Samuel Slater Story: Treason & Memory

    00:28:31 Physics of Knowledge: Joule's Cannon

    00:32:33 Extensive vs. Intensive Properties

    00:35:45 Knowledge Decay: Ise Temple & Polaroid

    00:41:20 Absorptive Capacity: Sony & Donetsk

    00:47:08 Disruptive Innovation & S-Curves

    00:51:23 Team Size & The Cost of Innovation

    00:57:13 Geography of Knowledge: Vespa's Origin

    01:04:34 Migration, Diversity & 'Planet China'

    01:12:02 Institutions vs. Knowledge: The China Story

    01:21:27 Economic Complexity & The Infinite Alphabet

    01:32:27 Do LLMs Have Knowledge?


    ---

    REFERENCES:

    Book:

    [00:47:45] The Innovator's Dilemma (Christensen)

    https://www.amazon.com/Innovators-Dilemma-Revolutionary-Change-Business/dp/0062060244

    [00:55:15] Why Greatness Cannot Be Planned

    https://amazon.com/dp/3319155237

    [01:35:00] Why Information Grows

    https://amazon.com/dp/0465048994

    Paper:

    [00:03:15] Endogenous Technological Change (Romer, 1990)

    https://web.stanford.edu/~klenow/Romer_1990.pdf

    [00:03:30] A Model of Growth Through Creative Destruction (Aghion & Howitt, 1992)

    https://dash.harvard.edu/server/api/core/bitstreams/7312037d-2b2d-6bd4-e053-0100007fdf3b/content

    [00:14:55] Organizational Learning: From Experience to Knowledge (Argote & Miron-Spektor, 2011)

    https://www.researchgate.net/publication/228754233_Organizational_Learning_From_Experience_to_Knowledge

    [00:17:05] Architectural Innovation (Henderson & Clark, 1990)

    https://www.researchgate.net/publication/200465578_Architectural_Innovation_The_Reconfiguration_of_Existing_Product_Technologies_and_the_Failure_of_Established_Firms

    [00:19:45] The Learning Curve Equation (Thurstone, 1916)

    https://dn790007.ca.archive.org/0/items/learningcurveequ00thurrich/learningcurveequ00thurrich.pdf

    [00:21:30] Factors Affecting the Cost of Airplanes (Wright, 1936)

    https://pdodds.w3.uvm.edu/research/papers/others/1936/wright1936a.pdf

    [00:52:45] Are Ideas Getting Harder to Find? (Bloom et al.)

    https://web.stanford.edu/~chadj/IdeaPF.pdf

    [01:33:00] LLMs/ Emergence

    https://arxiv.org/abs/2506.11135

    Person:

    [00:25:30] Samuel Slater

    https://en.wikipedia.org/wiki/Samuel_Slater

    [00:42:05] Masaru Ibuka (Sony)

    https://www.sony.com/en/SonyInfo/CorporateInfo/History/SonyHistory/1-02.html


    1 h 37 min
  • "I Desperately Want To Live In The Matrix" - Dr. Mike Israetel
    Dec 24 2025

    This is a lively, no-holds-barred debate about whether AI can truly be intelligent, conscious, or understand anything at all — and what happens when (or if) machines become smarter than us.


    Dr. Mike Israetel is a sports scientist, entrepreneur, and co-founder of RP Strength (a fitness company). He describes himself as a "dilettante" in AI but brings a fascinating outsider's perspective.


    He is joined by Jared Feather (IFBB Pro bodybuilder and exercise physiologist).


    The Big Questions:


    1. When is superintelligence coming?

    2. Does AI actually understand anything?

    3. The Simulation Debate (The Spiciest Part)

    4. Will AI kill us all? (The Doomer Debate)

    5. What happens to human jobs and purpose?

    6. Do we need suffering?


    Mike's channel: https://www.youtube.com/channel/UCfQgsKhHjSyRLOp9mnffqVg


    RESCRIPT INTERACTIVE PLAYER: https://app.rescript.info/public/share/GVMUXHCqctPkXH8WcYtufFG7FQcdJew_RL_MLgMKU1U


    ---

    TIMESTAMPS:

    00:00:00 Introduction & Workout Demo

    00:04:15 ASI Timelines & Definitions

    00:10:24 The Embodiment Debate

    00:18:28 Neutrinos & Abstract Knowledge

    00:25:56 Can AI Learn From YouTube?

    00:31:25 Diversity of Intelligence

    00:36:00 AI Slop & Understanding

    00:45:18 The Simulation Argument: Fire & Water

    00:58:36 Consciousness & Zombies

    01:04:30 Do Reasoning Models Actually Reason?

    01:12:00 The Live Learning Problem

    01:19:15 Superintelligence & Benevolence

    01:28:59 What is True Agency?

    01:37:20 Game Theory & The "Kill All Humans" Fallacy

    01:48:05 Regulation & The China Factor

    01:55:52 Mind Uploading & The Future of Love

    02:04:41 Economics of ASI: Will We Be Useless?

    02:13:35 The Matrix & The Value of Suffering

    02:17:30 Transhumanism & Inequality

    02:21:28 Debrief: AI Medical Advice & Final Thoughts


    ---

    REFERENCES:

    Paper:

    [00:10:45] Alchemy and Artificial Intelligence (Dreyfus)

    https://www.rand.org/content/dam/rand/pubs/papers/2006/P3244.pdf

    [00:10:55] The Chinese Room Argument (John Searle)

    https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf

    [00:11:05] The Symbol Grounding Problem (Stephen Harnad)

    https://arxiv.org/html/cs/9906002

    [00:23:00] Attention Is All You Need

    https://arxiv.org/abs/1706.03762

    [00:45:00] GPT-4 Technical Report

    https://arxiv.org/abs/2303.08774

    [01:45:00] Anthropic Agentic Misalignment Paper

    https://www.anthropic.com/research/agentic-misalignment

    [02:17:45] Retatrutide

    https://pubmed.ncbi.nlm.nih.gov/37366315/

    Organization:

    [00:15:50] CERN

    https://home.cern/

    [01:05:00] METR Long Horizon Evaluations

    https://evaluations.metr.org/

    MLST Episode:

    [00:23:10] MLST: Llion Jones - Inventors' Remorse

    https://www.youtube.com/watch?v=DtePicx_kFY

    [00:50:30] MLST: Blaise Agüera y Arcas Interview

    https://www.youtube.com/watch?v=rMSEqJ_4EBk

    [01:10:00] MLST: David Krakauer

    https://www.youtube.com/watch?v=dY46YsGWMIc

    Event:

    [00:23:40] ARC Prize/Challenge

    https://arcprize.org/

    Book:

    [00:24:45] The Brain Abstracted

    https://www.amazon.com/Brain-Abstracted-Simplification-Philosophy-Neuroscience/dp/0262548046

    [00:47:55] Pamela McCorduck

    https://www.amazon.com/Machines-Who-Think-Artificial-Intelligence/dp/1568812051

    [01:23:15] The Singularity Is Nearer (Ray Kurzweil)

    https://www.amazon.com/Singularity-Nearer-Ray-Kurzweil-ebook/dp/B08Y6FYJVY

    [01:27:35] A Fire Upon The Deep (Vernor Vinge)

    https://www.amazon.com/Fire-Upon-Deep-S-F-MASTERWORKS-ebook/dp/B00AVUMIZE/

    [02:04:50] Deep Utopia (Nick Bostrom)

    https://www.amazon.com/Deep-Utopia-Meaning-Solved-World/dp/1646871642

    [02:05:00] Technofeudalism (Yanis Varoufakis)

    https://www.amazon.com/Technofeudalism-Killed-Capitalism-Yanis-Varoufakis/dp/1685891241

    Visual Context Needed:

    [00:29:40] AT-AT Walker (Star Wars)

    https://starwars.fandom.com/wiki/All_Terrain_Armored_Transport

    Person:

    [00:33:15] Andrej Karpathy

    https://karpathy.ai/

    Video:

    [01:40:00] Mike Israetel vs Liron Shapira AI Doom Debate

    https://www.youtube.com/watch?v=RaDWSPMdM4o

    Company:

    [02:26:30] Examine.com

    https://examine.com/

    2 h 56 min
  • Making deep learning perform real algorithms with Category Theory (Andrew Dudzik, Petar Veličković, Taco Cohen, Bruno Gavranović, Paul Lessard)
    Dec 22 2025

    We often think of Large Language Models (LLMs) as all-knowing, but as the team reveals, they still struggle with the logic of a second-grader. Why can’t ChatGPT reliably add large numbers? Why does it "hallucinate" the laws of physics? The answer lies in the architecture. This episode explores how *Category Theory*—an ultra-abstract branch of mathematics—could provide the "Periodic Table" for neural networks, turning the "alchemy" of modern AI into a rigorous science.


    In this deep-dive exploration, *Andrew Dudzik*, *Petar Veličković*, *Taco Cohen*, *Bruno Gavranović*, and *Paul Lessard* join host *Tim Scarfe* to discuss the fundamental limitations of today’s AI and the radical mathematical framework that might fix them.


    TRANSCRIPT:

    https://app.rescript.info/public/share/LMreunA-BUpgP-2AkuEvxA7BAFuA-VJNAp2Ut4MkMWk


    ---


    Key Insights in This Episode:


    * *The "Addition" Problem:* *Andrew Dudzik* explains why LLMs don't actually "know" math—they just recognize patterns. When you change a single digit in a long string of numbers, the pattern breaks because the model lacks the internal "machinery" to perform a simple carry operation. (A minimal sketch of that carry machinery follows this list.)

    * *Beyond Alchemy:* deep learning is currently in its "alchemy" phase—we have powerful results, but we lack a unifying theory. Category Theory is proposed as the framework to move AI from trial-and-error to principled engineering. [00:13:49]

    * *Algebra with Colors:* To make Category Theory accessible, the guests use brilliant analogies—like thinking of matrices as *magnets with colors* that only snap together when the types match. This "partial compositionality" is the secret to building more complex internal reasoning. [00:09:17]

    * *Synthetic vs. Analytic Math:* *Paul Lessard* breaks down the philosophical shift needed in AI research: moving from "Analytic" math (what things are made of) to "Synthetic" math [00:23:41]
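
    To pin down what that missing "machinery" is, here is the ordinary long-addition procedure written out explicitly (our illustration, not code from the episode): a small, exact loop that handles carries for numbers of any length, which is the kind of algorithm the guests want networks to internalize rather than approximate.

    ```python
    def add_digit_strings(a: str, b: str) -> str:
        """Add two non-negative integers given as decimal digit strings,
        using the explicit carry procedure taught in primary school."""
        result, carry = [], 0
        # Walk from the least-significant digit; pad the shorter number with 0s.
        for da, db in zip(reversed(a.zfill(len(b))), reversed(b.zfill(len(a)))):
            s = int(da) + int(db) + carry
            result.append(str(s % 10))   # digit to write down
            carry = s // 10              # digit to carry to the next column
        if carry:
            result.append(str(carry))
        return "".join(reversed(result))

    # Changing a single digit changes the carries, but the procedure never breaks:
    print(add_digit_strings("999999999999", "1"))        # 1000000000000
    print(add_digit_strings("123456789", "987654321"))   # 1111111110
    ```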


    ---


    Why This Matters for AGI

    If we want AI to solve the world's hardest scientific problems, it can't just be a "stochastic parrot." It needs to internalize the rules of logic and computation. By imbuing neural networks with categorical priors, researchers are attempting to build a future where AI doesn't just predict the next word—it understands the underlying structure of the universe.


    ---

    TIMESTAMPS:

    00:00:00 The Failure of LLM Addition & Physics

    00:01:26 Tool Use vs Intrinsic Model Quality

    00:03:07 Efficiency Gains via Internalization

    00:04:28 Geometric Deep Learning & Equivariance

    00:07:05 Limitations of Group Theory

    00:09:17 Category Theory: Algebra with Colors

    00:11:25 The Systematic Guide of Lego-like Math

    00:13:49 The Alchemy Analogy & Unifying Theory

    00:15:33 Information Destruction & Reasoning

    00:18:00 Pathfinding & Monoids in Computation

    00:20:15 System 2 Reasoning & Error Awareness

    00:23:31 Analytic vs Synthetic Mathematics

    00:25:52 Morphisms & Weight Tying Basics

    00:26:48 2-Categories & Weight Sharing Theory

    00:28:55 Higher Categories & Emergence

    00:31:41 Compositionality & Recursive Folds

    00:34:05 Syntax vs Semantics in Network Design

    00:36:14 Homomorphisms & Multi-Sorted Syntax

    00:39:30 The Carrying Problem & Hopf Fibrations


    Petar Veličković (GDM)

    https://petar-v.com/

    Paul Lessard

    https://www.linkedin.com/in/paul-roy-lessard/

    Bruno Gavranović

    https://www.brunogavranovic.com/

    Andrew Dudzik (GDM)

    https://www.linkedin.com/in/andrew-dudzik-222789142/


    ---

    REFERENCES:


    Model:

    [00:01:05] Veo

    https://deepmind.google/models/veo/

    [00:01:10] Genie

    https://deepmind.google/blog/genie-3-a-new-frontier-for-world-models/

    Paper:

    [00:04:30] Geometric Deep Learning Blueprint

    https://arxiv.org/abs/2104.13478

    https://www.youtube.com/watch?v=bIZB1hIJ4u8

    [00:16:45] AlphaGeometry

    https://arxiv.org/abs/2401.08312

    [00:16:55] AlphaCode

    https://arxiv.org/abs/2203.07814

    [00:17:05] FunSearch

    https://www.nature.com/articles/s41586-023-06924-6

    [00:37:00] Attention Is All You Need

    https://arxiv.org/abs/1706.03762

    [00:43:00] Categorical Deep Learning

    https://arxiv.org/abs/2402.15332

    44 min
  • Are AI Benchmarks Telling The Full Story? [SPONSORED] (Andrew Gordon and Nora Petrova - Prolific)
    Dec 20 2025

    Is a car that wins a Formula 1 race the best choice for your morning commute? Probably not. In this sponsored deep dive with Prolific, we explore why the same logic applies to Artificial Intelligence. While models are currently shattering records on technical exams, they often fail the most important test of all: **the human experience.**


    Why High Benchmark Scores Don’t Mean Better AI


    Joining us are **Andrew Gordon** (Staff Researcher in Behavioral Science) and **Nora Petrova** (AI Researcher) from **Prolific**. They reveal the hidden flaws in how we currently rank AI and introduce a more rigorous, "humane" way to measure whether these models are actually helpful, safe, and relatable for real people.


    ---


    Key Insights in This Episode:


    * *The F1 Car Analogy:* Andrew explains why a model that excels at "Humanity's Last Exam" might be a nightmare for daily use. Technical benchmarks often ignore the nuances of human communication and adaptability.

    * *The "Wild West" of AI Safety:* As users turn to AI for sensitive topics like mental health, Nora highlights the alarming lack of oversight and the "thin veneer" of safety training—citing recent controversial incidents like Grok-3’s "Mecha Hitler."

    * *Fixing the "Leaderboard Illusion":* The team critiques current popular rankings like Chatbot Arena, discussing how anonymous, unstratified voting can lead to biased results and how companies can "game" the system.

    * *The Xbox Secret to AI Ranking:* Discover how Prolific uses *TrueSkill*—the same algorithm Microsoft developed for Xbox Live matchmaking—to create a fairer, more statistically sound leaderboard for LLMs. (A minimal usage sketch follows this list.)

    * *The Personality Gap:* Early data from the **HUMAINE Leaderboard** suggests that while AI is getting smarter, it is actually performing *worse* on metrics like personality, culture, and "sycophancy" (the tendency for models to become annoying "people-pleasers").
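
    For the curious, here is what a TrueSkill-style update looks like using the open-source `trueskill` Python package (a minimal sketch of the idea; the HUMAINE pipeline itself is more involved): each model carries a Gaussian skill belief that is refined after every human preference judgment.

    ```python
    # pip install trueskill  (an open-source implementation of the algorithm)
    import trueskill

    # Every model starts with the same Gaussian skill belief (mu=25, sigma~8.3).
    ratings = {name: trueskill.Rating() for name in ["model_a", "model_b", "model_c"]}

    def record_preference(winner: str, loser: str, tie: bool = False):
        """Update both models' skill beliefs after one human judgment."""
        ratings[winner], ratings[loser] = trueskill.rate_1vs1(
            ratings[winner], ratings[loser], drawn=tie
        )

    # A handful of pairwise human judgments:
    record_preference("model_a", "model_b")
    record_preference("model_a", "model_c")
    record_preference("model_b", "model_c", tie=True)

    # Rank by a conservative estimate (mean minus 3 standard deviations),
    # so barely-tested models are not ranked above well-tested ones.
    for name, r in sorted(ratings.items(),
                          key=lambda kv: kv[1].mu - 3 * kv[1].sigma, reverse=True):
        print(f"{name}: mu={r.mu:.1f} sigma={r.sigma:.1f}")
    ```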


    ---


    About the HUMAINE Leaderboard

    Moving beyond simple "A vs. B" testing, the researchers discuss their new framework that samples participants based on *census data* (Age, Ethnicity, Political Alignment). By using a representative sample of the general public rather than just tech enthusiasts, they are building a standard that reflects the values of the real world.
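
    As a rough sketch of what census-proportional sampling means in practice (our illustration with made-up shares, not Prolific's actual quotas): the participant budget is split across demographic strata in proportion to each stratum's census share.

    ```python
    # Hypothetical census shares for one demographic dimension (age bands).
    census_shares = {"18-29": 0.20, "30-44": 0.26, "45-64": 0.33, "65+": 0.21}

    def allocate_quotas(total_participants, shares):
        """Split a participant budget across strata in proportion to census
        shares, giving leftover slots to the largest strata so totals match."""
        quotas = {k: int(total_participants * v) for k, v in shares.items()}
        leftover = total_participants - sum(quotas.values())
        for k in sorted(shares, key=shares.get, reverse=True)[:leftover]:
            quotas[k] += 1
        return quotas

    print(allocate_quotas(500, census_shares))
    # {'18-29': 100, '30-44': 130, '45-64': 165, '65+': 105}
    ```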


    *Are we building models for benchmarks, or are we building them for humans? It’s time to change the scoreboard.*


    Rescript link:

    https://app.rescript.info/public/share/IDqwjY9Q43S22qSgL5EkWGFymJwZ3SVxvrfpgHZLXQc


    ---

    TIMESTAMPS:

    00:00:00 Introduction & The Benchmarking Problem

    00:01:58 The Fractured State of AI Evaluation

    00:03:54 AI Safety & Interpretability

    00:05:45 Bias in Chatbot Arena

    00:06:45 Prolific's Three Pillars Approach

    00:09:01 TrueSkill Ranking & Efficient Sampling

    00:12:04 Census-Based Representative Sampling

    00:13:00 Key Findings: Culture, Personality & Sycophancy


    ---

    REFERENCES:

    Paper:

    [00:00:15] MMLU

    https://arxiv.org/abs/2009.03300

    [00:05:10] Constitutional AI

    https://arxiv.org/abs/2212.08073

    [00:06:45] The Leaderboard Illusion

    https://arxiv.org/abs/2504.20879

    [00:09:41] HUMAINE Framework Paper

    https://huggingface.co/blog/ProlificAI/humaine-framework

    Company:

    [00:00:30] Prolific

    https://www.prolific.com

    [00:01:45] Chatbot Arena

    https://lmarena.ai/

    Person:

    [00:00:35] Andrew Gordon

    https://www.linkedin.com/in/andrew-gordon-03879919a/

    [00:00:45] Nora Petrova

    https://www.linkedin.com/in/nora-petrova/

    Algorithm:

    [00:09:01] Microsoft TrueSkill

    https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/

    Leaderboard:

    [00:09:21] Prolific HUMAINE Leaderboard

    https://www.prolific.com/humaine

    [00:09:31] HUMAINE HuggingFace Space

    https://huggingface.co/spaces/ProlificAI/humaine-leaderboard

    [00:10:21] Prolific AI Leaderboard Portal

    https://www.prolific.com/leaderboard

    Dataset:

    [00:09:51] Prolific Social Reasoning RLHF Dataset

    https://huggingface.co/datasets/ProlificAI/social-reasoning-rlhf

    Organization:

    [00:10:31] MLCommons

    https://mlcommons.org/

    16 min