Episodes

  • The Turing Test Is Flawed. Here's Why.
    Jan 7 2026

    I thought the Turing Test was between a person and a computer. But Turing’s paper seems to imply it’s between a man and a woman, and then a computer and a woman. So what really is the Turing Test? And is it a good measure of intelligence anyway?



    Sources


    Alan Turing: Computing Machinery and Intelligence, 1950

    https://phil415.pbworks.com/f/TuringComputing.pdf


    Gualtiero Piccinini: Turing’s Rules for the Imitation Game, 2000

    https://www.researchgate.net/publication/251383110_Turing's_Rules_for_the_Imitation_Game


    Judith Genova: Turing’s Sexual Guessing Game, 1994

    https://www.tandfonline.com/doi/abs/10.1080/02691729408578758


    EDSAC: https://commons.wikimedia.org/wiki/File:EDSAC_(25).jpg


    Jimmy Kimmel Clip: https://www.youtube.com/watch?v=earRJKrE8Bw



    EDSAC vs iPhone 13 Comparison


    EDSAC, from Wikipedia:

    1. “Cycle time was 1.5 ms for all ordinary instructions”

    2. It looks like addition was one of the “ordinary instructions.”

    3. “Numbers were either 17 bits (one word) or 35 bits (two words) long.”

    4. My understanding is that 35-bit numbers would take two operations to add, so I’ve stuck to adding two 17-bit numbers, which I count as one floating-point operation.

    5. “The first calculation done by EDSAC was a program run on 6 May 1949”

    6. From then to Christmas Day 2025 is 27,992 days = 2,418,508,800 seconds

    7. So the number of 17-bit additions EDSAC could perform in that period is

    8. 2,418,508,800 s / 1.5 ms per addition = 1,612,339,200,000


    iPhone 13, from Wikipedia:

    1. The GPU runs at 1.37 TFLOPS (teraFLOPS), so 1.37 × 10^12 FLOPS.

    2. Let’s assume adding two 17-bit numbers takes 1 FLOP.

    3. Then adding 1,612,339,200,000 17-bit numbers can be done in

    4. (1,612,339,200,000 additions) / (1.37 × 10^12 additions/s) = *1.176889 s* (see the sketch below).
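
    For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same calculation. The 6 May 1949 date, the 1.5 ms cycle time and the 1.37 TFLOPS figure are the Wikipedia numbers quoted above; treating one 17-bit addition as one FLOP is the assumption from step 2.

    ```python
    from datetime import date

    # Days from EDSAC's first program (6 May 1949) to Christmas Day 2025
    days = (date(2025, 12, 25) - date(1949, 5, 6)).days   # 27,992 days
    seconds = days * 86_400                               # 2,418,508,800 s

    # EDSAC: one 17-bit addition per 1.5 ms "ordinary instruction" cycle
    edsac_additions = seconds / 1.5e-3                    # 1,612,339,200,000

    # iPhone 13 GPU at 1.37 TFLOPS, assuming one addition costs one FLOP
    iphone_seconds = edsac_additions / 1.37e12            # ~1.1769 s

    print(f"{days} days of EDSAC = {edsac_additions:,.0f} additions")
    print(f"iPhone 13 needs {iphone_seconds:.6f} s for the same work")
    ```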

    16 min
  • Could We Really Lose Control of AI?
    Dec 29 2025

    Suppose an AI “went rogue”.

    Couldn’t we just switch it off?


    How would it keep itself running without human help or an army of robots?


    And why would AI necessarily be evil, rather than kind?


    I put these questions to Zershaaneh Qureshi, a researcher at 80,000 Hours.


    Her article “Risks from power-seeking AI systems” is *the* best introduction to the debate on whether AI may one day be an extinction-level risk.


    What struck me most was this fantastic analogy:

    “You could be a scholar of the antebellum South… You'll know everything about why slave owners believe that they were justified in owning slaves. But that definitely doesn't mean that you're going to think yourself that slavery is justifiable.”


    This really drives home the fact that even if we manage to build AIs that understand human values, that doesn’t mean that they will adopt those values as their own.


    Timestamps:

    06:49 - Is Talk of AI Extinction Just Hype From AI Companies?

    18:08 - Will AI Always Be Just a Tool?

    26:26 - Can We Just Switch It Off If It “Goes Rogue”?

    33:52 - The Challenge of Instilling the Right Goals

    46:38 - Specification Gaming and Goal Misgeneralization

    53:48 - Instrumental Goals: Self-Preservation and Power-Seeking

    1:01:57 - Situational Awareness: Do AIs Need to Be Conscious?

    1:08:53 - Why Would We Deploy Something This Dangerous?

    1:11:48 - The Deception Problem: AIs Could Hide Their True Intentions

    1:20:53 - Could AI Actually Take Over the Physical World?

    1:36:26 - Have We Argued Ourselves to an Absurd Conclusion?

    1 h 40 min
  • ChatGPT Can Now Use a Computer. Like a Boomer…
    Nov 13 2025

    Sam Altman said 2025 is the year of agents. Andrej Karpathy said they’re slop.

    The AI Village is a team of AIs working together to do real work, like raising money for charity, creating websites to sell merchandise and even organising an in-person event. But the project has shown that while AIs can now use computers, they fall over on the simplest tasks. Doing anything requires multiple attempts, with frequent comedic failures.

    Is this just the start of a technology that may soon revolutionise the economy? Or is it just more AI slop? To find out, I spoke to Adam Binksmith, CEO of AI Digest and co-creator of the AI Village.

    #ai #agi #agents

    1 h
  • Is Spirituality Necessary For AGI? Kenneth Cukier
    Nov 4 2025

    Kenneth Cukier is Deputy Executive Editor at The Economist and co-author of "Framers: Human Advantage in an Age of Technology and Turmoil." He came on to debate whether creating spiritual machines would be a necessary stepping stone towards AGI, and whether that's even possible.

    Kenneth argues that while AI excels at rational thinking (logos), it fundamentally lacks the spiritual dimension (mythos) that makes us human. We dig into whether AI can develop genuine intuition, whether there exists a "life force" and whether machines could have it, and what any of this means for AI existential risk.

    We also discuss:

    - Whether LLM usage in business has been successful
    - The loneliness epidemic and emotional connections with AI
    - Whether humans will retreat from or embrace AI in the coming years
    - How AI might transform medical diagnosis, auditing, and other professions

    **Kenneth's work:**

    - Website: cukier.com
    - Substack: https://chiefwordofficer.substack.com/

    What do you think - can machines ever be truly conscious, or only simulate it? Let me know in the comments.

    #AI #AGI #ArtificialIntelligence #Philosophy #AIEthics #AIAlignment #Consciousness #TheEconomist #AIDebate

    1 h 37 min
  • Oxford Philosophers Found a FLAW in the AI Doom Argument?
    Oct 28 2025

    The explicit goal of OpenAI, DeepMind and others is to create AGI. This is insanely risky. It keeps me up at night.

    AIs smarter than us might:

    🚨 Resist shutdown.
    🚨 Resist us changing their goals.
    🚨 Ruthlessly pursue goals, even if they know it’s not what we want or intended.

    Some people think I’m nuts for believing this. But they often come round once they hear the central arguments. At the core of the AI doom argument are two big ideas:

    💡 Instrumental Convergence
    💡 The Orthogonality Thesis

    ❌ If you don’t understand these ideas, you won’t truly understand why some AI researchers are so worried about AGI or Superintelligence.

    Oxford philosopher Rhys Southan joined me to explain the situation.

    💡 Rhys Southan and his co-authors Helena Ward and Jen Semler argue that powerful AIs might NOT resist having their goals changed. Possibly a fatal flaw in the Instrumental Convergence Thesis. This would be a BIG DEAL. It would mean we could modify powerful AIs if they go wrong.

    While I don’t fully agree with their argument, it radically changed how I understand the Instrumental Convergence Thesis and forced me to rethink what it means for AIs to have goals.

    Check out the paper "A Timing Problem for Instrumental Convergence" here: https://link.springer.com/article/10.1007/s11098-025-02370-4

    58 min
  • Does ChatGPT have a mind?
    Oct 14 2025

    Do large language models like ChatGPT actually understand what they're saying? Can AI systems have beliefs, desires, or even consciousness? Philosophers Henry Shevlin and Alex Grzankowski debunk the common arguments against LLM minds and explore whether these systems genuinely think.

    This episode examines popular objections to AI consciousness - from "they're just next token predictors" to "it's just matrix multiplication" - and explains why these arguments fail. The conversation covers the Moses illusion, competence vs performance, the intentional stance, and whether we're applying unfair double standards to AI that we wouldn't apply to humans or animals.

    Key topics discussed:

    • Why "just next token prediction" isn't a good argument against LLM minds
    • The competence vs performance distinction in cognitive science
    • How humans make similar errors to LLMs (Moses illusion, conjunction fallacy)
    • Whether LLMs can have beliefs, preferences, and understanding
    • The difference between base models and fine-tuned chatbots
    • Why consciousness in LLMs remains unlikely despite other mental states

    Featured paper: "Deflating Deflationism: A Critical Perspective on Debunking Arguments Against LLM Mentality", authored by Alex Grzankowski, Geoff Keeling, Henry Shevlin and Winnie Street


    Guests:

    Henry Shevlin - Philosopher and AI ethicist at the Leverhulme Centre for the Future of Intelligence, University of Cambridge
    Alex Grzankowski - Philosopher at King's College London

    #AI #Philosophy #Consciousness #LLM #ArtificialIntelligence #ChatGPT #MachineLearning #CognitiveScience

    1 h 17 min
  • AI Powered Ransomware Is Coming. Tony Anscombe, ESET.
    Oct 7 2025

    LLMs like ChatGPT are incredibly useful for coding. So naturally they can also be useful for hacking. Tony Anscombe explains how his cybersecurity company ESET discovered the first AI-powered ransomware, and its unexpected origins.

    1 h 5 min
  • Humans Are NOT The Most Intelligent Species. Professor Peter Bentley
    Sep 30 2025

    Different species solve different problems, so how can we say one is smarter than another? To me, it's intuitively obvious that humans are the most intelligent species on the planet. But Professor Peter Bentley from UCL argues that different species are intelligent in different ways and cannot be ranked.

    1 h 27 min