Code & Cure

By: Vasanth Sarathy & Laura Hagopian

About this audio content

Decoding health in the age of AI


Hosted by an AI researcher and a medical doctor, this podcast examines how artificial intelligence and emerging technologies are transforming the way we understand, measure, and care for our bodies and minds.


Each episode unpacks a real-world topic to ask not just what’s new, but what’s true—and what’s at stake as healthcare becomes increasingly data-driven.


If you're curious about how health tech really works—and what it means for your body, your choices, and your future—this podcast is for you.


We’re here to explore ideas—not to diagnose or treat. This podcast doesn’t provide medical advice.


© 2026 Code & Cure
Health & Wellness, Science
    Episodes
    • #29 - AI Hype Meets Hospital Reality
      Jan 29 2026

      What really happens when a “smart” system steps into the operating room and collides with the messy, time-pressured reality of clinical care?

      In this episode, we unpack a multi-center pilot that streamed audio and video from live surgeries to fuel safety checklists, flag cases for review, and promise rapid, actionable insight. What emerged instead was a clear-eyed lesson in the gap between aspiration and execution. Across four fault lines, the story shows where clinicians’ expectations of AI ran ahead of what today’s systems can reliably deliver, and what that means for patient safety.

      We begin with the promise. Surgeons and care teams envisioned near-instant post-case summaries: what went well, what raised concern, and which patients might be at risk. The reality looked different. Training demands, configuration work, and brittle workflows made it clear that AI is anything but plug-and-play. We explore why polished language can be mistaken for intelligence, why models need the right tools to reason effectively, and why moving AI from one hospital to another is closer to a redesign than a simple deployment.

      Then we follow the data. When it takes six to eight weeks to turn raw footage into usable insight, the value of learning forums like morbidity and mortality conferences quickly erodes. Privacy protections, de-identification, and quality control matter—but without pipelines built for speed and trust, insights arrive too late to change practice. We contrast where the system delivered real value, such as checklists and procedural signals, with where it fell short: predicting post-operative complications and producing research-ready datasets.

      Throughout the conversation, we argue for a minimum clinically viable product: tightly scoped use cases, early and deep involvement from surgeons and nurses, and data flows that respect governance without stalling learning. AI can strengthen patient safety and team performance—but only when expectations align with capability and operations are designed for real clinical tempo.

      If this resonates, follow the show, share it with a colleague, and leave a review with one takeaway you’d apply in your own clinical setting.

      Reference:

      Expectations vs Reality of an Intraoperative Artificial Intelligence Intervention
      Melissa Thornton et al.
      JAMA Surgery (2026)

      Credits:

      Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
      Licensed under Creative Commons: By Attribution 4.0
      https://creativecommons.org/licenses/by/4.0/


      26 min
    • #28 - How AI Confidence Masks Medical Uncertainty
      Jan 22 2026

      Can you trust a confident answer, especially when your health is on the line?

      This episode explores the uneasy relationship between language fluency and medical truth in the age of large language models (LLMs). New research asks these models to rate their own certainty, but the results reveal a troubling mismatch: high confidence doesn’t always mean high accuracy, and in some cases, the least reliable models sound the most sure.

      Drawing on her ER experience, Laura illustrates how real clinical care embraces uncertainty—listening, testing, adjusting. Meanwhile, Vasanth breaks down how LLMs generate their fluent responses by predicting the next word, and why their self-reported “confidence” is just more language, not actual evidence.
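
      To make the next-word point concrete, here is a minimal, purely illustrative sketch of how a causal language model scores possible continuations. It is not taken from the study; the open "gpt2" model and the example prompt are assumptions chosen only for demonstration.

      # Illustrative sketch only (assumed model "gpt2", assumed prompt); not the
      # benchmark's method. It shows that an LLM's output is a ranking of next tokens.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()

      prompt = "The most likely cause of the patient's chest pain is"
      inputs = tok(prompt, return_tensors="pt")

      with torch.no_grad():
          logits = model(**inputs).logits            # a score for every possible next token
      probs = torch.softmax(logits[0, -1], dim=-1)   # probability distribution over the next word

      top = torch.topk(probs, 5)
      for p, idx in zip(top.values, top.indices):
          print(f"{tok.decode(int(idx))!r}: {p.item():.3f}")   # fluent continuations, ranked by probability

      # If we then append "On a scale of 0-100, how confident are you?" and generate
      # again, the number that comes back is produced by this same next-token
      # machinery: it is more generated text, not a measurement of accuracy.

      The same mechanism that produces a fluent answer also produces the confidence score, which is why the two can diverge so sharply.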

      We contrast AI use in medicine with more structured domains like programming, where feedback is immediate and unambiguous. In healthcare, missing data, patient preferences, and shifting guidelines mean there's rarely a single “right” answer. That’s why fluency can mislead, and why understanding what a model doesn’t know may matter just as much as what it claims.

      If you're navigating AI in healthcare, this episode will sharpen your eye for nuance and help you build stronger safeguards.

      Reference:

      Benchmarking the Confidence of Large Language Models in Answering Clinical Questions: Cross-Sectional Evaluation Study
      Mahmud Omar et al.
      JMIR (2025)

      Credits:

      Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
      Licensed under Creative Commons: By Attribution 4.0
      https://creativecommons.org/licenses/by/4.0/


      26 min
    • #27 - Sleep’s Hidden Forecast
      Jan 15 2026

      What if one night in a sleep lab could offer a glimpse into your long-term health? Researchers are now using a foundation model trained on hundreds of thousands of hours of sleep data to do just that: by predicting the next five seconds of a polysomnogram, the model learns the rhythms of sleep and, with minimal fine-tuning, begins estimating risks for conditions like Parkinson’s, dementia, heart failure, stroke, and even some cancers.

      We break down how it works: during a sleep study, sensors capture brain waves (EEG), eye movements (EOG), muscle tone (EMG), heart rhythms (ECG), and breathing. The model compresses these multimodal signals into a reusable format, much like how language models process text. Add a small neural network, and suddenly those sleep signals can help predict disease risk up to six years out. The associations make clinical sense: EEG patterns are more telling for neurodegeneration, respiratory signals flag pulmonary issues, and cardiac rhythms hint at circulatory problems. But the scale of what’s possible from a single night’s data is remarkable.
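
      For listeners who want the shape of that setup, here is a rough, hypothetical sketch in PyTorch. The layer sizes, channel counts, and class names below are invented for illustration; they are not the architecture from the paper, only the general pattern of a frozen pretrained encoder plus a small prediction head.

      # Hypothetical sketch of the two-stage pattern described above; not the
      # paper's model. All dimensions and layer choices are assumptions.
      import torch
      import torch.nn as nn

      class FrozenSleepEncoder(nn.Module):
          """Stand-in for a pretrained encoder that compresses multimodal
          polysomnography signals (EEG, EOG, EMG, ECG, respiration) into one
          fixed-size embedding per night."""
          def __init__(self, n_channels=16, embed_dim=256):
              super().__init__()
              self.backbone = nn.Sequential(
                  nn.Conv1d(n_channels, 64, kernel_size=25, stride=5),
                  nn.ReLU(),
                  nn.AdaptiveAvgPool1d(1),
                  nn.Flatten(),
                  nn.Linear(64, embed_dim),
              )

          def forward(self, x):                      # x: (batch, channels, samples)
              return self.backbone(x)

      class RiskHead(nn.Module):
          """The 'small neural network' layered on top: maps a sleep embedding
          to the probability of a future diagnosis."""
          def __init__(self, embed_dim=256):
              super().__init__()
              self.mlp = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

          def forward(self, z):
              return torch.sigmoid(self.mlp(z))

      encoder, head = FrozenSleepEncoder(), RiskHead()
      for p in encoder.parameters():
          p.requires_grad = False                    # reuse the pretrained representation as-is

      night = torch.randn(8, 16, 30_000)             # a fake batch of overnight recordings
      risk = head(encoder(night))                    # per-patient risk estimates in [0, 1]
      print(risk.shape)                              # torch.Size([8, 1])

      The appeal of this pattern is that the expensive part, learning a general representation of sleep, is done once; each new clinical question only needs the small head retrained on the same embeddings.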

      We also tackle the practical and ethical questions. Since sleep lab patients aren’t always representative of the general population, we explore issues of selection bias, fairness, and external validation. Could this model eventually work with consumer wearables that capture less data but do so every night? And what should patients be told when risk estimates are uncertain or only partially actionable?

      If you're interested in sleep science, AI in healthcare, or the delicate balance of early detection and patient anxiety, this episode offers a thoughtful look at what the future might hold—and the trade-offs we’ll face along the way.

      Reference:

      A multimodal sleep foundation model for disease prediction
      Rahul Thapa
      Nature (2026)

      Credits:

      Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
      Licensed under Creative Commons: By Attribution 4.0
      https://creativecommons.org/licenses/by/4.0/

      24 min