
Code & Cure


By: Vasanth Sarathy & Laura Hagopian

About this audio content

Decoding health in the age of AI


Hosted by an AI researcher and a medical doctor, this podcast unpacks how artificial intelligence and emerging technologies are transforming how we understand, measure, and care for our bodies and minds.


Each episode digs into a real-world topic to ask not just what's new, but what's true, and what's at stake as healthcare becomes increasingly data-driven.


If you're curious about how health tech really works—and what it means for your body, your choices, and your future—this podcast is for you.


We’re here to explore ideas—not to diagnose or treat. This podcast doesn’t provide medical advice.


© 2026 Code & Cure
Hygiene & Healthy Living, Science
Episodes
  • #37 - Training A Neural Network On Toilet Photos
    Mar 26 2026

    What if a single smartphone photo could make colonoscopy prep more reliable? Colonoscopy can save lives through early detection of colorectal cancer, but its success depends on one stubborn detail: a clean colon. When bowel prep falls short, important findings can be missed, procedures can take longer, and patients may have to repeat the entire process. The question is simple but important: could there be an easier way for patients to know whether they are truly ready before heading to the clinic?

    In this episode, we explore research that puts artificial intelligence to work on exactly that problem. Using a smartphone app, patients take a photo of their final bowel movement and receive an immediate yes-or-no result about whether their preparation is adequate. We break down how the system works, from convolutional neural networks and expert clinician labeling to data augmentation that helps the model adapt to real-world conditions like poor lighting, different angles, and varying distances. We also unpack a key challenge in medical AI: overfitting, and why strong performance in a study does not always guarantee success in everyday use.
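
    To make the augmentation idea concrete, here is a minimal Python sketch in the spirit of the pipeline described above. The torchvision transforms, the parameter values, and the resnet18 backbone are our own illustrative assumptions, not the architecture or settings reported in the study.

        import torch
        import torch.nn as nn
        from torchvision import models, transforms

        # Simulate real-world photo variation: distance, angle, lighting.
        # Every transform and value here is an illustrative assumption.
        train_transform = transforms.Compose([
            transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),   # varying distance
            transforms.RandomRotation(degrees=30),                 # varying angles
            transforms.ColorJitter(brightness=0.4, contrast=0.4),  # poor lighting
            transforms.ToTensor(),
        ])

        # A small CNN with a single logit for the adequate / not-adequate call.
        model = models.resnet18(weights=None)
        model.fc = nn.Linear(model.fc.in_features, 1)
        model.eval()

        def prep_is_adequate(photo: torch.Tensor) -> bool:
            """Immediate yes-or-no readiness call from a single photo tensor."""
            with torch.no_grad():
                logit = model(photo.unsqueeze(0))  # add a batch dimension
            return torch.sigmoid(logit).item() > 0.5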

    The potential impact is significant. Patients in the intervention group achieved better bowel cleansing quality, suggesting a practical way to improve the consistency and effectiveness of colorectal cancer screening. At the same time, important questions remain about adenoma detection, repeat procedures, and how tools like this fit into clinical workflow. This is a fascinating example of AI solving a very human problem: reducing friction, improving preparation, and helping patients get the most out of an essential preventive test.

    References:

    An Artificial Intelligence-Guided Strategy to Reduce Poor Bowel Preparation: A Multicenter Randomized Controlled Study
    Gimeno-García et al.
    American Journal of Gastroenterology (2026)

    Design and validation of an artificial intelligence system to detect the quality of colon cleansing before colonoscopy
    Gimeno-García et al.
    Gastroenterology and Hepatology (2023)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    20 min
  • #36 - Should A Chatbot Ever Refuse To Reassure You
    Mar 19 2026

    What if the chatbot that always has an answer is actually making anxiety worse? For people living with obsessive-compulsive disorder (OCD), instant, endless reassurance can feel helpful in the moment while quietly strengthening the very cycle that keeps OCD going. In this episode, we explore why AI chatbots and large language models are designed to be responsive, agreeable, and supportive—and how those same qualities can unintentionally fuel reassurance seeking, compulsive checking, and avoidance instead of real relief.

    We break down OCD in clear, practical terms: intrusive thoughts trigger fear, compulsions bring temporary comfort, and that short-term relief reinforces the cycle over time. Whether it shows up as repeated handwashing, constant checking, or asking the same question again and again, OCD often centers on the desperate need to eliminate uncertainty. That is exactly where evidence-based treatment takes a different path. We discuss exposure and response prevention (ERP), the gold-standard therapy that helps people face doubt without falling back on rituals, and why a general-purpose chatbot may accidentally validate the opposite by offering reassurance, endorsing avoidance, or helping users “pivot” toward the answer they were hoping to hear.

    We also look at the broader mental health challenge now that people are already turning to AI for support. What responsibility do clinicians, AI companies, and regulators have? We argue that clinicians should ask directly about chatbot use, and we examine what meaningful guardrails might look like—from detecting repetitive reassurance loops to refusing to continue harmful patterns. Using a real-world germ-related prompting example, we show where chatbot advice can be useful and where it can slip into enabling OCD. This conversation will change how you think about AI, anxiety, and the line between support and harm.
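
    As a toy illustration of what a "repetitive reassurance loop" guardrail might look like, here is a short Python sketch. The lexical similarity measure, the thresholds, and the sample dialogue are invented placeholders for discussion, not a mechanism proposed in the paper.

        from difflib import SequenceMatcher

        def similar(a: str, b: str) -> float:
            """Crude lexical similarity in [0, 1]; real systems would use embeddings."""
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def reassurance_loop(user_messages: list[str],
                             threshold: float = 0.7,
                             min_repeats: int = 2) -> bool:
            """True when the latest message closely echoes earlier ones."""
            if len(user_messages) < min_repeats + 1:
                return False
            last = user_messages[-1]
            repeats = sum(similar(last, m) >= threshold for m in user_messages[:-1])
            return repeats >= min_repeats

        history = [
            "Are you sure I won't get sick from touching that doorknob?",
            "But are you really sure I won't get sick from the doorknob?",
            "Can you promise I won't get sick from touching that doorknob?",
            "Are you sure I won't get sick from touching that doorknob?",
        ]
        if reassurance_loop(history):
            print("Guardrail: decline further reassurance; suggest ERP-based support.")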

    Reference:

    A transdiagnostic model for how general purpose AI chatbots can perpetuate OCD and anxiety disorders
    Golden and Aboujaoude
    npj Digital Medicine (2026)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    19 min
  • #35 - How AI Image Generators Portray Substance Use Disorder
    Mar 12 2026

    What does an AI-generated image of addiction look like, and why does it so often default to darkness, isolation, and despair? As AI tools make it easier than ever to produce visuals for health education, those same tools can unintentionally reinforce stigma about substance use disorder.

    In this episode, we explore how AI image generators shape the way addiction is portrayed. Laura brings the perspective from emergency medicine and digital health, where substance use disorder is part of everyday clinical reality and where language and imagery can influence how patients are perceived. Vasanth breaks down the technical side, explaining how diffusion models create images by gradually refining random noise into structured visuals, guided by text prompts that steer what the model produces.
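
    To make that denoising loop concrete, here is a heavily simplified Python sketch. TinyDenoiser and embed are hypothetical stand-ins for a trained U-Net and text encoder; real pipelines add noise schedules, classifier-free guidance, and far larger models, all omitted here.

        import torch
        import torch.nn as nn

        class TinyDenoiser(nn.Module):
            """Stand-in for the U-Net that predicts noise in real diffusion models."""
            def __init__(self):
                super().__init__()
                self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

            def forward(self, x, t, text):
                return self.net(x)  # ignores t and text; purely illustrative

        def embed(prompt: str) -> torch.Tensor:
            """Stand-in for a text encoder such as CLIP."""
            return torch.zeros(1, 8)

        def sample(denoiser, prompt: str, steps: int = 50) -> torch.Tensor:
            text = embed(prompt)            # the prompt steers the output
            x = torch.randn(1, 3, 64, 64)   # start from pure noise
            for t in reversed(range(steps)):
                eps = denoiser(x, t, text)  # model's estimate of the noise
                x = x - eps / steps         # peel away a little noise each step
            return x                        # noise gradually refined into an image

        image = sample(TinyDenoiser(), "a supportive recovery group in daylight")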

    That process is powerful, but it also means biases from internet training data and the connotations embedded in words can compound. The result? AI outputs that repeatedly frame addiction through dramatic “rock bottom” scenes, lone figures, and visual cues that unintentionally reinforce shame rather than understanding.

    We also look at research that systematically tests prompts and applies best-practice guidelines for more respectful depictions. The difference is striking: fewer stigmatizing signals, more human-centered imagery, and practical guardrails such as avoiding drug paraphernalia and moving beyond the isolated, ashamed figure. But sanitization has a price. For healthcare AI teams, the lesson is clear: visuals should be treated like clinical content, not decoration, with thoughtful review processes that protect dignity and support stigma-free health communication.

    Reference:

    AI-Generated Images of Substance Use and Recovery: Mixed Methods Case Study
    Heley et al.
    JMIR AI (2026)

    Credits:

    Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0
    https://creativecommons.org/licenses/by/4.0/

    20 min