Chatbot MD
About this audio content
Why do people turn to chatbots when seeking medical advice? Is it due to cost, out-of-hours access or simply wanting to feel heard? Christian, Tomás and Grace discuss why patients and clinicians are increasingly using Large Language Models (LLMs) in medical contexts and what that means for the American healthcare system.
The doctors examine how certain attributes of LLMs such as sycophancy and engagement incentives can amplify anxiety, delusions or unrealistic expectations. The lack of accountability by big tech can also undermine trust for patients and healthcare practitioners alike. They raise concerns about training data quality (medical journals mixed with sources like Reddit) and the need for citations, transparency and regulation comparable to healthcare quality oversight. They also ask ChatGPT what it thinks about guardrails and risk controls!
This conversation is spurred on by two New York Times articles (What OpenAI Did When ChatGPT Users Lost Touch With Reality; Empathetic, Available, Cheap: When A.I. Offers What Doctors Don’t).
If you or someone you know needs help, in the US, you can call or text the National Suicide Prevention Lifeline on 988, chat on 988lifeline.org, or text HOME to 741741 to connect with a crisis counselor. If you’re listening in another part of the world, international helplines can be found at befrienders.org.
The episode was recorded remotely in November 2025. Presented by Christian, Tomás and Grace. Music by Nylonia. Produced by Ilia Rogatchevski.
Follow Doctor Friends on Instagram @doctorfriendspodcast
Or write to us on doctorfriendspodcast@gmail.com
Hosted on Acast. See acast.com/privacy for more information.