AI for Doctors? Making Breast Cancer Detection Smarter and More "Honest"
About this audio content
Featured paper: Towards Trustworthy Breast Tumor Segmentation in Ultrasound using Monte Carlo Dropout and Deep Ensembles for Epistemic Uncertainty Estimation
What if AI could admit when it's confused, and help doctors catch cancer more safely? In this episode, we explore groundbreaking research on trustworthy breast tumor segmentation that flips the script on black-box AI. Discover how the researchers uncovered shocking flaws in the popular BUSI dataset (duplicate images, jaw scans labeled as breast scans) and why this "data leakage" made the AI look far better than it actually was. Learn how Monte Carlo Dropout and Deep Ensembles teach a model to measure its own uncertainty, producing "heat maps" that highlight exactly where it is struggling. We dive into why an AI that runs 25 times slower but admits confusion is actually safer for doctors, explore what happens when the model meets completely new, unfamiliar images, and unpack why this human-AI partnership could revolutionize breast cancer detection in low-resource settings. Join us as we investigate how teaching machines to say "I don't know" makes them more trustworthy, and ultimately more powerful, tools for saving lives.
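For listeners curious about the mechanics, here is a minimal toy sketch of the Monte Carlo Dropout idea discussed in the episode: keep dropout active at inference, run several stochastic forward passes over the same image, and use the per-pixel spread across passes as an uncertainty "heat map." The tiny one-layer "network," the image size, and the choice of 25 passes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_pass(image, weights, drop_prob=0.5):
    """One stochastic forward pass: dropout stays ON at inference time."""
    mask = rng.random(weights.shape) > drop_prob            # random dropout mask
    logits = image * (weights * mask) / (1.0 - drop_prob)   # rescaled activations
    return 1.0 / (1.0 + np.exp(-logits))                    # sigmoid -> tumor probability

# Toy 4x4 "ultrasound" image and a single weight map (hypothetical stand-ins
# for a real segmentation network and a real scan)
image = rng.random((4, 4))
weights = rng.normal(1.0, 0.2, (4, 4))

# Monte Carlo Dropout: T stochastic passes over the same input
# (T=25 echoes the ~25x slowdown mentioned in the episode; purely illustrative)
T = 25
samples = np.stack([forward_pass(image, weights) for _ in range(T)])

mean_mask = samples.mean(axis=0)     # average segmentation prediction
uncertainty = samples.std(axis=0)    # per-pixel "heat map" of model confusion

print(mean_mask.round(2))
print(uncertainty.round(2))
```

Pixels where the passes disagree get a high standard deviation, which is exactly the signal a clinician could use to double-check the regions where the model is least sure.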
*Disclaimer: This content was generated by NotebookLM and has been reviewed for accuracy by Dr. Tram.*