Episodes

  • Healthcare Is Losing Its Best People | Provider Burnout, Trust & Ethical Leadership in AI with Poonam Patel
    Apr 21 2026

    Healthcare AI and ethical leadership must give time back to clinicians, not take more away — Poonam Patel on AI strategy, provider burnout, and trust erosion in healthcare.

    Provider burnout is pushing clinicians out of healthcare at an unsustainable rate. In this episode of The Signal Room, Chris Hutchins sits down with Poonam Patel, a pediatric nurse practitioner turned healthcare strategy advisor, to examine what happens when the system built to care for patients stops caring for its own people. From pajama time documentation burdens to the erosion of trust between patients and providers, Poonam shares what she has witnessed firsthand across clinical and operational settings.

    What We Cover

    • Why provider burnout is a workforce sustainability crisis, not a wellness problem
    • How pajama time documentation burden erodes the patient and provider relationship
    • Where clinical AI and ambient clinical intelligence are actually giving time back
    • Why healthcare interoperability is still the biggest structural barrier to useful AI
    • What empathetic leadership looks like in healthcare organizations under pressure

    Key Takeaways

    • Trust drives adherence, not dashboards. Patients follow clinical guidance when they trust the provider delivering it. Systems that erode trust erode outcomes.
    • Giving time back is a survival strategy. Efficiency gains from AI should flow back to the clinician, not into more patient volume per shift.
    • Empathetic leadership has to run through every layer. Front-line supervisors need empathy training as much as the C-suite. Burnout is solved in the middle, not at the top.
    • Solve one problem well. AI initiatives fail when they try to fix everything at once. Pick one workflow, fix it end-to-end, and consolidate inside the EMR.

    Timestamps

    • 0:00 – Welcome and the shared mission behind the conversation
    • 2:33 – The multi-lens view: clinician, operator, and program builder
    • 6:45 – Pajama time and the intangible ROI of giving time back
    • 8:25 – Trust as the through line for patient adherence
    • 13:19 – The emotional toll on pediatric and frontline providers
    • 18:19 – Burnout, raising your hand, and why clinicians cope alone
    • 25:07 – Solving for the human component first
    • 28:32 – The workforce shortage and the incentive to enter healthcare
    • 32:00 – AI scribing, diagnostics, and early detection that actually helps
    • 36:28 – Interoperability and why AI has to live inside the EMR
    • 39:24 – Trust erosion and the case for empathetic leadership
    • 44:03 – Consolidating patient information and family navigation
    • 46:58 – Empathy as a management training requirement, not a poster
    • 49:21 – Closing thoughts and how to reach Poonam

    About Poonam Patel

    Poonam Patel, NP, is a pediatric nurse practitioner turned healthcare operator and co-founder with 20 years of experience across clinical care, consulting, and healthcare innovation. As Chief Operating Officer and Co-Founder of a care management and remote patient monitoring services company, she led o

    Support the show

    About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.


    Website: https://www.hutchinsdatastrategy.com

    LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/

    YouTube: https://www.youtube.com/@ChrisHutchinsAi

    Book Chris to speak: https://www.chrisjhutchins.com

    46 min
  • The Dark Side of the $50B AI Medical Boom | Lorraine Fernandes
    Apr 15 2026

    The $50B AI in healthcare investment wave is outpacing what most health systems can govern — Lorraine Fernandes on AI strategy, AI governance, and the dark side of the medical AI boom.

    The $50 billion AI in healthcare investment wave is accelerating faster than most health systems can evaluate, integrate, or govern the tools arriving on their doorstep. Lorraine Fernandes, a global health information leader with 50 years at the center of clinical data strategy, joins Chris to examine what vendors leave out of their pitch decks and what health system leaders should be asking before signing their next AI contract.

    What We Cover
    • Why data stewardship is the single word that decides whether a $50B AI bet pays off or collapses
    • How the Health Information Management role is shifting from manual data entry to governance of AI-generated records
    • What global standards like ICD-11 and SNOMED reveal about the structural gaps AI cannot close
    • Practical upskilling moves that let HIM professionals thrive as AI tools replace rote work
    • Why leadership at the intersection of clinical, technical, and administrative functions is the real AI readiness test
    Key Takeaways
    • A trustworthy AI in healthcare strategy starts with data stewardship. If the inputs are ungoverned, the outputs are liability.
    • AI governance requires the HIM profession, not the other way around. Health systems that treat HIM as clerical work will inherit every bias, gap, and error their models produce.
    • Global terminology standards are the scaffolding for clinical AI. ICD-11 and SNOMED are not paperwork. They are the prerequisites for AI that can actually be audited.
    Frameworks & Tools Mentioned
    • IFHIMA (International Federation of Health Information Management Associations)
    • IFHIMA AI Toolkit
    • ICD-11 (WHO International Classification of Diseases)
    • SNOMED CT (clinical terminology standard)
    • World Health Organization digital health initiatives
    • Focus on the Future 2026 webinar series

Timestamps

    • 0:00 – The $50B AI Investment in Healthcare
    • 1:40 – Evolution of HIM: From Paper to Digital Stewardship
    • 4:55 – Curators vs. Creators: The New Role of Data Experts
    • 8:45 – The Trust Factor: Why Stewardship Prevents AI Failure
    • 13:10 – Global Perspectives: The IFHIMA AI Toolkit
    • 17:25 – Digital Health Trends and WHO Initiatives
    • 20:55 – Upskilling for the AI Workforce: Will AI Replace Jobs?
    • 23:45 – Event Preview: Focus on the Future 2026 Series
    • 26:05 – Deep Dive: ICD-11, SNOMED, and Global Classifications
    • 31:00 – Building Better Health Outcomes Through Trusted Data

    About Lorraine Fernandes

    Lorraine Fernandes is a globally recognized expert in health information management whose 50-year career includes leadership roles at IFHIMA and sustained advocacy for data privacy, clinical terminology standards, and ethical digital health implementation. She works at the intersection of global policy and on-the-ground health system operations.


    35 min
  • Strengthen Your AI Projects in 2026: Privacy and AI Governance Insights with Andre Samokish
    Apr 8 2026

    AI governance is the difference between shipping healthcare AI and watching the project get shut down — Andre Samokish on privacy, AI strategy, and governance for 2026.

    AI governance is becoming the difference between shipping AI in healthcare and watching the project get shut down. Andre Samokish, a privacy and AI governance expert, joins Chris Hutchins to explain why most AI initiatives will fail by 2026 and what responsible AI actually looks like inside organizations that refuse to take vendor assurances at face value.

    What We Cover

    • The concrete difference between privacy governance, AI governance, and cybersecurity, and why conflating them creates blind spots leaders will pay for later
    • Why governance is not a project blocker. It is the pathway that lets teams move fast without inheriting regulatory debt
    • The 3 pillars of AI literacy that separate organizations ready for responsible AI from ones that will inherit their vendor's mistakes
    • How to embed privacy by design into AI product workflows before launch, not after incidents
    • The failure modes hiding in data collection, model deployment, and organizational culture that teams routinely misdiagnose

    Key Takeaways

    • The "vendor has it covered" assumption is the single most dangerous governance gap in AI today. If you cannot explain how a model was trained, you cannot defend the decision it made.
    • AI literacy is not training. It is infrastructure. Organizations treat it as optional, then discover their executives cannot distinguish generative AI risk from traditional IT risk when regulators ask.
    • Data minimization is a governance principle before it is a privacy one. The less data you collect, the less exposure you carry through the model's full lifecycle.

    Frameworks & Tools Mentioned

    • OneTrust (privacy + AI governance platform)
    • IAPP (International Association of Privacy Professionals) certifications
    • Privacy by design methodology
    • AI literacy pillars (technical, operational, governance)
    • Vendor governance frameworks

Timestamps

    • 00:00 Introduction: The AI project failure wave of 2026
    • 03:00 Andre Samokish on why AI governance is the root cause
    • 09:30 AI strategy beyond proof of concept: what enterprises get wrong
    • 16:00 AI implementation challenges that kill projects at scale
    • 22:30 AI readiness: governance maturity vs. technical capability
    • 29:00 Responsible AI development when privacy controls are inadequate
    • 35:00 AI regulation signals and what they mean for 2026 planning
    • 41:00 Leadership strategies for surviving the AI contraction

    About Andre Samokish

    Andre Samokish is a privacy and AI governance expert whose work spans regulated industries implementing responsible AI at scale. He advises organizations on embedding governance into product workflows, building AI literacy across technical and non-technical teams, and navigating the intersection of privacy law and machine learning practice.

    Related Resources

    • Episode: The Dark Side of the $50B AI Medical Boom | Lorraine Fernandes


    43 min
  • Good People Are Quietly Quitting: Ethical Leadership, AI Strategy & Why Culture Determines AI Success | Carly Caminiti
    Apr 1 2026

    Ethical leadership and AI strategy collapse when the people executing the strategy are quietly burning out — Carly Caminiti on why culture determines healthcare AI success.

    Healthcare innovation leadership stops working when the people who execute the strategy are quietly burning out. Carly Caminiti, an ICF-certified executive coach and creator of the 5C Leadership Performance System, joins Chris Hutchins to examine why healthcare's best people are disengaging, why AI adoption amplifies the problem, and what ethical leadership in healthcare requires when strategy depends on humans who are under-resourced.

    What We Cover
    • Why "quiet quitting" is a governance signal, not a workforce trend, and what it reveals about leadership capacity
    • How executives promoted for clinical or technical skill end up running teams without ever learning how to lead
    • The 5C Leadership Performance System and why healthcare organizations need a repeatable framework, not more off-site retreats
    • What happens when AI transformation lands on top of existing burnout, and why technology strategy is fundamentally a people strategy
    • How to identify the high performers who are about to leave before they tell you
    Key Takeaways
    • The healthcare leaders who will survive AI transformation are the ones who invest in the people executing it. Tools do not fix culture. Culture determines whether tools get adopted.
    • Ethical leadership in healthcare is not a values statement. It is a weekly operating practice visible in how communication, feedback, and decisions happen across teams.
    • Retention is a leading indicator of AI readiness. Organizations that cannot hold onto their strongest people will not have the capacity to absorb AI-driven change.
    Frameworks & Tools Mentioned
    • 5C Leadership Performance System (Caminiti's 12-week executive coaching framework)
    • ICF (International Coaching Federation) certification standards
    • Executive coaching methodology for healthcare leaders
    • Burnout detection signals
    • Communication frameworks for team performance

Timestamps

    • 00:00 Introduction: The quiet quitting signal leaders are missing
    • 03:00 Carly Caminiti on why culture eats AI strategy for breakfast
    • 09:30 Ethical leadership as the prerequisite for AI adoption
    • 16:00 AI leadership strategies that actually retain talent
    • 22:45 Leadership ethics when automation changes the work itself
    • 29:00 AI coaching for leaders: what it looks like in practice
    • 35:30 Why quiet quitting is an AI governance signal
    • 41:00 Building organizations where ethical AI and ethical leadership coexist

    About Carly Caminiti

    Carly Caminiti is an ICF-certified executive and personal development coach who works with healthcare and corporate leaders to build performance without burning out their teams. She is the creator of the 5C Leadership Performance System, a 12-week coaching program designed for leaders who need a framework they can actually apply, not another leadership theory.

    Re


    49 min
  • Healthcare Experts on Ethical AI in Operational Reality: AI Transformation Strategies and Healthcare Innovation | MarKeisha Snaith
    Mar 25 2026

    AI strategy for healthcare fails when strategic intent hits operational reality at the bedside — MarKeisha Snaith on ethical AI, transformation, and healthcare innovation.

    Healthcare innovation leadership rarely fails at the strategy level. It fails when strategic intent hits operational reality at the bedside. MarKeisha Snaith joins Chris Hutchins to examine the signals that matter most inside large health systems, why AI leadership strategies stall between planning and execution, and what distinguishes leaders who drive transformation from the ones who announce it.

    What We Cover
    • How AI governance decisions made in the boardroom play out at the point of care, and where the translation breaks
    • Why communication patterns inside health systems determine whether AI transformation strategies survive contact with operations
    • The operational signals leaders routinely miss because they live between departments, between roles, and between what gets measured and what actually happens
    • How to build healthcare leadership capacity for AI readiness before the technology arrives
    • What generational workforce shifts mean for leadership models in health systems
    Key Takeaways
    • AI transformation strategies that do not account for operational reality will not survive their own rollout. The strongest leaders treat clinical execution as the first-class design constraint.
    • Trust is the currency of healthcare innovation leadership. When communication breaks, AI tools inherit the distrust regardless of how good the model is.
    • Healthcare innovation requires both technical fluency and operational empathy. Leaders who have one without the other produce strategy decks nobody executes.
    Frameworks & Tools Mentioned
    • Strategic planning vs. operational execution frameworks
    • Healthcare leadership and system transformation methodology
    • Cross-generational workforce leadership models
    • AI governance decision-making in clinical settings
    • Communication cascades in large health systems
    Timestamps
    • 00:00 Introduction: what healthcare experts really face with AI transformation
    • 03:30 MarKeisha Snaith on AI governance in clinical reality
    • 10:00 AI transformation strategies that survive contact with operations
    • 17:00 AI healthcare innovations: what is working and what is not
    • 24:00 Healthcare innovation leadership at the intersection of tech and care
    • 31:00 Ethical AI when patient outcomes depend on the model
    • 37:00 Building healthcare leadership capacity for AI readiness
    • 43:00 The future of AI transformation strategies in health systems
    About MarKeisha Snaith

    MarKeisha Snaith is a healthcare leader whose work focuses on the operational reality of AI transformation inside complex health systems. She examines how strategic decisions cascade through clinical, technical, and administrative functions, and what it takes to build leadership capacity that


    53 min
  • Healthcare AI and Rare Disease Caregiving: Why Patient Advocates Deserve a Seat at the Table | Amanda Roser
    Mar 18 2026

    Healthcare AI succeeds or fails at the connective tissue of care — Amanda Roser on AI strategy, rare disease caregiving, and why patient advocates belong at the table.

AI applications in healthcare succeed or fail at the connective tissue of care delivery: the caregivers, patient advocates, and family members who hold fragmented systems together. Amanda Roser, who has spent 5 years navigating her son's rare genetic disorder across endocrinology, genetics, metabolic medicine, and gastroenterology, joins Chris Hutchins to examine what responsible AI in healthcare requires when the actual users are families, not specialists.

    What We Cover
    • How rare disease caregivers become the de facto data stewards, record keepers, and medical translators the system requires but rarely recognizes
    • Why interoperability failures create a "Groundhog Day" problem where patients retell their history at every appointment, and what AI could actually fix
    • How Amanda trained an AI tool on her son's daily health patterns and lab history, and the clinical conversation that shifted in real time when she showed it to a physician
    • The gap between what caregivers expect from healthcare systems and what systems actually deliver
    • Why patient advocacy panels belong at every healthcare innovation conference
    Key Takeaways
    • AI in healthcare that ignores caregivers is not responsible AI. Every system decision about interoperability, documentation, and coordination lands on the family in the waiting room.
    • Caregivers are the operational infrastructure the healthcare system depends on. Any AI strategy that does not account for this inherits the fragility the system already has.
    • Rare disease care is the stress test for healthcare innovation. If your AI tool does not work for multi-system patients, it will not work for anyone.
    Frameworks & Tools Mentioned
    • Care coordination across multi-specialty clinical teams
    • Healthcare interoperability standards (and where they fail)
    • AI-assisted patient advocacy and symptom tracking
    • Rare disease care models (glycogen storage disease type zero)
    • Digital health tools for caregiver-physician communication
    Timestamps
    • 00:00 Amanda's story: an ER dismissal that became a turning point for caregiver advocacy
    • 02:12 What caregivers expect vs. what the healthcare system actually delivers
    • 05:18 Becoming the coordinator: when parents realize the system depends on them
    • 10:12 The invisible operational burden families carry between appointments
    • 13:30 Gaps in patient tracking, documentation, and clinical communication
    • 16:21 Learning medical terminology as a
    Humanizing AI for Care.
    Empowering healthcare with ethical, scalable AI and data strategies that work.

    Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.


    50 min
  • AI Regulation in ER and Clinical Judgment: Why AI Tools Must Be Designed for 3 AM, Not 3 PM | Dr. Natasha Dole
    Mar 13 2026

    AI regulation and healthcare AI meet their hardest test in the emergency department — Dr. Natasha Dole on designing clinical AI for 3 AM, not 3 PM.

    Emergency departments are the hardest environments to deploy AI applications in healthcare because speed, accuracy, and contextual judgment all compress into seconds. Dr. Natasha Dole, an emergency physician and digital health leader, joins Chris Hutchins to examine why AI tools designed for routine clinical workflows fail under ER conditions, and what responsible AI in healthcare actually requires when a missed signal can end a life.

    What We Cover
    • Why emergency medicine is the hardest stress test for AI in healthcare, and what that exposes about every other deployment setting
    • How trust gaps between ER physicians and AI tools compound when systems produce recommendations without contextual awareness
    • Where clinical decision support adds value in the ER and where it breaks down under the pressure of a live trauma bay
    • What AI regulation and patient consent actually look like when a patient arrives unconscious and a scribe tool is already recording
    • How digital health leadership inside a clinical setting is different from strategy work done outside the care environment
    Key Takeaways
    • Clinical judgment is not a legacy skill AI replaces. It is the thing AI tools must be designed around. Emergency physicians develop situational awareness algorithms cannot replicate from training data.
    • A trust gap is a patient-safety issue, not a change-management issue. When ER physicians do not trust an AI tool, they either override it or disengage from it. Both outcomes degrade care.
    • Responsible AI in healthcare means designing for the worst 3 AM, not the average Tuesday. Any AI tool that cannot survive the emergency department's conditions is not ready for the rest of the hospital either.
    Frameworks & Tools Mentioned
    • Human-in-the-loop AI design for high-acuity clinical settings
    • AI scribes and clinical documentation tools in the ER
    • Clinical decision support integration with emergency workflows
    • Patient consent protocols for AI-assisted care
    • Digital health leadership inside clinical operations

Timestamps

    • 0:00 The 2:00 AM Crisis: Why AI Fails
    • 0:35 Introducing Dr. Natasha Dole: ER Innovation
    • 1:30 Credibility in the ER: Pre-AI vs. AI
    • 2:45 The AI Scribe: Reducing Cognitive Load
    • 4:15 Why Patients Must Stop Using AI for Triage
    • 6:02 AI vs. Clinical Judgment: Who Wins?
    • 8:40 The "Scary Truth" About AI Hallucinations
    • 11:15 Responsible AI: Consent and Disclosure
    • 13:40 Designing for the 3:00 AM Bottleneck
    • 15:50 Will AI Replace Doctors? The Real Answer
    • 18:10 Final Verdict: The Future of Responsible Care

    About Dr. Natasha Dole

    Dr. Natasha Dole is an emergency physician and digital health leader focused on how AI tools actually perform inside real clinical environments. She works at the intersection of emergency medicine, AI governance, and responsible deployment, with particul


    44 min
  • Enterprise AI Journey: Agentic AI, Generative AI and Data Foundations in Healthcare | Gary Cao
    Mar 4 2026

    Enterprise AI strategy in healthcare is a multi-year sequence, not a single deployment — Gary Cao on agentic AI, generative AI, and healthcare data foundations.

    The most successful enterprise AI strategies in healthcare treat the journey as a multi-year sequence, not a single deployment. Gary Cao, a chief data, analytics, and AI officer with 30 years of experience across 8 companies spanning healthcare, financial services, and multiple industries, joins Chris Hutchins to map the full arc of AI transformation strategies from data foundations through analytics maturity to generative and agentic AI.

    What We Cover
    • What organizations actually mean when they say they are on an AI journey, and why most have a vague intention rather than a 3 to 5 year roadmap
    • The 4 pillars of AI maturity (business strategy, analytics and innovation, data management, technology infrastructure), and why the ones that get the least visibility matter most
    • How to distinguish generative AI from traditional analytics, and why the wrong data foundation makes generative AI produce superficial outputs
    • Probabilistic versus deterministic thinking, and why executives must learn to decide in ranges rather than exact answers
    • A 3-part ROI scorecard that balances direct revenue, cost avoidance, and qualitative strategic value
    Key Takeaways
    • AI readiness is built bottom-up, not top-down. Data management stays below the surface but determines what analytics and generative AI can actually deliver later.
    • Technology gets the most budget and the least leverage. Business strategy is the hardest conversation to have and the one with the highest return.
    • AI governance is not a separate workstream from AI strategy for healthcare. The organizations that separate them end up with tools that outrun their decision-making.
    Frameworks & Tools Mentioned
    • 4-pillar AI maturity model (business strategy, analytics, data management, technology)
    • 3-layer AI stack (traditional analytics, NLP and image processing, generative AI)
    • 3-part ROI scorecard for AI investment
    • Probabilistic vs. deterministic decision-making
    • Healthcare data analytics governance
    Timestamps
    • 00:00 The CFO regret: we should have invested in data analytics years ago
    • 00:24 Gary Cao's career across healthcare, financial services, and enterprise AI
    • 01:46 What organizations actually mean when they say "we are on an AI journey"
    • 02:39 The 4 pillars of AI maturity
    • 05:01 Enterprise AI framework from 30 years across 8 companies
    • 07:13 The hidden cost beneath technology contracts: getting data fit for use
    • 09:33 Three layers of AI: traditional analytics, NLP/image processing, generative AI
    • 12:41 The tension between enterprise systems and probabilistic AI models
    • 13:13 Healthcare versus financial services: different tolerance for accuracy
    • 18:07 Does generative AI need different governance than traditiona


    42 min