Épisodes

  • AI Governance Isn't Compliance. It's About Humans with Victoria Gamerman
    May 7 2026

    Most organizations treat AI governance as a compliance checkbox. Victoria Gamerman argues it's the gateway that finally forces organizations to confront the human side of AI adoption. In this episode, Victoria breaks down why the POC-to-operationalization gap is so hard to close, why people, process, and data each carry more human weight than most leaders acknowledge, and what "AI-ready data" actually means. It all comes back to a reminder that AI isn't a technology problem. It never was.

    See omnystudio.com/listener for privacy information.

    41 min
  • TAKEAWAY - AI Governance Isn't Compliance. It's About Humans with Victoria Gamerman
    May 7 2026

    This is the takeaway episode with Victoria Gamerman who argues that AI Governance is the gateway that finally forces organizations to confront the human side of AI adoption.

    6 min
  • AI Appetite Is Easy, Digestion Is Hard with Diana Wu David
    May 1 2026

    Everybody wants AI, and adoption conversations are dominated by tools, models, and metrics. But far fewer organizations have figured out what to do with AI once it's inside the building. The harder question, the one most leaders are avoiding, is: what happens to the humans? Diana Wu David, Director of Futures at ServiceNow, joins Juan and Tim to unpack what leaders should and should not be doing.

    54 min
  • TAKEAWAY - AI Appetite Is Easy, Digestion Is Hard with Diana Wu David
    May 1 2026

    This is the takeaway episode with Diana Wu David, Director of Futures at ServiceNow, where we discuss AI adoption, the metrics, and why far fewer organizations have figured out what to do with AI once it's inside the building.

    5 min
  • EVA - A Framework for Evaluating Voice Agents by ServiceNow
    Apr 29 2026

    Voice AI agent evaluation — why it's fundamentally harder than text, how cascade failures derail conversations invisibly, and ServiceNow's open-source framework to establish industry evaluation standards. Featuring real audio examples showing authentication failures, leaked reasoning, and latency problems.

    WHAT WE COVER

    TARA BOGAVELLI — Research Engineer, ServiceNow
    Leading the open-source voice agent evaluation framework. Explains why existing benchmarks don't measure what matters and what ServiceNow is releasing to establish industry standards.

    KATRINA STANKIEWICZ — Staff Machine Learning Engineer, ServiceNow
    Cascade model architecture expert. Breaks down STT → LLM → TTS failure modes, named entity transcription challenges, and real audio example analysis.

    GABRIELLE GAUTHIER MELANÇON — Staff Applied Research Scientist, ServiceNow
    Multi-language evaluation specialist. Reveals why Large Audio Language Models lag behind, the native speaker requirement, and bot-to-bot simulation methodology.

    CHAPTERS
    0:00 Introduction — The evaluation gap
    1:11 ServiceNow's Open-Source Framework Announcement — Tara Bogavelli
    2:43 Meet the Researchers
    3:43 Voice-Specific Challenges — Tara Bogavelli
    5:03 Cascade Architecture: STT → LLM → TTS — Katrina Stankiewicz
    7:57 The Named Entity Problem — Katrina Stankiewicz
    10:06 Evaluation Metrics: Accuracy vs Experience — Gabrielle Gauthier Melançon
    11:23 Bot-to-Bot Testing at Scale — Gabrielle Gauthier Melançon
    14:30 The LALM Gap: Why Audio AI Judges Struggle — Tara Bogavelli
    16:57 Real Audio Example: Flight Rebooking Gone Wrong
    21:58 Breaking Down the Failures — Katrina Stankiewicz
    28:30 Wrap-Up & Resources

    KEY INSIGHTS

    The Cascade Failure Problem: STT → LLM → TTS errors propagate invisibly
    Named Entity Transcription: The #1 enterprise blocker — names, confirmation codes, and emails break authentication
    Accuracy vs Experience: Perfect task completion means nothing if users hang up due to poor experience
    LALM Gap: Large Audio Language Models lag behind text LLMs; human evaluators remain essential
    Latency Kills Conversations: Five-second pauses make users think the call dropped, breaking the experience even when tasks complete
    Open-Source Framework: ServiceNow is releasing evaluation tools, metrics, and bot-to-bot simulation methodology for the industry

    LEARN MORE

    Website: https://servicenow.github.io/eva/
    GitHub: https://github.com/servicenow/eva
    Blog Post: https://huggingface.co/blog/ServiceNow-AI/eva
    Dataset: https://huggingface.co/datasets/ServiceNow-AI/eva

    ABOUT

    Hosted by Bobby Brill. The ServiceNow Insights podcast explores AI research, real-world applications, and the people building the future of work. #VoiceAI #AIEvaluation #ServiceNow #MachineLearning #OpenSource #ConversationalAI #STT #TTS #LLM #VoiceAgents #AIResearch #Podcast

    30 min
  • It's Friday: Juan and Tim rant about AI, Agents, and the Uncomfortable Truth About Data's New Center of Gravity
    Apr 24 2026

    Juan and Tim's Friday rant covers a lot of ground, from Juan's TED takeaways on AI's unprecedented speed and what it means for humanity, to the uncomfortable shift data teams need to make: work and decisions are the point, not pipelines and gold layers. They dig into what Medallion Architecture 2.0 looks like (feedback loops, insights to action, agent governance), why organizational design theory applies directly to agent swarms, and what library science can teach us about the future data stack. The thread running through all of it: the humans who thrive in this moment won't be the ones who build the most, but the ones with taste.

    39 min
  • Think Like a Librarian: Why the Reference Interview Is the Framework Data Teams Are Missing with Jenna Jordan and Amalia Child
    Apr 16 2026

    Data teams spend enormous energy building pipelines, platforms, and governance frameworks but often skip the most fundamental step: truly understanding what people are actually asking for. In this episode, Juan and Tim sit down with data librarians Jenna Jordan and Amalia Child to explore why library science may be the missing lens for data work.

    At the heart of the conversation is the reference interview, a structured technique librarians use to uncover a user's "true information need," which almost never matches the first question they ask. From establishing trust and listening without judgment, to asking open-ended questions and verifying whether the need was actually met, the reference interview offers a rigorous, repeatable framework for anyone serving data users.

    If you've ever wondered why data projects deliver less value than expected, this episode will reframe the problem entirely and give you a practical toolkit to start closing the gap.

    52 min
  • TAKEAWAY - Think Like a Librarian: Why the Reference Interview Is the Framework Data Teams Are Missing with Jenna Jordan and Amalia Child
    Apr 15 2026

    Data teams obsess over pipelines and platforms but often skip the most fundamental step: truly understanding what people are actually asking for. We chat with data librarians Jenna Jordan and Amalia Child who share a framework for exactly that; it's called the reference interview, and it might be the most practical toolkit data teams have never used.

    6 min