Episodes

  • Episode 50 - Adopting AI Product Development with Kate Catlin
    Feb 15 2026

Choosing the right AI model shouldn’t feel like roulette. We sit down with Kate Catlin, a product manager at GitHub. Kate compares the strengths of different models and explains how auto mode aims to pick the right model for the job so developers can focus on outcomes.

    We dig into practical tactics that cut through hype: start with a golden dataset, run evaluations early, and keep refining with real user prompts once your AI is live. If you’re overwhelmed by weekly model releases, you’re not alone—Kate outlines how to compare new options with scoring and selective manual review.
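The golden-dataset tactic above can be sketched in a few lines. Everything here is a hypothetical stand-in — the dataset entries, the keyword-match metric, and `run_model()` are illustrative; a real eval would call your deployed model and use richer scoring plus the selective manual review Kate describes.

```python
# Minimal sketch of a golden-dataset evaluation loop (all names hypothetical).
golden_dataset = [
    {"prompt": "Summarise: the build failed on step 3.", "expected_keyword": "step 3"},
    {"prompt": "What language is a .rs file?", "expected_keyword": "rust"},
]

def run_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g. via an API client)."""
    canned = {
        "Summarise: the build failed on step 3.": "The build broke at step 3.",
        "What language is a .rs file?": "That extension is used by Rust.",
    }
    return canned[prompt]

def score(dataset) -> float:
    """Fraction of answers containing the expected keyword — a deliberately
    crude metric; swap in semantic similarity or human review as needed."""
    hits = sum(
        1 for case in dataset
        if case["expected_keyword"].lower() in run_model(case["prompt"]).lower()
    )
    return hits / len(dataset)

print(score(golden_dataset))
```

Re-running the same scorer against each new model release gives a single comparable number, which is what makes weekly releases manageable.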

    She also tackles the enterprise challenge: slow model approvals that leave teams on outdated systems. With a disciplined eval pipeline, organisations can safely adopt newer, faster, and often cheaper variants that deliver better results.


    25 min
  • Episode 49 - Dataverse MCP with Copilot Studio with Nathan Rose
    Jan 31 2026

    Build an agent that understands intent, finds the right data, and takes action across your CRM—without wiring a maze of connectors. We sit down with Microsoft MVP and functional architect Nathan Rose to explore how Dataverse and Model Context Protocol in Copilot Studio turn instructions into outcomes, shrinking complexity and unlocking reliable, low‑code automation.

    24 min
  • Episode 48 - GitHub Copilot Agent with Johan Smarius
    Jan 16 2026

    We walk through a plan-first workflow that asks Copilot to survey the codebase, propose a step-by-step solution, and only then switch to agent mode to implement. That simple change lifts code quality, reduces rework, and keeps diffs aligned with architecture. We also dig into custom agents with scoped permissions and roles: a testing agent for unit coverage, a docs agent for READMEs, a refactor agent limited to certain directories. Prompt craft matters, but stable configuration, coding standards, and CI guardrails matter more. Think of it as turning best practices into reusable instructions that AI can follow every time.

    Johan shares field notes from consulting and charity projects: where AI saves hours, where it still stumbles, and how code generation quality has improved over the past year. We explore SpecKit’s promise for scaffolding and simpler apps, acknowledge its preview status and quota costs, and outline how to adopt incrementally in legacy systems. Along the way, we cover open source tracking via changelogs and issues, integrating agents into CI/CD, and designing a workflow that is auditable, secure, and team-friendly.

    15 min
  • Episode 47 - Inside the Co‑op Translator Journey with Minseok Song
    Dec 27 2025

What if your documentation never drifted out of sync across 54 languages? We sit down with Minseok Song, Microsoft AI MVP and open-source maintainer, to unpack how a hackathon prototype grew into a robust translation automation pipeline now living under the Microsoft Azure GitHub organisation. The story starts with a simple pain: reading English technical docs as a non-native speaker. It evolves into a system that watches your repo, translates Markdown, images and Jupyter notebooks, and keeps everything aligned as source files change.

    Enjoy the episode, and if it sparks ideas, share it with your team, subscribe for more community-driven engineering stories, and leave a review with the one translation challenge you want solved next.

    31 min
  • Episode 46 - Building Trustworthy AI with Liji Thomas
    Dec 20 2025

We dig into how to make AI feel dependable rather than magical, sharing a practical blueprint for building trustworthy AI on Microsoft Foundry with guest Microsoft AI MVP Liji Thomas.


    From there, we tour Microsoft Foundry’s control plane and show how to configure guardrails at model and agent levels to block PII, reduce jailbreaks, and filter harmful or protected content. We explore observability and evaluations for groundedness, coherence, and relevance, plus why an evaluation-driven approach matters most after deployment.
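To make the PII-blocking idea concrete, here is a deliberately toy filter. This is not Microsoft Foundry's API — Foundry configures guardrails at the platform level — just an illustration of the kind of redaction a content filter performs before a response leaves the agent boundary; the patterns and `block_pii()` are hypothetical.

```python
import re

# Toy PII guardrail (illustrative only; real platforms use far richer
# classifiers than two regexes).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def block_pii(text: str) -> str:
    """Redact matches before the text is returned to the caller."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(block_pii("Contact jane@example.com, SSN 123-45-6789."))
```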

    23 min
  • Episode 45 - LLM on K8s with Seif Bassem
    Dec 12 2025

    We start by weighing the trade-offs: managed AI gives you speed, safety, and a deep model catalog, but steady high-volume workloads, strict compliance, or edge latency often tilt the equation. That’s where AKS shines. With managed GPU node pools, NVIDIA drivers and operators handled for you, and Multi-Instance GPU to prevent noisy neighbours, you get reliable performance and better utilisation. Auto-provisioning brings GPU capacity online when traffic surges, and smart scheduling keeps pods where they need to be.

    The breakthrough is Kaito, the Kubernetes AI Toolchain Operator that treats models as first-class cloud native apps. Using concise YAML, we containerise models, select presets that optimise vLLM, and expose an OpenAI-compatible endpoint so existing clients work by changing only the URL. We walk through a demo that labels GPU nodes, deploys a model, serves it via vLLM, and validates responses from a simple chat UI and a Python client. Tool calling and MCP fit neatly into this setup, allowing private integrations with internal APIs while keeping data in your environment.
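The "existing clients work by changing only the URL" point can be sketched as follows. The in-cluster hostname and model name below are hypothetical placeholders; the request shape is the standard OpenAI-style chat-completions payload that a vLLM endpoint accepts.

```python
import json

# Hypothetical cluster-local service exposed by the Kaito/vLLM deployment.
BASE_URL = "http://workspace-inference.default.svc.cluster.local"

payload = {
    "model": "my-model",  # whichever preset the workspace deployed (hypothetical name)
    "messages": [{"role": "user", "content": "Say hello."}],
}

request_body = json.dumps(payload)
endpoint = f"{BASE_URL}/v1/chat/completions"
print(endpoint)
# An existing OpenAI-compatible client would POST request_body to this
# endpoint unchanged; pointing BASE_URL back at the public API restores
# the original behaviour, and no other client code changes.
```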

    43 min
  • Episode 44 - Moonshot Solution using AI with Peter Ward
    Dec 1 2025

    We sit down with Peter Ward to decode how real moonshot solutions are built at the intersection of clean data, practical AI, and unapologetically human design. From Copilot adoption to rethinking org charts, we connect the dots between cost, capability, and cultural change to show where the next wave of value will emerge.

    We start by reframing the Copilot debate. The £30‑a‑month question isn’t the point; the outcome is. Peter shares how teams translate AI into measurable time savings, faster delivery, and better decisions, while addressing the quiet blockers: messy data, limited access, and incentives that favour stasis. We dig into the shift already visible in hiring data—entry roles thinning, mid‑senior roles holding—and why career growth now rests on a new metric: innovation density, the number of high‑value outcomes you can ship by orchestrating AI agents.

    37 min
  • Episode 43 - Real Time Analytics in Fabric with Thrushna Matharasi
    Nov 2 2025

    We explore how real-time analytics in Microsoft Fabric turns raw events into decisions within seconds while keeping the strength of batch for complete, trusted reporting. From OneLake layering to agentic AI, we share practical patterns, pitfalls, and skills to get started fast.

    • OneLake bronze, silver, gold layering for reliability
    • Event Hubs to OneLake pipeline setup
    • real-time dashboards, monitoring and alerting
    • hybrid architecture for BI and operational analytics
    • data quality rules, schema checks and replay
    • skills to start with Fabric using SQL
    • common streaming pitfalls and latency issues
    • roadmap to agentic AI that lets users talk to data
    • personal journey, community work and speaking plans
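The bronze-to-silver promotion with schema checks and data-quality rules listed above can be illustrated in miniature. Field names and rules here are hypothetical, and in Fabric this logic would run over OneLake tables rather than Python lists; rejected rows remain in bronze for replay once fixed.

```python
# Toy bronze -> silver promotion: raw events land untouched (bronze),
# then schema checks and data-quality rules gate the cleaned layer (silver).
REQUIRED_FIELDS = {"event_id", "timestamp", "value"}

def promote_to_silver(bronze_events):
    silver, rejected = [], []
    for event in bronze_events:
        # Schema check: every required field must be present.
        if not REQUIRED_FIELDS <= event.keys():
            rejected.append(event)
            continue
        # Data-quality rule (hypothetical): readings must be non-negative.
        if event["value"] < 0:
            rejected.append(event)
            continue
        silver.append(event)
    return silver, rejected

bronze = [
    {"event_id": 1, "timestamp": "2025-11-01T10:00:00Z", "value": 42},
    {"event_id": 2, "timestamp": "2025-11-01T10:00:01Z", "value": -5},
    {"event_id": 3, "value": 7},  # missing timestamp -> fails schema check
]
silver, rejected = promote_to_silver(bronze)
print(len(silver), len(rejected))  # 1 2
```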

    20 min