The Memriq AI Inference Brief – Leadership Edition

By: Keith Bourne

About this audio content

The Memriq AI Inference Brief – Leadership Edition is a weekly panel-style talk show that helps tech leaders, founders, and business decision-makers make sense of AI. Each episode breaks down real-world use cases for generative AI, RAG, and intelligent agents—without the jargon. Hosted by a rotating panel of AI practitioners, we cover strategy, roadmapping, risk, and ROI so you can lead AI initiatives confidently from the boardroom to the product roadmap. And when we say "AI" practitioners, we mean they are AI... AI practitioners.

Copyright 2025 Memriq AI
    Episodes
    • Recursive Language Models: The Future of Agentic AI for Strategic Leadership
      Jan 12 2026

      Unlock the potential of Recursive Language Models (RLMs), a groundbreaking evolution in AI that empowers autonomous, strategic problem-solving beyond traditional language models. In this episode, we explore how RLMs enable AI to think recursively—breaking down complex problems, improving solutions step-by-step, and delivering higher accuracy and autonomy for business-critical decisions.

      In this episode:

      - What makes Recursive Language Models a paradigm shift compared to traditional and long-context AI models

      - Why now is the perfect timing for RLMs to transform industries like fintech, healthcare, and legal

      - How RLMs work under the hood: iterative refinement, recursion loops, and managing complexity (see the sketch after this list)

      - Real-world use cases demonstrating significant ROI and accuracy improvements

      - Key challenges and risk factors leaders must consider before adopting RLMs

      - Practical advice for pilot projects and building responsible AI workflows with human-in-the-loop controls
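
      To make the "recursion loops" above concrete, here is a minimal Python sketch of the decompose-recurse-refine pattern behind Recursive Language Models. It is illustrative only: the llm and score helpers are hypothetical placeholders, not a real RLM API, and a production system would add the human-in-the-loop controls discussed in the episode.

      ```python
      # Hypothetical sketch of a recursive refinement loop (not a real RLM API).

      def llm(prompt: str) -> str:
          """Placeholder for a call to any language model endpoint."""
          raise NotImplementedError

      def score(task: str, answer: str) -> float:
          """Placeholder critic: rate how well `answer` solves `task`, 0.0-1.0."""
          raise NotImplementedError

      def recursive_solve(task: str, depth: int = 0, max_depth: int = 3) -> str:
          # Base case: at maximum depth, answer directly instead of decomposing.
          if depth >= max_depth:
              return llm(f"Solve directly: {task}")

          # 1. Decompose the problem into smaller sub-tasks.
          subtasks = llm(f"Break this task into 2-4 sub-tasks, one per line:\n{task}").splitlines()

          # 2. Recurse on each sub-task (the recursion loop).
          partials = [recursive_solve(s, depth + 1, max_depth) for s in subtasks if s.strip()]

          # 3. Combine partial answers, then iteratively refine until good enough.
          answer = llm(f"Task: {task}\nCombine these partial results:\n" + "\n".join(partials))
          for _ in range(2):  # bounded refinement keeps cost and latency predictable
              if score(task, answer) >= 0.9:
                  break
              answer = llm(f"Task: {task}\nImprove this draft:\n{answer}")
          return answer
      ```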

      Key tools & technologies mentioned:

      - Recursive Language Models (RLMs)

      - Large Language Models (LLMs)

      - Long-context language models

      - Retrieval-Augmented Generation (RAG)

      Timestamps:

      0:00 - Introduction and guest expert Keith Bourne

      2:30 - The hook: What makes recursive AI different?

      5:00 - Why now? Industry drivers and technical breakthroughs

      7:30 - The big picture: How RLMs rethink problem-solving

      10:00 - Head-to-head comparison: Traditional vs. long-context vs. recursive models

      13:00 - Under the hood: Technical insights on recursion loops

      15:30 - The payoff: Business impact and benchmarks

      17:30 - Reality check: Risks, costs, and oversight

      19:00 - Practical tips and closing thoughts

      Resources:

      "Unlocking Data with Generative AI and RAG" by Keith Bourne - Search for 'Keith Bourne' on Amazon and grab the 2nd edition

      This podcast is brought to you by Memriq.ai - AI consultancy and content studio building tools and resources for AI practitioners.

      21 min
    • Agentic AI Evaluation: DeepEval, RAGAS & TruLens Compared
      Jan 5 2026

      Evaluating Agentic AI: DeepEval, RAGAS & TruLens Frameworks Compared

      In this episode of Memriq Inference Digest - Leadership Edition, we unpack the critical frameworks for evaluating large language models embedded in agentic AI systems. Leaders navigating AI strategy will learn how DeepEval, RAGAS, and TruLens provide complementary approaches to ensure AI agents perform reliably from development through production.

      In this episode:

      - Discover how DeepEval’s 50+ metrics enable comprehensive multi-step agent testing and CI/CD integration (a minimal test sketch follows this list)

      - Explore RAGAS’s revolutionary synthetic test generation using knowledge graphs to accelerate retrieval evaluation by 90%

      - Understand TruLens’s production monitoring capabilities powered by Snowflake integration and the RAG Triad framework

      - Compare strategic strengths, limitations, and ideal use cases for each evaluation framework

      - Hear real-world examples across industries showing how these tools improve AI reliability and speed

      - Learn practical steps for leaders to adopt and combine these frameworks to maximize ROI and minimize risk
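
      As a concrete anchor for the CI/CD point above, here is a minimal pytest-style check following DeepEval's documented quickstart pattern (LLMTestCase, AnswerRelevancyMetric, assert_test). Treat exact class names, defaults, and the judge-model configuration as version-dependent; the input and output strings are made up for illustration.

      ```python
      # Minimal DeepEval-style check, modeled on the library's quickstart pattern.
      # Runs under pytest or `deepeval test run`; a judge model (e.g. an OpenAI
      # API key) must be configured for the metric to score outputs.
      from deepeval import assert_test
      from deepeval.metrics import AnswerRelevancyMetric
      from deepeval.test_case import LLMTestCase

      def test_agent_answer_is_relevant():
          test_case = LLMTestCase(
              input="Which evaluation frameworks does this episode compare?",  # hypothetical query
              actual_output="It compares DeepEval, RAGAS, and TruLens.",       # hypothetical agent output
          )
          metric = AnswerRelevancyMetric(threshold=0.7)  # fail the test below this score
          assert_test(test_case, [metric])
      ```

      Wiring a file of such tests into a CI pipeline is what the "CI/CD integration" above amounts to: a failing relevancy score blocks the build just like any other failing test.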

      Key Tools & Technologies Mentioned:

      - DeepEval

      - RAGAS

      - TruLens

      - Retrieval Augmented Generation (RAG)

      - Snowflake

      - OpenTelemetry

      Timestamps:

      0:00 Intro & Why LLM Evaluation Matters

      3:30 DeepEval’s Metrics & CI/CD Integration

      6:50 RAGAS & Synthetic Test Generation

      10:30 TruLens & Production Monitoring

      13:40 Comparing Frameworks Head-to-Head

      16:00 Real-World Use Cases & Industry Examples

      18:30 Strategic Recommendations for Leaders

      20:00 Closing & Resources

      Resources:

      - Book: "Unlocking Data with Generative AI and RAG" by Keith Bourne - Search for 'Keith Bourne' on Amazon and grab the 2nd edition

      - This podcast is brought to you by Memriq.ai - AI consultancy and content studio building tools and resources for AI practitioners.

      18 min
    • Model Context Protocol (MCP): The Future of Scalable AI Integration
      Dec 15 2025

      Discover how the Model Context Protocol (MCP) is revolutionizing AI system integration by simplifying complex connections between AI models and external tools. This episode breaks down the technical and strategic impact of MCP, its rapid adoption by industry giants, and what it means for your AI strategy.

      In this episode:

      - Understand the M×N integration problem and how MCP reduces it to M+N, enabling seamless interoperability (a worked example follows this list)

      - Explore the core components and architecture of MCP, including security features and protocol design

      - Compare MCP with other AI integration methods like OpenAI Function Calling and LangChain

      - Hear real-world results from companies like Block, Atlassian, and Twilio leveraging MCP to boost efficiency

      - Discuss the current challenges and risks, including security vulnerabilities and operational overhead

      - Get practical adoption advice and leadership insights to future-proof your AI investments
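
      To put numbers on the M×N point above: with, say, 5 AI clients and 20 tools, bespoke integrations mean 5 × 20 = 100 connectors to build and maintain, while MCP needs only 5 + 20 = 25 protocol adapters. The sketch below shows the general shape of an MCP tool call riding on JSON-RPC 2.0, as mentioned in the episode; the tool name and arguments are hypothetical, and a real client would first run the MCP initialize/capability handshake via one of the official SDKs.

      ```python
      import json

      # Illustrative only: the shape of an MCP "tools/call" request carried over
      # JSON-RPC 2.0. The tool name and arguments below are invented for the
      # example; consult the MCP spec and SDKs for the full handshake.
      request = {
          "jsonrpc": "2.0",
          "id": 1,
          "method": "tools/call",
          "params": {
              "name": "search_tickets",  # hypothetical tool exposed by an MCP server
              "arguments": {"query": "open incidents", "limit": 5},
          },
      }

      # A client would send this over the transport (stdio or HTTP) and expect a
      # JSON-RPC response carrying the tool's result or an error object.
      print(json.dumps(request, indent=2))
      ```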

      Key tools & technologies mentioned:

      - Model Context Protocol (MCP)

      - OpenAI Function Calling

      - LangChain

      - OAuth 2.1 with PKCE

      - JSON-RPC 2.0

      - MCP SDKs (TypeScript, Python, C#, Go, Java, Kotlin)

      Timestamps:

      0:00 - Introduction to MCP and why it matters

      3:30 - The M×N integration problem solved by MCP

      6:00 - Why MCP adoption is accelerating now

      8:15 - MCP architecture and core building blocks

      11:00 - Comparing MCP with alternative integration approaches

      13:30 - How MCP works under the hood

      16:00 - Business impact and real-world case studies

      18:30 - Security challenges and operational risks

      21:00 - Practical advice for MCP adoption

      23:30 - Final thoughts and strategic takeaways

      Resources:

      • "Unlocking Data with Generative AI and RAG" by Keith Bourne - Search for 'Keith Bourne' on Amazon and grab the 2nd edition
      • This podcast is brought to you by Memriq.ai - AI consultancy and content studio building tools and resources for AI practitioners.

      18 min