The AI Briefing

By: Tom Barber

About this audio content

The AI Briefing is your 5-minute daily intelligence report on AI in the workplace. Designed for busy corporate leaders, we distill the latest news, emerging agentic tools, and strategic insights into a quick, actionable briefing. No fluff, no jargon overload—just the AI knowledge you need to lead confidently in an automated world.

©2025 Spicule LTD

    Episodes
    • The Data Quality Crisis Killing 85% of AI Projects (And How to Fix It)
      Jan 7 2026

      85% of AI leaders cite data quality as their biggest challenge, yet most initiatives launch without addressing foundational data problems. Tom Barber reveals the uncomfortable conversation your AI team is avoiding.

      The Data Quality Crisis Killing 85% of AI Projects

      Key Statistics

      • 85% of AI leaders cite data quality as their most significant challenge (KPMG 2025 AI Quarterly Poll)
      • 77% of organizations lack essential data and AI security practices (Accenture State of Cybersecurity Resilience 2025)
      • 72% of CEOs view proprietary data as key to Gen AI value (IBM 2025 CEO Study)
      • 50% of CEOs acknowledge significant data challenges from rushed investments
      • 30% of Gen AI projects predicted to be abandoned after proof of concept (Gartner)

      Three Critical Questions for Your AI Initiative

      1. Single Source of Truth

      • Do we have unified data for AI models to consume?
      • Are AI initiatives using centralized data warehouses or convenient silos?
      • How do conflicting data versions affect AI outputs?

      2. Data Quality Ownership

      • Who owns data quality in our organization?
      • Do they have authority to block deployments?
      • Was data quality specifically signed off on for your last AI launch?

      3. Data Lineage and Traceability

      • Can we trace AI decisions back to source data? (A minimal lineage sketch follows this list.)
      • How do we debug AI failures without lineage?
      • Are we prepared for EU AI Act requirements (phasing in since February 2025)?
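
      In practice, traceability means every AI output carries a lineage record linking it to the datasets and the data snapshot it was derived from. Below is a minimal Python sketch of that idea; the record fields, hashing scheme, and function names are illustrative assumptions, not a specific vendor API or a format prescribed by the EU AI Act.

      import hashlib
      import json
      from dataclasses import dataclass, asdict
      from datetime import datetime, timezone

      @dataclass
      class LineageRecord:
          """Links one AI output back to the exact data it was derived from."""
          output_id: str
          model_version: str
          source_datasets: list      # e.g. ["warehouse.crm.accounts"]
          source_snapshot_hash: str  # fingerprint of the input rows at run time
          created_at: str

      def fingerprint(rows: list) -> str:
          """Stable hash of the input rows so a disputed output can be replayed."""
          payload = json.dumps(rows, sort_keys=True).encode("utf-8")
          return hashlib.sha256(payload).hexdigest()

      def record_lineage(output_id, model_version, datasets, rows) -> LineageRecord:
          return LineageRecord(
              output_id=output_id,
              model_version=model_version,
              source_datasets=datasets,
              source_snapshot_hash=fingerprint(rows),
              created_at=datetime.now(timezone.utc).isoformat(),
          )

      # Hypothetical usage: store a record like this next to every model output.
      rec = record_lineage("out-0042", "churn-model-1.3",
                           ["warehouse.crm.accounts"], [{"id": 1, "churn_risk": 0.8}])
      print(asdict(rec))

      With records like this stored alongside outputs, a disputed decision can be traced back to the exact datasets, snapshot, and model version that produced it.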

      The Real Cost of Poor Data Governance

      • Organizations skip governance → hit problems at scale → abandon initiatives → repeat cycle
      • Tech debt compounds from rushed implementations
      • Strong data foundations enable faster AI scaling

      Action Items for This Week

      1. Ask for data quality scores on your highest-priority AI initiative (a toy scoring sketch follows this list)
      2. Identify who owns data quality decisions and their authority level
      3. Test traceability: can you track a wrong output back to its source data?
      4. Ensure data governance is a budget line item, not a buried assumption
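
      On item 1: there is no single standard "data quality score", but even a toy metric forces the conversation. The sketch below averages three common checks (completeness, uniqueness, freshness); the equal weighting and the 30-day freshness window are illustrative assumptions, not an industry standard.

      from datetime import datetime, timezone

      def quality_score(rows, key_field, timestamp_field, max_age_days=30):
          """Toy data quality score: mean of completeness, uniqueness, freshness."""
          if not rows:
              return 0.0
          # Completeness: share of non-null values across all cells
          cells = [v for row in rows for v in row.values()]
          completeness = sum(v is not None for v in cells) / len(cells)
          # Uniqueness: share of distinct primary keys
          keys = [row[key_field] for row in rows]
          uniqueness = len(set(keys)) / len(keys)
          # Freshness: share of rows updated within the window (run-date dependent)
          now = datetime.now(timezone.utc)
          fresh = sum((now - row[timestamp_field]).days <= max_age_days for row in rows)
          return round((completeness + uniqueness + fresh / len(rows)) / 3, 3)

      # Hypothetical two-row table: one clean row, one with a null and a stale date.
      rows = [
          {"id": 1, "email": "a@example.com",
           "updated": datetime(2026, 1, 5, tzinfo=timezone.utc)},
          {"id": 2, "email": None,
           "updated": datetime(2025, 6, 1, tzinfo=timezone.utc)},
      ]
      print(quality_score(rows, key_field="id", timestamp_field="updated"))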

      Key Frameworks Mentioned

      • Accenture: Data security, lineage, quality, and compliance
      • PwC: Board-level data governance priority
      • KPMG: Integrated AI and data governance under single umbrella

      Research Sources

      • KPMG 2025 AI Quarterly Poll Survey
      • Accenture State of Cybersecurity Resilience 2025
      • IBM 2025 CEO Study
      • Drexel University and Precisely Study
      • PwC Research on AI Data Governance
      • Gartner AI Project Predictions
      • Forrester IT Landscape Analysis
      • EU AI Act Requirements

      Chapters

      • 0:00 - Introduction: The Data Quality Crisis
      • 0:29 - Why 85% of AI Leaders Struggle with Data Quality
      • 2:12 - How AI Makes Data Problems Worse
      • 2:56 - Three Critical Questions Every Organization Must Ask
      • 4:45 - The Real Cost of Skipping Data Governance
      • 5:34 - Reframing Data Governance as an Accelerant
      • 6:16 - What Good Data Governance Looks Like
      • 7:33 - Action Steps You Can Take This Week
      9 min
    • Why 95% of AI Pilots Fail: The Hidden Scaling Problem Killing Your ROI
      Jan 6 2026

      MIT research reveals 95% of AI pilots fail to deliver revenue acceleration. Tom breaks down why this isn't a technology problem but a scaling failure, and provides three critical questions to identify which pilots deserve investment.

      Show Notes

      Key Statistics

      • 95% of generative AI pilots fail to achieve rapid revenue acceleration (MIT, 2025)
      • 8 in 10 companies have deployed Gen AI but report no material earnings impact
      • Only 25% of AI initiatives deliver expected ROI
      • Just 16% scale enterprise-wide
      • Only 6% achieve payback in under a year
      • 30% of GenAI projects predicted to be abandoned by end of 2025

      Core Problem: Horizontal vs. Vertical Deployments

      • Horizontal: Enterprise-wide copilots, chatbots, general productivity tools
        • Scale quickly but deliver diffuse, hard-to-measure gains
      • Vertical: Function-specific applications that transform actual work
        • 90% remain stuck in pilot mode

      Three Critical Evaluation Questions

      1. Does this pilot solve a problem we pay to fix?
      2. Can we measure impact in terms the CFO cares about?
      3. Does it require process redesign or just tool adoption?

      Success Factors

      • Empower line managers, not just central AI labs
      • Select tools that integrate deeply and adapt over time
      • Consider purchasing solutions over custom builds
      • Be willing to retire failing pilots

      This Week's Action Items

      • Inventory current AI pilots
      • Categorize as: scaling successfully, stalled but salvageable, or stalled and unlikely to recover
      • Apply the three evaluation questions (a triage sketch follows this list)
      • Identify specific barriers for salvageable pilots
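
      One way to make this concrete is to run the three evaluation questions as a checklist over your pilot inventory. The sketch below is a toy triage, not a formal methodology; in particular, the "two of three" salvageability threshold is an assumption for illustration.

      from dataclasses import dataclass

      @dataclass
      class Pilot:
          name: str
          solves_paid_problem: bool  # Q1: do we already pay to fix this?
          cfo_measurable: bool       # Q2: impact expressible in CFO terms?
          process_redesigned: bool   # Q3: work redesigned, not just a tool adopted?
          currently_scaling: bool

      def categorize(p: Pilot) -> str:
          if p.currently_scaling:
              return "scaling successfully"
          score = sum([p.solves_paid_problem, p.cfo_measurable, p.process_redesigned])
          # Assumed threshold: a yes to two of the three questions
          return "stalled but salvageable" if score >= 2 else "stalled and unlikely to recover"

      # Hypothetical inventory entries
      for p in [Pilot("invoice-triage", True, True, False, False),
                Pilot("general chatbot", False, False, False, False)]:
          print(f"{p.name}: {categorize(p)}")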

      Chapters

      • 0:00 - The 95% Problem: Why AI Pilots Aren't Becoming Products
      • 0:24 - The Research: MIT, McKinsey, and IBM Findings on AI Failure Rates
      • 1:49 - Why Pilots Stall: Horizontal vs. Vertical Deployments
      • 3:07 - What Successful Scaling Actually Looks Like
      • 4:11 - Three Critical Questions to Evaluate Your AI Pilots
      • 5:40 - The Permission to Stop: When to Retire Failing Pilots
      • 6:45 - Action Steps: What to Do This Week
      9 min
    • Why One AI Model Won't Rule Them All: Choose the Right Tool for Each Job
      Jan 5 2026

      Not all AI models are created equal. Learn why you need different AI tools for different tasks and how to strategically deploy multiple models in your organization for maximum effectiveness.

      Episode Show Notes

      Key Topics Covered

      AI Model Diversity & Specialization

      • Why different AI models serve different purposes
      • The importance of testing multiple platforms and engines
      • How model capabilities vary across use cases

      Platform-Specific Strengths

      • Microsoft Copilot: Office integration, Windows embedding, email management, document analysis
      • Claude Opus Models: Programming and development tasks
      • GPT-5 Codex: Advanced coding capabilities
      • Google Gemini: Emerging competitive solutions

      Strategic Implementation

      • Moving beyond "one size fits all" AI deployment
      • Testing methodologies for different scenarios
      • Adapting to evolving model capabilities

      Main Takeaways

      1. No single AI model excels at everything
      2. Test different engines for different purposes
      3. Match the right tool to the specific task (a routing sketch follows this list)
      4. Continuously evaluate as models evolve
      5. Strategic deployment beats widespread single-platform adoption
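
      One lightweight way to operationalize "right tool for each job" is a routing table from task categories to preferred models. The sketch below mirrors the rough pairings listed above; the category names and model identifiers are placeholders, not exact vendor API model names.

      # Illustrative task-to-model routing table (identifiers are placeholders)
      ROUTES = {
          "email_and_documents": "microsoft-copilot",
          "software_development": "claude-opus",
          "advanced_coding": "gpt-5-codex",
          "general_research": "gemini",
      }

      def pick_model(task_category: str) -> str:
          """Return the preferred model for a task, with an explicit fallback."""
          return ROUTES.get(task_category, "default-general-model")

      print(pick_model("software_development"))  # claude-opus
      print(pick_model("meeting_notes"))         # default-general-model

      Revisiting this table as models evolve is the "continuously evaluate" takeaway in operational form.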

      Looking Ahead

      This episode kicks off a series exploring AI use cases and workplace optimization strategies for 2026.

      Chapters

      • 0:00 - Introduction: AI in 2026
      • 0:31 - The Reality of AI Model Diversity
      • 0:50 - Microsoft Copilot's Strengths and Limitations
      • 1:32 - Specialized Models: Claude, GPT-5, and Gemini
      • 2:31 - Strategic Testing and Implementation
      • 2:53 - Key Takeaways and Next Steps
      4 min