Episodes

  • Why 40% of AI Projects Fail: The "Amnesia" Problem Nobody is Talking About | with Nemanja Bibic
    May 5 2026
    Gartner predicts that 40% of agentic AI projects will be canceled by 2027—not because the models aren't smart enough, but because the underlying memory layer isn't ready. In this episode, we sit down with Nemanja Bibic, Head of Growth at Cognee, a startup building the knowledge layer for our agentic future, to discuss the urgent need for a "hippocampus" for artificial intelligence. We dive deep into why current LLMs suffer from "AI Alzheimer’s," the technical difference between stateless and stateful systems, and why better memory—not just bigger models—is the key to reaching AGI.

    Inside the episode:
    - The "AI Alzheimer’s" Effect: Why current models struggle to retain context beyond 48 hours and act as a blank slate every time you open them.
    - The $7.5M Infrastructure Bet: Why founders from OpenAI and Facebook AI Research are backing the development of a "cognitive infrastructure."
    - Neurological vs. Cognitive Mimicry: How mimicking human reasoning and learning processes is moving AI accuracy from 60% toward 100%.
    - The Economics of Memory: Why the next trillion-dollar companies will be the ones building the infrastructure that allows AI to learn, doubt itself, and function without constant human supervision.
    - Community-Driven Intelligence: How 12,000 developers are building a persistent memory layer for the future of agentic workflows.

    Prefer watching the conversation? Subscribe to our YouTube channel: https://www.youtube.com/@TheTrustMoatPod

    ⏱️ Chapters:
    00:00:00 Intro
    00:03:40 CHAPTER 1 - WHY YOUR AI HAS AMNESIA
    00:05:15 The "48-Hour Wipe": Why LLMs lose context over time
    00:08:30 Stateless vs. Stateful: The technical root of AI memory loss
    00:14:22 Why "Long Context Windows" aren't a permanent fix
    00:25:42 CHAPTER 2 - THE $7.5M BET ON A PROBLEM NOBODY'S TALKING ABOUT
    00:27:10 Why OpenAI and FAIR founders are backing memory infrastructure
    00:29:45 Identifying the "infrastructure gap" in the current AI stack
    00:32:15 The business risk of building on "forgetful" models
    00:34:47 CHAPTER 3 - WHAT "AI MEMORY" ACTUALLY MEANS
    00:36:12 Vector Databases vs. Knowledge Graphs: A breakdown
    00:38:50 The role of "Ontologies" in making AI understand your business
    00:40:05 Ephemeral vs. Persistent memory in agentic workflows
    00:42:08 CHAPTER 4 - HOW THE HUMAN BRAIN INSPIRED A FIX FOR BROKEN AI MEMORY
    00:44:10 Mimicking the Hippocampus: Building a digital long-term memory
    00:46:55 How "Cognitive Mimicry" beats simple "Neurological Mimicry"
    00:49:30 The "Self-Doubt" Mechanism: Teaching AI to know what it doesn't know
    00:52:48 CHAPTER 5 - 12,000 DEVELOPERS WHO CHOSE TO BUILD THIS FOR FREE
    00:54:20 The Open Source advantage in AI security and memory
    00:58:15 Why community-driven "Maltbots" are outpacing corporate labs
    01:03:40 Collaborative Intelligence: Lessons from 12k contributors
    01:07:54 CHAPTER 6 - THE RACE BENEATH THE RACE: MEMORY VS MODELS
    01:09:15 Commodity Models: Why intelligence is becoming a "race to zero"
    01:11:05 Proprietary Data + Memory: The only defensible moat left
    01:13:39 CHAPTER 7 - WHAT YOUR COMPANY SHOULD DO BEFORE LOCKING IN AN AI STACK
    01:15:20 The Gartner Warning: Avoiding the 40% project failure rate
    01:18:45 Evaluating your "Memory Layer" before you buy the model
    01:21:10 Future-proofing your agentic infrastructure

    Follow Nemanja Bibic on LinkedIn: https://www.linkedin.com/in/nemanja-bibic-20523b87/
    Check out Cognee: https://www.cognee.ai/

    Connect with Maja on:
    - LinkedIn: https://www.linkedin.com/in/zmajapbaines/
    - X: https://x.com/lazarevic_p?s=11
    - Instagram: https://www.instagram.com/majaperovicbaines_mbm
    1 hr 19 min
  • Prompt Injection, Claude Code & Agent Security Explained | CISO Guillaume Ross
    Apr 28 2026
    Get this straight in your inbox --> 📩 Subscribe to the Trust Moat newsletter: https://majapbaines.substack.com/

    AI agent security is the silent threat behind every startup using Claude, ChatGPT, Claude Code, or autonomous agents in 2026 — and most founders don't know what the "lethal trifecta" is or why it puts their entire customer database at risk of leaking. In this episode, Guillaume Ross, a security consultant and CISO for startups, breaks down the real-world security risks of agentic AI, prompt injection attacks, and the identity problem of AI agents acting on your behalf. From Claude Code on a marketing team's laptop to customer service chatbots leaking data, Guillaume shares almost two decades of cybersecurity experience securing startups, fintechs, and regulated banks — and explains what every founder, developer, and everyday Claude user should be doing TODAY to stay safe.

    THE GUEST
    Guillaume Ross is a startup CISO and security consultant based in Montreal who has built security infrastructure from scratch at companies ranging from pre-revenue startups to regulated financial institutions, crypto companies, and banks. He was previously Head of Security at JupiterOne.
    Connect with Guillaume on LinkedIn: https://www.linkedin.com/in/guillaumeross
    Check out his website on security: https://foundersfirewall.io

    🔥 What you'll learn:
    - Why "shadow AI" is the new shadow IT — and how to stop it
    - The lethal trifecta: private data + untrusted input + internet access = disaster
    - Why BYOD laptops are a security nightmare for AI-first startups
    - How prompt injection actually works (with a real email example)
    - The AI agent identity problem nobody is talking about
    - Why customer service chatbots are the #1 attack surface in 2026
    - Sandboxing OpenClaw, Claude Code, and computer-use agents safely
    - Vibe coding security: what to never roll yourself
    - MCP servers: the hidden risk in your AI stack
    - What governments get WRONG about LLMs (the August 2025 CISA incident)
    - AI-assisted vulnerability scanning vs. AI-generated code risks

    ⏱️ Chapters:
    00:00:00 Intro
    00:04:52 CHAPTER 1 - EVERYONE IS A DEVELOPER NOW
    00:05:23 The expansion of the corporate attack surface
    00:07:38 Why startups selling to enterprise need security on Day 1
    00:08:35 The problem with "Bring Your Own Device" (BYOD)
    00:09:42 Choosing tech that is "easy to manage"
    00:10:49 CHAPTER 2 - SHADOW AI IS THE NEW SHADOW IT
    00:11:43 Lessons from the CISA document leak
    00:12:02 The Dropbox era vs. the AI era
    00:12:47 Why blocking AI tools usually fails
    00:13:44 How to force corporate versions of ChatGPT and Claude
    00:14:24 Why personal accounts bypass legal data protections
    00:22:32 CHAPTER 3 - THE AGENT IS YOU
    00:26:39 Security risks of browser-based AI agents
    00:27:14 Why you shouldn't use agents in your primary browser profile
    00:32:47 The consolidation of the AI startup market
    00:33:41 Transparency: Identifying agents vs. humans
    00:34:00 The difficulty of detecting synthetic voice and deepfakes
    00:47:53 CHAPTER 4 - THE LETHAL TRIFECTA
    00:48:05 Why text-based LLMs can't separate instructions from data
    00:48:30 Indirect prompt injection: The "hidden email" threat
    00:49:35 How attackers can exfiltrate quarterly reports via AI
    00:52:20 The danger of agents with "Write" access
    00:53:15 Sandboxing "OpenClaw" and computer-use models
    00:59:01 CHAPTER 5 - WE DON'T HAVE A FIX FOR THIS YET
    01:00:15 Why basic threat modeling is essential for builders
    01:02:30 Dealing with "close calls" in AI automation
    01:05:40 The "Identity Crisis" of agentic authentication
    01:10:12 Future predictions for AI-native security products
    01:15:50 Resources for builders: foundersfirewall.io

    🔗 Resources mentioned:
    → Founders Firewall (Guillaume's free security guide for startup founders): https://foundersfirewall.io
    → Simon Willison on the lethal trifecta: https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
    → OWASP LLM Top 10: https://owasp.org/www-project-top-10-for-large-language-model-applications/

    If you're building a startup, shipping AI features, or just using Claude and ChatGPT every day, this conversation will change how you think about security forever.

    Prefer to watch on YouTube --> https://youtu.be/-p139v8fAgw?si=FQzJxRmVNcP5gGKA

    Connect with Maja on:
    - LinkedIn: https://www.linkedin.com/in/zmajapbaines
    - X: https://x.com/lazarevic_p?s=11
    - Instagram: https://www.instagram.com/majaperovicbaines_mbm
    1 hr 17 min
  • He Spent 20 Years Depositing Value Before Asking For Anything. Now He Runs A VC Fund — Tiho Bajić
    Apr 21 2026

    Talking about community as a moat, capital, and the long game - with the investor who spent 20 years earning the right to write checks. Tihomir Bajić has seen what makes companies last, and why most founders are solving the wrong problem. In this episode, we get into why community isn't a marketing tactic — it's a moat. Why trust compounds slower than revenue but outlasts it. And what it actually looks like to build something designed to survive the long game. If you're building a company and playing for keeps, this one's for you.

    Follow Monkey Business Media on:

    Subscribe to our YouTube Channel

    Follow us on Instagram

    Connect with Maja on LinkedIn

    Connect with Maja on X

    Subscribe to The Trust Moat Newsletter

    ----

    Where to find Tihomir Bajić

    Connect with Tiho on LinkedIn

    Connect with Tiho on X

    1 hr 31 min