
Trust Issues

By: Ailish McLaughlin

About this audio content

Welcome to Trust Issues, the podcast for people who use AI but don't fully trust it. Each episode, we speak to experts in building and using AI to help you understand how it really works, who's behind it, where it's going and when you can (and can't) trust it. So you can stop second-guessing and start using AI with confidence.

Copyright 2026 Ailish McLaughlin
Episodes
  • Why opting out of AI is actually harming your future self with Kate Minogue
    Mar 18 2026

    You can't opt out of AI. It's already in your Uber, your Netflix, your news feed. So the real question is: do you want to understand it, or let someone else decide how it works for you?

    Kate Minogue (ex-Meta, AI advisor, founder of The AI Leadership Lab) joins us to talk incentives, algorithms, user power, and why checking out of AI is the worst thing you can do right now.

    SHOW NOTES

    About the Guest

    Kate Minogue is an AI advisor and fractional product leader with 6 years at Meta and a background spanning data science, gaming, fintech, and banking. She's passionate about helping non-technical business leaders get confident with AI, and recently launched The AI Leadership Lab, a course designed to do exactly that. Find Kate on LinkedIn or at kate-minogue.com.

    In This Episode

    1. The Uber driver who checked out of AI (and why that's not actually possible)
    2. Netflix vs TikTok: same technology, completely different incentives
    3. Why understanding incentives is the key to trusting (or not trusting) AI
    4. AI hallucinations explained: what Kate told her sister that made her stop being scared
    5. How your data actually shapes the AI products being built
    6. Misinformation, deepfakes, and AI-generated content: which fears are warranted
    7. Why CEOs and graduates are behaving the same way around AI right now
    8. The "safe zones" framework for AI use in organisations
    9. How users (yes, you) can influence how AI develops
    10. US vs Europe: deregulation vs responsible AI as competitive advantage
    11. What teams actually want from leaders in the age of AI (it's not expertise)
    12. "Do it because the men are doing it and they are not apologising for it"

    Mentioned in This Episode

    1. The AI Leadership Lab (Kate's course for non-technical business leaders)
    2. Max Tegmark (AI safety researcher, Web Summit talk)
    3. DeepSeek (Chinese AI lab)
    4. Sora (OpenAI's image/video generation app)
    5. EU AI Act and GDPR
    6. Boxer CEO memo ("AI is for you, not to you")
    7. Women in Africa building their own AI models (Web Summit)

1 hr 16 min
  • AI - magic or maths? A no-jargon guide on how AI actually works.
    Mar 11 2026

    Last week, Florence helped us get our heads around the right mindset for using AI. But there were a lot of words flying around. Agents. LLMs. Machine learning. What do those things actually mean? And more importantly, does it matter?

    This week we're joined by Raji Ramakrishnan, a product leader at Lloyds Banking Group who works on agentic AI observability. Which, yes, is a mouthful. But by the end of this episode, you'll actually know what all of those words mean. And that's kind of the point.

    Raji breaks down the entire AI landscape in a way that finally makes sense. She starts with the basics (AI is not magic, it's maths, data and programming) and walks us through how machines learn using an analogy that anyone who's taught a child flashcards will immediately get. Supervised learning? That's you holding up the flashcard. Unsupervised learning? That's the kid pointing at a cat in the street having figured it out on their own.
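For listeners who like to see the analogy written down, here's a minimal, illustrative sketch of the flashcard idea in Python (our own example, not from the episode): supervised learning as matching against labelled "flashcards", unsupervised learning as grouping examples with no labels at all.

```python
# Supervised learning: the machine is shown labelled "flashcards"
# (features -> answer) and predicts by matching new input against them.
flashcards = {
    ("fur", "whiskers"): "cat",
    ("fur", "barks"): "dog",
    ("feathers", "beak"): "bird",
}

def predict(features):
    """Pick the label whose flashcard shares the most features."""
    best = max(flashcards, key=lambda card: len(set(card) & set(features)))
    return flashcards[best]

# Unsupervised learning: no labels -- the machine just groups items
# that look alike (here, crudely, by their first feature).
animals = [("fur", "whiskers"), ("fur", "barks"), ("feathers", "beak")]
groups = {}
for animal in animals:
    groups.setdefault(animal[0], []).append(animal)

print(predict(("fur", "whiskers")))  # matches the "cat" flashcard
print(len(groups))                   # 2 groups found without any labels
```

Holding up the flashcard is the labelled dictionary; the kid pointing at a cat in the street is the grouping loop, which finds structure without ever being told the answers.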

    But this episode isn't just a glossary. It's about why understanding this stuff actually matters. Raji makes a compelling case that AI is coming whether you engage with it or not. Your mobile provider, your bank, your electricity company are all already using it. And the more you understand, the better equipped you are to know when to trust it and when to push back.

    We also get into hallucinations (why AI confidently makes stuff up), the difference between generative AI and agentic AI, and what banks are actually doing behind the scenes to make sure AI doesn't go rogue. Spoiler: there are real humans watching.

    In this episode, we cover:

    1. AI, machine learning, deep learning, generative AI, agentic AI: what each one actually means and how they connect
    2. The flashcard analogy: how machines learn in a similar way to children (supervised vs unsupervised learning)
    3. Why AI is a prediction machine, not a truth machine, and why that distinction matters
    4. Hallucinations: what they are, why they happen, and why you should always sense-check
    5. Agentic AI: what changes when AI can take actions on its own, not just generate content
    6. Observability and guardrails: what's actually happening inside banks to keep AI in check
    7. Why jargon is an unnecessary barrier to entry and how to not let it hold you back
    8. The mobile phone analogy: remember buying minutes for your Nokia 3310? AI adoption is on the same trajectory

1 hr 9 min
  • Drunk Interns, Lazy Brains and Knowing When to Use AI
    Feb 25 2026

    This week we're kicking things off with a big question: is AI making us lazy? There's a study from MIT that suggests our brains might be outsourcing more than we realise. And with our brains not fully developing until around age 32, what does it mean that we're handing over so much cognitive work to AI tools before we've even finished cooking?

    To help us figure it out, we're joined by Florence Jumpp, a product leader who's been working in AI and machine learning since 2019. Florence has a background in experimental psychology, and she's built her whole AI career around solving problems rather than obsessing over the tech itself.

    Florence introduces us to her "drunk intern" framework. It's exactly what it sounds like. Think of AI as a capable but overconfident intern who's had a few too many. They'll absolutely get stuff done for you, but you wouldn't send them to the board meeting. And you definitely wouldn't have them work on your hardest problems.

    She also shares her VEER framework for deciding which tasks to hand off to AI: weighing each task's Value, Enjoyment, Effort and Risk.

    In this episode, we cover:

    1. Why thinking of AI as a "drunk intern" helps you use it more wisely (and why Florence's is called Jack)
    2. The VEER framework for figuring out what to delegate to AI and what to protect
    3. Cognitive offloading: why your brain has stopped taking notes in personal conversations too
    4. How Florence uses Zapier to never face a post-holiday email wall again
    5. Why doing the hard thing still matters, and how to force yourself to sit with the blank page
    6. The positive feedback loop: using freed-up time to get even better at AI, not just filling it with more work
    7. Why the people who think for themselves are the ones who'll stand out

    About our guest: Florence Jumpp is a product leader specialising in AI and machine learning, with a background in experimental psychology. She brings a neuroscience lens to how we should think about AI's impact on our brains and our work.

    Resources mentioned:

    1. Zapier (zapier.com) for building AI-powered automations
    2. MIT study on AI and cognitive offloading

1 hr 24 min