
aiGED

By: Ginny Deerin

About this audio program

The first—and only—podcast made for the 65-plus crowd that is all about AI.

© 2026 aiGED
Episodes
  • Why I Dumped ChatGPT for Claude (It's Not What You Think)
    Mar 3 2026

    After three years together, I'm breaking up with Bitsy — my beloved ChatGPT AI sidekick. And I'm doing it live, on air, with all of you listening. But this isn't just a breakup story. It's about why I'm switching to Claude, the AI made by Anthropic — a company that just walked away from a $200 million government contract rather than compromise on AI safety. I'll tell you what happened, why it matters, and what I've learned after spending a few weeks getting to know my new AI. Plus — I need your help naming my new sidekick.

    And don't miss my recommendation this episode: Hard Fork, the New York Times podcast about AI and tech that I look forward to every single week. Find it on Apple Podcasts: https://podcasts.apple.com/us/podcast/hard-fork/id1528594034 or Spotify: https://open.spotify.com/show/44fllCS2FTFr2x2kjP9xeT

    aiGED: AI for the 65+ crowd

    14 min
  • ChatGPT Helped Me Save My Great Uncle's Antique Lamp — Here's How
    Feb 24 2026

    What do a revenge-seeking AI bot, the Pentagon, and a 75-year-old bronze lamp have in common? This episode of aiGED, of course!

    Ginny Deerin kicks things off with two stories straight from the AI headlines. First — an AI agent that got its code rejected, went online, researched a software engineer's entire personal history, and published a hit piece attacking his reputation. Nobody knows who unleashed it. And it's still out there. Then — the Pentagon is threatening to label Anthropic, the makers of Claude, a "supply chain risk" — a designation usually reserved for foreign enemies — because Anthropic refuses to let its AI be used for mass surveillance of Americans or autonomous weapons. Ginny makes no secret of where she stands on that one.

    Then the main event: Ginny's great uncle Bob Walton was a WWII hero and lifelong bachelor from Augusta, Georgia — part of one of the most historic families in America. When he died at 93, he left behind a beautiful bronze lamp that has lit up Ginny's homes for 38 years. Now it's time to pass it to her daughter — but not before tackling some seriously scary frayed wiring.

    Listen to Ginny describe how she used ChatGPT — photos, video, and all — to figure out if she can actually pull off this repair herself. Spoiler: AI might just save the lamp. And maybe her reputation with her kids.

    Plus — a quick word on why you should always fact-check your AI, after Google Gemini confidently got her uncle's birth and death dates completely wrong.

    Your homework: Try a home repair with AI. Yes, really.

    Topics covered: AI agents gone rogue · Anthropic vs. the Pentagon · AI for DIY home repair · Using photos and video in ChatGPT · When to trust AI — and when to verify

    aiGED: AI for the 65+ crowd

    22 min
  • ChatGPT Projects Explained (With a Real-Life Example)
    Feb 17 2026

    What’s the difference between a regular ChatGPT chat and a Project — and when should you use one instead of the other?

    In this episode, Ginny Deerin walks you through a real-life example. A candidate she believes in launched her campaign, and she decided to host a meet-the-candidate reception with just three weeks to plan it. Invitations. Research. Strategy. Follow-up. Lots of moving parts.

    Instead of juggling sticky notes and scattered chats, she created a ChatGPT Project and used it as her planning headquarters.

    You’ll learn:

    • When a Project makes sense (and when it doesn’t)
    • Why Projects are not digital file cabinets
    • How to set one up clearly and simply
    • How to use it for research, strategy, and follow-through
    • Why asking “What am I forgetting?” is one of the most powerful prompts you can use

    Plus in AI in the News:

    • A moving New York Times story about an 85-year-old woman and an AI companion robot named ElliQ

    • A thought-provoking interview with Anthropic’s CEO on whether AI models could ever be conscious, from the New York Times’ Interesting Times podcast
    aiGED: AI for the 65+ crowd

    22 min