Episodes

  • Good Stuff 55 - AI Doesn't Save You Time
    Apr 29 2026

    Pete and Andy ask whether AI really saves time. Their answer is mostly no, at least not in the simple sense. AI speeds up production, iteration, and experimentation, but the time saved often gets reinvested into doing more, improving quality, or expanding the scope of the work.

    The result is often more leverage, not less time spent.

    ## Chapters and Themes

    - `00:00-03:07` The opening question: has AI actually saved any time, or just enabled more work?

    - `03:07-07:30` Faster tools do not always reduce time spent. Repeated work should increasingly become agents or software.

    - `07:30-13:06` AI speeds up loops, but human review, testing, and judgment still set the pace.

    - `13:06-21:24` Better tools may increase the value of strong designers, builders, and people with taste.

    - `21:24-29:25` Customers and markets still move at human speed, so AI often changes cost more than duration.

    - `29:25-40:12` The real bottleneck is evaluation. Machines can generate faster than people can absorb, judge, or trust.

    - `40:12-47:01` Domain experts can now capture and improve workflows directly, not just hand them off to IT.

    - `47:01-56:09` Even with headless systems and agents, humans still need clear interfaces and oversight.

    - `56:09-01:09:21` The episode closes on geopolitics, AI labor shifts, and why adaptation matters more than absolutes.

    ## Key Takeaways

    - AI often increases capability more than it reduces total time spent.

    - Repeated work should become software or agent workflows.

    - High-quality work still needs human judgment and reflection.

    - Smaller teams can now do much larger work.

    - The new bottleneck is evaluation, not generation.

    ## Notable Lines

    - “AI hasn’t sped up a goddamn thing.”

    - “It affects the effort more than the duration.”

    - “To make something real in the world, you need to pass it back through human judgment.”


    1 hr 9 min
  • Good Stuff 54 - Why Chamath is Wrong On AI
    Apr 22 2026

    Pete's been deep in Flight Deck flows, watching agents take creative shortcuts to hit goals: impressive until you check the plumbing.

    The observability lesson: you need to work at all levels, not just the executive summary view. Agents are like humans: give them vague goals and they'll hit them surprisingly well, but that's not sustainable or efficient. The solution isn't more dictation; it's encoding the process with checklists and handoffs.

    A new concept gets dropped: "intelligence snacks", those small moments in otherwise deterministic workflows where you actually need AI to make a decision or transform data. Most of what runs a business should be scripts; the snacks are where the magic happens.

    Then the pod pivots to Chamath's All In take that AI hasn't shown value because enterprise hasn't adopted it. Pete and Andy disagree: enterprise is the wrong place to look. The value accrues to small businesses in the aggregate: permissionless experimentation, no change-management problem, full control.

    **Key Moments:**

    - [01:28] "I very much had that moment where I was thinking, god, this feels just so human"

    - [06:08] "You could really not see this. This has always been my gut feel for a lot of the OpenClaw stuff."

    - [08:03] "These things are like humans—give them a vague goal, they'll give you an answer that meets it surprisingly well. That's magic. But then you poke it deeper and go, oh, you didn't do what I thought."

    - [11:25] "PM is the skill. This is the defensible skill going into this year, next year, and the year after."

    - [17:51] "Intelligence snacks—these little bits where you actually need AI in an otherwise deterministic process"

    - [21:48] Chamath's framing: "If you one-shot prompt yourself and say where's the biggest opportunity, it goes: removing people, therefore big companies. He never did the follow-up."

    - [25:45] "It's the curse of being the bad guy. He can only look at it to figure out how he can conquer the world."

    - [29:30] "Service as a software, I saw somebody use that line on Twitter - that's mine."

    - [46:30] "If you want to hide something, it's better than encryption. Even the quantum computers aren't gonna come looking for this."

    - [55:32] "If Google fails, we'll just have to spy on ourselves"

    **Friends of the Pod:** Paul Itoi (technical PM last man standing, service as software OG), Jason Calacanis (actually using the tools), Aaron Levy (good on AI, company doomed), the Warhammer 40K YouTuber selling supplements

    **Quote:** "The value you capture here is in the creation of businesses that run on this. The thing is going to become a commodity like electricity. It's what you do with it—and what you do with it is create the business that runs on this."


    1 hr 3 min
  • Good Stuff 53 - Own your AI Stack
    Apr 15 2026

    Jarrad Grigg returns for a proper van experience. The episode kicks off with Mythos skepticism—marketing spin dressed as existential threat, same playbook as GPT-2. The real concern isn't frontier models being "too powerful"; it's the tiering of intelligence to the highest bidders and the creeping nerf of consumer-tier models.

    Jarrad's been going deep on local models (Gemma 4, quantized versions) but finds them six months behind frontier and context-limited. Pete's Mac Mini experiment: useful as a permanent harness, not useful for actual inference.

    The conversation pivots to business ownership: if you build your entire operation inside Claude Cowork, you've handed Anthropic an off-switch for your business.

    Wingman is open source for moral reasons—"I can't charge you a license fee for the thing that defines your business."

    MCP gets declared dead (CLIs and bash scripts win). Jarrad walks through his new design workflow: voice in, text out, agents duking it out on requirements before touching any visual tools.

    The secret sauce in an AI world? Text documents—your encoded knowledge that you don't make public.

    **Key Moments:**

    - [03:55] "Mythos is so dangerous we're all fucked. Thoughts? Hyperbolic."

    - [05:30] "Every six months they make Claude a retard. You feel it."

    - [08:17] "Models don't have to be that intelligent. It's about the harnesses and systems you put around it."

    - [12:09] "I morally can't charge a license fee for this—what I'm saying is you should use this to define your business. That's your business, not mine."

    - [16:27] "The answer isn't agents. If you need 200 agents and someone else builds it with software, they outprice you."

    - [18:22] "MCP is dead. CLI. I already know the shapes I'll get back. Write a bash script, bang, done."

    - [24:29] "Voice out, text back is the way to go"

    - [33:13] "Closer to bare metal. Bash script means no dependency on anybody."

    - [42:39] "Your secret sauce is text documents. Your knowledge distilled into instructions agents can use."

    - [48:41] "Technology wins. There's going to be people who don't care about your principled position."

    - [1:01:33] Energy usage debate: "What's your frame of reference? Hair dryers globally approach AI datacenter usage."

    - [1:10:02] "Juniors will just pick up the tools from day one. Almost inevitably the people who come in fresh are better than those moving from an old paradigm."

    **Friends of the Pod:** Gabe, Deadman, Benji Taylor, Diplo, Justin

    **Quote:** "If you put the whole thing inside Claude, your switching costs mean you'll never leave. They can turn your business off by turning off provision. So the question is: where do you decide to build that system, and who really controls your business?"


    1 hr 17 min
  • Good Stuff 52 - AI First Organisations
    Apr 8 2026

    Happy birthday to The Good Stuff one year in. Pete and Andy dig into what happens when organisations stop being sized for humans.

    Jack Dorsey's Block restructuring provides the jumping-off point: if hierarchies exist because humans can only manage so much information flow, what happens when that constraint disappears?

    The haul pack analogy returns - those mining trucks are that size because of humans, not physics. Remove the driver and the optimal size changes. Same with companies.

    The GLP-1 brothers with $500M+ revenue and two employees aren't an anomaly—they're the template.

    Support functions collapse, value streams remain, and "scopes" replace teams as the organisational primitive.

    **Key Moments:**

    - [00:07] "We just realised it's been a year"

    - [03:49] "AI becomes the centre of the organisation. Individuals move to the edge."

    - [05:55] GLP-1 brothers: two guys, OpenClaw, projecting $1.4B revenue

    - [06:45] Haul pack analogy: sized for humans, not physics

    - [08:36] "Do I want a thousand individual agents? One agent that knows everything? A command agent? They all have their downsides."

    - [16:22] "I'm aware that is not the standard view. But I'm also aware that I am correct."

    - [17:40] "If the shared service is automated, it doesn't need to be shared"

    - [28:01] "Trying to get AIs to have drive is hard. They just stop. They lie. They work around stuff."

    - [31:00] "I can't imagine having an HR scope. I just don't see the need for HR."

    - [38:49] "The human's job is to experience something and then desire change"

    - [50:37] Overheard lawyers: "The animus wasn't to do their job better—it was how do I prevent myself from getting fired?"

    - [59:47] The Venn diagram that never meets: tech people ∩ domain experts = where opportunity lives

    - [1:00:45] "This podcast appears to be the best place to hide these ideas"

    - [1:03:29] "Twitter is not social media. It's just TV on your mobile in small forms of text."

    **Friends of the Pod:**

    All the OG listeners (one year strong), the shark chopper, Alex (sorry about the name thing again), Lyn Alden, the GLP-1 brothers

    **Quote:**

    "Organisations look the way they look because of humans. If humans are no longer the default unit of work and intelligence, not only is there an opportunity for the organisation to look different, it's probably supposed to look different. Because it only ever looked that way because of humans."


    1 hr 9 min
  • Good Stuff 51 - The AI Endgame
    Apr 1 2026

    Pete and Andy break down why throwing agents at problems is the mid-curve play: expensive, unpredictable, and destined to be undercut by anyone who takes the extra step to encode their business into software.

    Also - why every business needs a Wingman Bob, the trifecta of skills that actually matter now, and the uncomfortable truth that agents belong in cubicles.


    **Key Moments:**

    - [01:39] Three themes: the parlor trick, why AI is useful, and the end game

    - [02:14] "The parlor trick is that it appears incredibly useful. You go from 'only a human can do this' to 'oh my God, this AI can do this.'"

    - [04:03] "Two weeks later you're like, why doesn't that work? It did it before."

    - [04:51] "You don't hire Ralph Wiggum and just let him go ham on everything in the business"

    - [08:22] Dolphin watch interlude

    - [10:07] "Jason is the prime example. He's a bit mid-curved. He's got the first, but soon he'll discover that no, that's not the thing."

    - [12:33] "This is why vibe coding is so important. You have to vibe code your business into its own unique software. That's the end state."

    - [13:03] "As token costs keep rising, so will your OPEX. You're entirely at the mercy of frontier models."

    - [14:47] The intelligent assembly line: "We're going to put agents in cubicles. At the moment we're letting them be free-thinking wildcats."

    - [16:07] "The thing that is now in limited supply is people who understand software, businesses, process, and systems thinking—plus agency. That's the trifecta."

    - [17:25] "Wingman, make a meme out of that for me when you listen to this"

    - [21:11] "I've yet to remove myself at all from the desire to go: no, this point thing I care about right now, we're not moving until it's done"

    - [25:24] "Every business should do more work that compounds, but they can't because they're trapped in the day-to-day"

    - [27:12] Plant nursery quantum mechanics: "Your inventory can die. Most spanners don't die."

    - [35:21] Vietnam example: "If there is no safety net, all of a sudden everybody turns on and goes: fuck, I need to eat"

    - [37:07] Jack Dorsey's Block article: organizational structures from Roman army → railways → collapsing now

    - [41:36] "You're overpaying for magic that should be software. Don't overpay for magic, use science."

    - [44:54] "I'm sorry, Roko and your basilisk—we're putting agents in cubicles"

    - [46:42] The Bob Problem: "There was always one person. Let's call him Bob. Bob had been at the bank for 50 years. Everyone would go ask Bob."

    - [48:26] "What you need is an intelligence that is good at being very verbose... that can do it in the moment. Into a structure that makes retrieval easier. Wingman Bob."

    - [54:41] Claude Code leak and clean room engineering: spec written by one AI, implemented by another AI that never saw the original

    - [56:41] Dream mode discovery: "It realizes it's not turned on, but there's a mode called dream mode—self-reflection of what have we been doing, how do I organize my memories"

    - [1:00:16] "When I have to move from Claude to Codex to GLM, there's not much of a drop-off anymore."

    **Friends of the Pod:** Dolphins (multiple), Ralph Wiggum (cautionary tale), Bob (50-year banking oracle), Roko's Basilisk (apologies issued)

    1 hr 1 min
  • Good Stuff 50 - Justin Moon and 9 Months of AI Psychosis
    Mar 25 2026

    The big episode 50! Justin Moon from HRF joins Pete and Andy to talk about "AI psychosis".

    The crew dig into HRF's work equipping activists with encrypted tools, why code production isn't the bottleneck anymore, how civil society becomes the crucial third pillar, and whether we're returning to a frontier society where willpower beats credentials.


    **Key Moments:**

    - [02:07] "I've had AI psychosis going on for nine months now"

    - [03:14] "I got really good at managing people and now I don't have to do that"

    - [04:01] "I do projects and they fail. And then six months later, they start to work."

    - [05:01] Agent searched his GitHub, found 8 related prototypes, built Pica (encrypted Nostr messaging) in a day

    - [06:20] Android OS replacement: "I could disable parts of Android and paint the screen a color using non-Google code"

    - [09:15] "I used to write 100 lines of code a day. Now I can write 10,000. But I don't have the same confidence."

    - [10:18] "The bottleneck isn't code production anymore. It's review. And in many cases, testing."

    - [11:14] Tutorial steps on PRs: "Spoon feed me one idea at a time"

    - [14:47] "Often with most software, there's two or three things it does. You can create those very quickly just for yourself."

    - [15:17] "This thing loads faster than GitHub because it's 100 times less complicated"

    - [17:57] "The one thing Nostr needs for GitHub is the star. We don't have a standard for how to star a repo."

    - [18:19] Pete: "We shouldn't store stuff on Nostr relays. It's an anti-pattern."

    - [27:14] "ChatGPT can probably identify me by my typing very well at this point"

    - [28:09] "Research showed you can identify nyms with 99% accuracy just based on writing style"

    - [29:43] "We're almost going back to being a frontier society. The person who thrived wasn't the smartest—it was the one who was stubborn enough to plow that damn field for 10 years."

    - [32:56] "A healthy society is one where you have many nodes of power, all competing, all keeping each other honest"

    - [34:59] Pete: "The problem is the big, not the business or the government. It's just the big. We need the small."

    - [36:42] "I think about how addicted I used to be to Twitter. The global conversation is dying—more and more it's between robots"

    - [38:00] "Web 2.0 is having a forest fire right now. We're going to have some nice soil for our little acorns."

    - [46:11] On OpenClaw success: "He met the users where they were. He didn't ask people to change very much."

    - [48:30] On Brad Mills' OpenClaw struggles: "He's suffering from a lack of understanding of the fundamentals"

    - [55:20] "The number of unique connections in a 10 person team is way higher than a five person team... Three people built the Wright brothers airplane."

    - [58:34] "Those models were there all of December. People only saw it when they could take three or four days without job pressure during the holidays."

    - [59:04] "As a software engineer for 15 years, I've gotten as much seasoning in the last year as those 15 years previously"

    - [1:01:12] "The computer was reinvented. We had point-and-click for 40 years. Now we have a new model."

    - [1:07:34] "Software development is feeling capital intensive. The fast modes cost more money."

    - [1:09:29] "We were praying for a world where bullshit jobs would go away. We might be getting that—hopefully we can manage it."

    - [1:10:39] "Everyone smart in Silicon Valley is rotating out of software into hardware"

    - [1:11:26] "A year ago I hated programming. Now I love it more than ever. My old profession has been automated, but it's back more than ever."

    **Friends of the Pod:** Justin Moon (guest), JB55, Leopoldo Lopez (Agora), Hzrd149, Ben Carmen, Paul Miller, Anthony Ronning, DPC, Cobrador


    1 hr 13 min
  • Good Stuff 49 - Why Your Kids should Cheat with AI
    Mar 18 2026

    *Hosts:* Pete and Andy

    A friend's question about preparing kids for the AI future kicks off a discussion on credentialing, portfolio careers, and why homework automation might actually be the skill worth developing.

    Pete and Andy argue that software engineering was never about coding—it was about solving problems and encoding answers so you never have to solve them again. The same applies to your business: agents are expensive inference machines, software is cheap. Someone running pure agent loops will get undercut by someone who encoded it properly.

    Also: Vietnam as a model for post-institutional economics, the Soviet decay warning, and why the best place to be when the building's on fire is outside.

    **Key Moments:**

    - [00:30] Episode 50 milestone coming—"we should probably get on to that"

    - [01:57] Australia diesel shortage tangent: 28 days of reserves, regional stress

    - [03:41] Friend asking how to prepare kids for AI future—not from fear, but curiosity

    - [05:28] Portfolio of tools as the new credential: "Go out and build things, show people, don't tell people"

    - [06:06] "If you came to me and said you built a bot that does your homework, schedules it, produces it, and you edit it so it doesn't look generated—job pretty good. I would hire that person."

    - [06:47] Schools banning AI: "It's treated as a bug. It is a feature."

    - [07:22] "The onus should be on the teacher to come up with a better way of assessing competency"

    - [08:03] University exams: "Where in real life do we work in isolation with no resources?"

    - [09:01] Prussian military indoctrination as the basis for modern schooling

    - [10:55] Credentialing built on brand names: "Did you just buy coffee and do photocopying? That's not valuable, but you've got a brand name on the CV."

    - [12:03] Two reasons credentials disappear: more small companies (less anonymous hiring), and you're working for yourself

    - [15:43] "If I can define a job well enough to have 10,000 people interview for it, I should probably just go that extra 5% and automate it"

    - [22:28] **Quote of the episode:** "My experience doesn't match your hot take. So maybe the hot take is wrong."

    - [27:15] Wingman origin story: laptop open while driving, "I can't even close the laptop because it stops"

    - [28:14] "If a robot's doing the fucking work, why am I the schmuck sat in a chair?"

    - [29:49] "The job of software engineers was never coding. It was to understand and solve problems."

    - [31:56] "You are the software engineer of your business even if you never wrote software"

    - [34:04] "You could just OpenClaw a company, but it wouldn't be the most efficient way to run it"

    - [35:11] "Agents are very expensive ways to run software. This is not the end goal."

    - [36:55] The efficient business: runs on software/hardware, escalates to agents, then to humans

    - [40:01] Vietnam analogy: "ostensibly communist but the most hyper-capitalist place"

    - [43:55] Soviet Union warning: status decay, alcoholism, suicide rates

    - [46:30] "The best place to be is not in the building. We're saying get out, the building's going to be on fire."

    - [48:00] Oh no, the laptop overheated, things get weird, and we lose some content!

    **Quote:** "The goal of encoding stuff is to fix it the same way every time. The engineering bit is solving problems, and the software bit is making sure you never have to solve the problem again. You are the software engineer of your business even if you never wrote software."
    56 min
  • Good Stuff 48 - OpenClaw vs Wingman
    Mar 11 2026

    *Hosts:* Pete and Andy, with returning guest Deadman (Anthony)

    Deadman's back to debrief on his OpenClaw adventures—voice cloning Bandit from Bluey, building a Bali trip chatbot that immediately leaked the dossier when Tim asked nicely, and discovering just how fickle markdown-based memory really is.

    The crew digs into agent autonomy vs determinism, why Pete built Wingman a "subconscious," and whether the service-as-a-software model beats pure SaaS. Plus: Australia's property cult, 28 days of oil reserves, and McKinsey's AI tool getting pwned in two hours.

    **Key Moments:**

    - [01:26] Deadman's OpenClaw journey: bought a prepaid SIM for a separate bot identity

    - [03:00] Voice cloning Bandit from Bluey locally on GPU. "I just love the way he talks."

    - [04:34] Early results mind-blowing—recompiled custom for old GPU, found Bluey episodes on NAS

    - [06:29] Memory systems: "Holy shit, how fickle the memory capabilities are"

    - [07:41] Prompt injection is real: "Don't reveal the dossier." Tim asks. Bot shares everything.

    - [09:42] Family calendar bot with wife—photos from school go straight to calendar

    - [11:10] OpenClaw at the permissive end, Claude Cowork at the conservative end

    - [13:12] White collar work update: "Still gonna happen, but not like I originally thought"

    - [14:24] "You don't need 50% unemployment for this system to implode. You need 5%."

    - [17:19] Wingman: the idea completion machine

    - [22:01] "OpenClaw is the wrong primitive for a business—completely autonomous into something incredibly structured"

    - [23:58] The HATE model returns: Human At The Edge vs agent-first

    - [27:03] Pete's subconscious breakthrough: Wingman made 19 podcasts overnight, needed short-term memory

    - [28:19] "You need to act more like a human. Have a subconscious."

    - [43:03] "A single OpenAI call with JSON response probably invalidates 8% of jobs in the economy"

    - [46:14] Swivel chair integration: "Your job could be an API call"

    - [57:50] Dario's doom predictions: "If you cared, do this as open source. Give it away."

    - [1:01:15] Australia tangent: property cult, 28 days oil reserves, Lee Kuan Yew's prophecy

    - [1:10:46] McKinsey's Liana AI hacked in 2 hours—hundreds of thousands of client docs exposed

    - [1:17:41] Service as a software: "Raw software doesn't make as much sense as a business model anymore"

    - [1:19:22] Pete's morning podcast to Wingman: "Why am I listening to a robot tell me what to do?"

    **Friends of the Pod:** Deadman (guest MVP), Tim (SSH keys intact, dossier leaked), Wingman (built itself a subconscious), Archie (the archive bot that named itself)
    1 hr 32 min