Episodes

  • What boards accept
    May 16 2026
    Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-05-16
    12 min
  • Choosing is the work
    May 9 2026
    Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-05-09
    14 min
  • The bill and the harness
    May 2 2026
    David builds the case that flat-rate AI pricing is dying and that the buyer's question is no longer 'how much will this cost' but 'where does the spending compound'. He opens at a Las Vegas buffet that closed on 31st May, then moves to the supplier-side news: three of the four biggest AI vendors switched pricing in the last few weeks (Anthropic stripped bundled tokens out of Enterprise seats in mid-April, OpenAI took Codex pay-as-you-go a fortnight earlier, GitHub moves every Copilot plan to usage-based billing on 1st June, and an Anthropic manager admitted Pro and Max tiers have been outgrown). He brings in two friends' worried voice notes from the buyer side: a friend in Tokyo asking what happens when bills go up five or ten times, and a partner at a professional services firm naming the outsourcing trap. He explains the supplier maths (unit prices falling roughly tenfold a year, his $200-a-month Max plan delivering $500 a day of equivalent API use, unsustainable) and the buyer maths (Jevons Paradox: cheaper energy made coal use rise, not fall). The radiologist is the modern Jevons: Hinton's 2016 'stop training radiologists' was right about the models and wrong about the radiologists. Ten years on the US has six thousand more of them and pay is up roughly seventy per cent. Punchline: the bill rises either way; the question is whether the spending compounds in the model (a utility cost) or in the harness, the layer of instructions, context and workflows that wraps the model (an asset nobody else can buy). Intercom doubled engineering velocity in nine months on exactly that bet.
    What happened this week:
    * AI adoption stalls one layer below the executive sponsor, at the line manager: Gallup data (Q4 2025) finds AI use correlates more strongly with managerial endorsement than with tool access. In firm...
    * The frontier-model leaderboard is now refreshing in weeks, not quarters: The Epoch Capabilities Index now shows GPT-5.5 Pro and Gemini 3.1 Pro above 155, up from GPT-4o's 128 in mid-2024. Seventeen...
    * Six VC firms, one investment thesis: Linas Beliunas read the published 2026 investment theses of six of the biggest venture firms side by side and found the same handful of AI bets in all of them: ...
    What to try:
    * Pick one tool, get fluent, then refine your harness: A leader David spoke to had spent weeks running the same task through ChatGPT and Claude side by side, then asking each to review the other. Gen...
    * Force yourself to change something on every AI output before you ship it: Came up at a senior training session this week, as the room debated when the 'check, edit, own' model breaks down. Increasi...
    * Skip the slides, build the page: In a senior strategy session this week, the most-praised artefact in the room was not a deck. It was a web page someone had built to walk teams through their thinki...
    Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-05-02
    14 min
  • Rise of the auditors
    Apr 25 2026
    AI-native teams need three roles: Director, Builder, Auditor. Execution is cheap, verification is expensive. Most organisations have zero Auditors and are shipping nothing because nobody is named to check.
    What happened this week:
    * <10% of organisations scale agents beyond pilots (McKinsey)
    * GitHub paused Copilot signups / Uber blew its 2026 AI budget / Goldman inference costs approaching headcount parity
    * 29% of employees sabotaging AI initiatives (Writer survey)
    What to try:
    * Ask 'is this the simplest version?' (Cantrill laziness)
    * Audit cold: different model, fresh context
    * Save one reusable AI workflow (Chrome Skills)
    Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-04-25
    13 min
  • The proxy break
    Apr 18 2026
    AI broke the old proxy (good writing = good thinking), but the new proxy ('sounds like AI' = no thinking) is equally unreliable. A friend's challenge prompted a deeper question: evaluate thinking, not wording. Two tests proposed: Quality (does the argument hold under pressure?) and Ownership (CEO principle). Fine-tuned models are now preferred over human writing 62% of the time.
    What happened this week:
    * AI cover letters killed the signal: Freelancer.com study shows Goodhart's Law in action; better letters no longer predict better hires
    * Snap cut 1,000 jobs (16% of workforce), AI writes 65% of new code, $500M annualised savings: the substitution model has arrived
    * Passive AI delegation erodes confidence, pushing back strengthens it: 2,000-person study. Gartner: of 5.4hr saved, only 0.6hr reduces working time
    What to try:
    * Ask what keeps people awake at night, not how AI can help: surfaces real problems with AI solutions
    * Let the model research you before writing custom instructions: web search + self-portrait generates better instructions than manual writing
    * Find where your AI value sits: AI Value Map interactive tool, five questions on value allocation then five on capture
    Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-04-18
    11 min
  • What a day can do
    Apr 11 2026
    Team-level AI infrastructure can precede and contain individual training. The cost of encoding how a team works into shared reusable tools just dropped from hours to minutes with Gen 2 tools (Claude Code + transcripts). A small jewellery company built thirteen shared skills in a day. Step two doesn't just follow step one; it can contain it.
    What happened this week:
    * Claude Code now writes 4% of all GitHub commits, doubled in six weeks; Anthropic run rate $30B (up from $9B at end of 2025), Claude Code alone $2.5B; projected 20% of commits by December
    * Goldman Sachs quantified AI's net labour market drag: -25k jobs substituted + 9k augmented = 16k net monthly loss; the entry-level-to-experienced wage gap widened 3.3pp. But CFO surveys put genuine AI ...
    * Meta's internal tokenmaxxing leaderboard: 85k+ employees, 60T tokens in one month, Zuckerberg not in the top 250. Rewards orchestration over outcomes. Incentivise use, yes; incentivise maxxing, no
    What to try:
    * Start with critique, not creation. The brand voice evaluator was diagnosis-only; teams fear proofreaders less than replacements. Nobody fights the spellchecker
    * Ask what keeps people up at night, not what they want AI to do. The first question surveys existing habits; the second surfaces unmet needs. Almost nothing appears on both lists
    * Show your team how others use AI. 515-startup field experiment: case studies alone led to 44% more AI usage, 1.9x revenue, 39% less capital needed ('the mapping problem')
    Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-04-11
    12 min
  • What is your organisation actually for?
    Apr 4 2026
    Organisations say they're production systems but behave like human systems. The revealed preference is togetherness, not efficiency. AI adoption reverts because training optimises for individual productivity while the real binding force is human collaboration.
    What happened this week:
    * Dorsey wants to replace org charts with AI world models (Block restructuring)
    * Mollick says de-weirding AI is a mistake; hidden AI use is the harder problem
    * Zapier raised the AI fluency hiring bar: slope not snapshot, accountability added
    What to try:
    * Have AI interview you before building anything
    * Ask AI what looks weird before analysing data
    * Let AI be the app: build skills, not standalone software
    Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-04-04
    11 min
  • The system and the surrender
    Mar 28 2026
    A Wharton study of 1,372 people identified 'cognitive surrender': when AI produces an answer, people stop questioning it while recoding it as their own judgment. Accuracy drops from 45.8% alone to 31.5% with incorrect AI. The better the system gets, the harder it becomes to stay vigilant inside it.
    What happened this week:
    * Three CEOs (Coca-Cola's Quincey, Walmart's McMillon, Adobe's Narayen) stepped down in one quarter citing AI transformation pressure; 38 years of tenure in one turnover, the most since 1999
    * Anthropic's 5th Economic Index + HBR 2,500-employee study: experienced users (6+ months) treat AI as a thinking partner, not a productivity shortcut. AI may be skill-biased tech that compounds existing a...
    * Ethan Mollick: companies with zero AI failures aren't being ambitious enough. R&D-style experimental budgets need to reach HR, operations, finance
    What to try:
    * Don't fact-check AI in the same conversation: the model defends its own chain. Start fresh, upload source materials cold for critique
    * Give your AI reviewer a persona with skin in the game: six senior-partner personas converged on the same systematic error a neutral reviewer missed
    * After every good session, turn it into a reusable skill: capture what 'good' looks like the moment you've achieved it, before memory fades
    Read the full edition with all links and sources: https://steadman.ai/newsletters/david/#edition-2026-03-28
    11 min