Agents At Work
About this audio content
Imagine asking your assistant to “cut costs by 10%,” then learning it quietly hired five bots, switched your insurance, and exposed you to a lawsuit.
That’s the new reality of agentic AI: software that doesn’t just talk, it acts—spends, negotiates, signs, and delegates at machine speed. We take you inside this shift and show how to keep control when intelligent delegation gets real.
TL;DR / At a Glance
- principal–agent misalignment and span of control
- authority gradients, sycophancy, and zones of indifference
- contract-first task decomposition and verifiable outcomes
- open agent marketplaces, negotiation, and Pareto trade-offs
- verifiable credentials, process monitoring, and privacy
- zero-knowledge proofs and homomorphic encryption
- resilience, failover, escrow, and recursive liability
- threat models and the confused deputy problem
- moral crumple zones, meaningful oversight, and de-skilling
- curriculum-aware routing and socially intelligent agents
We start with the human blueprint that still applies: the principal–agent problem, misaligned incentives, span-of-control limits, and authority gradients that make smaller models defer to larger ones. From there, we get practical. Contract-first task decomposition turns fuzzy goals into verifiable promises, enabling open marketplaces where agents bid on work with capability proofs, not just price tags. The delegator must juggle speed, cost, quality, privacy, and safety, seeking Pareto-efficient choices while escalating only when red lines are at stake. To make this safe, we trade flimsy star ratings for verifiable credentials, and we show why outcome checks aren’t enough without process-level monitoring.
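To make the marketplace idea concrete, here is a minimal sketch of contract-first delegation in Python. Everything here is illustrative and not from the episode: the `Contract`, `Bid`, and `pareto_front` names are invented, the "capability proof" is reduced to a boolean flag, and quality is a single score rather than a real benchmark.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    """A verifiable promise: a task spec plus explicit red lines."""
    task: str
    max_cost: float      # red line: never pay more than this
    min_quality: float   # red line: never accept work below this

@dataclass(frozen=True)
class Bid:
    agent: str
    cost: float
    quality: float               # claimed score, backed by a capability proof
    has_capability_proof: bool   # stand-in for a verifiable credential

def admissible(bid: Bid, contract: Contract) -> bool:
    """Only proof-backed bids that respect the contract's red lines count."""
    return (bid.has_capability_proof
            and bid.cost <= contract.max_cost
            and bid.quality >= contract.min_quality)

def pareto_front(bids: list[Bid]) -> list[Bid]:
    """Keep bids no other bid strictly dominates on (cost, quality)."""
    def dominated(b: Bid) -> bool:
        return any((o.cost < b.cost and o.quality >= b.quality)
                   or (o.cost <= b.cost and o.quality > b.quality)
                   for o in bids)
    return [b for b in bids if not dominated(b)]

# Demo: a3 is cheapest but offers no proof, so it is filtered out;
# a1 and a2 survive as a genuine price/quality trade-off.
contract = Contract("summarise quarterly filings", max_cost=5.0, min_quality=0.8)
bids = [
    Bid("a1", cost=4.0, quality=0.90, has_capability_proof=True),
    Bid("a2", cost=2.0, quality=0.85, has_capability_proof=True),
    Bid("a3", cost=1.0, quality=0.95, has_capability_proof=False),
]
candidates = [b for b in bids if admissible(b, contract)]
front = pareto_front(candidates)
```

The delegator then picks from `front` according to its own weighting of cost versus quality, escalating to a human only when no admissible bid exists.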
Trust and privacy take centre stage with zero-knowledge proofs and homomorphic encryption—tools that let agents prove correct work without ever seeing or leaking your secrets. Resilience gets engineered in: smart contracts that define kill switches, instant failover, and escrow that slashes bad actors. Recursive liability pushes accountability up the chain so no one can hide behind a subagent three layers down. We also map today’s threat landscape—from model extraction to the confused deputy problem—and outline practical defences built on least privilege and robust input hygiene.
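The escrow-with-kill-switch pattern can be sketched as a tiny state machine. This is a toy model, not a real smart contract: the class and names are invented for illustration, and on-chain details (signatures, timeouts, dispute resolution) are omitted.

```python
from enum import Enum, auto

class State(Enum):
    FUNDED = auto()    # principal has locked funds; agent has posted a stake
    RELEASED = auto()  # verified work: agent gets paid
    SLASHED = auto()   # kill switch fired: stake returns to the principal

class Escrow:
    """Toy escrow: funds are released on verified completion,
    or slashed back to the principal when the kill switch fires."""

    def __init__(self, principal: str, agent: str, amount: float):
        self.principal = principal
        self.agent = agent
        self.amount = amount
        self.state = State.FUNDED

    def release(self, work_verified: bool) -> str:
        """Return who receives the funds after a completion check."""
        if self.state is State.FUNDED and work_verified:
            self.state = State.RELEASED
            return self.agent
        return self.principal

    def kill_switch(self) -> str:
        """Principal aborts mid-task: slash the agent's stake."""
        if self.state is State.FUNDED:
            self.state = State.SLASHED
        return self.principal
```

Because the agent forfeits its stake on a slash, delegating to it is only rational for the agent if it expects to pass verification, which is the economic teeth behind "escrow that slashes bad actors".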
The ethical frontier matters just as much. We unpack moral crumple zones that turn humans into liability shields, and we argue for meaningful oversight with time and authority to intervene. To prevent de-skilling, we explore curriculum-aware routing that intentionally sends tasks to people to preserve judgement.
The destination is clear: an ecosystem of specialised agents governed by provable contracts, strong credentials, cryptographic trust, and responsibility that actually sticks. Subscribe, share with a colleague who runs ops or risk, and tell us: where should we draw the first guardrails?
Source: Intelligent AI Delegation
Support the show
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK