14. AI and Agentic Systems: Balancing Autonomy with Human Oversight
About this audio content
When AI agents can navigate systems autonomously, where do you draw the line between efficiency and control?
Ed Crook, VP Strategy & Operations at DeepL, reveals how the company shifted from specialised translation to launching autonomous AI agents, and why human-in-the-loop oversight remains non-negotiable even as agentic AI scales across heavily regulated industries.
This conversation explores how DeepL agents work through a secondary browser interface where users can view real-time navigation, pause, raise their hand, and take or relinquish control at any time. Ed explains why the agent asks when unsure, building trust the same way a new colleague would, rather than working alone in a dark room until 5pm. We discuss where users still actively request control (login access, sensitive systems), what 20,000 completed tasks during beta testing revealed about when AI needs intervention, and why agents can flawlessly complete advanced tasks yet fail at very basic ones.
Ed shares how DeepL works with financial services, pharmaceuticals, and legal professionals navigating compliance requirements whilst exploring agentic AI. Over half of legal professionals report AI lets them spend more time on high-judgment strategic tasks, and two-thirds are already exploring agentic systems. He explains why shadow AI shouldn't be vilified but understood as employees seeking productivity.
We discuss how the EU AI Act encourages proportionate responses where high-risk applications carry high responsibility, why having European-built AI success stories matters, and how centrally managed AI tools create governance oversight whilst enabling peer learning across teams. Ed highlights the education gap: access to AI tools has grown faster than training on responsible use, making upskilling, both technical and conceptual, the burning priority for companies navigating AI adoption.
The challenge: build agents that combine autonomy with human judgment, scale AI adoption with responsible governance, and future-proof teams through peer learning rather than just technical training.
AI Ethics Now
Exploring the ethical dilemmas of AI in Higher Education and beyond.
A University of Warwick IATL Podcast
This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'
This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share the discussions we have had to help offer a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence to a wider audience.
Join each fortnight for new critical conversations on AI Ethics with local, national, and international experts.
We will discuss:
- Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
- Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
- The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.
If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.