AI Standards Stack

By: Professor Michael Mainelli (Z/Yen Group) and Adam Leon Smith (AIQI)

About this audio content

Join us for the AI Standards Stack podcast series, hosted by Professor Michael Mainelli (Z/Yen Group) and Adam Leon Smith (AIQI). The series examines the latest developments in AI assurance, alignment, governance, and responsible innovation. Each session features expert guests from around the world who are shaping standards, ethics, regulation, and best practices for trustworthy artificial intelligence.

Politics & Government
Episodes
  • Patrick Sullivan On The Rise Of AI Certification
    Apr 21 2026

    In this episode, hosts Professor Michael Mainelli and Adam Leon Smith welcome Patrick Sullivan for a practical look at AI certification and ISO/IEC 42001. Patrick explains what third-party certifiers actually do and how they provide objective assurance on AI management systems. He addresses misconceptions that have circulated since the standard’s 2023 release, and highlights growing market demand: major firms such as Microsoft, Oracle, and Anthropic have earned certification and are pushing it down their supply chains, finding real value in risk management and investor confidence. The discussion covers regional differences in AI governance, the skills needed for effective audits, implementation challenges for organisations of all sizes, and formalising expectations around agentic AI governance. A grounded, certifier’s-eye view on turning the AI standards stack into real-world assurance.

    42 min
  • Nicholas Beale On The Importance Of Responsible AI
    Apr 2 2026

    In this episode, hosts Michael Mainelli and Adam Leon Smith welcome Nicholas Beale, founder and director at Sciteb, for an insightful look at AI ethics and governance. Nicholas discusses his early internet ethics work and his paper on the Unethical Optimisation Principle, explaining why AI optimisers disproportionately pick unethical strategies by ignoring future downsides. He explores mitigations such as panels of AIs, the risks of relying on a single system, defence challenges, the Investor Consensus on Responsible AI, guardrail issues, and the need for diversity to avoid systemic risks. A mathematically grounded conversation urging balanced systems that preserve human judgment and the common good.

    40 min
  • Dr Piercosma Bisconti On The Social Frontiers Of Generative AI
    Mar 16 2026

    In this episode, hosts Michael Mainelli (London) and Adam Leon Smith welcome Piercosma Bisconti, dialling in from Rome, for a fresh European perspective on the evolving ethics and governance of generative AI. With a background in philosophy, robotics, and global politics, Piercosma shares his surprising shift from academic research to actively shaping EU and international AI standards, including his work with DEXAI – Artificial Ethics.

    The conversation dives into how ChatGPT's 2022 launch changed everything, suddenly bringing AI directly into human social spaces in ways earlier ethical frameworks never fully anticipated. Piercosma explores the rise of more interconnected AI systems and the unexpected new risks that emerge when multiple models interact, collaborate, or even compete in real-world environments. Drawing on philosophy and systems thinking, he reflects on what this means for society, especially how always-agreeable AI might quietly reshape human relationships, emotional resilience, and social skills in the years ahead. Expect thoughtful insights on where standards and governance fit in, the limits of current testing approaches, and why the biggest changes may be more social than technological.

    A fascinating, big-picture discussion that asks: as AI becomes part of everyday social life, how do we keep our humanity intact? Tune in for Piercosma's unique blend of deep thinking and practical standards experience.

    45 min