Along The Edge Podcast: Breaking, Defending, and Understanding Agentic AI

By: Andrius Useckas

About this audio content

Along The Edge is a podcast about life on the frontier of AI security, where large language models turn into agents, tools get wired into everything, and the old web-app threat models stop being enough. Hosted by Andrius Useckas (Co-founder & CTO of ZioSec), Along The Edge dives deep into agentic AI security: jailbreaks, prompt injection, data leaks, MCP/tooling risks, least privilege for agents, and what “don’t trust, verify” really means in an AI-native stack. Each episode features hands-on practitioners (security architects, red teamers, researchers, and builders) who are actively breaking and defending real systems in production.

If you’re building, deploying, or testing AI agents (SDR agents, SOC assistants, coding copilots, internal HR or payroll agents, etc.), this show gives you concrete attack paths, defensive patterns, and hard-earned lessons you won’t get from marketing decks and “AI safety” platitudes.

Along The Edge is for:
• Security engineers and architects responsible for AI/agentic systems
• Red teams, pentesters, and researchers exploring AI-native attack surfaces
• Engineering leaders who don’t want to bolt security on after the breach
• Anyone who suspects “the model will handle it” is not a real security strategy

© 2026 Andrius Useckas
    Episodes
    • Along The Edge – Episode 1: Agentic AI Security, Jailbreaks, and Why You Shouldn’t Trust Your Agents
      Jan 13 2026

      Welcome to Along The Edge, a podcast about AI security and agentic AI.

      In Episode 1, Andrius Useckas (Co-founder & CTO, ZioSec) sits down with Alex Gatz (Staff Security Architect, ZioSec) to break down the emerging world of agentic AI security: jailbreaks, prompt injection, SDR and SOC agents, data leaks, least privilege, and why “don’t worry, the model will filter it” is a dangerous assumption.

      They also walk through V-HACK, an intentionally vulnerable agentic lab project that lets security researchers and pentesters safely experiment with agent exploits, tool calling, jailbreaks, and attack paths—helping define what “pen tester 2.0” looks like.

      Chapters / In this episode:

      00:00 – Intro: who we are & why a new AI security podcast
      02:00 – What is agentic AI vs a plain LLM?
      03:10 – SDR agents, SOC workflows & new “Layer 8 / Layer 9” problems
      09:00 – Prompt injection 101: direct vs indirect attacks & context windows
      12:00 – Chatbots vs agents and why agent risk is higher
      15:00 – Foundation model trust & the Anthropic horror-story jailbreak demo
      19:30 – Why jailbreaks are (currently) an unsolved problem
      22:30 – Social engineering parallels & detecting AI / agentic attacks
      27:00 – V-HACK: intentionally vulnerable agent lab for pentesters
      32:00 – Securing agents: WAFs, runtime protection, identity & MCP proxies
      36:00 – Scanners, evals vs real pentesting & terrifying token bills
      39:00 – Least privilege, DLP & identity for SDR and payroll-style agents
      44:00 – “Don’t trust, verify”: threat modeling & testing agents early
      46:00 – Future of AI security: consolidation, CNAPs & SOC-as-an-agent
      49:00 – Magic wand: fixing context & memory in agents
      50:30 – Closing thoughts & what’s next

      Links mentioned:

      ZioSec – www.ziosec.com
      V-HACK (GitHub) – https://github.com/ZioSec/VHACK

      About the guests:

      Andrius Useckas has 25+ years in security and now focuses on agentic AI security, offensive testing, and red teaming for enterprise AI deployments.

      Alex Gatz is a Staff Security Architect at ZioSec. He has a background in emergency medicine and construction, and transitioned into AI in 2014, working on NLP, deep learning, anomaly detection, and now AI security.

      If you’re building or testing agents in 2026, this episode gives you a practical look at how real attack paths work, what breaks in production, and how to defend before attackers get there first.

      51 min