Welcome to Along The Edge, a podcast about AI security and agentic AI.
In Episode 1, Andrius Useckas (Co-founder & CTO, ZioSec) sits down with Alex Gatz (Staff Security Architect, ZioSec) to break down the emerging world of agentic AI security: jailbreaks, prompt injection, SDR and SOC agents, data leaks, least privilege, and why “don’t worry, the model will filter it” is a dangerous assumption.
They also walk through V-HACK, an intentionally vulnerable agentic lab project that lets security researchers and pentesters safely experiment with agent exploits, tool calling, jailbreaks, and attack paths, helping define what “pentester 2.0” looks like.
Chapters / In this episode:
00:00 – Intro: who we are & why a new AI security podcast
02:00 – What is agentic AI vs a plain LLM?
03:10 – SDR agents, SOC workflows & new “Layer 8 / Layer 9” problems
09:00 – Prompt injection 101: direct vs indirect attacks & context windows
12:00 – Chatbots vs agents and why agent risk is higher
15:00 – Foundation model trust & the Anthropic horror-story jailbreak demo
19:30 – Why jailbreaks are (currently) an unsolved problem
22:30 – Social engineering parallels & detecting AI / agentic attacks
27:00 – V-HACK: intentionally vulnerable agent lab for pentesters
32:00 – Securing agents: WAFs, runtime protection, identity & MCP proxies
36:00 – Scanners, evals vs real pentesting & terrifying token bills
39:00 – Least privilege, DLP & identity for SDR and payroll-style agents
44:00 – “Don’t trust, verify”: threat modeling & testing agents early
46:00 – Future of AI security: consolidation, CNAPPs & SOC-as-an-agent
49:00 – Magic wand: fixing context & memory in agents
50:30 – Closing thoughts & what’s next
Links mentioned:
ZioSec – www.ziosec.com
V-HACK (GitHub) – https://github.com/ZioSec/VHACK
About the speakers:
Andrius Useckas has 25+ years in security and now focuses on agentic AI security, offensive testing, and red teaming for enterprise AI deployments.
Alex Gatz is a Staff Security Architect at ZioSec. He started out in emergency medicine and construction, moved into AI in 2014, and has since worked on NLP, deep learning, anomaly detection, and now AI security.
If you’re building or testing agents in 2026, this episode gives you a practical look at how real attack paths work, what breaks in production, and how to defend before attackers get there.