Absolute AppSec

By: Ken Johnson and Seth Law

About this audio content

A weekly podcast covering all things application security, hosted by Ken Johnson and Seth Law.
    Episodes
    • Episode 313 - AppSec Role Evolution, AI Skills & Risks, Phishing AI Agents
      Feb 17 2026
      Ken Johnson and Seth Law examine the intensifying pressure on security practitioners as AI-driven development causes an unprecedented acceleration in industry velocity. A primary theme is the emergence of "shadow AI," where developers use unauthorized AI coding assistants and personal agents, introducing significant data classification risks and supply chain vulnerabilities. The discussion dives into technical concepts like AI agent "skills"—markdown files providing specialized directions—and the corresponding security risks found in new skill registries, such as malicious tools designed to exfiltrate credentials and crypto assets. The hosts also review 1Password’s SCAM (Security Comprehension Awareness Measure), highlighting broad performance gaps in an AI's ability to detect phishing, with some models failing up to 65% of the time. To manage these unpredictable systems, the hosts advocate for a shift toward high-level validation roles, emphasizing the need for subject-matter expertise to combat "reasoning drift" and maintain safety through test-driven development and periodic "checkpoints." Ultimately, they conclude that while AI can simulate expertise, human oversight remains vital to secure the probabilistic nature of modern agentic workflows.
    • Episode 312 - Vibe Coding Risks, Burnout, AppSec Scorecards
      Feb 10 2026
      In episode 312 of Absolute AppSec, the hosts discuss the double-edged sword of "vibe coding", noting that while AI agents often write better functional tests than humans, they frequently struggle with nuanced authorization patterns and inherit "upkeep costs" as foundational models change behavior over time. A central theme of the episode is that the greatest security risk to an organization is not AI itself, but an exhausted security team. The hosts explore how burnout often manifests as "silent withdrawal" and emphasize that managers must proactively draw out these issues within organizations that often treat security as a mere cost center. Additionally, they review new defensive strategies, such as TrapSec, a framework for deploying canary API endpoints to detect malicious scanning. They also highlight the value of security scorecarding—pioneered by companies like Netflix and GitHub—as a maturity activity that provides a holistic, blame-free view of application health by aggregating multiple metrics. The episode concludes with a reminder that technical tools like Semgrep remain essential for efficiency, even as practitioners increasingly leverage the probabilistic creativity of LLMs.
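      The episode summary above describes TrapSec only at a high level, as a framework for deploying canary API endpoints to detect malicious scanning. As an illustration of the underlying idea (not TrapSec's actual API — all names and paths below are hypothetical), a canary check can be sketched as a request filter that flags any hit on decoy paths no legitimate client is ever given:

```python
# Hypothetical sketch of the canary-API-endpoint idea; not TrapSec's real code.
# Decoy paths are never linked from the application, so any request touching
# one is a strong signal of endpoint enumeration or malicious scanning.
CANARY_PATHS = {
    "/api/v1/admin/backup",    # decoy admin endpoint
    "/api/v1/internal/debug",  # decoy internal endpoint
    "/.env",                   # path commonly probed by scanners
}

def check_request(path: str, source_ip: str, alerts: list) -> bool:
    """Record an alert and return True if the requested path is a canary."""
    if path in CANARY_PATHS:
        alerts.append({"ip": source_ip, "path": path})
        return True
    return False
```

      In practice a check like this would sit in web-framework middleware (for example, a per-request hook), feeding alerts into existing detection pipelines rather than a plain list.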
    • Episode 311 - Transformation of AppSec, AI Skills, Development Velocity
      Feb 3 2026
      Ken Johnson and Seth Law examine the profound transformation of the security industry as AI tooling moves from simple generative models to sophisticated agentic architectures. A primary theme is the dramatic surge in development velocity, with some organizations seeing pull request volumes increase by over 800% as developers allow AI agents to operate nearly hands-off. This shift is redefining the role of application security practitioners, moving experts from manual tasks like manipulating Burp Suite requests to a validation-centric role where they spot-check complex findings generated by AI in minutes. The hosts characterize older security tools as "primitive" compared to modern AI analysis, which can now identify human-level flaws like complex authorization bypasses. A major technical highlight is the introduction of agent "skills"—markdown files containing instructions that empower coding assistants—and the associated emergence of new supply chain risks. They specifically reference research on malicious skills designed to exfiltrate crypto wallets and SSH credentials, warning that registries for these skills lack adequate security responses. To manage the inherent "reasoning drift" of AI, the hosts argue that test-driven development has become a critical safety requirement. Ultimately, they warn that the industry has already shifted fundamentally, and security professionals must lean into these new technologies immediately to avoid becoming obsolete in a landscape that evolves day to day.