
AI Security Ops

By: Black Hills Information Security

About this audio content

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity, exploring emerging threats, tools, and trends, while equipping listeners with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

© 2025 Black Hills Information Security

Category: Politics & Government
Episodes
  • Claude Mythos | Episode 49
    Apr 24 2026

    In this episode of BHIS Presents: AI Security Ops, the team breaks down Claude Mythos Preview — Anthropic’s unreleased frontier model that may represent a turning point in AI-powered cybersecurity.

    What started as a controlled research release under Project Glasswing has quickly become one of the most controversial developments in AI security. Mythos isn’t just better at finding vulnerabilities — it’s operating at a scale and depth that challenges long-held assumptions about how quickly software can be broken… and whether it can realistically be fixed.

    From leaked internal documents to real-world exploit generation, this episode explores what happens when vulnerability discovery becomes cheap, fast, and automated — while remediation remains slow, manual, and human-bound.

    The result? A growing asymmetry that could fundamentally reshape the security landscape.

    We dig into:
    • What Claude Mythos Preview is and why it was withheld from the public
    • The leaks that exposed its existence and capabilities
    • How Project Glasswing is positioning AI for defensive use
    • Real-world vulnerability discoveries made by the model
    • The “vulnpocalypse” problem: discovery vs. remediation imbalance
    • Emerging AI behaviors that raise containment concerns
    • How attackers are already leveraging AI for offensive operations
    • The access control dilemma: who gets to use models like this?
    • Why patching — not discovery — is now the primary bottleneck
    • What defenders must do to prepare for AI-accelerated exploitation

    This episode explores a critical shift in cybersecurity: when vulnerability discovery scales faster than human response, the entire defensive model starts to break down.

    📚 Key Concepts & Topics

    AI-Powered Vulnerability Discovery
    • Autonomous exploit generation and chaining
    • Benchmark performance vs. prior models
    • AI-assisted offensive security workflows

    AI Security Risks
    • Discovery vs. remediation asymmetry
    • AI-driven vulnerability scaling
    • Offensive use by nation-states and cybercriminals

    Model Behavior & Safety
    • Emergent autonomy and sandbox escape concerns
    • Evaluation awareness and deceptive behaviors
    • Limits of containment and alignment

    Defensive Strategy & Readiness
    • Patch velocity as the new bottleneck
    • AI-assisted vulnerability management
    • Open-source ecosystem risk exposure

    AI Governance & Industry Response
    • Restricted model releases and access control
    • Regulatory and financial sector concerns
    • The future of AI capability containment

    #AISecurity #CyberSecurity #ArtificialIntelligence #LLMSecurity #BHIS #AIThreats #InfoSec #AIAgents #CyberDefense

    • (00:00) - Intro & Show Overview
    • (01:00) - Sponsors, Hosts, and Episode Setup
    • (01:53) - What Is Claude Mythos Preview?
    • (03:04) - The Leak, Project Glasswing, and Restricted Access
    • (07:53) - Capabilities: Exploits, Benchmarks, and Breakthroughs
    • (09:16) - Real-World Vulnerabilities & “Vulnpocalypse” Concerns
    • (14:47) - Access Control, Threat Actors, and Emerging Risks
    • (21:38) - Defensive Strategy: Patching, AI Tools, and What Comes Next

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Derek Banks - Host
    • Bronwen Aker - Host
    • Brian Fehrman - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    26 min
  • Holocron OpenBrain with Alex Minster | Episode 48
    Apr 22 2026
    In this episode of BHIS Presents: AI Security Ops, the team is joined by Alex Minster to demo his project, the OpenBrain HOLOCRON — a persistent, model-agnostic memory layer designed to solve one of the biggest frustrations in AI workflows.

    Instead of starting from scratch every time you open a new chat, Alex’s approach creates a centralized “brain” that multiple AI models can connect to, allowing context, notes, and intelligence to persist across sessions, tools, and even platforms.

    The result? A flexible system that captures thoughts, ingests threat intel, and generates structured outputs — all without locking you into a single AI provider.

    We dig into:
    • The “cold start” problem in AI and why it breaks real workflows
    • What the OpenBrain HOLOCRON is (and isn’t)
    • How centralized memory changes the way we interact with AI tools
    • The architecture: Supabase, OpenRouter, MCP, and multi-model access
    • Using Discord as a lightweight ingestion pipeline for persistent memory
    • Real-world CTI workflows: capturing intel and generating reports on demand
    • Managing, editing, and superseding memory over time
    • The tradeoffs between context richness and security exposure
    • Multi-model reliability differences (and why they matter)
    • Practical setup: what it takes to build your own system

    This episode highlights a shift in how AI is used operationally: moving from isolated chats to persistent, structured memory systems that can evolve alongside your work.

    📚 Key Concepts & Topics

    Persistent AI Memory
    • Solving the “cold start” problem
    • Centralized context across multiple models
    • Structured vs. raw data ingestion

    AI Architecture & Tooling
    • Supabase as a backend memory store
    • OpenRouter for multi-model access
    • MCP protocol for integrations

    Cyber Threat Intelligence (CTI)
    • Capturing, tagging, and prioritizing intel
    • Generating automated reports and dashboards
    • Context-aware intelligence workflows

    Security & Privacy
    • Need-to-know data design
    • Avoiding overexposure via full integrations (email, docs, etc.)
    • Auditing and removing sensitive data

    Operational Workflows
    • Capturing ideas, notes, and research
    • Multi-project memory segmentation (“multiple brains”)
    • Using AI to accelerate — not replace — analysis

    🔗 HOLOCRON GitHub Guide: https://github.com/belouve/open-brain-holocron
    🔗 Alex Minster: https://www.linkedin.com/in/alexminster/

    #AISecurity #CyberSecurity #AIWorkflows #LLM #ThreatIntel #DevSecOps #BHIS #OpenSource #AIEngineering

    • (00:00) - Intro & Guest Introduction (Alex Minster)
    • (00:55) - What Is the OpenBrain HOLOCRON? (Cold Start Problem)
    • (03:00) - How It Works: Centralized Memory & AI Integration
    • (05:30) - Architecture & Free-Tier Stack (Supabase, OpenRouter, MCP)
    • (07:54) - Demo: Capturing Thoughts via Discord
    • (10:55) - CTI Use Case: Prioritizing & Querying Intelligence
    • (15:03) - Managing Memory: Editing, Deleting & Superseding Data
    • (19:04) - Running Protocols: Automated CTI Reports (Demo)
    • (22:05) - Multi-Brain Concept & Segmentation
    • (25:00) - Real-World Output: Reports, Dashboards & Briefings
    • (31:31) - Multi-Model Differences (Claude vs ChatGPT)
    • (35:55) - Improving the System with Feedback Loops
    • (37:29) - How to Build Your Own OpenBrain
    • (41:26) - Real-World Benefits & Workflow Improvements
    • (45:44) - Security Considerations & Data Exposure Risks
    • (47:20) - Where to Find the Project & Contribute
    • (50:16) - Final Thoughts & Wrap-Up

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Bronwen Aker - Host
    • Alex Minster "Belouve" - Guest
    • Ethan Robish - Guest
    • Brian Fehrman - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.
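    The persistent-memory idea the episode describes can be sketched in miniature. The real project uses Supabase as the memory store and OpenRouter/MCP for model access; the sketch below substitutes Python's built-in sqlite3 so the core idea — capturing notes, superseding them over time, and handing only current context to whatever model you use — is runnable locally. All table, column, and method names here are illustrative, not the project's actual schema.

```python
# Minimal sketch of a persistent, model-agnostic memory layer, in the
# spirit of the OpenBrain HOLOCRON demo. Uses sqlite3 as a stand-in for
# the Supabase backend the real project is built on.
import sqlite3


class MemoryStore:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            "id INTEGER PRIMARY KEY, topic TEXT, note TEXT, "
            "superseded_by INTEGER)"
        )

    def capture(self, topic, note):
        # Ingest a thought/intel item (the demo does this via Discord).
        cur = self.db.execute(
            "INSERT INTO memories (topic, note) VALUES (?, ?)",
            (topic, note))
        self.db.commit()
        return cur.lastrowid

    def supersede(self, old_id, topic, note):
        # Keep history: mark the old entry instead of deleting it.
        new_id = self.capture(topic, note)
        self.db.execute(
            "UPDATE memories SET superseded_by = ? WHERE id = ?",
            (new_id, old_id))
        self.db.commit()
        return new_id

    def context(self, topic):
        # Only current (non-superseded) notes get handed to a model,
        # regardless of which provider (Claude, ChatGPT, ...) it is.
        rows = self.db.execute(
            "SELECT note FROM memories WHERE topic = ? "
            "AND superseded_by IS NULL", (topic,)).fetchall()
        return [r[0] for r in rows]
```

    Topic-scoped `context()` calls also hint at the "multiple brains" segmentation discussed in the episode: each project or client queries only its own slice of memory.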
    51 min
  • LiteLLM Supply Chain Compromise | Episode 47
    Apr 13 2026

    In this episode of BHIS Presents: AI Security Ops, the team breaks down the LiteLLM supply chain compromise, a real-world attack that shows how AI systems are being breached through the same old software supply chain weaknesses.

    What initially looked like a bad release quickly escalated into a full-scale compromise affecting a library downloaded millions of times per day. But LiteLLM wasn’t the starting point; it was just one link in a much larger attack chain involving compromised security tools, CI/CD pipelines, and stolen publishing credentials.

    The result? Malicious packages distributed at scale, harvesting secrets, enabling lateral movement, and establishing persistence across affected systems.

    We dig into:
    • What LiteLLM is and why it’s such a high-value target
    • How the attack chain started with compromised security tooling (Trivy, Checkmarx)
    • How unpinned dependencies enabled the compromise
    • The role of CI/CD pipelines in exposing sensitive credentials
    • What the malicious LiteLLM packages actually did (credential harvesting, persistence, lateral movement)
    • The scale of impact given LiteLLM’s widespread adoption
    • Why supply chain attacks are no longer theoretical, and no longer exclusive to nation-states
    • How AI is lowering the barrier to entry for attackers
    • Why this wasn’t really an “AI vulnerability” but an infrastructure failure
    • The growing risk of automated, agent-driven attack discovery

    This episode highlights a critical reality: the biggest risks in AI systems aren’t always in the models; they’re in the pipelines, dependencies, and infrastructure surrounding them.

    📚 Key Concepts & Topics

    Supply Chain Security
    • Dependency poisoning and malicious package distribution
    • CI/CD pipeline compromise
    • Version pinning and build integrity

    Credential & Secrets Exposure
    • API keys, SSH keys, and cloud credentials in pipelines
    • Risks of centralized AI gateways like LiteLLM

    Threat Actor Techniques
    • Tag rewriting and trusted reference hijacking
    • Multi-stage malware (harvest, lateral movement, persistence)
    • Use of lookalike domains for exfiltration

    AI & Security Reality Check
    • AI as an amplifier, not the root vulnerability
    • Traditional security failures in modern AI stacks
    • Automation lowering attacker barriers

    Defensive Strategies
    • Dependency pinning and isolation (Docker, VPS)
    • Atomic credential rotation
    • Treating CI/CD tools as critical infrastructure
    • Monitoring outbound traffic from build environments


    • (00:00) - Intro & Incident Overview
    • (01:26) - What Is LiteLLM & Why It Matters
    • (03:53) - Supply Chain Scope & Why This Is Dangerous
    • (07:31) - Why These Attacks Are Getting Easier (AI + Scale)
    • (10:48) - Attack Chain Breakdown (Trivy → Checkmarx → LiteLLM)
    • (11:50) - What the Malware Did & Impact at Scale
    • (14:23) - Detection, Response & Who Was Safe

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Brian Fehrman - Host
    • Bronwen Aker - Host
    • Derek Banks - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    20 min