Épisodes

  • Can AI Spot a Phish? LLMs vs. Email Scams
    Jun 22 2025

    Phishing emails are getting smarter—can AI keep up?

    In this episode of Neuro Sec Ops, hosts Alex Carter and Maya Lin dive into a cutting-edge study that tested large language models like BERT, LLaMA, and Wizard against real phishing threats. From email deception to model explainability, we unpack why some models detect scams better than others—and why trust in AI isn’t just about accuracy.

    We explore fine-tuning strategies, the ethics of explainable AI, and why even a smart model might still fall for a phish. Whether you're in cybersecurity or just curious about AI’s role in digital defense, this one's for you.

    Topics Covered:

    • Phishing detection with LLMs

    • Explainability vs. accuracy

    • SHAP values and CC-SHAP scoring

    • The future of AI in cybersecurity

    Subscribe for more deep dives at the intersection of AI and security.

    Kuikel, S., Piplai, A., & Aggarwal, P. (2025). Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and

    Afficher plus Afficher moins
    7 min
  • The Bias Beneath: Can AI Recruiters Ever Be Fair?
    Jun 19 2025

    AI is revolutionizing hiring—but what happens when it quietly learns our biases?

    In this episode, Alex Carter and Maya Lin unpack a compelling new study that reveals how Large Language Models like BERT and RoBERTa can inherit gender bias when scoring résumés—and what that means for fairness in automated hiring. From biased tokens to adversarial learning hacks, we explore the hidden risks and radical fixes in AI-based recruitment.

    You’ll learn:

    • How AI picks up gender signals even without explicit data
    • What “allocational harm” is and why it matters
    • Two cutting-edge methods to remove bias from LLMs
    • Why removing bias actually improves accuracy


    Whether you’re into cybersecurity, AI ethics, or just job-hunting in the digital age—this one’s for you.


    More information

    Afficher plus Afficher moins
    7 min
  • Guardrails for AI: Can We Stop LLMs from Going Rogue?
    Jun 17 2025

    In this episode of Neuro Sec Ops, hosts Alex Carter and Maya Lin dive into the evolving world of AI security and large language model (LLM) jailbreaks. Based on a new study from HKUST, we explore how jailbreak guardrails are being developed to detect and prevent malicious prompts that bypass LLM safety mechanisms.

    From pre-processing, intra-processing, and post-processing guardrails to rule-based vs. LLM-based detection methods, we break down the pros, cons, and performance trade-offs of today's best defenses. What are multi-turn jailbreaks, and why are session-level guardrails still vulnerable? How do SEU metrics—Security, Efficiency, Utility—shape AI defense strategies?

    Whether you're a cybersecurity expert, AI developer, or curious tech follower, this episode delivers an insightful, jargon-free overview of one of the most critical issues in AI alignment and safety today.

    🔑 Keywords: AI jailbreaks, LLM guardrails, AI safety, prompt injection, large language model security, cybersecurity, GPT-4 jailbreak, AI ethics, neural networks, adversarial AI, SEU framework

    More information


    Afficher plus Afficher moins
    6 min
  • Silent Signals: How Smartwatches Are Breaching Air-Gapped Systems
    Jun 15 2025

    You’ve heard that air-gapped computers are unhackable. No internet. No Bluetooth. No problem, right?
    Think again.

    In this eye-opening episode of Neuro Sec Ops, Alex Carter and Maya Lin dive into SmartAttack—a groundbreaking (and slightly terrifying) cybersecurity exploit that uses ultrasonic signals and smartwatches to breach even the most secure, air-gapped systems.

    Yes, you read that right. Just a smartwatch on someone’s wrist can become a covert receiver, silently collecting sensitive data using sound frequencies you can’t even hear.

    🎯 What you'll learn in this episode:

    • How SmartAttack works—from malware infiltration to ultrasonic data exfiltration

    • Why smartwatches are the perfect stealth tool for cybercriminals

    • The surprising physics behind ultrasonic communication

    • Real-world scenarios where this could happen

    • Practical steps organizations can take to prevent it

    Whether you're a cybersecurity pro, a tech enthusiast, or someone who just wears a smartwatch to count steps—this episode will change how you think about wearable tech and data privacy.

    🎧 Tune in and find out: Is your wrist the weakest link in the security chain?

    more information

    Afficher plus Afficher moins
    7 min
  • Trailer - Silent Signals: How Smartwatches Are Breaching Air-Gapped Systems
    Jun 14 2025

    What if the smartwatch on your wrist is the biggest cybersecurity threat in the room?

    In this episode of Neuro Sec Ops, Alex and Maya unpack SmartAttack—a jaw-dropping method for exfiltrating data from air-gapped computers using nothing but ultrasonic sound and a wearable device.

    From how the attack works to what it means for high-security environments, this episode dives deep into the silent, invisible world of covert data leaks. If you thought air-gaps were unbreachable, think again.

    Afficher plus Afficher moins
    1 min
  • Digital War Games: Simulating Cyber Battles with AI Agents
    Jun 13 2025

    Discover how artificial intelligence is transforming cybersecurity in this episode of Neuro Sec Ops. Hosts Alex and Maya unpack a groundbreaking cyberattack simulation where autonomous AI agents—both attackers and defenders—battle in a virtual network modeled on real-world infrastructure. Learn how MITRE ATT&CK tactics, reinforcement learning, and multi-agent systems are shaping the future of cyber defense and threat response. A must-listen for cybersecurity professionals, AI enthusiasts, and tech futurists.

    ---

    J. Soulé, J. -P. Jamont, M. Occello, P. Théron and L. -M. Traonouez, "Towards a Multi-Agent Simulation of Cyber-attackers and Cyber-defenders Battles," 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Honolulu, Oahu, HI, USA, 2023, pp. 3594-3599, doi: 10.1109/SMC53992.2023.10394564. #Organizations #BehavioralSciences #Cyberattack #Cybernetics},


    Afficher plus Afficher moins
    7 min