Couverture de Neuro Sec Ops

Neuro Sec Ops

Neuro Sec Ops

De : NeuroSec Ops
Écouter gratuitement

À propos de ce contenu audio

Neuro Sec Ops explores the cutting edge of AI and cybersecurity through expert interviews, technical deep dives, and real-world threat analysis. Whether you're a developer, researcher, or just tech-curious, tune in to decode how AI is transforming digital defense—and what’s at risk.NeuroSec Ops
Les membres Amazon Prime bénéficient automatiquement de 2 livres audio offerts chez Audible.

Vous êtes membre Amazon Prime ?

Bénéficiez automatiquement de 2 livres audio offerts.
Bonne écoute !
    Épisodes
    • Can AI Spot a Phish? LLMs vs. Email Scams
      Jun 22 2025

      Phishing emails are getting smarter—can AI keep up?

      In this episode of Neuro Sec Ops, hosts Alex Carter and Maya Lin dive into a cutting-edge study that tested large language models like BERT, LLaMA, and Wizard against real phishing threats. From email deception to model explainability, we unpack why some models detect scams better than others—and why trust in AI isn’t just about accuracy.

      We explore fine-tuning strategies, the ethics of explainable AI, and why even a smart model might still fall for a phish. Whether you're in cybersecurity or just curious about AI’s role in digital defense, this one's for you.

      Topics Covered:

      • Phishing detection with LLMs

      • Explainability vs. accuracy

      • SHAP values and CC-SHAP scoring

      • The future of AI in cybersecurity

      Subscribe for more deep dives at the intersection of AI and security.

      Kuikel, S., Piplai, A., & Aggarwal, P. (2025). Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and

      Afficher plus Afficher moins
      7 min
    • The Bias Beneath: Can AI Recruiters Ever Be Fair?
      Jun 19 2025

      AI is revolutionizing hiring—but what happens when it quietly learns our biases?

      In this episode, Alex Carter and Maya Lin unpack a compelling new study that reveals how Large Language Models like BERT and RoBERTa can inherit gender bias when scoring résumés—and what that means for fairness in automated hiring. From biased tokens to adversarial learning hacks, we explore the hidden risks and radical fixes in AI-based recruitment.

      You’ll learn:

      • How AI picks up gender signals even without explicit data
      • What “allocational harm” is and why it matters
      • Two cutting-edge methods to remove bias from LLMs
      • Why removing bias actually improves accuracy


      Whether you’re into cybersecurity, AI ethics, or just job-hunting in the digital age—this one’s for you.


      More information

      Afficher plus Afficher moins
      7 min
    • Guardrails for AI: Can We Stop LLMs from Going Rogue?
      Jun 17 2025

      In this episode of Neuro Sec Ops, hosts Alex Carter and Maya Lin dive into the evolving world of AI security and large language model (LLM) jailbreaks. Based on a new study from HKUST, we explore how jailbreak guardrails are being developed to detect and prevent malicious prompts that bypass LLM safety mechanisms.

      From pre-processing, intra-processing, and post-processing guardrails to rule-based vs. LLM-based detection methods, we break down the pros, cons, and performance trade-offs of today's best defenses. What are multi-turn jailbreaks, and why are session-level guardrails still vulnerable? How do SEU metrics—Security, Efficiency, Utility—shape AI defense strategies?

      Whether you're a cybersecurity expert, AI developer, or curious tech follower, this episode delivers an insightful, jargon-free overview of one of the most critical issues in AI alignment and safety today.

      🔑 Keywords: AI jailbreaks, LLM guardrails, AI safety, prompt injection, large language model security, cybersecurity, GPT-4 jailbreak, AI ethics, neural networks, adversarial AI, SEU framework

      More information


      Afficher plus Afficher moins
      6 min
    Aucun commentaire pour le moment