BrakeSec Education Podcast

By: Bryan Brake, Amanda Berlin, and Brian Boettcher

About this audio content

A podcast about the world of Cybersecurity, Privacy, Compliance, and Regulatory issues that arise in today's workplace. Co-hosts Bryan Brake, Brian Boettcher, and Amanda Berlin teach concepts that aspiring Information Security professionals need to know, or refresh the memories of seasoned veterans. Copyright 2024. All rights reserved.
    Episodes
    • Jay Beale discusses his K8s class at BlackHat, Kubernetes developments, and mental health
      Jul 17 2025

      Youtube Video at: https://www.youtube.com/watch?v=yHPvGVfPgjI


      Jay Beale is a principal security consultant and CEO/CTO of InGuardians. He is the architect of multiple open source projects, including the Peirates attack tool for Kubernetes (in Kali Linux), the Bustakube CTF Kubernetes cluster, and Bastille Linux. Jay created and leads the Kubernetes CTF at DEF CON and previously contributed to the Kubernetes project's security efforts. He has co-written eight books and given many public talks at Black Hat, DEF CON, RSA, CanSecWest, Blue Hat, ToorCon, DerbyCon, WWHF, HushCon, and others. He teaches the highly rated Black Hat class "Attacking and Protecting Kubernetes, Linux, and Containers." He has served on the review board of the O'Reilly Security Conference and the board of Mitre's CVE-related Open Vulnerability and Assessment Language, and has been a member of the HoneyNet Project. He's briefed both Congress and the White House.

      Questions and topics: (please feel free to update or make comments for clarifications)
      * Kubernetes vs. Docker vs. LXC vs. VMs - why did you settle on K8s?
      * What's new with k8s? Version 1.33? Do you always implement the latest version in your CTF, or something that is deliberately vulnerable? (https://www.loft.sh/blog/kubernetes-v-1-33-key-features-updates-and-what-you-need-to-know)
      * When you are making a CTF, what's your methodology? Threat model then verify? Code review? Github pull requests?
      * Story time: this isn't the first year you've run the CTF. Have participants ever surprised you by finding something you didn't expect?
      * If I'm running K8s at my workplace, what's the bare-minimum k8s security I should implement? Are there controls that cost performance, or 'nice-to-have' controls that run counter to how orgs typically use k8s, that I should think twice about implementing?
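As one illustration of a common "bare minimum" control (a sketch, not necessarily what Jay recommends in the episode), Kubernetes' built-in Pod Security admission can be enabled per namespace with labels; the namespace name below is a made-up example:

```yaml
# Hypothetical namespace enforcing the "baseline" Pod Security Standard,
# while warning on anything that would fail the stricter "restricted" profile.
apiVersion: v1
kind: Namespace
metadata:
  name: example-apps        # illustrative name, not from the episode
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```

The "enforce" label blocks non-conforming pods outright, while "warn" surfaces stricter violations without breaking existing workloads, which is one way to add security without immediately fighting how teams already use the cluster.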


      Additional information / pertinent links (Would you like to know more?):
      https://kubernetes.io/
      DEF CON Kubernetes CTF: https://containersecurityctf.com/
      Black Hat training: https://www.blackhat.com/us-25/training/schedule/index.html#0-day-unnecessary-attacking-and-protecting-kubernetes-linux-and-containers-45335
      https://www.bustakube.com/
      https://github.com/inguardians/peirates
      Rory McCune's blog: https://raesene.github.io/
      https://www.oreilly.com/library/view/production-kubernetes/9781492092292/ - O'Reilly book: Production Kubernetes


      Show points of Contact:
      Amanda Berlin: https://www.linkedin.com/in/amandaberlin/
      Brian Boettcher: https://www.linkedin.com/in/bboettcher96/
      Bryan Brake: https://linkedin.com/in/brakeb
      Brakesec Website: https://www.brakeingsecurity.com
      Youtube channel: https://youtube.com/@brakeseced
      Twitch Channel: https://twitch.tv/brakesec

      1 hr and 49 min
    • Socvel intel threat quiz, Pearson breached, Nintendo bricking stuff, and kevintel.com
      May 10 2025

      socvel.com/quiz if you want to play along!

      Check out the BrakeSecEd Twitch at https://twitch.tv/brakesec

      join the Discord: https://bit.ly/brakesecDiscord


      Music:

      Music provided by Chillhop Music: https://chillhop.ffm.to/creatorcred

      "Flex" by Jeremy Blake
      Courtesy of Youtube media library

      1 hr and 25 min
    • Bronwen Aker - harnessing AI for improving your workflows
      Apr 22 2025
      Guest Info:
      Name: Bronwen Aker
      Contact Information: https://br0nw3n.com/
      Time Zone(s): Pacific, Central, Eastern

      Disclaimer: The views, information, or opinions expressed on this program are solely the views of the individuals involved and by no means represent absolute facts. Opinions expressed by the host and guests can change at any time based on new information and experiences, and do not represent views of past, present, or future employers.

      Recorded: https://youtube.com/live/guhM8v8Irmo?feature=share

      Show Topic Summary: By harnessing AI, we can be proactive in discovering evolving threats, safeguard sensitive data, analyze data, and create smarter defenses. This week, we'll be joined by Bronwen Aker, who will share invaluable insights on creating a local AI tailored to your unique needs. Get ready to embrace innovation, transform your work life, and contribute to a safer digital world with the power of artificial intelligence! (heh, I wrote this with the help of AI…)

      Questions and topics: (please feel free to update or make comments for clarifications)

      Things that concern Bronwen about AI: (https://br0nw3n.com/2023/12/why-i-am-and-am-not-afraid-of-ai/)
      * Data Amplification: Generative AI models require vast amounts of data for training, leading to increased data collection and storage. This amplifies the risk of unauthorized access or data breaches, further compromising personal information.
      * Data Inference: LLMs can deduce sensitive information even when it is not explicitly provided. They may inadvertently disclose private details by generating contextually relevant content, infringing on individuals' privacy.
      * Deepfakes and Misinformation: Generative AI can generate convincing deepfake content, such as videos or audio recordings, which can be used maliciously to manipulate public perception or deceive individuals. (Elections, anyone?)
      * Bias and Discrimination: LLMs may inherit biases present in their training data, perpetuating discrimination and privacy violations when generating content that reflects societal biases.
      * Surveillance and Profiling: The use of LLMs for surveillance purposes, combined with big data analytics, can lead to extensive profiling of individuals, impacting their privacy and civil liberties.

      Setting up a local LLM?
      * CPU models vs. GPU models: pros/cons? Benefits?
      * What can people do if they lack local resources? Cloud instances? EC2? Digital Ocean? Use a smaller model?

      https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
      * AI coding assistants are hallucinating package names
      * 5.2 percent of package suggestions from commercial models didn't exist, compared to 21.7 percent from open source or openly available models
      * Attackers can then create malicious packages matching the invented name; some are quite convincing, with READMEs, fake GitHub repos, even blog posts
      * An evolution of typosquatting, named "slopsquatting" by Seth Michael Larson of the Python Software Foundation
      * Threat actor "_Iain" posted instructions and videos on using AI to mass-generate fake packages, from creation to exploitation

      Additional information / pertinent links (Would you like to know more?):
      https://www.reddit.com/r/machinelearningnews/s/HDHlwHtK7U
      https://br0nw3n.com/2024/06/llms-and-prompt-engineering/ - Prompt Engineering talk
      https://br0nw3n.com/wp-content/uploads/LLM-Prompt-Engineering-LayerOne-May-2024.pdf (slides)
      Daniel Miessler's 'Fabric' - https://github.com/danielmiessler/fabric
      https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/a_starter_guide_for_playing_with_your_own_local_ai/
      Ollama tutorial (co-founder of Ollama, Matt Williams): https://www.youtube.com/@technovangelist
      https://mhtntimes.com/articles/altman-please-thanks-chatgpt
      https://www.whiterabbitneo.com/ - AI for DevSecOps, Security
      https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/
      https://www.youtube.com/watch?v=OuF3Q7jNAEc - neverending story using an LLM
      https://science.nasa.gov/venus/venus-facts

      Show points of Contact:
      Amanda Berlin: https://www.linkedin.com/in/amandaberlin/
      Brian Boettcher: https://www.linkedin.com/in/bboettcher96/
      Bryan Brake: https://linkedin.com/in/brakeb
      Brakesec Website: https://www.brakeingsecurity.com
      Youtube channel: https://youtube.com/@brakeseced
      Twitch Channel: https://twitch.tv/brakesec
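The slopsquatting risk discussed in these show notes can be reduced with a simple pre-install gate: compare AI-suggested dependencies against a vetted allowlist before anything reaches `pip install`. This is a minimal sketch; the function name and the allowlist contents are illustrative assumptions, not tooling from the episode.

```python
# Hypothetical helper: flag AI-suggested package names that are not on a
# vetted allowlist, so hallucinated ("slopsquatted") names never get installed.

def vet_suggestions(suggested, allowlist):
    """Return (approved, flagged) lists of package names.

    Comparison is case-insensitive, since PyPI treats names
    case-insensitively.
    """
    allowed = {name.lower() for name in allowlist}
    approved, flagged = [], []
    for pkg in suggested:
        (approved if pkg.lower() in allowed else flagged).append(pkg)
    return approved, flagged

# "requessts" stands in for a hallucinated/typosquatted suggestion.
approved, flagged = vet_suggestions(
    ["requests", "requessts", "numpy"],
    allowlist=["requests", "numpy", "pandas"],
)
```

A real pipeline would also normalize hyphens/underscores and check the package against a private index or lockfile rather than a hand-written list, but the gate itself is this simple.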
      1 hr and 37 min