Couverture de Domesticating AI

Domesticating AI

Domesticating AI

De : SoyPete Tech
Écouter gratuitement

À propos de ce contenu audio

Domesticating AI is a bi-weekly podcast about practical AI for developers. We cover self-hosted models, local AI, homelabs, hardware, agents, security, and reliability so software engineers can build - Miriah Peterson: Software engineer, Go educator, and community builder focused on *production-first* AI. Runs SoyPete Tech (streams + writing + open-source). - Matt Sharp: AI Engineer/Strategist, co-author of *LLMs in Production*, MLOps practitioner. Writes **The Data Pioneer**. - Chris Brousseau: NLP practitioner, co-author of LLMs in Production, VP of AI at VEOX. You can find him as IMJONEZZSoyPete Tech
Épisodes
  • Hacking AI: Why Most AI Systems Are Insecure by Default
    Apr 24 2026

    Hosts: Miriah Peterson, Matt Sharp, Chris Brousseau
    Recorded: April 2026
    Status: Released

    Most AI systems today are designed to be helpful — not secure.

    In this episode, we break down how AI systems actually get exploited in production:

    • a real supply chain attack on a widely used AI dependency
    • prompt injection and why it still works
    • image-based (multimodal) exploits
    • tool and agent abuse

    If you’re building AI — especially at a startup — you are the security team.

    A widely used AI dependency was compromised via a malicious .pth file:

    • executes automatically when Python starts
    • no import required
    • targets credentials, SSH keys, and environment variables

    👉 Just installing the package was enough.

    This highlights a critical reality:

    Your AI system is only as secure as your dependencies.

    • Models cannot distinguish between instructions and data
    • External content can override system behavior
    • Still one of the most common AI vulnerabilities

    🔗 https://learnprompting.org/docs/prompt_hacking/injection

    • Hidden instructions embedded in images
    • AI interprets images differently than humans
    • Expands the attack surface significantly

    🔗 https://arxiv.org/abs/2306.11698

    • AI systems can take real-world actions via tools
    • Prompt injection → API calls, data leaks, unintended execution
    • Agents amplify risk through autonomy and retries

    If you’re building AI systems today:

    • separate instructions from data
    • limit tool permissions
    • treat outputs as untrusted
    • validate everything before execution
    • AI systems have an internet-sized attack surface
    • Supply chain attacks bypass all AI safeguards
    • Prompt injection is a fundamental problem
    • AI doesn’t fail safely — it fails wherever your system is weakest
    • LiteLLM incident: https://github.com/BerriAI/litellm/issues/24512
    • Attack breakdown: https://futuresearch.ai/blog/litellm-pypi-supply-chain-attack/
    • LLM attack techniques: https://llm-attacks.org/
    • OWASP LLM Top 10: https://owasp.org/www-project-top-10-for-large-language-model-applications/
    • Gandalf challenge: https://gandalf.lakera.ai/

    We’ve launched a Patreon for Domesticating AI 🎉

    Get:

    • early access to episodes
    • behind-the-scenes content
    • bloopers and uncut moments

    👉 https://patreon.com/DomesticatingAIPodcast

    • 🎥 YouTube: https://youtu.be/HTTxE7Y1sko

    What’s the weirdest way an AI system has broken for you?

    Keep your AI on a leash.

    Afficher plus Afficher moins
    43 min
  • Coding with AI: Vibe Coding vs Real Engineering (with Tyler Folkman)
    Apr 10 2026

    AI can write code — but that doesn’t mean you should trust it.

    In this episode of Domesticating AI, we’re joined by Tyler Folkman (author of The AI Architect) to break down how engineers are actually using AI to build software — and why most people are still just vibe coding.

    • Vibe coding vs real engineering
    • Reasoning models vs coding models
    • How to plan and prompt AI effectively
    • When to let AI take the wheel (and when not to)
    • Local vs cloud coding agents
    • Token costs vs owning hardware
    • Tyler Folkman — The AI Architect
    • Anthropic
      https://www.anthropic.com
    • OpenAI
      https://openai.com
    • Ollama
      https://ollama.com
    • MiniMax-M2.5
      https://ollama.com/library/minimax-m2.5
    • GLM-5
      https://ollama.com/library/glm-5
    • AmpCode Chronicle
      https://ampcode.com/chronicle
    • Andrej Karpathy on Context Engineering
      https://x.com/karpathy
    • “Human in the Loop is Tired”
      (add link if you have it)

    Domesticating AI is a bi-weekly podcast about practical AI for developers.

    We help you brace the feral open-source AI landscape — so you can tame it instead of getting dragged by it.

    contact@domesticatingai.com

    Spotify
    https://open.spotify.com/show/2WsAR4fvcXzp3vVZGVlkE2

    Apple Podcasts
    https://podcasts.apple.com/us/podcast/domesticating-ai/id1873338950

    Are you vibe coding — or engineering with AI?

    Let us know your setup.

    Keep your AI on a leash.

    🧠 What We Cover🔗 Links & ResourcesGuestModels & ToolsArticles / Mentions🎧 About the Podcast📬 Contact🔥 Follow👇 Join the Discussion

    Afficher plus Afficher moins
    40 min
  • Securing Your Homelab: AI Infrastructure, Access Control & Why Docker Isn’t Isolation
    Mar 27 2026
    Recording Date: February 27, 2026Hosts: Miriah Peterson, Matt Sharp, Chris BrousseauRunning AI locally is easier than ever.Running it securely is another story.In this episode of Domesticating AI, we break down the moment every homelab builder hits:The second you move from one machine to two machines…access becomes your first real engineering problem.We explore the real architecture questions behind self-hosting AI:Why a dedicated machine isn’t a sandboxWhy Docker alone isn’t isolationHow homelabs evolve from Plex servers to AI infrastructureThe blast radius problem with local agentsWhy networking and access control matter more than model sizeWe also discuss the surge in local AI hardware demand and the risks of running powerful agents on machines with unrestricted access.Whether you're running OpenClaw, Ollama, a NAS, Postgres, or a home automation stack, the same rule applies:Infrastructure without containment is just risk waiting to happen.High-memory Mac Minis are seeing long shipping delays as developers rush to build local AI systems.https://www.tomshardware.com/tech-industry/artificial-intelligence/openclaw-fueled-ordering-frenzy-creates-apple-mac-shortage-delivery-for-high-unified-memory-units-now-ranges-from-6-days-to-6-weeksMarketplace plugins and execution boundaries are becoming a growing security concern in agent systems.https://www.linkedin.com/posts/matthewsharp_i-use-to-do-nothing-but-post-about-clean-activity-7432832983339999232-iR04Overview of risks around agent plugin ecosystems and execution boundaries.https://conscia.com/blog/the-openclaw-security-crisis/Private mesh networking used to securely access homelabs.https://tailscale.comLocal AI coding agent framework.https://openclaw.aiLocal LLM runtime used for running models on personal machines.https://ollama.comWhy people actually build homelabsPlex, NAS, and home automation as infrastructure entry pointsAI workloads vs dev workloadsWhy long-running services shouldn’t live on your laptopNetworking architecture for homelabsRBAC-style access control between machinesSecrets management mistakes developers makeContainment and blast-radius thinking for AI agentsTailscale and private mesh networkingEach host answers:If I had $0What I would runWhat I would avoidIf I had $1KWhat machine I’d buyHow I’d isolate workloadsIf I had $5KHow I’d segment infrastructureWhat monitoring I’d deployWhat I would never expose to the internetStaff Data Engineer, content creator, and founder of SoyPete Tech.Miriah focuses on practical AI systems, Go infrastructure, and self-hosted AI engineering.She is also a Google Developer Expert in Go and organizer of Go West Conf.https://soypete.techAI engineer and co-author of LLMs in Production.Matt focuses on applied AI systems, local model infrastructure, and developer-focused AI tooling.Software engineer and AI practitioner focused on practical applications of machine learning and developer infrastructure.Domesticating AI is supported by the SoyPete Tech community.If you enjoy the show:Subscribe on YouTubeFollow on SpotifyJoin the Discord communityShare the episode with another engineer building with AIMore content and tutorials:https://soypetech.substack.com📰 News DiscussedMac Mini Shortages from Local AI DemandOpenClaw Security DiscussionOpenClaw Security Concerns (Referenced)🧰 Tools & Technologies MentionedTailscaleOpenClawOllama🏗 Topics Covered⚡ Lightning Round🎙 HostsMiriah PetersonMatt SharpChris Brousseau🤝 Sponsors
    Afficher plus Afficher moins
    30 min
Aucun commentaire pour le moment