• Accessible UX Research by Michele A. Williams, PhD
    Feb 23 2026

    When Doors Close on Those Who Need Them Most


As a podcast host exploring the intersection of humanity and technology, I keep asking: Are we really including everyone in our digital transformation?


Dr. Michele A. Williams, Owner & Accessibility Consultant, has a new book, Accessible UX Research, that challenges us to move beyond checklists and truly design with, not for, disabled users.


Coming soon to the Human Signal podcast: Dr. Michele A. Williams joins us to break down how to make digital accessibility work in an AI world.


    https://mawconsultingllc.com/


    Accessibility is not just about digital spaces. Accessibility is about fundamental human rights.


    What Gen X leaders and professionals need to know:

    ✓ How to spot invisible exclusion in UX research and code

    ✓ Moving beyond compliance checklists to build truly inclusive systems

    ✓ Using AI for captions and alt text without creating new barriers

✓ 90-day accessibility practices your team can sustain


Because real inclusion means ensuring everyone has access to the places and systems they need, whether digital or physical.


    Production notes:


    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


    📧 Contact & Subscribe


    LinkedIn: linkedin.com/in/tuboise

    Email: tuboise@humansignal.io

    GoFundMe: https://gofund.me/117dd0d3d


    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://gofund.me/117dd0d3d


    Every contribution sustains the signal.


    📜 Transcript


    Full transcript available upon request at support@humansignal.io


    🏷️ Tags


    #HumanSignal #DigitalAccessibility #ArtificialIntelligence #InclusiveDesign #UXResearch #GenXLeaders #TechLeadership #Accessibility

    3 min
  • AI Activism for Insiders | This Is Not Ethics Work Brief
    Feb 20 2026

    AI Activism for Insiders: This Is Not Ethics Work


    🧠 About Human Signal


Human Signal monitors governance patterns across frontier AI labs, tracking the gap between stated safety commitments and operational reality. Through the L.E.A.C. Protocol and tools like Noise Discipline and Workflow Thesis, we identify where governance erodes under capital pressure and where external oversight needs to be applied.


    Production notes:


    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


    📧 Contact & Subscribe


    LinkedIn: linkedin.com/in/tuboise

    Email: tuboise@humansignal.io

    GoFundMe: https://gofund.me/117dd0d3d


    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://gofund.me/117dd0d3d


    Every contribution sustains the signal.


    📜 Transcript


    Full transcript available upon request at support@humansignal.io


    🏷️ Tags


    #AI #FutureOfWork #ResponsibleAI #AIGovernance #AIActivism

    2 min
  • The Anthropic Exodus and Governance Collapse | Human Signal Failure File 002
    Feb 20 2026

    Episode Summary


    On February 9th, 2026, Anthropic's head of safeguards research, Mrinank Sharma, resigned—and his departure tells us everything about what happens when billion-dollar infrastructure commitments collide with safety protocols.


    This episode examines how AI labs build world-class safeguards on paper while struggling to maintain them in practice. We explore the gap between stated safety commitments and operational reality, and why that gap is where systemic risk accumulates.


    🔑 Key Topics Covered


    The Signal, Not Just Personnel

    - Mrinank Sharma's resignation as organizational telemetry

    - Sharma's critical research areas: reality distortion in AI chatbots, AI-assisted bioterrorism defense, and sycophancy prevention

    - Why departures from safety leadership roles are data points in governance collapse patterns


    Infrastructure Economics vs. Safety

    - The capital-intensive reality: lithography, GPUs, data centers, and energy

    - How financial models lock organizations into velocity-prioritizing postures

    - The mechanism of slow-motion governance collapse


    The Public-Private Governance Gap

    - U.S. Department of Labor's AI Literacy Framework and public-side initiatives

    - The irony of raising the AI literacy floor while the ceiling cracks in frontier labs

    - Where systemic risk accumulates in this disconnect


    The L.E.A.C. Protocol Framework

    Dr. Floyd introduces Human Signal's analytical framework for understanding AI governance failures.


    🔗 Resources & Links


    Referenced Frameworks & Projects

    - L.E.A.C. Protocol Framework: https://youtube.com/shorts/VpDm5LnW20g?si=J6nz3wPQz3c97-1r

    - Project Cerebellum: https://projectcerebellum.com

    - TAIMScore - Structured assessment tool for AI governance evaluation: https://projectcerebellum.com/#taimscore

    - U.S. Department of Labor AI Literacy Framework - Federal guidance on AI skills and safeguards: https://www.dol.gov/sites/dolgov/files/ETA/advisories/TEN/2025/TEN%2007-25/TEN%2007-25%20(complete%20document).pdf


    Key Research Areas (Mrinank Sharma)

    - AI chatbot reality distortion effects

    - AI-assisted bioterrorism defense mechanisms

    - Sycophancy in AI models and powerful user interactions


    Related Reading

    - Anthropic's published safety commitments and responsible scaling policy

    - Analysis of frontier AI lab governance structures

    - Case studies in AI safety leadership turnover


    📥 Episode Audio Files


    Full Episode Segments:

    1. ​Introduction - 22 seconds
    2. ​Sharma's Resignation & Governance Gap - 59 seconds
    3. ​Sharma's Track Record & Organizational Telemetry - 61 seconds
    4. ​Infrastructure & Financial Pressures - 67 seconds
    5. ​L.E.A.C. Framework Analysis - 2 minutes 4 seconds
    6. ​Closing & Sign-off - 29 seconds


    🧠 About Human Signal


Human Signal monitors governance patterns across frontier AI labs, tracking the gap between stated safety commitments and operational reality. Through the L.E.A.C. Protocol and tools like Noise Discipline and Workflow Thesis, we identify where governance erodes under capital pressure and where external oversight needs to be applied.


    Production notes:


    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


    📧 Contact & Subscribe


    LinkedIn: linkedin.com/in/tuboise

    Email: tuboise@humansignal.io

    GoFundMe: https://gofund.me/117dd0d3d


    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://gofund.me/117dd0d3d


    Every contribution sustains the signal.


    📜 Transcript


    Full transcript available upon request at support@humansignal.io


    🏷️ Tags


    #AIGovernance #AIEthics #Anthropic #AISafety #TechPolicy #FrontierAI #GovernanceCollapse #AIResearch #MachineLearning #TechAccountability #AIInfrastructure #ProjectCerebellum #LEACProtocol


    © 2026 Human Signal. All rights reserved.

    7 min
  • The Governance Gap - Why AI Contracts Outpace Control Systems
    Feb 14 2026

    The Governance Gap - Why AI Contracts Outpace Control Systems


    EPISODE DESCRIPTION


    Is your leadership signing AI contracts faster than they're building governance?


    That gap is where the lawsuits, scandals, and quiet institutional failures live. It's how you wake up with a 'successful AI pilot' and a mess in risk, workforce, and public trust.


    The Critical Problem:


    Organizations are racing to deploy AI without the control systems, oversight mechanisms, and governance frameworks needed to manage the technology safely. The result? A dangerous gap between what leadership promises and what operations can actually deliver.


    Who This Is For:

    • ​Mid-career operators inside AI-disrupted institutions
    • ​Federal IT leaders watching risky deployments unfold
    • ​University CIOs managing AI rollouts without adequate governance
    • ​Enterprise strategists caught between innovation pressure and risk reality
    • ​Policy teams trying to create guardrails after the fact


    The Solution:


    Human Signal is an independent strategy lab that helps institutional operators close the governance gap from the inside. We provide the frameworks, playbooks, and strategic guidance to reshape—or stop—bad AI deployments before they break your institution.


    Key Takeaway:


    You don't have to wait for leadership to figure this out. Mid-career operators have the leverage to intervene, redirect, and demand better governance before the failures compound.


    This isn't about slowing down innovation. It's about surviving it.


    ABOUT DR. TUBOISE FLOYD


    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.


    He used to be the person quietly fixing other people's broken systems. Now he builds and broadcasts his own.


    SUBSCRIBE & SUPPORT


    Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.


    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://gofund.me/117dd0d3d


    Every contribution sustains the signal.


    PRODUCTION NOTES


    Host & Producer: Dr. Tuboise Floyd

    Creative Director: Jeremy Jarvis


    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


    CONNECT


    • ​Website: HumanSignal.io
    • ​LinkedIn: linkedin.com/in/tuboise
    • ​Email: tuboise@humansignal.io
    • ​GoFundMe: https://gofund.me/117dd0d3d


    TRANSCRIPT


    Full transcript available upon request at support@humansignal.io


    TAGS/KEYWORDS


    Governance Gap, AI Governance, AI Contracts, Risk Management, Institutional AI, Mid-Career Operators, Federal IT, Enterprise AI, AI Deployment, Control Systems, AI Oversight, Strategic Governance, AI Policy, Institutional Risk


    HASHTAGS


    #GovernanceGap #AIGovernance #AIContracts #RiskManagement #InstitutionalAI #MidCareer #FederalIT #EnterpriseAI #HumanSignal #AIDeployment #StrategicGovernance


    LEGAL


    © 2026 Dr. Tuboise Floyd. All rights reserved.

    Content is part of the Presence Signaling Architecture (PSA™) and L.E.A.C. Protocol™.

    1 min
  • Fund Independent AI Governance Research & Strategy for Federal & Enterprise Leaders
    Feb 13 2026

    Help keep Human Signal independent for the next 6 months. This GoFundMe funds weekly episodes, visual playbooks, and protected research time—without sponsors, paywalls, or vendor capture.


    🔗 Support at any level: https://gofund.me/ee1ceb7a1


    Every contribution sustains the signal.


    ABOUT THE HOST


    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.


    ABOUT HUMAN SIGNAL


    Human Signal is a strategy podcast and research lab for operators—not vendors or VCs. Over 12 episodes across two seasons, listeners have used these frameworks to rewrite procurement criteria, slow unsafe rollouts, and rebuild governance around real-world constraints instead of marketing decks.


    CONNECT


    • ​LinkedIn: linkedin.com/in/tuboise
    • ​Email: tuboise@humansignal.io
    • ​GoFundMe: https://gofund.me/ee1ceb7a1


    TAGS/KEYWORDS


    AI Governance, Risk Management, AI Policy, Tech Leadership, Institutional AI, Future of Work, AI Ethics, Governance Failure, Enterprise AI, Government AI, AI Safety, Technology Policy, Digital Transformation, AI Deployment, Systems Thinking


    HASHTAGS


    #AIGovernance #RiskManagement #AIPolicy #TechLeadership #InstitutionalAI #FutureOfWork #HumanSignal #AIEthics #GovernanceFailure #EnterpriseAI


    EPISODE CREDITS


    Host & Producer: Dr. Tuboise Floyd

    Creative Director: Jeremy Jarvis


    TRANSCRIPT AVAILABLE


    Full transcript available upon request at support@humansignal.io


    © 2026 Human Signal. All rights reserved.

    2 min
  • AI Governance: Balancing Innovation with Risk Management
    Feb 12 2026

    EPISODE DESCRIPTION


    In this episode, Dr. Tuboise Floyd is joined by Col. Kathy Swacina (USA Ret.), CIO of Sherpawerx, and Taiye Lambo, Founder and Chief Artificial Intelligence Officer of Holistic Information Security Practitioner Institute (HISPI), to discuss Project Cerebellum, AI Governance, and balancing innovation with risk management.


    We delve into the critical need for a holistic control layer in AI development. Without appropriate checks and balances, the rapid race to be first with AI could lead to dire consequences. The discussion touches on the role of CIOs, the moral compass of AI systems, and the potential risks of operating without proper oversight.


    This is a cautionary tale for executives and developers alike, emphasizing the importance of a balanced approach to AI innovation and governance.


    Key Topics:

    • ​Project Cerebellum and holistic AI control layers
    • ​The race to AI deployment vs. responsible governance
    • ​The evolving role of CIOs in AI oversight
    • ​Building moral compasses into AI systems
    • ​Risk management frameworks that actually work


    GUESTS


    Col. Kathy Swacina (USA Ret.)

    CIO, Sherpawerx

    🔗 https://sherpawerx.com/


    Taiye Lambo

    Founder & Chief Artificial Intelligence Officer

    Holistic Information Security Practitioner Institute (HISPI)

    🔗 https://www.hispi.org/


    SUBSCRIBE & SUPPORT


    Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.


    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://gofund.me/117dd0d3d


    Every contribution sustains the signal.


    ABOUT THE HOST


    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.


    PRODUCTION NOTES


    Host & Producer: Dr. Tuboise Floyd

    Creative Director: Jeremy Jarvis


    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


    CONNECT


    • ​LinkedIn: linkedin.com/in/tuboise
    • ​Email: tuboise@humansignal.io
    • ​GoFundMe: https://gofund.me/117dd0d3d


    TRANSCRIPT


    Full transcript available upon request at support@humansignal.io


    TAGS/KEYWORDS


    AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership


    HASHTAGS


    #AIGovernance #RiskManagement #Innovation #AIPolicy #CIOLeadership #AIEthics #HumanSignal #MilitaryTech #Cybersecurity #ProjectCerebellum


    LEGAL


    © 2026 Dr. Tuboise Floyd. All rights reserved.

    Content is part of the Presence Signaling Architecture (PSA™) and L.E.A.C. Protocol™.

    50 min
  • AI Hiring Bias: Amazon's Recruiter AI Failure | Human Signal Failure File 001
    Feb 11 2026

    EPISODE DESCRIPTION


    🎧 When institutions embed AI into decision workflows, the primary risk isn't "bad models."


    It's governance failure.


    In this episode of the Human Signal Failure File, I examine two critical cases:


    🚌 Spokane Transit: A navigation system routed a double-decker bus toward a low bridge, shearing off the upper deck and injuring passengers. Routes were vetted on paper, but nobody asked: "What happens when navigation confidently detours a 13.5-foot vehicle toward a 12.5-foot bridge?"


    💼 Amazon Hiring Tool: Trained on a male-dominated technical workforce, the AI learned to penalize women. Even after engineers stripped obvious terms, they couldn't guarantee it wasn't reconstructing gender through proxies.


    The real thesis:


    The hazard of institutional AI is not the widget. It's the workflow.


    The defensive move? Treat AI as a workflow design problem (a minimal sketch of one such guardrail follows this list):

    • ​Run stress tests on safety-critical processes
    • ​Build incident playbooks with clear triggers
    • ​Create guardrails for when to suspend and audit
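
    For readers who want something concrete, here is a minimal, hypothetical sketch of what one such guardrail could look like in code, using the episode's bus-and-bridge numbers. The function name, data shape, and safety margin are illustrative assumptions, not part of any system discussed in the episode:

    # Hypothetical sketch of a pre-dispatch guardrail (illustrative only).
    # The check runs before the automated routing decision is acted on, and a
    # failed check has a defined trigger: suspend dispatch and audit the route.

    from dataclasses import dataclass

    @dataclass
    class RouteSegment:
        name: str
        min_clearance_ft: float  # lowest posted clearance on this segment

    def route_clears(vehicle_height_ft: float,
                     segments: list[RouteSegment],
                     safety_margin_ft: float = 0.5) -> tuple[bool, list[str]]:
        """Return (ok, violations); any violation means suspend and audit."""
        violations = [
            f"{s.name}: clearance {s.min_clearance_ft} ft is below vehicle "
            f"{vehicle_height_ft} ft plus {safety_margin_ft} ft margin"
            for s in segments
            if s.min_clearance_ft < vehicle_height_ft + safety_margin_ft
        ]
        return (not violations, violations)

    # Example mirroring the episode's numbers: a 13.5 ft bus detoured toward a 12.5 ft bridge.
    ok, problems = route_clears(13.5, [RouteSegment("Low bridge on detour", 12.5)])
    if not ok:
        print("SUSPEND DISPATCH AND AUDIT ROUTE:")
        for p in problems:
            print(" -", p)

    The arithmetic is trivial by design; the point is that the check sits inside the workflow, runs before the automated decision is acted on, and fails to a defined trigger (suspend and audit) rather than a silent override.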


    AI doesn't just automate decisions. It automates institutional blind spots.


    Will leaders build the tests, guardrails, and exit ramps that keep those blind spots from becoming the new normal?


    SUBSCRIBE & SUPPORT


    Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.


    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://gofund.me/117dd0d3d


    Every contribution sustains the signal.


    ABOUT THE HOST


    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.


    PRODUCTION NOTES


    Host & Producer: Dr. Tuboise Floyd

    Creative Director: Jeremy Jarvis


    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


    CONNECT


    • ​LinkedIn: linkedin.com/in/tuboise
    • ​Email: tuboise@humansignal.io
    • ​GoFundMe: https://gofund.me/117dd0d3d


    TRANSCRIPT


    Full transcript available upon request at support@humansignal.io


    TAGS/KEYWORDS


    AI Governance, Risk Management, AI Policy, Tech Leadership, Institutional AI, Future of Work, AI Ethics, Governance Failure, Enterprise AI, Government AI, Spokane Transit, Amazon Hiring Bias, Workflow Design


    HASHTAGS


    #AIGovernance #RiskManagement #AIPolicy #TechLeadership #InstitutionalAI #FutureOfWork #HumanSignal #AIEthics #GovernanceFailure


    LEGAL


    © 2026 Dr. Tuboise Floyd. All rights reserved.

    Content is part of the Presence Signaling Architecture (PSA™) and L.E.A.C. Protocol™.

    4 min
  • Social Media Destroys Strategic Focus | Noise Discipline Framework for Tech Leaders Brief
    Feb 6 2026

    EPISODE DESCRIPTION


    🎧 Noise Discipline: Why Builders Must Treat Social Feeds as Enemy Territory


    In this brief, I discuss how high-speed social media feeds damage our ability to think deeply by softening our fact-checking instincts, increasing stress, and causing source amnesia—where we forget where ideas came from and mistake them for our own thoughts.


    What You'll Learn:

    • How social feeds degrade critical thinking capabilities

    • The real cost of source amnesia on strategic decision-making

    • Why constant scrolling softens fact-checking instincts

    • The stress cascade triggered by high-velocity information


    The Solution: Noise Discipline


    Treat feeds like radiation zones:

    • Set timers and enforce strict exposure limits

    • Question who wrote what and what they're selling

    • Skip anything that doesn't help you build

    • Reclaim your attention as a strategic asset


    This isn't about productivity hacks. It's about survival in an environment designed to colonize your attention.


    SUBSCRIBE & SUPPORT


    Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.


    Support Human Signal:

    Help fuel six months of new episodes, visual briefs, and honest playbooks.

    🔗 https://gofund.me/117dd0d3d


    Every contribution sustains the signal.


    ABOUT THE HOST


    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.


    PRODUCTION NOTES


    Host & Producer: Dr. Tuboise Floyd

    Creative Director: Jeremy Jarvis


    Tech Specs:

    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


    CONNECT


    • LinkedIn: linkedin.com/in/tuboise

    • Email: tuboise@humansignal.io

    • GoFundMe: https://gofund.me/117dd0d3d


    TRANSCRIPT


    Full transcript available upon request at support@humansignal.io


    TAGS/KEYWORDS


    Noise Discipline, Social Media Strategy, Information Hygiene, Attention Management, Critical Thinking, Source Amnesia, Digital Minimalism, Builder Mindset, Cognitive Load, Strategic Focus, Deep Work


    HASHTAGS


    #NoiseDiscipline #AttentionManagement #BuilderMindset #DigitalMinimalism #CriticalThinking #HumanSignal #DeepWork #StrategicFocus #InformationHygiene


    LEGAL


    © 2026 Dr. Tuboise Floyd. All rights reserved.

    Content is part of the Presence Signaling Architecture (PSA™) and L.E.A.C. Protocol™.



    1 min