
Human Signal with Dr. Tuboise Floyd Authentic Intelligence in a Digital World


By: Dr. Tuboise Floyd
Listen for free

About this audio content

Human Signal with Dr. Tuboise Floyd: Authentic Intelligence in a Digital World is an asymmetric strategy briefing for people working inside AI-disrupted institutions. The market has split in two: the Consumption Economy is noise (content, checklists, compliance), and the Investment Economy is signal (infrastructure, physics, sovereignty). Human Signal is the intelligence feed for the Investment Economy; we do not trade in content, we trade in leverage. Hosted by Dr. Tuboise Floyd, Principal Technical Strategist, with Creative Director Jeremy Jarvis, the show covers asymmetric strategy, critical infrastructure, and the physics of risk for the GovCon and Builder Class sectors. If this mission resonates with you, you can support the Human Signal launch fund to fuel six months of new episodes, visual briefs, and honest playbooks at https://gofund.me/117dd0d3d. © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture (PSA™) and L.E.A.C. Protocol™.
    Episodes
    • AI Activism for Insiders | This Is Not Ethics Work Brief
      Feb 20 2026

      AI Activism for Insiders: This Is Not Ethics Work


      🧠 About Human Signal


      Human Signal monitors governance patterns across frontier AI labs, tracking the gap between stated safety commitments and operational reality. Through the L.E.A.C. Protocol and tools like Noise Discipline and Workflow Thesis, we identify where governance erodes under capital pressure and where external oversight needs to be applied.


      Production notes:


      Tech Specs:

      Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


      📧 Contact & Subscribe


      LinkedIn: linkedin.com/in/tuboise

      Email: tuboise@humansignal.io

      GoFundMe: https://gofund.me/117dd0d3d


      Support Human Signal:

      Help fuel six months of new episodes, visual briefs, and honest playbooks.

      🔗 https://gofund.me/117dd0d3d


      Every contribution sustains the signal.


      📜 Transcript


      Full transcript available upon request at support@humansignal.io


      🏷️ Tags


      #AI #FutureOfWork #ResponsibleAI #AIGovernance #AIActivism

      2 min
    • The Anthropic Exodus and Governance Collapse | Human Signal Failure File 002
      Feb 20 2026

      Episode Summary


      On February 9th, 2026, Anthropic's head of safeguards research, Mrinank Sharma, resigned—and his departure tells us everything about what happens when billion-dollar infrastructure commitments collide with safety protocols.


      This episode examines how AI labs build world-class safeguards on paper while struggling to maintain them in practice. We explore the gap between stated safety commitments and operational reality, and why that gap is where systemic risk accumulates.


      🔑 Key Topics Covered


      The Signal, Not Just Personnel

      - Mrinank Sharma's resignation as organizational telemetry

      - Sharma's critical research areas: reality distortion in AI chatbots, AI-assisted bioterrorism defense, and sycophancy prevention

      - Why departures from safety leadership roles are data points in governance collapse patterns


      Infrastructure Economics vs. Safety

      - The capital-intensive reality: lithography, GPUs, data centers, and energy

      - How financial models lock organizations into velocity-prioritizing postures

      - The mechanism of slow-motion governance collapse


      The Public-Private Governance Gap

      - U.S. Department of Labor's AI Literacy Framework and public-side initiatives

      - The irony of raising the AI literacy floor while the ceiling cracks in frontier labs

      - Where systemic risk accumulates in this disconnect


      The L.E.A.C. Protocol Framework

      Dr. Floyd introduces Human Signal's analytical framework for understanding AI governance failures.


      🔗 Resources & Links


      Referenced Frameworks & Projects

      - L.E.A.C. Protocol Framework: https://youtube.com/shorts/VpDm5LnW20g?si=J6nz3wPQz3c97-1r

      - Project Cerebellum: https://projectcerebellum.com

      - TAIMScore - Structured assessment tool for AI governance evaluation: https://projectcerebellum.com/#taimscore

      - U.S. Department of Labor AI Literacy Framework - Federal guidance on AI skills and safeguards: https://www.dol.gov/sites/dolgov/files/ETA/advisories/TEN/2025/TEN%2007-25/TEN%2007-25%20(complete%20document).pdf


      Key Research Areas (Mrinank Sharma)

      - AI chatbot reality distortion effects

      - AI-assisted bioterrorism defense mechanisms

      - Sycophancy in AI models and powerful user interactions


      Related Reading

      - Anthropic's published safety commitments and responsible scaling policy

      - Analysis of frontier AI lab governance structures

      - Case studies in AI safety leadership turnover


      📥 Episode Audio Files


      Full Episode Segments:

      1. Introduction - 22 seconds
      2. Sharma's Resignation & Governance Gap - 59 seconds
      3. Sharma's Track Record & Organizational Telemetry - 61 seconds
      4. Infrastructure & Financial Pressures - 67 seconds
      5. L.E.A.C. Framework Analysis - 2 minutes 4 seconds
      6. Closing & Sign-off - 29 seconds




      🏷️ Tags


      #AIGovernance #AIEthics #Anthropic #AISafety #TechPolicy #FrontierAI #GovernanceCollapse #AIResearch #MachineLearning #TechAccountability #AIInfrastructure #ProjectCerebellum #LEACProtocol


      © 2026 Human Signal. All rights reserved.

      7 min
    • The Governance Gap - Why AI Contracts Outpace Control Systems
      Feb 14 2026

      The Governance Gap - Why AI Contracts Outpace Control Systems


      EPISODE DESCRIPTION


      Is your leadership signing AI contracts faster than they're building governance?


      That gap is where the lawsuits, scandals, and quiet institutional failures live. It's how you wake up with a 'successful AI pilot' and a mess in risk, workforce, and public trust.


      The Critical Problem:


      Organizations are racing to deploy AI without the control systems, oversight mechanisms, and governance frameworks needed to manage the technology safely. The result? A dangerous gap between what leadership promises and what operations can actually deliver.


      Who This Is For:

      • Mid-career operators inside AI-disrupted institutions
      • Federal IT leaders watching risky deployments unfold
      • University CIOs managing AI rollouts without adequate governance
      • Enterprise strategists caught between innovation pressure and risk reality
      • Policy teams trying to create guardrails after the fact


      The Solution:


      Human Signal is an independent strategy lab that helps institutional operators close the governance gap from the inside. We provide the frameworks, playbooks, and strategic guidance to reshape—or stop—bad AI deployments before they break your institution.


      Key Takeaway:


      You don't have to wait for leadership to figure this out. Mid-career operators have the leverage to intervene, redirect, and demand better governance before the failures compound.


      This isn't about slowing down innovation. It's about surviving it.


      ABOUT DR. TUBOISE FLOYD


      Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.


      He used to be the person quietly fixing other people's broken systems. Now he builds and broadcasts his own.


      SUBSCRIBE & SUPPORT


      Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.


      Support Human Signal:

      Help fuel six months of new episodes, visual briefs, and honest playbooks.

      🔗 https://gofund.me/117dd0d3d


      Every contribution sustains the signal.


      PRODUCTION NOTES


      Host & Producer: Dr. Tuboise Floyd

      Creative Director: Jeremy Jarvis


      Tech Specs:

      Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.


      CONNECT


      • Website: HumanSignal.io
      • LinkedIn: linkedin.com/in/tuboise
      • Email: tuboise@humansignal.io
      • GoFundMe: https://gofund.me/117dd0d3d


      TRANSCRIPT


      Full transcript available upon request at support@humansignal.io


      TAGS/KEYWORDS


      Governance Gap, AI Governance, AI Contracts, Risk Management, Institutional AI, Mid-Career Operators, Federal IT, Enterprise AI, AI Deployment, Control Systems, AI Oversight, Strategic Governance, AI Policy, Institutional Risk


      HASHTAGS


      #GovernanceGap #AIGovernance #AIContracts #RiskManagement #InstitutionalAI #MidCareer #FederalIT #EnterpriseAI #HumanSignal #AIDeployment #StrategicGovernance


      LEGAL


      © 2026 Dr. Tuboise Floyd. All rights reserved.

      Content is part of the Presence Signaling Architecture (PSA™) and L.E.A.C. Protocol™.

      1 min