Episodes

  • Episode 20- From K-12 to Harvard: Bridging the AI Literacy Gap with Dr. Zahra Ahmed
    Apr 7 2026

    In this episode, Bob and Jess sit down with Zahra Ahmed to explore the fluid boundary between K-12 education and the rigorous expectations of higher education in the age of generative AI. Zahra brings a unique perspective, having moved from school programs at children's museums to training faculty at Harvard.

    The conversation moves beyond the technical mechanics of AI. It focuses on the human elements that technology cannot replicate: social-emotional learning, restorative practices, and the "durable" skills of judgment and critique. She explains why we must treat AI as a "makerspace" for tinkering rather than a repository of answers, and how institutional "walled gardens" can help close the emerging digital divide.

    Key Discussion Points

    Metacognition and the Baseline Shift

    The entry point for college students is shifting. It is no longer enough to arrive with information; students must arrive with an awareness of their own thinking.

    The Metacognitive Question: Students should ask, "What is AI doing for me, and what am I still responsible for?"

    AI as a Thinking Coach: Moving from "Recall" to "Refine," using AI to fill gaps and expand on original thoughts rather than replacing them.

    Durable vs. Vulnerable Tasks

    How do we protect learning that requires human reasoning?

    Vulnerable Tasks: Processes or formulas that AI can automate without deeper understanding.

    Durable Tasks: Human judgment, transfer of knowledge, and original critique.

    The Shift in Assessment: Harvard faculty are beginning to grade how students explain and critique AI-generated ideas, rather than the raw output itself.

    The Digital Divide 2.0

    Equity is no longer just about having a laptop; it's about the quality of the intelligence you can access.

    Premium vs. Free: The widening gap between students using advanced paid models and those on inferior versions.

    The Walled Garden: Harvard's "AI Sandbox," a secure internal platform that provides equitable access to faculty and students while maintaining data privacy.

    Upskilling through Modeling and "Play"

    Resistance to new technology often stems from a lack of practical exposure.

    The 7-Day Rule: Professional development only sticks if it is applied to a real task (like syllabus design) within a week.

    Live Tinkering: The most effective faculty workshops involve live modeling—demonstrating the "messy" process of prompting and refining in real-time.

    38 min
  • Episode 19- Context, Presence, and the Messy Work of Learning with Dr. William Rice
    Mar 31 2026

    Dr. William Rice steps into the Executive Director role at ACES this summer. His philosophy is clear: leadership is an embodied practice requiring physical presence and proximity. In this episode, we discuss protecting the human element of education as technology grows increasingly sophisticated.

    The Meaning of Models

    Drawing from his background as a chemical engineer, Dr. Rice views math as building models to understand the world, rather than procedural calculation. Machines handle the heavy computation now. If we simply reward students for playing in a procedural sandbox, we leave them unequipped for a reality where human context separates meaningful work from automated noise.

    Observation in Special Education

    Technology offers a unique kind of support in special education by tracking massive volumes of daily observational data. It helps identify long-term trends a busy educator might miss. Still, a machine cannot replace the physical intuition and empathy of a teacher interpreting subtle, non-verbal cues.

    Navigating the Software Flood

    School districts face a constant barrage of new applications. Dr. Rice suggests a deliberate pause to avoid tool creep. Evaluating new technology must prioritize compliance and rigorous alignment with the agency's mission. Foundational AI literacy matters more than a fragmented landscape of apps; we must understand how these systems function and where their biases lie.

    A Question for Reflection: When evaluating the digital tools in your own work, how are you ensuring the technology serves the human context rather than replacing the productive struggle of learning?

    39 min
  • Episode 18- The AI Gap in Education- Dr. Jess White, Tim Howes
    Mar 22 2026

    Guests Dr. Jess White and Tim Howes join Bob Hutchins for a conversation about what's actually happening with AI in schools right now.

    In This Episode

    The three of us recently spoke to a group of student teachers at Sacred Heart University in Connecticut. Around 50-60 seniors, all preparing to enter classrooms. When asked if they'd received any formal AI training or integration frameworks, not a single hand went up.

    That moment set the tone for everything we talked about in this episode.

    We get into why the gap exists, what it looks like in practice, and what educators can do about it now.

    What We Covered

    The AI training gap

    Future educators are entering classrooms without formal AI preparation. The training that does exist tends to stay at the theory level. It doesn't go deep enough to be useful in a real classroom.

    The "cheating" question

    One student teacher asked how to handle job interviews when a district might view AI use negatively. That question told us a lot. Many educators want to use these tools but feel caught between what's practical and what's politically safe.

    AI across grade levels

    Jess breaks down what AI literacy actually looks like from kindergarten through high school.

    Tool breakdown: LLMs

    Each of us shared where we land on the ChatGPT vs. Claude vs. Gemini conversation, and more importantly, why different tools serve different purposes.

    Vibe coding and what's coming

    The ability for everyday teachers to build their own tools is closer than most people think. That changes the economics of educational technology significantly.

    The CRAFT Prompting Framework

    Jess walks through the ACES Center for AI prompting model:

    C = Context

    R = Role

    A = Audience

    F = Format

    T = Task

    Resources

    The Human Loop Newsletter: weekly articles, prompts, and practical AI resources. Link below.

    Connect

    Subscribe to The Human Loop newsletter for weekly AI literacy content, reading recommendations, and prompts you can use right away.

    https://www.acespdsi.org/contact

    32 min
  • Episode 17- The Neuroscience of Learning with Myriam Da Silva, CEO of CheckIT Learning
    Jan 26 2026
    Episode Summary

    In this episode of Future Proof Education, host Bob Hutchins sits down with Myriam Da Silva, the CEO of CheckIT Learning and the architect behind the AI neuro-mentor known as Cleo. Myriam shares her journey from being a struggling student who found her own path to learning, to becoming an educational leader focused on the intersection of neuroscience and technology. The conversation explores how we can move beyond traditional curriculum completion to focus on the underlying mechanics of how the brain actually processes information.

    https://www.checkitlearning.com/

    29 min
  • Episode 16- Beyond the Hard Skills: Building Human Resilience in the Age of AI- Drew Brown
    Dec 19 2025
    When technical tasks like writing and coding are increasingly handled by machines, what remains for the human professional? Drew Brown, who leads the Strategic Communication and Innovation program at Texas Tech, joins Bob Hutchins to discuss why "people skills" are the new hard skills.

    Key Discussion Points

    The Market's New Demand: Why industry leaders are prioritizing conflict resolution, priority sorting, and emotional intelligence over software proficiency.

    The "Clunky" Nature of Change: How institutional evolution is rarely gradual, usually requiring a crisis or a significant shift in data to force a new paradigm.

    AI in the Trades: A look at how entrepreneurs in blue-collar industries are using AI to build training databases and improve field operations.

    Killing the Written Exam: Drew's innovative approach to "oral defenses" in crisis communication, where students must defend their strategies in real-time rather than submitting a paper.

    The Difference Between Output and Outcome: Shifting the metric of success from "how fast can we do a task" to "how much have we actually grown as leaders."

    42 min
  • Episode 15- Season 2- Academic Integrity in the Age of AI- Tim Howes
    Oct 22 2025

    Season 2 opens with a raw and timely conversation between Bob Hutchins and Tim Howes about one of education's biggest challenges: academic integrity in the age of generative AI.

    As schools race to adapt, many are responding with surveillance and bans, but Bob and Tim ask a deeper question: what if the problem isn't AI, but the system itself?

    They explore how generative tools are not creating dishonesty but revealing cracks in outdated assessment models, why detection software erodes trust, and how educators can rethink learning through transparency, reflection, and prompt literacy instead of punishment.

    🧠 Key Themes

    Redefining Academic Integrity:
    Integrity is no longer about "doing your own work," but about demonstrating your own judgment and transparency in using AI responsibly.

    Policing vs. Trust:
    Detection tools and AI surveillance create a culture of suspicion. True integrity grows from relationships, mentorship, and open dialogue.

    Systemic Rot in Education:
    AI didn't cause dishonesty—it exposed how transactional learning and grade-focused systems fail to nurture genuine understanding.

    Prompts as the New 'Show Your Work':
    Instead of grading AI outputs, teachers can assess the quality of a student's prompts, which reveal depth of knowledge and critical thinking.

    Inequity and the AI Divide:
    Well-resourced schools teach AI fluency. Others teach avoidance. Without intervention, AI will widen the digital divide between students who learn to use it creatively and those who are punished for it.

    💬 Quotes from the Episode

    "Detection doesn't build honesty. Relationships do." – Tim Howes

    "If AI can do the task, maybe it's the wrong task." – Tim Howes

    "Integrity wasn't working before AI. Generative tools just made the cracks visible." – Bob Hutchins

    "Grading the prompt, not the output, reveals the student's understanding." – Bob Hutchins

    40 min
  • Episode 14- Teaching Humans, Not Machines
    Sep 4 2025

    In this special episode, we pause our usual conversations with educators and leaders to reflect on an essay I wrote, Teaching Humans, Not Machines. For decades, schools trained students to follow procedures, memorize answers, and perform for tests. The irony is that artificial intelligence now does those things better than we ever could.

    We look at how systems shaped students to value compliance over curiosity, performance over presence, recall over reasoning—and how that left us vulnerable to disruption. But we also ask a deeper question: what capacities remain irreducibly human?

    From attention and intellectual courage to real curiosity and creativity, these are the skills that can't be automated. They're also the very foundation of citizenship and meaningful life. If we want education to serve people instead of machines, we need to reclaim and cultivate them.

    The article is here at https://bobhutchins.substack.com/p/teaching-humans-not-machines

    10 min
  • Episode 13- The Must-Teach List: Building AI Literacy Into Every Subject with Tim Howes
    Aug 11 2025

    📝 Episode Summary:
    In this back-to-school conversation, Bob Hutchins sits down with Tim Howes to explore how schools can embed AI literacy into the core of teaching and learning—without losing the skills, creativity, and human connection that make education meaningful.

    Together, they tackle questions on what belongs on a "must teach" list when AI can already perform so many traditional academic tasks, and how to ensure AI literacy isn't siloed off as a tech elective. They discuss the importance of transferable skills like critical thinking, collaboration, and adaptability, as well as emerging competencies such as prompt engineering, evaluating AI outputs, and blending human judgment with AI-generated insights.

    The discussion also dives into the balance between AI-assisted learning and friction-rich, manual experiences that build resilience, the role regional service centers like ACES can play in shaping state and national AI priorities, and how today's first-graders might graduate looking very different from today's seniors.

    🔍 Topics Covered:

    What to prioritize on the "must teach" list in an AI era

    Why AI literacy should live alongside—and within—core subjects

    Skills that will gain importance because of AI, not in spite of it

    Prompt engineering as a foundational literacy skill

    Evaluating and interpreting AI outputs with a critical eye

    Balancing AI-assisted learning with productive struggle

    The role of service centers in piloting curriculum and influencing policy

    A vision for the AI-literate graduate of the future

    🧠 Key Quotes:

    "AI shouldn't be a siloed tech skill—it's an appliance we'll use in every discipline."

    "Prompting well is just asking better questions—and that's a skill we've always needed."

    "We can use AI to create more manual learning experiences, not fewer."

    📌 Who It's For:
    Educators, administrators, policymakers, curriculum designers, and anyone shaping the future of learning in the age of AI.

    🔗 Resources & Mentions:

    ACES Innovation and AI initiatives

    AI literacy integration models for K–12

    Federal AI literacy funding announcements

    Prompt engineering strategies for educators

    32 min