
ibl.ai

By: ibl.ai

About this listen

ibl.ai is a generative AI education platform based in NYC. This podcast, curated by its CTO, Miguel Amigot, focuses on high-impact trends and reports about AI. Copyright 2024. All rights reserved.
    Episodes
    • McKinsey: Seizing the Agentic AI Advantage – A CEO Playbook
      Jun 20 2025

      Summary of https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/seizing%20the%20agentic%20ai%20advantage/seizing-the-agentic-ai-advantage.pdf

      The McKinsey & Company report "Seizing the Agentic AI Advantage" examines the current "gen AI paradox," in which widespread adoption of generative AI has produced minimal organizational impact.

      The authors explain that AI agents, which are autonomous and goal-driven, can overcome this paradox by transforming complex business processes beyond simple task automation. The report outlines a strategic shift required for CEOs to implement agentic AI effectively, emphasizing the need to move from scattered experiments to integrated, large-scale transformations.

      This includes reimagining workflows around agents, establishing a new agentic AI mesh architecture, and addressing the human and governance challenges associated with deploying autonomous AI. Ultimately, the text argues that successful adoption of agentic AI will redefine how organizations operate, compete, and create value.

      • The Generative AI Paradox: Despite widespread adoption, nearly eight in ten companies using generative AI (gen AI) report no significant bottom-line impact. This "gen AI paradox" stems from an imbalance where easily scaled "horizontal" enterprise-wide tools (like copilots and chatbots) provide diffuse, hard-to-measure gains, while more transformative "vertical" (function-specific) use cases remain largely stuck in pilot mode.
      • Agentic AI as the Catalyst: AI agents offer a way to overcome this paradox by automating complex business processes. Unlike reactive gen AI tools, agents combine autonomy, planning, memory, and integration to become proactive, goal-driven virtual collaborators, unlocking potential far beyond mere efficiency gains.
      • Reinventing Workflows is Crucial: Realizing the full potential of agentic AI requires more than simply plugging agents into existing workflows; it necessitates reimagining and redesigning those workflows from the ground up, with agents at the core. This involves reordering steps, reallocating responsibilities between humans and agents, and leveraging agents' strengths like parallel execution and real-time adaptability for transformative impact.
      • New Architecture and Enablers for Scale: To effectively scale agents, organizations need a new AI architecture paradigm called the "agentic AI mesh". This composable, distributed, and vendor-agnostic framework enables agents to collaborate securely across systems while managing risks like uncontrolled autonomy and sprawl. Additionally, scaling requires critical enablers such as upskilling the workforce, adapting technology infrastructure, accelerating data productization, and deploying agent-specific governance mechanisms.
      • The CEO's Mandate and Human Challenge: The primary challenge in scaling agentic AI is not technical but human: earning trust, driving adoption, and establishing proper governance for autonomous systems. CEOs must lead this transformation by concluding the experimentation phase, realigning AI priorities with strategic programs, redesigning AI governance, and launching high-impact agent-driven projects to redefine how their organizations operate.
      25 min
    • LEGO/The Alan Turing Institute: Understanding the Impacts of Generative AI Use on Children
      Jun 19 2025

      Summary of https://www.turing.ac.uk/sites/default/files/2025-05/combined_briefing_-_understanding_the_impacts_of_generative_ai_use_on_children.pdf

      The briefing presents the findings of a research project on the impacts of generative AI on children, combining quantitative survey data from children, parents, and teachers with qualitative insights gathered from school workshops.

      The research, guided by a framework focusing on children's wellbeing, explores how children use generative AI for activities like creativity and learning. Key findings indicate that nearly a quarter of children aged 8-12 have used generative AI, primarily ChatGPT, with usage varying by factors such as age, gender, and educational needs.

      The document also highlights parent, carer, and teacher concerns regarding potential exposure to inappropriate content and the impact on critical thinking skills, while noting that teachers are generally more optimistic about their own use of the technology than its use by students.

      The research concludes with recommendations for policymakers and industry to promote child-centered AI development, improve AI literacy, address bias, ensure equitable access, and mitigate environmental impacts.

      • Despite a general lack of research specifically focused on the impacts of generative AI on children, and the fact that these tools have often not been developed with children's interests, needs, or rights in mind, a significant number of children aged 8-12 are already using generative AI, with ChatGPT being the most frequently used tool.
      • The patterns of generative AI use among children vary notably based on age, gender, and additional learning needs. Furthermore, there is a clear disparity in usage rates between children in private schools (52% usage) and those in state schools (18% usage), indicating a potential widening of the digital divide.
      • There are several significant concerns shared by children, parents, carers, and teachers regarding generative AI, including the risk of children being exposed to inappropriate or inaccurate information (cited by 82% and 77% of parents, respectively), worries about the negative impact on children's critical thinking skills (shared by 76% of parents/carers and 72% of teachers), concerns about environmental impacts, potential bias in outputs, and teachers reporting students submitting AI-generated work as their own.
      • Despite concerns, the research highlights potential benefits of generative AI, particularly its potential to support children with additional learning needs, an area children and teachers both support for future development. Teachers who use generative AI also report positive impacts on their own work, including increased productivity and improved performance on teaching tasks.
      • To address the risks and realize the benefits, the sources emphasize the critical need for child-centred AI design, meaningful participation of children and young people in decision-making processes, improving AI literacy for children, parents, and teachers, and ensuring equitable access to both the tools and educational resources about them.
      22 min
    • OpenAI: Disrupting Malicious Uses of AI – June 2025
      Jun 19 2025

      Summary of https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf

      A report detailing OpenAI's efforts to identify and counter abusive activities that leverage its AI models. It presents ten distinct case studies of disrupted operations, including deceptive employment schemes, covert influence operations, cyberattacks, and scams.

      The report highlights how threat actors, often originating from China, Russia, Iran, Cambodia, and the Philippines, utilized AI for tasks ranging from generating social media content and deceptive resumes to developing malware and social engineering tactics.

      OpenAI emphasizes that their use of AI to detect these activities has paradoxically increased visibility into malicious workflows, allowing for quicker disruption and sharing of insights with industry partners.

      • OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity by deploying AI tools to solve difficult problems and defend against various abuses. This includes preventing AI use by authoritarian regimes, and combating covert influence operations (IO), child exploitation, scams, spam, and malicious cyber activity.
      • OpenAI has successfully detected, disrupted, and exposed a range of abusive activities by leveraging AI as a force multiplier for their expert investigative teams. These malicious uses of AI include social engineering, cyber espionage, deceptive employment schemes (like the "IT Workers" case), covert influence operations (such as "Sneer Review," "High Five," "VAGue Focus," "Helgoland Bite," "Uncle Spam," and "STORM-2035"), cyber operations ("ScopeCreep," "Vixen," and "Keyhole Panda"), and scams (like "Wrong Number").
      • These malicious operations originated from various global locations, demonstrating a widespread threatscape. Four of the ten cases in the report likely originated from China, spanning social engineering, covert influence operations, and cyber threats. Other disruptions involved activities from Cambodia (task scam), the Philippines (comment spamming), and covert influence attempts potentially linked with Russia and Iran. Additionally, deceptive employment schemes showed behaviors consistent with North Korea (DPRK)-linked activity.
      • Threat actors utilized AI to evolve and scale their operations, yet this reliance also increased their exposure and aided in their disruption. For example, AI was used for automating resume creation, generating social media content, translating messages for social engineering, and developing malware. Paradoxically, this integration of AI into their workflows provided OpenAI with insights, enabling quicker identification and disruption of these threats.
      • AI investigations are an evolving discipline, and ongoing disruptions help refine defenses and contribute to a broader understanding of the AI threatscape. OpenAI emphasizes that each disrupted operation improves their understanding of how threat actors abuse their models, allowing them to refine their defenses and share findings with industry peers and authorities to strengthen collective defenses across the internet.
      24 min
