AI-FLUENT Podcast

By: Ilona Vinogradova

About this audio content

AI-Fluent is my new podcast where I talk with storytellers from around the world about journalism and storytelling in all its shapes and forms, its marriage with AI and other technology, and innovative thinking. Most of my guests are from the Global South - Latin America, Asia, the Middle East, and Africa - so it's a rare opportunity for those of you interested in the subject to hear people with different perspectives, different challenges, and the solutions they have to offer.

Copyright 2024 All rights reserved.

Category: Social Sciences
Episodes
  • Brazil's Aos Fatos' Director of Innovation on Creating Bots that Tackle Misinformation and More Imaginative Ways of Using AI in Journalism
    Mar 10 2025

    In a new episode of AI-FLUENT, I am talking to Bruno Fávero, a journalist who became Director of Innovation at one of Brazil's leading fact-checking websites Aos Fatos.

    They developed their own bots that tackle misinformation, along with tools that not only document digital lies and hate but also show how the "distorted algorithms" of apps and platforms contributed to their spread.

    So, what can we all learn from Aos Fatos' business model with its focus on tech and fact-checking?

    Main Topics We Discussed in This Episode:

    • How to become a Director of Innovation without a tech background
    • Apart from investment in tech, what else contributes to Aos Fatos' success
    • Aos Fatos' business model and how they create their own tech products
    • "We create tools to solve a specific problem, not for the sake of creating a new tool"
    • What the Fatima bot is and how it helps to fight misinformation
    • What kind of relationship Aos Fatos has with its audience
    • How they try to reach out to Gen Z
    • The word "innovation" and how it has become empty
    • A new tool to fact-check live events/debates, etc.
    • Distorted algorithms and Aos Fatos' project called Golpeflix
    • Social media platforms, how they became unhealthy and how journalists can navigate them to distribute quality journalism
    • How the perception of facts and truth has changed in Brazil in recent years
    • How the media industry took people's trust for granted and now needs to earn it by being more transparent and diligent
    • Relationships with Meta and other Big Tech companies: liability, yet necessity? Can these relationships be re-negotiated?
    • How social media has contributed to the loss of trust in professional journalists
    • The biggest challenge Aos Fatos faces as a newsroom and Bruno as a Director of Innovation, and what the solutions to those challenges are
    • The biggest misconception of generative AI in the context of journalism/storytelling
    • Ways to use generative AI more creatively - creation of new user interfaces might be one of them
    • A lifehack from Bruno on how to use smaller generative AI models
    • The future of journalism
    49 min
  • India’s Tattle Co-Founder Tarunima Prabhakar on Future of AI in Addressing Harmful Online Content and Fighting Misogyny with Equitable AI
    Mar 1 2025

    Tattle is one of India's pioneering civic tech organisations using AI to combat online gender-based violence.

    In this new episode of AI-FLUENT, Tattle's co-founder Tarunima Prabhakar shares insights into Tattle's innovative projects, including Uli, which helps women navigate harmful online content, and their Deepfake Analysis Unit deployed during India's recent elections.

    We explore the complex challenges of making technology accessible to less tech-savvy women while balancing relationships with Big Tech platforms and governments.

    Main Topics We Discussed in This Episode:

    • What is the role of civic tech organisations in India?
    • One of Tattle's first projects - Uli - was aimed at helping women deal with harmful and offensive online content. How does Uli work?
    • What about rural, less tech-savvy women? How can they be helped?
    • How does Tattle leverage AI and machine learning to identify and combat abusive or harmful online content? What are the key technical challenges in building such systems?
    • Another of Tattle's projects is the Deepfake Analysis Unit. It was introduced during India's 2024 elections but continues to operate today, in collaboration with fact-checkers and forensic experts. How does it work?
    • How does Tattle work with social media platforms or other online spaces to implement its tools and what are the biggest challenges in getting these platforms to adopt Tattle's solutions?
    • On relationships with Big Tech and how they can be re-imagined/re-negotiated
    • On collusion between Big Tech and governments
    • There's a risk that tech solutions like Uli may only reach a small, tech-savvy subset of women. How does Tattle ensure it doesn't create a bubble that excludes those who need these tools the most?
    • Where do they see the future of AI in addressing harmful online content? Are there emerging technologies or approaches that could revolutionise this space?
    • On the most painful lessons learnt as co-founder of a civic tech organisation
    • Lifehack from Tarunima for those who want to start their own civic tech startup.
    • What are Tarunima's personal criteria for impact and success?
    47 min
  • Rana Arafat on AI Manipulation, Disinformation and Algorithmic Bias during India’s 2024 Elections and the Israel-Gaza War
    Feb 23 2025

    We usually talk about biased data and inaccurate interpretations regarding technology that comes from the East, China for example. Yet we rarely discuss the lack of transparency and the biased data produced by Western tech companies.

    So it was refreshing to have this conversation with Rana Arafat, Assistant Professor in Digital Journalism at City, University of London.

    How are Arab newsrooms, especially in Egypt, Lebanon and the UAE, adapting to generative AI technologies? How do the governments of these countries regulate and control AI technology, and how do Big Tech companies operate in authoritarian countries in the Middle East?

    In this new episode of AI-FLUENT, we also discussed Indian Elections and the Israel-Gaza War through the lens of AI manipulation, disinformation and algorithmic bias.

    Main Topics We Discussed in This Episode:

    • The most surprising discovery for Rana whilst researching AI manipulation, generative disinformation, and algorithmic bias in the Global South
    • How technology in authoritarian countries oppresses people more than empowers them
    • The governments' involvement in regulating and controlling generative AI technology. The Saudi government, for example, has its own chatbot, as does the UAE government. How unbiased is their data?
    • The importance of cross-pollination between journalists, developers, product designers, etc.
    • Rana specifically examined Egyptian, Lebanese and United Arab Emirates newsrooms on three levels: narrative, practice and technological infrastructure. What did she discover about all three levels?
    • How Big Tech companies operate in authoritarian countries in the Middle East
    • Regarding algorithmic bias - how Rana researched it and her most important findings
    • Algorithms as a form of censorship
    • What shadowbanning is and how it was used during the Israel-Palestine war
    • How pro-Palestinian content creators tried to evade social media algorithms
    • The Indian elections and how generative AI technology was used and misused during the 2024 elections
    • How we as a media industry and society can protect ourselves from technology which, as we see, can be used as a weapon of propaganda and misinformation. What are possible solutions?
    • Regarding the teaching of future journalists - what's missing in the current education system? Is it keeping pace with all the rapid changes in the tech and media industry?
    41 min