Ep. 11: Rethinking responsible AI through human rights


About this audio content

This episode looks at a paper that makes a quiet but important shift in how responsible AI is framed. The authors argue that instead of building ethical principles from scratch, we should start from the human rights frameworks that already exist. These frameworks are familiar in law, politics, and civil society, but less so in AI design.

The paper suggests that using human rights as a reference point helps clarify what's at stake. It draws attention to whose interests are being protected, which harms are made visible, and where accountability sits when systems cause harm. Rather than focusing on technical metrics, the rights framing asks how AI systems interact with people's ability to speak, act, or be heard, and how those interactions are shaped by context, culture, and power.

In this episode, we explore how that shift changes what is noticed, who is included, and how responsibility is structured. We also reflect on where behavioural science intersects with these ideas, especially in shaping attention, perceived legitimacy, and the ways people interpret fairness in system-driven environments.

The authors bring a wide range of experience from both inside and outside the tech industry. Vinodkumar Prabhakaran (Google Research), Margaret Mitchell (then at Hugging Face), Timnit Gebru (founder of DAIR, the Distributed AI Research Institute), and Iason Gabriel (a research scientist at DeepMind) have each worked at the intersection of AI ethics, governance, and civil rights.

Prabhakaran, V., Mitchell, M., Gebru, T., & Gabriel, I. (2022). A human rights-based approach to responsible AI. arXiv preprint arXiv:2210.02667.

Companion reflection: A framework that reframes the ethical question

This paper proposes something deceptively simple: that AI ethics would benefit from rooting its values not in abstract principles or technical ideals, but in the already-contested terrain of human rights. Written by researchers across Google Research, DAIR, DeepMind, and Hugging Face, it reframes responsible AI not as a matter of value specification but of rights protection. The argument is that systems should be assessed not by what they claim to optimise, but by what they risk displacing, especially in relation to the people most likely to be harmed.

At its core, this isn't just a legal or philosophical shift. It's a cognitive one. The paper asks us to move from thinking about system properties to recognising patterns of harm, redirecting ethical attention from the internal logic of the model to the external social conditions it reshapes. That move is not only moral, but psychological. It changes what is perceived, who is legible, and which consequences come into view.

From system metrics to harm perception

The authors are asking for a shift in how we look at harm. Much of AI ethics has focused on terms like fairness or robustness - ideas that tend to be defined inside the system, based on what's technically measurable. When ethical thinking starts there, it often stays close to the model, and what gets missed are the wider consequences for the people on the receiving end.

The rights framing starts from a different point. It begins with what people are entitled to: the ability to speak, to act, or to be included in decisions that affect them.
Framing things this way brings attention back to the external context: who is affected, under what conditions, and with what constraints on their ability to respond.

There's also a cultural dimension: rights frameworks have been shaped through decades of debate, across legal systems and political movements, which makes them more than just a checklist of protections. They carry assumptions about whose claims are recognised, and on what terms. When AI systems developed in one cultural setting are used globally, those underlying assumptions become crucially important. While the rights lens doesn't resolve the tension, it does help to make it visible.

Key ideas

What we pay attention to depends on how harm is framed

When we evaluate systems by looking at whether they're fair or transparent, we're often relying on internal criteria - does the system follow its own rules, or meet a technical definition? But harm isn't always visible from that vantage point. A human rights framing draws the lens outward, toward the people affected and the conditions that shape their vulnerability. It shifts the question from what the system is doing to what it is enabling or making harder to contest.

Being included changes how people experience fairness

The paper points out that many AI systems are built without meaningful input from the people they affect. A rights-based approach treats that as more than a design flaw. It recognises that participation itself shapes how people judge legitimacy. When people are excluded from decision-making, they are more likely to see the system as arbitrary or imposed, even if the outcomes look defensible on paper. It's not just what ...