AI fairness can’t be solved by code alone, it must be negotiated socially

CO-EDP, VisionRI | Updated: 13-10-2025 09:04 IST | Created: 13-10-2025 09:04 IST
Representative Image. Credit: ChatGPT

A new study published in AI & Society challenges the current dominance of computer science in setting the terms of artificial intelligence (AI) fairness. The researchers argue that fairness cannot be reduced to mathematical formulas or algorithmic tweaks. Instead, it should be treated as a negotiated social process that actively involves lay citizens, social scientists, and technical experts on equal footing.

The study, titled “Negotiating AI Fairness: A Call for Rebalancing Power Relations”, explores how the concept of fairness is defined, interpreted, and applied across disciplines and user communities. Drawing on expert interviews, surveys, and co-creation workshops with groups affected by AI systems, the research reveals deep rifts between technical and social understandings of fairness, and proposes a collaborative framework to bridge them.

Why technical fixes fall short

The authors identify the key paradox of AI fairness: while fairness is a moral and social concern, the field is currently governed by technical metrics. These algorithmic definitions, such as group fairness, individual fairness, and counterfactual fairness, measure mathematical equality but often ignore the social contexts in which AI operates.
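For reference, these metrics are usually written as formal constraints on a predictor. The formulations below are standard ones from the fairness literature, not the study's own notation: group fairness asks for equal treatment across groups defined by a sensitive attribute, individual fairness for similar outcomes for similar individuals, and counterfactual fairness for decisions that would not change if the sensitive attribute had been different.

```latex
% Standard formulations from the fairness literature (illustrative; not the study's notation).

% Group fairness (demographic parity): equal positive-decision rates across groups a, b
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b)

% Individual fairness: similar individuals x, x' receive similar predictions
d_Y\big(f(x), f(x')\big) \le L \, d_X(x, x')

% Counterfactual fairness: the decision distribution is unchanged under an
% intervention that flips the sensitive attribute from a to a'
P(\hat{Y}_{A \leftarrow a} = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'} = y \mid X = x, A = a)
```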

Computer scientists, the study finds, tend to frame fairness as a problem of optimization, solvable through data balancing or bias mitigation tools such as AIF360 and Fairlearn. Yet, as the researchers emphasize, a technically “unbiased” system can still reproduce unfair outcomes when the data or design reflect structural inequalities. For example, hiring or credit-scoring algorithms may comply with fairness metrics while perpetuating discrimination embedded in historical datasets.
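To make that optimization framing concrete, here is a minimal sketch of the kind of metric audit such toolkits support, using Fairlearn's MetricFrame to break standard metrics down by a sensitive attribute. The outcomes, predictions, and group labels are invented placeholders, not data from the study.

```python
# Minimal sketch of a metric-based fairness audit with Fairlearn.
# All data and group labels below are hypothetical placeholders.
from fairlearn.metrics import MetricFrame, selection_rate, true_positive_rate
from sklearn.metrics import accuracy_score

# y_true: observed outcomes, y_pred: model decisions, group: sensitive attribute
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

audit = MetricFrame(
    metrics={
        "accuracy": accuracy_score,
        "selection_rate": selection_rate,          # share of positive decisions
        "true_positive_rate": true_positive_rate,  # share of eligible cases accepted
    },
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(audit.by_group)      # per-group values of each metric
print(audit.difference())  # largest between-group gap per metric
```

A near-zero between-group gap on numbers like these is exactly the kind of result the authors caution against over-reading: it says nothing about whether the historical labels themselves encode discrimination.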

Sociologists and ethicists, by contrast, interpret fairness as a contextual and relational concept, one that depends on who benefits, who bears the risks, and who has the power to define what counts as fair. According to the study, this social perspective is largely sidelined in mainstream AI development, where fairness is operationalized as a statistical property rather than a lived experience.

Through 29 interviews with AI professionals and academics, the researchers found that social scientists and technical experts often speak past one another. While both camps agree that fairness is important, their epistemological starting points differ: one seeks abstraction and universality, while the other values specificity and situated knowledge. The result is a communication gap that hinders effective collaboration and narrows the moral scope of AI ethics.

What lay users want from "fair" AI

A distinctive strength of the study is its inclusion of lay perspectives, drawn from six participatory workshops with groups at risk of algorithmic discrimination, among them migrants, LGBTQI+ individuals, women in precarious work, and minority researchers. These sessions revealed how ordinary users understand and experience fairness in daily interactions with AI-driven systems, such as automated identity verification, recruitment platforms, and financial screening tools.

Participants consistently viewed fairness not as an abstract property but as a matter of agency, transparency, and recourse. They wanted clear explanations for automated decisions, visible human oversight, and accessible ways to appeal or correct errors. Many expressed unease about data misuse and the opacity of AI systems.

Interestingly, while participants recognized the benefits of automation, such as speed, efficiency, and fraud prevention, they also saw fairness as relational and emotional. For example, in identity verification contexts, users feared being misclassified due to lighting, camera quality, or physical appearance, which could translate into social exclusion. These fears were strongest among vulnerable groups who already experience systemic bias.

The findings reveal that trust in AI fairness depends not only on technical accuracy but also on procedural justice: whether individuals feel seen, heard, and able to contest outcomes. The authors argue that including these voices early in AI design can surface hidden harms and make fairness frameworks more grounded and legitimate.

Toward a negotiated model of fairness

The study proposes a new model for negotiating fairness across three fronts: between technical and social disciplines, between experts and lay publics, and between conflicting values within AI systems themselves.

First, the authors call for shared educational infrastructures that enable computer scientists and social scientists to learn each other’s languages. This could include interdisciplinary courses, joint research labs, and “boundary objects” such as co-created fairness metrics that embody both technical and social insights.

Second, fairness should be treated as a co-designed, iterative process, not a static checklist. The authors describe participatory exercises in which stakeholders identify which types of errors matter most for fairness in a given context. For example, in financial applications, participants prioritized equalizing true-positive rates (accurately identifying eligible users) over raw acceptance rates. This allows fairness to be tailored to social priorities rather than imposed from outside.
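To illustrate the distinction participants drew, the sketch below uses invented numbers (not workshop data) to show how two groups can have identical acceptance rates while eligible applicants in one group are accepted far less often.

```python
# Illustrative comparison of two fairness targets discussed above.
# All numbers are invented for illustration; they are not from the study.
def acceptance_rate(y_pred):
    """Share of applicants the model accepts, regardless of eligibility."""
    return sum(y_pred) / len(y_pred)

def true_positive_rate(y_true, y_pred):
    """Share of genuinely eligible applicants the model accepts."""
    eligible_preds = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(eligible_preds) / len(eligible_preds)

# Hypothetical credit decisions for two groups (1 = eligible / accepted)
groups = {
    "group_A": {"y_true": [1, 1, 1, 0, 0], "y_pred": [1, 1, 1, 0, 0]},
    "group_B": {"y_true": [1, 1, 0, 0, 1], "y_pred": [1, 0, 1, 1, 0]},
}

for name, d in groups.items():
    print(f"{name}: acceptance={acceptance_rate(d['y_pred']):.2f}, "
          f"TPR={true_positive_rate(d['y_true'], d['y_pred']):.2f}")

# Both groups accept 60% of applicants, yet only one in three eligible
# group_B applicants is accepted versus all eligible group_A applicants.
# That gap, not the raw acceptance rate, is what participants prioritized.
```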

Third, the paper stresses the need for power redistribution in the AI ecosystem. Fairness debates, the authors argue, often remain confined to elite spaces such as labs, conferences, and policy boards, where affected communities have limited influence. To correct this imbalance, they recommend embedding participatory governance mechanisms directly into AI workflows, such as community review panels, fairness monitoring boards, and transparent model reporting practices.

The authors also highlight a crucial ethical point: sometimes the fairest solution is not to deploy AI at all. They urge developers and policymakers to consider whether automation is necessary in sensitive domains and to acknowledge cases where human discretion is indispensable.

Fairness in AI cannot be “solved” through code alone, the study asserts. Instead, it must be co-produced by those who build systems, those who regulate them, and those who live with their consequences. 

  • First published in: Devdiscourse