Global push for AI in healthcare risks deepening inequality
Artificial intelligence (AI) technologies are being rapidly embedded into healthcare systems worldwide. But new research warns that this technological expansion risks reinforcing global inequalities and reshaping care in ways that marginalize local knowledge, especially in low-resource settings.
A new study, "Algorithmic Ethics and Healthcare Pluralism: Rethinking Care Between Automation and Global Inequality," published in AI & Society, examines these concerns. The research argues that algorithmic healthcare systems should be understood not just as technical tools, but as normative infrastructures that encode values, priorities, and assumptions about what care should be. When these systems are transferred across vastly different healthcare environments, they can unintentionally displace established practices and deepen existing power imbalances.
Algorithms as hidden decision-makers in modern healthcare
Much of the public debate around AI in healthcare focuses on performance metrics such as accuracy, efficiency, and cost reduction. According to the study, this framing obscures a more fundamental issue: algorithmic systems actively shape how medical decisions are made, which forms of knowledge are considered legitimate, and how authority is distributed between clinicians, patients, and machines.
The study introduces a key distinction between immanent normativity and deliberative normativity. Immanent normativity refers to the values embedded directly into algorithmic systems through data selection, model design, and optimization goals. These choices determine what outcomes are prioritized and which variables matter. Deliberative normativity, by contrast, emerges through social processes of reasoning, contestation, and justification among clinicians, patients, and communities.
The study finds that most healthcare AI systems privilege immanent normativity at the expense of deliberative processes. Algorithmic recommendations are often treated as authoritative by default, particularly in highly regulated healthcare environments where standardization and liability concerns encourage adherence to automated outputs. This dynamic can narrow the space for clinical judgment, reduce interpretive flexibility, and shift responsibility from human actors to technical systems.
In the Global North, these pressures are reinforced by dense regulatory frameworks that emphasize documentation, traceability, and procedural compliance. While such regulations aim to promote trustworthy AI, the study argues they often overlook how algorithmic systems reshape the moral foundations of care. Ethical oversight becomes focused on compliance rather than on how care is experienced, negotiated, and justified in practice.
The research also highlights that AI adoption in healthcare is far from seamless. Ethnographic studies cited in the paper show that algorithmic systems are often integrated unevenly, generating friction with existing workflows and requiring significant informal labor by clinicians to make them usable. These tensions reveal that ethical challenges arise not only from design flaws, but from how technologies interact with real-world clinical environments.
When AI meets low-resource healthcare systems
The ethical stakes become even clearer when algorithmic systems are introduced into low-resource or postcolonial settings. The study’s case analysis of Benin illustrates how healthcare AI designed in Global North contexts can clash with local infrastructures, clinical practices, and moral economies of care.
Healthcare facilities in Benin often operate under conditions of chronic scarcity, including limited equipment, unreliable electricity, fragmented data systems, and heavy reliance on paper records. In this context, algorithmic tools that assume stable digital infrastructures and standardized datasets frequently fail to align with daily clinical realities. The problem is not simply technical incompatibility, but epistemic mismatch.
The study’s fieldwork documents how care in Benin relies heavily on relational knowledge, tacit expertise, and informal clinical networks. Nurses, technicians, and community health workers routinely draw on personal experience, peer consultation, and patient narratives to make decisions under uncertainty. These practices form a vernacular infrastructure of care that compensates for systemic gaps.
When externally developed algorithmic tools are introduced without adaptation, they can displace these locally grounded forms of knowledge. Risk scores, diagnostic thresholds, and classification models trained on distant populations may overlook social, environmental, and relational factors that are critical to understanding illness in context. As a result, algorithmic recommendations may appear normatively incomplete or irrelevant, even when technically accurate.
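The point about models trained on distant populations can be made concrete with a small, purely illustrative sketch. In the synthetic Python example below (not drawn from the study; the variables, prevalences, and coefficients are all hypothetical), a risk model is fitted on a "source" population and then applied to a "target" population where an unmeasured, context-specific factor drives much of the risk.

```python
# Illustrative sketch only (not the study's method): a logistic risk model
# trained on one synthetic "source" population is applied to a "target"
# population with different baseline characteristics. All variable names,
# prevalences, and coefficients are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)

def make_population(n, age_mean, anemia_rate, malaria_rate):
    """Generate synthetic patients; outcome risk depends partly on a factor
    (malaria exposure) whose prevalence differs sharply between settings."""
    age = rng.normal(age_mean, 12, n)
    anemia = rng.binomial(1, anemia_rate, n)
    malaria = rng.binomial(1, malaria_rate, n)
    logit = -4 + 0.03 * age + 1.2 * anemia + 1.5 * malaria
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    # The deployed model only "sees" age and anemia status; malaria exposure
    # is an unmeasured, context-specific driver of risk.
    X = np.column_stack([age, anemia])
    return X, y

# Source population: model development site with low malaria prevalence.
X_src, y_src = make_population(5000, age_mean=55, anemia_rate=0.10, malaria_rate=0.02)
# Target population: younger patients, higher anemia and malaria burden.
X_tgt, y_tgt = make_population(5000, age_mean=35, anemia_rate=0.40, malaria_rate=0.35)

model = LogisticRegression().fit(X_src, y_src)

for name, X, y in [("source", X_src, y_src), ("target", X_tgt, y_tgt)]:
    p = model.predict_proba(X)[:, 1]
    print(f"{name}: AUC={roc_auc_score(y, p):.2f}, "
          f"Brier={brier_score_loss(y, p):.3f}, "
          f"mean predicted risk={p.mean():.2f}, observed rate={y.mean():.2f}")
```

With these made-up numbers, the model may still rank patients passably, but it systematically underestimates absolute risk in the target population because a locally decisive factor was never part of its feature set, which is one concrete way a recommendation can look technically sound yet remain normatively incomplete.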
The study notes that clinicians in Benin often engage in selective non-use of algorithmic tools. Rather than rejecting technology outright, they suspend or override automated recommendations when they conflict with contextual judgment. This non-use functions as a form of ethical and epistemic resistance, signaling that legitimacy in care depends on responsiveness to lived realities rather than on computational authority alone.
The research situates these findings within broader critiques of data colonialism and digital paternalism. Algorithmic systems introduced through donor programs or global health initiatives can reproduce patterns of dependency, where local healthcare systems are expected to conform to external metrics and categories. Even when systems are not fully adopted, their presence reshapes expectations, accountability structures, and funding priorities.
In this sense, the absence or partial adoption of AI in low-resource settings should not be seen simply as a deficit. The study argues that these gaps often preserve space for plural forms of care that resist reduction to standardized models. Ethical governance, therefore, must account for what is lost when automation overrides relational and community-based practices.
Rethinking ethical AI through subsidiarity and solidarity
Rather than proposing a new universal checklist for ethical AI, the study advances a governance framework grounded in two principles derived from its comparative analysis: technological subsidiarity and epistemic solidarity. Together, these principles aim to realign algorithmic systems with the moral and social foundations of care across diverse contexts.
Technological subsidiarity holds that algorithmic systems should support, not replace, human judgment. Decisions should remain anchored where contextual understanding and relational expertise are strongest. In practice, this means designing AI tools that assist clinicians without becoming default authorities, preserving space for interpretation, disagreement, and moral reasoning.
In highly regulated healthcare systems, subsidiarity helps counter digital paternalism, where algorithmic outputs gain authority due to institutional pressures rather than demonstrated contextual fit. In low-resource settings, it guards against the imposition of external decision models that fail to reflect local conditions. The principle does not oppose innovation but demands that automation remain accountable to human judgment.
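What technological subsidiarity might look like in software terms can also be sketched, again purely as an illustration rather than anything specified in the study. In the Python fragment below, the algorithmic recommendation is advisory by construction: the clinician's decision is always the one recorded as final, and overrides are stored together with the clinician's stated reasoning so that disagreement stays visible for later deliberation. All class and field names are hypothetical.

```python
# A minimal sketch (not from the study) of subsidiarity in software terms:
# the algorithm proposes, the clinician disposes, and every override is
# recorded with its reason so disagreement remains visible for review
# rather than being erased or treated as non-compliance.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    patient_id: str
    suggested_action: str
    risk_score: float          # output of a hypothetical model

@dataclass
class ClinicalDecision:
    recommendation: Recommendation
    final_action: str          # what the clinician actually decided
    overridden: bool
    override_reason: Optional[str] = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def decide(rec: Recommendation, clinician_action: str,
           reason: Optional[str] = None) -> ClinicalDecision:
    """The clinician's choice is always final; the model's suggestion is
    context, not a default. Overrides are logged (with a reason when one
    is given) so they can feed later deliberation."""
    overridden = clinician_action != rec.suggested_action
    return ClinicalDecision(rec, clinician_action, overridden, reason)

# Example: the tool suggests referral, but the nurse, knowing the patient's
# household situation, schedules a community follow-up instead.
rec = Recommendation("pt-042", suggested_action="refer to district hospital",
                     risk_score=0.71)
decision = decide(rec, "community health worker follow-up",
                  reason="transport cost prohibitive; family caregiver available")
print(decision.overridden, decision.override_reason)
```

The design choice matters: the record here exists to feed review and contestation, closer to the deliberative processes the study describes, rather than serving as a compliance trail that nudges clinicians back toward the automated default.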
Epistemic solidarity complements this approach by addressing whose knowledge counts in algorithmic systems. The study argues that ethical AI must recognize and include plural forms of knowing, including narrative, experiential, and community-based knowledge. This requires moving beyond narrow definitions of fairness based solely on statistical representation.
Epistemic solidarity calls for participatory design, contestability, and governance mechanisms that allow clinicians and patients to challenge algorithmic outputs. It also demands attention to the epistemic categories embedded in data and models, asking whose realities are made visible and whose are erased.
These principles together support what the study describes as situated normativity. Ethical evaluation begins from the concrete conditions of care rather than from abstract universal rules. This approach rejects both technocratic universalism and cultural relativism, emphasizing instead that ethical AI must be negotiated through ongoing engagement with local practices, values, and vulnerabilities.
First published in: Devdiscourse

