Global South users bear hidden cost of AI misalignment

CO-EDP, VisionRI | Updated: 14-11-2025 21:55 IST | Created: 14-11-2025 21:55 IST

Artificial intelligence companies are overlooking a critical component of global AI deployment, according to a new study showing that people in the Global South already perform substantial hidden labour to adapt frontier AI systems to their everyday realities, and that current development practices push the cost of misalignment onto users rather than onto the systems’ creators.

The findings appear in the white paper “Alignment Debt: The Hidden Work of Making AI Usable,” published by YUX Design. The study introduces the concept of alignment debt, defined as the cumulative burden users experience when AI systems fail to match the cultural, linguistic, infrastructural, epistemic, or interactional contexts of use. Based on survey data from AI users in Kenya and Nigeria, the study shows that these misalignments are widespread, measurable, and structurally patterned, and that they directly influence how people use and verify AI outputs.

The authors argue that AI fairness and global accessibility cannot be evaluated solely through model-centric metrics. Instead, companies and policymakers must account for the real-world labour users perform to compensate for broken assumptions built into frontier AI systems.

AI misalignment is not occasional; it is the default experience

The researchers show that misalignment is not an occasional disruption but a routine condition of use. Among the 385 respondents scored for alignment debt, every single user experienced at least one type of misalignment, and most faced multiple.

The study identifies four primary forms of alignment debt. Cultural and linguistic debt emerges when AI fails to interpret accents, dialects, code-switching patterns, or region-specific references. Users must adjust how they phrase questions, moderate tone, or strip away cultural markers to receive relevant responses. Infrastructural debt arises when systems assume fast, stable connectivity and powerful devices, while real conditions in Kenya and Nigeria often involve slow networks, high data costs, and mobile-only access, forcing users to spend more time and money on basic interactions.

Epistemic debt refers to situations where AI outputs are inaccurate, misleading, poorly sourced, or reliant on non-local knowledge, pushing users to perform substantial verification. Interaction debt captures the friction generated when the AI’s interaction style, prompting expectations, or response patterns do not reflect local communication norms or task flows, requiring repeated rephrasing and workarounds.

The prevalence of misalignment underscores that AI tools are primarily built around high-resource assumptions from Western contexts. The study finds that 51.9% of users face cultural or linguistic misalignment, 43.1% experience infrastructural challenges, 33.8% encounter epistemic misalignment, and 14% report interaction design mismatch. Crucially, these burdens stack: fewer than two-thirds of users experience only a single type of misalignment, while more than one-third confront two or more simultaneously.
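
The white paper does not include code, but the scoring logic it describes is straightforward to sketch. The following minimal Python illustration assumes each respondent is simply flagged for the four debt types above and that cumulative debt is a count of distinct types; all names and sample values are hypothetical, not the study’s actual instrument.

```python
from dataclasses import dataclass, fields

# A minimal sketch of per-respondent alignment-debt scoring, assuming each
# survey respondent is flagged for the four debt types described above.
# Field names and sample data are hypothetical, not the study's instrument.
@dataclass
class Respondent:
    cultural_linguistic: bool
    infrastructural: bool
    epistemic: bool
    interaction: bool

    def debt_count(self) -> int:
        # Cumulative alignment debt = number of distinct debt types reported.
        return sum(getattr(self, f.name) for f in fields(self))

sample = [
    Respondent(True, True, False, False),   # two stacked debt types
    Respondent(True, False, False, False),  # a single debt type
    Respondent(True, True, True, True),     # all four types at once
]

# Prevalence per debt type, and the share of users carrying two or more.
for f in fields(Respondent):
    share = sum(getattr(r, f.name) for r in sample) / len(sample)
    print(f"{f.name}: {share:.0%}")
print("two or more types:", sum(r.debt_count() >= 2 for r in sample) / len(sample))
```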

According to the authors, this pattern reveals a structural problem in AI design. Instead of treating misalignment as a fringe issue affecting a minority of users, the study demonstrates that misalignment is a systemic feature of how frontier AI interacts with Global South conditions. The significant overlap among debt types reflects the multi-dimensional mismatch between model assumptions and real-world usage environments.

How context shapes user burden: Differences between Kenya and Nigeria

The research also examines how alignment debt manifests across different national contexts. While both Kenya and Nigeria share similarities in age distribution, education levels, and mobile-first usage patterns, the study finds distinct differences in infrastructural and interaction-related burdens.

In Kenya, 47% of respondents report infrastructural debt, compared to 33.9% in Nigeria. The authors attribute this to poorer bandwidth conditions in the Kenyan sample, where slow loading, weak signal strength, and higher data costs intensify the effort required to use AI tools. Infrastructural misalignment becomes a double burden: interaction takes longer, and compensatory verification is more expensive.

Interaction debt shows an even sharper divide. In Kenya, 17.4% of users struggle with AI interaction patterns, compared to just 6.1% in Nigeria. This suggests that interface expectations, task structures, and communication norms embedded in AI tools align more closely with Nigerian users’ digital environment than with Kenyan users’ needs. It also highlights that interaction design mismatches cannot be solved merely through better localisation; they require deeper adaptation to how users in different cultural contexts structure tasks and communicate intentions.

Despite these differences, cultural and epistemic misalignments remain similarly common across both countries. This consistency indicates that frontier AI models fundamentally under-represent African languages, local knowledge, and regional contexts, producing uniform gaps regardless of country-specific infrastructure or user behaviour.

The study treats these variations as evidence that alignment debt is both systemic and context-dependent. Some types of burden arise from global training biases, while others are shaped by local technological ecosystems. The authors argue that effective solutions will require addressing both layers: global model design must better represent African linguistic and cultural diversity, while local infrastructure realities must be factored into interface and deployment choices.

Verification as hidden user labour and why it intensifies with debt

The study also investigates the relationship between alignment debt and user verification behaviour. Verification, the process of checking AI outputs against external sources, emerges as the clearest behavioural cost imposed by epistemic misalignment.

The study finds that 84.6% of users verify AI responses through search engines, online references, academic sources, or other tools. Users experiencing epistemic debt verify at a rate of 91.5%, compared with 80.8% among those with no epistemic misalignment. This demonstrates a direct behavioural impact: when AI systems produce unreliable or non-local information, users must exert extra cognitive and temporal labour to check accuracy.

Importantly, the study shows that verification intensity scales with cumulative alignment debt. As users accumulate more types of misalignment, they check more sources. Those with a single debt type consult roughly 1.5 sources on average, while users bearing all four types consult an average of 3.5. This pattern reveals a compounding effect: as misalignment deepens, so does the magnitude of compensatory user labour.
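
As a rough illustration of how such a tabulation works, the sketch below groups hypothetical verification records by debt count and averages the sources consulted; the record values are invented, and only the reported endpoints (about 1.5 sources at one debt type, 3.5 at four) come from the study.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical verification records: (debt types experienced, sources checked).
# Values are invented for illustration; the study reports ~1.5 sources at one
# debt type, rising to an average of 3.5 at all four.
records = [(1, 1), (1, 2), (2, 2), (2, 3), (3, 3), (4, 3), (4, 4)]

by_debt = defaultdict(list)
for debt_types, sources in records:
    by_debt[debt_types].append(sources)

# Mean verification intensity per level of cumulative alignment debt.
for debt_types in sorted(by_debt):
    print(f"{debt_types} debt type(s): {mean(by_debt[debt_types]):.1f} sources")
```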

By contrast, infrastructural debt has an inverse relationship with verification frequency. Users with poor connectivity or high data costs verify less, not because they trust AI more, but because verification itself is expensive and time-consuming. This finding highlights how infrastructural limitations amplify the risk of misinformation, creating a situation where the users who most need verification are the least able to perform it.

The study interprets verification patterns as evidence that alignment debt is not an abstract metric but a lived experience with measurable behavioural consequences. Users invest time, attention, data, and effort to fill the gap between AI system assumptions and real-world conditions.

A new framework for designing AI that works for the Global South

The study provides a detailed design and policy agenda for reducing alignment debt. According to the authors, AI systems must be shaped around real-world user contexts rather than expecting users to adapt to system limitations.

For cultural and linguistic alignment, they call for expanded representation of African languages, dialects, code-switching patterns, and communication norms in training corpora. Models should be designed to handle ambiguity and ask clarifying questions rather than defaulting to Western norms.

For infrastructural alignment, the authors recommend low-bandwidth modes, offline functionality, progressive loading, and clear data cost indicators. This ensures that economically or technologically disadvantaged users are not forced to absorb disproportionate burdens.
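
To make the recommendation concrete, here is a hypothetical client-side settings sketch in Python; the field names, defaults, and cost arithmetic are assumptions for illustration, not an API from the study.

```python
from dataclasses import dataclass

# Hypothetical deployment settings reflecting the infrastructural
# recommendations above; names and defaults are illustrative assumptions.
@dataclass
class ClientProfile:
    low_bandwidth_mode: bool = True       # compress responses, defer media
    offline_cache_enabled: bool = True    # serve recent answers without a connection
    progressive_loading: bool = True      # stream partial responses as they arrive
    show_data_cost_estimate: bool = True  # surface estimated MB per interaction

    def estimated_cost_mb(self, response_kb: int) -> float:
        """Rough per-response data estimate a UI could display to the user."""
        overhead = 0.2 if self.low_bandwidth_mode else 1.0  # assumed protocol overhead, MB
        return round(response_kb / 1024 + overhead, 2)

profile = ClientProfile()
print(profile.estimated_cost_mb(response_kb=512))  # -> 0.7
```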

Improving epistemic alignment requires better sourcing, regionally relevant references, clearer confidence indicators, and more transparent system reasoning. This is crucial in information-sensitive tasks where faulty AI outputs could have serious consequences.

For interaction alignment, the study proposes prompt templates, contextual suggestions, and adaptive interfaces that reduce the skill burden of interacting with AI systems.
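
A minimal sketch of what such a prompt template could look like appears below; the template text and fields are illustrative assumptions, not examples taken from the paper.

```python
# Hypothetical prompt template of the kind the study proposes to reduce
# interaction debt; wording and fields are illustrative only.
TEMPLATE = (
    "Answer in {language}. My context: {context}. "
    "If anything is ambiguous, ask a clarifying question before answering.\n"
    "Task: {task}"
)

prompt = TEMPLATE.format(
    language="Swahili",
    context="mobile-only access in Nairobi, limited data",
    task="Summarise this agricultural extension notice for smallholder farmers.",
)
print(prompt)
```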

The authors argue that alignment debt should become a standard metric in AI evaluation and product design. Measuring the user-side work required to correct system misalignment could guide companies toward building tools that are globally equitable.

At a policy level, the authors urge African governments to incorporate alignment debt considerations into national AI strategies, procurement standards, and regulatory frameworks. Public investment in language resources, regional datasets, and compute infrastructure is essential to reduce structural misalignment.

The authors warn that as global adoption of AI accelerates, user burden must not remain invisible. Companies cannot claim their models are safe or fair if they rely on users to compensate for misalignment.

FIRST PUBLISHED IN: Devdiscourse