AI readiness metrics overlook language, trust and public accountability


A new study published in AI & Society argues that global and regional artificial intelligence (AI) readiness rankings may be giving governments an incomplete picture of preparedness by focusing heavily on infrastructure, innovation and formal policy frameworks while undermeasuring who can understand, challenge and shape AI systems. This gap, as the study claims, is especially consequential for Latin America, where AI development is unfolding amid deep inequalities, post-colonial knowledge hierarchies, digital dependence and uneven public capacity.

The study, titled "The epistemic readiness gap: rethinking AI readiness indices through ILIA 2025," analyzes the Latin American Artificial Intelligence Index (ILIA 2025), a major regional benchmark of AI capacity across 19 Latin American and Caribbean countries. It insists that readiness metrics must account not only for digital infrastructure and governance institutions but also for linguistic inclusion, public contestability, community data rights and the quality of trust in AI systems.

Why AI readiness rankings may be measuring capacity too narrowly

AI readiness indices have become influential tools in global technology governance. Governments, investors, international organizations and policy experts use them to assess which countries are best positioned to develop, regulate and benefit from artificial intelligence. These indices typically rank countries by indicators such as broadband access, cloud and data-center capacity, research output, talent pipelines, startup activity, AI adoption and the existence of national AI strategies.

The study challenges the assumption that such measures are enough. It argues that a country can appear ready for AI by conventional standards while still being poorly prepared to ensure that affected communities can understand AI-mediated decisions, participate in governance, access services in their own languages or control how their data is used.

The article calls this mismatch an "epistemic readiness gap." In simple terms, the gap appears when material and institutional capacity advance faster than the social and knowledge conditions needed for fair, accountable and inclusive AI. A country may build data centers, train AI specialists, pass a national AI strategy and expand public-sector automation, but still leave citizens without clear explanations, appeal channels or meaningful influence over systems that affect their lives.

Many readiness indices, it argues, are not neutral descriptions of technological capacity. They are political instruments that help define what counts as progress. By rewarding infrastructure, investment, research and formal policy structures, they can encourage governments to pursue capital-intensive AI development while giving less visibility to social accountability, linguistic diversity and community-level governance.

Latin America has growing AI ambitions but also faces persistent gaps in investment, labor capacity, institutional strength and digital inclusion. ILIA 2025, developed by the UN Economic Commission for Latin America and the Caribbean and regional partners, represents a major attempt to map AI capacity from within the region rather than relying solely on global rankings shaped by Northern priorities. The study recognizes ILIA as a significant regional achievement, not as a flawed project to be dismissed.

The author argues that ILIA's strongest comparative measures remain concentrated in material and institutional categories. The index assesses enabling factors, research, development, adoption and governance. These dimensions are important, but they do not fully capture whether AI systems reflect the languages, knowledge traditions, rights and decision-making power of the communities affected by them.

According to the study, AI readiness should answer three questions often left implicit: ready for what, ready for whom and ready on whose terms. Existing metrics often assume that countries should be ready to compete in the global AI economy, attract investment, deploy AI in public services and expand innovation ecosystems. However, an epistemic justice perspective asks whether people and communities can act as knowers, critics and decision-makers within AI systems, rather than being treated only as users, data sources or policy targets.

This matters because AI systems are not just technical tools; they are knowledge systems. They influence who is heard, whose experience becomes legible, whose language is supported, whose data is extracted and whose objections are recognized. If readiness rankings do not measure these conditions, they risk presenting a country as prepared even when its AI ecosystem reproduces inequality.

Five areas expose the epistemic readiness gap in Latin America's AI benchmark

The study examines ILIA 2025 through five axes of epistemic inclusion: linguistic and epistemic diversity; participatory and relational governance; epistemic accessibility and contestability; community data governance and epistemic sovereignty; and the foundations and quality of trust.

Linguistic and epistemic diversity

This first axis asks whether AI infrastructures and public services account for indigenous, minoritized and local languages, as well as non-dominant knowledge traditions. The study finds that ILIA acknowledges regional and locally grounded AI development, but its comparative architecture does not strongly measure whether AI-enabled public services operate in indigenous or local languages. Human capital indicators include English proficiency and AI training, while data and research indicators track availability, governance and output. But the index does not systematically assess whether datasets, models and interfaces reflect the region's linguistic and epistemic diversity.

That omission is politically significant. In Latin America, where indigenous and Afro-descendant communities have long faced exclusion from public institutions and knowledge systems, AI tools that operate mainly in dominant languages can deepen existing inequalities. A government may digitize services and deploy AI while leaving large communities unable to use, understand or challenge those systems on equal terms.

Participatory and relational governance

This axis examines whether affected communities have durable roles in AI decision-making. Here, the study finds that ILIA performs better than many technocratic readiness frameworks because it includes societal involvement in national AI strategy design. This is an important strength. However, the article argues that the index does not fully distinguish between symbolic consultation and real influence.

That distinction is crucial. Public participation can mean a short consultation exercise with little effect on policy, or it can mean reserved seats, recurring representation, agenda-setting power and co-decision authority. The study argues that AI readiness metrics should not merely ask whether people were consulted. They should ask who participated, whether marginalized groups had institutional power and whether communities could shape AI projects before they were deployed.

Epistemic accessibility and contestability

This axis focuses on whether people can receive understandable reasons for AI-assisted decisions and challenge them through effective channels. The study finds that ILIA engages with data protection, privacy and responsible AI, but does not yet convert contestability into a strong comparative metric. Legal safeguards and regulatory frameworks may exist, but they do not automatically guarantee that a welfare recipient, patient, student, debtor or citizen can understand why an AI-assisted decision was made or appeal it effectively.

This is one of the study's key warnings. Public-sector AI can affect access to benefits, healthcare, education, justice, policing and administrative services. If readiness indices reward governments for deploying AI without measuring whether affected people can contest outcomes, they risk encouraging administrative automation without accountability.

Community data governance and epistemic sovereignty

This fourth axis asks whether communities can shape how their data is collected, used, shared and reused. The study says ILIA gives attention to data governance and endogenous regional AI development, which is an important step. But national or regional AI autonomy is not the same as community-level control. A country may reduce dependence on foreign platforms and still centralize data extraction in ways that give affected communities little say.

The article calls for stronger measurement of collective data rights, indigenous data sovereignty, community data stewardship and benefit-sharing. These issues are essential in contexts shaped by digital extractivism, where data from people, territories and public systems can become an input for AI development without adequate consent, control or return of benefits.

Foundations and quality of trust

This final axis addresses how public trust in AI is built. ILIA recognizes the importance of trustworthy AI through data protection, safety, ethics and responsible governance. But the study argues that trust should not be measured only through laws or ethics documents. AI systems can generate user trust through interface design, anthropomorphic cues, persuasive communication or institutional framing, even when users do not fully understand the system.

This raises a risk that public confidence may be engineered rather than earned. A chatbot or AI assistant in a public service may appear helpful, neutral or human-like while still being opaque, limited or difficult to challenge. The study argues that readiness metrics should distinguish between informed trust based on explanation and accountability, and affective or parasocial trust produced by design choices.

Together, the five axes show that ILIA incorporates some epistemic concerns, especially in its governance dimension, but often in indirect or uneven ways. The study does not accuse ILIA of ignoring these problems. Its more precise claim is that the index recognizes several of them narratively or programmatically but does not yet operationalize them with the same force as infrastructure, research, adoption and formal governance.

What must change in AI readiness metrics

The author proposes a practical reform agenda rather than a wholesale replacement of ILIA. Epistemic inclusion should be built into the existing three-pillar structure of the index rather than placed in a separate category that might be treated as optional. The goal is to change what becomes visible as AI progress.

The first proposed family of indicators would measure structured participation by under-represented communities in AI governance. This would go beyond counting consultation and assess whether marginalized groups have formal representation, recurring roles and real influence in AI councils, ethics bodies and advisory boards. In policy terms, the question is whether communities can shape priorities and safeguards before AI projects are funded, built and deployed.

The second family would measure AI services and projects in indigenous and minoritized languages. This would include whether AI-enabled public systems in health, welfare, education, justice and other high-stakes areas are accessible beyond dominant languages. It would also assess whether states support language resources, speech tools and models for under-represented linguistic communities.

The third would measure community data governance and epistemic sovereignty. Indicators could examine whether laws and policies recognize collective data rights and whether public AI projects include mechanisms for affected communities to govern data collection, reuse and sharing. This would help separate national AI capacity from genuine community control over data relations.

The fourth proposed family would measure accessibility and contestability in AI-assisted public decisions. Readiness would include whether people have a right to receive reasons, whether those reasons are understandable to non-experts and whether independent bodies can review decisions and provide remedies. This would make it harder for countries to score well on public-sector AI deployment while leaving citizens with weak appeal rights.

The fifth family would measure the quality and foundations of trust. Indicators could assess whether public AI systems clearly disclose their non-human status, whether guidelines address anthropomorphic design, and whether institutions distinguish between informed confidence and emotional dependence or user capture. This would help ensure that trust in AI is tied to accountability rather than mere acceptance.

The study also proposes a cross-cutting Epistemic Inclusion Score to sit alongside existing readiness measures. Such a score would not replace ILIA's current dimensions but would reveal whether strong material readiness is accompanied by meaningful inclusion. Countries that expand infrastructure and adoption while neglecting language access, contestability or community data rights would see those weaknesses reflected in their readiness profile.
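The study does not publish an aggregation formula for this score. Purely as an illustration of how such a composite could work, the sketch below assumes the five axes discussed above are each normalized to a 0-100 scale and averaged with equal weights; the axis names, scale and weighting are assumptions for this example, not the author's method.

```python
# Illustrative sketch of a cross-cutting Epistemic Inclusion Score.
# The study proposes such a score but specifies no aggregation formula;
# the axis keys, 0-100 scale and equal weights below are assumptions.

AXES = [
    "linguistic_epistemic_diversity",
    "participatory_relational_governance",
    "accessibility_contestability",
    "community_data_governance",
    "quality_of_trust",
]

def epistemic_inclusion_score(axis_scores: dict[str, float]) -> float:
    """Average the five axis scores, each assumed normalized to 0-100."""
    missing = [axis for axis in AXES if axis not in axis_scores]
    if missing:
        raise ValueError(f"missing axis scores: {missing}")
    return sum(axis_scores[axis] for axis in AXES) / len(AXES)

# Hypothetical country profile: strong material readiness elsewhere
# would not mask the weak language-access and data-governance scores.
example = {
    "linguistic_epistemic_diversity": 35.0,
    "participatory_relational_governance": 60.0,
    "accessibility_contestability": 40.0,
    "community_data_governance": 30.0,
    "quality_of_trust": 55.0,
}
print(f"Epistemic Inclusion Score: {epistemic_inclusion_score(example):.1f}")
```

Under any aggregation of this kind, a country that scores well on infrastructure and adoption but poorly on language access or contestability would see that weakness surface in its composite, which is the visibility effect the study is after.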

On the whole, AI readiness is not simply a technical or economic condition. It is also a democratic and epistemic condition. A country is not fully prepared for AI if its citizens cannot understand public AI systems, challenge automated or AI-assisted decisions, participate in governance or protect their collective knowledge and data from extraction.

First published in: Devdiscourse