AI governance needs a reset: Aligning AI metrics with ethics and sustainability
Artificial intelligence (AI) is rapidly reshaping societies, economies, and governance frameworks. However, a crucial question remains: how should we measure AI's impact, and are current AI metrics aligned with policy objectives?
A new study titled "AI Metrics and Policymaking: Assumptions and Challenges in the Shaping of AI," authored by Konstantinos Sioumalas-Christodoulou and Aristotle Tympas, published in AI & Society, explores the dissonance between AI evaluation frameworks and National Artificial Intelligence Strategies (NAIS). The research highlights critical issues in AI measurement, revealing a need for recalibrated metrics that consider ethical, societal, and sustainable development aspects alongside economic and technological benchmarks.
The limitations of current AI metrics
Traditional AI metrics predominantly focus on technological advancement, economic competitiveness, and innovation potential. These include indicators such as AI talent availability, investment in AI research, and computational infrastructure. While these factors are essential in determining a nation's AI capabilities, they often fail to capture broader concerns such as fairness, transparency, privacy, and societal well-being. The study argues that current global AI indices - such as the Global AI Index (GAI), the Government AI Readiness Index (GAIRI), and the Artificial Intelligence and Democratic Values Index (AIDVI) - overemphasize AI’s economic and competitive aspects while neglecting ethical governance and public trust.
By analyzing AI policies across 43 countries, the study found that while NAIS documents often emphasize human-centered AI, inclusivity, and sustainable development, these priorities are rarely reflected in widely used AI performance metrics. This misalignment creates a gap between AI policymaking and the mechanisms used to track its success, raising concerns about whether AI is being developed and deployed in ways that truly benefit society.
The need for ethical and inclusive AI measurement
One of the central findings of the study is the need to integrate ethical and social considerations into AI metrics. AI’s rapid adoption raises concerns about bias, privacy violations, and algorithmic accountability - issues that are largely absent from existing indices. The research suggests that without comprehensive metrics for fairness, social impact, and sustainability, AI policies risk reinforcing existing inequalities rather than mitigating them.
For instance, data governance is a critical component of AI regulation, yet many AI indices only measure data accessibility rather than evaluating protections against misuse, surveillance, and bias. Similarly, automation and workforce displacement are among the biggest societal concerns linked to AI, yet economic indices often frame AI's impact solely in terms of job creation without considering potential disruptions to labor markets. The study calls for the introduction of multidimensional AI metrics that balance technological performance with social and ethical considerations.
Aligning AI policy with global development goals
The study underscores the importance of aligning AI evaluation frameworks with the United Nations Sustainable Development Goals (SDGs). AI has the potential to drive positive change in fields such as healthcare, climate action, and education, yet most AI indices fail to account for these contributions. Policymakers need to move beyond assessing AI merely through the lens of innovation and competition and instead focus on how AI systems contribute to long-term societal well-being.
AIDVI, one of the few AI indices that attempts to measure democratic accountability and ethical governance, still lacks sufficient depth in assessing real-world AI applications. The study suggests incorporating AI impact assessments, public trust indices, and transparency audits into policymaking processes to ensure AI development aligns with ethical guidelines and international commitments to responsible AI governance.
The future of AI measurement in policymaking
To create an AI ecosystem that benefits all of society, the study advocates for a shift from purely quantitative AI metrics to more nuanced, qualitative assessments. This includes integrating stakeholder perspectives, such as civil society organizations and ethicists, into AI evaluation frameworks. Governments and international organizations must collaborate to develop standardized, holistic AI indicators that measure not only computational power and investment levels but also AI’s ethical integrity, inclusivity, and societal impact.
As AI continues to evolve, so must the ways we assess its role in our world. The study concludes that updating AI metrics to reflect a broader set of values, including fairness, accountability, and sustainability, is crucial for ensuring AI policies serve the public good. With the right measurement tools in place, AI can be harnessed not just for economic growth but for a more equitable and ethical global future.
First published in: Devdiscourse

