AI in public administration shifts toward data-centric governance models


CO-EDP, VisionRI | Updated: 23-03-2026 07:19 IST | Created: 23-03-2026 07:19 IST
Representative image. Credit: ChatGPT

A new study reveals that the success of artificial intelligence (AI) in public administration depends less on advanced algorithms and more on how institutions govern data, embed human oversight, and align technology with ethical accountability. The research, based on a real-world European public sector case, highlights a shift in how governments approach AI, moving from technical experimentation to structured governance frameworks that prioritize transparency, traceability, and institutional trust.

Published in AI & Society, the study titled “Data-centric AI governance for responsible organizational value: evidence from a European public administration” examines the implementation of an AI-powered legislative monitoring system within a Spanish public institution. The research provides rare evidence showing how responsible AI is not an abstract principle but a practical outcome shaped by data practices, infrastructure, and organizational routines.

Data governance, not algorithms, drives AI success

The study focuses on the DGOBCAN-AI system, developed to automate the daily review of Spain’s Official State Gazette and identify legislation relevant to the Canary Islands. Initially, the task required a legal analyst to manually scan dozens of documents each day, identifying a small fraction of relevant updates through expert judgment.

Early attempts to automate this process using conventional model-centric approaches failed to deliver reliable results. The system struggled with extreme data imbalance: only about 0.27 percent of documents were considered relevant. This failure exposed a critical insight: poor performance stemmed not from weak algorithms but from poor data conditions and insufficient governance structures.
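An imbalance of 0.27 percent means roughly one relevant document per 370 reviewed. One common data-centric remedy for such skew is to reweight classes rather than redesign the model; the sketch below is illustrative only (the study does not describe its weighting scheme, and the function and numbers are hypothetical):

```python
# Illustrative: inverse-frequency class weights for an extremely
# skewed relevance-classification task (~0.27% positive documents).
def balanced_class_weights(n_total, n_positive):
    """Weights normalized so each class contributes equally to the
    training loss despite the rarity of positive examples."""
    n_negative = n_total - n_positive
    return {
        "relevant": n_total / (2 * n_positive),
        "not_relevant": n_total / (2 * n_negative),
    }

# With 10,000 documents and 27 relevant ones (0.27%):
weights = balanced_class_weights(10_000, 27)
print(round(weights["relevant"], 1))      # weight on the rare class: 185.2
print(round(weights["not_relevant"], 3))  # weight on the common class: 0.501
```

Reweighting alone rarely suffices at this extreme, which is why the project also invested in labeling and validation rather than model tuning.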

In response, the project shifted toward a data-centric AI approach. Instead of focusing on refining models, the team prioritized improving data quality, annotation processes, and governance mechanisms. This included iterative labeling of data, continuous validation by experts, and structured data pipelines designed to ensure consistency and accountability.

The transition marked a turning point. By embedding governance into the technical workflow, the system improved reliability, transparency, and reproducibility. Tools such as Airflow and MLflow enabled full tracking of model behavior, while human validation ensured that outputs remained aligned with institutional priorities.

This approach challenges a dominant narrative in AI development that prioritizes model complexity. The study shows that in real organizational settings, especially in the public sector, data governance plays a more decisive role in determining outcomes.

The findings also highlight the importance of traceability. Rather than relying on explainable algorithms alone, the system ensures accountability through detailed logs, version control, and reproducible workflows. This allows institutions to audit decisions even when underlying models remain difficult to interpret.
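The kind of traceability described can be approximated even without a full MLOps stack. The sketch below is a hypothetical illustration, not the study's implementation: each prediction is logged with a content hash of the input and the model version, so a decision can be audited later against the exact document and model that produced it.

```python
import hashlib
import json

def audit_record(document_text, model_version, prediction):
    """Build an auditable log entry. The document is identified by a
    content hash, so the exact input can be verified later without
    storing sensitive text in the log itself."""
    doc_hash = hashlib.sha256(document_text.encode("utf-8")).hexdigest()
    return {
        "doc_sha256": doc_hash,
        "model_version": model_version,
        "prediction": prediction,
    }

entry = audit_record("Royal Decree example text", "v1.4.2", "relevant")
print(json.dumps(entry, indent=2))
```

Tools such as MLflow and Airflow, mentioned above, provide this kind of record-keeping at the level of whole runs and pipelines.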

Human oversight remains central to AI decision-making

The study makes clear that human expertise remains indispensable. The system was explicitly designed as a decision-support tool, not a replacement for human judgment.

Legal analysts play a critical role in validating AI outputs, correcting errors, and refining the system over time. Their involvement ensures that the model learns from real-world expertise and adapts to changing institutional needs. This human-in-the-loop approach transforms AI from an autonomous system into a collaborative tool that enhances professional decision-making.

The research shows that this integration of human oversight is not merely a safeguard but a core component of system performance. Without continuous validation, the model would struggle to maintain accuracy due to the complexity and ambiguity of legislative data.
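A human-in-the-loop workflow of this kind can be sketched as a simple review queue: the model flags candidate documents, an analyst confirms or corrects each one, and the corrections feed back into the training data. All names and thresholds below are illustrative assumptions, not details from the study.

```python
def review_queue(scored_docs, threshold=0.5):
    """Route documents: high-scoring ones go to an analyst for
    confirmation; the rest are auto-dismissed but still logged."""
    to_review = [d for d in scored_docs if d["score"] >= threshold]
    dismissed = [d for d in scored_docs if d["score"] < threshold]
    return to_review, dismissed

def apply_analyst_feedback(doc, analyst_label, training_set):
    """The analyst's decision is final, and it is also captured as a
    new labeled example so the model learns from corrections."""
    doc["final_label"] = analyst_label
    training_set.append({"doc_id": doc["id"], "label": analyst_label})
    return doc

docs = [{"id": 1, "score": 0.91}, {"id": 2, "score": 0.08}]
to_review, dismissed = review_queue(docs)
print([d["id"] for d in to_review])  # [1]
```

The loop keeps decision authority with the analyst while turning each review into training signal, which is what distinguishes decision support from automation.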

Human oversight also addresses ethical concerns. By keeping decision authority in the hands of public officials, the system avoids risks associated with automated decision-making, such as bias, lack of accountability, and erosion of trust.

At the organizational level, the introduction of AI reshapes how work is performed. Routine tasks such as document screening are automated, allowing staff to focus on higher-value activities like interpretation and strategic analysis. This shift enhances institutional capacity while preserving professional autonomy.

The study further shows that ethical responsibility is embedded in everyday practices rather than imposed externally. Continuous monitoring, validation, and feedback loops ensure that the system operates within defined ethical boundaries. This operationalization of ethics represents a significant departure from traditional approaches that treat responsibility as a compliance requirement.

Responsible Public Value redefines AI impact in government

The study discusses the concept of Responsible Public Value, which redefines how AI impact is measured in the public sector. Rather than focusing solely on efficiency, the concept integrates three interdependent dimensions: operational performance, data and technical governance, and ethical responsibility.

The research shows that these elements must work together to generate meaningful value. Efficiency gains alone are insufficient if they are not supported by accountability and trust. Similarly, ethical principles must be embedded in technical systems to have practical impact.

This relationship is expressed through a conceptual framework in which public value emerges from the interaction between innovation and responsibility. The findings suggest that AI creates value only when institutions maintain control over its operation and ensure alignment with public interest.

The DGOBCAN-AI case illustrates this dynamic in practice. The system reduces workload and improves efficiency, but its success depends on governance mechanisms that ensure reliability and legitimacy. These include structured data pipelines, human oversight, and continuous learning processes.

The study also highlights the role of infrastructure in enabling responsible AI. A hybrid local and cloud-based system was used to balance cost efficiency with data sovereignty. This setup allowed the institution to scale its operations while maintaining control over sensitive data.

Importantly, the research shows that responsible AI does not require large financial investments. The system operates at a relatively low cost, demonstrating that effective AI governance is achievable even in resource-constrained environments.

However, the study identifies several limitations that shape the boundaries of AI deployment. Data scarcity remains a persistent challenge, requiring ongoing human involvement. Model opacity continues to limit interpretability, even with strong governance mechanisms in place. Additionally, reliance on external cloud services introduces potential risks related to technological dependency. These constraints highlight that AI adoption is not a one-time implementation but an ongoing process of alignment between technology, data, and institutional practices.

First published in: Devdiscourse