Regulating AI in the Real World: Why Capacity Building Matters More Than Rules

UNESCO’s report argues that artificial intelligence cannot be governed with traditional, one-time regulatory checks, and that supervisory authorities must instead build continuous, learning-based institutions capable of interpreting AI behaviour in real-world contexts. It concludes that capacity building, through tools like observatory units, monitoring systems, and regulatory sandboxes, is the key to making AI oversight effective, trustworthy, and aligned with public values.


CoE-EDP, VisionRI | Updated: 09-01-2026 09:36 IST | Created: 09-01-2026 09:36 IST

The UNESCO report, “Pathways on Capacity Building for AI Supervisory Authorities,” was produced in partnership with the Dutch Authority for Digital Infrastructure (RDI), with financial support from the European Union’s Technical Support Instrument. It builds on discussions from the first UNESCO Expert Roundtable on AI Supervision, held in Paris in May 2025, and includes contributions from institutions such as the Tony Blair Institute, EUSAiR, Rise Sweden, The Future Society, the Brazilian Data Protection Authority (ANPD), and the Center for AI and Digital Policy. The report starts from a shared concern: artificial intelligence is spreading rapidly across society, but the institutions responsible for overseeing it are struggling to keep up.

AI systems now influence decisions in areas like credit, healthcare, hiring, education, and public information. Unlike older technologies, AI systems learn from data, adapt over time, and behave differently depending on how and where they are used. The report argues that this makes traditional regulatory approaches, based on fixed rules, one-time approvals, and technical checklists, insufficient. Supervisory authorities face not only a technical challenge, but an institutional one: how to understand, monitor, and intervene in systems that constantly change.

Why Old Regulatory Models Fall Short

The report explains that many regulators initially looked to models from aviation, nuclear safety, or pharmaceuticals when thinking about AI oversight. These sectors rely on stable standards and predictable risks. AI does not work that way. Its behaviour often cannot be fully understood by inspecting code, and its effects only become visible once systems interact with real people, data, and institutions.

For years, researchers hoped that “explainable AI” would solve this problem by making AI models fully transparent. The report shows why this expectation has proven unrealistic. Attempts to break complex models into simple explanations have delivered limited practical value for regulators. As a result, the focus is shifting away from trying to fully decode AI systems and toward building institutions that can interpret their behaviour in context. What matters is not knowing everything about a model, but knowing enough to judge whether it is lawful, fair, safe, and aligned with public values.

From Technical Control to Interpretative Supervision

This shift is captured in the report’s central idea of interpretative supervision. Rather than relying on static compliance checks, interpretative supervision treats oversight as a continuous process of observation, learning, and judgment. Supervisors must be able to ask practical questions: Is an AI system producing biased outcomes? Is its performance changing over time? Is it creating new risks that were not anticipated at launch?

To make this approach concrete, the report introduces the OBSERVE framework, set out most clearly by the Tony Blair Institute. The framework proposes that regulators build dedicated observatory units, use real-time monitoring tools, draw on external expertise, and store evidence from past incidents and enforcement actions. Together, these elements help authorities move from reacting after harm occurs to identifying problems early and responding proportionately.

Learning by Testing: The Role of Sandboxes

A major part of the report focuses on AI regulatory sandboxes, which allow AI systems to be tested under regulatory supervision. The report stresses that sandboxes are not loopholes or free passes for companies. Instead, they are learning tools for regulators themselves. By observing how systems behave in practice, supervisors can better understand risks, clarify how laws apply, and improve future regulation.

In the European Union, the report examines how the EU AI Act requires Member States to establish sandboxes and how projects like EUSAiR aim to coordinate them across countries. Successful sandboxes, the report argues, must be connected to wider innovation and testing infrastructure, and must operate as ongoing processes rather than one-off experiments.

Brazil’s experience offers important lessons. The report describes how the Brazilian Data Protection Authority used sandboxes not just to test AI systems, but to change how public institutions work. It highlights resistance within bureaucracies, driven by fear of legal liability and institutional risk, and argues that this resistance must be addressed through training, legal clarity, and cultural change, not ignored.

Capacity Building as the Real Challenge

The report’s final message is clear and consistent: AI governance will fail without strong institutions. Laws and ethical principles, including UNESCO’s own Recommendation on the Ethics of AI, matter only if supervisory authorities have the skills, tools, and confidence to apply them. Effective supervision requires technical knowledge, but also cooperation across sectors, openness to learning, and the ability to adapt as technology evolves.

By drawing on real-world experience from Europe, Latin America, and beyond, the report shows that supervision and innovation do not have to be enemies. When done well, oversight can reduce uncertainty, protect the public, and support responsible technological progress. Capacity building, the report concludes, is no longer a secondary issue; it is the central task for governments seeking to govern artificial intelligence in the public interest.

  • FIRST PUBLISHED IN: Devdiscourse