Why governance dominates global debates on ethical artificial intelligence
Global debates over artificial intelligence (AI) are increasingly dominated by calls for responsibility, ethics, and accountability, yet there remains little agreement on what responsible AI actually means in practice. Governments, technology firms, and international organizations continue to release ethical frameworks and policy guidelines, but critics argue that these efforts often remain abstract, fragmented, and disconnected from real-world implementation.
That challenge is examined in the study "Mapping WUN Expert Discourse on Responsible and Ethical AI: A Multinational Expert Network Analysis," published in Frontiers in Communication. The research offers a systematic analysis of how global experts frame responsible and ethical AI through sustained dialogue rather than isolated policy statements, revealing both areas of convergence and critical blind spots in current governance debates.
How global experts define responsible AI through communication
Responsible AI is not a fixed concept but a negotiated one. Rather than analyzing official policy documents alone, the authors turn to expert discourse generated through the World University Network, a multinational academic consortium that convened a series of webinars focused on responsible and ethical AI. These discussions brought together scholars, policymakers, and practitioners from multiple countries and disciplines, creating a rare dataset of live, interactive expert communication.
Using computational text analysis and semantic network mapping, the researchers examine how frequently key concepts appear and how they are connected across expert conversations. This approach allows them to move beyond surface-level keyword counts and uncover the deeper structure of ethical AI discourse. The results show that experts consistently anchor responsible AI around governance, which emerges as the central organizing concept linking technical, legal, and social concerns.
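The article does not reproduce the paper's exact pipeline, but the general technique it describes, building a concept co-occurrence network from transcripts and scoring centrality, can be sketched. Below is a minimal illustration using invented toy segments and a hypothetical concept list, not the study's actual data or code:

```python
# Minimal sketch of co-occurrence-based semantic network mapping.
# Toy transcript segments and concept list are hypothetical stand-ins,
# not the study's actual pipeline or data.
from itertools import combinations
from collections import Counter

import networkx as nx

# Stand-ins for webinar transcript segments.
segments = [
    "governance requires accountability and transparency",
    "transparency supports public trust in regulation",
    "governance and regulation shape responsibility for bias",
]
concepts = {"governance", "accountability", "transparency",
            "regulation", "responsibility", "bias", "trust"}

# Count how often each pair of concepts appears in the same segment.
pair_counts = Counter()
for seg in segments:
    present = sorted(concepts & set(seg.split()))
    pair_counts.update(combinations(present, 2))

# Build a weighted graph: nodes are concepts, edges are co-occurrences.
G = nx.Graph()
for (a, b), weight in pair_counts.items():
    G.add_edge(a, b, weight=weight)

# Weighted degree centrality surfaces the organizing concepts.
centrality = dict(G.degree(weight="weight"))
for node, score in sorted(centrality.items(), key=lambda x: -x[1]):
    print(node, score)
```

In a network built this way, a concept such as governance ranks highest precisely when it co-occurs with, and thereby links, many otherwise separate themes, which is the structural sense in which the study calls it the central organizing concept.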
Governance is closely connected to accountability, transparency, responsibility, privacy, and regulation. Rather than treating these ideas as separate principles, experts frame them as interdependent components of a broader system. Accountability relies on transparency, transparency depends on governance mechanisms, and governance requires clear responsibility and oversight. This interconnected framing suggests that experts view responsible AI less as a checklist of ethical values and more as an institutional and communicative process.
The study also finds that technical concerns such as safety, security, explainability, and bias are rarely discussed in isolation. Instead, they are embedded within governance conversations that emphasize oversight, standards, and public trust. This contrasts with industry narratives that often prioritize technical fixes while downplaying structural and institutional dimensions. In expert discourse, technical design choices are consistently linked to social consequences and regulatory responsibility.
Another key insight is the role of communication itself. Experts do not merely describe responsible AI; they actively construct shared meaning through dialogue. The study shows that responsible AI emerges as a communicative infrastructure, shaped through ongoing exchange rather than top-down definition. This finding challenges the assumption that ethical AI can be fully codified through static principles or compliance frameworks.
Governance dominates while equity lags behind
While the study identifies strong consensus around governance-oriented themes, it also exposes notable gaps. One of the most significant is the relative absence of equity in expert discourse. Although fairness and justice appear frequently, equity remains weakly connected and underrepresented compared to other ethical concepts.
This distinction matters. Fairness and justice often focus on equal treatment and non-discrimination within systems, while equity addresses unequal starting conditions, historical disadvantage, and structural power imbalances. The study argues that the dominance of fairness over equity reflects a broader tendency in global AI ethics debates to favor universal principles that are easier to operationalize while sidelining context-specific concerns tied to geography, culture, and social inequality.
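To see what "weakly connected" means operationally, one can compare the network position of equity against fairness. The sketch below uses placeholder edge weights for illustration only; the actual measurements belong to the study:

```python
# Sketch: quantifying how peripheral a concept is in a semantic network.
# Edge weights are illustrative placeholders, not the study's data.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("governance", "fairness", 8), ("fairness", "regulation", 6),
    ("fairness", "justice", 7),    ("justice", "accountability", 5),
    ("governance", "equity", 1),   # equity: few ties, low weight
])

for term in ("fairness", "equity"):
    weighted_deg = G.degree(term, weight="weight")
    n_neighbors = G.degree(term)
    print(f"{term}: {n_neighbors} neighbors, weighted degree {weighted_deg}")
```

A term with few neighbors and low weighted degree, as equity shows here, sits at the periphery of the discourse even when it appears in the vocabulary.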
The authors suggest that this imbalance may stem from the composition of global expert networks, which are often centered in the Global North and shaped by Western legal and philosophical traditions. As a result, discussions of responsible AI may insufficiently address how AI systems affect marginalized communities differently across regions, particularly in the Global South. Issues such as data extraction, labor exploitation, and uneven access to technological benefits receive comparatively limited attention.
The network analysis also reveals that responsibility and ethics are frequently discussed together but are not always clearly differentiated. Responsibility tends to be associated with actors and institutions, such as developers, regulators, and organizations, while ethics is framed more abstractly around values and norms. The study suggests that clearer articulation of responsibility, including who is accountable for harm and how enforcement occurs, remains a challenge even among experts.
At the same time, the findings show strong alignment between expert discourse and major international policy frameworks. Concepts emphasized by organizations such as UNESCO, the OECD, IEEE, and the World Economic Forum are well represented in the network. This indicates that global expert discussions and formal policy efforts are reinforcing one another, at least at the level of governance principles.
However, alignment does not necessarily translate into completeness. The study argues that convergence around governance risks crowding out deeper engagement with equity, power, and historical context. Without addressing these dimensions, responsible AI frameworks may struggle to achieve legitimacy among communities most affected by AI-driven decisions.
What the findings mean for AI policy and global cooperation
According to the study, responsible AI cannot be reduced to technical compliance or ethical slogans. Instead, it is produced through sustained communication across disciplines, sectors, and borders. Multinational expert networks play a crucial role in shaping how ethical concerns are framed, prioritized, and translated into policy.
For policymakers, the findings suggest that effective AI governance requires more than adopting existing ethical principles. It requires creating spaces for continuous dialogue that allow ethical, legal, and technical perspectives to evolve together. Static frameworks risk becoming outdated or disconnected from emerging challenges, particularly as AI systems grow more complex and socially embedded.
The study also highlights the importance of reflexivity in expert communities. By mapping their own discourse, the authors show how certain themes dominate while others recede. This self-awareness is essential if global AI governance is to move beyond consensus toward inclusivity. Incorporating equity more explicitly would require expanding who participates in expert conversations and whose experiences shape ethical priorities.
For technology developers, the research underscores that responsible AI is not solely a design problem. Technical solutions such as explainable models or bias mitigation tools must be embedded within governance structures that define accountability and oversight. Without institutional support, technical fixes risk becoming symbolic gestures rather than meaningful safeguards.
The study also reinforces the idea that trust in AI systems is socially constructed. Transparency and accountability are not merely technical attributes but communicative ones. They depend on how systems are explained, regulated, and discussed in public. Expert discourse plays a critical role in shaping these narratives and, by extension, public confidence in AI governance.
At a global level, the research positions multinational academic networks as important intermediaries between local contexts and international policy agendas. These networks can bridge disciplinary silos and national boundaries, but only if they actively address imbalances in representation and perspective. Expanding participation from underrepresented regions and disciplines would strengthen the legitimacy of responsible AI discourse.
FIRST PUBLISHED IN: Devdiscourse

