Global South Leaders Push for Collective AI Safety Action

Devdiscourse News Desk | New Delhi | Updated: 20-02-2026 20:52 IST | Created: 20-02-2026 20:52 IST

As frontier artificial intelligence systems advance at unprecedented speed, global policymakers are racing to ensure that governance mechanisms keep pace. At the India AI Impact Summit 2026, the session titled “International AI Safety Coordination: What Policymakers Need to Know” brought together ministers, multilateral leaders and AI safety experts to examine how developing economies can actively shape global AI safety frameworks—rather than remain rule-takers in a fragmented landscape.

Serving as the closing dialogue of the International AI Safety Coordination track, the session focused on practical mechanisms to align innovation with public trust, fundamental rights and long-term global stability.

From Diplomatic Alignment to Technological Necessity

Speakers stressed that for the Global South, collaboration on AI safety is no longer optional—it is a technological and economic imperative.

With AI already deployed across sectors such as:

  • Public health

  • Agriculture

  • Education

  • Social protection

  • Public service delivery

countries must move beyond isolated national efforts toward:

  • Shared risk assessments

  • Interoperable governance frameworks

  • Coordinated safety tools

  • Evidence exchange across borders

The coming phase of AI governance, participants noted, will be defined by whether institutions can build capacity and operationalise common standards quickly enough to match accelerating technological advances.

Singapore: Regulation Must Be Evidence-Based

Josephine Teo, Minister for Digital Development and Information, Singapore, underscored the importance of evidence-driven policymaking and globally interoperable standards.

Drawing parallels with aviation safety, she argued that AI governance must rely on testing and simulation rather than intuition.

Without international coordination, she warned, “fragmentation will persist, trust will weaken, and the safe scaling of frontier technologies will become far more difficult.”

Malaysia: Strong Domestic Capacity First

Gobind Singh Deo, Minister of Digital, Malaysia, emphasised that credible regional cooperation depends on strong national institutional foundations.

He highlighted the need for middle powers to:

  • Strengthen enforcement capabilities

  • Build domestic AI governance expertise

  • Develop institutional capacity

He pointed to platforms such as the ASEAN AI Safety Network as mechanisms to convert shared commitments into operational risk-sharing and preparedness systems.

OECD: Trust Is Built on Inclusion and Evidence

Mathias Cormann, Secretary-General, OECD, stressed that public trust will determine AI’s long-term trajectory.

“Trust in AI is built through inclusion and objective evidence,” he said.

Cormann called for coordinated action across governments, industry and civil society to close the widening gap between innovation and oversight. He noted that in some cases it may be necessary “to slow down, test, monitor and share information” to ensure systems function as intended and respect fundamental rights.

World Bank: Design Safety from the Start

Sangbu Kim, Vice President for Digital and AI, World Bank, focused on embedding safety into AI systems at the design stage, especially in low-capacity environments.

He highlighted tools such as:

  • Red-teaming exercises

  • Risk simulations

  • Shared threat intelligence

  • Continuous monitoring mechanisms

Describing AI as both “the spear and the shield,” Kim argued that managing risks requires continuous learning and structured global partnerships before large-scale deployment.

Frontier AI and the Governance Window

Jaan Tallinn, AI investor, Founding Engineer of Skype and Co-Founder of the Future of Life Institute, placed the discussion within the competitive dynamics of frontier AI development.

He warned that intense competition among leading labs makes unilateral restraint unlikely. However, he argued that the concentration of compute and capital in advanced AI development could actually make governance more feasible—if global alignment is achieved.

Political awareness and coordinated international action, he noted, are critical at this stage.

A 12–18 Month Operational Agenda

Across institutional and regional perspectives, the session converged on a practical near-term roadmap for the next 12–18 months:

  • Establish shared safety benchmarks

  • Create structured information-sharing mechanisms

  • Build coordinated institutional capacity

  • Strengthen South–South collaboration

  • Move from high-level principles to operational cooperation

For developing economies, speakers emphasised that collective action offers a pathway to shape AI governance frameworks rather than merely adapt to rules set elsewhere.

Shaping the Next Phase of AI Governance

The discussion underscored a pivotal moment in global AI governance. As frontier capabilities accelerate, safety coordination must evolve just as rapidly.

For the Global South, the message was clear: collaboration is not just about alignment—it is about agency. By pooling expertise, evidence and institutional capacity, developing economies can help ensure that AI scales in ways that strengthen public trust, protect fundamental rights and support long-term global stability.
