Why legal safeguards alone can’t tame AI’s role in climate crisis

CO-EDP, VisionRI | Updated: 25-04-2025 17:45 IST | Created: 25-04-2025 17:45 IST
Representative Image. Credit: ChatGPT

In a world accelerating toward both ecological collapse and rapid technological transformation, the entwinement of artificial intelligence (AI) and climate change governance is raising difficult questions. As governments and industries increasingly deploy AI to predict and mitigate climate threats, they are also inheriting a tangled web of risks: environmental consumption, surveillance overreach, and a technocratic narrowing of climate discourse.

A new study by legal scholar Barrie Sander of Leiden University, published in the Netherlands Quarterly of Human Rights, critically examines this precarious intersection. Titled “Confronting Risks at the Intersection of Climate Change and Artificial Intelligence: The Promise and Perils of Rights-Based Approaches”, the paper challenges the prevailing assumption that rights-based governance is sufficient to steer us safely through this dual crisis.

How does AI amplify the climate crisis?

AI is often cast as a solution to climate change, yet it paradoxically contributes to the problem it claims to solve. According to Sander, AI technologies are themselves significant consumers of energy and natural resources. The environmental toll stems from massive resource extraction, energy-intensive model training, and rebound effects that accelerate consumption rather than temper it. AI’s carbon footprint includes direct emissions from data centers, indirect impacts like enabling oil and gas exploration, and societal consequences such as over-reliance on automation that undermines sustainable practices.

Equally troubling is the uneven distribution of these environmental harms. Benefits accrue to the Global North through technological profits and digital infrastructure, while the Global South disproportionately bears the brunt of extraction, e-waste, and ecological degradation. This duality underlines the need to assess AI not only as a technological tool but as a socio-political actor with real-world consequences.

Beyond emissions, AI is transforming climate governance itself. It’s being embedded in mitigation projects, urban management, and agriculture, but often without adequate attention to context or justice. Risks such as maladaptation, techno-centrism, and corporate co-option are prevalent. Instead of addressing root causes, AI-enabled systems sometimes reinforce existing inequalities or suppress critical voices. States have also used AI for surveillance under the guise of climate policy, targeting activists and migrants rather than systemic polluters.

Can human rights frameworks govern climate-AI risks?

Sander’s study critically interrogates whether rights-based approaches are equipped to manage these complex intersections. Human rights, while historically instrumental in shaping legal accountability, face three key challenges in this new landscape: concretisation, individualism, and marketised managerialism.

First, concretisation refers to the difficulty of translating abstract legal rights, such as those enshrined in the EU Charter or the UN Guiding Principles on Business and Human Rights, into enforceable standards for AI applications. Current frameworks struggle to quantify AI’s lifecycle emissions or to regulate platform-driven climate misinformation. Recent EU regulations such as the AI Act and the Digital Services Act offer some footholds, but their vague language and reliance on voluntary compliance weaken their potential. For instance, while the AI Act includes provisions on energy transparency, its enforcement mechanisms remain tied to industry-led standard-setting bodies, raising concerns of regulatory capture.

Second, the individualism challenge stems from the mismatch between human rights law’s focus on individual grievances and the collective, systemic harms posed by AI. Climate-AI harms often manifest at the societal level, through population-wide data exploitation, behavioral manipulation, and environmental damage, but human rights processes typically require specific personal injury to trigger action. Sander suggests expanding rights frameworks to recognize compound and intersectional harms, and to center marginalized voices through design practices that prioritize those most affected by AI systems.

Who really holds the power in AI governance?

Perhaps the most damning critique in the study is aimed at marketised managerialism: the quiet ceding of regulatory power to the very industries causing harm. Both soft-law instruments like the UNGPs and more recent legislative tools like the Corporate Sustainability Due Diligence Directive lean heavily on corporate self-assessment, certification schemes, and internal audits. This enables companies to perform compliance theatre without substantively altering harmful practices.

As Sander notes, AI is often developed within a market logic that rewards exponential computational growth regardless of environmental cost. Voluntary sustainability codes and energy disclosures, while symbolically important, rarely challenge the underlying incentive structures driving ecological exploitation. The real danger is that rights-based systems, without deeper transformation, may end up legitimizing rather than curbing the power of Big Tech.

To address this, the study advocates a strategic deployment of rights discourse. Rights should not be viewed as panaceas, but as tactical tools within broader social movements demanding structural change. This includes red-line regulations that ban harmful AI applications (e.g., spyware and border surveillance tech), procedural reforms that expand collective standing in court, and new accountability mechanisms centered on public interest rather than profit margins.
