AI governance across EU, US, and China fails to address rising energy and carbon footprint
Artificial intelligence (AI) policy across the world’s major economies is expanding rapidly, but its environmental cost remains largely invisible. As governments race to regulate AI risks related to safety, bias, and accountability, the physical footprint of large-scale computation continues to grow with little restraint. New research suggests this imbalance is not accidental but structural, embedded in how AI governance frameworks are designed and enforced.
The study, "The Environmental Blind Spot of AI Policy: Energy, Infrastructure, and the Systematic Externalization of Sustainability," published in the journal Sustainability, examines AI policies in the European Union, the United States, and China. Despite sharp differences in political systems and regulatory styles, it finds that all three regimes converge on the same outcome: none treats environmental sustainability as a binding constraint on AI deployment or scale.
The result, the authors argue, is a global policy architecture that promotes AI expansion while systematically displacing its environmental consequences onto energy systems, ecosystems, and future climate targets.
Sustainability treated as an afterthought in AI governance
AI systems depend on data centers, high-performance computing clusters, global cloud networks, and mineral-intensive hardware supply chains. These infrastructures require continuous electricity, water for cooling, land for facilities, and carbon-intensive manufacturing processes. Yet most AI regulations treat sustainability as a secondary concern, addressed through efficiency improvements or voluntary commitments rather than enforceable limits.
This approach misunderstands the scale of the problem. Efficiency gains, while real, are consistently outpaced by growth in total computational demand. As AI models grow larger, are deployed more widely, and are subject to increasing compliance and safety requirements, overall energy use continues to rise even as individual systems become more efficient. Without absolute limits, improvements in performance per watt do not translate into lower emissions.
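The arithmetic behind this rebound effect is simple to sketch. The growth rates below are purely illustrative assumptions, not figures from the study; they show only how modest annual efficiency gains are swamped by faster growth in total compute demand:

```python
# Hypothetical illustration: per-unit efficiency improves each year,
# but total compute demand grows faster. All numbers are assumptions
# for illustration, not figures from the study.

def total_energy(base_energy, efficiency_gain, demand_growth, years):
    """Energy use after `years`, with per-unit energy falling by
    `efficiency_gain` per year while demand grows by `demand_growth`."""
    energy = base_energy
    for _ in range(years):
        energy *= (1 + demand_growth) * (1 - efficiency_gain)
    return energy

# Assume 20% annual efficiency gains against 40% annual demand growth.
start = 100.0  # arbitrary energy units
after_5_years = total_energy(start, efficiency_gain=0.20,
                             demand_growth=0.40, years=5)
print(round(after_5_years, 1))  # aggregate energy use still rises
```

Under these assumed rates, aggregate energy use grows roughly 12 percent per year despite the per-unit improvements, which is the pattern the authors describe: performance per watt improves while total emissions climb.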
The authors frame sustainability not as a normative goal to be balanced against innovation, but as a hard material constraint. Energy availability, carbon budgets compatible with climate targets, infrastructure capacity, and resource limits define what is physically feasible. AI policy, they argue, has largely failed to internalize these constraints, instead allowing scale decisions to proceed without reference to environmental boundaries.
Measures designed to strengthen rights protection and safety often increase computational load. Transparency requirements, auditability, traceability, and alignment techniques all add layers of processing that consume additional energy. While these safeguards are justified from a legal and ethical standpoint, their environmental impact is rarely assessed or regulated.
The study identifies this pattern as structural rather than incidental. Sustainability is invoked rhetorically in policy documents, but it remains weakly operationalized. Energy use, emissions, and infrastructure expansion are treated as downstream effects rather than as factors that should condition whether and how AI systems are developed and deployed.
Different systems, same environmental outcome
To test whether this pattern holds across political and regulatory contexts, the authors conduct a comparative analysis of AI policy regimes in the European Union, the United States, and China. These jurisdictions differ sharply in governance style, industrial strategy, and enforcement mechanisms, yet the study finds a striking convergence in how environmental issues are handled.
In the EU, AI governance is anchored in a comprehensive rights-based framework. The AI Act establishes a risk-based system of obligations, particularly for high-risk applications, with strong enforcement mechanisms and significant penalties. However, the regulation contains no binding provisions on energy consumption, carbon emissions, or computational scale. Nor does it impose environmental conditions on data center location, model training, or public procurement of AI systems.
This omission is further aggravated by the EU’s material dependence on external infrastructure. Most cloud computing capacity serving European demand is controlled by non-European providers, and advanced semiconductor manufacturing remains largely outside the region. As a result, much of the environmental impact associated with AI deployment occurs beyond EU territory, effectively shifting emissions outside the scope of European carbon accounting. Despite ambitious climate commitments, AI expansion proceeds without environmental constraints embedded in AI-specific regulation.
The United States presents a different configuration but reaches a similar outcome. American firms dominate global cloud infrastructure and advanced semiconductor design, giving the country substantial control over AI development and deployment. Federal AI governance remains fragmented, relying on sectoral rules, state initiatives, and voluntary principles rather than comprehensive binding regulation.
Environmental limits play little role in this model. There are no federal caps on data center energy use tied specifically to AI, no mandatory carbon budgets for model training, and no environmental conditions attached to public subsidies for computational infrastructure. While companies promote efficiency gains and climate pledges, absolute energy demand continues to rise. The policy environment prioritizes innovation and market leadership, leaving sustainability largely outside the core logic of AI governance.
China’s approach is shaped by state-led industrial policy and a focus on technological self-sufficiency. Massive investments in domestic data centers, cloud platforms, and AI models aim to reduce dependence on foreign technology. Regulatory oversight emphasizes political control and social stability, integrating AI governance into a centralized administrative framework.
Environmental considerations exist, particularly through efficiency standards and incentives to locate data centers in regions with abundant renewable energy. However, these measures do not impose absolute limits on computational growth. Rapid expansion of AI infrastructure continues, with energy demand increasing faster than efficiency improvements. Sustainability functions as a conditional factor within a growth-oriented strategy rather than as a binding constraint on scale.
Across all three jurisdictions, the study finds no enforceable environmental restrictions that directly condition the size, energy intensity, or deployment rate of AI systems. Despite divergent regulatory philosophies, the material outcome is the same: AI expands without being governed by environmental limits aligned with climate goals.
Why AI policy keeps externalizing environmental costs
The authors identify several mechanisms that explain why environmental externalization persists across jurisdictions. One key driver is compliance-driven computational expansion. As AI systems are subjected to stricter safety, transparency, and accountability requirements, their computational overhead increases. Alignment techniques, monitoring systems, audits, and safeguards all add processing layers that consume energy. These costs are rarely accounted for in policy design, even though they scale with deployment.
A second mechanism is infrastructure duplication driven by technological sovereignty. Governments seeking strategic autonomy invest in domestic data centers, semiconductor facilities, and foundational models. While politically appealing, this strategy often leads to redundant infrastructure, higher total energy use, and increased emissions, especially when deployed in regions with carbon-intensive electricity or limited grid capacity. Environmental assessment is typically secondary to strategic considerations.
A third mechanism lies in incentives that reward scale without imposing absolute limits. AI policy frameworks promote performance gains, competitiveness, and rapid deployment. Efficiency improvements are celebrated, but there are no ceilings on total resource use. As demand grows, aggregate energy consumption rises, offsetting efficiency gains and reinforcing a pattern of externalization.
These mechanisms interact with geopolitical competition. No major jurisdiction is willing to impose unilateral environmental constraints that could slow its AI sector relative to rivals. The result is a collective outcome in which all actors expand computational capacity while shifting environmental costs outside the decision-making framework of AI policy.
The study emphasizes that this dynamic does not stem from a lack of awareness about AI’s environmental footprint. Rather, it reflects a governance choice to treat sustainability as external to decisions about scale, infrastructure, and deployment. Climate policy and AI policy evolve on parallel tracks, with limited integration.
The case for binding environmental limits
The study argues that addressing AI’s environmental impact requires a shift in policy design: sustainability must be treated as a binding condition that shapes whether, where, and how AI systems are deployed. Without this shift, the authors contend, AI governance will continue to prioritize growth, safety, and competitiveness while undermining climate objectives.
The authors outline several avenues for integrating environmental limits into AI policy. These include enforceable carbon budgets for large-scale model training, mandatory emissions and energy reporting tied specifically to AI systems, and environmental conditionality in public procurement and funding. Infrastructure planning decisions, including data center siting and grid integration, would need to be aligned with climate targets rather than treated as separate issues.
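A carbon budget for model training could, in principle, be operationalized as a simple accounting check: estimated energy drawn by accelerators, scaled by data-center overhead and grid carbon intensity, compared against a cap. The sketch below uses this standard energy-times-intensity estimate; every parameter value and the budget figure are hypothetical assumptions, not values from the study:

```python
# Hedged sketch of a carbon-budget check for a training run.
# The formula (energy x PUE x grid intensity) is a common estimation
# approach; all parameter values here are hypothetical.

def training_emissions_kg(gpu_hours, gpu_power_kw, pue, grid_kgco2_per_kwh):
    """Estimate training emissions in kg CO2: accelerator energy,
    scaled by data-center overhead (PUE) and grid carbon intensity."""
    energy_kwh = gpu_hours * gpu_power_kw * pue
    return energy_kwh * grid_kgco2_per_kwh

# Illustrative run: 50,000 GPU-hours at 0.7 kW per GPU, a PUE of 1.2,
# on a grid emitting 0.4 kg CO2 per kWh, against a hypothetical cap.
emissions = training_emissions_kg(50_000, 0.7, 1.2, 0.4)
budget_kg = 20_000  # hypothetical per-run carbon budget

print(f"{emissions:.0f} kg CO2 vs budget {budget_kg} kg")
print("within budget" if emissions <= budget_kg else "exceeds budget")
```

The point of such a check is not the specific numbers but the structure: emissions become an input to the decision to train at a given scale, rather than a downstream effect reported after the fact.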
They also note that integrating environmental constraints will involve trade-offs. Some computationally intensive practices, including certain compliance mechanisms or redundant infrastructure strategies, may need to be reconsidered under strict carbon limits. This does not imply abandoning rights protections or safety measures, but it does require prioritization and design choices that account for material constraints.
The study does not deny that AI can contribute to sustainability in specific contexts, such as energy optimization, smart grids, or logistics efficiency. However, it argues that these benefits do not offset the environmental impact of unchecked computational scaling. Without binding limits, sector-specific gains are overwhelmed by aggregate growth.
First published in: Devdiscourse