AI’s carbon footprint extends far beyond training models
Efficiency gains in artificial intelligence (AI) are often presented as evidence that the technology can scale sustainably. New research suggests those gains may be masking a broader rise in total energy use.
In "Beyond Efficiency: A Systematic Review of Energy Consumption and Carbon Footprint Across the AI Lifecycle," published in Sustainability, researchers analyze how rebound effects and expanded deployment offset efficiency improvements in AI systems.
Why AI’s energy impact does not end with training
The study challenges a major assumption in AI sustainability debates: that the environmental cost of AI is primarily driven by training large models. While training remains energy-intensive, especially for state-of-the-art models, the review finds that inference often becomes the dominant source of energy consumption once systems are deployed at scale.
In applications where AI models serve millions of users or operate continuously, inference workloads accumulate rapidly. Recommendation systems, conversational agents, image recognition services, and automated decision tools may run around the clock, generating sustained electricity demand that rivals or exceeds the one-time cost of training. The authors note that this shift is particularly pronounced as AI systems move from experimental use to embedded infrastructure.
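As a rough back-of-the-envelope illustration of this dynamic, the sketch below uses purely hypothetical figures for training cost, per-query energy, and query volume (none of them drawn from the study) to show how quickly cumulative inference energy can overtake a one-time training run.

```python
# Illustrative sketch with hypothetical numbers: a one-time training cost
# versus inference energy accumulating across millions of daily queries.

training_energy_mwh = 1_300      # assumed one-time training cost (MWh)
energy_per_query_wh = 3.0        # assumed energy per inference request (Wh)
queries_per_day = 10_000_000     # assumed daily query volume at scale

# Convert daily inference energy from Wh to MWh.
daily_inference_mwh = queries_per_day * energy_per_query_wh / 1_000_000
days_to_match_training = training_energy_mwh / daily_inference_mwh

print(f"Daily inference energy: {daily_inference_mwh:.0f} MWh")
print(f"Cumulative inference matches training after ~{days_to_match_training:.0f} days")
```

With these assumed values, inference overtakes the entire training budget in about six weeks, and every day of operation thereafter adds energy use that training-focused accounting never sees.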
The study also highlights how inference demand is closely tied to user behavior and service design. Low-latency requirements, always-on availability, and personalization features all increase computational load. As AI services expand globally, these demands scale across data centers operating in regions with different energy mixes, further complicating efforts to assess emissions.
The review also examines the role of hardware. Specialized accelerators such as GPUs and TPUs are central to modern AI, but their production carries a substantial carbon cost. Short replacement cycles, driven by rapid performance improvements and competitive pressure, mean that embodied emissions from manufacturing and disposal contribute significantly to AI’s lifecycle footprint.
These factors undermine approaches that focus narrowly on optimizing algorithms or reducing training energy alone. The authors argue that without addressing deployment scale and hardware lifecycles, efficiency gains risk being overwhelmed by growth.
The rebound effect and hidden drivers of AI emissions
The study finds strong rebound effects in AI systems. Improvements in computational efficiency often lower the cost per task, but instead of reducing total energy use, they encourage wider adoption, higher usage frequency, and expanded application domains.
As models become cheaper to run, organizations deploy them more broadly, integrate them into additional workflows, and increase query volume. In consumer-facing systems, lower costs enable features such as continuous personalization and real-time interaction, further increasing inference demand. The study finds that these dynamics frequently offset efficiency gains, leading to net increases in energy consumption.
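A minimal numerical sketch of the rebound effect, using assumed values rather than figures from the study: a 50 percent improvement in per-task efficiency is more than offset by a fourfold rise in task volume, so total energy use doubles.

```python
# Rebound-effect sketch with hypothetical numbers: per-task energy halves,
# but adoption and query volume grow faster, so total energy use rises.

energy_per_task_before = 1.0     # assumed relative energy per task (baseline)
energy_per_task_after = 0.5      # assumed 50% efficiency improvement
tasks_before = 1_000_000         # assumed baseline task volume
tasks_after = 4_000_000          # assumed volume after broader deployment

total_before = energy_per_task_before * tasks_before
total_after = energy_per_task_after * tasks_after

print(f"Total energy before: {total_before:,.0f} units")
print(f"Total energy after:  {total_after:,.0f} units")  # double, despite the efficiency gain
```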
The authors also identify organizational and market drivers that amplify AI’s environmental impact. Competitive pressure to release larger and more capable models incentivizes frequent retraining and hardware upgrades. Cloud-based deployment concentrates demand in hyperscale data centers, where localized grid constraints and carbon intensity vary widely.
Importantly, the study shows that carbon emissions from AI are not uniform. Identical workloads can produce vastly different emissions depending on where and when they are run. Regions with fossil-heavy grids impose higher carbon costs than those with cleaner energy mixes, yet most AI systems are deployed and scheduled without accounting for these differences. As a result, operational decisions that optimize for performance or cost may inadvertently maximize emissions.
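The arithmetic behind this point is simple: operational emissions are the product of energy consumed and the carbon intensity of the grid supplying it. The sketch below applies illustrative intensity values (ballpark figures, not data from the study) to one identical workload.

```python
# Identical workload, different grids: emissions = energy x grid carbon intensity.
# Intensity values are rough illustrative figures in gCO2 per kWh.

workload_energy_kwh = 10_000  # assumed energy for one workload (kWh)

grid_intensity = {
    "coal-heavy grid": 800,
    "mixed grid": 400,
    "low-carbon grid": 50,
}

for region, intensity in grid_intensity.items():
    emissions_t = workload_energy_kwh * intensity / 1_000_000  # tonnes CO2
    print(f"{region}: {emissions_t:.1f} t CO2")
```

Under these assumptions, the same job emits sixteen times more carbon on a coal-heavy grid than on a low-carbon one, which is the gap that performance- or cost-only optimization ignores.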
The review further notes that current reporting practices rarely capture these dynamics. Voluntary disclosures often focus on energy efficiency metrics or isolated benchmarks, leaving rebound effects and indirect emissions largely unaddressed.
Rethinking governance for sustainable AI systems
The authors argue that sustainability must be treated as a design constraint throughout the AI lifecycle, rather than an afterthought addressed through offsets or selective reporting.
One implication is the need for standardized lifecycle assessments that include training, inference, hardware manufacturing, and infrastructure impacts. Without consistent measurement, comparisons between systems remain misleading, and incentives for genuine emissions reduction remain weak.
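In practice, a lifecycle assessment of this kind amounts to summing emissions across the phases the authors list. The sketch below shows the structure of such an accounting with placeholder values; the categories and numbers are assumptions for illustration, not results from the review.

```python
# Lifecycle-accounting sketch: total footprint as the sum of training,
# cumulative inference, embodied hardware, and supporting infrastructure.
# All values are placeholders, not figures from the study.

lifecycle_emissions_tco2 = {
    "training": 300,             # one-time model training
    "inference": 1_200,          # cumulative over the deployment lifetime
    "embodied_hardware": 450,    # manufacturing and disposal of accelerators
    "infrastructure": 250,       # data-centre construction, cooling, networking
}

total = sum(lifecycle_emissions_tco2.values())
for phase, tco2 in lifecycle_emissions_tco2.items():
    print(f"{phase:>20}: {tco2:>6} tCO2 ({tco2 / total:.0%})")
print(f"{'total':>20}: {total:>6} tCO2")
```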
The study also calls for closer integration between AI development and energy system management. Carbon-aware scheduling, where workloads are dynamically shifted based on grid conditions and renewable availability, is identified as a promising direction. Such approaches require coordination across software, hardware, and energy markets, moving beyond the siloed optimization common today.
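A minimal sketch of what carbon-aware scheduling can look like in code, assuming access to an hourly carbon-intensity forecast: a flexible batch job is deferred to the lowest-carbon hour of the day. The forecast values and the policy itself are illustrative, not the study's proposal.

```python
# Carbon-aware scheduling sketch: defer a flexible batch job to the hour
# with the lowest forecast grid carbon intensity (gCO2/kWh).
# Forecast values are hypothetical, e.g. as supplied by a grid operator.

forecast = [520, 510, 490, 470, 450, 430, 400, 350,
            300, 260, 230, 210, 200, 205, 220, 250,
            300, 360, 420, 470, 500, 515, 525, 530]

job_energy_kwh = 800  # assumed energy for the deferred batch job

best_hour = min(range(len(forecast)), key=lambda h: forecast[h])
worst_hour = max(range(len(forecast)), key=lambda h: forecast[h])

saved_kg = job_energy_kwh * (forecast[worst_hour] - forecast[best_hour]) / 1_000
print(f"Run at hour {best_hour} ({forecast[best_hour]} gCO2/kWh) "
      f"instead of hour {worst_hour}: ~{saved_kg:.0f} kg CO2 avoided")
```

Real deployments would also need to respect latency constraints and data-residency rules, which is why the authors frame this as a coordination problem across software, hardware, and energy markets rather than a pure scheduling optimization.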
The authors argue that voluntary guidelines are unlikely to be sufficient. Binding standards, procurement rules, and disclosure requirements may be necessary to align AI innovation with climate goals. They add that sustainability trade-offs should be made explicit, allowing regulators and the public to weigh the benefits of AI deployment against its environmental costs.
- FIRST PUBLISHED IN: Devdiscourse