AI emissions far higher than reported as aging chips quietly burn more energy
Artificial intelligence (AI) adoption is accelerating across industries, powering scientific breakthroughs, global business operations and digital infrastructure. But as demand scales, so does concern about the environmental toll of the hardware powering AI systems. A new scientific analysis warns that current carbon-aware computing strategies overlook a critical factor: hardware degradation. Without accounting for how chips age, and how their energy consumption rises as they do, the AI sector risks dramatically underestimating emissions and shortening the lifespan of its most expensive equipment.
The research, published as “Federated carbon intelligence for sustainable AI: Real-time optimization across heterogeneous hardware fleets” in MRS Energy & Sustainability, introduces a federated system that dynamically routes AI workloads through mixed fleets of accelerators based on grid carbon intensity, energy efficiency, and real-time hardware health. Its central finding is clear: smarter scheduling can reduce emissions by up to 45 percent and extend fleet lifespan by more than a year, reshaping how the AI industry should think about sustainability.
Current approaches, many promoted as “carbon-aware,” still treat hardware as static. By assuming consistent performance and energy use over time, they ignore degradation that silently increases power draw as chips age. This oversight leads to higher operational emissions and premature hardware retirement, both of which undermine global sustainability commitments.
Real-time intelligence across fleets outperforms conventional carbon-aware strategies
The new framework, called Federated Carbon Intelligence (FCI), combines three pillars: dynamic grid emissions data, workload trace analysis, and hardware state-of-health monitoring. This structure enables real-time decisions that select not only the greenest energy source but also the healthiest device for each inference job. Using a combination of reinforcement learning and graph neural networks, the system evaluates each accelerator’s current efficiency and expected degradation trajectory before routing computation.
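The paper does not reproduce its scheduler, but the decision it describes, picking a device by weighing grid carbon intensity, energy efficiency and state-of-health together, can be sketched as a simple scoring function. Everything below (device fields, the wear-penalty weight, the example fleet) is an illustrative assumption, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    joules_per_inference: float   # current energy cost of one job
    health: float                 # state-of-health estimate, 1.0 = new
    grid_gco2_per_kwh: float      # carbon intensity of the device's region

def carbon_cost(dev: Accelerator) -> float:
    """Grams of CO2 emitted by one inference on this device."""
    kwh = dev.joules_per_inference / 3.6e6
    return kwh * dev.grid_gco2_per_kwh

def score(dev: Accelerator, wear_weight: float = 0.5) -> float:
    """Lower is better: operational carbon, inflated by a penalty
    for loading an already degraded device (which ages it faster)."""
    wear_penalty = wear_weight * (1.0 - dev.health)
    return carbon_cost(dev) * (1.0 + wear_penalty)

def route(fleet: list[Accelerator]) -> Accelerator:
    """Send the next inference job to the best-scoring device."""
    return min(fleet, key=score)

fleet = [
    Accelerator("A100-eu", 25.0, 0.80, 300.0),
    Accelerator("H100-us", 18.0, 0.95, 450.0),
    Accelerator("H100-is", 18.0, 0.60, 30.0),
]
best = route(fleet)  # the low-carbon grid wins despite lower health
```

A pure carbon-only scheduler would drop the `wear_penalty` term; a degradation-only scheduler would rank by `health` alone. The paper's actual system uses reinforcement learning and graph neural networks rather than a fixed formula, so this captures only the shape of the trade-off.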
This marks a departure from existing carbon-aware scheduling approaches that focus mainly on locating low-carbon electricity. The authors argue that carbon intensity alone is no longer enough. As AI accelerators age, they consume more power per inference and generate more heat, increasing cooling requirements and accelerating further wear. Without accounting for these dynamics, conventional scheduling can actually concentrate workloads on already degraded devices, compounding inefficiency and raising long-term emissions.
The study simulates mixed fleets of 1,000 accelerators, including NVIDIA A100 and H100 GPUs, Google TPUv5i, and Cerebras WSE-2 processors, over a five-year period. The scenarios model real-world conditions in which devices are distributed across data centers in regions with varying grid carbon intensities, cooling efficiency, and operational constraints.
Under these conditions, FCI reduces cumulative CO₂ emissions by up to 45 percent, with an average improvement of 37 percent over static allocation. The system also extends the lifespan of hardware fleets by approximately 1.6 years, allowing organizations to postpone costly upgrades and reduce the embodied carbon associated with manufacturing new advanced chips.
These gains place FCI ahead of two baseline strategies: a carbon-only scheduler and a degradation-only scheduler. The former lowers emissions but accelerates hardware wear by repeatedly selecting the most efficient devices. The latter preserves hardware health but sacrifices carbon performance. Only the integrated FCI approach balances both, shaping a more sustainable and resilient AI ecosystem.
Hardware aging emerges as a hidden driver of rising AI emissions
The study highlights the role of hardware aging, a factor widely overlooked in sustainability discussions. Accelerators do not operate at fixed efficiency levels. As they process trillions of operations, microscopic wear accumulates in memory cells, interconnects and transistor structures. This increases energy leakage, raises thermal output, and pushes cooling systems to work harder.
The study notes that unoptimized scheduling accelerates aging by directing disproportionate workloads to the fastest or most accessible devices. Over time, these devices require more power to complete the same inference task, eroding initial efficiency advantages and ultimately diminishing gains from carbon-aware routing.
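The dynamic described here, more energy per inference as cumulative work piles up, can be illustrated with a minimal drift model. The linear drift rate below is an assumption chosen for clarity; the study fits its own degradation trajectories:

```python
def energy_per_inference(base_joules: float,
                         total_ops: float,
                         drift_per_exaop: float = 0.02) -> float:
    """Energy cost of one inference grows with lifetime work done.

    base_joules:     energy per inference when the chip was new
    total_ops:       cumulative operations processed so far
    drift_per_exaop: assumed fractional efficiency loss per 1e18 ops
    """
    exaops = total_ops / 1e18
    return base_joules * (1.0 + drift_per_exaop * exaops)

new_cost = energy_per_inference(18.0, 0.0)     # 18 J when new
aged_cost = energy_per_inference(18.0, 50e18)  # 36 J after 50 exaops
```

Under this toy model the device's per-job energy doubles over its simulated lifetime, which is exactly why a scheduler that keeps hammering its "fastest" chips erodes its own efficiency advantage.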
Furthermore, because aging effects accumulate unevenly across fleets, entire clusters may develop performance bottlenecks that increase queuing times, drive up system-wide energy use and create unexpected spikes in emissions.
By modeling these aging mechanisms, FCI distributes workloads to avoid premature degradation of high-efficiency accelerators. This more balanced strategy allows devices to age more uniformly and predictably, reducing the likelihood of sudden failures, costly downtime, and emergency replacements. In aggregate, it transforms sustainability from a surface-level operational metric into a long-term lifecycle characteristic of AI infrastructure.
Without lifecycle-aware intelligence, the industry risks a blind surge in emissions as older fleets continue operating long past peak efficiency and new carbon-intensive fabrication cycles ramp up to replace them.
Deployment challenges highlight gaps in AI sustainability infrastructure
While the study demonstrates significant emissions reductions and efficiency gains, it also acknowledges several barriers to real-world implementation.
- Telemetry standardization is lacking. Different hardware vendors expose thermal, voltage, current and performance data through incompatible interfaces. Without standardized health metrics, integrating heterogeneous fleets becomes difficult. The authors argue that industry-wide telemetry protocols are needed to ensure accurate and consistent health modeling.
- Integration with existing orchestration systems requires new tooling. Platforms such as Kubernetes and Slurm were not designed with hardware degradation or federated carbon intelligence in mind. Retrofitting these systems demands substantial engineering effort, especially for organizations managing distributed AI workloads across continents.
- Real-world deployment must navigate policy constraints. Regulations governing data locality, national energy markets, and renewable commitments influence whether workloads can move across borders. Federated carbon intelligence must respect these constraints, striking a balance between environmental impact, system performance and legal compliance.
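To picture the telemetry-standardization gap concretely: each vendor exposes health signals through its own interface, so fleet software today must normalize them into a common record with one adapter per vendor. The schema and raw field names below are hypothetical, not an existing standard:

```python
from dataclasses import dataclass

@dataclass
class HealthRecord:
    """Vendor-neutral device health snapshot (hypothetical schema)."""
    device_id: str
    vendor: str            # "nvidia", "google", "cerebras", ...
    temperature_c: float
    power_draw_w: float
    lifetime_ops: float    # cumulative operations processed
    ecc_error_rate: float  # corrected memory errors per hour

def normalize_nvidia(raw: dict) -> HealthRecord:
    """Map one vendor's raw telemetry (invented keys, for
    illustration) onto the shared record. Every vendor in a
    heterogeneous fleet needs its own such adapter today."""
    return HealthRecord(
        device_id=raw["uuid"],
        vendor="nvidia",
        temperature_c=raw["gpu_temp"],
        power_draw_w=raw["power_mw"] / 1000.0,  # mW -> W
        lifetime_ops=raw["total_ops"],
        ecc_error_rate=raw["ecc_per_hour"],
    )
```

An industry-wide telemetry protocol of the kind the authors call for would replace this per-vendor adapter layer with a single agreed schema.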
Despite these obstacles, the study positions FCI as a critical step toward self-optimizing AI systems: infrastructure that actively adapts to both environmental and hardware realities without human intervention.
In the coming years, AI workloads will increase dramatically due to the rise of multimodal models, autonomous agents and edge-to-cloud ecosystems. Without intelligent load balancing grounded in sustainability, global emissions from AI data centers could escalate rapidly, erasing efficiency gains made elsewhere in the tech sector.
A path toward lifecycle-aware AI infrastructure
The study asserts that sustainability must be embedded into the operational fabric of AI systems rather than treated as an after-the-fact calculation. Current carbon-reporting frameworks emphasize the energy source powering each inference, but rarely account for the embodied carbon of hardware or the long-term impacts of device degradation.
FCI introduces a new paradigm in which AI infrastructure becomes a continuously adapting ecosystem, evaluating future carbon costs, hardware aging curves, and workload demand simultaneously. This dynamic approach shifts sustainability from a static carbon label to a real-time property of the system itself.
Based on their findings, the authors identify several priorities for the industry:
- Adopt lifecycle-aware scheduling. Organizations should treat hardware degradation as a first-class variable in optimizing workloads.
- Improve visibility into hardware health. Vendors and fleet operators must develop richer telemetry pipelines to allow accurate aging predictions.
- Integrate sustainability at the orchestration layer. Carbon-aware scheduling should evolve into carbon-and-lifecycle-aware orchestration, coordinated across clouds, regions and device classes.
- Plan hardware purchases around aging trajectories. Fleet planners should shift from time-based to condition-based replacement models to reduce embodied emissions.
- Align policy with federated optimization. Regulators should update data-transfer and carbon-reporting rules to support sustainability-driven routing.
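The move from time-based to condition-based replacement that the authors recommend can be sketched as a threshold rule on measured health rather than on calendar age. The thresholds here are illustrative assumptions:

```python
def should_replace(age_years: float,
                   health: float,
                   health_floor: float = 0.6,
                   max_age_years: float = 8.0) -> bool:
    """Condition-based rule: retire a device when its measured
    state-of-health drops below a floor, not on a fixed schedule.
    A hard age cap remains as a safety backstop."""
    return health < health_floor or age_years > max_age_years

keep = should_replace(5.0, 0.75)  # healthy 5-year-old: keep running
swap = should_replace(3.0, 0.55)  # degraded early: replace now
```

The payoff is the one the article describes: healthy devices outlive a fixed replacement schedule, deferring the embodied carbon of fabricating their successors, while unusually degraded devices are caught before they drag down fleet efficiency.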
These measures will help transform AI from a rapidly growing emissions source into a more accountable, lifecycle-conscious component of global digital infrastructure.
- FIRST PUBLISHED IN:
- Devdiscourse

