AI’s energy crisis prompts push for greener, more human-like intelligence

CO-EDP, VisionRI | Updated: 29-10-2025 18:41 IST | Created: 29-10-2025 18:41 IST

Artificial intelligence may be transforming industries, but it is also leaving behind a massive carbon footprint, warns a new study by researchers at the University of South Dakota, who argue that it is time to rethink how AI is built and trained.

The research, titled “Toward Carbon-Neutral Human AI: Rethinking Data, Computation, and Learning Paradigms for Sustainable Intelligence,” lays out a blueprint for a new kind of AI, one that learns like humans, adapts continuously, and operates within ecological and ethical limits.

The growing carbon burden of artificial intelligence

The study notes that training a single large model can release as much carbon dioxide as several cars emit over their entire lifetimes. This environmental toll, according to Santosh and his colleagues, reflects an industry-wide dependence on scale rather than efficiency. Current machine learning practices favor massive datasets, deeper neural networks, and intensive computation, an approach the authors describe as “data and compute maximalism.”

The researchers argue that the obsession with scale has reached a point of diminishing returns. Beyond a certain threshold, adding more data or parameters yields minimal accuracy gains but dramatically increases energy consumption. The paper calls this the “myth of big data”, stressing that more information does not necessarily mean better intelligence. Instead, true progress lies in smarter, adaptive systems that can do more with less.

The study points to the COVID-19 pandemic as a clear example of why traditional AI fails in dynamic environments. Models trained on outdated or static datasets were unable to respond to new developments, exposing the weakness of data-heavy but rigid learning systems. On the other hand, smaller models guided by human feedback and active learning showed greater agility. This observation led to the central proposal of the research: developing Human AI (HAI), a carbon-aware, human-in-the-loop system designed to mimic how people learn and reason.

Human AI: Learning efficiently under real-world constraints

The study presents the Human AI (HAI) framework, which blends human cognitive principles with AI architectures to create systems that are both efficient and sustainable. Unlike current machine learning models that require full retraining when new data arrives, HAI evolves continuously. It updates itself incrementally, integrating small pieces of new information instead of rebuilding from scratch.

The authors propose four interconnected components that make this framework viable:

  • A Meta-Learning Core – This element enables rapid adaptation to new tasks using minimal data, eliminating the need for repeated full-scale training.
  • An Active Data Selector – The system identifies the most informative data points for human annotation, reducing unnecessary processing and labeling work.
  • A Carbon-Aware Scheduler – It manages computation based on energy efficiency, scheduling intensive tasks during periods of low carbon output or renewable energy availability.
  • A Human Feedback Interface – This ensures that human oversight remains central to learning and decision-making, creating transparency and ethical accountability.
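The paper does not provide an implementation, but the role of the Active Data Selector can be illustrated with a standard active-learning heuristic, uncertainty sampling. This is a minimal sketch under assumed names (`uncertainty`, `select_for_annotation`, and the toy `predict` function are all hypothetical, not from the study):

```python
def uncertainty(prob: float) -> float:
    """Distance from a confident prediction; higher means more informative."""
    return 1.0 - abs(prob - 0.5) * 2.0

def select_for_annotation(pool, predict, budget):
    """Active Data Selector sketch: pick the `budget` most uncertain samples
    for human annotation instead of labeling everything in the pool."""
    ranked = sorted(pool, key=lambda x: uncertainty(predict(x)), reverse=True)
    return ranked[:budget]

# Toy "model": the prediction is just the input clipped to [0, 1].
predict = lambda x: min(max(x, 0.0), 1.0)

pool = [0.05, 0.48, 0.51, 0.92, 0.30]
chosen = select_for_annotation(pool, predict, budget=2)
print(chosen)  # the two samples closest to the 0.5 decision boundary
```

The design point is that human labeling effort, like compute, is a scarce resource: ranking by informativeness lets a small annotation budget go further.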

By combining these elements, HAI aims to create lifelong learning systems that remain flexible, responsible, and energy-efficient. The model’s design mirrors how humans acquire knowledge: selectively, contextually, and with an awareness of limitations. Instead of treating human input as a one-time training resource, HAI treats it as a continuous collaboration.

The study also focuses on carbon-aware computing. The authors introduce a performance metric that weighs both accuracy and environmental cost, creating what they term the “carbon-accuracy tradeoff curve.” This benchmark encourages developers to optimize not just for speed and precision, but also for sustainability. By making energy efficiency a measurable component of AI success, the paper pushes for a cultural shift in how AI progress is evaluated.
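The paper describes the carbon-accuracy tradeoff as a benchmark idea rather than a formula, so the following is only an illustrative scalarization, with hypothetical names and an assumed carbon budget for normalization:

```python
def carbon_accuracy_score(accuracy, kg_co2, carbon_budget_kg, alpha=0.7):
    """Illustrative composite metric: reward accuracy, penalize emissions.
    `alpha` weights accuracy against the (budget-normalized) carbon cost."""
    carbon_penalty = min(kg_co2 / carbon_budget_kg, 1.0)  # clip to [0, 1]
    return alpha * accuracy - (1 - alpha) * carbon_penalty

# Two candidates: a big model that is barely more accurate, and a small one.
big   = carbon_accuracy_score(accuracy=0.91, kg_co2=800, carbon_budget_kg=1000)
small = carbon_accuracy_score(accuracy=0.89, kg_co2=50,  carbon_budget_kg=1000)
print(small > big)  # True: the small model wins once carbon counts
```

Under any metric of this shape, a marginal accuracy gain bought with an order of magnitude more emissions scores worse, which is exactly the cultural shift the authors call for.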

The framework also incorporates mechanisms to prevent “catastrophic forgetting,” a common problem in continuous learning. By maintaining a selective memory buffer, HAI preserves prior knowledge while integrating new data efficiently. This approach reduces the need for repeated retraining, cutting both energy use and computational waste.
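A selective memory buffer of the kind described above is commonly built with reservoir sampling, which keeps a fixed-size uniform sample of everything seen so far. The sketch below is a generic rehearsal buffer, not the study's implementation; the class and its API are assumptions:

```python
import random

class MemoryBuffer:
    """Fixed-size rehearsal buffer using reservoir sampling: old examples
    stay available for replay, so new updates need not retrain from scratch."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a stored item with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

buf = MemoryBuffer(capacity=3)
for x in range(100):
    buf.add(x)
print(len(buf.items))  # stays at 3 no matter how much data streams in
```

Bounding the buffer is what bounds the cost: replaying a small, representative sample is far cheaper than re-running training over the full history.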

Redefining intelligence for the sustainable age

Beyond the framework itself, the authors position Human AI as part of a broader movement toward “cognitive minimalism.” The concept challenges the assumption that intelligence is tied to computational size or resource consumption. Instead, it defines intelligence as the ability to adapt effectively within constraints: environmental, cognitive, and ethical.

The authors argue that AI should evolve in the same way humans do: selectively engaging with relevant information, conserving resources, and applying contextual reasoning. They draw parallels between human cognition and a proposed AI mechanism called selective neural activation, where only the relevant parts of a model activate for a given task. This principle could drastically reduce energy demands while enhancing interpretability and responsiveness.
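Selective activation is closely related to sparse mixture-of-experts routing, where a gate picks a few relevant sub-networks per input and the rest do no work. The toy sketch below illustrates that idea only; the experts, relevance function, and top-k gate are invented for illustration, not taken from the paper:

```python
def gate(scores, k=1):
    """Pick the indices of the k experts with the highest relevance score."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def sparse_forward(x, experts, relevance, k=1):
    """Run only the selected experts; the others stay inactive (no compute)."""
    active = gate(relevance(x), k)
    return sum(experts[i](x) for i in active) / len(active), active

# Toy setup: three "experts", relevance driven by the sign/size of the input.
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: -x]
relevance = lambda x: [x, -x, abs(x)]

out, active = sparse_forward(3.0, experts, relevance, k=1)
print(active)  # only one of the three experts actually ran
```

Since compute scales with the number of active experts rather than the model's total size, this is one concrete way selective activation can cut energy use without shrinking capacity.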

The paper outlines a path toward carbon-neutral machine learning, advocating for AI models that respect both planetary and human boundaries. It introduces a multi-objective optimization model that balances four competing priorities: carbon budget, data efficiency, continual learning stability, and human attention. Every algorithmic update, the authors argue, should be evaluated not only on accuracy but also on ecological cost and ethical necessity.
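The paper frames this as multi-objective optimization without giving an explicit objective, so the snippet below shows only the simplest scalarization of the four named priorities into one score; the metric names, values, and weights are all placeholders:

```python
def update_utility(metrics, weights):
    """Scalarize the four competing objectives into one score.
    Each metric is normalized to [0, 1], higher is better; weights sum to 1."""
    keys = ("carbon_budget", "data_efficiency", "stability", "human_attention")
    return sum(weights[k] * metrics[k] for k in keys)

# A candidate algorithmic update, scored on all four axes at once.
candidate = {"carbon_budget": 0.9, "data_efficiency": 0.8,
             "stability": 0.7, "human_attention": 0.6}
weights = {"carbon_budget": 0.4, "data_efficiency": 0.2,
           "stability": 0.2, "human_attention": 0.2}
score = update_utility(candidate, weights)
print(round(score, 2))  # 0.78
```

A weighted sum is the crudest way to balance competing objectives; the point of the example is simply that ecological cost enters the score on equal footing with accuracy-style metrics.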

This shift also has governance implications. The researchers emphasize that human oversight should not be viewed as a constraint but as a governance mechanism built into the design of AI systems. By integrating humans directly into learning and decision cycles, HAI ensures accountability and compliance with emerging legal frameworks such as the EU AI Act and global sustainability goals.

The study envisions a future where AI systems do not compete with human intelligence but co-evolve alongside it. In this paradigm, AI becomes a partner in decision-making, one that adapts dynamically while remaining grounded in human judgment. Such systems would prioritize adaptability and sustainability over brute computational power, redefining what progress means in the age of intelligent machines.

To sum up, the authors contend that the current trajectory of AI development, driven by larger models, endless data collection, and escalating energy use, is not sustainable. They call for an industry-wide transition from “compute maximalism” to human-centric sustainability. This new model, they argue, would not only curb AI’s environmental footprint but also make systems more resilient, explainable, and trustworthy.

  • FIRST PUBLISHED IN:
  • Devdiscourse