AI can drive sustainability gains if human values remain central


CO-EDP, VisionRI | Updated: 28-12-2025 11:11 IST | Created: 28-12-2025 11:11 IST
Representative Image. Credit: ChatGPT

Artificial intelligence is now considered a policy instrument capable of shaping how societies respond to climate change, public health crises, food insecurity, and urban stress. A new large-scale academic review shows that this shift is no longer theoretical. Research published over the last five years indicates that human-centered approaches to AI are rapidly consolidating into a distinct field with direct implications for achieving the United Nations Sustainable Development Goals, while also exposing serious gaps in equity, governance, and global participation.

The study, titled Human-Centered AI to Accelerate the SDGs: Evidence Map (2020–2024) and published in the journal Sustainability, systematically maps global scientific production at the intersection of artificial intelligence, ethics, and sustainable development. The research offers one of the most comprehensive overviews to date of how AI research is aligning with the 2030 Agenda, and where it is falling short.

A Rapidly Expanding Research Field with Clear Priorities

The review documents a sharp acceleration in academic output related to human-centered artificial intelligence and sustainability, particularly after 2022. What began as a marginal research area at the start of the decade has become a fast-growing interdisciplinary domain spanning computer science, engineering, environmental studies, health, urban planning, and governance. Annual publication volumes increased more than tenfold between 2020 and 2024, signaling that the ethical and social dimensions of AI are no longer peripheral concerns in sustainability research.

Across this expanding literature, the authors identify three tightly interconnected pillars shaping the field. The first is technical performance, centered on machine learning, deep learning, and data-intensive modeling; these methods underpin most AI applications linked to sustainability, from climate forecasting to energy optimization. The second pillar is explainability and human-centered design, covering transparency, interpretability, accountability, and bias mitigation. This strand reflects growing recognition that AI systems influencing public policy, health, or resource management must be understandable and contestable by the people they affect. The third pillar consists of socio-environmental applications, where AI tools are deployed against concrete challenges tied to the Sustainable Development Goals, including clean energy, resilient cities, biodiversity protection, food systems, and disaster risk reduction.

Rather than treating these dimensions as separate, the literature increasingly frames them as mutually dependent. Technical advances without ethical safeguards risk amplifying inequality and environmental harm, while governance frameworks without robust technical foundations struggle to deliver measurable impact. Human-centered AI, as defined across the reviewed studies, places human values and societal outcomes at the core of the AI lifecycle, from problem formulation and data selection to deployment and evaluation.

The research also highlights the journals driving this debate. Interdisciplinary outlets such as Sustainability, alongside technically oriented journals like IEEE Access and Applied Sciences, account for a substantial share of publications. This distribution reflects the hybrid nature of the field, which blends computational innovation with social, environmental, and policy analysis.

From Energy and Health to Cities and Biodiversity

Energy systems emerge as one of the most mature application areas. AI-driven optimization of data center cooling, smart grids, and building energy management has delivered documented reductions in energy consumption and emissions, aligning directly with goals on affordable and clean energy. These applications demonstrate how algorithmic control and predictive analytics can improve efficiency in infrastructure-intensive sectors, provided they are deployed with transparency and oversight.

Public health is another area where human-centered AI shows tangible impact. Machine learning models are increasingly used for early disease detection, medical imaging, and epidemiological surveillance. Systems that analyze multilingual data streams and mobility patterns have demonstrated the ability to flag emerging health threats ahead of official alerts, strengthening early warning capacities and emergency preparedness. The review underscores that such gains are most effective when paired with ethical data governance and safeguards against surveillance overreach.

In agriculture, AI-supported decision platforms combine weather forecasting, soil monitoring, and predictive analytics to improve crop management and reduce resource waste. These tools have been shown to support productivity gains while lowering inputs such as water and fertilizer, contributing to food security and responsible consumption objectives. The authors note that human-centered design is critical in this context, as adoption depends on trust, usability, and alignment with local farming practices.

Urban systems represent another major frontier. AI-enabled traffic management, waste collection optimization, and building energy controls have demonstrated reductions in emissions, fuel use, and operational costs. These applications support goals related to sustainable cities and infrastructure resilience, particularly when integrated into broader planning frameworks rather than deployed as isolated technological fixes.

Environmental protection and biodiversity conservation also feature prominently in the literature. Computer vision models trained on millions of images now automate wildlife monitoring at scales previously impossible, accelerating responses to poaching and habitat loss. In parallel, AI-supported acoustic monitoring systems detect illegal logging in near real time, enabling faster enforcement. In marine environments, machine learning models applied to vessel tracking data help identify illegal fishing activities, strengthening compliance with conservation regulations.

Across these domains, the review finds consistent evidence that AI can shorten the cycle between detection, decision, and action. When designed around human needs and institutional capacities, AI systems enhance situational awareness and policy responsiveness. However, the authors stress that these benefits are context-dependent and cannot be assumed to scale automatically across regions or sectors.

Global Gaps, Governance Risks, and the Cost of Intelligence

Despite the optimistic trajectory, the study identifies structural weaknesses that could undermine the role of AI as a sustainability accelerator. One of the most significant is geographic imbalance. Research output is heavily concentrated in Europe, North America, and parts of East Asia, with limited representation from the Global South. Countries such as Italy, the United States, China, Germany, and Australia dominate publication counts, while regions most vulnerable to climate change and development challenges remain underrepresented in authorship and leadership roles.

This imbalance raises concerns about whose priorities shape AI-driven sustainability agendas. The review warns that AI systems developed using data, infrastructures, and regulatory assumptions from the Global North risk being misaligned with local realities when transferred to developing contexts. Without adaptation, such systems may reproduce existing inequalities, marginalize local knowledge, and exacerbate digital divides rather than closing them.

The authors also highlight the paradox at the heart of sustainable AI. While AI applications can reduce emissions and improve resource efficiency in sectors like energy, transport, and agriculture, the computational infrastructure required to train and operate advanced models carries a growing environmental footprint. Energy-intensive data centers and large-scale model training contribute to rising electricity demand and associated emissions, particularly where power grids remain carbon-intensive.

This tension has prompted increasing calls within the literature for rigorous measurement of AI’s own environmental costs. The study notes a growing emphasis on assessing energy consumption, carbon emissions, and lifecycle impacts of AI systems as part of a human-centered approach. Without such transparency, claims that AI supports sustainability risk overlooking hidden trade-offs.

Governance challenges extend beyond environmental costs. The review documents persistent risks related to algorithmic bias, opacity, and accountability, especially in applications affecting vulnerable populations. Human-centered AI frameworks emphasize participatory design, human oversight, and explainability as mechanisms to address these risks. However, the authors find that many studies still treat ethical considerations as secondary to technical performance, rather than as core design requirements.

Another limitation identified in the study relates to the evidence base itself. Most documented applications remain localized case studies or pilot projects. While these demonstrate feasibility and potential impact, they do not yet constitute a robust basis for generalization. Scaling AI solutions for sustainability requires institutional capacity, regulatory alignment, and long-term evaluation, elements that are often missing from current implementations.

The authors argue that policy frameworks must evolve in parallel with technological advances. National AI strategies and sustainability policies need to be explicitly linked, with clear performance metrics tied to Sustainable Development Goal targets. Public investment, regulatory incentives, and international cooperation are identified as critical levers for steering AI innovation toward public value rather than narrow commercial gains.

  • FIRST PUBLISHED IN:
  • Devdiscourse