Trustworthy AI systems overlook environmental accountability


A new study warns that the systems designed to ensure trustworthy AI are failing to address one of the most critical dimensions of accountability: sustainability. The environmental footprint of these technologies is expanding, raising urgent concerns about how they are governed.

The study, authored by Fatemeh Ahmadi Zeleti of the University of Galway, titled "Systems Governance for Trustworthy AI: A Framework for Environmental Accountability" and published in Systems, examines how existing AI trust mechanisms fall short of integrating environmental accountability into governance frameworks. The research introduces a systems-based framework that positions AI trust mechanisms not just as ethical safeguards but as governance infrastructures capable of managing environmental impacts across the full lifecycle of AI systems.

AI trust frameworks overlook environmental accountability despite rising impact

AI is now embedded in critical public and industrial systems, from urban mobility and logistics to energy management and infrastructure planning. These systems promise efficiency gains through optimisation and automation, but they also generate significant environmental costs, including high energy consumption, data centre loads, water use, and hardware-related emissions.

Despite this growing footprint, the study finds that existing AI trust mechanisms remain narrowly focused on ethical and legal dimensions such as fairness, transparency, and privacy. Certification schemes, trust labels, and assurance frameworks are widely used to signal compliance with these principles, yet they rarely incorporate measurable environmental criteria. As a result, AI systems can be considered "trustworthy" while still contributing to ecological degradation.

The research identifies this as a structural gap in AI governance. Trust mechanisms are designed to translate ethical principles into assessable criteria, but sustainability is typically treated as a peripheral concern rather than an operational requirement. Environmental impacts, which occur across the entire lifecycle of AI systems from training to deployment and disposal, are largely excluded from certification processes.

This disconnect reflects broader institutional priorities. Regulatory frameworks such as the EU AI Act emphasize risk management and fundamental rights, reinforcing a focus on ethical compliance. Meanwhile, environmental considerations remain difficult to measure and standardize, limiting their integration into governance systems.

The analysis of nine major AI trust initiatives reveals a consistent pattern. None of the examined frameworks require measurable environmental performance indicators such as energy use, carbon emissions, or resource efficiency. While some initiatives reference sustainability at a conceptual level, these references are not translated into auditable or enforceable criteria.

This absence has significant implications. Without measurable indicators, environmental impacts remain invisible within governance processes, preventing effective oversight and accountability. The study argues that this lack of visibility weakens the feedback mechanisms needed to align AI development with sustainability goals.

A systems governance framework links trust, sustainability, and accountability

To address this gap, the study introduces a three-dimensional analytical framework that integrates environmental performance into AI governance. The framework reconceptualises trust mechanisms as socio-technical systems that shape how information flows, how decisions are made, and how accountability is enforced.

The framework is built around three core dimensions: trust-building effectiveness, governance readiness and institutional integration, and sustainable adoption. Together, these dimensions capture how AI systems communicate accountability, how governance structures support oversight, and how systems adapt over time through feedback loops.

Trust-building effectiveness focuses on how information about AI performance is communicated to stakeholders. In current systems, transparency mechanisms highlight ethical safeguards but fail to disclose environmental impacts. This limits the ability of users, regulators, and organisations to assess the sustainability of AI systems.

Governance readiness examines how environmental accountability is embedded within institutional structures, including certification criteria, audit processes, and regulatory frameworks. The study finds that most trust mechanisms lack formal integration of environmental metrics, reflecting a governance design that prioritizes ethical compliance over resource management.

Sustainable adoption addresses the lifecycle dimension of AI systems, emphasizing the need for continuous monitoring, feedback, and adaptation. The absence of environmental performance indicators disrupts this process, preventing systems from learning and improving over time in response to ecological impacts.

The study highlights the role of Environmental Performance Indicators (EPIs) as a critical component of this framework. EPIs function as governance tools that define what is measured, reported, and improved. Metrics such as energy consumption, carbon intensity, and hardware lifecycle impacts can provide the basis for accountability, but only if they are embedded within certification and reporting systems.

Without such integration, sustainability remains symbolic rather than operational. The research emphasizes that indicators must be measurable, disclosed, and subject to verification to influence organisational behaviour. This distinction is central to the study's argument that environmental accountability must move from principle to practice.
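To make the idea concrete, an EPI of the kind described above can be as simple as pairing a measured energy figure with the carbon intensity of the local grid. The sketch below is purely illustrative: the field names, the 5,000 kWh workload, and the 300 gCO2/kWh grid figure are assumptions for demonstration, not values from the study.

```python
# Illustrative sketch of a minimal Environmental Performance
# Indicator (EPI) record of the kind the study argues should be
# measurable, disclosed, and verifiable. All figures are
# hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class EpiReport:
    energy_kwh: float               # measured energy use of a workload
    grid_intensity_gco2_kwh: float  # carbon intensity of the local grid

    @property
    def emissions_kgco2(self) -> float:
        # emissions (kg) = energy (kWh) x intensity (gCO2/kWh) / 1000
        return self.energy_kwh * self.grid_intensity_gco2_kwh / 1000.0


# A hypothetical training run: 5,000 kWh on a grid averaging
# 300 gCO2 per kWh.
report = EpiReport(energy_kwh=5000.0, grid_intensity_gco2_kwh=300.0)
print(f"{report.emissions_kgco2:.0f} kg CO2")  # prints "1500 kg CO2"
```

The point is not the arithmetic but the governance role: once such a record is part of a certification or reporting system, it becomes something an auditor can check.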

Real-world cases reveal governance gaps in AI-driven systems

The study applies its framework to two real-world urban mobility systems: Helsinki's Whim application and Barcelona's smart mobility ecosystem. These cases illustrate how AI can contribute to sustainability while also exposing governance limitations.

Helsinki's Whim platform uses AI-driven route optimisation and multimodal planning to reduce reliance on private cars. The system integrates real-time data and algorithmic decision-making to improve transport efficiency. While the platform promotes sustainability through reduced emissions and increased use of shared mobility, its environmental performance is not systematically measured or disclosed within governance frameworks.

Environmental claims associated with the platform remain largely narrative rather than verifiable. Metrics such as carbon emissions per trip or energy use are not integrated into user-facing systems or certification processes. This reflects a broader issue in privately operated AI systems, where market incentives prioritize performance and scalability over environmental accountability.
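A per-trip emissions metric of the kind the article says is absent from user-facing systems could be computed from per-mode emission factors. The sketch below is a hypothetical illustration: the emission factors and the multimodal journey are assumed placeholder values, not figures from the Whim platform or the study.

```python
# Hypothetical sketch of a per-trip carbon metric for a multimodal
# journey. Emission factors are illustrative placeholders (gCO2 per
# passenger-km), not measured values.

EMISSION_FACTORS_G_PER_KM = {
    "car": 170.0,
    "bus": 95.0,
    "metro": 30.0,
    "bike": 0.0,
}


def trip_emissions_g(legs: list[tuple[str, float]]) -> float:
    """Sum emissions over a trip given (mode, distance_km) legs."""
    return sum(EMISSION_FACTORS_G_PER_KM[mode] * km for mode, km in legs)


# A hypothetical journey: 2 km by bus, then 8 km by metro.
print(trip_emissions_g([("bus", 2.0), ("metro", 8.0)]))  # prints 430.0
```

Surfacing a number like this per trip, and feeding it into certification, is what would turn a narrative sustainability claim into a verifiable one.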

Barcelona's smart mobility system presents a different model, characterized by strong public governance and institutional oversight. The city integrates AI-driven monitoring and optimisation into a broader smart city framework, supported by open data platforms and public transparency initiatives. Environmental indicators such as air quality and emissions are monitored and made accessible to stakeholders.

However, even in this more advanced governance context, environmental accountability is not embedded within AI-specific trust mechanisms. Sustainability is managed at the system level rather than through certification or assurance processes. This highlights a disconnect between broader environmental governance and AI-specific accountability frameworks.

The comparison underscores a key finding: environmental performance in AI systems is influenced by governance conditions rather than technology alone. Both cases demonstrate the potential of AI to support eco-efficient outcomes, but neither provides a standardized or verifiable framework for measuring and reporting environmental impacts.

Bridging the gap between AI governance and sustainability goals

AI is seen as a potential tool for advancing sustainability goals, including climate action, energy efficiency, and urban resilience. However, the absence of operational environmental indicators limits the ability of policymakers to evaluate and manage its impact.

Without standardized metrics, sustainability claims related to AI remain difficult to verify, undermining alignment with global objectives such as the Sustainable Development Goals. The research emphasizes that integrating EPIs into governance frameworks is essential for bridging this gap.

The study also highlights the importance of responsible resource governance, which extends AI accountability beyond ethical considerations to include material and energy flows. This approach recognizes that AI systems are embedded in resource-intensive infrastructures and that governance must address their environmental implications across the entire lifecycle.

From a systems perspective, trust mechanisms play a critical role in shaping behaviour. By defining what information is measured and disclosed, they influence how organisations design, deploy, and evaluate AI systems. Integrating environmental indicators into these mechanisms can therefore drive more sustainable practices.

However, the study acknowledges that this integration presents challenges. Environmental impacts are often distributed across complex supply chains, making measurement and standardization difficult. Incorporating sustainability criteria into certification processes may also increase complexity and cost, requiring careful design and prioritisation.

Toward environmentally accountable AI governance

Current AI trust mechanisms are insufficient for addressing the environmental challenges associated with digital transformation. While they provide valuable tools for ethical assurance, they lack the capacity to manage ecological impacts, creating a gap in governance.

To address this, the research calls for a shift toward integrated governance frameworks that combine ethical, technical, and environmental accountability. This includes embedding measurable environmental indicators into certification schemes, enhancing transparency through public reporting, and strengthening institutional alignment with sustainability standards.

The study also outlines recommendations for key stakeholders. Academic research should focus on developing and validating integrated frameworks that incorporate environmental dimensions. Regulators should mandate disclosure of energy use, emissions, and resource impacts, while aligning AI governance with climate policy objectives. Industry and certification bodies should adopt sustainability-by-design approaches, integrating environmental metrics into development and assurance processes.

  • FIRST PUBLISHED IN:
  • Devdiscourse