Green AI dilemma: Sustainability gains come at the cost of user confidence


CO-EDP, VisionRI | Updated: 30-03-2026 06:42 IST | Created: 30-03-2026 06:42 IST

Artificial intelligence (AI) systems are driving unprecedented environmental costs, and a new study finds that making these costs visible to users can sharply alter behavior. The researchers show that energy transparency tools embedded in AI interfaces significantly nudge users toward more sustainable choices, but may simultaneously reduce perceived quality and satisfaction, exposing a critical trade-off at the heart of responsible AI design.

The study, titled “Good for the Planet, Bad for Me? Intended and Unintended Consequences of AI Energy Consumption Disclosure,” was presented at the CHI 2026 Conference on Human Factors in Computing Systems. It examines how disclosing the energy consumption of AI models influences user decisions, behavior, and perception, offering one of the first empirical insights into how sustainability signals shape real-world interaction with generative AI systems.

The research highlights a growing dilemma: making AI more environmentally transparent can encourage greener choices, but may unintentionally erode trust in the very systems users adopt.

Energy disclosure dramatically shifts AI model choices

The study is based on a simple but powerful intervention: energy consumption disclosure. Researchers designed an experiment involving 365 participants, asking them to choose between two AI models, one framed as high-performance and energy-intensive, and another presented as more efficient but less powerful.

The results were striking. When no energy information was provided, only a small fraction of users selected the energy-efficient model. However, when an energy label was introduced, the proportion of users choosing the sustainable option surged dramatically. The odds of selecting the efficient model increased more than twelvefold, demonstrating a strong behavioral impact rarely seen in digital nudging interventions.
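To make the "twelvefold" figure concrete, an odds ratio compares the odds of a choice across two conditions rather than the raw percentages. The counts below are hypothetical illustrations, not the study's actual data; they are chosen only to show how a shift in choice share translates into an odds ratio above 12.

```python
# Illustrative only: the study reports that the odds of choosing the
# efficient model rose more than twelvefold with an energy label.
# These counts are hypothetical, not the paper's data.
def odds_ratio(chose_a, total_a, chose_b, total_b):
    """Odds of choosing the efficient model in group A vs. group B."""
    odds_a = chose_a / (total_a - chose_a)
    odds_b = chose_b / (total_b - chose_b)
    return odds_a / odds_b

# Hypothetical: 10% pick the efficient model with no label shown,
# versus 58% when an energy label is displayed.
with_label = odds_ratio(58, 100, 10, 100)
print(round(with_label, 1))  # → 12.4
```

Note that a percentage jump from 10% to 58% looks like "about six times as many users," yet the odds ratio is over 12, because odds weight both the choosers and the non-choosers.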

This finding challenges earlier assumptions that nudges typically produce only modest effects. In contrast, the presence of a clear energy label proved to be a decisive factor, outweighing even users’ pre-existing environmental attitudes in many cases. The study shows that interface design, rather than personal values alone, can strongly shape decision-making in AI systems.

The mechanism behind this shift is rooted in nudge theory, which suggests that small changes in how choices are presented can influence behavior without restricting freedom. By making the environmental cost of AI visible, users were more likely to align their decisions with sustainability goals.

As regulators increasingly push for transparency in AI systems, energy labeling could become a standard feature, similar to efficiency labels used for appliances or nutrition labels on food products. However, the study also highlights that such transparency introduces a direct trade-off for users. Choosing a smaller, energy-efficient model often implies lower performance, creating a tension between sustainability and functionality that users must actively navigate.

No behavioral change after the choice, but a clear perception bias emerges

While energy labels successfully influenced initial decisions, the study found no evidence that these choices altered how users interacted with AI systems afterward. Participants who selected energy-efficient models did not use them more intensively or differently compared to those who chose high-performance models.

This finding contradicts expectations based on moral licensing theory, which suggests that individuals who make an ethical choice may later compensate by behaving less responsibly. In this case, users who opted for the greener AI model did not increase their usage or engage in more resource-intensive behavior.

However, the absence of behavioral change does not mean the intervention had no further consequences. Instead, the study uncovered a significant psychological effect: users who chose the energy-efficient model consistently reported lower satisfaction and perceived quality.

Importantly, this perception gap emerged despite the fact that both models in the experiment were technically identical. The difference was purely in how they were labeled and presented to users. This phenomenon aligns with the concept of the placebo effect, where expectations shape subjective experience. In this case, users assumed that the “eco-friendly” model was less capable, leading them to rate its performance more negatively even when it delivered the same results.

The findings suggest that user perception in AI systems is highly sensitive to framing. Labels, ratings, and contextual cues can significantly influence how users evaluate performance, regardless of actual output quality. This creates a new challenge for developers: improving transparency may inadvertently undermine user confidence if sustainability is framed as a compromise rather than a benefit.

Sustainability push creates ‘Good for the Planet, Bad for Me’ trade-off

While energy disclosure can effectively promote pro-environmental behavior, it also risks creating a perception that sustainable choices come at the cost of personal benefit.

This “good for the planet, bad for me” dynamic highlights the complexity of integrating sustainability into user-facing technologies. Unlike traditional products such as lightbulbs, where efficiency improvements do not reduce performance, AI systems often require users to balance energy savings against potential drops in output quality.

The research shows that this trade-off is not just technical but psychological. Even when performance differences are minimal or nonexistent, users may still perceive sustainable options as inferior due to expectations shaped by labeling. For designers, this presents a critical challenge. Simply adding energy labels may not be enough. Instead, more sophisticated strategies are needed to frame sustainability as a positive feature rather than a limitation.

The study suggests several possible approaches. These include emphasizing the strengths of efficient models, such as faster response times or cost savings, and designing interfaces that guide users toward sustainable choices without highlighting perceived drawbacks. Another potential solution lies in automation. Rather than asking users to choose between models, systems could automatically route queries to the most efficient model capable of handling the task, reducing the cognitive burden on users while maintaining performance standards.
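The routing idea described above can be sketched in a few lines. Everything here is a hypothetical assumption for illustration: the model names, energy figures, capability scores, and the toy difficulty heuristic are not from the study, which only proposes routing as a design direction.

```python
# Minimal sketch of energy-aware query routing: send each query to the
# cheapest model expected to handle it adequately. All names, numbers,
# and heuristics below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    energy_wh_per_query: float  # assumed per-query energy cost
    capability: int             # assumed capability score (higher = stronger)

MODELS = [
    Model("efficient-small", energy_wh_per_query=0.3, capability=1),
    Model("standard-large", energy_wh_per_query=3.0, capability=3),
]

def estimate_difficulty(query: str) -> int:
    """Toy heuristic: long or multi-step queries count as harder."""
    if len(query) > 200 or "step by step" in query.lower():
        return 3
    return 1

def route(query: str) -> Model:
    """Pick the lowest-energy model whose capability meets the estimate."""
    needed = estimate_difficulty(query)
    candidates = [m for m in MODELS if m.capability >= needed]
    return min(candidates, key=lambda m: m.energy_wh_per_query)

print(route("What is the capital of France?").name)  # → efficient-small
```

The design point is that the user never sees a sustainability trade-off at all: the interface makes the green choice by default, sidestepping the perception penalty the study documents.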

As AI adoption continues to grow, even small changes in user behavior can have large-scale environmental impacts. Encouraging millions of users to choose more efficient models could substantially reduce the energy footprint of AI systems. The study also underscores the importance of user education. Transparency alone is not sufficient; users must also understand the implications of their choices and feel confident that sustainable options can meet their needs.

  • FIRST PUBLISHED IN:
  • Devdiscourse