Sustainable AI remains possible but only with strong governance and regulation
The collision of sustainability and technology now raises a critical question for policymakers and industry alike: can artificial intelligence (AI) grow without undermining the very systems it claims to improve?
That question is examined in the study "May AI be sustainable? An outlook on sustainability and technology," published in AI & Society. The review offers a wide-ranging assessment of artificial intelligence through the lens of sustainability, arguing that AI's future depends on whether it can meet environmental, economic, and social goals at the same time rather than advancing one at the expense of the others.
Redefining sustainability in the age of artificial intelligence
The study begins by correcting a common misunderstanding of sustainability in technology debates. Sustainability is not limited to reducing carbon emissions or energy use. Instead, it rests on three equally important pillars: environmental protection, economic viability, and social well-being. A technology that excels in only one of these areas cannot be considered sustainable if it damages the others over time.
Applying this framework to artificial intelligence reveals a complex picture. AI has demonstrated a strong ability to improve efficiency, reduce costs, and enhance decision-making across many sectors. In healthcare, AI systems have already surpassed human specialists in early detection of diseases such as breast and prostate cancer, allowing earlier treatment and better patient outcomes. In logistics, manufacturing, and transport, AI-driven optimization has improved safety, reduced waste, and lowered operational costs. In assistive technologies, AI-powered prosthetics and exoskeletons are improving mobility and quality of life for older adults and people with disabilities.
These gains align closely with social and economic sustainability goals. Affordable healthcare, safer transport, and accessible assistive devices all contribute to long-term societal resilience. The authors argue that when AI is designed with these outcomes in mind, it can play a direct role in supporting sustainable development.
However, the paper stresses that sustainability cannot be claimed based on benefits alone. Every technology must also be assessed for its lifecycle costs, unintended consequences, and long-term systemic effects. In this respect, artificial intelligence presents serious risks that are often overlooked in public discussions focused on innovation and growth.
One of the most pressing concerns is environmental impact. Training and running large AI models require vast computing power, which in turn demands significant electricity and water resources. Major technology companies have reported sharp increases in carbon emissions in recent years, driven largely by the expansion of data centers needed to support AI services. These increases risk offsetting progress made in other areas of decarbonization, particularly if AI deployment continues to accelerate without parallel investment in clean energy and efficiency.
The study also highlights a common flaw in how AI’s environmental footprint is measured. Many optimistic assessments focus only on the final training stage of models while ignoring the energy-intensive development process, repeated experiments, and ongoing operational demands. When these factors are included, the true environmental cost of advanced AI systems becomes far more substantial.
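To make that accounting gap concrete, here is a minimal Python sketch of lifecycle energy bookkeeping. The stage names and all figures are hypothetical placeholders, not measurements from the study; the point is only that reporting the final training run alone can understate the total footprint by a large factor.

```python
# Illustrative lifecycle accounting for a model's energy footprint.
# All figures are hypothetical placeholders, not data from the study;
# the point is the bookkeeping, not the numbers.

# Energy in MWh for each lifecycle stage of a hypothetical model
stages = {
    "architecture_experiments": 400.0,  # discarded trial runs
    "hyperparameter_search":    250.0,  # repeated partial trainings
    "final_training_run":       300.0,  # the stage often reported alone
    "inference_per_year":       900.0,  # serving queries in production
}

final_only = stages["final_training_run"]
lifecycle_total = sum(stages.values())

print(f"Final training run only: {final_only:,.0f} MWh")
print(f"Full lifecycle (1 yr of serving): {lifecycle_total:,.0f} MWh")
print(f"Underestimate factor: {lifecycle_total / final_only:.1f}x")
```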
Economic gains versus social disruption
Artificial intelligence has proven effective at reducing labor costs and increasing productivity, making it highly attractive to businesses. Yet these same efficiencies can threaten job security, income stability, and fair labor practices if not carefully managed.
The authors warn that current economic systems often reward automation over human augmentation. Tax structures and corporate incentives tend to favor replacing workers with AI rather than using AI to support and enhance human skills. This dynamic risks widening income inequality, concentrating wealth among technology owners, and weakening the bargaining power of workers across many sectors.
Evidence cited in the study suggests that AI-driven restructuring is already reshaping labor markets. Companies adopting AI at scale have slowed hiring and reduced headcounts, even as productivity and profits increase. While some employees may benefit from higher wages or reduced workloads, many others face job loss, downward pressure on wages, or increased performance demands tied to AI-enhanced benchmarks.
The paper also draws attention to less visible labor issues within the AI ecosystem itself. The development of AI models relies heavily on human labor for data labeling, content moderation, and model testing. These tasks are often outsourced to low-paid workers who face high workloads, strict performance monitoring, and limited job security. Such practices undermine social sustainability by shifting the hidden costs of AI development onto vulnerable groups.
Another key risk identified is the concentration of power. Control over advanced AI systems is held by a relatively small number of companies and institutions, giving them disproportionate influence over information flows, labor markets, and even political processes. Without effective oversight, this concentration could reshape power balances between nations, corporations, and citizens in ways that are difficult to reverse.
The authors argue that these trends challenge the assumption that technological progress naturally leads to shared prosperity. Instead, they point to the need for deliberate policy choices that ensure AI-driven economic gains are distributed fairly and support long-term social stability.
Cultural integrity, regulation, and the path to sustainable AI
The study also addresses the relationship between generative AI and cultural sustainability. Generative models for text, images, music, and video are trained on massive collections of existing human-created content, much of it protected by copyright. The widespread use of such material without consent, attribution, or compensation has triggered legal disputes and protests across creative industries.
The authors argue that this issue goes beyond legal compliance. Cultural production depends on fair recognition and reward for creative labor. If artists, writers, and journalists are systematically excluded from the economic value generated by AI systems trained on their work, incentives to create original content may decline. Over time, this could reduce the diversity and quality of cultural output available for both human audiences and future AI training.
The study also highlights technical risks linked to this trend. Research shows that AI models trained increasingly on AI-generated content rather than human-created material can degrade in quality, producing repetitive, less accurate, and less diverse outputs. In this sense, exploiting creative labor without sustaining it may undermine the long-term viability of AI itself.
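A toy simulation can illustrate this feedback loop. In the sketch below, a one-dimensional Gaussian stands in for a generative model, and each "generation" is refit only to samples produced by its predecessor, with the distribution's tails slightly under-sampled (a simplified stand-in for the approximation errors discussed in the model-collapse literature). All parameters are arbitrary illustrative choices.

```python
# Toy sketch of recursive training on synthetic data: each generation
# is fit only to (tail-trimmed) samples from the previous generation,
# with no fresh human-created data added back in.
import random
import statistics

random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: fit to "human-created" data

for generation in range(1, 9):
    # Sample from the current model, drop the extreme 5% on each side
    # (the under-sampled tails), then refit the next model to the rest.
    draws = sorted(random.gauss(mu, sigma) for _ in range(2000))
    kept = draws[100:-100]  # middle 90% of 2000 samples
    mu = statistics.fmean(kept)
    sigma = statistics.stdev(kept)
    print(f"generation {generation}: stdev of outputs = {sigma:.3f}")
```

Run as written, the printed standard deviation shrinks steadily from one generation to the next, mirroring the narrowing, less diverse outputs the study describes.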
Misinformation and manipulation represent another major threat to social sustainability. AI systems can produce highly persuasive but false content, amplifying existing biases and enabling large-scale deception. The authors note that while such risks are often framed as technical problems, they are deeply social in nature. Users may lack the skills or resources to critically evaluate AI-generated information, making them vulnerable to manipulation in political, financial, and social contexts.
Privacy and surveillance concerns further complicate the picture. AI-powered data analysis, facial recognition, and behavioral monitoring tools are increasingly used by governments and corporations. In some contexts, these technologies are already enabling intrusive surveillance and social control, particularly in authoritarian settings. Without strong legal safeguards, the expansion of AI risks eroding personal freedoms and democratic norms.
Despite these challenges, the study does not conclude that sustainable AI is unattainable. Instead, it outlines conditions under which AI could align with sustainability principles. Central to this vision is governance. The authors call for clear regulatory frameworks that address environmental impact, labor practices, data use, and accountability. They emphasize that regulation should not aim to slow innovation for its own sake, but to steer AI development toward outcomes that support long-term societal goals.
The paper also calls for interdisciplinary approaches. Sustainable AI cannot be achieved through technical design alone. It requires collaboration between engineers, economists, social scientists, policymakers, and affected communities. Decisions about how AI is deployed, who benefits from it, and who bears its costs are fundamentally political and ethical questions, not purely technical ones.
FIRST PUBLISHED IN: Devdiscourse