AI is reshaping education, but sustainability is far from assured
Education systems around the world are embracing artificial intelligence (AI) in response to rising costs, expanding enrollment, and digital transformation goals. However, new research published in the journal Sustainability finds that the resulting efficiency gains do not automatically translate into sustainable education and may, under certain conditions, weaken equity, academic integrity, and the long-term social mission of education.
The study, titled Sustainable Education in the Age of Artificial Intelligence and Digitalization: A Value-Critical Approach, challenges the assumption that technological innovation automatically advances sustainability in education. Instead, it argues that AI can either support or undermine sustainable education depending on how it is governed, regulated, and ethically framed.
When efficiency replaces educational purpose
AI-driven educational reform often substitutes efficiency for sustainability. In many policy and institutional contexts, sustainability is implicitly redefined as scalability, optimization, and measurable performance. AI systems are praised for automating assessment, predicting student outcomes, and streamlining administration, but these gains are frequently treated as evidence of educational sustainability without examining whether they align with deeper educational aims.
The research describes this dynamic as an efficiency–sustainability substitution mechanism. Under this mechanism, sustainability becomes synonymous with what can be quantified, automated, and benchmarked. Learning outcomes are increasingly measured through dashboards, predictive analytics, and performance indicators, while formative goals such as critical thinking, moral development, and democratic participation receive less attention.
This shift does not require explicit market rhetoric. It often emerges through modernization narratives that equate digital transformation with progress. As a result, educational meaning is displaced by technical performance, and sustainability discourse becomes metric-driven rather than value-driven. The study warns that this transformation risks hollowing out education’s human dimension, turning learning into a managed process rather than a formative experience.
The authors argue that this substitution can be resisted, but only under specific conditions. AI supports sustainable education when its adoption is subordinated to explicit educational purposes, ethical priorities, and human oversight. When educators retain authority over pedagogical decisions and when values such as dignity and social responsibility guide system design, efficiency gains do not automatically crowd out educational meaning.
Datafication, commodification, and the governance problem
Datafication emerges as a critical pathway through which AI can undermine sustainable education. AI systems dramatically expand the collection and analysis of educational data, including behavioral traces, performance metrics, and predictive risk profiles. These data streams increasingly shape institutional decision-making, redefining accountability around analytics and comparative metrics.
The research outlines a governance pathway that links datafication to commodification. As educational processes are translated into data, learning and student behavior become assets that can be managed, exchanged, and monetized. Platform-based governance aligns educational value with market logic, emphasizing competitiveness, visibility, and efficiency over ethical and pedagogical considerations.
Under weak regulatory conditions, AI-driven platforms accelerate this transformation. Learning analytics dashboards, automated feedback systems, and predictive modeling tools subtly reshape how success is defined and how resources are allocated. Students are increasingly represented as data profiles, while educational outcomes are reduced to standardized outputs compatible with market coordination.
The study stresses that this outcome is not technologically inevitable. It is mediated by governance choices. Public-oriented data governance, transparency in algorithmic decision-making, and limits on extractive data practices can constrain commodification. When institutions treat AI as a public educational tool rather than a market instrument, the datafication pathway can be redirected toward supportive rather than extractive outcomes.
However, the authors caution that current trends favor platform dominance and managerial accountability. Without deliberate intervention, AI adoption risks reinforcing commercial priorities at the expense of educational values, particularly in higher education systems exposed to international competition and privatization pressures.
Equity, ethics, and the conditions for sustainable AI in education
The research finds that AI often amplifies existing inequalities. Access to AI-driven educational tools depends on infrastructure, institutional capacity, and digital literacy, all of which vary sharply across regions and social groups.
Students from low-income, rural, and marginalized backgrounds frequently benefit less from AI-based systems, not because of technology itself but because of uneven governance and resource distribution. AI does not generate equity autonomously. Instead, it magnifies existing structural conditions. In well-governed contexts, AI can support inclusion and personalized learning. In poorly regulated environments, it deepens exclusion and bias.
Ethical concerns further complicate sustainability claims. The automation of assessment and feedback raises questions about authorship, originality, and academic integrity. Generative AI tools blur boundaries between assistance and substitution, potentially weakening critical reasoning and responsibility for knowledge production. Algorithmic bias, opacity, and surveillance-driven monitoring threaten trust in educational institutions and undermine epistemic integrity.
The authors introduce the concept of normative friction as a necessary condition for sustainable AI adoption. Normative friction refers to ethical constraints, deliberative oversight, and institutional safeguards that slow down purely instrumental adoption. Where such friction is absent, AI integration follows managerial rationalities focused on what works and what scales. Where it is present, AI remains subordinate to educational purpose.
The study highlights culturally grounded, value-based educational contexts as analytically revealing environments. In settings where education is closely tied to moral formation and epistemic authority, AI’s impact becomes especially visible. These contexts expose how AI can commodify knowledge, displace teacher authority, and reshape the meaning of learning when not carefully governed.
Importantly, the authors do not treat these contexts as exceptional. Instead, they function as sensitivity amplifiers that reveal dynamics present across education systems more broadly. The same AI system may be enabling in one context and corrosive in another, depending on governance, values, and institutional priorities.
A conditional model for sustainable education in the AI era
According to the study’s conditional model, AI supports sustainable education only when sustainability is not substituted by efficiency metrics, when datafication is governed by public-oriented accountability, when ethical constraints shape adoption, when equity is treated as a governance responsibility, and when epistemic integrity is actively protected.
This framework moves beyond the simplistic claim that AI has benefits and risks. Instead, it specifies mechanisms and boundary conditions that determine outcomes. The decisive factor is not technology itself but the ethical, institutional, and political frameworks that govern its use.
At the policy level, the findings challenge innovation-centered education strategies that prioritize competitiveness and digital adoption. Sustainable education requires governance frameworks that align AI deployment with social justice, human dignity, and long-term educational aims. This includes robust data protection, algorithmic accountability, and investment in reducing digital divides.
For educational practice, the study reinforces the central role of educators. AI must remain a supportive instrument rather than an authoritative substitute. Sustainable integration depends on preserving reflective pedagogy, fostering critical engagement with AI outputs, and maintaining human relationships at the core of learning.
First published in: Devdiscourse

