Digital literacy no longer optional; it’s a fundamental right in the AI age


CO-EDP, VisionRI | Updated: 15-10-2025 22:50 IST | Created: 15-10-2025 22:50 IST

Artificial intelligence has entered classrooms faster than policymakers can legislate its boundaries. A new study dissects this regulatory gap, revealing how Europe’s AI laws intersect with the emerging right to digital literacy. Published in Frontiers in Computer Science, the study “Legal Perspectives on AI and the Right to Digital Literacy in Education” provides one of the first comprehensive legal analyses of how the EU AI Act reshapes responsibilities for educators, governments, and students across Europe.

The authors argue that while AI promises personalized learning and administrative efficiency, it also brings profound legal and ethical challenges that demand immediate attention. From algorithmic grading and automated admissions to emotion-recognition systems and generative learning tools, the study identifies a critical need for transparency, accountability, and human oversight in educational AI systems.

Defining digital literacy as a legal right

The paper establishes that digital literacy is no longer a policy preference but a legal entitlement. Drawing from the Charter of Fundamental Rights of the European Union and the Greek Constitution, the study situates digital literacy within the broader rights to education, information, and personal development.

The authors highlight Articles 5A and 16 of the Greek Constitution, which ensure citizens’ access to modern technology and education. They interpret these provisions as implicitly supporting a right to digital literacy, meaning that every student should have equitable access not only to digital tools but also to the skills required to navigate them critically and responsibly.

At the European level, the study connects this right with EU directives on education and digital inclusion, which emphasize that equal access to technology is essential for democratic participation in the digital age. The authors argue that as artificial intelligence becomes embedded in educational settings, the right to digital literacy must evolve to include AI literacy: the ability to understand, interpret, and challenge algorithmic decisions that influence learning outcomes.

This interpretation aligns with a growing consensus among European scholars that education systems must prepare students to function not only as digital citizens but also as AI-literate citizens, capable of recognizing bias, understanding automated reasoning, and exercising their rights in the face of algorithmic decision-making.

AI as a high-risk technology in classrooms

The study situates the EU AI Act (Regulation 2024/1689) at the center of the educational debate. According to this regulation, most AI systems used in educational contexts, such as those that determine student admissions, assign grades, or monitor behavior, are officially designated as “high-risk.”

This classification carries significant implications. Institutions deploying such systems must conduct fundamental rights impact assessments, ensure algorithmic transparency, maintain human oversight, and document compliance with data protection regulations. The study emphasizes that these obligations are not optional; they are now integral to lawful educational governance under EU law.

The authors explore several case scenarios to illustrate the legal complexities. In automated examinations, AI tools may streamline assessment but risk reinforcing biases embedded in training data. AI-based admissions systems, while efficient, can violate equality principles if their decision-making processes are opaque. Even more concerning are emotion-recognition technologies, systems that analyze facial expressions or physiological signals to gauge student engagement. Under the EU AI Act, these technologies are prohibited in education except for narrowly defined medical or safety purposes.

The paper also examines how generative AI tools, such as text, image, and code generators, are blurring the boundaries between learning assistance and academic integrity. The authors note that while generative systems can foster creativity, they also raise questions about authorship, originality, and fairness in educational evaluation. The challenge, they argue, lies in ensuring that AI complements human judgment rather than replacing it.

In this context, explainable AI (XAI) emerges as a legal and ethical imperative. The study emphasizes the importance of ensuring that educators and students can understand and challenge the reasoning behind algorithmic decisions. Tools like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) are cited as examples of interpretability methods that could help demystify AI operations in schools and universities. However, the authors caution that these tools remain technically complex and may require targeted teacher training and simplified reporting to be effective.
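
To make the interpretability point concrete, the sketch below shows how LIME might be applied to a hypothetical automated-grading model. The dataset, feature names, and model are illustrative assumptions invented for this article; the study cites LIME and SHAP only as examples of interpretability methods, not as a prescribed implementation.

```python
# Minimal, illustrative sketch: using LIME to surface the reasoning behind a
# hypothetical automated-grading model. All features, data, and the model
# itself are invented for this example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Synthetic records for 200 students: hours studied, assignments submitted,
# and attendance rate (all values fabricated for illustration).
X = np.column_stack([
    rng.uniform(0, 40, 200),     # hours_studied
    rng.integers(0, 11, 200),    # assignments_submitted
    rng.uniform(0.0, 1.0, 200),  # attendance_rate
])
# Toy pass/fail label loosely derived from the features.
y = (0.05 * X[:, 0] + 0.10 * X[:, 1] + 1.5 * X[:, 2] > 2.2).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

feature_names = ["hours_studied", "assignments_submitted", "attendance_rate"]
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["fail", "pass"],
    mode="classification",
)

# Explain one student's predicted outcome: LIME fits a simple local surrogate
# around this instance and reports how each feature pushed the prediction.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Per-feature weights of this kind, attached to a single decision, are the raw material that teacher training and simplified reporting would then have to translate into plain language for students and parents.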

Human oversight, ethical accountability, and policy reform

Above all, the paper insists that AI in education must remain human-centered. Algorithms should support, not supplant, teachers. The researchers warn that excessive reliance on AI could lead to a “technocratic shift” in education, where data-driven efficiency replaces emotional intelligence and ethical guidance.

The authors propose a tiered roadmap for policy reform:

  • Short-term measures should include the integration of AI literacy into curricula, publication of AI explainability scores for tools used in classrooms, and the creation of clear accountability mechanisms defining who bears responsibility for algorithmic errors.
  • Medium-term actions should focus on inclusive access to AI systems, public funding for ethical AI research, and the establishment of educational ethics committees to oversee the deployment of emerging technologies.
  • Long-term strategies must ensure continuous monitoring, evaluation, and governance, with public reports and open audits that maintain public trust in AI-driven education systems.

The authors frame these reforms as necessary steps to balance innovation with human dignity, autonomy, and privacy. They argue that the educational mission of developing critical, empathetic, and socially responsible citizens cannot be fulfilled if technology is allowed to operate without ethical constraints.

The study also underscores the need for cross-disciplinary collaboration among legal experts, educators, data scientists, and policymakers. Only through such collaboration can societies design governance models that ensure AI systems align with democratic principles and cultural values.

A human-centered future for AI in education

The researchers acknowledge that AI has already become indispensable in administrative and pedagogical processes. Yet they stress that its integration must never come at the expense of human agency. The right to digital literacy, as they define it, is not limited to learning how to use technology; it encompasses the ethical and civic competence to question and shape the technologies that govern educational life.

The study calls for a shared governance model in which transparency, fairness, and human oversight define the digital transformation of education. The authors advocate for educational policies that prioritize critical digital awareness over technological adoption alone, ensuring that every citizen gains the skills and knowledge to thrive responsibly in an AI-driven society.

FIRST PUBLISHED IN: Devdiscourse