How generative AI is redefining teaching, learning, and academic authority

CO-EDP, VisionRI | Updated: 23-12-2025 15:36 IST | Created: 23-12-2025 15:36 IST

Large language models are rapidly embedding themselves into everyday academic practices, from drafting essays and summarizing readings to generating code and structuring arguments. What is emerging is not simply a new toolset but a deep reconfiguration of how knowledge is produced, validated, and learned. A growing body of research suggests that what is at stake is not merely technical efficiency but human agency itself.

A research paper titled "Cyber Humanism in Education: Reclaiming Agency through AI and Learning Sciences," published on arXiv, argues that generative AI systems have become part of the cognitive and institutional infrastructure of education, demanding a fundamental rethink of how learning, teaching, and governance are designed in AI-rich environments.

When AI becomes educational infrastructure, not a tool

Generative AI does not function like earlier educational technologies. Unlike learning management systems or digital textbooks, large language models actively participate in core cognitive activities. Reading, writing, coding, translation, and ideation are no longer exclusively human tasks but hybrid workflows shared between humans and machines.

This shift has profound implications. When AI systems generate explanations, propose solutions, or structure arguments, they shape what counts as legitimate knowledge and how problems are framed. The paper warns that treating these systems as neutral tools obscures their role as active participants in meaning-making. AI becomes a co-author of educational practice, influencing not just outcomes but the epistemic process itself.

The research highlights several risks arising from this transformation. One is epistemic automation, where learners increasingly accept AI-generated outputs without fully understanding the reasoning behind them. Another is cognitive offloading, in which key intellectual tasks are delegated to machines, potentially weakening students’ capacity for judgment, problem formulation, and critical reasoning. At the professional level, the study raises concerns about the de-professionalization of educators, as planning, feedback, and assessment risk being shifted from teachers to opaque algorithmic systems.

At the same time, the paper avoids technological determinism. It acknowledges that generative AI can support learning by scaffolding metacognition, offering personalized feedback, and enabling new forms of collaborative inquiry. The decisive factor is not whether AI is present, but how it is integrated. The study frames this as an infrastructural challenge rather than a pedagogical add-on, arguing that education systems must confront how AI reshapes the conditions under which knowledge is constructed.

Cyber humanism and the rise of algorithmic citizenship

To address these challenges, the paper introduces Cyber Humanism in Education as a conceptual and practical framework. Cyber Humanism builds on but moves beyond Digital Humanism. While Digital Humanism emphasizes protecting human values from technological harm, Cyber Humanism starts from the premise that humans and computational systems are already entangled in the co-production of knowledge, culture, and institutions.

Within this framework, AI systems are understood as cognitive infrastructures. They influence which questions are asked, which answers appear plausible, and whose contributions are amplified or marginalized. This reframing leads to a redefinition of roles within education. Students and educators are not merely users of AI systems but epistemic agents and algorithmic citizens.

Algorithmic citizenship is a central concept in the study. It refers to the rights and responsibilities individuals have in relation to the algorithmic systems that shape their opportunities, obligations, and learning trajectories. In educational settings, this extends beyond technical literacy. Algorithmic citizens are expected to understand how AI systems work, interrogate their assumptions, and participate in decisions about their design, deployment, and governance.

The research connects this idea to established findings in the Learning Sciences. Epistemic agency, the capacity to initiate, regulate, and evaluate knowledge-building processes, has long been recognized as central to meaningful learning. The study argues that generative AI fundamentally alters how epistemic agency is exercised. When learners co-author work with AI systems trained on vast and opaque datasets, the locus of agency becomes ambiguous unless explicitly addressed through design and governance.

From this perspective, the challenge is not to shield education from AI, but to redesign learning environments so that agency is preserved and expanded. Cyber Humanism positions educators and learners as stakeholders in AI infrastructures, with a legitimate role in shaping rules, norms, and institutional strategies surrounding AI use.

Reclaiming agency through reflexive learning and institutional design

The paper operationalizes Cyber Humanism through three interrelated pillars: reflexive competence, dialogic design, and algorithmic citizenship.

Reflexive competence refers to the ability of learners and educators to critically examine how AI systems participate in their cognitive processes. It extends traditional metacognition by including reflection on what is delegated to AI, why it is delegated, and what risks that delegation entails. The study argues that cognitive sovereignty depends on this reflexivity. Without it, efficiency gains may come at the cost of understanding and judgment.

Dialogic design addresses how humans and AI interact in learning environments. Rather than positioning AI as an authoritative source of answers, dialogic approaches treat it as a fallible interlocutor. Learners are encouraged to compare multiple AI-generated responses, critique their assumptions, and situate them alongside human and disciplinary sources. This prevents the silent elevation of AI outputs into unquestioned authority and keeps human interpretation central.

The third pillar, algorithmic citizenship, extends beyond the classroom. It includes participation in institutional decision-making about AI procurement, assessment policies, data governance, and compliance with regulatory frameworks such as the EU AI Act. The study argues that treating these issues as purely administrative removes educators and learners from decisions that directly affect their agency.

To demonstrate how these principles can be enacted, the paper presents case studies from higher education centered on prompt-based learning. In these designs, prompts are not treated as interface commands but as objects of inquiry. Students analyze how different prompts lead to different outputs, reflect on bias and uncertainty, and document their interactions with AI systems. Natural language becomes a bridge between everyday reasoning and computational thinking, making problem formulation visible and negotiable.
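
To make that documentation practice concrete, the sketch below shows one way a student's prompt journal might be structured in code. It is a minimal illustration under stated assumptions, not a design taken from the paper: query_model is a hypothetical stand-in for whatever model API a course actually uses, and the reflection field simply mirrors the annotation habits described above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json


def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call; a course would
    # replace this with its own provider's client library.
    return f"(model response to: {prompt!r})"


@dataclass
class PromptTrial:
    """One documented interaction: prompt, output, and the learner's notes."""
    prompt: str
    response: str
    reflection: str  # observed bias, uncertainty, framing effects, etc.
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def run_prompt_study(variants: list[str]) -> list[PromptTrial]:
    """Send each prompt variant to the model and collect unannotated trials."""
    return [
        PromptTrial(prompt=p, response=query_model(p), reflection="")
        for p in variants
    ]


if __name__ == "__main__":
    # The same question framed three ways, so differences in output can be
    # traced back to differences in problem formulation.
    variants = [
        "Summarize the causes of the 2008 financial crisis.",
        "Explain the 2008 financial crisis to a skeptic of regulation.",
        "List competing explanations of the 2008 crisis and their weaknesses.",
    ]
    trials = run_prompt_study(variants)
    trials[0].reflection = "Confident tone, no sources, single causal narrative."
    print(json.dumps([t.__dict__ for t in trials], indent=2))
```

Keeping the reflection field separate from the raw output is the point of the exercise: the transcript records what the model said, while the annotation records what the learner noticed, which is where the metacognitive work happens.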

The research also introduces a new professional role, the Conversational AI Educator, formalized through a certification pathway within the EPICT ecosystem. This role recognizes that educators require specialized competencies to design, manage, and govern AI-rich learning environments. The certification maps AI literacy, pedagogical design, and ethical awareness onto existing European and global competence frameworks, translating high-level policy goals into concrete professional practice.

The study does not ignore tensions. It documents risks related to increased workload, digital stress, equity of access, and the normalization of AI-generated fluency that may weaken standards of evidence and argumentation. Reclaiming agency, the paper concludes, is not a one-time design choice but an ongoing negotiation shaped by institutional support, governance structures, and professional recognition.

First published in: Devdiscourse