Teachers are embracing AI in education while quietly fearing it could replace them

A new review warns that teachers' emotional responses to AI may become just as important as their technical skills in determining whether the technology succeeds in classrooms. Researchers found that while many language teachers see the technology as a powerful tool for innovation, personalization, and workload reduction, they also experience anxiety, ethical uncertainty, fear of professional replacement, and emotional strain linked to the rapid pace of technological change.

The study, titled "Bridging language teachers' AI literacy and AI-induced emotions: A systematic review and a framework for future research," published in Language Teaching Research, analyzed dozens of empirical studies to examine how language teachers understand AI technologies, how they emotionally respond to them, and how these cognitive and emotional dimensions interact during AI integration in educational settings.

AI literacy becoming a core requirement for language teachers

AI literacy has rapidly evolved from a supplementary digital skill into a central professional competency for language teachers. The researchers describe AI integration in language education as a "paradigm shift" that is fundamentally altering teaching practices, assessment systems, lesson planning, and teacher-student interaction.

According to the review, AI in language education now includes a wide range of technologies such as natural language processing, automated writing assessment, intelligent tutoring systems, automatic speech recognition, chatbots, and generative AI platforms. These technologies are increasingly influencing how teachers prepare materials, evaluate learners, and conduct classroom activities.

The researchers synthesized findings from 22 empirical studies on language teachers' AI literacy and identified four major dimensions shaping educators' preparedness for AI-driven teaching environments: understanding AI, applying AI in pedagogy, evaluating AI critically, and addressing AI ethics.

  1. Understanding AI: This involves teachers gaining foundational awareness of how AI systems function, what their limitations are, and how they can be used responsibly in language education. The review found that many teachers increasingly recognize AI as an unavoidable part of modern education, yet their understanding remains uneven across institutions and regions.
  2. Applying AI: This dimension covers practical classroom integration. Teachers are increasingly using AI tools for lesson planning, content generation, personalized learning tasks, and formative assessment. The study found that AI is often treated as a collaborative assistant capable of generating teaching resources, adapting materials to learner needs, and supporting differentiated instruction.
  3. Evaluating AI: Effective AI use requires active teacher involvement rather than passive reliance on automated systems. Teachers must continuously evaluate AI-generated outputs, refine prompts, and align AI use with pedagogical goals. One of the most significant findings involved the emergence of "prompt literacy" as a critical teaching skill. Teachers increasingly need the ability to design effective prompts for generative AI systems, refine those prompts iteratively, and critically assess the resulting outputs. The review describes prompt literacy as both a technical and pedagogical competency that shapes how effectively AI tools can support language learning.
  4. Addressing AI ethics: Teachers across multiple studies expressed worries about algorithmic bias, AI hallucinations, plagiarism, overreliance on automated systems, data privacy, and declining student autonomy. These concerns were not treated as peripheral issues but as key elements of AI literacy itself. Researchers found that ethical awareness is becoming deeply embedded in teachers' understanding of AI competence. Many educators are increasingly aware that AI-generated content may contain inaccuracies, reinforce bias, or undermine academic integrity if used without careful supervision.

The study argues that AI ethics should not be viewed as an optional add-on to technical literacy. Instead, ethical reasoning must operate as the "moral core" of AI literacy, influencing how teachers evaluate, apply, and regulate AI technologies in classrooms.

Despite growing interest in AI integration, the review identified substantial barriers preventing teachers from fully engaging with these technologies. Many teachers reported limited digital competence, insufficient training, lack of institutional guidance, inadequate technical support, and uncertainty about how to align AI tools with curriculum objectives.

These barriers are particularly severe when institutions introduce AI tools without comprehensive professional development programs. Teachers frequently reported feeling unprepared to manage AI-related challenges despite recognizing the technology's educational potential. Researchers also observed significant demand for structured professional learning opportunities focused specifically on AI in language education. Teachers called for ongoing training that combines technical instruction with ethical guidance, classroom applications, and emotional support.

Teachers experiencing both excitement and anxiety over AI adoption

The review found that AI integration is generating a highly complex emotional landscape among language teachers. The researchers analyzed 12 empirical studies focused specifically on AI-induced emotions and identified four major emotional categories shaping teachers' responses to AI technologies.

  • Challenge emotions include excitement, curiosity, enthusiasm, and creative engagement. Teachers often experienced these emotions when they viewed AI as an opportunity to improve teaching efficiency, personalize instruction, or explore innovative learning activities. Many educators reported that AI reduced repetitive workloads, simplified lesson preparation, and enabled more flexible classroom design; these positive experiences often reinforced continued experimentation with AI tools. Teachers frequently felt satisfaction and relief when AI reduced administrative burdens or streamlined routine tasks, and personalized learning functions also contributed to positive emotions by helping teachers address diverse learner needs. At the same time, the review identified widespread negative emotional responses associated with AI adoption.
  • Deterrence emotions include anxiety, fear, worry, and stress. Teachers often experienced these emotions when they perceived AI as technically difficult, disruptive, or insufficiently regulated. Many reported emotional strain stemming from unfamiliarity with AI systems, lack of institutional training, and uncertainty about AI's long-term impact on teaching careers. Some educators worried that AI might eventually reduce the professional value of language teachers or fundamentally alter traditional teaching roles.
  • Loss emotions involve frustration, dissatisfaction, anger, and emotional exhaustion. These emotions often emerged when teachers felt they lacked control over AI implementation or perceived institutional pressure to adopt technologies they did not fully trust. Some teachers expressed concern that AI could weaken teacher-student relationships, undermine authentic human interaction, or diminish the emotional dimensions of language learning. Others feared becoming overly dependent on automated systems or losing professional autonomy.
  • Achievement emotions include happiness, enjoyment, and professional fulfilment. Teachers experienced these emotions when AI tools successfully improved learning outcomes, enhanced engagement, or supported innovative teaching strategies.

Teachers rarely experience purely positive or purely negative emotions regarding AI. Instead, their responses are often deeply ambivalent: many educators simultaneously view AI as exciting and threatening, useful and risky, empowering and destabilizing. Teachers often expressed optimism about AI's educational potential while also worrying about ethical risks, professional displacement, and declining learner independence.

Notably, emotional responses are strongly influenced by teachers' perceptions of control. Teachers who felt capable of understanding and managing AI tools were more likely to experience positive emotions, while those who felt overwhelmed or unsupported tended to develop anxiety and resistance.

The review found that institutional environments also play a major role in shaping emotional responses. Schools and universities that provide collaborative support systems, peer learning communities, and structured training tend to foster more positive emotional experiences. On the other hand, institutions that implement AI without adequate support often intensify emotional distress among educators.

New framework links AI literacy directly to teacher emotions

To bridge the gap between technical competence and emotional experience, the researchers proposed a new conceptual framework grounded in Appraisal Theory, a psychological model explaining how cognitive evaluations shape emotional responses.

The framework argues that teachers' emotional reactions to AI are directly influenced by how they cognitively appraise AI technologies. When teachers perceive AI as an opportunity that aligns with their pedagogical goals and believe they have sufficient control over its implementation, they are more likely to experience positive emotions such as excitement and professional empowerment.

In contrast, when teachers perceive AI as a threat to professional identity, teaching autonomy, or classroom relationships, they are more likely to experience anxiety, fear, frustration, and resistance.

The framework identifies two major appraisal processes shaping emotional responses.

  • Primary appraisal involves evaluating whether AI represents an opportunity or a threat. Teachers who see AI as supporting innovation, improving efficiency, or enhancing instruction tend to respond positively. Those who associate AI with job insecurity or pedagogical disruption tend to react negatively.
  • Secondary appraisal focuses on perceived control. Teachers assess whether they possess the skills, institutional support, and adaptability needed to manage AI-related changes successfully.

These appraisals are heavily shaped by broader social and institutional contexts, including school leadership, policy frameworks, professional culture, and peer support systems.

The study holds that improving AI literacy alone will not guarantee successful AI integration in education. Emotional support structures are equally important.

  • Institutions must develop modular AI literacy training programs specifically tailored to language educators. These programs should combine technical instruction with ethical guidance, emotional awareness, and classroom-focused applications.
  • Professional learning communities where teachers can openly discuss AI-related concerns, share practical experiences, and collectively develop coping strategies are also crucial.
  • Scenario-based ethical training is another critical need. Researchers argue that teachers require structured opportunities to navigate realistic dilemmas involving plagiarism, data privacy, algorithmic bias, and AI-assisted assessment.
First published in: Devdiscourse