AI will not replace university teachers, but fears of losing control are rising
New research shows that while university teachers acknowledge the growing presence of AI in higher education, their dominant concern is not job loss but the risk that AI systems may outpace institutional control, governance, and ethical oversight. The findings point to a widening gap between technological acceleration and universities' ability to manage the consequences for academic labor.
The study, titled Will AI Replace Us? Changing the University Teacher Role, published in the peer-reviewed journal Societies, combines survey data from 453 Ukrainian university teachers collected between 2023 and 2025 with a large-scale bibliometric analysis of global academic literature on artificial intelligence and education. The results challenge popular narratives about AI-driven academic job displacement and instead highlight deeper institutional vulnerabilities tied to control, trust, and governance.
Teachers do not expect AI to replace them, but anxiety persists
Across five independent survey waves, the study finds that university teachers consistently reject the idea that AI will replace them within the next five years. Responses show a stable pattern of skepticism toward direct automation of the teaching profession, with most respondents selecting negative or uncertain answers when asked whether AI would substitute their roles. Expectations of replacement remain well below neutral levels across all groups, suggesting no widespread belief that AI will make university teachers obsolete in the near term.
Despite this, the absence of replacement anxiety does not translate into confidence about AI’s role in higher education. A separate question reveals significantly higher concern over whether AI technologies could spiral beyond institutional control. This fear remains consistently stronger than job replacement expectations across all survey periods. In some groups, concern about loss of control reaches or slightly exceeds neutral levels, indicating a persistent unease even among faculty who do not feel personally threatened by automation.
The divergence between low replacement fear and higher control anxiety is central to the study’s conclusions. According to the authors, university teachers are not reacting to AI as a labor-saving device that threatens employment but as a powerful and unpredictable system that could undermine academic norms, professional autonomy, and institutional stability if poorly managed. This distinction reframes AI anxiety as a governance problem rather than a workforce displacement issue.
The findings suggest that uncertainty, not automation, is the primary driver of stress among academic staff. Teachers appear concerned about unclear rules, opaque decision-making, and the rapid introduction of AI tools without sufficient safeguards. This anxiety reflects broader tensions in higher education, where technological adoption often outpaces ethical frameworks, staff training, and regulatory clarity.
Loss of control emerges as the core institutional risk
The study interprets faculty perceptions through the lens of Dynamic Capabilities Theory, a management framework that focuses on how organizations sense change, seize opportunities, and transform internal structures. Within this framework, teachers’ responses are treated not as isolated psychological reactions but as early warning signals of institutional readiness or fragility.
Low expectations of replacement indicate weak sensing signals related to direct labor market disruption. In contrast, stronger fears of losing control over AI point to unresolved challenges at the seizing and transforming stages. Universities may recognize AI’s potential but struggle to integrate it responsibly into governance structures, human resource policies, and academic standards.
This imbalance has practical consequences. When faculty perceive AI as unpredictable or insufficiently governed, trust erodes. Reduced trust can weaken engagement, innovation, and willingness to adopt new tools. Over time, this may affect teaching quality, academic integrity, and institutional resilience, even if jobs themselves remain secure.
According to the research, fears of loss of control are not extreme or panic-driven. Most responses fall between mild concern and uncertainty rather than alarm. However, the persistence of this anxiety across multiple years signals a structural issue rather than a temporary reaction to novelty. As AI systems become more embedded in assessment, content generation, and administrative decision-making, unresolved governance gaps risk becoming a long-term liability for universities.
Importantly, the study shows that these concerns are not tied to a single moment or cohort. The consistency of results across five independent samples suggests that apprehension about AI control is stable rather than episodic. This stability underscores the need for institutional responses rather than individual coping strategies.
From technology users to architects of AI-driven learning
The analysis traces how the global academic understanding of the university teacher’s role has evolved alongside AI adoption. By analyzing over 26,000 research publications from 2021 to 2025, the authors identify a clear shift in how educators are positioned within AI-driven education systems.
In the early phase, teachers were largely portrayed as adopters and facilitators of new technologies. Research focused on digital tools, online learning, and efficiency gains, with educators positioned at the periphery of technological change. Their role centered on implementation rather than decision-making, reflecting a view of AI as an external innovation to be integrated into existing pedagogical models.
As generative AI tools gained prominence, academic discourse began to reposition teachers more centrally. By 2023 and 2024, research increasingly linked educators to issues of academic integrity, assessment design, and ethical oversight. Teachers emerged as mediators between students and AI systems, responsible for interpreting technological impact, safeguarding standards, and guiding responsible use.
By 2025, the literature reflects a more mature and strategic conception of the teaching role. Educators are described as designers and architects of AI-enhanced learning environments. Rather than reacting to technology, teachers are seen as shaping curricula, guiding ethical implementation, and embedding AI literacy into education. This shift aligns teaching with broader goals of innovation, sustainability, and workforce development.
The study argues that this evolution directly contradicts narratives of teacher displacement. Instead of diminishing, the professional role of educators becomes more complex and central. Teaching moves away from content delivery toward mentoring, critical thinking, and contextualization of knowledge in an environment where information is abundant but judgment remains scarce.
This transformation also links higher education to global sustainability goals. By developing students’ ethical reasoning, digital competence, and adaptability, teachers contribute to quality education, decent work, and innovation. AI becomes a tool that amplifies the need for human expertise rather than replacing it.
Governance, not automation, will define AI’s impact on higher education
The key challenge identified is not technological capability but institutional capacity. Universities must develop governance structures that address faculty concerns, clarify responsibilities, and ensure that AI adoption supports rather than destabilizes academic work.
For governments, this includes national strategies that integrate digital resilience, teacher training, and ethical standards into higher education policy. For institutions, it requires transparent AI guidelines, investment in digital literacy, and mechanisms to monitor staff well-being during technological transitions. For researchers, it calls for renewed attention to human-AI interaction models that place educators at the center of learning ecosystems.
The research also highlights the risk of misinterpreting faculty anxiety. Treating fears of AI control as resistance to innovation misses their diagnostic value. These perceptions signal where institutions may lack readiness, coordination, or trust. Ignoring them could deepen instability even in the absence of job losses.
First published in: Devdiscourse

