Why are teachers still not ready for AI in the classroom?
A major investigation into Romanian pre-university education has uncovered deep and persistent negative attitudes among teachers toward artificial intelligence, raising significant concerns as AI-driven systems continue to spread across Europe’s education landscape. The findings show that emotional fears, ethical distrust and low digital confidence dominate educators’ perceptions, suggesting that successful AI integration in schools will require far more than technical infrastructure and policy directives.
The study, “Not Ready for AI? Exploring Teachers’ Negative Attitudes Toward Artificial Intelligence,” published in Societies, surveyed 1,110 teachers across Romania, capturing detailed insights into their anxieties, levels of trust, and readiness to use AI tools. The results reveal that many educators remain unconvinced that AI belongs in the classroom, with fears rooted not only in perceived threats to teaching autonomy but also in broader social, ethical and emotional concerns. Despite widespread digital transformation in schools, confidence in AI remains low, raising questions about the future of AI-assisted teaching.
Teachers see AI as a threat
The study found that negative attitudes toward AI cluster into two major dimensions: perceived AI threat and distrust in AI fairness and ethics. These dimensions reflect deep-seated concerns among teachers about AI’s potential to cause harm, disrupt traditional teaching practices or be misused within institutions and society.
Perceived AI threat emerged as the strongest and most consistent negative attitude. Many teachers view AI as a technology that could undermine human decision-making, erode personal privacy or alter social interactions in unpredictable ways. This emotional reaction reflects broader public anxieties seen in global debates around automation, digital surveillance and the expanding influence of algorithms on daily life.
The second major factor, distrust in AI fairness, centers not on emotional fear but on skepticism regarding the ethical and organizational handling of AI systems. Teachers expressed doubts about whether AI tools operate justly, respect privacy or avoid bias. These concerns mirror international discussions on algorithmic fairness, data governance and the risk of inequitable outcomes produced by opaque digital systems.
Gender played a significant role in shaping perceived AI threat. Female teachers were more likely to express fear-based concerns, a trend also observed in studies exploring gendered differences in technology anxiety. However, gender had minimal influence on ethical distrust, suggesting that skepticism toward fairness and transparency is widespread across the teaching population.
Interestingly, teachers’ age, professional experience and teaching rank showed no significant influence on either dimension. This uniformity suggests that concerns about AI cut across professional seniority and generational lines, making resistance a system-wide issue rather than one limited to specific cohorts.
Digital literacy and personal AI experience reduce fear, but urban teachers show higher ethical skepticism
Digital competence serves as a protective factor, sharply reducing negative attitudes toward AI. Teachers with stronger digital skills, including information literacy, navigation abilities and security awareness, tend to feel more in control when interacting with new technologies. This confidence directly reduces emotional fears and increases comfort with AI systems.
Digital literacy, however, does not entirely eliminate concerns. The research shows that even digitally skilled teachers may continue to worry about institutional misuse or ethical risks. Still, the correlation is clear: the more digitally confident teachers are, the more likely they are to approach AI with curiosity rather than fear.
The study also reveals a notable urban–rural divide. Teachers working in urban settings exhibited significantly higher levels of distrust toward the ethical and organizational dimensions of AI. This may reflect greater exposure to public discourse on algorithmic bias, data abuse and tech-sector controversies, conversations that tend to dominate media in densely populated regions. Rural teachers, with less exposure to these debates, may experience fewer ethical concerns, even if their digital competence is lower.
Direct personal experience with AI proved to be an important moderating factor. Teachers who frequently used AI tools in daily life, such as travel platforms, entertainment systems or shopping algorithms, reported lower perceived threat. This suggests that familiarity reduces fear, even when the tools in question are not education-related. Personal use appears to humanize AI, transforming it from an abstract concept into a practical tool.
Professional use of AI in teaching also reduced negative attitudes, though less strongly than personal use. Many teachers may interact with AI in a limited or indirect manner within school systems, which may not provide enough exposure to build trust or confidence. The findings reinforce that meaningful and sustained engagement with AI is essential for reducing resistance.
The study also highlights a concerning pattern: while digital competence improves attitudes, the initial emotional barrier remains high. Many teachers are still at an early stage of AI literacy and rely heavily on self-taught digital skills. Without structured, institutional training, these educators face a fragmented and inconsistent learning environment, increasing their vulnerability to misinformation and anxiety.
Negative attitudes stem from more than technology
The study identifies deeper drivers of teachers’ negative attitudes that extend far beyond technical familiarity. These include institutional pressures, uncertainty about the future of teaching and concerns about the ethical implications of AI governance.
Educators fear losing control over classroom dynamics and instructional decisions. AI-based systems, such as performance prediction models or automated grading tools, raise fears that teachers might be sidelined or monitored in unwanted ways. These anxieties align with global concerns about the rise of algorithmic management and data-driven oversight in workplaces.
Ethical concerns reflect worries about how AI systems handle student data, evaluate learners or influence teacher performance. Teachers often lack clear information about how decisions are made by AI tools, leading to uncertainty about fairness and accountability. This skepticism is intensified by the absence of institutional frameworks that explain AI governance or protect teachers’ professional autonomy.
The authors make the case that institutional readiness plays a substantial role in shaping teachers’ attitudes. Schools that lack clear communication regarding AI policies or do not provide robust professional development contribute to greater fear and mistrust. Teachers who feel unsupported by their institutions are more likely to develop defensive or resistant attitudes.
They also highlight the importance of multidimensional AI literacy, which goes beyond technical competence. Effective AI literacy includes critical thinking, ethical reasoning, reflective practice and the ability to evaluate the implications of AI tools in different contexts. Research cited in the study suggests that collaborative approaches, including peer learning, group reflection and ethical discovery workshops, help teachers build confidence and reduce fear.
The findings illustrate that negative attitudes toward AI are rooted in complex psychosocial mechanisms. Emotional, ethical and institutional factors intertwine, shaping resistance that cannot be resolved solely through technical training. Policymakers and educational leaders must therefore adopt a holistic approach, recognizing that strong AI literacy programs must include emotional reassurance, ethical clarity and structured organizational support.
First published in: Devdiscourse

