Why students reject powerful AI tools even when they improve learning
A new peer-reviewed study shows that willingness to use AI in higher education depends far less on technological sophistication than on how students cognitively and emotionally experience these tools, particularly in terms of trust, effort, and stress.
Published in the journal Computers, the study titled “Model of Acceptance of Artificial Intelligence Devices in Higher Education” examines how university students decide whether to embrace or resist AI-based devices in academic settings. The research proposes and validates a new conceptual framework that integrates cognitive expectations and emotional responses into a single acceptance pathway, offering one of the most detailed empirical accounts to date of how students actually respond to AI in education.
A new model explains how students decide to accept AI
The research introduces the Model of Acceptance of Artificial Intelligence Devices, known as the MIDA model, designed specifically to address limitations in existing technology acceptance frameworks. Traditional models often emphasize functional utility and ease of use but treat emotional responses as secondary or external factors. The MIDA model places emotion at the center of the acceptance process.
The model structures acceptance as a sequential pathway. Contextual variables such as perceived value, anthropomorphism, and perceived risk shape students’ cognitive expectations. These expectations, in turn, trigger emotional responses that directly influence willingness to use AI devices.
Performance expectancy reflects the degree to which students believe AI tools will improve their academic outcomes. Effort expectancy captures how demanding students perceive the use of AI devices to be. These two cognitive evaluations do not act independently. They generate emotional states that either encourage or discourage adoption.
The study identifies three core emotions as critical mediators: trust, anxiety, and stress. Performance expectancy reduces anxiety and stress while increasing trust. Effort expectancy, by contrast, raises anxiety and stress, creating psychological friction that undermines acceptance even when perceived benefits are high.
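The pathway described above can be written down as a structural equation model. Below is a minimal sketch in Python using the semopy library, assuming each construct (perceived value, anthropomorphism, perceived risk, the two expectancies, the three emotions, and willingness to use) is available as an observed composite score per respondent. The column names, and the choice of semopy rather than the authors' own software, are illustrative assumptions, not details taken from the study.

```python
import pandas as pd
import semopy

# Hypothetical path specification mirroring the MIDA structure described above:
# contextual variables -> cognitive expectancies -> emotions -> willingness to use.
# Variable names are illustrative composites, not the study's actual survey items.
MIDA_SPEC = """
# Cognitive expectations shaped by contextual variables
performance_expectancy ~ perceived_value + anthropomorphism + perceived_risk
effort_expectancy ~ perceived_value + anthropomorphism + perceived_risk

# Emotional responses triggered by the expectancies
trust ~ performance_expectancy + effort_expectancy
anxiety ~ performance_expectancy + effort_expectancy
stress ~ performance_expectancy + effort_expectancy

# Willingness to use driven by the emotional mediators
willingness_to_use ~ trust + anxiety + stress
"""

def fit_mida(df: pd.DataFrame) -> pd.DataFrame:
    """Fit the sketch model to survey data and return estimated path coefficients."""
    model = semopy.Model(MIDA_SPEC)
    model.fit(df)            # maximum-likelihood estimation by default
    return model.inspect()   # table of path estimates and significance tests
```

A specification of this form makes the model's central claim testable: if the emotions fully mediate the process, the contextual variables should show no significant direct paths to willingness once the mediating chain is included.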
This structure reveals why many students express ambivalence toward AI. A system can be perceived as valuable and powerful, yet still rejected if it generates excessive stress or cognitive burden. The model demonstrates that acceptance is not a rational cost-benefit calculation alone but an emotionally mediated process.
Evidence from a large student sample
To validate the MIDA model, the study analyzes survey data collected from 517 university students. Responses were examined using covariance-based structural equation modeling, allowing the researchers to test both direct and indirect relationships among contextual, cognitive, and emotional variables.
The results confirm the model’s core assumptions. Willingness to use AI devices is not driven directly by perceived value, anthropomorphism, or perceived risk. Instead, these contextual factors influence acceptance indirectly through performance expectancy and effort expectancy, which then shape emotional responses.
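To make the idea of indirect influence concrete: in a path model of this kind, the indirect effect of a contextual factor on willingness to use is the product of the coefficients along the mediating chain. The sketch below uses invented coefficient values purely for illustration; none of the numbers are taken from the study.

```python
# Illustrative only: coefficient values are hypothetical, not the study's estimates.
# Indirect effect of perceived value on willingness via one mediating chain:
# perceived_value -> performance_expectancy -> trust -> willingness_to_use.
b_value_to_pe  = 0.40   # perceived value -> performance expectancy (hypothetical)
b_pe_to_trust  = 0.50   # performance expectancy -> trust (hypothetical)
b_trust_to_use = 0.60   # trust -> willingness to use (hypothetical)

indirect_effect = b_value_to_pe * b_pe_to_trust * b_trust_to_use
print(f"Indirect effect via trust: {indirect_effect:.2f}")  # 0.12
```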
Trust is identified as the strongest direct predictor of willingness to use AI devices. Students who trust AI systems are significantly more likely to accept them, regardless of other concerns. Stress, on the other hand, has a clear negative effect on acceptance, acting as a psychological barrier even when performance benefits are acknowledged.
Anxiety also plays a role, though its impact is less direct. It is influenced by both performance and effort expectancy, but it does not independently determine acceptance in the way trust and stress do. This distinction suggests that discomfort alone does not prevent adoption unless it escalates into sustained stress.
One of the study’s most notable findings is the asymmetry between acceptance and objection. Emotional factors explain why students choose to use AI devices, but they do not fully explain why students reject them. This indicates that objection may be driven by additional concerns beyond emotion, such as ethical reservations, data privacy fears, or perceived threats to academic integrity.
Implications for universities and AI developers
Design choices that reduce effort expectancy can have a disproportionate impact on acceptance. Interfaces that are intuitive, transparent, and supportive lower stress and anxiety, indirectly strengthening trust. Conversely, complex systems that require extensive adaptation may trigger emotional resistance even when they offer clear benefits.
Trust formation emerges as a central governance challenge. The study shows that trust is not an automatic consequence of performance. It must be cultivated through reliability, transparency, and responsible integration. Students need to feel confident that AI devices support their learning rather than replace their judgment or compromise their autonomy.
The findings also highlight the importance of managing stress associated with AI use. Stress arises not only from technical difficulty but also from fear of dependence, loss of control, and uncertainty about long-term consequences. Universities introducing AI tools without adequate guidance, training, or support risk undermining acceptance through unintended psychological pressure.
Notably, the study does not frame AI resistance as technophobia. Student hesitation is shown to be rational and emotionally grounded. Resistance reflects concern about cognitive overload, emotional strain, and trustworthiness rather than simple aversion to technology.
First published in: Devdiscourse

