Students turn to AI only when it feels meaningful, not just useful
A new multi-institutional study analyses why students choose to use AI tools and why, in some cases, they hesitate. Based on survey data from 673 university students, the research reframes AI adoption not as a simple response to usefulness or efficiency, but as a decision shaped by how students experience control, authorship, and agency in AI-mediated learning environments.
Published in Behavioral Sciences under the title "Empowered or Constrained? Digital Agency, Ethical Implications, and Students’ Intentions to Use Artificial Intelligence", the study examines how students’ sense of agency interacts with their cognitive evaluations of AI, revealing a conditional pattern in which perceived value and perceived benefits play very different motivational roles depending on whether students feel in control of, or constrained by, their learning context.
Why agency matters more than enthusiasm for AI
While AI tools promise personalization, efficiency, and enhanced learning support, they also reshape how responsibility and control are distributed between students and automated systems. Previous studies have often measured AI adoption by tracking usage rates, perceived usefulness, or general attitudes. According to the authors, these approaches overlook a critical psychological layer, namely how students perceive their own role as active agents when AI systems mediate academic work.
To address this gap, the study distinguishes between two forms of digital agency. Sense of positive agency refers to students’ feelings of autonomy, intentionality, and control over their actions. Sense of negative agency reflects feelings of passivity, reduced control, or being guided by external forces, including technological systems. Rather than treating these as opposites on a single scale, the authors examine how they interact to shape students’ evaluations of AI.
The findings challenge a common assumption that students who feel confident and in control are automatically more willing to adopt AI. Positive agency alone did not directly predict whether students intended to use AI tools. Instead, agency influenced intention only through cognitive evaluations of AI, and only under certain conditions. This result reframes AI adoption as an indirect, meaning-driven process rather than a straightforward expression of confidence or technological openness.
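To make that reported structure concrete, the sketch below shows how such an indirect-only effect can be probed with a standard product-of-coefficients mediation test. The variable names, synthetic data, and coefficients are illustrative assumptions only; they are not the authors' measures, model, or dataset.

```python
# Hypothetical sketch of a simple mediation test: does positive agency
# shape intention only indirectly, through cognitive evaluations of AI?
# All variables and effect sizes are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 673  # sample size matching the study

pos_agency = rng.standard_normal(n)
# Assume agency shapes how valuable students judge AI to be...
perceived_value = 0.5 * pos_agency + rng.standard_normal(n)
# ...and intention depends on that evaluation, not on agency directly.
intention = 0.6 * perceived_value + rng.standard_normal(n)

df = pd.DataFrame({"pos_agency": pos_agency,
                   "perceived_value": perceived_value,
                   "intention": intention})

# Path a: predictor -> mediator
a = smf.ols("perceived_value ~ pos_agency", df).fit().params["pos_agency"]
# Paths b and c': mediator and predictor -> outcome
full = smf.ols("intention ~ perceived_value + pos_agency", df).fit()
b, c_prime = full.params["perceived_value"], full.params["pos_agency"]

print(f"indirect effect (a*b): {a * b:.3f}")    # substantial
print(f"direct effect (c'):    {c_prime:.3f}")  # near zero by construction
```

By construction, the direct path from agency to intention is near zero while the indirect path through evaluation is substantial, mirroring the pattern the authors describe; in a real analysis the indirect effect would typically be tested with bootstrapped confidence intervals.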
When students reported low levels of negative agency and felt relatively unconstrained, their intention to use AI was mainly linked to perceived benefits. In these contexts, AI was approached as a functional resource that could save time, improve efficiency, or support academic tasks. However, this benefit-driven motivation weakened sharply as students experienced greater loss of control or external pressure in their learning environment.
Value overtakes utility when control feels threatened
The study draws a clear distinction between the perceived benefits and the perceived value of AI. While perceived benefits focus on instrumental outcomes such as efficiency or task facilitation, perceived value captures a broader appraisal of whether AI use is meaningful, relevant, and aligned with a student’s academic goals.
The data show that perceived value is the strongest and most stable predictor of students’ intention to use AI. More importantly, its influence grows as students’ sense of negative agency increases. In situations where students feel less control over their learning processes, value-based evaluations become the dominant motivational driver. In other words, when agency feels threatened, students are more likely to engage with AI if they see it as genuinely worthwhile and personally significant, not merely useful.
This asymmetric pattern marks a departure from conventional models of educational technology adoption. The study demonstrates that perceived benefits motivate AI use primarily when students feel agentic and self-directed. Once that sense of control erodes, instrumental advantages are no longer sufficient. At higher levels of negative agency, perceived benefits lose their motivational force altogether, while perceived value continues to predict intention to use AI.
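A minimal sketch of how this kind of asymmetry can be tested, assuming a moderated regression with interaction terms followed by a simple-slopes probe; the variable names, synthetic data, and coefficients below are invented for illustration and are not the study's reported estimates.

```python
# Hypothetical sketch of the asymmetric moderation pattern: the payoff of
# perceived benefits fades as negative agency rises, while perceived value
# keeps predicting intention throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 673
benefits = rng.standard_normal(n)
value = rng.standard_normal(n)
neg_agency = rng.standard_normal(n)

# Build in the pattern: benefits matter only at low negative agency;
# value matters throughout and grows with negative agency.
intention = (0.25 * benefits + 0.45 * value
             - 0.25 * benefits * neg_agency
             + 0.15 * value * neg_agency
             + rng.standard_normal(n))

df = pd.DataFrame({"intention": intention, "benefits": benefits,
                   "value": value, "neg_agency": neg_agency})

model = smf.ols("intention ~ benefits * neg_agency + value * neg_agency",
                df).fit()
print(f"R^2: {model.rsquared:.2f}")

# Simple slopes: conditional effect of each predictor at -1 SD, mean, and
# +1 SD of negative agency (predictors are z-scored, so SD = 1).
for level, label in [(-1, "low"), (0, "mean"), (1, "high")]:
    slope_b = model.params["benefits"] + model.params["benefits:neg_agency"] * level
    slope_v = model.params["value"] + model.params["value:neg_agency"] * level
    print(f"neg_agency {label:>4}: benefits slope {slope_b:+.2f}, "
          f"value slope {slope_v:+.2f}")
```

With these made-up coefficients, the benefits slope shrinks to roughly zero at high negative agency while the value slope grows, which is the shape of the asymmetry the study reports.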
The authors interpret this shift as a conditional appraisal mechanism rather than a compensatory reaction. Students are not turning to AI simply to regain lost control. Instead, they appear to reassess the role of AI in their academic identity and goals. Under conditions of constraint, AI adoption becomes less about efficiency and more about whether the technology fits meaningfully into the student’s learning trajectory.
Statistically, the full model explains more than half of the variance in students’ intention to use AI, a level of explanatory power that is rare in survey-based educational research. This strength underscores the central claim of the study: AI engagement in higher education is shaped by deep psychological processes related to agency and meaning, not just surface-level attitudes toward technology.
Implications for higher education policy and AI governance
When AI is adopted primarily for its perceived benefits, engagement tends to remain efficiency-oriented. This raises familiar concerns about over-reliance on automated systems, erosion of academic skills, and blurred boundaries of authorship. By contrast, the value-driven engagement observed under higher negative agency suggests a more reflective relationship with AI, one that implicitly touches on questions of appropriateness, alignment, and responsibility, even if students do not frame these concerns in ethical terms.
For universities, the findings suggest that AI integration strategies focused solely on productivity gains or technical training may miss the mark. Supporting student agency appears to be just as important as demonstrating functional advantages. AI literacy initiatives, course design, and institutional policies that emphasize intentional use, authorship, and self-directed learning may help prevent passive or dependency-driven engagement with AI tools.
The research also challenges institutions to rethink how AI is positioned within academic workflows. When AI is framed as a substitute for cognitive effort, students experiencing high negative agency may disengage or rely on it uncritically. When AI is framed as a scaffold that supports exploration and learning autonomy, it is more likely to be perceived as valuable rather than merely useful.
The authors acknowledge several limitations, including the cross-sectional design and reliance on self-reported data. They caution against causal interpretations and call for longitudinal and cross-cultural studies to test whether these agency-conditioned patterns hold across different educational systems.
FIRST PUBLISHED IN: Devdiscourse

