Students rely on AI yet fear dependency and integrity risks
New research suggests that beneath high adoption rates and strong perceptions of usefulness in higher education, AI is introducing subtle behavioral and psychological costs that reshape how students engage with learning, ethics, and their own competence.
The study, titled "Navigating Ambivalence: Artificial Intelligence and Its Impact on Student Engagement in Engineering Education," was published in the journal Behavioral Sciences. It provides a detailed examination of how engineering students experience AI not as a purely enabling tool, but as a source of simultaneous empowerment and strain.
Based on data from engineering students at a Chilean public university, the study challenges the dominant narrative that AI adoption in education is an unqualified success. Instead, it shows that widespread use is accompanied by growing uncertainty, ethical anxiety, and behavioral caution, pointing to a deeper form of ambivalence that institutions have yet to fully address.
High adoption masks growing psychological tension
The research finds that AI tools are already deeply embedded in engineering education. More than seven in ten surveyed students reported using AI regularly, primarily to save time, improve conceptual understanding, and enhance the quality of academic work. Students described AI as especially useful for clarifying complex topics, supporting problem-solving, and accelerating routine tasks.
This high adoption rate might suggest smooth integration. However, the study reveals that frequent use does not translate into uncritical acceptance. Instead, students experience AI as a double-edged presence in their academic lives. While they value its efficiency, they also report persistent concerns about overreliance, diminished learning depth, and erosion of independent thinking.
A key finding is the emergence of cognitive dependence anxiety. Many students worry that regular AI use could weaken their own analytical skills over time. Rather than seeing AI as a neutral assistant, they perceive it as a tool that may quietly substitute for their own effort and reasoning, raising doubts about whether their academic achievements genuinely reflect their abilities.
This tension shapes behavior. Students often limit when and how they use AI, even when it could improve performance. The study shows that engagement with AI is not linear or enthusiastic but cautious and strategic. Students constantly evaluate whether using AI helps learning or undermines it, reflecting an internal negotiation rather than seamless adoption.
The findings suggest that AI’s impact on education cannot be understood solely through usage statistics. Behavioral and psychological responses matter just as much, and these responses are marked by ambivalence rather than confidence.
Ethics, trust, and uncertainty shape AI engagement
Ethical concerns play a major role in how students interact with AI. The study finds widespread anxiety about plagiarism, academic integrity, and the legitimacy of AI-generated outputs. Students express uncertainty about where institutional boundaries lie, especially when rules are unclear or inconsistently communicated.
This uncertainty creates behavioral restraint. Even students who recognize AI's value often hesitate to use it fully, fearing accusations of misconduct or unfair advantage. Rather than encouraging open exploration, AI use becomes an exercise in risk management, with students weighing performance gains against reputational and academic consequences.
Trust in AI-generated information also emerges as a critical issue. Students question the accuracy, reliability, and transparency of AI outputs, particularly when they cannot verify sources or reasoning. This skepticism does not stop usage, but it changes how AI is used. Students frequently double-check results or limit AI’s role to preliminary exploration rather than final answers.
The study frames these behaviors through self-determination theory, which holds that autonomy, competence, and relatedness sustain engagement. AI supports autonomy by giving students control over pace and access to information. At the same time, it threatens competence when students feel unsure whether success comes from their own understanding or from AI assistance.
This contradiction creates emotional strain. Students report feeling both empowered and uneasy, productive yet uncertain. The study identifies this emotional ambivalence as a defining feature of current AI engagement in education.
Gender differences reveal uneven confidence, not access
The study analyzes gender-related differences in AI engagement. Female students consistently rated AI as highly useful across multiple dimensions, including learning support and efficiency. At the same time, they were more likely than male students to describe AI as difficult to apply effectively.
This pattern points to a confidence and trust gap rather than a usage gap. Female students do not reject AI or undervalue it. Instead, they experience greater uncertainty about how to use it appropriately and safely within academic norms. This suggests that structural or cultural factors shape AI engagement beyond simple access or technical skill.
The findings challenge assumptions that increased availability of AI tools automatically leads to equitable outcomes. Even when usage rates are similar, psychological experiences differ. Without targeted support, these differences may widen over time, creating uneven learning experiences despite uniform access.
The study describes this dynamic as an emerging AI engagement gap, where students’ confidence, trust, and ethical clarity determine how effectively they can benefit from AI. Addressing this gap requires more than training in tool usage. It demands explicit guidance, transparent policies, and institutional support that normalize uncertainty and provide clear boundaries.
Institutional silence amplifies hidden costs
Students report that universities often promote innovation and efficiency while offering limited guidance on acceptable AI use. This silence forces students to self-regulate, increasing anxiety and inconsistent behavior.
Without clear frameworks, students rely on personal judgment or peer norms, which vary widely. This environment discourages open discussion and reinforces cautious, sometimes defensive engagement. Rather than serving as a shared educational resource that fosters critical learning, AI becomes a private coping mechanism.
The study argues that this gap between technological adoption and institutional governance is a key driver of hidden behavioral costs. When expectations are unclear, students internalize responsibility for ethical decision-making without adequate support. This increases stress and undermines confidence, even among high-performing students.
The authors emphasize that AI integration should not be treated as a purely technical issue. Pedagogical design, ethical education, and transparent communication are essential to prevent AI from becoming a source of disengagement rather than empowerment.
Rethinking AI’s role in education
AI’s impact on student engagement is fundamentally ambivalent. It enhances efficiency and access to knowledge while simultaneously introducing uncertainty, ethical tension, and psychological strain. These effects do not cancel each other out. They coexist, shaping a new and complex learning environment.
Importantly, the research does not argue against AI adoption. Instead, it calls for a more nuanced approach that acknowledges both benefits and costs. Students are not resisting AI. They are negotiating its role in their learning, often without sufficient guidance.
The findings suggest that the next phase of AI integration in education should focus less on expanding access and more on supporting responsible, confident use. Clear institutional policies, ethical literacy, and open dialogue are critical to reducing ambivalence and unlocking AI’s educational potential.
First published in: Devdiscourse