ChatGPT-era students feel smarter with AI, but also more likely to cheat

CO-EDP, VisionRI | Updated: 28-03-2025 10:10 IST | Created: 28-03-2025 10:10 IST

A new empirical study has raised ethical concerns over the increasing psychological identification students form with artificial intelligence technologies, revealing that strong AI identity may inadvertently foster unethical behavior such as academic dishonesty. However, the research also identifies IT mindfulness as a critical safeguard capable of moderating this risk and promoting responsible AI use in educational settings.

Published by researchers from the University of North Texas, University of Texas at Tyler, and Kennesaw State University, the study titled "AI Identity, Empowerment, and Mindfulness in Mitigating Unethical AI Use" used a theoretical model and structural equation modeling (SEM) to analyze data from 240 college students with experience using AI tools like ChatGPT. The findings show that while a stronger AI identity contributes positively to psychological empowerment, improving confidence, autonomy, and academic engagement, it also correlates with a higher likelihood of unethical AI usage, including cheating and plagiarism.

The study defines "AI identity" as the extent to which individuals internalize artificial intelligence as a core part of their academic or professional self-concept. As AI tools become increasingly integrated into classrooms, students who identify with these technologies tend to feel more competent and in control, a state the researchers label psychological empowerment. This empowerment comprises four dimensions: meaning, competence, choice, and impact. Together, these enable students to feel they can use AI effectively to shape their learning outcomes.
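To make the construct concrete, here is a minimal scoring sketch assuming hypothetical survey data: empowerment computed as the mean of its four dimension subscales. The file and column names are placeholders, not the study's actual instrument.

```python
# Hypothetical scoring sketch: psychological empowerment as the mean of
# its four dimensions (meaning, competence, choice, impact), each column
# assumed to be a subscale score from Likert-type survey items.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # placeholder data file
df["empowerment"] = df[["meaning", "competence", "choice", "impact"]].mean(axis=1)
```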

However, this sense of empowerment carries a double edge. According to the findings, students who feel more capable and autonomous in using AI are also more likely to misuse it for personal gain. The data analysis revealed a strong, statistically significant relationship between psychological empowerment and unethical behavior, particularly the misuse of AI technologies to complete assignments dishonestly. This underscores a paradox: while empowerment can boost motivation and academic performance, it can also lead to entitlement and moral complacency if left unchecked.

IT mindfulness, defined as a user's conscious awareness and attentiveness during interactions with digital technologies, emerged in the study as a moderating force, reducing the likelihood of unethical conduct by prompting ethical reflection and caution in AI use. Students high in IT mindfulness were less likely to let an empowered AI identity translate into unethical decisions; the construct encourages thoughtful engagement with AI tools, helping students pause and consider the broader consequences of their actions.

The study tested four hypotheses using partial least squares structural equation modeling (PLS-SEM). The first confirmed that AI identity positively influences psychological empowerment. The second found that psychological empowerment, in turn, increases the likelihood of unethical AI use. The third and fourth examined the moderating role of IT mindfulness, and both were statistically supported: mindfulness weakened the effect of AI identity on empowerment and also weakened the link between empowerment and unethical behavior.
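For a concrete feel of what such a moderation test involves, the sketch below continues the hypothetical data frame from the earlier sketch and fits the two moderated paths with ordinary least squares plus interaction terms. This is a simplified stand-in for the authors' PLS-SEM, not their actual analysis; the `ai_identity`, `it_mindfulness`, and `unethical_use` columns are assumed placeholders.

```python
# Simplified moderation analysis: OLS with interaction terms as a
# stand-in for the PLS-SEM the paper uses. In the formula syntax,
# "a * b" expands to both main effects plus the a:b interaction.
import statsmodels.formula.api as smf

# H1 + H3: AI identity -> empowerment, moderated by IT mindfulness.
# A significant negative interaction coefficient would mirror the
# paper's finding that mindfulness dampens this path.
fit_emp = smf.ols("empowerment ~ ai_identity * it_mindfulness", data=df).fit()

# H2 + H4: empowerment -> unethical use, again moderated by mindfulness.
fit_use = smf.ols("unethical_use ~ empowerment * it_mindfulness", data=df).fit()

print(fit_emp.summary())
print(fit_use.summary())
```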

Lead author Dr. Mayssam Tarighi Shaayesteh stated that the findings are a call for balance. “AI technologies offer immense opportunities for student growth and innovation, but the psychological boost they provide must be accompanied by ethical grounding,” he said. “Mindfulness training could become a vital part of preparing students for responsible digital citizenship.”

The authors emphasize that their findings align with broader debates about digital ethics, particularly as AI becomes more embedded in professional and educational domains. While tools like ChatGPT offer productivity gains and creative potential, the blurred boundary between assistance and academic dishonesty remains a concern. Students often perceive AI as non-human, which reduces feelings of guilt and makes it easier to justify misuse, a phenomenon previously observed in consumer behavior studies and confirmed here in educational settings.

The study's model explains 36% of the variance in unethical AI use and 31% of the variance in psychological empowerment, suggesting solid explanatory power. The survey design also controlled for common method bias, and demographic analysis showed a diverse sample across age, gender, race, and educational background.
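In regression terms, those figures are the R-squared values of the outcome models. Continuing the hypothetical sketch above, they would be read off the fitted results like this:

```python
# Explanatory power: share of variance in each outcome accounted for by
# the model. The study reports roughly 0.31 for empowerment and 0.36
# for unethical AI use; these hypothetical fits would print their own.
print(f"R^2, psychological empowerment: {fit_emp.rsquared:.2f}")
print(f"R^2, unethical AI use:          {fit_use.rsquared:.2f}")
```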

Researchers argue that the implications extend beyond student conduct to how educational institutions, tool developers, and policymakers approach AI literacy. The authors recommend embedding ethical training and IT mindfulness into higher education curricula, not only to enhance academic integrity but to foster a generation of responsible AI users in the workforce.

The findings also contribute to ongoing philosophical discussions around identity and agency in the age of automation. AI identity is increasingly shaping how individuals relate to their work, make decisions, and define personal success. As more tasks are delegated to machines, the perception of agency can become inflated, empowering individuals in one moment and distorting their moral compass in another.

The research calls for striking a delicate balance: encouraging student engagement with AI while reinforcing ethical accountability through structured interventions such as mindfulness workshops, ethical-use guidelines, and transparency logs that track AI tool usage. Developers of AI-based educational tools are also urged to incorporate features that prompt self-reflection, such as usage reminders and ethical nudges embedded in the user interface.
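As one illustration of what such a transparency log might look like in practice, here is a minimal sketch assuming a simple append-only JSON Lines file; the function name and record fields are hypothetical, not part of any tool the study describes.

```python
# Hypothetical transparency log: one append-only JSONL record per use
# of an AI tool, so instructors can audit where assistance was used.
import json
from datetime import datetime, timezone

def log_ai_use(student_id: str, tool: str, purpose: str,
               path: str = "ai_usage_log.jsonl") -> None:
    """Append a single AI-usage record with a UTC timestamp."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "student_id": student_id,
        "tool": tool,
        "purpose": purpose,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_use("s123", "ChatGPT", "brainstormed essay outline")
```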

First published in: Devdiscourse