Academic roles and gender shape ethical use of AI in higher education
A new study sheds light on how academic roles, gender, and usage experience influence ethical awareness surrounding generative AI tools in higher education. The peer-reviewed research, titled "Exploring the Ethical Implications of Using Generative AI Tools in Higher Education," was published in Informatics.
Drawing on survey data from 883 students, teachers, and researchers, the study reveals that while generative AI (GenAI) tools such as ChatGPT, Microsoft Copilot, and Gemini are increasingly embedded in academic routines, significant disparities exist in how users perceive the ethical risks tied to their use. The researchers found that teachers and researchers exhibit the highest levels of awareness regarding ethical responsibilities, while students, particularly undergraduates, demonstrate a lower understanding, raising concerns over academic integrity and responsible AI adoption.
How does ethical awareness of AI use differ between academic roles?
The study’s primary aim was to analyze how perceptions of AI ethics differ across roles in the academic community. Teachers and researchers consistently demonstrated greater understanding of potential negative consequences, personal responsibility, and the ethical principles involved in AI tool usage. For instance, 72.71% of them showed high awareness of ethical frameworks compared to only 27.29% of students.
This discrepancy is largely attributed to the professional obligations of educators, who are more frequently exposed to discussions about authorship, plagiarism, copyright law, and responsible research practices. In contrast, students, especially those in undergraduate programs, were found to have limited exposure to structured ethical training, with only 41.09% recognizing the risks associated with AI-generated content.
However, the study also found signs of adaptability among students. Those with more than three months of experience using GenAI tools showed a marked improvement in their understanding of responsibilities, with 89.98% demonstrating greater ethical engagement. This suggests that hands-on experience, paired with institutional guidance, can effectively raise ethical standards across academic levels.
What role does gender play in understanding AI ethics?
The second major finding of the research underscores the role of gender in shaping ethical perceptions of AI use. Across all surveyed categories (awareness of ethical principles, user responsibility, and understanding of negative consequences), female respondents consistently outperformed their male counterparts.
Specifically, 78.02% of women showed a clear understanding of the consequences of GenAI use, compared to just 21.98% of men. The trend continued with responsibility awareness (81.12% women vs. 18.88% men) and ethical principle comprehension (75.73% women vs. 24.27% men). The study's authors noted that these findings are consistent with prior research suggesting women often report higher ethical sensitivity and stronger compliance with social responsibility in both academic and corporate environments.
While these gender differences narrowed slightly with longer AI tool usage, they remained statistically significant, indicating the need for tailored ethical education strategies. The authors recommend targeted interventions, particularly for male users, to close the ethics gap in GenAI engagement.
Does longer AI usage improve ethical understanding, and will awareness drive adoption?
The research also explored whether continued exposure to GenAI tools improves ethical awareness and whether this awareness influences future use. A strong positive correlation was observed between the duration of AI use and ethical awareness. Respondents who had used tools for more than three months demonstrated significantly higher levels of understanding across all ethical dimensions, including issues related to copyright, authorship, transparency, and academic integrity.
The strongest correlation was between awareness of user responsibility and understanding of ethical principles (r = 0.874), suggesting that as users become more conscious of their obligations, they also become more ethically literate. However, the study found that ethical awareness alone does not strongly predict future AI adoption.
While ethical understanding enhances user trust and may reduce misuse, it is not the sole driver of adoption. The study's ANOVA and correlation tests revealed weak links between ethical concern and continued GenAI usage. This indicates that other factors, such as perceived usefulness, ease of access, peer influence, and institutional policy, play larger roles in shaping behavioral intent.
The authors argue that responsible AI integration in higher education cannot rely solely on ethical training. Instead, institutions must combine ethical awareness with practical, policy-driven guidance and technical literacy to foster responsible, widespread AI adoption.
- FIRST PUBLISHED IN:
- Devdiscourse

