Students praise GenAI’s usefulness, academics warn of overreliance and cheating

Generative artificial intelligence is reshaping the higher education landscape, yet perceptions of its role and risks remain divided among those who use it most: students and academics. A newly published study titled “Perceptions of Generative AI Tools in Higher Education: Insights from Students and Academics at Sultan Qaboos University” in Education Sciences sheds light on the nuanced acceptance, concerns, and future expectations surrounding AI tools like ChatGPT in the academic environment of Oman’s leading university.
Drawing on survey data from 555 students and 168 academics, the study employed the Technology Acceptance Model (TAM) to assess how both groups perceived generative AI across five dimensions: actual use, perceived ease of use, perceived usefulness, perceived challenges, and intention to use. The researchers offer one of the most comprehensive analyses to date of generative AI integration in Gulf-region higher education, presenting a framework not only for understanding current trends but also for crafting institutional responses.
How familiar are students and academics with generative AI tools like ChatGPT?
Awareness and usage of generative AI tools are already widespread at Sultan Qaboos University. Among students, 88% reported using ChatGPT 3.5, while 20% had experimented with ChatGPT Plus-4. Academics were not far behind, with 82% using ChatGPT 3.5 and 32% familiar with the advanced Plus-4 version. Despite the shared familiarity, students and faculty demonstrate notably different usage patterns and motivations.
Students primarily employ AI tools to enhance academic productivity, assist with personalized learning, brainstorm ideas, and draft or refine assignments. These applications highlight GenAI’s potential as an educational equalizer, especially in aiding non-native English speakers or students with less access to academic support.
Academics, on the other hand, report using GenAI to improve teaching efficiency, develop assessments, design lesson plans, customize content delivery, and handle administrative tasks such as grading. Their use of GenAI is both instructional and operational, suggesting a broader institutional impact if such practices become standardized.
Do students and academics perceive the usefulness and risks of GenAI differently?
Using the TAM framework, the study uncovered significant perceptual differences across all variables between students and academics. Students found GenAI more useful overall (mean score of 3.85 vs. 3.64 for academics), praising its ability to save time, deliver structured content across disciplines, and provide anonymous academic help. Statements such as “ChatGPT can provide information in diverse fields” and “ChatGPT is useful in research” scored highest among students, underlining their preference for GenAI as a knowledge companion and tutor.
Academics, however, rated GenAI slightly higher in ease of use (mean of 3.96 vs. 3.85 for students), suggesting that faculty members are technically confident in navigating the tools. Yet this comfort is tempered by heightened concern. Academics expressed greater worry over GenAI’s impact on academic integrity, the potential for student over-reliance, and the challenges of distinguishing between AI-generated and student-authored content. The most alarming perception among faculty was that GenAI could facilitate cheating, with a mean rating of 3.96.
Students, by contrast, recognized functional limitations, such as ChatGPT’s tendency to produce factual errors, biased outputs, or fabricated citations, but were generally less alarmed by the ethical risks. Interestingly, both groups rated over-reliance on GenAI as the most pressing shared challenge, citing potential erosion of critical thinking and creativity as a major concern.
What factors shape future intentions to use GenAI tools in higher education?
Despite divergent perspectives on usefulness and challenges, both students and academics expressed overwhelming support for the continued use of generative AI at Sultan Qaboos University. Around 81% of students and 86% of academics stated their intention to keep using such tools. A significant majority, 68% of students and 74% of faculty, advocated for officially embracing GenAI within the university setting.
However, a concerning finding emerged regarding institutional policy awareness. Over 63% of students and 74% of academics reported being unaware of any existing guidelines regulating the ethical use of generative AI at SQU. This policy vacuum represents a critical barrier to safe and responsible GenAI adoption. Since the survey was conducted, the university has moved to fill this gap by publishing documents such as “Guidelines for Using GenAI Tools in Teaching, Learning, and Assessment” and “SQU AI Guidelines for Administrative Purposes.”
The study’s authors recommend a more structured governance approach to GenAI, including the implementation of an AI disclosure policy. Such a policy would require students to document how and when they use AI tools in academic work, including prompt examples and appended outputs, mirroring standards already adopted in scientific publishing.
Moreover, the researchers argue that higher education institutions must commit to comprehensive, ongoing training programs, not just in AI functionality, but in ethics, prompt engineering, and critical thinking. The goal is to shift GenAI from a content substitute to a learning complement, fostering human-AI collaboration that reinforces, rather than replaces, essential academic skills.
The findings also indicate that effective GenAI adoption is not purely a matter of technological readiness. Cultural acceptance, trust in AI, personal motivation, and regulatory clarity all shape whether students and educators feel empowered or inhibited when using these tools. Future studies, the authors suggest, should compare perceptions across cultural and institutional contexts, employ mixed-method designs to capture deeper insights, and track how AI integration evolves over time in real classroom environments.
FIRST PUBLISHED IN: Devdiscourse