Students choose ChatGPT for efficiency, not human-like traits
Effectiveness and trust outweigh human-like characteristics in shaping long-term use of generative AI tools like ChatGPT, a new study published in Electronics finds. The findings come at a time when higher education institutions are rapidly updating policies and rethinking digital literacy requirements to reflect the rise of generative AI.
The study, “Assessing ChatGPT Adoption in Higher Education: An Empirical Analysis,” is based on responses from 477 students enrolled at the Bucharest University of Economic Studies. It examines the psychological and functional drivers of ChatGPT adoption using a structural equation modeling framework. The research integrates constructs from the Technology Acceptance Model, the Unified Theory of Acceptance and Use of Technology and the Expectation–Confirmation Model, while introducing new dimensions specific to AI tools, such as human-like and AI-like perceived social presence.
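To make the modeling approach concrete, the sketch below shows how a path model linking these constructs could be specified with semopy, an open-source structural equation modeling library for Python. The construct names, paths and input file are illustrative assumptions reconstructed from the relationships the article describes, not the authors' actual specification or data.

```python
# Illustrative sketch only: the paths below are inferred from the
# relationships described in the article, not taken from the published
# model. semopy is one common open-source SEM library for Python.
import pandas as pd
import semopy

# Structural (regression) paths among observed composite scores.
# The moderating role of hedonic motivation on the ease-satisfaction
# link is omitted to keep the sketch simple.
MODEL_SPEC = """
Satisfaction ~ Usefulness + EaseOfUse + Confirmation
EaseOfUse ~ AILikePresence
Trust ~ Satisfaction + Confirmation + EaseOfUse + HumanLikePresence
Loyalty ~ Trust + Usefulness
"""

# Hypothetical dataset: one row per respondent (477 in the study),
# one column per construct, e.g. averaged Likert-scale item scores.
data = pd.read_csv("survey_scores.csv")  # hypothetical file name

model = semopy.Model(MODEL_SPEC)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```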
The study’s results provide strong evidence that students adopt ChatGPT primarily because it is helpful, efficient, and reliable, not because it mimics human conversation. The authors highlight that trust in AI grows directly from satisfaction, which in turn depends on how easy and useful students find ChatGPT in supporting their academic tasks.
Usefulness and ease of use remain core predictors of student satisfaction
The study identifies perceived usefulness and ease of use as the strongest predictors of satisfaction. Students consistently reported that ChatGPT improved the speed and quality of their work, supported learning tasks and helped them perform better academically. These findings reinforce established technology acceptance research while offering new insights into how generative AI tools function in an educational context.
Ease of use played a similarly significant role. Students who found ChatGPT intuitive, clear in its responses and simple to integrate into their workflow expressed much higher satisfaction. The relationship between ease of use and satisfaction was strengthened by hedonic motivation, the enjoyment students associated with using the tool. When interaction felt smooth and engaging, students were more inclined to invest time exploring ChatGPT's capabilities, enhancing both perceived ease and effectiveness.
The authors also show that expectation-confirmation dynamics shape student perceptions. When ChatGPT matched or exceeded expectations for accuracy, relevance and clarity, satisfaction increased, which in turn drove trust. This suggests that consistent performance is critical to sustaining adoption: if the tool delivers reliable assistance across different academic tasks, students gain confidence in its capabilities.
A noteworthy finding is the limited impact of human-like characteristics. Perceived human-like social presence had only a small influence on trust, indicating that students do not require the system to resemble a human conversational partner to use it comfortably. Instead, the AI-like presence, reflecting intelligence, responsiveness and task-oriented interaction, contributed more to ease of use.
The emphasis students placed on functionality over anthropomorphism highlights a shift in how AI is perceived within academic environments. Generative AI is recognized less as a conversational companion and more as an efficient problem-solving assistant.
Trust emerges as the key factor driving long-term use
Trust is identified as the most powerful determinant of loyalty, defined as the likelihood that students will continue using ChatGPT in the future. The authors describe satisfaction as the primary driver of trust. When students felt that ChatGPT met their academic needs, provided correct information and delivered useful assistance, they became more confident in depending on it for future tasks.
Trust was also linked to confirmation and ease of use. Students who felt comfortable navigating the system and who experienced consistent performance were more likely to trust it. Trust, in turn, shaped their attitude toward long-term use. Those with high trust showed strong intentions to continue using ChatGPT, integrate it into more complex tasks and rely on it as a routine academic tool.
Perceived usefulness also directly predicts loyalty. Students are willing to adopt ChatGPT repeatedly when they experience tangible academic benefits. This indicates that higher education settings may see a sustained and increasing reliance on generative AI, provided the tools maintain high performance standards.
Interestingly, trust was not significantly influenced by the perceived human-like dimension of ChatGPT. Although students noted ChatGPT’s ability to simulate conversational cues, this did not determine whether they trusted the system. Instead, accuracy, speed and task support shaped trustworthiness.
This finding challenges the idea that anthropomorphic AI is essential for academic settings. Students value competence over personality. For educational institutions, this suggests that integrating AI tools should prioritize reliability, transparency and task alignment instead of focusing on making these tools more human-like.
Implications for universities navigating the rise of generative AI
The study provides critical guidance for universities as they integrate AI tools into coursework, student support services and digital infrastructure. One of the central implications is that institutions must prioritize building clear guidelines around AI use. Students are adopting ChatGPT rapidly, and the study shows that its utility plays a central role in academic life. Without structured policies, students may rely on AI without adequate awareness of its limitations.
Training and AI literacy programs emerge as essential next steps. Since students base trust on their own experience rather than institutional recommendations, universities must help them develop strategies for verifying information, evaluating AI-generated content and understanding potential inaccuracies. A well-informed student body can maximize the benefits of AI while minimizing risks related to misinformation or overreliance.
The authors also highlight the importance of ensuring accuracy and transparency in AI systems used by educational institutions. Because trust and satisfaction determine adoption, universities need to ensure that AI tools integrated into formal learning management systems are tested for reliability and aligned with academic standards.
The study’s findings around perceived social presence have direct design implications. Students appreciate efficiency and intelligence more than attempts to simulate human behavior. This suggests that universities should adopt tools that emphasize task performance rather than conversational mimicry. Systems that provide precise answers, fast processing and strong academic alignment may be more successful in gaining long-term use.
Additionally, universities must consider how ChatGPT fits into assessment design. As adoption grows, academic integrity concerns become more complex, requiring updated evaluation methods and clearer instructions from faculty. The results indicate that students will continue using ChatGPT, so institutions must adapt rather than resist this shift.
FIRST PUBLISHED IN: Devdiscourse

