AI in higher education thrives on peer influence and ease of use, not trust alone
Artificial intelligence (AI) is rapidly becoming a routine part of student life in higher education. Yet as universities invest heavily in AI tools and digital infrastructure, a fundamental question remains unresolved: what actually drives students to adopt and consistently use AI in their studies? A new empirical study suggests the answer has less to do with trust in technology and more to do with social influence, acceptance, and perceived usefulness within everyday academic environments.
In a study titled “Modeling Student Acceptance of AI Technologies in Higher Education: A Hybrid SEM–ANN Approach,” published in Future Internet, researcher Charmine Sheena R. Saflor examines the behavioral and psychological factors shaping AI adoption among college students. Using advanced statistical and machine learning methods, the research challenges common assumptions about trust and reveals acceptance as the strongest predictor of whether students actually use AI tools in practice.
Social influence shapes AI adoption more than technical confidence
The study is based on the Technology Acceptance Model, a widely used framework for explaining how users come to accept and use new technologies. However, rather than relying on traditional linear analysis alone, the research applies a hybrid approach that combines structural equation modeling with artificial neural networks. This allows the study to capture both causal relationships and complex, nonlinear patterns in student behavior.
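To make the two-stage idea concrete, the sketch below approximates an SEM-then-ANN workflow in Python with scikit-learn. The variable names (acceptance, self_efficacy, behavioral_intention, trust, actual_use) and the synthetic survey scores are illustrative assumptions rather than the study's data, and an ordinary linear regression stands in for the structural equation model; the neural network stage then ranks predictors by permutation importance.

```python
# Minimal sketch of a two-stage SEM-ANN style analysis (illustrative only).
# Variable names and synthetic Likert-style scores are assumptions; a real
# analysis would estimate the SEM stage with dedicated software.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 200  # matches the study's sample size

# Synthetic composite scores on a 1-5 scale for each construct.
df = pd.DataFrame({
    "acceptance": rng.uniform(1, 5, n),
    "self_efficacy": rng.uniform(1, 5, n),
    "behavioral_intention": rng.uniform(1, 5, n),
    "trust": rng.uniform(1, 5, n),
})
# Simulated outcome: actual use driven mostly by acceptance, then self-efficacy.
df["actual_use"] = (
    0.6 * df["acceptance"]
    + 0.25 * df["self_efficacy"]
    + 0.1 * df["behavioral_intention"]
    + 0.05 * df["trust"]
    + rng.normal(0, 0.3, n)
)

X, y = df.drop(columns="actual_use"), df["actual_use"]

# Stage 1 (stand-in for SEM): linear path coefficients give direction and strength.
paths = LinearRegression().fit(X, y)
print("path coefficients:", dict(zip(X.columns, paths.coef_.round(3))))

# Stage 2 (ANN): a small neural network captures nonlinear patterns, and
# permutation importance ranks the predictors of actual use.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(ann, X_te, y_te, n_repeats=30, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The pattern the sketch imitates is the one the paper describes: a theory-driven model first establishes which relationships hold, and the neural network then weighs how strongly each factor predicts actual behavior.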
Based on survey data from 200 undergraduate students at a public higher education institution in the Philippines, the findings show that AI adoption is deeply social. Students are more likely to engage with AI tools when they see peers, instructors, and the broader academic community using them positively. Social influence significantly affects students’ confidence in their ability to use AI, as well as their perception of risks associated with the technology.
This social dimension plays a critical role in shaping self-efficacy, or the belief that one can successfully use AI for academic tasks. When AI is normalized within classrooms, study groups, and institutional systems, students feel more capable and less hesitant. Conversely, when AI use appears isolated or controversial, perceived risks rise and willingness to engage declines.
The research highlights that AI adoption does not occur in a vacuum. Students do not evaluate technology purely on technical merit or abstract benefits. Instead, they respond to cues from their social and institutional environment. Peer discussions, shared practices, and implicit approval from faculty all contribute to whether AI becomes a trusted academic aid or remains a marginal tool.
This finding carries important implications for universities seeking to manage AI integration responsibly. Policies that focus solely on technical training or ethical warnings may overlook the social dynamics that ultimately determine student behavior.
Acceptance outweighs trust in predicting real AI use
Trust in AI, often emphasized in policy and institutional discourse, does not directly lead to actual use. While trust is influenced by students’ behavioral intentions and general attitudes toward AI, it does not independently predict whether students will integrate AI into their academic routines.
Instead, acceptance emerges as the dominant factor. Acceptance reflects a student’s overall readiness to incorporate AI into daily learning activities, including willingness to experiment, adapt study habits, and see AI as a legitimate academic resource. The analysis shows that once acceptance is established, actual use follows, regardless of abstract trust considerations.
The artificial neural network component of the study reinforces this conclusion. When ranking predictors of actual AI use, acceptance stands out as the most influential variable, followed by self-efficacy and behavioral intention. Trust plays a secondary role, while factors such as attitude and perceived risk influence adoption indirectly rather than decisively.
This challenges the assumption that building trust alone will drive responsible AI use. Students may trust AI systems conceptually but still choose not to use them if they do not see clear relevance to their studies or feel confident integrating them into coursework. Conversely, students may actively use AI tools even while holding reservations about accuracy or ethical implications, especially if acceptance within their peer group is high.
The study suggests that universities should rethink how they frame AI adoption strategies. Emphasizing acceptance through practical integration, clear use cases, and supportive environments may be more effective than focusing narrowly on trust-building narratives.
Ease of use and perceived risk shape attitudes toward AI
While acceptance dominates as the key driver of AI use, the study also identifies important supporting factors that influence student behavior. Perceived ease of use significantly affects behavioral intention, meaning that students are more inclined to adopt AI tools when they find them intuitive, accessible, and efficient for academic tasks.
This finding underscores the importance of usability in educational AI systems. Complex interfaces, unclear instructions, or poorly integrated platforms can discourage use even among students who are otherwise open to AI. When AI tools align smoothly with existing workflows, such as writing assignments, research, or data analysis, adoption increases.
Perceived risk also plays a meaningful role, particularly in shaping attitudes and perceptions of usefulness. Students express concerns about inaccurate outputs, over-reliance, misuse, and potential academic consequences. These concerns can dampen enthusiasm, especially when guidance on appropriate use is unclear or inconsistent.
However, the study finds that perceived risk does not automatically prevent AI use. Instead, its impact depends on the broader context of acceptance and support. When students feel confident in their ability to evaluate AI outputs and believe that AI use is socially and institutionally accepted, perceived risks become manageable rather than prohibitive.
This nuanced relationship highlights the need for balanced AI governance in higher education. Overly restrictive policies may increase anxiety and discourage responsible experimentation, while a lack of guidance can leave students exposed to misuse and confusion. Clear expectations, training in critical evaluation, and open discussion of risks can help students navigate AI use more effectively.
Implications for universities and education policy
From automated writing assistants to data analysis tools, AI is already embedded in student workflows, often spreading faster than institutional policies can adapt. Universities seeking sustainable and responsible adoption should focus on fostering environments where AI use is transparent, supported, and aligned with learning objectives.
This includes incorporating AI literacy into curricula, not just as technical training but as part of broader academic skills development. Teaching students how to critically assess AI outputs, understand limitations, and use tools ethically can strengthen self-efficacy and reduce perceived risk. Faculty engagement is also crucial, as instructor attitudes strongly shape social norms around AI use.
The study also highlights the value of hybrid analytical approaches in education research. By combining theory-driven modeling with machine learning prediction, the research captures both why students adopt AI and how strongly different factors influence real behavior. This approach offers a more realistic picture of technology adoption in complex social systems like universities.
- FIRST PUBLISHED IN:
- Devdiscourse

