From tool to threat? Why students don’t fully trust AI in education
A new study shows that while students are embracing AI tools at scale, they remain cautious about trusting them as legitimate sources of knowledge. The research highlights a growing divide between usage and trust, raising concerns about how future educators will integrate AI into teaching without compromising academic standards and critical thinking.
Published in Algorithms under the title "Trust, Education, and Artificial Intelligence: Adoption, Explainability, and Epistemic Authority Among Teacher-Education Undergraduates in Greece," the study analyzes how pre-service teachers interact with AI in higher education. Based on responses from 363 undergraduate students in Greece, the research reveals a complex and layered relationship between AI adoption, trust, and educational legitimacy.
High adoption, low trust: The emerging AI paradox in education
The study finds that AI technologies are no longer peripheral tools but have become deeply integrated into students' everyday and academic lives. Over 90 percent of respondents reported regular AI use in daily activities, and more than 80 percent confirmed using AI to support their studies, signaling that generative AI tools are now embedded in the core learning process.
This widespread adoption reflects AI's growing role in tasks such as summarizing content, generating explanations, organizing ideas, and assisting with academic writing. Students increasingly rely on these tools for efficiency and convenience, using them as a first step in problem-solving and information gathering. The findings suggest that AI is reshaping not only how students complete assignments but also how they approach learning itself.
However, the study identifies a clear disconnect between usage and trust. Despite high adoption rates, only a small fraction of students reported full confidence in AI-generated outputs. Most respondents expressed moderate trust, with a dominant pattern of conditional reliance. A large majority indicated that they trust AI outputs only sometimes, depending on context, while a significant portion reported skepticism toward the accuracy and reliability of AI-generated answers.
This divergence is described as an "adoption–trust paradox," where students use AI extensively for practical purposes but hesitate to grant it epistemic authority. The findings suggest that students differentiate between AI as a useful tool and AI as a credible source of knowledge. This distinction is critical in educational settings, where trust in information directly impacts learning outcomes and academic integrity.
Verification culture emerges as students challenge AI authority
The study investigates how students actively manage their trust in AI systems. Many students engage in verification practices to assess the credibility of AI-generated content. These practices include cross-checking information with textbooks, academic databases, and trusted online sources, as well as refining prompts to test the consistency of responses.
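As a concrete illustration of one such practice, the minimal Python sketch below automates a prompt-consistency check of the kind the study describes. It is not code from the study: the ask_model function is a hypothetical stand-in for whatever generative-AI tool is being tested, simulated here so the example runs on its own.

```python
import random
from collections import Counter

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to a real generative-AI tool;
    # this simulation just mimics a model that sometimes contradicts itself.
    return random.choice(["Answer A", "Answer A", "Answer B"])

def consistency_check(prompt: str, n: int = 5) -> Counter:
    """Ask the same question n times and tally the distinct answers.
    Disagreement across runs is a cue that the output needs external
    verification (textbooks, academic databases) before being trusted."""
    return Counter(ask_model(prompt) for _ in range(n))

print(consistency_check("Who is credited with composing the Iliad?"))
# e.g. Counter({'Answer A': 4, 'Answer B': 1})
```

A tally dominated by a single answer is no guarantee of correctness, but a split tally is a cheap, immediate signal to consult an authoritative source.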
This verification-based approach indicates that students are not passive users of AI but active evaluators of its outputs. The study suggests that this behavior may reflect an emerging pedagogical mindset, particularly among pre-service teachers who are training to become future educators. These students appear to treat knowledge as something that must be validated and contextualized, rather than simply consumed.
The study highlights persistent concerns about the reliability of AI systems. Students reported encountering inconsistent answers across different interactions, factual inaccuracies, and contradictions in AI-generated content. These experiences contribute to a cautious approach, reinforcing the need for external validation before accepting AI outputs as trustworthy.
The underlying cause of this skepticism is linked to the technical architecture of large language models. These systems generate responses based on probabilistic predictions rather than verified knowledge bases, which can result in fluent but occasionally incorrect or misleading information. The lack of transparency in how answers are produced further complicates trust, as users cannot easily trace the reasoning behind AI-generated responses.
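A deliberately toy example can make this concrete. The snippet below sketches, with invented numbers, how a language model chooses each next token by sampling a learned probability distribution rather than consulting a knowledge base; none of the probabilities or vocabulary come from the study.

```python
import random

# Toy next-token model: each context maps to a probability distribution
# over continuations, reflecting co-occurrence patterns in training text
# rather than verified facts. All values here are invented.
next_token_probs = {
    ("capital", "of"): {"France": 0.5, "Greece": 0.3, "Atlantis": 0.2},
}

def sample_next(context: tuple[str, str]) -> str:
    # Sample whatever is probable; nothing checks whether it is true.
    dist = next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# A fluent continuation like "Atlantis" can appear simply because it is
# plausible under the distribution -- confident-sounding, yet wrong.
print("the capital of", sample_next(("capital", "of")))
```

Real models operate over vast vocabularies and contexts, but the core mechanism is the same, which is why fluency alone is a poor proxy for reliability.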
This opacity creates a fundamental challenge for education systems. While AI tools can enhance learning efficiency, their limitations require users to develop critical evaluation skills. The study argues that without proper guidance, students may struggle to distinguish between accurate and unreliable information, potentially undermining learning quality.
Redefining educational authority in the age of AI
In addition to individual usage patterns, the study explores how AI is reshaping broader concepts of authority and legitimacy in education. Traditionally, knowledge in academic settings has been anchored in teachers, textbooks, and institutional frameworks. The rise of AI introduces a new, algorithm-driven source of information that challenges these established hierarchies.
The findings reveal that students are willing to accept AI as a support tool but remain hesitant to recognize it as an authoritative entity. While they acknowledge its usefulness in simplifying tasks and providing quick answers, they resist assigning it roles that require judgment, empathy, or deeper understanding. For example, respondents showed low willingness to accept AI as a teacher or as a substitute for human interaction in learning environments.
This distinction underscores a broader shift in how educational authority is negotiated. AI is being integrated into the learning process, but its role is carefully bounded by students who continue to prioritize human oversight and institutional validation. The study suggests that this selective acceptance reflects an ongoing struggle over epistemic authority, where AI competes with traditional sources of knowledge but does not fully replace them.
Factor analysis within the study further supports this interpretation, revealing multiple dimensions of student attitudes toward AI. These include strong support for integrating AI into education and recognition of its practical benefits, alongside clear limitations in relational trust and acceptance of autonomous AI systems. This multidimensional structure highlights that attitudes toward AI are not uniform but vary depending on context and function.
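For readers unfamiliar with the method, the sketch below shows what an exploratory factor analysis of Likert-scale survey data typically looks like in Python. The respondent matrix, item count, and three-factor solution are illustrative assumptions only; the study's actual instrument, factor count, and rotation are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey matrix: rows = respondents, columns = Likert items
# (1-5). Random data stands in for real responses, so no real structure
# should be expected in the output.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(363, 12)).astype(float)

# Fit a three-factor model (an illustrative choice, not the study's).
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)

# Loadings show how strongly each item relates to each latent factor.
# On real data, items clustering on a factor would mark dimensions such
# as "practical benefit" versus "relational trust".
loadings = fa.components_.T  # shape: (n_items, n_factors)
for i, row in enumerate(loadings):
    print(f"item {i + 1:2d}: " + "  ".join(f"{v:+.2f}" for v in row))
```

In practice, researchers would also check sampling adequacy and apply a rotation before interpreting loadings; the point here is only the shape of the analysis.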
Trust in AI is not solely a technical issue but also a social and institutional one. Students' perceptions are shaped by broader concerns about misinformation, bias, surveillance, and accountability. These factors influence how AI is evaluated within the educational ecosystem and contribute to ongoing debates about its role in academic settings.
Policy and pedagogy: Navigating the future of AI in education
The central challenge is no longer whether AI will be used, but how it can be incorporated in ways that preserve academic integrity and promote meaningful learning. One key recommendation is to embed structured AI literacy within curricula. Students must be equipped not only with the ability to use AI tools but also with the skills to evaluate their outputs critically. This includes training in verification techniques, source triangulation, and responsible use of AI-generated content.
The study also calls for clearer institutional guidelines on AI use. Universities must define acceptable practices, establish transparency requirements, and ensure that students understand the boundaries between assistance and academic misconduct. This is particularly important as generative AI blurs traditional notions of authorship and originality.
In addition, the research highlights the importance of human oversight in AI-assisted learning. Teachers are expected to play a central role in guiding students, helping them interpret AI outputs and integrate them into broader learning frameworks. Rather than replacing educators, AI is positioned as a tool that enhances teaching when used responsibly.
At a policy level, the study calls for governance frameworks that address issues such as data privacy, algorithmic transparency, and accountability.
First published in: Devdiscourse