AI skills grow through everyday student interaction


CO-EDP, VisionRI | Updated: 31-01-2026 18:50 IST | Created: 31-01-2026 18:50 IST

What does it actually mean for students to become AI-literate? A new large-scale study suggests that AI literacy is not primarily learned through formal instruction or technical training, but through repeated, everyday interaction in which students test, negotiate with, and adapt to the limits of systems like ChatGPT.

The study, titled Learning to Live with AI: How Students Develop AI Literacy Through Naturalistic ChatGPT Interaction and released as an academic preprint, examines how undergraduate students engage with ChatGPT over an extended period of real-world use. The research analyzes how AI literacy emerges organically through lived experience rather than structured curricula.

AI literacy emerges through everyday academic practice

The research shows that students do not approach ChatGPT as a neutral tool, but as a flexible collaborator whose role shifts depending on context. Over time, students develop recurring ways of interacting with the system, which the authors describe as use genres. These genres shape how students frame questions, interpret responses, and decide whether to trust or reject AI output.

One dominant genre positions ChatGPT as an academic workhorse. In this mode, students rely on the system for explanations, summaries, brainstorming, and problem-solving support across disciplines. Importantly, the study finds that even within this genre, students vary in how much authority they grant the AI. Some treat responses as provisional drafts that require verification, while others use them as starting points for deeper inquiry.

Another genre frames ChatGPT as a metacognitive partner. Students use the system to reflect on their own thinking, plan study strategies, or clarify confusion about concepts they do not fully understand. In these interactions, the AI functions less as an answer generator and more as a thinking scaffold, helping students articulate questions they struggle to form independently.

The study also documents a more personal mode of engagement, where students treat ChatGPT as an emotional companion. In moments of stress, anxiety, or self-doubt, students turn to the system for reassurance, motivation, or perspective. While not the primary focus of AI literacy debates, this genre plays a role in shaping how comfortable students feel interacting with AI and how frequently they rely on it.

Across all genres, the authors observe that AI literacy develops through repetition. Students refine how they ask questions, learn what kinds of prompts yield useful responses, and adjust expectations based on prior successes and failures. This process unfolds gradually and informally, embedded within everyday academic routines rather than in isolated training sessions.

Learning happens when AI gets things wrong

Students frequently encounter incorrect answers, vague explanations, or misleading confidence from ChatGPT. Instead of disengaging, many respond by entering a repair process that becomes central to their developing AI literacy.

Repair literacy, as defined in the study, involves recognizing when AI output is flawed, diagnosing the nature of the problem, and taking corrective action. This may include rephrasing prompts, adding context, requesting sources, or explicitly challenging the system’s assumptions. Through these interactions, students gain insight into how ChatGPT generates responses and where its limitations lie.

The research shows that students who engage more actively in repair develop a more nuanced understanding of AI behavior. They learn that the system is sensitive to wording, prone to confident errors, and uneven across domains. This experiential knowledge proves more durable than abstract warnings about AI hallucinations or bias.

Repair work is not purely technical. It also involves emotional regulation and judgment. Students must manage frustration when the system fails, decide whether further engagement is worthwhile, and assess the credibility of outputs under time pressure. These skills, the authors argue, are essential components of functional AI literacy in real-world settings.

The study finds that repair interactions often lead to deeper learning outcomes than successful first attempts. By struggling with AI errors, students become more reflective users who question outputs rather than accepting them at face value. This dynamic challenges narratives that portray AI as undermining critical thinking. In practice, the authors find that AI failures can prompt critical engagement when students are equipped to recognize and address them.

Trust, skepticism, and knowing when not to use AI

Students do not maintain a fixed level of trust in ChatGPT. Instead, trust fluctuates based on task type, prior experiences, and perceived risk. Over time, students develop situational strategies for deciding when AI is useful and when it should be avoided.

For low-stakes tasks such as brainstorming or clarifying general concepts, students are more willing to rely on AI output. In contrast, for high-stakes assessments or factual claims that carry academic consequences, students tend to verify responses or cross-check with external sources. This selective trust reflects growing epistemic awareness rather than blind dependence.

The study also documents moments of deliberate disengagement. When students encounter repeated errors or feel uncertain about accuracy, they sometimes abandon the system altogether for a given task. Knowing when not to use AI emerges as a key dimension of literacy, challenging simplistic models that equate competence with frequency of use.

The authors argue that this trust calibration represents a mature form of AI literacy. It combines technical understanding with ethical and epistemic judgment, enabling students to navigate AI systems responsibly. Rather than treating skepticism as resistance, the study positions it as a sign of learning.

Formal instruction often focuses on tool capabilities and ethical guidelines, but may overlook the value of experiential learning through real use. The study suggests that creating space for guided experimentation, reflection on failure, and discussion of trust may be more effective than prescriptive rules.

FIRST PUBLISHED IN: Devdiscourse