Campus AI use surges, but many students still struggle with basic AI skills
Universities across the world are rushing to update curricula for the artificial intelligence era. Amidst this shift, institutions face an urgent need to understand how students actually use these systems, how confident they feel, and whether their habits support the responsible development of AI skills.
A new peer-reviewed study published in Applied Sciences shows that students do not fall into simple categories like skilled or unskilled but instead spread across a competency continuum shaped by habits, emotions, ethics and technical use.
The research, titled “AI Competency Assessment and Ranking: A Framework for Higher Education,” presents a new data-driven model called AI CAR, which evaluates students based on real behavior patterns and attitudes instead of relying on simple self-report scales. The work is based on responses from 686 university students across the Valencian Community in Spain and shows how students differ in frequency of AI use, emotional reactions, concern about academic integrity and willingness to revise outputs from AI tools.
The findings give universities a new way to understand AI readiness among students, while also offering a path to build policies that help learners grow toward higher-competency profiles.
Competency continuum
Students use AI tools to draft essays, revise assignments, explain complex topics, write code, summarize reading materials and prepare for tests. The authors note that a simple literacy checklist cannot capture the complexity of this new learning environment. Instead, they analyze AI use as a combination of behavior, awareness, ethics and emotion.
The data collected from students covers several dimensions. These include how often they use AI in academic and non-academic tasks, how often they revise or check AI outputs, how much they fear plagiarism, how concerned they are about overreliance, how much curiosity or motivation they experience while using AI, and how much stress or doubt AI triggers. The study also records demographic information and self-rated digital competence.
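The survey dimensions above amount to a per-student feature record. As a rough illustration of how such responses might be structured for analysis, here is a minimal sketch; the field names and scales are assumptions for illustration, not the study's actual variable names.

```python
from dataclasses import dataclass

# Hypothetical record of one student's survey responses; field names
# and 0-5 scales are illustrative, not taken from the study.
@dataclass
class StudentResponse:
    academic_use_freq: float       # how often AI is used for academic tasks
    nonacademic_use_freq: float    # AI use outside coursework
    revision_freq: float           # how often AI outputs are checked or revised
    plagiarism_concern: float      # fear of academic-integrity violations
    overreliance_concern: float    # worry about depending too much on AI
    curiosity: float               # curiosity or motivation while using AI
    stress: float                  # stress or doubt triggered by AI
    self_rated_digital_skill: float

    def as_vector(self) -> list:
        """Flatten the record into a feature vector for clustering."""
        return [self.academic_use_freq, self.nonacademic_use_freq,
                self.revision_freq, self.plagiarism_concern,
                self.overreliance_concern, self.curiosity,
                self.stress, self.self_rated_digital_skill]
```

A record like this can then feed directly into a clustering step, with demographic variables kept aside as predictors.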
To identify competency patterns, the authors apply k-means clustering to group students into profiles. Then they use topological data analysis, a method that detects shapes and connections within complex data, to confirm that the clusters are not random but belong to a smooth, ordered continuum of competency. Finally, multinomial logistic regression is used to explore how factors such as study field, gender or self-rated digital skill influence the likelihood of belonging to each profile.
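To make the first step of that pipeline concrete, here is a minimal pure-Python k-means sketch on toy two-dimensional data. It is illustrative only: the study's actual pipeline (k-means plus topological data analysis plus multinomial logistic regression, on real survey features) is far richer, and the toy data below is invented.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means clustering: returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        # by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centroids[c])))
                  for pt in points]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, labels

# Toy data: two axes standing in for "AI use frequency" and "revision habit".
students = [(0.2, 0.1), (0.25, 0.15), (0.8, 0.9), (0.85, 0.95)]
centroids, labels = kmeans(students, k=2)
```

With well-separated toy points like these, the two low-use students end up in one cluster and the two high-use students in the other, mirroring how the study separates students into distinct usage profiles before checking, via topological data analysis, that the profiles sit on a continuum.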
The results reveal that students form four profiles that differ in both intensity and quality of AI use. These profiles are not separate categories but points along a broad range of competency development.
Four AI competency profiles explain how students learn, struggle and adapt
The four profiles identified in the study are Active-Cautious, Low-Engagement, Balanced-Confident and High-Use-Vigilant. Together, they show a progression that reflects both increasing use and better integration of AI tools.
The Active-Cautious profile reflects students who use AI regularly but remain careful. They revise outputs at a moderate pace, show concern about plagiarism and try to avoid overreliance. They are curious about AI and motivated to learn, yet they proceed with attention to rules and ethics. This profile reflects early but responsible engagement.
The Low-Engagement profile includes students who rarely use AI. Despite their low activity, they report surprisingly high concern about overreliance, suggesting uncertainty about how to use AI responsibly. They show the lowest motivation and weakest integration of AI into their studies. This group needs targeted support to understand how to apply AI tools without fear or confusion.
The Balanced-Confident profile represents students who use AI frequently, revise outputs consistently and manage risks effectively. They do not show high concern about plagiarism and rarely feel stressed while using AI. They have strong curiosity and motivation. This profile reflects stable, self-regulated learning with AI.
The High-Use-Vigilant profile sits at the top of the competency continuum. These students use AI very frequently, revise outputs more than any other group and keep overreliance low. They are ethically alert, with steady, low levels of stress. Their use is intense but well regulated and thoughtful. They integrate AI tools into nearly all aspects of their academic work without losing control or oversight.
Together, these profiles show a steady path from rare and uncertain AI use toward confident, reflective, intensive and responsible engagement. The authors emphasize that students should not be forced into fixed categories but supported to move along the continuum toward higher levels of competency.
Digital skill, STEM background and emotions strongly influence competency level
The study shows that certain factors make students more likely to belong to higher-competency profiles. One of the strongest predictors is self-rated digital competence. Students who consider themselves digitally skilled have higher chances of being in the Balanced-Confident or High-Use-Vigilant profiles. This suggests that foundational digital literacy plays a key role in shaping AI readiness.
Field of study is also important. Students in science, technology, engineering and mathematics programs are more likely to occupy higher-competency groups. These students often have more exposure to computational thinking and digital tools, which may make them more confident in managing AI systems.
Emotional patterns also shape competency. Students in higher-competency profiles show high curiosity and motivation, and low stress or anxiety. Lower-competency profiles show weaker curiosity and higher emotional discomfort. This finding indicates that developing AI competency is not only a technical process but also an emotional one. Successful integration requires comfort, trust and positive engagement.
Ethical awareness also plays an important role. In the highest profiles, students revise outputs carefully and maintain strong ethical attention without showing high fear or stress. In contrast, lower-competency students report more fear of plagiarism despite using AI less. This difference highlights the need for better academic integrity education, not to create fear but to encourage confident and responsible use.
AI CAR offers a new model for assessing and ranking student AI competency
Instead of assigning students fixed literacy scores, the AI CAR framework identifies where each student sits on the competency continuum. It also provides a ranking based on proximity to higher or lower competency zones. This ranking is not intended for grades or penalties. Its purpose is to guide instructional support and curriculum design.
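One way to picture a proximity-based ranking of this kind is to score each student by distance to each profile's centroid and place them on the continuum accordingly. The sketch below is a hypothetical illustration under assumed centroids and an assumed profile ordering; it is not the study's actual scoring formula.

```python
import math

def competency_rank(student_vec, profile_centroids, order):
    """Hypothetical proximity-based ranking: find the nearest profile and
    a continuum position (0 = lowest profile, len(order)-1 = highest),
    computed as an inverse-distance-weighted average of profile ranks."""
    dists = {name: math.dist(student_vec, c)
             for name, c in profile_centroids.items()}
    nearest = min(dists, key=dists.get)
    weights = [1.0 / (dists[name] + 1e-9) for name in order]
    position = sum(w * i for i, w in enumerate(weights)) / sum(weights)
    return nearest, position

# Illustrative centroids on two axes (use frequency, output revision);
# the values are invented, not taken from the study.
profiles = {
    "Low-Engagement":     (0.1, 0.2),
    "Active-Cautious":    (0.5, 0.5),
    "Balanced-Confident": (0.7, 0.7),
    "High-Use-Vigilant":  (0.9, 0.95),
}
order = ["Low-Engagement", "Active-Cautious",
         "Balanced-Confident", "High-Use-Vigilant"]
nearest, pos = competency_rank((0.85, 0.9), profiles, order)
```

A student close to the High-Use-Vigilant centroid lands near the top of the continuum, while a position between two centroids signals a learner in transition, which is exactly the kind of signal an instructor could use to target support rather than assign a grade.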
AI CAR allows universities to identify which groups need training, which groups can handle advanced AI tasks and which groups may be at risk of misuse due to uncertainty or overreliance. It can also help institutions design programs tailored to each profile, such as improved digital literacy, ethical guidance or advanced AI integration strategies.
The authors point out that the framework aligns with real student behavior. Instead of measuring what students say they can do, it evaluates what they actually do, how they feel and how they regulate their own learning.
The analysis also confirms that AI competency is not static. Students move along the continuum as they gain experience, develop confidence and refine their habits. This means AI education should be ongoing, practical and matched to student needs.
- FIRST PUBLISHED IN: Devdiscourse

