AI in education shifts from grading tool to motivation engine


CO-EDP, VisionRI | Updated: 13-04-2026 08:13 IST | Created: 13-04-2026 08:13 IST

Researchers are testing how AI can not only forecast student performance but also actively influence motivation, engagement, and academic outcomes through data-driven interventions embedded within teaching frameworks.

A recent study titled “Application of Generative Artificial Intelligence for Innovative Teaching,” published in Applied Sciences, explores this shift through a real-world classroom experiment conducted by Nikola Kadoić, Jelena Gusić Munđar, and Tena Jagačić from the University of Zagreb. The research investigates how generative AI can be integrated into teaching as a predictive and motivational tool, using a novel “betting shop” classroom activity to assess both performance forecasting and student perception.

The findings reveal a complex picture. While AI demonstrated only moderate predictive accuracy, it significantly enhanced student engagement and motivation, suggesting that the value of generative AI in education may lie less in precision and more in its ability to shape behavior and learning experiences.

Predictive AI models show limited accuracy but strong behavioral impact

The study focuses on a Business Decision Analysis course, widely regarded as one of the most challenging in the graduate program. Researchers deployed generative AI to analyze a wide range of student data, including attendance, learning management system activity, self-assessment scores, and preparatory quiz results. Based on these inputs, the AI generated predicted exam scores, risk indices, growth trends, and personalized motivational feedback for each student.

The predictive model operated as a composite system, aggregating multiple indicators into a single forecast. However, the results showed that while the model produced predictions close to actual outcomes at a group level, it struggled to accurately predict individual student performance.
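The composite logic described above can be pictured as a weighted aggregation of normalized indicators. The following minimal sketch illustrates the idea only; the indicator names, weights, and normalization are assumptions for illustration, not the study's actual model.

```python
# Hypothetical composite predictor: several engagement indicators,
# each normalized to [0, 1], are combined with weights into one forecast.
# Weights and indicator names are illustrative assumptions.

def predict_exam_score(attendance, lms_activity, self_assessment, quiz_avg,
                       weights=(0.2, 0.2, 0.25, 0.35), max_score=100):
    """Aggregate normalized indicators (each in [0, 1]) into a forecast."""
    indicators = (attendance, lms_activity, self_assessment, quiz_avg)
    composite = sum(w * x for w, x in zip(weights, indicators))
    return round(composite * max_score, 1)

print(predict_exam_score(0.9, 0.7, 0.6, 0.8))  # → 75.0
```

Because the weights sum to one, the forecast stays within the score range, which matches the study's observation that predictions were confined to a defined range.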

Statistical analysis revealed no significant difference between predicted and actual scores overall, suggesting no systematic bias. Yet this apparent accuracy masked substantial variation at the individual level. The correlation between predicted and achieved results was weak, indicating that the model had limited ability to distinguish between high- and low-performing students.

Despite this limitation, more than 60 percent of students exceeded their predicted scores. This pattern reflects not only the constraints of the prediction model, which limited forecasts to a defined range, but also a potential behavioral effect. Students appeared to treat predictions as targets to surpass, turning AI-generated forecasts into motivational benchmarks rather than fixed expectations.

The research suggests that predictive AI in education may function as a hybrid tool. Rather than serving purely as a forecasting mechanism, it can act as a behavioral catalyst, influencing how students prepare for assessments and respond to perceived expectations.

Gamified AI integration drives engagement and positive student perception

The study presents a gamified “betting shop” activity, where students were challenged to outperform AI predictions. The mechanism was simple but effective. If a student achieved a higher score than predicted, they received a small reward, reinforcing a sense of competition and achievement.
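The settlement rule described above reduces to a simple comparison. A minimal sketch, with the reward unit as an illustrative assumption:

```python
# "Betting shop" settlement as described: a student who beats the AI's
# predicted score earns a small reward. Reward unit is an assumption.

def settle_bet(predicted, achieved, reward=1):
    """Return the reward if the student outperformed the prediction."""
    return reward if achieved > predicted else 0

def share_exceeding(predicted_scores, achieved_scores):
    """Fraction of students who beat their predicted score."""
    wins = sum(a > p for p, a in zip(predicted_scores, achieved_scores))
    return wins / len(predicted_scores)
```

Applied to the study's result, `share_exceeding` would exceed 0.6, since more than 60 percent of students outperformed their forecasts.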

This approach transformed predictive analytics into an interactive and emotionally engaging experience. Students received individualized reports detailing their predicted performance, learning patterns, and areas for improvement, along with motivational messages tailored to their data profiles.

The impact on student perception was overwhelmingly positive. Survey results showed that nearly half of participants rated the AI-driven activity as extremely useful for passing the course, while no students rated it negatively. Qualitative feedback highlighted key themes such as motivation, usefulness, and increased engagement with course material.

Students frequently described the activity as interesting, helpful, and encouraging, indicating that the integration of AI into the learning process enhanced both cognitive and emotional aspects of learning. The presence of personalized feedback, combined with a playful competitive structure, appeared to increase student involvement and commitment.

The study also recorded a notable shift in academic performance compared to previous cohorts. Average exam scores rose significantly, suggesting that the combination of predictive analytics and gamified design may contribute to improved learning outcomes. However, researchers caution that these improvements cannot be definitively attributed to the AI intervention alone due to potential cohort differences.

The activity reframed AI from a passive tool into an active participant in the learning process. Instead of merely generating content or assisting with tasks, the AI system became a source of feedback, challenge, and motivation.

Data-driven learning indicators reveal complex student performance patterns

The study introduced a set of analytical indicators designed to interpret student learning behavior. These included a risk index, average growth metric, and consistency score, each derived from patterns in student activity and performance.

The risk index measured the likelihood of academic difficulty based on factors such as low engagement and inconsistent results. Average growth tracked improvement over time, while consistency reflected the stability of performance across tasks.
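One plausible way to compute the three indicators from a student's activity record is sketched below. The study does not publish its exact formulas, so these definitions are stand-ins chosen to match the descriptions above.

```python
# Illustrative indicator computation; the formulas are assumptions,
# not the study's actual definitions.
from statistics import mean, pstdev

def learning_indicators(scores, engagement):
    """Return (risk_index, avg_growth, consistency) for one student.

    scores: chronological task scores (0-100)
    engagement: share of activities completed, in [0, 1]
    """
    growths = [b - a for a, b in zip(scores, scores[1:])]
    avg_growth = mean(growths) if growths else 0.0
    consistency = 1 / (1 + pstdev(scores))    # stable scores -> near 1
    risk_index = 0.5 * (1 - engagement) + 0.5 * (1 - consistency)
    return risk_index, avg_growth, consistency
```

Note that under definitions like these, a perfectly consistent student has zero growth by construction, which is one mechanical reason consistency and growth can trade off.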

Analysis of these indicators revealed important insights into learning dynamics. Students with higher predicted scores tended to show stronger improvement trends and lower risk levels. At the same time, a negative relationship emerged between consistency and growth, suggesting that students with stable performance patterns improved less over time compared to those with more variability.

Cluster analysis further identified four distinct student profiles. Some students were consistently high-performing with low risk, while others showed strong growth despite initial underperformance. A separate group exhibited unstable learning patterns and higher risk, highlighting the need for targeted support.
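The four profiles can be caricatured as a rule-based classifier over the indicators. The thresholds below are illustrative assumptions, not the study's cluster centroids, which would be learned from data rather than hand-set.

```python
# Rule-based stand-in for the four student profiles the cluster
# analysis surfaced. Thresholds are illustrative assumptions.

def classify_profile(risk, growth, consistency):
    """Map indicator values to one of four hypothetical profiles."""
    if risk < 0.3 and consistency > 0.7:
        return "stable high performer"
    if growth > 0 and risk < 0.5:
        return "improving after a weak start"
    if risk >= 0.5:
        return "unstable, at risk"
    return "mixed pattern"

print(classify_profile(0.7, -2.0, 0.2))  # → "unstable, at risk"
```

In practice such profiles would come from an unsupervised method such as k-means over the indicator space, with the labels assigned afterwards by inspecting each cluster.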

These findings underscore the complexity of student learning behavior and the limitations of one-size-fits-all predictive models. While AI can identify patterns and generate insights, its effectiveness depends on how these outputs are interpreted and applied within the educational context.

Human oversight is vital. Generative AI required iterative prompting and validation due to occasional inconsistencies and hallucinations in output. This reinforces the need for a human-in-the-loop approach, where educators actively review and refine AI-generated insights.

Challenges around accuracy, transparency, and ethical use

The integration of generative AI in education raises several challenges. One of the most significant issues is the lack of transparency in how AI models generate predictions. Unlike traditional statistical methods, generative AI relies on pattern recognition without clearly defined parameters, making it difficult to replicate or validate results.

Data quality and structure also play a critical role. Inaccurate or incomplete input data can lead to unreliable predictions, while inconsistencies in AI processing may require repeated adjustments. The study reported instances where the model generated incorrect or inconsistent outputs, highlighting limitations in handling structured datasets.

Ethical considerations are equally important. The use of student data must comply with data protection regulations, including anonymization and secure handling. The study ensured compliance by removing personal identifiers and focusing solely on performance-related data.

Another concern is the sustainability of relying on commercial AI tools. Educational institutions must consider long-term factors such as cost, access, and dependency on external providers. As AI technologies continue to evolve, maintaining stable and scalable implementations will be a key challenge.

A shift from prediction to engagement in AI-driven education

The value of AI lies in its ability to integrate multiple data sources, generate personalized insights, and create interactive learning experiences. When combined with thoughtful pedagogical design, these capabilities can enhance both teaching and learning outcomes.

The study also aligns with broader trends in educational technology, where the emphasis is moving toward personalized and adaptive learning environments. By providing real-time feedback and tailored recommendations, AI systems can support more individualized learning pathways.

At the same time, the research highlights the need for balance. While AI can enhance teaching, it cannot replace the role of educators. Human judgment, critical thinking, and contextual understanding remain essential for interpreting AI outputs and ensuring effective learning experiences.

  • FIRST PUBLISHED IN: Devdiscourse