Personalized learning with AI still shallow without advanced data models

Large language models (LLMs) are changing the way students learn and how teachers teach, but their effectiveness depends heavily on integration with other technologies and strong pedagogical oversight. A study published in the journal Information highlights both the promise and the risks of deploying generative AI in classrooms at scale, raising urgent questions about accuracy, personalization, and educational integrity.

Titled "Large Language Models in Intelligent Education Systems: New Educational Perspectives—A Systematic Review," the research is based on more than 8,000 academic records across major databases to map the evolving role of LLMs in education. The authors conclude that while these systems are already embedded in learning environments, their long-term success hinges on combining them with knowledge graphs, ontologies, learning analytics, and new pedagogical models.

AI tutors, automated feedback, and the rise of personalized learning

LLMs are now active across nearly every layer of the educational process, from tutoring and assessment to curriculum design and accessibility. Their most visible impact lies in personalized learning, where AI systems can adapt explanations, generate tailored content, and provide real-time feedback based on student needs.

LLMs function as conversational tutors, capable of breaking down complex concepts, offering alternative explanations, and guiding students through problem-solving steps. They are also widely used in automated assessment, grading short answers, generating feedback, and supporting formative evaluation. This ability to respond instantly and at scale is reshaping expectations around student support, especially in digital learning environments.
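To make the assessment use case concrete, here is a minimal Python sketch of short-answer grading against a reference answer. The call_llm function is a hypothetical stand-in for any chat-completion API, and the prompt and JSON output format are assumptions of this illustration, not details from the study.

    # Minimal sketch of LLM-based short-answer grading.
    # `call_llm` is a hypothetical stand-in for any chat-completion API.
    import json

    def call_llm(prompt: str) -> str:
        """Placeholder: send `prompt` to an LLM and return its text reply."""
        raise NotImplementedError("wire up your provider's API here")

    def grade_short_answer(question: str, reference: str, answer: str) -> dict:
        prompt = (
            "You are a grading assistant. Score the student answer from 0-5 "
            "against the reference answer and explain the score.\n"
            f"Question: {question}\n"
            f"Reference answer: {reference}\n"
            f"Student answer: {answer}\n"
            'Reply as JSON: {"score": <int>, "feedback": "<one sentence>"}'
        )
        # Real systems would validate the JSON before trusting the score.
        return json.loads(call_llm(prompt))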

Beyond individual learning, the technology is increasingly embedded in institutional workflows. Teachers use LLMs to generate lesson plans, quizzes, and instructional materials, while students rely on them for writing assistance, coding help, and research support. The models also improve accessibility by translating content, simplifying text, and offering multiple formats for understanding material.

However, this apparent flexibility masks deeper structural limitations. While LLMs can simulate personalization, they lack persistent learner models and long-term understanding of student progress. As a result, their adaptation remains shallow, often relying on immediate context rather than comprehensive learning histories.
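A minimal sketch of what a persistent learner model could look like, under the assumption (mine, not the study's) that per-concept mastery is tracked as a simple running estimate stored between sessions, in contrast to an LLM's context window, which is discarded after each conversation:

    # Sketch of a persistent learner model (illustrative, not from the study).
    # Mastery per concept survives across sessions, unlike prompt context.
    from dataclasses import dataclass, field

    @dataclass
    class LearnerModel:
        student_id: str
        mastery: dict[str, float] = field(default_factory=dict)  # concept -> [0, 1]

        def update(self, concept: str, correct: bool, rate: float = 0.2) -> None:
            """Exponential moving average over graded attempts."""
            prior = self.mastery.get(concept, 0.5)
            self.mastery[concept] = prior + rate * ((1.0 if correct else 0.0) - prior)

        def weakest(self, n: int = 3) -> list[str]:
            """Concepts to target next, lowest estimated mastery first."""
            return sorted(self.mastery, key=self.mastery.get)[:n]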

Reliability, bias, and the limits of standalone AI in classrooms

LLMs face significant challenges that limit their direct application in education. Chief among these is the risk of hallucinations, where models generate plausible but incorrect information. In an academic setting, such errors can reinforce misconceptions and undermine learning outcomes.

The study identifies multiple systemic weaknesses. LLMs often lack alignment with curricula, meaning their outputs may not match institutional standards or learning objectives. They also show limited pedagogical awareness, failing to structure explanations in ways that support effective teaching. Inconsistent output quality, bias inherited from training data, and limited explainability further complicate their use in classrooms.

These issues are particularly acute in specialized domains such as medicine or engineering, where accuracy is critical. Evaluations cited in the research show that performance can vary widely across subjects and tasks, with accuracy sometimes dropping below acceptable thresholds for educational use.

Another concern is cognitive dependency. The ease of access to AI-generated answers may reduce students' engagement with problem-solving and critical thinking. The authors warn that without careful design, LLMs could encourage passive learning rather than intellectual development.

To address these risks, the study argues that LLMs cannot function effectively as standalone tools. Instead, they must be embedded within broader systems that enforce accuracy, structure knowledge, and align outputs with educational goals.

Hybrid AI systems seen as the future of intelligent education

The future of AI in education lies in hybrid architectures that combine large language models with complementary technologies. These include knowledge graphs for structured information, ontologies for semantic reasoning, and learning analytics for tracking student behavior and performance.

Knowledge graphs, for example, allow systems to map relationships between concepts, enabling more accurate and explainable recommendations. Ontologies provide formal frameworks for aligning content with curricula, while learning analytics help identify knowledge gaps and personalize instruction at a deeper level.
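A toy illustration of the idea, using the networkx library with invented concept names: because prerequisites are explicit edges, the system can show exactly why it recommends a given study order.

    # Toy prerequisite graph (concept names are invented for illustration).
    import networkx as nx

    G = nx.DiGraph()  # edge A -> B means "A is a prerequisite of B"
    G.add_edges_from([
        ("arithmetic", "linear equations"),
        ("linear equations", "systems of equations"),
        ("fractions", "linear equations"),
    ])

    target = "systems of equations"
    # All prerequisites of the target, listed in a valid study order.
    prereqs = [c for c in nx.topological_sort(G) if c in nx.ancestors(G, target)]
    print(f"To learn '{target}', first cover: {prereqs}")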

Retrieval-augmented generation (RAG) is a key technique for improving reliability. By connecting LLMs to verified educational resources such as textbooks and course materials, RAG systems ground responses in factual, domain-specific knowledge rather than relying solely on pre-trained data.
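A minimal sketch of the retrieval step, using TF-IDF similarity from scikit-learn over invented course snippets; production RAG systems typically use dense embeddings and a vector store instead.

    # Minimal RAG retrieval sketch: TF-IDF similarity over course snippets.
    # Snippets are invented; real systems index actual course material.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    snippets = [
        "Photosynthesis converts light energy into chemical energy in chloroplasts.",
        "Cellular respiration releases energy by breaking down glucose.",
        "Mitosis produces two genetically identical daughter cells.",
    ]

    def retrieve(question: str, k: int = 2) -> list[str]:
        vec = TfidfVectorizer().fit(snippets + [question])
        sims = cosine_similarity(vec.transform([question]), vec.transform(snippets))[0]
        return [snippets[i] for i in sims.argsort()[::-1][:k]]

    # Retrieved passages are prepended to the prompt so the model answers
    # from course material instead of relying only on its pre-trained weights.
    context = "\n".join(retrieve("What does photosynthesis produce?"))
    prompt = f"Answer using only this course material:\n{context}\nQuestion: ..."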

The study also points to the growing role of intelligent agents and multi-agent systems, which extend beyond simple chatbots. These systems can diagnose student knowledge, adapt learning paths, and coordinate multiple AI tools to deliver more coherent educational experiences.
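One way to picture that coordination is the following schematic, which reuses the LearnerModel sketch above; the agent roles and their division of labor are assumptions of this illustration, not an architecture described in the review.

    # Schematic multi-agent loop; reuses the LearnerModel sketch above.
    # Agent roles and names are invented for illustration.
    def diagnose(model: "LearnerModel") -> str:
        """Diagnostician agent: pick the concept the student most needs."""
        return model.weakest(n=1)[0]

    def plan(concept: str) -> list[str]:
        """Planner agent: expand a concept into an ordered learning path."""
        return [f"review {concept}", f"worked example: {concept}", f"quiz: {concept}"]

    def tutor(step: str) -> str:
        """Tutor agent: generate content for one step (LLM call stubbed out)."""
        return f"[LLM-generated material for: {step}]"

    def run_session(model: "LearnerModel") -> None:
        for step in plan(diagnose(model)):
            print(tutor(step))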

Data trends reinforce this shift toward integration. According to the review, research on combining LLMs with knowledge graphs and ontologies has grown rapidly in recent years, signaling a broader move toward neuro-symbolic AI in education.

However, many of these approaches remain underdeveloped. While knowledge graphs are widely explored, areas such as prompt engineering frameworks and agent-based systems still lack sufficient research, indicating gaps in the current ecosystem.

Governance, ethics, and the challenge of responsible AI adoption

Personalized learning requires access to sensitive student data, including performance metrics and behavioral patterns, creating risks around data security and misuse. The authors call for strict institutional policies, including anonymization, data minimization, and compliance with privacy regulations. They also highlight the importance of transparency, arguing that students and teachers must understand how AI systems generate responses in order to trust and use them effectively.
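As one concrete illustration of what pseudonymization and data minimization can mean in practice (my sketch, not a prescription from the authors):

    # Illustration of pseudonymization + data minimization (not from the study).
    import hashlib
    import hmac

    SECRET_KEY = b"rotate-and-store-in-a-vault"  # assumption: a managed secret
    KEEP_FIELDS = {"concept", "score", "timestamp"}  # only what analytics needs

    def pseudonymize(student_id: str) -> str:
        """Stable pseudonym: the same student maps to the same opaque token."""
        return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()[:16]

    def minimize(record: dict) -> dict:
        """Drop fields analytics does not need and replace the raw ID."""
        out = {k: v for k, v in record.items() if k in KEEP_FIELDS}
        out["student"] = pseudonymize(record["student_id"])
        return out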

Ethical considerations extend beyond data protection. Bias in AI outputs can reinforce inequalities, while opaque decision-making processes can undermine accountability. The study calls for a combination of technical safeguards, human oversight, and educational strategies to mitigate these risks.

Teachers, in particular, are key to this transition. Rather than being replaced by AI, they are expected to act as mediators, guiding students in the use of LLMs and ensuring that learning remains active and critical. The most effective model, the authors argue, is one of AI-assisted learning, where technology supports but does not replace human judgment.

First published in: Devdiscourse