AI in childhood education: A revolution or a risk to critical thinking?
In an era where technology is reshaping every aspect of life, education stands at a pivotal crossroads. The rise of generative AI in classrooms has sparked both excitement and concern among educators, policymakers, and parents. While AI-powered tools like ChatGPT, Bard, and other generative models promise to revolutionize learning by providing instant knowledge and personalized tutoring, they also pose profound risks to intellectual development. How much should children rely on AI for learning? Can these tools enhance cognitive skills, or will they erode essential abilities such as problem-solving, creativity, and critical thinking?
A recent study titled "Generative AI and Childhood Education: Lessons from the Smartphone Generation" by Octavian-Mihai Machidon, published in AI & Society (2025), explores these pressing questions. The study draws parallels between the rise of smartphone dependency among children and the growing reliance on AI-driven learning tools, cautioning that AI could inadvertently contribute to "intellectual deskilling." By analyzing historical patterns, cognitive science, and emerging technological trends, Machidon presents a compelling argument for a balanced approach to AI in education, ensuring it serves as an enhancement rather than a replacement for human cognition.
Risks of intellectual deskilling
The study warns that over-reliance on generative AI could lead to "intellectual deskilling," a phenomenon where children may become less capable of independent critical thinking and problem-solving. Drawing from Jonathan Haidt’s research on smartphone dependency, the paper highlights how replacing free play with screen-based activities has negatively affected children's executive functions, such as focus, self-regulation, and emotional control. Similarly, by outsourcing cognitive tasks to AI, children may gradually lose the ability to engage deeply with problems, diminishing their perseverance and intellectual curiosity.
The paper also references Shannon Vallor’s concept of "moral deskilling," originally used to describe how AI reliance can weaken ethical decision-making. Extending this idea to education, the study suggests that students who frequently turn to AI-generated answers may fail to develop essential metacognitive skills - the ability to reflect on and regulate one’s own thinking processes. This cognitive offloading could lead to a shallow understanding of topics, where students believe they comprehend concepts but actually lack deep engagement and retention.
The challenge of instant gratification
Another major concern raised in the study is the impact of AI-driven instant gratification on learning behaviors. Generative AI provides immediate answers, which can discourage children from working through complex problems independently. This ease of access to solutions may reduce resilience and the willingness to engage in trial-and-error learning, both of which are critical for developing a growth mindset.
Moreover, the study suggests that the gamification of AI learning tools could further reinforce short attention spans. While interactive AI-based platforms may make learning more engaging, they also risk creating an environment where students seek quick rewards rather than long-term intellectual challenges. This shift in learning dynamics may lead to a decline in problem-solving perseverance, making it harder for students to adapt to real-world challenges that require patience and deep thinking.
Finding a balance: AI as a learning aid, not a replacement
Despite these concerns, the study does not argue for a complete rejection of AI in education. Instead, it advocates for structured AI integration, where AI tools complement rather than replace human cognitive processes. Machidon suggests that AI should be treated similarly to calculators in mathematics - useful for efficiency but not as a substitute for foundational learning.
To mitigate risks, the study recommends limiting unsupervised AI access before high school and establishing AI-free learning environments for younger students. This approach aligns with recommendations for smartphone usage, where delaying unrestricted access allows children to develop crucial cognitive skills before introducing potentially addictive technology. Schools should emphasize problem-based learning, critical discussions, and self-reflective exercises to ensure that AI acts as an enabler of deep learning rather than an escape from cognitive effort.
Future of AI in education: Responsible implementation
As generative AI becomes an inevitable part of education, the challenge is to implement policies that encourage responsible AI use while safeguarding intellectual development. The study concludes that educational institutions must strike a balance between leveraging AI’s benefits and preventing cognitive dependency.
This involves creating guidelines for AI use in schools, fostering awareness of metacognition, and promoting ethical AI literacy from an early age. Policymakers, educators, and parents must collaborate to ensure that AI serves as an intellectual assistant, not an intellectual crutch. By drawing lessons from the smartphone generation, society has a unique opportunity to shape AI policies that foster critical thinking, independence, and lifelong learning habits in children.
First published in: Devdiscourse

