Beyond big data: Why AI must learn like a human to overcome its limits
Artificial Intelligence (AI) has made significant strides in recent years, achieving high-level reasoning capabilities that allow language models to generate coherent text, solve complex mathematical problems, and even simulate creative processes. However, despite these advancements, AI systems still struggle with robustness in real-world scenarios and exhibit glaring weaknesses in basic problem-solving tasks that are intuitive to humans. The question remains: Why do large language models (LLMs) perform well on advanced tasks while failing at simpler ones?
A recent study titled "The Philosophical Foundations of Growing AI Like a Child" by Dezhi Luo, Yijiang Li, and Hokin Deng from the University of Michigan, the University of California San Diego, and Carnegie Mellon University, addresses this paradox. The study argues that the discrepancy between human cognitive development and machine learning is at the root of these limitations. Unlike human intelligence, which develops through a gradual learning process grounded in fundamental cognitive structures, AI models rely on large-scale data processing without an equivalent foundation of "core knowledge." The paper explores the empirical evidence for core knowledge in human cognition, analyzes why LLMs fail to acquire it, and proposes a novel approach to integrating core knowledge into AI systems using cognitive prototyping and synthetic data generation.
Scaling up vs. growing up: The developmental gap
The prevailing approach in AI development follows the "scaling law," which assumes that increasing computational power and training data will naturally lead to more advanced reasoning capabilities. This assumption, however, does not fully explain why AI models often exhibit fragility when exposed to slight variations in task conditions - a phenomenon commonly referred to as the robustness challenge. Furthermore, the study highlights that AI models fall prey to Moravec’s Paradox, wherein tasks that are easy for humans (such as visual perception or basic arithmetic) remain difficult for machines, while tasks that are challenging for humans (such as playing chess at a grandmaster level) are relatively easier for AI.
The researchers argue that these challenges stem from the fundamental difference in how humans and AI acquire knowledge. Humans develop intelligence incrementally, building complex skills on top of foundational abilities acquired during early childhood. In contrast, AI models attempt to grasp all forms of reasoning simultaneously through vast amounts of linguistic data, without the underlying cognitive scaffolding that humans rely on. This difference suggests that a more effective approach to AI development would involve training models in a way that mirrors human cognitive growth - starting with basic principles and gradually progressing to higher-order reasoning.
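A minimal sketch of what such a staged curriculum could look like in code appears below. The stage names, example tasks, mastery threshold, and the train_on hook are all hypothetical choices made for illustration; the paper itself does not prescribe a specific training loop.

```python
"""Hypothetical sketch of curriculum gating: a learner advances to the next
developmental stage only after mastering the current one. The stages, tasks,
threshold, and training hook are illustrative assumptions, not from the paper."""

from typing import Callable, List, Tuple

Task = Tuple[str, str]  # (question, expected answer)

# Stages ordered from foundational to more complex skills.
CURRICULUM: List[Tuple[str, List[Task]]] = [
    ("object counting", [("How many apples: apple apple apple?", "3"),
                         ("How many cats: cat?", "1")]),
    ("single-step arithmetic", [("2 + 3 = ?", "5"),
                                ("7 - 4 = ?", "3")]),
    ("two-step word problems", [("Ana has 2 pens, buys 3, loses 1. How many now?", "4")]),
]

def accuracy(model: Callable[[str], str], tasks: List[Task]) -> float:
    """Fraction of tasks the model answers correctly."""
    return sum(model(q).strip() == a for q, a in tasks) / len(tasks)

def run_curriculum(model: Callable[[str], str],
                   train_on: Callable[[List[Task]], None],
                   mastery: float = 0.9,
                   max_rounds: int = 100) -> None:
    """Train stage by stage; advance only once the current stage is mastered."""
    for name, tasks in CURRICULUM:
        for _ in range(max_rounds):
            if accuracy(model, tasks) >= mastery:
                break
            train_on(tasks)  # hypothetical hook that updates the model in place
        print(f"finished stage: {name} (accuracy {accuracy(model, tasks):.2f})")
```

The design point the sketch illustrates is gating: the learner is not exposed to a harder stage until it demonstrates mastery of the current one, mirroring the developmental progression the authors describe.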
Core knowledge and AI’s developmental shortcomings
One of the key arguments in the paper is that human intelligence is built upon "core knowledge," a set of fundamental cognitive abilities present from infancy. These include an intuitive understanding of objects, numbers, space, and social interactions, which serve as the foundation for more advanced cognitive skills. Empirical research in developmental psychology has shown that infants actively form hypotheses about the world and refine their understanding through direct experience.
By contrast, current AI models lack such core knowledge. Instead of developing an understanding of the world through gradual learning, they process massive amounts of text data without establishing fundamental cognitive structures. This absence explains why AI models can excel at tasks requiring pattern recognition and statistical inference but fail in basic logical reasoning or common-sense understanding.
The study also explores why LLMs do not acquire core knowledge despite their ability to process vast amounts of data. The researchers propose three key reasons: (1) AI lacks hardwired domain-specific faculties that humans are born with, (2) fundamental knowledge is "buried too deeply" within AI models, making it difficult for them to extract and apply core concepts, and (3) AI learning is "groundless," meaning that it does not follow a structured developmental trajectory similar to human learning.
Towards a new approach: Growing AI like a child
To bridge the gap between AI and human cognitive development, the researchers propose a novel approach: training AI models the way a child grows and learns. This involves designing a structured learning process that mimics how humans acquire foundational knowledge before developing complex skills. The study outlines two key strategies to achieve this goal: cognitive prototyping and synthetic data generation using physical simulations.
Cognitive prototyping involves systematically exposing AI models to learning environments modeled after classic developmental psychology experiments. For example, the "three-mountain task", which tests children's ability to reason about visual perspectives other than their own, can be adapted to AI training. By integrating AI models into controlled environments that replicate human learning conditions, researchers can ensure that AI acquires core knowledge in a structured manner.
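To make this concrete, the sketch below shows a toy generator of perspective-taking question-answer pairs loosely inspired by the three-mountain setup. The scene encoding, the simple line-of-sight visibility rule, and the question wording are assumptions introduced for this example, not a procedure described in the study.

```python
"""Toy generator of three-mountain-style perspective questions.
Simplifying assumptions: mountains sit on a line, and a mountain is visible
from a side only if no taller mountain stands between it and the viewer."""

import random

MOUNTAINS = {"small": 1, "medium": 2, "large": 3}  # name -> height

def visible_from(order, side):
    """Return the mountains visible from the 'left' or 'right' end of the row."""
    seq = order if side == "left" else list(reversed(order))
    visible, tallest = set(), 0
    for name in seq:
        if MOUNTAINS[name] > tallest:  # not hidden behind a nearer, taller peak
            visible.add(name)
            tallest = MOUNTAINS[name]
    return visible

def make_item(rng):
    """One question/answer pair asking what another observer can see."""
    order = list(MOUNTAINS)
    rng.shuffle(order)
    side = rng.choice(["left", "right"])
    target = rng.choice(order)
    question = (f"Mountains from left to right: {', '.join(order)}. "
                f"An observer stands on the {side}. Can they see the {target} mountain?")
    answer = "yes" if target in visible_from(order, side) else "no"
    return question, answer

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        print(make_item(rng))
```

Items produced this way could serve either as training prompts or as a held-out probe of whether a model can reason about viewpoints other than its own.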
Additionally, synthetic data generation through physics-based simulations allows AI to engage with representations of real-world environments. By using engines such as MuJoCo or Genesis, researchers can create virtual spaces where AI learns fundamental cognitive principles through simulated experiences, much like a child learns through interaction with the physical world.
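As a rough illustration, a pipeline of this kind could start with something like the sketch below, which uses the MuJoCo Python bindings to simulate a falling ball and record consecutive states as training pairs. The scene, the number of steps, and the (state, next state) record format are assumptions chosen for the example; the paper does not specify a particular simulation setup.

```python
"""Minimal sketch: generating synthetic 'intuitive physics' data with MuJoCo.
A ball falls onto a plane; we record simple state vectors at successive steps,
which a model could later use to learn trajectory prediction. Requires the
`mujoco` Python package; the scene and record format are illustrative assumptions."""

import mujoco
import numpy as np

SCENE_XML = """
<mujoco>
  <option gravity="0 0 -9.81" timestep="0.002"/>
  <worldbody>
    <geom name="floor" type="plane" size="2 2 0.1"/>
    <body name="ball" pos="0 0 1">
      <freejoint/>
      <geom type="sphere" size="0.05" mass="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

def generate_trajectory(n_steps: int = 500):
    """Simulate the scene and return consecutive (state, next_state) pairs."""
    model = mujoco.MjModel.from_xml_string(SCENE_XML)
    data = mujoco.MjData(model)
    states = []
    for _ in range(n_steps):
        mujoco.mj_step(model, data)
        # Ball position plus joint velocities as a simple state vector.
        states.append(np.concatenate([data.body("ball").xpos.copy(),
                                      data.qvel.copy()]))
    return list(zip(states[:-1], states[1:]))  # training pairs: state -> next state

if __name__ == "__main__":
    pairs = generate_trajectory()
    print(f"generated {len(pairs)} (state, next_state) pairs")
```

Trajectory pairs of this kind could feed a model that learns to predict how objects move, one step toward the intuitive-physics component of core knowledge.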
This approach challenges the prevailing assumption that scaling alone is sufficient to achieve human-like intelligence. Instead, it advocates for a developmental pathway where AI models acquire knowledge progressively, ensuring that higher-level reasoning is built upon a solid cognitive foundation. The researchers argue that by adopting this methodology, AI can achieve greater robustness, improved adaptability, and more reliable decision-making in real-world applications.
Conclusion: Rethinking AI’s path to intelligence
The study by Luo, Li, and Deng presents a compelling case for a paradigm shift in AI development. Instead of focusing solely on increasing data and computational power, the next generation of AI models should be designed to learn like humans - through structured experiences that build foundational knowledge before advancing to complex reasoning.
By integrating core knowledge into AI through cognitive prototyping and synthetic data generation, researchers can address key weaknesses in current models and pave the way for more resilient and adaptable AI systems. This approach not only enhances AI’s ability to generalize knowledge but also opens new possibilities for aligning machine intelligence with human cognitive processes.
As AI continues to evolve, the question is no longer just about making machines smarter, but about making them learn in a way that reflects human intelligence. Growing AI like a child may be the key to unlocking the next frontier in artificial intelligence, ensuring that machines develop understanding, adaptability, and true problem-solving capabilities.
First published in: Devdiscourse

