When AI Learns From Itself: The Hidden Risks to Collective Knowledge Systems
AI systems that learn from and feed back into human-generated data can distort collective knowledge by amplifying existing biases, especially when they update too quickly. More localized and slower AI designs help preserve diverse information and lead to more accurate and robust learning outcomes.
Artificial intelligence is no longer just a tool for answering questions. It is becoming the main way people access, interpret, and share information. A new study by researchers from the Massachusetts Institute of Technology, Columbia University, Dartmouth College, and the National Bureau of Economic Research shows that this shift is quietly transforming how societies learn.
Modern AI systems are trained on large amounts of human-generated content. But here’s the catch: their outputs are also feeding back into the same data pool. Over time, AI learns from the content it helped create. This creates a feedback loop in which the line between original knowledge and AI-generated information blurs.
The study explores what this means for collective understanding and whether AI is helping society get closer to the truth or drifting away from it.
When AI Starts Shaping Beliefs
To study this, the researchers use a simple model of social learning: people form beliefs by listening to others in their network. Now add an AI system that collects everyone's opinions, summarizes them, and feeds the summary back to everyone.
This is already happening in real life. People increasingly rely on AI for answers, summaries, and explanations. But the study finds that AI doesn’t just pass along information. It changes whose voices matter more.
Because AI is trained on particular data sources, it weights some groups' views more heavily than others'. Even without any intentional bias, it can amplify certain viewpoints while muting others.
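One round of this loop can be sketched in a few lines of Python. Everything here is illustrative: the belief values, the five-fold training weight on the first three voices, and the blend rate `alpha` are assumptions, not the study's calibration.

```python
import numpy as np

# Toy population of ten beliefs about some question (illustrative numbers)
beliefs = np.array([0.9, 0.8, 1.0, 0.1, 0.0, -0.1, 0.2, -0.2, 0.1, 0.0])

# Assumed training-data imbalance: the first three voices dominate the corpus
train_weights = np.ones(len(beliefs))
train_weights[:3] = 5.0
train_weights /= train_weights.sum()

# The AI's summary is a weighted average that over-weights those voices
ai_summary = train_weights @ beliefs

# Everyone blends their own belief with the AI's summary
alpha = 0.5  # assumed: how strongly people defer to the AI
new_beliefs = (1 - alpha) * beliefs + alpha * ai_summary
```

Even a single round pulls the average belief toward the over-weighted voices and shrinks the spread of opinions, which is the kind of homogenizing feedback described above.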
The Hidden Risk of Fast AI
One of the most important findings is about how quickly AI systems update.
If an AI updates very fast, it reflects current beliefs almost instantly. But those beliefs are often already biased due to social factors like echo chambers or group influence. When AI learns from these biased beliefs and sends them back into the system, it reinforces them.
This creates a cycle where the same distortions keep getting stronger. The faster the system updates, the stronger this effect becomes. In such cases, the study finds that it becomes almost impossible to design an AI that consistently improves learning.
In simple terms, faster AI can actually make society less informed.
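One way to see the effect is a toy two-variable simulation tracking the population's mean belief and the AI's belief. The update rule, the rates, and the assumption that the AI starts anchored to an earlier, accurate corpus are all illustrative choices, not the study's model.

```python
def run(update_rate, steps=500, beta=0.1):
    """Toy loop: people drift toward the AI at rate beta,
    the AI drifts toward current popular belief at update_rate."""
    truth = 0.0
    m = 1.0    # population starts at a socially biased consensus
    a = truth  # assumed: AI starts anchored to accurate pre-distortion data
    for _ in range(steps):
        m, a = (1 - beta) * m + beta * a, (1 - update_rate) * a + update_rate * m
    return abs(m - truth)  # final distance of the consensus from the truth

err_fast = run(update_rate=0.9)   # AI absorbs current beliefs almost instantly
err_slow = run(update_rate=0.05)  # AI updates gradually
```

In this toy model the long-run consensus ends up `update_rate / (update_rate + beta)` away from the truth, so the faster the AI absorbs current beliefs, the more of the initial social bias survives the feedback loop.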
Can AI Fix Social Bias?
The study also looks at how AI interacts with social inequality.
In many cases, some groups dominate online content. When AI is trained mostly on data from these groups, their views become more influential. As divisions in society grow, this imbalance gets worse, leading to poorer overall learning.
One solution might be to give more weight to underrepresented groups. But the results are not straightforward. Sometimes this helps correct bias. Other times, it overcompensates and creates new distortions.
The outcome depends on how connected different groups are. This shows that fairness in AI is not just about adjusting data. It is deeply tied to how society itself is structured.
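A small numerical sketch shows both outcomes. All numbers are invented for illustration: a dominant group biased above the truth (taken as zero) and a small group biased below it.

```python
import numpy as np

group_a = np.full(9, 0.8)   # dominant online group, biased above the truth (0)
group_b = np.full(1, -0.4)  # underrepresented group, biased below the truth
beliefs = np.concatenate([group_a, group_b])

def ai_estimate(minority_weight):
    """Weighted average with each minority voice scaled by minority_weight."""
    weights = np.ones_like(beliefs)
    weights[9:] = minority_weight
    return weights @ beliefs / weights.sum()

err_raw = abs(ai_estimate(1.0))        # volume-weighted: inherits the majority's bias
err_balanced = abs(ai_estimate(9.0))   # equal group influence: partially corrects
err_over = abs(ai_estimate(200.0))     # heavy over-weighting: a new distortion
```

With these numbers, equalizing the groups' influence cuts the error, but pushing the minority weight far past parity swings the estimate to the other side of the truth, so it ends up worse than the balanced version.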
Why Smaller AI Systems Might Work Better
The researchers also compare large, global AI systems with smaller, more focused ones.
Today’s AI models usually combine information from everyone and apply a single approach to all topics. While this works at scale, it also creates strong feedback loops and spreads errors widely.
An alternative is to use multiple smaller AI systems, each focused on specific topics or communities. These “local” systems rely on more relevant information and limit the spread of mistakes.
The study finds that these smaller systems consistently produce better learning outcomes. In contrast, a single large system cannot perform well in every area because it cannot balance all perspectives at once.
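The trade-off can be illustrated with two topics and two groups, where each group happens to be accurate on a different topic. The numbers, and the constraint that a global model must use a single weighting everywhere, are assumptions made for illustration.

```python
truth = {"topic1": 1.0, "topic2": -1.0}
# Assumed beliefs: group A is accurate on topic1, group B on topic2
beliefs = {
    "topic1": {"A": 1.0, "B": 0.2},
    "topic2": {"A": -0.1, "B": -1.0},
}

def total_error(weight_on_a_by_topic):
    """Sum of estimation errors when group A gets the given weight per topic."""
    err = 0.0
    for topic, w in weight_on_a_by_topic.items():
        estimate = w * beliefs[topic]["A"] + (1 - w) * beliefs[topic]["B"]
        err += abs(estimate - truth[topic])
    return err

# A single global model must apply one weighting to every topic
global_err = min(total_error({"topic1": w / 100, "topic2": w / 100}) for w in range(101))
# Local, topic-specific models can each use the weighting that fits their topic
local_err = total_error({"topic1": 1.0, "topic2": 0.0})
```

Because the global model cannot favor group A on one topic and group B on the other, even its best single weighting leaves a residual error, while topic-specific weights drive the error to zero in this toy setup.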
The Big Takeaway
AI is not just a neutral tool. It actively shapes how knowledge is formed and shared. The way AI systems are designed matters a lot. Fast, centralized systems that pull from broad data sources may seem efficient, but they risk amplifying bias and reducing the diversity of information. Slower, more focused systems may be less flashy, but they produce more reliable outcomes.
As AI becomes a central part of how we access information, the key question is not just how powerful these systems are, but how wisely they are built and used.
First published in: Devdiscourse

