The silent takeover: How algorithms are governing childhood

CO-EDP, VisionRI | Updated: 30-12-2025 19:21 IST | Created: 30-12-2025 19:21 IST

Researchers warn that AI is not merely supporting childhood development but actively reshaping how children experience time, agency, judgment, and participation in society. This transformation raises urgent questions about who controls the conditions under which children grow, learn, and become citizens in an algorithmically mediated world.

A study titled “Ontological Capture: AI, Childhood, and the Algorithmic Governance of Becoming,” published in AI & Society, argues that AI-driven systems do more than manage behavior. They reconfigure the very foundations of human development, narrowing the space for ethical reflection, civic imagination, and autonomous becoming.

Algorithmic systems move from support to governance in childhood

The paper traces how AI has moved from assisting educational administration to governing childhood at a structural level. Predictive analytics, learning management systems, and behavior-monitoring tools now operate continuously, producing real-time assessments of attention, performance, and compliance. These systems promise early intervention and personalized pathways, but they also introduce a new logic of control based on anticipation rather than response.

According to the study, algorithmic governance differs fundamentally from traditional forms of discipline. Instead of correcting behavior after deviation occurs, AI systems work pre-emptively, shaping conduct before alternatives can emerge. By predicting risk, performance, or disengagement, these platforms channel children toward predefined outcomes while narrowing the space for uncertainty, resistance, or exploration.

The authors describe this process as ontological capture, a condition in which algorithmic systems shape not only actions but the underlying possibilities of becoming. Childhood, once characterized by experimentation, ambiguity, and temporal openness, is increasingly compressed into sequences of measurable outputs. Time is reorganized around immediacy and constant responsiveness, leaving little room for reflection, delay, or moral deliberation.

This shift, the study argues, transforms how children learn to relate to authority. Governance becomes embedded in interfaces, metrics, and automated feedback rather than embodied in human relationships. Compliance is encouraged through design rather than instruction, producing what the authors identify as an algorithmic form of subject formation.

From education to algorithmic citizenship

The authors argue that predictive systems habituate children to a form of citizenship defined by optimization, responsiveness, and alignment with algorithmic expectations. Civic participation is reframed as performance within systems that reward predictability and penalize deviation.

Drawing on critical social theory, the paper shows how algorithmic governance replaces judgment with calculation. Ethical reasoning, which requires time, uncertainty, and dialogue, is increasingly sidelined in favor of instant feedback loops. Children learn to respond to signals rather than reflect on consequences, to optimize behavior rather than question underlying values.

This process, the authors argue, produces an algorithmic citizen whose agency is constrained not through force but through infrastructure. The system does not prohibit dissent directly; instead, it renders dissent inefficient, invisible, or irrelevant. By continuously ranking, scoring, and nudging behavior, AI systems subtly align individuals with institutional goals while presenting those goals as neutral or inevitable.

This transformation is not limited to formal education. Similar logics now operate across digital platforms used by children and young people, from content recommendation systems to behavioral tracking tools. Together, these systems create an ecosystem in which becoming human is increasingly mediated by machinic rhythms rather than social or ethical deliberation.

The authors warn that this shift has long-term consequences for democratic life. When civic capacities such as judgment, responsibility, and imagination are narrowed during childhood, societies risk producing compliant subjects rather than critically engaged citizens.

Stewardship as a counter-model to algorithmic control

In response to these concerns, the study does not call for the rejection of AI but for a fundamental reorientation of how it is governed and deployed. The authors propose stewardship leadership as an ethical counter-model to algorithmic governance.

Stewardship emphasizes care, attentiveness, and responsibility for sustaining the conditions of human becoming rather than optimizing outcomes. In contrast to predictive control, stewardship accepts uncertainty and values processes that cannot be fully measured or automated. It prioritizes relational depth, temporal openness, and moral reflection as essential elements of education and civic life.

The study outlines practical orientations aligned with stewardship, including slowing down learning processes, protecting spaces for unstructured play, encouraging embodied and arts-based practices, and preserving silence and non-instrumental time. These practices resist ontological capture by refusing to reduce human development to data points or performance indicators.

Importantly, stewardship reframes leadership as an ethical responsibility rather than a managerial function. Educators, policymakers, and system designers are positioned not as controllers of outcomes but as guardians of conditions that allow children to grow into ethically grounded, socially responsive individuals.

The authors stress that stewardship does not reject technology outright. Instead, it demands that AI systems remain subordinate to human values that cannot be fully encoded into algorithms. Governance, in this view, must be accountable to the lived experience of children rather than abstract measures of efficiency or success.

First published in: Devdiscourse