AI is quietly altering human cognition


CO-EDP, VisionRI | Updated: 28-12-2025 11:17 IST | Created: 28-12-2025 11:17 IST

Artificial intelligence has become a daily cognitive companion for students, professionals, and institutions, quietly shaping how people search for information, write, learn, reason, and communicate. As AI tools move deeper into everyday life, a central question is gaining urgency: what does it actually mean to be “AI literate” in a world where machines no longer merely automate tasks but have begun to influence human thinking itself?

New research suggests that current approaches to AI literacy may be missing a critical dimension. While most public discussions focus on technical skills, ethical rules, or policy oversight, the study finds that people are increasingly concerned about how AI affects their own mental processes. These concerns point to a growing need for what researchers describe as metacognitive AI literacy, an ability to recognize and reflect on how AI reshapes human thought, judgment, creativity, and belief formation.

The findings come from the study Metacognitive AI Literacy: Findings from an Interactive AI Fair, published in the journal AI & Society.

AI literacy moves beyond technical know-how

The study examined how attendees at the Interactive AI Fair understood and engaged with artificial intelligence across a wide range of disciplines. Participants included undergraduate and graduate students, faculty, staff, and members of the local community. Most attendees already used AI tools frequently, especially large language models, for academic work, professional tasks, and everyday activities.

Despite this widespread use, detailed technical understanding of AI systems was limited. Instead, most participants displayed what the researchers classify as practical AI literacy. This included basic familiarity with how AI tools function, when they might be useful, and how they could be applied to tasks such as writing, research, coding, and content creation. Many attendees expressed interest in improving their efficiency and competitiveness by learning how to use AI more effectively in education and work.

At the same time, the research shows that practical literacy alone does not capture the full scope of public engagement with AI. Participants repeatedly raised questions about accuracy, reliability, authorship, and appropriate use, particularly in academic contexts. Concerns about whether AI-generated content could be trusted, how it should be cited, and where responsibility lies for errors or misinformation were common across sessions.

To interpret these findings, the researchers applied a classic framework from science literacy research, originally developed by Benjamin Shen. The framework divides literacy into three interconnected dimensions: practical, civic, and cultural. Applied to AI, the framework reveals that people expect far more from AI literacy initiatives than technical training alone.

Civic AI literacy emerged as a major area of concern. Participants showed strong interest in governance, regulation, and accountability, reflecting anxiety about AI’s growing influence on public life. Issues such as bias, surveillance, misinformation, geopolitical power, and unequal access to AI tools dominated discussions. Many attendees questioned how governments and companies should regulate AI, who should be held responsible for harm, and how democratic oversight can keep pace with rapid technological change.

Cultural AI literacy also surfaced throughout the event. Attendees expressed fascination with AI’s creative potential and its ability to accelerate scientific discovery, preserve knowledge, and expand access to information. At the same time, there was unease about AI’s impact on human creativity, memory, and meaning-making. Participants debated whether AI enhances or diminishes originality, whether it changes how people read and write, and how it might alter long-standing cultural practices in art, education, and communication.

Together, these three dimensions confirm that public engagement with AI is already far more complex than basic skill-building. However, the study’s most significant finding lies beyond Shen’s original framework.

Metacognitive effects raise new concerns

Across interviews, observations, and survey responses, the researchers identified a recurring theme that did not fit neatly into existing models of AI literacy. Many participants were less worried about what AI does and more concerned about what it does to them.

These concerns focused on the internal effects of AI use, particularly how frequent interaction with AI systems might influence thought patterns, reasoning habits, creativity, and communication styles. Participants described unease about becoming overly dependent on AI feedback, losing confidence in their own ideas, or unconsciously adopting the structure and tone of AI-generated content.

The ability to notice and reflect on these effects is what the authors define as metacognitive AI literacy. Metacognition refers to the capacity to reflect on one’s own thinking processes, including how beliefs are formed, how confidence is assigned to information, and how decisions are made. In the context of AI, metacognitive literacy involves recognizing how AI tools can shape cognition, often without users being fully aware of the influence.

The study shows that many AI users already sense these effects intuitively. Participants raised concerns that AI-assisted writing could blur the line between refinement and authorship, that AI summaries might replace deep reading, and that repeated exposure to fluent but potentially flawed AI output could erode critical thinking. Some worried that AI’s confidence and coherence could lead users to overtrust its responses, even when the information is incomplete or incorrect.

These anxieties were not limited to students. Faculty members and professionals expressed similar concerns, particularly about education. There was apprehension that AI could encourage surface-level engagement, shortcut learning processes, or reshape academic norms around effort, originality, and evaluation.

The researchers argue that these concerns are not secondary or speculative. They reflect a structural feature of modern AI systems. Unlike earlier technologies, AI tools are opaque, generative, adaptive, and often designed to mimic human interaction. Users cannot easily see how outputs are produced, AI systems generate novel content rather than fixed results, and repeated interaction can create feedback loops that influence future outputs and user behavior.

As a result, AI does not simply assist thinking. It participates in it. This makes metacognitive awareness essential. Without it, users may struggle to judge the reliability of AI-generated information, recognize subtle biases, or maintain control over their own reasoning processes.

Why education and policy must adapt

The study’s conclusions carry significant implications for education, policy, and public engagement with AI. The authors argue that AI literacy programs must expand beyond technical instruction and ethical guidelines to include explicit attention to metacognitive effects.

In educational settings, this means designing assignments and curricula that encourage students to reflect on how they use AI and how it shapes their thinking. Rather than banning AI outright or treating it as a neutral productivity tool, educators are urged to integrate reflective practices that help learners evaluate when AI supports understanding and when it may undermine it.

For policymakers and AI developers, the findings highlight the need for greater transparency. Users need clearer information about what AI systems can and cannot do, how outputs are generated, and where limitations lie. Without this, metacognitive risks increase, as users may assign unwarranted confidence to AI-generated content or fail to recognize its influence on their beliefs and decisions.

The research also suggests that civic engagement around AI regulation must consider cognitive and cultural impacts, not just technical safety or economic efficiency. If AI systems shape public discourse, information ecosystems, and individual reasoning, then governance frameworks must address these effects explicitly.

  • FIRST PUBLISHED IN: Devdiscourse