AI is changing how people think at work, not just what tasks they do

CO-EDP, VisionRI | Updated: 14-01-2026 17:43 IST | Created: 14-01-2026 17:43 IST

Workforce strategies for artificial intelligence have been built on a false assumption: that knowing about AI is enough. A new research framework challenges that logic, arguing that the real fault line in the AI economy is not between technical and non-technical workers, but between societies that can operate fluently in AI-mediated environments and those that cannot.

That argument is laid out in The AI Pyramid: A Conceptual Framework for Workforce Capability in the Age of AI, published as a research paper by NAAMII and Tangible Careers. The paper rejects training-first approaches to AI readiness and instead frames human capability as a system-level requirement, calling for a deliberate distribution of AI-related skills across populations, institutions, and economies as generative systems reshape how cognitive work is performed.

From AI literacy to AI nativity

The study challenges the dominant language used in policy and organizational discussions around AI skills. Concepts such as digital literacy or AI literacy typically focus on awareness and basic understanding of how AI systems work. According to the authors, this framing assumes that AI is an external tool that workers must learn to operate. That assumption no longer holds. Modern AI systems write, summarize, analyze, recommend, and adapt, operating less like passive software and more like cognitive collaborators.

To capture this shift, the authors introduce the concept of AI nativity. AI nativity does not mean technical expertise or the ability to build AI systems. Instead, it refers to a behavioral and cognitive orientation in which individuals integrate AI fluidly into everyday reasoning and work practices. AI-native workers routinely frame problems in ways AI systems can engage with, evaluate outputs critically, and combine machine-generated suggestions with human judgment. Crucially, this capability also includes knowing when AI should not be used and recognizing bias, uncertainty, and ethical risk.

This distinction matters because evidence shows that AI exposure is now concentrated in non-routine, high-skill occupations. Generative AI affects tasks such as drafting, analysis, communication, and ideation, functions that sit at the heart of professional knowledge work. As a result, AI nativity is no longer a niche skill for technologists. It is becoming a baseline requirement for participation in modern work environments.

The shift from AI literacy to AI nativity is not gradual or incremental. It is a qualitative transformation comparable to the difference between studying a language and becoming fluent in it. Literacy prepares people to recognize AI. Nativity prepares them to think with AI as part of their cognitive environment. Without this deeper integration, productivity gains remain uneven, coordination breaks down, and organizations struggle to adapt as AI reshapes workflows.

The AI pyramid and the distribution of capability

To move beyond abstract skill debates, the paper introduces the AI Pyramid, a conceptual framework that organizes AI-related capabilities into three interdependent layers: AI Native, AI Foundation, and AI Deep. The pyramid is not intended as a career ladder or hierarchy of prestige. Instead, it describes how capabilities must be distributed across populations for AI-enabled systems to function at scale.

At the base of the pyramid lies AI Native capability. This layer includes the broad population that works in AI-mediated environments but does not design or engineer AI systems. The authors argue that this layer is the most critical, not the least. Without widespread AI nativity, higher levels of technical capability cannot deliver value. Teams cannot coordinate effectively, decisions degrade, and AI outputs are misused or misunderstood.

AI Native capability is defined by observable behavior rather than formal knowledge. It is reflected in how people structure tasks for AI, interrogate results, and exercise judgment in situations involving uncertainty or ethical trade-offs. The study stresses that this capability must extend across sectors and roles, especially given evidence that AI exposure is highest among educated, professional workers rather than entry-level or routine occupations.

The middle layer of the pyramid, AI Foundation capability, includes those who build, integrate, and maintain AI-enabled systems. These are engineers, applied data practitioners, and technical generalists responsible for translating organizational goals into working AI tools. Unlike AI nativity, which emphasizes behavioral fluency, Foundation capability is expressed through system-building outcomes. It involves managing data pipelines, integrating models into workflows, ensuring reliability, and governing deployment within real-world constraints.

The authors highlight that this layer faces the fastest rate of skill obsolescence. As AI tools and architectures evolve rapidly, static degrees and one-time certifications lose value. Sustaining AI Foundation capability requires continuous learning anchored in real implementation problems rather than abstract instruction.

At the apex of the pyramid sits AI Deep capability. This layer includes researchers and scientists who advance AI itself by developing new models, algorithms, and theoretical insights. The study makes clear that AI Deep capability is not required in every organization. Its importance lies at the level of national and global innovation systems, where breakthroughs diffuse outward and shape the tools used by Foundation builders and AI-native users.

The pyramid structure reflects functional dependency rather than individual progression. A society with strong research talent but weak AI nativity struggles to translate innovation into productivity. Conversely, widespread tool adoption without sufficient system-building expertise leads to fragile deployments and governance failures. The framework argues that imbalance across layers undermines the economic and social returns of AI.

Why workforce training is no longer enough

One of the paper’s key claims is that traditional workforce development models cannot keep pace with AI-driven change. Historically, skills were acquired through discrete programs, courses, and certifications designed to transfer stable knowledge. AI disrupts this model by continuously reshaping what effective practice looks like. Prompting techniques, workflow orchestration, and judgment strategies evolve as AI systems gain new capabilities.

The authors argue that AI capability must be treated as infrastructure rather than episodic training. Infrastructure, in this context, refers to systems that make skills visible, measurable, and continuously adaptable. The study identifies three core components of such infrastructure: measurement, learning, and credentialing.

Measurement infrastructure defines what it means to be AI-native, Foundation-level, or Deep-level and updates those definitions as AI evolves. Learning infrastructure embeds capability development into real work through problem-based learning, allowing individuals to acquire skills by solving the tasks they actually face. Credentialing infrastructure enables capabilities to be recognized and trusted across organizations and labor markets without relying on static qualifications.

Problem-based learning plays a central role in this model. Rather than teaching AI concepts in isolation, learning begins with real problems and introduces AI methods when they are relevant to solving those problems. This approach aligns with how AI skills are used in practice, where context, judgment, and iteration matter more than theoretical mastery.

The paper links this learning model to situated learning theory, which holds that expertise develops through participation in authentic practices within communities. For AI capability, this means embedding learning directly into work processes rather than separating training from application. Competency-based assessment replaces course completion as the primary signal of skill, focusing on demonstrated performance across contexts.

At scale, this approach requires shared skill ontologies that define how AI-related competencies relate to roles and tasks. These ontologies allow capabilities to be tracked over time, compared across populations, and updated as systems change. Without such infrastructure, governments and organizations lack visibility into where AI capability gaps exist and how they evolve.
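To make the idea of a skill ontology concrete, a minimal sketch of such a data model might look like the following. The paper does not prescribe a schema, so every class, field, and competency name here is hypothetical and purely illustrative:

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical sketch of a skill ontology; the paper prescribes no schema,
# so all names below are illustrative assumptions.
class Layer(Enum):
    AI_NATIVE = "ai_native"          # broad behavioral fluency with AI
    AI_FOUNDATION = "ai_foundation"  # building and integrating AI systems
    AI_DEEP = "ai_deep"              # advancing AI research itself

@dataclass
class Competency:
    name: str
    layer: Layer
    version: int = 1  # bumped as evolving AI systems redefine the skill

@dataclass
class Role:
    title: str
    required: list[Competency] = field(default_factory=list)

    def gaps(self, demonstrated: set[str]) -> list[str]:
        """Competencies the role requires but the holder has not yet shown."""
        return [c.name for c in self.required if c.name not in demonstrated]

# Usage: track which required competencies a worker still lacks.
framing = Competency("problem framing for AI", Layer.AI_NATIVE)
evaluation = Competency("critical output evaluation", Layer.AI_NATIVE)
analyst = Role("policy analyst", [framing, evaluation])
print(analyst.gaps({"problem framing for AI"}))  # -> ['critical output evaluation']
```

Because competencies carry an explicit version, definitions can be updated as AI capabilities change, which is what lets capability gaps be tracked over time rather than frozen into static qualifications.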

Existing labor market data and educational attainment statistics cannot capture the dynamic nature of AI-related skills. As a result, governments struggle to plan investments, detect inequality, or design targeted interventions. The authors argue that digital public infrastructure for workforce capability is essential to address these blind spots.

Research shows that AI assistance can deliver large productivity gains, particularly for less experienced workers, but only when access and capability are widespread. Without systematic development of AI nativity, benefits concentrate among early adopters, while others fall behind. The AI Pyramid offers a way to align learning, measurement, and policy to prevent such divergence.

  • FIRST PUBLISHED IN: Devdiscourse