AI prompts cannot replace tacit human skill


CO-EDP, VisionRI | Updated: 26-02-2026 19:11 IST | Created: 26-02-2026 19:11 IST

Human expertise has long been understood as something deeper than rule-following, grounded instead in tacit engagement with the world. As chatbots dispense increasingly detailed guidance, the boundary between explicit instruction and practical skill is under renewed scrutiny.

This boundary is examined in "From Explicit Prompts to Tacit Engagements: Understanding in Practice with LLMs," published in AI & Society, where the author explores how prompt-based AI systems interact with embodied human cognition and whether machine outputs can meaningfully inform real-world action.

The paradox of prompts and practical skill

LLMs operate through deliberate prompting. A user submits a text query, and the system generates a response designed to be coherent, informative, and contextually appropriate. These exchanges are propositional by design. They are explicit, thematic, and structured around articulated questions and answers.

However, most human expertise does not function in that way. Drawing on philosophical distinctions made famous by Gilbert Ryle, the study differentiates between “knowing that,” which concerns explicit facts and propositions, and “knowing how,” which concerns skilled action. Tacit knowledge is central to the latter. A person who knows how to ride a bicycle or balance professional obligations does not typically rely on articulated rules during performance.

The challenge arises when users attempt to translate AI-generated “knowing that” into embodied “knowing how.” For example, someone might ask an LLM how to cook a complex dish or assemble a structure. The instructions are explicit and textual. But the performance requires physical coordination, situational awareness, and context-sensitive judgment. The machine provides description; the human must enact it.

The author frames this as a philosophical puzzle. If LLM outputs are propositional and detached from the user’s environment, how can they contribute to the development of tacit skill? The answer, he argues, lies not in the machine but in the structure of human understanding.

Dreyfus, McDowell, and the debate over tacit knowledge

To clarify the stakes, the study revisits a well-known philosophical dispute between Hubert Dreyfus and John McDowell. Dreyfus contended that human expertise depends on embodied, context-sensitive coping rather than rule-following. According to this view, absorbed engagement in a situation cannot be reduced to internal representations or conceptual reasoning.

McDowell, on the other hand, argued that conceptual capacities permeate perception itself. For him, even unreflective experience is conceptually structured, allowing continuity between perception and judgment.

The author examines both positions and identifies limits in each. Dreyfus highlights the importance of embodied engagement but struggles to explain how rule-based instruction transitions into skillful performance. McDowell secures epistemic continuity between tacit and explicit knowledge but risks overstating the role of conceptual reflection.

The study turns to Martin Heidegger to resolve this tension. Heidegger’s phenomenology offers a framework in which understanding permeates both tacit and explicit engagements without reducing either to detached cognition. Human beings are always already situated in a world structured by practical relevance. Objects appear not as neutral entities but as usable, serviceable, or obstructive within a broader network of purposes.

In this framework, tacit and explicit knowledge are not opposites. Instead, explicit thematization emerges from a background of practical understanding. This insight becomes central to understanding how LLM-generated outputs might influence real-world action.

Human–LLM interaction and the frame problem

The study then applies Heidegger’s account to contemporary AI use. The author argues that the so-called frame problem, traditionally attributed to AI systems, must be understood relationally. The issue is not simply that machines struggle to determine relevance. Rather, relevance is co-constituted in the interaction between user, system, and purpose.

LLMs are trained on vast datasets and generate outputs based on statistical patterns. They are not embedded in the user’s physical environment. As a result, their guidance is decontextualized. A recipe suggestion may assume access to ingredients that are unavailable. A building instruction may overlook local constraints. A lesson plan may misalign with a classroom’s actual needs.

The burden of contextualization therefore falls on the user. Tacit knowledge cannot be downloaded from a chatbot. It must be constructed in situ. The user must interpret the output, adapt it to the environment, and monitor its application.

The study identifies practical triangulation as a core competency in this process. Users must refine prompts, assess the limits of AI outputs, draw on prior experience, and remain alert to mismatches between textual advice and real-world conditions. This reflective vigilance becomes a necessary counterweight to AI’s persuasive fluency.

In educational contexts, the research highlights documented cases where ChatGPT-generated lesson plans required significant human revision. Errors included conceptual confusion, misaligned instructional strategies, and unsafe recommendations. These examples illustrate the limits of relying on AI as an autonomous instructor.

Interpretation, circumspection, and the construction of tacit knowledge

A key concept in the study is circumspection, drawn from Heidegger’s analysis of practical engagement. Circumspection refers to the unthematic, skillful use of tools and objects in a meaningful context. It is the kind of awareness that allows someone to navigate a task fluidly without constant deliberation.

LLM outputs can support circumspection indirectly by highlighting relevant entities and functional relations. A system might suggest an overlooked tool or propose an alternative method. In doing so, it expands the user’s horizon of possibilities.

However, this expansion occurs only within the user’s pre-existing understanding. Interpretation precedes and shapes application. The user must already grasp the practical significance of cooking, building, teaching, or coding in order to make sense of AI-generated guidance.

This asymmetry distinguishes human–AI interaction from traditional teacher–student relationships. Human instructors can observe a learner’s performance, adjust feedback dynamically, and respond to tacit cues. LLMs cannot directly perceive embodied action or contextual nuance. Even as multimodal capabilities develop, including image and voice input, the translation of explicit instruction into tacit competence remains a human achievement.

Looking ahead, the author acknowledges that technological developments may narrow the gap between machine guidance and embodied learning. Multimodal AI systems capable of processing images or live video could provide more context-sensitive responses. Domain-specific fine-tuning may produce outputs tailored to particular practices.

  • FIRST PUBLISHED IN:
  • Devdiscourse