‘Glosslighting’ in AI: How words are inflating expectations of technology
Artificial intelligence (AI) is not only transforming industries but also reshaping the very language used to describe technological progress, according to a new philosophical analysis. The analysis argues that the words used in AI research, marketing, and policy are not neutral descriptors but powerful tools that shape perception, influence investment, and redefine public understanding of what artificial intelligence actually is.
The study, titled "Strategic Polysemy in AI Discourse: A Philosophical Analysis of Language, Hype, and Power," was presented at the 2026 ACM Conference on Fairness, Accountability, and Transparency (FAccT '26). It examines how commonly used AI terms such as "hallucination," "reasoning," "agent," and "alignment" carry multiple meanings at once, blending technical definitions with everyday human associations to create what the authors describe as a systematic pattern of linguistic ambiguity.
AI language drives hype, investment, and public perception
The research identifies a core mechanism shaping modern AI discourse: the strategic use of polysemy, or words with multiple related meanings. In everyday language, polysemy is a natural and often harmless feature. However, the study claims that in AI, this ambiguity is frequently exploited to produce persuasive effects across different audiences.
Terms such as "intelligence," "thinking," and "learning" carry strong cognitive and human-centered meanings. When applied to machine learning systems, these words can suggest capabilities far beyond what the underlying technology actually does. In technical terms, most AI systems operate through statistical pattern recognition and probabilistic outputs rather than genuine understanding or reasoning.
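To make that contrast concrete, here is a minimal sketch, in Python, of what "prediction" amounts to inside a language model: scoring candidate next tokens and sampling one from the resulting probability distribution. The vocabulary and scores below are invented for illustration and are not drawn from the study or from any real model.

```python
import math
import random

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a model might assign to candidate next tokens
# after the prompt "The capital of France is". No understanding is
# involved: the system only ranks and samples tokens.
candidates = ["Paris", "Lyon", "London", "banana"]
scores = [9.1, 3.2, 2.8, -1.5]

probs = softmax(scores)
next_token = random.choices(candidates, weights=probs, k=1)[0]

for token, p in zip(candidates, probs):
    print(f"{token}: {p:.4f}")
print("sampled:", next_token)
```

Nothing in this loop distinguishes a true statement from a plausible-sounding one, which is the gap the study says anthropomorphic vocabulary papers over.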
The authors highlight how this linguistic flexibility allows the same term to function differently depending on context. For researchers and engineers, a term like "reasoning" may refer to intermediate computational steps or structured outputs. For the public and policymakers, the same term may imply human-like cognition or decision-making capacity. This dual interpretation creates a gap between technical reality and perceived capability, which can significantly influence how AI systems are evaluated and trusted.
The study introduces the concept of "glosslighting" to describe this phenomenon. Glosslighting refers to the use of familiar, often anthropomorphic language in a technical context, while retaining the ability to retreat to a narrower definition when challenged. This enables developers, companies, and institutions to benefit from the persuasive power of human-like descriptions without fully committing to those interpretations.
This dynamic plays a central role in the AI hype cycle. By framing systems as capable of "thinking," "planning," or "understanding," developers position AI technologies not merely as tools but as entities with advanced cognitive abilities. This framing fuels excitement, attracts funding, and accelerates adoption, even when the underlying capabilities remain limited or highly specialized.
The study traces this pattern back to the origins of AI itself, noting that the term "AI" was originally coined to attract attention and secure funding. Over time, similar linguistic strategies have continued to shape how new technologies are introduced, marketed, and understood.
Anthropomorphic terminology masks technical limitations
A key finding of the analysis is that anthropomorphic language does more than simplify complex ideas. It actively obscures the limitations and operational nature of AI systems. The word "hallucination," for example, suggests a human-like perceptual error, when in reality it describes statistical outputs that deviate from expected patterns.
Similarly, terms such as "chain-of-thought reasoning" or "introspection" imply internal cognitive processes. In practice, these refer to structured sequences of generated text or outputs that resemble reasoning but do not involve awareness, intention, or understanding. The study emphasizes that these labels can lead to widespread misunderstanding, particularly when they are adopted in media reporting and policy discussions.
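To illustrate, here is a minimal sketch assuming a placeholder `generate` function that stands in for a real model call: the "chain of thought" is produced and consumed as ordinary text, with the steps parsed out of the same output stream as the answer.

```python
def generate(prompt: str) -> str:
    """Stand-in for a real model call; returns canned text for illustration."""
    return ("Step 1: 17 * 3 = 51. Step 2: 51 + 4 = 55. "
            "Final answer: 55")

question = "What is 17 * 3 + 4?"

# "Chain-of-thought" prompting: append an instruction to show work.
cot_prompt = f"{question}\nLet's think step by step."
output = generate(cot_prompt)

# The "reasoning" is just text, split from the answer by string matching.
steps, _, answer = output.rpartition("Final answer:")
print("generated steps:", steps.strip())
print("generated answer:", answer.strip())
```

The point of the sketch is that nothing here is introspected; the intermediate steps are generated tokens like any others.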
The term "agent" offers another example of this linguistic distortion. In everyday usage, an agent is an autonomous entity capable of making decisions and acting intentionally. In AI, the term often refers to a program that maps inputs to outputs or follows predefined loops. Despite this, industry narratives frequently describe AI agents as autonomous decision-makers, reinforcing perceptions of independence and intelligence.
The concept of "alignment" also illustrates the gap between technical and public interpretations. While commonly associated with ensuring that AI systems reflect human values, in practice it often refers to narrow optimization processes such as adjusting outputs based on training data or predefined constraints. This difference allows developers to signal ethical responsibility while relying on limited technical implementations.
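As one hedged illustration of that narrower sense, the sketch below "aligns" outputs by filtering them against a predefined constraint, here an invented blocklist. It is a mechanical adjustment step, not evidence that the system holds or reflects human values.

```python
# A predefined constraint; illustrative only, not any vendor's actual method.
BLOCKED_PHRASES = ["how to build a weapon"]

def constrain(output: str) -> str:
    """Replace disallowed outputs with a refusal; everything else passes."""
    if any(phrase in output.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help with that."
    return output

candidates = [
    "Here is how to build a weapon step by step...",
    "Here is a recipe for bread...",
]
for text in candidates:
    print(constrain(text))
```

The distance between this kind of narrow constraint and "reflecting human values" is precisely the polysemy the authors describe.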
These linguistic patterns are not isolated. The study shows that they are widespread across AI discourse, spanning academic publications, corporate communications, media coverage, and policy debates. As these terms circulate, they become normalized, shaping how AI systems are described and understood at a societal level.
This normalization creates a feedback loop. Once anthropomorphic terms are widely adopted, they influence how new technologies are framed, which in turn reinforces their use. Media amplification and investment incentives further strengthen this cycle, making it difficult to replace ambiguous language with more precise alternatives.
Ethical and policy risks emerge from linguistic ambiguity
The study identifies significant ethical and structural risks associated with glosslighting. One of the most immediate concerns is the erosion of public understanding. When AI systems are described using human-like terminology, users may overestimate their capabilities and reliability, leading to misplaced trust.
This distortion has direct implications for decision-making in high-stakes contexts such as healthcare, finance, and governance. If policymakers and practitioners rely on inflated perceptions of AI capabilities, they may deploy systems without fully understanding their limitations or failure modes.
The study also highlights how linguistic ambiguity diffuses accountability. When AI systems are framed as autonomous agents, responsibility for their actions becomes less clear. Failures can be attributed to the system itself rather than to the designers, developers, or institutions behind it. This shift complicates efforts to establish oversight and enforce regulatory standards.
Another critical issue is the role of language in reinforcing power asymmetries. The actors who define and control AI terminology (primarily researchers, corporations, and industry leaders) also shape how the technology is perceived and governed. This gives them significant influence over public discourse, funding priorities, and regulatory agendas.
The research further links glosslighting to the broader political economy of AI. In a competitive landscape driven by rapid innovation and investment, compelling narratives are essential for attracting attention and resources. Ambiguous, anthropomorphic language serves this purpose by making technologies appear more advanced, accessible, and inevitable.
However, this comes at a cost. The study warns that glosslighting can enable pseudoscientific claims, amplify speculative narratives about artificial general intelligence, and obscure the real challenges associated with deploying AI systems. It can also contribute to cycles of hype and disillusionment, similar to previous waves of technological overpromising.
The authors argue that addressing these issues requires more than improved communication. It demands a shift in how AI is described, evaluated, and regulated. Researchers and developers are encouraged to adopt more precise terminology that reflects the actual mechanisms of AI systems, even if this reduces their rhetorical appeal.
Additionally, journalists and policymakers play a crucial role in shaping public understanding. By critically examining the language used in AI discourse and avoiding uncritical adoption of anthropomorphic terms, they can help create a more accurate and grounded narrative around the technology.
First published in: Devdiscourse