Hidden harms of workplace AI threaten skills and professional dignity


CO-EDP, VisionRI | Updated: 02-02-2026 09:29 IST | Created: 02-02-2026 09:29 IST

From healthcare and finance to logistics and public administration, AI systems are promoted as tools that improve efficiency, accuracy, and productivity. Yet as adoption accelerates, a quieter set of consequences is emerging, raising concerns not about immediate system failures but about how prolonged reliance on AI may be altering human judgment, skills, and professional identity.

These concerns are the focus of a new study titled "From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms for Dignified Human-AI Interaction," published on arXiv and currently under review for academic publication. Authored by Upol Ehsan, Samir Passi, Koustuv Saha, Todd McNutt, Mark O. Riedl, and Sara Alcorn, the study presents one of the most detailed longitudinal examinations to date of how AI systems affect workers over time, beyond short-term performance metrics.

Based on a year-long investigation of AI use in radiation oncology, the research states that many of the most serious harms associated with workplace AI are asymptomatic. These harms do not immediately degrade outcomes or trigger alarms but gradually erode human expertise, autonomy, and dignity, often remaining invisible until they become deeply entrenched.

When productivity gains mask long-term human costs

The study challenges dominant narratives around AI-driven productivity. In many professional settings, AI systems are evaluated primarily on efficiency gains, error reduction, and throughput. In the clinical environment examined by the authors, AI-assisted tools improved workflow speed and consistency in radiation treatment planning, reinforcing the perception that automation was delivering unambiguous benefits.

However, the longitudinal nature of the study revealed a more complex picture. Over time, clinicians began to rely increasingly on AI-generated recommendations, even in cases where manual expertise had previously been central. This shift did not immediately reduce performance quality, which made it difficult for organizations to detect emerging problems. Instead, it subtly changed how professionals engaged with their work.

The authors document a gradual decline in hands-on skill application, as repetitive reliance on automated outputs reduced opportunities for deliberate practice. Clinicians reported diminished confidence in their own judgment when AI systems were unavailable or produced unexpected results. This phenomenon, described as skill atrophy, was not caused by negligence but by structural changes in workflow that positioned AI as the default decision-maker.

The study notes that these effects were not limited to technical skills. Professional intuition, contextual reasoning, and the ability to recognize edge cases also weakened over time. Because AI systems handled routine and complex cases alike, workers had fewer chances to engage deeply with challenging scenarios that traditionally reinforced expertise.

The research introduces the concept of asymptomatic AI harms to describe this pattern. These harms remain hidden precisely because conventional performance indicators continue to look positive. Productivity increases, error rates decline, and organizational targets are met, even as human capability quietly deteriorates.

The AI-as-Amplifier paradox in high-stakes work

The study’s core analytical insight is captured in what the authors call the “AI-as-Amplifier Paradox.” AI systems are designed to amplify human capability by automating complex tasks and enhancing decision-making. At the same time, by consistently outperforming humans in speed and pattern recognition, they can unintentionally diminish the very expertise they are meant to support.

In radiation oncology, this paradox played out as clinicians shifted from active planners to overseers of automated processes. While oversight remained essential, the nature of professional engagement changed. Workers increasingly evaluated AI outputs rather than constructing solutions themselves, narrowing the scope of their cognitive involvement.

Over the course of the study, this shift led to a form of overreliance. When AI outputs aligned with expectations, human scrutiny decreased. When discrepancies arose, clinicians reported uncertainty about whether to trust their own judgment or defer to the system. This dynamic introduced new forms of cognitive stress and decision fatigue, particularly in high-stakes situations.

The study also identifies what it terms identity commoditization. As AI systems became central to workflow, some professionals felt their role was reduced to maintaining system efficiency rather than exercising expert judgment. This perception affected job satisfaction and raised concerns about long-term professional relevance, especially among early-career practitioners who feared losing opportunities to develop mastery.

Importantly, the authors stress that these outcomes were not the result of poor system design or malicious intent. Instead, they emerged from well-intentioned efforts to optimize efficiency without accounting for how human expertise develops and is sustained over time. The paradox highlights a fundamental tension in workplace AI deployment: systems that succeed by traditional metrics may still undermine human resilience.

Designing for dignity and long-term human agency

To address these challenges, the study proposes a shift in how organizations evaluate and govern AI systems at work. Rather than focusing solely on output metrics, the authors argue for incorporating dignity, agency, and expertise preservation as core evaluation criteria.

Key to this approach is the Dignified Human-AI Interaction framework introduced in the paper. The framework emphasizes sociotechnical immunity, defined as the capacity of organizations and systems to detect, absorb, and recover from slow-moving human harms caused by AI integration. This includes mechanisms for early detection of skill erosion, such as monitoring changes in human decision patterns and confidence levels over time.
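The paper describes this kind of monitoring conceptually rather than prescribing an implementation. As a purely illustrative sketch of what tracking such signals could look like, the Python example below compares a recent window of plan reviews against a baseline window; the event fields, metrics, and thresholds are hypothetical assumptions introduced here for illustration, not elements of the study.

```python
# Illustrative sketch only: one hypothetical way to track drift in human-AI
# decision patterns over time. Field names, metrics, and thresholds are
# assumptions for this example, not taken from the paper.
from dataclasses import dataclass
from statistics import mean


@dataclass
class ReviewEvent:
    """One logged review of an AI-generated plan by a clinician."""
    accepted_unmodified: bool   # clinician accepted the AI output as-is
    review_seconds: float       # time spent scrutinizing the output


def acceptance_rate(events):
    """Fraction of AI outputs accepted without modification."""
    return mean(1.0 if e.accepted_unmodified else 0.0 for e in events)


def mean_review_time(events):
    """Average time spent reviewing each AI output, in seconds."""
    return mean(e.review_seconds for e in events)


def flag_possible_skill_erosion(baseline, recent,
                                acceptance_jump=0.15, review_drop=0.30):
    """Compare a recent window of reviews against a baseline window.

    Rising unmodified acceptance plus falling review time is treated here
    as a possible early signal of over-reliance that warrants human
    follow-up; both thresholds are illustrative assumptions.
    """
    flags = []
    if acceptance_rate(recent) - acceptance_rate(baseline) > acceptance_jump:
        flags.append("unmodified acceptance of AI plans rose sharply")
    if mean_review_time(recent) < (1 - review_drop) * mean_review_time(baseline):
        flags.append("time spent scrutinizing AI plans fell sharply")
    return flags


if __name__ == "__main__":
    baseline = [ReviewEvent(False, 420), ReviewEvent(True, 380), ReviewEvent(False, 450)]
    recent = [ReviewEvent(True, 120), ReviewEvent(True, 90), ReviewEvent(True, 150)]
    for flag in flag_possible_skill_erosion(baseline, recent):
        print("Signal:", flag)
```

In keeping with the study's emphasis on slow-moving harms, any such signal would serve only as a prompt for human review, not as an automated judgment about individual workers.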

The framework also calls for intentional workflow design that preserves opportunities for human practice and learning. In the clinical context, this could involve rotating responsibility between manual and AI-assisted planning, creating protected spaces for skill reinforcement, and ensuring that AI systems explain their outputs in ways that support human understanding rather than replace it.

Training and professional development are also reframed. Instead of treating AI as a static tool, the authors argue that organizations must continuously update training programs to reflect evolving human-AI dynamics. This includes preparing workers to challenge AI outputs, recognize system limitations, and maintain independent judgment under pressure.

At the organizational level, the study urges leaders to rethink success metrics. Short-term productivity gains should be weighed against long-term workforce sustainability. Failure to do so, the authors warn, risks creating brittle systems that depend heavily on automation while lacking the human expertise needed to respond to unexpected events or system failures.

FIRST PUBLISHED IN: Devdiscourse