Hidden behavioral costs of AI collaboration at work


While much of the public debate has focused on the productivity gains from integrating AI into everyday workplace processes, a new behavioral study published in the journal Behavioral Sciences suggests that closer collaboration between employees and AI systems may also produce unintended human consequences. Specifically, the research finds that working alongside AI can heighten employees’ sense of job insecurity and quietly encourage them to withhold knowledge from colleagues, undermining collaboration inside organizations.

The study, titled From Synergy to Strain: Exploring the Psychological Mechanisms Linking Employee–AI Collaboration and Knowledge Hiding, examines how psychological responses to AI integration shape employee behavior in knowledge-intensive firms, offering one of the clearest empirical looks to date at the hidden social costs of workplace AI adoption.

Based on data collected from employees in Chinese organizations that have actively deployed AI systems, the research challenges the assumption that human–AI collaboration automatically strengthens knowledge sharing and innovation. Instead, it shows that when employees perceive AI as a threat to their job security or professional relevance, they may respond defensively by concealing information, expertise, or insights that could otherwise benefit their teams.

When collaboration with AI triggers job insecurity

The study is based on cognitive appraisal theory, which holds that individuals interpret new technologies not only in terms of their objective capabilities but also through subjective assessments of threat and opportunity. Applying this framework to the workplace, the authors argue that employees do not experience AI collaboration in a uniform way. Instead, they actively evaluate what working with AI means for their own job stability, status, and future prospects.

To test this idea, the researchers conducted a three-wave, time-lagged survey of 348 employees from knowledge-intensive enterprises where AI systems were already integrated into daily work. The time-lagged design allowed the authors to separate cause and effect more clearly, reducing the risk that the observed relationships were driven by short-term mood or common method bias.

The results show a consistent pattern. Employees who reported higher levels of collaboration with AI systems also reported stronger feelings of job insecurity over time. This insecurity did not stem from explicit job loss announcements or restructuring events, but from a more subtle psychological process. As AI systems demonstrated their ability to analyze data, make recommendations, or automate tasks previously performed by humans, employees began to question their own long-term value to the organization and to worry about being replaced.

This finding is significant because it reframes job insecurity as an internal, perception-driven outcome of AI collaboration rather than a direct result of layoffs or automation announcements. Even in organizations where jobs were not immediately threatened, the mere presence of capable AI systems was enough to activate concerns about future redundancy or diminished relevance.

The study highlights that this effect is particularly pronounced in knowledge-intensive roles, where expertise, judgment, and information-sharing are central to performance. In such contexts, AI’s ability to replicate or augment cognitive tasks may be perceived as encroaching on core aspects of professional identity.

Why insecurity leads to knowledge hiding

Job insecurity alone does not automatically translate into harmful workplace behavior. The study’s key contribution lies in identifying how insecurity reshapes employee responses, specifically by increasing knowledge hiding. Knowledge hiding refers to the intentional withholding of information, ideas, or expertise when colleagues request it, even though sharing would be appropriate and beneficial.

The analysis shows that job insecurity acts as a psychological bridge between AI collaboration and knowledge hiding. Employees who felt more insecure about their jobs were significantly more likely to engage in behaviors such as giving partial information, delaying responses, or pretending not to know relevant details. These behaviors are not overtly confrontational and can easily go unnoticed by managers, yet they erode trust and collective performance over time.

From a psychological perspective, the study interprets knowledge hiding as a self-protective coping strategy. When employees perceive their positions as threatened, knowledge becomes a form of personal leverage. By withholding expertise, employees attempt to preserve their unique value, making themselves harder to replace and maintaining a sense of control in an uncertain environment.

Importantly, the study finds that AI collaboration does not directly cause knowledge hiding. Instead, the effect is indirect and mediated by job insecurity. This distinction matters for organizational response. Reducing AI use or slowing digital transformation would not necessarily solve the problem. Instead, organizations must address how employees interpret and emotionally respond to AI integration.
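
For readers curious how an indirect effect of this kind is typically quantified, the sketch below shows a standard regression-based mediation test with a bootstrapped confidence interval. It is a minimal illustration, not the authors’ actual analysis: the variable names (ai_collab, insecurity, hiding) and the data file are hypothetical placeholders standing in for the study’s survey measures.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: standardized composites for employee-AI collaboration,
# job insecurity, and knowledge hiding from a multi-wave survey.
df = pd.read_csv("survey_waves.csv")  # placeholder file name

# Path a: AI collaboration -> job insecurity
a_path = smf.ols("insecurity ~ ai_collab", data=df).fit()
# Path b (plus direct effect c'): job insecurity -> knowledge hiding,
# controlling for AI collaboration
b_path = smf.ols("hiding ~ insecurity + ai_collab", data=df).fit()

# The indirect (mediated) effect is the product of the two paths
indirect = a_path.params["ai_collab"] * b_path.params["insecurity"]

# Percentile bootstrap confidence interval for the indirect effect
rng = np.random.default_rng(42)
boot = []
for _ in range(5000):
    resampled = df.sample(len(df), replace=True, random_state=rng)
    a = smf.ols("insecurity ~ ai_collab", data=resampled).fit().params["ai_collab"]
    b = smf.ols("hiding ~ insecurity + ai_collab", data=resampled).fit().params["insecurity"]
    boot.append(a * b)
low, high = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect = {indirect:.3f}, 95% bootstrap CI = [{low:.3f}, {high:.3f}]")

A confidence interval that excludes zero would indicate that the path running through job insecurity, rather than a direct link, carries the effect, which is the pattern the study reports.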

The findings also help explain why some AI deployments fail to deliver expected collaboration and innovation benefits. Even as AI systems improve technical efficiency, the social fabric of the organization may weaken if employees retreat into defensive behaviors. Over time, this can reduce learning, slow problem-solving, and undermine the very advantages AI was meant to provide.

Trust in AI as a critical buffer

Not all employees respond to AI collaboration in the same way. A central insight from the study is the moderating role of trust in AI. Trust in AI refers to the extent to which employees believe that AI systems are reliable, fair, and supportive rather than threatening or opaque.

The analysis shows that employees with higher levels of trust in AI experienced weaker increases in job insecurity when collaborating with AI systems. As a result, the indirect pathway from AI collaboration to knowledge hiding was significantly reduced. In contrast, employees who distrusted AI were more likely to interpret collaboration as a signal of impending displacement, amplifying insecurity and defensive behavior.
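
A buffering effect of this kind is usually probed by adding an interaction term on the first path and evaluating the mediated effect at low and high levels of the moderator. The snippet below reuses the hypothetical variables from the earlier sketch, plus an assumed standardized trust-in-AI composite (trust), to illustrate the general idea rather than the study’s exact model.

# Continues the hypothetical example above; "trust" is a standardized trust-in-AI composite.
# First-stage moderation: does trust weaken the link from AI collaboration to insecurity?
a_mod = smf.ols("insecurity ~ ai_collab * trust", data=df).fit()
b = smf.ols("hiding ~ insecurity + ai_collab", data=df).fit().params["insecurity"]

# Conditional indirect effects at low (-1 SD) and high (+1 SD) trust
for label, z in [("low trust (-1 SD)", -1.0), ("high trust (+1 SD)", 1.0)]:
    a_cond = a_mod.params["ai_collab"] + a_mod.params["ai_collab:trust"] * z
    print(f"{label}: conditional indirect effect = {a_cond * b:.3f}")

In the pattern the study describes, the conditional indirect effect would be clearly weaker at high trust than at low trust.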

This finding underscores that psychological context matters as much as technological capability. Trust shapes whether AI is perceived as a partner that enhances human work or as a competitor that undermines it. Where trust is low, even well-designed AI systems may provoke resistance and counterproductive behavior.

The study suggests several factors that influence AI trust, including transparency about how AI systems work, clarity about how AI outputs are used in decision-making, and assurances that AI is intended to augment rather than replace human roles. While these elements were not directly tested in the study, they are implied by the theoretical framework and empirical results.

From a management perspective, the findings point to trust-building as a strategic priority in AI adoption. Technical training alone is insufficient. Employees also need psychological reassurance that AI integration will not erode their career prospects or devalue their expertise.

Implications for organizations adopting AI

First, the findings challenge the assumption that AI collaboration naturally fosters openness and shared learning. Without deliberate intervention, AI may instead intensify competition over knowledge and status, particularly in environments where performance evaluation and promotion are closely tied to individual expertise.

Second, the findings highlight job insecurity as a critical but often overlooked side effect of AI adoption. Organizations frequently focus on technical readiness and return on investment while underestimating how employees interpret technological change. Addressing job insecurity requires more than generic change management. It involves clear communication about role evolution, reskilling pathways, and the long-term place of human expertise alongside AI.

Third, the study positions trust in AI as a lever that organizations can actively influence. Transparent governance, explainable AI systems, and consistent messaging about AI’s purpose can reduce fear-driven responses. Leaders play a key role in shaping these perceptions, particularly when they frame AI as a tool for empowerment rather than substitution.

The research also suggests that knowledge hiding should be treated as an early warning signal rather than a moral failing. When employees begin to withhold information, it may reflect deeper concerns about security and recognition. Identifying and addressing these concerns early could prevent longer-term damage to collaboration and organizational learning.
