AI at work works best when employees are involved


CO-EDP, VisionRI | Updated: 12-02-2026 11:47 IST | Created: 12-02-2026 11:47 IST

Organizations worldwide are accelerating the adoption of artificial intelligence (AI) in the workplace, but many are discovering that technical deployment alone does not guarantee positive outcomes. Without careful management, AI can trigger uncertainty, resistance, and declining engagement among employees.

A new study published in Behavioral Sciences, titled "Making Artificial Intelligence Work at Work: The Role of Human Resource Practices and Personal Attitudes in Fostering Meaningful Work with Artificial Intelligence," shows that transparent communication, employee involvement, and training play a decisive role in shaping how workers experience AI at work.

Why AI implementation is becoming a workplace well-being issue

While AI promises efficiency and innovation, it also raises fears around job displacement, deskilling, loss of autonomy, and constant monitoring. These concerns are not abstract. Employees increasingly encounter AI systems that influence performance evaluation, task prioritization, and even managerial decisions, often with limited explanation or consultation.

The authors focus on what they define as Employee-Centered AI Implementation practices, a set of organizational actions that shape how employees experience AI at work. These practices include transparent communication about why and how AI is introduced, opportunities for employees to express concerns and contribute feedback, and targeted training that helps workers understand and effectively use AI tools.

The research draws on job demands–resources theory, which suggests that workplace resources such as autonomy, clarity, and support can buffer the stress associated with change and enhance motivation and performance.

Using survey data from 168 Italian white-collar employees who regularly interact with AI systems, the authors examine how employee-centered practices influence job satisfaction and job performance. The results show a clear and consistent pattern: organizations that adopt employee-centered AI practices see better outcomes across both dimensions.

Employees who reported higher levels of transparency, involvement, and training also reported greater satisfaction with their jobs and stronger self-assessed performance. These effects were not marginal. The study finds that employee-centered AI implementation has a direct positive relationship with both outcomes, underscoring that the way AI is introduced shapes how employees evaluate their work experience.

Meaningful work as the missing link in AI adoption

The study identifies meaningful work as a key psychological mechanism linking AI practices to employee outcomes. Meaningful work refers to the extent to which employees perceive their work as valuable, purposeful, and aligned with their personal values. The authors argue that AI can either enhance or diminish this sense of meaning, depending on how it is implemented.

The findings show that employee-centered AI practices increase employees’ sense of meaningful work, which in turn boosts job satisfaction and performance. In other words, AI does not improve outcomes simply by making tasks faster or easier. It improves outcomes when employees understand how AI fits into their role, feel respected during its introduction, and see the technology as supporting rather than replacing their contribution.
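To make that mediation logic concrete, the sketch below runs a simple regression-based mediation check on synthetic data. It is purely illustrative: the variable names (practices, meaning, satisfaction), the simulated effect sizes, and the modeling approach are assumptions for this example, not the authors' actual data or analysis.

```python
# Illustrative sketch only: a regression-based mediation check on synthetic
# data. This is NOT the study's dataset or its statistical model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 168  # sample size borrowed from the study; the data below are simulated

# Hypothetical chain: employee-centered AI practices -> meaningful work -> satisfaction
practices = rng.normal(size=n)
meaning = 0.5 * practices + rng.normal(scale=0.8, size=n)                        # assumed a-path
satisfaction = 0.3 * practices + 0.6 * meaning + rng.normal(scale=0.8, size=n)   # assumed b- and c'-paths

df = pd.DataFrame({"practices": practices, "meaning": meaning, "satisfaction": satisfaction})

a_model = smf.ols("meaning ~ practices", data=df).fit()                  # do practices predict meaningful work?
b_model = smf.ols("satisfaction ~ practices + meaning", data=df).fit()   # does meaning carry part of the effect?

indirect = a_model.params["practices"] * b_model.params["meaning"]
direct = b_model.params["practices"]
print(f"indirect (mediated) effect ~ {indirect:.2f}, direct effect ~ {direct:.2f}")
```

In a pattern like the one the study reports, the indirect path through meaningful work carries a substantial share of the total effect, which is what identifies meaningful work as a mediator rather than a by-product.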

This mediating role of meaningful work is particularly important in understanding why some AI initiatives succeed while others generate resistance. When AI is introduced without explanation or employee input, workers may perceive it as undermining their skills or reducing their autonomy. By contrast, when organizations frame AI as a tool that complements human judgment and provide space for employees to adapt, workers are more likely to integrate AI into their professional identity.

The study also challenges the assumption that technological change inevitably leads to alienation. Instead, it shows that organizational choices can shape whether AI becomes a source of disengagement or a catalyst for renewed purpose at work. Meaningful work emerges as a central indicator of successful AI integration, linking human resource practices with measurable performance outcomes.

Role of employee attitudes toward AI

While employee-centered practices matter, the study finds that their effectiveness is not uniform across all workers. A key moderating factor is employees’ personal attitudes toward AI. The authors distinguish between employees who hold generally positive views of AI and those who are more skeptical or apprehensive.

The analysis shows that employee-centered AI practices have a stronger positive impact on meaningful work, job satisfaction, and performance among employees with favorable attitudes toward AI. For these workers, transparent communication and training amplify existing openness to AI, reinforcing the perception that the technology is beneficial and aligned with their goals.

For employees with negative or uncertain attitudes toward AI, the same practices are less effective. While transparency and training still matter, they do not fully offset underlying concerns or distrust. This finding highlights an often-overlooked dimension of AI adoption: organizational strategies must account for individual differences in perception and readiness, not just structural processes.
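As a rough illustration of how such a moderation effect is typically tested, the sketch below adds an attitudes-toward-AI score and its interaction with the practices variable, again on synthetic data. The variable names, coefficients, and data-generating process are assumptions for illustration, not the study's own model.

```python
# Illustrative sketch only: testing whether attitudes toward AI moderate the
# effect of employee-centered practices, using synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 168  # sample size borrowed from the study; the data below are simulated

practices = rng.normal(size=n)
attitudes = rng.normal(size=n)  # hypothetical "attitude toward AI" score
# Assumed data-generating process: the practices effect grows with positive attitudes
meaning = (0.4 * practices + 0.2 * attitudes
           + 0.3 * practices * attitudes
           + rng.normal(scale=0.8, size=n))

df = pd.DataFrame({"practices": practices, "attitudes": attitudes, "meaning": meaning})

# "practices * attitudes" expands to both main effects plus their interaction;
# a positive practices:attitudes coefficient means the practices matter more
# for employees who already hold favorable views of AI.
model = smf.ols("meaning ~ practices * attitudes", data=df).fit()
print(model.params[["practices", "attitudes", "practices:attitudes"]])
```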

The study suggests that organizations cannot assume a neutral starting point when introducing AI. Employees bring prior beliefs shaped by media narratives, personal experience, and broader social debates about automation. Addressing these beliefs may require tailored communication strategies, ongoing dialogue, and a longer adjustment period for some groups.

Importantly, the authors do not argue that skeptical employees should be ignored or overridden. Instead, the findings point to the need for early engagement and attitude-aware implementation strategies that recognize resistance as a signal to adapt, not a barrier to bypass.

Implications for organizations and AI governance

The study reinforces the idea that AI implementation is not a one-off technical rollout but an ongoing organizational change process that reshapes work identities, relationships, and expectations.

For organizations, the research recommends that investments in AI be matched by investments in people. Training is not merely about technical skills but about helping employees understand the role of AI in decision-making and how their expertise remains relevant. Consultation mechanisms are not symbolic gestures but practical tools for identifying risks, improving system design, and building trust.

The study also aligns with broader policy debates around responsible AI. While much regulatory attention focuses on algorithmic transparency, bias, and accountability at the system level, the findings highlight the importance of organizational context. Even well-designed AI systems can generate negative outcomes if introduced without regard for employee experience.

From a governance perspective, the research suggests that responsible AI frameworks should extend beyond technical audits to include human resource practices and employee engagement metrics. Meaningful work, satisfaction, and performance are not peripheral concerns. They are indicators of whether AI is being integrated in a sustainable and socially responsible manner.

The authors also note the limitations of their study, including its cross-sectional design and focus on a specific national context. However, they argue that the underlying mechanisms identified are likely relevant across sectors and countries, particularly as AI adoption accelerates globally.

FIRST PUBLISHED IN: Devdiscourse