Organizational culture, not technology, shapes worker attitudes toward AI systems

The authors find that positive experiences with AI, specifically the belief that AI enhances one's work, are the most powerful predictors of favorable attitudes across all key dimensions.


CO-EDP, VisionRI | Updated: 18-11-2025 14:51 IST | Created: 18-11-2025 14:51 IST

Artificial intelligence may be transforming workplaces across industries, but a new scientific study shows that employees’ day-to-day experiences, workplace ethics, and organizational innovation climate determine whether they trust or fear the growing presence of AI. The research, featured in the journal AI, examines the real factors shaping employee attitudes toward AI systems used in organizational contexts.

The peer-reviewed study "Attitudes Toward Artificial Intelligence in Organizational Contexts" argues that workers' acceptance of AI is not just a matter of personal preference or technical exposure. Instead, employees form their opinions based on how AI intersects with ethical clarity in the workplace, how innovative their organization is, and how effectively AI supports their actual job performance. The results reveal that organizations, not individuals, are the decisive force in shaping whether AI is embraced, rationalized, feared, or resisted.

Using a sample of hundreds of Italian employees across diverse sectors, the authors identify the specific cultural, ethical, and experiential factors that influence workers’ emotional and cognitive responses to AI technologies. The research offers guidance to employers struggling with AI adoption and provides insight into the future of human-AI collaboration in modern organizations.

Ethical culture shapes AI anxiety, trust and job insecurity

The authors analyze organizational ethical culture through multiple dimensions, including clarity of rules, feasibility of ethical action, sanctions, transparency, and supportability. Their findings show that ethical culture is not a peripheral issue but a decisive force in how workers interpret the role of AI in their professional lives.

The study demonstrates that ethical clarity, the degree to which rules and expectations are clearly communicated, reduces emotional responses such as AI-related anxiety and job insecurity. Workers who understand the boundaries and intentions behind workplace technologies perceive AI as less threatening. Ethical clarity also lowers the instinct to attribute human-like qualities to AI systems, indicating that ambiguous organizational practices tend to heighten emotional projections onto technology.

At the same time, the research shows that supportability, or the presence of cooperative and respectful workplace relationships, shapes how employees psychologically interpret AI. In supportive environments, workers are less likely to perceive AI as highly adaptable or human-like. This suggests that when social cohesion is strong, employees rely more on interpersonal support and less on anthropomorphic interpretations of AI tools.

Meanwhile, feasibility, which measures whether employees believe they can act ethically under real workplace constraints, plays a more complex role. Workers who perceive low feasibility, those who feel torn between ethical norms and organizational limitations, show heightened anxiety around AI. They also report stronger job-insecurity concerns and are more prone to attribute human traits to AI systems. This reflects an underlying tension: when employees feel ethically constrained, AI becomes an additional symbol of uncertainty and potential threat.

Ethical climate is not a backdrop but a mechanism through which AI technologies are interpreted. Organizations with ethical clarity and supportive work environments reduce resistance and foster trust. Those with ethical ambiguity or feasibility conflicts amplify emotional insecurity, making AI adoption more difficult and contentious.

Innovation climate has limited but significant influence on AI attitudes

Innovation is often assumed to correlate directly with technology acceptance, but the researchers challenge this assumption by examining several distinct dimensions of organizational innovativeness.

Their results show that general innovation climate does not significantly predict employees’ attitudes toward AI. Instead, only one specific dimension, Raising Projects, which reflects whether employees are encouraged to generate new ideas, has a meaningful impact. Workers in organizations that actively involve them in proposing new solutions and improvements perceive AI as more adaptable and more capable of functioning effectively in dynamic environments.

This indicates that innovation, in its abstract or top-down form, does little to change attitudes toward AI. Employees are not swayed by corporate rhetoric about transformation or digital readiness. What matters is whether they are personally empowered to participate in innovation processes. When workers are actively involved in shaping change, they interpret AI as a flexible and beneficial tool. Without such empowerment, innovation messaging has no substantial effect on how AI is perceived.

The findings reveal an important nuance often overlooked in discussions of AI adoption. Employees do not automatically equate innovation with technological competence. Instead, they respond to innovation culture only when it directly affects their role, creativity, and agency. This has significant implications for organizational leaders who assume that a strong innovation brand or culture is enough to generate support for AI implementation. The study shows that meaningful involvement, not messaging, drives positive attitudes.

Job performance supported by AI is the strongest driver of positive attitudes

The authors find that positive experiences with AI, specifically the belief that AI enhances one's work, are the most powerful predictors of favorable attitudes across all key dimensions.

Workers who experience AI as performance-enhancing report:

  • Higher perceptions of AI quality
  • Greater belief that AI is personally useful
  • Reduced anxiety related to AI usage
  • Stronger perceptions of AI adaptability
  • Increased tendency to attribute human-like characteristics to AI
  • Lower resistance and higher analytic acceptance of AI tools

These effects exceed the influence of organizational ethical culture and innovation climate. In other words, real, positive interactions with AI override abstract fears or ethical concerns. When employees directly benefit from AI in their work, emotional resistance declines and technological acceptance rises, even in environments with moderate ethical ambiguity or weak innovation culture.

The researchers connect these findings to broader theories of technology adoption, emphasizing that hands-on experience and demonstrable utility are the foundation of sustainable acceptance. Employees respond most strongly to improvements in their own performance because these experiences reshape their expectations and reduce uncertainty surrounding AI technologies.

This insight makes it clear why organizations often struggle with AI rollout despite significant investment in communication campaigns or training programs. Without direct performance benefits, workers remain unconvinced. When AI improves efficiency, accuracy, or task support, acceptance becomes natural and self-reinforcing.

A Framework for Understanding Attitudes Toward AI at Work

The authors present a conceptual model showing that attitudes toward AI are shaped by three interconnected layers:

  1. Organizational Ethical Culture: determines whether employees feel secure, respected, and ethically grounded when interacting with AI systems.

  2. Organizational Innovativeness: influences whether workers view AI as adaptable, but only when innovation encourages employee participation.

  3. Perceived Job Performance with AI: the most decisive factor, shaping utility, trust, quality perceptions, and emotional responses.

The study argues that these three layers form the foundation of what the authors call socially situated attitudes toward AI. Employees do not view AI in isolation. They interpret it through the lens of their organizational environment, their sense of agency, and their day-to-day experience with technology.

Implications for Organizations Adopting AI

The research offers several implications for leaders implementing AI technologies:

  • Ethical clarity reduces fear and job insecurity, making AI integration smoother.
  • Employee-driven innovation strengthens the perception of AI adaptability, enhancing acceptance.
  • Demonstrable improvements in job performance are essential; without them, even the best AI systems will face resistance.
  • Human–AI collaboration must be introduced with transparency and support, not top-down mandates.
  • Organizations must treat AI adoption as a cultural process, not a technical upgrade.

Overall, the study asserts that successful AI adoption depends far more on organizational conditions than on the technology itself. Ethical climate, participatory innovation, and performance-enhancing experiences create the environment in which AI can be trusted and used effectively.

First published in: Devdiscourse