Trust and perceived usefulness critical for effective human–AI collaboration

CO-EDP, VisionRI | Updated: 07-10-2025 22:00 IST | Created: 07-10-2025 22:00 IST
Representative Image. Credit: ChatGPT

Generative artificial intelligence is changing the way humans work alongside machines, but success hinges on building trust and demonstrating usefulness, according to a new study published in Information. The research shows that trust, in its various dimensions, plays a decisive role in motivating individuals to cooperate with AI systems, signaling a strategic imperative for organizations that aim to integrate these technologies into daily operations.

The study, titled “Factors Affecting Human-Generated AI Collaboration: Trust and Perceived Usefulness as Mediators”, explores the psychological and behavioral foundations of human–AI teamwork by analyzing how different forms of trust and perceptions of AI’s utility shape collaboration. Drawing on responses from 305 participants, including experts, office workers, and graduate students, the research identifies specific trust factors that influence attitudes toward AI and the intention to cooperate with it.

How trust shapes human willingness to collaborate with AI

The study assesses the impact of trust on human engagement with generative AI. Trust was analyzed across four dimensions: calculative-based trust (linked to reliability and performance), cognition-based trust (based on understanding how AI operates), knowledge-based trust (related to familiarity with AI), and social influence-based trust (the effect of social endorsement and peer acceptance).

The researchers found that performance, reliability, understandability, and social influence significantly enhance trust in generative AI, which in turn boosts both perceived usefulness and the intention to collaborate. This demonstrates that people are more likely to engage with AI systems that deliver consistently good results, are easy to understand, and are positively regarded by their peers.

Interestingly, familiarity, often assumed to be a primary trust driver, showed no significant impact. The authors attribute this to participants’ existing experience with AI technologies, indicating that as AI tools become widespread, mere exposure may no longer be enough to build trust. Instead, users prioritize concrete outcomes and transparency in how the system operates.

Perceived usefulness emerges as a key mediator

Besides trust, the study underscores the importance of perceived usefulness as a mediator between trust and collaborative intention. When individuals believe that AI tools meaningfully enhance their productivity, creativity, or decision-making, they are more inclined to work alongside them. This highlights the need for organizations to demonstrate tangible benefits of AI applications rather than relying solely on technical sophistication or familiarity.
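The mediation relationship described above (trust raises perceived usefulness, which in turn raises collaborative intention) can be illustrated with a minimal regression-based sketch in the style of a classic three-step mediation analysis. This is not the study's data or method; the variable names, effect sizes, and synthetic dataset below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 305  # sample size mirroring the study's respondent count

# Synthetic illustration only: trust drives perceived usefulness,
# which in turn drives the intention to collaborate with AI.
trust = rng.normal(size=n)
usefulness = 0.6 * trust + rng.normal(scale=0.5, size=n)
intention = 0.5 * usefulness + 0.2 * trust + rng.normal(scale=0.5, size=n)

def ols(y, predictors):
    """Least-squares coefficients for y ~ predictors (intercept first)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Step 1: total effect of trust on intention (c path)
c = ols(intention, [trust])[1]
# Step 2: effect of trust on the mediator, usefulness (a path)
a = ols(usefulness, [trust])[1]
# Step 3: intention regressed on both mediator and trust (b and c' paths)
_, b, c_prime = ols(intention, [usefulness, trust])

indirect = a * b  # the effect of trust carried through usefulness
print(f"total={c:.2f}, direct={c_prime:.2f}, indirect={indirect:.2f}")
```

In this toy setup the indirect (mediated) path accounts for most of trust's total effect, which is the pattern the study reports: demonstrating concrete usefulness is the main channel through which trust translates into willingness to collaborate.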

A notable and somewhat counterintuitive finding is that reliability, while enhancing trust, was negatively associated with perceived usefulness. The authors suggest that this paradox may stem from the expectation that generative AI’s value lies in its ability to provide novel and creative insights, not just predictable consistency. Users appear to seek AI that can offer fresh perspectives and innovative solutions, and may perceive overly consistent performance as limiting.

This insight challenges conventional wisdom in AI deployment, where reliability is often treated as the cornerstone of both trust and utility. It suggests that in contexts requiring creative problem-solving, such as content creation, product design, or complex decision-making, AI systems may need to balance predictability with the capacity to generate new ideas.

Implications for organizations and future research directions

The findings provide a roadmap for organizations seeking to maximize the potential of human–AI collaboration. Building trust requires not just technical excellence but also transparent communication about AI’s capabilities and limitations, as well as leveraging positive social influence to normalize adoption. Training programs that enhance users’ understanding of AI models and their decision-making processes can further reinforce trust and encourage deeper engagement.

The study also emphasizes the importance of context-sensitive trust management. For example, in high-stakes sectors like finance, healthcare, or cybersecurity, reliability and performance may carry more weight in shaping trust, while in creative industries, novelty and flexibility may be more critical to perceived usefulness. This nuanced approach can help organizations tailor AI deployment strategies to their specific operational and cultural environments.

The authors highlight several areas for further exploration. Future research should examine the role of emotional trust, which may influence collaboration in contexts where AI interacts directly with end-users or affects personal well-being. There is also a need to investigate ethical and cross-cultural factors, as perceptions of AI’s fairness, transparency, and value may vary significantly across regions and demographic groups.

Another key direction involves studying long-term collaboration dynamics. While the current study captures attitudes at a specific point in time, trust and perceived usefulness are likely to evolve as users gain more experience with AI tools. Understanding how these dynamics shift over time will be essential for maintaining effective partnerships between humans and AI.

  • FIRST PUBLISHED IN: Devdiscourse