AI systems failing to serve diverse users due to ignored personality differences

CO-EDP, VisionRI | Updated: 19-11-2025 18:47 IST | Created: 19-11-2025 18:47 IST
Representative Image. Credit: ChatGPT

A new study argues that the designers of today's artificial intelligence (AI) systems continue to overlook one of the most important factors shaping public trust, satisfaction, and safe technology use: human personality.

The paper, titled “When technology meets personality: toward human-centered AI design,” published in AI & Society, shows that AI adoption is not determined by technical capability alone. Instead, stable personality traits, such as openness, neuroticism, extraversion, attachment style, sensation seeking, need for closure, and need for cognition, strongly influence how people think, feel, and behave when interacting with AI systems.

The authors warn that current design frameworks, which aim to address issues like bias, transparency, and personalization, still treat users as if they respond to AI in uniform ways. The study argues that this assumption is false and increasingly problematic as AI spreads into mobility, healthcare, education, workplaces, and social platforms. Without integrating personality insights into system design, the study says, AI risks creating mistrust, anxiety, and negative user experiences across sectors.

How does personality influence trust and acceptance of AI?

The study analyzes whether personality traits predict people's comfort levels, trust, and willingness to adopt AI technologies. Drawing on a large body of psychological and human–computer interaction research, the authors show that personality consistently shapes how people interpret automation, risk, and machine decision-making.

The study highlights the Big Five model as a dominant framework through which these differences can be understood. People high in openness tend to welcome new technologies and show curiosity toward AI tools, while those high in neuroticism may approach AI with caution or concern due to heightened sensitivity to uncertainty and risk. Agreeableness often correlates with more positive attitudes toward AI, while conscientiousness and extraversion show more context-dependent patterns.

Beyond the Big Five, the authors present additional personality constructs that matter for AI design. People with high need for cognition prefer detailed information and thoughtful interaction, while individuals with high need for closure seek clarity, structure, and predictable systems. Sensation seekers may enjoy interactive, dynamic features, but anxious users or those with avoidant attachment styles may prefer simpler, low-pressure designs. These psychological differences shape the expectations people bring to smart systems, influencing whether they accept or reject AI in everyday life.

The authors show that personality also affects how people judge risk and trustworthiness. For example, some users place strong value on control, while others feel more comfortable delegating tasks to automated systems. Some respond favorably to adaptive or human-like behaviors, while others prefer minimal interaction. The study makes clear that AI systems designed with a single type of user in mind inevitably fail others. As AI becomes more embedded in private and public life, designing for personality diversity becomes a practical necessity rather than an optional enhancement.

How do personality differences shape AI use in communication, mobility and medical contexts?

The researchers next examine how personality traits influence behavior across three major sectors already shaped by AI technologies: information and communication platforms, autonomous transportation, and digital medical systems. In each area, they highlight distinct patterns that demand attention from designers and policymakers.

In the information and communication domain, personality shapes how individuals use and benefit from online interactions. Extraverted users often expand their social circles online, while introverted or socially anxious individuals may rely on digital platforms for support and self-expression. The authors point out that both groups can benefit from technology, but for different reasons: one through expanded social connectivity, the other through reduced social barriers. Openness and neuroticism further shape how people navigate content, risk, social comparison, and digital identity. These differences influence who is more likely to adopt AI-based communication tools, how they respond to algorithmic recommendations, and what emotional outcomes result from prolonged use.

In transportation, especially with the rise of autonomous vehicles (AVs), personality again becomes a determining factor. People with higher openness and confidence in innovation often show greater willingness to adopt fully automated driving systems. Those high in anxiety, neuroticism, or need for closure may hesitate due to uncertainty, loss of control, or fear of malfunction. Meanwhile, individuals with high need for cognition may want detailed explanations about system decisions, while others prefer simpler displays. These variations influence the success of automation interfaces, safety systems, and in-vehicle AI assistants. According to the authors, ignoring such differences could slow public adoption of autonomous mobility.

The medical sector adds another layer of complexity. Many health applications, remote monitoring technologies, and AI-based diagnostic tools already assume users will respond positively to prompts and reminders. Yet personality traits heavily influence adherence and engagement. Conscientious users may follow detailed health routines, while neurotic individuals may need reassurance to reduce stress and worry. People high in extraversion may respond better to socially supportive features, whereas introverted users may prefer low-interaction modes. The study suggests that AI-supported healthcare could significantly improve outcomes when personality-aligned interfaces are built into system design.
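To make the idea of "personality-aligned" prompting concrete, a reminder message could be toned differently depending on trait scores. The function, message templates, and thresholds below are invented for this sketch and are not taken from the study:

```python
def health_reminder(task: str, neuroticism: float, extraversion: float) -> str:
    """Compose a health reminder whose tone is tuned to hypothetical
    trait scores in the range 0.0-1.0 (illustrative thresholds only)."""
    msg = f"Reminder: {task}."
    # More neurotic users may need reassurance to reduce stress and worry.
    if neuroticism > 0.6:
        msg += " You're on track, and there is nothing to worry about."
    # More extraverted users may respond better to socially supportive features.
    if extraversion > 0.6:
        msg += " Would you like to share your progress with your support group?"
    return msg
```

An introverted, anxious user would receive the reassuring variant without the social prompt, while an extraverted, low-anxiety user would see the opposite.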

Across all three sectors, the researchers find a common theme: personality shapes not only whether people use AI tools but also how they interpret them, how much they trust them, how safe they feel, and how much well-being they gain or lose from the interaction. The evidence establishes that technology design cannot rely on generalized user assumptions if systems are expected to serve diverse and global populations.

What does a personality-aware framework mean for the future of AI design?

The study proposes a structured model explaining how personality influences human–AI interaction and how designers can translate these insights into practical design strategies. The authors identify three core mechanisms that connect personality traits to user behavior: control–trust dynamics, cognitive–affective processing, and social-relational orientation.

Control–trust dynamics reflect how much autonomy users are willing to hand over to AI. Some individuals rely heavily on automation and express high trust in systems, while others require transparency, step-by-step updates, or manual override options to feel safe. Cognitive–affective processing refers to how individuals think and feel about information presented by AI, influencing whether they prefer detailed data, simple summaries, or emotional reassurance. Social-relational orientation captures whether users seek close, supportive interaction from digital systems or prefer distant, functional designs.

These three mechanisms together create a blueprint for AI personalization that respects psychological diversity. The authors argue that AI systems should adapt at the interface level rather than expecting users to conform to uniform patterns of interaction. This shift would allow systems to meet users’ emotional needs, reduce anxiety, increase trust, and ultimately improve effectiveness.
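The interface-level adaptation the authors call for could, in the simplest case, be a rule-based mapping from trait scores to presentation settings along the three mechanisms. The class, setting names, and thresholds below are illustrative assumptions, not part of the paper:

```python
from dataclasses import dataclass

@dataclass
class TraitProfile:
    """Hypothetical trait scores, normalized to the range 0.0-1.0."""
    need_for_cognition: float
    neuroticism: float
    extraversion: float

def adapt_interface(profile: TraitProfile) -> dict:
    """Map a trait profile to presentation settings along the study's
    three mechanisms (thresholds and setting names are invented)."""
    settings = {}
    # Control-trust dynamics: anxious users get overrides and granular updates.
    settings["manual_override"] = profile.neuroticism > 0.6
    settings["progress_updates"] = (
        "step_by_step" if profile.neuroticism > 0.6 else "summary"
    )
    # Cognitive-affective processing: high need for cognition prefers detail.
    settings["detail_level"] = (
        "full" if profile.need_for_cognition > 0.6 else "concise"
    )
    # Social-relational orientation: extraverts may welcome conversational styles.
    settings["interaction_style"] = (
        "conversational" if profile.extraversion > 0.6 else "functional"
    )
    return settings
```

A real system would of course infer or elicit such profiles with consent and adapt continuously, but even this toy mapping shows how one interface can serve an anxious detail-seeker and a trusting minimalist differently.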

However, the study warns that a personality-aware approach introduces new risks. Collecting psychological data raises serious concerns about manipulation, privacy, and loss of autonomy. Personalization based on emotional states or personality traits can drift into behavioral steering if not regulated properly. The authors insist that personality-based design must be paired with transparency, control, user consent, and clear guardrails to avoid misuse. Without safeguards, personality-aware AI could become a tool for social pressure, targeted influence, or surveillance rather than user empowerment.

  • FIRST PUBLISHED IN:
  • Devdiscourse