Are we too optimistic about AI? Research reveals a new cognitive bias
Artificial intelligence is a transformative force, yet opinions on its impact vary widely. Some fear AI’s potential dangers, while others embrace its promise with unwavering optimism. But what drives this optimism, and why do some individuals downplay AI’s risks? A new study, "On Pessimism Aversion in the Context of Artificial Intelligence and Locus of Control: Insights from an International Sample," authored by Christian Montag, Peter J. Schulz, Heng Zhang, and Benjamin J. Li, and published in AI & Society, explores a novel psychological construct called AI Pessimism Aversion (AIPA). The study delves into how personality traits such as locus of control and risk aversion influence individuals’ attitudes toward AI, offering critical insights into the psychological factors shaping AI perceptions.
The concept of AI pessimism aversion
The study introduces AI Pessimism Aversion (AIPA), a psychological trait describing individuals who exhibit an overly optimistic view of AI by neglecting its potential risks. Inspired by Mustafa Suleyman’s concept of "pessimism aversion" - where tech elites downplay AI’s risks - the researchers sought to quantify this tendency. By analyzing responses from 543 participants across various countries, the study found that AIPA strongly correlates with general positive AI attitudes while being negatively linked to concerns about AI risks. Those with high AIPA tend to focus on AI’s benefits, dismissing warnings about its dangers as unnecessary fear-mongering.
To measure AIPA, the researchers developed a five-item scale assessing individuals' tendency to reject AI’s negative consequences while emphasizing its potential benefits. Respondents who scored higher on AIPA were more likely to believe that AI will solve global challenges, that concerns about AI dangers are exaggerated, and that AI will ultimately be a force for good.
The role of locus of control in AI perception
The study also examined how locus of control, a personality trait that determines whether individuals perceive life events as controlled by themselves (internal locus) or by external forces (external locus), influences attitudes toward AI. The findings revealed that individuals with a high internal locus of control - those who see themselves as in charge of their own lives - exhibited greater AI optimism and higher AIPA scores. These individuals were more likely to believe that AI could be harnessed for positive outcomes and that risks could be managed effectively.
Conversely, those with a high external locus of control - who believe their fate is dictated by external factors - did not show significantly elevated AIPA scores. This suggests that people who feel powerless in shaping their future are less likely to develop an uncritical optimism about AI. Instead, they may hold more neutral or skeptical views about its implications.
The unexpected link between risk aversion and AI attitudes
A key hypothesis of the study was that individuals with higher risk aversion - those who avoid uncertain or high-stakes situations - would exhibit lower AIPA scores, as they might be more cautious about AI’s rapid advancement. Surprisingly, the study found no significant correlation between risk aversion and AI pessimism aversion. This contradicts previous assumptions that risk-averse individuals would be more skeptical of AI due to its uncertainties. Instead, the findings suggest that a person’s confidence in AI is shaped more by their locus of control than by their general comfort with uncertainty and risk.
This result has important implications for AI governance and public policy. If skepticism toward AI is not driven by general risk aversion but rather by how much control individuals feel they have over their lives, then effective AI communication strategies should focus on empowering people with knowledge and agency rather than merely emphasizing risk mitigation.
Implications for AI development and public trust
Understanding AI Pessimism Aversion has significant consequences for how AI is developed, marketed, and regulated. As AI becomes more integrated into everyday life, its adoption depends on public trust. The study suggests that tech companies and policymakers must be mindful of the psychological biases influencing AI perceptions.
For AI developers, the study highlights the need for balanced messaging. Overhyping AI’s benefits while downplaying risks could alienate skeptical audiences and create backlash if promised benefits fail to materialize. On the other hand, excessive fear-mongering without presenting solutions could stifle innovation and lead to unnecessary restrictions. Striking the right balance requires acknowledging AI’s transformative potential while addressing its risks transparently and proactively.
For policymakers, the findings emphasize the importance of public engagement and education. Since those with an internal locus of control are more likely to embrace AI optimistically, fostering a sense of empowerment - such as through AI literacy programs - could help bridge the gap between AI proponents and skeptics. Instead of simply regulating AI from the top down, engaging the public in discussions about AI ethics, transparency, and safety could foster more informed and constructive attitudes toward AI development.
Conclusion
The study "On Pessimism Aversion in the Context of Artificial Intelligence and Locus of Control" provides a groundbreaking exploration of how psychological traits influence AI attitudes. By introducing the concept of AI Pessimism Aversion, it sheds light on why some individuals embrace AI’s promise while overlooking its risks. The research highlights the role of locus of control in shaping optimism toward AI, while surprisingly finding that risk aversion does not significantly impact AI attitudes.
These insights offer valuable lessons for AI developers, policymakers, and communicators. Ensuring responsible AI adoption requires not just technical safeguards but also a deeper understanding of the psychological forces driving AI perceptions. By fostering informed optimism - grounded in both AI’s potential and its challenges - we can navigate the future of artificial intelligence with greater clarity and responsibility.
- FIRST PUBLISHED IN:
- Devdiscourse

