AI that thinks more like humans using personality-based prompts

CO-EDP, VisionRI | Updated: 03-03-2025 11:57 IST | Created: 03-03-2025 11:57 IST

Artificial intelligence has long been designed to optimize accuracy, efficiency, and logical correctness. However, human reasoning is far more nuanced - it blends intuition, emotion, and prior experience in ways that often defy simple right-or-wrong answers. Can AI truly replicate the full spectrum of human reasoning?

A recent study by Animesh Nighojkar, Bekhzodbek Moydinboyev, My Duong, and John Licato from the University of South Florida’s Advancing Machine and Human Reasoning (AMHR) Lab explores this question in depth. Their research, titled "Giving AI Personalities Leads to More Human-Like Reasoning," investigates whether large language models (LLMs) can mimic both fast, intuitive reasoning (System 1) and slower, deliberate reasoning (System 2) by incorporating personality-based prompting.

The Challenge of Capturing Human Reasoning

Traditionally, AI models have been evaluated on their ability to provide the correct answer to a given problem. But human reasoning isn’t just about correctness - it’s a diverse process shaped by cognitive biases, personal experiences, and social influences. This diversity creates what the researchers call the "full reasoning spectrum problem": AI systems that only optimize accuracy fail to capture the rich variety of human thought processes.

To tackle this challenge, the study builds on Dual Process Theory, which distinguishes between two modes of thinking:

  • System 1 (fast, intuitive, automatic)
  • System 2 (slow, logical, deliberate)

Most prior research has focused on making AI more logical (System 2), assuming that System 1 is prone to errors. However, the researchers argue that both modes are essential for modeling human cognition. For example, quick, gut-feeling decisions often guide everyday actions, while deeper reflection is used for more complex problem-solving.

Experimenting with AI Personality Traits

The study introduces a novel method to simulate human reasoning by applying personality-based prompting inspired by the Big Five Personality Model (openness, conscientiousness, extraversion, agreeableness, neuroticism). Instead of asking LLMs to simply generate a single correct answer, the researchers designed prompts that reflected different personality traits, thereby eliciting a range of responses that mirror human cognitive diversity.
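To make the idea concrete, here is a minimal Python sketch of what trait-conditioned prompting might look like. The trait levels, the wording, and the persona_prompt helper are illustrative assumptions, not the prompts actually used in the study.

```python
# Illustrative sketch of personality-based prompting; the trait levels and
# phrasing below are assumptions, not the study's actual prompts.

BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

def persona_prompt(trait_levels: dict) -> str:
    """Render Big Five trait levels (e.g., 'high'/'low') into a system prompt."""
    traits = ", ".join(f"{level} {trait}" for trait, level in trait_levels.items())
    return (
        f"You are a person characterized by {traits}. "
        "Answer the question below as that person would, letting your "
        "personality shape how you reason, not just what you conclude."
    )

# Two contrasting personas: one leaning intuitive, one leaning deliberate.
fast_persona = persona_prompt(
    {"openness": "high", "conscientiousness": "low", "neuroticism": "high"})
slow_persona = persona_prompt(
    {"openness": "moderate", "conscientiousness": "high", "neuroticism": "low"})
print(fast_persona)
```

Sampling the same question under many such personas yields a spread of answers rather than one canonical response, which is the behavior the study set out to measure.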

To test this approach, they structured reasoning tasks using a modified version of the Natural Language Inference (NLI) format - a framework commonly used to assess whether a hypothesis logically follows from a given statement. Unlike conventional NLI datasets that only classify responses as entailment, contradiction, or neutral, this study introduced a six-category scale, allowing for more nuanced human-like responses.
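The six category names below are placeholders - the study's exact labels are not reproduced here - and the sketch is meant only to show how a graded scale replaces the usual three-way classification.

```python
# Hypothetical six-category NLI scale; the real study's label names may differ.
NLI_LABELS = [
    "definitely follows",
    "probably follows",
    "possibly follows",
    "possibly does not follow",
    "probably does not follow",
    "definitely does not follow",
]

def nli_prompt(premise: str, hypothesis: str) -> str:
    """Format an NLI item with a graded six-option response scale."""
    options = "\n".join(f"{i + 1}. {label}" for i, label in enumerate(NLI_LABELS))
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        f"How strongly does the hypothesis follow from the premise?\n{options}"
    )

print(nli_prompt("All the birds in the aviary are finches.",
                 "A randomly chosen bird from the aviary can fly."))
```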

The research team collected human responses through a crowdsourced survey, analyzing how different people reasoned through NLI problems using both intuitive (System 1) and reflective (System 2) thinking. They then used genetic algorithms to refine the personality prompts, optimizing how closely the LLMs' response distributions matched those of the human participants.
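As a rough illustration of how such an optimization loop might work, the sketch below evolves vectors of trait weights to minimize the Jensen-Shannon divergence between a model's label distribution and the human one. The genome encoding, the genetic operators, and the model_distribution callable are stand-ins assumed for illustration, not the study's actual implementation.

```python
import random
from math import log2

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return (kl(p, m) + kl(q, m)) / 2

def evolve(human_dist, model_distribution,
           genome_len=5, pop_size=20, generations=50):
    """Evolve trait-weight vectors so the model's distribution nears the human one."""
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness: lower divergence from the human distribution is better.
        pop.sort(key=lambda g: jsd(model_distribution(g), human_dist))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            i = random.randrange(genome_len)
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))  # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda g: jsd(model_distribution(g), human_dist))
```

In a full pipeline, each genome would be rendered into a persona prompt, and model_distribution would aggregate the LLM's answers across the six response categories.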

Key Findings: AI Can Mimic Human Thought - But Open-Source Models Do It Better

One of the study’s most surprising findings was that open-source AI models like Llama and Mistral outperformed proprietary GPT models in predicting human response distributions. This challenges the assumption that larger, closed-source models necessarily produce more human-like reasoning.

Other key insights include:

  • Personality-based prompting significantly improved AI’s ability to match human reasoning. AI models prompted with personality traits were better at predicting the entire range of human responses rather than just selecting a single correct answer.
  • AI can replicate both System 1 and System 2 reasoning styles. Prior research assumed that AI models should be optimized only for slow, logical thinking. However, this study found that AI can effectively mimic intuitive reasoning as well, especially when using personality prompts.
  • Traditional machine learning models struggled to capture human cognitive diversity. While classical ML algorithms could predict the most common response (the “gold label”), they failed to model the full distribution of human reasoning patterns, as the toy example below illustrates.
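To see why gold-label accuracy can hide this failure, consider a small self-contained example with invented vote counts: a majority-vote predictor scores perfectly on the gold label while modeling none of the disagreement that the full distribution records.

```python
from collections import Counter

# Toy example: 100 invented annotator votes split across three labels.
human_votes = ["entailment"] * 60 + ["neutral"] * 25 + ["contradiction"] * 15
counts = Counter(human_votes)
human_dist = {label: n / len(human_votes) for label, n in counts.items()}

# A gold-label predictor collapses everything onto the majority label.
gold_label = counts.most_common(1)[0][0]
predicted_dist = {label: float(label == gold_label) for label in counts}

print(human_dist)      # {'entailment': 0.6, 'neutral': 0.25, 'contradiction': 0.15}
print(predicted_dist)  # {'entailment': 1.0, 'neutral': 0.0, 'contradiction': 0.0}
```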

The study suggests that future AI systems should incorporate personality-driven reasoning models to better align with human thought processes, making AI interactions feel more natural, relatable, and psychologically realistic.

Implications for AI Development and Human-AI Interaction

This research has far-reaching implications for AI design, particularly in fields that require nuanced decision-making, such as healthcare, law, and personalized education. If AI can better predict human reasoning patterns, it could:

  • Improve decision-making systems by aligning AI outputs with how humans naturally think.
  • Enhance AI ethics and fairness by accounting for cognitive biases rather than simply optimizing for accuracy.
  • Enable personalized AI assistants that adapt to individual users’ cognitive styles and personalities.

Rather than forcing AI to think like an idealized, hyper-rational human, this study suggests that making AI more imperfect - by embracing the diversity of human reasoning - could actually make it more human-like.

Conclusion: A New Path for AI Humanization

By incorporating personality-based reasoning and modeling the full spectrum of human thought, this study represents a paradigm shift in AI development. Instead of designing AI that merely provides correct answers, the future of AI may lie in creating systems that think more like us—intuitively, imperfectly, and uniquely.

With the right prompting techniques and training methodologies, AI can go beyond cold logic and develop a form of reasoning that feels more natural, engaging, and ultimately, human.

First published in: Devdiscourse