How generative AI shapes human analytical, creative, and systems thinking

CO-EDP, VisionRI | Updated: 14-10-2025 21:41 IST | Created: 14-10-2025 21:41 IST

A new peer-reviewed study examines how generative AI tools such as GPT-4 influence core human cognitive skills during problem solving. The paper, titled “Exploring the Impact of AI Tools on Cognitive Skills: A Comparative Analysis” and published in Algorithms, is among the first controlled academic studies to test AI’s effect on analytical, creative, and systems thinking in real-time problem-solving tasks.

The researchers address a pressing question: as generative AI becomes embedded in professional and academic settings, does it enhance or erode the higher-order cognitive skills that humans traditionally bring to complex decision-making?

Testing AI’s role in real-world problem solving

The study recruited 16 participants at the master's, PhD, and early-career professional levels and randomly assigned them to two groups. One group had access to GPT-4 during a simulated management-consulting case task; the other worked without AI.

A detailed scoring rubric covering 29 sub-skills across analytical, creative, and systems thinking was used to evaluate their outputs. The tasks required participants to define problems, generate solutions, compare alternatives, and explain recommendations, mimicking real decision-making scenarios.
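The article does not list the rubric's individual items, so the sketch below is purely illustrative: it shows how a rubric of this shape could be represented and aggregated in code, with hypothetical sub-skill names and an assumed 1–5 scale that do not come from the published study.

```python
# Minimal sketch of a rubric-style scoring structure (illustrative only).
# Sub-skill names, the 1-5 scale, and the averaging rule are assumptions,
# not the instrument used in the published study.
from statistics import mean

rubric = {
    "analytical": ["problem framing", "hypothesis formation", "comparing alternatives"],
    "creative": ["novelty of ideas", "appropriateness of ideas", "goal articulation"],
    "systems": ["recognizing relationships", "setting boundaries", "mental models"],
}

def score_participant(scores: dict[str, dict[str, int]]) -> dict[str, float]:
    """Average 1-5 sub-skill scores into one score per thinking category."""
    return {
        category: mean(scores[category][skill] for skill in skills)
        for category, skills in rubric.items()
    }

# One participant's (hypothetical) scores.
example = {
    "analytical": {"problem framing": 4, "hypothesis formation": 2, "comparing alternatives": 4},
    "creative": {"novelty of ideas": 2, "appropriateness of ideas": 3, "goal articulation": 4},
    "systems": {"recognizing relationships": 4, "setting boundaries": 2, "mental models": 3},
}
print(score_participant(example))  # prints the per-category averages for this participant
```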

The researchers found that AI-assisted participants generally completed tasks 15–20 minutes faster than those without AI. However, speed did not automatically translate into stronger performance across all skill areas.

Mixed results for cognitive gains

Analytical thinking showed the most consistent improvement from AI support. Participants using GPT-4 often delivered more coherent reasoning and made stronger comparisons among alternatives. Still, they sometimes underperformed in framing hypotheses and questioning unsupported conclusions, areas where human judgment remains critical.

Creative thinking benefits were smaller and more uneven. While AI users sometimes excelled in defining problems or articulating overarching goals, their outputs in novelty and appropriateness of ideas were not consistently better than those of the non-AI group. On average, both groups scored slightly below mid-level in creativity.

Systems thinking, the ability to grasp interconnections and dynamics, improved in some sub-skills when participants used AI, particularly in recognizing relationships and taking holistic perspectives. But other sub-skills, such as setting system boundaries and developing mental models, remained challenging for both AI and non-AI users.

How people use AI matters

The study highlights that the style of AI use influenced outcomes as much as the technology itself. Participants who engaged in a collaborative approach, iteratively prompting, checking, and refining AI outputs, achieved higher scores across many sub-skills. Those who relied heavily on copying and pasting AI-generated text often lagged behind, especially in more complex reasoning and synthesis tasks.
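As a concrete illustration of the collaborative pattern the study rewards, the sketch below shows an iterative prompt, review, and refine loop. It assumes the OpenAI Python client and a human-written critique at each round; none of this code comes from the paper itself.

```python
# Sketch of an iterative prompt -> review -> refine loop.
# Assumed tooling: the OpenAI Python client; the study does not prescribe this code.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

messages = [
    {"role": "system", "content": "You are assisting with a management-consulting case."},
    {"role": "user", "content": "Draft three options for reducing customer churn, with trade-offs."},
]

for round_number in range(3):  # a few refinement rounds instead of one-shot copy-paste
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    draft = reply.choices[0].message.content
    print(f"--- draft {round_number + 1} ---\n{draft}\n")

    # The human stays in the loop: check assumptions, then push back.
    critique = input("Your critique (leave blank to accept): ").strip()
    if not critique:
        break
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": critique})
```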

Another notable finding was that while spending more time on the task correlated with better scores for participants without AI, this link was much weaker for AI users. This suggests that AI alters the relationship between effort and output quality, highlighting the need for thoughtful integration rather than overreliance.
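The weakened effort-quality link can be checked with a simple per-group correlation. The sketch below uses invented (time, score) pairs purely to show the calculation; they are not the study's data.

```python
# Per-group Pearson correlation between time on task and rubric score.
# The numbers below are invented for illustration; they are not the study's data.
from scipy.stats import pearsonr

no_ai = [(45, 2.0), (60, 2.6), (75, 3.1), (90, 3.4)]   # (minutes, score)
with_ai = [(30, 2.9), (40, 3.0), (55, 2.8), (70, 3.1)]

for label, pairs in [("no AI", no_ai), ("with AI", with_ai)]:
    times, scores = zip(*pairs)
    r, p = pearsonr(times, scores)
    print(f"{label}: r = {r:.2f} (p = {p:.2f})")
```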

Implications for education and work

The authors stress that AI can enhance higher-order cognitive work but is not a substitute for it. Its value lies in accelerating certain steps, such as generating options or outlining reasoning, but humans remain central in questioning assumptions, assessing evidence, and integrating diverse perspectives.

For educators and managers, the study offers two key takeaways:

  • Teaching and encouraging effective human-AI collaboration skills, such as iterative prompting and critical evaluation, will be essential to get the best results.

  • Safeguarding and cultivating core human skills in analytical reasoning, creativity, and systems thinking remains vital to avoid over-dependence on automated outputs.

The authors note that their study is limited by its small sample size and controlled experimental setting. They call for further research with larger and more diverse participant pools, more varied task types, and long-term studies to understand how continued AI use shapes cognitive development and decision-making over time.

FIRST PUBLISHED IN: Devdiscourse