Generative AI can boost critical thinking, but only with structured guidance

CO-EDP, VisionRI | Updated: 11-12-2025 09:43 IST | Created: 11-12-2025 09:43 IST
Representative Image. Credit: ChatGPT

Business schools are increasingly adopting AI tools to enhance data literacy and analytical skills, but new research shows that student engagement with GenAI is far from uniform. A detailed investigation published in Education Sciences finds that the technology can deepen or weaken critical thinking depending on how students interact with it and how educators design learning experiences.

The findings come from the study “From Collaboration to Critique: Engaging With GenAI to Foster Critical Thinking in Business Analytics,” which examines how undergraduate business students engage with generative AI during a structured analytics assessment and identifies clear differences in how weak, middle and high performers learn, adapt and critique AI-generated insights.

The results highlight a growing need for thoughtfully designed AI-integrated coursework to ensure that future managers and analysts do not become passive consumers of algorithmic outputs but instead learn to question, evaluate and contextualize the information generated by large language models.

GenAI accelerates analysis but risks replacing thought unless guided by purposeful pedagogy

While GenAI tools can streamline complex business analytics tasks, they may encourage shallow engagement if students rely on automated reasoning instead of understanding underlying logic. Business analytics education traditionally builds competencies in data cleaning, model construction, statistical interpretation, forecasting, and decision-making. GenAI tools can now complete many of these steps in seconds, prompting fears that students may bypass essential cognitive processes.

The research examines this tension by embedding GenAI use directly into a business analytics assessment modeled on a real-world decision-making scenario. Students were asked to collaborate with GenAI systems to perform analytic tasks, generate insights and evaluate model outcomes. To ensure the exercise demanded active engagement, they were required to replicate the same analysis manually using Excel and then compare discrepancies, assess reliability and reflect on limitations.

This dual-process design forced students to confront the differences between automated and human-generated analysis. While GenAI could quickly produce structured insights, it sometimes made errors, delivered vague justifications or failed to tailor outputs to the specific business scenario. These inconsistencies became focal points for students’ reflections on accuracy, robustness and ethical implications.

The assignment was built around Kolb’s experiential learning model, integrating concrete experience, reflective observation, abstract conceptualization and active experimentation. This structure ensured that students did not simply observe AI performance but engaged critically with its strengths and weaknesses, turning generative AI into a cognitive partner rather than a replacement for analytical thinking.

Data from 276 student reflections reveal a clear stratification in how different performance groups approached GenAI. These differences shed light on the evolving role of AI in business education and the emerging divide between students who use AI as a shortcut and those who use it as a strategic thinking tool.

Stronger students use GenAI for contextual reasoning while others focus only on basic tool output

The study analyzes how students at different performance levels interact with GenAI. The researchers observe that weak, middle and strong performers demonstrate distinct approaches to using generative AI, and that these approaches influence the depth of learning and critical thinking achieved.

Weak performers concentrated primarily on the operational aspects of GenAI. Their reflections focused on what the technology could or could not do at a functional level. They tended to highlight surface-level strengths, such as speed and convenience, or general limitations like occasional inaccuracies. Their analysis rarely extended into the strategic, ethical or contextual implications of AI-driven decision-making.

Middle performers demonstrated a more advanced level of critical engagement. They questioned the assumptions embedded in the GenAI output, identified mismatches between the assignment context and the AI’s generic interpretations, and recognized the importance of human oversight. These students also reflected on ethical risks such as bias, data privacy and the need for verification mechanisms. Their reflections showed developing confidence in evaluating AI through both technical and moral lenses.

Strong performers showed the deepest level of analytical and strategic reasoning. They did not simply complete the task but tested how GenAI behaved across different business scenarios. They adapted prompts, refined instructions, experimented with alternative models, and evaluated the broader implications of AI-powered decision-making for business contexts beyond the case at hand. These students demonstrated an ability to transfer insights across domains, a hallmark of higher-order analytical thinking.

The pattern reveals a widening pedagogical challenge: as GenAI capabilities grow, students who naturally question outputs and seek to understand underlying logic gain significant learning advantages, while those who rely on AI outputs without interrogation risk losing analytical depth.

The study highlights that this divergence is not caused by technology alone but by how students are guided to use it. Without structured reflection, even high-performing students might use GenAI superficially. With carefully designed learning experiences, however, weaker students can learn to critique automated insights and strengthen analytical skills that traditional assignments may not activate as effectively.

GenAI literacy must become a core component of business education

The researchers argue that GenAI literacy must become a foundational competency for the next generation of business professionals. As AI systems increasingly shape decisions in finance, marketing, strategy, operations, and human resources, the ability to critique algorithmic outputs is becoming essential.

The study identifies several priorities for educators seeking to integrate GenAI responsibly.

First, instructors should design assignments that require students to evaluate AI outputs rather than accept them at face value. This includes tasks where students identify discrepancies between AI-driven and manually calculated results, analyze potential reasons for error, and assess whether AI recommendations align with contextual business realities.

Second, educators should emphasize ethical reasoning. Students must understand where AI tools can generate bias, oversimplify complex scenarios or produce misleading conclusions. Ethical awareness becomes particularly important in business analytics, where decisions affect stakeholders, markets and organizational governance.

Third, the authors highlight the importance of selecting datasets that prompt critical thought. Data should be sufficiently complex or imperfect to reveal the constraints of GenAI reasoning, providing opportunities for students to scrutinize assumptions and refine interpretations.

Fourth, instructors should incorporate reflective writing into GenAI-supported coursework. Reflection deepens conceptual understanding by forcing students to articulate how they evaluate AI reasoning, what they learned and how their perspective evolved. This metacognitive element is crucial for translating AI outputs into meaningful human insight.

Finally, the study encourages extending GenAI-integrated teaching beyond analytics into areas such as strategy, supply chain management, human resource management and organizational behavior. Business students across disciplines now interact with AI systems, making domain-specific AI literacy a cross-curricular priority.

Ultimately, the findings suggest that the educational goal is not to teach students how to prompt GenAI more efficiently, but to help them become independent thinkers who can challenge, contextualize and improve AI-generated analysis.

The authors note that the future of business analytics education depends on cultivating students who are comfortable navigating a hybrid cognitive environment in which human insight and AI computation intertwine. Those who learn to evaluate AI outputs rigorously will be better prepared for data-driven roles in an evolving digital economy. Those who rely passively on automated systems may find themselves at a competitive disadvantage.

  • FIRST PUBLISHED IN:
  • Devdiscourse