AI in the classroom? New study warns it’s hollowing out education

CO-EDP, VisionRI | Updated: 03-04-2025 10:03 IST | Created: 03-04-2025 10:03 IST

A new study has issued a stark warning to higher education institutions embracing artificial intelligence as a teaching substitute. Titled "Bullshit Universities: The Future of Automated Education" and published in AI & Society, the study argues that the use of generative AI in tertiary education risks undermining the core values and purposes of learning itself. Authored by Robert Sparrow and Gene Flenady, the paper makes a comprehensive case against the automation of university education, highlighting fundamental epistemological, pedagogical, and societal issues.

The paper responds to growing enthusiasm for using AI tools like ChatGPT in curriculum design, student feedback, and even grading. While many institutions promote AI as a time-saving and scalable solution to administrative and instructional burdens, the authors contend that the supposed efficiencies come at the cost of educational substance. They reject the premise that AI-generated outputs are suitable educational material, asserting that such outputs are inherently devoid of meaning and truth, and cannot replace human testimony or intention. AI, they argue, is incapable of "caring" about truth or acting responsibly, making its outputs epistemically untrustworthy.

Why is AI content considered 'bullshit' in this context?

A central claim in the paper is that AI-generated content qualifies as "bullshit" in the philosophical sense articulated by Harry Frankfurt—utterances that are indifferent to truth. Because AI lacks consciousness, understanding, and a moral stake in its claims, it cannot be held accountable for its outputs. Therefore, students engaging primarily with AI-driven content would be relying on language that lacks epistemic grounding. The authors extend this critique by explaining that AI systems lack the inferential commitments and behavioral coherence that give human speech its meaning. An AI may produce statements syntactically indistinguishable from human speech, but these outputs do not reflect understanding or intentionality.

Beyond semantics, the study explores how educational automation affects the social and institutional roles of teachers and students. It argues that true education involves not only the transmission of facts ("learning that") but also the cultivation of skills and judgment ("learning how"). These forms of learning depend heavily on human modeling, feedback, and embodied interaction—elements that AI systems cannot replicate. Teachers serve as role models, disciplinarians, and moral exemplars, offering a lived example of intellectual commitment and disciplinary rigor. The authors claim that replacing these functions with AI strips education of its transformative and relational dimensions.

The economic and bureaucratic trends driving AI adoption are also scrutinized. The study links educational automation to broader labor-saving tendencies in capitalist systems, which have historically achieved efficiency by de-skilling human roles and replacing them with machines. Rather than merely supporting educators, AI tools may ultimately lead to their displacement. Once AI is introduced for content generation or feedback, economic logic may incentivize institutions to further reduce human involvement. Automation bias, the well-documented cognitive tendency to over-trust machines, would likely exacerbate this trend.

Another danger flagged is the risk of reducing education to content delivery. The authors argue that the recent trend of conceptualizing teaching as content transmission—aided by the rise of online learning platforms—has made universities particularly susceptible to automation. When education is seen as the passive reception of information, the nuanced, interactive, and socially embedded aspects of learning are disregarded. This conception lends itself well to AI integration but degrades the student experience and intellectual growth.

Can AI increase access without degrading quality?

The authors take issue with common arguments in favor of AI in education, including claims that AI can improve accessibility for underserved populations. While acknowledging the global inequities in access to higher education, the study cautions against offering students in marginalized contexts an inferior, automated education simply because it is cheap or scalable. It questions whether a second-rate education, mediated by machines without epistemic or moral accountability, should be acceptable under the guise of access.

In addressing the claim that students need AI skills for the future workforce, the study draws a clear distinction between teaching students about AI and allowing AI to teach students. The former can be achieved through specialized units or training modules; the latter, the authors argue, undermines the intellectual foundations of the university. They warn that using AI as a teaching substitute jeopardizes the acquisition of critical thinking, writing, and argumentation skills that define academic disciplines.

The study also responds to the objection that traditional universities already fall short of ideal teaching standards. It concedes that large class sizes, overworked staff, and standardized assessments have made education increasingly mechanical. However, the authors view this not as a justification for automation, but as a call to reinvest in human-centered learning. They argue that AI acceptance is, in part, a consequence of lowered expectations.

What should universities prioritize instead of AI?

Rather than turning to AI, the authors argue, universities should double down on what makes education meaningful: small class sizes, passionate human teachers, and intellectually demanding environments. With generative AI now capable of performing most tasks that students are asked to complete digitally, institutions must rethink assessment methods to preserve academic integrity and foster genuine learning. The authors caution that embracing AI could paradoxically erode the very human capacities that education is meant to cultivate: critical reasoning, empathy, and moral responsibility.

At stake, the authors argue, is not just the future of universities, but the cultivation of citizens capable of democratic deliberation in an AI-saturated world.

FIRST PUBLISHED IN: Devdiscourse