Faster code, shallower skills: What ChatGPT means for university programming courses
Artificial intelligence tools are rapidly reshaping university classrooms, but new evidence suggests their influence on learning is far more complex than early enthusiasm implied. As generative AI systems move from novelty to routine academic aids, educators face a critical question: do these tools strengthen student learning, or do they quietly erode the very skills higher education is meant to build?
A new empirical study delivers one of the clearest answers yet, showing that while ChatGPT can significantly speed up programming work and boost short-term confidence, its unchecked use may undermine creativity, collaboration, and deep understanding. The findings arrive at a moment when universities worldwide are struggling to set clear rules for AI use, amid growing concerns over academic integrity, skill dilution, and assessment reliability.
The study, titled "ChatGPT in Programming Education: An Empirical Study on Its Impact on Student Performance, Creativity, and Teamwork," was published in Education Sciences. Conducted by Diana Stoyanova, Silviya Stoyanova-Petrova, Snezha Shotarova, Slavi Lyubomirov, and Nevena Mileva, the research draws on controlled classroom experiments with engineering undergraduates to assess how generative AI is reshaping learning outcomes in programming education.
Faster code, flatter learning outcomes
At the center of the study is a two-part experimental design that examines how students use ChatGPT in both individual and team-based programming tasks. The first phase focuses on text-based programming, a domain where generative AI is particularly strong. Students were allowed to use ChatGPT freely while completing coursework and final projects, after which their usage patterns and academic results were analyzed.
The results challenge the assumption that more AI assistance automatically translates into better grades. The researchers found no statistically significant relationship between how frequently students used ChatGPT and their final course performance. Instead, a more nuanced pattern emerged. High-performing students tended to rely on ChatGPT less, using it mainly as a reference tool. Lower-performing students, by contrast, turned to ChatGPT more often for code generation, debugging, and optimization.
This pattern suggests that ChatGPT functions more as a compensatory aid than a learning accelerator. Students who already possess strong programming foundations appear to use AI selectively, while those struggling are more likely to depend on it for ready-made solutions. While this reliance may help them complete tasks, it does not necessarily translate into stronger conceptual understanding or long-term skill development.
The findings reinforce a growing concern among educators: when students lean too heavily on generative AI for solution generation, learning risks becoming superficial. Tasks are completed, but the cognitive effort required to understand, analyze, and design code is reduced. The study suggests that AI-assisted efficiency can mask gaps in knowledge rather than closing them.
Creativity and collaboration decline under AI dependence
The second phase of the research offers a deeper look into how ChatGPT affects teamwork, creativity, and project quality. In this experiment, student teams developed visual programming projects under two conditions: one using ChatGPT and one relying solely on traditional resources such as textbooks, lecture notes, and peer discussion.
The contrast was stark. Projects completed with ChatGPT were finished significantly faster and scored higher for code completeness and efficiency. The AI tool helped students generate functional solutions quickly, reducing the time spent searching for information or debugging errors. From a productivity standpoint, ChatGPT delivered clear gains.
However, these gains came at a cost. Projects developed without ChatGPT consistently showed higher levels of originality, better interface design, and stronger attention to the end user. Creativity scores dropped when ChatGPT was introduced, reflecting a tendency to accept the first workable solution generated by the AI rather than experimenting with alternative designs or ideas.
Team dynamics also shifted. When ChatGPT was available, students reported less discussion and fewer collaborative problem-solving moments. Team members often worked in parallel, consulting the chatbot individually rather than engaging with each other. In contrast, teams working without ChatGPT spent more time debating decisions, explaining ideas, and jointly developing solutions.
This reduction in collaboration raises broader questions about how AI tools reshape social learning processes. Programming education is not only about producing correct code but also about developing communication skills, shared reasoning, and collective problem-solving. The study suggests that generative AI, when used without structure, may weaken these essential competencies.
Confidence rises, understanding does not always follow
One of the most striking findings of the study is the complex relationship between ChatGPT and student confidence. Many participants reported feeling more confident when using the AI tool, particularly when facing unfamiliar tasks or tight deadlines. ChatGPT provided quick guidance, reduced frustration, and offered reassurance during moments of uncertainty.
Yet this confidence was often fragile. Students acknowledged that they did not always fully understand the code generated by the chatbot. In some cases, the gap between perceived competence and actual understanding led to discomfort and insecurity, especially when students were asked to explain or modify AI-generated solutions.
This mismatch highlights a central risk of generative AI in education: the illusion of mastery. When students can produce working outputs without fully grasping underlying concepts, assessment results may no longer reflect true learning. Over time, this gap could weaken students’ ability to tackle complex problems independently or adapt their skills to new contexts.
The study also underscores growing ethical and pedagogical challenges. The ease with which ChatGPT generates complete solutions raises concerns about academic honesty and fair evaluation. Traditional assessment methods struggle to distinguish between student-generated and AI-generated work, prompting some educators to reconsider oral exams, live coding tasks, and process-based assessment models.
The authors stop short of recommending outright bans, arguing that prohibition would be counterproductive. Generative AI tools are already embedded in professional software development, and students will encounter them in the workplace. Denying exposure risks leaving graduates unprepared for real-world environments.
Rethinking how AI fits into higher education
The study’s conclusions point toward a more deliberate and reflective approach to AI integration. The authors recommend structured pedagogical strategies that combine AI-assisted and AI-free learning activities. By requiring students to alternate between using ChatGPT and working independently, educators can help learners compare outcomes, reflect on trade-offs, and develop critical judgment about when AI support is beneficial and when it is not.
One proposed approach involves designing assignments that emphasize higher-order thinking, such as modifying, evaluating, and extending existing code rather than generating solutions from scratch. These tasks force students to engage with logic, structure, and design decisions, limiting passive reliance on AI-generated outputs.
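To make the idea concrete, here is a hypothetical sketch of what such an "evaluate and extend" assignment might look like. The example is not drawn from the study itself: Python, the function name average_grade, the planted defect, and the task wording are all illustrative assumptions.

    # Hypothetical "evaluate and extend" exercise: students receive
    # working-but-flawed starter code and must diagnose it before
    # extending it, rather than generating a solution from scratch.

    def average_grade(grades):
        """Return the mean of a list of numeric grades."""
        total = 0
        for g in grades:
            total += g
        # Task 1: explain why the line below fails on an empty list,
        # then fix it.
        return total / len(grades)

    # Task 2 (extension): make the function skip non-numeric entries
    # such as None, and justify the design choice in a short note.

    if __name__ == "__main__":
        print(average_grade([4, 5, 6]))  # expected output: 5.0

Assignments of this shape reward reading and reasoning about code: a chatbot can still be consulted, but the deliverable is an explanation and a design judgment, not merely a working artifact.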
The research also highlights the importance of AI literacy. Students need explicit instruction not only on how to use tools like ChatGPT, but also on their limitations, biases, and ethical implications. Without this foundation, generative AI risks becoming a shortcut rather than a learning aid.
For universities, the findings arrive at a pivotal moment. As generative AI becomes ubiquitous, institutions face pressure to move beyond reactive policies and develop coherent strategies that balance innovation with educational integrity. The study suggests that the future of AI in higher education will depend less on the technology itself and more on how thoughtfully it is embedded into teaching and assessment.
First published in: Devdiscourse

