Dialogue-based AI coaching increases ethical awareness in universities
The expansion of generative AI tools across higher education has sparked a sharp debate over plagiarism, the erosion of critical thinking, and the future of scientific standards. As students increasingly turn to AI-powered writing tools like ChatGPT, universities face mounting pressure to balance innovation with accountability.
In response to these challenges, researchers investigated whether guided support could shape responsible AI behavior. Their study, “Promoting Academic Integrity in AI-Practice—The Effect of Live Coaching in Higher Education,” published in Applied Sciences, explores how synchronous live coaching sessions influence students’ ethical awareness and confidence in AI-supported academic writing.
A structured response to AI disruption in higher education
The rapid adoption of large language models (LLMs) has triggered widespread debate within universities about exam integrity, assessment design, and the long-term cognitive effects of AI-assisted learning. While AI promises personalized learning and efficiency gains, it also introduces new risks. Research cited within the study points to evidence that heavy AI reliance may correlate with reduced critical thinking, increased procrastination, and diminished memory retention.
Against this backdrop, APOLLON University launched a voluntary monthly live coaching session titled “AI in Scientific Writing.” The format was deliberately designed as a low-barrier intervention. Sessions lasted 90 minutes, were offered in the evening to accommodate working students, required no registration, and were accessible across all academic programs. Two instructors led each session, with one focused on managing live discussion and chat-based questions.
The structure followed a two-phase model. The first segment delivered targeted input on scientific integrity, responsible AI use, and critical source evaluation. The second phase opened into moderated dialogue, allowing students to raise uncertainties, practical concerns, and ethical dilemmas related to AI-supported writing.
The pedagogical framework drew on the revised Bloom’s taxonomy, guiding students from a foundational understanding of academic standards toward higher-order skills such as evaluating AI-generated text, assessing ethical implications, and developing personal guidelines for responsible AI engagement. The emphasis was not on technical AI mastery, but on reflective judgment and academic accountability.
The study’s central research question asked whether participation in this live coaching format enhanced both ethical awareness and practical competence in working with generative AI tools in academic contexts.
Gains in ethical awareness and critical practice
The authors conducted a cross-sectional online survey of 168 participating students. The university itself enrolls approximately 6,500 students, many of whom are adult learners working in health and social science professions. The average student age is 37.5 years, with a large majority enrolled in bachelor’s programs.
The sample reflected a diverse range of AI familiarity. Around half reported occasional AI use, a smaller share indicated regular use, and more than one-third said they were not using AI tools at all. This relatively cautious adoption rate distinguishes the population from institutions where AI usage is already widespread.
Quantitative findings showed strong positive perceptions of the live coaching format. Students reported improved understanding of responsible AI use and heightened awareness of scientific integrity. They also indicated greater confidence in evaluating the ethical appropriateness of AI applications within academic work.
Importantly, participants expressed a high likelihood of critically reviewing AI-generated outputs before incorporating them into assignments. Many also indicated that they would document AI use transparently and adhere to institutional disclosure requirements.
The strongest areas of agreement centered on ethical evaluation and academic integrity. Students widely felt that the coaching clarified how to apply citation rules when AI is involved and how to distinguish between acceptable support and academic misconduct.
However, responses also revealed areas of uncertainty. Confidence levels were lower regarding the safe and technically secure use of AI tools in scientific writing. Students expressed mixed views on how to manage risks such as hallucinated content, data protection concerns, and potential bias embedded within AI outputs.
These findings suggest that while ethical awareness can be strengthened through dialogue-based formats, technical literacy and security practices may require additional structured instruction.
The human factor: Dialogue, peer learning, and institutional trust
In addition to statistical measures, qualitative feedback from participants underscored the value of interpersonal interaction. Peer exchange emerged as one of the most frequently cited strengths of the live coaching format. Students reported that hearing others’ questions reduced anxiety and reinforced the sense that AI-related uncertainty is shared rather than individual.
The supportive tone of instructors was also central to the format’s success. Students emphasized the importance of an open, respectful atmosphere where AI use was neither stigmatized nor blindly endorsed. The coaching sessions positioned AI not as a shortcut to bypass academic effort, nor as a threat to be avoided entirely, but as a tool that demands critical scrutiny and ethical awareness.
This balanced framing appears to have resonated strongly with participants. Rather than fostering passive reliance on AI systems, the coaching encouraged students to treat AI as a sparring partner: a provisional drafting assistant whose outputs must be verified, contextualized, and ethically integrated into scientific work.
The distance-learning context adds another dimension to the findings. In online education environments, students may experience isolation or hesitation when raising questions about emerging technologies. The live coaching format created a structured space for collective reflection, reducing barriers to participation.
The researchers also note that many participants may be first-generation university students or professionals returning to academic study later in life. In such contexts, peer learning and visible question-asking can be especially empowering, helping students navigate new technological expectations without fear of embarrassment.
Implications for AI literacy and academic policy
For universities seeking to embed AI literacy across existing programs, live coaching offers one scalable pathway. By combining structured instruction with dialogue, institutions can foster higher-order thinking skills at a time when AI tools increasingly automate lower-level cognitive tasks.
The study also highlights generational and disciplinary differences that may influence AI integration. Students in earlier study phases often bring general questions about academic standards, while advanced students may confront more specialized concerns related to thesis writing or professional ethics. Tailoring AI guidance to study stage and discipline could enhance effectiveness.
At the same time, mixed-level sessions may provide benefits by allowing less experienced students to learn from more advanced peers. The balance between specialization and inclusivity remains an open question for future program design.
Limitations and the road ahead
Despite promising findings, the authors acknowledge several limitations. The evaluation relied on self-reported perceptions rather than objective performance measures. Without a control group, it is not possible to attribute improvements exclusively to the live coaching intervention.
The study was conducted at a single distance-learning university with a strong focus on health and social sciences, limiting generalizability. Students in traditional campus-based institutions or in highly technical disciplines may respond differently to AI guidance programs.
Moreover, the research does not yet establish whether perceived gains translate into long-term behavioral change. It remains unclear whether students consistently apply transparent citation practices, critically evaluate AI outputs in future assignments, or resist over-reliance on generative tools.
Future research could address these gaps through longitudinal designs, validated measurement instruments, and multi-institutional comparisons. Investigating how prior AI literacy influences responsiveness to coaching interventions would also provide valuable insights.
First published in: Devdiscourse

