AI in the Classroom: How Chatbots Are Transforming Geomatics Education and Learning
A University of Florida study found that while AI chatbots enhance creativity and engagement in Geomatics education, they struggle with spatial data analysis and complex computations. Teaching advanced prompting and AI literacy can turn these tools into effective learning partners rather than mere answer generators.
A groundbreaking study from the University of Florida’s Fort Lauderdale Research and Education Center has explored how artificial intelligence chatbots are transforming higher education, especially in the data-driven world of Geomatics. Conducted by Dr. Hartwig H. Hochmair of the Geomatics Sciences program, the research, titled “Student Chatbot Use and Perceptions in a Course Assignment: Comparing Two Geomatics Courses”, is among the first to systematically examine how students interact with generative AI tools such as GPT-4o, Claude 3.5, Gemini 2.0, and Microsoft Copilot in academic coursework.
Testing Chatbots in the Classroom
The study took place during the Spring 2025 semester across two courses, Measurement Science and Geospatial Analysis, with 45 participating students at the undergraduate and graduate levels. Students were tasked with creating five AI prompts based on lecture content and submitting them to one of seven chatbot platforms. They rated each chatbot’s response on a scale of zero to ten and were encouraged to refine unsatisfactory answers with follow-up prompts. The assignment was conducted through the University of Florida’s Canvas platform and approved by the Institutional Review Board.
Results revealed intriguing contrasts between the two courses. Students in Measurement Science awarded higher average chatbot ratings (8.6/10), while those in Geospatial Analysis gave lower scores (7.1/10). According to Hochmair, the difference reflected varied expectations: GIS students, dealing with spatial data and complex analytical questions, expected far more precision from AI. OpenAI’s ChatGPT-3.5/Turbo and GPT-4o emerged as the most popular platforms, while Mistral Large 2, Llama, and Microsoft Copilot saw limited use, partly due to access restrictions through the university’s NaviGator interface.
From Text to Multimodal Learning
A major focus of the research was how students used multimodal prompts, those combining text with images or datasets. Over half of the GIS students’ prompts included images, far exceeding the assignment’s 40 percent requirement, while the Measurement Science group produced only two percent. This showed that students in the spatially oriented GIS course were more willing to experiment with visual and data-rich prompts. However, chatbot performance dropped sharply when datasets were attached. Tasks that involved numerical calculations or GIS analyses, such as clustering or nearest-neighbor distance, frequently confused the models. Statistical analysis confirmed that data-based prompts received significantly lower ratings, with the odds of higher satisfaction falling by nearly 70 percent compared to text-only prompts.
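To make concrete the kind of spatial computation that tripped up the chatbots, the sketch below shows a mean nearest-neighbor distance, a standard point-pattern statistic in GIS. This is an illustrative example written for this article, not code or data from the study, and it assumes points in a projected (planar) coordinate system:

```python
import math

def mean_nearest_neighbor_distance(points):
    """Average distance from each point to its closest neighbor.

    points: list of (x, y) tuples in planar (projected) coordinates,
    e.g. UTM meters. A hypothetical helper for illustration only.
    """
    if len(points) < 2:
        raise ValueError("need at least two points")
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        # Distance to the closest other point.
        nearest = min(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(points)
            if j != i
        )
        total += nearest
    return total / len(points)

# Four corners of a unit square: each corner's nearest neighbor
# is an adjacent corner, 1 unit away.
print(mean_nearest_neighbor_distance([(0, 0), (0, 1), (1, 0), (1, 1)]))  # 1.0
```

A task like this is trivial to state in a prompt but requires the model to parse an attached coordinate table correctly and carry out exact arithmetic, which is precisely where the study found performance dropped.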
The study concluded that while chatbots can handle conceptual reasoning, their ability to interpret structured spatial data remains limited. This finding points to a crucial technological gap between the text-based training of large language models and the specialized data formats used in Geomatics.
Creativity Versus Calculation
Students’ creativity also varied by course type. In Measurement Science, nearly 40 percent of prompts replicated classroom examples, while 88 percent of GIS students designed their own. Hochmair attributed this to the different cognitive demands of the subjects: mathematical surveying tasks encouraged reliance on established examples, whereas GIS’s broader conceptual framework invited independent thinking. Encouraging students to create original prompts, he noted, fosters deeper engagement, strengthens critical thinking, and helps them see chatbots as analytical collaborators rather than answer generators.
The study also compared question types. Students in Measurement Science leaned heavily toward computational questions (75 percent), while GIS students were more balanced, with half their prompts being conceptual. This difference not only shaped the quality of chatbot interactions but also revealed the potential of AI to spark higher-order reasoning in disciplines that emphasize open-ended problem solving.
When Chatbots Fail and Students Step In
One of the study’s most insightful findings came from examining follow-up behavior. GIS students submitted about 7.5 times more follow-up questions than those in Measurement Science, primarily to correct errors in calculations or data interpretation. Hochmair observed that targeted, precise follow-ups, such as identifying specific miscalculations or clarifying misread images, were far more effective than vague prompts like “redo the calculation.” Students learned that chatbots often needed context-rich guidance to improve accuracy.
When student and expert evaluations were compared, Measurement Science students showed weak agreement with expert ratings, suggesting lenient scoring, while GIS students’ assessments closely matched those of professional evaluators. This stronger alignment indicates that dealing with analytical and interpretive tasks helps students become more discerning users of AI tools.
Teaching the Art of Prompting
Despite the growing familiarity with AI, most students relied on “zero-shot” prompting, asking one question without refinement, rather than iterative approaches. Hochmair suggests that educators should introduce more advanced prompting strategies, such as Chain-of-Thought reasoning or few-shot prompting, to teach students how to extract higher-quality responses. This would not only improve chatbot interactions but also develop analytical and reflective thinking skills essential for scientific problem solving.
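The three strategies named above can be contrasted with minimal prompt templates. The surveying task (azimuth-to-bearing conversion) is a hypothetical example chosen for this article, not an assignment item from the study:

```python
# Illustrative prompt templates for the three strategies discussed above.
# The azimuth example is hypothetical, not taken from the study.

# Zero-shot: a single question with no examples or guidance.
zero_shot = "Convert the azimuth 215°30' to a bearing."

# Few-shot: worked examples precede the new case, showing the format.
few_shot = """\
Convert each azimuth to a bearing.
Azimuth 45°00'  -> Bearing N 45°00' E
Azimuth 135°00' -> Bearing S 45°00' E
Azimuth 215°30' -> Bearing"""

# Chain-of-Thought: the prompt asks for explicit intermediate reasoning.
chain_of_thought = (
    "Convert the azimuth 215°30' to a bearing. "
    "Think step by step: first identify the quadrant, "
    "then compute the angle from the nearest north or south meridian."
)

for name, prompt in [("zero-shot", zero_shot),
                     ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```

The point of the richer templates is not extra verbiage but structure: worked examples pin down the expected output format, and step-by-step instructions push the model to expose intermediate reasoning that students can then check.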
A Future of Smarter Classrooms
The University of Florida study ultimately concludes that chatbots can enrich learning in Geomatics education when used critically and creatively. Encouraging multimodal and original prompts enhances engagement, while confronting chatbot errors builds problem-solving resilience. However, educators must make students aware of AI’s technical limits, especially its struggles with spatial reasoning and computation. Hochmair argues that teaching AI literacy is as important as teaching Geomatics itself, as the next generation of geospatial professionals will need to question, verify, and collaborate intelligently with these digital partners.
As AI tools become increasingly embedded in academic life, the Fort Lauderdale Research and Education Center’s findings provide a roadmap for responsible integration, where innovation meets critical thinking, and technology serves as a catalyst for deeper learning rather than a shortcut to easy answers.
- FIRST PUBLISHED IN:
- Devdiscourse

