Public AI literacy gap widens as courses overlook older adults


AI literacy efforts remain concentrated in schools, universities and workplaces while older adults and other public audiences are left behind, a new study warns. The authors argue that AI education for the general public must move beyond technical instruction and meet learners through their own cultural background, prior knowledge and lived experience.

The study, titled "Towards a public understanding of AI: on the design and delivery of an introductory course for a general audience," was published in AI & Society. It examines the design and delivery of an introductory AI literacy course for older adults, mostly aged between 60 and 75, using an approach known as Emergent Design to make complex AI concepts accessible to a non-technical audience.

Public AI literacy gap grows as tools enter daily life

The rise of large language models and AI-powered chatbots has drawn widespread attention, but much of the information available to the public comes through news coverage, corporate announcements, consultancy reports or vendor communications. The authors argue that these channels often focus on AI's promise, risks, market potential or controversy rather than giving people a practical understanding of what the technology is, how it works, what it can do and where it can fail.

This matters because public debate over AI adoption, regulation, education and social impact depends on an informed citizenry. If people understand AI only through hype, fear or vendor messaging, they may struggle to evaluate claims about the technology or participate meaningfully in decisions that affect work, education, health, privacy and public services.

The authors note that existing AI literacy courses are often technocentric and homogeneous. Many are built for formal educational settings, professional training or specific applications. Schools, universities and workplaces have seen growing efforts to incorporate AI into curricula, assessment, training and professional development. But courses designed for the general public remain limited, especially courses that explain how AI works rather than simply teaching how to use a tool.

AI literacy remains a contested concept. There is no single agreed definition, but the field generally covers the ability to understand AI, evaluate AI technologies, use AI effectively, communicate with AI systems and navigate ethical questions. Existing frameworks commonly ask what AI is, what it can do, how it works, how it should be used and how people perceive it. Recent scholarship has also placed more weight on ethical and human-centred AI literacy, reflecting concerns about responsible use, bias, accountability and social consequences.

The review finds a significant gap in who receives AI literacy education. The authors examined representative studies on AI literacy curricula and found offerings for K-12 students, university learners, adult professionals and some general public audiences. However, older adults were largely absent. This is a striking omission because older adults may face barriers to technology adoption, including lower confidence, less exposure to emerging tools and limited access to technology-focused education. They are also part of the public conversation on AI and are affected by its use in services, communications, finance, health information and public policy.

The authors argue that the challenge is not simply to simplify AI for older adults. The deeper problem is contextualisation. AI literacy courses must connect new ideas to what learners already know. Generic explanations, technical jargon and tool-focused demonstrations may fail when learners come from different backgrounds or have limited familiarity with computing. For older adults, especially those more comfortable with literature, history and philosophy than with science and technology, an effective course may need a different starting point.

That is where Emergent Design enters the study. The approach begins from the knowledge, culture and interests of the learner community. Rather than assuming a fixed curriculum will work for all audiences, it uses feedback, iteration and contextual examples to build bridges from familiar concepts to unfamiliar ones. The authors present this as a practical route for reaching public audiences that are usually underserved in AI education.

Literature, history and philosophy used to explain AI

The course examined in the study was developed through the Workers' Educational Association in Australia, an educational organisation serving primarily older adults. The first author proposed an introductory AI course after a discussion with the organisation's education manager, who indicated that the likely audience would be familiar with Western literature, history and some philosophy, but less familiar with science and technology.

That information shaped the course design. Instead of starting with mathematical models, programming or product demonstrations, the pilot course introduced AI through literary examples. The tutor used Jorge Luis Borges' short story The Library of Babel to illustrate the challenge of generating meaningful language from vast combinations of text. He then used Shakespeare's works to explain next-token completion, the idea behind large language models predicting what comes next in a sequence of words.
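The idea of next-token completion can be illustrated outside the classroom as well. The sketch below, which is not taken from the course, builds a toy bigram model: it counts which word follows each word in a small text and predicts the most frequent continuation. The corpus here is invented for illustration; real language models learn far richer statistics, but the underlying "predict what comes next" idea is the same.

```python
from collections import defaultdict, Counter

# A toy corpus standing in for the Shakespeare example; any text would do.
corpus = (
    "to be or not to be that is the question "
    "to sleep perchance to dream"
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("to"))   # "be": "to be" occurs twice, "to sleep" and "to dream" once each
print(predict_next("the"))  # "question"
```

A model this small simply memorises its corpus; the leap in large language models is that the same prediction task, trained over vast text, yields completions that generalise to sequences never seen before.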

This framing allowed participants to approach a technical topic through cultural material they could recognize. The course also used questions and examples to explain why simple word prediction is not enough to capture meaning. From there, the tutor introduced word embeddings, the idea that words can be represented by their relationships to other words across many contexts. This led into a discussion of large language models, transformers, post-training and reinforcement learning with human feedback.
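The word-embedding idea, that a word's meaning can be captured by the company it keeps, can be sketched with simple co-occurrence counts. The example below is an illustrative assumption, not material from the course: each word is represented by counts of the words appearing alongside it, and cosine similarity compares those vectors. Words used in similar contexts end up with similar vectors.

```python
import math
from collections import Counter

# Tiny invented corpus; words sharing contexts get similar vectors.
sentences = [
    "the king rules the realm",
    "the queen rules the realm",
    "the cat sleeps on the mat",
    "the dog sleeps on the mat",
]
vocab = sorted({w for s in sentences for w in s.split()})

def vector(word):
    """Represent `word` by counts of words co-occurring in the same sentence."""
    counts = Counter()
    for s in sentences:
        words = s.split()
        if word in words:
            counts.update(w for w in words if w != word)
    return [counts[v] for v in vocab]

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "king" and "queen" share contexts, so they score higher than "king" and "cat".
print(cosine(vector("king"), vector("queen")))
print(cosine(vector("king"), vector("cat")))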

The pilot course also raised difficult questions. Participants wanted more practical examples and more time for discussion. Some found the material stimulating but wanted clearer structure and more answers to questions generated by the session. These comments became part of the Emergent Design process. The tutor used them to reshape the final course, showing how feedback from even one small pilot can improve the design of public AI education.

The final course expanded to three hours and was framed as a historical and cultural introduction to AI. It began with machine learning as learning from data, using an analogy with how children learn from examples rather than formal definitions. The course then moved through historical and cultural material, including early ideas about language, probability and text generation, before returning to large language models and their capabilities.
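The "learning from data" framing can be made concrete with a minimal sketch, again an illustration rather than course material: the program below is never given a rule for what counts as "small" or "large"; it generalises from labelled examples, much as the course's analogy of children learning from examples rather than formal definitions. All data is invented.

```python
# Labelled examples: (value, label). No explicit rule is ever written down.
examples = [
    (1.0, "small"), (2.0, "small"), (3.0, "small"),
    (8.0, "large"), (9.0, "large"), (10.0, "large"),
]

def classify(x):
    """Nearest-neighbour: label a new value by its closest labelled example."""
    nearest_value, nearest_label = min(examples, key=lambda pair: abs(pair[0] - x))
    return nearest_label

print(classify(2.5))  # "small"
print(classify(7.5))  # "large"
```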

The tutor added more discussion of intelligence and reasoning after the pilot showed strong interest in those themes. The course introduced the Turing Test, the question of whether large language models are intelligent, and the difference between deductive, inductive and abductive reasoning. Rather than giving a simplistic answer to whether AI can reason, the tutor treated it as an open question and warned participants to be skeptical of vendor claims.

The final course discussed how generative AI can support ideation, explanation and simulation, while warning that AI systems should be treated as unreliable assistants whose outputs must be verified. The tutor addressed hallucinations, incorrect outputs, bias in training data and the need for human oversight in high-stakes uses.

The course also covered non-language AI systems, including AlphaGo and AlphaFold, to show that AI is broader than chatbots. This helped participants understand that AI includes systems designed for specific tasks, not just conversational tools.

Ethics emerged through participant questions rather than as a separate lecture. Questions about jobs, regulation, AI dangers, deepfakes, scams, deskilling and the possibility of AI solving complex problems led to discussions about social impact. The authors argue that this dialogic format may be especially useful for public AI education because ethical concerns often arise naturally from people's own experiences. One participant's concern about an AI-related job loss in their family, for example, opened a wider conversation about the human consequences of automation.

The course's feedback was strongly positive. The final session drew 35 participants, with survey respondents giving the course top scores overall and high ratings for content, outcomes and teaching. Participants praised the breadth of material, the historical and philosophical framing, the use of Borges, Shakespeare and Cormac McCarthy, and the tutor's willingness to engage with questions. The authors acknowledge that the feedback data were limited and not collected as part of a formal controlled research study, but they say the comments indicate strong demand for AI literacy among older adults.

Inclusive AI education for underserved audiences

AI literacy should not remain confined to students, professionals or technically confident users. As AI becomes more pervasive, public education must reach groups that are often outside formal technology training systems. Older adults are one such group, but the same challenge may apply to many other communities with different cultural, educational or technological backgrounds.

Emergent Design offers a useful way to build AI literacy for such audiences. The approach does not begin by assuming what learners need or what they can understand. Instead, it starts from what they already know and values the feedback they provide. In this case, literature, history and philosophy became bridges into concepts such as next-token prediction, word embeddings, reasoning, hallucination, bias and AI alignment.

The authors make clear that the course is not presented as a universal curriculum. It was designed for a specific demographic in a specific setting. A younger audience, a culturally different audience or a more technically experienced audience would likely require different examples, structure and pacing. That is precisely the point of the design method. Public AI education should be adapted to learners, not delivered as one-size-fits-all technical content.

The study also highlights the importance of instructor skill. Designing a course through Emergent Design requires the educator to understand the audience, listen to feedback and make connections across cultural, historical and technical domains. This places significant demands on tutors, but it also creates a richer form of learning. For the authors, the success of the course lay not only in explaining AI but in sparking curiosity and encouraging participants to think more deeply about the technology.

The paper also acknowledges some limitations, including that it was based on the practical design and delivery of a course, not a planned experimental research project. There was no baseline test of participants' AI knowledge, no control group and no formal measurement of learning gains. The feedback came from optional surveys and informal comments, which could reflect self-selection bias. The authors also acknowledge that the course's success in one setting does not automatically produce general design rules.

The authors suggest that future versions could include pre-course surveys to assess participants' starting knowledge and post-course assessments to measure gains. Longer courses could include demonstrations, more hands-on activities, prompting guidance and follow-up question-and-answer sessions. Comparative work across cultural and regional contexts could also help identify how AI literacy programs should be adapted for different public audiences.

Governments, universities, community organisations and civil society groups are increasingly concerned about AI literacy as a foundation for democratic participation. If the public cannot understand basic AI concepts, it becomes harder to debate regulation, workplace impacts, privacy risks, misinformation, bias or the appropriate role of AI in public services. AI literacy is not only a technical skill. It is becoming a civic competency.

The study also challenges the assumption that older adults are simply resistant to new technology. The course feedback suggests that when AI is explained through familiar cultural material and linked to real concerns, older learners can engage deeply and critically.

First published in: Devdiscourse
