AI-augmented messaging found more persuasive in vaccine outreach study
Generative AI chatbots like ChatGPT may help global health agencies strengthen vaccine confidence and reach wider audiences, all while maintaining credibility and transparency. A recent research paper titled "Working with Large Language Models to Enhance Messaging Effectiveness for Vaccine Confidence," published by researchers at Dartmouth College, explores the use of ChatGPT-augmented messaging to improve public trust in vaccination. Through a randomized online survey, the study finds that messages enhanced by ChatGPT were generally perceived as more persuasive than their original human-written counterparts, indicating that large language models (LLMs) may serve as cost-effective tools for strengthening public health communication.
The study targeted the persistent challenge of vaccine hesitancy, particularly in communities with limited public health communication resources. Participants were recruited via Amazon Mechanical Turk and presented with six pairs of vaccine-related messages, each pairing an original human-written message with a ChatGPT-enhanced version, without initially being told which was AI-generated. After completing the comparisons, participants were informed that some messages had been augmented by ChatGPT and asked to reflect on how that knowledge influenced their perceptions.
How effective are ChatGPT-augmented messages compared to human-written ones?
The results were notable. In four out of six message pairs, the ChatGPT-augmented version was rated more convincing. While not all comparisons reached statistical significance, the trend suggests a general preference for the AI-assisted messages. The researchers noted a key stylistic difference: ChatGPT messages tended to be longer, more enthusiastic, and featured direct calls to action such as “Get vaccinated today!” and “Don’t wait!”—phrasing that may have enhanced their perceived urgency and engagement.
Although the average overall score favoring ChatGPT messages was modest, a clear pattern emerged in participant feedback. Respondents described the AI-augmented messages using terms like “good,” “informative,” and “more persuasive,” indicating a generally favorable reception. Importantly, the study found no statistically significant correlation between how participants felt about ChatGPT and how they rated the messages, suggesting that even those with concerns about AI were not biased against its output when unaware of its origin.
That said, the study did detect a bimodal distribution in responses: some respondents strongly favored ChatGPT messages, while others reacted negatively. A minority expressed skepticism toward AI involvement in public health messaging, citing concerns such as perceived exaggeration, fear of job replacement, and lack of trust in the technology. These views, however, were outweighed by the majority who saw value in the AI-enhanced content. Overall, the study’s findings support the conclusion that ChatGPT can be a viable tool for augmenting public health messaging without undermining credibility.
What factors influence the persuasiveness of AI-enhanced messaging?
One of the most striking findings in the study was the impact of message placement. Participants tended to rate whichever message they saw first as more convincing, regardless of whether it was human-written or ChatGPT-enhanced. When the ChatGPT message appeared first in the pair, its mean score was significantly higher than when it was presented second. This “primacy effect,” well documented in psychology, underscores the importance of survey design in evaluating message effectiveness and may inform how public health messages are structured in digital formats.
Message length also appeared to influence perceived persuasiveness. Longer ChatGPT-augmented messages tended to receive higher scores, although the correlation was not statistically significant. The authors suggest that longer messages provide more room for ChatGPT to improve clarity, add context, and include persuasive language, which may make them more effective. The researchers also pointed out that prompt engineering played a role: using instructions like “Make this message more interesting” often led to extended responses that diverged meaningfully from the originals in tone and content.
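To make that prompt-engineering step concrete, here is a minimal sketch of how such an augmentation pass might look in code. It is illustrative only: the model name, system instruction, and use of the OpenAI Python client are assumptions, and only the “Make this message more interesting” instruction comes from the study.

```python
# Minimal sketch of a message-augmentation step, assuming the OpenAI
# Python client; the model and system prompt are illustrative, not
# taken from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def augment_message(original: str) -> str:
    """Ask the model to rewrite a human-written vaccine message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {
                "role": "system",
                "content": "You help public health agencies refine vaccine messages.",
            },
            # The instruction below mirrors the prompt style the study describes.
            {
                "role": "user",
                "content": f"Make this message more interesting:\n\n{original}",
            },
        ],
    )
    return response.choices[0].message.content

print(augment_message("Flu shots are available at the county clinic this week."))
```

Because open-ended instructions like this give the model latitude over tone and length, outputs can diverge meaningfully from the original, which is consistent with the longer, more enthusiastic messages the researchers observed.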
The study further revealed that ChatGPT messages often included more emotional and motivational elements than their original counterparts. This included a higher frequency of exclamation marks, enthusiastic language, and urgent calls to action—features that align with best practices in health communication but may have been underutilized in the original messages created by smaller health departments or individual users.
Can ChatGPT be trusted to support public health communication?
While the study offers promising evidence of ChatGPT’s value in vaccine messaging, it also underscores the importance of transparency and thoughtful implementation. The researchers caution that not all audiences may react positively to learning that their health messages were generated or modified by AI. Although most participants responded favorably when informed about ChatGPT’s involvement, a few reported diminished trust in the message content upon learning it was AI-assisted.
Concerns about hallucinations (ChatGPT’s tendency to generate inaccurate or fabricated information) also remain. To mitigate these risks, the study emphasizes that ChatGPT should not be used to create messages from scratch for sensitive topics like public health. Instead, it should be viewed as an augmentation tool: one that refines, expands, and enhances content created by humans, with oversight maintained by qualified professionals.
The authors recommend an iterative, collaborative approach between public health professionals and AI tools. Agencies can use ChatGPT to brainstorm or test multiple variations of a message, with final drafts reviewed by experts for factual accuracy and tone. This method retains human judgment while leveraging AI’s strengths in stylistic enhancement and language clarity.
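A sketch of that review loop, building on the hypothetical augment_message helper shown earlier: several candidate rewrites are generated, and all of them are routed to a human expert rather than published directly. The helper name and draft count are assumptions for illustration.

```python
# Sketch of the iterative workflow described above: generate several
# AI-assisted variants, then hand all of them to a human reviewer.
# Assumes the hypothetical augment_message() helper sketched earlier.

def draft_variants(original: str, n: int = 3) -> list[str]:
    """Generate n candidate rewrites for expert review."""
    # Each call may produce a different rewrite; none is published as-is.
    return [augment_message(original) for _ in range(n)]

original = "Vaccines are safe, effective, and free at county clinics."
for i, draft in enumerate(draft_variants(original), start=1):
    # In practice these drafts would go to a public health professional
    # for fact-checking and tone review before any reach the public.
    print(f"--- Candidate {i} ---\n{draft}\n")
```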
Future studies should explore more personalized AI messaging based on demographic targeting, as well as experiments using other LLMs like GPT-4o or LLAMA. The researchers also recommend randomized message placement in surveys to better account for order bias and further exploration into how message length and tone affect persuasiveness.
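As a simple illustration of the randomized-placement recommendation, the sketch below flips a coin per pair to decide which message a participant sees first and records the order so it can be controlled for in the analysis; the message pairs here are invented for the example.

```python
import random

# Hypothetical message pairs: (human-written, ChatGPT-augmented).
pairs = [
    ("Original flu message ...", "Augmented flu message ..."),
    ("Original MMR message ...", "Augmented MMR message ..."),
]

def presentation_order(pairs):
    """Randomize which message in each pair appears first, recording
    the order so primacy effects average out across participants."""
    ordered = []
    for human, ai in pairs:
        ai_first = random.random() < 0.5  # coin flip per pair
        first, second = (ai, human) if ai_first else (human, ai)
        ordered.append({"first": first, "second": second, "ai_first": ai_first})
    return ordered

for slot in presentation_order(pairs):
    print("AI shown first:", slot["ai_first"])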
- First published in: Devdiscourse

