Telegram facilitates spread of anti-vaccine narratives through malicious channels

A new peer-reviewed study sheds light on the hidden ecosystem of anti-vaccination misinformation circulating within Telegram, the encrypted messaging platform often used by conspiracy groups and health disinformation actors. The study, “Profiling Antivaccination Channels in Telegram: Early Efforts in Detecting Misinformation,” was published in Frontiers in Communication. Researchers from Mykolas Romeris University in Lithuania applied latent profile analysis to 7,550 messages from 151 Telegram channels, identifying two distinct behavioral clusters: one dominated by manipulative, conspiracy-laden narratives, and another by less harmful, albeit politically reactive, content.
In contrast to conventional platforms like Facebook or X, Telegram’s minimal content moderation and emphasis on anonymity create fertile conditions for misinformation to flourish. The study proposes and validates a novel conceptual framework tailored to Telegram’s architecture, incorporating four dimensions: the characteristics of content creators and spreaders, the intended targets of misinformation, the multimodal content strategies used, and the broader social context that influences message resonance.
How Do Anti-Vaccine Actors Exploit Telegram's Infrastructure?
Telegram's infrastructure, a hybrid of private chats and public broadcasting channels, offers a haven for disinformation campaigns. Unlike other platforms, Telegram combines anonymity, encryption, and virality. This structure complicates content moderation while amplifying coordinated manipulation.
To address this, researchers manually annotated and statistically analyzed messages across 151 anti-vaccination Telegram channels. The study revealed that malicious channels often obscure the identity of the originator, with many messages lacking indicators of whether the sender was a human, bot, individual, or group. These ambiguity patterns were flagged as markers of coordinated disinformation. One class of channels, constituting 38.4% of the sample, was statistically distinguished by a high volume of malicious content, including conspiracy theories, trolling, and discourse manipulation techniques like testimonial framing and cloaked science. These messages often lacked clearly identified targets and were disproportionately framed around ongoing crises such as COVID-19 or vaccine injury claims.
The use of discourse strategies such as evidence collages and emotionally charged testimonials allows these actors to simulate legitimacy and build community trust. These tactics were particularly prevalent in the more manipulative channel class, referred to in the analysis as Class 1. In contrast, the less manipulative Class 2 (61.6% of the sample) leaned more heavily on referencing political discussions and breaking news events, with significantly lower levels of disinformation and rhetorical manipulation.
Latent profile analysis confirmed the robustness of these classes using statistical fit indices such as the BIC and ICL, which supported the two-class solution as the best-fitting model. Channels in the more malicious group averaged 45.8 malicious messages each, compared to 23.4 in the less harmful group.
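To give a concrete sense of how such a classification is produced, the sketch below selects the number of latent profiles by comparing BIC values across Gaussian mixture models, which is the standard way latent profile analysis is operationalized in code. The channel-level features, their simulated values, and the use of scikit-learn are illustrative assumptions, not the paper's actual data or pipeline.

```python
# Minimal sketch: choosing the number of latent profiles by BIC with a
# Gaussian mixture model as a stand-in for latent profile analysis.
# Feature names and values are invented; ICL is omitted for brevity.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

# Hypothetical channel-level features aggregated from manual annotation,
# e.g., per-channel counts of malicious, conspiracy, and crisis-framed messages.
rng = np.random.default_rng(0)
channels = pd.DataFrame({
    "malicious_msgs": rng.poisson(30, size=151),
    "conspiracy_msgs": rng.poisson(10, size=151),
    "crisis_framing": rng.poisson(8, size=151),
})
X = channels.to_numpy(dtype=float)

# Fit mixtures with 1-5 components and keep the solution with the lowest BIC.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)}
bic_scores = {k: model.bic(X) for k, model in fits.items()}
best_k = min(bic_scores, key=bic_scores.get)

# Assign each channel to its most likely profile (cf. Class 1 vs. Class 2).
channels["profile"] = fits[best_k].predict(X)
print(f"Best number of profiles by BIC: {best_k}")
print(channels.groupby("profile")["malicious_msgs"].mean())
```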
What Methods Were Used to Profile Telegram Channels?
The study introduced a four-dimensional profiling framework adapted specifically to Telegram’s ecosystem, covering the attributes of spreaders or creators (e.g., bots vs. humans), the message content, the intended victim groups, and the broader social or political context of message deployment. Each of the 7,550 messages was manually coded for both textual and visual/multimodal elements, including manipulated documents, memes, conspiracy framing, clickbait headlines, and images designed to mimic scientific authority.
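As an illustration of what such a coding scheme might look like in practice, the sketch below represents a single annotated message along the four dimensions described in the study. All field names and category values are hypothetical stand-ins, not the paper's actual codebook.

```python
# Minimal sketch of one annotated message under a four-dimensional
# coding scheme; field names and example values are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AnnotatedMessage:
    channel_id: str
    text: str
    # Dimension 1: characteristics of the creator/spreader
    sender_type: str = "undetermined"           # e.g., "human", "bot", "group"
    # Dimension 2: intended target of the misinformation
    target: str = "undetermined"                # e.g., "medical community"
    # Dimension 3: content strategy, textual and multimodal
    content_tags: list[str] = field(default_factory=list)  # e.g., ["conspiracy", "evidence collage"]
    # Dimension 4: broader social or political context
    social_context: Optional[str] = None        # e.g., "active crisis", "election"

msg = AnnotatedMessage(
    channel_id="channel_042",
    text="...",
    content_tags=["conspiracy", "testimonial"],
    social_context="active crisis",
)
```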
On the question of targets, however, the study found that an overwhelming number of messages, more than 1,600, were labeled “undetermined,” meaning the victims were not explicitly identified. This ambiguity may serve a strategic purpose: by avoiding direct confrontation or slander, malicious actors may reduce the chance of content being flagged or reported. Only 14 messages targeted the medical or scientific community directly, despite this group being a common focus in anti-vaccine rhetoric.
Linguistic content analysis showed the dominance of conspiracy theories (1,235 messages), followed by testimonial narratives and political themes. Visually, the most common tactic was the evidence collage: a composite graphic blending data, headlines, and manipulated charts. The prevalence of such content aligns closely with the literature linking visual manipulation to increased virality and emotional impact.
In terms of social context, messages exploiting “active crisis” narratives were the most prevalent (1,117), with significantly fewer referencing elections (27) or wedge issues (218). This suggests that malicious channels favor framing their content in crisis-oriented language to exploit user anxiety and urgency.
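Frequencies like those reported above could, in principle, be tallied directly from such annotations, as in the short sketch below; the messages here are invented, and the figures cited in this article come from the study itself.

```python
# Minimal sketch: tallying content and social-context categories across
# annotated messages (represented here as plain dicts; data is invented).
from collections import Counter

messages = [
    {"content_tags": ["conspiracy", "evidence collage"], "social_context": "active crisis"},
    {"content_tags": ["testimonial"], "social_context": "active crisis"},
    {"content_tags": ["political"], "social_context": "wedge issue"},
]

content_counts = Counter(tag for m in messages for tag in m["content_tags"])
context_counts = Counter(m["social_context"] for m in messages if m["social_context"])

print(content_counts.most_common())  # cf. conspiracy theories, testimonials, political themes
print(context_counts.most_common())  # cf. active crisis, wedge issues, elections
```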
How Can These Findings Inform Future Misinformation Detection?
The study’s primary contribution lies in demonstrating how statistical and manual annotation methods can be used to identify behavioral patterns in misinformation dissemination, especially in opaque ecosystems like Telegram. The latent profile analysis revealed that not all misinformation channels are equal: some are orchestrated and manipulative, while others simply echo contentious information in real time. This differentiation is vital for developing nuanced content moderation strategies and policy interventions.
Class 1, the more manipulative profile, was characterized by statistically significant patterns: elevated use of trolling, higher ambiguity in actor identity, heavy reliance on emotional and testimonial appeals, and a strong association with crisis-based messaging. This group also used rhetorical techniques that masked misinformation behind the façade of scientific neutrality, a strategy termed “cloaked science.”
Meanwhile, Class 2 channels showed a distinct profile, emphasizing political discourse and breaking news. While not free from misinformation, these channels showed fewer signs of deliberate manipulation and more signals of reactive content sharing. These behavioral distinctions have practical implications: Class 1 channels may require proactive moderation and policy targeting, while Class 2 may benefit from real-time fact-checking and algorithmic downranking.
The study also calls attention to methodological limitations in misinformation detection. Manual annotation, while effective, is labor-intensive and prone to subjective bias. The authors suggest that future work should include AI-driven annotation tools, cross-platform comparative analyses, and more refined metadata integration, such as engagement metrics and forwarding patterns. Moreover, Telegram’s limited data accessibility and encryption demand creative yet ethical workarounds for future monitoring.
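One plausible form for the AI-driven annotation the authors call for is a supervised classifier that pre-labels messages for human review. The sketch below is a generic baseline under that assumption, not the authors' tooling; the training texts and labels are invented.

```python
# Minimal sketch of AI-assisted annotation: a text classifier trained on
# manually coded messages suggests labels for unseen ones, which human
# annotators then confirm or correct. Examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "They are hiding the real injury numbers",     # manually coded: conspiracy
    "My neighbour got sick right after the shot",  # manually coded: testimonial
    "Parliament debates the new mandate today",    # manually coded: political
]
labels = ["conspiracy", "testimonial", "political"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# Suggest a label for a new message; a human reviewer makes the final call.
print(clf.predict(["Doctors won't tell you what is in these vials"]))
```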
Ultimately, the study suggests, misinformation thrives not merely because of its content but because of social dynamics and platform affordances. Addressing it effectively will require a multidimensional approach combining computational profiling, sociopolitical awareness, and platform accountability.
FIRST PUBLISHED IN: Devdiscourse