Charts, statistics and the hidden mechanics of modern misinformation
Data-driven communication dominates modern news and public discourse, yet the role of numbers in spreading misinformation remains poorly understood. A new academic review finds that misleading statistics and visuals are widely used to influence opinion but are rarely treated as misinformation in their own right.
The findings are presented in Data in the Context of Misinformation: A Scoping Review, published in Journalism & Mass Communication Quarterly, which compiles decades of research on how people interpret, trust, and misread numerical and visual data.
How misleading data slip through the cracks of misinformation research
The review finds that most misinformation scholarship still treats false or misleading information primarily as a text-based problem. Fabricated claims, deceptive headlines, and false narratives dominate academic attention, while numbers and visuals are often treated as secondary features or technical details. In many studies, misleading statistics or charts appear only as examples or experimental stimuli rather than as the central object of analysis.
This narrow focus is striking given how frequently data appear in real-world misinformation. Public debates on health, climate change, economics, and elections routinely rely on graphs, percentages, and numerical comparisons. These data points often shape perceptions of risk, urgency, and credibility long before audiences evaluate accompanying text. Yet the review identifies only a handful of studies that explicitly frame misleading or fabricated data as a form of misinformation in its own right.
Instead, misleading data are typically examined through other lenses. Design research focuses on poor visualization practices, such as truncated axes or unclear labels. Psychology studies explore how people misinterpret numbers or struggle with probability. Statistics research examines errors in data reporting or analysis. While each of these strands offers valuable insights, the review argues that their separation prevents a full understanding of how data function within misinformation ecosystems.
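To make the truncated-axis problem concrete, the sketch below plots the same two values twice, once with a zero baseline and once with a cut-off y-axis. The poll numbers are hypothetical and the example is illustrative only; it is not taken from the review, but it shows the kind of design choice the design-research literature flags as misleading.

```python
import matplotlib.pyplot as plt

# Hypothetical poll numbers: the two options differ by only 1.5 points.
labels = ["Option A", "Option B"]
values = [50.5, 52.0]

fig, (ax_full, ax_truncated) = plt.subplots(1, 2, figsize=(8, 4))

# Zero baseline: the bars start at zero, so the 1.5-point gap looks small.
ax_full.bar(labels, values)
ax_full.set_ylim(0, 60)
ax_full.set_title("Zero baseline")

# Truncated axis: starting the y-axis at 50 makes Option B's bar
# appear roughly four times taller than Option A's.
ax_truncated.bar(labels, values)
ax_truncated.set_ylim(50, 52.5)
ax_truncated.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```

Both panels are technically accurate, which is exactly why such charts escape text-focused fact-checking: the numbers are correct, but the visual impression is not.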
As a result, key questions remain largely unanswered. How often does misinformation rely on data compared to text-only claims? Do data-based falsehoods spread differently or persist longer? Are audiences more likely to trust and share misinformation when it includes numbers or charts? According to the review, existing research provides fragments of answers but no integrated framework.
The study emphasizes that this fragmentation is not just an academic problem. When misleading data are not recognized as misinformation, they may escape scrutiny by journalists, fact-checkers, and platforms that prioritize textual accuracy over numerical or visual integrity. This gap becomes especially dangerous during crises, such as pandemics or natural disasters, when rapidly changing data shape public behavior and policy decisions.
Why numbers and visuals can be especially persuasive
To explain why data-based misinformation can be so powerful, the review synthesizes findings using a dual-process model of cognition. This framework distinguishes between fast, automatic thinking and slower, more deliberate reasoning. According to the literature analyzed, data often exert their strongest influence during the earliest stages of information processing.
Visual features such as bar heights, line slopes, or color contrasts draw immediate attention. Numerical anchors, such as a single striking percentage or large figure, can set a reference point that shapes later judgments. At this stage, people rely on mental shortcuts rather than careful analysis. The review finds robust evidence that misleading visual cues and numerical framing can distort perception almost instantly.
Once these initial impressions are formed, they can persist even when people engage in more reflective thinking. The review highlights research showing that corrections and clarifications often struggle to fully undo the influence of misleading numbers or visuals. Early anchors continue to shape judgments, a phenomenon that helps explain why retracted statistics or corrected charts can leave lasting impressions.
Interpretation errors are also common when data representations do not align with familiar mental schemas. Unusual chart formats, logarithmic scales, or complex visualizations increase cognitive load and raise the risk of misunderstanding. While longer deliberation can improve accuracy in some cases, it can also introduce new errors when people lack the necessary numeracy or graph literacy.
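The log-scale point can be shown the same way. In the sketch below, one hypothetical exponential series (values doubling each period, an assumption chosen for illustration) is plotted on linear and logarithmic axes; readers unfamiliar with log scales can misread the straight line in the second panel as slow, steady growth.

```python
import matplotlib.pyplot as plt

# Illustrative exponential growth: values double every period.
periods = list(range(10))
cases = [100 * 2 ** t for t in periods]

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(8, 4))

# Linear scale: growth looks flat at first, then explosive.
ax_lin.plot(periods, cases, marker="o")
ax_lin.set_title("Linear scale")

# Logarithmic scale: constant doubling appears as a straight line,
# which can be misread as growth that is merely steady.
ax_log.plot(periods, cases, marker="o")
ax_log.set_yscale("log")
ax_log.set_title("Logarithmic scale")

plt.tight_layout()
plt.show()
```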
Notably, the review shows that data are not automatically trusted. Public skepticism toward statistics has grown in recent years, especially when numbers appear politicized or conflict with personal beliefs. However, mistrust does not eliminate the persuasive power of data. Instead, credibility judgments depend heavily on how well data align with existing attitudes and expectations. When numbers support a person's prior beliefs, they are more likely to be accepted and remembered, even if they are misleading.
This interaction between data and motivated reasoning creates a paradox. Highly numerate or data-literate individuals are often better equipped to detect errors, but they may also be more skilled at selectively interpreting numbers to defend preexisting views. The review finds mixed evidence on whether higher numeracy consistently reduces susceptibility to misinformation, underscoring the complexity of data-driven persuasion.
Literacy, attitudes, and the growing need for data-aware journalism
The review analyzes in detail the conditions that shape how people respond to data-based misinformation. Numeracy and graph literacy emerge as key factors across multiple stages of processing. Individuals who can read charts accurately and understand basic statistical concepts are generally more resilient to misleading representations. They spend more time examining visual details and are more likely to notice inconsistencies.
However, the review makes clear that literacy is not a complete safeguard. Even well-trained viewers can be misled by sophisticated design choices or subtle framing effects. Software defaults, design conventions, and time pressures can all contribute to the production and spread of misleading data, even in professional contexts such as academic publishing or journalism.
Attitudes and prior knowledge play an equally important role. Data that conflict with deeply held beliefs are more likely to be dismissed, while belief-consistent data are remembered and shared more readily. This pattern helps explain why corrections often fail to change minds, particularly in polarized debates. In such cases, data do not function as neutral evidence but as tools in broader identity-driven narratives.
The review also highlights a significant geographic and cultural bias in existing research. Most studies on data and misinformation originate from Western, highly educated societies. There is limited empirical work examining how misleading data operate in non-Western or lower-income contexts, despite evidence that data manipulation is used strategically in many political systems. This gap limits the global relevance of current findings and calls for broader comparative research.
The study raises important implications for journalism and public communication. Journalists are often positioned as gatekeepers who translate complex data into accessible stories. Yet the review finds that the role of journalists in creating or amplifying misleading data is rarely examined directly. Design choices, simplifications, and framing decisions made under deadline pressure can unintentionally distort meaning, even when underlying data are sound.
The findings suggest that preventing data-based misinformation requires more than fact-checking after the fact. It calls for stronger data literacy among journalists, clearer standards for visual and numerical reporting, and greater transparency around data sources and methods. Multimodal communication, combining text with carefully designed visuals and contextual explanations, can help audiences interpret data more accurately, but only if executed responsibly.
First published in: Devdiscourse

