Scams, breaches, and confusion drive a wave of online help-seeking
Online scams, account hijackings, privacy breaches, and harassment have become routine experiences for internet users, cutting across age groups, platforms, and geographies. Increasingly, those users turn to public online communities for help, clarity, and reassurance when something goes wrong. This shift reflects a widening gap between how risks are experienced and how support systems are designed.
A new study titled “Understanding Help-Seeking for Digital Safety, Privacy, and Security,” published on arXiv, examines this gap. The research provides a detailed picture of how people seek help for digital safety problems in real-world online environments and why existing support mechanisms frequently fail to meet their needs.
A sharp rise in public help-seeking for digital threats
The study analyzes more than 1.1 billion posts published on Reddit between January 2021 and December 2024, identifying roughly three million posts in which users explicitly asked for help related to digital safety, privacy, or security. These requests range from suspected scams and account takeovers to data misuse, harassment, and confusion over privacy tools. The scale of the dataset allows the authors to move beyond anecdotal evidence and measure help-seeking behavior over time, across topics, and at the population level.
The findings show a clear and sustained increase in help-seeking activity. In the final year of the dataset alone, the volume of digital safety help-seeking posts rose by about 66 percent. By late 2024, users were posting more than 100,000 such requests each month. This growth coincides with the expansion of large-scale fraud operations, the normalization of data breaches, and increasing complexity in platform security features, all of which make it harder for users to understand what is happening when something goes wrong.
Scams dominate the help-seeking landscape. Users frequently describe suspicious messages, payment requests, impersonation attempts, and fraudulent offers, often seeking confirmation before taking action. Account access issues are the second most common category, including reports of locked accounts, unauthorized logins, and recovery failures. Privacy-related questions form another major cluster, covering topics such as data tracking, anonymization tools, and unexplained changes in platform behavior. Harassment and abuse also feature prominently, especially when combined with doxxing fears or coordinated attacks.
These categories often overlap. Users rarely face a single, isolated problem. A scam may follow an account compromise, harassment may escalate into privacy violations, and attempts to secure one platform may expose vulnerabilities on another. This interconnectedness increases the cognitive and emotional burden on users, who must make decisions quickly with incomplete information.
Help-seeking is about sensemaking, not just solutions
The study closely examines how users frame their requests and what they are actually seeking. A key finding is that help-seeking for digital safety is not simply about technical fixes. Many users turn to online communities because they are unsure whether a situation is dangerous, how serious it might be, or what consequences they could face if they act incorrectly.
Posts frequently contain expressions of confusion, anxiety, embarrassment, or self-blame. Users ask whether they have been scammed, whether a message is legitimate, or whether unusual account behavior is normal. In many cases, they are not yet asking how to fix a confirmed problem, but how to interpret ambiguous signals. This sensemaking role is central to understanding why public forums have become so important.
Reddit, as the primary empirical setting of the study, offers features that make this kind of help-seeking possible. Topic-specific communities allow users to describe complex situations in their own words and receive contextual responses from others with similar experiences. Replies often include step-by-step guidance, warnings about common pitfalls, and explanations of how certain attacks work. Just as importantly, respondents frequently help normalize the experience, reducing panic and helping users regain a sense of control.
However, the study also highlights the limits of this informal support system. Advice quality varies, and not all responses are accurate or complete. Some guidance reflects outdated threat models or personal opinion rather than verified best practices. Despite these risks, users continue to rely on peer communities because official support systems are often perceived as opaque, slow, or unresponsive.
The research suggests that many platform help centers assume users already understand the nature of their problem. Documentation is typically organized around predefined categories, requiring users to diagnose their own situation before they can find relevant guidance. The study shows that this assumption does not hold for a large share of real-world incidents, particularly those involving social engineering, partial compromises, or emerging attack patterns.
Implications for platforms, policy, and AI-based support
The findings challenge the idea that expanding automated reporting tools alone will solve user support problems. While automation can handle high volumes, it struggles with ambiguity, emotional distress, and compound scenarios, all of which are common in digital safety incidents.
The study points out a mismatch between platform incentives and user needs. Platforms often prioritize efficiency and risk containment, focusing on standardized flows that minimize human intervention. Users, by contrast, seek understanding, reassurance, and tailored guidance. When these needs are not met, they seek alternatives outside official channels, even when that exposes them to inconsistent advice.
The research also speaks directly to the growing interest in AI-driven user support. AI assistants are increasingly positioned as scalable solutions for handling safety and security queries. The study cautions that such systems must be designed with care. Effective AI support would need to recognize uncertainty, ask clarifying questions, and adapt guidance to specific contexts rather than delivering generic instructions. It would also need safeguards to avoid amplifying fear or providing false reassurance in high-stakes situations.
Rather than viewing peer communities as competitors, the authors suggest that platforms could learn from them. The ways users describe problems, the types of explanations they find helpful, and the emotional support they seek all offer valuable design signals. Integrating these insights into official support channels could improve user trust and reduce reliance on external forums.
FIRST PUBLISHED IN: Devdiscourse

