People use more menacing language online than in person: Study
People react less strongly to malicious speech on digital platforms, believing that such abuses on social media cause less harm than in face-to-face interactions, a study has found.
From online forums to community groups, research and experience show people are more willing to insult and use menacing language online than in person, especially when there's the protection of anonymity behind a computer.
"Many of us are taken aback when people like Milo Yiannopoulos target and harass people on Twitter, then go on TV and say that digital words don't hurt anyone," said Curtis Puryear, from the University of South Florida in the US.
"Yet our data finds that Yiannopoulos's perspective resonates with many of us to some degree," said Puryear, lead author of the study published in the journal Social Psychological and Personality Science.
"We expect people to be less hurt by malicious words in certain digital contexts, and we respond with less outrage. This may make it easy to discount the experiences of victims of online harassment," he said.
In one study of 270 students, people saw an image of someone participating in "nerd culture," with a comment of "go back to your mommy's basement nerd," in one of three environments: face-to-face; online with social information, such as names and photos; or online with little social information.
In another study, of 283 people, participants read a remark insulting a woman for making a comment about infrastructure and were presented with the negative comment being made on an online forum with little social information or as taking place at a public event.
Comparing the digital environments, they found mixed results. The presence of more social information, from names to photos, prompted stronger reactions to inflammatory comments.
However, the researchers found initial evidence that, even when people are identifiable, inflammatory speech is less shocking in digital contexts.
The cues that help to identify people as individuals can be dulled in the online environment, Puryear suggests.
This lack of "personalisation" can dampen the social cues that tell people someone is a victim, making observers less likely to experience anger or act on behalf of the victim.
Another part of the dulled reaction to such comments comes from what one could describe as "numbing," whether from the sheer volume of reports of online harassment or from over-exposure to it.
As more moral and social cues are communicated online, could people's attitudes change and start to reflect standards similar to those in face-to-face situations?
The results depend on how we shape our online communities, researchers said.
Building digital platforms that depersonalise users and foster norms accepting of malicious speech may increasingly dull our responses to victimisation.
"But if our norms and expectations begin to reflect that digital words really do matter then the disparity between how we react to victimisation in digital and physical space may fade," said Puryear.
(With inputs from agencies.)