“Who’s here to read the comments?”
This is a phrase we often see below controversial news articles and contentious posts online.
Comment systems are powerful. They give voices to the voiceless and allow people to express their opinion while engaging with a community. They are also a way to explore public opinion on important issues such as elections and vaccines.
However, comments have also become a vector for disinformation, conspiracy theories, hate speech, bullying and radicalisation. This is true not only for social platforms, but also for publishers and aggregators.
We all know comments can be toxic, but because harmful content evolves constantly, it is a difficult problem to manage. How bad actors use comment systems often goes unreported. There are few tools that allow for quantitative research. And comments are often left unmoderated.
Platforms and publishers can be notified of abusive comments after they are flagged by users. There has been a lot of good work from publishers, and The Coral Project has been leading the way in making progress here. But the problem becomes complicated when there are thousands of comments below a single post, each of which can reflect the latest disinformation trends.
A battleground often emerges not in the post itself, but in the comments below it. How influential such spaces are in the rise of vaccine hesitancy, for example, or other problems, is difficult to quantify. But a 2012 study published in the Journal of Computer-Mediated Communication showed that the comments below a news story can distort how certain readers perceive it.
Scale and Nature
Comments on controversial topics and news articles generate discussions which sometimes lead to hundreds, if not thousands, of replies and interactions online.
Although people use comments to express their opinions on a given topic, comment sections have also become a kind of safe haven for bad actors and extremist groups to spread their harmful narratives.
Islamic State supporters and followers, for example, have repeatedly used comments as a method to spread their propaganda.
One of the techniques used by Islamic State is to comment on content shared by popular Arabic pages and groups. They then use alternative spellings or change the language format in order to avoid detection.
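This evasion technique works because naive moderation relies on exact keyword matching. A minimal sketch below illustrates the idea: Unicode normalisation can fold some simple spelling variants (added diacritics, zero-width characters) back into a canonical form that a blocklist can catch. The blocklist term and function names here are hypothetical, for illustration only; real detection systems are far more sophisticated.

```python
import unicodedata

# Hypothetical blocklist term (illustrative only).
BLOCKLIST = {"propaganda"}

def normalize(text: str) -> str:
    """Reduce simple spelling tricks to a canonical form:
    strip diacritics and zero-width characters, then lowercase."""
    decomposed = unicodedata.normalize("NFKD", text)
    stripped = "".join(
        ch for ch in decomposed
        if not unicodedata.combining(ch) and ch not in "\u200b\u200c\u200d"
    )
    return stripped.lower()

def matches_blocklist(comment: str) -> bool:
    """Check whether any normalised word in the comment is blocklisted."""
    return any(word in BLOCKLIST for word in normalize(comment).split())

# Exact matching misses an obfuscated spelling...
print("prōpagandá" in BLOCKLIST)                    # False
# ...but normalisation catches it.
print(matches_blocklist("Share this prōpagandá"))   # True
```

Of course, switching to a different language or script entirely, as described above, defeats even this normalisation step, which is why multilingual expertise matters.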
Screenshots below show comments made by Islamic State followers on a popular Arabic Facebook page. These comments promoted their military operations in Central and South Asia:
In the next set of screenshots, other followers commented on popular posts with links to an Islamic State propaganda video. The comments are identical; they appear to have been posted by regular users whose accounts were hacked. The Islamic State even celebrated in the comments that they had hacked these accounts to spread propaganda:
Often, neutral posts become part of the commenting battleground.
For example, a post from the US broadcaster ABC about the Food and Drug Administration was one of the most engaged-with posts about COVID on Facebook around June 27, according to CrowdTangle. ABC had reported that the FDA added a warning to the Moderna and Pfizer vaccines about the possible risk of rare heart inflammation.
The post received phenomenal engagement, with more than 34,000 comments. Although the post itself was neutral, simply reporting the FDA warning, the comments were, as might be expected, flooded with anti-vaccine misinformation and conspiracy theories:
Then there is the Russian doll problem: comments-within-the-comments. One of the most popular comments on the ABC post received 121 replies and over 700 interactions. The user shared their positive experience with the vaccine and how “the benefits are much greater.”
This comment was not welcomed by everyone. Some users replied with speculation about alleged deaths caused by vaccines:
Comments add a unique flavour to debates online and can be great for engaging a community. However, the sheer scale of comments, and the difficulties in moderation, make them a useful hideout for the spread of evolving narratives. This is where we can help.
At Kinzen, a marriage of journalistic expertise and technology has helped us scale up the early detection of disinformation and harmful content. This is something we are exploring for comment systems.
On a daily basis, we monitor the evolution of language around campaigns of deception. Our work is not limited to one language, one platform or one country. Drawing on the knowledge of in-market experts, we have built technology that identifies harmful content across English, German, Arabic, Hindi, Russian, Turkish, French, Portuguese, Spanish and Swedish.
Applying this to comments is one way we can contribute to a higher quality information environment.