Managing online forums in the age of misinformation
Social media platforms and online discussion forums have given users a voice without holding them accountable for the accuracy of what they say. As a result, these platforms have become fertile ground for individuals intentionally spreading misinformation and fake news.
A new platform, developed by the School of Information Technology (IT) at Monash University Malaysia, uses a combination of graph algorithms and machine learning to extract valuable tacit information from forums such as Reddit, StackExchange and Quora, and assigns each post a score that estimates its reliability.
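The article does not name the specific graph algorithms involved, but the general idea can be sketched: treat interactions between users as edges in a directed graph, then score users from the graph's structure. The PageRank-style iteration below is an illustrative stand-in, not the published method, and treating a reply as an implicit endorsement of the original poster is our assumption.

```python
# Illustrative sketch only: the article does not name the graph
# algorithms the platform uses. Reply interactions become a directed
# graph, and a PageRank-style iteration scores users by how often
# other users interact with them.

def score_users(reply_edges, damping=0.85, iterations=50):
    """reply_edges: (replier, original_poster) pairs."""
    nodes = {user for edge in reply_edges for user in edge}
    outgoing = {user: [] for user in nodes}
    for replier, poster in reply_edges:
        outgoing[replier].append(poster)

    rank = {user: 1.0 / len(nodes) for user in nodes}
    for _ in range(iterations):
        nxt = {user: (1 - damping) / len(nodes) for user in nodes}
        for replier, posters in outgoing.items():
            if posters:  # users who never reply pass nothing on
                share = damping * rank[replier] / len(posters)
                for poster in posters:
                    nxt[poster] += share
        rank = nxt
    return rank

# Example: a user who attracts replies from several others ranks highest.
print(score_users([("a", "c"), ("b", "c"), ("a", "b")]))
```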
Project Lead, Dr Ian Lim Wern Han from the School of IT, says this scoring technique can give users insight into the content they’re consuming online.
“By assigning numbers to users of various online discussion forums, we’re able to reward those who share credible and trustworthy content, while punishing those who push incorrect and misinformed content. The reward or punishment is tied to the visibility and engagement of someone’s profile or content,” Dr Lim said.
“If users are credible, their content will be placed higher up on the page for more visibility, and their Reddit votes will be worth more when they vote on other threads or comments. If a user is deemed untrustworthy, their post will be placed lower on the page, or in some cases hidden from the public altogether, and their votes will carry less weight.”
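The article doesn’t publish the formula behind this, but the mechanics Dr Lim describes (trust-weighted votes, trust-adjusted ordering, and hiding the least trustworthy posts) might look something like the minimal sketch below. The visibility threshold and the linear trust weighting are illustrative assumptions.

```python
# A minimal sketch of the reward/punishment mechanics quoted above.
# The threshold and linear weighting are assumptions for illustration;
# the article does not publish the actual scoring formula.

from dataclasses import dataclass, field

HIDE_THRESHOLD = 0.1  # assumed cut-off below which a post is hidden

@dataclass
class Post:
    author_trust: float                          # reliability score in [0, 1]
    votes: list = field(default_factory=list)    # (voter_trust, +1 or -1)

def weighted_score(post):
    """A vote counts in proportion to the voter's own trust score."""
    return sum(trust * direction for trust, direction in post.votes)

def order_thread(posts):
    """Hide posts from the least trusted authors; rank the rest by a
    trust-adjusted score so credible content rises to the top."""
    visible = [p for p in posts if p.author_trust >= HIDE_THRESHOLD]
    return sorted(visible,
                  key=lambda p: p.author_trust * weighted_score(p),
                  reverse=True)

# A trusted author's post stays visible; a distrusted author's is hidden.
thread = [Post(0.9, [(0.8, +1), (0.2, -1)]),   # credible author
          Post(0.05, [(0.9, +1)])]             # below threshold: hidden
print(order_thread(thread))
```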
A recent study by the Annenberg Public Policy Center of the University of Pennsylvania revealed that people who relied on social media for information were more likely to be misinformed about vaccines than those who relied on traditional media. The dissemination of fake news on issues like health and politics will remain a constant challenge unless tools that can appropriately moderate and verify online content are developed.
Dr Lim’s research offers a possible solution. He validated the accuracy of his approach on more than 700,000 threads, collected from almost two million users across a variety of online forums. His research assigned each user a rating, and these ratings were then used to predict the user’s contribution on the following day. The ratings were updated daily, and the process was repeated over the ensuing days.
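The article summarises that validation cycle only briefly. The self-contained sketch below illustrates the daily loop it describes; the exponential-moving-average rating update and the 0.5 decision threshold are our assumptions, not the published model.

```python
# A self-contained sketch of the daily profiling cycle described above.
# The moving-average update and 0.5 threshold are illustrative
# assumptions, not the published model.

ALPHA = 0.3  # assumed smoothing factor for the daily rating update

def update_ratings(ratings, todays_quality):
    """Blend each user's observed contribution quality for the day
    (a value in [0, 1]) into their running rating."""
    for user, quality in todays_quality.items():
        previous = ratings.get(user, 0.5)  # neutral prior for new users
        ratings[user] = (1 - ALPHA) * previous + ALPHA * quality
    return ratings

def predict_trustworthy(ratings, user):
    """Predict whether a user's next-day contribution will be
    trustworthy, using only their current rating."""
    return ratings.get(user, 0.5) >= 0.5

# Two days of (user -> observed quality) data, folded in day by day.
days = [{"alice": 0.9, "bob": 0.2},
        {"alice": 0.8, "bob": 0.1}]
ratings = {}
for observations in days:
    ratings = update_ratings(ratings, observations)
print(predict_trustworthy(ratings, "alice"))   # True
print(predict_trustworthy(ratings, "bob"))     # False
```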
“There is an abundance of social media platforms, each with hundreds of thousands of threads and comments. Processing these threads one by one is not only difficult but extremely costly, especially given their unstructured nature. So I decided to review these threads from a user’s point of view and identify trustworthy users, measuring the trust and reliability of online profiles with my profiling methods,” said Dr Lim.
Applying measures of confidence and volatility to a complex network of interactions ensures the most credible information and questions appear at the very top of a thread. The same rating can also be used to match a questioner with suitable and reliable responses.
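The article names “confidence” and “volatility” without defining them. One plausible reading, sketched below, treats confidence as the mean quality of a user’s past interactions and volatility as their standard deviation, so that an erratic history lowers the rating even when the average is high. This interpretation is ours, not the published definition.

```python
# One plausible reading of "confidence" and "volatility": mean
# interaction quality minus a penalty for its standard deviation.
# An illustrative assumption, not the published definition.

from statistics import mean, stdev

def credibility(interaction_scores, k=1.0):
    """Mean quality minus k standard deviations: a volatile history
    lowers the rating even if the average is high."""
    if len(interaction_scores) < 2:
        return 0.0  # too little history to be confident about
    return mean(interaction_scores) - k * stdev(interaction_scores)

# A steady contributor outranks an erratic one with the same average.
steady  = [0.70, 0.72, 0.69, 0.71]   # mean ~0.705, low volatility
erratic = [1.00, 0.40, 1.00, 0.42]   # mean ~0.705, high volatility
print(credibility(steady) > credibility(erratic))   # True
```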
This methodology can also be applied to social media influencers, to help ensure that celebrities, sportspeople and others with influence are not disseminating incorrect or misleading information, whether in ordinary posts or in public service announcements.
“How can we classify the social media influence of a person who could potentially be spreading misinformation? Recently in the US, players in the National Basketball Association made headlines for their beliefs that the COVID-19 pandemic was being overblown and that there was a hidden agenda behind it.
“Each time these players share a tweet, they have the ability to influence millions of people. For this reason, it’s essential that we prevent the spread of misinformation online,” Dr Lim says.