New Report Raises Serious Doubts Over WhatsApp’s Misinformation Strategy
Users of personal messaging apps interpret the tags ‘forwarded’ and ‘forwarded many times’ in widely differing ways, potentially helping misinformation to spread, according to a new report.
The labels ‘forwarded’ and ‘forwarded many times’ are used by Meta – the owner of WhatsApp – to indicate that a message may not be trustworthy and that the information it contains might not be accurate.
However, new research from Loughborough University reveals that these warnings are too vague, and suggests that Meta deliberately chose ambiguous wording to limit negative associations between the messaging app and harmful content.
The online spread of misinformation remains a “major concern for policymakers, news organizations and citizens,” according to Professor Andrew Chadwick, “particularly as we head into this year of multiple major elections around the world.”
In a new report published today, the researchers present the results of a national survey of 2,000 people, which show that very few – just 10% of messaging users – understood the tags as intended.
Taken together, about half of the UK’s messaging users reported either not recalling ever having seen the tags, not knowing what they signified, or being uncertain about their meaning.
The most common misinterpretation was that the tags simply indicated viral entertainment content such as jokes or videos.
Worryingly, 10% said they saw the tags as an indicator of accurate, trustworthy, useful or relevant content.
Dr Natalie-Anne Hall, Postdoctoral Research Associate for the Everyday Misinformation Project, said: “These new findings from our recent nationally-representative survey confirm the patterns we first identified during the in-depth interviews we held with members of the public earlier in the project.
“Many people simply do not interpret the ‘forwarded’ tags as misinformation warnings, which means the tags don’t serve their intended purpose.
“Among the large group of people in the UK who use personal messaging, not only is there widespread ambiguity and lack of awareness of the tags’ intended function, there are also many irrelevant, inaccurate and even potentially dangerous misinterpretations.”
The report, Misinformation on Personal Messaging—Are WhatsApp’s Warnings Effective?, also highlights which attitudes and demographic factors are associated with misinterpreting the purpose of the ‘forwarded’ and ‘forwarded many times’ tags.
Younger people are more likely to misinterpret the tags, as are people who place a greater degree of trust in what they see on personal messaging, according to the researchers.
They also found that older messaging users and those with lower levels of formal education are the least likely to be familiar with the tags and to know how to interpret them.
People who use personal messaging most frequently are less likely to wrongly see the tags as signalling accurate or trustworthy information.
However, rather than associating the tags with potentially untrustworthy content, frequent messaging users still tend to associate the tags with popular content, jokes and multimedia.
Those who often participate in larger messaging groups – either of friends or of workmates – were also more likely to misperceive the tags’ purpose.
The report by Dr Hall, Professor Andrew Chadwick, Professor Cristian Vaccari (Edinburgh), Dr Brendan Lawson, and Portia Akolgo, offers five principles for the design of effective misinformation warnings on personal messaging. They are:
- Don’t rely on description alone: Misinformation warnings should clearly indicate the potential for misinformation.
- Introduce user friction: Misinformation warnings may be overlooked unless they use designs that force a person to stop and reflect.
- Gain media exposure: Platforms should engage in publicity campaigns about the intended purpose of misinformation warnings.
- Consider the context: It is crucial to understand the different ways messaging platforms are shaped by social norms and people’s relationships with others.
- Think beyond platforms: Technological features need to be combined with socially oriented anti-misinformation interventions that empower people to work together to use personal messaging platforms in ways that help reduce misinformation.
Professor Andrew Chadwick, Principal Investigator for the Everyday Misinformation Project in the Online Civic Culture Centre in Loughborough’s Department of Communication and Media, said: “The online spread of misinformation remains a major concern for policymakers, news organizations and citizens, particularly as we head into this year of multiple major elections around the world.
“However, the evidence base is improving all the time, and our report makes a new contribution.
“We focused on the often-neglected world of personal messaging, which is hugely popular globally and used by a large majority of the UK public.
“Above all, these findings are further evidence that online tech platforms have some work to do to show they are serious about tackling misinformation.
“Yet our report, with its five recommendations, also shows how societies can now move toward practical, evidence-based solutions that tangibly improve the health of our online civic culture. Meta can do better.”