Computer Scientists Warn: Reducing Distrust in Social Media Presents Challenges
Are anti-misinformation interventions on social media working as intended? It depends, according to a new study led by William & Mary researchers and published in the Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24).
Their study surveyed over 1,700 participants in the United States, revealing that anti-misinformation features increased users’ awareness of misinformation on social media, but did not make them more likely to share information on the platforms or more willing to receive information from them. Trust and distrust coexisted in the participants, emerging as distinct constructs rather than opposite ends of a single spectrum.
“Trust and distrust dynamics are the backbone of society,” said Yixuan (Janice) Zhang, an assistant professor in the William & Mary Department of Computer Science. The study, based on work funded by an unrestricted gift from Google, defined and measured these concepts and provided a validated survey instrument for future use.
Zhang served as lead author alongside Yimeng (Yvonne) Wang, a W&M Ph.D. student in computer science; the author group also included researchers from universities in three countries, all contributing to the multidisciplinary field of human-computer interaction.
“HCI has a lot to do with equitable computing, because we are dealing with human subjects,” said Zhang. Her HCI expertise aligns with William & Mary’s position in the evolution of the liberal arts and sciences, expressed in the university’s proposed school of computing, data science and physics.
The study focused on Facebook, X (formerly Twitter), YouTube and TikTok as commonly used sources of news and information, targeting the period from January 2017 to January 2023, which coincided with the rise of major misinformation campaigns.
During the period examined, these platforms had all enacted anti-misinformation strategies such as labeling false information, curating credible content and linking to additional sources. Examples of these interventions were shown to study participants who had recently engaged with the platforms.
Respondents were then asked to rate their agreement with eight statements, which measured four facets of trust and four facets of distrust.
For example, statements on the trust dimension of “competence” probed users’ confidence in the platforms’ ability to combat misinformation, while statements on the distrust dimension of “malevolence” assessed users’ belief that the platforms deliberately spread misinformation. The other trust facets were benevolence, reliability and reliance; the other distrust facets were skepticism, dishonesty and fear.
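To make the measurement concrete, here is a minimal sketch of how responses to such an instrument could be scored. The facet names come from the study, but the 5-point Likert coding, the averaging scheme and all identifiers below are illustrative assumptions, not the authors’ materials.

```python
# Illustrative sketch only: the paper's exact survey items and scoring are not
# reproduced here. The facet names come from the article; the 5-point Likert
# coding and the averaging scheme are assumptions.

TRUST_FACETS = ["competence", "benevolence", "reliability", "reliance"]
DISTRUST_FACETS = ["malevolence", "skepticism", "dishonesty", "fear"]

def score(responses: dict[str, int]) -> tuple[float, float]:
    """Average 1-5 Likert agreement into separate trust and distrust scores.

    Keeping two scores, rather than one trust-minus-distrust axis, mirrors
    the study's finding that trust and distrust are distinct constructs.
    """
    trust = sum(responses[f] for f in TRUST_FACETS) / len(TRUST_FACETS)
    distrust = sum(responses[f] for f in DISTRUST_FACETS) / len(DISTRUST_FACETS)
    return trust, distrust

# A single respondent can plausibly score high on both dimensions at once:
print(score({"competence": 4, "benevolence": 4, "reliability": 5, "reliance": 4,
             "malevolence": 4, "skepticism": 5, "dishonesty": 4, "fear": 4}))
# -> (4.25, 4.25)
```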
Additionally, the study investigated how specific anti-misinformation interventions related to users’ trust and distrust in social media and how their experience with those features influenced their attitudes and behaviors.
An analysis of the results highlighted a cluster of respondents with both high trust and high distrust, potentially indicating that users were discerning about which specific aspects of the platforms they endorsed. The pattern also suggested a discrepancy between participants’ perception of a given platform and their experiences interacting on it: users may, for example, trust other users to share reliable information while remaining skeptical of the platform’s ability to address misinformation.
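The article does not describe the authors’ analysis method, so the following is only a hedged illustration, using k-means on synthetic (trust, distrust) score pairs, of how such a high-trust/high-distrust group might surface in clustering.

```python
# Minimal sketch, not the authors' pipeline: the article does not say which
# analysis method was used. This illustrates how a high-trust/high-distrust
# group could surface from (trust, distrust) score pairs via k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic scores on a 1-5 scale -- purely illustrative data.
scores = np.clip(np.vstack([
    rng.normal([4.2, 1.8], 0.4, (60, 2)),   # high trust, low distrust
    rng.normal([1.9, 4.1], 0.4, (60, 2)),   # low trust, high distrust
    rng.normal([4.0, 4.0], 0.4, (60, 2)),   # high trust AND high distrust
]), 1.0, 5.0)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
for k in range(3):
    trust_mean, distrust_mean = scores[labels == k].mean(axis=0)
    print(f"cluster {k}: mean trust {trust_mean:.2f}, mean distrust {distrust_mean:.2f}")
```

The third group separates out only because trust and distrust are kept as two dimensions; collapsing them into a single trust-versus-distrust axis would hide it.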
The researchers also observed that trust and distrust perceptions varied across platforms and were influenced by demographic factors. These findings, they argued, may be useful to policymakers and regulators in tailoring interventions to users’ specific cultures and contexts.
As an HCI researcher, Zhang believes in human-centered computing and in collaboration across diverse disciplines. In addition to designing and implementing computational technologies, she became versed in educational and social science theories during her Ph.D. program.
Wang’s interests, too, lie in the interaction between humans and computers. She is now investigating the use of technology to address mental health concerns and to build trustworthy platforms that enhance users’ mental wellbeing.
“As we focus on human beings, we really want to know if our work can help them,” she said.