Are social media misinformation countermeasures working as intended? It depends, according to a new study led by researchers at the College of William & Mary and published in the proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24).
Their study surveyed over 1,700 participants in the United States and found that anti-misinformation features increased users' awareness of misinformation on social media, but did not make them more likely to share information or more willing to receive information from the platforms. Trust and distrust coexisted among participants and emerged as distinct constructs, not simply as two poles of a single spectrum.
“The dynamics of trust and distrust are fundamental to society,” says Yixuan Zhang (Janice), an assistant professor in the Department of Computer Science at the College of William & Mary. The study, funded by an unrestricted donation from Google, also defines and measures these concepts, providing a validated survey instrument for future use.
Zhang served as lead author along with Yimeng (Yvonne) Wang, a doctoral student in computer science at W&M. The author group included researchers from universities in three countries who are contributing to the interdisciplinary field of human-computer interaction.
“HCI is deeply connected to equitable computing because it is about human subjects,” says Zhang. Her HCI expertise aligns with William & Mary’s commitment to advancing the liberal arts and sciences, aptly expressed by its proposed school of computing, data sciences, and physics.
The study focuses on Facebook, X (formerly Twitter), YouTube and TikTok as common sources of news and information, and specifically covers the period from January 2017 to January 2023, which coincides with the rise of mass misinformation campaigns.
During the study period, these platforms all implemented counter-misinformation strategies such as labeling false information, curating trustworthy content, and linking to additional sources of information. Examples of these interventions were shown to study participants who had recently used the platforms.
Respondents were then asked to express their level of agreement with eight statements measuring four dimensions of trust and four dimensions of distrust.
For example, statements on the trust dimension of “competence” explored users' confidence in a platform's ability to combat misinformation, while statements on the distrust dimension of “malice” assessed users' belief that the platform itself spreads misinformation. The other trust dimensions were benevolence, reliability, and trustworthiness; the other distrust dimensions were skepticism, dishonesty, and fear.
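To make this kind of two-dimensional measurement concrete, here is a minimal Python sketch. The dimension labels follow the article's description, but the item keys, the 1-5 Likert scale, and the simple averaging are illustrative assumptions, not the authors' published instrument.

```python
# Illustrative sketch, not the study's actual instrument: score eight Likert
# items into separate trust and distrust averages, treating the two as
# independent dimensions rather than opposite ends of a single scale.

# Hypothetical item keys, one per dimension named in the article.
TRUST_DIMS = ["competence", "benevolence", "reliability", "trustworthiness"]
DISTRUST_DIMS = ["malice", "skepticism", "dishonesty", "fear"]

def score_respondent(responses: dict[str, int]) -> tuple[float, float]:
    """Average 1-5 agreement separately over trust and distrust items."""
    trust = sum(responses[d] for d in TRUST_DIMS) / len(TRUST_DIMS)
    distrust = sum(responses[d] for d in DISTRUST_DIMS) / len(DISTRUST_DIMS)
    return trust, distrust  # two scores are kept; never collapsed into one

# A respondent can plausibly score high on both dimensions at once.
example = {"competence": 4, "benevolence": 4, "reliability": 5,
           "trustworthiness": 4, "malice": 4, "skepticism": 5,
           "dishonesty": 4, "fear": 3}
print(score_respondent(example))  # -> (4.25, 4.0)
```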
Additionally, the study explored how specific counter-misinformation interventions relate to users’ trust and distrust in social media, and how experiences with these features influenced users’ attitudes and behaviors.
Analysis of the results revealed clusters of respondents with both high trust and high distrust, which may indicate that users are discerning about which aspects of a platform they place their confidence in. It also suggests a gap between participants' perceptions of a platform and their experiences interacting with it: users may, for example, trust other people to share credible information while remaining skeptical of the platform's ability to address misinformation.
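To illustrate how such clusters can surface in scored data, here is a hedged sketch using synthetic (trust, distrust) score pairs and k-means clustering from scikit-learn. The data, the choice of three clusters, and the method are assumptions for illustration, not the study's analysis.

```python
# Minimal sketch, not the study's actual analysis: cluster respondents'
# (trust, distrust) score pairs. If trust and distrust were two poles of one
# spectrum, points would fall along a single diagonal; a distinct
# high-trust/high-distrust cluster indicates they are separate dimensions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic data: three hypothetical attitude profiles on a 1-5 scale.
high_trust_low_distrust = rng.normal([4.3, 1.8], 0.3, size=(60, 2))
low_trust_high_distrust = rng.normal([1.9, 4.2], 0.3, size=(60, 2))
high_trust_high_distrust = rng.normal([4.1, 4.0], 0.3, size=(60, 2))
scores = np.clip(np.vstack([high_trust_low_distrust,
                            low_trust_high_distrust,
                            high_trust_high_distrust]), 1, 5)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
for k in range(3):
    trust_mean, distrust_mean = scores[labels == k].mean(axis=0)
    print(f"cluster {k}: trust≈{trust_mean:.1f}, distrust≈{distrust_mean:.1f}")
```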
The researchers also observed that perceptions of trust and distrust vary across platforms and are influenced by demographic factors. These findings could help policymakers and regulators tailor interventions to users' specific cultures and contexts, the researchers argued.
As an HCI researcher, Zhang believes in human-centered computing and cross-disciplinary collaboration. During her doctoral studies, she became well versed in the design and implementation of computing technologies, as well as in theories from education and the social sciences.
Wang's interests also lie in human-computer interaction, and she is currently investigating the use of technology to address mental health issues and building a trusted platform for users to improve their mental wellbeing.
“Because we're focused on people, we really want to know if our work can help them,” she said.
Antonella Di Marzio, Senior Research Writer