- Award ID(s):
- 2027375
- PAR ID:
- 10302789
- Date Published:
- Journal Name:
- Frontiers in Political Science
- Volume:
- 3
- ISSN:
- 2673-3145
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- The global spread of the novel coronavirus is affected by the spread of related misinformation—the so-called COVID-19 Infodemic—that makes populations more vulnerable to the disease through resistance to mitigation efforts. Here, we analyze the prevalence and diffusion of links to low-credibility content about the pandemic across two major social media platforms, Twitter and Facebook. We characterize cross-platform similarities and differences in popular sources, diffusion patterns, influencers, coordination, and automation. Comparing the two platforms, we find divergence in the prevalence of popular low-credibility sources and suspicious videos. A minority of accounts and pages exert a strong influence on each platform. These misinformation "superspreaders" are often associated with the low-credibility sources and tend to be verified by the platforms. On both platforms, there is evidence of coordinated sharing of Infodemic content. The overt nature of this manipulation points to the need for societal-level solutions in addition to mitigation strategies within the platforms. However, we highlight limits imposed by inconsistent data-access policies on our capability to study harmful manipulations of information ecosystems.
- We show that malicious COVID-19 content, including racism, disinformation, and misinformation, exploits the multiverse of online hate to spread quickly beyond the control of any individual social media platform. We provide a first mapping of the online hate network across six major social media platforms. We demonstrate how malicious content can travel across this network in ways that subvert platform moderation efforts. Machine learning topic analysis shows quantitatively how online hate communities are sharpening COVID-19 as a weapon, with topics evolving rapidly and content becoming increasingly coherent. Based on mathematical modeling, we provide predictions of how changes to content moderation policies can slow the spread of malicious content.
- Background. Vaccine misinformation has been widely spread on social media, but attempts to combat it have not taken advantage of the attributes of social media platforms for health education. Methods. The objective was to test the efficacy of moderated social media discussions about COVID-19 vaccines in private Facebook groups. Unvaccinated U.S. adults were recruited using Amazon's Mechanical Turk and randomized. In the intervention group, moderators posted two informational posts per day for 4 weeks and engaged in relationship-building interactions with group members. In the control group, participants received a referral to Facebook's COVID-19 Information Center. Follow-up surveys with participants (N = 478) were conducted 6 weeks post-enrollment. Results. At 6 weeks follow-up, no differences were found in vaccination rates. Intervention participants were more likely to show improvements in their COVID-19 vaccination intentions (vs. staying the same or declining) compared with control (p = .03). They also improved more in their intentions to encourage others to vaccinate for COVID-19. There were no differences in COVID-19 vaccine confidence or intentions between groups. General vaccine confidence and sense of responsibility to vaccinate were higher in the intervention group compared with control. Most participants in the intervention group reported high levels of satisfaction. Participants engaged with content (e.g., commented, reacted) 11.8 times on average over the course of 4 weeks. Conclusions. Engaging with vaccine-hesitant individuals in private Facebook groups improved some COVID-19 vaccine-related beliefs and represents a promising strategy.
- Redbird, Beth; Harbridge-Yong, Laurel; Mersey, Rachel Davis (Ed.) In our analysis, we examine whether the labelling of social media posts as misinformation affects the subsequent sharing of those posts by social media users. Conventional understandings of the presentation-of-self and work in cognitive psychology provide different understandings of whether labelling misinformation in social media posts will reduce sharing behavior. Part of the problem with understanding whether interventions will work hinges on how closely social media interactions mirror other interpersonal interactions with friends and associates in the off-line world. Our analysis looks at rates of misinformation labelling during the height of the COVID-19 pandemic on Facebook and Twitter, and then assesses whether sharing behavior is deterred by misinformation labels applied to social media posts. Our results suggest that labelling is relatively successful at lowering sharing behavior, and we discuss how our results contribute to a larger understanding of the role of existing inequalities and government responses to crises like the COVID-19 pandemic.
- Misinformation about the COVID-19 pandemic proliferated widely on social media platforms during the course of the health crisis. Experts have speculated that consuming misinformation online can potentially worsen the mental health of individuals, by causing heightened anxiety, stress, and even suicidal ideation. The present study aims to quantify the causal relationship between sharing misinformation, a strong indicator of consuming misinformation, and experiencing exacerbated anxiety. We conduct a large-scale observational study spanning over 80 million Twitter posts made by 76,985 Twitter users during an 18.5 month period. The results from this study demonstrate that users who shared COVID-19 misinformation experienced approximately twice the increase in anxiety experienced by similar users who did not share misinformation. Socio-demographic analysis reveals that women, racial minorities, and individuals with lower levels of education in the United States experienced a disproportionately higher increase in anxiety when compared to the other users. These findings shed light on the mental health costs of consuming online misinformation. The work bears practical implications for social media platforms in curbing the adverse psychological impacts of misinformation, while also upholding the ethos of an online public sphere.
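The "approximately twice the increase" comparison above can be illustrated with a toy calculation. This is a minimal sketch with entirely hypothetical numbers; it does not reproduce the study's matching procedure or its anxiety-scoring method, only the shape of the group-level comparison:

```python
# Toy illustration: comparing the average anxiety increase of users who
# shared misinformation against matched users who did not.
# All scores below are hypothetical, not data from the study.

def mean(xs):
    return sum(xs) / len(xs)

def anxiety_increase(before, after):
    """Change in mean anxiety score from the pre-period to the post-period."""
    return mean(after) - mean(before)

# Hypothetical per-user anxiety scores (e.g., from a text classifier).
sharers_before = [0.30, 0.35, 0.32, 0.28]
sharers_after = [0.50, 0.55, 0.52, 0.48]

nonsharers_before = [0.31, 0.33, 0.30, 0.29]
nonsharers_after = [0.41, 0.43, 0.40, 0.39]

inc_sharers = anxiety_increase(sharers_before, sharers_after)
inc_nonsharers = anxiety_increase(nonsharers_before, nonsharers_after)

# A ratio near 2 would correspond to the "approximately twice the
# increase" finding the abstract reports.
ratio = inc_sharers / inc_nonsharers
```

In the actual study, the comparison group would be selected by matching on covariates rather than taken at face value, which is what licenses the causal framing.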