

Title: Understanding the Use of Images to Spread COVID-19 Misinformation on Twitter
While COVID-19 text misinformation has already been investigated by various scholars, fewer research efforts have been devoted to characterizing and understanding COVID-19 misinformation that is spread through visuals like photographs and memes. In this paper, we present a mixed-method analysis of image-based COVID-19 misinformation in 2020 on Twitter. We deploy a computational pipeline to identify COVID-19 related tweets, download the images contained in them, and group together visually similar images. We then develop a codebook to characterize COVID-19 misinformation and manually label images as misinformation or not. Finally, we perform a quantitative analysis of tweets containing COVID-19 misinformation images. We identify five types of COVID-19 misinformation, ranging from misrepresentations of the severity of the COVID-19 threat to the promotion of fake cures and conspiracy theories. We also find that tweets containing COVID-19 misinformation images do not receive more interactions than baseline tweets with random images posted by the same set of users. As for temporal properties, COVID-19 misinformation images are shared for longer periods of time than non-misinformation ones and have longer burst times. When looking at the users sharing COVID-19 misinformation images on Twitter from the perspective of their political leanings, we find that pro-Democrat and pro-Republican users share a similar number of tweets containing misleading or false COVID-19 images. However, the types of images that they share are different: while pro-Democrat users focus on misleading claims about the Trump administration's response to the pandemic, as well as often sharing manipulated images intended as satire, pro-Republican users often promote hydroxychloroquine, an ineffective medicine against COVID-19, as well as conspiracy theories about the origin of the virus. Our analysis sets a basis for better understanding COVID-19 misinformation images on social media and the nuances of effectively moderating them.
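The abstract mentions a pipeline step that groups together visually similar images. The paper itself does not give implementation details here, but a common way to do this is perceptual hashing followed by threshold-based grouping. The sketch below, using the third-party `imagehash` and Pillow packages, is only illustrative of that general technique; the hash function, the Hamming-distance cutoff, and the file layout are assumptions, not the authors' code.

```python
# Minimal sketch: group downloaded tweet images by perceptual-hash similarity.
# Assumes images were saved as .jpg files in one directory; the threshold of
# 8 bits is an illustrative choice, not a value taken from the paper.
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 8  # assumed cutoff for "visually similar"

def group_similar_images(image_dir: str) -> list[list[str]]:
    """Group images whose perceptual hashes differ by at most HAMMING_THRESHOLD bits."""
    hashes = []
    for path in Path(image_dir).glob("*.jpg"):
        try:
            hashes.append((str(path), imagehash.phash(Image.open(path))))
        except OSError:
            continue  # skip unreadable or corrupt downloads

    groups: list[list[tuple]] = []
    for path, h in hashes:
        for group in groups:
            # compare against the group's first image as its representative
            if h - group[0][1] <= HAMMING_THRESHOLD:
                group.append((path, h))
                break
        else:
            groups.append([(path, h)])
    return [[p for p, _ in group] for group in groups]
```

With groups formed this way, each cluster can be annotated once (e.g., misinformation or not) and the label propagated to all tweets sharing a visually similar image, which is consistent with the codebook-based manual labeling the abstract describes.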
Award ID(s):
2200052 1942610 2114407
PAR ID:
10460889
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
7
Issue:
CSCW1
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 32
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Jonason, Peter Karl (Ed.)
    At the time of writing, nearly one hundred published studies demonstrate that beliefs in COVID-19 conspiracy theories and misinformation are negatively associated with COVID-19 preventive behaviors. These correlational findings are often interpreted as evidence that beliefs in conspiracy theories and misinformation are exogenous factors that shape human behavior, such as forgoing vaccination. This interpretation has motivated researchers to develop methods for “prebunking,” “debunking,” or otherwise limiting the spread of conspiracy theories and misinformation online. However, the robust literatures on conspiracy theory beliefs, health behaviors, and media effects lead us to question whether beliefs in conspiracy theories and misinformation should be treated as exogenous to vaccine hesitancy and refusal. Employing U.S. survey data (n = 2,065) from July 2021, we show that beliefs in COVID-19 conspiracy theories and misinformation are not only related to COVID-19 vaccine hesitancy and refusal, but also strongly associated with the same psychological, social, and political motivations theorized to drive COVID-19 vaccine hesitancy and refusal. These findings suggest that beliefs in conspiracy theories and misinformation might not always be an exogenous cause, but rather a manifestation of the same factors that lead to vaccine hesitancy and refusal. We conclude by encouraging researchers to carefully consider modeling choices and imploring practitioners to refocus on the worldviews, personality traits, and political orientations that underlie both health-related behaviors and beliefs in conspiracy theories and misinformation. 
  2. Online misinformation is believed to have contributed to vaccine hesitancy during the Covid-19 pandemic, highlighting concerns about social media’s destabilizing role in public life. Previous research identified a link between political conservatism and sharing misinformation; however, it is not clear how partisanship affects how much misinformation people see online. As a result, we do not know whether partisanship drives exposure to misinformation or people selectively share misinformation despite being exposed to factual content. To address this question, we study Twitter discussions about the Covid-19 pandemic, classifying users along the political and factual spectrum based on the information sources they share. In addition, we quantify exposure through retweet interactions. We uncover partisan asymmetries in the exposure to misinformation: conservatives are more likely to see and share misinformation, and while users’ connections expose them to ideologically congruent content, the interactions between political and factual dimensions create conditions for the highly polarized users—hardline conservatives and liberals—to amplify misinformation. Overall, however, misinformation receives less attention than factual content and political moderates, the bulk of users in our sample, help filter out misinformation. Identifying the extent of polarization and how political ideology exacerbates misinformation can help public health experts and policy makers improve their messaging. 
Abstract Misinformation about the COVID-19 pandemic proliferated widely on social media platforms during the course of the health crisis. Experts have speculated that consuming misinformation online can potentially worsen the mental health of individuals, by causing heightened anxiety, stress, and even suicidal ideation. The present study aims to quantify the causal relationship between sharing misinformation, a strong indicator of consuming misinformation, and experiencing exacerbated anxiety. We conduct a large-scale observational study spanning over 80 million Twitter posts made by 76,985 Twitter users during an 18.5-month period. The results from this study demonstrate that users who shared COVID-19 misinformation experienced approximately twice the increase in anxiety compared to similar users who did not share misinformation. Socio-demographic analysis reveals that women, racial minorities, and individuals with lower levels of education in the United States experienced a disproportionately higher increase in anxiety when compared to the other users. These findings shed light on the mental health costs of consuming online misinformation. The work bears practical implications for social media platforms in curbing the adverse psychological impacts of misinformation, while also upholding the ethos of an online public sphere.
  4. Redbird, Beth ; Harbridge-Yong, Laurel ; Mersey, Rachel Davis (Ed.)
In our analysis, we examine whether the labelling of social media posts as misinformation affects the subsequent sharing of those posts by social media users. Conventional understandings of the presentation-of-self and work in cognitive psychology provide different understandings of whether labelling misinformation in social media posts will reduce sharing behavior. Part of the problem with understanding whether interventions will work hinges on how closely social media interactions mirror other interpersonal interactions with friends and associates in the off-line world. Our analysis looks at rates of misinformation labelling during the height of the COVID-19 pandemic on Facebook and Twitter, and then assesses whether sharing behavior is deterred by misinformation labels applied to social media posts. Our results suggest that labelling is relatively successful at lowering sharing behavior, and we discuss how our results contribute to a larger understanding of the role of existing inequalities and government responses to crises like the COVID-19 pandemic.
  5. Attempting to make sense of a phenomenon or crisis, social media users often share data visualizations and interpretations that can be erroneous or misleading. Prior work has studied how data visualizations can mislead, but do misleading visualizations reach a broad social media audience? And if so, do users amplify or challenge misleading interpretations? To answer these questions, we conducted a mixed-methods analysis of the public's engagement with data visualization posts about COVID-19 on Twitter. Compared to posts with accurate visual insights, our results show that posts with misleading visualizations garner more replies in which the audiences point out nuanced fallacies and caveats in data interpretations. Based on the results of our thematic analysis of engagement, we identify and discuss important opportunities and limitations to effectively leveraging crowdsourced assessments to address data-driven misinformation. 