Abstract: Social media is increasingly used to spread breaking news and risk communications during disasters of all magnitudes. Unfortunately, due to the unmoderated nature of social media platforms such as Twitter, rumors and misinformation can propagate widely. Accordingly, a large body of research has studied false rumor diffusion on Twitter, especially during natural disasters. Within this domain, studies have also examined the misinformation control efforts of government organizations and other major agencies. A significant gap remains, however, in studying how misinformation on social media platforms can be monitored during disasters and other crisis events. Such studies would offer organizations and agencies new tools and methodologies for monitoring misinformation on platforms such as Twitter and for making informed decisions about whether to commit their resources to debunking it. In this work, we fill this gap by developing a machine learning framework to predict the veracity of tweets spread during crisis events. Tweets are tracked according to the veracity of their content, labeled as true, false, or neutral. We conduct four separate studies, and the results suggest that our framework can track multiple cases of misinformation simultaneously, with scores exceeding 87%; when tracking a single case of misinformation, it reaches a score of 83%. We collect 15,952 misinformation-related tweets from the Boston Marathon bombing (2013), the Manchester Arena bombing (2017), Hurricane Harvey (2017), Hurricane Irma (2017), and the Hawaii ballistic missile false alert (2018), and use them to train and evaluate the framework. This article provides novel insights into how to efficiently monitor misinformation spread during disasters.
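The abstract does not disclose which features or models the framework uses. As a minimal, hypothetical sketch of the underlying task (three-class tweet-veracity classification), one could start from TF-IDF features and logistic regression, for example with scikit-learn; the example tweets, labels, and hyperparameters below are invented for illustration:

    # Minimal sketch of a true / false / neutral tweet-veracity classifier.
    # Not the paper's actual framework; the training data here are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    tweets = [
        "Officials confirm the missile alert was sent in error",
        "Police report no explosives found at the second location",
        "The shelter is turning away anyone without identification",
        "Thinking of everyone affected by the storm tonight",
    ]
    labels = ["true", "true", "false", "neutral"]

    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(tweets, labels)
    print(model.predict(["Reports say the airport has closed all runways"]))

In practice such a model would be trained on many thousands of labeled tweets per event and evaluated on held-out data, which is presumably where the reported scores come from.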
Emotion and humor as misinformation antidotes
Many visible public debates over scientific issues are clouded in accusations of falsehood, which place increasing demands on citizens to distinguish fact from fiction. Yet, constraints on our ability to detect misinformation coupled with our inadvertent motivations to believe false science result in a high likelihood that we will form misperceptions. As science falsehoods are often presented with emotional appeals, we focus our perspective on the roles of emotion and humor in the formation of science attitudes, perceptions, and behaviors. Recent research sheds light on how funny science and emotions can help explain and potentially overcome our inability or lack of motivation to recognize and challenge misinformation. We identify some lessons learned from these related and growing areas of research and conclude with a brief discussion of the ethical considerations of using persuasive strategies, calling for more dialogue among members of the science communication community.
- Award ID(s): 1906864
- PAR ID: 10231796
- Date Published:
- Journal Name: Proceedings of the National Academy of Sciences
- Volume: 118
- Issue: 15
- ISSN: 0027-8424
- Page Range / eLocation ID: e2002484118
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Given the pervasiveness and dangers of misinformation, there has been a surge of research dedicated to uncovering predictors of and interventions for misinformation receptivity. One promising individual differences variable is intellectual humility (IH), which reflects a willingness to acknowledge the limitations of one’s views. Research has found that IH is correlated with less belief in misinformation, greater intentions to engage in evidence-based behaviors (e.g., receive vaccinations), and more actual engagement in evidence-based behaviors (e.g., take COVID-19 precautions). We sought to synthesize this growing area of research in a multi-level meta-analytic review (k = 27, S = 54, ES = 469, N = 33,814) to provide an accurate estimate of the relations between IH and misinformation receptivity and clarify potential sources of heterogeneity. We found that IH was related to less misinformation receptivity for beliefs (r = -.15, 95% CI [-.19, -.12]) and greater intentions to move away from misinformation (r = .13, 95% CI [.06, .19]) and behaviors that move people away from misinformation (r = .30, 95% CI [.24, .36]). Effect sizes were generally small, and moderator analyses revealed that effects were stronger for comprehensive (as opposed to narrow) measures of IH. These findings suggest that IH is one path for understanding resilience against misinformation, and we leverage our results to highlight pressing areas for future research focused on boundary conditions, risk factors, and causal implications.
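As background on how pooled correlations like these are typically obtained, the sketch below pools invented study-level correlations with a standard random-effects model (Fisher's z transform, DerSimonian-Laird between-study variance); it is an assumption about common practice, not the review's multi-level model or data:

    # Illustrative random-effects pooling of correlations (DerSimonian-Laird).
    # The r values and sample sizes are invented.
    import numpy as np

    r = np.array([-0.12, -0.20, -0.15, -0.10])   # hypothetical study correlations
    n = np.array([800, 450, 1200, 600])          # hypothetical sample sizes

    z = np.arctanh(r)                  # Fisher z transform
    v = 1.0 / (n - 3)                  # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(z) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)
    z_pooled = np.sum(w_star * z) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    lo, hi = z_pooled - 1.96 * se, z_pooled + 1.96 * se
    print("pooled r = %.3f, 95%% CI [%.3f, %.3f]"
          % (np.tanh(z_pooled), np.tanh(lo), np.tanh(hi)))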
Past work has explored various ways for online platforms to leverage crowd wisdom for misinformation detection and moderation. Yet, platforms often relegate governance to their communities, and limited research has been done from the perspective of these communities and their moderators. How is misinformation currently moderated in online communities that are heavily self-governed? What role does the crowd play in this process, and how can this process be improved? In this study, we answer these questions through semi-structured interviews with Reddit moderators. We focus on a case study of COVID-19 misinformation. First, our analysis identifies a general moderation workflow model encompassing various processes participants use for handling COVID-19 misinformation. Further, we show that the moderation workflow revolves around three elements: content facticity, user intent, and perceived harm. Next, our interviews reveal that Reddit moderators rely on two types of crowd wisdom for misinformation detection. Almost all participants are heavily reliant on reports from crowds of ordinary users to identify potential misinformation. A second crowd, participants' own moderation teams and expert moderators of other communities, provides support when participants encounter difficult, ambiguous cases. Finally, we use design probes to better understand how different types of crowd signals (from ordinary users and moderators) readily available on Reddit can assist moderators with identifying misinformation. We observe that nearly half of all participants preferred these cues over labels from expert fact-checkers because these cues can help them discern user intent. Additionally, a quarter of the participants distrust professional fact-checkers, raising important concerns about misinformation moderation.
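Purely as a hypothetical illustration of how the three workflow elements identified above (content facticity, user intent, and perceived harm) might feed a triage decision, consider the sketch below; the study is qualitative and proposes no such scoring rule, so every threshold and action here is invented:

    # Hypothetical triage rule built on the three workflow elements from the
    # interviews. Thresholds and actions are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class ReportedPost:
        facticity: float       # 0 = clearly false .. 1 = clearly factual
        bad_intent: bool       # poster judged to be deliberately deceptive
        perceived_harm: float  # 0 = harmless .. 1 = severe real-world harm

    def triage(post: ReportedPost) -> str:
        if post.facticity < 0.3 and (post.perceived_harm > 0.7 or post.bad_intent):
            return "remove"
        if post.facticity < 0.5:
            return "escalate to moderation team / expert moderators"
        return "keep"

    print(triage(ReportedPost(facticity=0.2, bad_intent=False, perceived_harm=0.9)))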
Abstract: Misinformation online poses a range of threats, from subverting democratic processes to undermining public health measures. Proposed solutions range from encouraging more selective sharing by individuals to removing false content and accounts that create or promote it. Here we provide a framework to evaluate interventions aimed at reducing viral misinformation online both in isolation and when used in combination. We begin by deriving a generative model of viral misinformation spread, inspired by research on infectious disease. By applying this model to a large corpus (10.5 million tweets) of misinformation events that occurred during the 2020 US election, we reveal that commonly proposed interventions are unlikely to be effective in isolation. However, our framework demonstrates that a combined approach can achieve a substantial reduction in the prevalence of misinformation. Our results highlight a practical path forward as misinformation online continues to threaten vaccination efforts, equity and democratic processes around the globe.
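The fitted generative model itself is not given in the abstract; as a rough, assumed sketch of the general idea (an epidemic-style resharing cascade whose reach shrinks when an intervention removes a fraction of reshares), one could simulate:

    # Toy branching-process cascade with a removal intervention. All parameters
    # are invented; this is not the paper's model or its election-tweet data.
    import numpy as np

    def cascade_size(r0=1.5, removal_prob=0.0, cap=100_000, seed=0):
        rng = np.random.default_rng(seed)
        active, total = 10, 10                                 # start from 10 seed tweets
        while active and total < cap:
            reshares = rng.poisson(r0, size=active)            # potential reshares
            kept = rng.binomial(reshares, 1.0 - removal_prob)  # intervention removes some
            active = int(kept.sum())
            total += active
        return total

    print("no intervention:", cascade_size(removal_prob=0.0))
    print("50% removal:   ", cascade_size(removal_prob=0.5))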
While COVID-19 text misinformation has already been investigated by various scholars, fewer research efforts have been devoted to characterizing and understanding COVID-19 misinformation conveyed through visuals like photographs and memes. In this paper, we present a mixed-method analysis of image-based COVID-19 misinformation on Twitter in 2020. We deploy a computational pipeline to identify COVID-19-related tweets, download the images contained in them, and group together visually similar images. We then develop a codebook to characterize COVID-19 misinformation and manually label images as misinformation or not. Finally, we perform a quantitative analysis of tweets containing COVID-19 misinformation images. We identify five types of COVID-19 misinformation, from a wrong understanding of the threat severity of COVID-19 to the promotion of fake cures and conspiracy theories. We also find that tweets containing COVID-19 misinformation images do not receive more interactions than baseline tweets with random images posted by the same set of users. As for temporal properties, COVID-19 misinformation images are shared for longer periods of time than non-misinformation ones and have longer burst times. When looking at the users sharing COVID-19 misinformation images on Twitter from the perspective of their political leanings, we find that pro-Democrat and pro-Republican users share a similar number of tweets containing misleading or false COVID-19 images. However, the types of images they share differ: while pro-Democrat users focus on misleading claims about the Trump administration's response to the pandemic and often share manipulated images intended as satire, pro-Republican users often promote hydroxychloroquine, an ineffective medicine against COVID-19, as well as conspiracy theories about the origin of the virus. Our analysis sets a basis for better understanding COVID-19 misinformation images on social media and the nuances of effectively moderating them.
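The abstract does not say how the pipeline groups visually similar images; one common approach, assumed here purely for illustration, is perceptual hashing. The sketch below uses a simple 8x8 average hash and a Hamming-distance threshold; the file paths are placeholders:

    # Illustrative grouping of near-duplicate images via an 8x8 average hash.
    # Not the paper's pipeline; the paths below are placeholders.
    from PIL import Image

    def average_hash(path, size=8):
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # placeholder files
    hashes = {p: average_hash(p) for p in paths}

    # Greedy grouping: join a group if within 10 bits of its first member.
    groups = []
    for p, h in hashes.items():
        for group in groups:
            if hamming(h, hashes[group[0]]) <= 10:
                group.append(p)
                break
        else:
            groups.append([p])
    print(groups)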