

Title: COVIDLies: Detecting COVID-19 Misinformation on Social Media
The ongoing pandemic has heightened the need for developing tools to flag COVID-19-related misinformation on the internet, specifically on social media such as Twitter. However, due to novel language and the rapid change of information, existing misinformation detection datasets are not effective for evaluating systems designed to detect misinformation on this topic. Misinformation detection can be divided into two sub-tasks: (i) retrieval of misconceptions relevant to posts being checked for veracity, and (ii) stance detection to identify whether the posts Agree, Disagree, or express No Stance towards the retrieved misconceptions. To facilitate research on this task, we release COVIDLies (https://ucinlp.github.io/covid19), a dataset of 6761 expert-annotated tweets to evaluate the performance of misinformation detection systems on 86 different pieces of COVID-19-related misinformation. We evaluate existing NLP systems on this dataset, providing initial benchmarks and identifying key challenges for future models to improve upon.
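The retrieval-plus-stance pipeline described above can be sketched end to end in a few lines. The snippet below is a minimal illustration, not the system evaluated in the paper: retrieval is plain bag-of-words cosine similarity rather than a learned retriever, the stance function is a keyword heuristic standing in for a trained classifier, and the misconception list and tweet are invented examples.

```python
import math
from collections import Counter

# Invented example misconceptions (the real dataset has 86).
MISCONCEPTIONS = [
    "5G networks spread the coronavirus.",
    "Drinking bleach cures COVID-19.",
    "Masks do not reduce transmission of COVID-19.",
]

def _vec(text):
    """Bag-of-words term counts for a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(tweet, misconceptions=MISCONCEPTIONS):
    """Sub-task (i): return the misconception most similar to the tweet."""
    tv = _vec(tweet)
    return max(misconceptions, key=lambda m: cosine(tv, _vec(m)))

def stance(tweet, misconception):
    """Sub-task (ii): Agree / Disagree / No Stance.
    A keyword heuristic standing in for a trained stance model."""
    t = tweet.lower()
    if any(w in t for w in ("false", "myth", "debunked", "no evidence")):
        return "Disagree"
    if any(w in t for w in ("true", "confirmed", "proof")):
        return "Agree"
    return "No Stance"

tweet = "There is no evidence that 5G networks spread the coronavirus."
m = retrieve(tweet)
print(m, "->", stance(tweet, m))  # → 5G networks spread the coronavirus. -> Disagree
```

In a realistic system the cosine retriever would be replaced by sentence embeddings and the heuristic by a classifier fine-tuned for natural language inference, but the two-stage structure stays the same.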
Award ID(s):
1817183
NSF-PAR ID:
10291543
Journal Name:
Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Perceived experts (i.e., medical professionals and biomedical scientists) are trusted sources of medical information who are especially effective at encouraging vaccine uptake. The role of perceived experts acting as potential antivaccine influencers has not been characterized systematically. We describe the prevalence and importance of antivaccine perceived experts by constructing a coengagement network of 7,720 accounts based on a Twitter data set containing over 4.2 million posts from April 2021. The coengagement network primarily broke into two large communities that differed in their stance toward COVID-19 vaccines, and misinformation was predominantly shared by the antivaccine community. Perceived experts had a sizable presence across the coengagement network, including within the antivaccine community, where they were 9.8% of individual English-language users. Perceived experts within the antivaccine community shared low-quality (misinformation) sources at similar rates and academic sources at higher rates compared to perceived nonexperts in that community. Perceived experts occupied important network positions as central antivaccine users and bridges between the antivaccine and provaccine communities. Using propensity score matching, we found that perceived expertise brought an influence boost, as perceived experts were significantly more likely to receive likes and retweets in both the antivaccine and provaccine communities. There was no significant difference in the magnitude of the influence boost for perceived experts between the two communities. Social media platforms, scientific communications, and biomedical organizations may focus on more systemic interventions to reduce the impact of perceived experts in spreading antivaccine misinformation.
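Propensity score matching, the technique used above to estimate the influence boost, can be illustrated on synthetic data: fit a model of the probability of "treatment" (here, being a perceived expert) from observed covariates, pair each treated unit with the control whose estimated propensity is closest, and compare outcomes across matched pairs. The sketch below is a generic illustration, not the paper's analysis: the covariates, coefficients, and engagement numbers are invented, and the propensity model is a hand-rolled logistic regression.

```python
import math
import random

random.seed(0)

# Synthetic accounts: covariates (followers, account age) influence both
# perceived-expert status and engagement (likes). The true expert "boost"
# baked into the simulation is +3 likes.
def make_account():
    followers = random.gauss(0, 1)
    age = random.gauss(0, 1)
    expert = random.random() < 1 / (1 + math.exp(-(followers + age)))
    likes = 5 + 2 * followers + age + (3 if expert else 0) + random.gauss(0, 1)
    return (followers, age), expert, likes

data = [make_account() for _ in range(2000)]

# 1. Fit a logistic-regression propensity model P(expert | covariates)
#    with plain batch gradient descent.
w = [0.0, 0.0, 0.0]  # bias, followers, age
for _ in range(200):
    grad = [0.0, 0.0, 0.0]
    for (f, a), expert, _likes in data:
        p = 1 / (1 + math.exp(-(w[0] + w[1] * f + w[2] * a)))
        err = p - (1 if expert else 0)
        grad[0] += err
        grad[1] += err * f
        grad[2] += err * a
    for i in range(3):
        w[i] -= 0.1 * grad[i] / len(data)

def propensity(f, a):
    return 1 / (1 + math.exp(-(w[0] + w[1] * f + w[2] * a)))

experts = [d for d in data if d[1]]
controls = [d for d in data if not d[1]]

# 2. Match each expert to the control with the closest propensity score
#    (greedy, with replacement), then average the outcome differences.
diffs = []
for (f, a), _, likes in experts:
    p = propensity(f, a)
    _, _, clikes = min(controls, key=lambda c: abs(propensity(*c[0]) - p))
    diffs.append(likes - clikes)

boost = sum(diffs) / len(diffs)
print(f"estimated engagement boost for perceived experts: {boost:.2f}")
```

Because the matching balances the covariates that drive both expertise and engagement, the estimate recovers something close to the simulated +3 boost, where a naive expert-vs-nonexpert comparison would be confounded.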

  2. Misinformation runs rampant on social media and has been tied to adverse health behaviors such as vaccine hesitancy. Crowdsourcing can be a means to detect and impede the spread of misinformation online. However, past studies have not deeply examined the individual characteristics, such as cognitive factors and biases, that predict crowdworker accuracy at identifying misinformation. In our study (n = 265), Amazon Mechanical Turk (MTurk) workers and university students assessed the truthfulness and sentiment of COVID-19-related tweets and answered several surveys on personal characteristics. Results support the viability of crowdsourcing for assessing misinformation and content stance (i.e., sentiment) related to ongoing and politically charged topics like the COVID-19 pandemic; however, alignment with experts depends on who is in the crowd. Specifically, we find that respondents with high Cognitive Reflection Test (CRT) scores, conscientiousness, and trust in medical scientists are more aligned with experts, while respondents with high Need for Cognitive Closure (NFCC) and those who lean politically conservative are less aligned with experts. We also see differences between recruitment platforms: our data show that university students are on average more aligned with experts than MTurk workers, most likely due to overall differences in participant characteristics on each platform. Results offer transparency into how crowd composition affects misinformation and stance assessment and have implications for future crowd recruitment and filtering practices.
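The "alignment with experts" notion above can be made concrete as simple per-respondent label agreement against an expert-annotated gold set. A toy sketch with invented labels (not the study's instrument or data):

```python
# Hypothetical veracity labels: expert gold labels for five tweets,
# plus the labels two invented respondents assigned to the same tweets.
expert = ["true", "false", "false", "true", "false"]
resp_high_crt = ["true", "false", "false", "true", "true"]
resp_low_crt = ["false", "false", "true", "true", "true"]

def alignment(labels, gold):
    """Fraction of items where a respondent's label matches the expert label."""
    matches = sum(l == g for l, g in zip(labels, gold))
    return matches / len(gold)

print(alignment(resp_high_crt, expert))  # → 0.8
print(alignment(resp_low_crt, expert))   # → 0.4
```

A study like the one above would then correlate these per-respondent alignment scores with measured traits (CRT, NFCC, conscientiousness) across the whole sample.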
  3. Redbird, Beth ; Harbridge-Yong, Laurel ; Mersey, Rachel Davis (Ed.)
    In our analysis, we examine whether the labelling of social media posts as misinformation affects the subsequent sharing of those posts by social media users. Conventional understandings of the presentation-of-self and work in cognitive psychology provide different understandings of whether labelling misinformation in social media posts will reduce sharing behavior. Part of the problem with understanding whether interventions will work hinges on how closely social media interactions mirror other interpersonal interactions with friends and associates in the off-line world. Our analysis looks at rates of misinformation labelling during the height of the COVID-19 pandemic on Facebook and Twitter, and then assesses whether sharing behavior is deterred by misinformation labels applied to social media posts. Our results suggest that labelling is relatively successful at lowering sharing behavior, and we discuss how our results contribute to a larger understanding of the role of existing inequalities and government responses to crises like the COVID-19 pandemic.
  4. Crises such as the COVID-19 pandemic continuously threaten our world and emotionally affect billions of people worldwide in distinct ways. Understanding the triggers leading to people’s emotions is of crucial importance. Social media posts can be a good source of such analysis, yet these texts tend to be charged with multiple emotions, with triggers scattering across multiple sentences. This paper takes a novel angle, namely, emotion detection and trigger summarization, aiming to both detect perceived emotions in text, and summarize events and their appraisals that trigger each emotion. To support this goal, we introduce CovidET (Emotions and their Triggers during Covid-19), a dataset of ~1,900 English Reddit posts related to COVID-19, which contains manual annotations of perceived emotions and abstractive summaries of their triggers described in the post. We develop strong baselines to jointly detect emotions and summarize emotion triggers. Our analyses show that CovidET presents new challenges in emotion-specific summarization, as well as multi-emotion detection in long social media posts. 
  5. Data visualizations can empower an audience to make informed decisions. At the same time, deceptive representations of data can lead to inaccurate interpretations while still providing an illusion of data-driven insights. Existing research on misleading visualizations primarily focuses on examples of charts and techniques previously reported to be deceptive. These approaches do not necessarily describe how charts mislead the general population in practice. We instead present an analysis of data visualizations found in a real-world discourse of a significant global event: Twitter posts with visualizations related to the COVID-19 pandemic. Our work shows that, contrary to conventional wisdom, violations of visualization design guidelines are not the dominant way people mislead with charts. Specifically, they do not disproportionately lead to reasoning errors in posters' arguments. Through a series of examples, we present common reasoning errors and discuss how even faithfully plotted data visualizations can be used to support misinformation online.