


Title: Misinformation debunking and cross-platform information sharing through Twitter during Hurricanes Harvey and Irma: a case study on shelters and ID checks
As the internet and social media are increasingly used to share breaking news and important updates, there is strong motivation to study the behavior of online users during crisis events. One of the biggest issues with obtaining information online is the veracity of such content. Given this vulnerability, misinformation becomes a very dangerous and real threat when spread online. This study investigates misinformation debunking efforts and fills the research gap on cross-platform information sharing when misinformation is spread during disasters. The false rumor "immigration status is checked at shelters," which spread during both Hurricane Harvey and Hurricane Irma in 2017, is analyzed in this paper based on a collection of 12,900 tweets. By studying the rumor control efforts made by thousands of accounts, we found that Twitter users respond to and interact most with tweets from verified Twitter accounts, especially those of government organizations. Results of the sourcing analysis show that the majority of Twitter users who include URLs in their postings employ the information in those URLs to help debunk the false rumor. The most frequently cited information comes from news agencies, in analyses of both URLs and domains. This paper provides novel insights into rumor control efforts made through social media during natural disasters, as well as the information sourcing and sharing behaviors that users exhibit while debunking false rumors.
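The sourcing analysis described above tallies which domains are cited in debunking tweets. A minimal sketch of that kind of tally, using only the Python standard library (the sample URLs are hypothetical, not data from the study):

```python
from collections import Counter
from urllib.parse import urlparse

def domain_counts(tweet_urls):
    """Tally the domains cited across a collection of tweet URLs."""
    counts = Counter()
    for url in tweet_urls:
        netloc = urlparse(url).netloc.lower()
        # Strip a leading "www." so www.cnn.com and cnn.com count together.
        if netloc.startswith("www."):
            netloc = netloc[4:]
        if netloc:
            counts[netloc] += 1
    return counts

# Hypothetical URLs of the kind shared in debunking tweets.
urls = [
    "https://www.cnn.com/2017/harvey-shelters",
    "https://cnn.com/2017/irma-update",
    "https://www.fema.gov/rumor-control",
]
print(domain_counts(urls).most_common(2))
# [('cnn.com', 2), ('fema.gov', 1)]
```

Ranking domains this way is what surfaces the paper's finding that news agencies are the most frequently cited sources.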
Award ID(s):
1762807
PAR ID:
10177043
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Natural Hazards
ISSN:
0921-030X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Social media has been increasingly utilized to spread breaking news and risk communications during disasters of all magnitudes. Unfortunately, due to the unmoderated nature of social media platforms such as Twitter, rumors and misinformation are able to propagate widely. Given this, a surfeit of research has studied false rumor diffusion on Twitter, especially during natural disasters. Within this domain, studies have also focused on the misinformation control efforts of government organizations and other major agencies. A prodigious gap remains, however, in studying how misinformation on social media platforms can be monitored in times of disasters and other crisis events. Such studies would offer organizations and agencies new tools and methodologies for monitoring misinformation on platforms such as Twitter, and for making informed decisions on whether or not to commit their resources to debunking it. In this work, we fill the research gap by developing a machine learning framework to predict the veracity of tweets spread during crisis events. The tweets are tracked based on the veracity of their content as either true, false, or neutral. We conduct four separate studies, and the results suggest that our framework is capable of tracking multiple cases of misinformation simultaneously, with F1 scores exceeding 87%. In the case of tracking a single case of misinformation, our framework reaches an F1 score of 83%. We collect and drive the algorithms with 15,952 misinformation-related tweets from the Boston Marathon bombing (2013), the Manchester Arena bombing (2017), Hurricane Harvey (2017), Hurricane Irma (2017), and the Hawaii ballistic missile false alert (2018). This article provides novel insights on how to efficiently monitor misinformation spread during disasters.
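The abstract's core task is a three-way text classifier over tweet content. A minimal stdlib-only sketch of that idea, using bag-of-words centroids and cosine similarity (the labeled tweets are hypothetical placeholders, and the abstract's actual framework is not specified at this level of detail):

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector for one tweet."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b.get(term, 0) for term, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(tweets, labels):
    """Build one aggregate word-count centroid per veracity label."""
    centroids = {}
    for text, label in zip(tweets, labels):
        centroids.setdefault(label, Counter()).update(bow(text))
    return centroids

def predict(centroids, text):
    """Assign the label whose centroid is most similar to the tweet."""
    vec = bow(text)
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

# Hypothetical hand-labeled tweets; real studies use thousands of coded posts.
tweets = [
    "shelters are checking immigration status avoid them",
    "false rumor shelters do not check immigration status",
    "stay safe everyone during the storm",
]
labels = ["false", "true", "neutral"]
centroids = train(tweets, labels)
print(predict(centroids, "shelters do not check id"))  # true
```

A production classifier would use far richer features and models, but the train/predict split over true/false/neutral labels is the same shape of problem.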

     
  2. Social media is being increasingly utilized to spread breaking news and updates during disasters of all magnitudes. Unfortunately, due to the unmoderated nature of social media platforms such as Twitter, rumors and misinformation are able to propagate widely. Given this, a surfeit of research has studied rumor diffusion on social media, especially during natural disasters. In many studies, researchers manually code social media data to further analyze the patterns and diffusion dynamics of users and misinformation. This method requires many human hours and is prone to significant misclassification if the work is not checked by another individual. In our studies, we fill the research gap by applying seven different machine learning algorithms to automatically classify misinformed Twitter data spread during disaster events. Due to the unbalanced nature of the data, three different balancing algorithms are also applied and compared. We collect and drive the classifiers with data from the Manchester Arena bombing (2017), Hurricane Harvey (2017), the Hawaiian incoming missile alert (2018), and the East Coast US tsunami alert (2018). Over 20,000 tweets are classified based on the veracity of their content as either true, false, or neutral, with overall accuracies exceeding 89%.
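The balancing algorithms mentioned above compensate for the fact that false tweets are usually far rarer than true or neutral ones. One of the simplest such techniques is random oversampling, sketched here with the standard library (the data is a hypothetical toy set; the abstract does not name which three balancing algorithms were compared):

```python
import random

def oversample(samples, labels, seed=0):
    """Random oversampling: duplicate minority-class examples until
    every class matches the size of the largest class."""
    rng = random.Random(seed)  # seeded for reproducibility
    by_class = {}
    for sample, label in zip(samples, labels):
        by_class.setdefault(label, []).append(sample)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for label, group in by_class.items():
        padded = group + [rng.choice(group) for _ in range(target - len(group))]
        out_samples.extend(padded)
        out_labels.extend([label] * target)
    return out_samples, out_labels

# Toy imbalanced set: four "true" tweets, one "false" tweet.
X = ["t1", "t2", "t3", "t4", "f1"]
y = ["true", "true", "true", "true", "false"]
Xb, yb = oversample(X, y)
print(yb.count("true"), yb.count("false"))  # 4 4
```

Undersampling the majority class and synthetic approaches such as SMOTE are common alternatives with the same goal: keeping the classifier from defaulting to the majority label.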
  3. Online social networks allow different agencies and the public to interact and share the underlying risks and protective actions during major disasters. This study revealed such crisis communication patterns during Hurricane Laura, compounded by the COVID-19 pandemic. Hurricane Laura was one of the strongest (Category 4) hurricanes on record to make landfall in Cameron, Louisiana, U.S. Using an application programming interface (API), this study utilizes large-scale social media data obtained from Twitter through the recently released academic track that provides complete and unbiased observations. The data captured publicly available tweets shared by active Twitter users from the vulnerable areas threatened by Hurricane Laura. Online social networks were based on Twitter’s user influence feature (i.e., mentions or tags) that allows notification of other users while posting a tweet. Using network science theories and advanced community detection algorithms, the study split these networks into 21 components of various sizes, the largest of which contained eight well-defined communities. Several natural language processing techniques (i.e., word clouds, bigrams, topic modeling) were applied to the tweets shared by the users in these communities to observe their risk-taking or risk-averse behavior during a major compounding crisis. Social media accounts of local news media, radio, universities, and popular sports pages were among those that were heavily involved and closely interacted with local residents. In contrast, emergency management and planning units in the area engaged less with the public. The findings of this study provide novel insights into the design of efficient social media communication guidelines to respond better in future disasters.
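Splitting a mention network into components, as described above, amounts to finding connected components of an undirected graph. A minimal stdlib-only sketch (the user names and edges are hypothetical; the study's finer-grained community detection within components requires dedicated algorithms such as Louvain, not shown here):

```python
from collections import defaultdict, deque

def components(edges):
    """Connected components of an undirected mention graph,
    where each edge (a, b) means user a mentioned user b."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        comp, queue = set(), deque([node])
        while queue:  # breadth-first traversal of one component
            user = queue.popleft()
            if user in comp:
                continue
            comp.add(user)
            queue.extend(adj[user] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Hypothetical mention edges forming two separate conversation clusters.
edges = [("news_station", "resident_a"), ("resident_a", "resident_b"),
         ("sports_page", "fan_1")]
print([sorted(c) for c in components(edges)])
```

On real Twitter data the edges would come from the `entities.mentions` field of each tweet, and libraries such as networkx would typically replace this hand-rolled traversal.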
  4. Though significant efforts, such as removing false claims and promoting reliable sources, have been made to combat the COVID-19 misinfodemic, it remains an unsolved societal challenge without a proper understanding of susceptible online users, i.e., those who are likely to be attracted by, believe, and spread misinformation. This study attempts to answer who constitutes the population vulnerable to online misinformation in the pandemic, and what robust features and short-term behavior signals distinguish susceptible users from others. Using a 6-month longitudinal user panel on Twitter collected from a geopolitically diverse, network-stratified sample in the US, we distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation. We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation. This work makes unique contributions: First, contrary to prior studies on bot influence, our analysis shows that social bots' contribution to misinformation sharing was surprisingly low, and human-like users' misinformation behaviors exhibit heterogeneity and temporal variability. While the sharing of misinformation was highly concentrated, the risk of occasionally sharing misinformation for average users remained alarmingly high. Second, our findings highlight the political sensitivity, activeness, and responsiveness to emotionally charged content among susceptible users. Third, we demonstrate a feasible solution to efficiently predict users' transient susceptibility based solely on their short-term news consumption and exposure from their networks. Our work has implications for designing effective intervention mechanisms to mitigate misinformation dissemination.
  5. With the spread of SARS-CoV-2, enormous amounts of information about the pandemic are disseminated through social media platforms such as Twitter. Social media posts often leverage the trust readers have in prestigious news agencies and cite news articles as a way of gaining credibility. Nevertheless, it is not always the case that the cited article supports the claim made in the social media post. We present a cross-genre ad hoc pipeline to identify whether the information in a Twitter post (i.e., a “Tweet”) is indeed supported by the cited news article. Our approach is empirically based on a corpus of over 46.86 million Tweets and is divided into two tasks: (i) development of models to detect Tweets that contain a claim and are worth fact-checking and (ii) verifying whether the claims made in a Tweet are supported by the newswire article it cites. Unlike previous studies that detect unsubstantiated information by post hoc analysis of the patterns of propagation, we seek to identify reliable support (or the lack of it) before the misinformation begins to spread. We discover that nearly half of the Tweets (43.4%) are not factual and hence not worth checking – a significant filter, given the sheer volume of social media posts on a platform such as Twitter. Moreover, we find that among the Tweets that contain a seemingly factual claim while citing a news article as supporting evidence, at least 1% are not actually supported by the cited news, and are hence misleading.
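The first stage of the pipeline described above filters the stream down to Tweets that cite a news source at all, before any claim verification runs. A deliberately simplified stdlib-only sketch of such a gate (the domain whitelist, regex, and sample Tweets are all hypothetical; the actual paper uses trained models, not a rule):

```python
import re

# Hypothetical whitelist of news domains; a real system would be far larger.
NEWS_DOMAINS = {"cnn.com", "reuters.com", "apnews.com"}

URL_RE = re.compile(r"https?://(?:www\.)?([^/\s]+)")

def cites_news(tweet):
    """Stage-one gate: keep only tweets that cite a known news domain."""
    match = URL_RE.search(tweet)
    return bool(match) and match.group(1).lower() in NEWS_DOMAINS

tweets = [
    "Shelters check IDs!! https://cnn.com/story",
    "good morning everyone",
    "see https://someblog.example/post",
]
print([t for t in tweets if cites_news(t)])
```

Only the surviving Tweets would proceed to the expensive second stage, where the claim is compared against the text of the cited article.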