For over a decade, social media has proven to be a functional and convenient data source within the Internet of Things. Social platforms such as Facebook, Twitter, Instagram, and Reddit each have their own styles and purposes. Among them, Twitter has become the most popular platform in the research community because it attracts people to write brief posts about current and unexpected events (e.g., natural disasters). The immense popularity of such sites has opened a new horizon in 'social sensing' for managing disaster response. Sensing through social media platforms can be used to track and analyze natural disasters and evaluate the overall response (e.g., resource allocation, relief, and cost and damage estimation). In this paper, we propose a two-step methodology, consisting of i) wavelet analysis and ii) predictive modeling, to track the progression of a disaster aftermath and predict its timeline. We demonstrate that wavelet features can preserve text semantics and predict the total duration of localized, small-scale disasters. Experimental results and observations on two real data traces (the flash floods at Cummins Falls State Park and an Arizona swimming hole) show that wavelet features can predict the disaster timeline with an error below 20% using less than 50% of the data, compared to ground truth.
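As a rough illustration of the wavelet-analysis step, the sketch below applies a continuous wavelet transform to a toy hourly word-count series. The counts, scale range, and the use of PyWavelets' Morlet wavelet are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the wavelet-analysis step, assuming hourly counts of a
# disaster-related word and PyWavelets' Morlet wavelet; the paper's exact
# preprocessing, wavelet choice, and scales are not reproduced here.
import numpy as np
import pywt

# Hypothetical hourly tweet counts for the word "flood" around an event.
hourly_counts = np.array([0, 1, 3, 12, 40, 55, 31, 18, 9, 4, 2, 1, 0, 0], dtype=float)

scales = np.arange(1, 9)                       # temporal scales to analyze
coeffs, freqs = pywt.cwt(hourly_counts, scales, "morl")

# One simple feature vector per word: the energy of the coefficients at each
# scale, summarizing how bursty the word's activity is over time.
energy_per_scale = (coeffs ** 2).sum(axis=1)
print(energy_per_scale)
```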
Identifying the Context of Hurricane Posts on Twitter using Wavelet Features
With the increase of natural disasters all over the world, we are in crucial need of innovative and inexpensive solutions to assist emergency response systems. Information collected through conventional sources (e.g., incident reports, 911 calls, physical volunteers, etc.) is proving to be insufficient [1]. Responsible organizations are now leaning towards research that explores digital human connectivity and freely available sources of information. The U.S. Geological Survey and the Federal Emergency Management Agency (FEMA) introduced Critical Lifelines (CLLs), which identify the most significant areas requiring immediate attention in the case of a natural disaster. These organizations applied crowdsourcing by connecting digital volunteer networks to collect data on the critical lifelines from sources including social media [3], [4], [5]. In the past couple of years, during some of the deadliest hurricanes (e.g., Harvey, Irma, Maria, Michael, and Florence), people took to social media platforms like never before in search of help for rescue, shelter, and relief. Their posts reflect crisis updates and their real-time observations of the devastation they witness. In this paper, we propose a methodology to build and analyze time-frequency features of words on social media to assist volunteer networks in identifying the context before, during, and after a natural disaster and distinguishing contexts connected to the critical lifelines. We employ the Continuous Wavelet Transform to create word features and propose two ways to reduce their dimensionality; from the reduced features we create word clusters that identify themes of conversation associated with the stages of a disaster and with these lifelines. We compare the two wavelet-feature and word-cluster methodologies, both qualitatively and quantitatively, to show that wavelet features can identify and separate context without using semantic information as input.
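To make the overall pipeline (wavelet features, dimensionality reduction, word clustering) concrete, the sketch below builds wavelet features for a handful of words and clusters them. The synthetic Poisson counts, PCA reduction, and k-means clustering are stand-ins; the paper's data and its two proposed reduction methods are not reproduced here.

```python
# Minimal sketch of the pipeline: CWT word features -> dimensionality
# reduction -> word clusters. Synthetic counts, PCA, and k-means are
# illustrative stand-ins for the paper's data and reduction schemes.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
words = ["rescue", "shelter", "flood", "power", "donate", "school"]
# Hypothetical 72-hour count series per word (replace with real tweet counts).
series = {w: rng.poisson(lam=5, size=72).astype(float) for w in words}

scales = np.arange(1, 17)

def wavelet_features(signal: np.ndarray) -> np.ndarray:
    """Flatten the CWT coefficient matrix of one word's time series."""
    coeffs, _ = pywt.cwt(signal, scales, "morl")
    return np.abs(coeffs).ravel()

X = np.vstack([wavelet_features(series[w]) for w in words])

X_low = PCA(n_components=3).fit_transform(X)            # reduce dimensionality
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X_low)

for word, label in zip(words, labels):
    print(f"{word}: cluster {label}")
```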
- Award ID(s): 1640625
- PAR ID: 10113082
- Date Published:
- Journal Name: 2019 IEEE International Conference on Smart Computing (SMARTCOMP)
- Page Range / eLocation ID: 350 to 358
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In an era increasingly affected by natural and human-caused disasters, the role of social media in disaster communication has become ever more critical. Despite substantial research on social media use during crises, a significant gap remains in detecting crisis-related misinformation. Detecting deviations in information is fundamental to identifying and curbing the spread of misinformation. This study introduces a novel Information Switching Pattern Model to identify dynamic shifts in perspectives among users who mention each other in crisis-related narratives on social media. These shifts serve as evidence of crisis misinformation affecting user-mention network interactions. The study utilizes advanced natural language processing, network science, and census data to analyze geotagged tweets related to compound disaster events in Oklahoma in 2022. The impact of misinformation is revealed by distinct engagement patterns among various user types, such as bots, private organizations, non-profits, government agencies, and news media, throughout different disaster stages. These patterns show how different disasters influence public sentiment, highlight the heightened vulnerability of mobile home communities, and underscore the importance of education and transportation access in crisis response. Understanding these engagement patterns is crucial for detecting misinformation and leveraging social media as an effective tool for risk communication during disasters.
-
Radianti, Jaziar; Dokas, Ioannis; Lalone, Nicolas; Khazanchi, Deepak (Eds.)
The shared real-time information about natural disasters on social media platforms like Twitter and Facebook plays a critical role in informing volunteers, emergency managers, and response organizations. However, supervised learning models for monitoring disaster events require large amounts of annotated data, making them unrealistic for real-time use during disaster events. To address this challenge, we present a fine-grained disaster tweet classification model under a semi-supervised, few-shot learning setting in which only a small amount of annotated data is required. Our model, CrisisMatch, effectively classifies tweets into fine-grained classes of interest using few labeled data and large amounts of unlabeled data, mimicking the early stage of a disaster. By integrating effective semi-supervised learning ideas and incorporating TextMixUp, CrisisMatch achieves a performance improvement of 11.2% on average across two disaster datasets. Further analyses of the influence of the amount of labeled data and of out-of-domain results are also provided.
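As a rough illustration of the semi-supervised idea behind classifiers of this kind, the sketch below pseudo-labels unlabeled tweets whose predicted confidence clears a threshold and retrains on them. The toy tweets, TF-IDF features, and logistic regression are assumptions; this is not CrisisMatch itself, and TextMixUp is not reproduced.

```python
# Minimal sketch of confidence-thresholded pseudo-labeling, a core
# semi-supervised idea; toy tweets, TF-IDF, and logistic regression stand in
# for the paper's actual model, and TextMixUp is not reproduced here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = ["water rising fast need rescue now",
           "shelter open at the high school gym",
           "send boats to the flooded bridge",
           "cots and blankets available downtown"]
labels = np.array([0, 1, 0, 1])              # 0 = rescue request, 1 = shelter info
unlabeled = ["family trapped on the roof please help",
             "church offering beds and hot meals tonight"]

vec = TfidfVectorizer().fit(labeled + unlabeled)
clf = LogisticRegression(max_iter=1000).fit(vec.transform(labeled), labels)

# Pseudo-label unlabeled tweets whose predicted probability clears a threshold,
# then retrain on the enlarged training set.
proba = clf.predict_proba(vec.transform(unlabeled))
confident = proba.max(axis=1) >= 0.55
X_aug = vec.transform(labeled + [t for t, keep in zip(unlabeled, confident) if keep])
y_aug = np.concatenate([labels, proba.argmax(axis=1)[confident]])
clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```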
-
The increasing popularity of multimedia messages shared through public or private social media spills into diverse information-dissemination contexts. To date, public social media has been explored as a potential alert system during natural disasters, but high levels of noise (i.e., non-relevant content) present challenges both in understanding social experiences of a disaster and in facilitating disaster recovery. This study builds on current research by uniquely using social media data, collected in the field through qualitative interviews, to create a supervised machine learning model. The collected data represent rescuers and rescuees during Hurricane Harvey in 2017. Preliminary findings indicate 99% accuracy in classifying data between signal and noise at signal-to-noise ratios (SNR) of 1:1, 1:2, 1:4, and 1:8. We also find 99% accuracy in classification between respondent types (volunteer rescuer, official rescuer, and rescuee). We furthermore compare human- and machine-coded attributes, finding that the Google Vision API is a more reliable source for detecting attributes in the training set.
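A toy version of the signal-to-noise evaluation described above might look like the sketch below. The synthetic features and random-forest classifier are assumptions standing in for the study's interview-collected Hurricane Harvey data and Google Vision attributes.

```python
# Minimal sketch of measuring classification accuracy at fixed signal-to-noise
# ratios; synthetic features and a random forest stand in for the study's
# field-collected data and Google Vision attributes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def make_dataset(n_signal: int, noise_ratio: int):
    """Build a dataset with `noise_ratio` noise posts per signal post."""
    n_noise = n_signal * noise_ratio
    X_signal = rng.normal(loc=1.0, size=(n_signal, 10))   # relevant posts
    X_noise = rng.normal(loc=0.0, size=(n_noise, 10))     # non-relevant posts
    X = np.vstack([X_signal, X_noise])
    y = np.concatenate([np.ones(n_signal), np.zeros(n_noise)])
    return X, y

for ratio in (1, 2, 4, 8):                                 # SNRs 1:1 .. 1:8
    X, y = make_dataset(200, ratio)
    scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)
    print(f"SNR 1:{ratio} -> mean accuracy {scores.mean():.3f}")
```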
-
Global social media use during natural disasters has been well documented (Murthy et al., 2017). In the U.S., public social media platforms are often a primary venue for those affected by disasters. Some disaster victims believe first responders will see their public posts, and that the 9-1-1 telephone system becomes overloaded during crises. Moreover, some feel that the accuracy and utility of information on social media are likely higher than those of traditional media sources. However, sifting through content during a disaster is often difficult due to the high volume of 'non-relevant' content. In addition, text posted on Twitter is studied more than images, leaving a potential gap in understanding disaster experiences. Images posted on social media during disasters have a high level of complexity (Murthy et al., 2016). Our study responds to O'Neal et al.'s (2017) call to action that social media images posted during disasters should be studied using machine learning.

