Twitter is an extremely popular micro-blogging social platform with millions of users, generating thousands of tweets per second. This huge volume of Twitter data has inspired researchers to explore trending topics, event detection, and event tracking, which help uncover fine-grained details and support situational awareness. Obtaining situational awareness of an event is crucial in application domains such as natural calamities, man-made disasters, and emergency response. In this paper, we advocate that data analytics on Twitter feeds can help improve the planning, rescue operations, and services provided by emergency personnel during unusual circumstances. We take a different approach and focus on the users' emotions, concerns, and feelings expressed in tweets during emergency situations, and analyze those feelings and perceptions within the affected community to provide appropriate feedback to emergency responders and local authorities. We employ sentiment analysis and change point detection techniques to process, discover, and infer the spatiotemporal sentiments of the users. We analyze tweets from the recent Las Vegas shooting (Oct. 2017) and note that changes in the polarity of sentiments and in the articulation of emotional expressions, if captured successfully, can serve as an informative feedback tool for emergency medical services (EMS).
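A minimal sketch of the kind of pipeline this abstract describes — lexicon-based polarity scoring of tweets followed by a single mean-shift change-point scan. The tiny lexicon, the example tweets, and the function names are hypothetical illustrations, not the paper's actual data or method:

```python
# Toy sketch (not the paper's pipeline): score each tweet's polarity with a
# small hand-made lexicon, then find the index where the mean sentiment shifts.

POSITIVE = {"safe", "thanks", "helped", "relief", "love"}
NEGATIVE = {"shooting", "scared", "panic", "fear", "terrible"}

def polarity(tweet):
    """Score a tweet in [-1, 1] by counting lexicon hits."""
    words = tweet.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def change_point(scores):
    """Return the split index that maximizes the gap between segment means."""
    best_k, best_gap = None, 0.0
    for k in range(1, len(scores)):
        left = sum(scores[:k]) / k
        right = sum(scores[k:]) / (len(scores) - k)
        if abs(left - right) > best_gap:
            best_k, best_gap = k, abs(left - right)
    return best_k

tweets = [
    "love this concert tonight",
    "thanks for a great show",
    "shooting reported everyone scared",
    "panic and fear terrible night",
]
scores = [polarity(t) for t in tweets]
print(scores)                 # [1.0, 1.0, -1.0, -1.0]
print(change_point(scores))   # 2: polarity flips at the third tweet
```

A production system would use a trained sentiment model and a statistically grounded change-point test rather than this exhaustive mean-gap scan, but the shape of the computation is the same: a time-ordered polarity series, then a detector flagging where the community's mood turns.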
Analyzing Social Media Texts and Images to Assess the Impact of Flash Floods in Cities
Computer vision and image processing are emerging research paradigms. The increasing popularity of social media and micro-blogging services, together with the ubiquitous availability of high-resolution smartphone cameras and pervasive connectivity, is expanding our digital footprints and cyber activities. Such online human footprints related to an event of interest, if mined appropriately, can provide meaningful information for analyzing the current course of an event and its pre- and post-impact, informing the organizational planning of various real-time smart city applications. In this paper, we investigate the narrative (text) and visual (image) components of Twitter feeds to improve query results by exploiting the deep context of each data modality. We employ Latent Semantic Analysis (LSA)-based techniques to analyze the texts and the Discrete Cosine Transform (DCT) to analyze the images, which helps establish cross-correlations between the textual and image dimensions of a query. While each data dimension improves the results of a specific query on its own, the two modalities combined can potentially provide insights greater than what can be obtained from either modality individually. We validate our proposed approach using real Twitter feeds from a recent devastating flash flood in Ellicott City, near the University of Maryland campus. Our results show that the images and texts can be classified with 67% and 94% accuracy, respectively.
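For the image side, the building block named in the abstract is the DCT; a minimal pure-Python sketch of the (unnormalized) 1-D DCT-II is below. The abstract does not specify how the transform is applied, so this only illustrates the transform itself — in practice it would be applied in 2-D over pixel blocks, and the text side's LSA (a truncated SVD of a term-document matrix) is omitted here:

```python
import math

def dct_ii(x):
    """Unnormalized 1-D DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A constant signal concentrates all energy in the DC coefficient X[0];
# the higher-frequency coefficients vanish (up to floating-point error).
coeffs = dct_ii([1.0, 1.0, 1.0, 1.0])
print(coeffs)
```

The energy-compaction property shown here (smooth regions collapse into a few low-frequency coefficients) is what makes DCT coefficients a compact feature representation for comparing image content.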
- Award ID(s): 1640625
- PAR ID: 10073169
- Date Published:
- Journal Name: 2017 IEEE International Conference on Smart Computing (SMARTCOMP)
- Page Range / eLocation ID: 1 to 6
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Twitter is an extremely popular micro-blogging social platform with millions of users, generating thousands of tweets per second. This huge volume of Twitter data has inspired researchers to explore trending topics, event detection, and event tracking, which help uncover fine-grained details and support situational awareness. Obtaining situational awareness of an event is crucial in application domains such as natural calamities, man-made disasters, and emergency response. In this paper, we advocate that data analytics on Twitter feeds can help improve the planning, rescue operations, and services provided by emergency personnel during unusual circumstances. We take an emotional change detection approach and focus on the users' emotions, concerns, and feelings expressed in tweets during emergency situations, and analyze those feelings and perceptions within the affected community to provide appropriate feedback to emergency responders and local authorities. We employ improved emotion analysis and change point detection techniques to process, discover, and infer the spatiotemporal sentiments of the users. We analyze tweets from the recent Las Vegas shooting (Oct. 2017) and note that changes in the polarity of sentiments and in the articulation of emotional expressions, if captured successfully, can serve as an informative feedback tool for emergency medical services (EMS).
-
With the benefits of fast query speed and low storage cost, hashing-based image retrieval approaches have garnered considerable attention from the research community. In this paper, we propose a novel Error-Corrected Deep Cross Modal Hashing (CMH-ECC) method which uses a bitmap specifying the presence of certain facial attributes as an input query to retrieve relevant face images from the database. In this architecture, we generate compact hash codes using an end-to-end deep learning module, which effectively captures the inherent relationships between the face and attribute modalities. We also integrate our deep learning module with forward error correction codes to further reduce the distance between different modalities of the same subject. Specifically, the properties of deep hashing and forward error correction codes are exploited to design a cross modal hashing framework with high retrieval performance. Experimental results on two standard datasets with facial attribute-image modalities indicate that our CMH-ECC face image retrieval model outperforms most current attribute-based face image retrieval approaches.
-
Twitter is a frequent target for machine learning research and applications. Many problems, such as sentiment analysis, image tagging, and location prediction, have been studied on Twitter data. Much of the prior work addressing these problems within the context of Twitter focuses on a subset of the available data types, e.g., only text, or text and image. However, a tweet can have several additional components, such as the location and the author, that can also provide useful information for machine learning tasks. In this work, we explore the problem of jointly modeling several tweet components in a common embedding space via task-agnostic representation learning, which can then be used to tackle various machine learning applications. To address this problem, we propose a deep neural network framework that combines text, image, and graph representations to learn joint embeddings for five tweet components: body, hashtags, images, user, and location. In our experiments, we use a large dataset of tweets to learn a joint embedding model and apply it to multiple tasks to evaluate its performance against state-of-the-art baselines specific to each task. Our results show that our proposed generic method performs similarly to or better than specialized application-specific approaches, including an accuracy of 52.43% vs. 48.88% for location prediction and a recall of up to 15.93% vs. 12.12% for hashtag recommendation.
-
Geographic information systems (GIS) provide accurate maps of terrain, roads, waterways, and building footprints and heights. Aircraft, particularly small unmanned aircraft systems (UAS), can exploit this and additional information, such as building roof structure, to improve navigation accuracy and safely perform contingency landings, particularly in urban regions. However, building roof structure is not fully provided in maps. This paper proposes a method to automatically label building roof shape from publicly available GIS data. Satellite imagery and airborne LiDAR data are processed and manually labeled to create a diverse annotated roof image dataset for small to large urban cities. Multiple convolutional neural network (CNN) architectures are trained and tested, with the best performing networks providing a condensed feature set for support vector machine and decision tree classifiers. Fusing satellite image and LiDAR data is shown to provide greater classification accuracy than using either data type alone. Adjusting model confidence thresholds leads to significant increases in model precision. Networks trained on roof data from Witten, Germany and Manhattan (New York City) are evaluated on independent data from these cities and from Ann Arbor, Michigan.
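The last abstract above notes that raising a classifier's confidence threshold increases precision (at the cost of discarding low-confidence predictions). A minimal sketch of that trade-off, using made-up roof-shape predictions rather than anything from the paper:

```python
def precision_at_threshold(preds, threshold):
    """preds: list of (confidence, predicted_label, true_label) triples.
    Keep only predictions at or above the threshold; precision is the
    fraction of kept predictions that are correct (None if none kept)."""
    kept = [(p, t) for c, p, t in preds if c >= threshold]
    if not kept:
        return None
    return sum(p == t for p, t in kept) / len(kept)

# Hypothetical predictions: (confidence, predicted shape, actual shape).
preds = [
    (0.95, "gable", "gable"),
    (0.90, "hip",   "hip"),
    (0.60, "flat",  "gable"),  # low-confidence mistake
    (0.55, "hip",   "flat"),   # low-confidence mistake
]
print(precision_at_threshold(preds, 0.0))  # 0.5: all 4 kept, 2 correct
print(precision_at_threshold(preds, 0.8))  # 1.0: only the confident 2 kept
```

The flip side, not computed here, is coverage: the higher threshold answers for only half the buildings, which is the usual precision/recall bargain behind confidence thresholding.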