
Title: Multi-task Multimodal Learning for Disaster Situation Assessment
During disaster events, emergency response teams need to draw up a response plan at the earliest possible stage. Social media platforms contain rich information that can help assess the current situation. In this paper, a novel multi-task multimodal deep learning framework with automatic loss weighting is proposed. Our framework captures the correlations among different concepts and data modalities. The proposed automatic loss weighting method avoids tedious manual weight tuning and improves model performance. Extensive experiments on a large-scale multimodal disaster dataset from Twitter are conducted to identify post-disaster humanitarian categories and infrastructure damage levels. The results show that, by learning a shared latent space across tasks with loss weighting, our model outperforms all of its single-task counterparts.
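The abstract does not spell out the weighting scheme, so as an illustration, here is a minimal PyTorch sketch of one widely used automatic loss-weighting approach: learned homoscedastic task uncertainties (Kendall et al., 2018). The two task names mirror the paper's tasks; everything else is an assumption, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine per-task losses with learnable weights.

    Sketch of uncertainty-based automatic loss weighting
    (Kendall et al., 2018); the paper's exact scheme may differ.
    """
    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance per task; initialized to 0 (weight = 1).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            # Down-weight noisy tasks; the log-variance term keeps
            # the learned variances from growing without bound.
            total = total + precision * loss + self.log_vars[i]
        return total

# Hypothetical usage with the two task heads named in the paper:
# weighter = UncertaintyWeightedLoss(num_tasks=2)
# loss = weighter([loss_humanitarian_category, loss_damage_level])
# loss.backward()
```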
Award ID(s):
1937019
NSF-PAR ID:
10275318
Journal Name:
2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)
Page Range or eLocation-ID:
209 to 212
Sponsoring Org:
National Science Foundation
More Like this
  1. In recent years, semi-supervised learning has been widely explored and shows excellent data efficiency for 2D data. There is an emerging need to improve data efficiency for 3D tasks, given the scarcity of labeled 3D data. This paper explores how the coherence of different modalities of 3D data (e.g., point cloud, image, and mesh) can be used to improve data efficiency for both 3D classification and retrieval tasks. We propose a novel multimodal semi-supervised learning framework by introducing an instance-level consistency constraint and a novel multimodal contrastive prototype (M2CP) loss. The instance-level consistency constraint forces the network to generate consistent representations for multimodal data of the same object regardless of modality. The M2CP loss maintains a multimodal prototype for each class and learns features with small intra-class variations by minimizing the distance of each object's features to its own prototype while maximizing the distance to the others. Our proposed framework outperforms all state-of-the-art counterparts by a large margin on both classification and retrieval tasks on the ModelNet10 and ModelNet40 datasets.
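For intuition, here is a minimal sketch of what a contrastive prototype loss of this kind can look like, assuming L2-normalized embeddings and one prototype per class; this is an illustrative reconstruction, not the authors' exact M2CP objective.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(features, labels, prototypes, temperature=0.1):
    """Illustrative M2CP-style loss: pull each embedding toward its
    class prototype and push it away from the other prototypes via
    a softmax over cosine similarities.

    features:   (N, D) embeddings from any modality (point cloud, image, mesh)
    labels:     (N,)   class indices
    prototypes: (C, D) one learnable (or EMA-updated) prototype per class
    """
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = features @ prototypes.t() / temperature  # (N, C) similarities
    # Cross-entropy maximizes similarity to the true prototype
    # and minimizes it to all the others.
    return F.cross_entropy(logits, labels)
```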
  2. A major challenge in many machine learning tasks is that the model's expressive power depends on the model size. Low-rank tensor methods are an efficient tool for handling the curse of dimensionality in many large-scale machine learning models. The major challenges in training a tensor learning model are how to process high-volume data, how to determine the tensor rank automatically, and how to estimate the uncertainty of the results. While existing tensor learning methods focus on a specific task, this paper proposes a generic Bayesian framework that can be employed to solve a broad class of tensor learning problems, such as tensor completion, tensor regression, and tensorized neural networks. We develop a low-rank tensor prior for automatic rank determination in nonlinear problems. Our method is implemented with both stochastic gradient Hamiltonian Monte Carlo (SGHMC) and Stein variational gradient descent (SVGD), and we compare the automatic rank determination and uncertainty quantification of these two solvers. We demonstrate that the proposed method determines the tensor rank automatically and quantifies the uncertainty of the obtained results, and we validate the framework on tensor completion and tensorized neural network training tasks.
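To make the idea of a low-rank prior for automatic rank determination concrete, here is a toy sketch of a CP-format model with an ARD-style sparsity prior on component weights; the paper's actual Bayesian model and its SGHMC/SVGD inference are more elaborate, and all names below are illustrative.

```python
import torch

def cp_reconstruct(factors, lam):
    """Rebuild a 3-way tensor from CP factors weighted by lam.

    Toy setup: a sparsity-inducing prior on lam shrinks unneeded
    components toward zero, so the effective CP rank is determined
    automatically. A sketch, not the paper's exact model.
    """
    A, B, C = factors  # shapes (I, R), (J, R), (K, R)
    return torch.einsum('ir,jr,kr,r->ijk', A, B, C, lam)

def neg_log_posterior(X, factors, lam, noise_var=0.1, prior_scale=1.0):
    # Gaussian likelihood on the observed tensor entries ...
    recon = cp_reconstruct(factors, lam)
    nll = ((X - recon) ** 2).sum() / (2 * noise_var)
    # ... plus a Laplace (L1) prior on the component weights,
    # which drives redundant CP components to zero.
    return nll + lam.abs().sum() / prior_scale
```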
  3. Timely, flexible, and accurate information dissemination can make a life-and-death difference in managing disasters. Complex command structures and information organization make such dissemination challenging, so it is vital to have an architecture with appropriate naming frameworks, adaptable to the changing roles of participants and focused on content rather than network addresses. To address this, we propose POISE, a name-based and recipient-based publish/subscribe architecture for efficient content dissemination in disaster management. POISE proposes an information layer that improves on state-of-the-art Information-Centric Networking (ICN) solutions such as Named Data Networking (NDN) in two major ways: 1) support for complex graph-based namespaces, and 2) automatic name-based load-splitting. To capture the complexity and dynamicity of disaster-response command chains and information flows, POISE proposes a graph-based naming framework, leveraged in a dissemination protocol that exploits information-layer rendezvous points (RPs) which perform name expansions. For improved robustness and scalability, POISE allows load-sharing via multiple RPs, each managing a subset of the namespace graph. However, excessive workload on one RP may turn it into a “hot spot”, impeding performance and reliability. To eliminate such traffic concentration, we propose an automatic load-splitting mechanism consisting of a namespace graph partitioning complemented by a seamless, lossless core migration procedure. Due to the nature of our graph partitioning and its complex objectives, off-the-shelf graph partitioners, e.g., METIS, are inadequate, so we propose a hybrid partitioning solution consisting of an initial phase and a refinement phase. Our simulation results show that POISE outperforms state-of-the-art solutions, demonstrating its effectiveness in timely delivery and load-sharing.
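As a toy illustration of name-based load-splitting (not POISE's hybrid partitioner, which adds a refinement phase and richer objectives), here is a greedy assignment of namespace subtrees to rendezvous points, largest estimated load first; all names are hypothetical.

```python
from collections import defaultdict

def split_namespace(load_by_subtree, num_rps):
    """Greedy load-splitting sketch: assign each namespace subtree
    to the currently least-loaded rendezvous point, processing
    subtrees in decreasing order of estimated load.

    load_by_subtree: {subtree_name: estimated_load}
    Returns {rp_index: [subtree_name, ...]}.
    """
    rp_loads = [0.0] * num_rps
    assignment = defaultdict(list)
    for subtree, load in sorted(load_by_subtree.items(), key=lambda kv: -kv[1]):
        i = min(range(num_rps), key=lambda r: rp_loads[r])  # least-loaded RP
        rp_loads[i] += load
        assignment[i].append(subtree)
    return assignment

# Hypothetical usage:
# split_namespace({"/fire/ops": 40, "/medical": 25, "/logistics": 10}, num_rps=2)
```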
  4. RapidLiq is a Windows software program for predicting liquefaction-induced ground failure using geospatial models, which are particularly suited to regional-scale applications such as: (i) loss estimation and disaster simulation; (ii) city planning and policy development; (iii) emergency response; and (iv) post-event reconnaissance (e.g., to remotely identify sites of interest). RapidLiq v1.0 includes four such models. One is a logistic regression model developed by Rashidian and Baise (2020), which has been adopted into United States Geological Survey (USGS) post-earthquake data products but is seldom implemented by individuals owing to the geospatial variables that must be compiled. The other three are machine and deep learning (ML/DL) models proposed by Geyin et al. (2021). These models are driven by algorithmic learning (benefiting from ML/DL insights) but pinned to a physical framework (benefiting from mechanics and the knowledge of regression modelers). While liquefaction is a physical phenomenon best predicted by mechanics, subsurface traits lack theoretical links to above-ground parameters, yet correlate with them in complex, interconnected ways, a prime problem for ML/DL. All four models are described in an accompanying manuscript. All necessary predictor variables are compiled within RapidLiq, making user implementation trivial. The only required input is a …
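For intuition on the generic form a geospatial logistic-regression model of this kind takes, here is a sketch; the feature names and coefficient values are placeholders, not the published Rashidian and Baise (2020) model.

```python
import numpy as np

def liquefaction_probability(features, coefficients, intercept):
    """Generic logistic-regression form for geospatial liquefaction
    models: P(ground failure) from mapped above-ground proxies.
    Placeholder parameters only; not the published model.

    features:     (n_sites, n_predictors) geospatial predictor matrix
    coefficients: (n_predictors,) fitted weights
    """
    z = intercept + features @ coefficients
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical usage with illustrative predictors
# (e.g., columns for Vs30, distance to water, ground shaking):
# p = liquefaction_probability(X, np.array([-0.8, -0.3, 1.2]), intercept=-2.0)
```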
  5. Delivering the right information to the right people in a timely manner can greatly improve outcomes and save lives in emergency response. A communication framework that flexibly and efficiently brings victims, volunteers, and first responders together for timely assistance can be very helpful. With the burden of more frequent and intense disasters and first-responder resources stretched thin, people increasingly depend on social media to communicate vital information. This paper proposes ONSIDE, a framework for coordinating disaster response by leveraging social media and integrating it with Information-Centric dissemination for timely and relevant delivery. We use a graph-based pub/sub namespace that captures the complex hierarchy of incident-management roles. Regular citizens and volunteers using social media may not know of, or have access to, the full namespace, so we utilize a social media engine (SME) to identify disaster-related social media posts and automatically map them to the right name(s) in near-real-time. Using NLP and classification techniques, we direct the posts to the appropriate first responder(s) who can help with the posted issue. A major challenge in classifying social media in real time is the labeling effort required for model training. Furthermore, as a disaster hits, there may not be enough data points available for labeling, and there may be concept drift in the content of the posts over time. To address these issues, our SME employs stream-based active learning methods, adapting as social media posts come in. Preliminary evaluation results show that the proposed solution can be effective.
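A compact sketch of a stream-based active learning loop of the kind described, using margin-based uncertainty sampling; it assumes a pre-fitted scikit-learn Pipeline (text vectorizer + binary classifier) and a human labeling oracle, and the SME's actual method may differ.

```python
def stream_active_learning(stream, model, label_fn, threshold=0.2, retrain_every=50):
    """Stream-based active learning sketch: query a human label only
    when the classifier is unsure, then periodically retrain so the
    model adapts to concept drift in incoming posts.

    stream:   iterable of raw post texts
    model:    pre-fitted scikit-learn Pipeline with predict_proba
    label_fn: oracle returning the true label for a queried post
    """
    labeled = []
    for i, post in enumerate(stream, start=1):
        proba = model.predict_proba([post])[0]
        margin = abs(proba[1] - proba[0])           # small margin = uncertain
        if margin < threshold:
            labeled.append((post, label_fn(post)))  # ask the human oracle
        if labeled and i % retrain_every == 0:
            X, y = zip(*labeled)
            model.fit(list(X), list(y))             # adapt to drift
    return model
```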