- NSF-PAR ID:
- 10283517
- Date Published:
- Journal Name:
- Frontiers in Big Data
- Volume:
- 4
- ISSN:
- 2624-909X
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Spatial classification with limited feature observations has been a challenging problem in machine learning. The problem arises in applications where only a subset of sensors are deployed in certain regions or only partial responses are collected in field surveys. Existing research mostly focuses on addressing incomplete or missing data, e.g., data cleaning and imputation, classification models that allow for missing feature values, or modeling missing features as hidden variables and applying the EM algorithm. These methods, however, assume that incomplete feature observations only happen on a small subset of samples, and thus cannot solve problems where the vast majority of samples have missing feature observations. To address this issue, we propose a new approach that incorporates physics-aware structural constraints into the model representation. Our approach assumes that a spatial contextual feature is observed at all sample locations and establishes spatial structural constraints from the spatial contextual feature map. We design efficient algorithms for model parameter learning and class inference. Evaluations on real-world hydrological applications show that our approach significantly outperforms several baseline methods in classification accuracy, and the proposed solution is computationally efficient on large data volumes.
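The abstract does not spell out the exact constraint formulation, so the following is only an illustrative sketch of the idea: assuming elevation as the spatial contextual feature and flood/dry classes, class inference can repair predictions that violate an elevation-based structural constraint. The function name, neighborhood, and threshold are all hypothetical.

```python
# Minimal illustrative sketch (not the paper's exact algorithm): enforce a
# physics-aware structural constraint during class inference, assuming the
# spatial contextual feature is an elevation map and the classes are
# flood (1) / dry (0), so that a cell is not kept flooded unless at least one
# lower neighbor is flooded.
import numpy as np

def constrained_inference(flood_prob, elevation, threshold=0.5):
    """Start from per-pixel class probabilities and repair predictions that
    violate the elevation-based structural constraint."""
    pred = (flood_prob >= threshold).astype(int)
    h, w = pred.shape
    # Visit cells from lowest to highest elevation so that decisions for
    # lower neighbors are already fixed when a higher cell is examined.
    order = np.argsort(elevation, axis=None)
    for idx in order:
        r, c = divmod(int(idx), w)
        if pred[r, c] == 0:
            continue
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < h and 0 <= c + dc < w]
        lower = [(nr, nc) for nr, nc in neighbors
                 if elevation[nr, nc] < elevation[r, c]]
        # Constraint: a flooded cell must have at least one flooded lower neighbor.
        if lower and not any(pred[nr, nc] for nr, nc in lower):
            pred[r, c] = 0
    return pred
```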
- High-resolution remote sensing imagery has been increasingly used for flood applications. Different methods have been proposed for flood extent mapping, ranging from water index computation to image classification of high-resolution data. Among these methods, deep learning methods have shown promising results for flood extent extraction; however, these two-dimensional (2D) image classification methods cannot directly provide water level measurements. This paper presents an integrated approach to extract the flood extent in three dimensions (3D) from UAV data by combining a 2D deep learning-based flood map with a 3D point cloud extracted using a Structure from Motion (SfM) method. We fine-tuned a pretrained Visual Geometry Group 16 (VGG-16) based fully convolutional model to create a 2D inundation map. The 2D classified map was overlaid on the SfM-based 3D point cloud to create a 3D flood map. The floodwater depth was estimated by subtracting a pre-flood Digital Elevation Model (DEM) from the SfM-based DEM. The results show that the proposed method is efficient in creating a 3D flood extent map to support emergency response and recovery activities during a flood event.
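The depth-estimation step lends itself to a short sketch: floodwater depth as the difference between the SfM-derived (during-flood) DEM and the pre-flood DEM, masked to the classified flood extent. The array names, masking, and nodata handling below are assumptions, not the authors' code.

```python
# Hedged sketch of the depth-estimation step: subtract the pre-flood DEM from
# the SfM-based DEM and keep depths only inside the 2D inundation map.
import numpy as np

def estimate_flood_depth(sfm_dem, preflood_dem, flood_mask, nodata=np.nan):
    """All inputs are co-registered 2D arrays on the same grid.
    flood_mask is the 2D inundation map (True where classified as water)."""
    depth = sfm_dem - preflood_dem                # water surface minus bare ground
    depth = np.where(flood_mask, depth, nodata)   # restrict to the flood extent
    depth = np.where(depth < 0, 0.0, depth)       # clamp negative residuals from DEM noise
    return depth
```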
- Chest X-ray (CXR) analysis plays an important role in patient treatment. As such, a multitude of machine learning models have been applied to CXR datasets for automated analysis. However, each patient has a differing number of images per angle, and multi-modal learning must deal with data that are missing for specific angles and times. Furthermore, the large dimensionality of multi-modal imaging data, with shapes that are inconsistent across the dataset, introduces challenges in training. In light of these issues, we propose the Fast Multi-Modal Support Vector Machine (FMMSVM), which incorporates modality-specific factorization to deal with CXRs missing at specific angles. Our model is able to adjust fine-grained details in feature extraction, and we provide an efficient optimization algorithm that scales to a large number of features. In our experiments, FMMSVM shows clearly improved classification performance.
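The published FMMSVM formulation is not reproduced here; the sketch below only illustrates the general idea of a multi-modal linear classifier in which each modality has its own weight block and samples that lack a modality simply skip that block's contribution. All variable names are hypothetical.

```python
# Generic sketch (assumptions, not the published FMMSVM): per-modality weight
# blocks plus a missingness mask so that unobserved modalities do not
# contribute to the decision score.
import numpy as np

def score(X_list, mask, W, b):
    """X_list[m]: (n, d_m) features for modality m (zeros where missing);
    mask: (n, M) binary indicator of observed modalities;
    W: list of (d_m,) weight vectors; b: scalar bias."""
    n = mask.shape[0]
    s = np.full(n, float(b))
    for m, (Xm, wm) in enumerate(zip(X_list, W)):
        s += mask[:, m] * (Xm @ wm)   # modality contributes only where observed
    return s

def hinge_loss(y, s, W, lam=1e-2):
    """Standard SVM hinge loss with L2 regularization over all modality blocks."""
    margins = np.maximum(0.0, 1.0 - y * s)
    reg = lam * sum(np.dot(wm, wm) for wm in W)
    return margins.mean() + reg
```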
- Existing approaches for multi-label classification are trained offline, missing the opportunity to adapt to new data instances as they become available. To address this gap, an online multi-label classification method was recently proposed to learn from data instances sequentially. In this work, we focus on multi-label classification tasks in which the labels are organized in a hierarchy. We formulate online hierarchical multi-label classification as an online optimization task that jointly learns individual label predictors and a label threshold, and we propose a novel hierarchy constraint to penalize predictions that are inconsistent with the label hierarchy structure. Experimental results on three benchmark datasets show that the proposed approach outperforms online multi-label classification methods, and achieves performance comparable to, or even better than, offline hierarchical classification frameworks with respect to hierarchical evaluation metrics.
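A hierarchy-consistency penalty of the kind described can be illustrated with a simple score-based rule: a child label's score should not exceed its parent's. This is only a generic sketch, not the paper's exact constraint or optimization.

```python
# Hedged sketch of a hierarchy-consistency penalty: penalize any (child, parent)
# pair whose scores violate the hierarchy ordering.
def hierarchy_penalty(scores, parent, weight=1.0):
    """scores: sequence of predicted scores for the labels;
    parent: dict mapping child label index -> parent label index."""
    penalty = 0.0
    for child, par in parent.items():
        penalty += max(0.0, scores[child] - scores[par])  # inconsistent if child > parent
    return weight * penalty
```

At prediction time a label would be set positive when its score clears the learned threshold, so a penalty of this form discourages configurations in which a child is predicted positive while its parent is not.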
- In this paper, a hardware-optimized approach to emotion recognition based on the efficient brain-inspired hyperdimensional computing (HDC) paradigm is proposed. Emotion recognition provides valuable information for human–computer interaction; however, the large number of input channels (>200) and modalities (>3) involved in emotion recognition makes it significantly expensive from a memory perspective. To address this, methods for memory reduction and optimization are proposed, including a novel approach that takes advantage of the combinatorial nature of the encoding process, and an elementary cellular automaton. HDC with early sensor fusion is implemented alongside the proposed techniques, achieving two-class multi-modal classification accuracies of >76% for valence and >73% for arousal on the multi-modal AMIGOS and DEAP datasets, almost always better than the state of the art. The required vector storage is reduced by 98% and the frequency of vector requests by at least 1/5. The results demonstrate the potential of efficient hyperdimensional computing for low-power, multi-channel emotion recognition tasks.
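As a generic illustration of the HDC encoding and early sensor fusion mentioned above (not the paper's memory-optimized, cellular-automaton-based implementation), the sketch below binds each channel's quantized value to a channel hypervector, bundles across channels, and classifies by nearest prototype. The dimensionality and item-memory sizes are toy values.

```python
# Generic hyperdimensional computing sketch with early sensor fusion:
# bind channel and value hypervectors, bundle across channels, compare to
# class prototypes. Sizes here are toy; the real task uses >200 channels.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                   # hypervector dimensionality
N_CHANNELS, N_LEVELS = 8, 16                 # toy channel count and quantization levels

channel_hv = rng.choice([-1, 1], size=(N_CHANNELS, D))   # random bipolar item memory
level_hv = rng.choice([-1, 1], size=(N_LEVELS, D))       # quantized value hypervectors

def encode_sample(values):
    """values: per-channel readings already quantized to integers in [0, N_LEVELS)."""
    bound = channel_hv * level_hv[values]    # bind channel with value (elementwise product)
    bundled = bound.sum(axis=0)              # bundle across channels = early fusion
    return np.sign(bundled)                  # bipolarize back to {-1, +1}

def classify(query_hv, class_prototypes):
    """Nearest class prototype by dot-product similarity."""
    sims = class_prototypes @ query_hv
    return int(np.argmax(sims))
```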