Title: Optimum Feature Ordering for Dynamic Instance-Wise Joint Feature Selection and Classification
Award ID(s):
1737443 1942330
NSF-PAR ID:
10288238
Author(s) / Creator(s):
;
Date Published:
Journal Name:
2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Page Range / eLocation ID:
3370 to 3374
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract. Glacier velocity measurements are essential to understand ice flow mechanics, monitor natural hazards, and make accurate projections of future sea-level rise. Despite these important applications, the method most commonly used to derive glacier velocity maps, feature tracking, relies on empirical parameter choices that rarely account for glacier physics or uncertainty. Here we test two statistics- and physics-based metrics to evaluate velocity maps derived from optical satellite images of Kaskawulsh Glacier, Yukon, Canada, using a range of existing feature-tracking workflows. Based on inter-comparisons with ground truth data, velocity maps with metrics falling within our recommended ranges contain fewer erroneous measurements and more spatially correlated noise than velocity maps with metrics that deviate from those ranges. Thus, these metric ranges are suitable for refining feature-tracking workflows and evaluating the resulting velocity products. We have released an open-source software package for computing and visualizing these metrics, the GLAcier Feature Tracking testkit (GLAFT). 
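    The abstract refers to feature tracking as the standard way to derive glacier velocities. As a generic illustration (this is not GLAFT code, and the patch size, search radius, and similarity measure are assumptions), feature tracking can be sketched as matching an image patch between two acquisitions by maximizing zero-normalized cross-correlation over candidate displacements:

    ```python
    import numpy as np

    def ncc(a, b):
        # Zero-normalized cross-correlation between two equal-size patches.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def track_feature(ref, search, patch_size=5, max_shift=3):
        # Cut a patch from the center of `ref`, then find the integer
        # (dy, dx) displacement that best matches it inside `search`
        # by exhaustive NCC matching over a small search window.
        cy = (ref.shape[0] - patch_size) // 2
        cx = (ref.shape[1] - patch_size) // 2
        patch = ref[cy:cy + patch_size, cx:cx + patch_size]
        best, best_shift = -2.0, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                y, x = cy + dy, cx + dx
                if y < 0 or x < 0 or y + patch_size > search.shape[0] \
                        or x + patch_size > search.shape[1]:
                    continue
                score = ncc(patch, search[y:y + patch_size, x:x + patch_size])
                if score > best:
                    best, best_shift = score, (dy, dx)
        return best_shift, best
    ```

    Dividing the recovered displacement by the time separation of the two images yields a velocity estimate at that location; real workflows repeat this densely over the image and then filter the result, which is where the paper's evaluation metrics apply.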
  2.

    We present a new method to improve the representational power of the features in Convolutional Neural Networks (CNNs). By studying traditional image processing methods and recent CNN architectures, we propose to use positional information in CNNs for effective exploration of feature dependencies. Rather than considering feature semantics alone, we incorporate spatial positions as an augmentation for feature semantics in our design. From this vantage, we present a Position-Aware Recalibration Module (PRM for short) which recalibrates features leveraging both feature semantics and position. Furthermore, inspired by multi-head attention, our module can perform multiple recalibrations whose results are concatenated as the output. As PRM is efficient and easy to implement, it can be seamlessly integrated into various base networks and applied to many position-aware visual tasks. Compared to original CNNs, our PRM introduces a negligible number of parameters and FLOPs while yielding better performance. Experimental results on ImageNet and MS COCO benchmarks show that our approach surpasses related methods by a clear margin with less computational overhead. For example, we improve ResNet50 by an absolute 1.75% (77.65% vs. 75.90%) on the ImageNet 2012 validation dataset, and by 1.5% to 1.9% mAP on the MS COCO validation dataset, with almost no computational overhead. Code is publicly available.
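    The recalibration idea described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' released code: the coordinate encoding, the gating function, and the per-head scaling below are all assumptions made for the sketch. Each head gates the feature map using both a per-channel semantic descriptor (global average pooling) and normalized spatial coordinates, and the heads' outputs are concatenated along the channel axis:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def position_channels(h, w):
        # Normalized (y, x) coordinate maps in [-1, 1], shape (2, H, W).
        ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
        xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
        return np.stack([ys, xs])

    def prm_recalibrate(feat, w_sem=1.0, w_pos=(0.5, 0.5), heads=2):
        # feat: (C, H, W) feature map; returns (heads * C, H, W).
        c, h, w = feat.shape
        pos = position_channels(h, w)                    # (2, H, W)
        sem = feat.mean(axis=(1, 2), keepdims=True)      # per-channel semantics (GAP)
        pos_term = np.tensordot(np.asarray(w_pos), pos, axes=1)  # (H, W)
        outs = []
        for k in range(heads):
            scale = 1.0 / (k + 1)                        # toy per-head variation
            gate = sigmoid(w_sem * sem + scale * pos_term)  # (C, H, W) in (0, 1)
            outs.append(feat * gate)                     # position-aware recalibration
        return np.concatenate(outs, axis=0)              # heads concatenated as output
    ```

    In a real network the fixed weights here would be learned parameters, and the gate would be produced by a small trainable layer, which is how the module stays cheap in parameters and FLOPs.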

     
  3.