

Title: Inverse Feature Learning: Feature Learning Based on Representation Learning of Error
Award ID(s):
1657260
NSF-PAR ID:
10365461
Author(s) / Creator(s):
; ;
Publisher / Repository:
Institute of Electrical and Electronics Engineers
Date Published:
Journal Name:
IEEE Access
Volume:
8
ISSN:
2169-3536
Page Range / eLocation ID:
p. 132937-132949
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.

    We present a new method to improve the representational power of the features in Convolutional Neural Networks (CNNs). By studying traditional image processing methods and recent CNN architectures, we propose to use positional information in CNNs for effective exploration of feature dependencies. Rather than considering feature semantics alone, we incorporate spatial positions as an augmentation for feature semantics in our design. From this vantage, we present a Position-Aware Recalibration Module (PRM for short) that recalibrates features by leveraging both feature semantics and position. Furthermore, inspired by multi-head attention, our module can perform multiple recalibrations whose results are concatenated as the output. As PRM is efficient and easy to implement, it can be seamlessly integrated into various base networks and applied to many position-aware visual tasks. Compared to original CNNs, our PRM introduces a negligible number of parameters and FLOPs while yielding better performance. Experimental results on ImageNet and MS COCO benchmarks show that our approach surpasses related methods by a clear margin with less computational overhead. For example, we improve ResNet50 by an absolute 1.75% (77.65% vs. 75.90%) on the ImageNet 2012 validation dataset, and by 1.5% to 1.9% mAP on the MS COCO validation dataset, with almost no computational overhead. Code is made publicly available.
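    The recalibration idea described above can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's actual PRM: it assumes the gate combines a global-average-pooled semantic descriptor with a normalized coordinate grid through a sigmoid, and the names `position_aware_recalibrate`, `w_sem`, and `w_pos` are invented for illustration.

    ```python
    import numpy as np

    def position_aware_recalibrate(feat, w_sem, w_pos):
        """Hypothetical gate combining channel semantics with spatial position.

        feat  : (C, H, W) feature map
        w_sem : (C,) per-channel weights for the pooled semantic descriptor
        w_pos : (2,) weights for the normalized (y, x) coordinate grid
        """
        C, H, W = feat.shape
        # Global average pooling yields one semantic descriptor per channel.
        sem = feat.mean(axis=(1, 2))                                  # (C,)
        # A normalized coordinate grid encodes spatial position.
        ys, xs = np.mgrid[0:H, 0:W]
        pos = (w_pos[0] * ys / max(H - 1, 1)
               + w_pos[1] * xs / max(W - 1, 1))                       # (H, W)
        # Combine semantics and position, squash to (0, 1) as a gate.
        logits = (w_sem * sem)[:, None, None] + pos[None, :, :]
        gate = 1.0 / (1.0 + np.exp(-logits))
        return feat * gate                                            # recalibrated
    ```

    Several such "heads" with different weights could be concatenated along the channel axis, loosely mirroring the multi-head design mentioned in the abstract.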

     
  2. Many existing studies on complex brain disorders, such as Alzheimer's Disease, employ regression analysis to associate neuroimaging measures with cognitive status. However, whether these measures across multiple modalities have the predictive power to infer the trajectory of cognitive performance over time remains under-explored. In this paper, we propose a high-order multi-modal multi-mask feature learning model to uncover the temporal relationship between longitudinal neuroimaging measures and progressive cognitive output scores. The regularizations through sparsity-induced norms implemented in the proposed learning model enable the selection of only a small number of imaging features over time and capture modality structures for multi-modal imaging markers. Promising experimental results from extensive empirical studies on the ADNI cohort validate the effectiveness of the proposed method.
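     A common building block behind such sparsity-induced norms is the ℓ2,1 (group-sparse) penalty, whose proximal operator zeroes out entire feature rows at once. The sketch below is illustrative only, assuming a plain ℓ2,1 penalty on a weight matrix; the paper's model combines further structured norms, and the name `prox_l21` is ours.

    ```python
    import numpy as np

    def prox_l21(W, lam):
        """Proximal operator of lam * ||W||_{2,1}: row-wise group soft-thresholding.

        Rows of W whose l2 norm falls below lam are set to zero, which is
        how an l2,1 penalty selects a small subset of features jointly
        across all output tasks (columns of W).
        """
        norms = np.linalg.norm(W, axis=1, keepdims=True)   # (d, 1) row norms
        # Shrink each row toward zero; rows with norm <= lam vanish entirely.
        scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
        return W * scale
    ```

    Applied inside a proximal-gradient loop, this step drives most rows of the weight matrix to exact zeros, so only the surviving rows correspond to selected imaging features.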