Inverse Feature Learning: Feature Learning Based on Representation Learning of Error
- Award ID(s): 1657260
- PAR ID: 10365461
- Publisher / Repository: Institute of Electrical and Electronics Engineers
- Date Published:
- Journal Name: IEEE Access
- Volume: 8
- ISSN: 2169-3536
- Page Range / eLocation ID: p. 132937-132949
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We present a new method to improve the representational power of the features in Convolutional Neural Networks (CNNs). By studying traditional image processing methods and recent CNN architectures, we propose to use positional information in CNNs for effective exploration of feature dependencies. Rather than considering feature semantics alone, we incorporate spatial positions as an augmentation for feature semantics in our design. From this vantage, we present a Position-Aware Recalibration Module (PRM in short) which recalibrates features leveraging both feature semantics and position. Furthermore, inspired by multi-head attention, our module is capable of performing multiple recalibrations where results are concatenated as the output. As PRM is efficient and easy to implement, it can be seamlessly integrated into various base networks and applied to many position-aware visual tasks. Compared to original CNNs, our PRM introduces a negligible number of parameters and FLOPs, while yielding better performance. Experimental results on ImageNet and MS COCO benchmarks show that our approach surpasses related methods by a clear margin with less computational overhead. For example, we improve ResNet50 by an absolute 1.75% (77.65% vs. 75.90%) on the ImageNet 2012 validation dataset, and by 1.5%-1.9% mAP on the MS COCO validation dataset with almost no computational overhead. Code is made publicly available.
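The abstract above describes recalibrating features from both channel semantics and spatial position. As a rough illustration of that idea (not the authors' actual PRM; the function name, weights `w_sem`/`w_pos`, and the specific gating form are all hypothetical), one could gate a feature map with a sigmoid computed from pooled channel statistics plus normalized coordinates:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def position_aware_recalibrate(feat, w_sem, w_pos):
    """Toy recalibration combining feature semantics and spatial position.

    feat:  (C, H, W) feature map
    w_sem: (C,) per-channel semantic weights (hypothetical learned params)
    w_pos: (2,) weights for the normalized (y, x) coordinates
    """
    C, H, W = feat.shape
    # Global average pooling summarizes each channel's semantics.
    sem = feat.mean(axis=(1, 2))                                    # (C,)
    # Normalized spatial coordinates in [0, 1] serve as positional cues.
    ys, xs = np.meshgrid(np.linspace(0, 1, H),
                         np.linspace(0, 1, W), indexing="ij")
    pos = w_pos[0] * ys + w_pos[1] * xs                             # (H, W)
    # The gate mixes semantic and positional terms, broadcast over the map.
    gate = sigmoid((w_sem * sem)[:, None, None] + pos[None, :, :])  # (C, H, W)
    return feat * gate

feat = np.random.default_rng(0).standard_normal((4, 8, 8))
out = position_aware_recalibrate(feat, np.ones(4), np.array([0.5, -0.5]))
print(out.shape)  # (4, 8, 8)
```

Because the gate lies in (0, 1), recalibration only attenuates activations here; a "multi-head" variant in the spirit of the abstract would run several such gates and concatenate the results along the channel axis.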
- Recent work introduced the model of learning from discriminative feature feedback, in which a human annotator not only provides labels of instances, but also identifies discriminative features that highlight important differences between pairs of instances. It was shown that such feedback can be conducive to learning, and makes it possible to efficiently learn some concept classes that would otherwise be intractable. However, these results all relied upon perfect annotator feedback. In this paper, we introduce a more realistic, robust version of the framework, in which the annotator is allowed to make mistakes. We show how such errors can be handled algorithmically, in both an adversarial and a stochastic setting. In particular, we derive regret bounds in both settings that, as in the case of a perfect annotator, are independent of the number of features. We show that this result cannot be obtained by a naive reduction from the robust setting to the non-robust setting.