Active Label Refinement for Robust Training of Imbalanced Medical Image Classification Tasks in the Presence of High Label Noise
                        
- Award ID(s):
- 2245152
- PAR ID:
- 10618832
- Publisher / Repository:
- Springer Lecture Notes in Computer Science - Medical Image Computing and Computer Assisted Intervention (MICCAI)
- Date Published:
- Volume:
- 15011
- ISBN:
- 978-3-031-72120-5
- Page Range / eLocation ID:
- 37-47
- Format(s):
- Medium: X
- Location:
- Marrakesh, Morocco
- Sponsoring Org:
- National Science Foundation
More Like this
- Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags, and the label proportions within each bag are observed instead of the instance-level labels. The task is to learn a classifier to predict the labels of future individual instances. Prior work on LLP for multi-class data has yet to develop a theoretically grounded algorithm. In this work, we propose an approach to LLP based on a reduction to learning with label noise, using the forward correction (FC) loss of Patrini et al. [30]. We establish an excess risk bound and generalization error analysis for our approach, while also extending the theory of the FC loss, which may be of independent interest. Our approach demonstrates improved empirical performance in deep learning scenarios across multiple datasets and architectures, compared to the leading methods.
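The forward-correction (FC) loss mentioned in the abstract works by pushing the model's clean-class probabilities through a known (or estimated) label-noise transition matrix before computing cross-entropy against the observed noisy labels. A minimal sketch, assuming a symmetric-noise toy transition matrix; the function name and example values are illustrative, not from the paper:

```python
import numpy as np

def forward_correction_loss(probs, labels, T):
    """Cross-entropy on noise-corrected predictions, where
    T[i, j] = P(noisy label j | clean label i)."""
    noisy_probs = probs @ T  # predicted distribution over noisy labels
    eps = 1e-12              # guard against log(0)
    return -np.mean(np.log(noisy_probs[np.arange(len(labels)), labels] + eps))

# Toy example: 3 classes with mild symmetric label noise.
T = np.full((3, 3), 0.1) + 0.7 * np.eye(3)  # rows sum to 1
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.6, 0.2]])          # model's clean-class probabilities
labels = np.array([0, 1])                    # observed (possibly noisy) labels
loss = forward_correction_loss(probs, labels, T)
```

Minimizing this loss is equivalent, in expectation, to minimizing the clean-label cross-entropy, which is the property the LLP reduction exploits.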