
Title: Multifaceted Uncertainty Estimation for Label-Efficient Deep Learning
We present a novel multi-source uncertainty prediction approach that enables deep learning (DL) models to be actively trained with much less labeled data. By leveraging the second-order uncertainty representation provided by subjective logic (SL), we conduct an evidence-based theoretical analysis and formally decompose the predicted entropy over multiple classes into two distinct sources of uncertainty: vacuity and dissonance, caused by a lack of evidence and a conflict of strong evidence, respectively. The evidence-based entropy decomposition provides deeper insight into the nature of uncertainty, which can help effectively explore a large, high-dimensional unlabeled data space. We develop a novel loss function that augments DL-based evidence prediction with uncertainty anchor sample identification. The accurately estimated multiple sources of uncertainty are systematically integrated and dynamically balanced using a data sampling function for label-efficient active deep learning (ADL). Experiments on both synthetic and real data, together with comparisons against competitive active learning (AL) methods, demonstrate the effectiveness of the proposed ADL model.
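The vacuity and dissonance measures named above have closed-form definitions in subjective logic: for a K-class Dirichlet opinion with evidence e_k, the Dirichlet strength is S = sum_k e_k + K, the belief masses are b_k = e_k / S, vacuity is K / S, and dissonance aggregates the balance between pairs of belief masses. Below is a minimal NumPy sketch of these standard SL formulas; the function name is illustrative and not taken from the paper:

```python
import numpy as np

def vacuity_dissonance(evidence):
    """Decompose second-order uncertainty of a Dirichlet opinion.

    evidence -- non-negative evidence counts e_k for K classes.
    Returns (vacuity, dissonance): vacuity grows when total evidence
    is small; dissonance grows when strong evidence is conflicting.
    """
    e = np.asarray(evidence, dtype=float)
    K = e.size
    S = e.sum() + K              # Dirichlet strength (alpha_k = e_k + 1)
    b = e / S                    # belief masses
    vacuity = K / S              # uncertainty mass from lack of evidence

    diss = 0.0
    for k in range(K):
        others = np.delete(b, k)
        denom = others.sum()
        if denom == 0.0:
            continue
        pair = others + b[k]
        # relative mass balance: 1 when two beliefs are equal, 0 when one-sided
        safe = np.where(pair > 0, pair, 1.0)
        bal = np.where(pair > 0, 1.0 - np.abs(others - b[k]) / safe, 0.0)
        diss += b[k] * (others * bal).sum() / denom
    return vacuity, diss
```

High vacuity flags samples with little evidence for any class, while high dissonance flags samples where strong evidence conflicts; the paper's sampling function balances these two signals when selecting unlabeled samples.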
Authors:
Award ID(s):
1814450
Publication Date:
NSF-PAR ID:
10253144
Journal Name:
Advances in Neural Information Processing Systems
Volume:
33
ISSN:
1049-5258
Sponsoring Org:
National Science Foundation
More Like this
  1. Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien (Eds.)
  2. Building a predictive model based on historical Electronic Health Records (EHRs) for personalized healthcare has become an active research area. Benefiting from the powerful ability of feature extraction, deep learning (DL) approaches have achieved promising performance in many clinical prediction tasks. However, due to the lack of interpretability and trustworthiness, it is difficult to apply DL in real clinical cases of decision making. To address this, we propose an interpretable and trustworthy predictive model (INPREM) for healthcare. First, INPREM is designed as a linear model for interpretability while encoding non-linear relationships into the learning weights to model the dependencies between and within each visit. This enables us to obtain the contribution matrix of the input variables, which serves as the evidence for the prediction result(s) and helps physicians understand why the model gives such a prediction, thereby making the model more interpretable. Second, for trustworthiness, we place a random gate (which follows a Bernoulli distribution to turn on or off) over each weight of the model, as well as an additional branch to estimate data noise. With the help of Monte Carlo sampling and an objective function accounting for data noise, the model can capture the uncertainty of each prediction. The captured uncertainty, in turn, allows physicians to know how confident the model is, thus making the model more trustworthy. We empirically demonstrate that the proposed INPREM outperforms existing approaches by a significant margin. A case study is also presented to show how the contribution matrix and the captured uncertainty are used to assist physicians in making robust decisions.
  3. Using unreliable information sources that generate conflicting evidence may lead to large uncertainty, which significantly hurts the decision-making process. Recently, many approaches have been taken to integrate conflicting data from multiple sources and/or to fuse conflicting opinions from different entities. To explicitly deal with uncertainty, a belief model called Subjective Logic (SL), a variant of Dempster-Shafer theory, has been proposed to represent subjective opinions and to merge multiple opinions by offering a rich set of fusion operators, which have been used to solve many opinion inference problems in trust networks. However, the operators of SL are known to lack scalability when inferring unknown opinions from large network data, as a result of the sequential procedure of merging multiple opinions. In addition, SL does not consider deriving opinions in the presence of conflicting evidence. In this work, we propose a hybrid inference method that combines SL and Probabilistic Soft Logic (PSL), namely Collective Subjective Plus (CSL+), which is robust to highly conflicting evidence or a lack of evidence. PSL can reason about beliefs in a collective manner to deal with large-scale network data, allowing high scalability based on relationships between opinions; however, PSL does not consider an uncertainty dimension in a subjective opinion. To take benefits from both SL and PSL, CSL+ achieves high scalability and high prediction accuracy for unknown opinions with uncertainty derived from a lack of evidence and/or conflicting evidence. Through extensive experiments on four semi-synthetic and two real-world datasets, we show that CSL+ outperforms the state-of-the-art belief model (i.e., SL), probabilistic inference models (i.e., PSL, CSL), and a deep learning model (i.e., GCN-VAE-opinion) in terms of prediction accuracy, computational complexity, and real running time.
  4. Deep learning (DL) is one of the fastest-growing topics in materials data science, with rapidly emerging applications spanning atomistic, image-based, spectral, and textual data modalities. DL allows analysis of unstructured data and automated identification of features. The recent development of large materials databases has fueled the application of DL methods in atomistic prediction in particular. In contrast, advances in image and spectral data have largely leveraged synthetic data enabled by high-quality forward models as well as by generative unsupervised DL methods. In this article, we present a high-level overview of deep learning methods followed by a detailed discussion of recent developments of deep learning in atomistic simulation, materials imaging, spectral analysis, and natural language processing. For each modality we discuss applications involving both theoretical and experimental data, typical modeling approaches with their strengths and limitations, and relevant publicly available software and datasets. We conclude the review with a discussion of recent cross-cutting work related to uncertainty quantification in this field and a brief perspective on limitations, challenges, and potential growth areas for DL methods in materials science.
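The Bernoulli random-gate idea described in the INPREM abstract above can be illustrated with a generic Monte Carlo sketch: sample a binary mask over the weights on each forward pass and read the spread of the sampled predictions as an uncertainty estimate. All names and the single-layer softmax model here are illustrative assumptions, not the INPREM architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_gated_predict(x, W, b, gate_p=0.9, n_samples=50):
    """Monte Carlo prediction with Bernoulli gates over weights.

    Each forward pass samples a binary mask z ~ Bernoulli(gate_p)
    elementwise over W; the spread of the sampled softmax outputs is
    a simple proxy for predictive uncertainty.  The single linear
    layer is an illustrative stand-in for a real clinical model.
    """
    preds = []
    for _ in range(n_samples):
        z = rng.binomial(1, gate_p, size=W.shape)   # random gate per weight
        logits = x @ (W * z) + b
        exp = np.exp(logits - logits.max())         # stable softmax
        preds.append(exp / exp.sum())
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)    # mean prediction, spread
```

Averaging over the gated passes approximates the marginal prediction, and a large per-class standard deviation signals a prediction the model is not confident about, which is the kind of signal the abstract says is surfaced to physicians.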