

Title: Multifaceted Uncertainty Estimation for Label-Efficient Deep Learning
We present a novel multi-source uncertainty prediction approach that enables deep learning (DL) models to be actively trained with far less labeled data. By leveraging the second-order uncertainty representation provided by subjective logic (SL), we conduct evidence-based theoretical analysis and formally decompose the predicted entropy over multiple classes into two distinct sources of uncertainty: vacuity and dissonance, caused by a lack of evidence and a conflict of strong evidence, respectively. This evidence-based entropy decomposition provides deeper insight into the nature of uncertainty, which helps to effectively explore a large, high-dimensional unlabeled data space. We develop a novel loss function that augments DL-based evidence prediction with uncertainty anchor sample identification. The accurately estimated multiple sources of uncertainty are systematically integrated and dynamically balanced by a data sampling function for label-efficient active deep learning (ADL). Experiments on both synthetic and real data, together with comparisons against competitive active learning (AL) methods, demonstrate the effectiveness of the proposed ADL model.
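As a rough illustration of the vacuity/dissonance decomposition (not the paper's exact formulation), the sketch below computes both quantities from per-class Dirichlet evidence using the standard subjective-logic definitions; the function name and example evidence vectors are hypothetical.

    import numpy as np

    def vacuity_dissonance(evidence):
        # evidence: non-negative per-class evidence counts; Dirichlet alpha = evidence + 1
        alpha = np.asarray(evidence, dtype=float) + 1.0
        K, S = alpha.size, alpha.sum()
        b = (alpha - 1.0) / S            # per-class belief masses
        vacuity = K / S                  # uncertainty caused by a lack of evidence

        def bal(bj, bk):                 # relative balance between two belief masses
            return 1.0 - abs(bj - bk) / (bj + bk) if bj + bk > 0 else 0.0

        dissonance = 0.0                 # uncertainty caused by conflicting evidence
        for k in range(K):
            denom = sum(b[j] for j in range(K) if j != k)
            if denom > 0:
                dissonance += b[k] * sum(
                    b[j] * bal(b[j], b[k]) for j in range(K) if j != k
                ) / denom
        return vacuity, dissonance

For example, vacuity_dissonance([1, 1, 1]) gives high vacuity (hardly any evidence), while vacuity_dissonance([50, 50, 1]) gives low vacuity but high dissonance (strong but conflicting evidence); an active learner can treat the two cases differently when choosing samples to label.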
Award ID(s):
1954376 1825023
NSF-PAR ID:
10223464
Author(s) / Creator(s):
Editor(s):
Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien
Date Published:
Journal Name:
Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Building a predictive model based on historical Electronic Health Records (EHRs) for personalized healthcare has become an active research area. Benefiting from their powerful feature-extraction ability, deep learning (DL) approaches have achieved promising performance in many clinical prediction tasks. However, due to a lack of interpretability and trustworthiness, it is difficult to apply DL in real clinical cases of decision making. To address this, we propose an interpretable and trustworthy predictive model (INPREM) for healthcare. First, INPREM is designed as a linear model for interpretability, while encoding non-linear relationships into the learned weights to model the dependencies between and within each visit. This yields a contribution matrix for the input variables, which serves as evidence for the prediction result(s) and helps physicians understand why the model gives a particular prediction, thereby making the model more interpretable. Second, for trustworthiness, we place a random gate (which follows a Bernoulli distribution to turn on or off) over each weight of the model, along with an additional branch to estimate data noise. With the help of Monte Carlo sampling and an objective function accounting for data noise, the model can capture the uncertainty of each prediction. The captured uncertainty, in turn, tells physicians how confident the model is, thus making the model more trustworthy. We empirically demonstrate that the proposed INPREM outperforms existing approaches by a significant margin. A case study is also presented to show how the contribution matrix and the captured uncertainty assist physicians in making robust decisions.
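    A loose sketch of the trustworthiness mechanism described in this abstract: Bernoulli gates over the weights of a linear scorer, with Monte Carlo sampling turning repeated stochastic predictions into a confidence estimate. The weights and gate probabilities below are placeholders; INPREM learns them jointly with a noise-estimation branch that this sketch omits.

        import numpy as np

        rng = np.random.default_rng(0)

        def mc_predict(x, w, keep_prob, n_samples=100):
            # x: (d,) encoded visit features; w: (d,) linear weights;
            # keep_prob: (d,) learned probability that each weight's gate is "on"
            preds = []
            for _ in range(n_samples):
                gate = rng.binomial(1, keep_prob)    # sample an on/off gate per weight
                preds.append(float(x @ (gate * w)))  # gated linear prediction
            preds = np.asarray(preds)
            return preds.mean(), preds.std()         # estimate and its spread

    A wide spread (large standard deviation) flags a low-confidence prediction that a physician may want to double-check.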
  2. Using unreliable information sources that generate conflicting evidence may lead to large uncertainty, which significantly hurts the decision making process. Recently, many approaches have been proposed to integrate conflicting data from multiple sources and/or to fuse conflicting opinions from different entities. To deal with uncertainty explicitly, a belief model called Subjective Logic (SL), a variant of Dempster-Shafer Theory, has been proposed to represent subjective opinions and to merge multiple opinions through a rich set of fusion operators, which have been used to solve many opinion inference problems in trust networks. However, the SL operators are known to lack scalability when inferring unknown opinions from large network data, owing to the sequential procedure of merging multiple opinions. In addition, SL does not consider deriving opinions in the presence of conflicting evidence. In this work, we propose a hybrid inference method that combines SL and Probabilistic Soft Logic (PSL), called Collective Subjective Logic Plus (CSL+), which is robust to highly conflicting evidence or a lack of evidence. PSL can reason about beliefs in a collective manner to deal with large-scale network data, allowing high scalability based on relationships between opinions; however, PSL does not consider an uncertainty dimension in a subjective opinion. By taking the benefits of both SL and PSL, CSL+ achieves high scalability and high prediction accuracy for unknown opinions whose uncertainty derives from a lack of evidence and/or conflicting evidence. Through extensive experiments on four semi-synthetic and two real-world datasets, we show that CSL+ outperforms the state-of-the-art belief model (i.e., SL), probabilistic inference models (i.e., PSL, CSL), and a deep learning model (i.e., GCN-VAE-opinion) in terms of prediction accuracy, computational complexity, and actual running time.
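    For context on the scalability issue raised in this abstract, below is a minimal sketch of subjective logic's standard cumulative fusion operator for two multinomial opinions (belief vector b plus uncertainty mass u, with sum(b) + u = 1). Fusing many opinions means applying it sequentially, pair by pair, which is the bottleneck that CSL+'s collective PSL-based inference avoids; the variable names are illustrative.

        import numpy as np

        def cumulative_fuse(b_a, u_a, b_b, u_b):
            # b_a, b_b: belief-mass vectors; u_a, u_b: uncertainty masses (assumed > 0)
            kappa = u_a + u_b - u_a * u_b            # normalization constant
            b = (np.asarray(b_a) * u_b + np.asarray(b_b) * u_a) / kappa
            u = (u_a * u_b) / kappa                  # fused uncertainty shrinks
            return b, u

        # Two moderately uncertain, agreeing opinions fuse into a more confident one:
        # cumulative_fuse([0.6, 0.2], 0.2, [0.5, 0.2], 0.3) -> b ~ [0.64, 0.23], u ~ 0.14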
  3. Accurate uncertainty quantification is necessary to enhance the reliability of deep learning (DL) models in real-world applications. In the case of regression tasks, prediction intervals (PIs) should be provided along with the deterministic predictions of DL models. Such PIs are useful or "high-quality (HQ)" as long as they are sufficiently narrow and capture most of the probability density. In this article, we present a method to learn PIs for regression-based neural networks (NNs) automatically, in addition to the conventional target predictions. In particular, we train two companion NNs: one with a single output, the target estimate, and another with two outputs, the upper and lower bounds of the corresponding PI. Our main contribution is the design of a novel loss function for the PI-generation network that takes into account the output of the target-estimation network and has two optimization objectives: minimizing the mean PI width and ensuring PI integrity using constraints that implicitly maximize the PI probability coverage. Furthermore, we introduce a self-adaptive coefficient that balances both objectives within the loss function, which alleviates the task of fine-tuning. Experiments using a synthetic dataset, eight benchmark datasets, and a real-world crop yield prediction dataset showed that our method maintained a nominal probability coverage and produced significantly narrower PIs, without detriment to its target estimation accuracy, when compared to the PIs generated by three state-of-the-art neural-network-based methods. In other words, our method was shown to produce higher-quality PIs.
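    The loss described in this abstract can be pictured along the following lines: a mean-width term plus a coverage penalty, balanced by a self-adaptive coefficient, with the point estimate anchored inside its own interval. This is a hedged approximation of the idea, not the authors' exact objective; the 0.95 nominal coverage and the anchoring term are assumptions for illustration.

        import numpy as np

        def pi_loss(y, y_hat, lower, upper, lam, nominal=0.95):
            # y: targets; y_hat: estimates from the companion target network;
            # lower/upper: PI bounds; lam: self-adaptive trade-off coefficient
            width = np.mean(upper - lower)                   # objective 1: narrow PIs
            covered = ((y >= lower) & (y <= upper)).mean()   # empirical coverage
            coverage_pen = max(0.0, nominal - covered) ** 2  # objective 2: PI integrity
            # keep the target estimate inside its own interval (assumed constraint)
            anchor = np.mean(np.maximum(lower - y_hat, 0.0)
                             + np.maximum(y_hat - upper, 0.0))
            return width + lam * coverage_pen + anchor

        # For gradient training, the hard coverage indicator would be replaced by a
        # smooth (e.g., sigmoid-based) surrogate; numpy is used here only to show
        # the structure of the objective.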