Controlled Abstention Neural Networks for Identifying Skillful Predictions for Regression Problems
Abstract: The Earth system is exceedingly complex and often chaotic in nature, making prediction incredibly challenging: we cannot expect to make perfect predictions all of the time. Instead, we look for specific states of the system that lead to more predictable behavior than others, often termed “forecasts of opportunity.” When these opportunities are not present, scientists need prediction systems that are capable of saying “I don't know.” We introduce a novel loss function, termed the “abstention loss,” that allows neural networks to identify forecasts of opportunity for regression problems. The abstention loss works by incorporating uncertainty in the network's prediction to identify the more confident samples and abstain (say “I don't know”) on the less confident samples. The abstention loss is designed either to determine the optimal abstention fraction or to abstain on a user-defined fraction via a standard adaptive controller. Unlike many methods that attach uncertainty to neural network predictions post-training, the abstention loss is applied during training to preferentially learn from the more confident samples. The abstention loss is built upon nonlinear heteroscedastic regression, a standard computer science method. While nonlinear heteroscedastic regression is a simple yet powerful tool for incorporating uncertainty in regression problems, we demonstrate that the abstention loss outperforms it for the synthetic climate use cases explored here. The implementation of the proposed abstention loss is straightforward in most network architectures designed for regression, as it only requires modification of the output layer and loss function.
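The mechanics described in this abstract, a heteroscedastic regression head plus an adaptive controller on the abstention fraction, are compact enough to sketch. Below is a minimal, hypothetical PyTorch illustration under assumed design choices (a hard threshold on the predicted log-variance, a proportional controller with gain 0.1); it is not the authors' implementation, and all names are invented for exposition.

```python
import torch

def heteroscedastic_nll(mu, log_var, y):
    """Gaussian negative log-likelihood with a learned per-sample variance.

    The network has two output nodes per sample: a prediction mu and a
    log-variance log_var that acts as a per-sample uncertainty estimate.
    """
    return 0.5 * (log_var + (y - mu) ** 2 / torch.exp(log_var))

def abstention_loss(mu, log_var, y, threshold):
    """Mask out (abstain on) samples whose predicted log-variance exceeds
    a threshold, so training preferentially learns from confident samples."""
    confident = (log_var.detach() < threshold).float()
    nll = heteroscedastic_nll(mu, log_var, y)
    # Average the loss over the non-abstained samples only.
    return (confident * nll).sum() / confident.sum().clamp(min=1.0)

def update_threshold(threshold, abstained_frac, target_frac, gain=0.1):
    """One step of a proportional controller: lower the threshold when
    abstaining too little, raise it when abstaining too much."""
    return threshold + gain * (abstained_frac - target_frac)
```

In a training loop, one would call `abstention_loss` on each batch and `update_threshold` once per epoch, using the fraction of samples with `log_var >= threshold` as the measured abstention fraction.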
Controlled Abstention Neural Networks for Identifying Skillful Predictions for Classification Problems
Abstract: The Earth system is exceedingly complex and often chaotic in nature, making prediction incredibly challenging: we cannot expect to make perfect predictions all of the time. Instead, we look for specific states of the system that lead to more predictable behavior than others, often termed “forecasts of opportunity.” When these opportunities are not present, scientists need prediction systems that are capable of saying “I don't know.” We introduce a novel loss function, termed the “NotWrong loss,” that allows neural networks to identify forecasts of opportunity for classification problems. The NotWrong loss introduces an abstention class that allows the network to identify the more confident samples and abstain (say “I don't know”) on the less confident samples. The NotWrong loss is designed to abstain on a user-defined fraction of the samples via a standard adaptive controller. Unlike many machine learning methods used to reject samples post-training, the NotWrong loss is applied during training to preferentially learn from the more confident samples. We show that the NotWrong loss outperforms other existing loss functions for multiple climate use cases. The implementation of the proposed loss function is straightforward in most network architectures designed for classification, as it only requires the addition of an abstention class to the output layer and modification of the loss function.
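The classification variant is similarly easy to sketch: append one abstain class to the output layer and reward probability mass placed on either the true class or the abstain class, while penalizing abstention. The sketch below is loosely patterned on abstention-class losses from the literature and is not the exact NotWrong formulation; `alpha` and the epsilon guard are illustrative assumptions (in practice `alpha` would be set by the same kind of adaptive controller, to hit a user-defined abstention fraction).

```python
import torch

def notwrong_style_loss(logits, y, alpha):
    """Abstention-class loss sketch: the output layer has K class logits
    plus one extra 'abstain' logit (the last position).

    A sample contributes little loss if probability mass sits on either
    its true class or the abstain class; the alpha term penalizes
    abstention so the network cannot abstain on everything.
    """
    probs = torch.softmax(logits, dim=-1)            # shape (batch, K + 1)
    p_true = probs.gather(1, y.unsqueeze(1)).squeeze(1)
    p_abstain = probs[:, -1]
    eps = 1e-8
    not_wrong = -torch.log(p_true + p_abstain + eps)       # correct or abstained
    penalty = -alpha * torch.log(1.0 - p_abstain + eps)    # grows with abstention
    return (not_wrong + penalty).mean()
```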
- Award ID(s): 2019758
- PAR ID: 10447430
- Publisher / Repository: DOI prefix 10.1029
- Date Published: 2021
- Journal Name: Journal of Advances in Modeling Earth Systems
- Volume: 13
- Issue: 12
- ISSN: 1942-2466
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- The goal of active learning is to achieve the accuracy of passive learning while using far fewer labels. Exponential savings in label complexity have been proved in very special cases, but fundamental lower bounds show that such improvements are impossible in general. This suggests a need to explore alternative goals for active learning. Learning with abstention is one such alternative: the active learning algorithm may abstain from prediction and incur an error that is marginally smaller than random guessing. We develop the first computationally efficient active learning algorithm with abstention. Our algorithm provably achieves polylog(1/ε) label complexity, without any low-noise conditions. This guarantee reduces the label complexity by an exponential factor relative to passive learning and to active learning that is not allowed to abstain. Furthermore, our algorithm is guaranteed to abstain only on hard examples (where the true label distribution is close to a fair coin), a novel property we term “proper abstention” that also leads to a host of other desirable characteristics (e.g., recovering minimax guarantees in the standard setting, and avoiding the undesirable “noise-seeking” behavior often seen in active learning). We also provide novel extensions of our algorithm that achieve constant label complexity and deal with model misspecification. (A sketch of such a margin-based abstention rule appears after this list.)
- This work concerns the development of deep networks that are certifiably robust to adversarial attacks. Joint robust classification-detection was recently introduced as a certified defense mechanism, where adversarial examples are either correctly classified or assigned to an “abstain” class. In this work, we show that such a provable framework can benefit from an extension to networks with multiple explicit abstain classes, to which adversarial examples are adaptively assigned. We show that naïvely adding multiple abstain classes can lead to “model degeneracy”; we then propose a regularization approach and a training method that counter this degeneracy by promoting full use of the multiple abstain classes. Our experiments demonstrate that the proposed approach consistently achieves favorable trade-offs between standard and robust verified accuracy, outperforming state-of-the-art algorithms for various numbers of abstain classes. (A sketch of a usage-balancing regularizer in this spirit appears after this list.)
- Quantized or low-bit neural networks are attractive because of their inference efficiency. However, training deep neural networks with quantized activations involves minimizing a discontinuous and piecewise-constant loss function. Such a loss function has zero gradient almost everywhere (a.e.), which makes conventional gradient-based algorithms inapplicable. To this end, we study a novel class of biased first-order oracles, termed coarse gradients, for overcoming the vanishing-gradient issue. A coarse gradient is generated by replacing the a.e.-zero derivative of the quantized (i.e., staircase) ReLU activation in the chain rule with a heuristic proxy derivative called the straight-through estimator (STE). Although widely used for training quantized networks in practice, fundamental questions, such as when and why the ad hoc STE trick works, still lack a theoretical answer. In this paper, we propose a class of STEs with certain monotonicity and consider their application to the training of a two-linear-layer network with quantized activation functions for nonlinear multi-category classification. We establish performance guarantees for the proposed STEs by showing that the corresponding coarse gradient methods converge to the global minimum, which leads to perfect classification. Lastly, we present experimental results on synthetic data as well as the MNIST dataset to verify our theoretical findings and demonstrate the effectiveness of the proposed STEs. (A minimal STE implementation appears after this list.)
- Many natural phenomena, such as floods, earthquakes, volcanic eruptions, and extreme space weather events, come with severity indexes. While these indexes, whether linear or logarithmic, are vital, data-driven predictive models for these events typically use a fixed threshold instead. In this paper, we explore encoding this ordinality to enhance the performance of data-driven models, with specific application to solar flare forecasting. The prediction of solar flares is commonly approached as a binary forecasting problem, categorizing events as either Flare (FL) or No-Flare (NF) based on a chosen threshold (e.g., ≥C-class, ≥M-class, or ≥X-class). However, this binary formulation overlooks the inherent ordinality of the sub-classes within each binary class (FL and NF). We propose a novel loss function aimed at optimizing the binary flare prediction problem by embedding the intrinsic ordinal flare characteristics into the binary cross-entropy (BCE) loss function. This modification is intended to give the model better guidance based on the ordinal characteristics of the data and to improve overall performance. For our experiments, we employ a ResNet34-based model with transfer learning to predict ≥M-class flares, using the shape-based features of magnetograms of active region (AR) patches spanning from −90° to +90° of solar longitude as our input data. We use a composite skill score (CSS) as our evaluation metric, calculated as the geometric mean of the True Skill Score (TSS) and the Heidke Skill Score (HSS), to rank and compare our models' performance. The primary contributions of this work are as follows: (i) we introduce a novel approach to encoding ordinality into a binary loss function, with an application to solar flare prediction; (ii) we enhance solar flare forecasting by enabling flare predictions for each AR across the entire solar disk, without any longitudinal restrictions, and evaluate and compare performance; and (iii) our candidate model, optimized with the proposed loss function, shows improvements of ~17%, ~14%, and ~13% in terms of CSS for AR patches within ±30°, ±60°, and ±90° of solar longitude, respectively, when compared with standard BCE. Additionally, we demonstrate the ability to issue flare forecasts for ARs in near-limb regions (between ±60° and ±90°) with a CSS of 0.34 (TSS = 0.50, HSS = 0.23), expanding the scope of AR-based models for solar flare prediction. This advances the reliability of solar flare forecasts and leads to more effective prediction capabilities. (An illustrative ordinal-weighted BCE appears after this list.)
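Each of the related works above centers on a concrete mechanism, so short illustrative sketches follow; all are hypothetical Python, with names and parameters chosen for exposition rather than taken from the cited papers. First, the “proper abstention” property of the active-learning algorithm amounts to committing to a prediction only when the estimated label probability is far enough from a fair coin; the `margin` below is an assumed knob, not a quantity from the paper.

```python
import numpy as np

def predict_with_abstention(p_hat, margin=0.1):
    """Abstain whenever the estimated P(y = 1 | x) is close to a fair coin.

    Returns +1/-1 predictions, with None marking abstentions; in the
    paper's setting, abstentions incur an error marginally below 1/2.
    """
    p_hat = np.asarray(p_hat, dtype=float)
    out = np.full(p_hat.shape, None, dtype=object)
    confident = np.abs(p_hat - 0.5) > margin   # only commit on easy examples
    out[confident] = np.where(p_hat[confident] > 0.5, 1, -1)
    return out
```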
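For the certified-robustness work, the key training ingredient is a regularizer that keeps multiple abstain classes from collapsing into one. Below is one plausible, entropy-based form, offered as an assumption; the paper's exact regularizer may differ.

```python
import torch

def abstain_usage_regularizer(logits, num_abstain):
    """Encourage balanced use of multiple abstain classes.

    An entropy-style stand-in for the paper's regularizer: it measures how
    evenly probability mass is spread across the abstain outputs (taken to
    be the last `num_abstain` positions) and rewards high entropy, which
    counters the degeneracy of collapsing onto a single abstain class.
    """
    probs = torch.softmax(logits, dim=-1)
    usage = probs[:, -num_abstain:].mean(dim=0)        # per-abstain-class usage
    usage = usage / usage.sum().clamp(min=1e-8)        # normalize to a distribution
    entropy = -(usage * torch.log(usage + 1e-8)).sum()
    return -entropy  # add to the loss: minimizing it maximizes usage entropy
```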
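For the quantized-network work, the straight-through estimator replaces the almost-everywhere-zero derivative of a staircase activation with a heuristic proxy in the backward pass. A minimal PyTorch version using the clipped-ReLU proxy, one member of the monotone class the paper analyzes:

```python
import torch

class QuantizedReLU(torch.autograd.Function):
    """Staircase (quantized) ReLU trained with a straight-through estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Staircase activation: quantize the clipped input to 4 levels in [0, 1].
        # Its true derivative is zero almost everywhere.
        return torch.clamp(torch.floor(4.0 * x) / 4.0, 0.0, 1.0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: use the derivative of clip(x, 0, 1)
        # in place of the a.e.-zero derivative of the staircase.
        return grad_out * ((x > 0) & (x < 1)).to(grad_out.dtype)
```

Usage is `y = QuantizedReLU.apply(x)`; the forward pass stays quantized while gradients flow through the proxy.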
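Finally, for the solar-flare work, one plausible way to embed ordinality into binary cross-entropy is to weight each sample by how far its sub-class sits from the flare threshold; the weighting scheme below is an illustrative guess, not the paper's exact loss.

```python
import torch

def ordinal_weighted_bce(p_flare, y_binary, ordinal_distance):
    """Binary cross-entropy weighted by sub-class ordinality.

    `ordinal_distance` in [0, 1] encodes how far a sample's sub-class sits
    from the FL/NF threshold (e.g., an X-class flare or a quiet B1 region
    is far from an M-class cutoff; a C9 region is close). Up-weighting
    samples far from the threshold penalizes mistakes on clear cases more
    than mistakes on borderline ones.
    """
    eps = 1e-8
    bce = -(y_binary * torch.log(p_flare + eps)
            + (1.0 - y_binary) * torch.log(1.0 - p_flare + eps))
    return ((1.0 + ordinal_distance) * bce).mean()
```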