Title: "Why did the Model Fail?": Attributing Model Performance Changes to Distribution Shifts
Machine learning models frequently experience performance drops under distribution shifts. The underlying cause of such shifts may be multiple simultaneous factors such as changes in data quality, differences in specific covariate distributions, or changes in the relationship between label and features. When a model does fail during deployment, attributing performance change to these factors is critical for the model developer to identify the root cause and take mitigating actions. In this work, we introduce the problem of attributing performance differences between environments to distribution shifts in the underlying data generating mechanisms. We formulate the problem as a cooperative game where the players are distributions. We define the value of a set of distributions to be the change in model performance when only this set of distributions has changed between environments, and derive an importance weighting method for computing the value of an arbitrary set of distributions. The contribution of each distribution to the total performance change is then quantified as its Shapley value. We demonstrate the correctness and utility of our method on synthetic, semi-synthetic, and real-world case studies, showing its effectiveness in attributing performance changes to a wide range of distribution shifts.
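The game-theoretic attribution described above can be sketched in a few lines. Everything below is illustrative: the two "players" P(X) and P(Y|X) and the table of performance drops are hypothetical stand-ins for the paper's importance-weighting estimates; only the exact Shapley computation itself is standard.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game.
    `value` maps a frozenset of players to the game's payoff."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(S | {p}) - value(S))
        phi[p] = total
    return phi

# Hypothetical value function: the drop in model performance when only the
# distributions in the given subset have shifted between environments.
drop = {
    frozenset(): 0.00,
    frozenset({"P(X)"}): 0.02,
    frozenset({"P(Y|X)"}): 0.07,
    frozenset({"P(X)", "P(Y|X)"}): 0.10,
}
phi = shapley_values(["P(X)", "P(Y|X)"], lambda S: drop[S])
```

Note how the interaction term (the joint shift costs 0.10, more than 0.02 + 0.07) is split between the players: phi assigns 0.025 to P(X) and 0.075 to P(Y|X), and by the efficiency property the values sum to the total 0.10 drop.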
Journal Name: ICML 2023
Sponsoring Org: National Science Foundation
More Like this
  1. Given a sequence of random graphs, we address the problem of online monitoring and detection of changes in the underlying data distribution. To this end, we adopt the Random Dot Product Graph (RDPG) model, which postulates that each node has an associated latent vector and that inner products between these vectors dictate the edge-formation probabilities. Existing approaches for graph change-point detection (CPD) either rely on extensive computation or store and process the entire observed time series. In this paper we consider the cumulative sum of a judicious monitoring function, which quantifies the discrepancy between the streaming graph observations and the nominal model. This reference distribution is inferred via spectral embeddings of the first few graphs in the sequence, and the monitoring function can be updated in an efficient, online fashion. We characterize the distribution of this running statistic, allowing us to select appropriate thresholding parameters that guarantee error-rate control. The end result is a lightweight online CPD algorithm with a proven capability to flag distribution shifts in the arriving graphs. The novel method is tested on both synthetic and real network data, corroborating its effectiveness in quickly detecting changes in the input graph sequence.
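The running-statistic idea above can be illustrated with a generic one-sided CUSUM on a scalar stream. The paper's monitoring function is RDPG-specific, so the Gaussian stream, slack, and threshold below are illustrative stand-ins.

```python
import random

def cusum_detector(stream, ref_mean, slack=0.5, threshold=8.0):
    """One-sided CUSUM: accumulate positive deviations from the reference
    mean (minus a slack term) and flag once the running sum crosses the
    threshold."""
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + (x - ref_mean - slack))
        if s > threshold:
            return t  # index at which the change is flagged
    return None  # no change detected

random.seed(0)
# nominal phase (mean 0), then a shifted phase (mean 3) from index 100 on
stream = [random.gauss(0, 1) for _ in range(100)] + \
         [random.gauss(3, 1) for _ in range(100)]
alarm = cusum_detector(stream, ref_mean=0.0)
```

The slack term makes the statistic drift back toward zero under the nominal model, so small fluctuations never accumulate, while a sustained shift drives the sum over the threshold within a few observations.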
  2. Streaming adaptations of manifold-learning-based dimensionality reduction methods, such as Isomap, are based on the assumption that a small initial batch of observations is enough for exact learning of the manifold, while the remaining streaming data instances can be cheaply mapped to this manifold. However, there are no theoretical results showing that this core assumption is valid. Moreover, such methods typically assume that the underlying data distribution is stationary and are not equipped to detect, or handle, sudden changes or gradual drifts in the distribution that may occur while the data is streaming. We present theoretical results showing that the quality of a learned manifold asymptotically converges as the size of the data increases. We then show that a Gaussian Process Regression (GPR) model that uses a manifold-specific kernel function and is trained on an initial batch of sufficient size can closely approximate state-of-the-art streaming Isomap algorithms, and that the predictive variance obtained from the GPR prediction can be employed as an effective detector of changes in the underlying data distribution. Results on several synthetic and real data sets show that the resulting algorithm can effectively learn a lower-dimensional representation of high-dimensional data in a streaming setting while identifying shifts in the generative distribution. For instance, key findings on a gas sensor array data set show that our method can detect changes in the underlying data stream triggered by real-world factors, such as the introduction of a new gas into the system, while efficiently mapping data onto a low-dimensional manifold.

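The drift-detection idea above, predictive variance blowing up away from the training region, can be sketched with a plain GPR. The RBF kernel below is a stand-in for the manifold-specific kernel, and the sizes and hyperparameters are illustrative.

```python
import numpy as np

def gpr_predict(X_train, y_train, X_test, length=1.0, noise=1e-2):
    """Standard GP regression with an RBF kernel; returns the predictive
    mean and variance. The RBF prior has k(x, x) = 1."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * length ** 2))
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y_train
    var = 1.0 + noise - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
in_dist = np.array([[0.0, 0.0]])   # inside the training region
shifted = np.array([[5.0, 5.0]])   # far outside: a distribution shift
_, v_in = gpr_predict(X, y, in_dist)
_, v_out = gpr_predict(X, y, shifted)
```

For the in-distribution query the variance collapses toward the noise level, while for the shifted query it reverts to the prior variance, so thresholding the predictive variance acts as a change detector.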
  3. When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain—but not all—distribution shifts could result in significant performance degradation. In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially, making interventions by a human expert (or model retraining) unnecessary. While several works have developed tests for distribution shifts, these typically either use non-sequential methods, or detect arbitrary shifts (benign or harmful), or both. We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate. In this work, we design simple sequential tools for testing if the difference between source (training) and target (test) distributions leads to a significant increase in a risk function of interest, like accuracy or calibration. Recent advances in constructing time-uniform confidence sequences allow efficient aggregation of statistical evidence accumulated during the tracking process. The designed framework is applicable in settings where (some) true labels are revealed after the prediction is performed, or when batches of labels become available in a delayed fashion. We demonstrate the efficacy of the proposed framework through an extensive empirical study on a collection of simulated and real datasets. 
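The monitoring scheme above can be sketched with a deliberately simple anytime-valid bound: a Hoeffding radius made uniform over time by a union bound, which is looser than the time-uniform confidence sequences referenced in the abstract but carries the same flavor of guarantee. The risk levels, eps, and alpha below are illustrative.

```python
import math
import random

def harmful_shift_monitor(losses, source_risk, eps=0.05, alpha=0.05):
    """Raise an alarm when a time-uniform lower confidence bound on the
    target risk exceeds source_risk + eps. Losses must lie in [0, 1]."""
    running_sum = 0.0
    for t, loss in enumerate(losses, start=1):
        running_sum += loss
        # one-sided Hoeffding radius; the union bound over all t keeps the
        # overall false-alarm probability below alpha
        r_t = math.sqrt(math.log(math.pi ** 2 * t ** 2 / (3 * alpha)) / (2 * t))
        if running_sum / t - r_t > source_risk + eps:
            return t  # harmful shift flagged at time t
    return None  # no alarm: shift absent or benign

random.seed(1)
# benign stream: target risk matches the source risk of 0.1
benign = [1.0 if random.random() < 0.1 else 0.0 for _ in range(300)]
# harmful stream: target risk jumps to 0.5
harmful = [1.0 if random.random() < 0.5 else 0.0 for _ in range(300)]
benign_alarm = harmful_shift_monitor(benign, source_risk=0.1)
harmful_alarm = harmful_shift_monitor(harmful, source_risk=0.1)
```

Because the radius is valid simultaneously over all times, the monitor can be checked after every revealed label without inflating the false-alarm rate, and it stays silent on the benign stream while flagging the harmful one.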
  4. We consider a general formulation of the multiple change-point problem, in which the data are assumed to belong to a set equipped with a positive semidefinite kernel. We propose a model-selection penalty that allows selecting the number of change points in Harchaoui and Cappe's kernel-based change-point detection method. The model-selection penalty generalizes non-asymptotic model-selection penalties for the change-in-mean problem with univariate data. We prove a non-asymptotic oracle inequality for the resulting kernel-based change-point detection method, regardless of the (unknown) number of change points, thanks to a concentration result for Hilbert-space-valued random variables that may be of independent interest. Experiments on synthetic and real data illustrate the proposed method, demonstrating its ability to detect subtle changes in the distribution of the data.
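A minimal kernel change-point scan conveys the core idea: compare the two segments around each candidate split with a kernel-based discrepancy and pick the best split. This sketch handles a single change point with a biased MMD statistic and a hand-chosen bandwidth, not the penalized multiple-change-point procedure analyzed above.

```python
import numpy as np

def kernel_cpd_single(X, bandwidth=1.0, min_seg=5):
    """Locate one change point in a 1-D sequence by maximizing the
    (biased) MMD^2 between the two segments under an RBF kernel."""
    n = len(X)
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-d2 / (2 * bandwidth ** 2))
    best_t, best_score = None, -np.inf
    for t in range(min_seg, n - min_seg):
        a, b = slice(0, t), slice(t, n)
        mmd2 = K[a, a].mean() + K[b, b].mean() - 2 * K[a, b].mean()
        # weight by segment sizes so near-boundary splits are not favored
        score = (t * (n - t) / n) * mmd2
        if score > best_score:
            best_t, best_score = t, score
    return best_t

rng = np.random.default_rng(0)
# 60 samples from N(0, 1), then 60 samples from N(3, 1)
X = np.concatenate([rng.normal(0, 1, 60), rng.normal(3, 1, 60)])
t_hat = kernel_cpd_single(X)
```

Because the kernel compares full distributions rather than just means, the same scan also picks up variance or shape changes that a change-in-mean statistic would miss.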
  5. Estimating and predicting the state of the atmosphere is a probabilistic problem for which an ensemble modeling approach is often taken to represent uncertainty in the system. Common methods for examining uncertainty and assessing performance for ensembles emphasize pointwise statistics or marginal distributions. However, these methods lose specific information about individual ensemble members. This paper explores contour band depth (cBD), a method of analyzing uncertainty in terms of contours of scalar fields. cBD is fully nonparametric and induces an ordering on ensemble members that leads to box-and-whisker-plot-type visualizations of uncertainty for two-dimensional data. By applying cBD to synthetic ensembles, we demonstrate that it provides enhanced information about the spatial structure of ensemble uncertainty. We also find that the usefulness of the cBD analysis depends on the presence of multiple modes and multiple scales in the ensemble of contours. Finally, we apply cBD to compare various convection-permitting forecasts from different ensemble prediction systems and find that, compared to standard analysis methods, the value it adds in real-world applications has clear limitations. In some cases, contour boxplots can provide deeper insight into differences in spatial characteristics between the different ensemble forecasts. Nevertheless, identification of outliers using cBD is not always intuitive, and the method can be especially challenging to implement for flow that exhibits multiple spatial scales (e.g., discrete convective cells embedded within a mesoscale weather system).

    Significance Statement

    Predictions of Earth’s atmosphere inherently come with some degree of uncertainty owing to incomplete observations and the chaotic nature of the system. Understanding that uncertainty is critical when drawing scientific conclusions or making policy decisions from model predictions. In this study, we explore a method for describing model uncertainty when the quantities of interest are well represented by contours. The method yields a quantitative visualization of uncertainty in both the location and the shape of contours to an extent that is not possible with standard uncertainty quantification methods and may eventually prove useful for the development of more robust techniques for evaluating and validating numerical weather models.

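The depth-based ordering of contours described in item 5 can be sketched with a simplified set-based variant: represent each contour by the binary mask of the region it encloses, and call a member "banded" by a pair of other members when it contains their intersection and fits inside their union. This is only the basic band-depth idea; the paper's cBD method has additional machinery, and the toy ensemble below is hypothetical.

```python
import numpy as np

def contour_band_depth(masks):
    """Simplified contour band depth: the depth of each member is the
    fraction of pairs of other members whose band contains it."""
    n = len(masks)
    depth = np.zeros(n)
    for i in range(n):
        count, pairs = 0, 0
        for j in range(n):
            for k in range(j + 1, n):
                if i in (j, k):
                    continue
                pairs += 1
                inter = masks[j] & masks[k]
                union = masks[j] | masks[k]
                # banded: intersection <= mask_i <= union (elementwise)
                if np.all(inter <= masks[i]) and np.all(masks[i] <= union):
                    count += 1
        depth[i] = count / pairs
    return depth

# toy ensemble on a 50x50 grid: nested disks plus one offset outlier
yy, xx = np.mgrid[0:50, 0:50]
def disk(cx, cy, r):
    return ((xx - cx) ** 2 + (yy - cy) ** 2) <= r ** 2
masks = [disk(25, 25, r) for r in (8, 10, 12, 14)] + [disk(5, 5, 10)]
depth = contour_band_depth(masks)
```

As with functional band depth, central members earn high depth while both the outlier and the extreme (innermost/outermost) contours score low, which is exactly the ordering that drives the contour-boxplot visualization.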