Title: Interpretable Detection of Distribution Shifts in Learning Enabled Cyber-Physical Systems
The use of learning-based components in cyber-physical systems (CPS) has created a gamut of possible avenues to use high-dimensional real-world signals generated from sensors like cameras and LiDAR. The ability to process such signals can be largely attributed to the adoption of high-capacity function approximators like deep neural networks. However, this does not come without its potential perils. The pitfalls arise from possible over-fitting and subsequent unsafe behavior when exposed to unknown environments. One challenge is that, in high-dimensional input spaces, it is almost impossible to experience enough training data in the design phase. What is required here is an efficient way to flag out-of-distribution (OOD) samples that is precise enough to not raise too many false alarms. In addition, the system needs to be able to detect these in a computationally efficient manner at runtime. In this paper, our proposal is to build good representations for in-distribution data. We introduce the idea of a memory bank to store prototypical samples from the input space. We use these memories to compute probability density estimates using kernel density estimation techniques. We evaluate our technique on two challenging scenarios: a self-driving car setting implemented inside the simulator CARLA with image inputs, and an autonomous racing-car navigation setting with LiDAR inputs. In both settings, we observed that a deviation from the in-distribution setting can potentially lead to a deviation from safe behavior. An added benefit of using training samples as memories to detect out-of-distribution inputs is that the system is interpretable to a human operator. Explanations of this nature are generally hard to obtain from purely deep-learning-based alternatives. Our code for reproducing the experiments is available at https://github.com/yangy96/interpretable_ood_detection.git
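Below is a minimal sketch of the memory-bank-plus-KDE idea described in the abstract, using scikit-learn. The random prototype selection, feature dimensionality, Gaussian kernel bandwidth, and quantile-based threshold are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch: memory bank of prototypical in-distribution features + kernel
# density estimation for OOD flagging. All hyperparameters are illustrative.
import numpy as np
from sklearn.neighbors import KernelDensity

def build_memory_bank(features: np.ndarray, num_prototypes: int = 256) -> np.ndarray:
    """Cache a subset of in-distribution feature vectors as prototypical memories."""
    idx = np.random.choice(len(features), size=min(num_prototypes, len(features)), replace=False)
    return features[idx]

def fit_density(memory_bank: np.ndarray, bandwidth: float = 0.5) -> KernelDensity:
    """Fit a Gaussian kernel density estimate over the stored memories."""
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(memory_bank)

def is_out_of_distribution(kde: KernelDensity, x: np.ndarray, log_density_threshold: float) -> bool:
    """Flag a test feature vector whose estimated log-density falls below the threshold."""
    return bool(kde.score_samples(x.reshape(1, -1))[0] < log_density_threshold)

# Usage: fit on in-distribution features, then calibrate the threshold so that
# only a small fraction of held-out in-distribution samples are flagged.
train_feats = np.random.randn(1000, 32)          # placeholder for real sensor features
kde = fit_density(build_memory_bank(train_feats))
threshold = np.quantile(kde.score_samples(train_feats), 0.05)  # ~5% false-alarm budget
print(is_out_of_distribution(kde, 5.0 * np.random.randn(32), threshold))
```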
Award ID(s):
2125561
PAR ID:
10331509
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM/IEEE International Conference on Cyber-Physical Systems
ISSN:
2375-8317
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Incorporating learning-based components in current state-of-the-art cyber-physical systems (CPS) has been a challenge due to the brittleness of the underlying deep neural networks. On the bright side, if executed correctly with safety guarantees, this has the ability to revolutionize domains like autonomous systems, medicine, and other safety-critical domains. This is because it would allow system designers to use high-dimensional outputs from sensors like cameras and LiDAR. The trepidation in deploying systems with vision and LiDAR components comes from incidents of catastrophic failures in the real world. The root cause behind recent reports of self-driving cars running into difficult-to-handle scenarios is ingrained in the software components which handle such sensor inputs. The ability to handle such high-dimensional signals is due to the explosion of algorithms which use deep neural networks. Sadly, the reason behind the safety issues is also deep neural networks themselves. The pitfalls occur due to possible over-fitting and lack of awareness about the blind spots induced by the training distribution. Ideally, system designers would wish to cover as many scenarios during training as possible. However, achieving meaningful coverage is impossible. This naturally leads to the following question: is it feasible to flag out-of-distribution (OOD) samples without causing too many false alarms? Such an OOD detector should also be computationally efficient, because OOD detectors are often executed as frequently as the sensors are sampled. Our aim in this article is to build an effective anomaly detector. To this end, we propose the idea of a memory bank to cache data samples which are representative enough to cover most of the in-distribution data. The similarity with respect to such samples can be a measure of familiarity of the test input. This is made possible by an appropriate choice of distance function tailored to the type of sensor we are interested in. Additionally, we adapt the conformal anomaly detection framework to capture distribution shifts with a guarantee on the false alarm rate. We report the performance of our technique on two challenging scenarios: a self-driving car setting implemented inside the simulator CARLA with image inputs and an autonomous racing-car navigation setting with LiDAR inputs. From the experiments, it is clear that a deviation from the in-distribution setting can potentially lead to unsafe behavior. It should be noted that not all OOD inputs lead to precarious situations in practice, but staying in-distribution is akin to staying within a safety bubble of predictable behavior. An added benefit of our memory-based approach is that the OOD detector produces interpretable feedback for a human designer. This is of utmost importance since it recommends a potential fix for the situation as well. In other competing approaches, such feedback is difficult to obtain due to reliance on techniques which use variational autoencoders.
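As a companion to the conformal anomaly detection step mentioned in this abstract, here is a hedged sketch of an inductive conformal detector whose nonconformity score is the distance to the nearest stored memory. The distance function, calibration split, and threshold epsilon are assumptions for illustration; the paper tailors the distance to the sensor modality.

```python
# Hedged sketch: conformal anomaly detection over a memory bank. Under
# exchangeability, flagging p-values below epsilon bounds the false alarm rate
# at roughly epsilon.
import numpy as np

def nonconformity(memory_bank: np.ndarray, x: np.ndarray) -> float:
    """Distance from x to its nearest memory; larger means less familiar."""
    return float(np.min(np.linalg.norm(memory_bank - x, axis=1)))

def conformal_p_value(calibration_scores: np.ndarray, test_score: float) -> float:
    """Fraction of calibration scores at least as extreme as the test score."""
    return (np.sum(calibration_scores >= test_score) + 1) / (len(calibration_scores) + 1)

# Usage with synthetic stand-ins for real feature vectors.
memories = np.random.randn(500, 16)
calibration = np.random.randn(200, 16)
cal_scores = np.array([nonconformity(memories, c) for c in calibration])
test_score = nonconformity(memories, 4.0 * np.random.randn(16))
p = conformal_p_value(cal_scores, test_score)
print("OOD" if p < 0.05 else "in-distribution", p)
```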
  2. Deep neural networks (DNNs) have achieved near-human-level accuracy on many datasets across different domains. But they are known to produce incorrect predictions with high confidence on inputs far from the training distribution. This lack of calibration has limited the adoption of deep learning models in high-assurance systems such as autonomous driving, air traffic management, cybersecurity, and medical diagnosis. The problem of detecting when an input is outside the training distribution of a machine learning model, and hence its prediction on this input cannot be trusted, has received significant attention recently. Several techniques based on statistical, geometric, topological, or relational signatures have been developed to detect out-of-distribution (OOD) or novel inputs. In this paper, we present a runtime monitor based on predictive processing and dual process theory. We posit that bottom-up deep neural networks can be monitored using top-down context models comprising two layers. The first layer is a feature density model that learns the joint distribution of the original DNN's inputs, outputs, and the model's explanation for its decisions. The second layer is a graph Markov neural network that captures an even broader context. We demonstrate the efficacy of our monitoring architecture in recognizing out-of-distribution and out-of-context inputs on image classification and object detection tasks.
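A rough sketch of the first monitoring layer described above: a learned density over the DNN's features and outputs, so that unlikely (feature, prediction) pairs are flagged. The Gaussian mixture, the choice of penultimate features plus softmax scores, and the threshold are illustrative assumptions rather than the paper's architecture.

```python
# Hedged sketch: a feature density model as a runtime monitor for a DNN.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_context_model(features: np.ndarray, softmax_outputs: np.ndarray,
                      n_components: int = 8) -> GaussianMixture:
    """Learn a joint density over penultimate features and predicted class scores."""
    joint = np.concatenate([features, softmax_outputs], axis=1)
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(joint)

def flag_out_of_context(model: GaussianMixture, feature: np.ndarray,
                        softmax: np.ndarray, log_density_threshold: float) -> bool:
    """Return True when the (feature, output) pair is unlikely under the context model."""
    joint = np.concatenate([feature, softmax]).reshape(1, -1)
    return bool(model.score_samples(joint)[0] < log_density_threshold)
```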
  3. Deep-learning-driven safety-critical autonomous systems, such as self-driving cars, must be able to detect situations where their trained models are not able to make a trustworthy prediction. This ability to determine the novelty of a new input with respect to a trained model is critical for such systems because novel inputs due to changes in the environment, adversarial attacks, or even unintentional noise can potentially lead to erroneous, perhaps life-threatening decisions. This paper proposes a learning framework that leverages information learned by the prediction model in a task-aware manner to detect novel scenarios. We use network saliency to provide the learning architecture with knowledge of the input areas that are most relevant to the decision-making and learn an association between the saliency map and the predicted output to determine the novelty of the input. We demonstrate the efficacy of this method through experiments on real-world driving datasets as well as through driving scenarios in our in-house indoor driving environment, where the novel image can be sampled from another driving dataset with similar features or from adversarially attacked images from the training dataset. We find that our method is able to systematically detect novel inputs and quantify the deviation from the target prediction through this task-aware approach.
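A hedged PyTorch sketch of the saliency-based novelty idea: compute a gradient saliency map for the test input and score its deviation from a stored in-distribution reference. The gradient-based saliency, the MSE deviation measure, and the reference map are assumptions for illustration, not the paper's exact learned association.

```python
# Hedged sketch: gradient saliency + deviation from an in-distribution reference.
import torch
import torch.nn.functional as F

def saliency_map(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Absolute gradient of the top predicted logit with respect to the input."""
    x = x.clone().requires_grad_(True)
    logits = model(x.unsqueeze(0))
    logits.max().backward()
    return x.grad.abs()

def novelty_score(saliency: torch.Tensor, reference_saliency: torch.Tensor) -> float:
    """Deviation of the test saliency pattern from a stored reference pattern."""
    return F.mse_loss(saliency, reference_saliency).item()
```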
  4. Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks, where attackers can inject hidden backdoors during the training stage. This poses a serious threat to the Model-as-a-Service setting, where downstream users directly utilize third-party models (e.g., HuggingFace Hub, ChatGPT). To this end, we study the inference-stage black-box backdoor detection problem in this paper, where defenders aim to build a firewall to filter out backdoor inputs at inference time, with only input samples and prediction labels available. Existing investigations of this problem either rely on strong assumptions about the types of triggers and attacks or suffer from poor efficiency. To build a more generalized and efficient method, we first provide a novel causality-based lens to analyze the heterogeneous prediction behaviors of clean and backdoored samples in the inference stage, considering both sample-specific and sample-agnostic backdoor attacks. Motivated by the causal analysis and do-calculus in causal inference, we introduce Black-box Backdoor detection under the Causality Lens (BBCaL), which distinguishes backdoor and clean samples by analyzing prediction consistency after progressively constructing counterfactual samples. Theoretical analysis also sheds light on the effectiveness of BBCaL. Extensive experiments on three benchmark datasets validate the effectiveness and efficiency of our method.
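The abstract describes distinguishing backdoored from clean inputs via prediction consistency under progressively constructed counterfactuals. The sketch below substitutes simple progressive noise for the paper's counterfactual construction and is only an illustration of the consistency check; the perturbation scheme, step count, and decision threshold are assumptions, not the BBCaL procedure.

```python
# Hedged sketch: label stability under progressively stronger perturbations.
# Backdoored inputs often keep their (attacker-chosen) label far more stubbornly
# than clean inputs do.
import torch

@torch.no_grad()
def prediction_consistency(model: torch.nn.Module, x: torch.Tensor,
                           steps: int = 8, noise_scale: float = 0.1) -> float:
    """Fraction of progressively noisier copies of x that keep the original label."""
    base_label = model(x.unsqueeze(0)).argmax(dim=1)
    kept = 0
    for i in range(1, steps + 1):
        perturbed = x + noise_scale * i * torch.randn_like(x)
        kept += int((model(perturbed.unsqueeze(0)).argmax(dim=1) == base_label).item())
    return kept / steps
```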
  5. Training the deep neural networks that dominate NLP requires large datasets. These are often collected automatically or via crowdsourcing, and may exhibit systematic biases or annotation artifacts. By the latter we mean spurious correlations between inputs and outputs that do not represent a generally held causal relationship between features and classes; models that exploit such correlations may appear to perform a given task well, but fail on out-of-sample data. In this paper, we evaluate the use of different attribution methods for aiding identification of training data artifacts. We propose new hybrid approaches that combine saliency maps (which highlight important input features) with instance attribution methods (which retrieve training samples influential to a given prediction). We show that this proposed training-feature attribution can be used to efficiently uncover artifacts in training data when a challenging validation set is available. We also carry out a small user study to evaluate whether these methods are useful to NLP researchers in practice, with promising results. We make code for all methods and experiments in this paper available.
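A hedged sketch of how a saliency signal might be combined with a simple instance attribution step to surface candidate artifacts. Cosine-similarity retrieval stands in for the influence-style instance attribution methods the abstract mentions, and the token-level scoring is an assumption for illustration.

```python
# Hedged sketch: training-feature attribution = salient in the test input AND
# prominent in the retrieved, influential training examples.
import numpy as np

def retrieve_influential(train_embeddings: np.ndarray, test_embedding: np.ndarray,
                         top_k: int = 5) -> np.ndarray:
    """Indices of training examples most similar (cosine) to the test input."""
    sims = train_embeddings @ test_embedding / (
        np.linalg.norm(train_embeddings, axis=1) * np.linalg.norm(test_embedding) + 1e-8)
    return np.argsort(-sims)[:top_k]

def training_feature_attribution(saliency_weights: dict, retrieved_token_counts: dict) -> dict:
    """Score tokens that are both salient for the prediction and frequent in retrieved examples."""
    tokens = set(saliency_weights) | set(retrieved_token_counts)
    return {t: saliency_weights.get(t, 0.0) * retrieved_token_counts.get(t, 0) for t in tokens}
```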