
Title: Deep Set Auto Encoders for Anomaly Detection in Particle Physics
There is an increased interest in model-agnostic search strategies for physics beyond the Standard Model at the Large Hadron Collider. We introduce a Deep Set Variational Autoencoder and present results on the Dark Machines Anomaly Score Challenge. We find that the method attains the best anomaly detection ability when there is no decoding step for the network, and the anomaly score is based solely on the representation within the encoded latent space. This method was one of the top-performing models in the Dark Machines Challenge, both for the open data sets as well as the blinded data sets.
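A latent-space-only anomaly score of the kind described above can be sketched as follows. This is a minimal illustration assuming a diagonal-Gaussian variational encoder that outputs a mean `mu` and log-variance `logvar` per event; it is not the authors' exact implementation.

```python
import numpy as np

def latent_anomaly_score(mu, logvar):
    """Anomaly score computed purely in the encoded latent space: the KL
    divergence of the diagonal-Gaussian posterior N(mu, exp(logvar)) from
    the standard-normal prior, summed over latent dimensions. Events whose
    encodings sit far from the prior receive high scores; no decoder or
    reconstruction error is involved."""
    mu = np.asarray(mu, dtype=float)
    logvar = np.asarray(logvar, dtype=float)
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

# A background-like encoding at the prior scores zero; a displaced
# encoding scores high.
background = latent_anomaly_score(np.zeros(8), np.zeros(8))
anomaly = latent_anomaly_score(np.full(8, 3.0), np.zeros(8))
```

Because the score depends only on the encoder output, it can be evaluated without running the decoding step at all.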
Authors:
Award ID(s):
2019786
Publication Date:
NSF-PAR ID:
10323040
Journal Name:
SciPost Physics
Volume:
12
Issue:
1
ISSN:
2542-4653
Sponsoring Org:
National Science Foundation
More Like this
  1. We describe the outcome of a data challenge conducted as part of the Dark Machines (https://www.darkmachines.org) initiative and the Les Houches 2019 workshop on Physics at TeV colliders. The challenge aims to detect signals of new physics at the Large Hadron Collider (LHC) using unsupervised machine learning algorithms. First, we propose how an anomaly score could be implemented to define model-independent signal regions in LHC searches. We define and describe a large benchmark dataset, consisting of >1 billion simulated LHC events corresponding to 10 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 13 TeV. We then review a wide range of anomaly detection and density estimation algorithms, developed in the context of the data challenge, and we measure their performance in a set of realistic analysis environments. We draw a number of useful conclusions that will aid the development of unsupervised new physics searches during the third run of the LHC, and provide our benchmark dataset for future studies at https://www.phenoMLdata.org. Code to reproduce the analysis is provided at https://github.com/bostdiek/DarkMachines-UnsupervisedChallenge.
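One simple way an anomaly score can define a model-independent signal region, as proposed above, is to cut at the score threshold that retains a fixed fraction of background events. The sketch below is an illustration of that idea only; the threshold convention and the `eff` parameter are assumptions, not the challenge's exact prescription.

```python
import numpy as np

def signal_region(scores, background_scores, eff=1e-3):
    """Model-independent signal region: keep events whose anomaly score
    exceeds the threshold that retains a fraction `eff` of a background
    (simulation) sample. Works for any anomaly-scoring algorithm."""
    threshold = np.quantile(np.asarray(background_scores, dtype=float),
                            1.0 - eff)
    return np.asarray(scores, dtype=float) > threshold

# Uniform background scores on [0, 1]; with eff = 0.1 the threshold is 0.9.
bkg = np.linspace(0.0, 1.0, 1001)
selected = signal_region([0.95, 0.5], bkg, eff=0.1)
```

The same selection can then be applied uniformly across candidate algorithms, which is what makes their performance comparable in a common analysis environment.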
  2. Propensity score methods account for selection bias in observational studies. However, the consistency of the propensity score estimators strongly depends on a correct specification of the propensity score model. Logistic regression and, with increasing popularity, machine learning tools are used to estimate propensity scores. We introduce a stacked generalization ensemble learning approach to improve propensity score estimation by fitting a meta learner on the predictions of a suitable set of diverse base learners. We perform a comprehensive Monte Carlo simulation study, implementing a broad range of scenarios that mimic characteristics of typical data sets in educational studies. The population average treatment effect is estimated using the propensity score in Inverse Probability of Treatment Weighting. Our proposed stacked ensembles, especially using gradient boosting machines as a meta learner trained on a set of 12 base learner predictions, led to superior reduction of bias compared to the current state-of-the-art in propensity score estimation. Further, our simulations imply that commonly used balance measures (averaged standardized absolute mean differences) might be misleading as propensity score model selection criteria. We apply our proposed model - which we call GBM-Stack - to assess the population average treatment effect of a Supplemental Instruction (SI) program in an introductory psychology (PSY 101) course at San Diego State University. Our analysis provides evidence that moving the whole population to SI attendance would on average lead to 1.69 times higher odds to pass the PSY 101 class compared to not offering SI, with a 95% bootstrap confidence interval of (1.31, 2.20).
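The Inverse Probability of Treatment Weighting step mentioned above is a short computation once propensity scores are in hand. The sketch below assumes the scores `ps` have already been fitted (e.g. by a stacked ensemble); it is a generic normalized IPTW estimator, not the GBM-Stack implementation.

```python
import numpy as np

def iptw_ate(y, t, ps):
    """Normalized (Hajek-style) IPTW estimate of the population average
    treatment effect: treated units (t == 1) get weight 1/ps, controls
    get weight 1/(1 - ps), and the ATE is the difference of the weighted
    outcome means."""
    y, t, ps = (np.asarray(a, dtype=float) for a in (y, t, ps))
    w_treat = t / ps
    w_ctrl = (1.0 - t) / (1.0 - ps)
    return (np.sum(w_treat * y) / np.sum(w_treat)
            - np.sum(w_ctrl * y) / np.sum(w_ctrl))

# With a constant propensity of 0.5 the estimate reduces to the plain
# difference in group means (here 0.5 - 0.5 = 0).
ate = iptw_ate(y=[1, 0, 1, 0], t=[1, 1, 0, 0], ps=[0.5, 0.5, 0.5, 0.5])
```

The weighting reweights each group toward the full population, which is why a misspecified propensity model (the problem the stacking approach targets) biases the resulting estimate.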
  3. In this paper, a deep neural network hidden Markov model (DNN-HMM) is proposed to detect pipeline leakage location. A long pipeline is divided into several sections, and leakage in each section is defined as a different state of a hidden Markov model (HMM). The hybrid HMM, i.e., DNN-HMM, consists of a deep neural network (DNN) with multiple layers to exploit the non-linear data. The DNN is initialized by using a deep belief network (DBN). The DBN is a pre-trained model built by stacking top-down restricted Boltzmann machines (RBM) that compute the emission probabilities for the HMM instead of a Gaussian mixture model (GMM). Two comparative studies based on different numbers of states using the Gaussian mixture model-hidden Markov model (GMM-HMM) and the DNN-HMM are performed. The agreement between the detected state sequence and the actual state sequence on the test data is measured by the micro F1 score. The micro F1 score approaches 0.94 for the GMM-HMM method and is close to 0.95 for the DNN-HMM method when the pipeline is divided into three sections. In the experiment that divides the pipeline into five sections, the micro F1 score for GMM-HMM is 0.69, while it approaches 0.96 with the DNN-HMM method. The results demonstrate that the DNN-HMM can learn a better model of non-linear data and achieve better performance compared to the GMM-HMM method.
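The micro F1 metric used above pools the per-state counts before computing precision and recall. A minimal sketch, with arbitrary placeholder state labels:

```python
def micro_f1(true_seq, pred_seq):
    """Micro-averaged F1 over a decoded state sequence: sum true-positive,
    false-positive, and false-negative counts across all states, then
    compute a single precision/recall pair. For single-label sequences
    this reduces to plain accuracy."""
    states = set(true_seq) | set(pred_seq)
    tp = fp = fn = 0
    for s in states:
        for t, p in zip(true_seq, pred_seq):
            tp += (t == s and p == s)
            fp += (p == s and t != s)
            fn += (t == s and p != s)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Three of four states decoded correctly -> micro F1 = 0.75.
score = micro_f1([0, 1, 2, 2], [0, 1, 2, 1])
```

Pooling the counts this way weights every time step equally, so frequently occupied pipeline sections dominate the score.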
  4. The advances in deep neural networks (DNN) have significantly enhanced real-time detection of anomalous data in IoT applications. However, the complexity-accuracy-delay dilemma persists: Complex DNN models offer higher accuracy, but typical IoT devices can barely afford the computation load, and the remedy of offloading the load to the cloud incurs long delay. In this article, we address this challenge by proposing an adaptive anomaly detection scheme with hierarchical edge computing (HEC). Specifically, we first construct multiple anomaly detection DNN models with increasing complexity and associate each of them to a corresponding HEC layer. Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network. We also incorporate a parallelism policy training method to accelerate the training process by taking advantage of distributed models. We build an HEC testbed using real IoT devices and implement and evaluate our contextual-bandit approach with both univariate and multivariate IoT datasets. In comparison with both baseline and state-of-the-art schemes, our adaptive approach strikes the best accuracy-delay tradeoff on the univariate dataset and achieves the best accuracy and F1-score on the multivariate dataset with only negligibly longer delay than the best (but inflexible) scheme.
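The bandit-style model selection above can be illustrated with a simple epsilon-greedy policy over per-model value estimates. This is a deliberately simplified stand-in for the paper's reinforcement-learning policy network: it ignores the context features, and the reward model (e.g. accuracy minus a delay penalty) is an assumption for illustration.

```python
import random

class EpsilonGreedySelector:
    """Choose among anomaly-detection models of increasing complexity,
    balancing the estimated reward of each model against exploration."""

    def __init__(self, n_models, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = [0] * n_models
        self.values = [0.0] * n_models  # running-average reward per model

    def select(self):
        # With probability epsilon, explore a random model; otherwise
        # exploit the model with the highest estimated reward.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, arm, reward):
        # Incremental running-average update of the chosen model's value.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Two models: the second one consistently earns higher reward, so a
# greedy selector (epsilon = 0) converges on it.
sel = EpsilonGreedySelector(n_models=2, epsilon=0.0)
sel.update(0, 0.0)
sel.update(1, 1.0)
```

In the HEC setting each arm would map to one layer of the hierarchy, so selecting an arm also decides where the detection model runs.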
  5. We present the Swimmy (Subaru WIde-field Machine-learning anoMalY) survey program, a deep-learning-based search for unique sources using multicolored (grizy) imaging data from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP). This program aims to detect unexpected, novel, and rare populations and phenomena by utilizing the deep imaging data acquired from the wide-field coverage of the HSC-SSP. This article, as the first paper in the Swimmy series, describes an anomaly detection technique to select unique populations as “outliers” from the data set. The model was tested with known extreme emission-line galaxies (XELGs) and quasars, which confirmed that the proposed method successfully selected ~60%–70% of the quasars and 60% of the XELGs without labeled training data. In reference to the spectral information of local galaxies at z = 0.05–0.2 obtained from the Sloan Digital Sky Survey, we investigated the physical properties of the selected anomalies and compared them based on the significance of their outlier values. The results revealed that XELGs constitute notable fractions of the most anomalous galaxies, and certain galaxies manifest unique morphological features. In summary, deep anomaly detection is an effective tool that can search for rare objects and, ultimately, unknown unknowns in large data sets. Further development of the proposed model and selection process can promote the practical applications required to achieve specific scientific goals.