

Search for: All records

Creators/Authors contains: "Williams, Mike"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Abstract

    The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric for assessing the robustness of the model. We present a new method to constrain the Lipschitz constant of dense deep learning models that can also be generalized to other architectures. The method relies on a simple weight normalization scheme during training that ensures the Lipschitz constant of every layer is below an upper limit specified by the analyst. A simple monotonic residual connection can then be used to make the model monotonic in any subset of its inputs, which is useful in scenarios where domain knowledge dictates such dependence. Examples can be found in algorithmic fairness requirements or, as presented here, in the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider. Our normalization is minimally constraining and allows the underlying architecture to maintain higher expressiveness compared to other techniques which aim to either control the Lipschitz constant of the model or ensure its monotonicity. We show how the algorithm was used to train a powerful, robust, and interpretable discriminator for heavy-flavor-quark decays, which has been adopted as the primary data-selection algorithm in the LHCb real-time data-processing system for the current LHC data-taking period known as Run 3. The algorithm has also achieved state-of-the-art performance on benchmarks in medicine, finance, and other applications. (A minimal, illustrative code sketch of the weight-normalization scheme and monotonic residual connection appears after the result list below.)

     
  2. Abstract
    In this work, we explore new spin-1 states with axial couplings to the standard model fermions. We develop a data-driven method to estimate their hadronic decay rates based on data from τ decays and using SU(3) flavor symmetry. We derive the current and future experimental constraints for several benchmark models. Our framework is generic and can be used for models with arbitrary vectorial and axial couplings to quarks. We have made our calculations publicly available by incorporating them into the DarkCast package, see https://gitlab.com/darkcast/releases.
  3. Dark matter particles may interact with other dark matter particles via a new force mediated by a dark photon, A′, which would be the dark-sector analog to the ordinary photon of electromagnetism. The dark photon can obtain a highly suppressed mixing-induced coupling to the electromagnetic current, providing a portal through which dark photons can interact with ordinary matter. This review focuses on A′ scenarios that are potentially accessible to accelerator-based experiments. We summarize the existing constraints placed by such experiments on dark photons, highlight what could be observed in the near future, and discuss the major experimental challenges that must be overcome to improve sensitivities. 
  4. A key challenge in searches for resonant new physics is that classifiers trained to enhance potential signals must not induce localized structures. Such structures could result in a false signal when the background is estimated from data using sideband methods. A variety of techniques have been developed to construct classifiers which are independent of the resonant feature (often a mass). Such strategies are sufficient to avoid localized structures, but are not necessary. We develop a new set of tools using a novel moment loss function (Moment Decomposition or MoDe) which relax the assumption of independence without creating structures in the background. By allowing classifiers to be more flexible, we enhance the sensitivity to new physics without compromising the fidelity of the background estimation. (A simplified sketch of a moment-style decorrelation penalty appears after the result list below.)
  5. Biscarat, C.; Campana, S.; Hegner, B.; Roiser, S.; Rovelli, C.I.; Stewart, G.A. (Ed.)
    The locations of proton-proton collision points in LHC experiments are called primary vertices (PVs). Preliminary results of a hybrid deep learning algorithm for identifying and locating PVs, targeting the Run 3 incarnation of LHCb, were described at conferences in 2019 and 2020. In the past year we have made significant progress in a variety of related areas. Using two newer Kernel Density Estimators (KDEs) as input feature sets improves the fidelity of the models, as does using full LHCb simulation rather than the “toy Monte Carlo” originally (and still) used to develop models. We have also built a deep learning model to calculate the KDEs from track information. Connecting a tracks-to-KDE model to a KDE-to-hists model used to find PVs provides a proof-of-concept that a single deep learning model can use track information to find PVs with high efficiency and high fidelity. We have studied a variety of models systematically to understand how variations in their architectures affect performance. While the studies reported here are specific to the LHCb geometry and operating conditions, the results suggest that the same approach could be used by the ATLAS and CMS experiments. (A minimal sketch of a KDE-to-histogram model of this kind appears after the result list below.)
  6. Abstract
    This document presents the physics case and ancillary studies for the proposed CODEX-b long-lived particle (LLP) detector, as well as for a smaller proof-of-concept demonstrator detector, CODEX-β, to be operated during Run 3 of the LHC. Our development of the CODEX-b physics case synthesizes ‘top-down’ and ‘bottom-up’ theoretical approaches, providing a detailed survey of both minimal and complete models featuring LLPs. Several of these models have not been studied previously, and for some others we amend studies from the previous literature, in particular for gluon- and fermion-coupled axion-like particles. We moreover present updated simulations of expected backgrounds in CODEX-b's actively shielded environment, including the effects of shielding propagation uncertainties, high-energy tails and variation in the shielding design. Initial results are also included from a background measurement and calibration campaign. A design overview is presented for the CODEX-β demonstrator detector, which will enable background calibration and detector design studies. Finally, we lay out brief studies of various design drivers of the CODEX-b experiment and potential extensions of the baseline design, including the physics case for a calorimeter element, precision timing, event tagging within LHCb, and precision low-momentum tracking.
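The following is a minimal, illustrative sketch of the weight-normalization and monotonic-residual idea described in the first result. The choice of the infinity-norm, the equal per-layer split of the Lipschitz budget, the ReLU activation, and all class and parameter names are assumptions made here for illustration; this is not the published implementation.

```python
# Illustrative sketch only: infinity-norm weight normalization plus a
# monotonic residual connection. Names and choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LipschitzLinear(nn.Linear):
    """Linear layer whose weight is rescaled so its infinity-norm stays
    below `bound`, making the layer `bound`-Lipschitz."""

    def __init__(self, in_features, out_features, bound=1.0):
        super().__init__(in_features, out_features)
        self.bound = bound

    def forward(self, x):
        # ||W||_inf = maximum absolute row sum; rescale only if it exceeds the bound.
        norm = self.weight.abs().sum(dim=1).max()
        scale = torch.clamp(norm / self.bound, min=1.0)
        return F.linear(x, self.weight / scale, self.bias)


class MonotonicLipschitzNet(nn.Module):
    """Scalar-output network with overall Lipschitz constant <= lam that is
    monotonically non-decreasing in the inputs flagged by `monotonic_mask`."""

    def __init__(self, n_inputs, hidden=64, depth=3, lam=1.0, monotonic_mask=None):
        super().__init__()
        per_layer = lam ** (1.0 / depth)  # split the Lipschitz budget equally across layers
        dims = [n_inputs] + [hidden] * (depth - 1) + [1]
        self.layers = nn.ModuleList(
            [LipschitzLinear(dims[i], dims[i + 1], per_layer) for i in range(depth)]
        )
        self.lam = lam
        mask = (
            torch.zeros(n_inputs)
            if monotonic_mask is None
            else torch.as_tensor(monotonic_mask, dtype=torch.float32)
        )
        self.register_buffer("mask", mask)

    def forward(self, x):
        h = x
        for layer in self.layers[:-1]:
            h = torch.relu(layer(h))  # ReLU is 1-Lipschitz, so the per-layer bound is preserved
        out = self.layers[-1](h)
        # The network's derivative w.r.t. any input lies in [-lam, lam], so adding
        # lam * x_i for each monotonic input makes the total derivative non-negative there.
        return out + self.lam * (x * self.mask).sum(dim=1, keepdim=True)
```

Because the bound is enforced by construction at every forward pass, such a model can be trained with an ordinary classification loss; no extra penalty term is needed.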
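The following simplified sketch illustrates the general idea behind a moment-style decorrelation penalty of the kind described in the fourth result: the classifier response is allowed to depend on the resonant feature (here a mass) only through a low-order polynomial trend, and residual structure in the binned mean response is penalized. The bin count, the least-squares trend fit, and the function name are assumptions for illustration; this is not the MoDe loss itself, which works with the full shape of the classifier output rather than its binned mean.

```python
# Simplified, illustrative moment-style decorrelation penalty (not the MoDe loss).
import torch


def moment_penalty(scores, mass, n_bins=16, max_order=1):
    """Penalize structure in E[score | mass] beyond a degree-`max_order` polynomial.

    `scores` and `mass` are 1-D tensors of the same length.
    """
    # Rescale the mass to [-1, 1], bin it, and compute the mean score per bin.
    m = 2.0 * (mass - mass.min()) / (mass.max() - mass.min()) - 1.0
    edges = torch.linspace(-1.0, 1.0, n_bins + 1, device=mass.device)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = torch.bucketize(m, edges[1:-1])
    means = torch.stack([
        scores[idx == b].mean() if (idx == b).any() else scores.mean()
        for b in range(n_bins)
    ])
    # Fit a low-order polynomial trend to the binned response (normal equations)
    # and penalize the residual, i.e. any localized structure.
    X = torch.stack([centers ** k for k in range(max_order + 1)], dim=1)
    coeffs = torch.linalg.solve(X.T @ X, X.T @ means.unsqueeze(1))
    trend = (X @ coeffs).squeeze(1)
    return ((means - trend) ** 2).mean()


# Typical use during training (alpha is an illustrative trade-off weight):
# loss = F.binary_cross_entropy(scores, labels) + alpha * moment_penalty(scores, mass)
```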
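The following is a minimal sketch of a KDE-to-histogram model of the kind described in the fifth result: a one-dimensional convolutional network that maps kernel density estimate features binned along the beamline to a per-bin probability that a primary vertex is present. The channel counts, kernel sizes, number of KDE input features, and class name are assumptions for illustration, not the actual LHCb model.

```python
# Illustrative KDE-to-histogram sketch; architecture details are assumptions.
import torch
import torch.nn as nn


class KDEToHists(nn.Module):
    """1D CNN mapping KDE features binned along the beamline (z) to a
    per-bin probability that a primary vertex is present in that bin."""

    def __init__(self, n_kde_features=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_kde_features, 16, kernel_size=25, padding=12),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=5, padding=2),
        )

    def forward(self, kde):
        # kde: (batch, n_kde_features, n_bins) -> per-bin PV probability (batch, n_bins)
        return torch.sigmoid(self.net(kde)).squeeze(1)


# Example: 8 events, 2 KDE feature sets, 4000 z bins.
pv_prob = KDEToHists()(torch.randn(8, 2, 4000))  # shape (8, 4000)
```

Peaks in the output histogram would then be clustered into primary-vertex candidates, and a tracks-to-KDE model could be chained in front so that a single network goes from track parameters to the same per-bin output.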