
Title: Learning to isolate muons
Abstract Distinguishing between prompt muons produced in heavy boson decay and muons produced in association with heavy-flavor jet production is an important task in the analysis of collider physics data. We explore whether there is information available in calorimeter deposits that is not captured by the standard approach of isolation cones. We find that convolutional networks and particle-flow networks accessing the calorimeter cells surpass the performance of isolation cones, suggesting that the radial energy distribution and the angular structure of the calorimeter deposits surrounding the muon contain unused discrimination power. We assemble a small set of high-level observables which summarize the calorimeter information and close the performance gap with networks which analyze the calorimeter cells directly. These observables are theoretically well-defined and can be studied with collider data.
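The baseline the paper compares against, cone isolation, can be sketched in a few lines: sum the calorimeter energy inside a ΔR cone around the muon and normalize by the muon pT. The field names, cone size, and the use of cell energy rather than transverse energy are illustrative choices here, not details taken from the paper.

```python
import math

def isolation_ratio(muon, cells, cone_size=0.4):
    """Relative calorimeter isolation: summed energy of cells inside a
    Delta-R cone around the muon, divided by the muon pT.
    `muon` and each cell are dicts with illustrative keys
    ("eta", "phi", "pt", "energy"); a prompt muon typically yields a
    small ratio, a muon inside a heavy-flavor jet a large one."""
    total = 0.0
    for cell in cells:
        deta = cell["eta"] - muon["eta"]
        # wrap the azimuthal difference into (-pi, pi]
        dphi = (cell["phi"] - muon["phi"] + math.pi) % (2 * math.pi) - math.pi
        if math.hypot(deta, dphi) < cone_size:
            total += cell["energy"]
    return total / muon["pt"]
```

A single scalar like this discards the radial and angular structure of the deposits, which is the unused information the networks in the paper exploit.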
Journal Name:
Journal of High Energy Physics
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract A search for heavy Higgs bosons produced in association with a vector boson and decaying into a pair of vector bosons is performed in final states with two leptons (electrons or muons) of the same electric charge, missing transverse momentum and jets. A data sample of proton–proton collisions at a centre-of-mass energy of 13 TeV recorded with the ATLAS detector at the Large Hadron Collider between 2015 and 2018 is used. The data correspond to a total integrated luminosity of 139 fb⁻¹. The observed data are in agreement with Standard Model background expectations. The results are interpreted using higher-dimensional operators in an effective field theory. Upper limits on the production cross-section are calculated at 95% confidence level as a function of the heavy Higgs boson's mass and coupling strengths to vector bosons. Limits are set in the Higgs boson mass range from 300 to 1500 GeV, and depend on the assumed couplings. The highest excluded mass for a heavy Higgs boson with the coupling combinations explored is 900 GeV. Limits on coupling strengths are also provided.
  2. Graph neural networks have been shown to achieve excellent performance for several crucial tasks in particle physics, such as charged particle tracking, jet tagging, and clustering. An important domain for the application of these networks is the FPGA-based first layer of real-time data filtering at the CERN Large Hadron Collider, which has strict latency and resource constraints. We discuss how to design distance-weighted graph networks that can be executed with a latency of less than one μs on an FPGA. To do so, we consider a representative task associated with particle reconstruction and identification in a next-generation calorimeter operating at a particle collider. We use a graph network architecture developed for such purposes, and apply additional simplifications to match the computing constraints of Level-1 trigger systems, including weight quantization. Using the hls4ml library, we convert the compressed models into firmware to be implemented on an FPGA. Performance of the synthesized models is presented both in terms of inference accuracy and resource usage.
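The core of a distance-weighted graph layer can be illustrated without any machine-learning framework: each node aggregates its neighbours' features with weights that decay with distance in some coordinate space. The scalar features and exp(−d²) kernel below are a simplified stand-in for the actual architecture, not the layer used in the study.

```python
import math

def distance_weighted_aggregate(features, coords):
    """One round of distance-weighted message passing: node i receives
    the sum of every other node's (scalar) feature, weighted by
    exp(-d^2) where d is the distance between the two nodes' coordinates.
    Real layers learn the coordinates and use vector features; this is a
    minimal sketch of the aggregation step only."""
    n = len(features)
    out = []
    for i in range(n):
        agg = 0.0
        for j in range(n):
            if i == j:
                continue
            d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            agg += math.exp(-d2) * features[j]
        out.append(agg)
    return out
```

For a trigger deployment, each exp and multiply must fit a fixed latency and resource budget, which is why techniques like weight quantization matter in this setting.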
  3. Abstract

    The high instantaneous luminosity of the CERN Large Hadron Collider leads to multiple proton–proton interactions in the same or nearby bunch crossings (pileup). Advanced pileup mitigation algorithms are designed to remove this noise from pileup particles and improve the performance of crucial physics observables. This study implements a semi-supervised graph neural network for particle-level pileup noise removal, by identifying individual particles produced from pileup. The graph neural network is first trained on charged particles with known labels, which can be obtained from detector measurements on data or simulation, and then applied to neutral particles, for which such labels are missing. This semi-supervised approach does not depend on the neutral particle pileup label information from simulation, and thus allows us to perform training directly on experimental data. The performance of this approach is found to be consistently better than widely used domain algorithms and comparable to fully supervised training using simulation truth information. The study serves as the first attempt at applying semi-supervised learning techniques to pileup mitigation, and opens up a new direction of fully data-driven machine learning pileup mitigation studies.

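The semi-supervised split described above can be sketched with a deliberately simple stand-in model: fit a classifier only on the labelled (charged) sample, then apply it unchanged to the unlabelled (neutral) sample. A one-dimensional threshold replaces the graph network here purely for illustration; none of the variable names or values come from the study.

```python
def fit_threshold(values, labels):
    """Pick the 1-D cut that best separates pileup (label 0) from
    hard-scatter (label 1) on the labelled sample. This plays the role
    of training the network on charged particles."""
    best_cut, best_acc = None, -1.0
    for cut in sorted(set(values)):
        acc = sum((v >= cut) == bool(l) for v, l in zip(values, labels)) / len(values)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

# charged particles carry labels from detector measurements
charged_vals   = [0.1, 0.2, 0.8, 0.9]   # illustrative discriminating feature
charged_labels = [0, 0, 1, 1]           # 1 = from the primary vertex
cut = fit_threshold(charged_vals, charged_labels)

# the same model is applied to neutral particles, which have no labels
neutral_vals = [0.15, 0.85]
neutral_pred = [v >= cut for v in neutral_vals]
```

The key property this preserves is that no neutral-particle truth label enters the training step, which is what makes training directly on experimental data possible.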
  4. Abstract A search for long-lived charginos produced either directly or in the cascade decay of heavy prompt gluino states is presented. The search is based on proton–proton collision data collected at a centre-of-mass energy of √s = 13 TeV between 2015 and 2018 with the ATLAS detector at the LHC, corresponding to an integrated luminosity of 136 fb⁻¹. Long-lived charginos are characterised by the distinct signature of a short, disappearing track, and are reconstructed using at least four measurements in the ATLAS pixel detector, with no subsequent measurements in the silicon-microstrip tracking volume nor any associated energy deposits in the calorimeter. The final state is complemented by a large missing transverse-momentum requirement for triggering purposes and at least one high-transverse-momentum jet. No excess above the expected backgrounds is observed. Exclusion limits are set at 95% confidence level on the masses of the chargino and gluino for different chargino lifetimes. Chargino masses up to 660 (210) GeV are excluded in scenarios where the chargino is a pure wino (higgsino). For charginos produced in the cascade decay of a heavy gluino, gluinos with masses below 2.1 TeV are excluded for a chargino mass of 300 GeV and a lifetime of 0.2 ns.
  5. Abstract Several improvements to the ATLAS triggers used to identify jets containing b-hadrons (b-jets) were implemented for data-taking during Run 2 of the Large Hadron Collider from 2016 to 2018. These changes include reconfiguring the b-jet trigger software to improve primary-vertex finding and allow more stable running in conditions with high pile-up, and the implementation of the functionality needed to run sophisticated taggers used by the offline reconstruction in an online environment. These improvements yielded an order of magnitude better light-flavour jet rejection for the same b-jet identification efficiency compared to the performance in Run 1 (2011–2012). The efficiency to identify b-jets in the trigger, and the conditional efficiency for b-jets that satisfy offline b-tagging requirements to pass the trigger, are also measured. Correction factors are derived to calibrate the b-tagging efficiency in simulation to match that observed in data. The associated systematic uncertainties are substantially smaller than in previous measurements. In addition, b-jet triggers were operated for the first time during heavy-ion data-taking, using dedicated triggers that were developed to identify semileptonic b-hadron decays by selecting events with geometrically overlapping muons and jets.