

Title: Secondary vertex finding in jets with neural networks
Abstract: Jet classification is an important ingredient in measurements and searches for new physics at particle colliders, and secondary vertex reconstruction is a key intermediate step in building powerful jet classifiers. We use a neural network to perform vertex finding inside jets in order to improve the classification performance, with a focus on separation of bottom vs. charm flavor tagging. We implement a novel, universal set-to-graph model, which takes into account information from all tracks in a jet to determine if pairs of tracks originated from a common vertex. We explore different performance metrics and find our method to outperform traditional approaches in accurate secondary vertex reconstruction. We also find that improved vertex finding leads to a significant improvement in jet classification performance.
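As a rough illustration of the pairwise, set-to-graph idea described in the abstract (a shared per-track encoder, an edge score for every track pair, and grouping of tracks into vertex candidates), here is a minimal sketch in PyTorch. The layer sizes, the symmetric pair representation, and the connected-components grouping are illustrative assumptions, not the architecture published in the paper.

```python
# Minimal sketch of set-to-graph style vertex finding (not the paper's exact model).
# Assumptions: each track is a fixed-length feature vector; a hypothetical MLP scores
# every track pair; tracks are grouped into vertex candidates via connected components.
import itertools
import torch
import torch.nn as nn


class PairwiseVertexScorer(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # Per-track encoder (the "set" part): shared across all tracks in the jet.
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        # Edge classifier (the "graph" part): scores a symmetric pair representation.
        self.edge_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

    def forward(self, tracks: torch.Tensor) -> torch.Tensor:
        # tracks: (n_tracks, n_features) for a single jet.
        h = self.encoder(tracks)                      # (n, hidden)
        n = h.shape[0]
        hi = h.unsqueeze(1).expand(n, n, -1)          # broadcast to all pairs
        hj = h.unsqueeze(0).expand(n, n, -1)
        # Symmetrize so score(i, j) == score(j, i).
        pair = torch.cat([hi + hj, (hi - hj).abs()], dim=-1)
        return torch.sigmoid(self.edge_mlp(pair)).squeeze(-1)  # (n, n) edge probabilities


def group_into_vertices(edge_probs: torch.Tensor, threshold: float = 0.5):
    """Greedy union-find over thresholded edges -> lists of track indices per vertex."""
    n = edge_probs.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in itertools.combinations(range(n), 2):
        if edge_probs[i, j] > threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```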
Award ID(s):
1836650
NSF-PAR ID:
10256991
Author(s) / Creator(s):
Date Published:
Journal Name:
The European Physical Journal C
Volume:
81
Issue:
6
ISSN:
1434-6044
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Several improvements to the ATLAS triggers used to identify jets containing b-hadrons (b-jets) were implemented for data-taking during Run 2 of the Large Hadron Collider from 2016 to 2018. These changes include reconfiguring the b-jet trigger software to improve primary-vertex finding and allow more stable running in conditions with high pile-up, and the implementation of the functionality needed to run sophisticated taggers used by the offline reconstruction in an online environment. These improvements yielded an order of magnitude better light-flavour jet rejection for the same b-jet identification efficiency compared to the performance in Run 1 (2011–2012). The efficiency to identify b-jets in the trigger, and the conditional efficiency for b-jets that satisfy offline b-tagging requirements to pass the trigger, are also measured. Correction factors are derived to calibrate the b-tagging efficiency in simulation to match that observed in data. The associated systematic uncertainties are substantially smaller than in previous measurements. In addition, b-jet triggers were operated for the first time during heavy-ion data-taking, using dedicated triggers that were developed to identify semileptonic b-hadron decays by selecting events with geometrically overlapping muons and jets.
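For context on the calibration step mentioned above, data-to-simulation correction factors are conventionally defined as SF = ε_data / ε_MC and applied to simulated events as per-jet weights. The sketch below shows that generic prescription only; it is not the specific derivation, binning, or values from this measurement.

```python
# Illustrative sketch of how data/MC b-tagging scale factors are commonly applied
# as per-jet event weights; not the specific procedure or numbers from this measurement.

def btag_scale_factor(eff_data: float, eff_mc: float) -> float:
    """SF = efficiency measured in data / efficiency in simulation."""
    return eff_data / eff_mc

def event_weight(jets, scale_factors, efficiencies_mc):
    """Per-event weight: product of SF for tagged jets and
    (1 - SF*eff_mc)/(1 - eff_mc) for untagged jets (a standard prescription)."""
    w = 1.0
    for jet, sf, eff in zip(jets, scale_factors, efficiencies_mc):
        if jet["tagged"]:
            w *= sf
        else:
            w *= (1.0 - sf * eff) / (1.0 - eff)
    return w
```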
  2. Abstract

    In example‐based inverse linear blend skinning (LBS), a collection of poses (e.g. animation frames) is given, and the goal is to find skinning weights and transformation matrices that closely reproduce the input. These poses may come from physical simulation, direct mesh editing, motion capture or another deformation rig. We provide a re‐formulation of inverse skinning as a problem in high‐dimensional Euclidean space. The transformation matrices applied to a vertex across all poses can be thought of as a point in high dimensions. We cast the inverse LBS problem as one of finding a tight‐fitting simplex around these points (a well‐studied problem in hyperspectral imaging). Although we do not observe transformation matrices directly, the 3D position of a vertex across all of its poses defines an affine subspace, or flat. We solve a ‘closest flat’ optimization problem to find points on these flats, and then compute a minimum‐volume enclosing simplex whose vertices are the transformation matrices and whose barycentric coordinates are the skinning weights. We are able to create LBS rigs with state‐of‐the‐art reconstruction error and state‐of‐the‐art compression ratios for mesh animation sequences. Our solution does not consider weight sparsity or the rigidity of recovered transformations. We include observations and insights into the closest flat problem. Its ideal solution and the optimal LBS reconstruction error remain open problems.
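To make the objective concrete, the sketch below spells out the forward linear blend skinning model that the inverse problem tries to reproduce, together with the reconstruction error being minimized. Variable names and array shapes are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the forward linear blend skinning model that inverse LBS tries to reproduce.
# Names and shapes are illustrative, not the paper's implementation.
import numpy as np

def lbs_reconstruct(rest_vertices, weights, transforms):
    """
    rest_vertices : (V, 3)  rest-pose positions
    weights       : (V, B)  per-vertex skinning weights (rows non-negative, summing to 1)
    transforms    : (P, B, 3, 4)  per-pose, per-bone affine transforms [R | t]
    returns       : (P, V, 3)  deformed positions for every pose
    """
    V = rest_vertices.shape[0]
    homo = np.concatenate([rest_vertices, np.ones((V, 1))], axis=1)   # (V, 4)
    # Per-pose, per-bone transform of every vertex: (P, B, V, 3)
    per_bone = np.einsum('pbij,vj->pbvi', transforms, homo)
    # Blend the bone transforms with the skinning weights: (P, V, 3)
    return np.einsum('vb,pbvi->pvi', weights, per_bone)

def reconstruction_error(poses, rest_vertices, weights, transforms):
    """RMS distance between given pose positions (P, V, 3) and the LBS reconstruction."""
    diff = poses - lbs_reconstruct(rest_vertices, weights, transforms)
    return np.sqrt((diff ** 2).sum(axis=-1).mean())
```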

     
  3. Doglioni, C.; Kim, D.; Stewart, G.A.; Silvestris, L.; Jackson, P.; Kamleh, W. (Eds.)
    One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is finding and fitting particle tracks during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD and SIMT architectures that are now prevalent in high-performance hardware. Previously we observed significant parallel speedups, with physics performance comparable to CMS standard tracking, on Intel Xeon, Intel Xeon Phi, and (to a limited extent) NVIDIA GPUs. While early tests were based on artificial events occurring inside an idealized barrel detector, we showed subsequently that our mkFit software builds tracks successfully from complex simulated events (including detector pileup) occurring inside a geometrically accurate representation of the CMS-2017 tracker. Here, we report on advances in both the computational and physics performance of mkFit, as well as progress toward integration with CMS production software. Recently we have improved the overall efficiency of the algorithm by preserving short track candidates at a relatively early stage rather than attempting to extend them over many layers. Moreover, mkFit formerly produced an excess of duplicate tracks; these are now explicitly removed in an additional processing step. We demonstrate that with these enhancements, mkFit becomes a suitable choice for the first iteration of CMS tracking, and eventually for later iterations as well. We plan to test this capability in the CMS High Level Trigger during Run 3 of the LHC, with an ultimate goal of using it in both the CMS HLT and offline reconstruction for the HL-LHC CMS tracker.
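As a reminder of the core operation being vectorized here, below is a generic scalar Kalman filter predict/update step applied hit by hit. It is a didactic sketch only: the real track state is multi-dimensional, includes material effects, and is processed in vectorized batches; this is not mkFit code.

```python
# Generic scalar Kalman filter predict/update step, to illustrate the incremental
# trajectory building described above; the real track state is multi-dimensional
# and includes material effects. This is not mkFit code.

def kalman_step(x, P, measurement, F=1.0, Q=0.01, H=1.0, R=1.0):
    """
    x, P        : current state estimate and its variance
    measurement : new hit (measured position on the next detector layer)
    F, Q        : state transition and process noise (propagation between layers)
    H, R        : measurement model and measurement noise
    returns     : updated (x, P)
    """
    # Predict: propagate the state and inflate the uncertainty.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement using the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (measurement - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Building a track candidate layer by layer from toy hit positions:
state, var = 0.0, 1.0
for hit in [0.10, 0.22, 0.29, 0.41]:
    state, var = kalman_step(state, var, hit)
```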
  4. Abstract

    Efficient and accurate algorithms are necessary to reconstruct particles in the highly granular detectors anticipated at the High-Luminosity Large Hadron Collider and the Future Circular Collider. We study scalable machine learning models for event reconstruction in electron-positron collisions based on a full detector simulation. Particle-flow reconstruction can be formulated as a supervised learning task using tracks and calorimeter clusters. We compare a graph neural network and a kernel-based transformer and demonstrate that we can avoid quadratic operations while achieving realistic reconstruction. We show that hyperparameter tuning significantly improves the performance of the models. The best graph neural network model improves the jet transverse momentum resolution by up to 50% compared to the rule-based algorithm. The resulting model is portable across Nvidia, AMD and Habana hardware. Accurate and fast machine-learning-based reconstruction can significantly improve future measurements at colliders.
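The "kernel-based transformer" that avoids quadratic operations is reminiscent of linear attention, in which softmax(QK^T)V is replaced by φ(Q)(φ(K)^T V). Whether this matches the paper's exact model is an assumption on my part; the feature map φ below (elu + 1) is just one common choice.

```python
# Sketch of kernel-based (linear) attention, one common way to avoid the quadratic
# cost of full self-attention over tracks and clusters; the feature map phi and the
# shapes are illustrative, not the paper's exact model.
import numpy as np

def phi(x):
    # A common positive feature map, elu(x) + 1; other kernels are possible.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """
    Q, K : (n, d), V : (n, d_v). Computes phi(Q) (phi(K)^T V) / normalizer,
    costing O(n * d * d_v) instead of O(n^2 * d) for full attention.
    """
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                        # (d, d_v) summary, built once
    Z = Qp @ Kp.sum(axis=0)              # (n,) normalization term
    return (Qp @ KV) / Z[:, None]

# n particle-flow elements (tracks + clusters) with d-dimensional features
n, d = 5000, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = linear_attention(Q, K, V)          # (n, d) without forming an n x n matrix
```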

     
  5. Abstract: IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in the analysis of data from IceCube. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use a Graph Neural Network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1 GeV–100 GeV energy range to the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed background rate, compared to current IceCube methods. Alternatively, the GNN offers a reduction of the background (i.e. false positive) rate by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%–20% compared to current maximum likelihood techniques in the energy range of 1 GeV–30 GeV. The GNN, when run on a GPU, is capable of processing IceCube events at a rate nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low energy neutrinos in online searches for transient events.
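One simple way to represent an event as a point-cloud graph, as described above, is a k-nearest-neighbour graph over the triggered sensors. The feature choice and the value of k in the sketch below are illustrative assumptions, not IceCube's actual preprocessing.

```python
# Sketch of turning detector hits into a k-nearest-neighbour point-cloud graph,
# the kind of input a GNN classifier/regressor consumes; feature choice and k are
# illustrative assumptions, not IceCube's exact preprocessing.
import numpy as np

def knn_edges(hit_positions, k=8):
    """hit_positions: (n, 3) sensor coordinates -> list of directed edges (i, j)."""
    n = hit_positions.shape[0]
    d2 = ((hit_positions[:, None, :] - hit_positions[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # no self-edges
    edges = []
    for i in range(n):
        for j in np.argsort(d2[i])[:min(k, n - 1)]:
            edges.append((i, int(j)))
    return edges

# Node features per hit could be (x, y, z, time, charge); the edges from knn_edges
# then feed a standard message-passing GNN for classification or regression.
```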