

Title: Segment Linking: A Highly Parallelizable Track Reconstruction Algorithm for HL-LHC
Abstract: The High Luminosity upgrade of the Large Hadron Collider (HL-LHC) will produce particle collisions with up to 200 simultaneous proton-proton interactions. These unprecedented conditions create a combinatorial complexity for charged-particle track reconstruction whose computational cost is expected to surpass the projected computing budget if conventional CPUs alone are used. Motivated by this, and by the prevalence of heterogeneous computing in cutting-edge High Performance Computing centers, we propose an efficient, fast, and highly parallelizable bottom-up approach to track reconstruction for the HL-LHC, along with an associated GPU implementation, in the context of the Phase 2 CMS outer tracker. Our algorithm, called Segment Linking (or Line Segment Tracking), takes advantage of localized track stub creation, combining individual stubs to progressively form higher-level objects that are subject to kinematic and geometrical requirements compatible with genuine physics tracks. The local nature of the algorithm makes it ideal for parallelization under the Single Instruction, Multiple Data paradigm, as hundreds of objects can be built simultaneously. The computing and physics performance of the algorithm has been tested on an NVIDIA Tesla V100 GPU, already yielding efficiency and timing measurements that are on par with the latest multi-CPU versions of existing CMS tracking algorithms.
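The abstract describes a pattern in which many candidate objects are tested independently and simultaneously. Below is a minimal CUDA sketch of that pattern, not the paper's implementation: the Stub fields, the single delta-phi cut, and all names are illustrative assumptions. Each GPU thread examines one pair of stubs from adjacent layers and, if a simple geometric compatibility cut passes, appends a segment to a shared output buffer.

    // Sketch only: one thread per candidate stub pair, as in the bottom-up
    // approach the abstract describes. Fields and the cut are assumptions.
    #include <cstdio>
    #include <cmath>
    #include <cuda_runtime.h>

    struct Stub    { float r, phi, z; };   // simplified local track stub
    struct Segment { int inner, outer; };  // indices of the two linked stubs

    __global__ void buildSegments(const Stub* in, int nIn,
                                  const Stub* out, int nOut,
                                  Segment* segs, int* nSegs, float dPhiCut) {
        const float kPi = 3.14159265f;
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= nIn * nOut) return;             // one thread per stub pair
        int i = idx / nOut, o = idx % nOut;
        float dPhi = fabsf(in[i].phi - out[o].phi);
        if (dPhi > kPi) dPhi = 2.0f * kPi - dPhi;  // wrap the phi difference
        if (dPhi < dPhiCut) {                      // crude compatibility cut
            int slot = atomicAdd(nSegs, 1);        // many threads append at once
            segs[slot] = Segment{i, o};
        }
    }

    int main() {
        Stub *in, *out; Segment* segs; int* nSegs;
        cudaMallocManaged(&in,    2 * sizeof(Stub));
        cudaMallocManaged(&out,   2 * sizeof(Stub));
        cudaMallocManaged(&segs,  4 * sizeof(Segment));
        cudaMallocManaged(&nSegs, sizeof(int));
        in[0]  = {30.f, 0.10f, 5.f}; in[1]  = {30.f, 2.00f, 5.f};
        out[0] = {40.f, 0.12f, 7.f}; out[1] = {40.f, 1.00f, 7.f};
        *nSegs = 0;
        buildSegments<<<1, 32>>>(in, 2, out, 2, segs, nSegs, 0.05f);
        cudaDeviceSynchronize();
        printf("segments built: %d\n", *nSegs);    // expect 1 (stub pair 0-0)
        return 0;
    }

A production version would also apply r-z and momentum consistency cuts and compact outputs per detector module, but the thread-per-candidate structure above is the source of the parallelism the abstract describes.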
Award ID(s):
1836650 2209443
NSF-PAR ID:
10430290
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Physics: Conference Series
Volume:
2375
Issue:
1
ISSN:
1742-6588
Page Range / eLocation ID:
012005
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The high instantaneous luminosity of the High Luminosity LHC (HL-LHC) poses a major challenge that motivates efficient and fast reconstruction of charged-particle tracks in a high pile-up environment. While there have been efforts to improve the existing, classic Kalman-filter-based reconstruction algorithms with modern techniques like vectorization, we take a fundamentally different approach and reconstruct tracks bottom-up. Our algorithm, called Line Segment Tracking, constructs small track stubs from adjoining detector regions, and then successively links those stubs that are consistent with typical track trajectories. Since the production of the track stubs is localized, they can be built in parallel, which lends itself to parallel architectures like GPUs and multi-core CPUs. We establish an implementation of our algorithm in the context of the CMS Phase-2 Tracker, running on NVIDIA Tesla V100 GPUs, and measure its physics performance and computing time.
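The successive-linking step mentioned here can be sketched in the same thread-per-candidate style: once segments exist, threads test pairs of segments in parallel and keep those that share a stub and bend smoothly. The Segment fields, the single-angle bend test, and all names below are illustrative assumptions, not the authors' code.

    #include <cstdio>
    #include <cmath>
    #include <cuda_runtime.h>

    struct Segment { int inner, outer; float phi; };  // phi: segment direction
    struct Triplet { int segA, segB; };

    // One thread per pair of segments; link them when the outer stub of one is
    // the inner stub of the other and the directions agree within a tolerance.
    __global__ void linkSegments(const Segment* segs, int n,
                                 Triplet* trips, int* nTrips, float bendCut) {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= n * n) return;
        int a = idx / n, b = idx % n;
        if (a == b || segs[a].outer != segs[b].inner) return; // must share a stub
        if (fabsf(segs[a].phi - segs[b].phi) < bendCut) {     // smooth trajectory?
            int slot = atomicAdd(nTrips, 1);
            trips[slot] = Triplet{a, b};
        }
    }

    int main() {
        Segment* segs; Triplet* trips; int* nTrips;
        cudaMallocManaged(&segs,   3 * sizeof(Segment));
        cudaMallocManaged(&trips,  9 * sizeof(Triplet));
        cudaMallocManaged(&nTrips, sizeof(int));
        segs[0] = {0, 1, 0.10f};   // stub 0 -> stub 1
        segs[1] = {1, 2, 0.12f};   // shares stub 1, similar direction: links
        segs[2] = {1, 3, 0.90f};   // shares stub 1 but bends too much
        *nTrips = 0;
        linkSegments<<<1, 32>>>(segs, 3, trips, nTrips, 0.05f);
        cudaDeviceSynchronize();
        printf("linked pairs: %d\n", *nTrips);  // expect 1 (segments 0 and 1)
        return 0;
    }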
  2. Doglioni, C.; Kim, D.; Stewart, G.A.; Silvestris, L.; Jackson, P.; Kamleh, W. (Eds.)
    One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is finding and fitting particle tracks during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD and SIMT architectures that are now prevalent in high-performance hardware. Previously we observed significant parallel speedups, with physics performance comparable to CMS standard tracking, on Intel Xeon, Intel Xeon Phi, and (to a limited extent) NVIDIA GPUs. While early tests were based on artificial events occurring inside an idealized barrel detector, we showed subsequently that our mkFit software builds tracks successfully from complex simulated events (including detector pileup) occurring inside a geometrically accurate representation of the CMS-2017 tracker. Here, we report on advances in both the computational and physics performance of mkFit, as well as progress toward integration with CMS production software. Recently we have improved the overall efficiency of the algorithm by preserving short track candidates at a relatively early stage rather than attempting to extend them over many layers. Moreover, mkFit formerly produced an excess of duplicate tracks; these are now explicitly removed in an additional processing step. We demonstrate that with these enhancements, mkFit becomes a suitable choice for the first iteration of CMS tracking, and eventually for later iterations as well. We plan to test this capability in the CMS High Level Trigger during Run 3 of the LHC, with an ultimate goal of using it in both the CMS HLT and offline reconstruction for the HL-LHC CMS tracker.
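As a hedged illustration of why Kalman filtering maps onto SIMD/SIMT hardware, the sketch below runs one simplified propagate-and-update step per GPU thread, one thread per track candidate. The two-component state (position, slope), the 1-D measurement model, and all names are assumptions made for illustration; mkFit's actual state vector and mathematics are more involved.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Per-track state: (x, slope) with 2x2 covariance stored as p00, p01, p11.
    struct TrackState { float x, slope, p00, p01, p11; };

    // One thread per track candidate: propagate to the next layer and apply a
    // standard Kalman update with a 1-D position measurement there.
    __global__ void kalmanStep(TrackState* t, const float* meas, int n,
                               float dz, float measVar) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        TrackState s = t[i];
        // Predict: x -> x + slope*dz, P -> F P F^T with F = [[1, dz], [0, 1]]
        float x   = s.x + s.slope * dz;
        float p00 = s.p00 + 2.f * dz * s.p01 + dz * dz * s.p11;
        float p01 = s.p01 + dz * s.p11;
        float p11 = s.p11;
        // Update with measurement m of x, i.e. H = [1 0]
        float r  = meas[i] - x;            // residual
        float S  = p00 + measVar;          // innovation variance
        float k0 = p00 / S, k1 = p01 / S;  // Kalman gain
        t[i].x     = x + k0 * r;
        t[i].slope = s.slope + k1 * r;
        t[i].p00   = (1.f - k0) * p00;
        t[i].p01   = (1.f - k0) * p01;
        t[i].p11   = p11 - k1 * p01;
        // (material effects and outlier rejection omitted in this sketch)
    }

    int main() {
        const int n = 2;
        TrackState* t; float* m;
        cudaMallocManaged(&t, n * sizeof(TrackState));
        cudaMallocManaged(&m, n * sizeof(float));
        for (int i = 0; i < n; ++i) t[i] = {0.f, 0.1f, 1.f, 0.f, 1.f};
        m[0] = 1.2f; m[1] = 0.8f;          // hits found on the next layer
        kalmanStep<<<1, 32>>>(t, m, n, 10.f, 0.01f);
        cudaDeviceSynchronize();
        printf("track 0: x=%.3f slope=%.3f\n", t[0].x, t[0].slope);
        return 0;
    }

Because every track update runs the same arithmetic on different data, thousands of such updates vectorize cleanly, which is the throughput argument the abstract makes.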
  3. Abstract: In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a comprehensive particle-level view of the event by combining information from the calorimeters and the trackers, significantly improving the detector resolution for jets and the missing transverse momentum. In view of the planned high-luminosity upgrade of the CERN Large Hadron Collider (LHC), it is necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance are sufficient in an environment with many simultaneous proton–proton interactions (pileup). Machine learning may offer a prospect for computationally efficient event reconstruction that is well suited to heterogeneous computing platforms, while significantly improving the reconstruction quality over rule-based algorithms for granular detectors. We introduce MLPF, a novel, end-to-end trainable, machine-learned particle-flow algorithm based on a parallelizable, computationally efficient, and scalable graph neural network, optimized using a multi-task objective on simulated events. We report the physics and computational performance of the MLPF algorithm on a Monte Carlo dataset of top quark–antiquark pairs produced in proton–proton collisions in conditions similar to those expected for the high-luminosity LHC. The MLPF algorithm improves the physics response with respect to a rule-based benchmark algorithm and demonstrates computationally scalable particle-flow reconstruction in a high-pileup environment.
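MLPF's architecture is not reproduced here, but the core operation that makes graph neural networks amenable to GPUs is a parallel gather/scatter of features along graph edges. Below is a minimal CUDA sketch of that aggregation step; the feature width and all names are illustrative assumptions, and the kernel stands in for one generic message-passing layer, not for MLPF itself.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define NFEAT 4  // illustrative feature width

    // One thread per (edge, feature) pair: accumulate each neighbour's features
    // into the receiving node, the gather/scatter at the heart of a GNN layer.
    __global__ void aggregate(const int* src, const int* dst, int nEdges,
                              const float* nodeFeat, float* aggFeat) {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= nEdges * NFEAT) return;
        int e = idx / NFEAT, f = idx % NFEAT;
        // Many edges may point at the same node, so the add must be atomic.
        atomicAdd(&aggFeat[dst[e] * NFEAT + f], nodeFeat[src[e] * NFEAT + f]);
    }

    int main() {
        const int nNodes = 3, nEdges = 2;
        int *src, *dst; float *x, *agg;
        cudaMallocManaged(&src, nEdges * sizeof(int));
        cudaMallocManaged(&dst, nEdges * sizeof(int));
        cudaMallocManaged(&x,   nNodes * NFEAT * sizeof(float));
        cudaMallocManaged(&agg, nNodes * NFEAT * sizeof(float));
        src[0] = 0; dst[0] = 2;            // edges 0->2 and 1->2
        src[1] = 1; dst[1] = 2;
        for (int i = 0; i < nNodes * NFEAT; ++i) { x[i] = 1.f + i; agg[i] = 0.f; }
        aggregate<<<1, 32>>>(src, dst, nEdges, x, agg);
        cudaDeviceSynchronize();
        printf("agg[node2][0] = %.1f\n", agg[2 * NFEAT + 0]);  // 1.0 + 5.0 = 6.0
        return 0;
    }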
  4. Abstract: The Large Hadron Collider (LHC) at CERN will undergo major upgrades to increase the instantaneous luminosity up to 5–7.5×10³⁴ cm⁻²s⁻¹. This High Luminosity upgrade of the LHC (HL-LHC) will deliver a total of 3000–4000 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 13–14 TeV. To cope with these challenging environmental conditions, the strip tracker of the CMS experiment will be upgraded using modules with two closely-spaced silicon sensors that provide tracking information to the Level-1 trigger selection. This paper describes the performance, in a test beam experiment, of the first prototype module based on the final version of the CMS Binary Chip front-end ASIC, before and after the module was irradiated with neutrons. The results demonstrate that the prototype module satisfies the requirements, providing efficient tracking information after being irradiated with a total fluence comparable to the one expected over the lifetime of the experiment.
  5. Abstract: The Exa.TrkX project has applied geometric learning concepts such as metric learning and graph neural networks to HEP particle tracking. Exa.TrkX's tracking pipeline groups detector measurements to form track candidates and filters them. The pipeline, originally developed using the TrackML dataset (a simulation of an LHC-inspired tracking detector), has been demonstrated on other detectors, including the DUNE liquid-argon TPC and the CMS High-Granularity Calorimeter. This paper documents new developments needed to study the physics and computing performance of the Exa.TrkX pipeline on the full TrackML dataset, a first step towards validating the pipeline using ATLAS and CMS data. The pipeline achieves tracking efficiency and purity similar to production tracking algorithms. Crucially for future HEP applications, the pipeline benefits significantly from GPU acceleration, and its computational requirements scale close to linearly with the number of particles in the event.
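One way the final grouping stage of such a pipeline is commonly parallelized is connected-component labeling on the graph of edges that survive the filter: hits sharing a component become one track candidate. The sketch below implements simple iterative label propagation in CUDA under that assumption; the names and the toy event are illustrative, not Exa.TrkX code.

    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per surviving edge: pull both endpoints toward the smaller
    // label. Labels only ever decrease, so iterating to a fixed point labels
    // each connected component, i.e. groups hits into track candidates.
    __global__ void propagate(const int* u, const int* v, int nEdges,
                              int* label, int* changed) {
        int e = blockIdx.x * blockDim.x + threadIdx.x;
        if (e >= nEdges) return;
        int lu = label[u[e]], lv = label[v[e]];
        if (lu == lv) return;
        int lo = lu < lv ? lu : lv;
        atomicMin(&label[u[e]], lo);
        atomicMin(&label[v[e]], lo);
        *changed = 1;
    }

    int main() {
        const int nHits = 5, nEdges = 3;
        int *u, *v, *label, *changed;
        cudaMallocManaged(&u, nEdges * sizeof(int));
        cudaMallocManaged(&v, nEdges * sizeof(int));
        cudaMallocManaged(&label, nHits * sizeof(int));
        cudaMallocManaged(&changed, sizeof(int));
        // Edges kept by the filter: {0-1, 1-2} is one candidate, {3-4} another.
        u[0] = 0; v[0] = 1;  u[1] = 1; v[1] = 2;  u[2] = 3; v[2] = 4;
        for (int i = 0; i < nHits; ++i) label[i] = i;  // each hit starts alone
        do {
            *changed = 0;
            propagate<<<1, 32>>>(u, v, nEdges, label, changed);
            cudaDeviceSynchronize();
        } while (*changed);
        for (int i = 0; i < nHits; ++i)
            printf("hit %d -> candidate %d\n", i, label[i]);
        return 0;
    }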