Abstract The reconstruction of the trajectories of charged particles, or track reconstruction, is a key computational challenge for particle and nuclear physics experiments. While the tuning of track reconstruction algorithms can depend strongly on details of the detector geometry, the algorithms currently in use by experiments share many common features. At the same time, the intense environment of the High-Luminosity LHC accelerator and other future experiments is expected to put even greater computational stress on track reconstruction software, motivating the development of more performant algorithms. We present here the A Common Tracking Software (ACTS) toolkit, which draws on the experience with track reconstruction algorithms in the ATLAS experiment and makes them available in an experiment-independent and framework-independent package. It provides a set of high-level track reconstruction tools that are agnostic to the details of the detection technologies and magnetic field configuration and are tested for strict thread safety to support multi-threaded event processing. We discuss the conceptual design and technical implementation of ACTS, selected applications and their performance, and the lessons learned.
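To illustrate the design idea described above, the following is a minimal C++ sketch of a detector- and field-agnostic, thread-safe propagation tool. It is not the actual ACTS API; the names (ConstantBField, Propagator, step) and the simple Euler stepper are invented for illustration only. The key points are that the field description is a template parameter and that the propagator holds no mutable shared state, so one instance can be used concurrently from many event-processing threads.

```cpp
#include <array>
#include <cmath>
#include <utility>

// A trivial field provider; a real one could interpolate a measured field map.
struct ConstantBField {
  std::array<double, 3> value{0.0, 0.0, 2.0};  // Tesla, along z
  std::array<double, 3> getField(const std::array<double, 3>& /*pos*/) const {
    return value;
  }
};

// Minimal track parameterization used by this sketch.
struct TrackParameters {
  std::array<double, 3> position;  // metres
  std::array<double, 3> momentum;  // GeV
  int charge;                      // in units of e
};

// The propagator owns only const state; all per-call state lives on the
// stack, which is what makes concurrent use from several threads safe.
template <typename BField>
class Propagator {
 public:
  explicit Propagator(BField field) : m_field(std::move(field)) {}

  // One explicit Euler step of the Lorentz force equation, purely as an
  // illustration; production code would use an adaptive Runge-Kutta stepper.
  TrackParameters step(const TrackParameters& in, double h) const {
    const auto B = m_field.getField(in.position);
    const auto& p = in.momentum;
    const double k = 0.3 * in.charge;  // ~0.3 GeV per (T * m), approximate
    const double pmag = std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
    // dp/ds = k * (p/|p|) x B ; dx/ds = p/|p|
    const std::array<double, 3> dpds{
        k * (p[1]*B[2] - p[2]*B[1]) / pmag,
        k * (p[2]*B[0] - p[0]*B[2]) / pmag,
        k * (p[0]*B[1] - p[1]*B[0]) / pmag};
    TrackParameters out = in;
    for (int i = 0; i < 3; ++i) {
      out.position[i] += h * p[i] / pmag;
      out.momentum[i] += h * dpds[i];
    }
    return out;
  }

 private:
  const BField m_field;
};
```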
A GPU-Based Kalman Filter for Track Fitting
Abstract Computing centres, including those used to process High-Energy Physics data and simulations, are increasingly providing significant fractions of their computing resources through hardware architectures other than x86 CPUs, with GPUs being a common alternative. GPUs can provide excellent computational performance at a good price point for tasks that can be suitably parallelized. Charged particle (track) reconstruction is a computationally expensive component of HEP data reconstruction, and thus needs to use available resources in an efficient way. In this paper, an implementation of Kalman filter-based track fitting using CUDA and running on GPUs is presented. This utilizes the ACTS (A Common Tracking Software) toolkit, an open-source and experiment-independent toolkit for track reconstruction. The implementation details and parallelization approach are described, along with the specific challenges of such an implementation. Detailed performance benchmarking results are discussed, which show encouraging performance gains over a CPU-based implementation for representative configurations. Finally, a perspective on the challenges and future directions for these studies is outlined. These include more complex and realistic scenarios to be studied, as well as anticipated developments in software frameworks and standards that may open up possibilities for greater flexibility and improved performance.
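To make the fitting operation concrete, below is a minimal C++ sketch (using Eigen for the matrix algebra) of a single Kalman filter measurement update, the core step that a GPU implementation parallelizes over many tracks and surfaces. The 6-parameter track state and 2-dimensional measurement are one common choice, not necessarily the exact parameterization used in the paper, and the type and function names are invented for illustration.

```cpp
#include <Eigen/Dense>

using Vector6  = Eigen::Matrix<double, 6, 1>;
using Matrix6  = Eigen::Matrix<double, 6, 6>;
using Vector2  = Eigen::Matrix<double, 2, 1>;
using Matrix2  = Eigen::Matrix<double, 2, 2>;
using Matrix26 = Eigen::Matrix<double, 2, 6>;

struct TrackState {
  Vector6 x;  // predicted track parameters on the surface
  Matrix6 P;  // predicted covariance
};

// Standard gain-matrix update:
//   K  = P H^T (H P H^T + V)^-1
//   x' = x + K (m - H x)
//   P' = (I - K H) P
TrackState kalmanUpdate(const TrackState& pred, const Vector2& m,
                        const Matrix2& V, const Matrix26& H) {
  const Matrix2 S = H * pred.P * H.transpose() + V;   // innovation covariance
  const Eigen::Matrix<double, 6, 2> K =
      pred.P * H.transpose() * S.inverse();           // Kalman gain
  TrackState up;
  up.x = pred.x + K * (m - H * pred.x);               // filtered parameters
  up.P = (Matrix6::Identity() - K * H) * pred.P;      // filtered covariance
  return up;
}

// On a GPU, this function body becomes the per-thread (or per-warp) work
// item: one thread performs the update for one track, so thousands of tracks
// crossing the same detector layer can be filtered concurrently.
```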
- Award ID(s): 1836650
- PAR ID: 10354361
- Date Published:
- Journal Name: Computing and Software for Big Science
- Volume: 5
- Issue: 1
- ISSN: 2510-2036
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The major challenge posed by the high instantaneous luminosity of the High-Luminosity LHC (HL-LHC) motivates efficient and fast reconstruction of charged-particle tracks in a high pile-up environment. While there have been efforts to use modern techniques such as vectorization to improve the existing classic Kalman-filter-based reconstruction algorithms, we take a fundamentally different approach and reconstruct tracks bottom-up. Our algorithm, called Line Segment Tracking, constructs small track stubs from adjoining detector regions and then successively links those stubs that are consistent with typical track trajectories. Since the production of these track stubs is localized, they can be built in parallel, which lends itself to architectures such as GPUs and multi-core CPUs (a simplified sketch of this stub-building and linking step is given after this list). We present an implementation of our algorithm in the context of the CMS Phase-2 Tracker, running on NVIDIA Tesla V100 GPUs, and measure its physics performance and computing time.
-
Abstract The High Luminosity upgrade of the Large Hadron Collider (HL-LHC) will produce particle collisions with up to 200 simultaneous proton-proton interactions. These unprecedented conditions will create a combinatorial complexity for charged-particle track reconstruction whose computational cost is expected to surpass the projected computing budget using conventional CPUs. Motivated by this, and taking into account the prevalence of heterogeneous computing in cutting-edge High Performance Computing centers, we propose an efficient, fast and highly parallelizable bottom-up approach to track reconstruction for the HL-LHC, along with an associated implementation on GPUs, in the context of the Phase 2 CMS outer tracker. Our algorithm, called Segment Linking (or Line Segment Tracking), takes advantage of localized track stub creation, combining individual stubs to progressively form higher-level objects that are subject to kinematical and geometrical requirements compatible with genuine physics tracks. The local nature of the algorithm makes it ideal for parallelization under the Single Instruction, Multiple Data paradigm, as hundreds of objects can be built simultaneously. The computing and physics performance of the algorithm has been tested on an NVIDIA Tesla V100 GPU, already yielding efficiency and timing measurements that are on par with the latest, multi-CPU versions of existing CMS tracking algorithms.
-
Obeid, Iyad; Selesnick, Ivan; Picone, Joseph (Ed.) The goal of this work was to design a low-cost computing facility that can support the development of an open source digital pathology corpus containing 1M images [1]. A single image from a clinical-grade digital pathology scanner can range in size from hundreds of megabytes to five gigabytes. A 1M image database requires over a petabyte (PB) of disk space. To do meaningful work in this problem space requires a significant allocation of computing resources. The improvements and expansions to our HPC (high-performance computing) cluster, known as Neuronix [2], required to support working with digital pathology fall into two broad categories: computation and storage. To handle the increased computational burden and increase job throughput, we are using Slurm [3] as our scheduler and resource manager. For storage, we have designed and implemented a multi-layer filesystem architecture to distribute a filesystem across multiple machines. These enhancements, which are entirely based on open source software, have extended the capabilities of our cluster and increased its cost-effectiveness. Slurm has numerous features that allow it to generalize to a number of different scenarios. Among the most notable is its support for GPU (graphics processing unit) scheduling. GPUs can offer a tremendous performance increase in machine learning applications [4], and Slurm's built-in mechanisms for handling them were a key factor in making this choice. Slurm has a general resource (GRES) mechanism that can be used to configure and enable support for resources beyond the ones provided by the traditional HPC scheduler (e.g. memory, wall-clock time), and GPUs are among the GRES types that can be supported by Slurm [5]. In addition to being able to track resources, Slurm does strict enforcement of resource allocation. This becomes very important as the computational demands of the jobs increase, so that they have all the resources they need and do not take resources from other jobs. It is a common practice among GPU-enabled frameworks to query the CUDA runtime library/drivers and iterate over the list of GPUs, attempting to establish a context on all of them. Slurm is able to affect the hardware discovery process of these jobs, which enables a number of these jobs to run alongside each other, even if the GPUs are in exclusive-process mode. To store large quantities of digital pathology slides, we developed a robust, extensible distributed storage solution. We utilized a number of open source tools to create a single filesystem, which can be mounted by any machine on the network. At the lowest layer of abstraction are the hard drives, which were split into four 60-disk chassis using 8 TB drives. To support these disks, we have two server units, each equipped with Intel Xeon CPUs and 128 GB of RAM. At the filesystem level, we have implemented a multi-layer solution that: (1) connects the disks together into a single filesystem/mountpoint using ZFS (the Zettabyte File System) [6], and (2) connects filesystems on multiple machines together to form a single mountpoint using Gluster [7]. ZFS, initially developed by Sun Microsystems, provides disk-level awareness and a filesystem which takes advantage of that awareness to provide fault tolerance. At the filesystem level, ZFS protects against data corruption and the infamous RAID write-hole bug by implementing a journaling scheme (the ZFS intent log, or ZIL) and copy-on-write functionality. Each machine (1 controller + 2 disk chassis) has its own separate ZFS filesystem. Gluster, essentially a meta-filesystem, takes each of these and provides the means to connect them together over the network, using distributed (similar to RAID 0, but without striping individual files) and mirrored (similar to RAID 1) configurations [8]. By implementing these improvements, it has been possible to expand the storage and computational power of the Neuronix cluster arbitrarily to support the most computationally intensive endeavors by scaling horizontally. We have greatly improved the scalability of the cluster while maintaining its excellent price/performance ratio [1].
-
Computing needs for high energy physics are already intensive and are expected to increase drastically in the coming years. In this context, heterogeneous computing, specifically as-a-service computing, has the potential for significant gains over traditional computing models. Although previous studies and packages in the field of heterogeneous computing have focused on GPUs as accelerators, FPGAs are an extremely promising option as well. A series of workflows are developed to establish the performance capabilities of FPGAs as a service. Multiple different devices and a range of algorithms for use in high energy physics are studied. For a small, dense network, the throughput can be improved by an order of magnitude with respect to GPUs as a service. For large convolutional networks, the throughput is found to be comparable to GPUs as a service. This work represents the first open-source FPGAs-as-a-service toolkit.
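As referenced in the first two records of this list, the bottom-up Line Segment Tracking / Segment Linking approach builds short stubs locally and links compatible ones into longer objects. The following is a minimal, self-contained C++ sketch of that pattern; it is not the CMS implementation, and the hit model, layer numbering, and the delta-phi cut value are illustrative placeholders. The region- and layer-local nature of the loops is what makes the GPU parallelization natural.

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Hit {
  double x, y, z;
  int layer;
};

// A "stub": a short track element built from two hits on adjacent layers.
struct Stub {
  Hit inner, outer;
  double phi() const {  // transverse-plane direction of the stub
    return std::atan2(outer.y - inner.y, outer.x - inner.x);
  }
};

// Stub building only looks at one pair of adjacent layers, so different
// layer pairs (and different detector regions) are independent and can be
// processed by separate GPU threads or thread blocks.
std::vector<Stub> buildStubs(const std::vector<Hit>& hits, int innerLayer) {
  std::vector<Stub> stubs;
  for (const Hit& a : hits) {
    if (a.layer != innerLayer) continue;
    for (const Hit& b : hits) {
      if (b.layer != innerLayer + 1) continue;
      stubs.push_back(Stub{a, b});
    }
  }
  return stubs;
}

// Link pairs of stubs whose directions agree within a tolerance; applying
// this step successively grows stubs into longer track candidates, which can
// then be subjected to further kinematical and geometrical requirements.
// (Wrap-around of the phi difference is ignored here for brevity.)
std::vector<std::pair<Stub, Stub>> linkStubs(const std::vector<Stub>& inner,
                                             const std::vector<Stub>& outer,
                                             double maxDeltaPhi = 0.02) {
  std::vector<std::pair<Stub, Stub>> links;
  for (const Stub& s1 : inner) {
    for (const Stub& s2 : outer) {
      if (std::fabs(s1.phi() - s2.phi()) < maxDeltaPhi) {
        links.emplace_back(s1, s2);
      }
    }
  }
  return links;
}
```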