Abstract Efficient and accurate algorithms are necessary to reconstruct particles in the highly granular detectors anticipated at the High-Luminosity Large Hadron Collider and the Future Circular Collider. We study scalable machine learning models for event reconstruction in electron-positron collisions based on a full detector simulation. Particle-flow reconstruction can be formulated as a supervised learning task using tracks and calorimeter clusters. We compare a graph neural network and a kernel-based transformer and demonstrate that we can avoid quadratic operations while achieving realistic reconstruction. We show that hyperparameter tuning significantly improves the performance of the models. The best graph neural network model improves the jet transverse momentum resolution by up to 50% compared to the rule-based algorithm. The resulting model is portable across Nvidia, AMD and Habana hardware. Accurate and fast machine-learning-based reconstruction can significantly improve future measurements at colliders.
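The abstract does not spell out how quadratic scaling is avoided; one common approach, shown here as a minimal sketch rather than the paper's actual configuration, is to connect each track or calorimeter cluster only to its k nearest neighbours in feature space, so that message passing scales roughly as O(N·k) instead of O(N²). The feature layout below is purely illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_knn_graph(elements: np.ndarray, k: int = 16) -> np.ndarray:
    """Directed edges (source, target) connecting each detector element
    (a track or calorimeter cluster, one row of features) to its k nearest
    neighbours, avoiding a fully connected O(N^2) graph."""
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(elements))).fit(elements)
    _, idx = nn.kneighbors(elements)       # idx[:, 0] is the element itself
    n, m = idx.shape
    src = np.repeat(np.arange(n), m - 1)
    dst = idx[:, 1:].ravel()
    return np.stack([src, dst], axis=0)

# Toy event: 1000 elements with illustrative features (eta, phi, energy, type flag)
event = np.random.default_rng(0).normal(size=(1000, 4))
edges = build_knn_graph(event, k=16)       # shape (2, 1000 * 16)
```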
Novel position reconstruction methods for highly granular electromagnetic calorimeters
We present work on design and reconstruction methods for sampling electromagnetic calorimeters, with an emphasis on highly granular designs. We use the clustered logarithmically weighted center-of-gravity estimator (lwk-means) for initial benchmarking of position resolution. We find that the θ and φ resolution for high-energy photons in Si-W designs improves when both the sampling frequency and the sampling thickness are increased; increasing only one is found to have mixed results. We find that lwk-means is unable to effectively exploit calorimeter transverse cell sizes smaller than 2 mm. We develop new reconstruction methods for highly granular designs and find that methods which measure only the initial particle shower and disregard the rest of the shower can take advantage of cell sizes down to at least 10 µm, significantly outperforming the benchmark method. Of these, the best combination is the initial-particle-shower “single hit” method applied to the calorimeter design with the highest sampling frequency and sampling fraction.
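The logarithmically weighted centre-of-gravity estimator referenced above is a standard calorimetry technique; a minimal sketch is shown below, where the weight cutoff w0 and the per-cluster application are assumptions rather than values taken from the paper. Each cell receives a weight w_i = max(0, w0 + ln(E_i / ΣE)), and the position estimate is the weighted mean of the cell positions.

```python
import numpy as np

def log_weighted_cog(cell_pos: np.ndarray, cell_energy: np.ndarray, w0: float = 3.5) -> np.ndarray:
    """Logarithmically weighted centre of gravity of one cluster.
    cell_pos: (N, 3) cell centre coordinates; cell_energy: (N,) deposited energies.
    w0 sets how strongly low-energy cells are suppressed (illustrative default)."""
    frac = np.clip(cell_energy / cell_energy.sum(), 1e-12, None)
    w = np.maximum(0.0, w0 + np.log(frac))   # cells below exp(-w0) of the total get zero weight
    return (w[:, None] * cell_pos).sum(axis=0) / w.sum()
```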
- Award ID(s): 2310030
- PAR ID: 10608088
- Publisher / Repository: EPJ Web Conf.
- Date Published:
- Journal Name: EPJ Web of Conferences
- Volume: 315
- ISSN: 2100-014X
- Page Range / eLocation ID: 03007
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Among the well-known methods to approximate derivatives of expectations computed by Monte-Carlo simulations, averages of pathwise derivatives are often the easiest to apply. Computing them via algorithmic differentiation typically does not require major manual analysis and rewriting of the code, even for very complex programs like simulations of particle-detector interactions in high-energy physics. However, the pathwise derivative estimator can be biased if there are discontinuities in the program, which may diminish its value for applications. This work integrates algorithmic differentiation into the electromagnetic shower simulation code HepEmShow based on G4HepEm, allowing us to study how well pathwise derivatives approximate derivatives of energy depositions in a sampling calorimeter with respect to parameters of the beam and geometry. We found that when multiple scattering is disabled in the simulation, means of pathwise derivatives converge quickly to their expected values, and these are close to the actual derivatives of the energy deposition. Additionally, we demonstrate the applicability of this novel gradient estimator for stochastic gradient-based optimization in a model example. (A generic sketch of the pathwise estimator appears after this list.)
- A practical and well-studied method for computing the novelty of a design is to construct an ordinal embedding via a collection of pairwise comparisons between items (called triplets), and use distances within that embedding to compute which designs are farthest from the center. Unfortunately, ordinal embedding methods can require a large number of triplets before their primary error measure, the triplet violation error, converges. But if our goal is accurate novelty estimation, is it really necessary to fully minimize all triplet violations? Can we extract useful information regarding the novelty of all or some items using fewer triplets than classical convergence rates might imply? This paper addresses this question by studying the relationship between triplet violation error and novelty score error when using ordinal embeddings. Specifically, we compare how errors in embeddings produced by Generalized Non-Metric Multidimensional Scaling (GNMDS) converge under different sampling methods, for different numbers of embedded items, sizes of latent spaces, and for the top K most novel designs. We find that estimating the novelty of a set of items via ordinal embedding can require significantly fewer human-provided triplets than is needed to converge the triplet error, and that this effect is modulated by the type of triplet sampling method (random versus uncertainty sampling). We also find that uncertainty sampling causes unique convergence behavior in estimating the most novel items compared to non-novel items. Our results imply that in certain situations one can use ordinal embedding techniques to estimate novelty error in fewer samples than is typically expected. Moreover, the convergence behavior of top K novel items motivates new potential triplet sampling methods that go beyond typical triplet reduction measures. (A small sketch of the two error measures appears after this list.)
- A variety of imaging systems are in use in oceanographic surveys, and the opto-mechanical configurations have become highly sophisticated. However, much less consideration has been given to the accurate reconstruction of imaging data. To improve reconstruction of particles captured by Focused Shadowgraph Imaging (FoSI), a system that excels at visualizing low-optical-density objects, we developed a novel object detection algorithm to process images with a resolution of ~12 μm per pixel. The suggested improvements to conventional edge-detection methods are relatively simple and time-efficient, and more accurately render the sizes and shapes of small particles ranging from 24 to 500 μm. In addition, we introduce a gradient of neutral density filters as part of the protocol, serving to calibrate recorded gray levels and thus determine the absolute values of detection thresholds. Set to intermediate detection threshold levels, particle numbers were highly correlated with independently measured beam attenuation (cp). The utility of our method was underscored by its ability to remove imperfections (dirt, scratches and uneven illumination), and by its capture of transparent particle features such as those found in gelatinous plankton, marine snow and a portion of the oceanic gel phase. (An illustrative gray-level calibration sketch appears after this list.)
- Magnetic particle tracking (MPT) is a recently developed non-invasive measurement technique that has gained popularity for studying dense particulate or granular flows. This method involves tracking the trajectory of a magnetically labeled particle, whose field is modeled as a dipole. The nature of this method allows it to be used in opaque environments, which can be highly beneficial for the measurement of dense particle dynamics. However, since the magnetic field of the particle used is weak, the signal-to-noise ratio is usually low. Noise from the measuring devices contaminates the reconstruction of the magnetic tracer’s trajectory, so a filter is needed to reduce the noise in the final trajectory results. In this work, we present a neural network-based framework for MPT trajectory reconstruction and filtering, which yields accurate results and operates at very high speed. The reconstruction derived from this framework is compared to the state-of-the-art extended Kalman filter-based reconstruction. (A sketch of the dipole field model appears after this list.)
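As context for the pathwise-derivative item above, here is a generic one-dimensional illustration of the estimator, not the HepEmShow setup: each sample is differentiated with respect to the parameter and the results are averaged. For X = θ + Z with Z standard normal and f(X) = X², the estimator mean(2·X) converges to the exact derivative 2θ of E[f(X)] = θ² + 1.

```python
import numpy as np

rng = np.random.default_rng(42)
theta = 1.5
z = rng.normal(size=100_000)
x = theta + z                 # pathwise reparameterisation: each sample depends smoothly on theta
f = x**2                      # toy observable standing in for an energy deposition

# Pathwise derivative estimator: average of df/dtheta = f'(X) * dX/dtheta = 2 * X * 1
pathwise = (2.0 * x).mean()
print(pathwise, 2.0 * theta)  # estimate vs. exact derivative of E[f(X)]
```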
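For the ordinal-embedding item, the two error measures being compared can be made concrete with a small sketch; it assumes an embedding has already been produced (GNMDS itself is not implemented here), scores novelty as distance from the embedding centroid, and counts violated triplets (i, j, k), read as "i is closer to j than to k".

```python
import numpy as np

def novelty_scores(embedding: np.ndarray) -> np.ndarray:
    """Novelty of each item: distance from the centroid of the embedding."""
    return np.linalg.norm(embedding - embedding.mean(axis=0), axis=1)

def triplet_violation_error(embedding: np.ndarray, triplets: np.ndarray) -> float:
    """Fraction of triplets (i, j, k), meaning 'i is closer to j than to k',
    that the embedding places the wrong way around."""
    dist = lambda a, b: np.linalg.norm(embedding[a] - embedding[b], axis=-1)
    i, j, k = triplets.T
    return float(np.mean(dist(i, j) >= dist(i, k)))
```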
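For the shadowgraph-imaging item, the idea of calibrating gray levels with a gradient of neutral density filters can be illustrated as follows; the filter densities, the linear fit form and the 50% threshold are placeholders, not values from the paper.

```python
import numpy as np

# Known optical densities of the ND filter steps and the gray levels recorded
# through them (placeholder numbers for illustration only)
optical_density = np.array([0.0, 0.3, 0.6, 0.9, 1.2])
transmission = 10.0 ** (-optical_density)
gray_level = np.array([240.0, 121.0, 62.0, 31.0, 16.0])

# Assumed linear calibration: gray level ~ a * transmission + b
a, b = np.polyfit(transmission, gray_level, deg=1)

# Map an absolute transmission threshold (here 50%) onto a gray-level detection threshold
threshold_gray = a * 0.5 + b
print(threshold_gray)
```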
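For the magnetic particle tracking item, the dipole model has a standard closed form; the sketch below evaluates the field a tracker would fit sensor readings against, with the magnetic moment and sensor offset as placeholder values.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # vacuum permeability in T*m/A

def dipole_field(r: np.ndarray, moment: np.ndarray) -> np.ndarray:
    """Magnetic field of a point dipole with moment `moment` (A*m^2),
    evaluated at displacement `r` (m) from the tracer."""
    rn = np.linalg.norm(r)
    r_hat = r / rn
    return MU0 / (4.0 * np.pi) * (3.0 * np.dot(moment, r_hat) * r_hat - moment) / rn**3

# Field of a 0.1 A*m^2 tracer measured 5 cm away along x (placeholder values)
print(dipole_field(np.array([0.05, 0.0, 0.0]), np.array([0.0, 0.0, 0.1])))
```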