


This content will become publicly available on May 1, 2025

Title: Performance optimization for a scintillating glass electromagnetic calorimeter at the EIC
Abstract

The successful realization of the EIC scientific program requires the design and construction of high-performance particle detectors. Recent developments in scientific computing and the increased availability of high-performance computing resources have made it possible to optimize multi-parameter designs, even when the evaluations require long computational times (for example, simulations of particle interactions with matter). Machine-assisted techniques for informing design decisions have grown considerably in popularity within the EIC detector community. Having already been applied to tracking and RICH PID detectors, they also have potential applications in calorimeter design. The SciGlass barrel calorimeter originally designed for EIC Detector-1 has a semi-projective geometry that allows for non-trivial performance gains, but that also poses special challenges for effective exploration of the design space while satisfying the available-space and cell-dimension constraints together with the full detector acceptance requirement. This talk covers the specific approaches taken to perform this detector design optimization.
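The constrained, multi-objective design-space exploration described above can be illustrated with a toy Pareto-front selection. Everything below is invented for illustration: the parameter names, the feasibility bounds, and the objective proxies do not model the actual SciGlass geometry or its real figures of merit.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

def feasible(width_cm, length_cm):
    """Toy stand-ins for the space and cell-dimension constraints."""
    return 1.0 <= width_cm <= 4.0 and 20.0 <= length_cm <= 45.0

def objectives(width_cm, length_cm):
    """Toy proxies: a 'resolution' term and a cost term, both to minimize."""
    resolution = width_cm / length_cm   # finer, longer cells -> smaller value
    cost = width_cm * length_cm         # more material -> larger value
    return (resolution, cost)

random.seed(0)
candidates = [(random.uniform(0.5, 5.0), random.uniform(10.0, 50.0))
              for _ in range(200)]
scores = [objectives(w, l) for (w, l) in candidates if feasible(w, l)]
front = pareto_front(scores)
```

In a real study each objective evaluation would be an expensive full simulation rather than a closed-form proxy, which is why surrogate-assisted optimizers are used in practice.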

 
Award ID(s):
2110293 2309976 2012430
PAR ID:
10524065
Author(s) / Creator(s):
Publisher / Repository:
IOPscience
Date Published:
Journal Name:
Journal of Instrumentation
Volume:
19
Issue:
05
ISSN:
1748-0221
Page Range / eLocation ID:
C05049
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

Artificial Intelligence is poised to transform the design of complex, large-scale detectors like ePIC at the future Electron Ion Collider. Featuring a central detector with additional detecting systems in the far forward and far backward regions, the ePIC experiment incorporates numerous design parameters and objectives, including performance, physics reach, and cost, constrained by mechanical and geometric limits. This project aims to develop a scalable, distributed AI-assisted detector design for the EIC (AID(2)E), employing state-of-the-art multiobjective optimization to tackle complex designs. Supported by the ePIC software stack and using Geant4 simulations, our approach benefits from transparent parameterization and advanced AI features. The workflow leverages the PanDA and iDDS systems, used in major experiments such as ATLAS at the CERN LHC, the Rubin Observatory, and sPHENIX at RHIC, to manage the compute-intensive demands of ePIC detector simulations. Tailored enhancements to the PanDA system focus on usability, scalability, automation, and monitoring. Ultimately, this project aims to establish a robust design capability, apply a distributed AI-assisted workflow to the ePIC detector, and extend its applications to the design of the second detector (Detector-2) at the EIC, as well as to calibration and alignment tasks. Additionally, we are developing advanced data science tools to efficiently navigate the complex, multidimensional trade-offs identified through this optimization process.
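The fan-out pattern at the core of such a workflow, where many independent simulation jobs are dispatched to workers and their figures of merit collected, can be sketched in a few lines. The `simulate` function, its parameters, and the grid below are hypothetical placeholders; the real workflow submits Geant4 jobs through PanDA/iDDS rather than a local thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(design):
    """Hypothetical stand-in for one detector-simulation job.
    The real system would run Geant4 and extract a physics figure of merit."""
    thickness_cm, spacing_cm = design
    score = thickness_cm / spacing_cm   # toy figure of merit
    return {"design": design, "score": score}

# A small hypothetical parameter grid to scan.
grid = [(t, s) for t in (1.0, 2.0, 3.0) for s in (0.5, 1.0)]

# Dispatch jobs concurrently and gather the results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, grid))

best = max(results, key=lambda r: r["score"])
```

Because each job is independent, the same pattern scales from a thread pool to thousands of grid workers managed by a workload system.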

     
  2. Abstract

    The Einstein Telescope (ET), the European project for a third-generation gravitational-wave detector, has a reference configuration based on a triangular shape consisting of three nested detectors with 10 km arms, where each detector has a 'xylophone' configuration made of an interferometer tuned toward high frequencies, and an interferometer tuned toward low frequencies and working at cryogenic temperature. Here, we examine the scientific perspectives under possible variations of this reference design. We perform a detailed evaluation of the science case for a single triangular geometry observatory, and we compare it with the results obtained for a network of two L-shaped detectors (either parallel or misaligned) located in Europe, considering different choices of arm-length for both the triangle and the 2L geometries. We also study how the science output changes in the absence of the low-frequency instrument, both for the triangle and the 2L configurations. We examine a broad class of simple 'metrics' that quantify the science output, related to compact binary coalescences, multi-messenger astronomy and stochastic backgrounds, and we then examine the impact of different detector designs on a more specific set of scientific objectives.

     
  3. Abstract

In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a comprehensive particle-level view of the event by combining information from the calorimeters and the trackers, significantly improving the detector resolution for jets and the missing transverse momentum. In view of the planned high-luminosity upgrade of the CERN Large Hadron Collider (LHC), it is necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance are sufficient in an environment with many simultaneous proton–proton interactions (pileup). Machine learning may offer a prospect for computationally efficient event reconstruction that is well-suited to heterogeneous computing platforms, while significantly improving the reconstruction quality over rule-based algorithms for granular detectors. We introduce MLPF, a novel, end-to-end trainable, machine-learned particle-flow algorithm based on a parallelizable, computationally efficient, and scalable graph neural network, optimized using a multi-task objective on simulated events. We report the physics and computational performance of the MLPF algorithm on a Monte Carlo dataset of top quark–antiquark pairs produced in proton–proton collisions in conditions similar to those expected for the high-luminosity LHC. The MLPF algorithm improves the physics response with respect to a rule-based benchmark algorithm and demonstrates computationally scalable particle-flow reconstruction in a high-pileup environment.
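The graph-neural-network building block such an algorithm relies on is a message-passing round: each node aggregates its neighbors' features and transforms the result. The sketch below is a minimal NumPy illustration with invented shapes and random weights, not the MLPF architecture itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "event": 5 detector inputs (e.g. tracks and clusters), 3 features each.
X = rng.standard_normal((5, 3))

# Adjacency matrix of a hypothetical proximity graph over the inputs.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Learned weight matrix (random here; trained by gradient descent in practice).
W = rng.standard_normal((6, 4))

# One message-passing round: sum neighbor features, concatenate with each
# node's own features, then apply a linear map followed by a ReLU.
messages = A @ X
H = np.maximum(np.concatenate([X, messages], axis=1) @ W, 0.0)
```

Stacking several such rounds, and decoding each node's final embedding into particle candidates, gives the general shape of a learned particle-flow reconstruction.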

     
  4. Abstract

External and internal convertible (EIC) form-based motion control is an effective design for simultaneous trajectory tracking and balancing of underactuated balance robots. Under certain conditions, however, the EIC-based control design is shown to lead to uncontrolled robot motion. To overcome this issue, we present a Gaussian process (GP)-based data-driven learning control for underactuated balance robots with the EIC modeling structure. Two GP-based learning controllers are presented, both exploiting the EIC property. The partial EIC (PEIC)-based control design partitions the robot dynamics into a fully actuated subsystem and a reduced-order underactuated subsystem. The null-space EIC (NEIC)-based control compensates for the uncontrolled motion in a subspace, while the other closed-loop dynamics are not affected. Under the PEIC- and NEIC-based designs, the tracking and balance tasks are guaranteed, and bounded errors with a guaranteed convergence rate are achieved without the uncontrolled motion incurred by the original EIC-based control. We validate the results and demonstrate the GP-based learning control design on two inverted pendulum platforms.
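The data-driven ingredient of such controllers is GP regression: learning an unmodeled dynamics term from input–output data and evaluating its posterior mean inside the control law. The sketch below shows plain GP regression with a squared-exponential kernel on an invented 1-D target (`sin`); the target function, sample sizes, and hyperparameters are illustrative only, not the paper's robot model.

```python
import numpy as np

def rbf(a, b, ell=0.5, sf=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(0)

# Hypothetical training data: noisy observations of an unmodeled dynamics
# term f(q) = sin(q); a real controller collects these from the robot.
q = rng.uniform(-2.0, 2.0, 30)
y = np.sin(q) + 0.05 * rng.standard_normal(30)

noise_var = 0.05**2
# Precompute (K + sigma^2 I)^{-1} y once; prediction is then a dot product.
alpha = np.linalg.solve(rbf(q, q) + noise_var * np.eye(30), y)

def predict(q_star):
    """GP posterior mean at test inputs q_star."""
    return rbf(np.atleast_1d(q_star), q) @ alpha
```

In a learning controller, `predict` would supply the model-compensation term in the feedback law, updated as new data arrive.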

     
  5. Abstract

Significant investments to upgrade and construct large-scale scientific facilities demand commensurate investments in R&D to design algorithms and computing approaches that enable scientific and engineering breakthroughs in the big data era. Innovative Artificial Intelligence (AI) applications have powered transformational solutions for big data challenges in industry and technology that now drive a multi-billion-dollar industry and play an ever-increasing role in shaping human social patterns. As AI continues to evolve into a computing paradigm endowed with statistical and mathematical rigor, it has become apparent that single-GPU solutions for training, validation, and testing are no longer sufficient for the computational grand challenges brought about by scientific facilities that produce data at a rate and volume that outstrip the computing capabilities of available cyberinfrastructure platforms. This realization has been driving the confluence of AI and high performance computing (HPC) to reduce time-to-insight and to enable the systematic study of domain-inspired AI architectures and optimization schemes for data-driven discovery. In this article, we present a summary of recent developments in this field and describe specific advances that the authors are spearheading to accelerate and streamline the use of HPC platforms to design and apply accelerated AI algorithms in academia and industry.

     