
Title: Measuring the ¹⁵O(α,γ)¹⁹Ne Reaction in Type I X-ray Bursts using the GADGET II TPC: Software
¹⁵O(α,γ)¹⁹Ne is regarded as one of the most important thermonuclear reactions in type I X-ray bursts. To study the properties of the key resonance in this reaction using β decay, the existing Proton Detector component of the Gaseous Detector with Germanium Tagging (GADGET) assembly is being upgraded to operate as a time projection chamber (TPC) at FRIB. This upgrade includes the associated hardware as well as software, and this paper focuses mainly on the software upgrade. The full detector setup is simulated using the ATTPCROOTv2 data analysis framework for ²⁰Mg and ²⁴¹Am.
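As a back-of-the-envelope illustration of why this measurement demands a high-granularity TPC, the sketch below (plain Python, unrelated to the ATTPCROOTv2 framework itself) works out the two-body breakup kinematics of the key ¹⁹Ne state into ¹⁵O + α. The α separation energy and masses are assumed round literature values, so treat the numbers as orders of magnitude only.

```python
# Toy two-body breakup kinematics for 19Ne* -> 15O + alpha.
# Illustrative only -- not part of the ATTPCROOTv2 framework.
# S_ALPHA below is an assumed literature value; consult ENSDF/AME
# for analysis-grade numbers.

E_X = 4.03        # MeV, excitation energy of the key 19Ne resonance
S_ALPHA = 3.53    # MeV, assumed alpha separation energy of 19Ne
M_O15 = 15.003    # u, approximate 15O mass
M_ALPHA = 4.003   # u, approximate 4He mass

def breakup_energies(e_x=E_X, s_alpha=S_ALPHA):
    """Share the released energy between the two fragments.

    In the 19Ne rest frame the fragments fly apart back to back
    with equal momenta, so each carries a kinetic energy inversely
    proportional to its mass.
    """
    q = e_x - s_alpha                       # total kinetic energy released
    t_alpha = q * M_O15 / (M_O15 + M_ALPHA)
    t_o15 = q * M_ALPHA / (M_O15 + M_ALPHA)
    return t_alpha, t_o15

if __name__ == "__main__":
    t_a, t_o = breakup_energies()
    print(f"alpha: {t_a:.3f} MeV, 15O recoil: {t_o:.3f} MeV")
```

With these inputs the α carries only a few hundred keV, i.e. a very short track in the gas, which is why fine spatial granularity matters.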
Editors:
Liu, W.; Wang, Y.; Guo, B.; Tang, X.; Zeng, S.
Award ID(s):
2011890; 1913554
Publication Date:
NSF-PAR ID:
10317777
Journal Name:
EPJ Web of Conferences
Volume:
260
ISSN:
2100-014X
Sponsoring Org:
National Science Foundation
More Like this
  1. Cellular service providers continuously upgrade their network software on base stations to introduce new service features, fix software bugs, enhance quality of experience for users, or patch security vulnerabilities. A software upgrade typically requires the network element to be taken out of service, which can potentially degrade the service to users. Thus, the new software is deployed across the network using a rolling upgrade model such that the service impact during the roll-out is minimized. A sequential roll-out guarantees minimal impact but increases the deployment time, thereby incurring a significant human cost and time in monitoring the upgrade. A network-wide concurrent roll-out guarantees minimal deployment time but can result in a significant service impact. The goal is to strike a balance between deployment time and service impact during the upgrade. In this paper, we first present our findings from analyzing upgrades in operational networks and discussions with network operators, exposing the challenges in rolling software upgrades. We propose a new framework, Concord, to effectively coordinate software upgrades across the network that balances the deployment time and service impact. We evaluate Concord using real-world data collected from a large operational cellular network and demonstrate the benefits and tradeoffs. We also present a prototype deployment of Concord using a small-scale LTE testbed deployed indoors in a corporate building.
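To make the deployment-time versus service-impact tradeoff concrete, here is a minimal toy scheduler in Python. It is not the Concord algorithm from the paper; it simply packs base stations into upgrade waves under an invented per-wave traffic budget (node names, loads, and the budget are all hypothetical).

```python
# Toy batch scheduler illustrating the deployment-time vs.
# service-impact tradeoff: nodes are grouped into upgrade waves so
# that the traffic taken out of service at any one time stays under
# a budget. NOT the Concord algorithm, just a sketch.

def plan_waves(node_load, impact_budget):
    """Greedily pack nodes into upgrade waves.

    node_load: dict node -> traffic handled by that node
    impact_budget: max total traffic allowed to be out of service
                   at the same time
    len(waves) proxies deployment time; the per-wave load proxies
    worst-case service impact.
    """
    waves, current, load = [], [], 0.0
    # Largest nodes first tends to pack waves tightly.
    for node, traffic in sorted(node_load.items(), key=lambda kv: -kv[1]):
        if load + traffic > impact_budget and current:
            waves.append(current)
            current, load = [], 0.0
        current.append(node)
        load += traffic
    if current:
        waves.append(current)
    return waves

if __name__ == "__main__":
    load = {"bs1": 40, "bs2": 35, "bs3": 20, "bs4": 10, "bs5": 5}
    for budget in (40, 60, 110):   # tighter budget -> more waves
        waves = plan_waves(load, budget)
        print(f"budget={budget}: {len(waves)} waves -> {waves}")
```

Shrinking the budget drives the plan toward a sequential roll-out (low impact, long deployment); growing it drives the plan toward a network-wide concurrent roll-out, which is exactly the tension the paper describes.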
  2. Liu, W.; Wang, Y.; Guo, B.; Tang, X.; Zeng, S. (Eds.)
    Sensitivity studies have shown that the ¹⁵O(α,γ)¹⁹Ne reaction is the most important reaction-rate uncertainty affecting the shape of light curves from Type I X-ray bursts. This reaction is dominated by the 4.03 MeV resonance in ¹⁹Ne. Previous measurements by our group have shown that this state is populated in the decay sequence of ²⁰Mg. A single ²⁰Mg(βpα)¹⁵O event through the key ¹⁵O(α,γ)¹⁹Ne resonance yields a characteristic signature: the emission of a proton and an alpha particle. To achieve the granularity necessary for the identification of this signature, we have upgraded the Proton Detector of the Gaseous Detector with Germanium Tagging (GADGET) into a time projection chamber to form the GADGET II detection system. GADGET II has been fully constructed and is entering the testing phase.
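A hedged sketch of the event-signature idea described above: with full track granularity, a candidate ²⁰Mg(βpα)¹⁵O event is one containing exactly one proton-like and one alpha-like track. The Track fields, dE/dx thresholds, and classification cuts below are invented placeholders, not GADGET II analysis code.

```python
# Minimal sketch of the proton-plus-alpha event signature. The
# Track type and the dE/dx / range cuts are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Track:
    length_mm: float   # reconstructed range in the gas
    dedx: float        # mean energy loss, arbitrary units

def classify(track: Track) -> str:
    # Alphas are shorter and more heavily ionizing than protons at
    # these energies; the thresholds below are invented for the toy.
    if track.dedx > 5.0 and track.length_mm < 30.0:
        return "alpha"
    if track.dedx < 5.0:
        return "proton"
    return "other"

def is_candidate(event: list[Track]) -> bool:
    """Flag events with exactly one proton and one alpha track."""
    kinds = sorted(classify(t) for t in event)
    return kinds == ["alpha", "proton"]

if __name__ == "__main__":
    event = [Track(length_mm=80.0, dedx=2.1), Track(length_mm=12.0, dedx=9.4)]
    print(is_candidate(event))   # True for this toy event
```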
  3. Abstract: The search for long-lived particles (LLPs) is an exciting physics opportunity in the upcoming runs of the Large Hadron Collider. In this paper, we focus on a new search strategy using the High Granularity Calorimeter (HGCAL), part of the upgrade of the CMS detector, in such searches. In particular, we demonstrate that the high granularity of the calorimeter allows us to see "shower tracks" in the calorimeter, and these can play a crucial role in identifying the signal and suppressing the background. We study the potential reach of the HGCAL using a signal model in which the Standard Model Higgs boson decays into a pair of LLPs, h → XX. After carefully estimating the Standard Model QCD and misreconstructed fake-track backgrounds, we give the projected reach for both an existing vector boson fusion trigger and a novel displaced-track-based trigger. Our results show that the best reach for the Higgs decay branching ratio, BR(h → XX), in the vector boson fusion channel is about 𝒪(10⁻⁴) for lifetimes cτ_X ∼ 0.1–1 m, while for the gluon-gluon fusion channel it is about 𝒪(10⁻⁵–10⁻⁶) for similar lifetimes. For longer lifetimes, cτ_X ∼ 10³ m, our search could probe BR(h → XX) down to a few × 10⁻⁴ (10⁻²) in the gluon-gluon fusion (vector boson fusion) channels, respectively. In comparison with previous searches, our new search shows enhanced sensitivity in complementary regions of the LLP parameter space. We also comment on many improvements that can be implemented to further improve our proposed search.
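The lifetime dependence of the quoted reach follows from simple exponential decay: the chance that an LLP decays inside a detector shell rises and then falls with cτ_X. The sketch below evaluates that probability for a few of the lifetimes quoted above; the shell radii and boost are illustrative assumptions, not CMS/HGCAL geometry.

```python
# Worked example of the lifetime dependence: the probability that a
# long-lived particle X decays inside a shell between radii r1 and
# r2 is  P = exp(-r1 / L) - exp(-r2 / L),  with lab-frame mean decay
# length L = beta * gamma * c * tau. Radii and boost are invented.

import math

def decay_prob(ctau_m: float, beta_gamma: float, r1_m: float, r2_m: float) -> float:
    lab_decay_length = beta_gamma * ctau_m   # mean decay length in the lab
    return math.exp(-r1_m / lab_decay_length) - math.exp(-r2_m / lab_decay_length)

if __name__ == "__main__":
    # Assume a calorimeter shell roughly 3-5 m from the interaction
    # point and a moderately boosted X (beta*gamma ~ 3).
    for ctau in (0.1, 1.0, 1000.0):   # meters, as in the quoted reach
        p = decay_prob(ctau, beta_gamma=3.0, r1_m=3.0, r2_m=5.0)
        print(f"ctau = {ctau:7.1f} m -> P(decay in shell) = {p:.2e}")
```

The in-volume decay probability peaks when the lab decay length is comparable to the detector radius and scales like 1/cτ_X for very long lifetimes, which is why the branching-ratio reach degrades at cτ_X ∼ 10³ m.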
  4. Abstract

    In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a comprehensive particle-level view of the event by combining information from the calorimeters and the trackers, significantly improving the detector resolution for jets and the missing transverse momentum. In view of the planned high-luminosity upgrade of the CERN Large Hadron Collider (LHC), it is necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance are sufficient in an environment with many simultaneous proton–proton interactions (pileup). Machine learning may offer a prospect for computationally efficient event reconstruction that is well-suited to heterogeneous computing platforms, while significantly improving the reconstruction quality over rule-based algorithms for granular detectors. We introduce MLPF, a novel, end-to-end trainable, machine-learned particle-flow algorithm based on a parallelizable, computationally efficient, and scalable graph neural network, optimized using a multi-task objective on simulated events. We report the physics and computational performance of the MLPF algorithm on a Monte Carlo dataset of top quark–antiquark pairs produced in proton–proton collisions in conditions similar to those expected for the high-luminosity LHC. The MLPF algorithm improves the physics response with respect to a rule-based benchmark algorithm and demonstrates computationally scalable particle-flow reconstruction in a high-pileup environment.

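To give a flavor of the graph-neural-network machinery an algorithm like MLPF builds on, the following is a single message-passing step in plain numpy, with toy features, a fake adjacency matrix, and random weights; the actual MLPF architecture, inputs, and training setup differ.

```python
# One graph-neural-network message-passing step over detector
# elements (tracks/clusters). Pure numpy toy; not the MLPF model.

import numpy as np

rng = np.random.default_rng(0)

n_elems, n_feat = 6, 4                 # detector elements, features each
x = rng.normal(size=(n_elems, n_feat))

# Adjacency: connect elements that are "near" each other. Here
# nearness is faked with a random symmetric 0/1 matrix plus
# self-loops so every element keeps its own features.
adj = (rng.random((n_elems, n_elems)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 1.0)

# Normalize so each node averages over its neighbors.
deg = adj.sum(axis=1, keepdims=True)
a_norm = adj / deg

w = rng.normal(size=(n_feat, n_feat)) * 0.1   # "learned" weights, here random

# Message-passing step: aggregate neighbor features, transform,
# apply a nonlinearity. Stacking such steps lets each element "see"
# progressively more of the event context.
h = np.maximum(a_norm @ x @ w, 0.0)           # ReLU(A_hat X W)
print(h.shape)                                 # (6, 4): updated per-element features
```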
  5. Nowadays, there is a fast-paced shift from legacy telecommunication systems to novel software-defined network (SDN) architectures that can support on-the-fly network reconfiguration, thereby empowering advanced traffic engineering mechanisms. Despite this momentum, migration to SDN cannot be realized at once, especially in the high-end networks of Internet service providers (ISPs). It is expected that ISPs will gradually upgrade their networks to SDN over a period that spans several years. In this paper, we study the SDN upgrading problem in an ISP network: which nodes to upgrade and when. We consider a general model that captures different migration costs and network topologies, and two plausible ISP objectives: 1) the maximization of the traffic that traverses at least one SDN node, and 2) the maximization of the number of dynamically selectable routing paths enabled by SDN nodes. We leverage the theory of submodular and supermodular functions to devise algorithms with provable approximation ratios for each objective. Using real-world network topologies and traffic matrices, we evaluate the performance of our algorithms and show up to 54% gains over state-of-the-art methods. Moreover, we describe the interplay between the two objectives; maximizing one may cause a factor-of-2 loss to the other. We also study the dual upgrading problem, i.e., minimizing the upgrading cost for the ISP while ensuring specific performance goals. Our analysis shows that our proposed algorithm can achieve up to 2.5 times lower cost to ensure performance goals over state-of-the-art methods.
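For the first objective above (traffic covered by at least one SDN node), the coverage function is monotone submodular, so plain greedy selection carries the classic (1 − 1/e) approximation guarantee. The sketch below implements that greedy rule on invented flows; it is not the paper's algorithm, cost model, or data.

```python
# Greedy (1 - 1/e)-approximation for a coverage-style objective:
# pick k nodes to upgrade so as to maximize the traffic that
# crosses at least one upgraded node. Flows and paths are toys.

def covered_traffic(upgraded, flows):
    """Total demand of flows whose path hits an upgraded node."""
    return sum(d for path, d in flows if any(n in upgraded for n in path))

def greedy_upgrade(nodes, flows, k):
    upgraded = set()
    for _ in range(k):
        # Add the node with the largest marginal coverage gain.
        best = max(
            (n for n in nodes if n not in upgraded),
            key=lambda n: covered_traffic(upgraded | {n}, flows),
        )
        upgraded.add(best)
    return upgraded

if __name__ == "__main__":
    # (path, demand) pairs: each flow follows a fixed node path.
    flows = [(["a", "b", "c"], 10), (["b", "d"], 7),
             (["c", "e"], 5), (["a", "e"], 3)]
    nodes = {"a", "b", "c", "d", "e"}
    chosen = greedy_upgrade(nodes, flows, k=2)
    print(chosen, covered_traffic(chosen, flows))   # {'b', 'e'} covers all 25 units
```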