

Search for: All records

Award ID contains: 2209034

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
What is a DOI Number?

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. This paper presents a search for massive, charged, long-lived particles with the ATLAS detector at the Large Hadron Collider using an integrated luminosity of $$140~\mathrm{fb}^{-1}$$ of proton-proton collisions at $$\sqrt{s}=13$$ TeV. These particles are expected to move significantly slower than the speed of light. In this paper, two signal regions provide complementary sensitivity. In one region, events are selected with at least one charged-particle track with high transverse momentum, large specific ionisation measured in the pixel detector, and time of flight to the hadronic calorimeter inconsistent with the speed of light. In the other region, events are selected with at least two tracks of opposite charge which both have a high transverse momentum and an anomalously large specific ionisation. The search is sensitive to particles with lifetimes greater than about 3 ns with masses ranging from 200 GeV to 3 TeV. The results are interpreted to set constraints on the supersymmetric pair production of long-lived R-hadrons, charginos and staus, with mass limits extending beyond those from previous searches in broad ranges of lifetime.
    Free, publicly-accessible full text available July 1, 2026
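    The slow-particle signature above rests on simple relativistic kinematics: a particle's speed $$\beta$$ follows from its time of flight over a known path, and combining it with the measured momentum gives a mass estimate $$m = p\sqrt{1/\beta^{2}-1}$$ in natural units. The Python sketch below is a toy illustration of that relation only; the path length, delay and momentum are hypothetical numbers, not values or code from the paper.

    import numpy as np

    C = 299_792_458.0  # speed of light in m/s

    def beta_from_tof(path_length_m, tof_ns):
        # Speed as a fraction of c, inferred from a measured time of flight.
        return path_length_m / (tof_ns * 1e-9) / C

    def mass_from_beta(p_gev, beta):
        # Relativistic relation m = p * sqrt(1/beta^2 - 1) in natural units,
        # so a momentum in GeV yields a mass in GeV.
        return p_gev * np.sqrt(1.0 / beta**2 - 1.0)

    # Hypothetical numbers: a 1 TeV-momentum track reaching a calorimeter
    # 4 m away, arriving 2 ns later than a light-speed particle would.
    light_time_ns = 4.0 / C * 1e9
    beta = beta_from_tof(4.0, light_time_ns + 2.0)
    print(f"beta = {beta:.3f}, mass estimate = {mass_from_beta(1000.0, beta):.0f} GeV")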
  2. This Letter presents results from a combination of searches for Higgs boson pair production using 126–140 $$\mathrm{fb}^{-1}$$ of proton-proton collision data at $$\sqrt{s}=13$$ TeV recorded with the ATLAS detector. At 95% confidence level (CL), the upper limit on the production rate is 2.9 times the standard model (SM) prediction, with an expected limit of 2.4 assuming no Higgs boson pair production. Constraints on the Higgs boson self-coupling modifier $$\kappa_{\lambda}=\lambda_{HHH}/\lambda_{HHH}^{\mathrm{SM}}$$ and the quartic $$HHVV$$ coupling modifier $$\kappa_{2V}=g_{HHVV}/g_{HHVV}^{\mathrm{SM}}$$ are derived individually, fixing the other parameter to its SM value. The observed 95% CL intervals are $$-1.2<\kappa_{\lambda}<7.2$$ and $$0.6<\kappa_{2V}<1.5$$, respectively, while the expected intervals are $$-1.6<\kappa_{\lambda}<7.2$$ and $$0.4<\kappa_{2V}<1.6$$ in the SM case. Constraints obtained for several interaction parameters within Higgs effective field theory are the strongest to date, offering insights into potential deviations from SM predictions. © 2024 CERN, for the ATLAS Collaboration.
  3. The ATLAS experiment at CERN explores vast amounts of physics data to answer the most fundamental questions of the Universe. The prevalence of Python in scientific computing motivated ATLAS to adopt it for its data analysis workflows while enhancing users' experience. This paper describes to a broad audience how a large scientific collaboration leverages the power of the Scientific Python ecosystem to tackle domain-specific challenges and advance our understanding of the Cosmos. Through a simplified example of the renowned Higgs boson discovery, attendees will gain insights into the use of Python libraries to discriminate a signal from immense noise, through tasks such as data cleaning, feature engineering, statistical interpretation and visualization at scale.
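    As a taste of the workflow the paper walks through, the toy Python sketch below generates a falling "background" spectrum with a small Gaussian "signal" near 125 GeV, applies a selection window, and estimates the local excess. Everything here – the distributions, the numbers and the naive significance estimate – is an illustrative assumption of this summary, not code from the paper.

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy stand-in for the Higgs discovery example: an exponentially
    # falling "background" plus a small Gaussian "signal" near 125 GeV.
    background = rng.exponential(scale=40.0, size=100_000) + 100.0
    signal = rng.normal(loc=125.0, scale=2.0, size=500)
    masses = np.concatenate([background, signal])

    # Data cleaning / selection: keep candidates inside the fit window.
    masses = masses[(masses > 100.0) & (masses < 180.0)]

    counts, edges = np.histogram(masses, bins=80)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Crude local significance: excess over a sideband-interpolated background.
    window = (centers > 120.0) & (centers < 130.0)
    expected = np.interp(centers[window], centers[~window], counts[~window])
    excess = counts[window].sum() - expected.sum()
    print(f"excess ~ {excess:.0f} events, "
          f"naive significance ~ {excess / np.sqrt(expected.sum()):.1f} sigma")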
  4. A combination of searches for a new resonance decaying into a Higgs boson pair is presented, using up to 139 $$\mathrm{fb}^{-1}$$ of $$pp$$ collision data at $$\sqrt{s}=13$$ TeV recorded with the ATLAS detector at the LHC. The combination includes searches performed in three decay channels: $$b\bar{b}b\bar{b}$$, $$b\bar{b}\tau^{+}\tau^{-}$$, and $$b\bar{b}\gamma\gamma$$. No excess above the expected Standard Model background is observed, and upper limits are set at the 95% confidence level on the production cross section of Higgs boson pairs originating from the decay of a narrow scalar resonance with mass in the range 251 GeV–5 TeV. The observed (expected) limits are in the range 0.96–600 fb (1.2–390 fb). The limits are interpreted in the type-I two-Higgs-doublet model and the minimal supersymmetric standard model, and constrain parameter space not previously excluded by other searches. © 2024 CERN, for the ATLAS Collaboration.
  5. De Vita, R.; Espinal, X.; Laycock, P.; Shadura, O. (Eds.)
    Differentiable programming could open even more doors in HEP analysis and computing to Artificial Intelligence/Machine Learning. Current common uses of AI/ML in HEP are deep learning networks – providing us with sophisticated ways of separating signal from background, classifying physics, etc. This is only one part of a full analysis – normally skims are made to reduce dataset sizes by applying selection cuts, further selection cuts are applied, perhaps new quantities are calculated, and all of that is fed to a deep learning network. Only the deep learning network stage is optimized using the AI/ML gradient descent technique. Differentiable programming offers a way to optimize the full chain, including the selection cuts that occur during skimming. This contribution investigates applying selection cuts in front of a simple neural network, using differentiable programming techniques to optimize the complete chain on toy data. Several well-known problems must be solved – e.g., selection cuts are not differentiable, and the interaction of a selection cut and a network during training is not well understood. This investigation was motivated by trying to automate reduced dataset skims and sizes during analysis – HL-LHC analyses have potentially multi-TB dataset sizes, and an automated way of reducing those sizes and understanding the trade-offs would help the analyser make a judgement between time, resource usage, and physics accuracy. This contribution explores various techniques for applying a selection cut that are compatible with differentiable programming, and how to work around issues when such a cut is bolted onto a neural network. Code is available.
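    A minimal sketch of the core idea, under assumptions of this summary rather than taken from the paper's code: replace the non-differentiable hard cut 1[x > threshold] with a sigmoid "soft cut" whose output weights each event's loss, so that gradient descent can move the cut threshold together with the network parameters. The toy data, the loss and the single-unit "network" below are all illustrative.

    import jax
    import jax.numpy as jnp

    def soft_cut(x, threshold, steepness=10.0):
        # Smooth surrogate for the hard selection 1[x > threshold]; larger
        # steepness approaches a hard cut but makes the gradient vanish.
        return jax.nn.sigmoid(steepness * (x - threshold))

    def loss(params, x, y):
        # params = (cut threshold, weight, bias); a single logistic unit
        # stands in for the deep network of a full analysis chain.
        thr, w, b = params
        weights = soft_cut(x, thr)             # differentiable selection
        p = jax.nn.sigmoid(w * x + b)          # predicted signal probability
        bce = -(y * jnp.log(p + 1e-7) + (1 - y) * jnp.log(1 - p + 1e-7))
        # Weight each event's loss by how much of it survives the soft cut.
        return jnp.sum(weights * bce) / jnp.sum(weights)

    # Toy data: "signal" centred at +2, "background" at -1.
    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    x = jnp.concatenate([jax.random.normal(k1, (500,)) + 2.0,
                         jax.random.normal(k2, (500,)) - 1.0])
    y = jnp.concatenate([jnp.ones(500), jnp.zeros(500)])

    # The gradient flows through the cut threshold as well as the network.
    print(jax.grad(loss)((0.0, 1.0, 0.0), x, y))

    One natural follow-on, in this toy setting, is to harden the cut only at inference time, after the threshold has been learned, to recover an ordinary skim.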