Abstract Particle production from secondary proton-proton collisions, commonly referred to as pile-up, impairs the sensitivity of both new physics searches and precision measurements at Large Hadron Collider (LHC) experiments. We propose a novel algorithm,
Puma, for modeling pile-up with the help of deep neural networks based on sparse transformers. These attention mechanisms were developed for natural language processing but have become popular in other applications. In a realistic detector simulation, our method outperforms classical benchmark algorithms for pile-up mitigation in key observables. It provides a perspective for mitigating the effects of pile-up in the high-luminosity era of the LHC, where up to 200 proton-proton collisions are expected to occur simultaneously.
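The abstract gives no implementation detail beyond naming sparse transformers, but the building block involved, masked (sparse) self-attention over a set of reconstructed particles, can be sketched. The feature dimensions, mask pattern, and function below are illustrative assumptions, not the published Puma architecture:

    import numpy as np

    def sparse_self_attention(x, mask):
        """Scaled dot-product self-attention with a sparsity mask.

        x    : (n_particles, d) array of per-particle features
        mask : (n_particles, n_particles) boolean array; True marks
               pairs of particles allowed to attend to each other
               (the "sparse" part of a sparse transformer)
        """
        d = x.shape[1]
        # A trained model would use learned query/key/value projections;
        # identity projections keep this sketch self-contained.
        q, k, v = x, x, x
        scores = q @ k.T / np.sqrt(d)           # pairwise attention logits
        scores = np.where(mask, scores, -1e9)   # disallowed pairs get ~zero weight
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
        return weights @ v                      # re-weighted particle features

    # Toy usage: 5 particles with 4 features each; the (hypothetical) mask
    # restricts attention to each particle itself and its two neighbors.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 4))
    mask = (np.eye(5, dtype=bool)
            | np.eye(5, k=1, dtype=bool)
            | np.eye(5, k=-1, dtype=bool))
    print(sparse_self_attention(x, mask).shape)   # (5, 4)

Each particle attends only to the partners permitted by the boolean mask, which is what keeps the attention sparse and tractable for the large particle multiplicities expected at high pile-up.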
-
Abstract A search is presented for a heavy W′ boson resonance decaying to a B or T vector-like quark and a t or a b quark, respectively. The analysis is performed using proton-proton collisions collected with the CMS detector at the LHC. The data correspond to an integrated luminosity of 138 fb⁻¹ at a center-of-mass energy of 13 TeV. Both decay channels result in a signature with a t quark, a Higgs or Z boson, and a b quark, each produced with a significant Lorentz boost. The all-hadronic decays of the Higgs or Z boson and of the t quark are selected using jet substructure techniques to reduce standard model backgrounds, resulting in a distinct three-jet W′ boson decay signature. No significant deviation in data with respect to the standard model background prediction is observed. Upper limits are set at 95% confidence level on the product of the W′ boson cross section and the final state branching fraction. A W′ boson with a mass below 3.1 TeV is excluded, given the benchmark model assumption of democratic branching fractions. In addition, limits are set based on generalizations of these assumptions. These are the most sensitive limits to date.
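For readers unfamiliar with how a 95% confidence level upper limit on a signal yield arises, a minimal counting-experiment sketch follows. The simple Poisson model and the numbers are illustrative assumptions only; the actual analysis uses a far more detailed statistical treatment:

    from scipy.stats import poisson

    def poisson_upper_limit(n_obs, bkg, cl=0.95, s_max=50.0, tol=1e-6):
        """Upper limit on a signal yield s for a counting experiment:
        find s such that P(N <= n_obs | s + bkg) drops to 1 - cl."""
        lo, hi = 0.0, s_max
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if poisson.cdf(n_obs, mid + bkg) > 1.0 - cl:
                lo = mid   # s = mid not yet excluded at this CL
            else:
                hi = mid   # s = mid excluded at this CL
        return 0.5 * (lo + hi)

    # Zero observed events, negligible background: the classic ~3.0 event limit.
    print(round(poisson_upper_limit(n_obs=0, bkg=0.0), 2))

    # Converting an event-count limit to a cross-section limit divides by
    # luminosity times efficiency (numbers below are purely illustrative).
    lumi_fb, eff = 138.0, 0.10
    print(poisson_upper_limit(0, 0.0) / (lumi_fb * eff), "fb")

With zero observed events and no background this reproduces the textbook limit of about 3.0 signal events; dividing by the integrated luminosity times the selection efficiency turns the yield limit into a limit on cross section times branching fraction.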
-
Abstract A new algorithm is presented to discriminate reconstructed hadronic decays of tau leptons (τh) that originate from genuine tau leptons in the CMS detector against τh candidates that originate from quark or gluon jets, electrons, or muons. The algorithm inputs information from all reconstructed particles in the vicinity of a τh candidate and employs a deep neural network with convolutional layers to efficiently process the inputs. This algorithm leads to significantly improved performance compared with the previously used one. For example, the efficiency for a genuine τh to pass the discriminator against jets increases by 10–30% for a given efficiency for quark and gluon jets. Furthermore, a more efficient τh reconstruction is introduced that incorporates additional hadronic decay modes. The superior performance of the new algorithm to discriminate against jets, electrons, and muons and the improved τh reconstruction method are validated with LHC proton-proton collision data at √s = 13 TeV.
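As an illustration of the kind of network the abstract describes, a deep neural network with convolutional layers applied to a grid of reconstructed particles around the τh candidate, here is a minimal, self-contained sketch. The grid size, feature count, and layer choices are assumptions for illustration and do not reproduce the published CMS network:

    import torch
    import torch.nn as nn

    # Illustrative: each tau_h candidate is summarized as an
    # (n_features x H x W) "image" of nearby reconstructed particles.
    N_FEATURES, GRID = 7, 11   # assumed channel count and 11x11 grid

    class TauDiscriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(N_FEATURES, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # pool the grid to a single vector
                nn.Flatten(),
                nn.Linear(64, 1),          # one logit: genuine tau vs. fake
            )

        def forward(self, x):
            return torch.sigmoid(self.net(x))   # probability of a genuine tau_h

    model = TauDiscriminator()
    batch = torch.randn(8, N_FEATURES, GRID, GRID)   # 8 toy candidates
    print(model(batch).shape)   # torch.Size([8, 1])

The convolutional layers let the network exploit the spatial arrangement of particles around the candidate, rather than treating them as an unordered list of inputs.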