

Search for: All records

Creators/Authors contains: "Duarte, J."


  1.
    In high energy physics (HEP), jets are collections of correlated particles produced ubiquitously in particle collisions such as those at the CERN Large Hadron Collider (LHC). Machine-learning-based generative models, such as generative adversarial networks (GANs), have the potential to significantly accelerate LHC jet simulations. However, despite jets having a natural representation as a set of particles in momentum-space, a.k.a. a particle cloud, to our knowledge there exist no generative models applied to such a dataset. We introduce a new particle cloud dataset (JetNet), and, due to similarities between particle and point clouds, apply to it existing point cloud GANs. Results are evaluated using (1) the 1-Wasserstein distance between high- and low-level feature distributions, (2) a newly developed Fréchet ParticleNet Distance, and (3) the coverage and (4) minimum matching distance metrics. Existing GANs are found to be inadequate for physics applications, hence we develop a new message passing GAN (MPGAN), which outperforms existing point cloud GANs on virtually every metric and shows promise for use in HEP. We propose JetNet as a novel point-cloud-style dataset for the machine learning community to experiment with, and set MPGAN as a benchmark to improve upon for future generative models. 
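    A minimal sketch of evaluation metric (1) above: computing the 1-Wasserstein distance between a real and a generated distribution of one high-level jet feature with SciPy. The jet-mass feature and the toy Gaussian samples are stand-ins, not JetNet data or the paper's exact pipeline.

```python
# Hedged sketch of metric (1): the 1-Wasserstein distance between a
# high-level feature distribution of real and GAN-generated jets.
# The jet-mass feature and toy samples are illustrative only.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Stand-ins for relative jet masses of 10k real and 10k generated jets.
real_mass = rng.normal(loc=0.10, scale=0.02, size=10_000)
gen_mass = rng.normal(loc=0.11, scale=0.025, size=10_000)

# 1-Wasserstein (earth mover's) distance between the two 1D distributions;
# smaller values mean the generated feature distribution matches data better.
w1 = wasserstein_distance(real_mass, gen_mass)
print(f"W1(real, generated) for jet mass: {w1:.4f}")
```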
  2. Abstract We present griz photometric light curves for the full 5 yr of the Dark Energy Survey Supernova (DES-SN) program, obtained with both forced point-spread-function photometry on difference images (DiffImg) performed during survey operations and scene modelling photometry (SMP) on search images processed after the survey. This release contains 31,636 DiffImg and 19,706 high-quality SMP light curves, the latter of which contain 1635 photometrically classified SNe that pass cosmology quality cuts. This sample spans the largest redshift (z) range ever covered by a single SN survey (0.1 < z < 1.13) and is the largest sample of SNe from a single instrument ever used for cosmological constraints. We describe in detail the improvements made to obtain the final DES-SN photometry and provide a comparison to what was used in the 3 yr DES-SN spectroscopically confirmed Type Ia SN sample. We also include a comparative analysis of the performance of the SMP photometry with respect to the real-time DiffImg forced photometry and find that SMP photometry is more precise, more accurate, and less sensitive to the host-galaxy surface-brightness anomaly. The public release of the light curves and ancillary data can be found at github.com/des-science/DES-SN5YR and doi:10.5281/zenodo.12720777.
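    As a rough illustration of comparing the two photometry pipelines, this sketch computes the pull (flux difference over combined uncertainty) between DiffImg and SMP measurements of the same epochs. The column names and toy values are assumptions, not the actual schema of the DES-SN5YR release.

```python
# Hedged sketch: compare DiffImg and SMP fluxes for matching epochs via
# their pull distribution. Column names, layout, and values are assumed,
# not the actual schema of the github.com/des-science/DES-SN5YR release.
import numpy as np
import pandas as pd

# Hypothetical tables with one row per observation epoch.
diffimg = pd.DataFrame({
    "mjd": [57000.1, 57005.2, 57010.3],
    "flux": [1210.0, 1540.0, 1385.0],
    "fluxerr": [55.0, 60.0, 58.0],
})
smp = pd.DataFrame({
    "mjd": [57000.1, 57005.2, 57010.3],
    "flux": [1195.0, 1525.0, 1402.0],
    "fluxerr": [40.0, 42.0, 41.0],
})

# Match epochs, then form the pull: residual over combined uncertainty.
merged = diffimg.merge(smp, on="mjd", suffixes=("_diffimg", "_smp"))
pull = (merged["flux_diffimg"] - merged["flux_smp"]) / np.hypot(
    merged["fluxerr_diffimg"], merged["fluxerr_smp"]
)
print(f"mean pull = {pull.mean():.2f}, std = {pull.std():.2f}")
```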
  3. Free, publicly-accessible full text available January 1, 2026
  4. A search is presented for an extended Higgs sector with two new particles, X and ϕ, in the process X → ϕϕ → (γγ)(γγ). Novel neural networks classify events with merged diphotons and determine the diphoton masses. The search uses LHC proton-proton collision data at √s = 13 TeV collected with the CMS detector, corresponding to an integrated luminosity of 138 fb⁻¹. No evidence of such resonances is seen. Upper limits are set on the production cross section for mX between 300 and 3000 GeV and mϕ/mX between 0.5% and 2.5%, representing the most sensitive search in this channel. © 2025 CERN, for the CMS Collaboration
    Free, publicly-accessible full text available January 1, 2026
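    To make the mass-reconstruction step concrete, here is the standard four-vector arithmetic for a diphoton invariant mass from detector coordinates (pT, η, φ); this is textbook kinematics, not the paper's neural-network regression for merged diphotons.

```python
# Sketch of the standard diphoton invariant mass from (pT, eta, phi) of
# two photons (treated as massless, so E = |p|). Textbook kinematics,
# not the paper's neural-network mass determination.
import numpy as np

def four_vector(pt, eta, phi):
    """Return (E, px, py, pz) for a massless particle."""
    px, py = pt * np.cos(phi), pt * np.sin(phi)
    pz = pt * np.sinh(eta)
    return np.array([pt * np.cosh(eta), px, py, pz])

def invariant_mass(p1, p2):
    """Invariant mass of the summed four-vector, clipped at zero."""
    e, px, py, pz = p1 + p2
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Illustrative photon candidates (GeV); the values are arbitrary.
gamma1 = four_vector(pt=60.0, eta=0.5, phi=0.1)
gamma2 = four_vector(pt=55.0, eta=0.6, phi=0.3)
print(f"m(gamma gamma) = {invariant_mass(gamma1, gamma2):.1f} GeV")
```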
  5. Abstract A measurement is performed of Higgs bosons produced with high transverse momentum (pT) via vector boson or gluon fusion in proton-proton collisions. The result is based on a data set with a center-of-mass energy of 13 TeV collected in 2016–2018 with the CMS detector at the LHC and corresponds to an integrated luminosity of 138 fb⁻¹. The decay of a high-pT Higgs boson to a boosted bottom quark-antiquark pair is selected using large-radius jets and employing jet substructure and heavy-flavor taggers based on machine learning techniques. Independent regions targeting the vector boson and gluon fusion mechanisms are defined based on the topology of two quark-initiated jets with large pseudorapidity separation. The signal strengths for both processes are extracted simultaneously by performing a maximum likelihood fit to data in the large-radius jet mass distribution. The observed signal strengths relative to the standard model expectation are $4.9^{+1.9}_{-1.6}$ and $1.6^{+1.7}_{-1.5}$ for the vector boson and gluon fusion mechanisms, respectively. A differential cross section measurement is also reported in the simplified template cross section framework.
    Free, publicly-accessible full text available December 1, 2025
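    The signal-strength extraction is a binned maximum-likelihood fit; below is a minimal Poisson-likelihood sketch in that spirit. The signal and background templates and the observed counts are invented toy numbers, not the CMS jet mass distribution.

```python
# Minimal sketch of a binned maximum-likelihood fit for a signal
# strength mu, in the spirit of fitting a jet mass distribution.
# Templates and observed counts are toy numbers, not CMS data.
import numpy as np
from scipy.optimize import minimize_scalar

signal = np.array([2.0, 8.0, 15.0, 8.0, 2.0])          # expected signal/bin
background = np.array([50.0, 48.0, 45.0, 44.0, 42.0])  # expected background
observed = np.array([55.0, 60.0, 70.0, 57.0, 44.0])    # "data" counts

def nll(mu):
    """Negative log Poisson likelihood for expected = mu * s + b
    (constant log(n!) terms dropped, since they do not affect the fit)."""
    expected = mu * signal + background
    return np.sum(expected - observed * np.log(expected))

fit = minimize_scalar(nll, bounds=(0.0, 10.0), method="bounded")
print(f"best-fit signal strength mu = {fit.x:.2f}")
```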
  6. Abstract Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In the studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. This approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing the throughput performance. We emphasize that the SONIC approach enables high coprocessor usage and makes workflows portable across different types of coprocessors.
    Free, publicly-accessible full text available December 1, 2025
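    To illustrate the as-a-service pattern, this sketch sends one batch from a CPU workflow to a remote GPU inference server using NVIDIA Triton's Python gRPC client, in the style of SONIC offloading. The server URL, model name, and tensor names are placeholders, not the CMS production configuration.

```python
# Hedged sketch of as-a-service inference: the CPU workflow ships inputs
# to a remote coprocessor over gRPC with NVIDIA Triton's Python client.
# Server URL, model name, and tensor names below are placeholders.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

# Toy batch of 8 events with 100 features each (FP32).
batch = np.random.rand(8, 100).astype(np.float32)

infer_input = grpcclient.InferInput("INPUT__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)
infer_output = grpcclient.InferRequestedOutput("OUTPUT__0")

# Blocking call; a production workflow could instead use async_infer to
# overlap other CPU work while the GPU evaluates the model.
result = client.infer(model_name="my_model", inputs=[infer_input],
                      outputs=[infer_output])
scores = result.as_numpy("OUTPUT__0")
print(scores.shape)
```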