Title: QCD factorization and quantum mechanics
It is unusual to find quantum chromodynamics (QCD) factorization explained in the language of quantum information science. However, we will discuss how the issue of factorization and its breaking in high-energy QCD processes relates to phenomena like decoherence and entanglement. We will elaborate with several examples and explain them in terms familiar from basic quantum mechanics and quantum information science. This article is part of the theme issue ‘Quantum technologies in particle physics’.
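The connection between factorization and entanglement that the abstract alludes to can be made concrete with elementary quantum mechanics: a bipartite pure state factorizes into a product state exactly when the entanglement entropy of either subsystem vanishes, while tracing out one subsystem of an entangled state leaves a mixed (decohered) reduced state. A minimal numerical sketch in plain NumPy (illustrative only; not taken from the article):

```python
import numpy as np

def reduced_density_matrix(psi, dim_a, dim_b):
    """Trace out subsystem B from a pure state |psi> on A x B."""
    m = psi.reshape(dim_a, dim_b)
    return m @ m.conj().T

def entanglement_entropy(rho):
    """Von Neumann entropy S = -Tr(rho ln rho) of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

# Product ("factorized") state |0>_A (x) |+>_B: no entanglement.
ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
product = np.kron(ket0, ket_plus)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled; its reduced
# state is the maximally mixed (fully decohered) density matrix I/2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(entanglement_entropy(reduced_density_matrix(product, 2, 2)))  # ~0
print(entanglement_entropy(reduced_density_matrix(bell, 2, 2)))     # ~ln 2
```

Zero entropy diagnoses a factorized (separable) pure state; the maximal value ln 2 for the Bell state is the qubit analogue of factorization breaking.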
Award ID(s): 2012926
NSF-PAR ID: 10335371
Author(s) / Creator(s): ;
Date Published:
Journal Name: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Volume: 380
Issue: 2216
ISSN: 1364-503X
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We discuss the implementation of the LHC experimental data sets in the new CT18 global analysis of quantum chromodynamics (QCD) at next-to-next-to-leading order (NNLO) in the QCD coupling strength. New developments in the fitting methodology are discussed. The behavior of the CT18 NNLO PDFs for the conventional and "saturation-inspired" factorization scales in deep-inelastic scattering is reviewed. Four new families of (N)NLO CTEQ-TEA PDFs are presented: CT18, A, X, and Z.
  3. Calculation of many-body correlation functions is one of the critical kernels utilized in many scientific computing areas, especially in Lattice Quantum Chromodynamics (Lattice QCD). It is formalized as a sum of a large number of contraction terms, each of which can be represented by a graph consisting of vertices describing quarks inside a hadron node and edges designating quark propagations at specific time intervals. Due to its computation- and memory-intensive nature, real-world physics systems (e.g., multi-meson or multi-baryon systems) explored by Lattice QCD prefer to leverage multiple GPUs. Unlike general graph processing, many-body correlation function calculations show two specific features: a large number of computation-/data-intensive kernels and frequently repeated appearances of original and intermediate data. The former results in expensive memory operations such as tensor movements and evictions. The latter offers data reuse opportunities to mitigate the data-intensive nature of many-body correlation function calculations. However, existing graph-based multi-GPU schedulers cannot capture these data-centric features, thus resulting in sub-optimal performance for many-body correlation function calculations. To address this issue, this paper presents a multi-GPU scheduling framework, MICCO, to accelerate contractions for correlation functions, particularly by taking the data dimension (e.g., data reuse and data eviction) into account. This work first performs a comprehensive study on the interplay of data reuse and load balance, and designs two new concepts, local reuse pattern and reuse bound, to study the opportunity of achieving the optimal trade-off between them. Based on this study, MICCO proposes a heuristic scheduling algorithm and a machine-learning-based regression model to generate the optimal setting of reuse bounds.
Specifically, MICCO is integrated into a real-world Lattice QCD system, Redstar, for the first time running on multiple GPUs. The evaluation demonstrates that MICCO outperforms other state-of-the-art works, achieving up to 2.25× speedup on synthesized datasets and 1.49× speedup on real-world correlation functions.
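The load-balance versus data-reuse trade-off the abstract describes can be illustrated with a minimal greedy heuristic (this is an illustrative sketch, not MICCO's actual algorithm; the task format and scoring function are hypothetical): each contraction task is placed on the GPU whose projected load, discounted by a bonus for tensors already resident there, is lowest.

```python
# Toy reuse-aware scheduler: assign contraction tasks to GPUs, trading off
# load balance against data reuse (tensors already resident on a GPU).
# Illustrative sketch only -- not the MICCO scheduling algorithm.

def schedule(tasks, n_gpus, reuse_weight=0.5):
    """tasks: list of (cost, set_of_tensor_ids). Returns {gpu: [task indices]}."""
    load = [0.0] * n_gpus                      # accumulated compute cost per GPU
    resident = [set() for _ in range(n_gpus)]  # tensors already placed on each GPU
    assignment = {g: [] for g in range(n_gpus)}
    # Place expensive contractions first (largest-first greedy).
    for idx in sorted(range(len(tasks)), key=lambda i: -tasks[i][0]):
        cost, tensors = tasks[idx]
        def score(g):
            # Lower is better: projected load minus a bonus for reused tensors.
            reuse = len(tensors & resident[g])
            return load[g] + cost - reuse_weight * cost * reuse / max(len(tensors), 1)
        g = min(range(n_gpus), key=score)
        assignment[g].append(idx)
        load[g] += cost
        resident[g] |= tensors
    return assignment

tasks = [(4.0, {"q1", "q2"}), (3.0, {"q1", "q3"}), (2.0, {"q4"}), (1.0, {"q2"})]
plan = schedule(tasks, n_gpus=2)
```

Raising `reuse_weight` biases placement toward GPUs that already hold a task's tensors (fewer tensor movements, possibly worse balance); lowering it recovers plain load balancing — the same tension the paper's reuse-bound concept navigates.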
  4. Rothkopf, A. ; Brambilla, N. ; Tolos, L. ; Tranberg, A. ; Kurkela, A. ; Roehrich, D. ; Andersen, J.O. ; Tywoniuk, K. ; Antonov, D. ; Greensite, J. (Ed.)
    We report the current understanding of heavy quarkonium production at high transverse momentum (pT) in hadronic collisions in terms of QCD factorization. In this presentation, we highlight the role of subleading power corrections to heavy quarkonium production, which are essential to describe the pT spectrum of quarkonium at relatively low pT. We also introduce a prescription to match QCD factorization to fixed-order NRQCD factorization calculations for quarkonium production at low pT.
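Why subleading power corrections matter at low pT can be seen from a toy scaling argument: if the leading-power (LP) term of the spectrum falls like 1/pT^4 and the next-to-leading-power (NLP) term like m^2/pT^6, the NLP share grows rapidly as pT decreases. The coefficients and mass scale below are hypothetical placeholders; only the scaling behavior is the point.

```python
# Toy illustration of LP vs NLP contributions to a high-pT spectrum,
# d(sigma)/d(pT^2) ~ A/pT^4 + B*m^2/pT^6.  A, B, m are illustrative
# placeholders, not fitted physical values.

A, B, m = 1.0, 5.0, 3.0  # hypothetical normalizations and a heavy-quark mass scale

def lp(pt):
    return A / pt**4

def nlp(pt):
    return B * m**2 / pt**6

for pt in (5.0, 10.0, 20.0, 40.0):
    frac = nlp(pt) / (lp(pt) + nlp(pt))
    print(f"pT = {pt:5.1f}: NLP fraction = {frac:.2f}")
```

The NLP fraction shrinks like m^2/pT^2 at large pT, so the LP term alone suffices there, while at low pT the NLP term dominates — consistent with the abstract's point that power corrections are essential at relatively low pT.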
  5.
    In recent years, there has been growing interest in using quantum computers to solve combinatorial optimization problems. In this work, we developed a generic, machine learning-based framework for mapping continuous-space inverse design problems into surrogate quadratic unconstrained binary optimization (QUBO) problems by employing a binary variational autoencoder and a factorization machine. The factorization machine is trained as a low-dimensional, binary surrogate model for the continuous design space and sampled using various QUBO samplers. Using the D-Wave Advantage hybrid sampler and simulated annealing, we demonstrate that by repeated resampling and retraining of the factorization machine, our framework finds designs that exhibit figures of merit exceeding those of its training set. We showcase the framework’s performance on two inverse design problems by optimizing (i) thermal emitter topologies for thermophotovoltaic applications and (ii) diffractive meta-gratings for highly efficient beam steering. This technique can be further scaled to leverage future developments in quantum optimization to solve advanced inverse design problems for science and engineering applications.
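The factorization-machine-to-QUBO mapping this framework relies on can be sketched directly: for binary x (where x_i^2 = x_i), a second-order factorization machine f(x) = w0 + w·x + Σ_{i<j}⟨v_i, v_j⟩ x_i x_j is itself a quadratic form in x, so its parameters populate a QUBO matrix that any annealer can sample. A minimal sketch with a single-flip simulated annealer (illustrative only; not the paper's code, and all parameter values are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def fm_to_qubo(w0, w, V):
    """Map a 2nd-order factorization machine over binary x to an
    upper-triangular QUBO matrix Q plus a constant offset."""
    n = len(w)
    Q = np.zeros((n, n))
    Q[np.diag_indices(n)] = w   # linear terms on the diagonal (x_i^2 = x_i)
    G = V @ V.T                 # pairwise interaction strengths <v_i, v_j>
    iu = np.triu_indices(n, k=1)
    Q[iu] = G[iu]               # keep only i < j terms
    return Q, w0

def qubo_energy(x, Q, offset):
    return float(x @ Q @ x + offset)

def anneal(Q, offset, n_steps=2000, t0=2.0):
    """Minimal single-flip simulated annealing over binary vectors."""
    n = Q.shape[0]
    x = rng.integers(0, 2, size=n).astype(float)
    e = qubo_energy(x, Q, offset)
    for step in range(n_steps):
        t = t0 * (1 - step / n_steps) + 1e-3   # linear cooling schedule
        i = rng.integers(n)
        x_new = x.copy()
        x_new[i] = 1 - x_new[i]                # propose a single bit flip
        e_new = qubo_energy(x_new, Q, offset)
        if e_new < e or rng.random() < np.exp((e - e_new) / t):
            x, e = x_new, e_new                # Metropolis accept
    return x, e

# Hypothetical FM parameters standing in for a trained surrogate model.
n, k = 8, 3
w0, w, V = 0.0, rng.normal(size=n), rng.normal(size=(n, k))
Q, offset = fm_to_qubo(w0, w, V)
x_best, e_best = anneal(Q, offset)
```

In the paper's loop, the trained FM weights would replace the random placeholders here, the sampler's low-energy bitstrings would be decoded by the binary variational autoencoder into candidate designs, and the FM would then be retrained on the newly evaluated designs.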