Abstract: Two-dimensional SU(N) gauge theory coupled to a Majorana fermion in the adjoint representation is a nice toy model for higher-dimensional gauge dynamics. It possesses a multitude of “gluinoball” bound states whose spectrum has been studied using numerical diagonalizations of the light-cone Hamiltonian. We extend this model by coupling it to N_f flavors of fundamental Dirac fermions (quarks). The extended model also contains meson-like bound states, both bosonic and fermionic, which in the large-N limit decouple from the gluinoballs. We study the large-N meson spectrum using Discretized Light-Cone Quantization (DLCQ). When all the fermions are massless, we exhibit an exact $\mathfrak{osp}(1|4)$ symmetry algebra that leads to an infinite number of degeneracies in the DLCQ approach. More generally, we show that many single-trace states in the theory are threshold bound states that are degenerate with multi-trace states. These exact degeneracies can be explained using the Kac-Moody algebra of the SU(N) current. We also present strong numerical evidence that additional threshold states appear in the continuum limit. Finally, we make the quarks massive while keeping the adjoint fermion massless. In this case too, we observe some exact degeneracies that …
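For orientation, here is a minimal sketch of the DLCQ setup mentioned above, in standard light-cone conventions (textbook material, not a formula taken from the paper): the bound-state masses come from diagonalizing the light-cone Hamiltonian P⁻ at fixed total momentum P⁺, with parton momenta quantized at harmonic resolution K:

$$ M^2 = 2 P^+ P^-, \qquad p_i^+ = \frac{n_i}{K} P^+, \quad n_i \in \mathbb{Z}_{>0}, \quad \sum_i n_i = K. $$

At each finite K the Hilbert space is finite-dimensional, and the continuum spectrum is recovered by extrapolating K → ∞.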
Bootstrapping pions at large N
Abstract: We revisit from a modern bootstrap perspective the longstanding problem of solving QCD in the large N limit. We derive universal bounds on the effective field theory of massless pions by imposing the full set of positivity constraints that follow from 2 → 2 scattering. Some features of our exclusion plots have intriguing connections with hadronic phenomenology. The exclusion boundary exhibits a sharp kink, raising the tantalizing scenario that large N QCD may sit at this kink. We critically examine this possibility, developing in the process a partial analytic understanding of the geometry of the bounds.
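As a schematic illustration of the kind of positivity constraint involved (generic normalization and subtractions, not the paper's specific setup), dispersion relations express low-energy Wilson coefficients as positive moments of the UV amplitude's imaginary part:

$$ g_2 \equiv \frac{1}{2}\, \partial_s^2\, \mathcal{M}(s,0) \Big|_{s=0} = \frac{2}{\pi} \int_{M^2}^{\infty} \frac{ds}{s^3}\, \mathrm{Im}\, \mathcal{M}(s,0) \;\geq\; 0, $$

since unitarity forces Im M(s,0) ≥ 0. Imposing the full tower of such sum rules, together with crossing, carves out bounded regions like the ones whose kink is discussed above.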
- Award ID(s): 1915093
- Publication Date: August 2022
- NSF-PAR ID: 10376822
- Journal Name: Journal of High Energy Physics
- Volume: 2022
- Issue: 8
- ISSN: 1029-8479
- Sponsoring Org: National Science Foundation
More Like this
- In several emerging technologies for computer memory (main memory), the cost of reading is significantly cheaper than the cost of writing. Such asymmetry in memory costs poses a fundamentally different model from the RAM for algorithm design. In this paper we study lower and upper bounds for various problems under such asymmetric read and write costs. We consider both the case in which all but O(1) memory has asymmetric cost, and the case of a small cache of symmetric memory. We model both cases using the (M,w)-ARAM, in which there is a small (symmetric) memory of size M and a large unbounded (asymmetric) memory, both random access, and where reading from the large memory has unit cost, but writing has cost w ≫ 1. For FFT and sorting networks we show a lower bound cost of Ω(w · n · log_{wM}(n)), which indicates that it is not possible to achieve asymptotic improvements with cheaper reads when w is bounded by a polynomial in M. Moreover, there is an asymptotic gap (of min(w, log(n)/log(wM))) between the cost of sorting networks and comparison sorting in the model. This contrasts with the RAM, and most other models, in which the asymptotic costs are the same. We also show … (a toy sketch of the asymmetric read/write cost accounting appears after this list).
- The approximate degree of a Boolean function f is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. The approximate degree of f is known to be a lower bound on the quantum query complexity of f (Beals et al., FOCS 1998 and J. ACM 2001). We find tight or nearly tight bounds on the approximate degrees and quantum query complexities of several basic functions. Specifically, we show the following. k-Distinctness: For any constant k, the approximate degree and quantum query complexity of the k-distinctness function is Ω(n^{3/4−1/(2k)}). This is nearly tight for large k, as Belovs (FOCS 2012) has shown that for any constant k, the approximate degree and quantum query complexity of k-distinctness is O(n^{3/4−1/(2^{k+2}−4)}). Image size testing: The approximate degree and quantum query complexity of testing the size of the image of a function [n]→[n] is Ω̃(n^{1/2}). This proves a conjecture of Ambainis et al. (SODA 2016), and it implies tight lower bounds on the approximate degree and quantum query complexity of the following natural problems. k-Junta testing: A tight Ω̃(k^{1/2}) lower bound for k-junta testing, answering the main open question of Ambainis et al. (SODA 2016). Statistical distance from … (the definition of approximate degree used here is restated after this list).
- Calculation of many-body correlation functions is one of the critical kernels utilized in many scientific computing areas, especially in Lattice Quantum Chromodynamics (Lattice QCD). It is formalized as a sum of a large number of contraction terms, each of which can be represented by a graph consisting of vertices describing quarks inside a hadron node and edges designating quark propagations at specific time intervals. Due to its computation- and memory-intensive nature, real-world physics systems (e.g., multi-meson or multi-baryon systems) explored by Lattice QCD prefer to leverage multi-GPUs. Different from general graph processing, many-body correlation function calculations show two specific features: a large number of computation-/data-intensive kernels, and frequently repeated appearances of original and intermediate data. The former results in expensive memory operations such as tensor movements and evictions. The latter offers data reuse opportunities to mitigate the data-intensive nature of many-body correlation function calculations. However, existing graph-based multi-GPU schedulers cannot capture these data-centric features, thus resulting in sub-optimal performance for many-body correlation function calculations. To address this issue, this paper presents a multi-GPU scheduling framework, MICCO, to accelerate contractions for correlation functions, particularly by taking the data dimension (e.g., data reuse and data eviction) into account. This work first … (a hypothetical sketch of reuse-aware task placement appears after this list).
- Existing proofs that deduce BPP = P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against randomized NP ∩ coNP circuits, formally known as randomized SVN circuits, we convert any randomized algorithm over inputs of length n running in time t ≥ n into a deterministic one running in time t^{2+α} for an arbitrarily small constant α > 0. Such a slowdown is nearly optimal for t close to n, since under standard complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing). The latter derandomization result holds under weaker assumptions, of exponential lower bounds against deterministic SVN circuits. Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1+α) log s, under the assumption that there exists a function f ∈ E that requires randomized SVN circuits of size at least 2^{(1−α′)n}, where α = O(α′). The construction uses, among … (the seed-enumeration arithmetic behind the t^{2+α} bound is spelled out after this list).
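For the asymmetric-memory item above, the following toy Python cost model (illustrative names only, not code from that paper) makes the (M,w)-ARAM accounting concrete: reads from the large memory cost 1, writes to it cost w, and the small symmetric memory of size M is free.

```python
# Toy (M, w)-ARAM cost accounting, as described in the item above:
# reads from the large asymmetric memory cost 1, writes to it cost w >> 1,
# and the small symmetric memory of size M is free. Eviction is elided.
class ARAMCost:
    def __init__(self, M, w):
        self.M, self.w = M, w
        self.small = {}   # symmetric memory, capacity M, free to access
        self.large = {}   # asymmetric memory, unbounded
        self.cost = 0     # total charge incurred so far

    def read(self, addr):
        if addr in self.small:
            return self.small[addr]   # symmetric access: no charge
        self.cost += 1                # asymmetric read: unit cost
        return self.large.get(addr)

    def write(self, addr, value):
        if addr in self.small or len(self.small) < self.M:
            self.small[addr] = value  # fits in symmetric memory: no charge
        else:
            self.cost += self.w       # asymmetric write: cost w
            self.large[addr] = value
```

An algorithm's total charge under this accounting is its cost in the (M,w)-ARAM sense used in the item.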
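The approximate-degree item uses the following standard definition (error 1/3, as stated there), where p ranges over real polynomials and Ω̃(·) suppresses polylogarithmic factors:

$$ \widetilde{\deg}(f) = \min \Big\{ \deg p \;:\; |p(x) - f(x)| \le \tfrac{1}{3} \ \text{for all } x \in \{0,1\}^n \Big\}. $$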
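MICCO's actual algorithm is not reproduced in the item above; the hypothetical Python sketch below only illustrates the data-centric idea it describes, namely preferring the GPU that already holds most of a contraction task's input tensors, so as to exploit reuse and avoid evictions.

```python
# Hypothetical reuse-aware placement of contraction tasks across GPUs,
# in the spirit of the data-reuse feature described above (NOT MICCO's
# actual scheduler). Each task is (task_id, set of tensor names it reads).
from collections import defaultdict

def place_tasks(tasks, num_gpus):
    resident = defaultdict(set)   # gpu id -> tensors currently resident
    placement = []
    for task_id, tensors in tasks:
        # Prefer the GPU already holding the most inputs (max data reuse),
        # which avoids tensor movements and evictions.
        gpu = max(range(num_gpus), key=lambda g: len(resident[g] & tensors))
        resident[gpu] |= tensors  # model the loads; eviction is elided
        placement.append((task_id, gpu))
    return placement

# Example: two contractions sharing the propagator "q2" land on one GPU.
print(place_tasks([("c1", {"q1", "q2"}), ("c2", {"q2", "q3"})], 2))
```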
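Finally, the t^{2+α} bound in the derandomization item follows from the standard seed-enumeration argument: a generator with seed length (1+α) log s that fools size-s circuits lets a deterministic algorithm run the time-t randomized algorithm on every seed and take a majority vote. Since a time-t algorithm corresponds to circuits of size s ≈ t (up to lower-order factors),

$$ t \cdot 2^{(1+\alpha)\log s} = t \cdot s^{1+\alpha} \approx t^{2+\alpha}. $$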