Abstract  Two-dimensional SU(N) gauge theory coupled to a Majorana fermion in the adjoint representation is a nice toy model for higher-dimensional gauge dynamics. It possesses a multitude of "gluinoball" bound states whose spectrum has been studied using numerical diagonalizations of the light-cone Hamiltonian. We extend this model by coupling it to N_f flavors of fundamental Dirac fermions (quarks). The extended model also contains meson-like bound states, both bosonic and fermionic, which in the large N limit decouple from the gluinoballs. We study the large N meson spectrum using Discretized Light-Cone Quantization (DLCQ). When all the fermions are massless, we exhibit an exact $\mathfrak{osp}(1|4)$ symmetry algebra that leads to an infinite number of degeneracies in the DLCQ approach. More generally, we show that many single-trace states in the theory are threshold bound states that are degenerate with multi-trace states. These exact degeneracies can be explained using the Kac-Moody algebra of the SU(N) current. We also present strong numerical evidence that additional threshold states appear in the continuum limit. Finally, we make the quarks massive while keeping the adjoint fermion massless. In this case too, we observe some exact degeneracies that …
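The DLCQ method mentioned above discretizes the total light-cone momentum into K units, so the Hamiltonian becomes a finite matrix on states whose parton momenta sum to K. As a schematic illustration only (not the paper's actual basis, which involves cyclic single-trace states and fermionic statistics), the sketch below enumerates the compositions of K that label multi-parton momentum assignments at harmonic resolution K:

```python
from itertools import combinations

def compositions(K):
    """Enumerate ordered lists of positive integer momenta summing to K.

    In DLCQ, light-cone momentum is discretized as p+_i = n_i / K with
    sum(n_i) = K, so each composition labels one multi-parton momentum
    assignment at harmonic resolution K.
    """
    results = []
    # Choose cut points in 1..K-1 to split K unit cells into ordered parts.
    for r in range(K):
        for cuts in combinations(range(1, K), r):
            points = [0, *cuts, K]
            results.append([points[i + 1] - points[i]
                            for i in range(len(points) - 1)])
    return results

basis = compositions(4)
print(len(basis))  # 2^(K-1) = 8 compositions at resolution K = 4
```

Increasing K enlarges this basis exponentially, which is why the continuum limit must be approached by extrapolation in the resolution.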
Bootstrapping pions at large N
Abstract  We revisit from a modern bootstrap perspective the long-standing problem of solving QCD in the large N limit. We derive universal bounds on the effective field theory of massless pions by imposing the full set of positivity constraints that follow from 2 → 2 scattering. Some features of our exclusion plots have intriguing connections with hadronic phenomenology. The exclusion boundary exhibits a sharp kink, raising the tantalizing scenario that large N QCD may sit at this kink. We critically examine this possibility, developing in the process a partial analytic understanding of the geometry of the bounds.
Award ID(s): 1915093
Publication Date:
NSF-PAR ID: 10376822
Journal Name: Journal of High Energy Physics
Volume: 2022
Issue: 8
ISSN: 1029-8479
Sponsoring Org: National Science Foundation
More Like this


In several emerging technologies for computer memory (main memory), the cost of reading is significantly cheaper than the cost of writing. Such asymmetry in memory costs poses a fundamentally different model from the RAM for algorithm design. In this paper we study lower and upper bounds for various problems under such asymmetric read and write costs. We consider both the case in which all but O(1) memory has asymmetric cost, and the case of a small cache of symmetric memory. We model both cases using the (M,w)-ARAM, in which there is a small (symmetric) memory of size M and a large unbounded (asymmetric) memory, both random access, and where reading from the large memory has unit cost, but writing has cost w >> 1. For FFT and sorting networks we show a lower bound cost of Omega(w*n*log_{w*M}(n)), which indicates that it is not possible to achieve asymptotic improvements with cheaper reads when w is bounded by a polynomial in M. Moreover, there is an asymptotic gap (of min(w, log(n)/log(w*M))) between the cost of sorting networks and comparison sorting in the model. This contrasts with the RAM, and most other models, in which the asymptotic costs are the same. We also show …
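The (M,w)-ARAM cost model described above can be made concrete with a toy accounting wrapper. The class below is a hypothetical illustration (not from the paper): it charges unit cost for large-memory reads and cost w for writes, while accesses to the small symmetric memory of size M are free:

```python
class AsymmetricMemory:
    """Toy (M,w)-ARAM cost accounting: reads from the large memory cost 1,
    writes to it cost w >> 1, and the small symmetric memory of size M is
    free to access."""

    def __init__(self, M, w):
        self.M, self.w = M, w
        self.large = {}      # unbounded asymmetric memory
        self.small = {}      # symmetric memory, at most M cells
        self.cost = 0

    def read(self, addr):
        if addr in self.small:
            return self.small[addr]        # symmetric memory: free
        self.cost += 1                     # asymmetric read: unit cost
        return self.large.get(addr, 0)

    def write(self, addr, value):
        if len(self.small) < self.M or addr in self.small:
            self.small[addr] = value       # fits in symmetric memory: free
        else:
            self.cost += self.w            # asymmetric write: cost w
            self.large[addr] = value

mem = AsymmetricMemory(M=2, w=10)
mem.write(0, 5); mem.write(1, 6)   # both fit in symmetric memory: cost 0
mem.write(2, 7)                    # spills to the large memory: cost 10
print(mem.read(2), mem.cost)       # the read adds 1 more: prints 7 11
```

A write-efficient algorithm in this model is one that keeps `cost` low by preferring reads and by staging intermediate values in the size-M symmetric memory.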

The approximate degree of a Boolean function f is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. The approximate degree of f is known to be a lower bound on the quantum query complexity of f (Beals et al., FOCS 1998 and J. ACM 2001). We find tight or nearly tight bounds on the approximate degree and quantum query complexities of several basic functions. Specifically, we show the following. k-distinctness: For any constant k, the approximate degree and quantum query complexity of the k-distinctness function is Ω(n^{3/4 − 1/(2k)}). This is nearly tight for large k, as Belovs (FOCS 2012) has shown that for any constant k, the approximate degree and quantum query complexity of k-distinctness is O(n^{3/4 − 1/(2^{k+2} − 4)}). Image size testing: The approximate degree and quantum query complexity of testing the size of the image of a function [n] → [n] is Ω̃(n^{1/2}). This proves a conjecture of Ambainis et al. (SODA 2016), and it implies tight lower bounds on the approximate degree and quantum query complexity of the following natural problems. k-junta testing: A tight Ω̃(k^{1/2}) lower bound for k-junta testing, answering the main open question of Ambainis et al. (SODA 2016). Statistical distance from …
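The definition used in this abstract (least degree d such that some real polynomial stays within 1/3 of f on every Boolean input) can be checked directly for tiny functions. As a hypothetical illustration not taken from the paper, the degree-1 polynomial p(x1, x2) = (1 + x1 + x2)/3 approximates OR on two bits with error exactly 1/3:

```python
from itertools import product

def approx_error(p, f, n):
    """Worst-case error of polynomial p against Boolean f over {0,1}^n."""
    return max(abs(p(*x) - f(*x)) for x in product((0, 1), repeat=n))

OR2 = lambda x1, x2: int(x1 or x2)      # OR on two bits
p = lambda x1, x2: (1 + x1 + x2) / 3    # degree-1 candidate polynomial

err = approx_error(p, OR2, 2)
print(err)  # 1/3: p(0,0) = 1/3, p(1,0) = p(0,1) = 2/3, p(1,1) = 1
```

The hard part, of course, is proving no low-degree polynomial works, which is what the lower bounds above establish for k-distinctness and image size testing.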

Calculation of many-body correlation functions is one of the critical kernels utilized in many scientific computing areas, especially in Lattice Quantum Chromodynamics (Lattice QCD). It is formalized as a sum of a large number of contraction terms, each of which can be represented by a graph consisting of vertices describing quarks inside a hadron node and edges designating quark propagations at specific time intervals. Due to its computation- and memory-intensive nature, real-world physics systems (e.g., multi-meson or multi-baryon systems) explored by Lattice QCD prefer to leverage multiple GPUs. Different from general graph processing, many-body correlation function calculations show two specific features: a large number of computation/data-intensive kernels and frequently repeated appearances of original and intermediate data. The former results in expensive memory operations such as tensor movements and evictions. The latter offers data reuse opportunities to mitigate the data-intensive nature of many-body correlation function calculations. However, existing graph-based multi-GPU schedulers cannot capture these data-centric features, thus resulting in a suboptimal performance for many-body correlation function calculations. To address this issue, this paper presents a multi-GPU scheduling framework, MICCO, to accelerate contractions for correlation functions particularly by taking the data dimension (e.g., data reuse and data eviction) into account. This work first …
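The data-reuse opportunity described above (original and intermediate tensors recurring across many contraction terms) can be sketched with a memoization cache. This toy example is hypothetical and not MICCO's actual scheduler; scalars stand in for tensors, and it counts how many contractions are avoided when repeated intermediates are cached:

```python
def contract_terms(terms, tensors):
    """Sum of contraction terms with memoized intermediate products.

    Each term is a tuple of tensor names multiplied left-to-right; identical
    prefixes recur across terms, so caching partial products saves work.
    Scalars stand in for the actual tensors here.
    """
    cache = {}
    computed = saved = 0
    total = 0.0
    for term in terms:
        key = (term[0],)
        result = tensors[term[0]]
        for name in term[1:]:
            key += (name,)
            if key in cache:
                saved += 1                 # reuse a repeated intermediate
            else:
                computed += 1              # a fresh contraction
                cache[key] = result * tensors[name]
            result = cache[key]
        total += result
    return total, computed, saved

tensors = {"A": 2.0, "B": 3.0, "C": 5.0, "D": 7.0}
terms = [("A", "B", "C"), ("A", "B", "D"), ("A", "B", "C")]  # shared prefix A*B
total, computed, saved = contract_terms(terms, tensors)
print(computed, saved)  # 3 fresh contractions, 3 reused intermediates
```

A scheduler that is aware of this structure can also decide which cached intermediates to evict when GPU memory fills up, which is the trade-off MICCO targets.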

Existing proofs that deduce BPP = P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against randomized NP ∩ coNP circuits, formally known as randomized SVN circuits, we convert any randomized algorithm over inputs of length n running in time t ≥ n into a deterministic one running in time t^{2+α} for an arbitrarily small constant α > 0. Such a slowdown is nearly optimal for t close to n, since under standard complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with preprocessing). The latter derandomization result holds under weaker assumptions, of exponential lower bounds against deterministic SVN circuits. Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1 + α)·log s, under the assumption that there exists a function f ∈ E that requires randomized SVN circuits of size at least 2^{(1 − α′)n}, where α′ = O(α). The construction uses, among …
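Derandomizations of this kind replace an algorithm's random bits with the outputs of a pseudorandom generator and then enumerate all seeds deterministically. The toy sketch below is hypothetical and enumerates all random strings outright rather than the short seeds of a hardness-based PRG; it derandomizes Freivalds' one-sided-error matrix-product check over GF(2) by running the randomized test on every possible random string:

```python
from itertools import product

def matvec(M, v):
    """Matrix-vector product over GF(2)."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) % 2
            for i in range(len(M))]

def freivalds_check(A, B, C, r):
    """One run of Freivalds' randomized test: does A(Br) equal Cr mod 2?"""
    return matvec(A, matvec(B, r)) == matvec(C, r)

def deterministic_verify(A, B, C):
    """Derandomization by seed enumeration: run the randomized test on every
    r in {0,1}^n instead of sampling one.  A real derandomization would
    enumerate only the ~(1+a)*log(s)-bit seeds of a PRG, keeping the
    deterministic running time close to the randomized one."""
    n = len(A)
    return all(freivalds_check(A, B, C, r)
               for r in product((0, 1), repeat=n))

A = [[1, 0], [1, 1]]
B = [[1, 1], [0, 1]]
AB = [[1, 1], [1, 0]]        # A @ B mod 2
wrong = [[1, 1], [1, 1]]     # differs from A @ B in one entry
print(deterministic_verify(A, B, AB), deterministic_verify(A, B, wrong))
```

Enumerating all 2^n random strings costs an exponential factor; the point of the pseudorandom generator in the result above is to shrink the seed space to roughly s^{1+α} strings, giving the near-quadratic overall slowdown.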