

Title: Bootstrapping pions at large N
Abstract: We revisit from a modern bootstrap perspective the longstanding problem of solving QCD in the large N limit. We derive universal bounds on the effective field theory of massless pions by imposing the full set of positivity constraints that follow from 2 → 2 scattering. Some features of our exclusion plots have intriguing connections with hadronic phenomenology. The exclusion boundary exhibits a sharp kink, raising the tantalizing scenario that large N QCD may sit at this kink. We critically examine this possibility, developing in the process a partial analytic understanding of the geometry of the bounds.
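The dispersive positivity constraints behind such exclusion plots can be given intuition with a much simpler moment problem. The toy linear program below is a sketch for illustration only, not the paper's actual bootstrap setup (the grid, the choice of moments, and the SciPy usage are all illustrative assumptions): it bounds the second moment of an unknown nonnegative spectral density on [0, 1] once its zeroth and first moments are fixed, carving out an allowed window in the same spirit as an EFT exclusion plot.

```python
# Toy "positivity bootstrap": moments c_n = sum_i p_i * x_i^n of an
# unknown nonnegative spectral density p on [0, 1].  Fixing c_0 = 1 and
# c_1, then asking how large or small c_2 can be, is a (much simplified)
# analogue of bounding EFT Wilson coefficients by dispersive positivity.
import numpy as np
from scipy.optimize import linprog

x = np.linspace(0.0, 1.0, 101)   # discretized "spectrum"
c1 = 0.3                          # fixed first moment

A_eq = np.vstack([np.ones_like(x), x])   # sum_i p_i = 1, sum_i p_i x_i = c1
b_eq = [1.0, c1]

# Lower bound on c_2: minimize sum_i p_i x_i^2 over p_i >= 0
# (linprog's default variable bounds are already (0, None)).
lo = linprog(x**2, A_eq=A_eq, b_eq=b_eq).fun
# Upper bound: maximize the same objective, i.e. minimize its negative.
hi = -linprog(-x**2, A_eq=A_eq, b_eq=b_eq).fun

print(f"allowed window for c2: [{lo:.4f}, {hi:.4f}]")  # [c1^2, c1]
```

Adding more moments or crossing-type constraints shrinks the allowed window, which is how sharp features such as kinks can appear on an exclusion boundary.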
Award ID(s):
1915093
NSF-PAR ID:
10376822
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Journal of High Energy Physics
Volume:
2022
Issue:
8
ISSN:
1029-8479
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Traditional dark matter models, e.g., weakly interacting massive particles (WIMPs), assume dark matter (DM) is weakly coupled to the standard model, so that elastic scattering between dark matter and baryons can be described perturbatively by the Born approximation; most direct detection experiments are analyzed under that assumption. We show that when the fundamental DM-baryon interaction is attractive, dark matter-nucleus scattering is nonperturbative in much of the relevant parameter range. The cross section exhibits rich resonant behavior with a highly nontrivial dependence on atomic mass; furthermore, the extended rather than pointlike nature of nuclei significantly impacts the cross sections and must therefore be properly taken into account. The repulsive case also shows significant departures from perturbative predictions and likewise requires full numerical calculation. These nonperturbative effects change the boundaries of exclusion regions from existing direct detection, astrophysical and CMB constraints. Near a resonance value of the parameters, the typical velocity-independent Yukawa behavior, σ ∼ v⁰, does not apply. We take this nontrivial velocity dependence into account in our analysis; however, it turns out that the more accurate treatment has little impact on limits given current constraints. Correctly treating the extended size of the nucleus and exactly integrating the Schrödinger equation does have a major impact relative to past analyses based on the Born approximation and naive form factors, so those improvements are essential for interpreting observational constraints. We report corrected exclusion regions superseding previous limits from XQC, the CRESST Surface Run, the CMB power spectrum and its extensions with Lyman-α and Milky Way satellites, and Milky Way gas clouds.
Some limits become weaker, by an order of magnitude or more, than previous bounds in the literature based on perturbation theory and pointlike sources, while others become stronger. Gaps opened by the correct treatment of one constraint can sometimes be closed by a different constraint. We also discuss the dependence on mediator mass and give approximate expressions for the velocity dependence near a resonance. Sexaquark (uuddss) DM with mass around 2 GeV, which exchanges QCD mesons with baryons, remains unconstrained for most of the parameter space of interest. We also correct a statement in the literature that a DM-nucleus cross section larger than 10⁻²⁵ cm² implies that dark matter is composite.
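The failure of the Born approximation for an attractive interaction can be seen in a textbook example. The sketch below is only an illustration of the abstract's point, not the paper's actual DM-nucleus potential: for an attractive spherical well V(r) = -V0 for r < R (in units 2m = ħ = R = 1), it compares the exact low-energy s-wave scattering length with the first Born approximation, which fails badly as the well deepens toward its first zero-energy resonance at V0 = (π/2)².

```python
# Exact vs. Born s-wave scattering length for an attractive square well,
# in units 2m = hbar = R = 1.  Near the zero-energy resonance the exact
# scattering length diverges while the Born result stays small, so the
# cross section sigma = 4*pi*a^2 is wildly underestimated perturbatively.
import numpy as np

def scattering_length_exact(V0):
    K = np.sqrt(V0)                 # interior wavenumber at E -> 0
    return 1.0 - np.tan(K) / K      # standard square-well result

def scattering_length_born(V0):
    return -V0 / 3.0                # first Born approximation

for V0 in (0.1, 1.0, 2.0, 2.4):
    a, aB = scattering_length_exact(V0), scattering_length_born(V0)
    print(f"V0={V0:4.1f}  a_exact={a:8.3f}  a_Born={aB:6.3f}  "
          f"sigma={4 * np.pi * a**2:9.3f}")
```

For weak wells the two agree; near V0 ≈ 2.47 the exact scattering length blows up, the analogue of the resonant cross-section behavior described above.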
  2. Abstract Two-dimensional SU(N) gauge theory coupled to a Majorana fermion in the adjoint representation is a nice toy model for higher-dimensional gauge dynamics. It possesses a multitude of “gluinoball” bound states whose spectrum has been studied using numerical diagonalizations of the light-cone Hamiltonian. We extend this model by coupling it to N_f flavors of fundamental Dirac fermions (quarks). The extended model also contains meson-like bound states, both bosonic and fermionic, which in the large-N limit decouple from the gluinoballs. We study the large-N meson spectrum using Discretized Light-Cone Quantization (DLCQ). When all the fermions are massless, we exhibit an exact osp(1|4) symmetry algebra that leads to an infinite number of degeneracies in the DLCQ approach. More generally, we show that many single-trace states in the theory are threshold bound states that are degenerate with multi-trace states. These exact degeneracies can be explained using the Kac-Moody algebra of the SU(N) current. We also present strong numerical evidence that additional threshold states appear in the continuum limit. Finally, we make the quarks massive while keeping the adjoint fermion massless. In this case too, we observe some exact degeneracies that show that the spectrum of mesons becomes continuous above a certain threshold. This demonstrates quantitatively that the fundamental string tension vanishes in massless adjoint QCD₂ without explicit four-fermion operators.
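The combinatorial core of DLCQ is easy to state: the total light-cone momentum P⁺ is discretized into K units (the "harmonic resolution"), and each parton in a Fock state carries a positive integer number of units. The toy enumerator below is an illustration of that bookkeeping only, not the paper's code; the basis of ordered momentum assignments grows as 2^(K-1), which is why spectra are diagonalized at modest K and extrapolated to the continuum.

```python
# DLCQ bookkeeping sketch: at harmonic resolution K, an ordered parton
# momentum assignment is a tuple of positive integers summing to K
# (a composition of K).  There are 2^(K-1) such compositions before
# cyclic (single-trace) identification and statistics reduce the count.

def momentum_assignments(K):
    """Yield ordered tuples of positive integer momentum units summing to K."""
    if K == 0:
        yield ()
        return
    for first in range(1, K + 1):
        for rest in momentum_assignments(K - first):
            yield (first,) + rest

states = list(momentum_assignments(5))
print(len(states))  # 2^(5-1) = 16 ordered assignments at K = 5
```

The physical single-trace basis is smaller, since cyclicity of the trace and fermion statistics identify and project out many of these assignments.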
  3. Calculation of many-body correlation functions is one of the critical kernels in many scientific computing areas, especially Lattice Quantum Chromodynamics (Lattice QCD). It is formalized as a sum of a large number of contraction terms, each of which can be represented by a graph whose vertices describe quarks inside a hadron node and whose edges designate quark propagations over specific time intervals. Due to their computation- and memory-intensive nature, the real-world physics systems (e.g., multi-meson or multi-baryon systems) explored by Lattice QCD are typically run on multiple GPUs. Unlike general graph processing, many-body correlation function calculations show two specific features: a large number of computation-/data-intensive kernels, and frequently repeated appearances of original and intermediate data. The former results in expensive memory operations such as tensor movements and evictions; the latter offers data-reuse opportunities that mitigate the data-intensive nature of the calculations. However, existing graph-based multi-GPU schedulers cannot capture these data-centric features and thus deliver sub-optimal performance for many-body correlation function calculations. To address this issue, this paper presents a multi-GPU scheduling framework, MICCO, that accelerates contractions for correlation functions by taking the data dimension (e.g., data reuse and data eviction) into account. This work first performs a comprehensive study of the interplay of data reuse and load balance, and designs two new concepts, the local reuse pattern and the reuse bound, to study the opportunity of achieving the optimal trade-off between them. Based on this study, MICCO proposes a heuristic scheduling algorithm and a machine-learning-based regression model to generate the optimal setting of reuse bounds. Specifically, MICCO is integrated into a real-world Lattice QCD system, Redstar, for the first time running on multiple GPUs. The evaluation demonstrates that MICCO outperforms other state-of-the-art works, achieving up to a 2.25× speedup on synthesized datasets and a 1.49× speedup on real-world correlation functions.
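The reuse-versus-balance trade-off at the heart of this kind of scheduler can be sketched in a few lines. The greedy rule and the names below are illustrative assumptions, not MICCO's actual algorithm: each contraction task (modeled as a set of input tensor ids) is placed on the GPU that already holds the most of its inputs, with a penalty for already-loaded GPUs.

```python
# Minimal sketch of reuse-aware multi-GPU task placement: score each GPU
# by (inputs already resident there) minus (alpha * current load).  This
# is a toy illustration of the data-reuse/load-balance trade-off the
# abstract describes, not MICCO's heuristic or its learned reuse bounds.

def schedule(tasks, n_gpus, alpha=1.0):
    resident = [set() for _ in range(n_gpus)]  # tensors cached per GPU
    load = [0] * n_gpus                         # tasks assigned so far
    placement = []
    for task in tasks:
        # reuse term rewards resident inputs; balance term penalizes load
        best = max(range(n_gpus),
                   key=lambda g: len(task & resident[g]) - alpha * load[g])
        placement.append(best)
        resident[best] |= task                  # inputs now cached there
        load[best] += 1
    return placement

# Hypothetical contraction tasks over propagator tensors q1..q5:
tasks = [{"q1", "q2"}, {"q1", "q3"}, {"q4", "q5"}, {"q4", "q2"}]
print(schedule(tasks, n_gpus=2))  # -> [0, 0, 1, 1]
```

Tasks sharing tensors land on the same GPU (avoiding tensor movement) until the load penalty pushes new work elsewhere; tuning alpha plays the role of a reuse bound.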
  4. Abstract

    Given the current rates of climate change, with associated shifts in herbivore population densities, understanding the role of different herbivores in ecosystem functioning is critical for predicting ecosystem responses. Here, we examined how migratory geese and resident, non-migratory reindeer—two dominant yet functionally contrasting herbivores—control vegetation and ecosystem processes in rapidly warming Arctic tundra.

    We collected vegetation and ecosystem carbon (C) flux data at peak plant growing season in the two longest-running, fully replicated herbivore removal experiments found in high-Arctic Svalbard. The experiments had been set up independently in wet habitat utilised by barnacle geese (Branta leucopsis) in summer and in moist-to-dry habitat utilised by wild reindeer (Rangifer tarandus platyrhynchus) year-round.

    Excluding geese induced vegetation state transitions from heavily grazed, moss-dominated (only 4 g m⁻² of live above-ground vascular plant biomass) to ungrazed, graminoid-dominated (60 g m⁻² after 4-year exclusion) and horsetail-dominated (150 g m⁻² after 15-year exclusion) tundra. This caused large increases in vegetation C and nitrogen (N) pools, dead biomass and moss-layer depth. Alterations in plant N concentration and C:N ratio suggest overall slower plant community nutrient dynamics in the short-term (4-year) absence of geese. Long-term (15-year) goose removal quadrupled net ecosystem C sequestration (NEE) by increasing ecosystem photosynthesis more than ecosystem respiration (ER).

    Excluding reindeer for 21 years also produced detectable increases in live above-ground vascular plant biomass (from 50 to 80 g m⁻², without promoting vegetation state shifts), as well as in vegetation C and N pools, dead biomass, moss-layer depth and ER. Yet, reindeer removal did not alter the chemistry of plants and soil or NEE.

    Synthesis. Although both herbivores were key drivers of ecosystem structure and function, the control exerted by geese in their main habitat (wet tundra) was much more pronounced than that exerted by reindeer in their main habitat (moist‐to‐dry tundra). Importantly, these herbivore effects are scale dependent, because geese are more spatially concentrated and thereby affect a smaller portion of the tundra landscape compared to reindeer. Our results highlight the substantial heterogeneity in how herbivores shape tundra vegetation and ecosystem processes, with implications for ongoing environmental change.

     
  5. In several emerging technologies for computer memory (main memory), the cost of reading is significantly cheaper than the cost of writing. Such asymmetry in memory costs poses a fundamentally different model from the RAM for algorithm design. In this paper we study lower and upper bounds for various problems under such asymmetric read and write costs. We consider both the case in which all but O(1) memory has asymmetric cost, and the case of a small cache of symmetric memory. We model both cases using the (M,w)-ARAM, in which there is a small (symmetric) memory of size M and a large unbounded (asymmetric) memory, both random access, and where reading from the large memory has unit cost but writing has cost w >> 1. For FFT and sorting networks we show a lower bound cost of Omega(w*n*log_{w*M}(n)), which indicates that it is not possible to achieve asymptotic improvements with cheaper reads when w is bounded by a polynomial in M. Moreover, there is an asymptotic gap of min(w, log(n)/log(w*M)) between the cost of sorting networks and comparison sorting in the model. This contrasts with the RAM, and most other models, in which the asymptotic costs are the same. We also show a lower bound of Omega(w*n^2/M) cost for computations on an n*n diamond DAG, which indicates no asymptotic improvement is achievable with cheaper reads. However, we show that for the minimum edit distance problem (and related problems), which would seem to be a diamond DAG, we can beat this lower bound with an algorithm of only O(w*n^2/(M*min(w^{1/3},M^{1/2}))) cost. To achieve this we make use of a "path sketch" technique that is forbidden in a strict DAG computation. Finally, we show several interesting upper bounds for shortest-path problems, minimum spanning trees, and other problems. A common theme in many of the upper bounds is that they require redundant computation and a trade-off between reads and writes.
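The read/write asymmetry is easy to make concrete with a cost counter. The tiny model below is a sketch of the (M,w)-ARAM's accounting only (it ignores the size-M symmetric memory, which here corresponds to free operations on plain Python variables), not the paper's model in full detail: reads from the large memory cost 1, writes cost w.

```python
# Tiny cost model in the spirit of the (M, w)-ARAM: unit-cost reads and
# cost-w writes to the large asymmetric memory; work held in ordinary
# variables stands in for the free small symmetric memory.

class ARAM:
    def __init__(self, w):
        self.w, self.cost = w, 0
        self.mem = {}
    def read(self, addr):
        self.cost += 1                 # reads are cheap
        return self.mem.get(addr, 0)
    def write(self, addr, val):
        self.cost += self.w            # writes cost w >> 1
        self.mem[addr] = val

def total_once(aram, n):
    # keep the running sum in "small memory", write the result once
    s = sum(aram.read(i) for i in range(n))
    aram.write("out", s)               # cost: n reads + 1 write = n + w

def total_eager(aram, n):
    # write every partial sum back: cost n reads + n writes = n*(1 + w)
    s = 0
    for i in range(n):
        s += aram.read(i)
        aram.write("out", s)

a = ARAM(w=8)
total_once(a, 10)
print(a.cost)  # 10 reads + 1 write of cost 8 = 18
```

Avoiding redundant writes, even at the price of extra reads or recomputation, is exactly the trade-off the abstract's upper bounds exploit: with w = 8 and n = 10, the eager variant above costs 90 instead of 18.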