Title: Root Cones and the Resonance Arrangement
We study the connection between triangulations of a type $A$ root polytope and the resonance arrangement, a hyperplane arrangement that shows up in a surprising number of contexts. Despite the elementary definition of the resonance arrangement, the number of resonance chambers has only been computed up to the $n=8$ dimensional case. We focus on data structures for labeling chambers, such as sign vectors and sets of alternating trees, with the aim of better understanding the structure of the resonance arrangement and, in particular, enumerating its chambers. Along the way, we make connections with similar (and similarly difficult) enumeration questions. With the root polytope viewpoint, we relate resonance chambers to the chambers of polynomiality of the Kostant partition function. With the hyperplane viewpoint, we clarify the connections between resonance chambers and threshold functions. In particular, we show that the base-2 logarithm of the number of resonance chambers is asymptotically $n^2$.
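For readers who want to experiment, the following is a minimal brute-force sketch, assuming the standard definition of the resonance arrangement in $\mathbb{R}^n$ as the hyperplanes $\sum_{i \in S} x_i = 0$ over all nonempty subsets $S \subseteq \{1,\dots,n\}$; it is not the method of the paper. It counts chambers by testing each candidate sign vector for strict feasibility with a small linear program, so it only scales to very small $n$.

```python
# Brute-force chamber count for the resonance arrangement (illustrative sketch only).
from itertools import combinations, product
import numpy as np
from scipy.optimize import linprog

def resonance_chambers(n):
    # normals c_S with (c_S)_i = 1 if i in S, else 0, over all nonempty S
    normals = [np.array([1.0 if i in S else 0.0 for i in range(n)])
               for k in range(1, n + 1) for S in combinations(range(n), k)]
    m = len(normals)  # 2^n - 1 hyperplanes
    count = 0
    for signs in product([1, -1], repeat=m):
        # Strict feasibility: maximize t subject to sign * (c_S . x) >= t,
        # x in [-1, 1]^n, 0 <= t <= 1.  Variables are (x_1, ..., x_n, t).
        A_ub = np.array([np.append(-s * c, 1.0) for s, c in zip(signs, normals)])
        b_ub = np.zeros(m)
        res = linprog(c=np.append(np.zeros(n), -1.0),  # minimize -t
                      A_ub=A_ub, b_ub=b_ub,
                      bounds=[(-1, 1)] * n + [(0, 1)])
        if res.success and res.x[-1] > 1e-9:
            count += 1  # this sign vector is realized by an open chamber
    return count

print(resonance_chambers(2))  # expected 6
print(resonance_chambers(3))  # expected 32
```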
Award ID(s):
1847284 1501059
NSF-PAR ID:
10252143
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
The Electronic Journal of Combinatorics
Volume:
28
Issue:
1
ISSN:
1077-8926
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    A classical parking function of length $n$ is a list of positive integers $(a_1, a_2, \ldots, a_n)$ whose nondecreasing rearrangement $b_1 \le b_2 \le \cdots \le b_n$ satisfies $b_i \le i$. The convex hull of all parking functions of length $n$ is an $n$-dimensional polytope in $\mathbb{R}^n$, which we refer to as the classical parking function polytope. Its geometric properties have been explored in Amanbayeva and Wang (Enumer Combin Appl 2(2): Paper No. S2R10, 2022) in response to a question posed by Stanley (Amer Math Mon 127(6):563–571, 2020). We generalize this family of polytopes by studying the geometric properties of the convex hull of $\textbf{x}$-parking functions for $\textbf{x} = (a, b, \dots, b)$, which we refer to as $\textbf{x}$-parking function polytopes. We explore connections between these $\textbf{x}$-parking function polytopes, the Pitman–Stanley polytope, and the partial permutahedra of Heuer and Striker (SIAM J Discrete Math 36(4):2863–2888, 2022). In particular, we establish a closed-form expression for the volume of $\textbf{x}$-parking function polytopes. This allows us to answer a conjecture of Behrend et al. (2022) and also obtain a new closed-form expression for the volume of the convex hull of classical parking functions as a corollary.

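    As a quick, hedged illustration of the definition above (not code from the paper), the following sketch checks the parking-function condition and verifies by brute force that the number of classical parking functions of length $n$ is $(n+1)^{n-1}$ for tiny $n$.

```python
# Check the parking-function condition and verify the count (n+1)^(n-1) for small n.
from itertools import product

def is_parking_function(a):
    b = sorted(a)  # nondecreasing rearrangement
    return all(b[i] <= i + 1 for i in range(len(b)))  # b_i <= i (1-indexed)

def count_parking_functions(n):
    # entries of a parking function of length n lie in {1, ..., n}
    return sum(is_parking_function(a) for a in product(range(1, n + 1), repeat=n))

for n in range(1, 5):
    print(n, count_parking_functions(n), (n + 1) ** (n - 1))  # the two counts should agree
```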
  2. Large-scale magnetic fields thread through the electrically conducting matter of the interplanetary and interstellar medium, stellar interiors and other astrophysical plasmas, producing anisotropic flows with regions of high-Reynolds-number turbulence. It is common to encounter turbulent flows structured by a magnetic field with a strength approximately equal to the root-mean-square magnetic fluctuations. In this work, direct numerical simulations of anisotropic magnetohydrodynamic (MHD) turbulence influenced by such a magnetic field are conducted for a series of cases that have identical resolution and increasing grid sizes up to $2048^3$. The result is a series of closely comparable simulations at Reynolds numbers ranging from 1400 up to 21 000. We investigate the influence of the Reynolds number from the Lagrangian viewpoint by tracking fluid particles and calculating single-particle and two-particle statistics. We discuss how Alfvénic fluctuations and the fundamental anisotropy of the MHD turbulence influence these statistics. Single-particle diffusion curves exhibit mildly superdiffusive behaviour that differs between the direction aligned with the magnetic field and the direction perpendicular to it. Competing alignment processes affect the dispersion of particle pairs, in particular at the beginning of the inertial subrange of time scales. Scalings for relative dispersion, which become clearer in the inertial subrange at larger Reynolds numbers, are observed to be steeper than the Richardson prediction.
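    As a generic illustration of the single-particle Lagrangian statistic mentioned above (a hedged sketch, not the authors' analysis code; the mean-field direction is assumed to be $\hat{z}$), one can compute mean-squared displacements parallel and perpendicular to the guide field from tracked particle trajectories:

```python
# Mean-squared displacement of tracked particles, split parallel/perpendicular to B0 = B0 z-hat.
import numpy as np

def msd_parallel_perp(traj):
    """traj: array of shape (n_particles, n_times, 3); returns MSD_par(t), MSD_perp(t)."""
    disp = traj - traj[:, :1, :]            # displacement from each particle's initial position
    par = disp[..., 2] ** 2                  # squared component along z (assumed field direction)
    perp = disp[..., 0] ** 2 + disp[..., 1] ** 2
    return par.mean(axis=0), perp.mean(axis=0)

# toy usage with synthetic random-walk trajectories
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(100, 500, 3)), axis=1)
msd_par, msd_perp = msd_parallel_perp(traj)
```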
  3. Tauman Kalai, Yael (Ed.)
    We introduce and study the communication complexity of computing the inner product of two vectors, where the input is restricted with respect to a norm N on the space ℝⁿ. Here, Alice and Bob hold two vectors v, u such that ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1, where N^* is the dual norm. The goal is to compute their inner product ⟨v,u⟩ up to an ε additive term. The problem is denoted by IP_N, and generalizes important previously studied problems, such as: (1) Computing the expectation 𝔼_{x∼𝒟}[f(x)] when Alice holds 𝒟 and Bob holds f is equivalent to IP_{𝓁₁}. (2) Computing v^T A v, where Alice has a symmetric matrix with bounded operator norm (denoted S_∞) and Bob has a vector v with ‖v‖₂ = 1. This problem is complete for quantum communication complexity and is equivalent to IP_{S_∞}. We systematically study IP_N, showing the following results, near tight in most cases: 1) For any symmetric norm N, given ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1, there is a randomized protocol using 𝒪̃(ε^{-6} log n) bits of communication that returns a value in ⟨u,v⟩ ± ε with probability 2/3; we denote this by ℛ_{ε,1/3}(IP_N) ≤ 𝒪̃(ε^{-6} log n). In the special case where N = 𝓁_p and N^* = 𝓁_q with p^{-1} + q^{-1} = 1, we obtain the improved bound ℛ_{ε,1/3}(IP_{𝓁_p}) ≤ 𝒪(ε^{-2} log n), nearly matching the lower bound ℛ_{ε,1/3}(IP_{𝓁_p}) ≥ Ω(min(n, ε^{-2})). 2) The one-way communication complexity satisfies ℛ^{→}_{ε,δ}(IP_{𝓁_p}) ≤ 𝒪(ε^{-max(2,p)} ⋅ log n/ε), with a nearly matching lower bound ℛ^{→}_{ε,1/3}(IP_{𝓁_p}) ≥ Ω(ε^{-max(2,p)}) for ε^{-max(2,p)} ≪ n. 3) The one-way communication complexity ℛ^{→}_{ε,δ}(IP_N) for a symmetric norm N is governed by the distortion of embedding 𝓁_∞^k into N. Specifically, while a small-distortion embedding easily implies a lower bound Ω(k), we show that, conversely, non-existence of such an embedding implies a protocol with communication k^{𝒪(log log k)} log² n. 4) For an arbitrary origin-symmetric convex polytope P, we show ℛ_{ε,1/3}(IP_N) ≤ 𝒪(ε^{-2} log xc(P)), where N is the unique norm for which P is the unit ball, and xc(P) is the extension complexity of P (i.e., the smallest number of inequalities describing some polytope P' such that P is a projection of P').
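    To make the 𝓁₁ special case above concrete, here is a hedged sketch (not the protocol from the paper) of the naive one-way approach: Alice, who holds the distribution 𝒟, sends O(ε⁻²) samples, and Bob averages his bounded function f over them; by Hoeffding's inequality this estimates 𝔼_{x∼𝒟}[f(x)] = ⟨v,u⟩ to additive error ε with constant probability.

```python
# Naive sampling protocol for the l_1 case: estimate E_{x~D}[f(x)] to additive error eps.
import random

def alice_message(dist, eps):
    """dist: dict mapping outcomes to probabilities; send O(1/eps^2) i.i.d. samples."""
    n_samples = int(4 / eps ** 2)  # crude Hoeffding-style sample count, assuming |f| <= 1
    return random.choices(list(dist), weights=list(dist.values()), k=n_samples)

def bob_estimate(samples, f):
    """Average f over Alice's samples; approximates E_{x~D}[f(x)] = <v, u>."""
    return sum(f(x) for x in samples) / len(samples)

# toy usage: exact value is 0.5*1 + 0.3*(-1) + 0.2*1 = 0.4
dist = {0: 0.5, 1: 0.3, 2: 0.2}
f = lambda x: (-1) ** x
print(bob_estimate(alice_message(dist, eps=0.05), f))
```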
  4. Vapnik-Chervonenkis (VC) theory has so far been unable to explain the small generalization error of overparametrized neural networks. Indeed, existing applications of VC theory to large networks obtain upper bounds on VC dimension that are proportional to the number of weights, and for a large class of networks, these upper bounds are known to be tight. In this work, we focus on a subclass of partially quantized networks that we refer to as hyperplane arrangement neural networks (HANNs). Using a sample compression analysis, we show that HANNs can have VC dimension significantly smaller than the number of weights, while being highly expressive. In particular, empirical risk minimization over HANNs in the overparametrized regime achieves the minimax rate for classification with Lipschitz posterior class probability. We further demonstrate the expressivity of HANNs empirically. On a panel of 121 UCI datasets, overparametrized HANNs match the performance of state-of-the-art full-precision models.