Title: Sequencing the Entangled DNA of Fractional Quantum Hall Fluids
We introduce and prove the “root theorem,” which establishes a condition under which families of operators annihilate all root states associated with zero modes of a given positive semi-definite k-body Hamiltonian chosen from a large class. This class is motivated by fractional quantum Hall and related problems and features generally long-ranged, one-dimensional, dipole-conserving terms. Our theorem streamlines the analysis of zero modes in contexts where “generalized” or “entangled” Pauli principles apply. One major application of the theorem is to parent Hamiltonians for mixed Landau-level wave functions, such as unprojected composite-fermion or parton-like states recently discussed in the literature, where it is difficult to rigorously establish a complete set of zero modes with traditional polynomial techniques. As a simple application, we show that a modified V_1 pseudopotential, obtained by retaining only half of its terms, stabilizes the ν = 1/2 Tao–Thouless state as the unique densest ground state.
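As an illustrative aside not drawn from the paper: the ν = 1/2 Tao–Thouless state corresponds to the thin-torus root configuration 1010…, the densest occupation pattern obeying the generalized Pauli principle of at most one particle in any two consecutive orbitals. A minimal brute-force sketch of that counting statement (periodic boundary conditions, orbital count, and helper names are our own assumptions, not the paper's construction):

```python
from itertools import combinations

def admissible(occ):
    """Generalized Pauli rule behind the nu = 1/2 Tao-Thouless root state:
    at most one particle in any two consecutive orbitals (periodic boundary)."""
    L = len(occ)
    return all(occ[i] + occ[(i + 1) % L] <= 1 for i in range(L))

def densest_admissible_patterns(n_orbitals):
    """Brute-force search (tiny sizes only) for the admissible occupation
    patterns with the largest possible particle number."""
    for n_particles in range(n_orbitals, -1, -1):
        found = []
        for sites in combinations(range(n_orbitals), n_particles):
            occ = [1 if i in sites else 0 for i in range(n_orbitals)]
            if admissible(occ):
                found.append("".join(map(str, occ)))
        if found:
            return found
    return []

if __name__ == "__main__":
    # On a ring of 8 orbitals the densest admissible patterns are exactly
    # 10101010 and 01010101, i.e. the nu = 1/2 root configuration at half filling.
    print(densest_admissible_patterns(8))
```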
Award ID(s): 2029401
PAR ID: 10478342
Author(s) / Creator(s):
Publisher / Repository: MDPI
Date Published:
Journal Name: Symmetry
Volume: 15
Issue: 2
ISSN: 2073-8994
Page Range / eLocation ID: 303
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    The complexity class ZPP^NP[1] (corresponding to zero-error randomized algorithms with access to one NP oracle query) is known to have a number of curious properties. We further explore this class in the settings of time complexity, query complexity, and communication complexity. • For starters, we provide a new characterization: ZPP^NP[1] equals the restriction of BPP^NP[1] where the algorithm is only allowed to err when it forgoes the opportunity to make an NP oracle query. • Using the above characterization, we prove a query-to-communication lifting theorem, which translates any ZPP^NP[1] decision-tree lower bound for a function f into a ZPP^NP[1] communication lower bound for a two-party version of f. • As an application, we use the above lifting theorem to prove that the ZPP^NP[1] communication lower-bound technique introduced by Göös, Pitassi, and Watson (ICALP 2016) is not tight. We also provide a “primal” characterization of this lower-bound technique as a complexity class.
  2. In 1950, Nash proposed a natural equilibrium solution concept for games, hence called the Nash equilibrium, and proved that all finite games have at least one. The proof is through a simple yet ingenious application of Brouwer’s (or, in another version, Kakutani’s) fixed point theorem, the most sophisticated result in his era’s topology; in fact, recent algorithmic work has established that Nash equilibria are computationally equivalent to fixed points. In this paper, we propose a new class of universal non-equilibrium solution concepts arising from an important theorem in the topology of dynamical systems that was unavailable to Nash. This approach starts with both a game and a learning dynamics, defined over mixed strategies. The Nash equilibria are fixpoints of the dynamics, but the system behavior is captured by an object far more general than the Nash equilibrium, known in dynamical systems theory as the chain recurrent set. Informally, once we focus on this solution concept (this notion of “the outcome of the game”), every game behaves like a potential game, with the dynamics converging to these states. In other words, unlike Nash equilibria, this solution concept is algorithmic in the sense that it has a constructive proof of existence. We characterize this solution for simple benchmark games under replicator dynamics, arguably the best-known evolutionary dynamics in game theory. For (weighted) potential games, the new concept coincides with the fixpoints/equilibria of the dynamics. However, in (variants of) zero-sum games with fully mixed (i.e., interior) Nash equilibria, it covers the whole state space, as the dynamics satisfy specific information-theoretic constants of motion. We discuss numerous novel computational, as well as structural, combinatorial questions raised by this chain recurrence conception of games.
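For concreteness (our own toy illustration, not one of the paper's benchmark games): two-population replicator dynamics pushes each mixed strategy toward actions that outperform its current average payoff, and in a zero-sum game with an interior equilibrium the flow cycles around the equilibrium while conserving a KL-divergence-type constant of motion. A minimal sketch on Matching Pennies, using forward-Euler integration so the conserved quantity is only approximately constant numerically:

```python
import numpy as np

# Matching Pennies payoff matrix for the row player (zero-sum: column player gets -A).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def replicator_step(x, y, dt):
    """One forward-Euler step of two-population replicator dynamics."""
    fx = A @ y          # row player's payoffs against the current column mix
    fy = -A.T @ x       # column player's payoffs (zero-sum)
    x = x + dt * x * (fx - x @ fx)
    y = y + dt * y * (fy - y @ fy)
    return x, y

def kl(p, q):
    """Kullback-Leibler divergence between two probability vectors."""
    return float(np.sum(p * np.log(p / q)))

if __name__ == "__main__":
    x = np.array([0.7, 0.3])   # interior (fully mixed) initial conditions
    y = np.array([0.6, 0.4])
    eq = np.array([0.5, 0.5])  # the unique, interior Nash equilibrium
    dt, steps = 1e-3, 30000
    for t in range(steps):
        if t % 10000 == 0:
            # KL(eq||x) + KL(eq||y) is conserved by the exact flow, so it stays
            # (nearly) constant while the strategies x, y cycle around eq.
            print(t, x.round(3), y.round(3), round(kl(eq, x) + kl(eq, y), 4))
        x, y = replicator_step(x, y, dt)
```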
  3. The goal of this paper is to improve the efficiency and applicability of straight-line extraction techniques in the random oracle model. Straight-line extraction in the random oracle model refers to the existence of an extractor which, given the random oracle queries made by a prover on some theorem, is able to produce a witness for that theorem with roughly the same probability with which the prover produces a verifying proof. This notion applies both to zero-knowledge protocols and to verifiable computation, where the goal is compressing a proof. Pass (CRYPTO '03) first showed how to achieve this property for NP using a cut-and-choose technique, which incurred an overhead in communication on the order of the security parameter. Fischlin (CRYPTO '05) presented a more efficient technique based on “proofs of work” that sheds this cost, but it only applies to a limited class of Sigma protocols with a “quasi-unique response” property, which, for example, does not necessarily include the standard OR composition of Sigma protocols. With Schnorr/EdDSA signature aggregation as a motivating application, we develop new techniques to improve the computation cost of straight-line extractable proofs. Our improvements over the state of the art range from 70x to 200x for the best compression parameters. This is due to a uniquely suited polynomial evaluation algorithm, and the insight that a proof of work that relies on multicollisions and the birthday paradox is faster to solve than inverting a fixed target. Our collision-based proof of work more generally improves the prover's random oracle query complexity when applied in the NIZK setting as well. In addition to reducing the query complexity of Fischlin's prover, for a special class of Sigma protocols we can for the first time closely match a new lower bound we present. Finally, we extend Fischlin's technique so that it applies to a more general class of strongly sound Sigma protocols, which includes the OR composition. We achieve this by carefully randomizing Fischlin's technique; we show that its current deterministic nature prevents its application to certain multi-witness languages.
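A toy numerical illustration, not the paper's actual proof-of-work construction, of why a birthday-style collision search is cheaper than inverting a fixed target: with a b-bit toy random oracle, hitting one fixed output takes about 2^b queries on average, while finding any collision takes only about 2^(b/2). Function names and parameters below are our own.

```python
import hashlib
from itertools import count

def h(data: bytes, bits: int) -> int:
    """Truncate SHA-256 to a `bits`-bit value (toy stand-in for a random oracle)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - bits)

def invert_fixed_target(bits: int, target: int = 0) -> int:
    """Count queries until one fixed target is hit: about 2**bits on average."""
    for queries in count(1):
        if h(queries.to_bytes(8, "big"), bits) == target:
            return queries

def find_collision(bits: int) -> int:
    """Count queries until any two inputs collide: about 2**(bits/2) (birthday bound)."""
    seen = set()
    for queries in count(1):
        v = h(queries.to_bytes(8, "big") + b"c", bits)
        if v in seen:
            return queries
        seen.add(v)

if __name__ == "__main__":
    bits = 18  # small toy parameter so both searches finish quickly
    print("fixed-target inversion:", invert_fixed_target(bits), "queries")
    print("birthday collision:    ", find_collision(bits), "queries")
```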
  4. We establish strong duality relations for functional two-step compositional risk-constrained learning problems with multiple nonconvex loss functions and/or learning constraints, regardless of nonconvexity and under a minimal set of technical assumptions. Our results in particular imply zero duality gaps within the class of problems under study, both extending and improving on the state of the art in (risk-neutral) constrained learning. More specifically, we consider risk objectives/constraints that involve real-valued convex and positively homogeneous risk measures admitting dual representations with bounded risk envelopes, generalizing expectations and including popular examples such as the conditional value-at-risk (CVaR), the mean absolute deviation (MAD), and, more generally, all real-valued coherent risk measures on integrable losses as special cases. Our results are based on recent advances in risk-constrained nonconvex programming in infinite dimensions, which rely on a remarkable new application of J. J. Uhl's convexity theorem, an extension of A. A. Lyapunov's convexity theorem to general, infinite-dimensional Banach spaces. By specializing to the risk-neutral setting, we demonstrate, for the first time, that constrained classification and regression can be treated under a unifying lens while dispensing with certain restrictive assumptions enforced in the current literature, yielding a new state-of-the-art strong duality framework for nonconvex constrained learning.
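As background for the risk measures named above (a generic illustration, not code from the paper): CVaR admits the standard Rockafellar–Uryasev variational form CVaR_alpha(Z) = min over t of { t + E[(Z − t)_+] / (1 − alpha) }, which can be checked empirically against the average of the worst (1 − alpha) fraction of sampled losses. The sample sizes and distribution below are our own choices, and the two estimators agree only up to finite-sample discreteness.

```python
import numpy as np

def cvar_variational(losses, alpha):
    """Rockafellar-Uryasev form: CVaR_alpha(Z) = min_t { t + E[(Z-t)_+] / (1-alpha) }.
    The minimizer can be taken to be the empirical alpha-quantile (VaR)."""
    t = np.quantile(losses, alpha)
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

def cvar_tail_average(losses, alpha):
    """Direct estimate: average of the worst (1 - alpha) fraction of the losses."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.standard_normal(200_000)  # hypothetical loss samples
    for alpha in (0.9, 0.95, 0.99):
        print(alpha,
              round(cvar_variational(z, alpha), 4),
              round(cvar_tail_average(z, alpha), 4))
```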
  5. We introduce a framework to study discrete-variable (DV) quantum systems based on qudits. It relies on notions of a mean state (MS), a minimal stabilizer-projection state (MSPS), and a new convolution. Some interesting consequences are: The MS is the closest MSPS to a given state with respect to the relative entropy; the MS is extremal with respect to the von Neumann entropy, demonstrating a “maximal entropy principle in DV systems.” We obtain a series of inequalities for quantum entropies and for Fisher information based on convolution, giving a “second law of thermodynamics for quantum convolutions.” We show that the convolution of two stabilizer states is a stabilizer state. We establish a central limit theorem, based on iterating the convolution of a zero-mean quantum state, and show this converges to its MS. The rate of convergence is characterized by the “magic gap,” which we define in terms of the support of the characteristic function of the state. We elaborate on two examples: the DV beam splitter and the DV amplifier. 
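For the entropic quantities this abstract refers to, here is a generic numpy illustration on qudit density matrices (the example states are our own; this is not the paper's mean-state or convolution construction):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log rho], computed from the eigenvalues (natural log)."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]          # drop numerical zeros
    return float(-np.sum(eigs * np.log(eigs)))

def matrix_log(rho):
    """Matrix logarithm of a full-rank Hermitian matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    return (v * np.log(w)) @ v.conj().T

def relative_entropy(rho, sigma):
    """D(rho || sigma) = Tr[rho (log rho - log sigma)], for full-rank states."""
    return float(np.real(np.trace(rho @ (matrix_log(rho) - matrix_log(sigma)))))

if __name__ == "__main__":
    d = 3                                   # a single qutrit
    mixed = np.eye(d) / d                   # maximally mixed state
    rho = np.diag([0.5, 0.3, 0.2])          # a less mixed diagonal state
    print("S(rho)         =", round(von_neumann_entropy(rho), 4))
    print("S(mixed)       =", round(von_neumann_entropy(mixed), 4))  # = log d, the maximum
    print("D(rho||mixed)  =", round(relative_entropy(rho, mixed), 4))
    # Consistency check: for any state, D(rho || I/d) = log d - S(rho).
    print("log d - S(rho) =", round(np.log(d) - von_neumann_entropy(rho), 4))
```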