Title: Complete Problems for Multi-Pseudodeterministic Computations
We exhibit several computational problems that are {\em complete} for multi-pseudodeterministic computations in the following sense: (1) these problems admit $$2$$-pseudodeterministic algorithms, and (2) if any of these problems admits a pseudodeterministic algorithm, then every multi-valued function that admits a $$k$$-pseudodeterministic algorithm for a constant $$k$$ also admits a pseudodeterministic algorithm. We also show that these computational problems are complete for {\em Search-BPP}: a pseudodeterministic algorithm for any of these problems implies a pseudodeterministic algorithm for all problems in Search-BPP.
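The $$2$$-pseudodeterministic algorithms behind item (1) can be thought of via the standard rounding idea: estimate the quantity of interest to high accuracy and snap the estimate to a fixed grid, so that with high probability the output lands on one of at most two grid points adjacent to the true value. Below is a minimal illustrative sketch in Python for additively approximating the acceptance probability of a circuit; the sampling routine, accuracy parameters, and the toy circuit predicate are assumptions made for illustration, not the paper's construction.

```python
import math
import random

def estimate_acceptance(circuit, n_inputs, eps, delta):
    """Empirically estimate Pr_x[circuit(x) = 1] to within eps/8,
    except with probability about delta (Chernoff-style sample count)."""
    samples = math.ceil(128 * math.log(2 / delta) / eps ** 2)
    hits = sum(circuit([random.randint(0, 1) for _ in range(n_inputs)])
               for _ in range(samples))
    return hits / samples

def two_pseudodeterministic_apep(circuit, n_inputs, eps=0.1, delta=0.01):
    """Round the estimate to the nearest multiple of eps/2. The estimate
    concentrates in an interval of width eps/4, which straddles at most one
    grid boundary, so with high probability the output is one of at most two
    canonical values, i.e. a 2-pseudodeterministic eps-approximation."""
    p_hat = estimate_acceptance(circuit, n_inputs, eps, delta)
    grid = eps / 2
    return round(p_hat / grid) * grid

# Toy usage: a "circuit" that accepts inputs whose first two bits agree.
if __name__ == "__main__":
    toy_circuit = lambda x: int(x[0] == x[1])
    print(two_pseudodeterministic_apep(toy_circuit, n_inputs=8))
```

The design point is that the canonical values are grid points, which depend only on the (unknown) true acceptance probability and not on the algorithm's randomness; reducing the two possible canonical outputs to a single one is precisely the gap the completeness results above speak to.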
Award ID(s):
1849053 1934884
PAR ID:
10221647
Author(s) / Creator(s):
; ;
Editor(s):
Lee, James
Date Published:
Journal Name:
Innovations in Theoretical Computer Science
Page Range / eLocation ID:
66:1--66:16
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Stefano Leonardi and Anupam Gupta (Ed.)
    A probabilistic algorithm A is pseudodeterministic if, on every input, there exists a canonical value that is output with high probability. If the algorithm outputs one of k canonical values with high probability, then it is called a k-pseudodeterministic algorithm. In the study of pseudodeterminism, the Acceptance Probability Estimation Problem (APEP), which is to additively approximate the acceptance probability of a Boolean circuit, is emerging as a central computational problem. This problem admits a 2-pseudodeterministic algorithm. Recently, it was shown that a pseudodeterministic algorithm for this problem would imply that any multi-valued function that admits a k-pseudodeterministic algorithm for a constant k (including approximation algorithms) also admits a pseudodeterministic algorithm (Dixon, Pavan, Vinodchandran; ITCS 2021). The contribution of the present work is two-fold. First, as our main conceptual contribution, we establish that the existence of a pseudodeterministic algorithm for APEP is fundamentally related to the gap between probabilistic promise classes and the corresponding standard complexity classes. In particular, we show the following equivalence: APEP has a pseudodeterministic approximation algorithm if and only if every promise problem in PromiseBPP has a solution in BPP. A conceptual interpretation of this equivalence is that the algorithmic gap between 2-pseudodeterminism and pseudodeterminism is equivalent to the gap between PromiseBPP and BPP. Based on this connection, we show that designing pseudodeterministic algorithms for APEP leads to the solution of some open problems in complexity theory, including new Boolean circuit lower bounds. This equivalence also explains how multi-pseudodeterminism is connected to problems in SearchBPP. In particular, we show that if APEP has a pseudodeterministic algorithm, then every problem that admits a k(n)-pseudodeterministic algorithm (for any polynomial k) is in SearchBPP and admits a pseudodeterministic algorithm. Motivated by this connection, we also explore probabilistic search problems and establish that APEP is complete for certain notions of search problems in the context of pseudodeterminism. Our second contribution is establishing query complexity lower bounds for multi-pseudodeterministic computations. We prove that for every k ≥ 1, there exists a problem whose (k+1)-pseudodeterministic query complexity, in the uniform query model, is O(1) but whose k-pseudodeterministic query complexity is Ω(n), even in the more general nonadaptive query model. A key contribution of this part of the work is the utilization of Sperner’s lemma in establishing query complexity lower bounds.
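    For concreteness, the definition in play can be written out: a probabilistic algorithm $$A$$ is $$k$$-pseudodeterministic if for every input $$x$$ there is a set $$S(x)$$ of at most $$k$$ canonical values such that $$\Pr[A(x) \in S(x)] \geq 3/4$$; pseudodeterministic is the case $$k = 1$$, a single canonical value output with probability at least $$3/4$$. (The success threshold $$3/4$$ is one common convention and is assumed here for concreteness.)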
  2.
    The {\sc Acceptance Probability Estimation Problem} (APEP) is to additively approximate the acceptance probability of a Boolean circuit. This problem admits a probabilistic approximation scheme. A central question is whether we can design a {\em pseudodeterministic} approximation algorithm for this problem: a probabilistic polynomial-time algorithm that outputs a canonical approximation with high probability. Recently, it was shown that such an algorithm would imply that {\em every approximation algorithm can be made pseudodeterministic} (Dixon, Pavan, Vinodchandran; ITCS 2021). The main conceptual contribution of this work is to establish that the existence of a pseudodeterministic algorithm for APEP is fundamentally connected to the relationship between probabilistic promise classes and the corresponding standard complexity classes. In particular, we show the following equivalence: {\em every promise problem in PromiseBPP has a solution in BPP if and only if APEP has a pseudodeterministic algorithm}. Based on this equivalence, we show that pseudodeterministic algorithms for APEP can shed light on a few central topics in complexity theory such as circuit lower bounds, probabilistic hierarchy theorems, and multi-pseudodeterminism.
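    The probabilistic approximation scheme mentioned above is the standard sampling estimator: drawing $$O(\log(1/\delta)/\epsilon^2)$$ uniform inputs and outputting the empirical acceptance frequency yields, by a Chernoff bound, an additive $$\epsilon$$-approximation with probability at least $$1 - \delta$$. The obstacle that pseudodeterminism targets is that independent runs of this estimator typically return different values; a pseudodeterministic algorithm must instead concentrate on a single canonical approximation.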
  3. We give a polynomial-time algorithm for learning neural networks with one layer of sigmoids feeding into any Lipschitz, monotone activation function (e.g., sigmoid or ReLU). We make no assumptions on the structure of the network, and the algorithm succeeds with respect to {\em any} distribution on the unit ball in n dimensions (hidden weight vectors also have unit norm). This is the first assumption-free, provably efficient algorithm for learning neural networks with two nonlinear layers. Our algorithm, Alphatron, is a simple, iterative update rule that combines isotonic regression with kernel methods. It outputs a hypothesis that yields efficient oracle access to interpretable features. It also suggests a new approach to Boolean learning problems via real-valued conditional-mean functions, sidestepping traditional hardness results from computational learning theory. Along these lines, we subsume and improve many longstanding results for PAC learning Boolean functions to the more general, real-valued setting of {\em probabilistic concepts}, a model that (unlike PAC learning) requires non-i.i.d. noise-tolerance.
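    As an illustration of the kind of update rule described, the following is a rough Python sketch of a kernelized iterative predictor with a fixed monotone link. The RBF kernel, learning rate, iteration count, and data generation are all assumptions chosen for illustration; this is not claimed to be the paper's exact Alphatron procedure or its analysis.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_monotone_regression(X, y, link=lambda t: 1.0 / (1.0 + np.exp(-t)),
                               lr=0.5, iters=200, gamma=1.0):
    """Illustrative Alphatron-style learner: keep one coefficient per training
    point, predict by applying a fixed monotone link to a kernel expansion,
    and update coefficients additively by the residuals."""
    m = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(m)
    for _ in range(iters):
        preds = link(K @ alpha)           # current hypothesis on the training set
        alpha += (lr / m) * (y - preds)   # additive, residual-driven update
    def predict(Z):
        return link(rbf_kernel(Z, X, gamma) @ alpha)
    return predict

# Toy usage: targets are a sigmoid of a linear function plus small noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # put inputs on the unit sphere
    w = rng.normal(size=5)
    w /= np.linalg.norm(w)
    y = 1.0 / (1.0 + np.exp(-3.0 * (X @ w))) + 0.05 * rng.normal(size=200)
    predict = kernel_monotone_regression(X, y)
    print("training MSE:", float(np.mean((predict(X) - y) ** 2)))
```

    The kernel expansion means the learner never manipulates hidden weight vectors explicitly: it fits one coefficient per training example and pushes all nonlinearity into the kernel and the outer link.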
  4. The constraint satisfaction problems k-SAT and Quantum k-SAT (k-QSAT) are canonical NP-complete and QMA_1-complete problems (for k >= 3), respectively, where QMA_1 is a quantum generalization of NP with one-sided error. Whereas k-SAT has been well-studied for special tractable cases, as well as from a parameterized complexity perspective, much less is known in similar settings for k-QSAT. Here, we study the open problem of computing satisfying assignments to k-QSAT instances which have a "matching" or "dimer covering"; this is an NP problem whose decision variant is trivial, but whose search complexity remains open. Our results fall into three directions, all of which relate to the "matching" setting: (1) We give a polynomial-time classical algorithm for k-QSAT when all qubits occur in at most two clauses. (2) We give a parameterized algorithm for k-QSAT instances from a certain non-trivial class, which allows us to obtain exponential speedups over brute force methods in some cases by reducing the problem to solving for a single root of a single univariate polynomial. (3) We conduct a structural graph theoretic study of 3-QSAT interaction graphs which have a "matching". We remark that the results of (2), in particular, introduce a number of new tools to the study of Quantum SAT, including graph theoretic concepts such as transfer filtrations and blow-ups from algebraic geometry; we hope these prove useful elsewhere. 
  5. For a complexity class $$C$$ and language $$L$$, a constructive separation of $$L \notin C$$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $$C$$-algorithm attempting to decide $$L$$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that constructiveness serves as a dividing line between many weak lower bounds we know how to prove, and strong lower bounds against $$P$$, $$ZPP$$, and $$BPP$$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories. 1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations ranging from $$EXP \neq BPP$$ to even $$P \neq NP$$. 2. Our second set of results shows that for most major open problems in lower bounds against $$P$$, $$ZPP$$, and $$BPP$$, including $$P \neq NP$$, $$P \neq PSPACE$$, $$P \neq PP$$, $$ZPP \neq EXP$$, and $$BPP \neq NEXP$$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $$P \neq NP$$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $$BPP \neq NEXP$$ [Dolev, Fandina and Gutfreund, CIAC 2013]. 3. Our third set of results shows that certain complexity separations cannot be made constructive. We observe that for all super-polynomially growing functions $$t$$, there are no constructive separations for detecting high $$t$$-time Kolmogorov complexity (a task which is known to be not in $$P$$) from any complexity class, unconditionally.
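    Spelled out, the notion above asks for a polynomial-time refuter $$R$$ that, given $$1^n$$ and (a description of) a candidate $$C$$-algorithm $$A$$, outputs an input $$x$$ of length $$n$$ on which $$A(x) \neq L(x)$$. The exact formalization varies (for instance, whether the refuter is uniform or gets oracle access to $$A$$), so the version stated here is only one common convention, included for concreteness.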