Existing proofs that deduce BPP = P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against randomized NP ∩ coNP circuits, formally known as randomized SVN circuits, we convert any randomized algorithm over inputs of length n running in time t ≥ n into a deterministic one running in time t^{2+α} for an arbitrarily small constant α > 0. Such a slowdown is nearly optimal for t close to n, since under standard complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing). The latter derandomization result holds under a weaker assumption: exponential lower bounds against deterministic SVN circuits. Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1+α)·log s, under the assumption that there exists a function f ∈ E that requires randomized SVN circuits of size at least 2^{(1−α′)n}, where α′ = O(α). The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list-recoverable codes.
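The quantitative claim comes down to seed enumeration: a pseudorandom generator with seed length (1+α)·log s has s^{1+α} seeds, so running the time-t algorithm once per seed and taking a majority vote costs roughly t·s^{1+α} ≈ t^{2+α} when s ≈ t. The sketch below illustrates only this generic enumeration step, not the paper's construction; `prg` and `randomized_alg` are hypothetical placeholders.

```python
import itertools

def derandomize(randomized_alg, prg, seed_len, x):
    """Generic PRG-based derandomization (illustration only, not the paper's PRG).

    randomized_alg(x, r) -> bool consumes a string r of pseudorandom bits;
    prg(seed) -> bit string stretches a short seed to that length.
    With seed length (1 + alpha) * log2(s) for circuits of size s ~ t, the loop
    runs s^(1+alpha) times, each iteration taking ~t steps, for ~t^(2+alpha) total.
    """
    votes = 0
    total = 0
    for seed_bits in itertools.product((0, 1), repeat=seed_len):
        r = prg(seed_bits)               # expand the seed to t pseudorandom bits
        votes += randomized_alg(x, r)    # run the algorithm with this bit string
        total += 1
    return votes * 2 > total             # majority vote over all seeds
```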
Exact sampling of the infinite horizon maximum of a random walk over a nonlinear boundary
Abstract We present the first algorithm that samples max_{n≥0} {S_n − n^α}, where S_n is a mean-zero random walk and n^α with α ∈ (1/2, 1) defines a nonlinear boundary. We show that our algorithm has finite expected running time. We also apply this algorithm to construct the first exact simulation method for the steady-state departure process of a GI/GI/∞ queue where the service time distribution has infinite mean.
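To make the target random variable concrete, here is a naive truncated Monte Carlo approximation of M = max_{n≥0}(S_n − n^α). It is emphatically not the paper's exact sampler: truncating the horizon introduces exactly the bias the exact algorithm avoids, and the ±1 step distribution and horizon are assumptions chosen only for illustration.

```python
import random

def approx_max_over_boundary(alpha=0.75, horizon=10_000):
    """Crude truncated approximation of M = max_{n>=0} (S_n - n^alpha),
    where S_n is a mean-zero random walk with +/-1 steps.
    NOT the exact sampler from the paper: truncation biases the result.
    """
    s, best = 0.0, 0.0                      # n = 0 contributes S_0 - 0 = 0
    for n in range(1, horizon + 1):
        s += random.choice((-1.0, 1.0))     # mean-zero increment
        best = max(best, s - n ** alpha)    # track S_n - n^alpha
    return best
```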
- PAR ID: 10110560
- Date Published:
- Journal Name: Journal of Applied Probability
- Volume: 56
- Issue: 01
- ISSN: 0021-9002
- Page Range / eLocation ID: 116 to 138
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract We explore possible extensions of the t-channel and s-channel unitary model of high energy evolution in zero transverse dimensions appropriate to very high energy/atomic number, where the dipole density in a toy hadron is parametrically high. We suggest that the appropriate generalization is to allow emission of more than one dipole in a single step of energy evolution. We construct explicitly such a model that preserves the t-channel and s-channel unitarity and has the correct low-density limit, and study the particle multiplicity distribution resulting from this evolution. We consider initial conditions of a single dipole and of many dipoles at initial rapidity. We observe that the saturation regime in this model is preceded by a parametric range of rapidities (1/α_s) ln(1/α_s) < Y < (1/α_s) ln(1/α_s²), where the saturation effects are still unimportant but multiple emissions determine the properties of the evolution. We also discuss the influence of saturation on the parton cascade and, in particular, find that in the saturation regime the entropy of partons becomes S ≈ (1/2) ln N, where N is the mean multiplicity.
-
We present a fast, differentially private algorithm for high-dimensional covariance-aware mean estimation with nearly optimal sample complexity. Only exponential-time estimators were previously known to achieve this guarantee. Given n samples from a (sub-)Gaussian distribution with unknown mean μ and covariance Σ, our (ε,δ)-differentially private estimator produces μ̃ such that ∥μ − μ̃∥_Σ ≤ α as long as n ≳ d/α² + d√(log(1/δ))/(αε) + d log(1/δ)/ε. The Mahalanobis error metric ∥μ − μ̂∥_Σ measures the distance between μ̂ and μ relative to Σ; it characterizes the error of the sample mean. Our algorithm runs in time Õ(nd^{ω−1} + nd/ε), where ω < 2.38 is the matrix multiplication exponent. We adapt an exponential-time approach of Brown, Gaboardi, Smith, Ullman, and Zakynthinou (2021), giving efficient variants of stable mean and covariance estimation subroutines that also improve the sample complexity to the nearly optimal bound above. Our stable covariance estimator can be turned into a private covariance estimator for unrestricted subgaussian distributions. With n ≳ d^{3/2} samples, our estimate is accurate in spectral norm. This is the first such algorithm using n = o(d²) samples, answering an open question posed by Alabi et al. (2022). With n ≳ d² samples, our estimate is accurate in Frobenius norm. This leads to a fast, nearly optimal algorithm for private learning of unrestricted Gaussian distributions in TV distance. Duchi, Haque, and Kuditipudi (2023) obtained similar results independently and concurrently.
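The Mahalanobis guarantee means the privacy noise should be shaped like Σ rather than isotropic. As a didactic illustration only, and not the paper's algorithm, the sketch below clips samples in an empirical Mahalanobis norm and applies a Gaussian mechanism whose noise covariance is proportional to the empirical covariance; the clipping radius, noise calibration, and heuristic use of the data-dependent covariance are assumptions, and a rigorous privacy analysis would have to account for estimating Σ from the data.

```python
import numpy as np

def covariance_shaped_mean(X, eps, delta, clip=10.0, rng=None):
    """Didactic covariance-aware mean estimate (NOT the paper's algorithm).

    Clips samples in the empirical Mahalanobis norm, averages them, and adds
    Gaussian noise shaped like the empirical covariance, so the error is
    naturally measured in the Mahalanobis metric ||mu_hat - mu||_Sigma.
    Privacy here is heuristic: the covariance itself is estimated from data.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    sigma_hat = np.cov(X, rowvar=False) + 1e-6 * np.eye(d)   # regularized empirical covariance
    root = np.linalg.cholesky(sigma_hat)                      # sigma_hat = root @ root.T
    root_inv = np.linalg.inv(root)
    Z = X @ root_inv.T                                        # whiten to roughly isotropic coordinates
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    Z = Z * np.minimum(1.0, clip / norms)                     # clip in the Mahalanobis norm
    sens = 2.0 * clip / n                                     # l2-sensitivity of the whitened mean
    sigma_noise = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy = Z.mean(axis=0) + rng.normal(scale=sigma_noise, size=d)
    return noisy @ root.T                                     # map back: noise covariance ~ sigma_hat
```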
-
Abstract We investigate the Hölder geometry of curves generated by iterated function systems (IFS) in a complete metric space. A theorem of Hata from 1985 asserts that every connected attractor of an IFS is locally connected and path-connected. We give a quantitative strengthening of Hata's theorem. First we prove that every connected attractor of an IFS is (1/s)-Hölder path-connected, where s is the similarity dimension of the IFS. Then we show that every connected attractor of an IFS is parameterized by a (1/α)-Hölder curve for all α > s. At the endpoint α = s, a theorem of Remes from 1998 already established that connected self-similar sets in Euclidean space that satisfy the open set condition are parameterized by (1/s)-Hölder curves. In a secondary result, we show how to promote Remes' theorem to self-similar sets in complete metric spaces, but in this setting require the attractor to have positive s-dimensional Hausdorff measure in lieu of the open set condition. To close the paper, we determine sharp Hölder exponents of parameterizations in the class of connected self-affine Bedford-McMullen carpets and build parameterizations of self-affine sponges. An interesting phenomenon emerges in the self-affine setting. While the optimal parameter s for a self-similar curve in ℝⁿ is always at most the ambient dimension n, the optimal parameter s for a self-affine curve in ℝⁿ may be strictly greater than n.
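The similarity dimension s that governs these Hölder exponents is the unique solution of ∑_i r_i^s = 1, where r_i ∈ (0, 1) are the contraction ratios of the IFS. A minimal sketch for computing it numerically follows; the example ratios are hypothetical, chosen to reproduce the Koch-curve value log 4 / log 3.

```python
def similarity_dimension(ratios, tol=1e-12):
    """Solve sum_i r_i**s = 1 by bisection, for contraction ratios r_i in (0, 1).

    The map s -> sum_i r_i**s is strictly decreasing, from len(ratios) >= 1
    at s = 0 toward 0 as s grows, so a unique root exists.
    """
    lo, hi = 0.0, 1.0
    while sum(r ** hi for r in ratios) > 1.0:   # grow hi until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(r ** mid for r in ratios) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: four maps with ratio 1/3 (a Koch-like curve) gives s = log 4 / log 3 ~ 1.26.
print(similarity_dimension([1/3, 1/3, 1/3, 1/3]))
```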