Title: Duration of Gambler's Ruin Problem
Consider a gambler who on each bet either wins 1 with probability p or loses 1 with probability q = 1 - p, the results of successive bets being independent. The gambler stops betting upon being either up k or down k. Letting N be the number of bets made, we show that N is a new better than used (NBU) random variable. Moreover, we show that if k is even then N/2 has an increasing failure rate, and if k is odd then (N+1)/2 has an increasing failure rate.
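The stopping time N is easy to simulate, so the NBU claim can be spot-checked empirically. The sketch below is illustrative only, not from the paper; the function names and parameter choices are my own. It samples N for a symmetric walk stopped at ±k and tests the NBU inequality P(N > s + t) ≤ P(N > s) P(N > t):

```python
import random

def ruin_duration(k, p, rng):
    """Simulate one game: fortune moves +1 w.p. p, -1 w.p. 1-p,
    stopping when it first reaches +k or -k. Returns the number of bets N."""
    fortune, bets = 0, 0
    while abs(fortune) < k:
        fortune += 1 if rng.random() < p else -1
        bets += 1
    return bets

rng = random.Random(1)
k, p, trials = 4, 0.5, 200_000
samples = [ruin_duration(k, p, rng) for _ in range(trials)]

def survival(t):
    """Empirical P(N > t)."""
    return sum(n > t for n in samples) / len(samples)

# NBU means P(N > s + t) <= P(N > s) * P(N > t) for all s, t >= 0.
s, t = 6, 10
print(survival(s + t), "<=", survival(s) * survival(t))
```

Note that for even k the walk's parity forces N to be even, so N/2 is integer-valued; this is the variable whose failure rate the abstract asserts is increasing.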
Award ID(s): 2132759
PAR ID: 10516151
Publisher / Repository: INFORMS
Journal Name: Operations Research Letters
ISSN: 0167-6377
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Let (a_k)_{k∈N} be an increasing sequence of positive integers satisfying the Hadamard gap condition a_{k+1}/a_k > q > 1 for all k ∈ N, and let S_n(ω) = \sum_{k=1}^n cos(2π a_k ω), n ∈ N, ω ∈ [0, 1]. Then S_n is called a lacunary trigonometric sum, and can be viewed as a random variable defined on the probability space Ω = [0, 1] endowed with Lebesgue measure. Lacunary sums are known to exhibit several properties that are typical for sums of independent random variables. For example, a central limit theorem for (S_n)_{n∈N} was obtained by Salem and Zygmund, while a law of the iterated logarithm is due to Erdős and Gál. In this paper we study large deviation principles for lacunary sums. Specifically, under the large gap condition a_{k+1}/a_k → ∞, we prove that the sequence (S_n/n)_{n∈N} satisfies a large deviation principle with speed n and the same rate function I as for sums of independent random variables with the arcsine distribution. On the other hand, we show that the large deviation principle may fail to hold when we only assume the Hadamard gap condition. However, we show that in the special case a_k = q^k for some q ∈ {2, 3, ...}, (S_n/n)_{n∈N} satisfies a large deviation principle (with speed n) and a rate function I_q that is different from I, and we describe an algorithm to compute an arbitrary number of terms in the Taylor expansion of I_q. In addition, we prove that I_q converges pointwise to I as q → ∞. Furthermore, we construct a random perturbation (a_k)_{k∈N} of the sequence (2^k)_{k∈N} for which a_{k+1}/a_k → 2 as k → ∞, but for which (S_n/n)_{n∈N} satisfies a large deviation principle with the same rate function I as in the independent case, surprisingly different from the rate function I_2 one might naïvely expect. We relate this fact to the number of solutions of certain Diophantine equations. Together, these results show that large deviation principles for lacunary trigonometric sums are very sensitive to the arithmetic properties of the sequence (a_k)_{k∈N}. This is particularly noteworthy since no such arithmetic effects are visible in the central limit theorem or in the law of the iterated logarithm for lacunary trigonometric sums. Our proofs use a combination of tools from probability theory, harmonic analysis, and dynamical systems.
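As a numerical companion to the abstract above (a rough Monte Carlo sketch under my own parameter choices, not the paper's method), one can sample S_n/n for the lacunary sequence a_k = 2^k and form the crude finite-n proxy -(1/n) log P(S_n/n ≥ x) for the rate function:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 200_000                     # number of terms, number of omega samples
omega = rng.random(m)                  # omega ~ Uniform[0, 1] (Lebesgue measure)
a = 2.0 ** np.arange(1, n + 1)         # a_k = 2^k: Hadamard gaps with ratio q = 2
S = np.cos(2 * np.pi * np.outer(omega, a)).sum(axis=1)   # S_n(omega)

# Crude finite-n proxy for the rate function: -(1/n) log P(S_n / n >= x).
for x in (0.2, 0.4, 0.6):
    tail = (S / n >= x).mean()
    print(x, -np.log(tail) / n if tail > 0 else np.inf)
```

With n = 20 such estimates are only suggestive; the paper's point is precisely that for a_k = 2^k the limiting rate is I_2, not the independent-case rate I.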
  2. Stefano Leonardi and Anupam Gupta (Eds.)
    A probabilistic algorithm A is pseudodeterministic if, on every input, there exists a canonical value that is output with high probability. If the algorithm outputs one of k canonical values with high probability, then it is called a k-pseudodeterministic algorithm. In the study of pseudodeterminism, the Acceptance Probability Estimation Problem (APEP), which is to additively approximate the acceptance probability of a Boolean circuit, is emerging as a central computational problem. This problem admits a 2-pseudodeterministic algorithm. Recently, it was shown that a pseudodeterministic algorithm for this problem would imply that any multi-valued function that admits a k-pseudodeterministic algorithm for a constant k (including approximation algorithms) also admits a pseudodeterministic algorithm (Dixon, Pavan, Vinodchandran; ITCS 2021). The contribution of the present work is two-fold. First, as our main conceptual contribution, we establish that the existence of a pseudodeterministic algorithm for APEP is fundamentally related to the gap between probabilistic promise classes and the corresponding standard complexity classes. In particular, we show the following equivalence: APEP has a pseudodeterministic approximation algorithm if and only if every promise problem in PromiseBPP has a solution in BPP. A conceptual interpretation of this equivalence is that the algorithmic gap between 2-pseudodeterminism and pseudodeterminism is equivalent to the gap between PromiseBPP and BPP. Based on this connection, we show that designing pseudodeterministic algorithms for APEP leads to the solution of some open problems in complexity theory, including new Boolean circuit lower bounds. This equivalence also explains how multi-pseudodeterminism is connected to problems in SearchBPP. In particular, we show that if APEP has a pseudodeterministic algorithm, then every problem that admits a k(n)-pseudodeterministic algorithm (for any polynomial k) is in SearchBPP and admits a pseudodeterministic algorithm. Motivated by this connection, we also explore its connection to probabilistic search problems and establish that APEP is complete for certain notions of search problems in the context of pseudodeterminism. Our second contribution is establishing query complexity lower bounds for multi-pseudodeterministic computations. We prove that for every k ≥ 1, there exists a problem whose (k+1)-pseudodeterministic query complexity, in the uniform query model, is O(1) but has a k-pseudodeterministic query complexity of Ω(n), even in the more general nonadaptive query model. A key contribution of this part of the work is the utilization of Sperner’s lemma in establishing query complexity lower bounds. 
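To make the objects concrete, the sketch below illustrates the standard sampling-and-rounding route to a 2-pseudodeterministic algorithm for APEP. This is an illustration of the notion, not the paper's construction; the function names, constants, and toy circuit are assumptions. The empirical acceptance probability is snapped to a grid of spacing eps; since the true probability sits near at most one grid boundary, with high probability the output is one of at most two canonical grid values:

```python
import math
import random

def apep_2pd(circuit, n_vars, eps, delta=1e-3, rng=random.Random(0)):
    """Estimate Pr_x[circuit(x) = 1] by sampling, then round the estimate
    to the grid {0, eps, 2*eps, ...}. Outputs concentrate on at most two
    adjacent grid points, i.e. the procedure is 2-pseudodeterministic."""
    m = math.ceil(8 * math.log(2 / delta) / eps**2)   # Chernoff-style sample size
    hits = sum(circuit([rng.random() < 0.5 for _ in range(n_vars)])
               for _ in range(m))
    return round((hits / m) / eps) * eps

# Toy "circuit": majority of 3 fair bits; true acceptance probability is 1/2.
maj3 = lambda x: int(sum(x) >= 2)
print(apep_2pd(maj3, n_vars=3, eps=0.1))
```

A pseudodeterministic algorithm would have to commit to a single canonical value even when the true probability falls exactly on a grid boundary, which is where the gap discussed in the abstract lives.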
  3. We design a nonadaptive algorithm that, given a Boolean function f: {0, 1}^n → {0, 1} which is α-far from monotone, makes poly(n, 1/α) queries and returns an estimate that, with high probability, is an Õ(√n)-approximation to the distance of f to monotonicity. Furthermore, we show that for any constant k > 0, approximating the distance to monotonicity up to a factor of n^{1/2−k} requires 2^{n^k} nonadaptive queries, thereby ruling out a poly(n, 1/α)-query nonadaptive algorithm for such approximations. This answers a question of Seshadhri (Property Testing Review, 2014) for the case of nonadaptive algorithms. Approximating the distance to a property is closely related to tolerantly testing that property. Our lower bound stands in contrast to standard (non-tolerant) testing of monotonicity, which can be done nonadaptively with Õ(n/ε²) queries. We obtain our lower bound by proving an analogous bound for erasure-resilient testers. An α-erasure-resilient tester for a desired property gets oracle access to a function that has at most an α fraction of values erased. The tester has to accept (with probability at least 2/3) if the erasures can be filled in to ensure that the resulting function has the property, and to reject (with probability at least 2/3) if every completion of the erasures results in a function that is ε-far from having the property. Our method yields the same lower bounds for unateness and for being a k-junta. These lower bounds improve exponentially on the existing lower bounds for these properties.
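For intuition about the objects involved (not the paper's distance-approximation algorithm), the sketch below estimates the fraction of hypercube edges on which f decreases, the quantity that drives the standard nonadaptive Õ(n/ε²) edge tester mentioned above; the helper names and toy function are my own:

```python
import random

def violating_edge_fraction(f, n, samples=100_000, rng=random.Random(0)):
    """Estimate the fraction of hypercube edges (x with bit i = 0, x with
    bit i = 1) on which f(lo) > f(hi). A monotone f never decreases along
    an edge, so a nonzero fraction certifies non-monotonicity."""
    violations = 0
    for _ in range(samples):
        x = rng.getrandbits(n)          # random vertex of {0,1}^n
        i = rng.randrange(n)            # random coordinate to flip
        lo = x & ~(1 << i)              # endpoint with bit i = 0
        hi = x | (1 << i)               # endpoint with bit i = 1
        violations += f(lo) > f(hi)
    return violations / samples

# Toy anti-monotone example: f(x) = 1 iff the lowest bit is 0.
# Only edges in direction 0 violate, so the fraction should be about 1/n.
f = lambda x: 1 - (x & 1)
print(violating_edge_fraction(f, n=10))
```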
  4. We give the first communication-optimal document exchange protocol. For any n and k 
  5. Community detection is considered for a stochastic block model graph of n vertices, with K vertices in the planted community, edge probability p for pairs of vertices both in the community, and edge probability q for other pairs of vertices. The main focus of the paper is on weak recovery of the community based on the graph G, with o(K) misclassified vertices on average, in the sublinear regime n^{1−o(1)} ≤ K ≤ o(n). A critical parameter is the effective signal-to-noise ratio λ = K^2 (p − q)^2 / ((n − K) q), with λ = 1 corresponding to the Kesten–Stigum threshold. We show that a belief propagation (BP) algorithm achieves weak recovery if λ > 1/e, beyond the Kesten–Stigum threshold by a factor of 1/e. The BP algorithm only needs to run for log* n + O(1) iterations, with total time complexity O(|E| log* n), where log* n is the iterated logarithm of n. Conversely, if λ ≤ 1/e, no local algorithm can asymptotically outperform trivial random guessing. Furthermore, a linear message-passing algorithm that corresponds to applying a power iteration to the nonbacktracking matrix of the graph is shown to attain weak recovery if and only if λ > 1. In addition, the BP algorithm can be combined with a linear-time voting procedure to achieve the information limit of exact recovery (correctly classifying all vertices with high probability) for all K ≥ (n / log n)(ρ_BP + o(1)), where ρ_BP is a function of p/q.
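To see the parameter λ in action, here is a small experiment of my own. It uses a crude centered-adjacency power iteration as a spectral stand-in, not the paper's belief propagation or nonbacktracking message passing, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, p, q = 2000, 200, 0.05, 0.01

# Effective signal-to-noise ratio from the abstract; lambda = 1 is Kesten-Stigum.
lam = K**2 * (p - q) ** 2 / ((n - K) * q)
print(f"lambda = {lam:.2f}  (Kesten-Stigum: 1, BP weak-recovery bound: {1/np.e:.2f})")

# Sample the graph: community on vertices 0..K-1 with edge prob. p, q elsewhere.
probs = np.full((n, n), q)
probs[:K, :K] = p
upper = np.triu(rng.random((n, n)) < probs, 1)
A = (upper | upper.T).astype(float)

# Spectral stand-in: power iteration on the centered adjacency matrix,
# then guess the K vertices with the largest-magnitude eigenvector entries.
B = A - A.mean()
v = rng.standard_normal(n)
for _ in range(100):
    v = B @ v
    v /= np.linalg.norm(v)
guess = np.argsort(-np.abs(v))[:K]
print("fraction of planted community recovered:", (guess < K).mean())
```

With these parameters λ ≈ 3.6, comfortably above both thresholds, so even this naive spectral method recovers most of the planted community; the regime the paper cares about is the much harder one near λ = 1/e.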