Search for: All records

Creators/Authors contains: "Vuong, Thuy-Duong"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

  1. Free, publicly-accessible full text available January 1, 2025
  2. We study the problem of parallelizing sampling from distributions related to determinants: symmetric, nonsymmetric, and partition-constrained determinantal point processes, as well as planar perfect matchings. For these distributions, the partition function, a.k.a. the count, can be obtained via matrix determinants, a highly parallelizable computation; Csanky proved it is in NC. However, parallel counting does not automatically translate to parallel sampling, as classic reductions between the two are inherently sequential. We show that a nearly quadratic parallel speedup over sequential sampling can be achieved for all the aforementioned distributions. If the distribution is supported on subsets of size $k$ of a ground set, we show how to approximately produce a sample in $\widetilde{O}(k^{\frac{1}{2} + c})$ time with polynomially many processors for any constant $c>0$. In the two special cases of symmetric determinantal point processes and planar perfect matchings, our bound improves to $\widetilde{O}(\sqrt k)$ and we show how to sample exactly in these cases. As our main technical contribution, we fully characterize the limits of batching for the steps of sampling-to-counting reductions. We observe that only $O(1)$ steps can be batched together if we strive for exact sampling, even in the case of nonsymmetric determinantal point processes. However, we show that for approximate sampling, $\widetilde{\Omega}(k^{\frac{1}{2}-c})$ steps can be batched together, for any entropically independent distribution, which includes all mentioned classes of determinantal point processes. Entropic independence and related notions have been the source of breakthroughs in Markov chain analysis in recent years, so we expect our framework to prove useful for distributions beyond those studied in this work.
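To see why the classic sampling-to-counting reduction is inherently sequential, here is a minimal brute-force sketch (not taken from the record, and not its parallel algorithm): a toy symmetric k-DPP weight stands in for the determinant-based counting oracle, and each of the n membership decisions depends on all earlier ones. The kernel, sizes, and helper names are illustrative assumptions.

```python
import itertools
import random

import numpy as np

def sample_via_counting(n, k, weight, rng=random.Random(0)):
    """Classic sequential counting-to-sampling reduction on k-subsets of [n].

    weight(S) is the unnormalized probability of a k-subset S (here a
    determinant, so the totals below are what a counting oracle provides).
    Each element is decided in turn by comparing the total weight of
    completions that include it against those that exclude it; the n
    decisions are inherently sequential.
    """
    def total(include, exclude):
        # brute-force stand-in for the counting oracle
        rest = [j for j in range(n) if j not in include and j not in exclude]
        need = k - len(include)
        return sum(weight(tuple(sorted(include + list(c))))
                   for c in itertools.combinations(rest, need))

    chosen, rejected = [], []
    for i in range(n):
        w_in = total(chosen + [i], rejected) if len(chosen) < k else 0.0
        w_out = total(chosen, rejected + [i])
        if rng.random() < w_in / (w_in + w_out):
            chosen.append(i)
        else:
            rejected.append(i)
    return chosen

# toy symmetric k-DPP: the weight of S is det of the principal minor L[S, S]
gen = np.random.default_rng(0)
B = gen.standard_normal((6, 6))
L = B @ B.T
print(sample_via_counting(6, 3, lambda S: float(np.linalg.det(L[np.ix_(S, S)]))))
```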
  3. We develop a framework for sampling from discrete distributions $\mu$ on the hypercube $\{\pm 1\}^n$ by sampling from continuous distributions supported on $\mathbb{R}^n$ obtained by convolution with spherical Gaussians. We show that for well-studied families of discrete distributions $\mu$, convolving $\mu$ with Gaussians yields well-conditioned log-concave distributions, as long as the variance of the Gaussian is above an $O(1)$ threshold. We then reduce the task of sampling from $\mu$ to sampling from Gaussian-convolved distributions. Our reduction is based on a stochastic process widely studied under different names: backward diffusion in diffusion models, and stochastic localization. We discretize this process in a novel way that allows for high accuracy and parallelism. As our main application, we resolve open questions raised by Anari, Hu, Saberi, and Schild on the parallel sampling of distributions that admit parallel counting. We show that determinantal point processes can be sampled via RNC algorithms, that is, in time $\log(n)^{O(1)}$ using $n^{O(1)}$ processors. For a wider class of distributions, we show our framework yields Quasi-RNC sampling, i.e., $\log(n)^{O(1)}$ time using $n^{O(\log n)}$ processors. This wider class includes non-symmetric determinantal point processes and random Eulerian tours in digraphs, the latter nearly resolving another open question raised by prior work. Of potentially independent interest, we introduce and study a notion of smoothness for discrete distributions that we call transport stability, which we use to control the propagation of error in our framework. Additionally, we connect transport stability to constructions of optimally mixing local random walks and concentration inequalities.
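A rough sketch of the underlying process on a tiny hypercube may help fix ideas; the naive Euler discretization, toy Ising-style weights, horizon, and step count below are illustrative assumptions and are not the record's accurate, parallel discretization.

```python
import itertools

import numpy as np

def localization_sample(logmu, n, T=50.0, steps=2000, rng=np.random.default_rng(1)):
    """Naive Euler discretization of stochastic localization (a.k.a. backward
    diffusion) on {-1,+1}^n.  The tilted measure at time t with observation y is
    nu_{t,y}(x) ∝ mu(x) * exp(<y, x>)  (the factor exp(-t*||x||^2/2) is constant
    on the hypercube), and the process follows dy_t = mean(nu_{t,y_t}) dt + dB_t
    until nu_{t,y_t} localizes on a single vertex distributed approximately as mu.
    """
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    base_logw = np.array([logmu(x) for x in cube])

    def posterior(y):
        logw = base_logw + cube @ y
        w = np.exp(logw - logw.max())
        return w / w.sum()

    y, dt = np.zeros(n), T / steps
    for _ in range(steps):
        drift = posterior(y) @ cube                   # posterior mean of x
        y += drift * dt + np.sqrt(dt) * rng.standard_normal(n)
    p = posterior(y)                                  # essentially a point mass
    return cube[rng.choice(len(cube), p=p)]

# toy ferromagnetic Ising-style measure on {-1,+1}^4
J = 0.3 * (np.ones((4, 4)) - np.eye(4))
print(localization_sample(lambda x: 0.5 * x @ J @ x, n=4))
```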
  4. We design fast algorithms for repeatedly sampling from strongly Rayleigh distributions, which include as special cases random spanning tree distributions and determinantal point processes. For a graph $G=(V, E)$, we show how to approximately sample uniformly random spanning trees from $G$ in $\widetilde{O}(\lvert V\rvert)$ time per sample (throughout, $\widetilde{O}(\cdot)$ hides polylogarithmic factors in $n$) after an initial $\widetilde{O}(\lvert E\rvert)$ time preprocessing. This is the first nearly-linear runtime in the output size, which is clearly optimal. For a determinantal point process on $k$-sized subsets of a ground set of $n$ elements, defined via an $n\times n$ kernel matrix, we show how to approximately sample in $\widetilde{O}(k^\omega)$ time after an initial $\widetilde{O}(nk^{\omega-1})$ time preprocessing, where $\omega<2.372864$ is the matrix multiplication exponent. The time to compute just the weight of the output set is simply $\simeq k^\omega$, a natural barrier that suggests our runtime might be optimal for determinantal point processes as well. As a corollary, we even improve the state of the art for obtaining a single sample from a determinantal point process, from the prior runtime of $\widetilde{O}(\min\{nk^2, n^\omega\})$ to $\widetilde{O}(nk^{\omega-1})$. In our main technical result, we achieve the optimal limit on domain sparsification for strongly Rayleigh distributions. In domain sparsification, sampling from a distribution $\mu$ on $\binom{[n]}{k}$ is reduced to sampling from related distributions on $\binom{[t]}{k}$ for $t\ll n$. We show that for strongly Rayleigh distributions, the domain size can be reduced to nearly linear in the output size $t=\widetilde{O}(k)$, improving the state of the art from $t=\widetilde{O}(k^2)$ for general strongly Rayleigh distributions and the more specialized $t=\widetilde{O}(k^{1.5})$ for spanning tree distributions. Our reduction involves sampling from $\widetilde{O}(1)$ domain-sparsified distributions, all of which can be produced efficiently assuming approximate overestimates for marginals of $\mu$ are known and stored in a convenient data structure. Having access to marginals is the discrete analog of having access to the mean and covariance of a continuous distribution, or equivalently knowing "isotropy" for the distribution, the key behind optimal samplers in the continuous setting based on the famous Kannan-Lovász-Simonovits (KLS) conjecture. We view our result as analogous in spirit to the KLS conjecture and its consequences for sampling, but for discrete strongly Rayleigh measures.
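For reference, the object in the first result can be drawn exactly by a classical method, Wilson's algorithm via loop-erased random walks; the sketch below is that classical baseline (with an assumed adjacency-list input), not the record's amortized $\widetilde{O}(\lvert V\rvert)$-per-sample scheme.

```python
import random

def wilson_spanning_tree(adj, rng=random.Random(0)):
    """Exact uniform spanning tree of a connected undirected graph via
    Wilson's algorithm: grow the tree by loop-erased random walks."""
    vertices = list(adj)
    in_tree = {vertices[0]}          # arbitrary root
    parent = {}
    for v in vertices:
        # random walk from v, remembering only the last exit taken from each vertex
        u, last_exit = v, {}
        while u not in in_tree:
            last_exit[u] = rng.choice(adj[u])
            u = last_exit[u]
        # retracing the last exits from v gives the loop-erased path; attach it
        u = v
        while u not in in_tree:
            parent[u] = last_exit[u]
            in_tree.add(u)
            u = last_exit[u]
    return sorted(parent.items())    # tree edges as (child, parent) pairs

# small example: a 4-cycle with a chord
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(wilson_spanning_tree(graph))
```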
    Loh, Po-Ling; Raginsky, Maxim (Eds.)
    We establish a connection between sampling and optimization on discrete domains. For a family of distributions $\mu$, defined on size-$k$ subsets of a ground set of elements and closed under external fields, we show that rapid mixing of natural local random walks implies the existence of simple approximation algorithms to find $\max \mu(\cdot)$. More precisely, we show that if $t$-step down-up random walks have spectral gap at least inverse polynomially large, then $t$-step local search finds $\max \mu(\cdot)$ within a factor of $k^{O(k)}$. As the main application of our result, we show that $2$-step local search achieves a nearly-optimal $k^{O(k)}$-factor approximation for MAP inference on nonsymmetric $k$-DPPs. This is the first nontrivial multiplicative approximation algorithm for this problem. In our main technical result, we show that an exchange inequality, a concept rooted in discrete convex analysis, can be derived from fast mixing of local random walks. We further advance the state of the art on the mixing of random walks for nonsymmetric DPPs and, more generally, sector-stable distributions, by obtaining the tightest possible bound on the step size needed for polynomial-time mixing of random walks. We bring the step size down by a factor of $2$ compared to prior works, and consequently get a quadratic improvement on the runtime of local search steps; this improvement is potentially of independent interest in sampling applications.
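The algorithm analyzed here, t-step local search, is simple to state; below is a brute-force rendering with exchange size t = 2, the setting of the main application. The toy nonsymmetric kernel (chosen so its symmetric part is PSD and all principal minors are nonnegative) and the exhaustive neighborhood scan are illustrative assumptions.

```python
import itertools

import numpy as np

def local_search_map(L, k, t=2, rng=np.random.default_rng(2)):
    """t-step local search for MAP inference on a k-DPP with kernel L:
    repeatedly replace at most t elements of the current set whenever the
    exchange increases det(L[S, S]); stop at a local maximum."""
    n = L.shape[0]
    score = lambda S: float(np.linalg.det(L[np.ix_(S, S)]))
    S = sorted(rng.choice(n, size=k, replace=False).tolist())
    improved = True
    while improved:
        improved = False
        best_val, best_set = score(S), S
        for r in range(1, t + 1):
            for removed in itertools.combinations(S, r):
                outside = [j for j in range(n) if j not in S]
                for added in itertools.combinations(outside, r):
                    candidate = sorted((set(S) - set(removed)) | set(added))
                    val = score(candidate)
                    if val > best_val:
                        best_val, best_set, improved = val, candidate, True
        S = best_set
    return S, best_val

# toy nonsymmetric kernel: L + L^T = 2 B B^T is PSD, so det(L[S, S]) >= 0
gen = np.random.default_rng(2)
B = gen.standard_normal((8, 4))
C = gen.standard_normal((8, 8))
L = B @ B.T + (C - C.T)
print(local_search_map(L, k=3))
```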
  6. We introduce a notion called entropic independence that is an entropic analog of spectral notions of high-dimensional expansion. Informally, entropic independence of a background distribution $\mu$ on $k$-sized subsets of a ground set of elements says that for any (possibly randomly chosen) set $S$, the relative entropy of a single element of $S$ drawn uniformly at random carries at most an $O(1/k)$ fraction of the relative entropy of $S$. Entropic independence is the analog of the notion of spectral independence when one replaces variance by entropy. We use entropic independence to derive tight mixing time bounds, overcoming the lossy nature of spectral analysis of Markov chains on exponential-sized state spaces. In our main technical result, we show a general way of deriving entropy contraction, a.k.a. modified log-Sobolev inequalities, for down-up random walks from spectral notions. We show that spectral independence of a distribution under arbitrary external fields automatically implies entropic independence. We furthermore extend our theory to the case where spectral independence does not hold under arbitrary external fields. To do this, we introduce a framework for obtaining tight mixing time bounds for Markov chains based on what we call restricted modified log-Sobolev inequalities, which guarantee entropy contraction not for all distributions, but for those in a sufficiently large neighborhood of the stationary distribution. To derive our results, we relate entropic independence to properties of polynomials: $\mu$ is entropically independent exactly when a transformed version of the generating polynomial of $\mu$ is upper bounded by its linear tangent; this property is implied by concavity of this transformation, which was shown by prior work to be locally equivalent to spectral independence. We apply our results to obtain (1) tight modified log-Sobolev inequalities and mixing times for multi-step down-up walks on fractionally log-concave distributions, (2) the tight mixing time of $O(n\log n)$ for Glauber dynamics on Ising models whose interaction matrix has eigenspectrum lying within an interval of length smaller than $1$, improving upon the prior quadratic dependence on $n$, and (3) nearly-linear time $\widetilde O_{\delta}(n)$ samplers for the hardcore and Ising models on $n$-node graphs that have $\delta$-relative gap to the tree-uniqueness threshold. In the last application, our bound on the running time does not depend on the maximum degree $\Delta$ of the graph, and is therefore optimal even for high-degree graphs; in fact, for such graphs it is sublinear in the size of the graph.
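Application (2) concerns Glauber dynamics on Ising models; a minimal sketch of that chain follows, with an illustrative interaction matrix whose eigenvalues lie in a short interval. The specific parameters and step count are assumptions, and no mixing-time verification is attempted.

```python
import numpy as np

def ising_glauber(J, h, steps, rng=np.random.default_rng(3)):
    """Glauber dynamics for mu(x) ∝ exp(x·Jx/2 + h·x) on {-1,+1}^n:
    each step picks a uniformly random site and resamples its spin from the
    conditional distribution given the other spins."""
    n = len(h)
    x = rng.choice([-1.0, 1.0], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        local_field = J[i] @ x - J[i, i] * x[i] + h[i]
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * local_field))   # P(x_i = +1 | rest)
        x[i] = 1.0 if rng.random() < p_plus else -1.0
    return x

# toy Ising model: the eigenvalues of J lie in an interval of length about 0.25
n = 50
J = 0.005 * (np.ones((n, n)) - np.eye(n))
print(ising_glauber(J, h=np.zeros(n), steps=10 * n * int(np.log(n) + 1)))
```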
  7. Braverman, Mark (Ed.)
    We present a framework for speeding up the time it takes to sample from discrete distributions $\mu$ defined over subsets of size $k$ of a ground set of $n$ elements, in the regime where $k$ is much smaller than $n$. We show that if one has access to estimates of marginals $\mathbb{P}_{S\sim \mu}[i\in S]$, then the task of sampling from $\mu$ can be reduced to sampling from related distributions $\nu$ supported on size-$k$ subsets of a ground set of only $n^{1-\alpha}\cdot \operatorname{poly}(k)$ elements. Here, $1/\alpha\in [1, k]$ is the parameter of entropic independence for $\mu$. Further, our algorithm only requires sparsified distributions $\nu$ that are obtained by applying a sparse (mostly $0$) external field to $\mu$, an operation that, for many distributions $\mu$ of interest, retains algorithmic tractability of sampling from $\nu$. This phenomenon, which we dub domain sparsification, allows us to pay a one-time cost of estimating the marginals of $\mu$, and in return reduce the amortized cost needed to produce many samples from the distribution $\mu$, as is often needed in upstream tasks such as counting and inference. For a wide range of distributions where $\alpha=\Omega(1)$, our result reduces the domain size, and, as a corollary, the cost per sample, by a $\operatorname{poly}(n)$ factor. Examples include monomers in a monomer-dimer system, non-symmetric determinantal point processes, and partition-constrained strongly Rayleigh measures. Our work significantly extends the reach of prior work of Anari and Dereziński, who obtained domain sparsification for distributions with a log-concave generating polynomial (corresponding to $\alpha=1$). As a corollary of our new analysis techniques, we also obtain a less stringent requirement on the accuracy of marginal estimates even for the case of log-concave polynomials; roughly speaking, we show that a constant-factor approximation is enough for domain sparsification, improving over the $O(1/k)$ relative error established in prior work.
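The key operation in this reduction, applying a sparse external field, is concrete enough to illustrate directly; the brute-force toy below shows how a mostly-zero field confines a k-DPP to the field's small support. The kernel, field, and sizes are assumptions, and the record's actual choice of field from marginal estimates is not reproduced.

```python
import itertools

import numpy as np

def apply_external_field(n, k, weight, lam):
    """Applying an external field lam in R_{>=0}^n to a distribution mu on
    k-subsets of [n] gives nu(S) ∝ mu(S) * prod_{i in S} lam[i].  A sparse
    (mostly-zero) field confines nu to k-subsets of lam's small support, so
    the sparsified distribution lives on a smaller ground set.
    (Brute-force enumeration, for illustration only.)"""
    support = [i for i in range(n) if lam[i] > 0]
    subsets = list(itertools.combinations(support, k))
    w = np.array([weight(S) * np.prod([lam[i] for i in S]) for S in subsets])
    return subsets, w / w.sum()

# toy k-DPP on 8 elements, sparsified to the 5-element support of the field
gen = np.random.default_rng(4)
B = gen.standard_normal((8, 8))
L = B @ B.T
field = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0])
subsets, probs = apply_external_field(
    8, 3, lambda S: float(np.linalg.det(L[np.ix_(S, S)])), field)
print(subsets[int(np.argmax(probs))], float(probs.max()))
```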