Consider an algorithm performing a computation on a huge random object (for example a random graph or a "long" random walk). Is it necessary to generate the entire object prior to the computation, or is it possible to provide query access to the object and sample it incrementally "on-the-fly" (as requested by the algorithm)? Such an implementation should emulate the random object by answering queries in a manner consistent with an instance of the random object sampled from the true distribution (or close to it). This paradigm is useful when the algorithm is sublinear, so that sampling the entire object up front would ruin its efficiency. Our first set of results focuses on undirected graphs with independent edge probabilities, i.e., each edge is chosen as an independent Bernoulli random variable. We provide a general implementation for this model under certain assumptions. We then use this to obtain the first efficient local implementations for the Erdős-Rényi G(n,p) model for all values of p, and for the Stochastic Block model. As in previous local-access implementations for random graphs, we support VertexPair and NextNeighbor queries. In addition, we introduce a new RandomNeighbor query. Next, we give the first local-access implementation for AllNeighbors queries in …
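The query-access paradigm above can be illustrated with a toy lazy sampler for G(n, p): each edge is decided by a single memoized coin flip the first time either endpoint asks about it, so answers stay consistent with one true sample of the graph. This is a naive sketch only (the class and method names are illustrative, and the scanning NextNeighbor below costs O(n) coin flips in the worst case, which the efficient implementations discussed in the abstract are designed to avoid):

```python
import random

class LazyGnp:
    """Naive local-access sampler for G(n, p): each edge is decided
    lazily with one Bernoulli coin flip and memoized so later queries
    stay consistent with the same sampled graph."""

    def __init__(self, n, p, seed=None):
        self.n, self.p = n, p
        self.decided = {}              # (u, v) with u < v  ->  bool
        self.rng = random.Random(seed)

    def vertex_pair(self, u, v):
        """Is edge {u, v} present? Decide once, remember forever."""
        key = (min(u, v), max(u, v))
        if key not in self.decided:
            self.decided[key] = self.rng.random() < self.p
        return self.decided[key]

    def next_neighbor(self, u, after=-1):
        """Smallest neighbor of u with index > after (None if none).
        Naive version: scans candidates, O(n) flips worst case."""
        for v in range(after + 1, self.n):
            if v != u and self.vertex_pair(u, v):
                return v
        return None
```

Because every coin flip is cached, interleaved VertexPair and NextNeighbor queries can never contradict each other, which is the consistency requirement the abstract describes.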
Enhanced diffusivity in perturbed senile reinforced random walk models
We consider the diffusivity of random walks with transition probabilities depending on the number of consecutive traversals of the last traversed edge, the so-called senile reinforced random walk (SeRW). In one dimension, the walk is known to be subdiffusive with identity reinforcement function. We perturb the model by introducing a small probability δ of escaping the last traversed edge at each step. The perturbed SeRW model is diffusive for any δ > 0, with enhanced diffusivity (≫ O(δ²)) in the small-δ regime. We further study stochastically perturbed SeRW models in which the last-edge escape probability has the form δ ξ_n, with the ξ_n independent random variables. Enhanced diffusivity in such models is logarithmically close to the so-called residual diffusivity (positive in the zero-δ limit), with diffusivity between O(1/|log δ|) and O(1/log |log δ|). Finally, we generalize our results to higher dimensions, where the unperturbed model is already diffusive. There, the enhanced diffusivity can be as much as O(1/log² δ).
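The perturbed model lends itself to a quick Monte Carlo check of diffusive scaling. The sketch below is a plausible 1D discretization under stated assumptions: with identity reinforcement f(m) = m, after m consecutive traversals of the last edge the walker re-crosses it with probability (1 − δ)·m/(1 + m), and otherwise leaves that edge (in 1D, continuing in its current direction). SeRW variants in the literature differ in exactly these normalization details, so this is an illustration of the model class, not the paper's precise definition:

```python
import random

def serw_msd(steps, delta, trials=300, seed=0):
    """Monte Carlo mean squared displacement of a 1D perturbed senile
    reinforced random walk.  Assumed dynamics (one common normalization,
    hypothetical here): repeat the last edge with probability
    (1 - delta) * m / (1 + m) after m consecutive traversals, else
    escape the edge and reset the traversal counter."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        step = rng.choice((-1, 1))   # direction of the first step
        x, m = step, 1               # position, consecutive traversals
        for _ in range(steps - 1):
            if rng.random() < (1 - delta) * m / (1 + m):
                step = -step         # re-cross the last edge
                m += 1
            else:
                m = 1                # escape: take the other edge (1D)
            x += step
        total += x * x
    return total / trials
```

For δ > 0 the repeat probability is bounded away from 1, so the mean squared displacement grows linearly in the number of steps, consistent with the diffusive behavior stated above.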
 Award ID(s):
 1632935
 Publication Date:
 NSFPAR ID:
 10158977
 Journal Name:
 Asymptotic Analysis
 ISSN:
 0921-7134
 Sponsoring Org:
 National Science Foundation
More Like this


We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time (the Õ(·) notation hides poly(log n) factors). The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^ω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^ω, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. '96, Kelner-Madry '09, Madry et al. '15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew the determinant-based and random-walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{-2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|)ε^{-2}, …
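The key structural fact the abstract cites, that effective resistances between retained vertices are preserved when other vertices are eliminated into a Schur complement, can be checked numerically on a small example. This sketch uses exact dense linear algebra, not the approximate Schur complements of the algorithm itself:

```python
import numpy as np

def effective_resistance(L, u, v):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v) for a Laplacian L."""
    Lp = np.linalg.pinv(L)
    e = np.zeros(L.shape[0])
    e[u], e[v] = 1.0, -1.0
    return float(e @ Lp @ e)

def schur_complement(L, keep):
    """Eliminate all vertices outside `keep`:
    S = L_AA - L_AB L_BB^{-1} L_BA."""
    keep = list(keep)
    drop = [i for i in range(L.shape[0]) if i not in keep]
    A = L[np.ix_(keep, keep)]
    B = L[np.ix_(keep, drop)]
    D = L[np.ix_(drop, drop)]
    return A - B @ np.linalg.solve(D, B.T)

# Unit-weight path 0-1-2-3; eliminate the interior vertices 1 and 2.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
S = schur_complement(L, keep=[0, 3])
# R_eff(0, 3) is 3 (three unit resistors in series) in both L and S.
```

The resulting 2×2 Schur complement is itself the Laplacian of a smaller graph on the retained vertices, which is exactly what lets the algorithm recurse on eliminated subgraphs.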

The best known solutions for k-message broadcast in dynamic networks of size n require Ω(nk) rounds. In this paper, we ask whether these bounds can be improved by smoothed analysis. To do so, we study perhaps the most natural randomized algorithm for disseminating tokens in this setting: at every time step, choose a token to broadcast uniformly at random from the set of tokens you know. We show that with even a small amount of smoothing (i.e., one random edge added per round), this natural strategy solves k-message broadcast in Õ(n + k³) rounds, with high probability, beating the best known bounds for k = o(√n) and matching the Ω(n + k) lower bound for static networks for k = O(n^{1/3}) (ignoring logarithmic factors). In fact, the main result we show is even stronger and more general: given 𝓁-smoothing (i.e., 𝓁 random edges added per round), this simple strategy terminates in O(kn^{2/3} log^{1/3}(n) 𝓁^{-1/3}) rounds. We then prove this analysis close to tight with an almost-matching lower bound. To better understand the impact of smoothing on information spreading, we next turn our attention to static networks, proving a tight bound of Õ(k√n) rounds to solve k-message broadcast, which is better than what our strategy can achieve in the …
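The natural strategy and the smoothing model are simple enough to simulate. The sketch below is illustrative only: it stands in a fixed ring for the adversarial dynamic graph (the paper's bounds are against worst-case dynamic topologies, not a ring) and adds the assumed number of fresh random edges each round; function and parameter names are hypothetical:

```python
import random

def broadcast_rounds(n, k, smoothing, seed=0, max_rounds=100000):
    """Simulate the natural randomized broadcast strategy: every round,
    each node sends one uniformly random token it knows to all current
    neighbors.  Network model (an assumption for illustration): a fixed
    ring plus `smoothing` fresh random edges per round."""
    rng = random.Random(seed)
    know = [set() for _ in range(n)]
    for t in range(k):                       # token t starts at node t mod n
        know[t % n].add(t)
    for rnd in range(1, max_rounds + 1):
        edges = [(i, (i + 1) % n) for i in range(n)]   # ring edges
        for _ in range(smoothing):                     # smoothed edges
            u, v = rng.sample(range(n), 2)
            edges.append((u, v))
        sends = [rng.choice(sorted(s)) if s else None for s in know]
        for u, v in edges:                   # symmetric exchange per edge
            if sends[u] is not None:
                know[v].add(sends[u])
            if sends[v] is not None:
                know[u].add(sends[v])
        if all(len(s) == k for s in know):
            return rnd                       # all nodes know all k tokens
    return None
```

Even this toy setup shows the qualitative effect the abstract describes: adding a few random edges per round lets tokens shortcut across the network instead of crawling along it.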

Many problems on data streams have been studied at two extremes of difficulty: either allowing randomized algorithms in the static setting (where they should err with bounded probability on the worst-case stream), or requiring deterministic and infallible algorithms. Some recent works have considered the adversarial setting, in which a randomized streaming algorithm must succeed even on data streams provided by an adaptive adversary that can see the intermediate outputs of the algorithm. In order to better understand the differences between these models, we study a streaming task called "Missing Item Finding". In this problem, for r < n, one is given a data stream a_1, …, a_r of elements in [n] (possibly with repetitions), and must output some x ∈ [n] which does not equal any of the a_i. We prove that, for r = n^{Θ(1)} and δ = 1/poly(n), the space required for randomized algorithms that solve this problem in the static setting with error δ is Θ(polylog(n)); for algorithms in the adversarial setting with error δ, Θ((1 + r²/n) · polylog(n)); and for deterministic algorithms, Θ(r/polylog(n)). Because our adversarially robust algorithm relies on free access to a string of O(r log n) …
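The deterministic space regime has a particularly transparent witness: since the stream has only r elements, tracking just the candidates {1, …, r+1} (valid because r < n) guarantees a survivor. The sketch below illustrates the problem and this Θ(r log n)-bit deterministic baseline; it is not one of the polylog-space randomized algorithms from the abstract:

```python
def missing_item(stream, n, r):
    """Deterministic baseline for Missing Item Finding: watch only the
    candidates {1, ..., r+1}.  A stream of r items can eliminate at
    most r of these r+1 candidates, so at least one survives.
    Space: r+1 candidates of log(n) bits each, i.e. O(r log n) bits."""
    assert r < n
    candidates = set(range(1, r + 2))
    for a in stream:
        candidates.discard(a)
    return min(candidates)
```

This matches the flavor of the deterministic lower bound above: any infallible algorithm must effectively remember on the order of r items' worth of information.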

Context. Cool stars, such as M giants, can only be analyzed in the near-infrared (NIR) regime due to the ubiquitous titanium oxide features in optical spectra of stars with T_eff < 4000 K. In dust-obscured regions such as the inner bulge and the Galactic center region, the intrinsically bright M giants observed in the NIR are an optimal option for studying stellar abundances and the chemical evolution of stellar populations. Because of the uncertainties in photometric methods, a method for determining the stellar parameters of M giants from the NIR spectra themselves is needed. Aims. We develop a method for determining the stellar parameters of M giants from NIR spectra. We validate the method by deriving the stellar parameters for nearby, well-studied M giants with spectra from the spectral library of the Immersion GRating INfrared Spectrograph (IGRINS). We demonstrate the accuracy and precision of our method by determining the stellar parameters and α-element trends versus metallicity for solar-neighborhood M giants. Methods. We carried out new observations of 44 M giant stars with IGRINS mounted on the Gemini South telescope. We also obtained the full H- and K-band IGRINS spectra of six nearby, well-studied M giants at a spectral resolving …