Title: Fully Dynamic Connectivity in $O(\log n(\log\log n)^2)$ Amortized Expected Time
Dynamic connectivity is one of the most fundamental problems in dynamic graph algorithms. We present a randomized Las Vegas dynamic connectivity data structure with $O(\log n(\log\log n)^2)$ amortized expected update time and $O(\log n/\log\log\log n)$ worst-case query time, which comes very close to the cell-probe lower bounds of Patrascu and Demaine (2006) and Patrascu and Thorup (2011).
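The hard direction in fully dynamic connectivity is edge deletion: if edges are only ever inserted, a classical union-find structure already answers connectivity queries in near-constant amortized time, and it is supporting deletions that drives the polylogarithmic cost above. As a point of contrast only (this is not the paper's data structure), a minimal insert-only sketch in Python:

```python
class UnionFind:
    """Insert-only connectivity: add edges, query components.

    Near-constant amortized time per operation, but it cannot handle
    edge deletions -- exactly what the paper's structure adds at
    polylogarithmic amortized cost.
    """

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:                  # path halving
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def insert_edge(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if self.rank[ru] < self.rank[rv]:           # union by rank
            ru, rv = rv, ru
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1

    def connected(self, u, v):
        return self.find(u) == self.find(v)

uf = UnionFind(5)
uf.insert_edge(0, 1)
uf.insert_edge(1, 2)
print(uf.connected(0, 2), uf.connected(0, 4))       # True False
```

Deleting an edge can split a component, and union-find has no efficient way to undo a merge; the paper's Las Vegas structure handles that case within $O(\log n(\log\log n)^2)$ amortized expected time per update.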
Award ID(s):
2221980 1815316 1637546
NSF-PAR ID:
10435098
Journal Name:
TheoretiCS
Volume:
Volume 2
ISSN:
2751-4838
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1.
    ABSTRACT Future generations of radio interferometers targeting the 21 cm signal at cosmological distances with N ≫ 1000 antennas could face a significant computational challenge in building correlators with the traditional architecture, whose computational resource requirement scales as $\mathcal {O}(N^2)$ with array size. The fundamental output of such correlators is the cross-correlation products of all antenna pairs in the array. The FFT-correlator architecture reduces the computational resource scaling to $\mathcal {O}(N\log {N})$ by computing cross-correlation products through a spatial Fourier transform. However, the output of the FFT-correlator is meaningful only when the input antenna voltages are gain- and phase-calibrated. Traditionally, interferometric calibration has used the $\mathcal {O}(N^2)$ cross-correlations produced by a standard correlator. This paper proposes two real-time calibration schemes that could work in parallel with an FFT-correlator as a self-contained $\mathcal {O}(N\log {N})$ correlator system that can be scaled to large-N redundant arrays. We compare the performance and scalability of these two calibration schemes and find that they result in antenna gains whose variance decreases as 1/log N as the array size increases.
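    The $\mathcal{O}(N\log N)$ saving in the abstract above comes from the correlation theorem: a spatial FFT of the antenna voltages, squared in magnitude and inverse-transformed, yields the correlations summed over redundant baselines rather than the individual pair products of a traditional correlator, which is why calibration must happen upstream. A minimal numpy sketch of that identity, assuming a toy circularly redundant one-dimensional array rather than any real instrument:

```python
import numpy as np

N = 16                                             # antennas on a toy circular line
rng = np.random.default_rng(0)
v = rng.normal(size=N) + 1j * rng.normal(size=N)   # calibrated voltages

# Traditional route, O(N^2) total work: sum the pair products over every
# (circular) baseline of separation m.
direct = np.array([sum(v[i] * np.conj(v[(i - m) % N]) for i in range(N))
                   for m in range(N)])

# FFT route, O(N log N): spatial FFT, square the magnitudes, inverse FFT.
fft_based = np.fft.ifft(np.abs(np.fft.fft(v)) ** 2)

np.testing.assert_allclose(direct, fft_based)      # identical up to roundoff
print("redundant-baseline correlations agree")
```

    Both routes recover only baseline-summed correlations, matching the paper's point that per-antenna gain and phase errors cannot be solved for downstream of the FFT.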
  2.
    We consider the classical Minimum Balanced Cut problem: given a graph $G$, compute a partition of its vertices into two subsets of roughly equal volume, while minimizing the number of edges connecting the subsets. We present the first {\em deterministic, almost-linear time} approximation algorithm for this problem. Specifically, our algorithm, given an $n$-vertex $m$-edge graph $G$ and any parameter $1\leq r\leq O(\log n)$, computes a $(\log m)^{r^2}$-approximation for Minimum Balanced Cut on $G$, in time $O\left ( m^{1+O(1/r)+o(1)}\cdot (\log m)^{O(r^2)}\right )$. In particular, we obtain a $(\log m)^{1/\epsilon}$-approximation in time $m^{1+O(1/\sqrt{\epsilon})}$ for any constant $\epsilon$, and a $(\log m)^{f(m)}$-approximation in time $m^{1+o(1)}$, for any slowly growing function $f(m)$. We obtain deterministic algorithms with similar guarantees for the Sparsest Cut and the Lowest-Conductance Cut problems. Our algorithm for the Minimum Balanced Cut problem in fact provides a stronger guarantee: it either returns a balanced cut whose value is close to a given target value, or it certifies that such a cut does not exist by exhibiting a large subgraph of $G$ that has high conductance. We use this algorithm to obtain deterministic algorithms for dynamic connectivity and minimum spanning forest, whose worst-case update time on an $n$-vertex graph is $n^{o(1)}$, thus resolving a major open problem in the area of dynamic graph algorithms. Our work also implies deterministic algorithms for a host of additional problems, whose time complexities match, up to factors subpolynomial in $n$, those of known randomized algorithms. The implications include almost-linear time deterministic algorithms for solving Laplacian systems and for approximating maximum flows in undirected graphs.
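    To make the objective in the abstract above concrete, here is a small self-contained sketch of the quantities involved: the cut size and the conductance of a vertex set (cut edges divided by the smaller side's volume, i.e., degree sum). This is only the definition being approximated, not the paper's algorithm, and the graph is a made-up example:

```python
from collections import defaultdict

def cut_and_conductance(edges, S):
    """Cut size and conductance of vertex set S in an undirected graph.

    conductance(S) = cut(S) / min(vol(S), vol(complement)),
    where vol(.) is the sum of vertex degrees on that side.
    """
    S = set(S)
    deg = defaultdict(int)
    cut = 0
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if (u in S) != (v in S):                   # edge crosses the cut
            cut += 1
    vol_S = sum(deg[u] for u in S)
    vol_T = sum(d for u, d in deg.items() if u not in S)
    return cut, cut / min(vol_S, vol_T)

# Two triangles joined by one bridge: a balanced, low-conductance cut.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(cut_and_conductance(edges, {0, 1, 2}))       # (1, 1/7)
```

    The bridge between the two triangles is simultaneously a balanced cut of value 1 and a conductance-$1/7$ cut; high-conductance subgraphs are exactly the certificates the algorithm exhibits when no such cut exists.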
  3.
    Graph compression or sparsification is a basic information-theoretic and computational question. A major open problem in this research area is whether $(1+\epsilon)$-approximate cut-preserving vertex sparsifiers with size close to the number of terminals exist. As a step towards this goal, we initiate the study of a thresholded version of the problem: for a given parameter $c$, find a smaller graph, which we call \emph{connectivity-$c$ mimicking network}, which preserves connectivity among $k$ terminals exactly up to the value of $c$. We show that connectivity-$c$ mimicking networks of size $O(kc^4)$ exist and can be found in time $m(c\log n)^{O(c)}$. We also give a separate algorithm that constructs such graphs of size $k \cdot O(c)^{2c}$ in time $mc^{O(c)}\log^{O(1)}n$. These results lead to the first offline data structures for answering fully dynamic $c$-edge-connectivity queries for $c \ge 4$ in polylogarithmic time per query as well as more efficient algorithms for survivable network design on bounded treewidth graphs. 
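    The quantity a connectivity-$c$ mimicking network must preserve, per the abstract above, is the minimum edge cut between terminals capped at the value $c$. As a naive reference for that quantity only (not the paper's construction; function name is hypothetical), a unit-capacity augmenting-path computation that stops after $c$ paths:

```python
from collections import defaultdict, deque

def capped_edge_connectivity(edges, s, t, c):
    """Minimum number of edges separating s from t, capped at c.

    Undirected unit-capacity max-flow via shortest augmenting paths
    (Edmonds-Karp), stopping once c edge-disjoint paths are found,
    so the cost is at most c BFS passes.
    """
    cap = defaultdict(int)                   # residual capacities
    adj = defaultdict(set)
    for u, v in edges:
        cap[u, v] += 1                       # an undirected edge becomes
        cap[v, u] += 1                       # two opposing unit arcs
        adj[u].add(v)
        adj[v].add(u)

    flow = 0
    while flow < c:
        parent = {s: None}                   # BFS for a residual s-t path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[u, v] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                  # s and t are separated
            break
        v = t                                # push one unit along the path
        while parent[v] is not None:
            u = parent[v]
            cap[u, v] -= 1
            cap[v, u] += 1
            v = u
        flow += 1
    return flow

# Two edge-disjoint paths join 0 and 2, so the capped value is min(2, c).
edges = [(0, 1), (0, 1), (1, 2), (0, 2)]
print(capped_edge_connectivity(edges, 0, 2, c=4))   # 2
```

    A mimicking network passes this check if it reports the same capped value as the original graph for every partition of the terminals.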
  4. Abstract Community detection is considered for a stochastic block model graph of $n$ vertices, with $K$ vertices in the planted community, edge probability $p$ for pairs of vertices both in the community, and edge probability $q$ for other pairs of vertices. The main focus of the paper is on weak recovery of the community based on the graph $G$, with $o(K)$ misclassified vertices on average, in the sublinear regime $n^{1-o(1)} \le K \le o(n)$. A critical parameter is the effective signal-to-noise ratio $\lambda = K^2(p-q)^2/((n-K)q)$, with $\lambda = 1$ corresponding to the Kesten–Stigum threshold. We show that a belief propagation (BP) algorithm achieves weak recovery if $\lambda > 1/e$, beyond the Kesten–Stigum threshold by a factor of $1/e$. The BP algorithm only needs to run for $\log^* n + O(1)$ iterations, with total time complexity $O(|E|\log^* n)$, where $\log^* n$ is the iterated logarithm of $n$. Conversely, if $\lambda \le 1/e$, no local algorithm can asymptotically outperform trivial random guessing. Furthermore, a linear message-passing algorithm that corresponds to applying a power iteration to the nonbacktracking matrix of the graph is shown to attain weak recovery if and only if $\lambda > 1$. In addition, the BP algorithm can be combined with a linear-time voting procedure to achieve the information limit of exact recovery (correctly classify all vertices with high probability) for all $K \ge (n/\log n)(\rho_{\mathrm{BP}} + o(1))$, where $\rho_{\mathrm{BP}}$ is a function of $p/q$.
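    The thresholds in the abstract above are easy to evaluate numerically. A small sketch of the effective signal-to-noise ratio $\lambda = K^2(p-q)^2/((n-K)q)$, with hypothetical parameters chosen only so that $\lambda$ lands between the two thresholds $1/e$ and $1$:

```python
import math

def snr_lambda(n, K, p, q):
    """Effective SNR from the abstract: lambda = K^2 (p - q)^2 / ((n - K) q)."""
    return K ** 2 * (p - q) ** 2 / ((n - K) * q)

# Made-up parameters: community of 1,000 vertices out of 100,000.
n, K, p, q = 100_000, 1_000, 0.05, 0.02
lam = snr_lambda(n, K, p, q)
print(f"lambda = {lam:.3f}")                                   # ~0.455
print("BP weak recovery (lambda > 1/e):", lam > 1 / math.e)    # True
print("linear message passing (lambda > 1):", lam > 1)         # False
```

    With these numbers BP achieves weak recovery while the linear nonbacktracking power iteration does not, illustrating the gap between the $\lambda > 1/e$ and $\lambda > 1$ conditions.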
  5. Mobile data traffic will exceed PC Internet traffic by 2020. As the number of smartphone users and the amount of data transferred per smartphone grow exponentially, limited battery power is becoming an increasingly critical problem for mobile devices that depend on network I/O. Despite the growing body of research in power management techniques for mobile devices at the hardware layer as well as the lower layers of the networking stack, there has been little work focusing on saving energy at the application layer for mobile systems during network I/O. In this paper, we propose a novel technique, called FastHLA, that can achieve significant energy savings at the application layer during mobile network I/O without sacrificing performance. FastHLA is based on historical log analysis and real-time dynamic tuning of mobile data transfers to achieve the optimization goal. FastHLA can increase the data transfer throughput by up to 10X and decrease the energy consumption by up to 5X compared to state-of-the-art HTTP/2.0 transfers.