The Eden Model in $\mathbb{R}^d$
 NSF-PAR ID: 10480448
 Publisher / Repository: Springer Science + Business Media
 Date Published:
 Journal Name: Journal of Applied and Computational Topology
 ISSN: 2367-1726
 Format(s): Medium: X
 Sponsoring Org: National Science Foundation
More Like this

Abstract: The Eden cell growth model is a simple discrete stochastic process which produces a “blob” (aggregation of cells) in $\mathbb{R}^d$: start with one cube in the regular grid, and at each time step add a neighboring cube uniformly at random. This process has been used as a model for the growth of aggregations, tumors, and bacterial colonies and the healing of wounds, among other natural processes. Here, we study the topology and local geometry of the resulting structure, establishing asymptotic bounds for Betti numbers. Our main result is that the Betti numbers at time $t$ grow at a rate between $t^{(d-1)/d}$ and $P_d(t)$, where $P_d(t)$ is the size of the site perimeter. Assuming a widely believed conjecture, this establishes the rate of growth of the Betti numbers in every dimension. We also present the results of computational experiments on finer aspects of the geometry and topology, such as persistent homology and the distribution of shapes of holes.
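The growth process described in the abstract is easy to simulate. Below is a minimal Python sketch, assuming the common variant in which the new cube is drawn uniformly from the site perimeter (the empty grid cells adjacent to the blob); the function name and seeding are illustrative, not from the paper:

```python
import random

def eden_growth(t, d=2, seed=0):
    """Simulate t steps of the Eden model on the d-dimensional integer grid:
    start with one cell at the origin, then repeatedly add a cell chosen
    uniformly at random from the site perimeter."""
    rng = random.Random(seed)
    origin = (0,) * d
    cells = {origin}

    def neighbors(c):
        for i in range(d):
            for delta in (-1, 1):
                yield c[:i] + (c[i] + delta,) + c[i + 1:]

    # site perimeter: empty cells adjacent to the blob
    perimeter = set(neighbors(origin))
    for _ in range(t - 1):
        new = rng.choice(sorted(perimeter))  # sorted for reproducibility
        perimeter.discard(new)
        cells.add(new)
        for nb in neighbors(new):
            if nb not in cells:
                perimeter.add(nb)
    return cells, perimeter

cells, per = eden_growth(500, d=2)
print(len(cells), len(per))
```

Tracking the perimeter alongside the blob also gives direct access to the quantity $P_d(t)$ that bounds the Betti numbers in the main result.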
Abstract: Extending computational harmonic analysis tools from the classical setting of regular lattices to the more general setting of graphs and networks is very important, and much research has been done recently. The generalized Haar–Walsh transform (GHWT) developed by Irion and Saito (2014) is a multiscale transform for signals on graphs, which is a generalization of the classical Haar and Walsh–Hadamard transforms. We propose the extended generalized Haar–Walsh transform (eGHWT), which is a generalization of the adapted time–frequency tilings of Thiele and Villemoes (1996). The eGHWT examines the efficiency not only of graph-domain partitions but also of “sequency-domain” partitions simultaneously. Consequently, the eGHWT and its associated best-basis selection algorithm for graph signals significantly improve the performance of the previous GHWT at a similar computational cost, $O(N \log N)$, where $N$ is the number of nodes of an input graph. While the GHWT best-basis algorithm seeks the most suitable orthonormal basis for a given task among more than $(1.5)^N$ possible orthonormal bases in $\mathbb{R}^N$, the eGHWT best-basis algorithm can find a better one by searching through more than $0.618\cdot(1.84)^N$ possible orthonormal bases in $\mathbb{R}^N$. This article describes the details of the eGHWT best-basis algorithm and demonstrates its superiority using several examples including genuine graph signals as well as conventional digital images viewed as graph signals. Furthermore, we also show how the eGHWT can be extended to 2D signals and matrix-form data by viewing them as a tensor product of graphs generated from their columns and rows, and demonstrate its effectiveness on applications such as image approximation.
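For readers unfamiliar with the classical side, the lattice-setting transform that the GHWT and eGHWT generalize can be sketched with the textbook unnormalized fast Walsh–Hadamard butterfly (this is standard background, not code from the paper):

```python
def fwht(a):
    """Unnormalized fast Walsh-Hadamard transform of a sequence whose
    length is a power of two, via the standard in-place butterfly."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # butterfly step
        h *= 2
    return a

print(fwht([1, 0, 1, 0]))  # → [2, 2, 0, 0]
```

Applying `fwht` twice returns the input scaled by its length, reflecting that the (unnormalized) Walsh–Hadamard transform is its own inverse up to a factor of $N$.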
Abstract: It has been recently established in David and Mayboroda (Approximation of Green functions and domains with uniformly rectifiable boundaries of all dimensions, arXiv:2010.09793) that on uniformly rectifiable sets the Green function is almost affine in the weak sense, and moreover, in some scenarios such Green function estimates are equivalent to the uniform rectifiability of a set. The present paper tackles a strong analogue of these results, starting with the “flagship” degenerate operators on sets with lower-dimensional boundaries. We consider the elliptic operators $L_{\beta,\gamma} = \textrm{div}\, D^{d+1+\gamma-n} \nabla$ associated to a domain $\Omega \subset \mathbb{R}^n$ with a uniformly rectifiable boundary $\Gamma$ of dimension $d < n-1$, the now usual distance to the boundary $D = D_\beta$ given by $D_\beta(X)^{-\beta} = \int_{\Gamma} |X-y|^{-d-\beta}\, d\sigma(y)$ for $X \in \Omega$, where $\beta > 0$ and $\gamma \in (-1,1)$. In this paper we show that the Green function $G$ for $L_{\beta,\gamma}$, with pole at infinity, is well approximated by multiples of $D^{1-\gamma}$, in the sense that the function $\big| D\nabla \big(\ln \big( \frac{G}{D^{1-\gamma}} \big)\big)\big|^2$ satisfies a Carleson measure estimate on $\Omega$. We underline that the strong and the weak results are different in nature and, of course, at the level of the proofs: the latter extensively used compactness arguments, while the present paper relies on some intricate integration by parts and the properties of the “magical” distance function from David et al. (Duke Math J, to appear).
Abstract: In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:
- Certifying that a list of $n$ integers has no 3SUM solution can be done in Merlin–Arthur time $\tilde{O}(n)$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $\tilde{O}(n^{1.5})$ time (that is, there is a proof system with proofs of length $\tilde{O}(n^{1.5})$ and a deterministic verifier running in $\tilde{O}(n^{1.5})$ time).
- Counting the number of $k$-cliques with total edge weight equal to zero in an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^{\lceil k/2\rceil})$ (where $k\ge 3$). For odd $k$, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an $m$-edge graph can be done in Merlin–Arthur time $\tilde{O}(m)$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count $k$-cliques in unweighted graphs, and had worse running times for small $k$.
- Computing the All-Pairs Shortest Distances matrix for an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^2)$. Note this is optimal, as the matrix can have $\Omega(n^2)$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $\tilde{O}(n^{2.94})$ nondeterministic time algorithm.
- Certifying that an $n$-variable $k$-CNF is unsatisfiable can be done in Merlin–Arthur time $2^{n/2 - n/O(k)}$. We also observe an algebrization barrier for the previous $2^{n/2}\cdot \textrm{poly}(n)$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $\#$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for $k$-UNSAT running in $2^{n/2}/n^{\omega(1)}$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.
- Certifying that a Quantified Boolean Formula is true can be done in Merlin–Arthur time $2^{4n/5}\cdot \textrm{poly}(n)$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $2^{2n/3}\cdot \textrm{poly}(n)$ time.
Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to $n$ integers can be done in Merlin–Arthur time $2^{n/3}\cdot \textrm{poly}(n)$, improving on the previous best protocol by Nederlof [IPL 2017] which took $2^{0.49991n}\cdot \textrm{poly}(n)$ time.
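As a concrete reference point for the first result, the property being certified (no three of the input integers summing to zero, under the distinct-indices convention) is the 3SUM predicate; a brute-force check, the baseline such protocols improve on rather than the Merlin–Arthur protocol itself, looks like:

```python
from itertools import combinations

def has_3sum(nums):
    """Brute-force O(n^3) test of the 3SUM predicate: do some three
    entries at distinct indices sum to zero?"""
    return any(a + b + c == 0 for a, b, c in combinations(nums, 3))

print(has_3sum([5, -7, 2]))  # → True
print(has_3sum([1, 2, 3]))   # → False
```

A Merlin–Arthur protocol certifies the negative answer, that no such triple exists, far faster than re-running this exhaustive search.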
Abstract: In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularization. Higher order regularity can be obtained by replacing the Laplacian regulariser with a poly-Laplacian regulariser. The methodology is readily adapted to graphs, and here we consider graph poly-Laplacian regularization in a fully supervised, non-parametric, noise-corrupted regression problem. In particular, given a dataset $\{x_i\}_{i=1}^n$ and a set of noisy labels $\{y_i\}_{i=1}^n\subset \mathbb{R}$, we let $u_n:\{x_i\}_{i=1}^n\rightarrow \mathbb{R}$ be the minimizer of an energy which consists of a data fidelity term and an appropriately scaled graph poly-Laplacian term. When $y_i = g(x_i)+\xi_i$, for iid noise $\xi_i$, and using the geometric random graph, we identify (with high probability) the rate of convergence of $u_n$ to $g$ in the large data limit $n\rightarrow \infty$. Furthermore, our rate is close to the known rate of convergence in the usual smoothing spline model.
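The variational setup has a simple closed form in the finite-dimensional graph setting: since both terms are quadratic in $u$, the minimizer solves a linear system. The sketch below is illustrative only, with the parameter names, kernel, and scalings chosen for readability rather than taken from the paper's construction:

```python
import numpy as np

def graph_poly_laplacian_fit(x, y, s=2, eps=0.1, lam=1e-4):
    """Sketch of graph poly-Laplacian regression: minimize
    ||u - y||^2 + lam * u^T L^s u over u, where L is the unnormalized
    Laplacian of the eps-neighborhood geometric graph on the points x."""
    n = len(x)
    dist = np.abs(x[:, None] - x[None, :])
    W = (dist < eps).astype(float) - np.eye(n)   # adjacency, no self-loops
    L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian
    # Setting the gradient to zero gives (I + lam * L^s) u = y.
    A = np.eye(n) + lam * np.linalg.matrix_power(L, s)
    return np.linalg.solve(A, y)                 # closed-form minimizer

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)
u = graph_poly_laplacian_fit(x, y)
print(u.shape)  # → (200,)
```

Taking $s=1$ recovers ordinary graph Laplacian regularization; larger $s$ penalizes higher-order roughness, mirroring the poly-Laplacian regulariser in the abstract.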