We study the structure of the Liouville quantum gravity (LQG) surfaces that are cut out as one explores a conformal loop ensemble
The Dushnik–Miller dimension of a poset
 NSF-PAR ID: 10475241
 Publisher / Repository: Springer Science + Business Media
 Date Published:
 Journal Name: Order
 ISSN: 0167-8094
 Format(s): Medium: X
 Sponsoring Org: National Science Foundation
More Like this

Abstract We study the structure of the Liouville quantum gravity (LQG) surfaces that are cut out as one explores a conformal loop ensemble $\mathrm{CLE}_{\kappa'}$ for $\kappa' \in (4,8)$ that is drawn on an independent $\gamma$-LQG surface for $\gamma^2 = 16/\kappa'$. The results are similar in flavor to the ones from our companion paper dealing with $\mathrm{CLE}_{\kappa}$ for $\kappa \in (8/3,4)$, where the loops of the CLE are disjoint and simple. In particular, we encode the combined structure of the LQG surface and the $\mathrm{CLE}_{\kappa'}$ in terms of stable growth-fragmentation trees or their variants, which also appear in the asymptotic study of peeling processes on decorated planar maps. This has consequences for questions that a priori do not involve LQG surfaces: our paper entitled "CLE Percolations" described the law of the interfaces obtained when coloring the loops of a $\mathrm{CLE}_{\kappa'}$ independently into two colors with respective probabilities $p$ and $1-p$. This description was complete up to one missing parameter $\rho$. The results of the present paper about CLE on LQG allow us to determine its value in terms of $p$ and $\kappa'$. It shows in particular that $\mathrm{CLE}_{\kappa'}$ and $\mathrm{CLE}_{16/\kappa'}$ are related via a continuum analog of the Edwards–Sokal coupling between $\mathrm{FK}_q$ percolation and the $q$-state Potts model (which makes sense even for non-integer $q$ between 1 and 4) if and only if $q = 4\cos^2(4\pi/\kappa')$.
This provides further evidence for the long-standing belief that $\mathrm{CLE}_{\kappa'}$ and $\mathrm{CLE}_{16/\kappa'}$ represent the scaling limits of $\mathrm{FK}_q$ percolation and the $q$-state Potts model when $q$ and $\kappa'$ are related in this way. Another consequence of the formula for $\rho(p,\kappa')$ is the value of half-plane arm exponents for such divide-and-color models (a.k.a. fuzzy Potts models), which turn out to take a somewhat different form than the usual critical exponents for two-dimensional models.
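As a plausibility check (not part of the paper's argument), the relation $q = 4\cos^2(4\pi/\kappa')$ can be evaluated at two standard parameter values, where it reproduces the familiar correspondences $\kappa'=6$ for percolation ($q=1$) and $\kappa'=16/3$ for FK-Ising ($q=2$):

```latex
% Worked check of q = 4\cos^2(4\pi/\kappa') at two standard points.
\[
\kappa' = 6:\qquad q = 4\cos^{2}\!\Big(\frac{4\pi}{6}\Big)
  = 4\Big(\!-\tfrac12\Big)^{2} = 1 \quad\text{(Bernoulli percolation)},
\]
\[
\kappa' = \tfrac{16}{3}:\qquad q = 4\cos^{2}\!\Big(\frac{3\pi}{4}\Big)
  = 4\cdot\tfrac12 = 2 \quad\text{(FK-Ising)}.
\]
```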
Abstract Approximate integer programming is the following: for a given convex body $K \subseteq \mathbb{R}^n$, either determine whether $K \cap \mathbb{Z}^n$ is empty, or find an integer point in the convex body $2\cdot(K-c)+c$, which is $K$ scaled by a factor of 2 from its center of gravity $c$. Approximate integer programming can be solved in time $2^{O(n)}$, while the fastest known methods for exact integer programming run in time $2^{O(n)} \cdot n^n$. So far, no efficient methods for integer programming are known that are based on approximate integer programming. Our main contribution is two such methods, each yielding novel complexity results. First, we show that an integer point $x^* \in (K \cap \mathbb{Z}^n)$ can be found in time $2^{O(n)}$, provided that the remainders $x_i^* \bmod \ell$ of each component of $x^*$ are given, for some arbitrarily fixed $\ell \ge 5(n+1)$. The algorithm is based on a cutting-plane technique, iteratively halving the volume of the feasible set; the cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a $2^{O(n)} n^n$ algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (Integer programming, lattice algorithms, and deterministic volume estimation. Georgia Institute of Technology, Atlanta, 2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carathéodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation standard form $Ax = b$, $0 \le x \le u$, $x \in \mathbb{Z}^n$.
Such a problem can be reduced to the solution of $\prod_i O(\log u_i + 1)$ approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range $0 \le x_i \le p(n)$ can be solved in time $(\log n)^{O(n)}$. For these problems, the best running time so far was $n^n \cdot 2^{O(n)}$.
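The arithmetic behind the residue-enumeration step can be filled in as a one-line derivation (the constants in the exponent are only indicative):

```latex
% Each of the n components of x^* has \ell possible remainders mod \ell,
% so with \ell = 5(n+1) the number of residue vectors to enumerate is
\[
\ell^{\,n} = \bigl(5(n+1)\bigr)^{n} = 5^{n}\,(n+1)^{n}
           \le 5^{n}\, e \cdot n^{n} = 2^{O(n)}\, n^{n},
\]
% and each residue vector triggers one 2^{O(n)}-time search,
% giving the stated 2^{O(n)} n^n bound for general integer programming.
```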
Abstract In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:
Certifying that a list of n integers has no 3-SUM solution can be done in Merlin–Arthur time $\tilde{O}(n)$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $\tilde{O}(n^{1.5})$ time (that is, there is a proof system with proofs of length $\tilde{O}(n^{1.5})$ and a deterministic verifier running in $\tilde{O}(n^{1.5})$ time).

Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin–Arthur time $\tilde{O}(n^{\lceil k/2\rceil})$ (where $k\ge 3$). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin–Arthur time $\tilde{O}(m)$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count k-cliques in unweighted graphs, and had worse running times for small k.

Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin–Arthur time $\tilde{O}(n^2)$. Note this is optimal, as the matrix can have $\Omega(n^2)$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $\tilde{O}(n^{2.94})$ nondeterministic time algorithm.

Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin–Arthur time $2^{n/2 - n/O(k)}$. We also observe an algebrization barrier for the previous $2^{n/2}\cdot\textrm{poly}(n)$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $\#$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in $2^{n/2}/n^{\omega(1)}$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin–Arthur time $2^{n/3}\cdot\textrm{poly}(n)$, improving on the previous best protocol by Nederlof [IPL 2017], which took $2^{0.49991n}\cdot\textrm{poly}(n)$ time. It also implies that certifying a Quantified Boolean Formula is true can be done in Merlin–Arthur time $2^{4n/5}\cdot\textrm{poly}(n)$; previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $2^{2n/3}\cdot\textrm{poly}(n)$ time.
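The verification style underlying such protocols, where a short randomized check replaces a full recomputation, can be illustrated by the classic Freivalds check for matrix products. This is not the paper's protocol, just a standard self-contained example of a probabilistic verifier that accepts a correct "proof" with probability 1 and rejects an incorrect one with high probability:

```python
import random

def freivalds_verify(A, B, C, rounds=30):
    """Probabilistically check the claim A @ B == C.

    Each round multiplies both sides by a random 0/1 vector r and compares
    A(Br) with Cr in O(n^2) time, instead of recomputing A @ B in O(n^3).
    A correct C is always accepted; a wrong C survives a round with
    probability at most 1/2, so it is accepted with probability <= 2**-rounds.
    """
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # C is certainly not equal to A @ B
    return True  # A @ B == C with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]  # the true product A @ B
bad = [[19, 22], [43, 51]]   # off by one in a single entry
```

The asymmetry mirrors the Merlin–Arthur definition above: valid claims are accepted with probability 1, while the rejection probability for invalid claims can be driven arbitrarily close to 1 by adding rounds.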
Abstract Consider two half-spaces $H_1^+$ and $H_2^+$ in $\mathbb{R}^{d+1}$ whose bounding hyperplanes $H_1$ and $H_2$ are orthogonal and pass through the origin. The intersection $\mathbb{S}_{2,+}^d := \mathbb{S}^d \cap H_1^+ \cap H_2^+$ is a spherical convex subset of the d-dimensional unit sphere $\mathbb{S}^d$, which contains a great subsphere of dimension $d-2$ and is called a spherical wedge. Choose n independent random points uniformly at random on $\mathbb{S}_{2,+}^d$ and consider the expected facet number of the spherical convex hull of these points. It is shown that, up to terms of lower order, this expectation grows like a constant multiple of $\log n$. A similar behaviour is obtained for the expected facet number of a homogeneous Poisson point process on $\mathbb{S}_{2,+}^d$. The result is compared to the corresponding behaviour of classical Euclidean random polytopes and of spherical random polytopes on a half-sphere.
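The sampling setup in this model is easy to make concrete. The sketch below (an illustration, not from the paper) takes $H_1, H_2$ to be the first two coordinate hyperplanes, one concrete choice of orthogonal hyperplanes through the origin; a normalized Gaussian vector is uniform on $\mathbb{S}^d$, and reflecting the first two coordinates maps the uniform distribution on the sphere onto the uniform distribution on the wedge:

```python
import math
import random

def sample_wedge_point(d):
    """Uniform random point on the spherical wedge S^d ∩ {x1 >= 0} ∩ {x2 >= 0}.

    A normalized standard Gaussian vector in R^{d+1} is uniform on S^d.
    The uniform measure on the sphere is invariant under coordinate sign
    flips, so taking absolute values of the first two coordinates folds the
    sphere 4-to-1 onto the wedge while preserving uniformity.
    """
    g = [random.gauss(0.0, 1.0) for _ in range(d + 1)]
    norm = math.sqrt(sum(x * x for x in g))
    p = [x / norm for x in g]
    p[0], p[1] = abs(p[0]), abs(p[1])  # reflect into H1+ ∩ H2+
    return p

# e.g. 1000 uniform points on the wedge in S^3 ⊂ R^4
pts = [sample_wedge_point(3) for _ in range(1000)]
```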
Abstract Sequence mappability is an important task in genome resequencing. In the (k, m)-mappability problem, for a given sequence T of length n, the goal is to compute a table whose i-th entry is the number of indices $j \ne i$ such that the length-m substrings of T starting at positions i and j have at most k mismatches. Previous works on this problem focused on heuristics computing a rough approximation of the result, or on the case of $k=1$. We present several efficient algorithms for the general case of the problem. Our main result is an algorithm that, for $k=O(1)$, works in $O(n)$ space and, with high probability, in $O(n \cdot \min\{m^k, \log^k n\})$ time. Our algorithm requires a careful adaptation of the k-errata trees of Cole et al. [STOC 2004] to avoid multiple counting of pairs of substrings. Our technique can also be applied to solve the all-pairs Hamming distance problem introduced by Crochemore et al. [WABI 2017]. We further develop $O(n^2)$-time algorithms to compute all (k, m)-mappability tables for a fixed m and all $k \in \{0,\ldots,m\}$, or for a fixed k and all $m \in \{k,\ldots,n\}$. Finally, we show that, for $k, m = \Theta(\log n)$, the (k, m)-mappability problem cannot be solved in strongly subquadratic time unless the Strong Exponential Time Hypothesis fails. This is an improved and extended version of a paper presented at SPIRE 2018.
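The definition of the table translates directly into a quadratic-time reference implementation. This is a naive baseline for checking the definition, not the paper's algorithm:

```python
def mappability_table(T, k, m):
    """Naive O(n^2 * m) baseline for the (k, m)-mappability table.

    Entry i counts the positions j != i whose length-m substring of T
    differs from T[i:i+m] in at most k mismatches.
    """
    n = len(T)
    last = n - m + 1  # number of length-m substring start positions
    table = [0] * last
    for i in range(last):
        for j in range(last):
            if j == i:
                continue
            mismatches = sum(1 for a, b in zip(T[i:i+m], T[j:j+m]) if a != b)
            if mismatches <= k:
                table[i] += 1
    return table
```

For example, for T = "abab" with m = 2 and k = 0, only the two occurrences of "ab" (at positions 0 and 2) match each other, so the table is [1, 0, 1].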