Abstract: This paper studies the large scale limits of multi-type invariant distributions and Busemann functions of planar stochastic growth models in the Kardar–Parisi–Zhang (KPZ) class. We identify a set of sufficient hypotheses for convergence of multi-type invariant measures of last-passage percolation (LPP) models to the stationary horizon (SH), which is the unique multi-type stationary measure of the KPZ fixed point. Our limit theorem utilizes conditions that are expected to hold broadly in the KPZ class, including convergence of the scaled last-passage process to the directed landscape. We verify these conditions for the six exactly solvable models whose scaled bulk versions converge to the directed landscape, as shown by Dauvergne and Virág. We also present a second, more general, convergence theorem with future applications to polymer models and particle systems. Our paper is the first to show convergence to the SH without relying on information about the structure of the multi-type invariant measures of the prelimit models. These results are consistent with the conjecture that the SH is the universal scaling limit of multi-type invariant measures in the KPZ class.
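For orientation, the convergence in question is taken under KPZ 1:2:3 scaling: time of order $N$, transversal space of order $N^{2/3}$, fluctuations of order $N^{1/3}$. A schematic form of the scaled last-passage process is

$$\mathcal{L}_N(x,s;y,t) \;=\; \frac{G\bigl((c_1 sN + c_2 xN^{2/3},\, sN)\,;\,(c_1 tN + c_2 yN^{2/3},\, tN)\bigr) - c_3(t-s)N - c_4(y-x)N^{2/3}}{N^{1/3}} \;\xrightarrow[N\to\infty]{}\; \mathcal{L}(x,s;y,t),$$

where $\mathcal{L}$ denotes the directed landscape. The constants $c_1,\dots,c_4$ and the exact centering are model-dependent, so this display should be read as an illustrative template rather than the paper's normalization.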
Disjoint Optimizers and the Directed Landscape
We study maximal length collections of disjoint paths, or ‘disjoint optimizers’, in the directed landscape. We show that disjoint optimizers always exist, and that their lengths can be used to construct an extended directed landscape. The extended directed landscape can be built from an independent collection of extended Airy sheets, which we define from the parabolic Airy line ensemble. We show that the extended directed landscape and disjoint optimizers are scaling limits of the corresponding objects in Brownian last passage percolation (LPP). As two consequences of this work, we show that one direction of the Robinson-Schensted-Knuth bijection passes to the KPZ limit, and we find a criterion for geodesic disjointness in the directed landscape that uses only a single parabolic Airy line ensemble. The proofs rely on a new notion of multi-point LPP across the parabolic Airy line ensemble, combinatorial properties of multi-point LPP, and probabilistic resampling ideas.
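For context, one standard definition of the Brownian last-passage value over independent standard Brownian motions $B_1,\dots,B_n$ is

$$B[(0,1)\to(t,n)] \;=\; \sup_{0=t_0\le t_1\le\cdots\le t_n=t}\; \sum_{i=1}^{n}\bigl(B_i(t_i)-B_i(t_{i-1})\bigr);$$

indexing and direction conventions vary across the literature, so this is a representative convention rather than necessarily the one fixed in the paper. A disjoint optimizer generalizes the single maximizing path here to a $k$-tuple of disjoint paths maximizing the total collected increment.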
- Award ID(s): 2505625
- PAR ID: 10610617
- Publisher / Repository: AMER MATHEMATICAL SOC
- Date Published:
- Journal Name: Memoirs of the American Mathematical Society
- Volume: 303
- Issue: 1524
- ISSN: 0065-9266
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: We consider point-to-point last-passage times to every vertex in a neighbourhood of size $\delta N^{2/3}$ at distance $N$ from the starting point. The increments of the last-passage times in this neighbourhood are shown to be jointly equal to their stationary versions with high probability that depends only on $\delta$. Through this result we show that (1) the $\text{Airy}_2$ process is locally close to a Brownian motion in total variation; (2) the tree of point-to-point geodesics from every vertex in a box of side length $\delta N^{2/3}$ going to a point at distance $N$ agrees inside the box with the tree of semi-infinite geodesics going in the same direction; (3) two point-to-point geodesics started at distance $N^{2/3}$ from each other, going to a point at distance $N$, will not coalesce close to either endpoint on the scale $N$. Our main results rely on probabilistic methods only.
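A hedged formalization of statement (1): with $\mathcal{A}$ the $\text{Airy}_2$ process and $B$ a Brownian motion with diffusion coefficient 2 (the standard local normalization for $\text{Airy}_2$),

$$d_{\mathrm{TV}}\Bigl(\bigl(\mathcal{A}(t+s)-\mathcal{A}(t)\bigr)_{s\in[0,\delta]},\;\bigl(B(s)\bigr)_{s\in[0,\delta]}\Bigr)\;\longrightarrow\; 0 \qquad\text{as } \delta \to 0,$$

where the quantitative rate, not reproduced here, is part of the paper's content.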
Class ambiguity refers to the phenomenon whereby samples with similar features belong to different classes at different locations. Given heterogeneous geographic data with class ambiguity, the spatial ensemble learning (SEL) problem aims to find a decomposition of the geographic area into disjoint zones such that class ambiguity is minimized and a local classifier can be learned in each zone. The SEL problem is important for applications such as land cover mapping from heterogeneous earth observation data with spectral confusion. However, the problem is challenging due to its high computational cost (finding an optimal zone partition is NP-hard). Related work in ensemble learning either assumes an identical sample distribution (e.g., bagging, boosting, random forest) or decomposes multi-modular input data in the feature vector space (e.g., mixture of experts, multimodal ensemble), and thus cannot effectively minimize class ambiguity. In contrast, our spatial ensemble framework explicitly partitions input data in geographic space. Our approach first preprocesses data into homogeneous spatial patches and then uses a greedy heuristic to allocate pairs of patches with high class ambiguity into different zones. Both theoretical analysis and experimental evaluations on two real-world wetland mapping datasets show the feasibility of the proposed approach.
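The greedy allocation step can be sketched as follows. This is a minimal illustration under assumed interfaces: the patches list, the pairwise ambiguity score, and the omission of spatial contiguity constraints on zones are all simplifying assumptions, not the paper's implementation.

from itertools import combinations

def greedy_zone_split(patches, ambiguity, n_zones=2):
    """Greedily separate high-ambiguity patch pairs into different zones."""
    # Rank all patch pairs by class ambiguity, most ambiguous first.
    pairs = sorted(combinations(range(len(patches)), 2),
                   key=lambda p: ambiguity(patches[p[0]], patches[p[1]]),
                   reverse=True)
    zone_of = {}
    for a, b in pairs:
        # Assign unassigned patches so that the ambiguous pair is split.
        if a not in zone_of and b not in zone_of:
            zone_of[a], zone_of[b] = 0, 1 % n_zones
        elif a in zone_of and b not in zone_of:
            zone_of[b] = (zone_of[a] + 1) % n_zones
        elif b in zone_of and a not in zone_of:
            zone_of[a] = (zone_of[b] + 1) % n_zones
        # If both are already assigned, earlier (more ambiguous) pairs win.
    for i in range(len(patches)):
        zone_of.setdefault(i, 0)  # patches in no scored pair default to zone 0
    return zone_of

A local classifier would then be trained separately on the patches assigned to each zone.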
Motivated by applications to monotonicity testing, Lehman and Ron (JCTA, 2001) proved the existence of a collection of vertex-disjoint paths between comparable sub-level sets in the directed hypercube. The main technical contribution of this paper is a new proof method that yields a generalization of their theorem: we prove the existence of two edge-disjoint collections of vertex-disjoint paths. Our main conceptual contributions are conjectures on directed hypercube flows with simultaneous vertex and edge capacities, of which our generalized Lehman-Ron theorem is a special case. We show that these conjectures imply directed isoperimetric theorems, and in particular, the robust directed Talagrand inequality due to Khot, Minzer, and Safra (SIAM J. on Comp., 2018). These isoperimetric inequalities, which relate the directed surface area (of a set in the hypercube) to its distance to monotonicity, have been crucial in obtaining the best monotonicity testers for Boolean functions. We believe our conjectures pave the way towards combinatorial proofs of these directed isoperimetry theorems.
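For reference, one common statement of the Lehman-Ron theorem, paraphrased (the sub-level-set formulation used in the paper may differ in detail): let $A, B \subseteq \{0,1\}^n$ with $|A| = |B| = m$, where every $a \in A$ has Hamming weight $s$, every $b \in B$ has Hamming weight $t > s$, and suppose there is a perfect matching of $A$ to $B$ in which each matched pair satisfies $a \preceq b$ coordinatewise. Then there exist $m$ vertex-disjoint directed paths in the hypercube from $A$ to $B$.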
Learning to optimize (L2O) has gained increasing popularity; it automates the design of optimizers through data-driven approaches. However, current L2O methods often suffer from poor generalization performance in at least two respects: (i) applying the L2O-learned optimizer to unseen optimizees, in terms of lowering their loss function values (optimizer generalization, or "generalizable learning of optimizers"); and (ii) the test performance of an optimizee (itself a machine learning model), trained by the optimizer, in terms of accuracy on unseen data (optimizee generalization, or "learning to generalize"). While optimizer generalization has been studied recently, optimizee generalization (learning to generalize) has not been rigorously studied in the L2O context, which is the aim of this paper. We first theoretically establish an implicit connection between the local entropy and the Hessian, and hence unify their roles in the handcrafted design of generalizable optimizers as equivalent metrics of the flatness of the loss landscape. We then propose to incorporate these two metrics as flatness-aware regularizers into the L2O framework in order to meta-train optimizers to learn to generalize, and we theoretically show that such generalization ability can be learned during the L2O meta-training process and then transferred to the optimizee loss function. Extensive experiments consistently validate the effectiveness of our proposals, with substantially improved generalization on multiple sophisticated L2O models and diverse optimizees.
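A minimal sketch of the flatness-aware regularization idea, in PyTorch-style code. The perturbation-based sharpness proxy and all hyperparameter names (rho, lam, n_samples) are illustrative assumptions, not the paper's exact local-entropy/Hessian construction, and the sketch shows only the regularized loss, not the L2O meta-training loop. It assumes torch.func.functional_call (PyTorch 2.x).

import torch
from torch.func import functional_call

def flatness_regularized_loss(model, loss_fn, x, y, rho=0.05, lam=0.1, n_samples=2):
    """Penalize sharp minima: add the average loss increase under small
    random weight perturbations as a proxy for landscape flatness."""
    base = loss_fn(model(x), y)
    params = dict(model.named_parameters())
    sharpness = 0.0
    for _ in range(n_samples):
        # Evaluate the loss at randomly perturbed weights without mutating
        # the model's parameters; gradients still flow back to the originals.
        perturbed = {name: p + rho * torch.randn_like(p) for name, p in params.items()}
        out = functional_call(model, perturbed, (x,))
        sharpness = sharpness + loss_fn(out, y) - base
    return base + lam * sharpness / n_samples

For small rho, the expected penalty behaves like (rho^2 / 2) times the trace of the loss Hessian, which is one way a perturbation-based proxy connects to the Hessian-based flatness metric discussed above.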