Title: Deformed Polynuclear Growth in (1+1) Dimensions
Abstract: We introduce and study a one-parameter deformation of the polynuclear growth (PNG) model in (1+1) dimensions, which we call the $t$-PNG model. It is defined by requiring that, when two expanding islands merge, with probability $t$ they sprout another island on top of the merging location. At $t=0$, this becomes the standard (non-deformed) PNG model, which, in the droplet geometry, can be reformulated through longest increasing subsequences of uniformly random permutations or through an algorithm known as patience sorting. In terms of the latter, the $t$-PNG model allows errors to occur in the sorting algorithm with probability $t$. We prove that the $t$-PNG model exhibits one-point Tracy–Widom Gaussian Unitary Ensemble asymptotics at large times for any fixed $t \in [0,1)$, and one-point convergence to the narrow wedge solution of the Kardar–Parisi–Zhang equation as $t$ tends to $1$. We further construct distributions for an external source that are likely to induce Baik–Ben Arous–Péché-type phase transitions. The proofs are based on solvable stochastic vertex models and their connection to the determinantal point processes arising from Schur measures on partitions.
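To make the patience-sorting connection concrete, here is a minimal Python sketch (not from the paper): the standard algorithm, whose pile count equals the length of the longest increasing subsequence, together with a toy "erring" variant in which a placement starts a new pile with probability $t$. That error rule is only an illustrative guess at the abstract's description; the precise $t$-deformation is defined in the paper itself.

```python
# Minimal sketch: standard patience sorting and a hypothetical t-deformed
# variant. The error rule below is an illustrative guess, not the precise
# rule of the t-PNG paper.
import random
from bisect import bisect_left

def patience_piles(seq):
    """Number of piles in patience sorting = length of the longest
    strictly increasing subsequence of seq."""
    tops = []                         # top card of each pile, kept sorted
    for x in seq:
        i = bisect_left(tops, x)      # leftmost pile whose top is >= x
        if i == len(tops):
            tops.append(x)            # no such pile: start a new one
        else:
            tops[i] = x               # cover that pile's top with x
    return len(tops)

def t_patience_piles(seq, t, rng=None):
    """Hypothetical t-deformation: a placement that would cover an existing
    pile instead starts a new pile ("errs") with probability t."""
    rng = rng or random.Random(0)
    tops = []
    for x in seq:
        i = bisect_left(tops, x)
        if i == len(tops) or rng.random() < t:
            tops.insert(i, x)         # new pile (genuine, or an "error")
        else:
            tops[i] = x
    return len(tops)

if __name__ == "__main__":
    perm = list(range(2000))
    random.shuffle(perm)
    print("t = 0   piles:", patience_piles(perm))       # about 2*sqrt(2000) ~ 89
    print("t = 0.5 piles:", t_patience_piles(perm, 0.5))
```

For a uniformly random permutation of size $n$, the $t=0$ pile count concentrates around $2\sqrt{n}$, the classical longest-increasing-subsequence asymptotics.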
Award ID(s):
1853981
PAR ID:
10338217
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
International Mathematics Research Notices
ISSN:
1073-7928
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The T1 excited state relaxation in thiophosgene has attracted much attention as a relatively simple model for intersystem crossing (ISC) transitions in polyatomic molecules. The very short (20–40 ps) T1 lifetime predicted in several theoretical studies strongly disagrees with the experimental values (∼20 ns), indicating that the kinetics of the T1 → S0 ISC is not well understood. We use nonadiabatic transition state theory (NA-TST) with the Zhu–Nakamura transition probability and multireference perturbation theory (CASPT2) to show that the T1 → S0 ISC occurs in the quantum tunneling regime. We also introduce a new zero-point vibrational energy correction scheme that improves the accuracy of the predicted ISC rate constants at low internal energies. The predicted lifetimes of the T1 vibrational states are between one and two orders of magnitude larger than the experimental values. This overestimation is attributed to the multidimensional nature of quantum tunneling, which facilitates ISC transitions along non-minimum-energy paths and is not accounted for in the one-dimensional NA-TST.
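As a rough numerical illustration of the surface-crossing ingredient in the item above: the paper uses the Zhu–Nakamura transition probability within NA-TST, which refines the simpler textbook Landau–Zener formula. The sketch below evaluates only that simpler Landau–Zener single-passage probability, with made-up parameter values; it is not the paper's method and the numbers are not thiophosgene data.

```python
# Illustrative stand-in only: the paper uses the Zhu-Nakamura probability
# inside NA-TST; below is the simpler textbook Landau-Zener single-passage
# probability with made-up parameters (not thiophosgene data).
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def landau_zener_hop_probability(V, v, dF):
    """Probability of switching between the two crossing (diabatic) states
    on a single passage through the crossing:
        P = 1 - exp(-2*pi*V**2 / (hbar * v * |dF|))
    V  : coupling between the states (J), e.g. spin-orbit coupling
    v  : nuclear velocity along the crossing coordinate (m/s)
    dF : difference of the slopes of the two surfaces at the crossing (J/m)
    """
    return 1.0 - math.exp(-2.0 * math.pi * V**2 / (HBAR * v * abs(dF)))

if __name__ == "__main__":
    V = 2.0e-22   # roughly 10 cm^-1 expressed in joules (order of magnitude)
    v = 500.0     # hypothetical velocity, m/s
    dF = 1.0e-9   # hypothetical slope difference, J/m
    print(f"single-passage hop probability: {landau_zener_hop_probability(V, v, dF):.2e}")
```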
  2. Cumulative memory – the sum of space used per step over the duration of a computation – is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications like password hashing. It is a more accurate cost measure for algorithms that have infrequent spikes in memory usage and are run in environments such as cloud computing that allow dynamic allocation and de-allocation of resources during execution, or when multiple instances of an algorithm are interleaved in parallel. We prove the first lower bounds on cumulative memory complexity for both sequential classical computation and quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which can only lower bound the maximum space used during an execution. The resulting lower bounds on cumulative memory that we obtain are just as strong as the best time-space tradeoff lower bounds, which are very often known to be tight. Although previous results for pebbling and random oracle models have yielded time-space tradeoff lower bounds larger than the cumulative memory complexity, our results show that in general computational models such separations cannot follow from known lower bound techniques and are not true for many functions. Among many possible applications of our general methods, we show that any classical sorting algorithm with success probability at least 1/poly(n) requires cumulative memory Ω(n^2), any classical matrix multiplication algorithm requires cumulative memory Ω(n^6/T), any quantum sorting circuit requires cumulative memory Ω(n^3/T), and any quantum circuit that finds k disjoint collisions in a random function requires cumulative memory Ω(k^3 n/T^2). (Full version of ICALP 2023 paper.)
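A minimal sketch of the definition used in the item above (illustrative, not from the paper): cumulative memory sums the space in use at every step, so a single short spike barely moves it, whereas the classical "time times peak space" cost is dominated by that spike.

```python
# Minimal sketch of the definition: cumulative memory is the sum of the
# space in use at each step, versus the classical "time x peak space" cost.
def cumulative_memory(space_per_step):
    return sum(space_per_step)

def time_times_peak_space(space_per_step):
    return len(space_per_step) * max(space_per_step)

if __name__ == "__main__":
    # A hypothetical execution trace with a single short memory spike,
    # e.g. one step that briefly materializes a large table.
    trace = [10] * 999 + [10_000]
    print("cumulative memory :", cumulative_memory(trace))       # 19,990
    print("steps x peak space:", time_times_peak_space(trace))   # 10,000,000
```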
  3. Cumulative memory – the sum of space used per step over the duration of a computation – is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications like password hashing. It is a more accurate cost measure for algorithms that have infrequent spikes in memory usage and are run in environments such as cloud computing that allow dynamic allocation and de-allocation of resources during execution, or when multiple instances of an algorithm are interleaved in parallel. We prove the first lower bounds on cumulative memory complexity for both sequential classical computation and quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which can only lower bound the maximum space used during an execution. The resulting lower bounds on cumulative memory that we obtain are just as strong as the best time-space tradeoff lower bounds, which are very often known to be tight. Although previous results for pebbling and random oracle models have yielded time-space tradeoff lower bounds larger than the cumulative memory complexity, our results show that in general computational models such separations cannot follow from known lower bound techniques and are not true for many functions. Among many possible applications of our general methods, we show that any classical sorting algorithm with success probability at least 1/poly(n) requires cumulative memory Ω(n^2), any classical matrix multiplication algorithm requires cumulative memory Ω(n^6/T), any quantum sorting circuit requires cumulative memory Ω(n^3/T), and any quantum circuit that finds k disjoint collisions in a random function requires cumulative memory Ω(k^3 n/T^2).
  4. Cumulative memory – the sum of space used per step over the duration of a computation – is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications like password hashing. It is a more accurate cost measure for algorithms that have infrequent spikes in memory usage and are run in environments such as cloud computing that allow dynamic allocation and de-allocation of resources during execution, or when multiple instances of an algorithm are interleaved in parallel. We prove the first lower bounds on cumulative memory complexity for both sequential classical computation and quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which can only lower bound the maximum space used during an execution. The resulting lower bounds on cumulative memory that we obtain are just as strong as the best time-space tradeoff lower bounds, which are very often known to be tight. Although previous results for pebbling and random oracle models have yielded time-space tradeoff lower bounds larger than the cumulative memory complexity, our results show that in general computational models such separations cannot follow from known lower bound techniques and are not true for many functions. Among many possible applications of our general methods, we show that any classical sorting algorithm with success probability at least 1/poly(n) requires cumulative memory Ω(n^2), any classical matrix multiplication algorithm requires cumulative memory Ω(n^6/T), any quantum sorting circuit requires cumulative memory Ω(n^3/T), and any quantum circuit that finds k disjoint collisions in a random function requires cumulative memory Ω(k^3 n/T^2).
  5. We study the connections between sorting and the binary search tree (BST) model, with an aim towards showing that the fields are connected more deeply than is currently appreciated. While any BST can be used to sort by inserting the keys one-by-one, this is a very limited relationship and importantly says nothing about parallel sorting. We show what we believe to be the first formal relationship between the BST model and sorting. Namely, we show that a large class of sorting algorithms, which includes mergesort, quicksort, insertion sort, and almost every instance-optimal sorting algorithm, are equivalent in cost to offline BST algorithms. Our main theoretical tool is the geometric interpretation of the BST model introduced by Demaine et al. [18], which finds an equivalence between searches on a BST and point sets in the plane satisfying a certain property. To give an example of the utility of our approach, we introduce the log-interleave bound, a measure of the information-theoretic complexity of a permutation π, which is within a lg lg n multiplicative factor of a known lower bound in the BST model; we also devise a parallel sorting algorithm with polylogarithmic span that sorts a permutation π using comparisons proportional to its log-interleave bound. Our aforementioned result on sorting and offline BST algorithms can be used to show the existence of an offline BST algorithm whose cost is within a constant factor of the log-interleave bound of any permutation π.
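The "limited relationship" mentioned in the item above is easy to make concrete. The following is a minimal sketch (not from the paper) of sorting by inserting keys one-by-one into a plain BST and reading them back with an in-order traversal.

```python
# Minimal sketch of the "limited relationship": any BST sorts a sequence if
# we insert the keys one-by-one and read them back in order.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    key: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def insert(root: Optional[Node], key: int) -> Node:
    """Plain (unbalanced) BST insertion; any balanced variant works too."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def bst_sort(keys: List[int]) -> List[int]:
    """Insert the keys one-by-one, then an in-order traversal is sorted."""
    root: Optional[Node] = None
    for k in keys:
        root = insert(root, k)
    out: List[int] = []

    def inorder(node: Optional[Node]) -> None:
        if node is not None:
            inorder(node.left)
            out.append(node.key)
            inorder(node.right)

    inorder(root)
    return out

if __name__ == "__main__":
    print(bst_sort([5, 2, 7, 1, 9, 3]))  # [1, 2, 3, 5, 7, 9]
```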