Abstract How fast the state of a system converges to a stationary state is one of the fundamental questions in science. Some Markov chains and random walks on finite groups are known to exhibit non-asymptotic convergence to a stationary distribution, called the cutoff phenomenon. Here, we examine how quickly a random quantum circuit can transform a quantum state into a Haar-measure random quantum state. We find that random quantum states, as stationary states of random walks on the unitary group, are invariant under the quantum Fourier transform (QFT). Thus the entropic uncertainty of random quantum states is balanced: their Shannon entropies for the computational basis and the QFT basis coincide. By calculating the Shannon entropy of random quantum states and the Wasserstein distances for the eigenvalues of random quantum circuits, we show that the cutoff phenomenon occurs for the random quantum circuit. We also demonstrate that the Dyson Brownian motion for the eigenvalues of a random unitary matrix, viewed as a continuous random walk, exhibits the cutoff phenomenon. These results imply that random quantum states can be generated with shallow random circuits.
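The QFT-invariance claim above is easy to probe numerically: measuring a Haar-random state in the computational basis and in the Fourier basis gives statistically identical outcome distributions, so their Shannon entropies balance. A minimal Python sketch, where the 8-qubit dimension, the seed, and the use of a normalized FFT as the QFT are illustrative choices rather than details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(dim, rng):
    """Sample a Haar-random pure state: a normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def shannon_entropy(probs):
    """Shannon entropy in bits, skipping numerically zero outcomes."""
    probs = probs[probs > 1e-15]
    return float(-np.sum(probs * np.log2(probs)))

dim = 2 ** 8  # 8 qubits
psi = haar_state(dim, rng)

# Entropy of measurement outcomes in the computational basis ...
h_comp = shannon_entropy(np.abs(psi) ** 2)
# ... and in the Fourier basis (the QFT is, up to conjugation, the unitary DFT).
phi = np.fft.fft(psi) / np.sqrt(dim)
h_qft = shannon_entropy(np.abs(phi) ** 2)

# Both entropies concentrate near the Haar average,
# log2(dim) - (1 - gamma)/ln 2 ≈ 7.39 bits for dim = 256.
print(h_comp, h_qft)
```

Because the Haar measure is unitarily invariant, `phi` is itself Haar-distributed, which is why the two entropies agree up to small fluctuations.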
This content will become publicly available on May 1, 2026
Restart Perturbations for Reversible Markov Chains: Trichotomy and Pre‐Cutoff Equivalence
ABSTRACT Given a reversible Markov chain P on n states, and another chain P̃ obtained by perturbing each row of P by at most α in total variation, we study the total variation distance between the two stationary distributions, ‖π − π̃‖. We show that for chains with cutoff, ‖π − π̃‖ converges to 0, is asymptotically at most a constant in (0, 1) depending on c (with a sequence of perturbations for which convergence to this bound occurs), and converges to 1, respectively, if the product of α and the mixing time of P converges to 0, to c ∈ (0, ∞), and to ∞, respectively. This echoes recent results for specific random walks that exhibit cutoff, suggesting that cutoff is the key property underlying such results. Moreover, we show ‖π − π̃‖ is maximized by restart perturbations, for which P̃ "restarts" at a random state with probability α at each step. Finally, we show that pre-cutoff is (almost) equivalent to a notion of "sensitivity to restart perturbations," suggesting that chains with sharper convergence to stationarity are inherently less robust.
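A minimal numerical sketch of the objects in the abstract: build a reversible chain P (here a lazy random walk on a cycle), apply a restart perturbation of size α toward a restart distribution μ, and compare the two stationary distributions in total variation. The particular chain, α, and μ are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def stationary(P):
    """Stationary distribution: the left eigenvector of P for eigenvalue 1."""
    w, V = np.linalg.eig(P.T)
    pi = np.abs(np.real(V[:, np.argmin(np.abs(w - 1))]))
    return pi / pi.sum()

n = 50
# A reversible chain: lazy random walk on an n-cycle.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

alpha = 0.05                    # per-row perturbation budget in total variation
mu = rng.dirichlet(np.ones(n))  # restart distribution (arbitrary example)

# Restart perturbation: with probability alpha, jump to a state drawn from mu.
# Each row of P_tilde differs from the same row of P by at most alpha in TV.
P_tilde = (1 - alpha) * P + alpha * mu

pi, pi_tilde = stationary(P), stationary(P_tilde)
tv = 0.5 * np.abs(pi - pi_tilde).sum()
print(tv)
```

Note that `tv` need not be small even though α is: the trichotomy says the relevant quantity is α times the mixing time, and the lazy cycle walk mixes slowly (order n² steps).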
- Award ID(s): 2008130
- PAR ID: 10599296
- Publisher / Repository: Wiley
- Date Published:
- Journal Name: Random Structures & Algorithms
- Volume: 66
- Issue: 3
- ISSN: 1042-9832
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Consider the Aldous Markov chain on the space of rooted binary trees with n labeled leaves in which at each transition a uniform random leaf is deleted and reattached to a uniform random edge. Now, fix 1 ≤ k < n and project the leaf mass onto the subtree spanned by the first k leaves. This yields a binary tree with edge weights that we call a "decorated k-tree with total mass n." We introduce label swapping dynamics for the Aldous chain so that, when it runs in stationarity, the decorated k-trees evolve as Markov chains themselves, and are projectively consistent over k. The construction of projectively consistent chains is a crucial step in the construction of the Aldous diffusion on continuum trees by the present authors, which is the n → ∞ continuum analog of the Aldous chain and will be taken up elsewhere.
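The transition rule above can be sketched directly. The code below implements the delete-and-reattach move on an unrooted leaf-labeled binary tree stored as an adjacency dict (the paper works with rooted trees; treating the root as an extra distinguished leaf recovers that setting). The 4-leaf starting tree and the node numbering are illustrative choices:

```python
import random

random.seed(0)

# Unrooted binary tree on leaves 0-3 with internal nodes 4 and 5,
# stored as node -> set of neighbors.
adj = {0: {4}, 1: {4}, 2: {5}, 3: {5}, 4: {0, 1, 5}, 5: {2, 3, 4}}
leaves = [0, 1, 2, 3]

def aldous_move(adj, leaves):
    """One Aldous-chain transition: delete a uniform random leaf, then
    reattach it to a uniform random edge of the remaining tree."""
    leaf = random.choice(leaves)
    (parent,) = adj.pop(leaf)          # leaves have exactly one neighbor
    # Detach the leaf; its parent now has degree 2, so suppress it.
    adj[parent].discard(leaf)
    a, b = adj.pop(parent)
    adj[a].discard(parent)
    adj[b].discard(parent)
    adj[a].add(b)
    adj[b].add(a)
    # Subdivide a uniform edge of the remaining tree, reusing the old
    # parent node as the new attachment point for the leaf.
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    u, v = random.choice(edges)
    adj[u].discard(v)
    adj[v].discard(u)
    adj[u].add(parent)
    adj[v].add(parent)
    adj[parent] = {u, v, leaf}
    adj[leaf] = {parent}

for _ in range(100):
    aldous_move(adj, leaves)
```

Suppressing the degree-2 parent and reusing it when subdividing the target edge keeps the state a binary tree with the same node set after every move.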
-
Classical distribution testing assumes access to i.i.d. samples from the distribution that is being tested. We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov Chain. In particular, we observe a single trajectory X0,...,Xt,... of an unknown, symmetric, and finite state Markov Chain M. We do not control the starting state X0, and we cannot restart the chain. Given our single trajectory, the goal is to test whether M is identical to a model Markov Chain M0, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [Kaz78], which captures the scaling behavior of the total variation distance between trajectories sampled from the Markov chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors if the hitting times of the model chain M0 are O(n) in the size of the state space n.
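A sketch of the access model described above: we observe one trajectory of a symmetric chain, with no control over X0 and no restarts, and compare a plug-in estimate of the transition matrix to the model M0. The example chain, trajectory length, and naive row-wise total-variation statistic are illustrative; the paper's tester and distance measure are more refined:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 6
# A symmetric model chain M0: uniform jumps on n states (with self-loops).
M0 = np.full((n, n), 1.0 / n)

def simulate(P, x0, steps, rng):
    """One trajectory X_0, ..., X_steps of the chain P started at x0."""
    traj = [x0]
    for _ in range(steps):
        traj.append(int(rng.choice(len(P), p=P[traj[-1]])))
    return traj

def empirical_transitions(traj, n):
    """Row-normalized matrix of observed transition counts."""
    C = np.zeros((n, n))
    for x, y in zip(traj, traj[1:]):
        C[x, y] += 1
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

traj = simulate(M0, x0=0, steps=20000, rng=rng)
P_hat = empirical_transitions(traj, n)

# A crude identity statistic: the largest row-wise TV distance to the model.
stat = 0.5 * np.abs(P_hat - M0).sum(axis=1).max()
print(stat)
```

The single-trajectory constraint is what makes the problem interesting: the per-row sample sizes are random and governed by how often the chain visits each state, which is why hitting times enter the paper's bounds.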
