<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Parameter Symmetry in Perturbed GUE Corners Process and Reflected Drifted Brownian Motions</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>12/01/2020</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10233471</idno>
					<idno type="doi">10.1007/s10955-020-02652-7</idno>
					<title level='j'>Journal of Statistical Physics</title>
<idno type="ISSN">0022-4715</idno>
<biblScope unit="volume">181</biblScope>
<biblScope unit="issue">5</biblScope>					

					<author>Leonid Petrov</author><author>Mikhail Tikhonov</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[The perturbed GUE corners ensemble is the joint distribution of eigenvalues of all principal submatrices of a matrix G + diag(a), where G is a random matrix from the Gaussian Unitary Ensemble (GUE), and diag(a) is a fixed diagonal matrix. We introduce Markov transitions based on exponential jumps of eigenvalues, and show that their successive application is equivalent in distribution to a deterministic shift of the matrix. This result also leads to a new distributional symmetry for a family of reflected Brownian motions with drifts coming from an arithmetic progression. The construction we present may be viewed as a random matrix analogue of the recent results of the first author and Axel Saenz [17].

Keywords: Random matrices • Perturbed GUE corner process • Reflected Brownian motions

Communicated by Antti Knowles.

1 Introduction

1.1 Couplings for Perturbed GUE Corners Process

The Gaussian Unitary Ensemble (GUE) is the most well-known random matrix model [2,6,14]. This paper presents a new symmetry of the distribution of the perturbed GUE ensemble. By this we mean the random matrix ensemble of the form H = G + diag(a_1, …, a_N), where G is an N × N GUE random matrix, to which we add a fixed diagonal matrix. This model is often also called GUE with external source. We refer to [3-5] and references therein for the history of the perturbed ensemble and various asymptotic results. (In fact, below we consider]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The unperturbed GUE random matrix, corresponding to a_i ≡ 0, is unitarily invariant in the sense that there is equality in distribution G =ᵈ UGU* for any fixed N × N unitary matrix U. This implies that the distribution of the eigenvalues of H is symmetric in the perturbation parameters a_1, …, a_N. The overall goal of the paper is to explore probabilistic consequences of this symmetry property.</p><p>Together with the eigenvalues λ^N = (λ^N_N ≤ … ≤ λ^N_1), λ^N_i ∈ ℝ, of the full matrix H = [h_ij]_{i,j=1}^N, one can also consider its corners process,<ref type="foot">1</ref> that is, the interlacing collection of eigenvalues of the principal corners [h_ij]_{i,j=1}^k of H for all k = 1, 2, …, N. (See Fig. <ref type="figure">1</ref> for an illustration.) The distribution of the corners process of H is not symmetric in the parameters a_i. Moreover, assuming that the a_i's are all distinct, there are N! different probability distributions on interlacing collections of eigenvalues at N levels.</p><p>In this paper we present explicit couplings between these N! distributions, by showing that each nearest neighbour transposition a_k ↔ a_{k+1}, k = 1, …, N-1, of the parameters is equivalent in distribution to a rather simple Markov swap operator S^{a_k-a_{k+1}}_k. This swap operator randomly changes the entries λ^k_i on the k-th level of the array given the two adjacent levels λ^{k-1}, λ^{k+1}, while leaving all other entries intact. If a_k &gt; a_{k+1}, then S^{a_k-a_{k+1}}_k is realized as an independent collection of instantaneous exponential type jumps of each λ^k_i to the left:<ref type="foot">foot_1</ref></p><p>λ^k_i ↦ λ^k_i ∧ ((λ^{k+1}_{i+1} ∨ λ^{k-1}_i) + E^i_{a_k-a_{k+1}}), i = 1, …, k,</p><p>where the E^i_{a_k-a_{k+1}}'s are independent exponential random variables with parameter a_k - a_{k+1} (and mean 1/(a_k - a_{k+1})). Here, by agreement, λ^{k-1}_k = -∞. In particular, these left jumps are constrained by the interlacing. For a_k &lt; a_{k+1}, the same jumps are performed to the right in a symmetric way:</p><p>λ^k_i ↦ λ^k_i ∨ ((λ^{k+1}_i ∧ λ^{k-1}_{i-1}) - E^i_{a_{k+1}-a_k}), i = 1, …, k,</p><p>where, by agreement, λ^{k-1}_0 = +∞. Finally, if a_k = a_{k+1}, then S^{a_k-a_{k+1}}_k is the identity operation.</p><p>Theorem 1.1 (Follows from Theorem 4.4 below) Assume that a_k ≠ a_{k+1}. Then the action of the Markov operator S^{a_k-a_{k+1}}_k (with left jumps for a_k &gt; a_{k+1}, and right jumps otherwise) turns the corners distribution of G + diag(a_1, …, a_k, a_{k+1}, …, a_N) into the one of G + diag(a_1, …, a_{k+1}, a_k, …, a_N), where G is the N × N GUE random matrix.</p><p>We establish this theorem by relying on a perturbed Gibbs structure of the corners distribution of the matrix H. Namely, it is well-known that in the unperturbed case, the conditional distribution of the eigenvalues λ^k_i, 1 ≤ i ≤ k ≤ N-1, given λ^N, is uniform on the polytope defined by all the interlacing inequalities (known as the Gelfand-Tsetlin polytope). In the perturbed case, the Gibbs structure is deformed in a certain way by means of the parameters a_i (see Sect. 3.1). The coupling follows by considering the conditional distribution of λ^k given the two adjacent levels λ^{k±1}, which reduces to a collection of independent exponential random variables confined to the corresponding intervals. Producing a suitable Markov swap operator for a single such variable (see Proposition 4.1 below), we arrive at the result of Theorem 1.1. Note also that composing a left swap with the symmetric right swap does not change the distribution of G + diag(a_1, …, a_N). However, this composition is not the identity transformation: two random jumps return a particle to its original location with probability 0.</p></div>
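The left exponential jumps at a single level can be made concrete numerically. The sketch below is ours, not the paper's code; it assumes the jump of each λ^k_i is λ^k_i ∧ ((λ^{k+1}_{i+1} ∨ λ^{k-1}_i) + E^i) with i.i.d. E^i ~ Exp(α), applies it to the corners of a small perturbed GUE matrix, and checks that the jumps go left and preserve interlacing:

```python
import numpy as np

rng = np.random.default_rng(1)

def gue(n):
    # n x n GUE matrix (illustrative normalization)
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

def swap_level(levels, k, alpha):
    """Left-jump swap operator S^alpha_k (alpha positive) -- a sketch.

    levels[m-1] holds the eigenvalues of the m x m corner in DECREASING
    order (lam^m_1 is the largest).  Each lam^k_i jumps left to
    min(lam^k_i, max(lam^{k+1}_{i+1}, lam^{k-1}_i) + E_i) with i.i.d.
    E_i ~ Exp(alpha); by agreement lam^{k-1}_k = -infinity.
    """
    lam_k, lam_up = levels[k - 1], levels[k]
    lam_down = levels[k - 2] if k >= 2 else None
    new = lam_k.copy()
    for i in range(1, k + 1):               # 1-based index within level k
        below = lam_down[i - 1] if i != k else -np.inf
        lower = max(lam_up[i], below)       # lam^{k+1}_{i+1} or lam^{k-1}_i
        new[i - 1] = min(lam_k[i - 1], lower + rng.exponential(1 / alpha))
    out = [lv.copy() for lv in levels]
    out[k - 1] = new
    return out

def interlaces(lo, hi):
    # decreasing arrays; lo has one fewer entry than hi
    return all(hi[j] >= lo[j] >= hi[j + 1] for j in range(len(lo)))

# perturbed GUE matrix with a_2 bigger than a_3, so left jumps act at k = 2
H = gue(4) + np.diag([3.0, 1.0, 0.0, -1.0])
levels = [np.sort(np.linalg.eigvalsh(H[:m, :m]))[::-1] for m in range(1, 5)]
new_levels = swap_level(levels, k=2, alpha=1.0)

assert np.all(levels[1] >= new_levels[1])   # jumps only go left
assert all(interlaces(new_levels[m], new_levels[m + 1]) for m in range(3))
```

Because each new entry lands between its interlacing lower bound and its old value, the interlacing of the whole array survives the swap, which the assertions confirm on a sample.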
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Perturbation by an Arithmetic Progression</head><p>The perturbed GUE corners distributions are compatible for various N, and so one can define the corresponding perturbed GUE corners distribution on infinite interlacing arrays. It depends on an infinite parameter sequence a = {a_i}_{i∈ℤ≥1}. One particularly interesting case is when the perturbation parameters form an arithmetic progression a_i = -(i-1)α, where α &gt; 0. Swapping a_1 with a_2, then a_1 with a_3, and so on all the way to infinity leads to an additive shift in the perturbation parameters, which is equivalent in distribution to a global shift:</p><p>Theorem 1.3 (Theorem 5.2 below) The action of a sequence of left exponential jumps (where the parameter at level k is taken to be kα), from level 1 up to infinity, is equivalent in distribution<ref type="foot">3</ref> to shifting all the elements of the interlacing array by α to the left.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.3">Shifting of Reflected Brownian Motions</head><p>Let α &gt; 0, and let X_k(t), k = 1, 2, …, be reflected Brownian motions constructed as follows. First, X_1(t) is the standard driftless Brownian motion started from 0. Inductively, let X_k(t), k = 2, 3, …, be a new independent Brownian motion with drift -(k-1)α, reflected down off of X_{k-1}(t) by means of subtracting local time. For example, for k = 2,</p><p>X_2(t) = W_2(t) - αt - L(t),</p><p>where W_2(t) is the standard Brownian motion, and L(t) is the continuous non-decreasing process which increases only at times s when X_1(s) = X_2(s) (in other words, it is twice the semimartingale local time of X_1 - X_2 at zero). We refer to <ref type="bibr">[5,</ref><ref type="bibr">19]</ref> for further details on the reflection mechanism, and for an explanation of how to start all these reflected processes from zero (which formally results in infinitely many collisions in finite time). Almost surely we have X_1(t) ≥ X_2(t) ≥ X_3(t) ≥ … for all t.</p><p>Fix t and define</p><p>X̃_k(t) := X_k(t) ∧ (X_{k+1}(t) + E_{kα}), k = 1, 2, …, (1.1)</p><p>where the E_{kα}, k = 1, 2, …, are independent exponential random variables with parameters kα (and mean 1/(kα)).</p><p>Theorem 1.4 For each fixed t, we have equality of joint distributions</p><p>{X̃_k(t)}_{k∈ℤ≥1} =ᵈ {X_k(t) - αt}_{k∈ℤ≥1}.</p><p>In particular, X̃_1(t) = X_1(t) ∧ (X_2(t) + E_α) is a normal random variable with mean (-αt) and variance t. To the best of our knowledge, even this result for two processes (one a usual Brownian motion, and one reflected off it) is new.</p><p>Theorem 1.4 follows from Theorem 1.3 combined with the connection between the reflected drifted Brownian motions and the perturbed GUE corners process due to <ref type="bibr">[5]</ref>. We recall this connection in detail in Sect. 2.3 below, and prove Theorem 1.4 at the end of Sect. 5.</p><p>As stated, Theorem 1.4 assumes that the time t is fixed. Indeed, naively taking independent exponential shifts at different times t would make the functions t → X̃_k(t) discontinuous. It is interesting to see whether a stochastic process version of Theorem 1.4 holds:</p></div>
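The two-process case of the theorem can be probed by Monte Carlo. The sketch below is our illustration, not the paper's code: it discretizes X_1 as a standard Brownian motion, builds X_2 as a Brownian motion with drift -α reflected down off X_1 via the one-sided Skorokhod map, and checks that X_1(t) ∧ (X_2(t) + E_α) is approximately N(-αt, t):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, t, n_steps, n_paths = 1.0, 1.0, 2000, 50000
dt = t / n_steps
sq = np.sqrt(dt)

X1 = np.zeros(n_paths)        # standard driftless Brownian motion
Y2 = np.zeros(n_paths)        # free Brownian motion with drift -alpha
gap_max = np.zeros(n_paths)   # running max of Y2 - X1 (starts at 0)
for _ in range(n_steps):
    X1 += sq * rng.standard_normal(n_paths)
    Y2 += sq * rng.standard_normal(n_paths) - alpha * dt
    gap_max = np.maximum(gap_max, Y2 - X1)
# one-sided Skorokhod reflection of Y2 down off X1 (local-time subtraction)
X2 = Y2 - gap_max

E = rng.exponential(1 / alpha, n_paths)    # E_alpha, mean 1/alpha
X1_tilde = np.minimum(X1, X2 + E)          # the map (1.1) for k = 1

# Theorem 1.4 predicts X1_tilde ~ N(-alpha * t, t); loose tolerances
# allow for Euler discretization bias in the reflection
assert 0.05 > abs(X1_tilde.mean() + alpha * t)
assert 0.08 > abs(X1_tilde.var() - t)
```

The tolerances are deliberately loose: the discretely monitored running maximum underestimates the true local time by O(dt^(1/2)), so the check is a sanity test of the distributional identity, not a high-precision verification.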
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Open problem 1.5 Is it possible to construct a Markov operator on whole trajectories t → {X_k(t)}_{k∈ℤ≥1} which is equivalent in distribution to a shift of reflected Brownian motions as stochastic processes?</head><p>Presumably, if such a Markov operator on processes exists, then its construction could be accomplished using the sticky Brownian motion,<ref type="foot">4</ref> as exponential random variables arise in the study of this process, e.g., see Theorem 1 in <ref type="bibr">[18]</ref>. It seems plausible that the difference process t → X_1(t) - X̃_1(t) ≥ 0 itself could be distributed as the sticky Brownian motion, as the single-time distributions coincide thanks to the results of <ref type="bibr">[18]</ref> and <ref type="bibr">[10,</ref><ref type="bibr">Proposition 14]</ref>. However, it is less clear how to extend this idea to all the differences t → X_k(t) - X̃_k(t) ≥ 0.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.4">Related Discrete Model</head><p>The results of this paper might be viewed as a random matrix limit of the ones from the recent work <ref type="bibr">[17]</ref>. There, similar Markov swap operators were considered on discrete interlacing arrays as in Fig. <ref type="figure">1</ref>. A combination of these swap operators together with a certain Poisson-type limit (cf. Sect. 1.5 below) has led to a Markov chain on distributions of TASEP (totally asymmetric simple exclusion process) which decreases the time parameter. The shifting result for reflected drifted Brownian motions (Theorem 1.4) may be viewed as a certain analogue of the TASEP reversal property. In the Brownian case, instead of decreasing the time, the exponential jumps lead to a deterministic shift.</p><p>It should be pointed out that even though the discrete stochastic systems in <ref type="bibr">[17]</ref> converge to the reflected Brownian motions <ref type="bibr">[8]</ref> (and <ref type="bibr">[5]</ref> in the drifted case), here we do not rely on this convergence or on the results of <ref type="bibr">[17]</ref>. Instead, we obtain the results independently using basic mechanisms related to the (perturbed) Gibbs property.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.5">Unperturbed Case</head><p>In the arithmetic progression setting a_i = -(i-1)α with α &gt; 0, as α → 0 the perturbed GUE corners process of H = G + diag(0, -α, -2α, …) becomes the usual GUE corners process, and the system of reflected Brownian motions {X_k(t)}_{k∈ℤ≥1} becomes driftless. It would be very interesting to see whether the Markov operators considered in the present paper have meaningful limits as α → 0. However, this limit presents certain immediate issues which we discuss now.</p><p>For simplicity, consider the Brownian motion setup. Fix t &gt; 0 and suppress this parameter in the notation. As α → 0, the Markov operator X_k → X̃_k (1.1) turns into the (deterministic) identity operator X_k → X_k. Indeed, this is because Prob(E_{kα} &gt; x) = e^{-kαx} ∼ 1 - kαx for all k and x, and so the minimum in (1.1) is equal to X_k (that is, no jump occurs) with probability of order 1 - O(α). Arguing similarly to the discrete case considered in <ref type="bibr">[17,</ref><ref type="bibr">Sect. 6]</ref>, one can apply the map (1.1) a large number τ/α of times, where τ ∈ ℝ&gt;0 is the scaled time.</p><p>Taking a Poisson-type limit should lead to a continuous time Markov process (with τ as the new time parameter) under which X_k has an exponential clock of rate k(X_k - X_{k+1}), and when the clock rings, X_k instantaneously jumps into a uniformly random location in the interval (X_{k+1}, X_k). This jumping mechanism is known as the Hammersley process <ref type="bibr">[1,</ref><ref type="bibr">9]</ref>. However, applying this continuous time jumping process to the whole system {X_k}_{k∈ℤ≥1} is problematic, as it leads to infinitely many jumps in finite time due to the growing jump rates k(X_k - X_{k+1}) as k → ∞. Moreover, under this hypothetical process X_k would depend on all the X_j with j &gt; k, and so one cannot simply restrict the dynamics to finitely many particles where it would make sense.</p><p>On the other hand, by Theorem 1.4, the hypothetical continuous time dynamics should be equivalent in distribution to a deterministic shift of the (driftless) reflected Brownian motions by -αt · τ/α ∼ -tτ. It is reasonable to expect that such a deterministic shift of infinitely many X_k's cannot be achieved by only finitely many jumps in finite time. To summarize:</p><p>Open problem 1.6 Do there exist well-defined α → 0 limits of the Markov operators acting on the GUE corners process perturbed by an arithmetic progression a_i = -(i-1)α, or on the reflected drifted Brownian motions? These hypothetical limits should act on the (much more studied) unperturbed GUE corners process and driftless reflected Brownian motions.</p></div>
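The Hammersley-type jump mechanism itself is easy to simulate on finitely many particles, keeping in mind that freezing the bottom particle is exactly the kind of finite restriction the discussion above rules out for the true infinite dynamics. The sketch below (ours, for illustration only) runs the clocks of rates k(X_k - X_{k+1}) and the uniform jumps into the gaps:

```python
import numpy as np

rng = np.random.default_rng(2)

def hammersley_run(x, tau_max):
    """Hammersley-type jump dynamics on finitely many ordered particles
    x[0] > x[1] > ... (a finite truncation for illustration; the last
    particle is frozen, so this is NOT faithful to the infinite system).
    Particle k = 1, 2, ... jumps with an exponential clock of rate
    k * (x_k - x_{k+1}) to a uniform point of the interval (x_{k+1}, x_k).
    """
    x = x.copy()
    clock = 0.0
    while True:
        gaps = -np.diff(x)                       # x_k - x_{k+1}, all positive
        rates = np.arange(1, len(x)) * gaps      # rate k * gap_k
        clock += rng.exponential(1.0 / rates.sum())
        if clock > tau_max:
            return x
        k = rng.choice(len(rates), p=rates / rates.sum())
        x[k] = rng.uniform(x[k + 1], x[k])       # jump left into the gap

x0 = np.array([3.0, 2.0, 1.0, 0.0, -1.5])
x1 = hammersley_run(x0, tau_max=0.5)

assert np.all(-np.diff(x1) > 0)   # strict order is preserved
assert np.all(x0 >= x1)           # particles only move left
```

Since every jump lands strictly inside a gap, the ordering is preserved and each particle moves monotonically to the left, as the assertions check.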
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Perturbed GUE Corners Process</head><p>This section is preliminary. We recall the perturbed GUE corners process <ref type="bibr">[5]</ref> (also called the GUE corners process with external source <ref type="bibr">[3]</ref>), and its connection to reflected Brownian motions with drifts. The original, unperturbed GUE corners process is due to <ref type="bibr">[11,</ref><ref type="bibr">12]</ref>, and it was linked to driftless reflected Brownian motions in <ref type="bibr">[19]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Matrix Model</head><p>Take a time parameter t &gt; 0 and an infinite sequence of parameters a = (a_1, a_2, …), a_i ∈ ℝ. Unless otherwise indicated, we assume that the parameters a_i are pairwise distinct. Consider a random matrix H = t^{1/2}·G + t·diag(a) of infinite size, with entries</p><p>H_kl = t^{1/2} g_kl + t a_k 1_{k=l}.</p><p>Here the g_kk are independent real standard normal random variables, and the g_kl, k &lt; l, are independent complex standard normal random variables (that is, their real and imaginary parts are independent real normal random variables, each with mean 0 and variance 1/2). The matrix H is Hermitian.</p><p>For each m ∈ ℤ≥1, take the m × m principal corner [H_kl]_{k,l=1}^m, and let λ^m = (λ^m_m ≤ … ≤ λ^m_1) be its eigenvalues. At adjacent levels, the eigenvalues interlace (notation λ^m ≺ λ^{m+1}):</p><p>λ^{m+1}_{j+1} ≤ λ^m_j ≤ λ^{m+1}_j, j = 1, …, m. (2.1)</p><p>We call the joint distribution of all {λ^k_j}_{1≤j≤k&lt;∞} the perturbed GUE corners process.</p></div>
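The matrix model is straightforward to sample. The following sketch (ours; function names and parameter choices are illustrative, not from the paper) builds H = t^{1/2} G + t diag(a) for finitely many parameters, extracts the eigenvalues of all principal corners, and checks the interlacing (2.1):

```python
import numpy as np

def perturbed_gue_corners(a, t=1.0, seed=0):
    """Sample the first len(a) levels of the perturbed GUE corners process,
    i.e. eigenvalues of all principal corners of H = t^(1/2) G + t diag(a)."""
    rng = np.random.default_rng(seed)
    n = len(a)
    G = np.zeros((n, n), dtype=complex)
    for k in range(n):
        G[k, k] = rng.standard_normal()           # real standard normal
        for l in range(k + 1, n):
            # complex standard normal: each part has variance 1/2
            z = complex(rng.standard_normal(), rng.standard_normal()) * np.sqrt(0.5)
            G[k, l] = z
            G[l, k] = z.conjugate()               # H is Hermitian
    H = np.sqrt(t) * G + t * np.diag(a)
    # level m = eigenvalues of the m x m corner, sorted increasingly
    return [np.sort(np.linalg.eigvalsh(H[:m, :m])) for m in range(1, n + 1)]

levels = perturbed_gue_corners([0.0, -0.7, -1.4, -2.1], t=0.8)
# interlacing (2.1), written for increasingly sorted levels
for m in range(len(levels) - 1):
    lo, hi = levels[m], levels[m + 1]
    assert all(hi[j + 1] >= lo[j] >= hi[j] for j in range(len(lo)))
```

The interlacing holds deterministically (Cauchy interlacing for Hermitian corners), so the assertions pass for any seed.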
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Joint Eigenvalue Density</head><p>A standard application of the Harish-Chandra-Itzykson-Zuber integral shows that the joint eigenvalue density of {λ^N_i}_{i=1}^N at a fixed level N is given by (2.2), where the normalizing constant does not depend on a_1, …, a_N. Here and throughout the paper we use the notation V(x_1, …, x_N) for the Vandermonde determinant.</p><p>Observe from (2.2) that the distribution of {λ^N_j}_{j=1}^N depends on the parameters a_i in a symmetric way. This should indeed be the case, since the distribution of the eigenvalues of the N × N matrix t^{1/2} G_{N×N} + t diag(a_1, …, a_N) does not depend on the order of the a_i's due to the unitary invariance of G_{N×N}. The main goal of this paper is to explore this distributional symmetry from a Markov operator point of view. For this, we will need the joint distribution of the eigenvalues of all corners:</p><p>Proposition 2.1 ([5, Proposition 2.3]) The joint density of the eigenvalues {λ^k_j}_{1≤j≤k≤N} at the first N levels, where N ∈ ℤ≥1 is arbitrary, has the form (2.3), where the normalizing constant does not depend on a_1, …, a_N.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Reflected Brownian Motions</head><p>Fix a perturbation sequence a = {a_i}_{i∈ℤ≥1}. Consider a family of interacting Brownian motions B^k_j, 1 ≤ j ≤ k &lt; ∞, such that:</p><p>• All processes start from zero.</p><p>• The processes B^k_j, j = 1, …, k, have the same drift a_k.</p><p>• The evolution of each B^k_j does not depend on any of the B^l_i's with l &gt; k.</p><p>• The processes interact only through their local times. That is, when the processes are sufficiently far apart, each B^k_j behaves as an independent Brownian motion with drift a_k. Each B^k_j is reflected<ref type="foot">foot_4</ref> off both B^{k-1}_j and B^{k-1}_{j-1}. Therefore, at each time t, the random variables {B^k_j(t)}_{1≤j≤k&lt;∞} almost surely form an interlacing array as in Fig. <ref type="figure">1</ref>.</p><p>We refer to <ref type="bibr">[5,</ref><ref type="bibr">Sect. 4]</ref> (and <ref type="bibr">[19]</ref> in the driftless case) for details on the reflection mechanism.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Proposition 2.2 ([5, Theorem 2])</head><p>At each fixed time moment t &#8712; R &#8805;0 , we have equality of joint distributions of two infinite interlacing arrays:</p><p>, where the right-hand side is the perturbed GUE corners process with the same time parameter t and perturbation sequence a.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Gibbs Measures</head><p>In this section we place the perturbed GUE corners process into a wider family of Gibbs measures on interlacing arrays.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Gibbs Property and Harmonic Functions</head><p>A measure on infinite interlacing arrays {λ^k_j}_{1≤j≤k&lt;∞} (satisfying the inequalities (2.1) between any two consecutive levels) is called a-Gibbs if for each N and any fixed configuration λ^N at level N, the density of the conditional distribution of all the lower entries of the array has the form (3.1) (if some of the λ^N_i's are equal, the density has delta components and formula (3.1) should be understood in a limiting sense). Here and below 1_B stands for the indicator of an event B. Proposition 2.1 implies that the perturbed GUE corners process is an example of an a-Gibbs measure.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Remark 3.1</head><p>The fact that the density (3.1) integrates to 1 in λ^1, …, λ^{N-1} can be checked by induction on N.</p><p>Remark 3.2 When a_i ≡ a are all equal to each other, the a-Gibbs property becomes the usual Gibbs property, with (3.1) replaced by uniform conditioning, provided that the configurations λ^1, …, λ^{N-1}, λ^N interlace. A classification of uniform Gibbs measures on interlacing arrays is due to <ref type="bibr">[16]</ref>. In fact, performing a suitable exponential change of variables, one can see that when a is an arithmetic progression, the space of a-Gibbs measures is essentially the same as in the uniform case. This is somewhat parallel to how the two-sided q-Gelfand-Tsetlin graph degenerates to the "graph of spectra" <ref type="bibr">[7,</ref><ref type="bibr">15]</ref>.</p><p>To each a-Gibbs measure we can associate a family of a-harmonic functions ϕ_N via (3.2), where Density(λ^N) is the marginal density of λ^N. The term "harmonic function" comes from the Vershik-Kerov theory of boundaries of branching graphs, cf. <ref type="bibr">[13]</ref>. Harmonicity means that the functions satisfy a version of a mean value theorem associated to a directed graph Laplacian. In the context of random matrices the discrete graph is replaced by a suitable continuous analogue, and the graph Laplacian becomes an integral operator. In other words, since the a-harmonic functions ϕ_N for different N's come from the same Gibbs measure, they must be consistent in the sense of Lemma 3.3 below; the consistency relation (3.3) should be viewed as a version of the mean value theorem just discussed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Proof of Lemma 3.3</head><p>The claim follows by writing down the joint distribution of λ^1, …, λ^N through ϕ_N and the conditional distribution (3.1), and then integrating out λ^1, …, λ^{N-2} (this produces the factor V(a_1, …, a_{N-1})) and λ^N, to get the marginal density of λ^{N-1}. The resulting marginal density is expressed through ϕ_{N-1} via (3.2), which yields the result.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lemma 3.4</head><p>For an a-Gibbs measure, let each &#981; k depend on a 1 , . . . , a k in a symmetric way. Then the distribution of &#955; k , where k &#8712; Z &#8805;1 is fixed, depends on the parameters a 1 , . . . , a k in a symmetric way, too.</p><p>Proof An immediate consequence of (3.2).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Proposition 3.5 Any a-Gibbs measure is uniquely determined by the corresponding family of a-harmonic functions {ϕ_N}</head><p>Proof Follows from the Kolmogorov extension theorem.</p><p>Let us emphasize that the results of this subsection (Lemmas 3.3 and 3.4 and Proposition 3.5) are valid not only for the perturbed GUE corners process (which, as we see next, is an example of an a-Gibbs measure), but hold in full generality for any a-Gibbs measure.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Perturbed GUE Corners as a Gibbs Measure</head><p>One readily sees that for the perturbed GUE corners process the harmonic functions are given by (3.4), where the constant is the same as in (2.3) and does not depend on the a_j's. One readily checks that the a-Gibbs property (Lemma 3.3) for the perturbed GUE corners process is equivalent to the well-known integral identity (3.5) for the Vandermonde determinants, where the constant does not depend on the a_j's. The shift by a_N t in the exponents on both sides of (3.5) can be removed (or replaced with any other shift bt) by changing the variables in the integral and renaming the λ^{N-1}_i's, since the Vandermonde is translation invariant.</p><p>In particular, (2.3) together with Lemma 3.4 implies the symmetry (as in this lemma) of the perturbed GUE corners distribution with respect to the parameters a_i.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Swap Operators Via Exponential Jumps</head><p>In this section we explore the Gibbs property and prove Theorem 1.1 on Markov swap operators. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Confined Exponential Distribution</head><p>For real c &lt; d and β ≠ 0, denote by E_β(c, d) the confined exponential distribution on the interval (c, d), with density proportional to e^{βy}, y ∈ (c, d), and normalizing constant β/(e^{βd} - e^{βc}); cf. the density computation in the proof of Proposition 4.1 below.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Elementary Markov Swap Operator</head><p>The next observation plays a key role:</p><p>Proposition 4.1 Take real numbers c &lt; d and α &gt; 0. Let X be distributed as E_α(c, d), and let E_α ∈ (0, +∞) be an independent usual exponential random variable with parameter α (i.e., with density αe^{-αy}, y &gt; 0). Then the random variable</p><p>Y := X ∧ (c + E_α) (4.1)</p><p>is distributed as E_{-α}(c, d).</p><p>Proof The conditional distribution of Y given X = x has an atom at y = x (coming from the event E_α &gt; X - c in (4.1)) and an absolutely continuous part on (c, x). The overall density of Y in the variable y is obtained by integrating these two contributions against the density of X, and the computation results in the density (-α/(e^{-dα} - e^{-cα})) e^{-αy} of E_{-α}(c, d), which completes the proof.</p><p>We will view the operation of passing from X to Y as in (4.1) as a one-step Markov transition operator. One can think that the "particle" X ∈ (c, d) jumps left into the new location Y, by means of the new exponential random variable E_α. Note that the new location Y depends only on X and not on the right endpoint d of the interval. We call this Markov transition operator the elementary swap operator and denote it by S_α. This operator acts on distributions (in our case, densities) as Density_Y = Density_X S_α.</p><p>The swap operator S_α is analogous to the jump operator L_α in the discrete situation considered in <ref type="bibr">[17,</ref><ref type="bibr">Sect. 4]</ref>. Let us make a number of remarks.</p><p>Remark 4.2 (1) When α = 0, the swap operator S_α should be understood as the identity map, which is evident from (4.2). (2) For α = -β &lt; 0, the algebraic manipulations in the proof of Proposition 4.1 make sense, but the new random variable Y obtained by applying S_{-β} to X ~ E_{-β}(c, d) does not admit a probabilistic interpretation as in (4.1). (3) Instead of applying S_{-β} to E_{-β}(c, d), let us consider the operator which moves X to the right, symmetrically to how S_α moves the "particle" X to the left. That is, this new operator acts as Y := X ∨ (d - E_β), where E_β is an independent exponential random variable. One can show, similarly to Proposition 4.1, that this operator turns the distribution E_{-β}(c, d) into E_β(c, d). All our results for Markov operators built from the left jumps S_α have straightforward analogues for these right jumping operators, and so we will only focus on the left jumps in the paper.</p></div>
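The elementary swap is easy to test by simulation. The sketch below is ours, not the paper's code; it assumes the confined exponential convention in which E_β(c, d) has density proportional to e^{βx} on (c, d), samples X = E_α(c, d) by CDF inversion, applies Y = X ∧ (c + E_α), and compares the empirical CDF of Y with that of E_{-α}(c, d):

```python
import numpy as np

rng = np.random.default_rng(3)

def confined_exp(beta, c, d, size):
    # E_beta(c, d): density proportional to exp(beta * x) on (c, d)
    # (our assumed convention), sampled by inverting the CDF
    u = rng.uniform(size=size)
    ec, ed = np.exp(beta * c), np.exp(beta * d)
    return np.log(ec + u * (ed - ec)) / beta

alpha, c, d, n = 1.3, -0.5, 2.0, 200000
X = confined_exp(alpha, c, d, n)
E = rng.exponential(1 / alpha, n)       # usual Exp(alpha), mean 1/alpha
Y = np.minimum(X, c + E)                # the elementary swap (4.1)

# CDF of E_{-alpha}(c, d) on a grid strictly inside (c, d)
grid = np.linspace(c, d, 9)[1:-1]
target = (np.exp(-alpha * grid) - np.exp(-alpha * c)) / (
    np.exp(-alpha * d) - np.exp(-alpha * c))
empirical = np.array([np.mean(g >= Y) for g in grid])

assert 0.01 > np.max(np.abs(empirical - target))
```

With 200000 samples the empirical CDF is accurate to about 0.002, so the 0.01 tolerance comfortably confirms that the left jump turns E_α(c, d) into E_{-α}(c, d).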
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Swap Operator for Gibbs Measures</head><p>Let us fix a perturbation sequence a, and let {λ^m_j}_{1≤j≤m&lt;∞} be a random interlacing array distributed according to some a-Gibbs measure (for example, the perturbed GUE corners process with an arbitrary time parameter t ≥ 0).</p><p>Next, fix a level k ∈ ℤ≥1, and consider the conditional distribution of λ^k given the two adjacent levels λ^{k-1}, λ^{k+1} (if k = 1, the conditioning is only on λ^2). From (3.1) one readily sees that this conditional distribution takes the form (4.3), where we have denoted α := a_k - a_{k+1}. Equivalently, we can describe the distribution (4.3) as follows.</p><p>Proposition 4.3 The conditional distribution of λ^k given λ^{k-1} and λ^{k+1} is such that each λ^k_i, i = 1, …, k, is an independent random variable distributed as</p><p>E_α(λ^{k+1}_{i+1} ∨ λ^{k-1}_i, λ^{k+1}_i ∧ λ^{k-1}_{i-1}), (4.4)</p><p>where α = a_k - a_{k+1}. (For i = k we set λ^{k-1}_k = -∞, and for i = 1 we set λ^{k-1}_0 = +∞, but both ends of the interval in (4.4) are always finite.)</p><p>Proof Readily follows from (4.3).</p><p>Assume that α = a_k - a_{k+1} &gt; 0, and take an array {λ^m_j}_{1≤j≤m&lt;∞} as above. Let us define a new random interlacing array {ν^m_j}_{1≤j≤m&lt;∞} for which ν^m_j = λ^m_j for all m ≠ k, j = 1, …, m, and such that</p><p>ν^k_i = λ^k_i ∧ ((λ^{k+1}_{i+1} ∨ λ^{k-1}_i) + E^i_α), i = 1, …, k, (4.5)</p><p>where E^1_α, …, E^k_α are independent usual exponential random variables with parameter α. Note that almost surely we have ν^k_i ≤ λ^k_i, i = 1, …, k. In other words, in (4.5) we independently apply the elementary swap operator S_α to each λ^k_i, which is confined to the corresponding interval as in Proposition 4.3. Denote this combination of the swap operators applied at level k by S^α_k.
As in Remark 4.2, the Markov operator S^α_k makes sense only for α &gt; 0. Let τ_k denote the elementary transposition (k, k+1). For a perturbation sequence a, let τ_k a = (a_1, …, a_{k-1}, a_{k+1}, a_k, …) be the permuted sequence.</p><p>Theorem 4.4 (Theorem 1.1 in the Introduction) Take an a-Gibbs measure for which each harmonic function ϕ_N depends on the parameters a_1, …, a_N in a symmetric way. If a_k &gt; a_{k+1}, then the action of the Markov operator S^α_k (with α = a_k - a_{k+1}) on this a-Gibbs measure results in a τ_k a-Gibbs measure which corresponds to harmonic functions modified as in (4.6).</p><p>Proof Since the action of S^α_k does not change the levels j ≠ k (and hence the distributions of these levels), we clearly have ϕ̃_j = ϕ_j for j ≠ k.</p><p>Thus, it remains to show that under S^α_k the a-Gibbs property becomes the τ_k a-Gibbs property. This can be seen by representing the conditional distributions as in (4.7). The left-hand side of (4.7) depends on a_1, …, a_{k+1} in a symmetric way. One can readily check that Prob(λ^1, …, λ^{k-1} | λ^{k+1}) depends on the parameters a_k, a_{k+1} in a symmetric way, too. Indeed, this conditional distribution corresponds to integrating (3.1) (with N = k+1) over λ^k. The non-exponential prefactor in (3.1) is already symmetric, and the exponential part takes the form (4.8), where we used the normalizing constant for the confined exponential distribution, and α = a_k - a_{k+1}. Swapping the parameters as a_k ↔ a_{k+1} brings the factor exp(-α(|λ^{k+1}| + |λ^{k-1}|)) out of the exponential factor in front of the product in (4.8). This factor compensates the product over i = 1, …, k of the exponential expressions coming out of the product in (4.8) after the same swap. Thus, (4.8) is symmetric under a_k ↔ a_{k+1}.</p><p>The action of S^α_k affects only the part Prob(λ^k | λ^{k-1}, λ^{k+1}) in the right-hand side of (4.7). Before the action of S^α_k, each λ^k_i was distributed as E_α on the corresponding interval (see Proposition 4.3). By Proposition 4.1, after the action of S^α_k these random variables turn into E_{-α}'s, which corresponds to a τ_k a-Gibbs structure. Combining this with the symmetries in (4.7) described above and using Lemma 3.4 and Proposition 3.5, we arrive at the claim.</p><p>In particular, for a_k &gt; a_{k+1}, the perturbed GUE corners process coming from the random matrix t^{1/2}G + t diag(a) turns into the corners process for the random matrix</p><p>t^{1/2}G + t·T_k diag(a) T_k, (4.9)</p><p>where T_k is the permutation matrix of τ_k = (k, k+1). In other words, applying the exponential jumps S^{a_k-a_{k+1}}_k on the level of eigenvalues is equivalent in distribution to the change of basis e_k ↔ e_{k+1} in the space corresponding to the random matrix.</p><p>Acting on a-Gibbs measures with a = (0, -α, -2α, …) (5.1), S^α_1 interchanges 0 with -α, then S^{2α}_2 interchanges 0 (which is now the new a_2) with -2α, and so on. After infinitely many swaps, the parameter 0 disappears, and one expects that the resulting distribution would be a-Gibbs with a = (-α, -2α, -3α, …). For the special choice of the perturbed GUE corners process (5.2), the action of the composition S^α of all these swaps is, moreover, equivalent in distribution to a global shift. We establish the following result (Theorem 5.2): the action of S^α on the perturbed GUE corners process (5.2) is equivalent in distribution to replacing the matrix by the shifted matrix (5.3), that is, to subtracting αt·I, where I is the infinite identity matrix.</p><p>Proof Informally, one can think that (5.3) follows by a sequential application of the change of basis (4.9) under the single-level actions S^{kα}_k. The shift by αt is precisely the difference between t·diag(a) before and after the modification of a.
We will now prove this claim more formally, using Theorem 4.4 on how Gibbs measures change under swap operators.</p><p>Take the harmonic functions ϕ_N = ϕ^{pertGUE(a;t)}_N as in (3.4) with a = (0, -α, -2α, …). The action of each S^{kα}_k changes only the k-th function ϕ_k as in (4.6) and leaves all the other functions intact. Therefore, the action of the whole S^α replaces {ϕ_k} by a new family {ϕ̃_k} given by (5.4). Here we took a_k = 0 because this is precisely the perturbation parameter that is being swapped with a_{k+1} = -kα under the action of S^{kα}_k. The integral in the right-hand side of (5.4) can be computed using (3.5) (with a_N = 0 in that formula), and we obtain ϕ̃_k(λ^k) = C_0 ϕ_k(λ^k) e^{-tk²α²/2}, where C_0 is a constant independent of α.</p><p>Sequentially applying Theorem 4.4, we see that the new harmonic functions {ϕ̃_k} satisfy the Gibbs property with the sequence (-α, -2α, -3α, …), and hence (by Proposition 3.5) correspond to a Gibbs measure with shifted parameters. Let us now identify this particular Gibbs measure. By (3.2), the modified density of λ^k after the application of S^α reads</p><p>C_0 e^{-α|λ^k| - tk²α²/2} Density(λ^k),</p><p>where Density(·) is the original density before applying S^α. In this step, the two Vandermondes V(-α, -2α, …, -kα) arising in (3.2) before and after the swap are equal by their translation invariance, and the ratio of the determinants is e^{-α|λ^k|} (indeed, factor out e^{-αλ^k_j} from each j-th column of the determinant in the numerator). Now, using (2.2) we have</p><p>C_0 e^{-α|λ^k| - tk²α²/2} Density(λ^k) = C_0 · const × e^{-α|λ^k| - tk²α²/2} det[exp(-(λ^k_i + t(j-1)α)²/(2t))]_{i,j=1}^k · V(λ^k_1, …, λ^k_k) / V(0, -α, …, -(k-1)α).</p><p>Here const is the normalizing constant in (2.2), which is independent of α. Observe that for the exponents inside the determinant we have</p><p>-(λ^k_i + t(j-1)α)²/(2t) = -(λ^k_i + tjα)²/(2t) + αλ^k_i + (tα²/2)(2j-1).</p><p>Factoring out the last two terms from each j-th column, we get a factor which precisely cancels with e^{-α|λ^k| - tk²α²/2}. Therefore, we see that the modified density of λ^k is the density of the perturbed GUE corners process with parameters (-α, -2α, …, -kα), that is, of the original level shifted by αt to the left. Normalizing, this implies that C_0 = 1. Thus, applying S^α is indeed equivalent to the global shift by αt to the left, as desired.</p><p>We can now establish the shifting property for the reflected Brownian motions:</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0"><p>Also called minors process in the literature, cf. <ref type="bibr">[11]</ref>.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1"><p>Here and below we use the standard notation A &#8744; B = max(A, B), A &#8743; B = min(A, B) for A, B &#8712; R.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2"><p>Here and below by saying that two operations are "equivalent in distribution" we mean that the random elements resulting from both these operations, applied to the same initial random element, have the same distribution.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3"><p>We are grateful to Jon Warren (personal communication) for suggesting this connection.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4"><p>If one or both ends of the segment are not defined, they should be replaced with infinity of appropriate sign.</p></note>
		</body>
		</text>
</TEI>
