Integrable probability: From representation theory to Macdonald processes

These are lecture notes for a mini-course given at the Cornell Probability Summer School in July 2013. Topics include lozenge tilings of polygons and their representation theoretic interpretation, the (q,t)-deformation of those leading to the Macdonald processes, nearest neighbor dynamics on Macdonald processes, their limit to semi-discrete Brownian polymers, and the large time asymptotic analysis of the polymer partition function.


Introduction
One way to describe the content of these lecture notes is to say that they give a proof of the following statement (up to certain technical details that can be looked up in suitable articles).

Theorem 1.1. Let Z^N_t denote the partition function of the semi-discrete Brownian polymer discussed below. Then, for any κ > 0, with certain explicit κ-dependent constants c_1, c_2 > 0,

lim_{N→∞} Prob( (log Z^N_{κN} − c_1 N) / (c_2 N^{1/3}) ≤ r ) = F_2(r),   r ∈ R,

where F_2(·) is the distribution function of the GUE Tracy-Widom distribution.
The quantity Z^N_t was introduced by O'Connell-Yor [69], and it can be viewed as the partition function of a semi-discrete Brownian polymer (also sometimes referred to as the "O'Connell-Yor polymer"). The limit relation above shows that this polymer model belongs to the celebrated Kardar-Parisi-Zhang (KPZ) universality class; see Corwin [35] for details on the KPZ class and §1.6 of [13] for more explanations, consequences, and references concerning the polymer interpretation.
The exact value of c_1(κ) was conjectured by O'Connell-Yor [69] and proven by Moriarty-O'Connell [64], and the above limit theorem was proven by Borodin-Corwin [13] for a restricted range of κ and by Borodin-Corwin-Ferrari [16] for all κ > 0. A nice physics-oriented explanation of c_2(κ) was given by Spohn [79].
Although the most direct proof of this theorem would likely be quite a bit shorter than these notes, brevity was not our goal. Despite the probabilistic appearance of the statement, any of the known approaches to the proof involves a substantial algebraic component, and the appearance of algebra at first seems at least slightly surprising. The goal of these lecture notes is to suggest the most logically straightforward path (in the authors' opinion) that leads to the desired result, minimizing as much as possible the number of ad hoc steps one takes along the way. (For an interested reader we remark that a shorter proof of Theorem 1.1 can be obtained by combining Corollary 4.2 of [67], Theorem 2 of [21], and the asymptotic analysis of [13].) As we travel along our path (which naturally starts on the algebraic side, in the representation theory of unitary groups), we encounter other probabilistic models that are amenable to similar tools of analysis. The approach that we develop has a number of other applications as well. It has so far been used for (we refer the reader to the indicated references for further explanations):
• asymptotics of the KPZ equation with a certain class of initial conditions [16];
• asymptotics of Log-Gamma fully discrete random directed polymers [21];
• asymptotics of q-TASEP and ASEP [22], [43];
• analysis of new integrable (1+1)d interacting particle systems: the discrete time q-TASEPs of [15], the q-PushASEP [28], [38], and the q-Hahn TASEP [36];
• establishing a law of large numbers for infinite random matrices over a finite field [30] (conjectured by Vershik and Kerov, see [46]);
• Gaussian Free Field asymptotics of the general beta Jacobi corners process [26];
• developing spectral theory for the q-Boson particle system [19] and other integrable particle systems [20];
• asymptotics of probabilistic models originating from the representation theory of the infinite-dimensional unitary group U(∞) [10], [11], [29], [12].
The emerging domain studying such models, which enjoy the benefits of a rich algebraic structure behind them, is sometimes called Integrable Probability, and we refer the reader to the introduction of Borodin-Gorin [25] for a brief discussion of the domain and of its name (the integrable nature of the semi-discrete polymer of Theorem 1.1 was first established by O'Connell [67]). To a certain extent, the present text may be considered a continuation of [25], but it can also be read independently.
In contrast with [25], in our exposition below we do not shy away from the representation theoretic background and intuition that were essential in developing the subject. We also focus on proving a single theorem, rather than describing the variety of other related problems listed above, in order to discuss in depth the analytic difficulties arising in converting an algebraic formalism into analytic statements. These difficulties are related to the phenomenon of intermittency and to the popular, yet highly non-rigorous and sometimes dangerous, replica trick favoured by physicists, and one of our goals is to show how raising the amount of "algebraization" of the problem can be used to overcome them.
The notes are organized as follows. In Section 2, we explain how lozenge tilings of a class of polygons on the triangular lattice can be interpreted via representation theory of the unitary groups, and how this leads to contour integral formulas for averages of various observables.
In Section 3, we show, in a specific example, how the steepest descent analysis of the obtained contour integrals yields meaningful probabilistic information about lozenge tilings. Section 4 describes an approach to constructing local Markov dynamics on lozenge tilings, and how (1+1)-dimensional interacting particle systems (like usual and long range Totally Asymmetric Simple Exclusion Processes (TASEPs)) arise as marginals of such dynamics. The approach we describe is relatively recent; it was developed in Borodin-Petrov [28] (an extension of the method will appear in [30]). Section 5 deals with a two-parameter (Macdonald, (q, t)-) generalization of the previous material.
In Section 6 we show how a simple-minded asymptotic analysis of the contour integrals in the q-deformation of lozenge tilings leads to semi-discrete Brownian polymers. The contour integrals describe the q-moments of the q-TASEP, an integrable deformation of the usual TASEP. Section 7 explains the difficulties which arise if one straightforwardly tries to describe the distribution of the polymer partition function using its moments; the latter arise naturally as limits of the q-TASEP's q-moments.
In the final Section 8 we demonstrate how those difficulties can be overcome by considering the Laplace transform of the polymer partition function and its q-analog for the q-TASEP particle locations.

These notes are based on a mini-course given at the Cornell Probability Summer School, and we would like to thank the organizers for the invitation and warm hospitality. We are also very grateful to Ivan Corwin and Vadim Gorin for numerous valuable comments, and we thank the anonymous referee for several helpful remarks. AB was partially supported by the NSF grant DMS-1056390. LP was partially supported by the RFBR-CNRS grants 10-01-93114 and 11-01-93105.

Lozenge tilings and representation theory
We begin with a discussion of a well-known probabilistic model of randomly tiling a hexagon drawn on the triangular lattice, and explain its relation to representation theory of unitary groups. This relation produces rather natural tools for analysis of uniformly random lozenge tilings of the hexagon.

2.1. Lozenge tilings of a hexagon. Consider the problem of tiling a hexagon with sides of length a, b, c, a, b, c drawn on the triangular lattice by lozenges that are defined as pairs of elementary triangles glued together (see Fig. 1a). Here a, b, and c are any positive integers, and we assume that the side of an elementary triangle has length 1. There are three different types of lozenges, distinguished by their orientation. Such tilings (which are in bijective correspondence with boxed plane partitions) can be interpreted in a variety of ways (see Fig. 2):
(1) As dimers (or perfect matchings) on the dual hexagonal lattice.
(2) As sets of nonintersecting Bernoulli paths following lozenges of two of the three types, with prescribed beginnings and ends.
(3) As stepped surfaces made of 1 × 1 × 1 cubes.
(4) As interlacing configurations of lattice points (centers of lozenges of one of the types, say, the vertical one), as on Fig. 2. Such configurations must have a prescribed number of points in each horizontal section that may depend on the section.

Figure 2. Various interpretations of a lozenge tiling.

2.2. Representations of unitary groups. Our first goal is to match this combinatorial object with a basic representation theoretic one. A (finite-dimensional) representation of the unitary group U(N) is a continuous map T : U(N) → GL(m, C) (for some m = 1, 2, . . .) which respects the group structure: T(uv) = T(u)T(v). The classification of irreducible representations of U(N) (equivalently, of GL(N, C), related to U(N) by analytic continuation, H. Weyl's "unitary trick") is one of the high points of classical representation theory. It is due to Hermann Weyl in the mid-1920s. In order to understand how it works, let us restrict T to the abelian subgroup of diagonal unitary matrices H_N := { diag(e^{iϕ_1}, . . . , e^{iϕ_N}) : ϕ_1, . . . , ϕ_N ∈ R }.
Any commuting family of (diagonalizable) matrices can be simultaneously diagonalized. In particular, this is true for T(H_N). Hence, in a suitable basis, T(u) = diag(t_1(u), . . . , t_m(u)) for all u ∈ H_N, where each t_j is a continuous homomorphism H_N → C*. Any such homomorphism has the form t(diag(e^{iϕ_1}, . . . , e^{iϕ_N})) = e^{i(k_1 ϕ_1 + . . . + k_N ϕ_N)} for some integers k_1, . . . , k_N ∈ Z; the N-tuple (k_1, . . . , k_N) is called a weight. There is a total of m weights (which is the dimension of the representation).
Theorem 2.1 (H. Weyl, see, e.g., [83], [85]). Irreducible representations of U(N) are in one-to-one correspondence with ordered N-tuples of integers λ = (λ_1 ≥ λ_2 ≥ . . . ≥ λ_N). The correspondence is established by requiring that λ is the unique highest (in lexicographic order) weight of the corresponding representation. Then the generating function of all weights of this representation T_λ can be written as

Trace T_λ(diag(z_1, . . . , z_N)) = det[z_i^{λ_j + N − j}]_{i,j=1}^{N} / det[z_i^{N − j}]_{i,j=1}^{N}.   (2.1)

Note that the denominator in (2.1) is the Vandermonde determinant which evaluates to ∏_{1≤i<j≤N} (z_i − z_j). The numerator in (2.1) is necessarily divisible by the denominator because of its skew-symmetry with respect to z_i ↔ z_j, and thus the ratio is a finite linear combination of the monomials of the form z_1^{k_1} · · · z_N^{k_N}, k_1, . . . , k_N ∈ Z (i.e., an element of C[z_1^{±1}, . . . , z_N^{±1}]^{S(N)}). The polynomials Trace(T_λ) are called Schur polynomials, after Issai Schur, who used them in the representation theory of the symmetric group in his thesis around 1900. However, one of the earliest appearances of these polynomials dates back to Cauchy [33] and Jacobi [54], about a century before Weyl's work. The Schur polynomials are denoted by s_λ = Trace(T_λ); they are, generally speaking, symmetric homogeneous Laurent polynomials in N variables.
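Since (2.1) will be used repeatedly, it may help to see it computed in a small case. The following sketch (exact arithmetic; the helper names are ours, not the text's) evaluates the ratio of determinants at integer points and compares with the elementary and complete homogeneous symmetric polynomials one expects:

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    # determinant via permutation expansion (fine for the small matrices used here)
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        term = Fraction(1)
        for i in range(n):
            term *= m[i][p[i]]
        total += -term if inv % 2 else term
    return total

def schur(lam, xs):
    # bialternant formula (2.1): s_lambda(x) = det[x_i^(lam_j + N - j)] / det[x_i^(N - j)]
    n = len(xs)
    num = [[Fraction(xs[i]) ** (lam[j] + n - 1 - j) for j in range(n)] for i in range(n)]
    den = [[Fraction(xs[i]) ** (n - 1 - j) for j in range(n)] for i in range(n)]
    return det(num) / det(den)

x = (1, 2, 3)
print(schur((1, 0, 0), x))  # e_1 = 1 + 2 + 3 = 6
print(schur((1, 1, 0), x))  # e_2 = 1*2 + 1*3 + 2*3 = 11
print(schur((2, 0, 0), x))  # h_2 = all degree-2 monomials = 25
```

The divisibility of the numerator by the Vandermonde determinant is visible here: the ratio always comes out as an integer at integer points.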
While the ratio of determinants formula (2.1) is beautiful and concise (it is a special case of Weyl's character formula which works for any compact semi-simple Lie group), it does not describe the set of weights explicitly. To do that, we need an elementary branching rule:

Lemma 2.2. s_λ(z_1, . . . , z_N) = Σ_{µ≺λ} s_µ(z_1, . . . , z_{N−1}) z_N^{|λ|−|µ|},   (2.2)

where the sum is taken over µ = (µ_1, . . . , µ_{N−1}) ∈ Z^{N−1}, the notation µ ≺ λ means the interlacing λ_1 ≥ µ_1 ≥ λ_2 ≥ µ_2 ≥ . . . ≥ µ_{N−1} ≥ λ_N, and |λ| = Σ_{j=1}^{N} λ_j, |µ| = Σ_{j=1}^{N−1} µ_j.

Proof. Clear the denominators in (2.2) and compare the coefficients of each of the monomials z_1^{k_1} · · · z_N^{k_N}.

Applying this lemma N times, we see that the weights are in one-to-one correspondence with interlacing triangular arrays of integers

{λ_j^{(k)} : 1 ≤ j ≤ k ≤ N},   λ_{j+1}^{(k+1)} ≤ λ_j^{(k)} ≤ λ_j^{(k+1)},   λ^{(N)} = λ.   (2.3)

Such arrays are called Gelfand-Tsetlin schemes/patterns, and they will play a prominent role in what follows.
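The branching rule implies that the number of Gelfand-Tsetlin schemes with a fixed top row λ equals the dimension of T_λ. A quick computational check against Weyl's dimension formula (cf. (2.6) below); function names are ours:

```python
from fractions import Fraction

def gt_patterns(top):
    # count Gelfand-Tsetlin schemes with a given (weakly decreasing) top row,
    # peeling off one row at a time via the interlacing condition of Lemma 2.2
    if len(top) == 1:
        return 1
    total = 0
    def interlacing_rows(prev, j, cur):
        if j == len(prev) - 1:
            yield tuple(cur)
            return
        for v in range(prev[j + 1], prev[j] + 1):   # prev_{j+1} <= mu_j <= prev_j
            yield from interlacing_rows(prev, j + 1, cur + [v])
    for mu in interlacing_rows(top, 0, []):
        total += gt_patterns(mu)
    return total

def weyl_dim(lam):
    # Weyl's dimension formula for U(N): prod_{i<j} (lam_i - lam_j + j - i)/(j - i)
    n = len(lam)
    d = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            d *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return int(d)

for lam in [(2, 1, 0), (3, 1, 0), (2, 2, 1, 0)]:
    print(lam, gt_patterns(lam), weyl_dim(lam))
```

For λ = (2, 1, 0) both counts equal 8, the dimension of the adjoint representation of U(3).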
Observe that if we shift all leftmost entries of a Gelfand-Tsetlin scheme of depth (or height) N by 0, the second-to-left ones by 1, etc., then in the end we obtain a similar array where some of the inequalities become strict (2.4). Tilings of the hexagon correspond exactly to such arrays with the appropriate fixed top row. Proof. By picture, see Fig. 3. If we coordinatize the tiling by taking the centers of the vertical lozenges in the coordinate system on the picture, then we read off the shifted Gelfand-Tsetlin schemes (2.4).
The total number of weights of T_λ (or, equivalently, the dimension of the representation) was denoted by m = m(λ) above, and it is given by

m(λ) = ∏_{1≤i<j≤N} (λ_i − λ_j + j − i) / (j − i).   (2.6)

Figure 3. On the correspondence between lozenge tilings of a hexagon and weights.
This is a special case of Weyl's dimension formula (which again works for any compact semi-simple Lie group).

2.3.
Distribution of lozenges on a horizontal slice. Consider the uniform probability measure Prob_{a,b,c} on the space of all lozenge tilings of the hexagon with sides a, b, c, a, b, c (see Fig. 1). The normalizing factor in the measure Prob_{a,b,c} (the so-called partition function) is given in (2.6) with λ as in (2.5).
Remark 2.5. For tilings of the hexagon with sides of length a, b, c, a, b, c the partition function was first computed in a nicer product form by MacMahon: the total number of tilings equals ∏_{i=1}^{a} ∏_{j=1}^{b} ∏_{k=1}^{c} (i + j + k − 1)/(i + j + k − 2).

Let us focus on what happens when we consider one horizontal slice of our uniformly random lozenge tiling. As above, we coordinatize it by the locations of the centers of the vertical lozenges. It is easiest to assume that the slice is close enough to one of the two horizontal boundaries, say, the lowest one (see Fig. 4).
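The product form of Remark 2.5 is MacMahon's formula for boxed plane partitions; since tilings of the hexagon are in bijection with plane partitions in an a × b × c box, it can be checked by brute-force enumeration in small cases. A minimal sketch (function names are ours):

```python
from fractions import Fraction
from itertools import product

def macmahon(a, b, c):
    # MacMahon's product formula for the number of plane partitions in an a x b x c box
    r = Fraction(1)
    for i in range(1, a + 1):
        for j in range(1, b + 1):
            for k in range(1, c + 1):
                r *= Fraction(i + j + k - 1, i + j + k - 2)
    return int(r)

def brute_count(a, b, c):
    # plane partitions in the box: a x b arrays with entries in {0,...,c},
    # weakly decreasing along every row and every column
    count = 0
    for vals in product(range(c + 1), repeat=a * b):
        m = [vals[i * b:(i + 1) * b] for i in range(a)]
        rows_ok = all(m[i][j] >= m[i][j + 1] for i in range(a) for j in range(b - 1))
        cols_ok = all(m[i][j] >= m[i + 1][j] for i in range(a - 1) for j in range(b))
        count += rows_ok and cols_ok
    return count

for abc in [(1, 1, 1), (2, 2, 2), (2, 3, 2)]:
    print(abc, brute_count(*abc), macmahon(*abc))
```

The unit hexagon has 2 tilings, and the 2 × 2 × 2 box already contains 20 plane partitions.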
Proposition 2.6. For any h, 0 ≤ h ≤ min(a, c), the distribution of lozenges on the horizontal slice at height h has the form

Prob{(x_1, . . . , x_h)} = const · ∏_{1≤i<j≤h} (x_i − x_j)² · ∏_{i=1}^{h} w(x_i),   (2.7)

where w = w_{a,b,c,h} is an explicit weight function (2.8).

Remark 2.7. Probability measures of the form (2.7) with arbitrary positive weight function w(·) are known as orthogonal polynomial ensembles as they are closely related to the orthogonal polynomials with weight w. The measure (2.7) itself is often referred to as the Hahn orthogonal polynomial ensemble, as this particular weight w (2.8) corresponds to the classical Hahn orthogonal polynomials. See, e.g., [59] and references therein for details.
Proof. We can cut the enumeration problem into two that look like those on Fig. 5, and then multiply the results. Each of the two problems (compute the number of tilings of the corresponding region with fixed top row) is solved by the dimension formula (2.6).
Figure 5. Computing the distribution of lozenges on a horizontal slice (cf. Fig. 4) amounts to two enumeration problems.
The computation of Proposition 2.6 already allows one to see interesting limit transitions for a fixed h (for example, h = 1). We can rewrite (2.8) as (2.9). One can consider the following limit regimes:
(1) If a, b, c → ∞ so that ab/c → t, the first term just contributes to a constant, while the second one converges to t^x/x!.
(2) In a similar way, if we keep a finite and send b, c → ∞ in such a way that b/(b + c) → ξ, 0 < ξ < 1, then the relevant part of (2.9) converges to a similar explicit limit.
(3) A slightly more complicated limit transition would be to take a, b, c → ∞ so that the triple ratio a : b : c has a finite limit. Then Stirling's formula shows that after proper shifting and scaling of (x_1, . . . , x_h), which does not affect the factor ∏_{1≤i<j≤h} (x_i − x_j)², the nontrivial part of w_{a,b,c,h}(x) converges to a Gaussian weight e^{−x²/2}, x ∈ R.

There is also a representation-theoretic way to view these results. Restricting to a fixed horizontal slice means that we only care about the restriction of our representation of U(a + c) (recall Proposition 2.3) to the subgroup U(h) of matrices which are nontrivial (i.e., different from Id) only in the top-left h × h corner. In terms of weights, we only care about the powers of z_1, . . . , z_h and substitute z_{h+1} = . . . = z_N = 1. This is equivalent to saying that the probability (2.7) of (x_1, . . . , x_h) = (µ_1 + h − 1, µ_2 + h − 2, . . . , µ_h) is the normalized coefficient of s_µ(z_1, . . . , z_h) in the identity (2.10), where λ is as in (2.5), and we are dividing by the normalizing constants to have the "Prob" coefficients add up to 1. This corresponds to looking at relative dimensions of isotypical subspaces (i.e., those that transform according to a fixed irreducible representation) rather than the actual ones.
The first two of the above three limit transitions turn (2.10) into a similar identity (2.11) involving the limiting measures. In fact, these two limits ab/c → t and b/(b + c) → ξ correspond to certain infinite-dimensional representations of the infinite-dimensional unitary group U(∞) = lim_→ U(N). The third (Gaussian) limit is the eigenvalue projection of a matrix Fourier transform identity in which Herm(N) is the space of N × N Hermitian (H* = H) matrices, A ∈ Herm(N), and M(dB) is the probability measure on Herm(N) with the density proportional to e^{−Trace(B²)/2} dB, also known as the Gaussian Unitary Ensemble (or GUE). This limit is a special case of the so-called quasiclassical limit in representation theory that degenerates "large" representations to probability measures on (co-adjoint orbits of) the associated Lie algebra, e.g., see [77], [52], [51]. For a broad survey of quantization ideas in representation theory see, e.g., [58] and references therein.

2.4. Scalar operators and observables.
We are interested in more complex limit transitions than those in §2.3, and for accessing them the following representation theoretic thinking is useful. Our probability weights (2.7) arise as relative dimensions of the isotypical subspaces in the representation space for U(N ). Moreover, these subspaces are blocks of identical irreducibles with respect to the action of the smaller group U(h).

2.4.1.
Locally scalar operators. The problem of decomposing a representation into irreducible components is often referred to as the problem of (noncommutative) harmonic analysis. It can be viewed as a noncommutative Fourier transform, an analogue of the classical Fourier transform when R acts by shifts on L²(R). The "best" way to solve such a problem would be to find operators in the representation space which project onto a given isotypical component. For the classical Fourier transform, these operators are the projections onto individual frequencies. For the action of the symmetric group, such operators are known under the name of Young symmetrizers; they date back to the earliest days of representation theory. However, even if one can construct such operators, they are quite complicated. The "next best" thing is to find operators which are scalar in each irreducible representation (the projection operators take value 1 in one irreducible representation, and 0 in all other irreducible representations). By Schur's lemma, such operators are exactly those that commute with the action of the group.

2.4.2. Dilation operators.
Observe that U(h) has a nontrivial center: the scalar matrices of the form e^{iϕ}·1, ϕ ∈ R. Their action on elements diag(z_1, . . . , z_h) ∈ H_h amounts to multiplying each z_j by e^{iϕ}, and their action on a vector of weight (k_1, . . . , k_h) ∈ Z^h is the multiplication by e^{iϕ|k|} = e^{iϕ(k_1 + . . . + k_h)} (see §2.2). Hence, using the homogeneity of the Schur polynomials, we see that on an irreducible representation of U(h) with highest weight µ = (µ_1 ≥ . . . ≥ µ_h) such an operator acts as the scalar operator e^{iϕ|µ|}·1.

2.4.3.
Quadratic Casimir-Laplace operator. Going further, the first nontrivial example of an operator which commutes with the action of U(h) is the so-called quadratic Casimir-Laplace operator C_2. Its action on functions on H_h is given by (2.13). Such operators exist for all semi-simple Lie groups and are one of the basic representation-theoretic objects. Also, C_2 is the (projection to eigenvalues of the) generator of the Brownian motion on U(h). In other words, (2.13) is the generator of the circular Dyson Brownian motion [41], [40]. See also §4.1 below for a related Markov dynamics. It is immediate to see (using the ratio of determinants formula (2.1) and the fact that (z ∂/∂z) z^k = k z^k) that the action of the quadratic Casimir-Laplace operators on the Schur polynomials is diagonal, with explicit eigenvalues (2.14)-(2.15). We could now proceed with the application of C_2 to (2.11). However, let us first note that the dilation operators D_ϕ can be written in a form rather similar to C_2. Indeed, the desired eigenrelation D_ϕ s_µ = e^{iϕ|µ|} s_µ again follows from (2.1) and the fact that e^{iϕ z ∂/∂z} z^k = e^{iϕk} z^k.
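The diagonal action is easy to verify by computer. The sketch below checks the conjugated operator V^{−1} ∘ Σ_i (z_i ∂/∂z_i)² ∘ V on the numerator det(z_i^{µ_j + h − j}) of (2.1); the Schur eigenvalue comes out as Σ_j (µ_j + h − j)², which may differ from the normalization of (2.15) by an additive constant:

```python
from itertools import permutations

def alternant(ell):
    # det(z_i^{ell_j}) expanded as {exponent tuple: coefficient}; this equals V(z) * s_mu(z)
    n = len(ell)
    poly = {}
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        expo = tuple(ell[p[i]] for i in range(n))
        poly[expo] = poly.get(expo, 0) + (-1) ** inv
    return {e: c for e, c in poly.items() if c}

def apply_casimir(poly):
    # sum_i (z_i d/dz_i)^2 scales each monomial z^k by sum_i k_i^2
    return {e: c * sum(k * k for k in e) for e, c in poly.items()}

mu, h = (3, 1, 0), 3
ell = [mu[j] + h - 1 - j for j in range(h)]    # shifted exponents mu_j + h - j
numerator = alternant(ell)
eig = sum(l * l for l in ell)                  # expected eigenvalue: sum_j (mu_j + h - j)^2
lhs = apply_casimir(numerator)
rhs = {e: eig * c for e, c in numerator.items()}
print(lhs == rhs, "eigenvalue =", eig)
```

The check is exact: the operator multiplies each monomial of the antisymmetrized numerator by the sum of squares of its exponents, and that sum is the same, Σ_j ℓ_j², for every term; this is precisely why the action on Schur polynomials is diagonal.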

2.4.4.
A q-deformation. Let us now note that we have a general recipe on our hands for constructing operators which have Schur polynomials as their eigenfunctions: conjugate, as in (2.14), an operator that acts diagonally on monomials. For example, we can take a q-deformation with a parameter q ∈ C. Then using (2.14) we obtain a q-difference operator, and (2.15) yields the eigenrelation (2.18). As before, we would like to substitute z_1 = . . . = z_h = 1 in this identity. However, observe that the left-hand side is not well-suited for that. A standard trick helps: the left-hand side can be rewritten as a simple contour integral, where the integration contour goes around z_1, . . . , z_h in the positive direction.
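The resulting q-difference operator can be identified (this identification is our reading of the construction; cf. the Macdonald generalization in Section 5) with the first Macdonald operator specialized at t = q, which has the Schur polynomials as eigenfunctions with eigenvalue Σ_j q^{µ_j + h − j}. A numeric check:

```python
from itertools import permutations
from math import prod

def det(m):
    n = len(m)
    return sum(
        (-1) ** sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        * prod(m[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

def schur(mu, z):
    # numeric Schur polynomial via the bialternant formula (2.1); z must have distinct entries
    n = len(z)
    num = det([[z[i] ** (mu[j] + n - 1 - j) for j in range(n)] for i in range(n)])
    den = det([[z[i] ** (n - 1 - j) for j in range(n)] for i in range(n)])
    return num / den

def apply_D(mu, z, q):
    # first Macdonald q-difference operator at t = q:
    # (D f)(z) = sum_i prod_{j != i} (q z_i - z_j)/(z_i - z_j) * f(z_1, ..., q z_i, ..., z_n)
    n = len(z)
    total = 0.0
    for i in range(n):
        coef = prod((q * z[i] - z[j]) / (z[i] - z[j]) for j in range(n) if j != i)
        w = list(z)
        w[i] = q * z[i]
        total += coef * schur(mu, w)
    return total

z, q = (2.0, 3.0), 0.5
for mu in [(1, 0), (2, 0), (2, 1)]:
    eig = sum(q ** (mu[j] + len(z) - 1 - j) for j in range(len(z)))
    print(mu, apply_D(mu, z, q), eig * schur(mu, z))
```

For each µ the two printed numbers agree, illustrating the eigenrelation behind (2.18).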
Using the above lemma and setting z_1 = . . . = z_h = 1, we read from (2.18) the identity (2.19). The quantity in the right-hand side of (2.19) does not seem very probabilistic, but we can now use the arbitrariness of the parameter q. For any n ∈ Z, we can compare the coefficients of q^n in both sides of (2.19). This amounts to integrating the left-hand side once again (with dq/q^{n+1}), and thus yields:

Theorem 2.9. For any t ≥ 0 and h = 1, 2, . . ., one has the double contour integral identity (2.20).

The left-hand side of (2.20) is a very meaningful probabilistic quantity: it is the probability of seeing a vertical lozenge at a given location on the horizontal slice (cf. Fig. 4), the so-called density function of the measure Prob_{t,h}. Furthermore, the right-hand side of (2.20) is well-suited for asymptotic analysis, which we perform in the next section.
Remark 2.10. Theorem 2.9 is a special case of a more general formula that represents correlation functions of the so-called Schur measures as multiple contour integrals. See [25] and references therein for details.

Asymptotics of tiling density via double contour integrals
Here we perform an asymptotic analysis of the density function (2.20) of the measure Prob_{t,h} on the h-th horizontal slice in the limit regime (3.1), in which t, h, and the point of observation n in (2.20) grow proportionally to a large parameter L (with ratios τ, η, and ν, respectively). The limit regime (3.1) is quite nontrivial and is not achievable via elementary tools (in contrast with the limit transitions in §2.3). The reader may want to peek at Figures 10 and 11 below to see what type of description we are aiming at. Changing the variables q → v = wq, dq = dv/w in the left-hand side of (2.20) gives a double contour integral over Γ_0 and Γ_1, by which we denote small positively oriented contours around 0 and 1, respectively. Further analysis uses the original idea of Okounkov [70] and largely follows [23]. We observe that the integrand has the form e^{L(F(v) − F(w))} times a factor that does not grow with L. If we manage to deform the contours in such a way that ℜF(v) − ℜF(w) < 0 on them, except possibly at a finite number of points (where ℜ denotes the real part), then our integral would asymptotically vanish as L → ∞. The deformation depends on the location of the critical points of F(z), i.e., of the roots of the equation F'(z) = 0 (3.2). The discriminant of the numerator of F'(z) has an explicit form (3.3). We will now consider all possible cases one by one.
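Before turning to the case analysis, the mechanics of steepest descent for such contour integrals can be illustrated on a model example where everything is known in closed form: the coefficient extraction (1/2πi)∮ e^{Lz} z^{−L−1} dz = L^L/L!, with F(z) = z − log z, whose critical point z = 1 yields the approximation e^L/√(2πL):

```python
import cmath
import math

def contour_integral(L, npts=4000):
    # (1/(2*pi*i)) * \oint e^{Lz} z^{-L-1} dz over the unit circle, as a Riemann sum in theta
    total = 0j
    for k in range(npts):
        theta = 2 * math.pi * k / npts
        z = cmath.exp(1j * theta)
        total += cmath.exp(L * z) * z ** (-L)
    return (total / npts).real

L = 40
exact = float(L ** L) / math.factorial(L)            # the coefficient of z^L in e^{Lz}
numeric = contour_integral(L)
saddle = math.exp(L) / math.sqrt(2 * math.pi * L)    # steepest descent at the critical point z = 1
print(numeric / exact, saddle / exact)
```

The quadrature reproduces the exact value essentially to machine precision, and the one-term saddle point approximation is already accurate to about 1/(12L), the leading Stirling correction. The analysis of (2.20) below follows the same logic, except that the critical points of F move with the parameters (τ, ν, η) and the contours must be deformed around each other.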
Case 1: √ν > √τ + √η. In this case both roots of (3.2) are real and greater than 1. The plot of ℜF(z) looks as on Fig. 6 (top). Moving the v contour to the level line through one critical point and the w contour to the level line through the other implies the desired vanishing. However, in the process of deformation, the v contour, which was originally a small circle around the origin, has swallowed the w contour, see Fig. 6 (bottom). Because of the factor (v − w)^{−1} in the integrand, we have to compensate the result of moving the contours by subtracting the residue at v = w. Thus, we see that for √ν > √τ + √η, the density of vertical lozenges asymptotically vanishes.
Case 2. In this case, the two critical points (solutions of (3.2)) are complex conjugate. Consider the contour plot of ℜ(F(z) − F(z_c)), where we have shifted F(z) by its value at the upper critical point z_c, ℑ(z_c) > 0 (ℑ denotes the imaginary part). This contour plot looks like Fig. 7 (left). Deforming the w and v contours so that they pass through the critical points z_c and its conjugate implies the vanishing of the integral. However, in the process of deformation, we pick up the residue at v = w, which produces the limiting density function for vertical lozenges in this regime.
Case 3. This final case contains two subcases depending on whether √τ > √η or √τ < √η. In the first one, the plot of ℜF(z) looks as on Fig. 8 (upper). Deforming the integration contours to level lines (similarly to what was done in Case 1) requires no residue-picking. Thus, the limiting density is zero in the subcase √τ > √η. In the second subcase, the picture is slightly different, see Fig. 8 (lower). The familiar deformation of the contours to the level lines now requires that the w contour swallow the v contour (see Fig. 9). This results in an extra residue whose contribution equals 1. Thus, the limiting density is 1 in this case.
Summarizing, we see that the asymptotic density of the vertical lozenges is nontrivial, for each given τ, inside the parabola discr = 0 (3.3) in the (ν, η)-plane. Outside of this parabola, the density of the vertical lozenges either vanishes or tends to 1, signaling the frozen parts (facets) of the limit shape, see Figures 10 and 11.

Figure 10. Limiting density of the vertical lozenges in the (ν, η)-plane.
Figure 11. Simulation of the limiting distribution of lozenges. See also [42].

In a similar way, using products of operators D^{(1)} with different values of q, one can extract integral representations for higher correlation functions of vertical lozenges (i.e., probabilities that a given set of locations is occupied by vertical lozenges). Those integral representations can be analyzed in exactly the same fashion as above; this was done in [23]. Indeed, if one knows the corresponding averages (here h is fixed, but one can also handle different h's)
for any q_1, . . . , q_s ∈ C, one can extract the order-s correlation function by looking at the coefficients of the monomials q_1^{n_1} q_2^{n_2} · · · q_s^{n_s}. The result reproduces known formulas for the correlation functions of the so-called Schur processes, e.g., see [25] and references therein.
It should also be possible to carry out a similar program for the case of the growing hexagon with sides a, b, c, a, b, c when the triple ratio a : b : c remains constant. This would require analyzing the asymptotics of ratios of the form s_λ(z_1, . . . , z_k, 1, . . . , 1)/s_λ(1, . . . , 1) with growing λ as in (2.5), which can probably be done via the recently developed techniques of [47].
In a different way, integral representations for the correlation functions in the hexagon were recently obtained and asymptotically analyzed in [71], [72], [73].

Markov dynamics
Our next goal is to add an extra dimension to our probabilistic models by introducing suitable Markov evolutions on them. This is not obvious and requires preliminary work.

4.1. Dyson Brownian motion and its discrete counterparts. A hint at the existence of a nontrivial Markov dynamics comes from the relation to random matrices mentioned before (in particular, see the third limit regime in §2.3). Indeed, a GUE matrix of size N × N has density proportional to e^{−Trace(X²)/2} with respect to the Lebesgue measure on the linear space Herm(N) of Hermitian N × N matrices; equivalently, its suitably normalized matrix elements (4.1) are independent identically distributed standard normal random variables. Following Dyson [40], one can replace these variables by standard Brownian motions. A nontrivial computation shows that the corresponding Markov process on Hermitian matrices projects to a Markov process on the spectra of matrices. The generator of the process on the spectra is given by (4.2) (here Spec(X) = (x_1, . . . , x_N)). In (4.2), ∘ means composition of operators: first, we multiply by the Vandermonde determinant ∏_{i<j}(x_i − x_j), then apply the Laplacian, and after that divide by the Vandermonde determinant, similarly to (2.14) above. The projection of the (random) matrix X(t) ∈ Herm(N) (evolving according to standard Brownian motions of its elements (4.1)) to the spectrum then has the distribution density const · ∏_{i<j}(x_i − x_j)² e^{−(x_1² + . . . + x_N²)/(2t)} (see, for example, [4]).

The dynamics with generator (4.2) (called the Dyson Brownian motion) can be easily mimicked for all the ensembles of the form (2.7): conjugating by the Vandermonde determinant leads to the generator (4.3), in which each one-dimensional piece is the generator of the standard Poisson process, and ∇_i acts as ∇ on the i-th coordinate. We denote the process with generator (4.3) by L^{(N)}_Poisson. One easily checks that the measures with w(x) = t^x/x! are generated by the above Markov process started from the densely packed initial condition (x_1, . . . , x_N) = (N − 1, N − 2, . . . , 1, 0). The process with generator (4.3) can be obtained by conditioning N independent Poisson processes not to intersect until time +∞, and also to grow at the same rate. (Different growth rates of different x_i's would result in conjugating Σ_{i=1}^{N} ∇_i by a different function, cf. [60], [27].) This is similar to the stationary version of the Dyson Brownian motion being obtained from independent one-dimensional standard Brownian motions by conditioning on the event that they never intersect and, moreover, stay within distance o(√time) from the origin as time goes to plus or minus infinity.
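A minimal simulation sketch of this conditioned process, assuming the standard Doob h-transform description: the rate of the jump x_i → x_i + 1 is V(x + e_i)/V(x), with V the Vandermonde determinant, which automatically vanishes on jumps that would create a collision (we follow only the embedded jump chain, suppressing the exponential holding times):

```python
import random

def vandermonde(x):
    v = 1
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            v *= x[i] - x[j]
    return v

def jump_rates(x):
    # Doob h-transform of independent rate-1 Poisson walks by the Vandermonde:
    # the rate of x_i -> x_i + 1 is V(x + e_i)/V(x); it is 0 on collision attempts
    v = vandermonde(x)
    rates = []
    for i in range(len(x)):
        y = list(x)
        y[i] += 1
        rates.append(vandermonde(y) / v)
    return rates

random.seed(7)
x = [2, 1, 0]                     # packed initial condition, strictly decreasing
for _ in range(300):
    rates = jump_rates(x)
    u = random.random() * sum(rates)
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u <= acc:
            x[i] += 1
            break
print(x, all(x[i] > x[i + 1] for i in range(len(x) - 1)))
```

Nonintersection is automatic: a jump that would make two coordinates equal has rate exactly 0, and (by harmonicity of the Vandermonde) the total jump rate stays equal to N.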

4.2. Gibbs property and stochastic links.
There is also another, "perpendicular" Markovian structure on the measures from §2.3. Observe that the uniform measure on lozenge tilings has the following property: if we pick a domain inside the hexagon, then fixing the lozenge configuration along its boundary induces the uniform measure on tilings of the interior. This seemingly trivial observation becomes useful when the hexagon becomes infinitely large in some way (as in §2.3). Then the global uniform measure makes no sense, but this property survives. We will refer to it as the Gibbs property.
In particular, fixing h vertical lozenges on the horizontal slice at height h (as on Fig. 4) induces the uniform measure on the set of all configurations of lozenges between this slice and the lower border (height zero). Thus, given the locations (x_1^{(h)}, . . . , x_h^{(h)}) of the vertical lozenges on the h-th slice, the distribution of the h − 1 vertical lozenges at height h − 1 is given by the ratio of the corresponding numbers of tilings with fixed top rows (4.4), assuming that the two rows interlace (we have used Proposition 2.4). We will denote the above probabilities by Λ^h_{h−1}(x^{(h)} → x^{(h−1)}). Note that the horizontal slices of the measures that we obtained in §2.3 by taking the limits ab/c → t and b/(b + c) → ξ of the hexagon are also related by these stochastic links Λ^h_{h−1}. In the GUE limit, the formula remains the same, except that the x's are now reals, not integers. In this case the formula (4.4) gives the density of a Markov kernel with respect to the Lebesgue measure.
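The stochastic links are easy to experiment with, assuming the natural reading of (4.4) as the ratio Dim_{h−1}(µ)/Dim_h(λ) over interlacing µ ≺ λ; the branching of dimensions then makes each row of the kernel sum to 1:

```python
from fractions import Fraction

def dim(lam):
    # Weyl's dimension formula for U(N)
    n = len(lam)
    d = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            d *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return d

def link(lam):
    # Lambda^h_{h-1}(lam -> mu) = dim(mu)/dim(lam), over all mu interlacing lam
    n = len(lam)
    out = {}
    def rec(j, cur):
        if j == n - 1:
            mu = tuple(cur)
            out[mu] = dim(mu) / dim(lam)
            return
        for v in range(lam[j + 1], lam[j] + 1):
            rec(j + 1, cur + [v])
    rec(0, [])
    return out

probs = link((3, 1, 0))
print(sum(probs.values()), min(probs.values()))
```

That the probabilities add up to 1 is exactly the identity Dim_h(λ) = Σ_{µ≺λ} Dim_{h−1}(µ), i.e., the dimension count underlying the branching rule (2.2).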

4.3.
Example of a two-dimensional dynamics. The two Markov processes discussed above (the Dyson Brownian motion and its discrete analogue L^{(N)}_Poisson) are quite canonical, but they have one deficiency: they are one-dimensional (in the sense that the state space consists of particle configurations in Z¹ or R¹). We would like to construct a two-dimensional process which has interlacing two-dimensional arrays (2.3) as its state space, and which "stitches together" the above one-dimensional processes in a natural way. We begin by considering one such process which is constructed as follows.
Consider random words built from the alphabet {1, 2, . . . , N} as follows: each letter j is appended at the end of the word according to the jumps of a standard (= rate 1) Poisson process, and different letters appear independently. We can encode this as on Fig. 12: we draw a star (*) in row j at the time moment when a new letter j is added. The stars in each row form a Poisson process, and different rows are independent. From these data we construct a Gelfand-Tsetlin scheme (2.3) of depth N, written as (λ^{(1)}, . . . , λ^{(N)}), via collections of nonintersecting paths through the stars, see (4.5) and Fig. 13. In particular, we see that the resulting distribution of the top row λ^{(N)} at time t is the same as the ab/c → t limit of the uniform measure on tilings of the hexagon.

Figure 13. Nonintersecting paths used to determine λ_j, see (4.5). On the picture, h = 5 and j = 2.

Proof. This is essentially Greene's theorem for the Robinson-Schensted correspondence coupled with explicit formulas for the numbers of standard and semistandard Young tableaux. See, e.g., [50], [74], [80].
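The first-row case of Greene's theorem can be tested directly: for Robinson-Schensted row insertion applied to a word, the length of the first row of the resulting tableau equals the length of the longest weakly increasing subsequence. A sketch (function names are ours):

```python
from bisect import bisect_right

def rsk_shape(word):
    # Robinson-Schensted row insertion for words (rows weakly increasing); returns the shape
    rows = []
    for x in word:
        for row in rows:
            pos = bisect_right(row, x)       # position of the first entry strictly greater than x
            if pos == len(row):
                row.append(x)
                x = None
                break
            row[pos], x = x, row[pos]        # bump the displaced entry to the next row
        if x is not None:
            rows.append([x])
    return [len(r) for r in rows]

def longest_weakly_increasing(word):
    best = []
    for i, x in enumerate(word):
        best.append(1 + max((best[j] for j in range(i) if word[j] <= x), default=0))
    return max(best, default=0)

w = [2, 1, 3, 1, 2, 3]
print(rsk_shape(w), longest_weakly_increasing(w))
```

For this word the shape is [4, 2] and the longest weakly increasing subsequence (1, 1, 2, 3) indeed has length 4.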
As we are interested in the time evolution, the following statement is relevant.

Proposition 4.2. The Markov process on random words (i.e., the process of adding new letters according to standard Poisson processes) projects to the Markov process on Gelfand-Tsetlin schemes defined above. It can be described by the following rules:
• Each "particle" λ^{(h)}_j has an independent Poissonian clock of rate 1. When the clock rings, the particle jumps to the right by 1, provided the interlacing with the level below is preserved; the jump may trigger further moves on the higher levels.

Proof. See [28] (in particular, §7) and references therein.

The dynamics enjoys the following properties: (I) the evolution of the lower levels (λ^{(1)}, . . . , λ^{(h)}) does not depend on the higher ones; (II) it preserves the class of Gibbs measures of §4.2 (that is, if one starts with an initial condition in which, given the top row, the lower rows are conditionally uniform, this property is preserved at all times); (III) for each h, the projection of the dynamics to level h is Markov and coincides with L^{(h)}_Poisson.

There is one more property which can be easily observed from the random words description of the dynamics. Namely, the projection of the process of Proposition 4.2 to the rightmost particles λ^{(1)}_1, λ^{(2)}_1, . . . , λ^{(N)}_1 is Markov. It is more convenient to describe it in the shifted strictly ordered coordinates y_j = λ^{(j)}_1 + j − 1, j = 1, . . . , N (cf. (2.4)). Then each y_j jumps to the right by 1 independently with rate 1, and pushes y_{j+1} over by 1 if y_{j+1} occupies the target location of y_j (i.e., if we had y_{j+1} = y_j + 1 before the jump). We call this process the PushTASEP, i.e., the Pushing Totally Asymmetric Simple Exclusion Process (it was introduced in [78] under the name long-range TASEP, see also [24]).
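A minimal simulation of the PushTASEP embedded jump chain (holding times suppressed), checking that the pushing mechanism preserves the strict ordering y_1 < y_2 < . . .:

```python
import random

def push_step(y, j):
    # y_j jumps right by one; if it lands on y_{j+1}, the latter is pushed, and so on
    y[j] += 1
    while j + 1 < len(y) and y[j + 1] == y[j]:
        y[j + 1] += 1
        j += 1

random.seed(1)
y = list(range(5))                              # strictly increasing: y_1 < ... < y_5
for _ in range(200):
    push_step(y, random.randrange(len(y)))      # embedded chain of the rate-1 clocks
print(y, all(y[i] < y[i + 1] for i in range(len(y) - 1)))
```

Note the asymmetry with the usual TASEP: here a jump is never blocked; instead the particle ahead is displaced, and the displacement can cascade through an arbitrarily long block of adjacent particles.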

4.4.
General construction of two-dimensional dynamics. The existence of the Markov dynamics (of Proposition 4.2) satisfying (I)-(III) is remarkable, yet its above construction is fairly complicated. We would like to access it in a different way.
Let us search for all continuous-time Markov jump processes on Gelfand-Tsetlin schemes which satisfy conditions (I)-(III) of §4.3. They must have the following structure: each particle λ^{(h)}_j jumps to the right by 1 with a certain rate (potentially dependent on λ^{(1)}, . . . , λ^{(h)}), and its jump triggers further moves on the higher levels λ^{(h+1)}, . . . , λ^{(N)}. Indeed, because of (III) and the fact that the Poissonian evolution of each level moves one particle at a time, no two particles on the same level can jump simultaneously. Moreover, because of (I), moves can propagate only upwards.
In order to reach a reasonable classification, we need to restrict the class further by requiring nearest neighbor interactions: a move of λ^{(h)}_j may only trigger moves of its nearest neighbors λ^{(h+1)}_j (top right) and λ^{(h+1)}_{j+1} (top left) on level h + 1 (see Fig. 14), which can trigger moves on level h + 2, and so on. Actually, it is better to extend the notion of the top right nearest neighbor from λ^{(h+1)}_j to the first particle in the sequence λ^{(h+1)}_j, λ^{(h+1)}_{j−1}, . . . , λ^{(h+1)}_1 whose jump does not violate the interlacing. We will additionally assume (extending the nearest neighbor hypothesis) that the individual jump rates of particles at level h may only depend on λ^{(h−1)} and λ^{(h)}, and that the same is true for the left and right probabilities of move propagation from level h − 1 to level h.
Let us now parametrize our possibilities. Fix h ≥ 2, and denote by w_j = w_j(λ^{(h−1)}, λ^{(h)}) the jump rate of λ^{(h)}_j, 1 ≤ j ≤ h. Also, denote by l_j the conditional probability that, given that the jth part of λ^{(h−1)} has just increased by 1, this move propagates to the top left neighbor λ^{(h)}_{j+1}, i.e., λ^{(h)} → λ^{(h)} + e_{j+1}, where e_j is the vector having zeros at each position except the jth, where it has 1. Similarly, let r_j be the probability that the move propagates to the top right neighbor, i.e., λ^{(h)} → λ^{(h)} + e_{ξ(j)}, where ξ(j) is the lower index of the nearest top right neighbor of λ^{(h−1)}_j that is free to jump (typically, ξ(j) = j). On the one hand, level h − 1 evolves in a Poissonian way, and its jumps may propagate to level h with probabilities (l_j, r_j); on the other hand, particles on level h can also jump independently according to the jump rates w_j. Comparing the two ways to describe the rate of λ^{(h)} → λ^{(h)} + e_m yields the desired relations.
The system of equations of Proposition 4.4 needs to be modified if the inequalities between parts of λ^{(h−1)} and λ^{(h)} are not strict. Indeed, if λ^{(h)}_j is "blocked" by λ^{(h−1)}_{j−1} (i.e., λ^{(h)}_j = λ^{(h−1)}_{j−1}), then w_j must be zero, and also l_{j−1} and r_{j−1} make no sense, as λ^{(h−1)} could not have just come from the jump λ^{(h−1)} − e_{j−1} → λ^{(h−1)}. The modification looks as follows.
It is easy to describe the linear spaces of solutions to the above linear systems. Any combination of such solutions, one for every pair λ^{(h−1)} ≺ λ^{(h)}, gives us a Markov process with the desired properties. One can choose such combinations to design different processes.

4.5.
Further examples of two-dimensional dynamics. We give three examples below; see [28] for more.

Example 1. All l_j ≡ 1, all r_j ≡ 0, and w_j = 1 for j = 1 and w_j = 0 otherwise. This is the dynamics described above in §4.3 via nonintersecting paths.
Example 2. All r_j ≡ 1, all l_j ≡ 0, and the w_j determined by the equations of Proposition 4.4. This dynamics can be viewed as coming from the column insertion algorithm (as opposed to the row insertion algorithm corresponding to the dynamics of §4.3). Observe that the restriction of this dynamics to the left-most particles {λ^{(h)}_h − h} is Markov; this restriction matches the well-known Totally Asymmetric Simple Exclusion Process (TASEP). This dynamics was first introduced in [65].
Example 3. All r_j ≡ 0, all l_j ≡ 0, and all w_j ≡ 1. This dynamics has minimal pushing and maximal "noise" (coming from individual jumps). It can be viewed as a two-dimensional growth model; in terms of the stepped surfaces interpretation, this dynamics independently, with rate one, adds all possible "sticks" (directed columns) in the array such that there are no outside arrows pointing to any of the upper particles.

These extreme solutions are related by local moves: the solutions in which two of the three quantities r_{j_m+1}, l_{j_m}, and w_{j_m+1} in each of the equations (4.6) are zero (and the remaining quantity is 1) can be transformed one into the other by a sequence of local "flips" as on Fig. 17. The "flip" operation allows one to replace one of the three local pictures by any of the two remaining ones; the type of the lower particle λ^{(h)}_j can be arbitrary (i.e., it can jump independently or be pushed/pulled).

4.6.
Conclusion. We have seen how to construct random growth models in (1 + 1) dimensions (TASEP, PushTASEP) and in (2 + 1) dimensions, and in §3 we have seen how these models can be analyzed at large times. We will now move on to a q-deformation of this picture, which will eventually lead us to directed polymers in random media.
We would like to add parameters to the theory.
It is easy to deform (= add parameters to) our model viewed as a probabilistic object. However, most such deformations would lack the solvability properties of the original model based on Schur polynomials. The reason is that the Schur polynomials are algebraic objects, and algebraic structures (in contrast with probabilistic ones) are usually very rigid. Thus, finding meaningful (solvable) deformations of the model requires nontrivial algebraic work.
Historically, two different one-parameter deformations of the Schur polynomials were suggested first: around 1960 by the algebraists Philip Hall and D. E. Littlewood, and around 1970 by the statistician Henry Jack.
The Hall-Littlewood polynomials naturally arose in finite group theory and were later shown to be indispensable in representation theory of GL(n) over finite and p-adic fields.
The Jack polynomials extrapolated the so-called zonal spherical functions arising in harmonic analysis on Riemannian symmetric spaces from three distinguished parameter values that correspond to spaces over R, C, and H. They are also known as eigenfunctions of the trigonometric Calogero-Sutherland integrable system.
In the mid-1980s, in a remarkable development, Ian Macdonald united the two deformations into a two-parameter deformation known as the Macdonald polynomials. The two parameters are traditionally denoted by q and t. We will soon set t to 0, so it will not interfere with the time variable in our Markov processes. The Hall-Littlewood polynomials arise when q = 0, and the Jack polynomials correspond to the limit regime t = q^θ → 1, where θ > 0. The Schur polynomials correspond to q = t. Other significant values are: Schur's Q-functions (q = 0, t = −1); monomial symmetric functions (q = 0, t = 1); and (the most important for us) the q-Whittaker functions arising for t = 0.

5.2.
Definition of Macdonald polynomials. The shortest way to define the Macdonald polynomials is to say that they are the elements of Q(q, t)[x_1, . . . , x_N]^{S(N)} (this is the algebra of symmetric polynomials in the variables x_1, . . . , x_N whose coefficients are rational functions in q and t) that diagonalize the following first order q-difference operator:

D^{(1)} = \sum_{i=1}^{N} \prod_{j \neq i} \frac{t x_i - x_j}{x_i - x_j}\, T_{q,x_i},   (5.1)

where, as before, (T_{q,x}f)(x) = f(qx). It is immediately recognized as a deformation of the q = t operator (2.16) from §2.4.4. The operator D^{(1)} from (5.1) is called the first Macdonald difference operator. There are also higher order ones,

D^{(k)} = t^{\frac{k(k-1)}{2}} \sum_{|I| = k} \prod_{i \in I,\, j \notin I} \frac{t x_i - x_j}{x_i - x_j} \prod_{i \in I} T_{q,x_i},   (5.2)

where the sum is over k-element subsets I ⊂ {1, . . . , N}. The operators D^{(k)} are diagonalized by the same polynomial basis [61, Ch. VI].
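The eigenrelation can be checked numerically in a small case. The sketch below (our own illustration, not from the text) takes N = 2 and q = t (the Schur case), where for λ = (2, 1) the Macdonald polynomial is the Schur polynomial s_{(2,1)}(x_1, x_2) = x_1 x_2 (x_1 + x_2), and verifies D^{(1)} P_λ = (q^{λ_1} t + q^{λ_2}) P_λ at an arbitrary point.

```python
# Numerical check of D^(1) P_lambda = e_1(q^{lambda_1} t^{N-1}, q^{lambda_2}) P_lambda
# in the Schur case q = t, for N = 2 and lambda = (2, 1), where
# P_lambda = s_{(2,1)}(x1, x2) = x1*x2*(x1 + x2).

def s21(x1, x2):
    return x1 * x2 * (x1 + x2)

def D1(f, x1, x2, q, t):
    """First Macdonald difference operator in two variables:
    D^(1) = A_1 T_{q,x1} + A_2 T_{q,x2}, A_i = prod_{j != i} (t x_i - x_j)/(x_i - x_j)."""
    a1 = (t * x1 - x2) / (x1 - x2)
    a2 = (t * x2 - x1) / (x2 - x1)
    return a1 * f(q * x1, x2) + a2 * f(x1, q * x2)

q = t = 0.3
x1, x2 = 1.7, 0.4
lhs = D1(s21, x1, x2, q, t)
rhs = (q ** 2 * t + q) * s21(x1, x2)   # eigenvalue q^{lambda_1} t + q^{lambda_2}
assert abs(lhs - rhs) < 1e-12
```

At q = t the eigenvalue q^2 t + q = q^3 + q, and the identity holds exactly, so the check passes up to floating-point error.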
Like the Schur polynomials, the Macdonald polynomials in N variables are parametrized by λ = (λ_1 ≥ . . . ≥ λ_N). We denote by P_λ the monic Macdonald polynomials (i.e., with coefficient 1 of the lexicographically largest monomial x_1^{λ_1} x_2^{λ_2} · · · x_N^{λ_N}). They satisfy

D^{(k)} P_λ = e_k(q^{λ_1} t^{N−1}, q^{λ_2} t^{N−2}, . . . , q^{λ_N}) P_λ,

where e_k is the kth elementary symmetric polynomial.

5.3.
q-Whittaker facts. Developing the (beautiful) theory of Macdonald polynomials requires significant effort, and we will not pursue this here. An excellent resource is Macdonald's book [61]. Instead, we will focus on the q-Whittaker (t = 0) case, where, for a story parallel to the Schur case (§§2-4), we need the following facts.
is the q-analogue of the factorial.
where for j = 1 the last factor is omitted.
A proof (in the more general (q, t)-setting) can be found in [13, §2.3]. In the Schur case, (5.5) reduces to a ratio of Vandermonde determinants. This immediately leads to the following q-analogue of Theorem 4.5 (with the convention that if a factor's indices make no sense, then the corresponding factor is omitted): let j_1 + 1 < j_2 + 1 < . . . < j_κ + 1 be all the indices such that the particle λ^{(h)}_{j_m+1} is free to move; the jump rates and propagation probabilities then satisfy the corresponding linear equations, with the agreement r_{j_κ+1} = l_{j_0} = 0, and also T_h = 0. Solving these equations for all pairs λ^{(h−1)} ≺ λ^{(h)} is equivalent to constructing nearest neighbor Markov dynamics satisfying the q-versions of conditions (I)-(III) of §4.3.

5.5.
Examples of q-deformed two-dimensional dynamics. Using Theorem 5.3, we can now explore the same three examples as in §4.5.

Example 1. We enforce the almost sure move propagation (i.e., r_j + l_j ≡ 1). This gives a unique solution for all j such that λ^{(h)}_{j+1} is free, and the resulting expression telescopes. We observe that for 0 ≤ q < 1, all the probabilities r_j and l_j are nonnegative, and the projection to the rightmost particles {λ^{(h)}_1 + h} is Markov. In the coordinates y_h = λ^{(h)}_1 + h, it can be described as follows: each particle jumps to the right by 1 independently with a Poisson clock of rate 1. If the jth particle moves, it triggers the move of the (j + 1)st one with probability q^{gap}, where gap is the number of empty spots in front of the jth particle before the move (which in its turn may trigger the move of the (j + 2)nd particle, etc.). Note that the probability q^{gap} is 1 if gap = 0. We call this particle system the q-PushTASEP; it was first introduced in [28]. Its generalization (called q-PushASEP) with particles moving in both directions can be found in [38].
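The q-PushTASEP on the coordinates y_h is as easy to simulate as the PushTASEP; the sketch below is our own illustration (names are ours, not from the text). Note that with the convention 0^0 = 1, setting q = 0 recovers the PushTASEP rule.

```python
import numpy as np

def simulate_q_pushtasep(N, q, T, rng):
    """q-PushTASEP on strictly ordered coordinates y_1 < ... < y_N.
    Each particle jumps right by 1 with rate 1; a jump of the j-th particle
    triggers a jump of the (j+1)-st with probability q**gap, where gap is
    the number of empty sites in front of the j-th particle before its move."""
    y = np.arange(1, N + 1)           # packed initial condition
    t = 0.0
    while True:
        t += rng.exponential(1.0 / N)  # next ring among N rate-1 clocks
        if t > T:
            break
        j = int(rng.integers(N))
        while True:
            gap = (y[j + 1] - y[j] - 1) if j + 1 < N else None
            y[j] += 1
            if j + 1 == N:
                break
            if rng.random() < q ** gap:  # q**0 == 1: a blocked push is sure
                j += 1                   # propagate the push upward
            else:
                break
    return y
```

Since a push happens with probability 1 whenever gap = 0, the strict ordering of the particles is preserved.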
Example 2. Now we again enforce l_j + r_j ≡ 1, mimicking Example 2 of §4.5. The resulting solution, for all j such that λ^{(h)}_{j+1} is free, involves negative probabilities, and we do not pursue this example further.
Example 3. Here we enforce l_j = r_j ≡ 0. This clearly gives w_j = S_j, and for 0 ≤ q < 1 this is a well-defined Markov process without long-range interactions. It was first constructed in [13], and it is closely related to the q-Boson stochastic particle system of [76], see also [22], [19]. While the projection of this process to the rightmost particles is no longer Markov, the projection to the leftmost particles {λ^{(h)}_h − h} is, and it can be described as follows: each particle jumps to the right by 1 independently of the others with rate 1 − q^{gap}, where gap is (as before) the number of empty spaces in front of y_h before the jump. Note that this rate vanishes when gap = 0, which corresponds to a TASEP-like blocking of the move. We call this interacting particle system the q-TASEP.
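A q-TASEP sample path can be generated with a standard Gillespie-type scheme; the following sketch is our own illustration (names are ours, not from the text), started from the step initial condition y_h = −h.

```python
import numpy as np

def simulate_q_tasep(N, q, T, rng):
    """q-TASEP: particles y_1 > y_2 > ... > y_N jump right by 1 with rate
    1 - q**gap, where gap is the number of empty sites ahead of the particle.
    Started from the step initial condition y_h = -h."""
    y = -np.arange(1, N + 1)           # y_1 = -1 > y_2 = -2 > ...
    t = 0.0
    while True:
        gaps = np.empty(N)
        gaps[0] = np.inf                # leading particle: gap = infinity, rate 1
        gaps[1:] = y[:-1] - y[1:] - 1
        rates = 1.0 - q ** gaps         # rate vanishes when gap == 0 (blocking)
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > T:
            break
        j = rng.choice(N, p=rates / total)  # Gillespie step: pick the jumper
        y[j] += 1
    return y
```

Blocked particles have rate exactly 0, so the strict ordering is preserved, and as q → 0 the rates become 0/1, recovering the usual TASEP.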
Obviously, as q → 0, the q-PushTASEP turns into the usual PushTASEP, and the q-TASEP becomes the usual TASEP.

5.6.
Conclusion. We have thus obtained q-deformations of the random growth models from the Schur case. Our next task will be to investigate their asymptotic behavior at large times.

Asymptotics of q-deformed growth models
Our main tool in studying asymptotics will be the Macdonald difference operators (§5.2).

6.1.
A contour integral formula for expectations of observables. By Prob_{t,h} we mean the measure defined in Proposition 5.2.
where all the integrals are taken over small positively oriented closed contours around 1.
Proof. In the proof we need to use a nonzero second Macdonald parameter t; to avoid confusion, let us in this proof denote the time variable by τ. We apply the kth order Macdonald operator D^{(k)} (5.2) to the series expansion (5.4) defining our measures. We then replace the sum in the left-hand side by the residue expansion of the integral (see [13, §2.2.3] for more detail), where the contours encircle {x_1, . . . , x_N} and no other poles (i.e., the residues are taken at z_j = x_{m_j}, j = 1, . . . , k, 1 ≤ m_j ≤ N). The constant prefactor can be evaluated via the Cauchy determinant formula.
In the right-hand side of (6.2) we use the eigenrelation of §5.2. We then divide both sides of (6.2) by t^{k(k−1)/2}, take the limit as t → 0, and also set x_j = 1.
There are two simple limit transitions that one can observe in the right-hand side of the contour integral formula (6.1). We consider them in §6.2 and §6.3 below.

6.2.
Gaussian limit. The first limit regime is q = e^{−ε} → 1, t = τε^{−1} → ∞, with the z_j's unchanged. Looking at the left-hand side of (6.1), which is E q^{λ_N + . . . + λ_{N−k+1}}, it is natural to expect that each λ_j grows as ε^{−1}, so that the quantities q^{λ_j} have finite limits (which may still be random variables). Looking at higher powers of Macdonald operators indeed reveals that this is a Gaussian limit: ελ_j has a law of large numbers with Gaussian fluctuations of size ε^{−1/2}, and the Markov dynamics we constructed converge to Gaussian processes. We do not pursue this limit regime here; its detailed exposition will appear in [17]. Another, structurally similar appearance of Gaussian processes can be found in [26].

6.3.
Polymer limit. The second limit is a bit more complicated. We again take q = e^{−ε} → 1, but now t = τε^{−2} → ∞ (i.e., we wait for a longer time than in the Gaussian limit of §6.2). Then, to see a nontrivial limit, we have to take the z_j's at distance O(ε) from 1: z_j = 1 + εw_j. We then see that the right-hand side of (6.1) becomes e^{−τkε^{−1}} ε^{k(k−1)−kN+k} times an asymptotically finite expression. To figure out the limiting behavior of λ_N + . . . + λ_{N−k+1}, we now have to take log_q of this expression, or take the natural logarithm and multiply by −ε^{−1}. This gives an expansion in which the remainder R_{N,k} is supposed to be a finite random variable; equivalently, the parts λ_j admit an expansion with some limiting random variables T_{N,j}. (Note that at this moment this is simply a guess!) We can now test what is happening with our dynamics under this conjectural scaling. For example, consider the q-PushTASEP.
The asymptotics of the pushing probability is ε e^{T_{h−1,1}−T_{h,1}} to the leading order. The increment of λ^{(h)}_1 over time dt = ε^{−2} dτ must then consist of (1) the increment coming from its own jumps, which is ε^{−2} dτ + ε^{−1} dB_h, where B_h is a Brownian motion, and (2) the increment coming from pushing, which is ε e^{T_{h−1,1}−T_{h,1}} times the increment of λ^{(h−1)}_1. Collecting the terms of order ε^{−1}, we conclude that

dT_{h,1} = dB_h + e^{T_{h−1,1}−T_{h,1}}\, dτ,   (6.5)

where B_1, B_2, . . . , B_N are independent standard Brownian motions (for h = 1 the last term is omitted). This system of stochastic differential equations (SDEs, for short) is solved by

T_{h,1}(τ) = \log \int_{0 < s_1 < \dots < s_{h−1} < τ} e^{E(φ_{(0,1)→(τ,h)})}\, ds_1 \cdots ds_{h−1}.   (6.6)

The right-hand side can be viewed as the logarithm of the partition function (i.e., the free energy) of a semi-discrete Brownian polymer, see Fig. 18, and also [13, Chapter 5] for a general discussion of directed polymers in random media. More precisely, to any Poisson-type up-right path φ_{(0,1)→(τ,h)} that travels from 1 to h during time τ, with jumps at moments 0 < s_1 < . . . < s_{h−1} < τ, assign the energy

E(φ_{(0,1)→(τ,h)}) = B_1(s_1) + (B_2(s_2) − B_2(s_1)) + \dots + (B_h(τ) − B_h(s_{h−1})).

Then T_{h,1} is the logarithm of the integral of the Boltzmann factor exp E(φ_{(0,1)→(τ,h)}) over the Lebesgue measure on all such paths (the inverse temperature can be absorbed into the rescaling of τ with the help of the Brownian scaling). Similar empirical scaling arguments show that the Markov process on Gelfand-Tsetlin schemes of depth N that leads to the q-PushTASEP (Example 1 in §5.5) converges to a solution of an analogous system of SDEs for all the T_{h,j}.

Proof. The Markov dynamics related to the semi-discrete directed Brownian polymer was introduced in [67]. The weak convergence was proven in [13, §4].

Figure 19. Nonintersecting Poisson paths φ_1, . . . , φ_k.
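Numerically, the partition function can be evaluated through the recursion Z_1(t) = e^{B_1(t)}, Z_h(t) = ∫_0^t Z_{h−1}(s) e^{B_h(t)−B_h(s)} ds. The sketch below is our own discretization (names are ours, not from the text); switching the noise off reduces the partition function to the volume τ^{N−1}/(N−1)! of the simplex of jump times, which serves as a deterministic correctness check.

```python
import numpy as np

def oy_partition_function(tau, N, n_steps, rng=None):
    """Discretization of the semi-discrete (O'Connell-Yor type) polymer
    partition function via Z_h(t) = int_0^t Z_{h-1}(s) e^{B_h(t)-B_h(s)} ds.
    With rng=None the Brownian paths are replaced by zero (noiseless check)."""
    dt = tau / n_steps
    if rng is None:
        B = np.zeros((N, n_steps + 1))
    else:
        dB = rng.normal(0.0, np.sqrt(dt), size=(N, n_steps))
        B = np.concatenate([np.zeros((N, 1)), np.cumsum(dB, axis=1)], axis=1)
    Z = np.exp(B[0])                     # Z_1(t) = e^{B_1(t)} on the grid
    for h in range(1, N):
        # left-endpoint rule for int_0^{t_i} Z_{h-1}(s) e^{-B_h(s)} ds
        cum = np.cumsum(Z * np.exp(-B[h])) * dt
        Z = np.exp(B[h]) * np.concatenate([[0.0], cum[:-1]])
    return Z[-1]
```

In the noiseless check, oy_partition_function(2.0, 4, 2000) approximates 2^3/3! = 4/3 up to O(dt) discretization error; the free energy T_{N,1}(τ) is then the logarithm of a noisy run.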
It is also known that the right-hand side of (6.6) satisfies the system of SDEs (6.5). Thus, the convergence in the above theorem should extend to a trajectory-wise statement but, to the best of our knowledge, this has not been worked out in full detail yet.
It is worth noting that other (Brownian-type) scaling limits of the growth models discussed here are considered in [48], [49].

The polymer free energy can be reached both through λ^{(N)}_1 (corresponding to the q-PushTASEP) and through λ^{(N)}_N (corresponding to the q-TASEP), and either one can be used for the analysis. (Note that to obtain the polymer's partition function, N must remain fixed.) We will employ the q-TASEP, as this is a bit more straightforward, and there are more details on the q-TASEP in the literature.

7.1.
Moments of the q-TASEP. We start by employing products of first order Macdonald operators, rather than a single one, to obtain moments (of all orders) of the q-TASEP particle locations.
Proof. We consider k = 2; for larger k the argument is similar. We start with the defining identity (5.4), where P̂_λ means the normalization of P_λ by its evaluation at all x_j ≡ 1. Apply the first Macdonald operator in N_1 variables, D^{(1)}_{N_1}, to (7.2) (note that the second Macdonald parameter t is zero, and it has no relation to the time t in (7.2)). Because of the eigenrelation D^{(1)}_{N_1} P_{λ^{(N_1)}} = q^{λ^{(N_1)}_{N_1}} P_{λ^{(N_1)}}, in the right-hand side of (7.2) we observe the desired observable. On the other hand, the left-hand side of (7.2) is, by residue expansion, equal to an integral over a positively oriented contour around the simple poles x_1, . . . , x_{N_1}. The above argument works for applying D^{(1)}_{N_1}. In the next step, we apply D^{(1)}_{N_2}. For the right-hand side of the resulting expression, we use Proposition 5.1; this implies, together with the eigenrelation for D^{(1)}_{N_2}, that the right-hand side (after setting x_{N_2+1} = . . . = x_{N_1} = 1) has the required form. On the other hand, in the left-hand side, the x-dependence in (7.3), after setting x_{N_2+1} = . . . = x_{N_1} = 1, is of the form \frac{x}{x−z}\, e^{t(x−1)}. Hence, we can apply the same residue expansion (for computing the application of D^{(1)}_{N_2} to (7.3)), using the behavior of the ratio G(qw)/G(w). Here w is the new integration variable, whose contour has to encircle x_1, . . . , x_{N_2} but not any other poles (in particular, not q^{−1}z). Renaming (z, w) → (z_1, z_2), we obtain the desired formula for k = 2. For larger k the proof is similar.
The above iterative argument is due to V. Gorin; see [18] for a more general version. The original proof of (7.1) in [22] involved a discretization of the quantum delta Bose gas, see also [19] and §7.2 below.

7.2.
Moments of the semi-discrete Brownian polymer. Since we already know the scaling which takes us from λ^{(N)}_N to the polymer partition function (§6.3), we can immediately take the limit in the integral of Proposition 7.1. This is very similar to the limit that we took in §6.3: we substitute q = e^{−ε}, t = τε^{−2}, z_j = 1 + εw_j. This leads to the formula (7.4) for the moments of the polymer partition function, where the integrals are now over nested contours around 0: the w_k contour contains only 0, the w_{k−1} contour contains {w_k + 1} and 0, and so on; the w_1 contour contains {0, w_k + 1, w_{k−1} + 1, . . . , w_2 + 1}. This limit transition from (7.1), however, is not a proof of the formula (7.4). Indeed, Theorem 6.2 only claims weak convergence, and we have exponential moments under the expectation (that is, expectations of unbounded functions).
Sketch of the proof of (7.4). Observe that if we define F(τ; N_1, . . . , N_k) by the same contour integral for arbitrary, not necessarily ordered, N_1, . . . , N_k (with contours as in (7.4)), then F satisfies simple evolution equations in τ together with boundary conditions at N_i = N_{i+1}: when we write out the integral for the relevant linear combination, the integrand is skew-symmetric in w_i and w_{i+1}, and the two corresponding contours can be taken to be the same (the obstacle to both these properties in (7.4) is the factor (w_i − w_{i+1} − 1), and the linear combination exactly cancels this factor out). These properties, together with the initial condition F(0; N_1, . . . , N_k) = 1_{N_1 = . . . = N_k = 1}, uniquely determine F(τ; N_1, N_2, . . . , N_k) for N_1 ≥ . . . ≥ N_k ≥ 0. Thus, it suffices to check that the moments of the polymer partition function (see (6.6)) satisfy the same properties. This check is fairly straightforward.
A discussion of how the evolution equation and the boundary conditions at N_i = N_{i+1} used above relate to a discrete quantum Bose gas can be found in [13], [22], [19].

7.3.
Continuous Brownian polymer. There is a further limit that takes the semi-discrete polymer to a fully continuous one. In that case, one defines the partition function Z as a chaos series with respect to the space-time white noise ξ, written via : exp :, the normally ordered exponential; e.g., see [1] for an explanation. Equivalently (via the Feynman-Kac formula), Z solves the stochastic heat equation with multiplicative noise

\partial_τ Z = \tfrac{1}{2}\, \partial_{xx} Z + ξ Z.

The moments of Z then solve the evolution equations of a quantum delta Bose gas away from the diagonal subset, with appropriate boundary conditions on the diagonals. The observations (7.5)-(7.6) are not hard and were recorded at least as far back as the end of the 1980s by Kardar [57] and Molchanov [63]. We believe that a rigorous proof can be extracted from the results of [8]. In particular, the case N = 2 was treated in [2, I.3.2]. To the best of our knowledge, the general case has not been worked out in full detail yet.
7.4.
Intermittency. By setting N_1 = . . . = N_k = N in (7.4), we see that the nested contour integrals provide us with all moments of the polymer partition function e^{−T_{N,N}}. One might expect that this is sufficient to find its distribution or, equivalently, the distribution of the free energy T_{N,N}. It turns out that in this particular situation this is not true. The distribution of the polymer partition function displays intermittency, which we now discuss. The term appeared in studies of the velocity and temperature fields in a turbulent medium [7], and describes structures in random media that have the form of peaks arising at random places and at random time moments. The phenomenon is widely discussed in the physics literature, with magnetic hydrodynamics (as on the surface of the Sun) and cosmology (the theory of creation of galaxies) being two well-known examples, e.g., see [84], [63].
The main property that allows one to detect an intermittent distribution is anomalous behavior (as compared to the Gaussian case, for example) of ratios of successive moments.
In general, imagine that one has a time-dependent nonnegative random variable Z(t) which grows in t roughly exponentially (or log Z grows roughly linearly). There are many ways to measure such growth; we mostly follow [32] in the exposition below. Define:
• the almost sure Lyapunov exponent γ̃ = lim_{t→∞} t^{−1} log Z(t);
• the moment Lyapunov exponents γ_p = lim_{t→∞} t^{−1} log E Z(t)^p.

Proof. Hölder's inequality with exponents ½ + ½ gives E Z^k ≤ (E Z^{k+h})^{1/2} (E Z^{k−h})^{1/2}, which implies that γ_k ≤ (γ_{k+h} + γ_{k−h})/2. Replacing k by k + 1 and taking h = 1, we have γ_{k+1} ≤ (γ_{k+2} + γ_k)/2, while γ_k/k < γ_{k+1}/(k + 1) (we used the hypothesis of the lemma). Rearranging terms gives the needed inequality for p = k + 1. Repeating inductively, we obtain the desired claim.
Let us now show how the definition of intermittency relates to peaks. If we pick α such that γ_p/p < α < γ_{p+1}/(p + 1), then for large enough t (below we omit t in the notation for Z(t)):
• Prob{Z > e^{αt}} > 0, because otherwise we would have γ_{p+1}/(p + 1) ≤ α.
• An overwhelming contribution to E Z^{p+1} comes from the region where Z > e^{αt}. Indeed, splitting E Z^{p+1} over the events {Z ≤ e^{αt}} and {Z > e^{αt}}, the first term is ≤ e^{α(p+1)t} ≪ e^{γ_{p+1}t}, and we know that the left-hand side behaves exactly as e^{γ_{p+1}t}.
• Prob{Z > e^{αt}} ≤ e^{−(α−γ_p/p)pt}, because E Z^p ≥ e^{αpt} Prob{Z > e^{αt}}.
Hence, we observe a hierarchy of higher and higher peaks concentrated on smaller and smaller sets (that are actually exponentially small in probability), and higher peaks contribute overwhelmingly to high enough moments. In the situation of random fields, when ergodicity allows one to replace computing expectations by space averaging, at each fixed large time one can then observe a hierarchy of islands with exponentially (in time) high values that dominate moment computations.
Intermittency is a characteristic feature of products of a large number of independent random variables (cf. the toy example above). Indeed, by the central limit theorem, random variables of the form e^{ξ_1 + . . . + ξ_t} with independent identically distributed ξ_j's behave as e^{N(µt, σ²t)}; let us check that the latter are intermittent. We have E (e^{N(µt, σ²t)})^p = e^{t(pµ + p²σ²/2)}, which implies that γ_p/p = µ + pσ²/2 is strictly increasing in p.
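This lognormal computation is easy to confirm numerically; the following sketch (our own check, not from the text; names are ours) compares Monte Carlo moments with the formula e^{t(pµ + p²σ²/2)} and verifies that γ_p/p = µ + pσ²/2 increases in p.

```python
import numpy as np

# Moments of Z = exp(N(mu*t, sigma^2*t)): E Z^p = exp(t*(p*mu + p^2*sigma^2/2)),
# so gamma_p/p = mu + p*sigma^2/2 is strictly increasing in p (intermittency).
rng = np.random.default_rng(0)
mu, sigma, t = 0.1, 0.5, 1.0
x = rng.normal(mu * t, sigma * np.sqrt(t), size=2_000_000)
for p in (1, 2, 3):
    mc = np.exp(p * x).mean()                        # Monte Carlo E Z^p
    exact = np.exp(t * (p * mu + p ** 2 * sigma ** 2 / 2))
    assert abs(mc / exact - 1.0) < 0.05
gamma_over_p = [mu + p * sigma ** 2 / 2 for p in (1, 2, 3)]
assert gamma_over_p[0] < gamma_over_p[1] < gamma_over_p[2]
```

The higher the moment, the heavier the reliance on rare large samples, which is exactly the peak-dominated behavior described above.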

Moment problem and intermittency.
Since under intermittency the moments are dominated by increasingly atypical behavior (i.e., behavior observed with small probability), it is hard to expect that the moments would determine the distribution. For example, for the exponential of the standard Gaussian N(0, 1) they do not: any distribution whose density is the lognormal density multiplied by 1 + a sin(2π ln x), |a| ≤ 1, has exactly the same moments.

We thus see that this very nonrigorous procedure, quite remarkably, led us to the correct almost sure behavior! In the next section we show how to access these results rigorously, and our approach will also explain in a way why the replica trick worked in this particular situation.

8. Laplace transforms

8.1.
Setup. As we have seen in §7, the intermittency phenomenon prevents us from recovering the distribution of the polymer partition function from its moments. However, this is not so in the q-setting. Namely, the q-moments E q^{kλ^{(N)}_N}, k = 1, 2, . . ., uniquely determine the distribution of λ^{(N)}_N (as λ^{(N)}_N ≥ 0 and q ∈ (0, 1), these are moments of a bounded random variable). Our plan is thus to convert the q-moment formulas that we have (Proposition 7.1) into a formula for the expectation of a one-parameter family of observables that remain bounded (unlike the moments E q^{kλ^{(N)}_N}) in the q → 1 limit which leads to polymers. Since this will involve q-moments with k → ∞, it is inconvenient to use nested contours in integral representations, as their positions depend on k. There are two ways to "un-nest" the contours: (1) to deform all of them to identical large concentric circles |z| = R > 1; or (2) to deform all of them to identical small concentric circles |z − 1| = r around 1. The first way is easier to realize, but it is harder to turn the result into meaningful asymptotic information. Thus, we proceed with the second one. The following lemma is nontrivial and very useful. A limiting case of this lemma, as q = e^{−ε} → 1, z_j = 1 + εz̃_j, w_j = 1 + εw̃_j, is at the heart of the moments asymptotics which were stated in §7.5.

8.2.
Generating functions.
The form of the right-hand side of (8.1) suggests that one could take a generating function of such expressions over different k. More exactly, (8.1) easily implies a generating function formula in which we take f(z) = e^{(q−1)tz}/(1 − z)^N, with all the integration contours being small enough positively oriented contours around 1. Here (n_1, . . . , n_ℓ) are simply permuted values of (λ_1 ≥ . . . ≥ λ_ℓ) in (8.1), and the change of the combinatorial factor from 1/(m_1! m_2! · · ·) to 1/ℓ! is due to that un-ordering.^{10} Now, using the q-exponential identity (e.g., see [5], [45]), we can rewrite the left-hand side of (8.2) in a closed form involving the q-exponential function. One should expect that in a suitable scaling limit as q → 1 (which we can predict by looking at the moment asymptotics), the q-moment generating function would converge to the Laplace transform of the polymer partition function. The latter does define the distribution uniquely, with or without intermittency. The real question now is how to take a similar limit in the right-hand side of (8.2). Observe that a termwise limit would produce a moment generating series, and we already know that it diverges!

8.3.
Case N = 1 and the Mellin-Barnes integral representation. Let us consider the case N = 1, in which the problem of convergence is already present. Then the µ_k's are the q-moments of the simple continuous-time one-sided random walk started from 0 at t = 0. We expect their q-generating function to converge (as q → 1) to the Laplace transform of the lognormal distribution (i.e., of e^{N(0,τ)}), because the rescaled walk converges to T_{1,1}(τ) ∼ N(0, τ).

^{10} Indeed, ℓ!/(m_1! m_2! · · ·) is the number of different ways to obtain a given λ = 1^{m_1} 2^{m_2} · · · with |λ| = ℓ from (n_1, . . . , n_ℓ) ∈ Z_{≥1}^ℓ.
Observe that for N = 1 we only have first order poles at w_j = 1 in the right-hand side of (8.2). Because the determinant det[1/(q^{n_i} w_i − w_j)]_{i,j=1}^{ℓ} vanishes for equal values of the w_j's, we conclude that only ℓ ≤ 1 gives a nontrivial contribution. This contribution is a series in n ≥ 0 with terms of the form

(1 − q)^n ζ^n \frac{e^{(q^n −1)tw}}{(1 − w)(1 − qw) \cdots}.

We now need to take the q → 1 limit in the above sum, and we cannot do that termwise, as this would result in a divergent series. A standard tool of the theory of special functions used for dealing with such a limit is the Mellin-Barnes integral representation, which dates back to the end of the 19th century. In its simplest incarnation, it replaces a series in n by a contour integral in a complex variable s with the kernel Γ(−s)Γ(1 + s)(−ζ)^s, where 0 < δ < 1 and the integration is taken over a contour as on Fig. 21.^{11} We can now take the needed limit. We note that

Γ_q(x) = \prod_{m \ge 1} \frac{1 − q^m}{1 − q^{x+m−1}}\, (1 − q)^{1−x}

is the q-analogue of the Euler Γ-function, and that (e.g., see [5]) lim_{q→1} Γ_q(x) = Γ(x) for x ∉ {0, −1, −2, . . .}.

Figure 21. Integration contour in (8.4).

^{11} Note that Γ(−s)Γ(1 + s) = −π/sin(πs).
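While the exact display (8.4) is not reproduced here, the mechanism is visible in a classical special case, the Cahen-Mellin integral e^{−ζ} = (1/2πi) ∫_{c−i∞}^{c+i∞} Γ(s) ζ^{−s} ds (c > 0): closing the contour to the left and collecting the residues of Γ(s) at s = 0, −1, −2, . . . recovers the series Σ (−ζ)^n/n!. The sketch below (our own numerical check, not from the text; names are ours) evaluates the vertical-line integral directly.

```python
import numpy as np
from scipy.special import gamma

def cahen_mellin(zeta, c=1.0, tmax=40.0, n=200001):
    """Evaluate (1/(2*pi*i)) * int_{c-i*inf}^{c+i*inf} Gamma(s) zeta^{-s} ds
    by a Riemann sum along the vertical line Re s = c; the exact value of
    this Cahen-Mellin integral is exp(-zeta) for Re zeta > 0."""
    t = np.linspace(-tmax, tmax, n)
    s = c + 1j * t
    integrand = gamma(s) * zeta ** (-s)  # Gamma decays like e^{-pi*|t|/2}
    dt = t[1] - t[0]
    # ds = i dt cancels the i in the prefactor 1/(2*pi*i)
    return (integrand.sum() * dt).real / (2.0 * np.pi)
```

The rapid decay of Γ along vertical lines is exactly what makes the integral representation convergent where the termwise series would diverge.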
We see that the limit of the integral in (8.4) is 1/(2πi) times the analogous contour integral with the q-Γ-functions replaced by ordinary Γ-functions, which is a correct expression for the Laplace transform of the lognormal random variable e^{N(0,τ)}, as we expected.
Recall that −T_{N,N}(t) can be identified with the logarithm of the polymer partition function as in (6.4) (and that −T_{N,N}(t) has the same distribution as T_{N,1}(t)). Note that Theorem 8.3 proves the value of the almost sure Lyapunov exponent γ̃_1 that we guessed using the replica trick in §7.6 (for κ = 1, but this could have been done for any κ).
The Tracy-Widom distribution in the right-hand side of (8.5) arises as a Fredholm determinant series in which the a_i and the b_j contours are as on Fig. 22. The identification of (8.6) with a traditional formula for F_{GUE} is explained in [13] (after formula (4.51)). The way one reaches (8.6) from the right-hand side of (8.5) is fairly straightforward. By changing the variables s_j → y_j = s_j + v_j, one rewrites the part of the integrand that depends on the large parameter N in exponential form. Since e^{−u e^{−T_{N,N}(t)}} = e^{−e^{−T_{N,N}(t)+\log u}}, we take log u ∼ −N f_κ − rN^{1/3}, and then the Laplace-transformed observable asymptotically becomes the indicator of the limiting event. The analysis then follows the scheme explained in §3, with the v contours being deformed to the domain where ℜG(v) < 0, and the y contours to the domain where ℜG(y) > 0. The limiting expression arises in the situation when G(z) has a double critical point, G′(z_c) = G″(z_c) = 0, and through a local change of integration variables near the critical point; the constant −g_κ is actually G‴(z_c). Details can be found in [13] and [16].
Let us conclude by observing that if we expand the right-hand side of (8.5) into residues at s_j = 1, 2, . . ., we get back the divergent generating series for the moments of the polymer partition function that we found before. This shows that a more sophisticated replica trick than the one from §7.6 can actually be used to obtain the limiting distribution, and not only the law of large numbers (i.e., γ̃_1). Namely, one can obtain the moments by solving the equations (the delta Bose gas of §7.3) that they satisfy, write down the series for the Laplace transform through moments (despite the fact that this series diverges), make sense of this series via the Mellin-Barnes integral representation, and then proceed with the asymptotic analysis. This approach was successfully carried out in the physics papers [39], [31]. However, the only plausible explanation we have at the moment as to why such an approach leads to the correct answer is that it is a limiting case of the q-deformed situation, where all the steps are legal and indeed lead to a proof of the GUE edge fluctuations.