Brownian Motions with One-Sided Collisions: The Stationary Case

We consider an infinite system of Brownian motions which interact through a given Brownian motion being reflected off its left neighbor. Earlier we studied this system for deterministic periodic initial configurations. In this contribution we consider initial configurations distributed according to a Poisson point process with constant intensity, which makes the process space-time stationary. We prove convergence to the Airy process for the stationary case. As a byproduct we obtain a novel representation of the finite-dimensional distributions of this process. Our method differs from the one used for the TASEP and the KPZ equation by removing the initial step only after the limit $t\to\infty$. This leads to a new universal cross-over process.


Introduction
We will study an infinite system of interacting Brownian motions, with x_n(t) ∈ R, n ∈ Z, denoting the position of the n-th Brownian particle on the real line at time t. Initially the positions are ordered as x_n(0) ≤ x_{n+1}(0), with the convention that x_0(0) ≤ 0 < x_1(0). As indicated in the title, particle n + 1 interacts only with its left neighbor, n, through a steep, narrowly supported potential. In the limit of zero support, the singular limit studied in our contribution, this interaction amounts to Brownian motion n + 1 being reflected off Brownian motion n. A mathematical definition will be given below. Under this dynamics the order is preserved for all times t ≥ 0. In previous work we investigated the case of initial conditions with equal spacing, x_n(0) = n [15]. Another natural choice of initial conditions makes the process space-time stationary, which is accomplished by taking {x_n(0), n ∈ Z} to be a Poisson point process with uniform intensity, which, without loss of generality, can be set to 1. Then {x_n(t), n ∈ Z} is again an intensity 1 Poisson point process.
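Although no substitute for the analysis below, the reflected dynamics can be illustrated numerically. The following sketch (our own discretization; the particle number, step size, and Euler scheme are illustrative choices, not from the paper) evolves finitely many particles with Poisson(1) initial data and enforces the one-sided ordering after each Euler step:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_particles=50, t_max=1.0, n_steps=2000):
    """Euler sketch of Brownian particles with one-sided reflection:
    particle n+1 is pushed up to particle n whenever the order breaks,
    while the leftmost particle evolves freely."""
    dt = t_max / n_steps
    # Poisson(1) initial configuration on R_+: i.i.d. exp(1) gaps.
    x = np.cumsum(rng.exponential(1.0, size=n_particles))
    for _ in range(n_steps):
        x += np.sqrt(dt) * rng.standard_normal(n_particles)
        # One-sided collision rule: a left-to-right running maximum.
        np.maximum.accumulate(x, out=x)
    return x

x = simulate()
assert np.all(np.diff(x) >= 0)   # ordering is preserved
```

The running maximum implements exactly the one-sided rule: each particle is pushed up by its left neighbor but never pushes back.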
Our interest is in the fluctuations of x_n(t) for large t and n. To understand their properties one first has to find out how an initially small perturbation close to the origin propagates in time. This path is known as the characteristic. In our model, because of the one-sided collisions, the characteristic turns out to be a straight line with velocity 1. If n = ⌊ϑt⌋, ⌊·⌋ denoting the integer part, then for ϑ ≠ 1 only the randomness of the initial conditions plays a role and the fluctuations of x_n(t) will asymptotically be Gaussian on the t^{1/2} scale. However, close to the characteristic, i.e., n = ⌊t + rt^{2/3}⌋ with r = O(1), one observes non-Gaussian fluctuations on the t^{1/3} scale, the properties of which will be analysed in great detail in this contribution. With our methods we can handle the stochastic process in r at fixed t. Two-time properties along the characteristic are known to be difficult. For example, a long-standing problem is to obtain the joint distribution of x_{⌊t⌋}(t), x_{⌊2t⌋}(2t) for large t. At the time of writing, Johansson reports on asymptotic results for the model studied in this paper [23].
Our results are closely linked to the one-dimensional Kardar-Parisi-Zhang (KPZ) equation [24] with stationary initial data. KPZ is a stochastic PDE for a height function h(x, t) ∈ R and reads ∂_t h = ½(∂_x h)² + ½∂_x²h + W, with W space-time white noise. As written the equation is only formal, but a precise mathematical meaning has been given [1,18]. As random initial data h(x, 0) we choose the statistics of two-sided Brownian motion with constant drift b. The dynamics is stationary in the sense that x ↦ h(x, t) − h(0, t) is again two-sided Brownian motion with drift b [16]. Very recently, Borodin et al. [8] succeeded in writing down reasonably concise formulas for the distribution of h(x, t), confirming the prior replica computation [20]. Through an intricate asymptotic analysis they establish (Theorem 2.17 of [8]) the limit (1.3) in distribution, for fixed r, where A_stat denotes the Airy process corresponding to stationary initial data. In spirit one should think of x_n(t) as h(x, t), with x being a continuum version of the discrete particle label n. More precisely, as one of our results we will establish in Theorem 2.2 the limit (1.4), which is the immediate analogue of (1.3). In fact, convergence is proved in the sense of finite-dimensional distributions, not only for the one-point distribution.
Similar results have been obtained earlier for the stationary PNG model [28] and for the stationary TASEP [14]. For the latter, the full stochastic process in r has been worked out [4]. The expression we obtain for the joint distributions of A_stat, see Definition 2.1, is new and differs from the one in [4].
For several reasons we believe that it is of interest to add a third model to the list of KPZ type models with stationary initial conditions. Obviously, the universality hypothesis is further strengthened. More importantly, our model provides a bridge to diffusion processes with one-sided interaction as discussed in [31]. Besides, we also have to develop a method different from the previous ones. As in the case of PNG and TASEP, one cannot study the stationary initial conditions directly. One has to start from a step, in our case meaning that to the right of 0 the Poisson point process has density 1, while to the left it has density ρ, ρ < 1. Surprisingly, as for PNG and TASEP, by a Burke type theorem the left-half system can be replaced by a boundary condition for x_0(t) and, in fact, only the right-half system with labels {n ≥ 0} has to be considered. For PNG and TASEP the limit ρ → 1 has been accomplished at fixed t, while here we first take the limit t → ∞ at step size 1 − ρ = t^{−1/3}δ. This leads us to a novel transition process, see Theorem 2.6. The stationary case, δ = 0, is then reached through a careful analytic continuation.

Acknowledgments
The work of the first author is supported by the German Research Foundation via the SFB 1060-B04 project. The work of H. Spohn is supported by the Fondation Sciences Mathématiques de Paris. The work of T. Weiss is supported by the German Research Foundation project SP181/29-1.
Theorem 2.2. In the sense of finite-dimensional distributions, lim_{t→∞} X_t(r) = A_stat(r).

Remark 2.3. The joint distributions of the Airy_stat process were first obtained in [4], see Definition 1.1 and Theorem 1.2 therein. In Definition 2.1 we state an alternative formula for the joint distributions of the Airy_stat process. The main difference between the two formulas is that in [4] the joint distributions are given in terms of a Fredholm determinant on L²({1, . . . , m} × R), while here we have a Fredholm determinant on L²(R). A similar twist was already visible in [27] and has been generalized in [9].
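For readers who want to experiment with Fredholm determinants on L²(R) numerically, the standard route is Nyström discretization with Gauss-Legendre quadrature. The sketch below is our own illustration, not from the paper; it uses a hypothetical rank-one test kernel for which det(1 − K) is known in closed form:

```python
import numpy as np

def fredholm_det(kernel, a, b, n=100):
    """Nystrom approximation of det(I - K) on L^2([a, b]) via
    Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    w = 0.5 * (b - a) * weights
    sw = np.sqrt(w)
    # Symmetrized discretization: det(I - sqrt(w) K sqrt(w)).
    K = kernel(x[:, None], x[None, :]) * sw[:, None] * sw[None, :]
    return np.linalg.det(np.eye(n) - K)

# Rank-one test kernel K(x, y) = phi(x) phi(y):
# det(I - K) = 1 - integral of phi^2, computable in closed form.
phi = lambda x: 0.5 * np.exp(-x**2)
det = fredholm_det(lambda x, y: phi(x) * phi(y), -4.0, 4.0)
exact = 1.0 - 0.25 * np.sqrt(np.pi / 2.0)
assert abs(det - exact) < 1e-8
```

For the smooth, rapidly decaying kernels appearing in this context, the quadrature error decays very quickly in the number of nodes.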
Since {x_n(t), n ∈ Z} is a Poisson point process, the process X_t(r) − X_t(0) is a scaled Poisson jump process up to a linear part, and (2.9) holds. Hence the limit process A_stat(r) − A_stat(0) must also have the statistics of two-sided Brownian motion, a property which is not so easily inferred from our formulas in Definition 2.1. But we will provide a direct proof of this fact in Section 8. Note that X_t(0) and B(2r) are not independent. As already familiar from other models in the KPZ universality class [4,8,14,19], the proof of Theorem 2.2 proceeds via a sequence of approximating initial conditions. Firstly we consider the case where x_0(0) = 0 and assume that the particles on R_+ form a Poisson process with intensity λ > 0 and those on R_− a Poisson process with intensity ρ > 0. In other words, x_n(0) = ζ_n, n ∈ Z, with ζ_0 = 0, ζ_n − ζ_{n−1} ∼ exp(λ) for n > 0, and ζ_n − ζ_{n−1} ∼ exp(ρ) for n ≤ 0. (2.10) As explained in Lemma 7.1, setting ζ_0 = 0 will induce a difference of order one as compared to the case considered in Theorem 2.2. In the scaling limit such differences are irrelevant. Thus it is enough to prove Theorem 2.2 for the initial conditions (2.10) with λ = 1 = ρ. In the sequel x_n(t) always refers to the initial conditions (2.10), in such a way that the choice of the parameters λ, ρ can be inferred from the context. We obtain the fixed time multi-point distributions of the system {x_n(t), n ∈ N_0} in terms of a Fredholm determinant in the case λ > ρ. The restriction to non-negative integers comes from Burke's theorem, by which the particles with n < 0 can be replaced by choosing x_0(t) to be a Brownian motion with drift ρ.
Notice that this result holds for λ > ρ only and not for the most interesting case λ = ρ. The latter can be accessed through a careful analytic continuation of the formulas. One of the novelties of this paper is to perform the analytic continuation after the scaling limit. This allows us to discover a new process, called the finite-step Airy_stat process, describing the large time limit close to stationarity (actually, one still needs to take care of the random shift of x_0(0), which is however irrelevant as it goes to zero after scaling in the large time limit). As before, this process is defined through its finite-dimensional distributions.
Definition 2.5 (Finite-step Airy_stat process). The finite-step Airy_stat process with parameter δ > 0, A^{(δ)}_stat, is the process with m-point joint distributions at r_1 < r_2 < · · · < r_m given in terms of a Fredholm determinant, where χ_s(r_k, x) = 1(x > s_k) and the kernel K_δ is defined by (2.16). Here, V_{r_1,r_2} is defined as in (2.2). As mentioned already above, we are going to take the limit to stationarity after the long time limit. However, in general, the limits t → ∞ and λ − ρ ↓ 0 do not commute. Therefore we have to consider λ − ρ > 0 (to be able to apply Proposition 2.4), but vanishing with a tuned scaling exponent as t → ∞, a critical scaling. We set λ − ρ = δt^{−1/3} for δ > 0. As will be proven, with this choice the limit t → ∞ commutes with δ ↓ 0.
Such considerations lead naturally to defining the rescaled process (2.18), where the superscript of x indicates λ = 1 and ρ = 1 − t^{−1/3}δ. The second main result of our paper is the description of the joint distributions of the rescaled process in the long time limit. Theorem 2.6. For every δ > 0, the rescaled process (2.18) converges to the finite-step Airy_stat process (2.19) in the sense of finite-dimensional distributions.
3 Semi-infinite initial conditions

Well-definedness
Consider the initial conditions stated in (2.10). First we show that the system with infinitely many particles is well-defined. For that purpose we use the Skorokhod representation [2,30] to define the reflected Brownian motions. This representation is the following: the process x(t), driven by the Brownian motion B(t), starting from x(0) ∈ R and being reflected at some continuous function f(t) with f(0) < x(0), is defined as x(t) = x(0) + B(t) + max{0, sup_{0≤s≤t}(f(s) − x(0) − B(s))}. Let B_n, n ∈ Z, be independent standard Brownian motions starting at 0 and define the random variables with the convention s_0 = 0 and s_{n+1} = t. We will define the system {x_n(t), n ∈ Z} as the limit of half-infinite systems {x_n^{(M)}(t), n ≥ −M}. Notice that these processes indeed satisfy the Skorokhod equation for n > −M, while the leftmost process is simply a free Brownian motion started from ζ_{−M}. Thus the half-infinite system is as desired. This representation can be seen as a zero-temperature case of the O'Connell-Yor semi-discrete directed polymer [26] with appropriate boundary conditions (see the discussion at the end of this section).
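In discrete time, the Skorokhod representation is a one-line running maximum. A small sketch (our own discretization; the step size and paths are illustrative) checks the two defining properties of the reflected path:

```python
import numpy as np

rng = np.random.default_rng(1)

def reflect(x0, B, f):
    """Discrete Skorokhod representation of a path started at x0, driven
    by the Brownian path B (B[0] = 0) and reflected upwards off the
    continuous path f (with f[0] < x0):
        x(t) = x0 + B(t) + max(0, sup_{s<=t} (f(s) - x0 - B(s)))."""
    pushing = np.maximum.accumulate(f - x0 - B)
    return x0 + B + np.maximum(pushing, 0.0)

n, dt = 1000, 1e-3
B = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])
f = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])
x = reflect(1.0, B, f)
assert np.all(x >= f - 1e-12)        # never crosses the obstacle
assert np.all(x >= 1.0 + B - 1e-12)  # dominates the free path
```

The pushing term is non-decreasing and active only when the path touches the obstacle, which is exactly the Skorokhod characterization.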
Next we show strong convergence of the half-infinite systems {x_n^{(M)}(t)} as M → ∞, as well as a supremum bound. For the proof of this proposition we first need the following concentration inequality, which is Proposition 2.1 of [25]: Proof of Proposition 3.1. Let us define an auxiliary system of processes, which we will use later in proving the Burke property. This auxiliary system differs from x_n^{(M)}(t) just in the drift of the leftmost particle, which of course influences all other particles as well (the choice of the extra drift is because the system with infinitely many particles on R_− generates a drift ρ). This system of particles satisfies the analogous Skorokhod equations, and we also have the corresponding inequalities. Moreover, applying (3.14) gives the desired bound. Repeating the same argument, we see that for every t ∈ [0, T] there exists M_t such that x_n(t) = x_n^{(M)}(t) for all M ≥ M_t. (3.20) Proof. This is a straightforward generalization of Lemma 3.2 of [15].

Burke's property
We establish a useful property which will allow us to study our system of interacting Brownian motions through a system with a left-most Brownian particle.
Proposition 3.4. For each n ≤ 0, the process x_n(t) − ζ_n − ρt is a standard Brownian motion.
Remark 3.5. Proposition 3.4 allows us to restrict our attention to the half-infinite system. In fact, conditioned on the path of x_0, the systems of particles {x_n(t), n < 0} and {x_n(t), n > 0} are independent, as is clear from the definition of the system. Then (3.23) implies that the law of {x_n(t), n > 0} is the same as the one obtained by replacing the infinitely many particles {x_m(t), m ≤ 0} with a single Brownian motion x_0(t) which has drift ρ. This property will be used to derive our starting result, Proposition 2.4.
Proof of Proposition 3.4. First notice that the claim holds for the leftmost particle. Now assume x^{(M)}_{n−1}(t) − ζ_{n−1} − ρt is a Brownian motion. By definition, (3.25) holds, which allows us to apply Proposition 3.6, i.e., we have that x^{(M)}_n(t) − ζ_n − ρt is a Brownian motion. Since x^{(M)}_n(t) converges to x_n(t), the claim follows. It is clear that in the case λ = ρ the process (3.23) is a Brownian motion for n > 0, too, i.e., the system is stationary in n. We also have stationarity in t, in the sense that for each t ≥ 0 the random variables {x_n(t) − x_{n−1}(t), n ∈ Z} are independent and exponentially distributed with parameter ρ. The following result is a small modification of Theorem 2 in [26]. Proposition 3.6 (Burke's theorem for Brownian motions). Fix ρ > 0 and let B(t), C(t) be standard Brownian motions and ζ ∼ exp(ρ), all independent. Define the process (3.27). Then the resulting process is distributed as a standard Brownian motion.
Proof. Extend the processes B(t), C(t) to two-sided Brownian motions indexed by R. Defining d(t) and q(t) accordingly, we can apply Theorem 2 of [26], i.e., d(t) is a Brownian motion. Now, by (3.31) and Lemma 3.7, q(0) has exponential distribution with parameter ρ. As it is independent of the processes {B(t), C(t), t ≥ 0}, we can write q(0) = ζ. Dividing the supremum into s < 0 and s ≥ 0 we arrive at the claim. The supremum appearing there is distributed as a Brownian motion with drift −ρ, started at zero and being reflected (upwards) at zero, evaluated at time t. As t → ∞, this converges to the stationary distribution of this process, which is the exponential distribution with parameter 2ρ.

Figure 1: A path π ∈ Π(0, 0; t, 4) (thick black) and the random background noise (grey).
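The last step can be checked numerically: under a discretization of our own choosing (all parameters below are illustrative), the long-time average of a Brownian motion with drift −ρ reflected at zero should be close to the exp(2ρ) mean 1/(2ρ):

```python
import numpy as np

rng = np.random.default_rng(2)

rho, dt, n = 1.0, 1e-3, 500_000   # illustrative parameters
# Brownian motion with drift -rho on a grid, started at 0.
W = np.cumsum(-rho * dt + np.sqrt(dt) * rng.standard_normal(n))
# Skorokhod reflection at zero: q(t) = W(t) - min(0, inf_{s<=t} W(s)).
q = W - np.minimum(np.minimum.accumulate(W), 0.0)
mean_q = q[n // 10:].mean()       # discard a short burn-in
print(mean_q)  # should be near 1/(2*rho) = 0.5
```

The time average is a Monte Carlo estimate, so it only approximates 1/(2ρ) up to statistical and discretization error.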
From a stochastic analysis point of view, the system {x_n(t), n ≥ 0} satisfies x_n(t) = ζ_n + B_n(t) + L_n(t) for n ≥ 1. Here L_n, n ≥ 2, are continuous non-decreasing processes increasing only when x_n(t) = x_{n−1}(t). In fact, L_n is twice the semimartingale local time at zero of x_n − x_{n−1}. Notice that B̃_0(t) is a standard Brownian motion independent of {ζ_n, B_n(t), n ≥ 1}, but not equal to B_0(t).

Last passage percolation interpretation
One can also view the system {x_n(t), n ≥ 0} as a model for last passage percolation (or a zero-temperature semi-discrete directed polymer). We assign random background weights on the set R_+ × N_0 in the following way:
• white noises dB_n on the lines R_+ × {n} for n ≥ 1,
• white noise dB̃_0 plus a Lebesgue measure of density ρ on the line R_+ × {0}, and
• Dirac measures of magnitude ζ_n − ζ_{n−1} at (0, n) for n ≥ 1.
An up-right path in R_+ × N_0 is characterized by its jumping points s_i and consists of line segments [s_{n−1}, s_n] × {n}, see Figure 1. The set of up-right paths can then be parameterized accordingly. The percolation time, or weight, of a path π ∈ Π is the integral of the background weights along the path; it has explicit expressions for n_1 = 0 and for n_1 > 0. The last passage percolation time is given by the supremum over all paths. The supremum is almost surely attained by a unique path π*, called the maximizer. It exists because the supremum can be rewritten as a composition of a finite maximum and a supremum of a continuous function over a compact set. Uniqueness follows from elementary properties of the Brownian measure.
Most importantly, from the definition we obtain a last passage representation of the particle positions. We will use this interpretation in Section 7; moreover, it provides connections to other works. Our model can be seen as the semi-continuous limit of a more widely studied discrete last passage percolation model (see for example [21,22]). Moreover, our last passage percolation model is the zero-temperature limit of a directed polymer model, which has been studied thoroughly in the recent past [8,29].
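The correspondence between the reflected dynamics and the last passage description can be verified exactly in discrete time: the Skorokhod running-maximum recursion and the up-right-path dynamic program produce identical values on the same noise. The sketch below is our own simplification (all particles start at 0, i.e., no initial Dirac weights and no boundary drift):

```python
import numpy as np

rng = np.random.default_rng(3)

n_levels, n_steps, dt = 6, 400, 1e-2
dB = np.sqrt(dt) * rng.standard_normal((n_levels, n_steps))
B = np.concatenate([np.zeros((n_levels, 1)), np.cumsum(dB, axis=1)], axis=1)

# (a) Skorokhod pushing: level k is reflected upwards off level k-1.
x = B[0].copy()
for k in range(1, n_levels):
    gain = np.maximum.accumulate(x - B[k])   # sup_{s<=t}(x_{k-1}(s) - B_k(s))
    x = B[k] + np.maximum(gain, 0.0)

# (b) Last passage percolation: up-right paths collect dB_k on level k,
#     G[i] = max(continue on level k, jump up from level k-1 at time i).
G = B[0].copy()
for k in range(1, n_levels):
    G_prev, G = G, np.empty_like(G)
    G[0] = 0.0
    for i in range(1, len(G)):
        G[i] = max(G[i - 1] + dB[k, i - 1], G_prev[i])

assert np.allclose(x, G)   # the two descriptions agree path by path
```

Unrolling either recursion gives max over jump times j of (previous level at j, plus the level-k increment from j to i), which is why the agreement is exact and not merely asymptotic.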
For later use we also define a version without boundary weights.

In order to prove Proposition 2.4 we start by considering the transition probability for a finite number of reflecting Brownian motions with drifts and arbitrary initial positions (Proposition 4.1). Then we will set the drift of the first Brownian motion to ρ, see Remark 3.5, and randomize the initial positions (Proposition 4.4).

Transition density for fixed initial positions
Proposition 4.1 generalizes Proposition 4.1 of [15], which was first shown in [33], to the case of non-zero drifts.
Proposition 4.1. The transition probability density of N one-sided reflected Brownian motions with drift µ from x(0) = ζ ∈ W_N to x(t) = ξ ∈ W_N at time t has a continuous version, which is given by (4.2) with (4.3). Proof. We follow the proof of Proposition 8 in [33]. The strategy is to show that the transition density satisfies three equations: the backwards equation, the boundary condition and the initial condition, the latter being contained in Lemma 4.2. These equations are then used in Itô's formula to prove that it is indeed the transition density. We start with the backwards equation (4.4) and the boundary condition (4.5). To see (4.5), move the prefactor e^{−µ_i ζ_i} inside the integral in the (N + 1 − i)-th row of the determinant and notice that the differential operator transforms F_{k,l} into −F_{k+1,l}. Consequently, ζ_i = ζ_{i−1} implies that the (N + 1 − i)-th row is the negative of the (N + 2 − i)-th row. (4.4) can be obtained by a direct computation. Let f : W_N → R be a C^∞ function whose support is compact and has a distance of at least some ε > 0 to the boundary of W_N. Define a function F by (4.7). The previous identities (4.5) and (4.4) carry over to the function F in the form of (4.8) and (4.9). Our processes satisfy x_n(t) = ζ_n + µ_n t + B_n(t) + L_n(t), where the B_n are independent Brownian motions, L_1 ≡ 0 and L_n, n ≥ 2, are continuous non-decreasing processes increasing only when x_n(t) = x_{n−1}(t). In fact, L_n is twice the semimartingale local time at zero of x_n − x_{n−1}. Now fix some ε > 0, T > 0, define the process F(T + ε − t, x(t)) for t ∈ [0, T] and apply Itô's formula (4.10). From the definition it follows that dx_n(t) = µ_n dt + dB_n(t) + dL_n(t), and the quadratic variations simplify because continuous functions of finite variation do not contribute to the quadratic variation.
Inserting the differentials, by (4.9) the integrals with respect to ds cancel, which results in (4.12). Since the measure dL_n(t) is supported on {x_n(t) = x_{n−1}(t)}, where the spatial derivative of F is zero (see (4.8)), the last term vanishes, too. The resulting process is thus a local martingale and, being bounded, even a true martingale. In particular its expectation is constant. Applying Lemma 4.2 we can take the limit ε → 0, leading to (4.14). Because of the assumptions we made on f, it is still possible that the distribution of x(T) has positive measure on the boundary. We thus have to show that r_t(ζ, ξ) is normalized over the interior of the Weyl chamber. Start by integrating (4.2) over ξ_N ∈ [ξ_{N−1}, ∞). Pull the prefactor indexed by n = N as well as the integration inside the l = 1 column of the determinant; this gives the (k, 1) entry explicitly. The contribution from x = ξ_{N−1} is a constant multiple of the second column and thus cancels out. The remaining terms are zero for k ≥ 2, since all these functions F_{k,2} have Gaussian decay. The only non-vanishing term comes from k = 1 and returns exactly 1 by an elementary residue calculation. The determinant can thus be reduced to the index set 2 ≤ k, l ≤ N. Successively carrying out the integrations of the remaining variables in the same way, we arrive at the claimed normalization. This concludes the proof.
Lemma 4.2. For fixed ζ ∈ W_N, the transition density r_t(ζ, ξ) as given by (4.2) satisfies lim_{t→0} ∫ r_t(ζ, ξ) f(ξ) dξ = f(ζ) for any C^∞ function f : W_N → R whose support is compact and has a distance of at least some ε > 0 to the boundary of W_N.
Proof. At first consider the contribution to the determinant in (4.2) coming from the diagonal. For k = l the products in (4.3) cancel out, so we are left with a simple Gaussian density. This contribution is thus given by the multidimensional heat kernel, which is well known to converge to the delta distribution. The remaining task is to prove that for all other permutations the integral vanishes in the limit.
Let σ be such a permutation. Its contribution is an integral in which we have extended the domain of f to R^N, f being identically zero outside of W_N. We also omitted the prefactor, since it is bounded for ξ in the compact domain of f. There exist i < j with σ(j) ≤ i < σ(i). It is enough to restrict the area of integration to the two sets W_1, W_2, since on the complement of W_1 ∪ W_2 we are not inside the support of f. We start with the contribution coming from W_1. Notice that all functions F_{k,l} with k > l can be written as iterated derivatives of F_{k,k} and some exponential functions. For each k ≠ i with k > σ(k) we write F_{k,σ(k)} in this way and then use integration by parts to move the exponential factors and derivatives onto f. The result is an analogous integral for a new C^∞ function f̃, which has compact support and is therefore bounded, too. We can bound the contribution by first integrating the variables, see (4.22). W'_1 consists of the ξ-components yet to be integrated that are contained in the set W_1 ∩ supp(f̃). In particular, W'_1 is compact, so the functions F_{k,σ(k)}, k ≠ i, are bounded uniformly in t by Lemma 4.3. The remaining integral converges to 0 as t → 0 by (4.25). The contribution of W_2 can be bounded analogously, with j playing the role of i. The final convergence is then given by (4.24).
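The heat-kernel-to-delta step in the diagonal contribution can be illustrated numerically. The sketch below is ours (the test function and grid are arbitrary choices): it pairs the one-dimensional heat kernel with a smooth f and watches the pairing approach f(0) as t → 0:

```python
import numpy as np

def heat_kernel_pairing(t, f, lim=10.0, n=20001):
    """Pair the 1d heat kernel p_t with a test function f by a Riemann sum."""
    x = np.linspace(-lim, lim, n)
    p = np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return np.sum(p * f(x)) * (x[1] - x[0])

f = lambda x: np.cos(x) * np.exp(-x**2)   # smooth, rapidly decaying
vals = [heat_kernel_pairing(t, f) for t in (1.0, 0.1, 0.001)]
# As t -> 0 the pairing approaches f(0) = 1.
assert abs(vals[-1] - 1.0) < 5e-3
```

The residual error at small t is of order t (it is controlled by f''(0)), which is consistent with the delta-distribution limit.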
In addition, for each 1 ≤ k < l ≤ N the function F k,l (x, t) is bounded uniformly in t on compact sets.
Proof. Let x < −ε, and choose µ positive. By a change of variables we obtain (4.26), where g(|v|) denotes a bound on the fraction part of the integrand, which grows at most polynomially in |v|. Convergence of the integral is ensured by the exponential term, so integrating and taking the limit t → 0 gives (4.25).
To see (4.24), notice that by l ≤ k the integrand has no poles, so we can shift the contour to the right, such that µ is negative, and obtain the convergence analogously.
We are left to prove the uniform boundedness of F_{k,l} on compact sets for k < l. For x ≤ 0 we can use (4.26) to get a bound for t ≤ 1. In the case x > 0 we shift the contour to negative µ, thus obtaining contributions from residues as well as from the remaining integral. The latter can be bounded as before, while the residues are well-behaved functions, which converge uniformly on compact sets.

Transition density for random initial positions
To obtain a representation as a signed determinantal point process we have to introduce a new measure. This measure P^+ coincides with P on the sigma-algebra generated by ζ_{k+1} − ζ_k, k ∈ Z, and the driving Brownian motions. But under P^+, ζ_0 is a random variable with an exponential distribution instead of being fixed at zero. Formally, P^+ = P ⊗ P_{ζ_0}, with P_{ζ_0} giving rise to ζ_0 ∼ exp(λ − ρ), so that P is the result of conditioning P^+ on the event {ζ_0 = 0}. This new measure satisfies a determinantal formula for the joint distribution at a fixed time.
For the related model, the totally asymmetric simple exclusion process, a formula similar to the one of Proposition 4.4 also exists [7]. Here we provide a direct proof of it.
Proof of Proposition 4.4. The fixed time distribution can be obtained by integrating the transition density (4.1) over the initial condition, which we denote by (4.30). Since all w_k are integrated over the same contour, we can replace w_k by w_{σ(k)}, see (4.31). We apply Lemma 4.5 below to the sum and finally obtain (4.32). Lemma 4.5. Given N ∈ N, λ > 0 and w_1, . . . , w_N ∈ C \ R_−, the identity (4.33) holds. Proof. We use induction on N. For N = 1 the statement is trivial. For the induction step, rearrange the left hand side of (4.33) as in (4.34), where we applied the induction hypothesis to the second sum. Further, to show (4.36), we introduce the variable w_{N+1} and consider the factorization of the determinant det_{1≤k,l≤N+1}.

Proof of Proposition 2.4
We can rewrite the measure in Proposition 4.4 in terms of a conditional L-ensemble (see Lemma 3.4 of [11], reported here as Lemma 4.6) and obtain a Fredholm determinant expression for the joint distribution of any subset of particle positions. Then it remains to relate the laws under P^+ and P, the latter being the law of the reflected Brownian motions specified by the initial condition (2.10). This is done using a shift argument, analogous to the one used for the polynuclear growth model with external sources [5,19] or for the totally asymmetric simple exclusion process [4,14,28].
Proof of Proposition 2.4. The proof is divided into two steps. In Step 1 we determine the distribution under P^+ and in Step 2 we extend this result via a shift argument to P.
Step 1. We consider the law of the process under P^+ for now. The first part of the proof is identical to the proof of Proposition 3.5 of [15], so it is only sketched here. Repeatedly applying a summation identity, using the antisymmetry of the determinant, and encoding the constraint on the integration variables into indicator functions, we obtain that the measure (4.28) is a marginal of (4.41), with the convention that ξ^{n−1}_n ≤ y always holds. The measure (4.41) has the appropriate form for applying Lemma 4.6. The composition of the φ functions can be evaluated explicitly as φ^{0,n}(x, y) = (φ_1 ∗ · · · ∗ φ_n)(x, y) = ρ^{1−n} e^{ρy} for n ≥ 1, while Ψ^n_{n−k} is given by a contour integral for n, k ≥ 1 and some 0 < ε < λ. In the case n ≥ k the integrand has no poles in the region |w| < λ, which implies Ψ^n_{n−k} = (−1)^{n−k} F_{n−k}. The straightforward recursion (φ_n ∗ Ψ^n_{n−k})(ξ) = Ψ^{n−1}_{n−1−k}(ξ) (4.45) eventually leads to condition (4.64) being satisfied. The space V_n is generated by the functions Φ^n_{n−k}. By the rules of residue calculus, Φ^n_{n−k} is a polynomial of order n − k for k ≥ 2 and a linear combination of 1 and e^{ρξ} for k = 1, so these functions indeed generate V_n. To show (4.66) for ℓ ≥ 2, we decompose the scalar product as in (4.49). Since n − k ≥ 0 we are free to choose the sign of ε as necessary. For the first term, we choose ε < 0 and the path Γ_0 close enough to zero, such that always Re(w − z) > 0. Then, we can take the integral over ξ inside and obtain
(4.50). For the second term, we choose ε > 0 to obtain Re(w − z) < 0. Then again, we can take the integral over ξ inside and arrive at the same expression up to a minus sign. The net result of (4.49) is a residue at w = z. The case ℓ = 1 uses the same decomposition and requires the choice ε > ρ resp. ε < 0. Furthermore, both φ̂_n(ξ^{n−1}_n, x) and Φ^n_0(ξ) are constants, so the kernel has the simple form (compare with (4.67)) K(n_1, ξ_1; n_2, ξ_2) = −φ^{(n_1,n_2)}(ξ_1, ξ_2) 1(n_2 > n_1) + . . . However, the relabeling ξ^1_k := ξ_{k−1} included an index shift, so the kernel of our system is actually (4.54). Note that we are free to extend the summation over k up to infinity, since the integral expression for Φ^n_{n−k}(ξ) vanishes for k > n anyway. Taking the sum inside the integrals we can write (4.56). By choosing contours such that |z| < |w|, we can use the formula for a geometric series.
where the x^n_{n+1} are some "virtual" variables and Z_N is a normalization constant. If Z_N ≠ 0, then the correlation functions are determinantal.
To write down the kernel we need to introduce some notation. Define φ^{(n_1,n_2)}(x, y) = (φ_{n_1+1} ∗ · · · ∗ φ_{n_2})(x, y) for n_1 < n_2, and 0 otherwise, where (a ∗ b)(x, y) = ∫_R dz a(x, z) b(z, y), and, for 1 ≤ n < N, the corresponding functions are linearly independent and generate the n-dimensional space V_n. Define a set of functions {Φ^n_{n−j}(x), j = 1, . . . , n} spanning V_n by the orthogonality relations with some c_n ≠ 0, n = 1, . . . , N; then the kernel takes the simple form (4.67).

A simple rescaling reproduces the same system with new parameters λ = 1 and ρ replaced by ρ/λ. We can therefore restrict our considerations to λ = 1 without loss of generality.
Fix λ = 1 from now on. According to (2.18) we use the scaled variables (5.2) with δ > 0. Correspondingly, consider the rescaled (and conjugated) kernel K^resc(r_1, s_1; r_2, s_2) = t^{1/3} e^{ξ_1−ξ_2} K(n_1, ξ_1; n_2, ξ_2), (5.3) which naturally decomposes into K^resc(r_1, s_1; r_2, s_2) = −φ^resc_{r_1,r_2}(s_1, s_2) 1(r_1 < r_2) + K^resc_0(r_1, s_1; r_2, s_2). (5.4) Remark 5.2. Instead of integrals over Airy functions, (2.17) can also be written in terms of contour integrals (5.5). In the integral defining K, the paths for W and Z do not have to intersect. In addition, the Gaussian part has a representation in terms of an integral over Airy functions. In order to establish the asymptotics of the joint distributions, one needs both a pointwise limit of the kernel as well as uniform bounds to ensure convergence of the Fredholm determinant expansion. The first time this approach was used is in [17]. These results are contained in the following propositions. Proposition 5.5. For fixed r_1, r_2, L and δ > 0 there exists t_0 > 0 such that the estimate |K^resc_0(r_1, s_1; r_2, s_2)| ≤ const · e^{−min{δ,1}s_2} (5.9) holds for any t > t_0 and s_1, s_2 > 0.
Proposition 5.6 (Proposition 5.4 of [15]). For fixed r_1 < r_2 there exist t_0 > 0 and C > 0 such that the corresponding estimate holds. Now we can prove the asymptotic theorem. Proof of Theorem 2.6. The joint distributions of the rescaled process are given by (5.11), where n_i and ξ_i are given in (5.2). Using the change of variables σ_k = t^{−1/3}(ζ_k − 2t − 2t^{2/3} r_{i_k}) and a conjugation we obtain (5.12), where the fraction inside the determinant is the new conjugation, which does not change the value of the determinant. Using Corollary 5.4 and Propositions 5.5, 5.6, we can bound the (k, l)-coefficient inside the determinant, assuming the r_k are ordered. Combining these bounds with the Hadamard bound on the determinant, the integrand of (5.12) is bounded by an integrable function. Furthermore, the resulting series is summable, since the factorial grows like (N/e)^N, i.e., much faster than the numerator. Dominated convergence thus allows us to interchange the limit t → ∞ with the integral and the infinite sum. The pointwise convergence comes from Proposition 5.3, which yields the claimed limit under P^+. It remains to show that the convergence carries over to the measure P.
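The Hadamard bound invoked here states that |det A| ≤ ∏_k ‖row_k‖₂, hence |det A| ≤ N^{N/2} when all entries are bounded by 1 in absolute value; this is what makes the N-th term of the Fredholm expansion summable against 1/N!. A quick numerical check (our own random matrices, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hadamard's inequality: |det A| <= product of Euclidean row norms,
# hence <= N^(N/2) when all entries are bounded by 1 in absolute value.
for N in (3, 6, 10):
    A = rng.uniform(-1.0, 1.0, size=(N, N))
    det = abs(np.linalg.det(A))
    row_bound = np.prod(np.linalg.norm(A, axis=1))
    assert det <= row_bound + 1e-12
    assert row_bound <= N ** (N / 2.0)
print("Hadamard bound verified")
```

Since N^{N/2}/N! → 0 superexponentially, the determinant bound alone suffices for dominated convergence of the series.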
The identity (5.20) follows by differentiation. Notice that in (5.12), s_i appears only in the indicator function, so differentiation just results in one of the σ_k not being integrated but instead being set to s_i. Using the same bounds as before we can again show interchangeability of the limit t → ∞ with the remaining integrals and the infinite sum.
Before showing Propositions 5.3 and 5.5, we introduce some auxiliary functions and establish asymptotic results for them. There exist two explicit integral representations for the Hermite polynomials. Introducing the shorthands x = 2t^{1/2} + 2t^{1/6} r + t^{−1/6} s, n = t + 2t^{2/3} r and applying the change of variables w → −wt^{−1/2}, we can rewrite the first representation. Using Stirling's approximation and Taylor expansion in the exponents one can analyze this further, as in (5.27), with the error terms being uniform for s ∈ [−L, L]. Observing the behavior as t → ∞ settles the convergence of α_t. Using the second integral representation of the Hermite polynomials one can rewrite β_t, too. Analyzing the prefactor as before finishes the proof. Proof. We start by analyzing β_t. Define the auxiliary functions accordingly, for some small positive ε to be chosen below, and let θ ∈ (π/6, π/4). As shown in Figure 2, we change the contour Γ_0 to the union γ_1 ∪ γ_2(R) ∪ γ_3(R). Since we will only estimate the absolute value of the integrals, the direction of integration does not matter. If t and s are fixed, the integrand is dominated by the exp(−z²) term for large |z|. Thus the contribution coming from γ_3(R) converges to 0 as R → ∞. With γ_2 = lim_{R→∞} γ_2(R), our choice for the contour of integration is now γ_1 ∪ γ_2 ∪ γ̄_2.
We start by analyzing

Consider the prefactor e^{G(z_0)} first. Since ω is small, we can use a Taylor expansion, as well as (5.33), to obtain the bounds

To show convergence of the integral part of (5.35), we first bound the real part of the exponent:

(5.38)

η satisfies

where we used |u| < ω. Given any ε, we can now choose both L and t_0 so large that the first term dominates. Consequently η is bounded from below by some positive constant η_0. The integral contribution coming from γ_1 can thus be bounded as

(5.40)
Finally we need a corresponding bound on the γ_2 contribution to the integral. By symmetry this case also covers the contour γ̄_2. Write

From the previous estimates one easily gets

e^{G(z_1)} ≤ e^{G(z_0)} ≤ e^{−(1/2) s^{3/2}}, (5.42)

so the remaining task is to show boundedness of the integral part of (5.41). First notice that the real part of the f_1 contribution to the exponent is negative, so we can omit it, avoiding the problem of large s. By elementary calculus, we have

for all u ≥ ω/cos θ, that is, γ_2 is a steep descent curve for f_3. We can therefore restrict the contour to a neighbourhood of the critical point z_1, which we choose of magnitude δ, at the expense of an error of order O(e^{−const·δt}):

Taylor expanding these functions leads to

≤ χ_2 |r| t^{2/3} u ω,

for some function f_t(r). From the convergence of α_t and β_t it is clear that f_t converges, too. Since we already know that β_t is uniformly bounded by a constant times e^{−s^{3/2}/2}, the exponential bound on α_t follows.
Proof of Proposition 5.3. Regarding the first part of the kernel, we notice that the case n_i = 0 does not appear in our scaling, so we can use the formula (for n_2 > n_1)

This is the same function as in the proof of Proposition 5.1 of [15], so the limit

lim_{t→∞} φ^{resc}_{r_1,r_2}(s_1, s_2) = (4π(r_2 − r_1))^{−1/2} e^{−(s_2−s_1)²/(4(r_2−r_1))} 𝟙(r_1 < r_2)

need not be proven again here.
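The limiting kernel above is the Gaussian heat kernel with time parameter r_2 − r_1. Its defining semigroup (Chapman–Kolmogorov) property can be checked numerically; the sketch below (plain Python, illustrative only) assumes the kernel in the displayed form:

```python
import math

def heat_kernel(r1, r2, s1, s2):
    """Gaussian kernel exp(-(s2-s1)^2 / (4(r2-r1))) / sqrt(4 pi (r2-r1))."""
    dr = r2 - r1
    return math.exp(-(s2 - s1) ** 2 / (4.0 * dr)) / math.sqrt(4.0 * math.pi * dr)

# Chapman-Kolmogorov: integrating out the intermediate point reproduces
# the kernel with the time increments added.
r1, r2, r3, s1, s3 = 0.0, 0.6, 1.5, 0.3, -0.4
T, steps = 12.0, 6000
du = 2.0 * T / steps
conv = 0.0
for k in range(steps):
    u = -T + (k + 0.5) * du
    conv += heat_kernel(r1, r2, s1, u) * heat_kernel(r2, r3, u, s3) * du
assert abs(conv - heat_kernel(r1, r3, s1, s3)) < 1e-8
```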
The different parts of the remaining kernel can be rewritten as integrals over the previously defined functions α and β. For K, choose the contours in such a way that Re(z − w) > 0 is ensured.
Also rewrite f as follows: Similarly, with The residuum satisfies the limit lim t→∞ Res g,−ρ = e δ 3 /3+r 2 δ 2 −s 2 δ (5.54) uniformly in s 2 . The prefactor of the last part of the kernel is simply Combining all these equations gives (5.55) Using the previous lemmas we can deduce compact convergence of the kernel. Indeed (omitting the r-dependence for greater clarity) we can write: (5.56) By Lemma 5.8 the integrand converges to zero for every x > 0. Using Lemma 5.9 we can bound it by const · e −2x , thus ensuring that (5.56) goes to zero, i.e., K converges compactly. In the same way we can show the convergence of f and g . Applying the limit in (5.55) and inserting the expressions for α and β finishes the proof.

Path-integral style formula
Using the results from [9], we can transform the formula for the multidimensional probability distribution of the finite-step Airy_stat process from its current form, involving a Fredholm determinant over the space L²({r_1, ..., r_m} × R), into a path-integral style form, in which the Fredholm determinant is over the simpler space L²(R). The result of [9] cannot be applied at the level of finite time, as one of its assumptions is not satisfied.
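Numerically, a Fredholm determinant over L²(R) is commonly approximated by a quadrature (Nyström-type) discretization, det(𝟙 − K) ≈ det(δ_ij − √w_i K(x_i, x_j) √w_j). The sketch below is illustrative only and does not use the kernel of the proposition: it takes a rank-one toy kernel K(x, y) = e^{−(x+y)} on [0, ∞), for which det(𝟙 − K) = 1 − ∫_0^∞ e^{−2x} dx = 1/2 exactly.

```python
import numpy as np

def kernel(x, y):
    """Toy rank-one kernel K(x, y) = exp(-(x + y)) on [0, infinity)."""
    return np.exp(-(x + y))

# Nystrom discretization with Gauss-Legendre nodes mapped to [0, T].
T, n = 30.0, 80                                  # domain truncation and node count
nodes, weights = np.polynomial.legendre.leggauss(n)
x = 0.5 * T * (nodes + 1.0)                      # map [-1, 1] -> [0, T]
w = 0.5 * T * weights
sw = np.sqrt(w)
A = sw[:, None] * kernel(x[:, None], x[None, :]) * sw[None, :]
fredholm = np.linalg.det(np.eye(n) - A)
# Exact value for this rank-one kernel: 1 - 1/2 = 1/2.
assert abs(fredholm - 0.5) < 1e-8
```

This discretization scheme is the standard route to evaluating Fredholm determinants of the Airy-type kernels appearing in this paper.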
Proposition 6.1. For any parameters χ k ∈ R, 1 ≤ k ≤ m, satisfying Writing K δ r i (x, y) := K δ (r i , x; r i , y), the finite-dimensional distributions of the finite-step Airy stat process are given by
Remark 6.2. The operator V_{r_j,r_i}, for r_i < r_j, is defined only on the range of K^δ_{r_i} and acts on it in the following way: In particular, we also have V_{r_j,r_i} 1 = 1.
Proof. We denote conjugation by the operator M by a hat, in the following way: Applying the conjugation also to the determinant in (2.15), the identity we have to show is

This is done by applying Theorem 1.1 of [9]. It has three groups of assumptions to verify; we merge them into two by choosing the multiplication operators of Assumption 3 to be the identity.

Assumption 1
(i) The operators P_{s_i} V_{r_i,r_j}, P_{s_i} K^δ_{r_i}, P_{s_i} V_{r_i,r_j} K^δ_{r_j} and P_{s_j} V_{r_j,r_i} K^δ_{r_i}, for r_i < r_j, preserve L²(R) and are trace class in L²(R).
The semigroup property is clear. To see the reversibility relation, start from the contour integral representation (5.5) of K_{r_j,r_j} and f_{r_j} and use the Gaussian identity

On the other hand we have

so K^δ_{r_i} V_{r_i,r_j} = K_{r_i,r_j} + δ f_{r_i} ⊗ g_{r_j}, which proves Assumption 2 (iii). In view of Remark 6.2, the right-invertibility follows immediately. Assumption 1 (ii) can be deduced from Assumption 1 (i), as shown in Remark 3.2 of [9]. Using the previous identities, we are thus left to show that the three operators P_{s_i} V_{r_i,r_j}, for r_i < r_j, as well as P_{s_i} K_{r_i,r_j} and P_{s_i} f_{r_i} ⊗ g_{r_j}, for arbitrary r_i, r_j ∈ R, are all L²-bounded and trace class.
First notice that V_{r_i,r_j}(x, y) = V_{0,r_j−r_i}(−x, −y). Using the shorthand r = r_j − r_i and inserting this into the integral representation (5.6) of V, we have

(6.10)

with the new operators, whose kernels are of the form

(x, y) ↦ e^{(2/3) r³} e^{r(x−y)} Ai(r² + x − y). (6.11)
Introducing yet another operator,

the Hilbert–Schmidt norm of the first factor is given by

The asymptotic behaviour of the Airy function and the inequalities χ_i > χ_j > 0 imply that both integrals are finite. Similarly,

(6.14)

where we also used 2r > χ_i + χ_j. As a product of two Hilbert–Schmidt operators, P_{s_i} V_{r_i,r_j} is thus L²-bounded and trace class.
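The principle used here, that a product of two Hilbert–Schmidt operators is trace class with ‖AB‖_1 ≤ ‖A‖_2 ‖B‖_2, can be illustrated in finite dimensions, where the Hilbert–Schmidt norm is the Frobenius norm and the trace norm is the sum of singular values (numpy sketch, illustrative only):

```python
import numpy as np

def hilbert_schmidt(M):
    """Hilbert-Schmidt (Frobenius) norm: sqrt of the sum of squared entries."""
    return np.sqrt((M * M).sum())

def trace_norm(M):
    """Trace norm: sum of the singular values."""
    return np.linalg.svd(M, compute_uv=False).sum()

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
# ||AB||_1 <= ||A||_2 * ||B||_2: a product of two Hilbert-Schmidt operators
# is trace class -- the mechanism behind the bound on P_{s_i} V_{r_i, r_j}.
assert trace_norm(A @ B) <= hilbert_schmidt(A) * hilbert_schmidt(B) + 1e-9
```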
We decompose the operator K_{r_i,r_j} as

where K′_r(x, y) = e^{(2/3) r³} e^{r(x+y)} Ai(r² + x + y). (6.16)

Again we bound the Hilbert–Schmidt norms,

as well as

The superexponential decay of the Airy function implies that for every c_1 > |r_j| we can find c_2 such that e^{2 r_j z} Ai²(r_j² + z) ≤ c_2 e^{−c_1 z}. This proves finiteness of the integrals.
Regarding the last operator, start by decomposing it as

for some function φ with unit L²-norm. Next, notice that

It is easy to see that lim_{s→∞} f_{r_i}(s) = 1, so f_{r_i} is bounded on the domain of integration. But then the m²_{r_i} term ensures the decay, implying that the integral is finite. Furthermore,

Analyzing the asymptotic behaviour of g_{r_j}, we see that for large positive arguments the first part decays exponentially with rate −δ and the second part even superexponentially. Thus δ > χ_j gives convergence on the positive half-line. For negative arguments it is sufficient to note that g_{r_j} does not grow faster than exponentially.
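For the rank-one operator f ⊗ g, the trace norm equals ‖f‖_2 ‖g‖_2, so trace-class membership reduces to square integrability of the two factors, which is exactly what is verified above. A finite-dimensional check (numpy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(50)
g = rng.standard_normal(50)
outer = np.outer(f, g)                       # discretized rank-one operator f (x) g
sv = np.linalg.svd(outer, compute_uv=False)  # singular values
# A rank-one operator has exactly one nonzero singular value, ||f|| * ||g||,
# so its trace norm is finite iff both factors are square integrable.
assert abs(sv.sum() - np.linalg.norm(f) * np.linalg.norm(g)) < 1e-9
```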

Analytic continuation -Proof of Theorem 2.2
First of all, let us show that the choice x_0(0) = 0 is asymptotically irrelevant. Denote by X^{(0)}_t(r) the rescaled process as in (2.18), where x_0(0) = 0, and by X_t(r) the rescaled process as in (2.1), where −x_0(0) ∼ exp(1). This corresponds to a finite shift of the system, which is therefore irrelevant in the large-time limit.
We know from Theorem 2.6 and Proposition 6.1 that

In this section we prove the main Theorem 2.2 by extending this equation to δ = 0. The right-hand side can in fact be analytically continued to all δ ∈ R (see Proposition 7.4). In addition, we have to show that the left-hand side is continuous at δ = 0. This proof relies mainly on Proposition 7.2, which bounds the exit point of the maximizing path from the lower boundary in the last passage percolation model.
Proof of Theorem 2.2. We adopt the point of view of last passage percolation discussed in Section 3.3. The superscripts of x, L and w indicate the choice of ρ, while λ is always fixed at 1. It is clear that for any path π the weight w^{(ρ)}(π) is non-decreasing in ρ. But then the supremum is non-decreasing, too, and

for ρ < 1. We know that there exists a unique maximizing path π* ∈ Π(0, 0; t; n). We can therefore define Z_n(t) := s*_0, the exit point from the lower boundary, specifically for ρ = 1. We want to derive the inequality

This can be seen as follows. Note that π* maximizes w^{(1)}(π) and not necessarily w^{(ρ)}(π). In particular we have

Combining the last two equations results in (7.4). Now (7.3) and (7.4) imply for the rescaled processes X_t that

For any ε > 0 it holds that

Then, taking t → ∞, we obtain

(7.9)

Using Proposition 7.2 on the last term and Proposition 7.4 on the other terms, we can now take the limit δ → 0, resulting in

Proof. By scaling of t and β, (7.11) is equivalent to

lim_{β→∞} lim sup_{t→∞} ℙ(Z_t(t + 2t^{2/3} r) > β t^{2/3}) = 0, (7.12)

for any r ∈ R, which is the limit we will show. We introduce some new events:

Notice that if M_β occurs, then

L_{(0,0)→(t+2t^{2/3} r, t)} = L_{(0,0)→(β t^{2/3}, 0)} + L_{(β t^{2/3}, 0)→(t+2t^{2/3} r, t)}, (7.14)

resulting in M_β ∩ E_β ⊆ N_β. We arrive at the inequality

We further define the new random variables

By Theorem 7 of [34], for any fixed r ∈ R,

where ξ_GUE has the GUE Tracy–Widom distribution. ξ^{(t)}_{spiked} follows the distribution of the largest eigenvalue of a critically spiked GUE matrix, as will be shown in Lemma 7.3. ξ^{(t)}_N has the distribution of a standard normal random variable ξ_N for any β > 0, t > 0.
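The last passage time over up-right paths satisfies the recursion L(i, j) = w(i, j) + max(L(i−1, j), L(i, j−1)), and the monotonicity in ρ used above holds because increasing every weight can only increase the supremum over paths. A minimal sketch (generic i.i.d. exponential weights, not the boundary-weight ensemble of Section 3.3):

```python
import random

def last_passage(w):
    """Last passage time over up-right paths:
    L(i, j) = w[i][j] + max(L(i-1, j), L(i, j-1))."""
    n, m = len(w), len(w[0])
    L = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            best = 0.0
            if i > 0:
                best = L[i - 1][j]
            if j > 0:
                best = max(best, L[i][j - 1])
            L[i][j] = w[i][j] + best
    return L[n - 1][m - 1]

random.seed(3)
n = 20
w_lo = [[random.expovariate(1.0) for _ in range(n)] for _ in range(n)]
# Increasing every weight (as raising rho does for the boundary weights)
# increases the weight of every path, hence the supremum is non-decreasing.
w_hi = [[x + 0.1 for x in row] for row in w_lo]
assert last_passage(w_hi) >= last_passage(w_lo)
```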
Proof. The family of processes L_{(βt^{2/3},0)→(βt^{2/3}+t, n)}, indexed by n ∈ N_0 and with time parameter t ≥ 0, is precisely a marginal of Warren's process with drifts, starting at zero, as defined in [13]. In our case only the first particle has a drift, equal to 1, and all the others have drift zero. By Theorem 2 of [13], the fixed-time distribution of this process is given by the distribution of the largest eigenvalue of a spiked n × n GUE matrix, where the spikes are given by the drifts. Thus we can apply results on spiked random matrices; concretely, we want to apply Theorem 1.1 of [6], with the potential V(x) = −x²/2. Since

L* := L_{(βt^{2/3},0)→(t+2t^{2/3} r, n)} (7.28)

represents an n × n GUE matrix diffusion M(τ) at time τ = t + 2t^{2/3}(r − β/2), it is distributed according to the density

where I_{11} is the n × n matrix with a one at entry (1, 1) and zeros elsewhere. In order to apply the theorem we need the density given in equation (1) of [6], i.e., we consider the scaled quantity L*/√(nτ). The size of the first-order spike is then

We are thus in the neighbourhood of the critical value a_c = 1. For α ≥ 0, let

With F_0(s) being the cumulative distribution function of the GUE Tracy–Widom distribution, and K_{0,0}(s_1, s_2) as in (2.17), define

Applying (28) of [6], we have

where α = β/2 − r. Since in our case α > 1, we can estimate

Combining this with the usual bounds on the Airy kernel and the Airy function, we see that as β → ∞ the scalar product in (7.32) converges to zero, and we are left with the limit of F_0, which is one.
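The rank-one spike a·I_{11} shifts only the (1, 1) entry of the GUE matrix, mirroring the single drifting particle in the proof. The sketch below (numpy, illustrative only) samples such a spiked matrix and checks the elementary Weyl-type monotonicity λ_max(H + spike) ≥ λ_max(H); the BBP-type asymptotics quoted from [6] are of course not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)
n, drift = 60, 1.0
# Hermitian matrix with complex Gaussian entries (a GUE-type sample).
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (G + G.conj().T) / 2.0
# Rank-one spike drift * I_11: only the (1, 1) entry is shifted.
spike = np.zeros((n, n))
spike[0, 0] = drift
top_plain = np.linalg.eigvalsh(H)[-1]
top_spiked = np.linalg.eigvalsh(H + spike)[-1]
# Adding a positive semidefinite perturbation cannot lower the top eigenvalue.
assert top_spiked >= top_plain
```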
On the other hand, and from which the claim follows.
All functions involved are locally bounded, so to establish convergence it is enough to investigate their asymptotic behaviour. The function g_{r_1} may grow exponentially at an arbitrarily high rate, depending on r_1 and δ, for both large positive and large negative arguments. We therefore need superexponential bounds on the function

(𝟙 − PK)^{−1} (P f* + P K P_{s_1} 1 + (P − P_{s_1}) 1). (7.43)

For this purpose we first need an expansion of the operator P:

P = Σ_{k=1}^{n} P̄_{s_1} V_{r_1,r_2} ⋯ P̄_{s_{k−1}} V_{r_{k−1},r_k} P_{s_k} V_{r_k,r_1}. (7.44)

Notice that all the operators P_{s_i}, P̄_{s_i} and V_{r_i,r_j} map superexponentially decaying functions onto superexponentially decaying functions. Moreover, P_{s_i} and P̄_{s_i} generate superexponential decay for large negative resp. positive arguments.
The function f* decays superexponentially for large arguments but may grow exponentially for small ones. Since every term of the sum (7.44) contains one projection P_{s_k}, P f* decays superexponentially on both sides.
Examining (P − P_{s_1}) 1, notice that the k = 1 contribution in (7.44) equals P_{s_1}, which is cancelled out here. All other contributions contain both P̄_{s_1} and P_{s_k}, which ensure superexponential decay.
Using the usual asymptotic bound on the Airy function, we see that the operator K maps any function in its domain onto one which decays superexponentially for large arguments. By the previous arguments, functions in the image of PK decay on both sides, in particular PKP_{s_1} 1.
Now, in order to establish the finiteness of the scalar product, decompose the inverse operator as (𝟙 − PK)^{−1} = 𝟙 + PK(𝟙 − PK)^{−1}. The contribution coming from the identity has just been settled. As the inverse of a bounded operator, (𝟙 − PK)^{−1} is also bounded. Because of the rapid decay, the functions P f*, PKP_{s_1} 1 and (P − P_{s_1}) 1 certainly lie in L²(R) and are thus mapped into L²(R) by this operator. Finally, the image of an L²(R)-function under the operator PK decays superexponentially on both sides. The expression (7.42) is thus an analytic function of δ on all of R. Setting δ = 0 returns the value of G_m(r, s). Combining these results with (7.39) finishes the proof of the proposition.
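The decomposition (𝟙 − PK)^{−1} = 𝟙 + PK(𝟙 − PK)^{−1} is an instance of the resolvent identity, valid whenever 𝟙 − PK is invertible; in finite dimensions it can be checked directly (numpy sketch, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
A = rng.standard_normal((n, n))
A *= 0.5 / np.linalg.norm(A, 2)      # scale so ||A|| < 1, hence I - A is invertible
inv = np.linalg.inv(np.eye(n) - A)
# Resolvent identity: (I - A)^{-1} = I + A (I - A)^{-1}.
assert np.allclose(inv, np.eye(n) + A @ inv)
```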
Proof. We employ the same strategy as in [4]. For that purpose we use the equivalence det(𝟙 + A) ≠ 0 ⇐⇒ 𝟙 + A is invertible.

= F_GOE(2^{2/3} s_min) > 0 (7.46)

for any s_min > −∞, where F_GOE is the GOE Tracy–Widom distribution function. For the last equality see [12, 22]. The tails of the GOE Tracy–Widom distribution have been studied in great detail in various publications; see for instance [3].
(8.9)

Regarding the first term, notice that the multiple derivative of the Fredholm determinant gives exactly the multipoint density of the Airy_2 process, which is known to decay exponentially for both large positive and large negative arguments. This exponential decay dominates the linear growth of R. Similarly, the (m − 1)-fold derivative is smaller than the (m − 1)-point density of the Airy_2 process, so this contribution vanishes in the limit, too.
For large negative σ_1 we have Sg → 0 and SKS^{−1} → 𝟙. The rank-one contribution is thus

(P P_{s_1} 1 − (P − P_{s_1}) 1) ⊗ 0. (8.17)

We have to be somewhat careful here, as the convergence is only pointwise (weak) and Sg is not even in L²(R). But the first factor decays superexponentially on both sides for finite σ_1, and also in the limiting case P P_{s_1} 1 − (P − P_{s_1}) 1 = (1 − P) P_{s_1} 1, so one should be able to derive good convergence properties. Neglecting this rank-one contribution we are left with

lim