KPZ equation tails for general initial data

We consider the upper and lower tail probabilities for the centered (by time/24) and scaled (according to KPZ time^{1/3} scaling) one-point distribution of the Cole-Hopf solution of the KPZ equation when started with initial data drawn from a very general class. For the lower tail, we prove an upper bound which demonstrates a crossover from super-exponential decay with exponent 3 in the shallow tail to an exponent 5/2 in the deep tail. For the upper tail, we prove super-exponential decay bounds with exponent 3/2 at all depths in the tail.


Introduction
In this paper we consider the following question: How does the initial data for an SPDE affect the statistics of the solution at a later time? Namely, we consider the Kardar-Parisi-Zhang (KPZ) equation (or equivalently, the stochastic heat equation (SHE)) and probe the lower and upper tails of the centered (by time/24) and scaled (by time^{1/3}) one-point distribution for the solution at finite and long times. Our main results (Theorems 1.2 and 1.4) show that within a very large class of initial data, the tail behavior for the KPZ equation does not change in terms of the super-exponential decay rates and at most changes in terms of the coefficient in the exponential. These results are the first tail bounds for general initial data which capture the correct decay exponents and which respect the long-time scaling behavior of the solution.
Note that the logarithm in (1.1) is defined since Z(T, X) is almost-surely strictly positive for all T > 0 and X ∈ R [Mue91]. We refer to [Qua12,Cor12,Hai13] for more details about the KPZ equation and the SHE and their relation to random growth, interacting particle systems, directed polymers and other probabilistic systems (see also [Mol,Kho14,BC95,BC17,Com17]).
In this paper, we consider very general initial data, as we now describe. The class requires, in particular, (1) f(y) ≤ C + ν 2^{2/3} y^2 for all y ∈ R. The one-point tail splits into four regions. Region I (deep lower tail, when s ≪ −T^{2/3}): the log density has power law decay with exponent 5/2. Region II (shallow lower tail, when −T^{2/3} ≪ s ≪ 0): the log density has power law decay with exponent 3. Region III (center, when s ≈ 0): the density depends on initial data as predicted by the KPZ fixed point. Region IV (upper tail, when s ≫ 0): the log density has power law decay with exponent 3/2. The universality of the power law exponents (in regions I, II and IV) for general initial data constitutes the main contribution of this paper.
on P(h^f_T(y) ≤ −s) when y = 0. This is explained in Section 1.1. Finally, observe that two important choices of initial data (narrow wedge and Brownian motion) do not fit into this class. The narrow wedge result is in fact a building block for the proof of this result, while the Brownian motion result follows as a fairly easy corollary (see Section 1.2).
Our second main result pertains to the upper tail and shows upper and lower bounds which hold uniformly over f ∈ Hyp(C, ν, θ, κ, M ), and T > π.
Presently, exact formulas amenable to rigorous asymptotics are only available for one-point tail probabilities, and not multi-point. However, by using the Gibbs property for the KPZ line ensemble (introduced in [CH16] and recalled here in Section 2) we will be able to extend this one-point tail control to the entire spatial process. Working with the Gibbs property is a central technical aspect of our present work and forms the backbone of the proof of Theorem 1.2. Besides the KPZ line ensemble, another helpful property of the narrow wedge KPZ solution is the stationarity of the spatial process Υ_T(·) after a parabolic shift. Proposition 1.9 (Proposition 1 of [CQ13]). Let H be the Cole-Hopf solution to KPZ started from initial data H_0. Fix k ∈ Z_{>0}. For any T_1, . . . , T_k ≥ 0, X_1, . . . , X_k ∈ R and s_1, . . . , s_k ∈ R, P(H(T_1, X_1) > s_1, . . . , H(T_k, X_k) > s_k) ≥ ∏_{i=1}^{k} P(H(T_i, X_i) > s_i). A simple corollary of this result is that for T_1, T_2 ∈ R_{>0}, X_1, X_2 ∈ R and s_1, s_2 ∈ R, P(H(T_1, X_1) > s_1, H(T_2, X_2) > s_2) ≥ P(H(T_1, X_1) > s_1) P(H(T_2, X_2) > s_2). (1.10)
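The positive-association inequality (1.10) can be illustrated on a toy model: two increasing functions of a shared Gaussian driver (a schematic stand-in for the KPZ solution at two space-time points, not a simulation of it; the thresholds, mixing weights and seed are arbitrary choices).

```python
import random

def positive_association_demo(n=50000, seed=1):
    """Toy Monte Carlo check of an FKG-type inequality as in (1.10):
    two increasing functions of a shared Gaussian driver give events A, B
    with P(A and B) >= P(A) * P(B)."""
    rng = random.Random(seed)
    hits_a = hits_b = hits_ab = 0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)              # shared driving noise
        y1 = x + 0.5 * rng.gauss(0.0, 1.0)   # plays the role of H(T1, X1)
        y2 = x + 0.5 * rng.gauss(0.0, 1.0)   # plays the role of H(T2, X2)
        a, b = y1 > 0.7, y2 > 0.7
        hits_a += a
        hits_b += b
        hits_ab += a and b
    return hits_a / n, hits_b / n, hits_ab / n

p_a, p_b, p_ab = positive_association_demo()
```

Here the joint hit frequency exceeds the product of marginals because both variables increase in the common driver, which is the monotonicity mechanism behind (1.10).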

Narrow wedge and Brownian initial data results
Neither narrow wedge nor two-sided Brownian initial data belongs to the class of functions in Definition 1.1. We record here the analogues of Theorems 1.2 and 1.4 for these two cases. As mentioned in the last section, the one-point tail results for the narrow wedge solution are important inputs to the proof of Theorems 1.2 and 1.4. We recall these below. Proposition 1.10 (Theorem 1.1 of [CG]). Fix ε, δ ∈ (0, 1/3) and T_0 > 0. Then, there exists s_0 = s_0(ε, δ, T_0) such that (1.11) holds, and, Our general initial data results also rely upon upper and lower bounds on the upper tail probability of Υ_T(·) which are, in fact, new (see Section 1.3 for a discussion of previous work).
(1.12) holds. Our next two results are about the tail probabilities for the KPZ equation with two-sided Brownian motion initial data; as this initial data falls outside our class, some additional arguments are necessary. Define H^{Br} to be the Cole-Hopf solution with this initial data. We first state our result on the lower tail of h^{Br}_T(0). Theorem 1.13. Fix ε, δ ∈ (0, 1/3) and T_0 > 0. There exist s_0 = s_0(ε, δ, T_0) and K = K(ε, δ, T_0) > 0 such that for all s ≥ s_0 and T ≥ T_0, (1.14) holds. Our last result of this section is about the upper tail probability of h^{Br}_T(0). Theorem 1.14. Fix ε, µ ∈ (0, 1/2) and T_0 > 0. Then, there exists s_0 = s_0(ε, µ, T_0) such that for all s ≥ s_0 and T ≥ T_0, the stated bound holds, where c_1 > c_2 depend on the values of ε, µ and T_0 as described in Theorem 1.4.
In Theorem 1.14, the second term of the upper bound (on the right-hand side of the equation) comes from the fact that Brownian motion is random, and the first term arises in an analogous way as it does for deterministic initial data in Theorem 1.4.

Previous work and further directions
The study of tail probabilities for the KPZ equation and the SHE has a number of motivations including intermittency and large deviations. We recall some of the relevant previous literature here and compare what is done therein to the results of this present work.
The first result regarding the lower tail probability of Z(T, X) was the proof of its almost sure positivity by [Mue91]. Later, [MN08] investigated the lower tail of the SHE restricted to the unit interval with general initial data and Dirichlet boundary condition; they bounded P(log Z(T, X) ≤ −s) from above by c_1 exp(−c_2 s^{3/2−δ}) (where c_1, c_2 are two positive constants depending implicitly on T). In [MF14], this upper bound was further improved to c_1 exp(−c_2 s^2) for the delta initial data SHE (the constants are different but still depend implicitly on T). Using these bounds, [CH16] demonstrated similar upper bounds on the lower tail probability of the KPZ equation under general initial data. There are also tail bounds for the fractional Laplacian (∆^{α/2} with α ∈ (1, 2]) SHE.
None of the previous SHE lower tail bounds were suitable for taking the time T large. Specifically, the constants depend implicitly on T, and the centering by T/24 and scaling by T^{1/3} were not present. Thus, as T grows, the bounds weaken significantly, to the point of triviality. For instance, one cannot conclude tightness of the centered and scaled version of log Z(T, X) (Υ_T(X) herein) as T goes to infinity using these bounds.
The first lower tail bounds suitable for taking T large came in our previous work [CG], which dealt with the delta initial data SHE (see Proposition 1.10 herein). That result relied upon an identity of [BG16] (see Proposition 4.5). No analog of that identity seems to exist for general initial data. This is why we use the KPZ line ensemble approach in our present work. The upper tail probability of the SHE has been studied before in a number of places. For instance, see [CD15, CJK13, KKX17] in regards to its connection to the moments and the intermittency property [GM90, GKM07] of the SHE. Again, there is a question of whether results are suitable for taking T large. The only such result is [CQ13, Corollary 14] which shows that for some constants c_1, c_2, c_1', c_2', and s, T ≥ 1, P(Υ_T > s) ≤ c_1 exp(−c_1' s T^{1/3}) + c_2 exp(−c_2' s^{3/2}). When s ≪ T^{2/3} the second bound is active and one sees the expected 3/2 power-law in the exponent. However, as s ≫ T^{2/3}, the leading term above becomes c_1 exp(−c_1' s T^{1/3}) and only demonstrates exponential decay. Our result (Theorem 1.11) shows that c_1 exp(−c_1' s T^{1/3}) is not a tight upper bound for P(Υ_T > s) in this regime of s. In fact, the 3/2 power-law is shown to be valid for all s even as T grows (with upper and lower bounds of this sort). In light of our results for the fractional Laplacian SHE, it might be natural to expect the true decay exponent is 3 − 1/α. Perhaps the methods of [MF14] can be applied to give decay at least with exponent 2. Heuristically, one may be able to see the true exponent by using the physics weak noise theory as in, for example, [KMS16].
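The crossover in the [CQ13] bound can be checked numerically: with placeholder constants (all set to 1, purely illustrative, not the actual ones), the summand with the smaller exponent dominates, and the switch happens at s of order T^{2/3}.

```python
import math

def cq13_style_bound(s, T, c1=1.0, c2=1.0):
    """Toy version of the [CQ13]-type upper bound: a sum of two decaying
    terms.  The constants are placeholders, not the paper's."""
    return math.exp(-c1 * s * T ** (1 / 3)) + math.exp(-c2 * s ** 1.5)

def dominant_term(s, T):
    """The term with the smaller exponent decays slower, hence dominates.
    s^{3/2} < s*T^{1/3}  iff  s < T^{2/3}."""
    return "3/2" if s ** 1.5 < s * T ** (1 / 3) else "linear-in-s"

T = 1000.0
shallow = dominant_term(0.5 * T ** (2 / 3), T)  # s below T^{2/3}
deep = dominant_term(2.0 * T ** (2 / 3), T)     # s above T^{2/3}
```

This is exactly the regime split discussed above: the 3/2 power-law is visible for s below T^{2/3}, while above it the old bound degenerates to exponential decay.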
EJP 25 (2020), paper 66. Some works have focused on the large s but fixed T upper tail; e.g. [CJK13] showed that log P(log Z(T, X) > s) ≍ −s^{3/2} as s → ∞ where Z(0, X) ≡ 1. These results are not suitable for taking T and s large together. Our results (Theorems 1.4, 1.11 and 1.14) provide the first upper and lower bounds for the upper tail probability which are well-adapted to taking T large. In particular, we show that for a wide range of initial data the exponent of the upper tail decay is always 3/2 (a result which was not proved before for any specific initial data). However, the constants in the exponent for our bounds on the upper tail probability are not optimal.
It is natural to speculate on the values of these optimal coefficients. There is some discussion of this in the physics literature (see, for example, [KMS16, HLDM+18]) based on numerics and the weak noise theory (WNT)^4. In the deep lower tail (the 5/2 exponent region) the coefficient depends on the initial data and can be predicted using the WNT as in [KMS16]. For the shallow lower tail (the 3 exponent region) one expects (by reason of continuity) to have a coefficient corresponding to the tail decay of the KPZ fixed point with the corresponding initial data. Remarkably, for the upper tail (the 3/2 exponent region) it seems that for all deterministic initial data, the upper tail coefficient remains the same^5. However, for Brownian initial data, the coefficient changes by a factor of 2.
There have been previous considerations of tail bounds in the direction of studying large deviations for the KPZ equation (i.e., the probability that as T → ∞, log Z(T, X) looks like cT for some constant c not equal to −1/24). The speeds for the upper tail and lower tail are different (the former being T and the latter being T^2). The lower tail large deviation principle has been the subject of significant study in the physics literature (see [SMP17, CGK+18, KL18a, KL18b] and references therein). Recently, [Tsa] provided a rigorous proof of the lower tail rate function. We are not aware of a rigorous proof of the (likely) simpler upper tail rate function for the KPZ equation (there are some nonrigorous predictions about this, see e.g. [LDMS16]). However, for a discrete analog (the log-gamma polymer) and a semi-discrete analog (the O'Connell-Yor polymer) such an upper tail bound is proved in [GS13] and [Jan15] respectively. We finally mention a few directions worth pursuing. Theorem 1.2 only provides an upper bound on the lower tail. Our KPZ line ensemble methods are able to produce a lower bound, but with a worse (larger) power law. It is only for the narrow wedge initial data that we have a tight matching lower bound. We conjecture that there should be a similarly tight upper and lower bound for the lower tail which holds true for general initial data. The large deviation result for the lower tail (see [SMP17, CGK+18, Tsa]) is only shown for narrow wedge initial data (though there is also some work needed for flat and Brownian initial data). It would be interesting to determine how the large deviation rate function depends on the initial data. In fact, even for the KPZ fixed point (e.g. TASEP) this does not seem to be resolved.
Outline Section 2 reviews the KPZ line ensemble and its Gibbs property. Sections 3.1 and 3.2 establish the lower tail bounds of Theorems 1.2 and 1.13 by first analyzing the narrow wedge initial condition tails and then feeding those bounds into an argument leveraging the Gibbs property and the convolution formula of Proposition 1.7. We prove the upper tail bounds of Theorem 1.11 in Section 4 by analyzing the moment formula (see Lemma 4.1) and the Laplace transform formula (see Proposition 4.5) of the narrow wedge solution. Sections 5.1 and 5.2 contain the proofs of (respectively) Theorems 1.4 and 1.14 on the upper tail bounds under general initial data. ^4 The approach is to look at the KPZ equation in short time with very weak noise. This is a different problem than looking at the deep tail, but so far the results one gets from the WNT seem to remain true even at long times. ^5 For instance, for flat and narrow wedge initial data, the upper tail seems to have the same 4/3 coefficient.

KPZ line ensemble
This section reviews (following the work of [CH16]) the KPZ line ensemble and its Gibbs property. We use this construction in order to transfer one-point information (namely, tail bounds) into spatially uniform information for Υ T (y) (see Definition 1.6). It is through this mechanism that we can escape the bonds of exact formulas and generalize the conclusions of [CG] to general initial data.
Definition 2.1. Fix intervals Σ ⊂ N and Λ ⊂ R. Let X be the set of all continuous functions f : Σ × Λ → R endowed with the topology of uniform convergence on the compact subsets of Σ × Λ. Denote the sigma field generated by the Borel subsets of X by C.
A Σ × Λ-indexed line ensemble L is a random variable in a probability space (Ω, B, P) such that it takes values in X and is measurable with respect to (B, C). In simple words, L is a collection of Σ-indexed random continuous curves, each mapping Λ to R.
Fix two integers k_1 ≤ k_2, a < b and two vectors x, y ∈ R^{k_2−k_1+1}. A {k_1, . . . , k_2} × (a, b)-indexed line ensemble is called a free Brownian bridge line ensemble with the entrance data x and the exit data y if its law, denoted here as P_free^{k_1,k_2,(a,b),x,y}, is that of k_2 − k_1 + 1 independent Brownian bridges starting at time a at points x and ending at time b at points y. We use the notation E_free^{k_1,k_2,(a,b),x,y} for the corresponding expectation, where the curves (L_{k_1}, . . . , L_{k_2}) are distributed via P_free^{k_1,k_2,(a,b),x,y}. Throughout this paper we will restrict our attention to a one-parameter family of Hamiltonians indexed by T ≥ 0. This is a spatial Markov property: the ensemble in a given region has marginal distribution only dependent on the boundary values of said region.
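The free Brownian bridge line ensemble can be sketched by sampling independent discretized bridges pinned to the entrance and exit data. This is a toy illustration, not the paper's construction; the grid size and seed are arbitrary.

```python
import random

def free_bridge_ensemble(k, a, b, xs, ys, n=100, seed=0):
    """Sample k independent Brownian bridges on [a, b] with entrance data xs
    and exit data ys: a bare-bones discretized sketch of the free Brownian
    bridge line ensemble."""
    assert len(xs) == len(ys) == k
    rng = random.Random(seed)
    dt = (b - a) / n
    curves = []
    for x, y in zip(xs, ys):
        # random walk approximation of a Brownian path started at 0
        w = [0.0]
        for _ in range(n):
            w.append(w[-1] + rng.gauss(0.0, dt ** 0.5))
        # pin it into a bridge running from x (at time a) to y (at time b)
        curve = [x + w[i] - (i / n) * w[n] + (i / n) * (y - x)
                 for i in range(n + 1)]
        curves.append(curve)
    return curves

curves = free_bridge_ensemble(3, 0.0, 1.0, [1.0, 0.0, -1.0], [0.5, -0.5, -1.5])
```

By construction each sampled curve matches its entrance and exit data exactly, which is the defining property of the ensemble.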
Denote the sigma field generated by the curves with indices outside K × (a, b) by F_ext(K × (a, b)). Denote the set of all Borel measurable functions from C_K to R by B(C_K). Then, a K-stopping domain (a, b) is said to satisfy the strong H-Brownian Gibbs property if for all F ∈ B(C_K), the following holds P-almost surely: on the l.h.s. is the restriction of the P-distributed curves, and on the r.h.s. L_{k_1}, . . . , L_{k_2} is P_H^{k_1,k_2,(ℓ,r),x,y,f,g}-distributed.
is the same as the measure of a free Brownian bridge started from x and ended at y.
The following lemma demonstrates a sufficient condition under which the strong H-Brownian Gibbs property holds. (1) The lowest indexed curve H^1_T(X) is equal in distribution (as a process in X) to the Cole-Hopf solution H^{nw}(T, X) of KPZ started from the narrow wedge initial data.
(3) Define the scaled KPZ line ensemble {Υ^{(n)}_T(x)}_{n∈N,x∈R}. The following proposition is a monotonicity result which shows that two line ensembles with the same index set can be coupled in such a way that if the boundary conditions of one ensemble dominate those of the other, then likewise do the curves.
Let us provide the basic idea behind how we use Lemma 2.4. Note that by the H-Brownian Gibbs property, the lowest indexed curve 2^{−1/3} Υ^{(1)}_T(x), when restricted to the interval (a, b), has the stated conditional measure. On the other hand, there is the probability measure of a Brownian bridge on the interval (a, b) with the entrance and exit data 2^{−1/3} Υ^{(1)}_T(a) and 2^{−1/3} Υ^{(1)}_T(b) respectively. Lemma 2.4 constructs a coupling between these two measures on the curve 2^{−1/3} Υ^{(1)}_T for any event A whose chance increases under the pointwise decrease of Υ^{(1)}_T. In most of our applications of this idea, it is easy to find upper bounds on the r.h.s. of (2.2) using Brownian bridge calculations. Via (2.2), those bounds transfer to the spatial process Υ^{(1)}_T(·). Since, by Proposition 2.3, this curve is equal in law to Υ_T(·) (the scaled and centered narrow wedge KPZ equation solution), these bounds in conjunction with the convolution formula of Proposition 1.7 embody the core of our techniques to generalize the tail bounds from narrow wedge to general initial data. The following lemma is used in controlling the probabilities which arise on the r.h.s. of (2.2).

Lower tail under general initial data
In this section, we prove Theorems 1.2 and 1.13. Starting with the tail bounds of Proposition 1.10, we first bound the lower tail probabilities of the narrow wedge solution at a countable set of points of R (see Lemma 3.1). Combining this with the Brownian Gibbs property of the narrow wedge solution and the growth conditions of the initial data (given in Definition 1.1), we prove the lower tail bound of Theorem 1.2 in Section 3.1 via the convolution formula of Proposition 1.7. By controlling the fluctuations of a two-sided Brownian motion in small intervals, we prove the lower tail bound of Theorem 1.13 (see Section 3.2) in a similar way.

Proof of Theorem 1.2
Recall that the initial data H_0 is defined from f via (1.3). Also recall the definition of Υ_T(·) from (1.8). Fix the sequence {ζ_n}_{n∈Z} where ζ_n := n s^{−(1+δ)}. Let us define the events E_n and F_n; here, we suppress the dependence on the various variables. By (1.9) of Proposition 1.7, we obtain (3.1). We focus on bounding separately the two terms on the right side of (3.1).
Now it suffices to control the second term on the right side of (3.1). We start by showing: under the assumption that f belongs to the class Hyp(C, ν, θ, κ, M), there exists s_1 = s_1(C, ν, θ, κ, M) such that (3.7) holds for all s ≥ s_1. Proof. Assume the events on the l.h.s. of (3.7) occur. Appealing to (1.3), we make the stated observation. Clearly, there exists s_1 = s_1(C, ν, θ, κ, M) such that the right side above is bounded below by e^{−T^{1/3} s} for all s ≥ s_1. This shows the claimed containment of the events in (3.7).
Owing to (3.7) and Bonferroni's union bound, we arrive at (3.8). We obtain an upper bound on the r.h.s. of (3.8) in the following lemma.
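The union-bound step can be quantified in a toy form: per-point bounds of size exp(−c s^3) summed over polynomially many grid points ζ_n cost only a polynomial prefactor, which is absorbed by slightly shrinking the constant in the exponent. The values c = 1, the grid count, and the shrunken constant 1/2 below are illustrative, not the paper's.

```python
import math

def union_bound_total(s, c=1.0, n_points=4001):
    """Sum identical per-point tail bounds exp(-c s^3) over n_points grid
    points (a schematic Bonferroni bound)."""
    per_point = math.exp(-c * s ** 3)
    return n_points * per_point

s = 3.0
total = union_bound_total(s)
absorbed = math.exp(-0.5 * s ** 3)  # weaker constant absorbs the prefactor
```

Even thousands of grid points leave the bound far below the same exponential with a halved constant, which is why the discretization costs nothing at the level of decay exponents.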
To complete the proof of Theorem 1.2, it only remains to prove Lemma 3.3 which we show below.

Proof of Theorem 1.13
This proof is similar to that of Theorem 1.2. We use the same notation ζ_n, E_n and F_n introduced at the beginning of the proof of Theorem 1.2, and additionally define the analogous events here. We can use (3.2) of Lemma 3.1 to bound Σ_n P(E_n). While the conclusion of Lemma 3.2 does not hold in the present case, we will show that it does hold with high probability; see (3.17). Applying (3.17) and (3.2) to (3.15), we obtain (1.14). To complete the proof of Theorem 1.13, we now need to prove Lemma 3.4, which is given as follows.
Proof of Lemma 3.4. Observe first the containment (3.18). Thanks to this containment, we get (3.19). We bound the r.h.s. of (3.19), via the reflection principle, as in (3.20), where X_1, X_2 are independent Gaussians with variance 2^{1/3} s^{−(1+δ)}. By tail estimates, it follows that the r.h.s. of (3.20) is bounded above by c_1 e^{−c_2 s^{3+δ}} for some constants c_1, c_2 > 0 which only depend on ε. Plugging this into (3.19) and combining with (3.18), we find (3.16).
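The exponent 3 + δ in this proof comes from a Gaussian tail at scale s with variance of order s^{−(1+δ)}. A small deterministic check of that bookkeeping (the 2^{1/3} prefactor follows the text; δ = 0.1 and u = 50 are arbitrary choices):

```python
import math

def gaussian_tail_log_bound(u, delta=0.1):
    """Log of the standard Gaussian tail bound exp(-u^2 / (2 sigma^2)) for a
    centered Gaussian of variance sigma^2 = 2^{1/3} * u^{-(1+delta)}, the
    variance scale of the increments over a grid of mesh u^{-(1+delta)}."""
    sigma2 = 2 ** (1 / 3) * u ** (-(1 + delta))
    return -(u ** 2) / (2 * sigma2)

u, delta = 50.0, 0.1
log_bound = gaussian_tail_log_bound(u, delta)
# u^2 / u^{-(1+delta)} = u^{3+delta}: the exponent 3 + delta emerges
predicted = -(u ** (3 + delta)) / (2 * 2 ** (1 / 3))
```

The algebra u^2 · u^{1+δ} = u^{3+δ} is exactly why shrinking the grid mesh to s^{−(1+δ)} upgrades the naive exponent 2 of a Gaussian tail to 3 + δ.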

Upper Tail under narrow wedge initial data
The aim of this section is to prove Theorem 1.11. To achieve this, we first state a few auxiliary results which combine to prove Theorem 1.11. These auxiliary results bound the tail at level (1 + ε)s. To see this, we first note that, using the approximation 1 − exp(−e^{−ζsT^{1/3}}) ≈ exp(−ζsT^{1/3}), (4.2) implies the claimed bound. Proposition 4.4. Fix ε ∈ (0, 1), T > π and c > (4/3)(1 + ε)^{1/3}. Then, there exists a sequence {s_n}_n such that s_n → ∞ as n → ∞ and P(Υ_T(0) > s_n) ≥ e^{−c s_n^{3/2}} for all n ∈ N.
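A quick numerical sanity check of the approximation 1 − exp(−e^{−x}) ≈ e^{−x} used above (with x playing the role of ζsT^{1/3}); the expm1 form avoids catastrophic cancellation for large x:

```python
import math

def one_minus_exp_exp(x):
    """1 - exp(-e^{-x}), computed stably via expm1; for large x this equals
    e^{-x} * (1 + O(e^{-x})), justifying the replacement used in the text."""
    return -math.expm1(-math.exp(-x))

x = 20.0
ratio = one_minus_exp_exp(x) / math.exp(-x)
```

Already at x = 20 the two expressions agree to within a factor 1 + O(e^{-20}), so the approximation is harmless at the exponential scale of the tail bounds.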

Proof of Proposition 4.3
To prove Proposition 4.3, we need the following lemma. (The displayed bound takes the modified form 2T^{k/2} k^{3/2} when T < π.)
Taking λ = (k) (i.e., λ_1 = k and λ_i = 0 for all i ≥ 2), evaluating the single integral and noting that all the terms on the r.h.s. above are positive yields the lower bound in (4.14) when T_0 > π. In the case when T_0 < π, the term corresponding to λ = (k) is bounded below by T_0^{(k−1)/2} π^{k/2} ψ_T(k) for all T ∈ [T_0, π]. This yields the lower bound in (4.14) when T_0 < π.
For the upper bound, we first show that if λ is a partition of k not equal to (k), then (4.17) holds, with equality only when λ = (k − 1, 1). We prove this by induction. It is straightforward to check that (4.17) holds when k = 1, 2. Assume (4.17) holds when k = k_0 − 1. Now we show it for k = k_0. Let us assume that λ is a partition of k_0 and write the corresponding decomposition. The right-hand side of the above display is equal to zero when λ = (k_0), (k_0 − 1, 1). In the case when λ_{ℓ(λ)} = 1, the above inequality follows by our assumption since (λ_1, . . . , λ_{ℓ(λ)−1}) is a partition of k_0 − 1. For λ_{ℓ(λ)} > 1, we write the analogous decomposition. Note that (λ_1, . . . , λ_{ℓ(λ)} − 1) is a partition of k_0 − 1. Since λ_{ℓ(λ)} < k_0 and (4.17) holds for k = k_0 − 1, the right-hand side of the above display is greater than 0. This shows (4.18) and hence proves (4.17). We return to the proof of the upper bound in (4.14). Observe that by bounding the cross-product over i < j by 1 and using Gaussian integrals, we may bound as in (4.19). When T > π, the r.h.s. of (4.19) is at most 1. Otherwise, the r.h.s. of (4.19) is bounded above by (π/T)^{k/2}. Owing to this, (4.17), and m_1! m_2! · · · ≤ k!, we get the claimed bound. Proof of (4.4). Combining Markov's inequality and the second inequality of (4.14), we get (4.23). The first inequality of (4.23) follows by noting that k_0 ≥ c^{−1} for some positive constant c. We get the second inequality of (4.23) by comparison with 2s. Finally, (4.4) follows by plugging (4.23) into the r.h.s. of (4.22).
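The Markov-inequality step in the proof of (4.4) is the mechanism behind the 3/2 exponent: when the k-th moments grow like exp(c k^3), optimizing the bound exp(c k^3 − ks) over k yields exp(−C s^{3/2}). A toy sketch with a placeholder constant c = 1 (not the paper's constant):

```python
import math

def markov_log_bound(s, k, c=1.0):
    """log of the Markov/Chernoff bound E[e^{k X}] / e^{k s} assuming
    log-moments grow like c * k^3 (c is a placeholder constant)."""
    return c * k ** 3 - k * s

def optimal_k(s, c=1.0):
    # d/dk (c k^3 - k s) = 0  at  k = sqrt(s / (3 c))
    return math.sqrt(s / (3 * c))

s = 100.0
k_star = optimal_k(s)
best = markov_log_bound(s, k_star)
# closed form of the minimum: -(2 / (3 sqrt(3 c))) * s^{3/2}, here with c = 1
predicted = -(2.0 / (3.0 * math.sqrt(3.0))) * s ** 1.5
others = min(markov_log_bound(s, k) for k in [1, 2, 5, 10, 20])
```

Because the cubic moment growth is balanced against the linear term ks at k of order s^{1/2}, the optimized exponent is of order s^{3/2}, matching the upper tail exponent proved in this section.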

Proof of Proposition 4.4
We prove this by contradiction. Assume there exists M > 0 such that P(Υ_T(0) > s) admits the stated upper bound. We may choose k to be a sufficiently large integer such that the r.h.s. of (4.39) exceeds M. Then, approximating the integral of (4.38), we obtain an estimate which contradicts (4.14). Hence, the claim follows.

Proof of Proposition 4.1
Our proof of Proposition 4.1 relies on a Laplace transform formula for Z^{nw}(T, 0) which was proved in [BG16] and follows from the exact formula for the relevant probability. It is worth noting that I_s(x) = exp(−J_s(x)).
Proposition 4.5 (Theorem 1 of [BG16]). For all s ∈ R, (4.41) holds. We start our proof of Proposition 4.1 with upper and lower bounds on the r.h.s. of (4.41).

Proof of Proposition 4.6
Proof of (4.42). We start by noticing the trivial lower bound, where the inequality is obtained via J_s(a_k) ≤ e^{−T^{1/3} s ζ}, which follows on the event A. Our next task is to bound the product over k > k_0 of I_s(a_k) from below. To achieve this, we recall the result of [CG, Proposition 4.5] which shows that for any ε, δ ∈ (0, 1) we can augment the probability space on which the Airy point process is defined so that there exists a random variable C_Ai with the stated properties. Here, λ_k is the k-th zero of the Airy function (see [CG, Proposition 4.6]) and we fix some δ ∈ (0, ε); define φ(s) accordingly. Appealing to the tail probability of C_Ai, we have P(C_Ai ≤ φ(s)) ≥ 1 − e^{−s^{3/2}+4/3}. We now claim that for some constant C > 0, (4.48) holds. To prove this, note that (4.49) holds for all k ≥ k_0. The first inequality of (4.49) is an outcome of [CG, Proposition 4.6] and the second inequality follows from [CG, Lemma 5.6]. Applying (4.49), we get (4.50). Summing over k > k_0 in (4.50), approximating the sum by the corresponding integral, and evaluating yields (4.48). Now, we turn to complete the proof of (4.42). Plugging (4.48) into the r.h.s. of (4.47) yields the desired bound. To finish the proof, we make the final observation. Proof of (4.43). Here, we need to get an upper bound on E[∏_{k=1}^{∞} I_s(a_k)]. We start by splitting E[∏_{k=1}^{∞} I_s(a_k)] into two different parts (again set A = {a_1 ≤ (1 + ζ)s}). We split the first term on the r.h.s. of (4.54) as in (4.55). On the event B, we may obtain a suitable bound in terms of P(A). Thanks to Theorem 1.4 of [CG], we know that for any δ > 0, there exists s_δ such that P(B^c) ≤ e^{−c(ζs)^{3−δ}} for all s ≥ s_δ. Now, we plug these bounds into (4.55), which provides an upper bound on the first term on the r.h.s. of (4.54). As a result, we find the claimed bound for sufficiently large s. This completes the proof of (4.43) and hence also of Proposition 4.6.

Upper tail under general initial data
This section contains the proofs of Theorems 1.4 and 1.14.

Proof of Theorem 1.4
Theorem 1.4 will follow directly from the next two propositions, which leverage narrow wedge upper tail decay results to give general initial data results. The cost of this generalization comes in terms of both the coefficients in the exponent and the ranges on which the inequalities are shown to hold. Recall h^f_T and Υ_T from (1.5) and (1.8) respectively.
The following proposition has two parts, which correspond to T being greater than, or less than or equal to, π. The main goal of this proposition is to provide a recipe for deducing upper bounds on P(h^f_T(0) > s) by employing the upper bounds on P(Υ_T(0) > s). We noticed in Theorem 1.11 that the latter bounds vary as s lies in different intervals and, furthermore, those intervals vary with T. This motivates us to choose a generic set of intervals of s based on a given T and assume upper bounds on P(Υ_T(0) > s) on those intervals. In what follows, we show how those translate to upper bounds on P(h^f_T(0) > s).
In the next result, we demonstrate an upper bound on the first term on the r.h.s. of (5.9).
Let n_0 = n_0(s, δ, τ) < n_0' = n_0'(s, δ, τ) ∈ N be such that 2^{−5/3} τ ζ_n^2 ∈ S_2 for all integers n in between, as in (5.14). Plugging this into (5.11), summing in a similar way as in the proof of Lemma 3.1, and simplifying, we arrive at (5.16). Applying (5.17) to the r.h.s. of (5.11) for all n such that 2^{−5/3} τ ζ_n^2 ∈ S_1 ∪ S_3 and summing in a similar way as in the proof of Lemma 3.1 yields (5.18) for some C_2 = C_2(ε, T_0). Adding (5.16) and (5.18) and noticing that (1 − 2µ/3)^{3/2} ≥ 1 − µ, we obtain (5.10) if s ∈ [s_0, s_1] ∩ [s̄, ∞), where s̄ depends on ε and T_0. Now, we turn to the case when s ∈ ((s_1, s_2] ∪ (s_2, ∞)) ∩ [s_0, ∞). Owing to (5.1), for all n ∈ Z and s ∈ [s_0, ∞), the analogous bound holds for some K = K(C, T, τ, ν) > 0. There exists s' = s'(µ, T_0, C, ν, θ, κ, M) such that the right-hand side of the above inequality is bounded above as required. Proof. We need to bound P(E^c_{n−1} ∩ E^c_{n+1} ∩ F_n) for all n ∈ Z. After some definitions, we begin with the following inequality; we will bound each term on the r.h.s. above. Proposition 1.10 provides s' := s'(ε, T_0), K = K(ε, T_0) > 0 and an upper bound (at level (1 − ε)s) valid for s ≥ s' and T ≥ T_0. Figure 2 (description): Υ^{(1)}_T(·) stays in between M(·) and L(·) at ζ_{n−1} and ζ_{n+1}. The rightmost point in (ζ_n, ζ_{n+1}) where Υ^{(1)}_T(·) hits U(·) is labeled σ_n. The event that the black curve stays above the square at ζ_n is B̃_n, and the probability of B̃_n conditioned on the sigma algebra F_ext({1} × (ζ_{n−1}, σ_n)) is given in (5.25). On the other hand, (5.26) gives the probability of B̃_n under the free Brownian bridge (scaled by 2^{1/3}) measure on the interval (ζ_{n−1}, σ_n) with the same starting and end points as Υ^{(1)}_T(·). The dashed black curve is such a free Brownian bridge, coupled to Υ^{(1)}_T(y) for all y ∈ (ζ_{n−1}, σ_n). Owing to this coupling, the conditional probability of B̃_n dominates the free-bridge probability of B̃_n. The probability of B(σ_n) staying above the bullet point is 1/2, which implies that the conditional probability of B̃_n is at least 1/2.
Consequently, we can bound the probability of (Ẽ^c_{n−1} ∩ E_{n−1}) ∩ (Ẽ^c_{n+1} ∩ E_{n+1}) ∩ F_n by 2P(B̃_n) (see (5.28)). The expected value of P(B̃_n) can be bounded above by the upper tail probability of Υ^{(1)}_T. Proof. We parallel the proof of [CH14, Proposition 4.4] (see also [CH16, Lemma 4.1]). Figure 2 illustrates the main objects in this proof and the argument (whose details we now provide).
To finish the proof of (5.5) we combine (5.34) with (5.35) below and take s_0 = max{s', s''}.
Proof of Theorem 1.14. This theorem is proved in the same way as Theorem 1.4 by combining Proposition 5.3 and Proposition 5.4. We do not duplicate the details.

Proof of Proposition 5.3
To prove this proposition, we use similar arguments as in Section 5.1.2. Let τ ∈ (0, 1/2) be fixed (we choose its value later). Recall the events E_n and F_n from Section 5.1; we have the decomposition (5.43). Using Lemma 5.1 (see (5.10)) and Lemma 5.4 (see (5.22)), we can bound the first two terms on the right-hand side of (5.43). However, unlike in Lemma 5.1, the last term in (5.43) is not zero. We now provide an upper bound on this term.

Proof of Proposition 5.4
We use a similar argument as in Proposition 5.2. The main difference from the proof of Proposition 5.2 is that we do not expect (5.34) to hold because the initial data is now a two-sided Brownian motion; hence, (1.3) of Definition 1.1 is not satisfied. However, it holds with high probability, which follows from the following simple consequence of the reflection principle for B (a two-sided Brownian motion with diffusion coefficient 2). To complete the proof, let us define: W_± := {Υ_T(±s^{−n+2}) ≥ −2^{−2/3} s^{−2(n−2)} + (1 + 2µ/3)s}, W_int := {Υ_T(y) ≥ −y^2/2^{2/3} + (1 + µ/3)s, ∀y ∈ [−s^{−n+2}, s^{−n+2}]}.
We claim that there exists s' = s'(µ, n, T_0) such that (5.47) holds for all s ≥ s' and T ≥ T_0. Therefore (using (5.47) for the second inequality) we arrive at the claimed (5.48). To finish the proof of Proposition 5.4, we use a similar argument to the one used to prove (5.35): for any n ∈ Z_{≥3}, there exists s'' = s''(µ, n, T_0) such that the analogous bound holds for all s ≥ s'' and T ≥ T_0. Combining this with (5.48) and taking s_0 = max{s', s''}, we arrive at (5.42) for all s ≥ s_0.
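The reflection-principle control of Brownian fluctuations used in this proof can be sanity-checked by Monte Carlo for a standard Brownian motion (diffusion coefficient 1 rather than the text's normalization; the path count, step count, and the crude constant 4 in the bound are illustrative choices).

```python
import math
import random

def sup_exceed_prob(a, h, n_paths=4000, n_steps=200, seed=7):
    """Monte Carlo estimate of P(max_{[0,h]} |B| > a) for a standard
    Brownian motion, discretized as a Gaussian random walk."""
    rng = random.Random(seed)
    dt = h / n_steps
    hits = 0
    for _ in range(n_paths):
        b = 0.0
        for _ in range(n_steps):
            b += rng.gauss(0.0, dt ** 0.5)
            if abs(b) > a:   # path exited the band [-a, a]
                hits += 1
                break
    return hits / n_paths

a, h = 2.0, 1.0
estimate = sup_exceed_prob(a, h)
# crude reflection-principle bound: P(max |B| > a) <= 4 * exp(-a^2 / (2h))
bound = 4.0 * math.exp(-a * a / (2.0 * h))
```

The Gaussian tail in the exponent is the same mechanism that makes the Brownian fluctuation events here hold with high probability over small intervals.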