Moments of partition functions of 2D Gaussian polymers in the weak disorder regime -- II

Let $W_N(\beta) = \mathrm{E}_0\left[e^{ \sum_{n=1}^N \beta \omega(n,S_n) - N\beta^2/2}\right]$ be the partition function of a two-dimensional directed polymer in a random environment, where $\omega(i,x), i\in \mathbb{N}, x\in \mathbb{Z}^2$ are i.i.d. standard normal and $\{S_n\}$ is the path of a random walk. With $\beta=\beta_N=\widehat{\beta} \sqrt{\pi/\log N}$ and $\widehat{\beta}\in (0,1)$ (the subcritical window), $\log W_N(\beta_N)$ is known to converge in distribution to a Gaussian law of mean $-\lambda^2/2$ and variance $\lambda^2$, with $\lambda^2=\log ((1-\widehat\beta^2)^{-1})$ (Caravenna, Sun, Zygouras, Ann. Appl. Probab. (2017)). We study in this paper the moments $\mathbb E [W_N( \beta_N)^q]$ in the subcritical window, and prove a lower bound that matches for $q=O(\sqrt{\log N})$ the upper bound derived by us in Cosco, Zeitouni, arXiv:2112.03767 [math.PR]. The analysis is based on appropriate decouplings and a Poisson convergence that uses the method of "two moments suffice".


Introduction and results
Let $((S_n)_{n\ge 0}, (P_x)_{x\in \mathbb{Z}^2})$ be the simple random walk on $\mathbb{Z}^2$. The associated expectation will be written as $E_x$. We let $p_n(x) = P_0(S_n = x)$.
Let $\omega(n,x)$, $n \in \mathbb{N}$, $x \in \mathbb{Z}^2$, be a collection of i.i.d. random variables distributed according to a centered Gaussian of variance one, $N(0,1)$. Set
\[
R_N = \sum_{n=1}^N p_{2n}(0), \qquad \beta_N = \widehat{\beta}/\sqrt{R_N}, \qquad R_N \sim \frac{\log N}{\pi},
\]
where the asymptotics on $R_N$ follow from the local limit theorem $p_{2n}(0) \sim \frac{1}{\pi n}$, see e.g. Appendix A. We define the normalized partition function:
\[
W_N = W_N(\beta_N) = \mathrm{E}_0\Big[\exp\Big(\sum_{n=1}^N \big(\beta_N \omega(n,S_n) - \beta_N^2/2\big)\Big)\Big].
\]
It is known, see e.g. [2, Theorem 2.8], that for $\widehat\beta < 1$, $\log W_N \to N(-\lambda^2/2, \lambda^2)$ in distribution, where $\lambda^2 = \lambda(\widehat\beta)^2 = -\log(1-\widehat\beta^2)$, and further, from [7, Theorem 1.1], we have that for any fixed integer $q$ and $\widehat\beta < 1$,
\[
\mathbb{E}[W_N^q] \longrightarrow e^{\binom{q}{2}\lambda^2}. \tag{1}
\]
The goal of this paper is to establish a lower bound on the $q$-th moment of $W_N$ when $q$ can increase as a function of $N$, thus complementing the upper bounds derived in [4], to which we refer for motivation and applications. Of particular interest is the case of $q^2$ of order $\log N$. Our starting point is the formula
\[
\mathbb{E}[W_N^q] = \mathrm{E}^{\otimes q}\Big[\exp\Big(\beta_N^2 \sum_{(i,j)\in C_q} \sum_{n=1}^N \mathbf{1}_{S_n^i = S_n^j}\Big)\Big], \tag{2}
\]
where $C_q = \{(i,j),\, 1 \le i < j \le q\}$. (See [4] for a proof of (2).) Here is our main result.
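For the reader's convenience, the formula (2) follows from a standard replica computation, using only that the environment variables are i.i.d. standard Gaussian with $\mathbb E[e^{t\omega}] = e^{t^2/2}$ (a sketch; the full proof is in [4]):

```latex
\mathbb{E}\big[W_N^q\big]
 = \mathrm{E}^{\otimes q}\,\mathbb{E}\Big[\exp\Big(\beta_N \sum_{n=1}^N \sum_{i=1}^q \omega(n, S_n^i) - \tfrac{qN\beta_N^2}{2}\Big)\Big]
 = \mathrm{E}^{\otimes q}\Big[\exp\Big(\tfrac{\beta_N^2}{2} \sum_{n=1}^N \sum_{i,j=1}^q \mathbf{1}_{S_n^i = S_n^j} - \tfrac{qN\beta_N^2}{2}\Big)\Big],
```

since, conditionally on the $q$ paths, $\beta_N\sum_{n,i}\omega(n,S^i_n)$ is Gaussian with variance $\beta_N^2 \sum_n \#\{(i,j): S^i_n = S^j_n\}$. The diagonal terms $i=j$ contribute exactly $qN\beta_N^2/2$ and cancel the normalization, leaving the sum over $C_q$ as in (2).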
Theorem 1.1. Suppose that $q^2 = O(\log N)$ and $\log \log N = O(q^2)$. There exists $\varepsilon_N \to 0$ such that
\[
\mathbb{E}[W_N^q] \ge e^{\binom{q}{2}\lambda^2 (1 - |\varepsilon_N|)}.
\]
This last bound matches to leading order the upper bound $\mathbb{E}[W_N^q] \le e^{\binom{q}{2}\lambda^2(1+|\varepsilon_N|)}$ that we obtained in [4] in the regime $q^2 \le c \log N$ with $c = c(\widehat\beta)$.
Theorem 1.1 requires $q$ to be larger than $\sqrt{\log \log N}$. The statement in fact continues to hold without that restriction: indeed, for $q = O(1)$ this is contained in [7], while we provide in Appendix B the modifications needed to extend the statement to the range $1 \ll q^2 < \log \log N$.
In fact, for $q$ independent of $N$, the convergence (1) yields an exact equivalence with errors $o(1)$ in the exponents. As shown in [8], the underlying reason is an asymptotic decoupling for the intersection local times of the walks. In comparison, we prove a weaker form of decoupling, for a larger number of walks.
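To orient the reader, the decoupling can be phrased as a heuristic (not used as such in the proofs): if the $\binom{q}{2}$ pairwise intersection local times $L_N^{(i,j)} = \sum_{n\le N} \mathbf 1_{S^i_n = S^j_n}$ appearing in (2) were independent, each with the exponential moment of a single pair, then

```latex
\mathbb{E}[W_N^q] \approx
 \Big(\mathrm{E}^{\otimes 2}\big[e^{\beta_N^2 L_N^{(1,2)}}\big]\Big)^{\binom{q}{2}}
 \xrightarrow[N\to\infty]{} \Big(\frac{1}{1-\widehat\beta^{\,2}}\Big)^{\binom{q}{2}}
 = e^{\binom{q}{2}\lambda^2},
```

where the limit of the two-replica exponential moment is the $q=2$ case of (1). The results of this paper quantify the extent to which this independence heuristic remains valid when $q$ grows with $N$.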
Remark 1.2. It was pointed out to us by F. Caravenna that in the continuous setup, i.e. when the random walk $S_n$ is replaced by a Brownian motion, the sum in the definition of $W_N$ is replaced by an integral, and the environment is replaced by a regularized white noise, the result of Theorem 1.1 with $\varepsilon_N = 0$ follows from a correlation inequality; see [3] for a similar argument. We do not see how to adapt this to the discrete setup.
We further observe that when $q$ is too large, the behavior changes:

Theorem 1.3. For all $\widehat\beta > 0$ there exist $c_0 = c_0(\widehat\beta) > 0$ and $c_1 = c_1(\widehat\beta) > 0$ such that when $q^2 \ge c_1 (\log N)^2$, we have $\mathbb{E}[W_N^q] \ge e^{c_0 \binom{q}{2} N/\log N}$.

1.1. A high level view of the proof and structure of the paper. We provide in this section a somewhat impressionistic view of the proof, which neglects important details but captures the main ideas. The starting point is (2), which reduces the computation of moments of the partition function to the evaluation of exponential moments of the total pairwise intersections of $q$ independent random walk paths. Toward this end, we introduce certain decoupling times $L_k$ with $L_{k+1} = L_k + o_k$ and with $o_k$ being a large multiple ($\nu_2$) of $l_k \gg 1$, see (5). Very roughly, $l_k \sim (c\, l_{k-1})^{1+\alpha/\log N}$, and we mostly care about $l_k > N^{\epsilon}$ for some $\epsilon$ small. Now, within each interval $I_k = [L_k, L_{k+1})$, we only count intersections of paths within a subinterval of length $l_k$ that is separated from both ends, and within this interval we only count the intersections of disjoint pairs. Using the Markov property, contributions from different $I_k$'s decouple as long as we condition on the positions of the paths at the beginning and end of $I_k$ (the precise statement is contained in Proposition 2.3). Crucially, we then reduce the contribution within each $I_k$ to paths whose starting and ending points are "where they should be", and then further reduce it to a moment of a certain quantity we call $a_k$, see (14), which depends only on a pair of random walks, and the total number of disjoint pairs that intersect, denoted $R_k$; this is the content of the crucial Proposition 2.5.
Having obtained the decoupling, there are two tasks remaining. The first is to obtain good control on $a_k$, that is, the contribution of intersections of a single pair of walks. This necessitates estimates related to those we obtained in [4], with the upshot being that $a_k \sim 1/(1 - \widehat\beta^2 (\log l_k)/(\log N))$, see Proposition 2.6.
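The value of $a_k$ can be anticipated by a back-of-the-envelope computation (a heuristic only; the precise statement is Proposition 2.6). Over a window of length $l_k$, the intersection local time of a pair of walks has mean of order $\frac{\log l_k}{\pi}$ and approximately geometric tails, so its exponential moment behaves like $(1-\theta m)^{-1}$ with $\theta = \beta_N^2$ and $m$ the mean:

```latex
a_k \approx \mathrm{E}^{\otimes 2}\Big[\exp\Big(\beta_N^2 \sum_{n \le l_k} \mathbf{1}_{S_n^1 = S_n^2}\Big)\Big]
 \approx \frac{1}{1 - \beta_N^2 \cdot \frac{\log l_k}{\pi}}
 = \frac{1}{1 - \widehat\beta^{\,2}\, \frac{\log l_k}{\log N}}\,(1+o(1)),
```

using $\beta_N^2 \sim \pi \widehat\beta^2/\log N$ in the last step.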
The main innovation of the paper is then to obtain good control on $R_k$, the number of disjoint pair intersections. We prove in Proposition 2.8 that $R_k$ is close in distribution to a Poisson random variable. The proof of Proposition 2.8, which takes up most of Section 3, is based on Stein's method, more specifically on the "two moments suffice" theorem of Arratia, Goldstein and Gordon [1]. Essentially, we use that disjoint pairs of paths are independent to introduce a notion of neighborhood of dependence between pairs of indices. Taking parameters in the right order drives the Poisson parameter (roughly, $\alpha$) to infinity and completes the proof of Theorem 1.1.
Theorem 1.3 is much easier and is obtained by forcing an event where the walks stay confined to a neighborhood of the origin. See Section 2.3 for the proof.

1.2. Notation. Throughout the paper, constants $C$ are positive universal constants, whose values may change at different occurrences.
We will use repeatedly that $(S^1_k - S^2_k)_{k \ge 0} \stackrel{(d)}{=} (S_{2k})_{k \ge 0}$ when $S^1$ and $S^2$ are two independent simple random walks.
$B(x, r)$ denotes the Euclidean ball of radius $r$ centered at $x \in \mathbb{R}^2$.

2.1. Preliminaries for the proof of Theorem 1.1. Throughout the paper, we always assume that $N, \varepsilon_0^{-1}, \delta^{-1}, \nu_1, \nu_2, M, \alpha \ge 100$ and, in accordance with the order of the limits, that the conditions in (3) hold. Next, we introduce the times $l_k$, $L_k$ that we use to decompose the process; they are defined in (5). The times $L_k$ and $l_k$ satisfy straightforward relations, stated in (6), and bounds on $K$ hold, see (8).

Remark 2.2. It follows from (6)-(i) and (3)-(ii) that $\nu_1 l_{k-1} \le l_k$. This fact will turn out to be useful in several places.
To help us control the positions of the walks at the times $(L_k)$, we define a (random) set of indices of walks lying in a prescribed ball at those times, where we recall that $B(x,r)$ is the Euclidean ball of radius $r$ centered at $x \in \mathbb{R}^2$, and we further introduce the associated event $A_k$. For all $m \in \mathbb{N}$ and $x = (x_1, \ldots, x_m)$, $y = (y_1, \ldots, y_m) \in (\mathbb{Z}^2)^m$, write $x \sim_n y$ whenever $P^{\otimes m}_x(S^1_n = y_1, \ldots, S^m_n = y_m) > 0$. When $x \sim_n y$, denote by $\mathrm{E}^{n,y}_x$ the expectation for $m$ copies of the simple random walk started at $x$ and conditioned on arriving at $y$ at time $n$. We are now ready to decompose the moment of $W_N$ as a product of contributions coming from the different time intervals $[L_k, L_{k+1}]$. This is the purpose of the next proposition.
First, $H_0$ holds by (2) (we use the convention that an empty product equals 1). Suppose now that $H_l$ holds for some $l < K$. Let $\mathbf{S}_n = (S^1_n, \ldots, S^q_n)$ and denote by $\tilde A_k$ the event $A_k$ shifted in time by $-L_{K-l}$. By Markov's property, (11) holds (recall that $o_{K-l} = L_{K+1-l} - L_{K-l}$). We apply Markov's property again to find that the conditional expectation factorizes for all $x = (x_1, \ldots, x_q) \in (\mathbb{Z}^2)^q$ under $\mathrm{E}^{\otimes q}_x$. On the event $\tilde A_{K-l}$, we let $(i_r)_{r \le q_0}$ be the $q_0$ smallest indices of walks lying in the prescribed ball, and we obtain a lower bound by restricting the sum inside the exponential to the walks indexed by the $i_r$'s and to the time interval $[\nu_1 l_{k-1}, \nu_1 l_{k-1} + 2 l_k]$. In particular, we obtain from the last display a bound which, combined with $H_l$ and (11), implies that $H_{l+1}$ holds.
The goal now is to obtain a good lower bound on the quantity $\Upsilon_k$ defined in (10). For this purpose, we introduce the time interval $T_k = [\nu_1 l_{k-1}, \nu_1 l_{k-1} + l_k]$ and define $R_k$ as the maximal number of disjoint pairs $(i,j) \in C_{q_0}$ such that $S^i$ and $S^j$ intersect during $T_k$ without leaving some large ball. More precisely, let $\sigma^i_k$ be defined as in (13) (we set $\sigma^i_k = \infty$ when the set there is empty), let $\tau_1$ be the first time two particles intersect before one of them leaves the ball of radius $M l_k^{1/2}$, and let $(I_1, J_1)$ be the two particles involved. If the set is empty, we let $\tau_1 = \infty$. Then, define iteratively $\tau_{r+1}$, requiring $\{i,j\} \cap \{I_s, J_s\} = \emptyset$ for all $s \le r$, as the next time two new particles, distinct from all the previous particles $I_1, J_1, \ldots, I_r, J_r$, meet. We denote by $(I_{r+1}, J_{r+1})$ this new pair. When there is no such time, we set $\tau_{r+1} = \infty$. Finally, denote by $R_k$ the total number of successive disjoint intersections. Note that the $\tau_r$ depend on $k$; however, we suppress this dependence from the notation.
Remark 2.4. Note that a consequence of the definition is that $R_k$ is maximal, in the sense that any sequence of disjoint intersecting couples has cardinality at most $R_k$.

Introduce the expression $a_k$ defined in (14). The quantity $a_k$ will serve below as a lower bound on the (multiplicative) contribution of a couple $(I_r, J_r)$ to the total expectation. Considering that we have $R_k$ such contributions, we now prove the following result.
Proposition 2.5. With notation as above, we have that for all $k \in \{1, \ldots, K\}$, the bound (15) holds, so that $\Upsilon_k$ is controlled from below in terms of $a_k$ and $R_k$.

Proof. Recall the definition of $a_k$ in (14). Our goal is to show that for all $R \ge 0$, the identity (16) holds. (Again, $\Phi_R$ depends on $k$, $x$, $y$, but we suppress this from the notation.) Equation (16) holds trivially for $R = 0$. Now suppose $R \ge 1$. Let $\mathcal{F}_n$ denote the $\sigma$-algebra generated by the walks until time $n$, and denote by $\mathcal{F}_{\tau_1}$ the $\sigma$-field stopped at $\tau_1$. Observe that, by independence of the random walks and Markov's property, the first intersecting couple decouples from the remaining particles; here $\tilde\tau_r$, $\tilde I_r$, $\tilde J_r$, $\tilde R_k$ are defined as $\tau_r$, $I_r$, $J_r$, $R_k$, but for $q_0 - 2$ particles and with $T_k$ replaced by 0, where in the equality we have used Markov's property as above in the reverse direction. Iterating this process leads to (16). Then, putting together (15) and (16) and summing over $R$ entails Proposition 2.5.
Next, we define $\lambda_k^2$ as in (17). (Recall that $\limsup_{\Gamma'}$ keeps $\gamma$ and $\varepsilon_0$ fixed when taking the limsup, see Section 1.2.)

Proof. Throughout the proof, we use a shorthand for the conditioned expectation, suppressing the dependence on $x$ and $k$ from the notation. By Markov's property, (18) holds. We first show the approximation (19). To show (19), we rely on the local central limit theorem given in Appendix A. First observe that $o_k - t - l_k \ge \nu_2 l_k$ when $t \in T_k$. We will use this repeatedly. Moreover, for $|z|$ and $t, x, y_i$ in the relevant ranges, Theorem A.1 applies and we obtain an approximation in terms of $\bar p_s(z) = \frac{1}{\pi s} e^{-|z|^2/s}$. Note that $d_k \le c N^{-\gamma}$ with a constant $c$ depending on $\delta$, $\nu_2$ and $M$. Then, one finds by a simple computation, for $\mathbf{x} = (x,x)$, an expression for the resulting exponential factor. By the Cauchy–Schwarz inequality, the absolute value of each of the two terms in the last exponential is small in the limit. Putting things together leads to (19). Coming back to (18), the bound (19) entails a product decomposition. Recall the definition of $\lambda_k^2$ in (17). Given that $l_k \ge N^{\gamma}$, one can see from the proof of Proposition 3.4 in [4] that the required exponential moment bound holds. Moreover, we use Hölder's inequality with $p^{-1} + (p')^{-1} = 1$ and $p > 1$ small enough. (We have also relied on Hoeffding's inequality to bound the probability, using that $|x| \le M l_k^{1/2}$.) Combining (20), (22) and (23) with the two last displays, we obtain the claimed lower bound. To conclude the proof of the lemma, observe that for all $k \le K$ we have $\lambda_k^2 \le \lambda^2$, so that we can choose $\Delta_\Gamma$ accordingly, and observe (using (20)) that it satisfies $\limsup_{\Gamma'} \Delta_\Gamma = 0$.
For technical reasons, we will also need a uniform upper bound on $a_k$.
Lemma 2.7. We have $1 \le a_k \le C(\widehat\beta)$.

Proof. Since $a_k \ge 1$, the lower bound is trivial. To see the upper bound, we proceed as in the proof of Proposition 2.6 and write the analogue of (18). We also use the expression in (21) and estimate the local limit theorem error (25), for $|z|$ and $x, y_i$ in the ranges appearing in the definition of $a_k$. The estimate (25) actually extends to the range $|z| \le l_k^{3/5}$ with $c$ a universal constant; for $|z| > l_k^{3/5}$, we use a simple large deviations estimate instead. We thus obtain, in analogy with (22), a bound which, using [4, Proposition 3.4], is bounded above by a universal constant depending only on $\widehat\beta$.
Recall that $q_0 = \lfloor (1-\varepsilon_0) q \rfloor$, see (2.3). Our next goal is to show that $R_k$ is close to a Poisson random variable of parameter $\alpha \binom{q_0}{2}/\log N$, by relying on the "two moments suffice" theorem [1]. To verify the hypotheses of the latter, it is more convenient to work with the quantity $\bar R_k$ (with $\tau^{(i,j)}_k = \infty$ when the set in the infimum defining it is empty), which counts the number of all couples that intersect in the time interval $T_k$ (whereas $R_k$ counts the maximal number of disjoint couples). The next proposition states that the law of $\bar R_k$ can be approximated by a Poisson law of mean $\bar\alpha \binom{q_0}{2}$, and that $R_k$ and $\bar R_k$ are close in distribution. Before stating the proposition, we introduce a few quantities. For all $(i,j) \in C_{q_0}$, we let $p_{(i,j)} = P^{o_k,y}_x(\tau^{(i,j)}_k < \infty)$ and define $\mu = \sum_{(i,j) \in C_{q_0}} p_{(i,j)}$. We also set $p_{(i,j),(i',j')} = P^{o_k,y}_x(\tau^{(i,j)}_k < \infty, \tau^{(i',j')}_k < \infty)$. Note that all these quantities depend on $k, x, y$, but we will show in Section 3 that this dependence can be neglected asymptotically. In fact, we prove that $p_{(i,j)}$ can be approximated by $\bar\alpha$ and that $\mu$ can be approximated by $\bar\alpha \binom{q_0}{2}$.
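For the reader's convenience, we recall the form of the "two moments suffice" bound of [1] used below (a simplified statement: $I$ is a finite index set, the $X_i$ are Bernoulli with $p_i = \mathbb E X_i$, $W = \sum_{i \in I} X_i$, $\mu = \sum_i p_i$, and $B_i \ni i$ is a neighborhood of dependence for $i$, meaning $X_i$ is independent of $(X_j)_{j \notin B_i}$):

```latex
d_{TV}\big(\mathcal{L}(W), \mathcal{P}(\mu)\big) \le 2(b_1 + b_2), \qquad
b_1 = \sum_{i \in I} \sum_{j \in B_i} p_i p_j, \quad
b_2 = \sum_{i \in I} \sum_{j \in B_i \setminus \{i\}} \mathbb{E}[X_i X_j].
```

In our application, $I = C_{q_0}$, $X_{(i,j)} = \mathbf{1}_{\tau^{(i,j)}_k < \infty}$, and $B_{(i,j)}$ consists of the couples sharing an index with $(i,j)$; the third term $b_3$ of [1] vanishes because couples with disjoint indices involve independent walks.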
We turn to (30). By a standard property of the total variation distance, it suffices to bound the probability that $R_k \ne \bar R_k$. Then, observe that on the event $\{R_k \ne \bar R_k\}$, there exist two intersecting couples sharing an index; this event has negligible probability, which gives (30).
Proof. Let $R$ be distributed as $\mathcal{P}(\mu)$ and recall $\varepsilon^\star_N$ from Proposition 2.8. For all $r_0 \in \mathbb{N}$, we have (32), where we have used that $\mathbb{E}[a^R] = e^{\mu(a-1)}$, that $\mathbb{E}[a^R \mathbf{1}_{R \ge r}] \le e^{\mu(a-1)} \mu^r / r!$ for all $a > 0$, $r \in \mathbb{N}$, and that $e^{\mu(a_k - 1)} \ge 1$.
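The Poisson identities invoked above are elementary; for $R \sim \mathcal{P}(\mu)$, $a > 0$ and $r_0 \in \mathbb{N}$, they take the form

```latex
\mathbb{E}[a^R] = \sum_{r \ge 0} e^{-\mu} \frac{(a\mu)^r}{r!} = e^{\mu(a-1)},
\qquad
\mathbb{E}\big[a^R \mathbf{1}_{R \ge r_0}\big]
 = \sum_{r \ge r_0} e^{-\mu} \frac{(a\mu)^r}{r!}
 \le e^{\mu(a-1)} \frac{(a\mu)^{r_0}}{r_0!},
```

the last inequality because $\sum_{r \ge r_0} \frac{x^r}{r!} \le \frac{x^{r_0}}{r_0!} \sum_{s \ge 0} \frac{x^s}{s!} = \frac{x^{r_0}}{r_0!} e^{x}$ for $x \ge 0$.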
Putting everything together yields the lower bound of Theorem 1.1.

Estimates for "two moments suffice"
3.1. Two-particle intersection probability. The goal of this section is to give an estimate on $p_{(i,j)} = P^{o_k,y}_x(\tau^{(i,j)}_k < \infty)$. To simplify future notation, we write $p_{x,y}$ for this probability, viewed as a function of the starting and ending points of the pair of walks. The following proposition provides the desired asymptotics. (Note that $p_{(i,j)} = p_{(x_i,x_j),(y_i,y_j)}$.) We will prove (41) using a sequence of lemmas (we refer to the end of the section for the proof of Proposition 3.1). As a first step, we show that $p_{x,y}$ can be replaced by $p_x = P^{\otimes 2}_x(\tau_k < \infty)$, i.e. $p_x$ is defined as $p_{x,y}$ except that there is no conditioning on the endpoint.

Lemma 3.2. There exists $\Delta_{\Gamma,3.2} > 0$ satisfying $\limsup_{\Gamma'} \Delta_{\Gamma,3.2} = 0$ such that (42) holds for all $k \le K$ and all $x \in B_{2,k}$.

Proof. By Markov's property, we can condition on the positions of the walks at a suitable intermediate time; let $V_k$ denote the resulting error term. Since, by definition, $p_{x,y}$ and $p_x$ differ only through this conditioning, it suffices to show (43), since then $\limsup_{\Gamma'} \Delta_{\Gamma,3.2} = 0$. Similarly to the proof of Proposition 2.6, the argument leading to (43) relies on the local central limit theorem. In the following we assume that $z \in B(0, M l_k^{1/2})$, and write $\bar p_s(x) = \frac{1}{\pi s} e^{-|x|^2/s}$. We now come back to $V_k$. Letting $\mathbf{z} = (z,z)$, we expand the conditioned probability. Recall $b_0$ and $b_1$ in (44). The first term in the exponential above is positive and small by (6)-(ii) and Remark 2.2. The sum of the absolute values of the two other terms in the exponential is also small, by the Cauchy–Schwarz inequality and (6)-(i),(ii). Combining these estimates entails (43), using that $|e^x - 1| \le |x| e^{|x|}$ for all $x \in \mathbb{R}$.
Next, we show that we can neglect the condition $n < \sigma_k$, and define the corresponding unconditioned quantity.

Proof. We bound from above the term corresponding to $i = 1$ in the sum; the other term is treated the same way. By Markov's property, the relevant probability is expressed through $h_k(z) = P_z(\exists n \le l_k : S_n = 0)$. It follows from [6, Théorème 3.6] that $h_k$ admits logarithmic asymptotics. We thus split the sum that appears in (47) into two ranges, where by (6) and (3)-(iv) we have, for some $c > 0$, a first bound by Doob's inequality and Hoeffding's lemma. (Note that for the last inequality, we have used Remark 2.2.) Then, we work under the condition $|x - y|^2 < l_k$. Thus, given that $m \ge \nu_1 l_{k-1}$, we can apply the local limit theorem (Theorem A.1) to obtain a Gaussian bound, and hence, in the second inequality, a comparison to an integral. Using that in the last exponential term we have $m \le 2 l_k$, we get via (49) the desired estimate. This gives (46) since $l_k \ge N^{\gamma}$.
We introduce the shorthand notation $g_k(x) = P_x(\exists n \in T_k : S_n = 0)$. Our aim is to use the KMT coupling (see [11] and references therein) to estimate $g_k(x)$. The KMT coupling ensures that one can couple, with high probability, the simple random walk $(S_n)$ to a standard 2-dimensional Brownian motion $(B_t)$ with an error term of logarithmic order. We will use the coupling to compare the hitting time of 0 by the random walk to the entry time of the Brownian motion into a ball of radius $c \log N$. This will turn out to be helpful, as there are good estimates by Spitzer [10] on the probability of the last event.¹ Let $t_1 = \nu_1 l_{k-1}$ and $t_2 = t_1 + l_k$ denote the boundaries of $T_k$, and let $t_2' = t_1 + l_k/2$. We define the event $\{\exists t \in [t_1, t_2'] : |B_t| \le c_0 \log N\}$.
¹ There exist similar estimates for the random walk itself, such as [9], but unfortunately they are not sharp enough to estimate $g_k(x)$ directly in our context.
Lemma 3.5. Let $c_0$ be as in Lemma 3.4. There exist $N_0 = N_0(\Gamma)$ and $\Delta_{\Gamma,3.5} > 0$ such that $\Delta_{\Gamma,3.5} < 1$ for all $N > N_0$, $\limsup_{\Gamma'} \Delta_{\Gamma,3.5} = 0$, and (56) holds for all $k \le K$.

We begin with the second inequality (the upper bound) in (56). (The first inequality in (56) is immediate.) With $r_1 = c_0 \log N/\sqrt{t_1}$, we have (57), where the first inequality follows from the fact that the modulus of Brownian motion is a Bessel process, together with the fact that one can couple a Bessel process $X^x_t$ started at $x$ to one started at 0 so that $X^x_t \ge X^0_t$ for all $t$, and the equality follows from Brownian scaling. In [10], it is shown that $h_r(t) = (\log r^{-2})\, P_0\big(\inf_{s \in [1,t]} |B_s| \le r\big)$ satisfies $h_r(t) \to \log t$ as $r \to 0$, for each fixed $t \ge 1$. Since $t \mapsto h_r(t)$ is increasing and $t \mapsto \log t$ is continuous, this convergence can be extended to uniform convergence on each compact subset of $[1, \infty)$. By (6), the relevant time ratio stays in a compact range; hence, by the equality in (57), we obtain (58), with $\log r_1^{-2}$ in front, where $\varepsilon_N = \varepsilon_N(\alpha, \gamma, \nu_1) \to 0$ as $N \to \infty$, since $r_1$ vanishes as $N \to \infty$. Moreover, by (8), there exists $N_0$ beyond which the remaining ratio is controlled, where we have used that $\nu_1 l_{k-1}/l_k \le 1$ (Remark 2.2). Hence, by (58), we find that, since $\log l_{k-1} \ge \gamma \log N$, the numerator is smaller than $1 + \frac{\log 2 + \varepsilon_N + \varepsilon'_N}{\gamma \alpha}$, and for $\alpha$, $\gamma$ and $\nu_1$ fixed, the denominator writes as $1 + o_N(1)$. This gives (56).
We are now ready to complete the proof of Proposition 3.1.
Before turning to the proof of Proposition 3.6, we state a few lemmas. As in the previous section, we first observe that we can forget about the conditioning on the endpoints; letting $p^{(3)}_x$ denote the corresponding unconditioned quantity, the conditioned quantity is bounded by $p^{(3)}_x (1 + \Delta_{\Gamma,3.7})$. We omit the proof, which is very similar to that of Lemma 3.2. Next, we let $T_0 = \inf\{n \ge 0 : S_n = 0\}$. The following holds.
where in the first sum we have bounded $h_k$ by 1 and used the bound on $p_n(z)$ from Corollary A.2. It then trivially holds that $p_x \le p^{(3)}_x$. Furthermore, by symmetry, the relevant probability is expressed through $T^{(1,3)} = \inf\{n \ge 0 : S^1_n = S^3_n\}$, where $\theta_k$ denotes the shift in time by $k$ steps. Let $T_0 = \inf\{n \ge 0 : S_n = 0\}$ and $h_k(x) = P_x(T_0 \le l_k)$. By Markov's property, the probability factorizes. Then, combine Lemma 3.8 with the identity for $E_x(\tau)$ and the upper bounds in Lemma 3.4 and Lemma 3.5 to obtain the desired estimate, with $\limsup_{\Gamma'} \Delta_{\Gamma,3.5} = 0$. We conclude with Lemma 3.7.
Appendix A. Local central limit theorem

Let $p_t(x)$ be the probability transition function of the simple random walk on $\mathbb{Z}^d$ and let $\bar p_t(x) = \frac{1}{\pi t} e^{-\frac{d |x|^2}{2t}}$. We say that $x \sim_t y$ when $x$ and $y$ have the same parity, that is, when $p_t(x - y) > 0$. The following theorem can be obtained from Theorem 2.3.11 in [5]. (See also the proof of [5, Theorem 2.1.3] and the paragraph above the statement of that theorem.)
Since $p_{2n}(x)$ is maximal at $x = 0$ (a direct consequence of the Cauchy–Schwarz inequality), the theorem implies the following general bound.

Corollary A.2. There exists $C > 0$ such that for all $n \ge 1$, $\sup_{x \in \mathbb{Z}^2} p_n(x) \le \frac{C}{n}$.
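The claim that $p_{2n}(\cdot)$ is maximized at the origin is the following one-line computation, using the Chapman–Kolmogorov identity, the symmetry $p_n(-y) = p_n(y)$, and the Cauchy–Schwarz inequality:

```latex
p_{2n}(x) = \sum_{y \in \mathbb{Z}^2} p_n(y)\, p_n(x - y)
 \le \Big(\sum_y p_n(y)^2\Big)^{1/2} \Big(\sum_y p_n(x - y)^2\Big)^{1/2}
 = \sum_y p_n(y)^2 = p_{2n}(0).
```

Combined with $p_{2n}(0) \sim \frac{1}{\pi n}$, this yields Corollary A.2 for even times; odd times follow by conditioning on one additional step of the walk.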
Appendix B. The case $1 \ll q^2 \le \log \log N$

In the regime $1 \ll q^2 \le \log \log N$, the error in Proposition 2.11 becomes too large. The reason is that we ask for many particles to meet in a ball at each time $L_k$, and there are around $\log N$ such times. This event has a cost which is too big compared to the value of the moment $\mathbb{E}[W_N^q]$ when $q \le \log \log N$. To fix this issue, we can divide $[0,N]$ into fewer intervals $[L_k, L_{k+1})$, by defining $\bar\alpha = \alpha/\binom{q}{2}$ instead of $\bar\alpha = \alpha/\log N$. With this change, the error term in Proposition 2.11 can be neglected. However, when choosing $\bar\alpha = \alpha/\binom{q}{2}$, the quantity $t_2/t_1$ diverges in the proof of Lemma 3.5, so that we cannot restrict ourselves to a compact set in order to extend the pointwise convergence of [10] to a uniform one in the argument for (58). Hence, we need to extend the main result in [10] to allow for uniform control in time and space. There exist uniform results by Ridler-Rowe [9], both for random walks and for Brownian motion, but they are given for the quantity $P_x(T > n)$ (where $T$ is the first return time to 0) instead of $P_x(T < n)$ that we need, and unfortunately the error term given is not sufficient to pass from one quantity to the other in our case.
The following lemma deals with this problem. It is obtained by adapting the arguments of Spitzer [10] and Ridler-Rowe [9]. Consider the Bessel process $R_t = |B_t|$ and define $T_a = \inf\{t \ge 0 : R_t = a\}$.

Lemma B.1. For all $c > 0$, it holds that
\[
P_r(T_a \le t) = \frac{\log(t/r^2)}{\log(t/a^2)} (1 + o(1)),
\]
where the error term $o(1)$ vanishes as $r^2/t \to 0$, uniformly for $a < r$.
Proof. The goal is to deduce bounds on $P_r(T_a \le t)$ from its Laplace transform
\[
A(a, r, \lambda) = \int_0^\infty e^{-\lambda t} P_r(T_a \le t)\, dt, \qquad a < r, \ \lambda > 0.
\]
We follow closely the approach used in [9, Main Proof and Proof of Theorem 2], which is based on a Karamata method of obtaining Tauberian theorems. The starting point is the next formula, proved in [10, (1.4)]:
\[
A(a, r, \lambda) = \frac{K_0(r\sqrt{2\lambda})}{\lambda K_0(a\sqrt{2\lambda})}, \tag{62}
\]
with $K_0(u) = -\log u + C + O(u)$ as $u \to 0$.
In particular, it holds that
\[
A(a, r, \lambda) = \frac{1}{\lambda} \cdot \frac{\log(r^2 \lambda)}{\log(a^2 \lambda)} (1 + o(1)), \tag{63}
\]
where the $o(1)$ vanishes as $r^2 \lambda \to 0$, uniformly for $a < r$. Then, the idea is to relate $P_r(T_a \le t)$ to its Laplace transform via a formula involving $g(u) = u^{-1}$ when $e^{-1} \le u \le 1$ and $g(u) = 0$ otherwise. We will first obtain bounds on $B(a, r, t)$ and deduce a bound on its $t$-derivative $P_r(T_a \le t)$ in a second step. Let $\varepsilon \in (0,1)$. A similar computation leads to an upper bound in (67), with $1 - \varepsilon$ replaced by $1 + \varepsilon$. Hence, setting $\lambda^{-1} = t$, we obtain the claimed bounds for all $a < r$.
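The passage from (62) to (63) is a direct computation with the stated small-argument expansion of $K_0$ (applied at $u = r\sqrt{2\lambda}$ and $u = a\sqrt{2\lambda}$):

```latex
A(a, r, \lambda)
 = \frac{K_0(r\sqrt{2\lambda})}{\lambda K_0(a\sqrt{2\lambda})}
 = \frac{1}{\lambda} \cdot \frac{-\tfrac12 \log(2 r^2 \lambda) + C + O(r\sqrt{\lambda})}
                           {-\tfrac12 \log(2 a^2 \lambda) + C + O(a\sqrt{\lambda})}
 = \frac{1}{\lambda} \cdot \frac{\log(r^2 \lambda)}{\log(a^2 \lambda)} (1 + o(1)),
```

where we used $\log(r\sqrt{2\lambda}) = \tfrac12 \log(2 r^2 \lambda)$. The $o(1)$ is uniform for $a < r$ as $r^2 \lambda \to 0$: both logarithms then diverge, so the additive constants (and the $\log 2$) are absorbed in the error, and $a < r$ ensures the denominator's logarithm is at least as large in absolute value as the numerator's.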