Functional Weak Limit of Random Walks in Cooling Random Environment

We prove an annealed functional weak limit for the trajectory of random walks in cooling random environment (RWCRE) under both slow (polynomial) and fast (exponential) cooling, and we identify the weak limit when the underlying static environment is recurrent (Sinai's model). Avena and den Hollander previously proved a law of large numbers and Gaussian fluctuations for RWCRE. We find that in the slow cooling case the weak limit of the trajectory is a time-rescaled Brownian motion, while in the fast cooling case the limit degenerates to a random constant function.


Introduction, Background, and Main Results
Research on random walks in disordered environments has attracted much attention from mathematicians and physicists over the last few decades. The model of random walks in random environments (RWRE) was first studied by Solomon [18]. In this model the disorder in the environment is random, but fixed for all time. Much of the subsequent interest in this model was driven by the fact that RWRE can exhibit a surprisingly rich array of asymptotic behaviors, such as transience with asymptotically zero speed [18] and limiting distributions which are non-Gaussian and have non-diffusive scaling [14,15,17]. These phenomena can be understood as consequences of the "trapping" effects of the environment. See [19] for an overview of basic results on RWRE.
More recently, there has been interest in a generalization of RWRE called random walks in dynamic random environments (RWDRE), in which the disorder of the environment is random in both space and time. One can see that RWDRE interpolates between simple random walk (SRW) and RWRE: if the dynamics are "frozen", i.e. the environment does not change after the initial set-up, then this is simply a RWRE. On the other hand, if the environment is space-time i.i.d., then it is easy to see that the distribution of the RWDRE (under the annealed measure) is the same as that of a SRW. For RWDRE models between these two extremes there is an interplay between the trapping effects introduced by the randomness of the environment and the rate at which the time dynamics of the environment causes these traps to disappear. One might therefore expect that environments with "fast" mixing time dynamics should have characteristics similar to those of a SRW (e.g. path convergence to Brownian motion), while "slow" mixing time dynamics might retain some of the strange behaviors of RWRE (e.g., non-Gaussian limiting distributions or transience with sublinear speed).
Many of the results thus far in RWDRE have focused on dynamic environments which are in some sense fast mixing, see [1,7,8]. For example, the environment may be assumed to be a Markov chain with uniformly mixing time dynamics or which satisfies a Poincaré inequality. A variety of approaches have been used in these papers, but in all cases one can obtain convergence to Brownian motion after centering and diffusive scaling.
Environments which are more slowly mixing present different problems, as the trapping effects of the environment may be stronger. Examples include conservative particle systems, which have poor mixing rates [9,11]. A particularly interesting example is the case where the dynamic environment is given by a simple symmetric exclusion process. Avena and Thomann have conjectured, based on simulations, that this model can exhibit many of the same strange behaviors as RWRE (e.g., transience with zero speed and non-diffusive scaling). However, the results for this model have been limited to some cases where the parameters of the model are near their extremes, and in these cases once again the distribution of the walk converges under diffusive scaling to a Brownian motion. Other examples of slowly mixing environments for which the RWDRE has been shown to converge to Brownian motion are [4,12,13].
All the above results for RWDRE have shown limiting behavior like that of a SRW. Recently, however, Avena and den Hollander have introduced a new model of RWDRE, random walks in cooling random environment (RWCRE), in which the dynamics can be slow enough that the model retains some of the strange behavior of RWRE [5]. In this model the environment is totally refreshed at certain times called resampling times. Results for this model include a strong law of large numbers, a quenched large deviation principle, sufficient conditions for recurrence/transience, and limiting distributions [2,3,5]. Most relevant to the present paper, for certain cases of RWCRE it is proved that the limiting distributions are Gaussian, but with non-diffusive scalings that interpolate between the (log n)² scaling of recurrent RWRE and the diffusive √n scaling of SRW [3,5]. The main goal of this paper is to determine the appropriate limiting distributions for the path of the walk in these cases.
The paper is organized as follows. We introduce the model of one-dimensional RWCRE in Section 1.1. In Section 1.2 we review the limiting distribution results for both recurrent RWRE (Sinai's random walk) and the corresponding model of RWCRE. In Section 1.3 we give our main result, the functional weak limit under both slow (polynomial) and fast (exponential) cooling. The proof is given in Section 2.

Random Walks in Cooling Random Environment
We will use the same notation as in Avena and den Hollander [5]. Let N_0 = N ∪ {0}. The classical one-dimensional random walk in random environment (RWRE) is defined as follows. Let ω = {ω(x) : x ∈ Z} be an i.i.d. sequence with law µ = α^Z for some probability distribution α on (0,1). The random walk in the spatial environment ω is the Markov process Z = (Z_n)_{n∈N_0} starting at Z_0 = 0 with transition probabilities

P_ω(Z_{n+1} = x+1 | Z_n = x) = ω(x),  P_ω(Z_{n+1} = x−1 | Z_n = x) = 1 − ω(x).

The properties of Z are well understood, both under the quenched law P_ω(·) and under the annealed law P_µ(·) = ∫ P_ω(·) µ(dω).

The random walk in cooling random environment (RWCRE) is a model in which ω is updated along a growing sequence of deterministic times. Let τ : N_0 → N_0 be a strictly increasing map such that τ(0) = 0 and τ(k) ≥ k for k ∈ N. Define a sequence of random environments Ω = (ω_n)_{n∈N_0} as follows: at each time τ(k), k ∈ N_0, the environment ω_{τ(k)} is freshly resampled from µ = α^Z and does not change during the time interval [τ(k), τ(k+1)). That is, ω_n = ω_{τ(k)}, where k is such that τ(k) ≤ n < τ(k+1). The random walk in the space-time environment Ω is the Markov process X = (X_n)_{n∈N_0} starting at X_0 = 0 with transition probabilities

P_Ω(X_{n+1} = x+1 | X_n = x) = ω_n(x),  P_Ω(X_{n+1} = x−1 | X_n = x) = 1 − ω_n(x).

We call X the random walk in cooling random environment with resampling rule α and cooling rule τ. The distribution P_{Ω,τ} of the random walk for a given space-time environment is called the quenched law. The annealed law of the walk {X_n}_{n≥0} is obtained by averaging the quenched law with respect to the distribution Q = Q^{α,τ} of Ω.
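As an informal illustration (not part of the formal development), the model above can be simulated directly. The sketch below assumes α = Uniform(0,1), for which E_α[log ρ(0)] = 0 by symmetry (the recurrent case), and the illustrative polynomial cooling map τ(k) = k²; both choices are ours, made for concreteness.

```python
import random

def sample_environment(rng, radius):
    """An i.i.d. environment omega(x) ~ alpha on {-radius, ..., radius}.
    Here alpha = Uniform(0,1), for which E[log rho(0)] = 0 (recurrent case)."""
    return {x: rng.random() for x in range(-radius, radius + 1)}

def rwcre_path(n_steps, tau, seed=0):
    """Simulate X_0, ..., X_{n_steps}: the environment is freshly resampled
    at each resampling time tau(k) and frozen on [tau(k), tau(k+1))."""
    rng = random.Random(seed)
    resample_times = set()
    k = 1
    while tau(k) <= n_steps:
        resample_times.add(tau(k))
        k += 1
    omega = sample_environment(rng, n_steps)   # environment at time tau(0) = 0
    path, x = [0], 0
    for n in range(n_steps):
        if n in resample_times:                # cooling: refresh the environment
            omega = sample_environment(rng, n_steps)
        x += 1 if rng.random() < omega[x] else -1   # step right w.p. omega_n(x)
        path.append(x)
    return path

# Polynomial cooling tau(k) = k^2 (an illustrative choice of cooling map).
path = rwcre_path(1000, lambda k: k * k)
```

Under the quenched law the walk is a time-inhomogeneous Markov chain; averaging over independent runs with fresh seeds corresponds to the annealed law.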

Slow and Fast Cooling: Gaussian Fluctuations for Recurrent RWRE
In Solomon's seminal paper [18], he showed that the recurrence/transience of a RWRE is determined by the sign of E_α[log ρ(0)], where ρ(x) = (1 − ω(x))/ω(x), x ∈ Z, and E_α[·] denotes expectation with respect to the measure α. In particular, if E_α[log ρ(0)] = 0, then the RWRE is recurrent. Subsequently, the scaling limit in the recurrent case was identified by Sinai [17], and the explicit form of the limiting distribution by Kesten [15]. Moreover, it was shown by Avena and den Hollander [5] that the convergence also holds in L^p. The next proposition summarizes their results.

Proposition 1.
[Scaling limit RWRE: recurrent case] Let α be any probability distribution on (0,1) satisfying E_α(log ρ(0)) = 0 and σ² := E_α(log² ρ(0)) ∈ (0,∞). Then, under the annealed law P_µ, the sequence of random variables

σ² Z_n / (log n)²

converges in distribution and in L^p to a random variable V on R that is independent of α. The law of V has a density p(x), x ∈ R, with respect to the Lebesgue measure, given by

p(x) = (2/π) ∑_{k=0}^∞ ((−1)^k / (2k+1)) exp(−(2k+1)² π² |x| / 8).  (8)

In their initial paper on RWCRE, Avena and den Hollander introduced several kinds of cooling regimes that are interesting to study. Following their work, we focus on two growth regimes for the cooling map τ:

(R1) (polynomial, "slow" cooling): τ(k) grows polynomially, τ(k) ∼ B k^β for some B > 0 and β > 1;
(R2) (exponential, "fast" cooling): τ(k) grows exponentially, τ(k) ∼ e^{Ck} for some C > 0.

When the distribution α is as in Proposition 1, Avena and den Hollander [5] proved a limiting distribution for the walk under both the fast and the slow cooling regimes. Later, in [3], they strengthened this to L^p convergence. The following proposition summarizes their results. Note that here and throughout the remainder of the paper we will use N(µ, σ²) to denote a Gaussian random variable with mean µ and variance σ².

Proposition 2. [Scaling limit RWCRE: recurrent case] Let α be as in Proposition 1. Then, in both regimes (R1) and (R2), there is a scaling sequence χ_n(τ) (given explicitly in [3,5]) such that, under the annealed law,

(X_n − E(X_n)) / χ_n(τ) → N(0,1) in distribution and in L^p,

where the asymptotics of χ_n(τ) involve σ²_µ, the variance of the random variable log ρ(0), and σ²_V, the variance of the random variable with density (8). Moreover, in (R2) the centering can be removed. That is, X_n/χ_n(τ) → N(0,1).

Remark 1. In the most recent work [3], the authors have studied more general cooling regimes and found their limiting behavior. Although the sequence is always tight, the centered walk may fail to converge, depending on the relative variance weight. Roughly speaking, the relative variance weight measures how much of the variance of X_n is contributed by the walk within a single cooling interval. Their results (Theorem 1.9 and Corollary 1.10 in [3]) show that for general cooling sequences there may be no limiting distribution for X̄_n/√Var(X_n), but that one can identify a class of limiting distributions along subsequences, which are mixtures of Kesten's distribution and a standard Gaussian. See Examples 5 and 6 in [3] for more details.
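Kesten's limit law for Sinai's walk admits the explicit series form p(x) = (2/π) ∑_{k≥0} (−1)^k (2k+1)^{−1} exp(−(2k+1)² π² |x|/8), quoted here from the literature. As a numerical sanity check (purely illustrative), the sketch below evaluates the truncated series and verifies that it is symmetric, that p(0) = 1/2 (the Leibniz series for π/4), and that p integrates to 1.

```python
import math

def kesten_density(x, terms=50):
    """Truncated series for Kesten's density of the limit variable V:
    p(x) = (2/pi) * sum_{k>=0} (-1)^k/(2k+1) * exp(-(2k+1)^2 pi^2 |x| / 8)."""
    s = 0.0
    for k in range(terms):
        m = 2 * k + 1
        s += ((-1) ** k / m) * math.exp(-(m * m) * math.pi ** 2 * abs(x) / 8.0)
    return (2.0 / math.pi) * s

# Riemann-sum check that p integrates to 1 over a wide window (tails are tiny).
h = 0.001
mass = sum(kesten_density(-10.0 + i * h) * h for i in range(int(20 / h)))
```

The slow decay of the alternating series matters only at x = 0, where more terms are needed; away from 0 the exponential factors make the truncation error negligible.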

Functional Weak Limit under Slow and Fast Cooling
In this section we introduce our main results on the weak limit of (X̄_k/χ_n(τ), k = 1, 2, ..., n), where X̄_k = X_k − E(X_k) is the centered walk, under both polynomial and exponential cooling. Since (X_k, k = 1, 2, ..., n) is a discrete-time random walk and we are considering its weak limit after rescaling in both time and space, it is natural to turn the discrete-time walk into a continuous process X^n_t on the time interval t ∈ [0,1]. The simplest way to do this is to make the process piecewise linear. To this end, define

X^n_t = ( X̄_{⌊nt⌋} + (nt − ⌊nt⌋)(X̄_{⌊nt⌋+1} − X̄_{⌊nt⌋}) ) / χ_n(τ),  t ∈ [0,1].  (12)

Then X^n_t is a random function in C[0,1], the space of continuous functions on [0,1], equipped with the uniform topology. The main results are stated as follows.
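The piecewise-linear construction described above is elementary but easy to get wrong at the endpoints; the following sketch (assuming the standard linear interpolation between consecutive lattice times, with `walk` and `scale` as illustrative inputs) makes it concrete.

```python
def interpolate(walk, n, t, scale=1.0):
    """Piecewise-linear interpolation of the scaled walk:
    X^n_t = (X_{floor(nt)} + (nt - floor(nt))*(X_{floor(nt)+1} - X_{floor(nt)})) / scale."""
    u = n * t
    i = int(u)                  # floor(nt) for t in [0, 1]
    if i >= n:                  # t = 1: right endpoint, nothing to interpolate
        return walk[n] / scale
    return (walk[i] + (u - i) * (walk[i + 1] - walk[i])) / scale

walk = [0, 1, 0, -1, 0, 1]      # a toy trajectory X_0, ..., X_5
```

For example, at t = 0.3 with n = 5 the interpolation sits halfway between X_1 = 1 and X_2 = 0.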
Theorem 1. [Slow cooling: functional weak limit for recurrent RWRE] Let α be as in Proposition 1 and consider regime (R1), with X^n_t given in (12). Then, under the annealed law P,

(X^n_t, t ∈ [0,1]) → (B_{t^{1/β}}, t ∈ [0,1]) in distribution,

where (B_t, t ∈ [0,1]) is a standard Brownian motion; the right-hand side is a time-rescaled Brownian motion. The convergence in law holds in the uniform topology on C[0,1].
In the exponential cooling case the result is different: the functional weak limit of X^n_t is a random constant function, and the law of the random constant is a standard Gaussian distribution.

Proof of the Theorem
We begin by noting the following useful decomposition property of RWCRE. Let

k(n) = max{ k ∈ N_0 : τ(k) ≤ n }

be the number of resamplings of the environment prior to time n. It is easy to see that k(n) ∼ (n/B)^{1/β} in (R1) and k(n) ∼ (1/C) log n in (R2). Furthermore, X_n has a decomposition that will be very useful in the proof of the theorems,

X_n = ∑_{j=1}^{k(n)} Y_j + Ȳ_n,  (16)

where Y_j = X_{τ(j)} − X_{τ(j−1)}, j = 1, 2, ..., k(n), and Ȳ_n = X_n − X_{τ(k(n))}. A simple fact is that all terms in (16) are independent under the annealed measure. Moreover, under the annealed measure, Y_j has the same distribution as Z_{T_j}, with T_j = τ(j) − τ(j−1), for j ≥ 1, and Ȳ_n has the same distribution as Z_{n−τ(k(n))} for n ≥ 1, where {Z_n}_{n≥0} is a RWRE. Since we will deal with the remainder part Ȳ_n throughout the proof, we will use the notation T̄_n = n − τ(k(n)) and T̄^c_n = τ(k(n)+1) − n.
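The count k(n) and the remainder T̄_n are easy to compute directly, which makes the stated asymptotics checkable. The sketch below uses the illustrative polynomial cooling map τ(k) = B k^β with B = 2, β = 2 (our choice, not from the paper) and compares k(n) with (n/B)^{1/β}.

```python
def k_of_n(tau, n):
    """k(n) = max{k >= 0 : tau(k) <= n}, the number of resamplings before time n."""
    k = 0
    while tau(k + 1) <= n:
        k += 1
    return k

# Illustrative polynomial cooling tau(k) = B * k^beta with B = 2, beta = 2.
B, beta = 2, 2
tau = lambda k: B * k ** beta
n = 10_000
k_n = k_of_n(tau, n)
remainder = n - tau(k_n)   # the length of the unfinished cooling interval
```

Here k(10000) = 70 while (10000/2)^{1/2} ≈ 70.7, illustrating k(n) ∼ (n/B)^{1/β}.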
To find the weak limit of (X̄_⌊sn⌋ − X̄_⌊tn⌋)/χ_n(τ), we follow the approach of [5] and use the following Lyapunov condition.
Lemma 1. (Lyapunov condition, Petrov [16]) Let U = (U_k)_{k∈N} be a sequence of independent random variables, at least one of which has a non-degenerate distribution. Let m_k = E(U_k) and σ²_k = Var(U_k), and define S_n = ∑_{k=1}^n U_k and B_n = ∑_{k=1}^n σ²_k. Then the Lyapunov condition

lim_{n→∞} B_n^{−p/2} ∑_{k=1}^n E|U_k − m_k|^p = 0  (19)

for some p > 2 implies that (S_n − E(S_n))/√B_n converges in distribution to a standard Gaussian.
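To see the Lyapunov ratio in (19) in action, the following illustrative computation (our own toy example, not part of the proof) evaluates it with p = 3 for i.i.d. Uniform(−1,1) variables, where Var(U_k) = 1/3 and E|U_k|³ = 1/4 exactly; the ratio decays like n^{−1/2}.

```python
def lyapunov_ratio(variances, abs_p_moments, p):
    """L_n = (sum_k E|U_k - m_k|^p) / B_n^{p/2}, with B_n = sum_k Var(U_k)."""
    B_n = sum(variances)
    return sum(abs_p_moments) / B_n ** (p / 2.0)

# For U_k ~ Uniform(-1, 1): Var(U_k) = 1/3 and E|U_k|^3 = 1/4 (exact values).
p = 3
r_small = lyapunov_ratio([1 / 3] * 10, [1 / 4] * 10, p)
r_large = lyapunov_ratio([1 / 3] * 1000, [1 / 4] * 1000, p)
```

In the proofs below the summands are not identically distributed, but the mechanism is the same: the p-th moments grow slowly enough relative to B_n^{p/2} that the ratio vanishes.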
Recall that X_n has the decomposition (16). For any 0 ≤ t < s ≤ 1 and n large enough, define χ^{t,s}_n(τ)² to be the variance of X_⌊sn⌋ − X_⌊tn⌋ (which is also the variance of X̄_⌊sn⌋ − X̄_⌊tn⌋). Since Y_j has the same distribution as Z_{T_j}, Proposition 4 in [5] yields two asymptotic estimates, for the variance and for the centered p-th moment of the increments. Applying these to (22) and (23), we obtain the corresponding asymptotics for the sums. Moreover, using that ∑_{j=1}^k log^{2p} j ∼ ∫_1^k log^{2p} x dx ∼ k log^{2p} k for all p ≥ 2, and that k(n) ∼ (n/B)^{1/β}, we arrive at (26). Since Ȳ_n has the same distribution as Z_{T̄_n}, we can again use Proposition 4 in [5] to obtain upper bounds, which are used to control Var(Ȳ^c_n) and E|Ȳ^c_n − E(Ȳ^c_n)|^p. For n large enough, (27) and (28) hold, and together they give (29). By (26) and (29) we can therefore obtain the asymptotics of χ^{t,s}_n(τ) and χ^{t,s}_n(τ; p). From these asymptotics it is easy to check that the Lyapunov condition (19) holds, and thus (X̄_⌊sn⌋ − X̄_⌊tn⌋)/χ^{t,s}_n(τ) converges in distribution to a standard Gaussian.

In order to prove that the vector (X^n_t, X^n_s − X^n_t) converges to a two-dimensional Gaussian vector with independent components, it suffices, by the Cramér–Wold device, to show that every linear combination of X^n_t and X^n_s − X^n_t converges to the corresponding linear combination of the components of the limiting Gaussian vector. The argument is quite similar to the one above: decompose λX̄_⌊tn⌋ + µ(X̄_⌊sn⌋ − X̄_⌊tn⌋) into independent sums and check the Lyapunov condition (19). The key point is the expression for the variance of λX̄_⌊tn⌋ + µ(X̄_⌊sn⌋ − X̄_⌊tn⌋) and for the sum of the centered p-th moments of the independent components in this decomposition, given in (33) and (34). The last term in each expression cannot be separated into two parts, because the two random variables involved are not independent under the annealed measure. Nevertheless, we can estimate the last term using the facts that Var(X + Y) ≤ 2(Var(X) + Var(Y)) and, similarly, E|X + Y|^p ≤ 2^{p−1}(E|X|^p + E|Y|^p) for any two random variables X and Y. Thus, by the same approach, the last two terms in (33) and (34) are dominated by the first two sums.
Moreover, the asymptotics of the first two sums in (33) and (34) can be obtained by the same methods as in the first part of the proof. The result is that, for any λ > 0 and µ > 0,

λX^n_t + µ(X^n_s − X^n_t) → λB_{t^{1/β}} + µ(B_{s^{1/β}} − B_{t^{1/β}}) in distribution,

where (B_t, t ∈ [0,1]) is a standard Brownian motion. The convergence of general finite-dimensional distributions follows the same steps as in dimension two: decompose ∑_{i=1}^k λ_i(X̄_⌊t_i n⌋ − X̄_⌊t_{i−1} n⌋), where t_0 = 0, into independent sums, and then compute the variance and the sum of centered p-th moments of the independent components of the decomposition in order to check the Lyapunov condition (19). All terms in the big brackets are dominated by the sums to the left of the brackets. Checking that the Lyapunov condition holds in this case amounts to repeating the computations in (25) and (26); the details are tedious and we omit them.
To complete the proof of the theorem in the slow cooling case, we need the tightness of the sequence X^n. By Theorems 7.3 and 7.4 in [6], it is enough to show that for any ε > 0 and η > 0 there exist δ > 0, a partition 0 = t_0 < t_1 < ⋯ < t_v = 1 with min_i (t_i − t_{i−1}) ≥ δ, and n_0 > 0 such that, for all n > n_0,

∑_i P( sup_{s∈[t_{i−1}, t_i]} |X^n_s − X^n_{t_{i−1}}| ≥ ε ) ≤ η.  (41)

Since (X^n_t, t ∈ [0,1]) is the continuous (piecewise linear) version of (X̄_⌊tn⌋/χ_n(τ), t ∈ [0,1]), the largest fluctuation of the continuous process within a given interval is, up to an error smaller than 2/χ_n(τ), bounded by the largest fluctuation of the discrete-time process. Hence we can check condition (41) with X^n_s, s ∈ [t_{i−1}, t_i], and X^n_{t_{i−1}} replaced by X̄_m/χ_n(τ), m ∈ [t_{i−1}n, t_i n], and X̄_⌊t_{i−1}n⌋/χ_n(τ), respectively.
Let m be an index at which the supremum is attained, i.e. |X̄_m − X̄_⌊t_{i−1}n⌋| = sup_{s∈[t_{i−1}n, t_i n]} |X̄_s − X̄_⌊t_{i−1}n⌋|; if there is more than one candidate, choose one arbitrarily. We then have the decomposition (42), which we treat in two parts.

• Given q = ⌊β⌋ + 1 > 1, define the martingale (M_l), with M_0 = 0, as the partial sums of the centered increments. Since the function x ↦ x^{2q} is convex, (M_l^{2q}) is a submartingale. By Doob's maximal inequality [10], (44) holds for every integer L > 0. To estimate the order of E[M_L^{2q}], note that by (24) it is bounded from above by C_0 log^{4q} n for some C_0 > 0, since we are working within the interval [0, n]; this gives (45). Returning to the first part of (42), combining (44) and (45) and recalling that k(⌊t_i n⌋) − k(⌊t_{i−1} n⌋) ∼ (n/B)^{1/β}(t_i^{1/β} − t_{i−1}^{1/β}), we obtain that there exists C* > 0, depending only on ε, such that (47) holds.

• To deal with Ỹ_m, notice that |Ỹ_m| is bounded by the maximum of the remainder terms, where k(m) ranges from k(⌊t_{i−1}n⌋) to k(⌊t_i n⌋); hence we obtain (48) and (49). Moreover, by the same proof as Proposition 4 in [5] (both {Z_n > a} and {Z*_n > a} imply T(a) < n), (50) holds for all p > 0. From (49), Chebyshev's inequality, and (50), there exists C′ > 0, depending only on ε, such that (51) holds. The upper bound in (48) is now clear; it is given in (52), and the right-hand side goes to zero as n → ∞ since k(n) ∼ (n/B)^{1/β}.
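The Doob/Chebyshev step above bounds a running maximum by a moment of the terminal value. As an illustrative sanity check (our own toy example), the sketch below estimates P(max_{l≤L} |S_l| ≥ a) for a simple random walk S, the simplest martingale, and compares it with the q = 1 bound E[S_L²]/a² = L/a².

```python
import random

def prob_max_exceeds(L, a, trials, seed=1):
    """Monte Carlo estimate of P(max_{l<=L} |S_l| >= a) for simple random walk S."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s, running_max = 0, 0
        for _ in range(L):
            s += 1 if rng.random() < 0.5 else -1
            running_max = max(running_max, abs(s))
        hits += running_max >= a
    return hits / trials

L, a = 100, 25
empirical = prob_max_exceeds(L, a, trials=2000)
doob_bound = L / a ** 2   # Doob/Chebyshev bound E[S_L^2]/a^2, with E[S_L^2] = L
```

The bound is far from sharp (here 0.16 versus an empirical probability of a few percent), but in the proof only its order in n matters.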

Fast Cooling
We proceed in the same way as above: first find the order of the variance χ^{t,s}_n(τ) and determine the limiting distribution of (X̄_⌊sn⌋ − X̄_⌊tn⌋)/χ_n(τ); then check the tightness of the distribution of the process (X^n_t, t ∈ [a,1]) to obtain the desired result.
To check the tightness condition, it is enough to show (taking δ = 1) that for any ε > 0,

lim sup_{n→∞} P( sup_{a≤t≤s≤1} |X^n_s − X^n_t| ≥ ε ) = 0,

which is equivalent to the corresponding statement for the discrete-time process. Since sup_{⌊an⌋≤k≤l≤n} |X̄_k − X̄_l| ≤ 2 sup_{⌊an⌋≤s≤n} |X̄_s − X̄_⌊an⌋|, it suffices to deal with |X̄_s − X̄_⌊an⌋| in what follows. Let m be an index at which the supremum is attained, i.e. |X̄_m − X̄_⌊an⌋| = sup_{⌊an⌋≤s≤n} |X̄_s − X̄_⌊an⌋|, and consider its decomposition as in (42). Combining (44) and (45) in the case q = 1, and recalling that k(n) − k(⌊an⌋) ∼ −(1/C) log a, there exists C_2 > 0, depending only on ε, such that (63) holds. For Ỹ_m, following the steps from (48) to (52), there exists C″ > 0, depending only on ε, such that (64) holds; the right-hand side clearly goes to zero as n → ∞. By (27), (63), and (64), the tightness condition holds. Hence (14) is proved.