Random walk in random environment in a two-dimensional stratified medium with orientations

We consider a model of random walk in ${\mathbb Z}^2$ with (fixed or random) orientation of the horizontal lines (layers) and with non-constant iid probabilities to stay on these lines. We prove the transience of the walk for any fixed orientations under general hypotheses. This contrasts with the model of Campanino and Petritis, in which the probabilities to stay on these lines are all equal. We also establish a result of convergence in distribution for this walk with suitable normalizations under more precise assumptions. In particular, our model proves to be, in many cases, even more superdiffusive than the random walks introduced by Campanino and Petritis.


Introduction
In this paper we consider a random walk (M n ) n starting from 0 on an oriented version of Z 2 . Let ε = (ε k ) k∈Z be a sequence of random variables with values in {−1, 1} and joint distribution µ. We assume that the k-th horizontal line is entirely oriented to the right if ε k = 1, and to the left if ε k = −1. We suppose that the probabilities p k to stay on the k-th horizontal line are given by a sequence of independent identically distributed random variables ω = (p k ) k∈Z (with values in (0, 1) and joint distribution η) and that the probabilities to go up and down are equal.
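The transition mechanism just described can be sketched in a short simulation. This is only an illustrative sketch, not code from the paper; the names `step` and `simulate`, and the encoding of the environment as the callables `p` and `eps` (probability of a horizontal step and orientation of line y), are our own conventions. From line y the walk moves horizontally in direction ε y with probability p y , and up or down with probability (1 − p y )/2 each.

```python
import random

def step(pos, p, eps, rng):
    """One step of the walk from pos = (x, y): with probability p(y) move
    horizontally in the direction eps(y); otherwise move up or down with
    equal probability (1 - p(y)) / 2 each."""
    x, y = pos
    u = rng.random()
    if u < p(y):
        return (x + eps(y), y)
    if u < p(y) + (1 - p(y)) / 2:
        return (x, y + 1)
    return (x, y - 1)

def simulate(n, p, eps, seed=0):
    """Trajectory (M_0, ..., M_n) started at the origin."""
    rng = random.Random(seed)
    pos = (0, 0)
    path = [pos]
    for _ in range(n):
        pos = step(pos, p, eps, rng)
        path.append(pos)
    return path
```

For instance, `simulate(100, lambda y: 0.5, lambda y: (-1) ** (y % 2))` corresponds to the alternating orientations ε y = (−1) y with constant p ≡ 1/2.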

Our model corresponds to a random walk in a two dimensional stratified medium with oriented horizontal layers and with random probability to stay on each layer.
The model with p k = 1/2 and with the ε k 's iid and centered can be seen as a discrete version of a model introduced by G. Matheron and G. de Marsily in [15] to model transport in a stratified porous medium. This discrete model appears in [2] to simulate the Matheron and de Marsily model.
In [3], M. Campanino and D. Petritis proved that, when the p k 's are all equal, the behaviour of the walk depends on the choice of the orientations (ε k ) k . First, they prove that the walk is recurrent when ε k = (−1) k (i.e. when the even horizontal lines are oriented to the right and the odd ones to the left). Second, they prove that the walk is almost surely transient when the ε k 's are iid and centered. Let us mention that extensions of this second model can be found in [7, 17].
We start by stating a result of transience.

Theorem 1. Let (p k ) k be a sequence of independent identically distributed random variables. Suppose that p 0 is non-constant and that E[(1 − p 0 ) −α ] < ∞ for some α > 1. Then, for every deterministic or random sequence (ε k ) k , the random walk (M n ) n is transient for almost every ω.
If β = 2, we denote by Z a two-sided standard Brownian motion. We also introduce a standard Brownian motion (B t , t ≥ 0), and denote by (L t (x), x ∈ R, t ≥ 0) the jointly continuous version of its local time. We assume that Z and B are defined on the same probability space and are independent processes. We then define, as in [13], the continuous process (∆ t ) t≥0 and prove the following result.
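The display defining the continuous process appears to have been lost in this version of the text. Assuming the definition matches the random walk in random scenery limit process of Kesten and Spitzer [13], which the surrounding notation (the local time L of B and the stable process Z) points to, it would read:

```latex
% Presumed form of the missing display, in the notation of [13]:
% the local time L of B integrated against the stable process Z.
\Delta_t := \int_{\mathbb{R}} L_t(x) \, \mathrm{d}Z_x , \qquad t \ge 0 .
```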
Theorem 2. Let (p k ) k be a sequence of independent identically distributed random variables with values in (0, 1). Suppose that E[p 0 /(1 − p 0 )] < ∞ and that the distribution of p 0 /(1 − p 0 ) belongs to the normal domain of attraction of a strictly stable distribution of index β ∈ (1, 2] (i.e. that we have (1) and (2)).
The proof of this second result is based on the proof of the functional limit theorem established by N. Guillotin and A. Le Ny [8] for the walk of M. Campanino and D. Petritis (with (p k ) k constant and (ε k ) k centered, independent and identically distributed).
The proof of our first result is built on the proof of [3, Thm 1.8] with many adaptations. The idea is to prove that, when (ε k ) k∈Z is a fixed sequence of orientations, that is when µ is a Dirac measure, ∑ k≥1 P(M k = (0, 0)) < +∞. In the model we consider here, contrary to the models considered in [3], the second coordinate of (M n ) n is not a random walk but a random walk in a random environment, since the probability to stay on a horizontal line depends on the line, which complicates the model. Even if a central limit theorem and a functional limit theorem have been established in [10] and in [9] for M (2) n , the local limit theorem for M (2) n has not yet been proved, to the best of our knowledge. Moreover, in Theorem 1 we do not assume that the distribution of p 0 /(1 − p 0 ) belongs to the domain of attraction of a stable distribution. For these reasons, it does not seem simple to obtain a precise estimate of P(M n = (0, 0)) as was done in [4].
It will be useful to observe that under P ε,ω and P, (M (2) Tn ) n is a simple random walk (S n ) n on Z, where the T n 's are the successive times of vertical displacement. We will use several times the fact that there exists M > 0 such that, for every n ≥ 1, we have P(S n = 0) ≤ M n −1/2 . Now, let us write X n for the first coordinate of M Tn . We observe that X n = ∑ 0≤k≤n−1 ε S k ξ k , where ξ n := T n+1 − T n − 1 corresponds to the duration of the stay on the horizontal line S n after the n-th change of line. Moreover, given ω = (p k ) k∈Z , ε = (ε k ) k∈Z and S = (S k ) k , the ξ k 's are independent with distribution given by P ε,ω (ξ k = m|S) = (1 − p S k ) p m S k for every k ≥ 0 and m ≥ 0. This representation of (M Tn ) n will be very useful in the proofs of both results.
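This embedded description (a simple random walk S of visited lines, geometric stays ξ k with P(ξ k = m) = (1 − p S k ) p m S k , and, as we read the representation above, X n = ∑ k&lt;n ε S k ξ k together with T n+1 = T n + 1 + ξ n ) can be sketched numerically. This is an illustrative simulation under our own naming (`embedded_representation`), not code from the paper:

```python
import random

def embedded_representation(n_vert, p, eps, seed=0):
    """Simulate the walk through its embedded description: S is the simple
    random walk of visited lines, xi_k the geometric time spent on line S_k
    (P(xi_k = m) = (1 - p) * p**m, m >= 0), X_n = sum_{k<n} eps(S_k) * xi_k
    the first coordinate after the n-th vertical move, T the jump times."""
    rng = random.Random(seed)
    S, xi, X, T = [0], [], [0], [0]
    for k in range(n_vert):
        # geometric holding time on line S_k: success prob p(S_k) to stay
        m = 0
        while rng.random() < p(S[k]):
            m += 1
        xi.append(m)
        X.append(X[k] + eps(S[k]) * m)
        T.append(T[k] + 1 + m)          # T_{k+1} = T_k + 1 + xi_k
        S.append(S[k] + rng.choice((-1, 1)))
    return S, X, T, xi
```

With p ≡ 0 the walk never pauses (all ξ k = 0 and T n = n), which gives a quick sanity check of the representation.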

Estimate of the variance
To point out the difference between our model and the model with (p k ) k constant considered by M. Campanino and D. Petritis in [3], we start by estimating the variance of X 2n under the probability P for these two models in the particular case when ε k = (−1) k for every k ∈ Z and when (1 − p 0 ) −1 is square integrable.
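This comparison can also be explored numerically. The sketch below is our own illustration (the name `var_X2n` and the environment sampler `sample_p` are assumptions, and no growth rate is asserted): it estimates Var(X 2n ) under the annealed law P via the embedded representation, with alternating orientations ε y = (−1) y and the environment redrawn on each run.

```python
import random

def var_X2n(n, sample_p, n_runs=2000, seed=0):
    """Monte Carlo estimate of Var(X_2n) under P, with orientations
    eps_y = (-1)**y, using the embedded walk: 2n vertical moves with a
    geometric stay of parameter 1 - p_y on line y.  The environment
    (p_y)_y is drawn lazily, line by line, via sample_p(rng)."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n_runs):
        p = {}                  # environment for this run
        y, x = 0, 0
        for _ in range(2 * n):
            if y not in p:
                p[y] = sample_p(rng)
            m = 0               # geometric number of horizontal steps
            while rng.random() < p[y]:
                m += 1
            x += (1 if y % 2 == 0 else -1) * m   # eps_y = (-1)**y
            y += rng.choice((-1, 1))
        xs.append(x)
    mean = sum(xs) / len(xs)
    return sum((v - mean) ** 2 for v in xs) / (len(xs) - 1)
```

Here `var_X2n(n, lambda rng: 0.5)` corresponds to the Campanino–Petritis case p ≡ 1/2, while `var_X2n(n, lambda rng: rng.uniform(0.2, 0.8))` gives a non-constant environment.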
Proof of Proposition 3. We observe that This gives the result in case (1). Now, to prove the result in case (2), we notice that, since p y and p y′ are independent as soon as y ≠ y′, we have We conclude as H. Kesten and F. Spitzer did in [13, p. 6], using the fact that P(S 2k = 0) ∼ c k −1/2 (as k goes to infinity) for some c > 0.

Proof of Theorem 1 (transience)
We come back to the general case. It is enough to prove the result for any fixed (ε k ) k . Let (ε k ) k∈Z be some fixed sequence of orientations. Hence µ is a Dirac measure on {−1, 1} Z . Without any loss of generality, we assume throughout the proof of Theorem 1 that ε 0 = 1 and α ≤ 2. We have ∑ k≥1 P(M k = (0, 0)) = ∑ n≥1 P(S 2n = 0 and X 2n ≤ 0 ≤ X 2n+1 ).
For every y ∈ Z and m ∈ N, we define N m (y) := #{k = 0, . . . , m − 1 : S k = y}. We will use the fact that X 2n = S 2n + D 2n with Roughly speaking, the idea of the proof is that X 2n ≤ 0 ≤ X 2n+1 implies that X 2n cannot be very far away from 0, which means that D 2n and S 2n should be of the same order; but this fails with high probability. More precisely, we will prove that, with high probability, we have |D 2n | > n Let n ≥ 1. Following [3], we consider δ 1 > 0 and δ 2 > 0 and we define: Our first lemma is standard; we give a proof for the sake of completeness.
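The occupation numbers N m (y) and the conditional expectation D m = ∑ y ε y (p y /(1 − p y )) N m (y) (the formula appearing later in the proof) are easy to compute from a trajectory of S. The sketch below is our own illustration; the names `local_times` and `D` are not from the paper.

```python
from collections import Counter

def local_times(S, m):
    """Occupation numbers N_m(y) = #{k in {0, ..., m-1} : S_k = y}."""
    return Counter(S[:m])

def D(S, m, p, eps):
    """Conditional expectation of X_m given (omega, S):
    D_m = sum_y eps(y) * p(y) / (1 - p(y)) * N_m(y)."""
    N = local_times(S, m)
    return sum(eps(y) * p(y) / (1.0 - p(y)) * N[y] for y in N)
```

For p ≡ 1/2 the ratio p/(1 − p) equals 1, so D m reduces to the signed sum of occupation numbers.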
Proof. Let p > 1. Thanks to Doob's maximal inequality and since E( 2 ) and so, by the Chebyshev inequality, The result follows by taking p large enough.
Let δ 0 > 0 and set We have Proof. Indeed, since S is independent of (p k ) k∈Z , we have We also consider the conditional expectation of X 2n with respect to (ω, (S p ) p ), which is equal to D 2n = ∑ y∈Z ε y (p y /(1 − p y )) N 2n (y). We introduce δ 3 > 0 and A n := {∀k ∈ {0, . . . , 2n − 1}, a < p S k < b}, P := {y ∈ Z : a < p y < b}, and ζ y : Proof. Uniformly on E 0 (n) ∩ E 1 (n), we have In order to apply an inequality due to S.V. Nagaev [16], we define and so if n is large enough, since . We can now apply Nagaev ([16], Thm 1), which gives uniformly on A n ∩ E 1 (n) ∩ E 2 (n), where c > 0 and c′ > 0 are universal constants. We recall that Integrating this proves the lemma, by (8) and (11).
Proof. According to [6, Thm 1.3] applied twice with u = γ 0 /2: first with the scenery (γ 0 − 1 {a&lt;p 2y &lt;b} ) y∈Z and with the strongly aperiodic Markov chain (S 2n /2) n≥0 , and second with the scenery (γ 0 − 1 {a&lt;p 2y+S 1 &lt;b} ) y∈Z and with the strongly aperiodic Markov chain ((S 2n+1 − S 1 )/2) n≥0 conditionally on S 1 , we get the existence of c 1 > 0 such that, for every n ≥ 1, we have Proof. We recall that, given ω and S, ξ k − Hence, for every n ≥ 1 and by taking N large enough. Integrating this on E 4 (n) yields the result.
Lemma 9. We have on E 2 (n), uniformly in ω, S and in k ∈ Z: Proof. On E 2 (n), we have: Since χ p (t) is decreasing in p and since 0 < a < p y < b < 1 for y ∈ P , there exist 0 < β < π/2 and c > 0 such that Let us define a n := 2 ln(n)/(cγ 0 n).

Proof of Theorem 2 (functional limit theorem)
We assume that (p k ) k satisfies the conditions of Theorem 2.
Proof. We first notice that it is enough to prove that Let us define Ẽ 4 (N, v) We proceed as in formula (17) (with a conditioning with respect to S only, and α < β but close enough to β) to prove that P[(Ẽ 4 (N, v)) Now, given (ε, S, ω), is a martingale. Hence, according to the maximal inequality for martingales, we have, for every θ > 0, From this we conclude that lim n→+∞ P sup n≤N The next lemma follows from the proof of [8, Thm 4] when β = 2. The proof of the general case β ∈ (1, 2] is postponed to Section 5.

Lemma 15. Let β ∈ (1, 2]. Let S = (S n ) n≥0 be a random walk on Z starting from S 0 = 0, with iid centered square integrable and non-constant increments and such that gcd{k : P(S 1 = k) > 0} = 1. Let (ε̃ y ) y∈Z be a sequence of iid random variables independent of S with symmetric distribution and such that (n −1/β ∑ n k=1 ε̃ k ) n converges in distribution to a random variable Y with stable distribution of index β. Then the following convergence holds in distribution in D([0; +∞), R 2 ): where B̃ is a Brownian motion such that Var(B̃ 1 ) = Var(S 1 ), with (L̃ t (x)) t,x the jointly continuous version of its local time, and where ∆̃ is built from L̃ and Z̃, with Z̃ independent of B̃ and given by two independent right-continuous stable processes (Z̃ x ) x≥0 and (Z̃ −x ) x≥0 with stationary independent increments such that Z̃ 1 and Z̃ −1 have the same distribution as Y.

Now, we prove a functional limit theorem for (X nt , S nt ) from which we will deduce our Theorem 2. Proof of Proposition 16. We observe that X n can be rewritten According to Lemma 14, it is enough to prove, under P, the convergence in distribution in D([0; +∞), R 2 ).
In case (a) with β = 2, we observe that ∑ n−1 k=0 ε S k is equal to 0 if n is even and to 1 if n is odd. Hence, ((n −3/4 ∑ nt−1 k=0 ε S k ) t≥0 ) n converges to 0 in D([0; +∞), R) and it remains to prove the convergence of Let us write λ for the characteristic function of p 0 /(1 − p 0 ) − E[p 0 /(1 − p 0 )]. Since p 0 /(1 − p 0 ) has a finite variance and λ(ε y ·) behaves as λ at 0, we can follow the proof of the convergence of the finite-dimensional distributions of [8, Prop 1], which gives the convergence in distribution in D([0; +∞), R 2 ) thanks to the tightness that can be proved for the first coordinate as in [13]. Now, let us explain how case (a) with β ∈ (1, 2) will also be deduced from Lemma 15. This comes from the following lemma. Lemma 17. Let β ∈ (1, 2). Let S = (S n ) n be a simple symmetric random walk on Z starting from S 0 = 0. Let (ã y ) y∈Z be a sequence of iid random variables such that E(|ã 0 |) < ∞, independent of S. We have in distribution as n goes to infinity (in D([0; +∞), R 2 )), with δ := 1/2 + 1/(2β).
Proof of Lemma 17. Let us write We notice that it is enough to prove that Let η > 0 be such that 2η < 1/(2β) − 1/4 (such an η exists since β < 2). For every n ≥ 1, we consider the set Ω n defined by Let us show that lim n→+∞ P(Ω n ) = 1. As in Lemma 4, we have, as n goes to infinity, (see [13, Lem 3] and [12, p. 77]). Hence, using again the Markov inequality and taking m large enough, we get On Ω n , using the fact that for every k = 0, . . . , n, we have Hence, thanks to the Markov inequality, we get for θ > 0.
Now we observe that the characteristic function of ε̃ y := p 2y /(1 − p 2y ) − p 2y−1 /(1 − p 2y−1 ) is t → |χ̃(t)| 2 (where χ̃ stands for the characteristic function of p 0 /(1 − p 0 )). The distribution of ε̃ 0 is symmetric and (n −1/β ∑ n k=1 ε̃ k ) n converges in distribution to a random variable with characteristic function θ → exp(−2A 1 |θ| β ). According to Lemma 15 applied with the random walk S k : Brownian motion such that Var(B̃ 1 ) = 1/2 and with (L̃ t (x)) t,x the jointly continuous version of its local time and where ∆̃ with Z̃ independent of B̃ given by two independent right-continuous stable processes (Z̃ x ) x≥0 and (Z̃ −x ) x≥0 , the characteristic functions of Z̃ 1 and of Z̃ −1 being θ → exp(−2A 1 |θ| β ). Hence, we have and so with B t := 2B̃ t/2 . Now we observe that where L denotes the local time of B and with Z x := Z̃ x/2 . Now Lemma 17 applied to (p y /(1 − p y )) y∈Z gives (19), which proves Proposition 16 in case (a) with β ∈ (1, 2).
Proof of Theorem 2. We recall that, for every n, we have Tn . Moreover we observe that we have that can be rewritten We recall that γ = 1 + E[p 0 /(1 − p 0 )] and we define (U n ) n such that We notice that the sequences of processes converge in distribution in D([0; +∞), R) to 0. The first convergence follows from Lemma 14 where we take ε k = 1 for every k ∈ Z. The second convergence is a consequence of [13, Thm 1.1] since n δ /n → 0 as n → +∞. Hence ((n −1 T nt ) t≥0 ) n converges in distribution to (γt) t . We conclude that ((n −1 U nt ) t≥0 ) n converges in distribution (in D([0; +∞), R)) to (t/γ) t .
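The law of large numbers used here, namely T n /n → γ = 1 + E[p 0 /(1 − p 0 )] since T n = n + ∑ k&lt;n ξ k , is easy to check numerically. The sketch below is our own illustration (the name `gamma_estimate` is an assumption):

```python
import random

def gamma_estimate(n, p, seed=0):
    """Empirical T_n / n for the embedded times T_n = n + sum_{k<n} xi_k,
    where xi_k is the geometric stay on line S_k.  By the law of large
    numbers this should approach gamma = 1 + E[p / (1 - p)]."""
    rng = random.Random(seed)
    y, total = 0, 0
    for _ in range(n):
        m = 0                       # geometric stay with parameter 1 - p(y)
        while rng.random() < p(y):
            m += 1
        total += 1 + m              # one vertical step plus the stay
        y += rng.choice((-1, 1))
    return total / n
```

For p ≡ 1/2 one has E[p/(1 − p)] = 1, so T n /n should be close to γ = 2 for large n.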
Therefore, according to Proposition 16 and to [1, Lem p. 151, Thm 3.9], the sequence of processes t≥0 . This means that is a standard Brownian motion, and (Z̃ x ) x∈R has the same distribution as (Z x ) x∈R and is independent of (B t ) t≥0 . Furthermore we have where (L t ) t≥0 is the local time of (B t ) t and so Now we observe that we have and that, for every θ > 0 and T > 0, for η > 0 small enough, since δβ > 1 and since This completes the proof of Theorem 2.

Proof of Lemma 15
The proof is very similar to those in [13] and [8], with some adaptations.
We define D̃ n := ∑ y∈Z ε̃ y N n (y), n ∈ N.
Before proving Lemma 18, we first introduce some preliminary results.
We observe that n −1/β ∑ n y=1 ε̃ y converges in distribution to a stable random variable of index β, with characteristic function ζ β (θ) := exp(−A 0 |θ| β ) (for some A 0 > 0). We can now compute the characteristic function of the finite-dimensional distributions of (∆̃ t , B̃ t ) t≥0 .
Proof. We condition on B̃ and we proceed as in [13, Lem 5]. We get which gives the result.