Sub-ballistic random walk in Dirichlet environment

We consider random walks in Dirichlet environment (RWDE) on $\Z ^d$, for $ d \geq 3 $, in the sub-ballistic case. We associate to any parameter $ (\alpha_1, ..., \alpha_{2d}) $ of the Dirichlet law a time-change that accelerates the walk. We prove that the continuous-time accelerated walk admits an invariant probability measure for the environment viewed from the particle, absolutely continuous with respect to the law of the environment. This allows us to characterize directional transience for the initial RWDE, and solves, as a corollary, the problem of Kalikow's 0-1 law in the Dirichlet case in any dimension. Furthermore, we find the polynomial order of magnitude of the original walk's displacement.


Introduction
The behaviour of random walks in random environment (RWRE) is fairly well understood in dimension 1 (see Solomon ([16]), Kesten, Kozlov, Spitzer ([8]) and Sinaï ([15])). In the multidimensional case, some results are available under ballisticity conditions (we refer to [20] and [2] for an overview of progress in this direction), or in the case of small perturbations. But some simple questions remain unanswered: for example, there is no general characterization of recurrence, and Kalikow's 0-1 law is known only for d ≤ 2 ([21]).
Random walks in Dirichlet environment (RWDE) are the special case where the transition probabilities at each site are chosen as i.i.d. Dirichlet random variables. RWDE are interesting because of the analytical simplifications they offer, and because of their link with reinforced random walks. Indeed, the annealed law of a RWDE corresponds to the law of a linearly directed-edge reinforced random walk ([4], [11]). This model first appeared in [11] in relation with edge-reinforced random walks on trees. It was then studied on Z × G ([7]), and on Z^d ([5], [18], [12], [13], [14]).
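As a concrete illustration of the model, here is a minimal numerical sketch (not taken from any of the cited references; all names and parameter values are arbitrary choices of ours): we draw i.i.d. Dirichlet exit probabilities at each site of a small two-dimensional torus and run the quenched walk.

```python
import random

# Directions e_1, e_2, -e_1, -e_2 in d = 2 (toy dimension; the paper takes d >= 3).
STEPS = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def sample_dirichlet(alphas, rng):
    """Dirichlet sample obtained by normalizing independent Gamma variables."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def sample_environment(n, alphas, rng):
    """i.i.d. Dirichlet exit probabilities at every site of the n x n torus."""
    return {(x, y): sample_dirichlet(alphas, rng)
            for x in range(n) for y in range(n)}

def walk(env, n, n_steps, rng):
    """Quenched walk in the environment env, started at the origin."""
    x, y = 0, 0
    for _ in range(n_steps):
        probs = env[(x % n, y % n)]
        (dx, dy), = rng.choices(STEPS, weights=probs, k=1)
        x, y = x + dx, y + dy
    return x, y

rng = random.Random(0)
alphas = [2.0, 1.0, 1.0, 1.0]   # a bias in direction e_1
env = sample_environment(20, alphas, rng)
print(walk(env, 20, 1000, rng))
```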
We are interested in RWDE on Z^d for d ≥ 3. A condition on the weights ensures that the mean time spent in finite boxes is finite. Under this condition, it was proved in [13] that there exists an invariant probability measure for the environment viewed from the particle, absolutely continuous with respect to the law of the environment. Using [14], this gives some criteria for ballisticity.
In this paper, we focus on the case when the condition on the weights is not satisfied. Then the mean time spent in finite boxes is infinite, and there is no absolutely continuous invariant probability measure ([13]). The law of large numbers then gives zero speed. To overcome this difficulty, we construct a time-change that accelerates the walk, such that the accelerated walk spends a finite mean time in finite boxes. An absolutely continuous invariant probability measure then exists. Together with ergodic arguments, this gives a characterization of directional recurrence in the sub-ballistic case. As a corollary, it solves the problem of Kalikow's 0-1 law in the Dirichlet case (the case d = 2 was treated in [21]).
Besides, in the directionally transient case, we show a law of large numbers with positive speed for our accelerated walk. This gives the polynomial order of magnitude of the original walk's displacement, and could be a first step towards a limit theorem for the original RWDE.

Definitions and statement of the results
Let (e_1, ..., e_d) be the canonical basis of Z^d, d ≥ 3, and set e_j = −e_{j−d} for j ∈ [[d+1, 2d]]. The set {e_1, ..., e_{2d}} is the set of unit vectors of Z^d. We denote by ‖z‖ = Σ_{i=1}^d |z_i| the L^1-norm of z ∈ Z^d, and write x ∼ y if ‖y − x‖ = 1. We consider the set of directed edges E = {(x, y) ∈ (Z^d)^2, x ∼ y}. Let Ω be the set of all possible environments on Z^d: Ω = {ω = (ω(x, x+e_i))_{x∈Z^d, 1≤i≤2d} : ω(x, x+e_i) > 0 for all x, i, and Σ_{i=1}^{2d} ω(x, x+e_i) = 1}.
In [13], it was proved that when κ > 1, where κ = 2 Σ_{j=1}^{2d} α_j − max_{i=1,...,d}(α_i + α_{i+d}), there exists an invariant probability measure for the environment viewed from the particle, absolutely continuous with respect to P^{(α)}. This leads to a complete description of ballistic regimes and directional transience. However, when κ ≤ 1, such an invariant probability measure does not exist, and we only know that the walk is sub-ballistic. In this paper, we focus on the case κ ≤ 1. We prove the existence of an invariant probability measure for an accelerated walk. This allows us to characterize recurrence in each direction for the initial walk.
Let σ = (e_1, ..., e_n) be a directed path. By directed path, we mean a sequence of directed edges e_i such that \overline{e_i} = \underline{e_{i+1}} for all i (where \underline{e} and \overline{e} are the tail and head of the edge e). We write ω_σ = Π_{i=1}^n ω(e_i). Let Λ be a finite connected set of vertices containing 0. Our accelerating function is γ_ω(x) = (Σ_σ ω_σ)^{−1}, where the sum is over all finite simple paths σ (each vertex is visited at most once) starting from x, going out of x + Λ, and stopped just after exiting x + Λ. Let X_t be the continuous-time Markov chain whose jump rate from x to y is γ_ω(x)ω(x, y), with X_0 = 0. Then Z_n = X_{t_n}, for t_n = Σ_{i=0}^{n−1} E_i/γ_ω(Z_i), where the E_i are independent exponentially distributed random variables with rate parameter 1: X_t is an accelerated version of the walk Z_n.
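The acceleration function can be computed exactly on small examples by enumerating the simple paths in its definition. The following sketch (our own illustration) does this in dimension 2 with Λ = {0, e_1}, a toy choice; the paper takes d ≥ 3, but the enumeration is identical.

```python
import random

def unit_vectors(d):
    """The 2d unit vectors e_1, ..., e_d, -e_1, ..., -e_d."""
    vs = []
    for i in range(d):
        e = [0] * d
        e[i] = 1
        vs.append(tuple(e))
        vs.append(tuple(-c for c in e))
    return vs

def gamma(omega, x, lam, d):
    """gamma_omega(x) = 1 / sum of omega_sigma over simple paths started at x,
    staying in x + Lambda and stopped just after exiting x + Lambda."""
    inside = {tuple(a + b for a, b in zip(x, v)) for v in lam}
    total = 0.0
    stack = [(x, frozenset([x]), 1.0)]   # (site, visited vertices, path weight)
    while stack:
        site, visited, w = stack.pop()
        for step in unit_vectors(d):
            nxt = tuple(a + b for a, b in zip(site, step))
            p = w * omega[(site, step)]
            if nxt not in inside:
                total += p               # the path exits x + Lambda: stop here
            elif nxt not in visited:
                stack.append((nxt, visited | {nxt}, p))
    return 1.0 / total

# toy environment: Dirichlet(1,1,1,1) exit probabilities at the sites of Lambda
d = 2
lam = [(0, 0), (1, 0)]
rng = random.Random(1)
omega = {}
for v in lam:
    g = [rng.gammavariate(1.0, 1.0) for _ in range(2 * d)]
    t = sum(g)
    for s, gi in zip(unit_vectors(d), g):
        omega[(v, s)] = gi / t
print(gamma(omega, (0, 0), lam, d))  # > 1: the walk is accelerated
```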
We denote by (τ_x)_{x∈Z^d} the shifts on the environment, defined by τ_x ω(y, z) = ω(x+y, x+z), and call process seen from the particle the process defined by ω_t = τ_{X_t} ω. Under P^{ω_0}_0 (ω_0 ∈ Ω), ω_t is a Markov process with state space Ω, whose generator R is given by R f(ω) = γ_ω(0) Σ_{i=1}^{2d} ω(0, e_i)(f(τ_{e_i} ω) − f(ω)) for all bounded measurable functions f on Ω. Invariant probability measures absolutely continuous with respect to the law of the environment are a classical tool to study processes viewed from the particle. The following theorem provides one for our accelerated walk.

Theorem 1. Let d ≥ 3 and P^{(α)} be the law of the Dirichlet environment for the weights (α_1, ..., α_{2d}). Let κ_Λ > 0 be defined by κ_Λ = min{Σ_{e∈∂_+(K)} α_e : K connected set of vertices, 0 ∈ K and ∂Λ ∩ K ≠ ∅}, where ∂_+(K) = {e ∈ E, \underline{e} ∈ K, \overline{e} ∉ K} and ∂Λ = {x ∈ Λ | ∃y ∼ x such that y ∉ Λ}. If κ_Λ > 1, there exists a unique probability measure Q^{(α)} on Ω that is absolutely continuous with respect to P^{(α)} and invariant for the generator R. Furthermore, dQ^{(α)}/dP^{(α)} is in L^p(P^{(α)}) for all 1 ≤ p < κ_Λ.

Figure: the edges of ∂_+(K) with their weights α_e (dashed arrows), for an arbitrary K (thick lines).
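Under this reading of the definition of κ_Λ (the minimum of the exit sums Σ_{e∈∂_+(K)} α_e over connected K containing 0 and meeting ∂Λ), the exponent can be computed by brute force on small examples. A sketch of ours in dimension 2, with Λ the L^1-ball of radius 1 and the search restricted to a small box (both simplifying assumptions):

```python
import itertools

def neighbors(x):
    a, b = x
    return [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]

def alpha_of_edge(x, y, alphas):
    """alphas = (a1, a2, a3, a4) for the directions e1, e2, -e1, -e2."""
    d = (y[0] - x[0], y[1] - x[1])
    return {(1, 0): alphas[0], (0, 1): alphas[1],
            (-1, 0): alphas[2], (0, -1): alphas[3]}[d]

def is_connected(K):
    start = next(iter(K))
    seen, todo = {start}, [start]
    while todo:
        x = todo.pop()
        for y in neighbors(x):
            if y in K and y not in seen:
                seen.add(y)
                todo.append(y)
    return seen == set(K)

def kappa_lambda(alphas, radius=2):
    lam = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}   # L1-ball of radius 1
    boundary = {x for x in lam if any(y not in lam for y in neighbors(x))}
    box = [(a, b) for a in range(-radius, radius + 1)
           for b in range(-radius, radius + 1) if abs(a) + abs(b) <= radius]
    others = [x for x in box if x != (0, 0)]
    best = float("inf")
    for k in range(len(others) + 1):
        for rest in itertools.combinations(others, k):
            K = set(rest) | {(0, 0)}
            if not (K & boundary) or not is_connected(K):
                continue
            exit_sum = sum(alpha_of_edge(x, y, alphas)
                           for x in K for y in neighbors(x) if y not in K)
            best = min(best, exit_sum)
    return best

print(kappa_lambda((1.0, 1.0, 1.0, 1.0)))  # 6.0 for symmetric weights
```

For symmetric weights the minimum is attained by a two-site set K = {0, e_1}, whose exit sum is 2 Σ_j α_j − α_1 − α_3.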

Remark 1.
If Λ is a box of radius R_Λ, the formula is explicit: Remark 2. κ_Λ can be made as large as we want by taking the set Λ large enough. Then for each (α_1, ..., α_{2d}), there exists an acceleration function such that the accelerated walk satisfies theorem 1.
As (X t ) t∈R + and (Z n ) n∈N go through exactly the same vertices in the same order, and as the two processes stay a finite time on each vertex without exploding, recurrence and transience for the original walk Z n · e i follow from those of X t · e i .
The proof of theorem 2 also allows us to solve the problem of Kalikow's 0-1 law in the Dirichlet case. a.s. when d ≥ 3 and d_α = 0. As I was finishing this article, Tournier informed me of the existence of a more general version of theorem 1 of [14]. Using this result instead of [14] in the proof of theorem 2 allows us to show that the asymptotic direction is d_α/|d_α|; see [19] for details.
In the transient sub-ballistic case, we also obtain the polynomial order of magnitude of the walk's displacement:

Theorem 5. Let d ≥ 3, P^{(α)} be the law of the Dirichlet environment with parameters (α_1, ..., α_{2d}) on Z^d, and Z_n the associated random walk in Dirichlet environment. We suppose κ ≤ 1 and take l = e_i such that d_α · l ≠ 0. Then log(max_{i≤n} Z_i · l)/log(n) → κ in P^{(α)}_0-probability.

Remark 4. The directional transience shown in [19] should also enable us to extend the results of theorem 2, corollary 3 and theorem 5 from the directions (e_i)_{i=1,...,2d} to any l ∈ R^d.

Proof of theorem 1
We first give some definitions and notation. Let G = (V, E) be a directed graph. For e ∈ E, we denote by \underline{e} the tail of the edge and by \overline{e} its head, so that e = (\underline{e}, \overline{e}). The divergence operator div : R^E → R^V is defined by div f(x) = Σ_{e : \underline{e} = x} f(e) − Σ_{e : \overline{e} = x} f(e). For N ∈ N^*, we set T_N = (Z/NZ)^d, the d-dimensional torus of size N. We denote by G_N = (T_N, E_N) the directed graph obtained by projection of (Z^d, E) on the torus T_N.
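The divergence operator just defined can be illustrated on a toy graph: it measures, at each vertex, the outflow minus the inflow of a function on edges, and vanishes identically on a circulation. A minimal sketch (graph and values are our own arbitrary choices):

```python
def divergence(theta, vertices):
    """div(theta)(x) = sum of theta over edges leaving x
    minus the sum over edges entering x."""
    div = {x: 0.0 for x in vertices}
    for (tail, head), value in theta.items():
        div[tail] += value   # the edge leaves its tail
        div[head] -= value   # the edge enters its head
    return div

# a directed 3-cycle carrying one unit of flow: divergence vanishes everywhere
theta = {("x", "y"): 1.0, ("y", "z"): 1.0, ("z", "x"): 1.0}
print(divergence(theta, ["x", "y", "z"]))  # {'x': 0.0, 'y': 0.0, 'z': 0.0}
```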
Let Ω_N be the space of elliptic random environments on the torus: Ω_N = {ω : E_N → (0, 1] such that Σ_{i=1}^{2d} ω(x, x+e_i) = 1 for all x ∈ T_N}. We denote by P^{(α)}_N the law of the environment obtained by choosing independently, for each x ∈ T_N, the exit probabilities of x according to a Dirichlet law with parameters (α_1, ..., α_{2d}).
For ω ∈ Ω_N, we denote by π^ω_N the unique (by ellipticity) invariant probability measure of Z^ω_n on the torus in the environment ω. Then π^ω_N(x)/γ_ω(x) is an invariant measure for X^ω_t on the torus in the environment ω, and π̃^ω_N(x) = (π^ω_N(x)/γ_ω(x)) / (Σ_{y∈T_N} π^ω_N(y)/γ_ω(y)) is the associated invariant probability. Define Q^{(α)}_N by dQ^{(α)}_N = N^d π̃^ω_N(0) dP^{(α)}_N; then Q^{(α)}_N is an invariant probability measure on Ω_N. We can now reduce theorem 1 to the following lemma.

Lemma 1. Let 1 < p < κ_Λ. Then sup_{N∈N} E^{(α)}_N[(N^d π̃^ω_N(0))^p] < ∞.
Once this lemma is proved, theorem 1 follows easily; we refer to [13], pages 5-6, where the situation is exactly the same, or to [2], pages 11 and 18-19.
Proof of lemma 1. This proof is divided into two main steps. First, we introduce the "time-reversed environment" and prepare the application of the "time reversal invariance" (lemma 1 of [12], or proposition 1 of [14]). Then we apply this invariance and use a lemma of "max-flow min-cut" type.
Step 1: Let (ω(x, y))_{x∼y} be in Ω_N. The time-reversed environment is defined by: . Let p be a real number, 1 < p < κ_Λ. We have: Introducing the immediate fact that it gives: we can then use the inequality of arithmetic and geometric means: Take θ_N : E_N → R_+, and define θ̌_N by: ∀x ∼ y, θ̌_N(x, y) = θ_N(y, x). It is clear that where by λ^β we mean Π_{e∈E_N} λ(e)^{β(e)} (resp. Π_{x∈T_N} λ(x)^{β(x)}) for any couple of functions λ, β on E_N (resp. T_N). Therefore, if we choose θ_N: We therefore only have to show that we can find (θ_N)_{N∈N} such that for all N, θ_N satisfies (3.3), and such that:

Step 2: Take p > 1. We first construct a sequence (θ_N)_{N∈N} that satisfies (3.3), and then we show that it satisfies (3.4). Construction of (θ_N)_{N∈N}. We want to use lemma 2 of [13], which is a result of max-flow min-cut type (see for example [10], section 3.1, for a general description of the max-flow min-cut problem). We first recall some definitions and notions on the matter. In an infinite graph G = (V, E), a cut-set between x ∈ V and ∞ is a subset S of E such that any infinite simple directed path (i.e. an infinite directed path that does not go twice through the same vertex) starting from x must necessarily go through one edge of S. A cut-set that is minimal for inclusion is necessarily of the form (3.5): S = ∂_+(A) = {e ∈ E | \underline{e} ∈ A, \overline{e} ∉ A}, where A is a finite subset of V containing x and such that any y ∈ A can be reached by a directed path in A starting from x. Let (c(e))_{e∈E} be a family of non-negative reals, called the capacities. The minimal cut-set sum between 0 and ∞ is defined by m((c(e))_{e∈E}) = inf{c(S), S a cut-set separating 0 and ∞}, where c(S) = Σ_{e∈S} c(e). Remark that the infimum can be taken only over minimal cut-sets, i.e. cut-sets of the form (3.5).
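The max-flow min-cut duality invoked here can be illustrated numerically: on a finite graph with capacities c(e), the minimal cut-set sum between a source and a sink equals the maximal flow strength. A small sketch of ours (the graph and capacities are arbitrary toy choices), using the Edmonds-Karp version of the Ford-Fulkerson algorithm:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with shortest augmenting paths (Edmonds-Karp)."""
    flow = 0.0
    residual = {e: c for e, c in capacity.items()}
    for (u, v) in list(residual):            # ensure reverse residual edges exist
        residual.setdefault((v, u), 0.0)
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for (a, b), c in residual.items():
                if a == u and c > 1e-12 and b not in parent:
                    parent[b] = u
                    q.append(b)
        if sink not in parent:
            return flow                      # no augmenting path: flow is maximal
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[e] for e in path) # bottleneck capacity
        for (u, v) in path:
            residual[(u, v)] -= aug
            residual[(v, u)] += aug
        flow += aug

cap = {("s", "a"): 3.0, ("s", "b"): 2.0, ("a", "t"): 2.0,
       ("a", "b"): 1.0, ("b", "t"): 2.0}
print(max_flow(cap, "s", "t"))  # 4.0: the cut {("a","t"), ("b","t")} is minimal
```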
Subsequent calculations will show the need for a θ_N depending on an arbitrary path σ from 0 to Λ^c. Fix N ∈ N; we define: Then m((α^{(σ)}(e))_{e∈E_N}) ≥ κ_Λ. Indeed:
• If some e ∈ σ is in the min-cut, it is obvious.
• Otherwise, as 0 ∈ \underline{σ}, the min-cut is of the form S = ∂_+(K) with σ ⊂ K and K a finite connected set of vertices. The definition of κ_Λ in theorem 1 directly gives m((α^{(σ)}(e))_{e∈E_N}) ≥ κ_Λ.
Then lemma 2 of [13], applied with c(e) = (p/κ_Λ) α^{(σ)}(e), gives that for all N ≥ N_0 there is a function θ_N satisfying (3.3) and such that θ_N(e) ≤ (p/κ_Λ) α^{(σ)}(e). Preliminary computations about (3.4). Let q and r be positive reals such that 1/r + 1/q = 1 and pq < κ_Λ. Using first Hölder's inequality and then the time-reversed environment (lemma 1 of [12]), we obtain: In order to simplify notation, we write dλ_Ω = Π_{e∈Ẽ_N} dω(e), where Ẽ_N is obtained from E_N by removing, for each x, one arbitrary edge leaving x. We can now compute the first expectation in (3.6): where all the sums over σ correspond to sums over simple paths. As where σ_x is an arbitrarily chosen simple path in the preceding sum. Then As Λ is finite, an edge can belong to only a finite number of the σ_x. We then have, for all e, This proves that β_σ is well defined and takes only finite values.
The second expectation in (3.6) is easy to compute: We have not yet checked that the previous expressions are well defined: we need to prove that, for the given θ_N, the arguments of the Gamma functions are positive. As it is a bit tedious, we delay this checking to the next point of the proof.
We now have that E^{(α)}[ωθ_N] We want to prove the finiteness of this expression. As we sum over a finite number of paths, we only have to show that the general term of the sum stays finite. We are thus reduced to proving that, for every simple path σ : 0 → Λ^c, Checking that the previous Gamma functions are well defined. As α(e) > 0 and θ_N(e) ≥ 0 for all e ∈ E_N, the result is straightforward except for Γ(β_σ(e) − qθ_N(e)) and Γ(β_σ(x) − qθ_N(x)). By construction of θ_N, we know that . Then we just have to check the positivity of this second expression. Take e ∈ E_N: As we assumed pq < κ_Λ and κ_Λ > 1, α(e)(1 − pq/κ_Λ) > 0. The second term can be made as small as needed by choosing N big enough. Then Proof of (3.7). As σ is a finite path, the above tells us that there exists ε > 0 such that: and the same is true for α(x), by summing over e. Define: We then have, for any fixed σ: ν(α(e), θ_N(e), β_σ(e)) = (1/r) ln Γ(α(e) + rθ_N(e)) + (1/q) ln Γ(β_σ(e) − qθ_N(e)) − ln Γ(α(e)), with θ_N(e) ≤ (p/κ_Λ) α^{(σ)}(e) and pq < κ_Λ. Taylor's inequality gives: ∀e, with C_1 and C_2 positive constants. Then we can find a constant C_3 > 0 independent of N ≥ N_0 such that: According to lemma 2 of [13], this is bounded by a finite constant independent of N.
It follows that the supremum over N is finite too. This concludes the argument for any fixed σ, and proves (3.7). This proves the lemma.

Proof of theorem 2 and corollary 4
To obtain results on the initial random walk Z_n, we need some estimates on our acceleration function γ_ω. In particular, we will need the following lemma:

Lemma 2. For all x ∈ Z^d and s < κ, E^{(α)}[γ_ω(x)^s] < ∞.

As its proof is quite computational, we defer it to the appendix. Remark that it is nevertheless quite easy to get a weaker bound: γ_ω(0) = (Σ_σ ω_σ)^{−1} ≤ (ω_{σ_1})^{−1}, where the sum is over all finite simple paths σ from 0 to Λ^c, and where σ_1 is the path from 0 to Λ^c going only through edges (ne_1, (n+1)e_1).
Theorem 2 is based on classical results on ergodic stationary sequences; see [3], pages 342-344. We need another preliminary lemma.
Proof. The proof of the first point is easily adapted from chapter 2 of [2], replacing the discrete process by the continuous one: we use the continuous-time martingale convergence theorems and the continuous version of Birkhoff's theorem (see for example [9], pages 9-11).
For the second point, as Q^{(α)} is an invariant probability measure for ω_t, it is straightforward that (∆_i) is stationary. It remains to prove ergodicity. Let A ⊂ (Z^d)^N be a measurable set such that θ_t^{−1}(A) = A for all t, with θ_t the time-shift. We write r(x, ω) = P^ω_x((∆_i) ∈ A) and r(ω) = r(0, ω).
Proof. Let N be the number of visits to 0 before exiting Λ. The random variable N follows a geometric law of parameter p_N := 1/G^{ω,Λ}(0, 0), the inverse of the Green function killed at the exit time of Λ. We denote by T the total time spent at 0 before exiting Λ: T = (1/γ_ω(0)) Σ_{i=1}^{N} E_i, where the E_i are independent exponential random variables of parameter 1. Set ε > 0.
For all a > 0 and 0 < λ < κ, As λ < κ, lemma 2 gives: with C a positive constant independent of a. Then for a = ε^{−1/(λ+1)} we have: If D(l, n) ≥ 2kR_Λ, the walk went through at least k distinct sets X_t + Λ with pairwise empty intersections. The time spent in such a set is bigger than the time spent at one point of the set, and those times are independent for disjoint sets (because the environments in the sets are independent). We get (for T_1, ..., T_k i.i.d. with the same law as T): This concludes the proof of the lemma.
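The decomposition at the start of this proof (a geometric number N of visits to 0, each contributing an independent exponential holding time) can be checked numerically: a geometric sum of Exp(1) variables is again exponential, of mean 1/p. A minimal sketch of ours, with the acceleration set to 1 for simplicity and p an arbitrary choice:

```python
import random

def total_time(p, rng):
    """Total time spent at 0: N ~ Geometric(p) visits, each of Exp(1) duration."""
    n = 1
    while rng.random() >= p:       # this visit does not exit: come back to 0
        n += 1
    t = 0.0
    for _ in range(n):
        t += rng.expovariate(1.0)  # Exp(1) holding time per visit
    return t

rng = random.Random(42)
p = 0.3
samples = [total_time(p, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to 1/p = 3.33
```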
Proof of theorem 2. Lemma 3 gives that the sequence (∆_i)_{i∈N} is stationary and ergodic under Q^{(α)}[P^ω_0(.)]. We apply Birkhoff's ergodic theorem to the ∆_i to get a law of large numbers: X_k/k → E_{Q^{(α)}}[E^ω_0(X_1)], Q^{(α)}-a.s. and thus P^{(α)}_0-a.s.
If d_α · e_i = 0, the symmetry of the law of the environment gives E_{Q^{(α)}}[E^ω_0(X_1)] · e_i = 0. Then X_k/k → 0 when d_α = 0. Furthermore, theorem 6.3.2 of [3] gives that the process (X_k) is directionally recurrent when d_α · e_i = 0. As X_t stays only a finite time on each vertex before the next jump, directional recurrence for (X_k)_{k∈N} implies directional recurrence for (X_t)_{t∈R_+} (the probability to come back to 0 after a finite time is 1).
For l ∈ R^d, we write A_l = {X_{t_k} · l → ∞}, where (t_k)_{k∈N} are the jump times. If l ≠ 0 and if P
We now consider the limit for the continuous-time walk. For t > 0, we set k = ⌊t⌋. Then for all i = 1, ..., 2d, |X_t · e_i − X_k · e_i| ≤ D(e_i, k).
Lemma 4 gives: for ε > 0, Then by the Borel-Cantelli lemma, D(e_i, k)/k → 0 a.s. This gives the directional transience in the case d_α · e_i > 0, and finishes the proof.
Proof of corollary 4. We prove, as in the proof of theorem 2, that a.s. (see [21], proposition 3). It allows us to find a finite interval I of R, of positive measure, containing 0 and such that (X_{t_k} · l)_{k∈N} visits I only a finite number of times, P^{(α)}_0-a.s. and thus Q^{(α)}_0-a.s. As before, it implies that (X_k · l)_{k∈N} visits I only a finite number of times, Q^{(α)}_0-a.s. We can then apply the theorem of [1] to (X_k)_{k∈Z} (obtained by extending (X_t)_{t∈R_+} to t ∈ R) to get E_{Q^{(α)}}[E^ω_0(X_1 · l)] = 0. We then deduce from (4.2) that:

We still write
a.s. otherwise. As (X_t)_{t∈R_+} and (Z_n)_{n∈N} go through exactly the same vertices in the same order, and as the two processes stay a finite time on each vertex without exploding (see lemma 4), recurrence and transience for Z_n · l follow from those of X_t · l. This gives as a consequence Kalikow's 0-1 law in the Dirichlet case for d ≥ 3.
The 0-1 law holds in the general case of random walks in random environment for d = 1 and d = 2 (see respectively Solomon ([16]) and Zerner and Merkl ([21])); this concludes the proof.

Proof of theorem 5
To prove the result, we need a preliminary theorem on the polynomial order of the hitting times of the walk.

Theorem 6. Let d ≥ 3, P^{(α)} be the law of the Dirichlet environment with parameters (α_1, ..., α_{2d}) on Z^d, and Z_n the associated random walk in Dirichlet environment. We suppose κ ≤ 1 and take l = e_i such that d_α · l ≠ 0. Let T^{l,Z}_n = inf{i ∈ N | Z_i · l ≥ n} be the hitting time of level n in direction l for the non-accelerated walk Z. Then: log(T^{l,Z}_n)/log(n) → 1/κ in P^{(α)}_0-probability.

Proof. Upper bound. Define A(t) = ∫_0^t γ_ω(X_s) ds. Then X_{A^{−1}(t)} is the continuous-time Markov chain whose jump rate from x to y is ω(x, y). This Markov chain has asymptotically the same behaviour as Z_n; we thus only have to prove that limsup_{n→+∞} log(A(T^{l,X}_n))/log(n) ≤ 1/κ, with T^{l,X}_n = inf{t ∈ R_+ | X_t · l ≥ n}.
In the following, c and C are finite constants that can change from line to line. As P(D_i = k) ≤ P(D_i ≥ k) ≤ c C^k/k! by lemma 4, and qβ < 1, we get: As the Dirichlet weights are i.i.d., the value of the expectation is independent of x. Lemma 2 then gives a uniform finite bound for all x.
Set l ∈ {e_1, ..., e_{2d}}. We introduce the exit times (with a minus sign instead of the plus if l = −e_1), and, for k ∈ N, we define Θ_k (where τ is the time-shift). We use the convention Θ_k = ∞ if T^{l,Z}_{2k} = ∞. The only dependence between the times Θ_k is that Θ_j = ∞ implies Θ_k = ∞ for all k ≥ j. Indeed, the "2" in T^{l,Z}_{2k} causes Θ_k to depend only on {x ∈ Z^d | x · l ∈ {2k, 2k+1}}, and those are disjoint parts of the environment.
For t_0, ..., t_k ∈ N, using the Markov property at time T^{l,Z}_{2k}, the independence and the translation invariance of P^{(α)}, one has: where, under P^{(α)}, the random variables Θ̃_k are independent and have the same distribution as Θ_0. From this, we deduce that for all A ⊂ N^N, In particular, for a > κ, In order to bound this probability, we compute the tail of the distribution of Θ_0 using Stirling's formula: with c a constant. We can then use the limit theorem for stable laws (see for example [3]), which gives: (Θ̃_0 + ... + Θ̃_{k−1})/k^{1/κ} converges in law, where the limit Y has a non-degenerate distribution. Then for a > κ, As T^{l,Z}_{2k} ≥ Θ_0 + ... + Θ_{k−1}, it gives Using an inversion argument, we can now prove theorem 5.
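The stable-law scaling used here can be checked by simulation: if P(Θ > t) behaves like c t^{−κ} with κ < 1, the sums Θ̃_0 + ... + Θ̃_{k−1} grow like k^{1/κ}. In the sketch below we replace the true law of Θ_0 by a Pareto(κ) variable (an assumption of ours, for illustration only) and estimate the growth exponent of the median of the sums:

```python
import math
import random

def pareto(kappa, rng):
    # P(X > t) = t^(-kappa) for t >= 1 (inverse-transform sampling)
    return (1.0 - rng.random()) ** (-1.0 / kappa)

def median_sum(k, kappa, trials, rng):
    """Empirical median of the sum of k i.i.d. Pareto(kappa) variables."""
    sums = sorted(sum(pareto(kappa, rng) for _ in range(k))
                  for _ in range(trials))
    return sums[trials // 2]

rng = random.Random(3)
kappa = 0.5
m_small = median_sum(10, kappa, 2000, rng)
m_large = median_sum(1000, kappa, 2000, rng)
# growth exponent of the median between k = 10 and k = 1000
slope = (math.log(m_large) - math.log(m_small)) / math.log(100.0)
print(round(slope, 1))  # close to 1/kappa = 2
```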
Proof of theorem 5. We write Z̃_n = max_{i≤n} Z_i · l. As Z̃_n ≥ m ⇔ T^{l,Z}_m ≤ n, theorem 6 gives that for any ε > 0 we have, for n big enough, n^{κ−ε} ≤ Z̃_n ≤ n^{κ+ε} in P^{(α)}-probability.
As Z_n · l is transient, we can introduce renewal times τ_i for the direction l (see [17], or [20] p. 71, for a detailed construction) such that τ_i < +∞ P^{(α)}-a.s. for all i. Then When the walk Z_n · l discovers a new vertex in direction l, there is a positive probability that this vertex will be the next Z_{τ_i}. As the vertices have i.i.d. exit probabilities under P^{(α)}, this probability is independent of the newly discovered vertex, and is independent of the path that led to this vertex. Then (Z_{τ_{i+1}} − Z_{τ_i}) · l follows a geometric law of parameter P^{(α)}(Z_0 = Z_{τ_1}), for all i ∈ N. This means that we can find two positive constants C and c such that for all n, P^{(α)}((Z_{τ_{i+1}} − Z_{τ_i}) · l ≥ n) ≤ Ce^{−cn}. The Borel-Cantelli lemma then gives that, for n big enough, max_{i=0,...,n−1} (Z_{τ_{i+1}} − Z_{τ_i}) · l ≤ (log n)^2, P^{(α)}-a.s.
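The union bound behind this Borel-Cantelli step can be checked numerically: with geometric tails P(gap ≥ n) ≤ C e^{−cn}, the probability that the maximum of n renewal gaps exceeds (log n)^2 is at most n C e^{−c (log n)^2}, which is summable in n. A sketch of ours with arbitrary illustrative constants:

```python
import math

def union_bound(n, C=1.0, c=0.5):
    """Upper bound n * C * exp(-c (log n)^2) for
    P(max of n geometric-tailed gaps >= (log n)^2)."""
    return n * C * math.exp(-c * math.log(n) ** 2)

# the series converges, and the bound decays faster than any power of n
total = sum(union_bound(n) for n in range(2, 100_000))
print(round(total, 3), union_bound(10 ** 6))
```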
Appendix A. Proof of lemma 2

The proof that follows is largely inspired by the article [18] of Tournier. His result cannot, however, be directly applied here, as γ_ω(x) differs from G^{ω,Λ}(x, x), and some of the paths he considers are not necessarily simple paths. To adapt the proof to our case, we need an additional assumption on the graph (a symmetry property for the edges), which simplifies the proof (the construction of the set C(ω) is much shorter).
To prove the result, we consider the case of finite directed graphs with a cemetery vertex. A vertex δ is said to be a cemetery vertex when no edge exits δ and every vertex is connected to δ through a directed path. We furthermore suppose that the graphs have no multiple edges, no elementary loops (consisting of one edge starting and ending at the same point), and that if (x, y) ∈ E and y ≠ δ, then (y, x) ∈ E.
We need a definition of γ_ω(x) for those graphs. Let G = (V ∪ {δ}, E) be a finite directed graph, (α(e))_{e∈E} a family of positive real numbers, P^{(α)} the corresponding Dirichlet distribution, and (Z_n) the associated random walk in Dirichlet environment. We need the following stopping times: the hitting times H_x = inf{n ≥ 0 | Z_n = x} and H̃_x = inf{n ≥ 1 | Z_n = x} for x ∈ V ∪ {δ}, the exit time T_A = inf{n ≥ 0 | Z_n ∉ A} for A ⊂ V, and the time of the first loop L = inf{n ≥ 1 | ∃n_0 < n such that Z_n = Z_{n_0}}.
For x in such a G, we define γ_ω(x) = (Σ_σ ω_σ)^{−1}, where we sum over simple paths σ from x to δ. In the following, we denote by 0 an arbitrary fixed vertex of G. We use the notations \underline{A} = {\underline{e} | e ∈ A} and \overline{A} = {\overline{e} | e ∈ A} for A ⊂ E, and we call strongly connected a subset A of E such that for all x, y ∈ \underline{A} ∪ \overline{A}, there is a path in A from x to y. Remark that if A is strongly connected, then \underline{A} = \overline{A}.
For the new function γ_ω on G, we get the following result.

Theorem 7. Let G = (V ∪ {δ}, E) be a finite directed graph, where δ is a cemetery vertex. We furthermore suppose that G has no multiple edges, no elementary loops, and that if (x, y) ∈ E and y ≠ δ, then (y, x) ∈ E. Let (α(e))_{e∈E} be a family of positive real numbers, and P^{(α)} the corresponding Dirichlet distribution. Let 0 ∈ V. There exist c, C, r > 0 such that, for t large enough, c t^{−β} ≤ P^{(α)}(γ_ω(0) ≥ t) ≤ C t^{−β} (ln t)^r, where β = min β_A, the minimum being taken over all strongly connected subsets A of E such that 0 ∈ \underline{A}, and β_A = Σ_{e∈∂_+(A)} α(e) (we recall that ∂_+(K) = {e ∈ E, \underline{e} ∈ K, \overline{e} ∉ K}).
In Z^d, we can identify Λ^c (where Λ is the subset involved in the construction of γ_ω) with a cemetery vertex δ. We obtain a graph on which the two definitions of γ_ω coincide, and which satisfies the hypotheses of theorem 7. Among the strongly connected subsets A of edges such that \underline{A} contains a given x, the ones minimizing the "exit sum" β_A consist of only two edges (x, x+e_i) and (x+e_i, x), i ∈ [[1, 2d]]. Then κ = min_{i=1,...,d} (2 Σ_{j=1}^{2d} α_j − α_i − α_{i+d}).

Proof of theorem 7. This proof is based on the proof of the "upper bound" in [18]. We need lower bounds on the probability to reach δ by a simple path. We construct a random subset C(ω) on which a weaker ellipticity condition holds. Quotienting by this subset allows us to get a lower bound for the equivalent of P^ω_0(H_δ < H̃_0 ∧ L) in the quotient graph. Proceeding by induction then allows us to conclude.
We proceed by induction on the number of edges of G. More precisely, we prove: Let G = (V ∪ {δ}, E) be a directed graph possessing at most n edges, and such that every vertex is connected to δ by a directed path. We furthermore suppose that G has no multiple edges, no elementary loops, and that if (x, y) ∈ E and y ≠ δ, then (y, x) ∈ E. Let (α(e))_{e∈E} be positive real numbers. Then, for every vertex 0 ∈ V, there exist real numbers C, r > 0 such that, for small ε > 0, P^{(α)}(P^ω_0(H_δ < H̃_0 ∧ L) ≤ ε) ≤ C ε^β (− ln ε)^r, where β = min{β_A | A is a strongly connected subset of E with 0 ∈ \underline{A}}.
If |E| = 2, the only possible edges link 0 to δ and another vertex x to δ; then P^ω_0(H_δ < H̃_0 ∧ L) = 1 and the property is true. Let n ∈ N^*. We suppose the induction hypothesis to be true at rank n. Let G = (V ∪ {δ}, E) be a directed graph with n + 1 edges, such that every vertex is connected to δ by a directed path. We furthermore suppose that G has no multiple edges, no elementary loops, and that if (x, y) ∈ E and y ≠ δ, then (y, x) ∈ E. Let (α(e))_{e∈E} be positive real numbers. To get a "weak ellipticity condition", we introduce the random subset C(ω) of E constructed as follows. Construction of C(ω). Let ω ∈ Ω. Let x be chosen so that ω(0, x) is a maximizer of ω(0, y) over y ∼ 0. If x ≠ δ, we set If x = δ, we set C(ω) = {(0, δ)}. Remark that C(ω) is well defined as soon as x is uniquely defined, which is almost surely the case, as there is always a directed path heading to δ.
The support of the distribution of ω ↦ C(ω) can be written as a disjoint union C = C_0 ∪ C_δ, depending on whether x ≠ δ or x = δ. For C ∈ C, we define the event As C is finite, it is sufficient to prove the upper bound separately on each event E_C. If C ∈ C_δ, then on E_C, P^ω_0(H_δ < H̃_0 ∧ L) ≥ P^ω_0(Z_1 = δ) ≥ 1/|E| by construction of C(ω). Then we have for small ε > 0: In the following, we will therefore work on E_C, for C ∈ C_0 (i.e. when x ≠ δ). In this case, C is strongly connected. Quotienting procedure. We consider the quotient graph G̃ obtained by contracting C(ω), which is a strongly connected subset of E, to a new vertex 0̃. We need to define the associated quotient environment ω̃ ∈ Ω̃. For every edge e ∈ Ẽ, if e ∉ ∂_+C then ω̃(e) = ω(e), and if e ∈ ∂_+C, then ω̃(e) = ω(e)/Σ, where Σ = Σ_{e∈∂_+C} ω(e). This environment allows us to bound γ_ω(0) by the similar quantity in G̃. Notice that, from 0, one way for the walk to reach δ without coming back to 0 and without making loops consists in exiting C without coming back to 0, and then reaching δ without coming back to C (to 0 or x) and without making loops. Then, for ω ∈ E_C, where we used the Markov property, the construction of C, 1/|E| ≤ 1, and the definition of the quotient. Finally, we have Back to the Dirichlet environment. Under P^{(α)}, ω̃ does not follow a Dirichlet distribution, because of the normalization. But we can reduce to the Dirichlet situation with the following lemma (a particular case of lemma 9 in [18]). Let Σ = Σ_{e∈∂_+C} ω(e) and β_C = Σ_{e∈∂_+C} α(e). There exist positive constants c, c′ such that, for every ε > 0, P^{(α)}(Σ P^{ω̃}_{0̃}(H_δ < H̃_{0̃} ∧ L) ≤ ε) ≤ c P̃^{(α)}(Σ̃ P^{ω}_{0̃}(H_δ < H̃_{0̃} ∧ L) ≤ ε), where P̃^{(α)} is the Dirichlet distribution with parameters (α(e))_{e∈Ẽ} on Ω̃, ω is the canonical random variable on Ω̃, and, under P̃^{(α)}, Σ̃ is a positive bounded random variable independent of ω and such that, for all ε > 0, P̃^{(α)}(Σ̃ ≤ ε) ≤ c′ ε^{β_C}.
Remark that the symmetry property we imposed on the edges is important here: if there were no edge from x to 0, the probability for a walk in G̃ to exit 0̃ through one of the edges exiting x in G would necessarily be bigger than 1/2. Then, asymptotically, it could not be bounded by Dirichlet variables.
This lemma and (A.1) give: Induction. Inequality (A.2) relates the same quantities in G and G̃, allowing us to complete the induction argument.
The edges of C do not appear in G̃ anymore: G̃ has n − 2 edges. In order to apply the induction hypothesis, we need to check that each vertex is connected to δ. This results directly from the same property for G. If (x, y) ∈ Ẽ and y ≠ δ, then (x, y) ∉ C(ω) and (y, x) ∉ C(ω). As only the edges of C(ω) disappeared, (y, x) ∈ Ẽ. G̃ has no elementary loop. Indeed, G has none, and the quotienting only merges the vertices of C, whose joining edges are those of C, deleted in the construction. It only remains to prove that G̃ has no multiple edges. This is not necessarily the case (quotienting may have created multiple edges), but it is possible to reduce to this case, using the additivity property of the Dirichlet distribution.
The induction hypothesis applied to G̃ and 0̃ then gives, for small ε > 0, where c″ > 0, r > 0 and β̃ is the exponent "β" from the statement of the induction hypothesis corresponding to the graph G̃. This inequality, associated with (A.2) and the following simple lemma (see [18] for its proof), allows us to carry out the induction:

Lemma 6. If X and Y are independent positive bounded random variables such that, for some real numbers α_X, α_Y, r > 0,
• there exists C > 0 such that P(X < ε) ≤ C ε^{α_X} for all ε > 0 (or equivalently for small ε);
• there exists C′ > 0 such that P(Y < ε) ≤ C′ ε^{α_Y} (− ln ε)^r for small ε > 0;
then there exists a constant C″ > 0 such that, for small ε > 0, P(XY ≤ ε) ≤ C″ ε^{α_X ∧ α_Y} (− ln ε)^{r+1} (and r + 1 can be replaced by r if α_X ≠ α_Y).
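Lemma 6 can be illustrated numerically in the case r = 0 with α_X ≠ α_Y: taking X = U^{1/a} and Y = V^{1/b} for independent uniform variables (so that P(X < ε) = ε^a and P(Y < ε) = ε^b exactly), the tail of the product is of order ε^{a∧b}. For a = 1/2, b = 1 the exact value is 2√ε − ε, which the following numerical integration (our own sketch) recovers:

```python
import math

def prob_product_below(eps, a, b, steps=200_000):
    """P(XY <= eps) = integral over v in (0,1) of min(1, (eps / v^(1/b))^a),
    approximated by the midpoint rule."""
    total = 0.0
    for i in range(steps):
        v = (i + 0.5) / steps
        total += min(1.0, (eps / v ** (1.0 / b)) ** a)
    return total / steps

eps = 1e-4
approx = prob_product_below(eps, 0.5, 1.0)
exact = 2 * math.sqrt(eps) - eps   # closed form for a = 1/2, b = 1
print(abs(approx - exact) < 1e-4)  # True
```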
It remains to prove that β̃ ≥ β, where β is the exponent defined in the induction hypothesis relative to G and 0. Let Ã be a strongly connected subset of Ẽ such that 0̃ ∈ \underline{Ã}. Set A = Ã ∪ C ⊂ E. In view of the definition of Ẽ, every edge exiting Ã corresponds to an edge exiting A, and vice versa (the only edges deleted in the quotienting procedure are those of C). Thus, recalling that the weights of the edges are preserved in the quotient, β_Ã = β_A. Moreover, 0 ∈ \underline{A} and A is strongly connected, so that β_A ≥ β. As a consequence, β̃ ≥ β, as announced.
Then β_C ∧ β̃ ≥ β_C ∧ β = β, because C is strongly connected and 0 ∈ \underline{C}. It gives, for small ε > 0: Summing over all events E_C, C ∈ C, concludes the induction and the proof.