Vertex reinforced non-backtracking random walks: an example of path formation

This article studies vertex reinforced random walks that are non-backtracking (denoted VRNBW), i.e. U-turns are forbidden. With this last property and for a strong reinforcement, the emergence of a path may occur with positive probability. These walks are thus useful for modeling the path formation phenomenon observed, for example, in ant colonies. The study is carried out in two steps. First, a large class of reinforced random walks is introduced and results on the asymptotic behavior of these processes are proved. Second, these results are applied to VRNBWs on complete graphs with reinforcement weights $W(k)=k^\alpha$, $\alpha\ge 1$. It is proved that for $\alpha>1$ and $3\le m<\frac{3\alpha -1}{\alpha-1}$, the walk localizes on $m$ vertices with positive probability, each of these $m$ vertices being asymptotically equally visited. Moreover, localization on $m>\frac{3\alpha -1}{\alpha-1}$ vertices is a.s. impossible.


Introduction
The contributions of this paper are twofold. First, results concerning the asymptotic behavior of a large class of reinforced random walks (RRW) are proved. Second, we present a strongly reinforced random walk that is useful for modeling the path formation phenomenon.
By formation of a path, we mean that after a certain time, the walk only visits a finite number of vertices, always in the same order. Such phenomena are observed in ant colonies. For some species, ants deposit pheromones along their trajectories. The pheromone is a chemical substance which attracts the ants of the same colony, and thus reinforces the sites visited by the ants. Depending on the succession of these deposits, trails appear between important places such as food sources and nest entries.
RRWs on graphs are natural models for such behavior: the most visited vertices are more likely to be visited again. They have already been used to study ant behavior (see [DAGP90, VTG+06, GGC+09]). But as they are usually defined (see [Dav90, Tar04, Sel08, LT07, BRS13]), one can obtain a localization phenomenon, i.e. only a finite number of vertices are visited infinitely often, but no path formation is observed: there is no fixed order in which these vertices are visited by the walk.
Therefore, additional rules are necessary for the emergence of a path. In this paper, we choose to add a non-backtracking constraint: the walk cannot return immediately to the vertex it comes from. More precisely, let $G = (\mathcal{X}, E)$ be a locally finite non-oriented graph without loops, with $\mathcal{X}$ the set of its vertices and $E \subset \{\{i, j\} : i, j \in \mathcal{X},\ i \neq j\}$ the set of its non-oriented edges. For $\{i, j\} \in E$, denote $i \sim j$, and for $i \in \mathcal{X}$, let $N(i) := \{j \in \mathcal{X} : j \sim i\}$ be the neighborhood of $i$. Let $X = (X_n)_{n\ge 0}$ be a non-backtracking random walk on $G$, i.e. for $n \ge 0$, $X_{n+1} \sim X_n$ and for $n \ge 1$, $X_{n+1} \neq X_{n-1}$. We suppose that this walk is vertex reinforced: for $n \ge 0$ and $i \in \mathcal{X}$,
$$\mathbb{P}(X_{n+1} = i \mid X_0, \dots, X_n) = \frac{W(Z_n(i))}{\sum_{j\sim X_n,\, j \neq X_{n-1}} W(Z_n(j))},$$
where $Z_n(i)$ is the number of times the walk $X$ has visited $i$ up to time $n$ and $W : \mathbb{N} \to \mathbb{R}_+^*$ is a reinforcement function. The walk $X$ is called a vertex reinforced non-backtracking random walk (VRNBW). Non-backtracking random walks were first introduced in Section 5.3 of [MS93], and were later named non-backtracking random walks in [OW07].
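The dynamics just described are straightforward to simulate. The sketch below (in Python; the function and variable names are ours, and the choice of graph, starting rule and parameters in the usage example are illustrative, not taken from the paper) chooses $X_{n+1}$ among the neighbors of $X_n$, excluding $X_{n-1}$, with probability proportional to $W(Z_n(\cdot))$ for $W(k) = (1+k)^\alpha$:

```python
import random
from collections import Counter

def vrnbw(neighbors, alpha, n_steps, rng):
    """Simulate a vertex reinforced non-backtracking random walk.

    neighbors: dict mapping each vertex to its set of neighbors (non-oriented
    graph without loops; every vertex is assumed to have degree >= 2, so a
    non-backtracking step is always possible).  The next vertex is chosen
    among the neighbors of the current one, excluding the previous one, with
    probability proportional to W(Z_n(i)) = (1 + Z_n(i)) ** alpha, where
    Z_n(i) is the number of past visits to i.  Returns (X_0, ..., X_{n_steps}).
    """
    x_prev = None
    x = rng.choice(sorted(neighbors))            # X_0: arbitrary start
    visits = Counter({x: 1})                     # visit counts, including X_0
    path = [x]
    for _ in range(n_steps):
        cand = [j for j in neighbors[x] if j != x_prev]   # no U-turn
        weights = [(1 + visits[j]) ** alpha for j in cand]
        x_prev, x = x, rng.choices(cand, weights=weights)[0]
        visits[x] += 1
        path.append(x)
    return path
```

With a strong reinforcement (large $\alpha$) on a complete graph, the empirical occupation measure of such a run tends to concentrate on a few vertices, in line with the localization results discussed below.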
The following result shows that for a strong reinforcement, with positive probability, VRNBWs build a path.
where $a_\ell = (d_{i_\ell} - 2)W(0)$. Since $\sum_{k=0}^{\infty} \frac{1}{W(k)} < \infty$, this probability is positive.

The general study of RRWs (in order to obtain almost sure properties) is difficult. Even without the non-backtracking constraint, almost sure localization on two vertices could only be proved recently, by C. Cotar and D. Thacker in [CT16], for vertex reinforced random walks (VRRWs) on connected non-oriented graphs of bounded degree with a reinforcement function $W$ satisfying $\sum_{k=0}^{\infty} \frac{1}{W(k)} < \infty$. Using stochastic algorithm techniques, and more precisely results from [BR10], a more complete study of VRRWs on complete graphs with reinforcement function $W(k) = (1+k)^\alpha$, $\alpha \ge 1$, could be done by R. Pemantle in [Pem92] in the case $\alpha = 1$, and by M. Benaïm, O. Raimond and B. Schapira [BRS13] in the case $\alpha > 1$. The principle of these methods is to prove that the evolution of the empirical occupation measure of the walk is well approximated by an ordinary differential equation (ODE). To make this possible, some hypotheses are made so that for large times the walk behaves almost as an indecomposable Markov chain, whose mixing rate is uniformly bounded from below.
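The role of the summability condition $\sum_{k\ge 0} 1/W(k) < \infty$ is easy to illustrate numerically for $W(k) = (1+k)^\alpha$: the partial sums stay bounded when $\alpha > 1$ (by the integral test, below $1 + \frac{1}{\alpha-1}$) and diverge logarithmically when $\alpha = 1$. A minimal sketch (the helper name is ours):

```python
def reciprocal_partial_sum(alpha, n):
    """Partial sum sum_{k=0}^{n-1} 1 / W(k) for W(k) = (1 + k) ** alpha."""
    return sum((1.0 + k) ** (-alpha) for k in range(n))
```

For instance, with $\alpha = 2$ the partial sums approach $\pi^2/6 \approx 1.645$, while for $\alpha = 1$ they grow like $\log n$.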
Because of the non-backtracking constraint, this last property fails for VRNBWs. To overcome this difficulty, we set up a framework which is a broad particular case of the one in [BR10]. More precisely, we introduce a class of RRWs which contains vertex and edge reinforced random walks (possibly non-backtracking) on non-oriented graphs. In order to introduce a dependence on the previously visited vertex, a walk in this class is defined via a process on the set of edges. This was not necessary in [BRS13]. Moreover, at each time step, what is reinforced is a function of the edge that has just been traversed. We prove a result similar to Theorem 2.6 of [BR10] (approximation by an ODE), but under different assumptions.
Applying these results, we study VRNBWs on the complete graph with $N \ge 4$ vertices and reinforcement function $W(k) = (1+k)^\alpha$, $\alpha \ge 1$. Such VRNBWs are then equivalent to urns with $N$ colors, in which the last two chosen colors are forbidden. Note that, for a complete graph, the sets $C$ as in Proposition 1.1 are the sets constituted by three different vertices.
Let us now state our main result for VRNBWs. Denote by $S \subset \mathcal{X}$ the set of vertices visited infinitely often by $X$. The non-backtracking assumption implies that $|S| \ge 3$, and a path has been selected only when $|S| = 3$.
On complete graphs, only paths on three vertices can be formed. But Proposition 1.1 shows that on more general graphs, VRNBWs produce more elaborate paths with positive probability. Together with Theorem 1.2, this result leads to the following conjecture.

Conjecture 1.3. Let $X$ be a VRNBW on a connected non-oriented finite graph, with reinforcement function $W(k) = (1+k)^\alpha$. Suppose that $\alpha > 3$. Then a.s. there are a random path $C = \{i_1, \dots, i_L\}$ as in Proposition 1.1 and $k_0 \ge 1$ such that for all $k \ge k_0$ and $\ell \in \{1, \dots, L\}$, $X_{kL+\ell} = i_\ell$.
Proving such a conjecture is a difficult task. Note that the ordered statistics method used in [CT16] is unlikely to be applicable to VRNBWs.
The phase transitions given by Theorem 1.2 are interesting for the understanding of ant behavior. Indeed, when $\alpha > 3$, a path is formed. Thus, if ants were acting like a VRNBW and could change their sensitivity to pheromones by modulating the parameter $\alpha$, they could either make sure that a path will emerge ($\alpha > 3$) or continue to explore a selected area ($\alpha < 3$). Simulation studies of agent-based models provide similar results (see [SLF97, PGG+ ]).

The paper is organized as follows. The main notations of the paper are given in Section 2. In Section 3, the class of RRWs, introduced in Section 3.1, is studied. The main results are stated in Section 3.2 and their proofs are given in Sections 3.3 and 3.4. In Section 4, the results of Section 3 are applied to VRNBWs on complete graphs, and Theorem 1.2 is proved. In Sections 4.2, 4.3, 4.4 and 4.5, we verify that these VRNBWs satisfy the hypotheses of Section 3. This is the most delicate part of the paper, where we had to deal with the fact that the transition matrices of the walk may mix very slowly. A Lyapunov function is defined in Section 4.6. The description of the set of equilibria, given in Sections 4.7 and 4.8, is also much more involved than the one done for VRRWs in [BRS13].

Notations
Let $A$ be a finite set. We will often identify $A$ with the set $\{1, \dots, N\}$, where $N = |A|$. We denote by $S_N$ the set of all permutations of $N$ elements.
For a map $f : A \to \mathbb{R}$, we denote $\min(f) = \min\{f(i) : i \in A\}$ and $\max(f) = \max\{f(i) : i \in A\}$. Denote by $1_A$ the map on $A$ which is equal to one everywhere.
A map $\mu : A \to \mathbb{R}$ will be viewed as a (signed) measure on $A$ and, for $B \subset A$, $\mu(B) = \sum_{i\in B} \mu(i)$. For a measure $\mu$ on $A$ and $f : A \to \mathbb{R}$, set $\mu f = \sum_{i\in A} \mu(i) f(i)$. A probability measure on $A$ is a measure $\mu$ such that $\mu(A) = 1$ and $\mu(i) \ge 0$ for all $i \in A$. The support of $\mu$, denoted $\mathrm{Supp}(\mu)$, is the set of all $i \in A$ such that $\mu(i) > 0$. The space $M_A$ of signed measures on $A$ can be viewed as a Euclidean space of dimension $|A|$, with associated Euclidean norm denoted by $\|\cdot\|$. Subsets of $M_A$ will be equipped with the distance induced by this norm.
We denote by $\Delta_A$ the set of all probability measures on $A$. For $K \le N$, we denote by $\Delta_A^K$ the set of probability measures on $A$ whose support contains exactly $K$ points. For $\Sigma \subset \Delta_A$, let $\Sigma^K$ be defined by $\Sigma^K = \Sigma \cap \Delta_A^K$. Let $A$ and $B$ be two finite sets and let $T : A \times B \to \mathbb{R}$ be a map. For a map $f : B \to \mathbb{R}$, define $Tf : A \to \mathbb{R}$ by $Tf(a) = \sum_{b\in B} T(a, b) f(b)$. For $a \in A$, $T(a)$ is the measure on $B$ defined by $T(a)(b) = T(a, b)$, for $b \in B$. This measure will also be denoted $T(a, \cdot)$ or $T_a$. Note that $T(a)f = T_a f = Tf(a)$.
For $T : A \times B \to \mathbb{R}$ and $U : B \times C \to \mathbb{R}$, with $A$, $B$ and $C$ three finite sets, define $TU : A \times C \to \mathbb{R}$ by $TU(a, c) = \sum_{b\in B} T(a, b) U(b, c)$. Let $A$ and $B$ be two finite sets. A transition matrix from $A$ to $B$ is a map $V : A \times B \to [0, 1]$ such that $V_a \in \Delta_B$ for all $a \in A$. A Markov matrix on a finite set $A$ is a transition matrix from $A$ to $A$. We denote by $\mathcal{M}_A$ the set of all Markov matrices on $A$. A Markov matrix $P \in \mathcal{M}_A$ is indecomposable if it possesses a unique recurrent class; this set $R$ is called the recurrent class of $P$. It is well known that an indecomposable Markov matrix $P$ has a unique invariant probability measure $\pi \in \Delta_A$, characterized by the relation $\pi P = \pi$. Moreover, the generator $-I + P$ has kernel $\mathbb{R}1_A$ and its restriction to $\{f : A \to \mathbb{R} : \pi f = 0\}$ is an isomorphism. It then follows that $I - P$ admits a pseudo-inverse $Q$, characterized by $Q(I - P) = (I - P)Q = I - \Pi$ and $Q\Pi = \Pi Q = 0$, where $\Pi \in \mathcal{M}_A$ is defined by $\Pi(i, j) = \pi(j)$, for $i, j \in A$. In other words, $\Pi$ is the orthogonal projection on $\mathbb{R}1_A$ for the scalar product $\langle f, g\rangle_\pi = \sum_{i\in A} f(i) g(i) \pi(i)$. In particular, for all $i \in A$ and $f : A \to \mathbb{R}$, $\Pi f(i) = \pi f$. Norms, both denoted by $\|\cdot\|$, are fixed on the set of functions on $A$ and on $\mathcal{M}_A$. For $\delta > 0$ and $f : A \to \mathbb{R}$, we denote by $B(f, \delta) = \{g : A \to \mathbb{R} : \|f - g\| \le \delta\}$ the closed ball of radius $\delta$ centered at $f$ for the norm $\|\cdot\|$.
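The characterization of the pseudo-inverse $Q$ can be checked numerically through the fundamental matrix $(I - P + \Pi)^{-1}$, a classical identity for Markov matrices with a unique invariant probability measure. A sketch (the function name is ours, and this is one standard way to compute $Q$, not necessarily the paper's):

```python
import numpy as np

def pseudo_inverse(P):
    """Pseudo-inverse Q of I - P for an indecomposable Markov matrix P,
    computed as Q = (I - P + Pi)^(-1) - Pi, where Pi(i, j) = pi(j) and
    pi is the invariant probability measure of P."""
    n = P.shape[0]
    # invariant measure: solve pi (I - P) = 0 together with sum(pi) = 1
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    Pi = np.tile(pi, (n, 1))                     # every row equals pi
    Q = np.linalg.inv(np.eye(n) - P + Pi) - Pi
    return pi, Pi, Q
```

One can then verify on any indecomposable example that $Q(I-P) = (I-P)Q = I - \Pi$ and $Q\Pi = \Pi Q = 0$.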
If $Q \in \mathcal{M}_A$ and $V$ is a transition matrix from $A$ to $B$, then for all $a \in A$, $QV(a) = Q(a)V$, and $QV$ is a transition matrix from $A$ to $B$. Let $\Gamma$ be a compact subset of the Euclidean space $\mathbb{R}^N$. The interior of $\Gamma$ is denoted by $\mathring\Gamma$ and its boundary by $\partial\Gamma = \Gamma \setminus \mathring\Gamma$. The gradient at $v \in \mathring\Gamma$ of a differentiable map $H : \Gamma \to \mathbb{R}$ is denoted $\nabla H(v) = (\partial_1 H(v), \dots, \partial_N H(v))$, where $\partial_i H$ is the partial derivative of $H$ with respect to its $i$-th coordinate. Let $\langle\cdot, \cdot\rangle$ be the standard scalar product on $\mathbb{R}^N$.

A class of reinforced random walks
3.1. Definition. Let G = (X , E) be a finite non-oriented graph. To a non-oriented edge {i, j} ∈ E are associated two oriented edges, (i, j) and (j, i). Let E be the set of oriented edges. Set M = M E and M ind the set of indecomposable Markov matrices on E. Let R be a finite set, called the reinforcement set.
We study here discrete time random processes $((X_n, P_n, V_n))_{n\ge 0}$ defined on $(\Omega, \mathcal{F}, \mathbb{P})$, a probability space equipped with a filtration $(\mathcal{F}_n)_{n\ge 0}$. These processes take their values in $\mathcal{X} \times \mathcal{M} \times \Delta_R$ and are such that
• $(X_n, P_n, V_n)$ is $\mathcal{F}_n$-measurable for each $n \ge 0$,
• $E_n := (X_{n-1}, X_n) \in E$ for all $n \ge 1$,
• the conditional law of $E_{n+1}$ with respect to $\mathcal{F}_n$ is $P_n(E_n)$, i.e. $\mathbb{P}(E_{n+1} = e \mid \mathcal{F}_n) = P_n(E_n, e)$ for all $e \in E$.
Set $d = |R|$. For each $n \ge 0$, let $v_n \in \Delta_R$ be the reinforcement probability measure at time $n$, defined by $v_n = \frac{1}{n+d}\big(1_R + \sum_{k=1}^{n} V_k\big)$. The class of random processes we study also satisfies the following hypotheses.
Hypotheses 3.1. There are a transition matrix $V$ from $E$ to $R$, a compact convex subset $\Sigma$ of $\Delta_R$ and a continuous map $P : \Sigma \to \mathcal{M}$ such that for all $n \ge 1$: (i) $V_n = V(E_n)$, (ii) $v_n \in \mathring\Sigma$, and (iii) $P_n = P(v_n)$.
These hypotheses determine the conditional law of $((X_n, P_n, V_n))$ with respect to $\mathcal{F}_1$. More precisely, Hypothesis 3.1-(i) gives the type of reinforcement, Hypothesis 3.1-(ii) gives a set to which the reinforcement probability measures belong, and Hypothesis 3.1-(iii) gives the transition matrices, i.e. how $((X_n, P_n, V_n))$ is reinforced.
(i) When $R = \mathcal{X}$ and $V$ is the transition matrix defined by $V((i, j), k) = \delta_j(k)$, the walk is vertex reinforced and, for each $n$, $V_n = \delta_{X_n}$ and $v_n$ is the empirical occupation measure at time $n$ of the vertices by $(X_n)$. (ii) When $R = E$ and $V$ is the transition matrix defined by $V((i, j), e) = \delta_{\{i, j\}}(e)$ for $e \in E$, the walk is edge reinforced and, for each $n$, $V_n = \delta_{\{X_{n-1}, X_n\}}$ and $v_n$ is the empirical occupation measure at time $n$ of the non-oriented edges by $(X_n)$.
These are rather usual examples, but our setup makes it possible to study other reinforced processes by choosing different transition matrices $V$. For example, one can take $R = \{A : A \subset \mathcal{X}\}$ and $V((i, j), A) = 1$ if $A = N(j)$ and $V((i, j), A) = 0$ otherwise; then it is not the currently visited vertex that is reinforced, but all of its neighbors. Note that, in most examples of interest, Hypotheses 3.1 are easily verified.

3.2. Main results of Section 3. A description of the asymptotic behavior of $(v_n)$ by an ODE is given below in Theorem 3.8, under the following hypotheses.

Remark 3.4. The present paper is largely inspired by [BR10], but Hypotheses 3.1 do not fit into the set-up of [BR10]. Indeed, the probability measure $V_n$ does not necessarily belong to $\Sigma$, and in [BR10] the map $v \mapsto P(v)$ would be a continuous mapping from $\Sigma$ to $\mathcal{M}_{\mathrm{ind}}$. This is not the case here: $P(v)$ may fail to be indecomposable for $v \in \partial\Sigma$. This creates an additional difficulty in the study of the random process $((X_n, P_n, V_n))$, but our results cover a larger class of reinforced walks.

Hypothesis 3.3-(ii) makes it possible to define, for $v \in \mathring\Sigma$, the pseudo-inverse $Q(v)$ of $I - P(v)$.
A consequence of Hypothesis 3.3-(i) is that $\pi : \mathring\Sigma \to \Delta_E$ and $\pi^V : \mathring\Sigma \to \Delta_R$ are locally Lipschitz. For all $n$, set $\pi_n = \pi(v_n)$ and $\pi^V_n = \pi^V(v_n)$.

Example 3.6. If the walk is vertex reinforced, then $\pi^V_n \in \Delta_{\mathcal{X}}$ and $\pi^V_n = \sum_{i\in\mathcal{X}} \pi_n(i, \cdot)$. If the walk is edge reinforced, then $\pi^V_n \in \Delta_E$ and $\pi^V_n(\{i, j\}) = \pi_n(i, j) + \pi_n(j, i) - \pi_n(i, i)1_{i=j}$, for all $\{i, j\} \in E$. The following hypotheses will also be needed.
(i) The map $\pi^V : \mathring\Sigma \to \Delta_R$ is continuously extendable to $\Sigma$ and this extension is Lipschitz.
(ii) For all $e \in E$, the map $v \mapsto Q(v)V(e)$, defined on $\mathring\Sigma$, is continuously extendable to $\Sigma$.
(iii) There is a map $\mu : \Delta_R \to \Sigma$ such that $\mu$ is Lipschitz and its restriction to $\Sigma$ is the identity map on $\Sigma$.
Let $F : T^1\Delta_R \to T^0\Delta_R$ be the vector field defined by $F(v) = -v + \pi^V(\mu(v))$. Then $F$ is Lipschitz (using Hypothesis 3.7-(i)) and induces a global flow $\Phi$. A non-empty compact set $A$ is an attracting set if there exist a neighbourhood $U$ of $A$ and a function $t : (0, \varepsilon_0) \to \mathbb{R}_+$, with $\varepsilon_0 > 0$, such that for all $\varepsilon < \varepsilon_0$ and $t \ge t(\varepsilon)$, $\Phi_t(U) \subset A^\varepsilon$, where $A^\varepsilon$ stands for the $\varepsilon$-neighbourhood of $A$. An invariant attracting set is called an attractor.
A closed invariant set $A$ is called attractor free if there does not exist any subset $B$ of $A$ which is an attractor for $\Phi^A$, the flow $\Phi$ restricted to $A$. The limit set of $(v_n)$ is the set $L = L((v_n))$ consisting of all points $v = \lim_{k\to\infty} v_{n_k}$ for some sequence $n_k \to \infty$. Note that since $v_n \in \Sigma$ for all $n$, and since $\Sigma$ is compact, necessarily $L \subset \Sigma$. The following theorem is similar to Theorem 2.6 of [BR10].
Theorem 3.8. If Hypotheses 3.1, 3.3 and 3.7 are verified, then the limit set of $(v_n)$ is attractor free for $\Phi$, the flow induced by $F$.
In most examples of interest, Hypotheses 3.1 and 3.3 are easily verified. Hypothesis 3.7 may be difficult to check. It should be noted that these hypotheses do not imply Hypotheses 2.1 and 2.2 of [BR10]. There are also situations where one can check Hypotheses 3.1 and 3.3 but cannot hope to verify Hypotheses 2.1 and 2.2 of [BR10] (this is the case for the VRNBWs studied in Section 4).
When there is a strict Lyapunov function for $\Phi$, the set $L$ can be described more precisely. For this purpose, we define what an equilibrium and a strict Lyapunov function are.
The following theorem is a direct application of Proposition 3.27 of [BHS05].
Theorem 3.11. If Hypotheses 3.1, 3.3 and 3.7 hold, if there exists a strict Lyapunov function H for Φ and if H(Λ) has an empty interior, then L is a connected subset of Λ and the restriction of H to L is constant.
When $|\Lambda| < \infty$, the connected subsets of $\Lambda$ are singletons and we have

Corollary 3.12. If Hypotheses 3.1, 3.3 and 3.7 hold, if there exists a strict Lyapunov function $H$ for $\Phi$ and if $\Lambda$ is a finite set, then $v_\infty := \lim_{n\to\infty} v_n$ exists and $v_\infty \in \Lambda$.
In Section 3.4 we will discuss the convergence of $v_n$ towards an equilibrium according to its stability. More precisely, under some additional assumptions we will prove the convergence of $v_n$ towards any stable equilibrium with positive probability, and the non-convergence of $v_n$ towards unstable equilibria.
3.3. Proof of Theorem 3.8. To prove Theorem 3.8 we will use Proposition 5.1 of [BR10]. In the following lemma, we restate this proposition in our setting (with the notations of Proposition 5.1 of [BR10]): if condition (12) holds, then the limit set of $(v_n)$ is attractor free for the dynamics induced by $F$.
Remark 3.14. Actually, Proposition 5.1 of [BR10] states that $L$ is an internally chain transitive set. But a set is internally chain transitive if and only if it is attractor free. This result comes from the theory of asymptotic pseudo-trajectories. For more details, we refer the reader to [BHS05], more precisely to Section 3.3 for the definitions and to Lemma 3.5 and Proposition 3.20 for the equivalence.
Lemma 3.13 implies that Theorem 3.8 holds as soon as (12) holds for all $T \in \mathbb{N}^*$.

Lemma 3.15. If Hypotheses 3.1, 3.3 and 3.7 hold, then (12) is verified for all $T \in \mathbb{N}^*$.
Proof. Throughout this proof, $C$ is a non-random positive constant that may vary from line to line. For all $n$, set $Q_n = Q(v_n)$ and $\Pi_n = \Pi(v_n)$, and recall that $\pi_n = \pi(v_n)$ and $\pi^V_n = \pi^V(v_n)$. Remark that for all $e \in E$, $\Pi_n V(e) = \pi_n V = \pi^V_n$, by using (6). Hypotheses 3.7 and the compactness of $\Sigma$ imply that the maps $v \mapsto \pi^V(v)$ and $v \mapsto Q(v)V(e)$ are uniformly continuous on $\Sigma$, for all $e \in E$, and we use that $\|v_{n+1} - v_n\| \le C/n$. Moreover, for $n$ a positive integer, using the definition of $Q_n$, the increment decomposes with an error term $r_{n+1} = r_{n+1,1} + r_{n+1,2} + r_{n+1,3}$. For $T \in \mathbb{N}^*$, $n \in \mathbb{N}^*$ and $1 \le i \le 3$, set $\epsilon_n(T) = \sup_{n\le k\le nT} \left\|\sum_{q=n}^{k} \frac{\epsilon_q}{q}\right\|$ and $r_{n,i}(T) = \sup_{n\le k\le nT} \left\|\sum_{q=n}^{k} \frac{r_{q,i}}{q}\right\|$.

3.4. Stable and unstable equilibria.
To define the stability of an equilibrium, we assume that $F$ is differentiable at each equilibrium.

Definition 3.17. Let $v_*$ be an equilibrium. We say that $v_*$ is stable if all eigenvalues of $DF(v_*)$ have a negative real part, and that $v_*$ is unstable if at least one eigenvalue of $DF(v_*)$ has a positive real part.
Remark 3.18. If v * is a stable equilibrium, then {v * } is an attractor.
Definition 3.19. Let $v_*$ be an equilibrium. A stable (unstable) direction of $v_*$ is an eigenvector of $DF(v_*)$ associated with an eigenvalue with negative (positive) real part.
Remark 3.20. All eigenvectors of $DF(v_*)$, with $v_*$ a stable equilibrium, are stable directions, and an unstable equilibrium always has at least one unstable direction.
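In numerical experiments, the classification of Definitions 3.17 and 3.19 reduces to inspecting the spectrum of the Jacobian matrix. A minimal sketch (the function name and the tolerance are ours; $DF(v_*)$ is supplied as a plain matrix):

```python
import numpy as np

def classify_equilibrium(jac, tol=1e-9):
    """Classify an equilibrium from the Jacobian DF(v*): stable if all
    eigenvalues have negative real part, unstable if at least one
    eigenvalue has positive real part (cf. Definition 3.17)."""
    re = np.linalg.eigvals(np.asarray(jac, dtype=float)).real
    if np.all(re < -tol):
        return "stable"
    if np.any(re > tol):
        return "unstable"
    return "degenerate"   # only non-positive real parts, some (near) zero
```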
3.4.1. Convergence towards stable equilibria with positive probability. In this section, it is proved that if a stable equilibrium $v_*$ is attainable by $(v_n)$, then $v_n$ converges towards $v_*$ with positive probability.
The following theorem is a particular case of Theorem 7.3 of [Ben99] (using Remark 3.18).
Let $(E^*_n) = ((X^*_{n-1}, X^*_n))$ be a Markov chain with transition matrix $P_*$ and initial law $\pi_*$. Then $A_{i,j}$ is the set of vertices that can be reached by $X^*$ in one step from $j$ when the previously visited vertex is $i$, and $A_j$ is the set of vertices that can be reached by $X^*$ in one step from $j$.
Denote by A the support of π 1 .
Proof. Let $(E^*_n) = ((X^*_{n-1}, X^*_n))$ be a Markov chain with transition matrix $P_*$ and initial law $\pi_*$. For all $n \ge 1$, the law of $E^*_n = (X^*_{n-1}, X^*_n)$ is $\pi_*$, hence the law of $X^*_{n-1}$ is $\pi_1$ and the law of $X^*_n$ is $\pi_2$. Since $X^*_n$ is also the first coordinate of $E^*_{n+1}$, whose law is $\pi_*$, the law of $X^*_n$ is also $\pi_1$; thus $\pi_1 = \pi_2$. Since $\pi_* = \pi_* P_*$, $(j, k) \in R_*$ if and only if there exists $i \in \mathcal{X}$ such that $\pi_*(i, j) P_*((i, j), (j, k)) > 0$. This is equivalent to the existence of $i \in \mathcal{X}$ such that $(i, j) \in R_*$ and $k \in A_{i,j}$, i.e. $k \in A_j$.
Note finally that $A_j$ is not empty if and only if $j \in \mathrm{Supp}(\pi_2) = \mathrm{Supp}(\pi_1)$.
Lemma 3.25. There exists $m \ge 1$ such that for all $e \in E$ and $e' \in R_*$, $\sum_{q=1}^{m} P^q_*(e, e') > 0$.

Proof. Since $R_*$ is the unique recurrent class of $P_*$ and $|E| < \infty$, there exists $m \ge 1$ such that for all $e \in E$ and $e' \in R_*$, there exists $q \le m$ for which $P^q_*(e, e') > 0$.

Hypotheses 3.26.
(ii) For all $j \in A$ and $k, k' \in A_j$, there exists $i \in \mathcal{X}$ such that $(i, j) \in R_*$ and $k, k' \in A_{i,j}$.
(iii) There do not exist a constant $C$ and a map $g : \mathcal{X} \to \mathbb{R}$ such that $Vf(i, j) = C + g(i) - g(j)$ for all $(i, j) \in R_*$.

This section is devoted to the proof of the following theorem.
Theorem 3.27. Let $v_*$ be an unstable equilibrium. If Hypotheses 3.1, 3.3, 3.7 and 3.26 hold, then $\mathbb{P}(v_n \to v_*) = 0$.

Proof. Throughout this proof, $C$ will denote a non-random positive constant that may vary from line to line. The expressions of $\epsilon_{n+1}$ and $r_{n+1}$ are given by (16) and (17), and we recall that $\mathbb{E}[\epsilon_{n+1} \mid \mathcal{F}_n] = 0$ (see (18)). For $n \in \mathbb{N}$, define $z_n$ as in (11) and (15). Note that $z_n \in T^1\Delta_R$; indeed, this follows from (4) and the definition of $Q_n$ (see (5)). The sequence $(z_n)$ is a stochastic algorithm of step $1/(n+d)$. Thus, to prove Theorem 3.27, we will apply Corollary 3.IV.15, p. 126 of [Duf96] to $(z_n)$.
Let $e \in E$ and let $n_0$ be an integer such that for all $n \ge n_0$, $v_n \in B(v_*, \delta) \cap \Sigma$, with $\delta > 0$ defined as in Hypothesis 3.26-(i). Let $n \ge n_0$; the Lipschitz constants of the relevant mappings are uniformly bounded in $e \in E$. The previous lemma directly implies that on $\{v_n \to v_*\}$, $\sum_n \|r_{n+1}\|^2 < \infty$. Let $m$ be a positive integer such that (19) is verified. To complete this proof, according to Corollary 3.IV.15, p. 126 of [Duf96], it remains to prove a lower bound on $\{v_n \to v_*\}$. Let $\mu \in \Delta_E$ and $G : E \to \mathbb{R}$, and define the variance $\mathrm{Var}_\mu(G) = \mu(G^2) - (\mu G)^2$. Recall that the conditional law of $E_{n+1}$ with respect to $\mathcal{F}_n$ is $P_n(E_n, \cdot) = P(v_n)(E_n, \cdot)$; the conditional mean and variance are taken with respect to this law, for all $v \in \Sigma$ and $e \in E$. We denote $\varphi_{v_*}$ by $\varphi_*$.
is Lipschitz on $\Sigma$, by using Hypothesis 3.26. We conclude using the fact that the composition of two Lipschitz functions is Lipschitz.
On the one hand, using (5), and on the other hand, we have proved that if $I_* = 0$, then there exists a map $g : A \to \mathbb{R}$ such that $Vf(i, j) = \pi^V(v_*)f + g(i) - g(j)$ for all $(i, j) \in R_*$. This is impossible by Hypothesis 3.26-(iii). Thus $I_* > 0$ and $\mathbb{P}(v_n \to v_*) = 0$.

Vertex reinforced non-backtracking random walks
4.1. Definitions. Let $G = (\mathcal{X}, E)$ be the complete graph with $N \ge 4$ vertices. Then $\mathcal{X} = \{1, \dots, N\}$ and $E = \{\{i, j\} : i, j \in \mathcal{X},\ i \neq j\}$. In this section, the reinforcement set $R$ is the set of vertices $\mathcal{X}$ and the walk is vertex reinforced. Set $\Sigma = \{v \in \Delta_{\mathcal{X}} : \max(v) \le 1/3 + \min(v)\}$. The measures in $\Sigma^3$ (see (1)) are uniform on a subset of $\mathcal{X}$ containing exactly three points. The support of any measure in $\mathring\Sigma$ contains at least four points, i.e. $\mathring\Sigma \subset \Sigma \setminus \Sigma^3$.
Let $V : E \times \mathcal{X} \to \mathbb{R}$ be the transition matrix from $E$ to $\mathcal{X}$ defined by $V((i, j), k) = \delta_j(k)$. Set $\alpha \ge 1$ and let $P : \Sigma \to \mathcal{M}_E$ be the map defined by
$$P(v)((i, j), (j', k)) = 1_{\{j = j',\, k \neq i\}}\, \frac{v(k)^\alpha}{\sum_{l \neq i, j} v(l)^\alpha},$$
for all $v \in \Sigma$ and $(i, j), (j', k) \in E$. Provided that Hypotheses 3.1 are satisfied, let $(X_n, P_n, V_n)$ be the process defined in Section 3 with these functions $V$ and $P$. Then it is easy to check that $X$ is a VRNBW associated with the reinforcement function $W(k) = (1+k)^\alpha$. Recall that $V_n = \delta_{X_n}$ and $P_n = P(v_n)$, with $v_n$ the empirical occupation measure of the vertices by $X_n$, defined by $v_n(i) = \frac{1 + Z_n(i)}{n + N}$.

In this section, we prove Theorem 1.2, announced in the introduction. Theorem 1.2 is a consequence of Theorem 3.8, Corollary 3.12, Theorem 3.22, Theorem 3.27, Lemma 4.32 and Lemma 4.33. To apply Theorem 3.8 to VRNBWs, we verify Hypotheses 3.1 in Section 4.2, Hypotheses 3.3 in Section 4.3 and Hypotheses 3.7 in Sections 4.4 and 4.5. To apply Corollary 3.12, we prove in Section 4.6 that there exists a strict Lyapunov function and, in Section 4.7, that there is a finite number of equilibria. To apply Theorems 3.22 and 3.27, we discuss in Section 4.8 the stability of the equilibria and prove that stable equilibria are attainable. These results permit us to conclude that $v_n$ converges a.s. to a uniform probability measure. It remains to prove that the support of this measure coincides with $S$. This last fact is a consequence of Lemmas 4.32 and 4.33.

4.2. Hypotheses 3.1. The fact that Hypotheses 3.1-(i) and 3.1-(iii) hold follows from the definitions of $P$ and $V$ given by (27) and (28). It thus remains to check Hypothesis 3.1-(ii), i.e. that $v_n \in \mathring\Sigma$ for $n \ge 0$.
A first consequence of this lemma is that the only possible limit points $v$ of $(v_n)$ are such that $v(i) \le 1/3$ for all $i$.

Proof. Note that $v_n \in \mathring\Sigma$ if and only if $\max(v_n) < 1/3 + \min(v_n)$. Lemma 4.2 and the fact that, for all $n \ge 0$, $\min(v_n) \ge 1/(n+N)$ imply that $\max(v_n) - \min(v_n) \le \frac{1}{3} \cdot \frac{n+2}{n+N}$, which is lower than $1/3$ since $N \ge 4$.

Remark 4.5. For $v \in \Sigma^3$, the matrix $P(v)$ is not indecomposable. Indeed, $v$ is uniform on exactly three different points $\{i_1, i_2, i_3\}$. There are two irreducible classes $R_1$ and $R_2$, with $R_1 = \{(i_1, i_2), (i_2, i_3), (i_3, i_1)\}$ and $R_2 = \{(i_2, i_1), (i_1, i_3), (i_3, i_2)\}$. Thus $R_1$ and $R_2$ define two paths for the Markov chain associated with $P(v)$: the vertices $i_1$, $i_2$ and $i_3$ are visited infinitely often, in the same order.

4.4. The invariant probability measure of P(v). From now on, for $v \in \Sigma$ and $i \in \mathcal{X}$, $v(i)$ will be denoted simply by $v_i$. There should not be any confusion with $v_n \in \Sigma$ defined by (29). For $i \neq j \in \mathcal{X}$, let $H_{i,j} : \Sigma \to \mathbb{R}_+^*$, $H_i : \Sigma \to \mathbb{R}_+^*$ and $H : \Sigma \to \mathbb{R}_+^*$ be the maps defined by (30), (31) and (32). Recall that for $v \in \Sigma \setminus \Sigma^3$, $\pi(v)$ denotes the invariant probability measure of $P(v)$ and that $\pi^V(v) = \pi(v)V$ belongs to $\Delta_{\mathcal{X}}$. For $(i, j) \in E$ and $k \in \mathcal{X}$, we use the notations $\pi_{i,j}(v)$ and $\pi^V_k(v)$ for $\pi(v)(i, j)$ and $\pi^V(v)(k)$ respectively. These measures are given explicitly in the following proposition.
Recall that $V((i, j), k) = \delta_j(k)$ for $(i, j) \in E$ and $k \in \mathcal{X}$. Hence for all $k \in \mathcal{X}$, $\pi^V_k(v) = \sum_{i \neq k} \pi_{i,k}(v)$. Since $\alpha \ge 1$ and $H(v) > 0$ for all $v \in \Sigma$, it is straightforward to check that the map $\pi^V$ verifies Hypothesis 3.7-(i).
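The relations $\pi(v)P(v) = \pi(v)$ and $\pi^V_k(v) = \sum_{i\neq k}\pi_{i,k}(v)$ can be checked numerically. In the sketch below (function names are ours), the non-backtracking edge chain on the complete graph is built with $P(v)((i,j),(j,k))$ proportional to $v_k^\alpha$ over $k \notin \{i, j\}$, which is our reading of the construction of Section 4.1, and $\pi(v)$ is recovered by power iteration:

```python
import numpy as np
from itertools import permutations

def edge_chain(v, alpha):
    """Markov matrix on the oriented edges of the complete graph, assuming
    P(v)((i, j), (j, k)) proportional to v[k] ** alpha over k not in {i, j}."""
    n = len(v)
    edges = list(permutations(range(n), 2))
    idx = {e: a for a, e in enumerate(edges)}
    P = np.zeros((len(edges), len(edges)))
    for (i, j) in edges:
        cand = [k for k in range(n) if k not in (i, j)]
        w = np.array([v[k] ** alpha for k in cand], dtype=float)
        for k, p in zip(cand, w / w.sum()):
            P[idx[(i, j)], idx[(j, k)]] = p
    return edges, P

def invariant(P, iters=5000):
    """Invariant probability measure by power iteration (the edge chain is
    aperiodic here: it has return loops of lengths 3 and 4)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi
```

For the uniform measure on $N = 4$ vertices, symmetry forces $\pi(v)$ to be uniform on the 12 oriented edges and $\pi^V(v)$ to be uniform on the vertices.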

4.5. The pseudo-inverse of I − P(v). In this section we prove that Hypothesis 3.7-(ii) holds. Using Proposition 4.4, we know that $P(v)$ is indecomposable for all $v \in \Sigma \setminus \Sigma^3$. Since $P$ is $C^1$ on $\Sigma$, using the implicit function theorem, one can prove (as in Lemma 5.1 of [Ben97]) that, for $e \in E$, $v \mapsto Q(v)V(e)$ is $C^1$ on $\Sigma \setminus \Sigma^3$. It remains to extend this mapping by continuity to $\Sigma^3$, which is the statement of the following proposition (taking, for each $i \in \mathcal{X}$, $g = V(\cdot, i) : E \to \mathbb{R}$ defined by $g(e) = V(e, i)$).
Proposition 4.7. Let $a : \mathcal{X} \to \mathbb{R}$ and let $g : E \to \mathbb{R}$ be the map defined by $g(i, j) = a(j)$ for all $(i, j) \in E$. Then the map $v \mapsto Q(v)g$ is continuously extendable to $\Sigma^3$.
To prove (37), we will give an estimate of $Q(v)g$ as $v$ goes to $v_0$ (or equivalently as $\epsilon \to 0$); more precisely, we will give estimates of $X_1$, $X_2$, $Y_\ell$, $Z_\ell$ and $T_\ell$ as $\epsilon \to 0$. For all $i, j, k \in \mathcal{X}$ such that $|\{i, j, k\}| = 3$, denote $p_{i,j,k} = P(v)((i, j), (j, k))$. Remark that $p_{i,j,k} = p_{j,i,k}$. When $\{i, j, k\} = \{1, 2, 3\}$, $p_{i,j,k}$ only depends on $k$; we denote this probability $p_k$. Since $(1 - \epsilon_k)^{-\alpha} = 1 + O(\epsilon)$ as $\epsilon$ goes to $0$, we obtain the Taylor expansion of $p_k$ as $\epsilon$ goes to $0$, as well as Taylor expansions of related quantities. Remark that $L_0 x = x$ for $x \in \mathbb{R}^3$. The following lemma gives Taylor expansions of $(I - A_1)^{-1}$ and $(I - A_2)^{-1}$.
Lemma 4.8. If $p_1 p_2 p_3 \neq 1$, then $I - A_1$ and $I - A_2$ are invertible.

Proof. Since the determinants of $I - A_1$ and of $I - A_2$ are both equal to $1 - p_1 p_2 p_3$, both matrices are invertible when $p_1 p_2 p_3 \neq 1$. This implies (39) when $q = 1$; (39) for $q = 2$ is proved in the same way.
The following lemma gives the Taylor expansion of π(v) as ǫ goes to 0.
Since Hypotheses 3.1, 3.3 and 3.7 hold, the vector field $F : T^1\Delta_{\mathcal{X}} \to T^0\Delta_{\mathcal{X}}$ defined by (9) induces a flow $\Phi$ for the differential equation $\dot v = F(v)$. Moreover, Theorem 3.8 holds and the limit set of $(v_n)$ is attractor free for $\Phi$.
For $i, j \in \mathcal{X}$ with $i \neq j$, the maps $H_{i,j}$, $H_i$ and $H$ are defined on $\Sigma$, but we will consider here that they are respectively defined on $\mathbb{R}^N$ by (30), (31) and (32). A computation of $\langle\nabla H(v), F(v)\rangle$ for $i, j \in \mathcal{X}$ then proves that $H$ is a strict Lyapunov function for $\Phi$.
Hypotheses 3.1, 3.3 and 3.7 hold and there is a strict Lyapunov function for $\Phi$. Thus, by applying Theorem 3.11 and Corollary 3.12, if $H(\Lambda)$ has an empty interior, then the limit set of $(v_n)$ is a connected subset of $\Lambda$, and if $\Lambda$ is a finite set, then $v_\infty := \lim_{n\to\infty} v_n$ exists and $v_\infty \in \Lambda$.
Thus for all $i \in A$ the equilibrium relation is satisfied, hence $v \in \Lambda$.

4.7.1. Case α = 1.

Proof. Let $v$ be an equilibrium and suppose that $\alpha = 1$. Since the support of $v$ contains at least three points, $H_{i,j}(v) > 0$ for $i, j \in \mathrm{Supp}(v)$. Therefore we must have $v_i = v_j$ for all $i, j \in \mathrm{Supp}(v)$, i.e. $v$ is uniform on its support.
A consequence of the previous proposition is that, when α = 1, Λ is finite.
When K ∈ {1, 2}, we have to take into account the fact that a ≤ 1/3.
We have thus proved that (51) holds for all $z \ge 0$.
We can now show that $q$ is strictly convex. Inequality (51) implies that $L(z) < 2t^3 D_4(z)$. In order to show that $q_2'' < q_1''$, it just remains to remark that $2t^3 \le t(1-t)$ for all $t \le 1/2$. Therefore $q$ is strictly convex.
Proposition 4.25. Every equilibrium that is not a uniform probability measure is unstable.
Recall that v is an equilibrium if and only if we have ψ(x) = β.
Then $H(v) = K a^{\alpha} K_1 + L b^{\alpha} K_2$ (with $L = M - K$) and, using the fact that $b = (K(1+x)^{1/\alpha} + L)^{-1}$, we have that $f_{K,M}$ can be written in terms of $g_{K,M}$, a smooth positive function on $]0, \infty[$. By definition, when $v = v_{K,M}(p)$ is an equilibrium, $\mu_M - \mu_K$ is an unstable direction for $v$ as soon as $f'_{K,M}(p) > 0$, and is a stable direction as soon as $f'_{K,M}(p) < 0$. A simple calculation shows that, when $v = v_{K,M}(p)$ is an equilibrium, $f'_{K,M}(p) = g_{K,M}(p)\,\frac{dx}{dp}\,\psi'(x)\log(1+x)$.
Since $\frac{dx}{dp} < 0$, we thus have that $v$ is unstable when $\psi'(x) < 0$, and that $\mu_M - \mu_K$ is a stable direction if and only if $\psi'(x) > 0$. Finally, note that when $K \ge 3$ and $x$ is sufficiently large, $\psi'(x) < 0$ (see the end of the proof of Proposition 4.14). This implies that when $K \ge 3$, $\mu_M - \mu_K$ is a stable direction for $\mu_K$.

Proof. When $\beta_M < \beta < 1$, $D(K, M) = 1$. Moreover, $\psi$ is increasing when $K \in \{1, 2\}$, hence the lemma.
Lemma 4.28. When $3 \le K \le M/2$ and $\beta_M < \beta < \beta_{K,M}$, one of the equilibria in $\Delta(K, M)$ is unstable, and $\mu_M - \mu_K$ is a stable direction for the other equilibrium. When $3 \le K \le M/2$ and $\beta = \beta_M$, the equilibrium of $\Delta(K, M)$ is unstable.
Theorem 3.22 implies the following statements: when $\alpha = 1$, $v_n$ has a positive probability of converging towards the uniform probability measure on $\mathcal{X}$ (see Proposition 4.21). When $\alpha > 1$, $v_n$ has a positive probability of converging towards a uniform probability measure on a set containing less than $\frac{3\alpha-1}{\alpha-1}$ vertices (see Proposition 4.23).

4.8.4. Localization on the supports of stable equilibria. Following [BRS13], we prove that, for $v$ a stable equilibrium, on the event $\{\lim_{n\to\infty} v_n = v\}$ the walk $X_n$ localizes almost surely on $\mathrm{Supp}(v)$, i.e. the set of vertices visited infinitely often by $X_n$ is $\mathrm{Supp}(v)$. This proposition is a consequence of the two following lemmas. We do not give the proofs of these two lemmas here; they can be proved following the lines of the proofs of Lemmas 3.13 and 3.14 of [BRS13].
Calculating for $i, j \in A$, we obtain that $f(j) = C + g(i) - g(j)$ for all $i, j \in A$. Varying $i$ with $j$ fixed shows that $g$ is constant on $A$, and thus $f$ is constant on $A$. Since $f \in T^0\Delta_{\mathcal{X}}$, $\sum_{i\in\mathcal{X}} f(i) = 0$. Therefore $f(i) = 0$ for all $i \in A$, which is impossible.
This last lemma completes the proof of Theorem 1.2. Indeed, Hypotheses 3.26 are satisfied and Theorem 3.27 can be applied.