On some generalized reinforced random walks on integers

We consider reinforced random walks whose transition probabilities are a function of the proportion of times the walk has traversed an edge. We give conditions for recurrence and for transience. A phase transition is observed, similar to the one obtained by Pemantle \cite{Pem000} on trees.


Introduction
Let $G=(V,E)$ be a graph with $V$ the set of vertices and $E$ the set of edges. The graph distance is denoted by $d$. Linearly reinforced random walks $(X_n, n\ge0)$ are nearest neighbor walks on $G$ (i.e. $X_n\in V$ and $(X_n,X_{n+1})\in E$) whose laws are defined as follows: denote by $(\mathcal{F}_n)_{n\ge0}$ the natural filtration associated to $(X_n, n\ge0)$ and for $(x,y)\in E$, set
$$a_n(x,y) = a_0(x,y) + \Delta \sum_{i=1}^n \mathbf{1}_{\{(X_{i-1},X_i)=(x,y)\}},$$
with $a_0(x,y)>0$ and $\Delta>0$. Then for $(x,y)\in E$ and on the event $\{X_n=x\}$,
$$\mathbb{P}(X_{n+1}=y \mid \mathcal{F}_n) = \frac{a_n(x,y)}{\sum_{\{z \,:\, (x,z)\in E\}} a_n(x,z)}.$$
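For concreteness, these dynamics can be simulated directly. The following sketch is our own illustration (all function names are ours, not part of the model description): it runs a directed linearly reinforced walk on $\mathbb{Z}$ with constant initial weights $a_0$.

```python
import random

def linearly_reinforced_walk(n_steps, a0=1.0, delta=1.0, seed=0):
    """Directed linearly reinforced random walk on Z.

    a[(x, y)] stores the current weight a_n(x, y) of the directed edge
    (x, y); every traversal of that edge increases it by delta.
    """
    rng = random.Random(seed)
    a = {}
    x = 0
    path = [x]
    for _ in range(n_steps):
        right = a.get((x, x + 1), a0)
        left = a.get((x, x - 1), a0)
        # jump right with probability a_n(x, x+1) / (a_n(x, x-1) + a_n(x, x+1))
        if rng.random() < right / (left + right):
            a[(x, x + 1)] = right + delta
            x += 1
        else:
            a[(x, x - 1)] = left + delta
            x -= 1
        path.append(x)
    return path
```

By Solomon's theorem, as recalled below, such a walk is a.s. recurrent, although of course no finite-horizon simulation can certify this.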
The law of a directed linearly reinforced random walk is the same as that of a random walk in random environment. Indeed, it is equivalent to attach independent Pólya urns to all sites, and de Finetti's theorem then implies that this is equivalent to attaching independent random probability vectors to each site. These probability vectors give the transition probabilities of the walk when it is located at that site. When the graph is a non-oriented tree, the undirected reinforced random walk with initial position $\rho$ has the same law as a directed reinforced random walk on $G$ with $a_0(x,y)=b_0(x,y)$ if $d(\rho,y)=d(\rho,x)+1$ and $a_0(x,y)=b_0(x,y)+\Delta$ if $d(\rho,y)=d(\rho,x)-1$ and reinforcement parameter $2\Delta$, or with $a_0(x,y)=b_0(x,y)/(2\Delta)$ if $d(\rho,y)=d(\rho,x)+1$ and $a_0(x,y)=(b_0(x,y)+\Delta)/(2\Delta)$ if $d(\rho,y)=d(\rho,x)-1$ and reinforcement parameter $1$. This representation was first observed by Coppersmith and Diaconis [CDia].
For this class of models, results on random walks in random environment can be applied. When the graph is $\mathbb{Z}$, Solomon's theorem shows that (directed and undirected) reinforced random walks are a.s. recurrent when $a_0(x,x+1)=a_0(x,x-1)=a_0>0$ or $b_0(x,x+1)=b_0>0$. When the graph is the binary tree and $b_0(x,y)=b_0>0$, the undirected reinforced random walk is transient for small $\Delta$ (equivalently, for large $b_0$) and recurrent for large $\Delta$ (equivalently, for small $b_0$). This last result was proved by R. Pemantle in [Pem1].
In this paper we address the question (posed by M. Benaïm to one of the authors) of what happens when the graph is $\mathbb{Z}$ and when, on the event $\{X_n=x\}$,
$$\mathbb{P}(X_{n+1}=x+1 \mid \mathcal{F}_n) = f\left(\frac{a_n(x,x+1)}{a_n(x,x-1)+a_n(x,x+1)}\right),$$
where $f:[0,1]\to(0,1)$ is a smooth function. For general functions $f$ these walks are no longer random walks in random environment, so different techniques are required. But one can still attach independent urn processes (generalized Pólya urns) to each site. Under the assumption that the number of fixed points of $f$ is finite, if the walk is recurrent, stochastic algorithm techniques show that for all $x$, $a_n(x,x+1)/(a_n(x,x-1)+a_n(x,x+1))$ converges a.s. toward a random variable $\alpha_x$. This random variable takes its values in the set of fixed points of $f$. If $a_0(x,x+1)=a_0(x,x-1)=a_0>0$, the sequence $(\alpha_x, x\in\mathbb{Z})$ is i.i.d. Let us remark that Solomon's theorem states that the random walk in the random environment $(\alpha_x, x\in\mathbb{Z})$ is a.s. recurrent if and only if $\mathbb{E}[\ln(\alpha_x/(1-\alpha_x))]=0$. We focus here on the cases where either $f$ has a unique fixed point, or all the fixed points are greater than or equal to $1/2$ and $f\ge1/2$ on $[1/2,1]$. We particularly study the case $a_0(x,x+1)=a_0(x,x-1)=a_0>0$. We give criteria for recurrence and transience:
• when there exists a fixed point greater than $1/2$, the walk is transient, and
• when $1/2$ is the unique fixed point, depending on the initial condition $a_0$ and on the shape of $f$ around $1/2$, the walk can be either recurrent or transient.
This last result shows that Solomon's criterion applied to the limiting values $(\alpha_x, x\in\mathbb{Z})$ does not determine recurrence versus transience. The proofs of the theorems given here involve martingale techniques inspired by the work of Zerner on multi-excited random walks on integers [Zer,Zer2].
The paper is organized as follows. In section 2 reinforced random walks are defined and their representation in terms of urn processes is given. Section 3 collects the results on urns needed to prove the theorems of sections 5 and 6. A zero-one law is proved in section 4: recurrence occurs with probability 0 or 1. In sections 5 and 6 the case $f\ge1/2$ and the case of a unique fixed point are studied. The last section develops some examples.

Notation
2.1. The urn model. We consider an urn model where the balls are of only two types, or colors, say Red and Blue. Given a function $f:[0,1]\to(0,1)$, we alter the draw by choosing at each step a Red ball with probability $f(\alpha)$, where $\alpha$ is the current proportion of Red balls. Then we put back two Red (respectively Blue) balls in the urn if a Red (respectively Blue) ball was drawn. In other words, an urn process associated to $f$ is a Markov process $((\alpha_n,l_n), n\ge0)$ on $[0,1]\times(0,+\infty)$ whose transition probabilities are defined as follows: for all $n$, $l_{n+1}=l_n+1$, and $\alpha_{n+1}$ is equal to $(l_n\alpha_n+1)/(l_n+1)$ with probability $f(\alpha_n)$, or equal to $l_n\alpha_n/(l_n+1)$ with probability $1-f(\alpha_n)$. Here $\alpha_n$ represents the proportion of Red balls and $l_n$ the total number of balls in the urn at time $n$ (at least if $l_0$ and $\alpha_0 l_0$ are integers). By abuse of notation we will sometimes call the first coordinate $(\alpha_n, n\ge0)$ an urn process associated to $f$ (when there is no ambiguity on $l_0$). This model was introduced in [HLS], and then further studied in particular by Pemantle [Pem4], Duflo [D] and Benaïm and Hirsch (see [BH] and [B]). We also refer the reader to the survey [Pem5], sections 2.4, 2.5 and 3.2, for more details and references.
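A minimal simulation of this urn transition (an illustrative sketch of our own; the function names are ours) reads:

```python
import random

def urn_process(f, alpha0, l0, n_steps, seed=0):
    """Urn process associated to f: at each step a Red ball is drawn with
    probability f(alpha); alpha is the Red proportion, l the total mass."""
    rng = random.Random(seed)
    alpha, l = alpha0, l0
    traj = [alpha]
    for _ in range(n_steps):
        if rng.random() < f(alpha):
            alpha = (l * alpha + 1) / (l + 1)  # Red drawn: add one Red ball
        else:
            alpha = l * alpha / (l + 1)        # Blue drawn: add one Blue ball
        l += 1
        traj.append(alpha)
    return traj
```

With $f(x)=x$ this is the classical Pólya urn.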
2.2. Generalized reinforced random walks. Here we consider a particular model of (directed) reinforced random walk $(X_n, n\ge0)$ on $\mathbb{Z}$ where the evolution is driven by urns of the preceding type attached to each integer. For other models see [Pem5]. A first way to define it is as follows. Let $f:[0,1]\to(0,1)$, and let
$$L^x_n := \sum_{k=0}^{n-1} \mathbf{1}_{\{X_k=x\}}$$
be the total time spent in $x$ up to time $n-1$ by the random walk. Let then
$$\tilde\alpha^x_n := \frac{\alpha^x_0 l^x_0 + \sum_{k=0}^{n-1} \mathbf{1}_{\{X_k=x,\,X_{k+1}=x+1\}}}{l^x_0 + L^x_n}$$
be the proportion of times it has moved from $x$ to the right (up to the initial weights $(\alpha^x_0, l^x_0)$). Now if $X_n=x$, for some $x\in\mathbb{Z}$ and $n\ge0$, then $X_{n+1}=x+1$ with probability $f(\tilde\alpha^x_n)$ and $X_{n+1}=x-1$ with probability $1-f(\tilde\alpha^x_n)$. This defines the random walk recursively. Moreover, for $n\ge1$, define $\tau^x_n$ as the time of the $n$th return to $x$: for $n\ge1$, $\tau^x_n = \inf\{k>\tau^x_{n-1} : X_k=x\}$, with the convention $\inf\emptyset=\infty$, and $\tau^x_0=\inf\{k\ge0 : X_k=x\}$. Set also $l^x_n := l^x_0+n$, for $n\ge1$. Let $\alpha^x_n := \tilde\alpha^x_{\tau^x_{n-1}+1}$ when $\tau^x_{n-1}<\infty$, and $\alpha^x_n=0$ otherwise. Then the processes $((\alpha^x_n, l^x_n), 0\le n\le L^x_\infty)$ form a family of urn processes of the type described above, stopped at the random time $L^x_\infty$. More precisely, $\{L^x_\infty>n\}=\{\tau^x_n<\infty\}\in\mathcal{F}_{\tau^x_n}$, and on this event $(\alpha^x_{n+1},l^x_{n+1})$ is obtained from $(\alpha^x_n,l^x_n)$ by the urn transition of section 2.1. In the case where the walk is recurrent, these urn processes are independent, and they are identically distributed when $(\alpha^x_0, l^x_0)$ does not depend on $x$. There is another way to define this random walk, which goes in the other direction. Assume first that we are given a family of independent urn processes $((\alpha^x_n, l^x_n), n\ge0)_{x\in\mathbb{Z}}$ indexed by $\mathbb{Z}$; one can regard this as the full environment. Then, given the full environment, the random walk evolves deterministically: it starts from $X_0$. Next let $n\ge0$ be given and assume that the random walk has been defined up to time $n$. Suppose that $X_n=x$ and $L^x_n=k$, for some $k\ge0$. Then $X_{n+1}=x+1$ if $\alpha^x_{k+1}>\alpha^x_k$, and $X_{n+1}=x-1$ otherwise.
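The first construction above translates directly into code. The sketch below is our own illustration (it assumes, for simplicity, the same initial pair $(\alpha_0, l_0)$ at every site) and maintains one urn per visited site:

```python
import random

def generalized_rrw(f, n_steps, alpha0=0.5, l0=1.0, seed=0):
    """Generalized reinforced random walk on Z driven by site urns.

    At site x the walk steps right with probability f(alpha_x); the urn
    at x is then updated as if a Red (right) or Blue (left) ball were
    drawn.
    """
    rng = random.Random(seed)
    urns = {}  # site -> (alpha, l), created lazily at first visit
    x = 0
    path = [x]
    for _ in range(n_steps):
        alpha, l = urns.get(x, (alpha0, l0))
        if rng.random() < f(alpha):
            urns[x] = ((l * alpha + 1) / (l + 1), l + 1)
            x += 1
        else:
            urns[x] = (l * alpha / (l + 1), l + 1)
            x -= 1
        path.append(x)
    return path
```

With $f\equiv1/2$ the urns have no effect and the walk is the simple random walk.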
For $n\ge0$, we define the environment $w_n\in([0,1]\times(0,+\infty))^{\mathbb{Z}}$ at step $n$ by $w^x_n := (\alpha^x_{L^x_n}, l^x_{L^x_n})$, for $x\in\mathbb{Z}$. We denote by $\mathcal{F}_n$ the $\sigma$-algebra generated by $(X_0, w_0,\dots,w_n)$, or equivalently by $(w_0, X_0,\dots,X_n)$. For $x\in\mathbb{Z}$ and $w$ some environment, we denote by $\mathbb{E}_{x,w}$ the law of the random walk starting from $X_0=x$ with initial environment $w_0=w$. When no ambiguity on $x$ or $w$ is possible, we will sometimes drop them from the notation. A random walk of law $\mathbb{E}_{x,w}$ will be called a generalized reinforced random walk started at $(x,w)$ associated to $f$. Observe that $((w_n,X_n), n\ge0)$ is a Markov process (whereas $(X_n, n\ge0)$ is not); in particular,
$$\mathbb{E}_{x,w}\left[g(w_{n+1},X_{n+1}) \mid \mathcal{F}_n\right] = \mathbb{E}_{X_n,w_n}\left[g(w_1,X_1)\right],$$
for any $(x,w)$ and any bounded measurable function $g$.
Note that the directed reinforced random walk started at $x_0$, with initial weights $(a_0(x,y);\ x\in\mathbb{Z},\ y\in\{x-1,x+1\})$ and reinforcement parameter $\Delta$ defined in the introduction, has law $\mathbb{E}_{x_0,w_0}$ with $f(x)=x$ and $w^x_0=(\alpha^x_0,l^x_0)$ defined by
$$\alpha^x_0 = \frac{a_0(x,x+1)}{a_0(x,x-1)+a_0(x,x+1)} \quad\text{and}\quad l^x_0 = \frac{a_0(x,x-1)+a_0(x,x+1)}{\Delta}.$$
The undirected reinforced random walk defined in the introduction also has the law of a certain generalized reinforced random walk. For example, the undirected reinforced random walk started at $0$ with initial weights $b_0(x,x+1)=b_0>0$ and reinforcement parameter $\Delta$ has law $\mathbb{E}_{0,w_0}$, with $w^x_0=(\alpha^x_0,l^x_0)$ such that $l^x_0=l_0\in(0,+\infty)$ for all $x\neq0$, $\alpha^x_0=\alpha_0\in(0,1)$ for $x\ge1$, and $\alpha^x_0=1-\alpha_0$ for $x\le-1$.
In the following, $w_0$ will satisfy

Hypothesis 2.1. The starting environment is such that for all $x\ge1$, $w^x_0=w^1_0$,

or will satisfy

Hypothesis 2.2. The starting environment is such that for all $x\ge1$, $w^x_0=w^1_0$, and for all $x\le-1$, $w^x_0=w^{-1}_0$.
2.3. Hypotheses on $f$ and stable points. Throughout the paper, $f:[0,1]\to(0,1)$ will be a regular function ($C^3$ is enough for our purpose). We say that $p$ is a fixed point if $f(p)=p$. It is called stable if $f'(p)\le1$. We will assume that all fixed points of $f$ are isolated.
2.4. Statement of the main results. Let $X$ be a reinforced random walk of law $\mathbb{P}_{0,w_0}$, for some initial environment $w_0$. This walk is called recurrent if it visits every site infinitely often, and transient if it converges to $+\infty$ or to $-\infty$. We denote by $R$ the event of recurrence and by $T$ the event of transience. In section 4, it will be shown that, under Hypothesis 2.2 (or under Hypothesis 2.1 if $f\ge1/2$), $X$ is either a.s. recurrent or a.s. transient.
The drift accumulated by $X$ up to time $n$ is equal to
$$D_n := \sum_{k=0}^{n-1} \left(2f(\tilde\alpha^{X_k}_k)-1\right).$$
The methods developed in this paper are well adapted to the particular case $f\ge1/2$, which makes this drift nonnegative and nondecreasing. In this case one can define, for all $x\in\mathbb{Z}$,
$$\delta^x_\infty := \sum_{n\ge0}\left(2f(\alpha^x_n)-1\right),$$
which is the drift accumulated at site $x$ if the random walk visits $x$ infinitely often. Then we have

Theorem 2.1. Assume Hypothesis 2.1 and $f\ge1/2$. Then $X$ is a.s. recurrent if and only if $\mathbb{E}[\delta^1_\infty]\le1$.

In particular, $X$ is a.s. transient as soon as $f$ has a stable fixed point $p\neq1/2$, or when $1/2$ is the unique fixed point and $f''(1/2)>0$.

Proof. Let $p$ be a stable fixed point of $f$, with $p\neq1/2$. As stated in Theorem 3.1 below, $\alpha^1_k$ converges to $p$ with positive probability. Thus, with positive probability, $\delta^1_\infty=\infty$. When $p=1/2$ is the only fixed point and $f''(1/2)>0$, Proposition 3.4 below shows that $\delta^1_\infty=+\infty$ a.s., and we conclude by using Theorem 2.1. Without the assumption that $f\ge1/2$, we will prove

Theorem 2.2. Assume Hypothesis 2.2 and that $f$ has a unique fixed point $p$.
Note that the quantities $\delta^x_\infty$ are still well defined for all $x$ under the hypotheses of the theorem.
The sufficient condition ensuring $\mathbb{P}[R]=0$ in the case $p=1/2$ should be compared with the result of [KZer] in the context of cookie random walks, where it is proved that this condition is also necessary. Here we were not able to prove this.
Theorem 2.2 implies in particular the following. Proof. This follows from Proposition 3.4, which shows that $\delta^1_\infty=+\infty$ a.s. These results allow us to describe interesting phase transitions; this will be done in the last section. For example, there exists a function $f\ge1/2$ having $1/2$ as unique stable fixed point such that, if $X$ has law $\mathbb{P}=\mathbb{P}_{0,w_0}$ with $w^x_0=(1/2,l)$ for all $x$, then $X$ is transient when $l$ is small and recurrent when $l$ is large. This phase transition is similar, yet opposite, to the one observed by Pemantle for edge-reinforced random walks on trees [Pem1]: there exists $\Delta_1>0$ such that the walk is transient if the reinforcement parameter $\Delta$ is smaller than $\Delta_1$, and recurrent if it is greater than $\Delta_1$. Indeed, in the non-oriented reinforced framework discussed in the introduction, starting with small $l$ is equivalent to starting with large $\Delta$.

Preliminaries on urns
3.1. Convergence of urn processes. We recall here some known results about the convergence of urn processes. In particular, the next theorem is of fundamental importance throughout this paper. Recall that all functions $f$ considered in this paper satisfy the hypotheses of section 2.3.
Theorem 3.1 ([HLS], [Pem4]). Let $(\alpha_n, n\ge0)$ be an urn process associated to some function $f$. Then almost surely $\alpha_n$ converges to a stable fixed point of $f$, and for any stable fixed point, the probability that $\alpha_n$ converges to this point is positive.
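As a quick numerical sanity check of Theorem 3.1 (an illustration only; the linear $f$ below, with unique stable fixed point $1/2$, is our own choice):

```python
import random

def run_urn(f, alpha0, l0, n_steps, seed):
    """Run the urn process associated to f; return the final proportion."""
    rng = random.Random(seed)
    alpha, l = alpha0, l0
    for _ in range(n_steps):
        if rng.random() < f(alpha):
            alpha = (l * alpha + 1) / (l + 1)
        else:
            alpha = l * alpha / (l + 1)
        l += 1
    return alpha

f = lambda x: 0.5 + 0.2 * (x - 0.5)  # unique fixed point 1/2, f'(1/2) = 0.2 < 1
finals = [run_urn(f, 0.3, 2.0, 20000, seed) for seed in range(5)]
```

Each trajectory should end close to the stable fixed point $1/2$, with fluctuations of the order predicted by the central limit theorem below.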
The convergence to a stable fixed point $p$ with positive probability was first proved in [HLS], when $f(x)-x$ changes sign near $p$, and in [Pem4] in the special case when the sign of $f(x)-x$ is constant near $p$. The a.s. non-existence of other limit points was also first proved in [HLS] (for extensions to more general settings see [D], [B], [Pem2], [Pem3]). There is also a central limit theorem, which can be extracted from the book of Duflo:

Theorem 3.2 ([D], Theorem 4.III.5). Suppose that $p\in(0,1)$ is a stable fixed point of $f$. Let $a=f'(p)$ and $v^2=p(1-p)$. If $a<1/2$, then conditionally on $\alpha_n\to p$, $\sqrt{n}(\alpha_n-p)$ converges in law, as $n$ tends to $+\infty$, toward a normal variable with variance $v^2/(1-2a)$.

3.2. Convergence of the drift. For $n\in\mathbb{N}$, we set
$$\delta_n := \sum_{k=0}^n \left(2f(\alpha_k)-1\right).$$
Then $\delta_n$ corresponds to the drift accumulated at a given site after $n+1$ visits to this site. If $\delta_n$ converges when $n\to+\infty$, we denote by $\delta_\infty$ its limit. We will also need to consider its negative and positive parts, defined respectively by
$$\delta^-_n := \sum_{k=0}^n \left(2f(\alpha_k)-1\right)^- \quad\text{and}\quad \delta^+_n := \sum_{k=0}^n \left(2f(\alpha_k)-1\right)^+,$$
for all $n\ge0$. In fact we can always define in the same way $\delta^-_\infty$ and $\delta^+_\infty$, even when $\delta_n$ does not converge. It happens that, for our purpose, such finiteness results will be needed.
The problem is that the convergence of the drift appears to be a non-trivial question. To be more precise, we were able to obtain a satisfying result essentially only when $f$ has a unique fixed point. When this fixed point is greater (resp. smaller) than $1/2$, it is immediate to see that the drift converges a.s. toward $+\infty$ (resp. $-\infty$). However, to see that $\mathbb{E}[\delta^-_\infty]$ (resp. $\mathbb{E}[\delta^+_\infty]$) is finite, a non-trivial argument is needed. Since it is the same as in the more difficult case when $1/2$ is the unique fixed point, we start with this case. Let us give here a heuristic of how we handle this convergence problem when $p=1/2$: the central limit theorem (Theorem 3.2) shows that $\sqrt{k}(\alpha_k-1/2)$ converges in law.
A Taylor expansion of $f$ shows that
$$2f(\alpha_k)-1 = 2f'(1/2)\left(\alpha_k-\tfrac12\right) + f''(1/2)\left(\alpha_k-\tfrac12\right)^2 + O\left(\left|\alpha_k-\tfrac12\right|^3\right).$$
The first term is of order $k^{-1/2}$, the second of order $k^{-1}$ and the third of order $k^{-3/2}$, for $k$ large enough. Then the central limit theorem (Theorem 3.2) gives $\sum_k \mathbb{E}[(2f(\alpha_k)-1)^+]=+\infty$. This proves that $\mathbb{E}[\delta^+_\infty]=+\infty$, as wanted.

Now the proof of the preceding case shows that
This proves by induction that $\mathbb{E}[x_n^2]\le C/n$ for some constant $C>0$. Let us now consider the moments of order 4. Since $4x^3h(x)\le-3x^4$ on $[-\epsilon,\epsilon]$ for some $\epsilon>0$, (4) similarly gives $\mathbb{E}[x_n^4]\le C'n^{-2}$, with $C'>0$ another constant. Then the Cauchy–Schwarz inequality gives (up to constants) $\mathbb{E}[|x_n|^3]\le\left(\mathbb{E}[x_n^2]\,\mathbb{E}[x_n^4]\right)^{1/2}\le n^{-3/2}$, which is summable. This finishes the proof of the proposition.
Proof. To fix ideas, assume that $f''(1/2)>0$; the other case is analogous. A Taylor expansion of $f$ near $1/2$ gives, with $C>0$ some positive constant,
For $n\ge0$, we set $z_n:=\sqrt{n}(\alpha_n-1/2)$. We already saw in Theorem 3.2 that, conditionally on $\{\alpha_n\to1/2\}$, $z_n$ converges in law toward a normal variable. In fact this holds at the level of trajectories. More precisely, an elementary computation shows that $(z_n, n\ge0)$ is the solution of a stochastic algorithm of the form:
where $r_{n+1}=O(\sqrt{n}(\alpha_n-1/2)^2+n^{-1})$. For $t\in[\log n,\log(n+1)]$, let $Y_t$ be defined by interpolation, and denote by $(Y^{(u)}_t, t\ge0)$ the continuous-time process defined by $Y^{(u)}_t=Y_{u+t}$ for $t\ge0$. Then Theorem 4.II.4 in [D] says that (conditionally on $\{\alpha_n\to1/2\}$) the sequence of processes $(Y^{(u)}_t, t\ge0)$ converges in law in path space toward an Ornstein–Uhlenbeck process $(U_s, s\ge0)$, when $u\to+\infty$ (the condition on $r_n$ in the hypotheses of the theorem is not needed here, as one can see with Theorem 4.III.5 and its proof in [D]). Now we will deduce from this result that, a.s. on the event $\{\alpha_n\to1/2\}$,
If we define $z_t$ for all $t\ge0$ by $z_t=z_{[t]}$, then one can check that
So if this series is finite, then $\int_n^{n+1} z^2_{e^t}\,dt\to0$ when $n\to+\infty$. Moreover, using (6) and (7), we have that a.s. on the event $\{\alpha_n\to1/2\}$, $Y^2_t=z^2_{e^t}+o(1)$. Therefore a.s. on the event

A zero-one law
In all this section, $X$ is a generalized reinforced random walk of law $\mathbb{P}=\mathbb{P}_{0,w_0}$ associated to $f$, where $w_0$ satisfies Hypothesis 2.2. We will relate its asymptotic behavior to urn characteristics. Our first result is general: it is a zero-one law for the property of recurrence. Recall that the random walk is recurrent if all sites are visited infinitely often.

Lemma 4.1. $\mathbb{P}[R]\in\{0,1\}$.

Proof. The Borel–Cantelli lemma first implies that if a site is visited infinitely often, then the same holds for all sites. So there are only three alternatives: either the random walk is recurrent, or it tends toward $+\infty$, or it tends toward $-\infty$. In other words, if $T_n$ denotes the hitting time of $n$, then $\mathbf{1}_{\{T_n<+\infty\}}$ converges toward $\mathbf{1}_{R\cup\{X_n\to+\infty\}}$ when $n\to+\infty$. In the same way, the event $\{X_n>0\ \forall n>0\}$ is included in $\{X_n\to+\infty\}$. In fact there is a stronger relation:

Lemma 4.2. For any initial environment $w_0$ and any $k\ge0$, $\mathbb{P}[X_n\to+\infty]>0$ if, and only if, $\mathbb{P}_{k,w_0}[X_n>k\ \forall n>0]>0$.
Proof. We do the proof for $k=0$; the other cases are identical. This proof is similar to Zerner's proof of Lemma 8 in [Zer]. We only have to prove the "only if" part. Call $\tau_2$ the last time the random walk visits the integer $2$. If $C$ is some path of length $k$ on $\mathbb{Z}$ starting from $0$ and ending at $2$, call $E_C$ the event that the random walk follows the path $C$ during the first $k$ steps, and define $w_C$ as the state of all urns once the walker has performed the path $C$. If $\mathbb{P}[X_n\to+\infty]>0$, then for some path $C$ from $0$ to $2$, we have
Now construct $C'$ as follows: it starts with a jump from $0$ to $1$, and then we append (in chronological order) all the excursions of $C$ above level $1$. Then clearly the corresponding event for $C'$ also has positive probability, which proves the lemma.
We can now finish the proof of Lemma 4.1. The martingale convergence theorem and the Markov property imply
Then multiply the left- and right-hand sides of this inequality by $\mathbf{1}_R$ and take expectations. This gives
In the same way we have
These two equalities and Lemma 4.2 prove the lemma.

The case with only non-negative drift
Here we assume that $f\ge1/2$ and that $w_0$ satisfies Hypothesis 2.1. In the following, $X$ is a reinforced random walk of law $\mathbb{P}=\mathbb{P}_{0,w_0}$. In this case we have a more precise zero-one law.
Lemma 5.1. Assume Hypothesis 2.1 and $f\ge1/2$. Then we have the alternative: either $(X_n, n\ge0)$ is almost surely transient toward $+\infty$, or it is almost surely recurrent.
Proof. Since $f\ge1/2$, at each step the random walk has probability at least $1/2$ to jump to the right. Thus an elementary coupling argument (with the simple random walk on $\mathbb{Z}$) shows that a.s. the random walk does not converge toward $-\infty$. We conclude with (9) (which holds when assuming only Hypothesis 2.1) and Lemma 4.2.
Remark 5.1. We notice here that the hypothesis $f<1$ made in section 2.3 is not needed when $f\ge1/2$. Indeed, the only place where it is used is in the proof of Lemma 4.2, to show that $\mathbb{P}[E_{C'}]>0$; the reader can check that this is not needed when $f\ge1/2$. This remark will be of interest for the last section.
Proof of Theorem 2.1: We follow essentially the proof of Theorem 12 in [Zer]. Let us recall its main lines. First we introduce some notation. For $n\ge0$, let
A straightforward computation gives the equation
Let $(M^+_n, n\ge0)$ be the process defined by
It is a basic fact that $(M^+_n, n\ge0)$ is a martingale. In particular, for all $a\ge0$ and all $n\ge0$, using (10) together with the martingale property,
Now Lemma 5.1 implies that $T_a$ is a.s. finite. Moreover, $(U_n, n\ge0)$ and $(D^+_n, n\ge0)$ are non-decreasing processes. Thus letting $n$ go to $+\infty$ gives, with the monotone convergence theorem,
Moreover, the Markov property shows that for any integer $x\in[1,a]$,
where the last equality holds because for all $y\ge x$, $w^y_{T_x}=w^1_0$. Moreover, $\mathbb{E}[D^0_{T_a}]$ and $\mathbb{E}_{1,w_0}[D^1_{T_{a-1}}]$ differ at most by $\mathbb{E}[N]$, where $N$ is the number of visits to $0$ before the first visit to $1$. Since the probability to jump from $0$ to $1$ is bounded away from $0$, $\mathbb{E}[N]$ is finite. Therefore
This already gives the "only if" part of the theorem.
Assume now that the random walk is transient. Then Lemma 4.2 shows that $\mathbb{E}[\delta^1_\infty]>0$. Indeed, if it were equal to $0$, this would mean that a.s. $\delta^1_n=\delta^1_0$ for all $n$; in other words, the walk would evolve like the simple random walk, which is recurrent. This is absurd.
It remains to prove that $\mathbb{E}[\delta^1_\infty]=1$. From (11), we see that it is equivalent to prove the following.

Lemma 5.2. If the random walk is a.s. transient, then $\lim_{a\to+\infty}\mathbb{E}[U_{T_a}]/a=0$.
Proof. This lemma can be proved by following the argument of Lemma 6 in [Zer], which we reproduce here. For $i\ge1$, let
Next, $U_{T_{i+1}}-U_{T_i}\neq0$ only on the set $A_i:=\{\sigma_i<T_{i+1}\}$. Moreover, (11) holds for any starting environment. Thus
for all $i$. It remains to prove that
when $a\to+\infty$. Let $Y_i=\mathbb{P}[A_i\mid\mathcal{F}_{T_i}]$. Since the random walk is transient, the conditional Borel–Cantelli lemma implies\footnote{Since we were not able to find a reference, we give here a short proof: let $(H_n, n\ge0)$ be the $(\mathcal{F}_{T_n})_{n\ge0}$ martingale defined by $H_n:=\sum_{i=1}^n(\mathbf{1}_{A_i}-Y_i)$, for $n\ge0$. Let $l\ge1$ and let $T'_l=\inf\{k \mid H_k\ge l\}$. Then $H_{n\wedge T'_l}$ a.s. converges toward some limiting value $\alpha_l\in\mathbb{R}$, when $n\to+\infty$. If a.s. only a finite number of the $A_i$'s occur, then a.s. $T'_l$ is infinite for some $l\ge1$. This implies the desired result.} that $\sum_{i\ge1}Y_i<+\infty$ a.s.
Moreover, a coupling argument with the simple random walk and standard results for this walk show that a.s. $Y_i\le1/i$ for all $i$. Let $\epsilon>0$. We can then divide the sum in (12) into two parts: one is smaller than $\epsilon$, and the other one is equal to
But since (13) holds, a.s. the density of the $i\le a$ such that $Y_i\ge\epsilon/i$ tends to $0$ when $a$ tends to $+\infty$. Thus the preceding sum converges to $0$. This concludes the proof of the lemma.
This completes the proof of Theorem 2.1.
In section 7 we will see different examples of functions $f\ge1/2$, symmetric with respect to $1/2$, which show in particular that in the case when $1/2$ is the only stable fixed point and $f''(1/2)=0$, both regimes (recurrence and transience) may appear.

The case with a unique fixed point
Here we no longer assume that $f\ge1/2$, but we assume that $f$ has a unique fixed point. The initial environment satisfies Hypothesis 2.2 and, as before, $\mathbb{P}=\mathbb{P}_{0,w_0}$.

Proof of Theorem 2.2:
The idea of the proof is the same as for Theorem 2.1. However, a priori we have to be careful when taking limits, since the drift $(D^+_n)_{n\ge0}$ is no longer non-decreasing. But for any integer $x\ge0$, Proposition 3.2 and Proposition 3.3 show that $\mathbb{E}[\sup_n|D^x_n|]<+\infty$; hence if we replace $n$ by any increasing sequence of stopping times $\tau_n$ converging toward $\tau_\infty$, these propositions show that $\mathbb{E}[D^x_{\tau_n}]$ converges toward $\mathbb{E}[D^x_{\tau_\infty}]$. So in fact we get the limit we need. Next observe that for all $a\ge0$ and $n\ge0$, $X^+_{T_a\wedge n}\le a$. Thus, using that $(M^+_n)_{n\ge0}$ is a martingale, we have $\mathbb{E}[D^+_{T_a}]\le a$ for all $a\ge0$. Assume $\mathbb{P}(R)=1$. Then the Markov property implies that if $1\le x\le a$,
Letting $a$ tend to $+\infty$ in (14), and using the fact that
Let us now state the following standard monotonicity argument:

Lemma 6.1. Let $f\le g$ be two functions. Then there exists a coupling of two urn processes $((\alpha_n,l_n), n\ge0)$ and $((\beta_n,l'_n), n\ge0)$, associated respectively to $f$ and $g$, such that $l_0=l'_0$, $\alpha_0=\beta_0$, and almost surely $\alpha_n\le\beta_n$ for all $n\ge0$.

Proof. The proof is standard. Let $(U_i, i\ge0)$ be a sequence of i.i.d. random variables uniformly distributed on $[0,1]$. We define two urn processes starting with initial conditions as in the lemma. Then at step $n$, $\alpha_{n+1}>\alpha_n$ if, and only if, $f(\alpha_n)\ge U_n$; the same holds for $\beta_{n+1}$ (with $g$ in place of $f$). Assume now that $\alpha_n>\beta_n$ for some $n$, and that $n$ is the smallest index where such an inequality occurs. This means that $\alpha_{n-1}=\beta_{n-1}$. But since $f\le g$, by definition of our processes, we get a contradiction.
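The monotone coupling used in the proof of Lemma 6.1 can be written out explicitly. The sketch below is our own illustration; the assertion inside the loop checks the monotonicity claim pathwise.

```python
import random

def coupled_urns(f, g, alpha0, l0, n_steps, seed=0):
    """Monotone coupling of the urn processes for f <= g: both urns react
    to the same uniform U_n, adding a Red ball iff U_n <= f(alpha_n)
    (resp. U_n <= g(beta_n))."""
    rng = random.Random(seed)
    a = b = alpha0
    l = l0
    for _ in range(n_steps):
        u = rng.random()
        a = (l * a + 1) / (l + 1) if u <= f(a) else l * a / (l + 1)
        b = (l * b + 1) / (l + 1) if u <= g(b) else l * b / (l + 1)
        l += 1
        assert a <= b + 1e-12, "coupling must preserve alpha_n <= beta_n"
    return a, b
```

Since both processes live on the same lattice of proportions and can only cross by first meeting, the inequality $\alpha_n\le\beta_n$ is preserved at every step, exactly as in the proof.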
This lemma, together with Theorem 2.2, allows us to also consider the case when $f$ has possibly more than one fixed point, under the condition $f\ge1/2$ on $[1/2,1]$. More precisely, we have the following. Proof. If either of the two hypotheses of the corollary is satisfied, then there exists a function $g$ such that $g\le f$, $g$ has a unique fixed point equal to $1/2$, and $g'(1/2)=0$. We can also assume that $g$ is increasing on $[0,1/2]$. Applying Lemma 6.1, we see that there exists an urn process $(\beta_n, n\ge0)$ associated to $g$ such that $\beta_n\le\alpha_n$ for all $n$. Now the proof of Proposition 3.2 shows that
Since $g$ is increasing on $[0,1/2]$ and $f\ge1/2$ on $[1/2,1]$, this implies that $\mathbb{E}[\delta^-_\infty]$ is finite. Moreover, we know that $\delta_\infty=+\infty$ a.s. So we have everything needed to apply the proof of Theorem 2.2 and conclude.

Some examples
Our goal here is to give examples of functions $f$ leading to interesting behavior of the associated random walk, in view of the previous results. In all this section we consider a function $f$, symmetric with respect to $1/2$ (i.e. such that $f(1/2-x)=f(1/2+x)$ for all $x\in[0,1/2]$), decreasing on $[0,1/2]$ and increasing on $[1/2,1]$. We also assume that $f$ has a unique fixed point, equal to $1/2$, and that $f''(1/2)=0$.
We start with a comparison result. Let $u$ be some positive real number. Define $f_u$ by the equation $2f_u-1=(u(2f-1))\wedge1$. One sees immediately that $f_u$ has the same properties as $f$ for all $u$, and moreover that $f_u\le f_v$ if $u\le v$. Denote by $((\alpha^u_n, l^u_n), n\ge0)$ an urn process associated to $f_u$ such that $(\alpha^u_0, l^u_0)=(1/2,l)$ with $l>0$, and set $\delta^u_\infty:=\sum_{n\ge0}(2f_u(\alpha^u_n)-1)$. Then we have the following lemma: $\mathbb{E}[(\delta^u_\infty)^-]$ is finite for all $u$, the maps $u\mapsto\mathbb{E}[\delta^u_\infty]$ restricted to $(0,1]$ and to $[0,+\infty)$ are nondecreasing, and $\mathbb{E}[\delta^u_\infty]\to+\infty$ as $u\to+\infty$.
Proof. The first claim results from the proof of Proposition 3.2. For the second claim, consider first $0<u<v\le1$. By symmetry, for any $k\ge0$,
Moreover, since $f_v$ is nondecreasing on $[1/2,1]$ and since one may couple $\alpha^u_k$ and $\alpha^v_k$ so that $\alpha^u_k\le\alpha^v_k$ a.s. by Lemma 6.1,
The result follows by summation. The fact that $u\mapsto\mathbb{E}[\delta^u_\infty]$ is nondecreasing on $[0,+\infty)$ is similar. It remains to find the limit when $u\to+\infty$. For this, fix some $n\ge1$. Then one can observe that there exists $\epsilon>0$ such that $|\alpha^u_{2k+1}-1/2|\ge\epsilon$ for all $k\le n$. This implies that for $u$ large enough, $\mathbb{E}[\delta^u_\infty]\ge n/2$. Since this holds for all $n$, the result follows.
The preceding lemma and Theorem 2.1 show that there is a phase transition: let $X$ be a generalized reinforced random walk started at $(0,w_0)$ and associated to $f_u$, where the initial environment is such that $w^x_0=(1/2,l)$. Then there exists some $u_0>0$ such that for $u>u_0$ the random walk associated to $f_u$ is transient, whereas for $u<u_0$ it is recurrent. In particular, recurrence and transience may both appear. The question of what happens at $u_0$ is related to the continuity of $\mathbb{E}[\delta_\infty]$ with respect to $f$. But an explicit computation shows that if $u\to u_0$, then for all $n$, $\mathbb{E}[\delta^u_n]\to\mathbb{E}[\delta^{u_0}_n]$. Together with the monotonicity of $\mathbb{E}[\delta^u_\infty]$ in $u$, this proves that $\mathbb{E}[\delta^u_\infty]$ is continuous in $u$. In particular, for $u=u_0$ the random walk is recurrent.
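The family $f_u$ is straightforward to write down explicitly. In the sketch below, the base function $f(x)=1/2+(x-1/2)^4$ is our own choice of an example satisfying the standing assumptions of this section (symmetric with respect to $1/2$, unique fixed point $1/2$, $f''(1/2)=0$):

```python
def make_f_u(f, u):
    """f_u defined by 2*f_u - 1 = min(u*(2*f - 1), 1)."""
    def f_u(x):
        return (1.0 + min(u * (2.0 * f(x) - 1.0), 1.0)) / 2.0
    return f_u

# base example (our own choice): f(x) = 1/2 + (x - 1/2)^4 is symmetric,
# has 1/2 as unique fixed point on [0, 1], and f''(1/2) = 0
f = lambda x: 0.5 + (x - 0.5) ** 4
f1 = make_f_u(f, 1.0)
f3 = make_f_u(f, 3.0)
```

The monotonicity $f_u\le f_v$ for $u\le v$ is immediate here since $2f-1\ge0$.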
Proof. We use a standard coupling argument. Let $(U_i)_{i\ge0}$ be a family of i.i.d. random variables uniformly distributed on $[0,1]$. Start two urn processes $(\alpha_n, n\ge0)$ and $(\beta_n, n\ge0)$, from $(\alpha,2l)$ and $(1/2,2l)$ respectively. They evolve according to the following rule: if at step $n$, $\alpha_n$ or $\beta_n$ is equal to some $x\ge1/2$, then we add one Red ball to the corresponding urn if $U_n\le f(x)$; if $x<1/2$, then we add a Red ball if $U_n\ge1-f(x)$. The condition $2\alpha l-l\in\mathbb{N}$ ensures by induction that $l_n\alpha_n-l_n\beta_n\in\mathbb{Z}$ for all $n\ge1$. This in turn shows that the two urn processes (as well as their reflections with respect to $1/2$) cannot cross each other without meeting. Thus for all $n\ge0$, $|\beta_n-1/2|\le|\alpha_n-1/2|$. The lemma follows.
The preceding results show in particular that the property of recurrence or transience may depend on the initial conditions of the urns (even if $l_0$ is fixed). Indeed, it suffices to consider $f$ such that $\mathbb{E}_{1/2,2l_0}[\delta_\infty]=1$, which is possible by Lemma 7.1 and the continuity in $u$ of $\mathbb{E}[\delta^u_\infty]$ explained above. Then the preceding lemma shows that for any $\alpha\neq1/2$ satisfying the condition of the lemma, the random walk associated with urns starting from $(\alpha,2l_0)$ is always transient, whereas it is recurrent if they start from $(1/2,2l_0)$.
We now arrive at our last result.