Limit theorems for vertex-reinforced jump processes on regular trees

Consider a vertex-reinforced jump process defined on a regular tree, where each vertex has exactly $b$ children, with $b \ge 3$. We prove the strong law of large numbers and the central limit theorem for the distance of the process from the root. Note that it is still unknown whether the vertex-reinforced jump process is transient on the binary tree.


Introduction
Let D be any graph with the property that each vertex is the endpoint of only finitely many edges. Denote by Vert(D) the set of vertices of D. The following, together with the vertex occupied at time 0 and the set of positive numbers {a_ν : ν ∈ Vert(D)}, defines a right-continuous process X = {X_s, s ≥ 0}. This process takes values in the vertices of D and jumps only to nearest neighbors, i.e. vertices one edge away from the occupied one. Given X_s, 0 ≤ s ≤ t, and {X_t = x}, the conditional probability that, in the interval (t, t + dt), the process jumps to the nearest neighbor y of x is L(y, t)dt, with
$$L(y, t) := a_y + \int_0^t \mathbb{1}_{\{X_s = y\}}\, ds, \qquad a_y > 0,$$
where $\mathbb{1}_A$ stands for the indicator function of the set A. The positive numbers {a_ν : ν ∈ Vert(D)} are called initial weights, and we suppose a_ν ≡ 1 unless specified otherwise. Such a process is said to be a Vertex-Reinforced Jump Process (VRJP) on D.
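Since local time accumulates only at the occupied vertex, the jump rates toward the neighbors of the current vertex stay constant during each sojourn, so each holding time is a single exponential. The following minimal simulation sketch exploits this; the function name and graph encoding are ours, not from the paper.

```python
import random
from collections import defaultdict

def simulate_vrjp(neighbors, start, t_max, a=None, rng=None):
    """Simulate a VRJP on a graph given as {vertex: [neighbors]}.

    While the process sits at x, the rate of jumping toward a neighbor y
    is L(y, t) = a_y + (local time accumulated at y), which is constant
    during the sojourn at x, since y's local time grows only while the
    process occupies y.
    """
    rng = rng or random.Random(0)
    local = defaultdict(float)           # local time spent at each vertex
    a = a or defaultdict(lambda: 1.0)    # initial weights, a_v ≡ 1
    t, x, path = 0.0, start, [start]
    while t < t_max:
        rates = [a[y] + local[y] for y in neighbors[x]]
        total = sum(rates)
        hold = rng.expovariate(total)    # sojourn time at x
        if t + hold > t_max:
            local[x] += t_max - t        # account for the final partial sojourn
            break
        local[x] += hold
        t += hold
        # jump to neighbor y with probability proportional to L(y, t)
        r, acc, nxt = rng.uniform(0, total), 0.0, neighbors[x][-1]
        for y, rate in zip(neighbors[x], rates):
            acc += rate
            if r <= acc:
                nxt = y
                break
        x = nxt
        path.append(x)
    return path, dict(local)
```

On the integer lattice this reproduces the transition mechanism described below: starting from 0 the first holding time is exponential with mean 1/2, and the jump probabilities are proportional to the accumulated weights.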
Consider VRJP defined on the integers, started at 0. With probability 1/2 it jumps either to 1 or to −1. The time of the first jump is an exponential random variable with mean 1/2, independent of the direction of the jump. Suppose the walk jumps to 1 at time z. Given this, it waits at 1 for an exponential amount of time with mean 1/(2 + z). Independently of this time, the next jump is towards 0 with probability (1 + z)/(2 + z).
In this paper we define a process to be recurrent if it visits each vertex infinitely many times a.s., and transient otherwise. VRJP was introduced by Wendelin Werner, and its properties were first studied by Davis and Volkov (see [8] and [9]). This reinforced walk on the integer lattice is studied in [8], where recurrence is proved. For fixed b ∈ N := {1, 2, . . .}, the b-ary tree, which we denote by G_b, is the infinite tree in which each vertex has b + 1 neighbors, with the exception of a single vertex, called the root and denoted by ρ, which is connected to b vertices. In [9] it is shown that VRJP on the b-ary tree is transient if b ≥ 4. The case b = 3 was dealt with in [4], where it was proved that the process is still transient. The case b = 2 is still open.
Another process which reinforces vertices, the so-called Vertex-Reinforced Random Walk (VRRW), shows completely different behaviour. VRRW was introduced by Pemantle (see [17]). Pemantle and Volkov (see [19]) proved that this process, defined on the integers, gets stuck in at most five points. Tarrès (see [23]) proved that it gets stuck in exactly five points. Volkov (in [24]) studied this process on arbitrary trees.
The reader can find in [18] a survey on reinforced processes. In particular, we would like to mention that little is known about the behaviour of these processes on infinite graphs with loops. Merkl and Rolles (see [13]) studied the recurrence of the original reinforced random walk, the so-called linearly bond-reinforced random walk, on two-dimensional graphs. Sellke (see [21]) proved that once-reinforced random walk is recurrent on the ladder.
We define the distance between two vertices as the number of edges in the unique self-avoiding path connecting them. For any vertex ν, denote by |ν| its distance from the root. Level i is the set of vertices ν such that |ν| = i. The main result of this paper is the following.

Theorem 1.1. Let X be VRJP on G_b, with b ≥ 3. There exist constants $K^{(1)}_b \in (0, \infty)$ and $K^{(2)}_b \in [0, \infty)$ such that, a.s.,
$$\lim_{t \to \infty} \frac{|X_t|}{t} = K^{(1)}_b, \qquad \frac{|X_t| - K^{(1)}_b t}{\sqrt{t}} \Rightarrow \mathrm{Normal}\big(0, K^{(2)}_b\big), \tag{1.1}$$
where the limit is taken as t → ∞, ⇒ stands for weak convergence and Normal(0, 0) stands for the Dirac mass at 0.
Durrett, Kesten and Limic proved in [11] an analogous result for a bond-reinforced random walk, called one-time bond-reinforced random walk, on G_b, b ≥ 2. To prove this, they break the path into independent identically distributed blocks, using the classical method of cut points. We also use this approach. Our implementation of the cut point method is a strong improvement of the one used in [3] to prove the strong law of large numbers for the original reinforced random walk, the so-called linearly bond-reinforced random walk, on G_b, with b ≥ 70. Aidékon (in [1]) gives a sharp criterion for a random walk in random environment, defined on a Galton-Watson tree, to have positive speed. He proves the strong law of large numbers for linearly bond-reinforced random walk on G_b, with b ≥ 2.

Preliminary definitions and properties
From now on, we consider VRJP X defined on the regular tree G_b, with b ≥ 3. For ν ≠ ρ, define par(ν), called the parent of ν, to be the unique vertex at level |ν| − 1 connected to ν. A vertex ν_0 is a child of ν if ν = par(ν_0). We say that a vertex ν_0 is a descendant of the vertex ν if the latter lies on the unique self-avoiding path connecting ν_0 to ρ, and ν_0 ≠ ν. In this case, ν is said to be an ancestor of ν_0. For any vertex µ, let Λ_µ be the subtree consisting of µ, its descendants and the edges connecting them, i.e. the subtree rooted at µ. We give the so-called Poisson construction of VRJP on a graph D (see [20]). To each ordered pair of neighbors (u, v) assign a Poisson process P(u, v) of rate 1, the processes being independent. Call h_i(u, v), with i ≥ 1, the inter-arrival times of P(u, v), and let ξ_1 := inf{t ≥ 0 : X_t = u}. The first jump after ξ_1 occurs at the time at which the minimum, taken over the set of neighbors of u, is attained, and the jump is towards the neighbor v attaining that minimum. Suppose we have defined {(ξ_j, c_j), 1 ≤ j ≤ i − 1}; the first jump after ξ_i is determined in the same way. By virtue of our construction of VRJP, (2.1) can be interpreted as follows. When the process X visits the vertex µ_0 for the first time, if this ever happens, the weight at its parent is exactly 1 + h_1(par(µ_0), µ_0), while the weight at µ is 1. Hence condition (2.1) implies that when the process visits µ_0 (if this ever happens), it will visit µ before it returns to par(µ_0), if this ever happens.
The next Lemma gives bounds for the probability that VRJP returns to the root after the first jump.
Lemma 2.2. Let α_b := P(X_t = ρ for some t ≥ T_1), and let β_b be the smallest among the positive solutions of equation (2.11), where, for k ∈ {0, 1, . . . , b}, the probabilities p_k are defined in (2.3).

Proof. First we prove the lower bound in (2.4). The left-hand side of this inequality is the probability that the process returns to the root in exactly two jumps. To see this, notice that L(ρ, T_1) equals 1 + min_{ν : |ν|=1} h_1(ρ, ν). Hence T_1 = L(ρ, T_1) − 1 is distributed like an exponential with mean 1/b. Given that T_1 = z, the probability that the second jump is from X_{T_1} to ρ equals (1 + z)/(b + 1 + z). The probability that the process returns to the root in exactly two jumps follows by integrating over the distribution of T_1. As for the upper bound in (2.4), we reason as follows. We give an upper bound for the probability that there exists an infinite random tree which is composed only of good vertices and which is rooted at one of the children of X_{T_1}. If this event holds, then the process does not return to the root after time T_1 (see the proof of Theorem 3 in [4]). We prove that a particular cluster of good vertices is stochastically larger than a supercritical branching process. We introduce the following color scheme. The only vertex at level 1 to be green is X_{T_1}. A vertex ν, with |ν| ≥ 2, is green if and only if it is good and its parent is green. All the other vertices are uncolored. Fix a vertex µ. Let C be any event in
H_µ := σ(h_i(η_0, η_1) : i ≥ 1, with η_0 ∼ η_1 and both η_0, η_1 ∉ Λ_µ), (2.5)
that is, the σ-algebra that contains the information about X_t observed outside Λ_µ. Next we show that, given C ∩ {µ is green}, the distribution of h_1(par(µ), µ) is stochastically dominated by an exponential(1). To see this, first notice that h_1(par(µ), µ) is independent of C. Let D := {par(µ) is green} ∈ H_µ. The random variable W is independent of h_1(par(µ), µ) and is absolutely continuous with respect to the Lebesgue measure.
By the definition of good vertices, denote by f_W the conditional density of W given D ∩ C ∩ {h_1(par(µ), µ) < W}. Using the fact that h_1(par(µ), µ) is independent of W, C and D, we get that the expression in (2.7) is less than or equal to P(h_1(par(µ), µ) ≥ x). Summarising, the inequality (2.7) implies that if µ_1 is a child of µ and C ∈ H_µ, the analogous domination holds for µ_1. To see this, it is enough to integrate over the value of h_1(par(µ), µ) and use the fact that, conditionally on h_1(par(µ), µ), the events {µ_1 is green} and {µ is green} ∩ C are independent. The probability that µ_1 is good conditionally on {h_1(par(µ), µ) = x} is a non-increasing function of x, while the distribution of h_1(par(µ), µ) is stochastically smaller than the conditional distribution of h_1(par(µ), µ) given {µ is green} ∩ C, as shown in (2.8).
Hence the cluster of green vertices is stochastically larger than a Galton-Watson tree in which each vertex has k offspring, k ∈ {0, 1, . . . , b}, with probability p_k defined in (2.3). To see this, fix a vertex µ and let µ_i, with i ∈ {1, . . . , b}, be its children. It is enough to realize that p_k is the probability that exactly k of the h_1(µ, µ_i), with i ∈ {1, . . . , b}, are smaller than (1 + h_1(par(µ), µ))^{−1} h_1(µ, par(µ)). As the random variables h_1(µ, µ_i), h_1(µ, par(µ)) and h_1(par(µ), µ) are independent exponentials with parameter one, we obtain (2.10). From the basic theory of branching processes we know that the probability that this Galton-Watson tree is finite (i.e. extinction) equals the smallest positive solution of the equation
$$\sum_{k=0}^{b} p_k x^k - x = 0. \tag{2.11}$$
The proof of (2.4) follows from the fact that 1 − β_b ≤ 1 − α_b. This latter inequality is a consequence of the fact that the cluster of green vertices is stochastically larger than the Galton-Watson tree, hence its probability of non-extinction is not smaller. As b ≥ 3, the Galton-Watson tree is supercritical (see [4]), hence β_b < 1.
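The extinction step can be checked numerically: the extinction probability of a Galton-Watson tree is the smallest non-negative fixed point of the offspring generating function, obtained by iterating from 0. A minimal sketch follows; the offspring pmfs used here are illustrative, not the p_k of (2.3).

```python
def extinction_probability(pmf, iters=500):
    """Smallest non-negative fixed point of f(s) = sum_k p_k s^k,
    i.e. the extinction probability of a Galton-Watson process with
    offspring pmf `pmf` (pmf[k] = P(k children)).  Iterating
    s <- f(s) from s = 0 converges monotonically to that fixed point."""
    s = 0.0
    for _ in range(iters):
        s = sum(p * s**k for k, p in enumerate(pmf))
    return s

# Supercritical example: mean offspring 1.6 > 1, so extinction < 1.
# extinction_probability([0.1, 0.2, 0.7]) is close to 1/7.
```

When the mean offspring number exceeds 1 (the supercritical case used in the proof), the returned value is strictly less than 1, which is exactly what gives β_b < 1.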
For example, if we consider VRJP on G_3, Lemma 2.2 yields 0.3809 ≤ α_3 ≤ 0.8545.

Definition 2.3. Level j ≥ 1 is a cut level if the first jump after T_j is towards level j + 1, and after time T_{j+1} the process never goes back to X_{T_j}, and L(X_{T_j}, ∞) < 2 and L(par(X_{T_j}), ∞) < 2.
Define l_1 to be the cut level with minimum distance from the root and, for i > 1, let l_i be the smallest cut level larger than l_{i−1}. Define the i-th cut time to be τ_i := T_{l_i}. Notice that l_i = |X_{τ_i}|.

l_1 has an exponential tail
For any vertex ν ∈ Vert(G_b), we define fc(ν), which stands for first child of ν, to be the (a.s.) unique vertex connected to ν satisfying
h_1(ν, fc(ν)) = min{h_1(ν, µ) : par(µ) = ν}. (3.1)
For definiteness, the root ρ is not a first child. Notice that condition (3.1) does not imply that the vertex fc(ν) is visited by the process. If X visits it, then it is the first among the children of ν to be visited.
For any pair of distributions f and g, denote by f ∗ g their convolution. Recall the definition of p_i, i ∈ {0, . . . , b}, given in (2.3). Denote by p^{(1)} the distribution which assigns probability p_i to i ∈ {0, . . . , b}. Define the distributions p^{(j)}, j ≥ 2, by recursion. The distribution p^{(j)} describes the number of elements, at time j, in a population which evolves like a branching process generated by one ancestor and with offspring distribution p^{(1)}. If we let m be the mean of p^{(1)}, then the mean of p^{(j)} is m^j. The probability that a given vertex µ is good is given by its definition. As h_1(par(µ_0), µ_0) is exponential with parameter 1, conditioning on its value and using the independence between different Poisson processes, this probability can be computed explicitly. Next we want to define a sequence of events which are independent and which are closely related to the event that a given level is a cut level. For any vertex ν of G_b, let Θ_ν be the set of vertices µ such that
• µ is a descendant of ν,
• the difference |µ| − |ν| is a multiple of ζ,
• µ is a first child.
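The iterated distribution p^{(j)} and the identity mean(p^{(j)}) = m^j can be verified by explicit convolution; the sketch below uses an illustrative offspring pmf, not the p_k of (2.3).

```python
def convolve(f, g):
    """Convolution of two pmfs given as lists indexed by value."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def generation_pmf(pmf, j):
    """Exact pmf of the size of generation j of a branching process
    started from one ancestor, with offspring pmf `pmf`
    (pmf[k] = P(k children))."""
    dist = [0.0, 1.0]                       # generation 0: one individual
    for _ in range(j):
        new = [0.0] * ((len(dist) - 1) * (len(pmf) - 1) + 1)
        conv = [1.0]                        # total offspring of 0 parents
        for k, prob in enumerate(dist):
            for n, q in enumerate(conv):
                new[n] += prob * q          # k parents contribute conv
            conv = convolve(conv, pmf)      # add one more parent
        dist = new
    return dist
```

The mass at 0 of the resulting pmf equals the j-fold iterate of the generating function at 0 (extinction by generation j), and the mean equals m^j, matching the statement above.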
By a subtree rooted at ν we mean a subtree of Λ_ν that contains ν. Set ν̄ := fc(ν) and let
A(ν) := {∃ an infinite subtree of G_b rooted at a child of ν̄, which is composed only of good vertices and which contains none of the vertices in Θ_ν}. (3.4)
Notice that if the process reaches the first child of ν and if A(ν) holds, then the process will never return to ν. Hence if A_i holds, and if |X_{T_{i+1}}| = |X_{T_i}| + 1, then i is a cut level, provided that the total weights at X_{T_i} and its parent are less than 2.
Proof. We recall that ζ ≥ 2. We proceed by backward recursion and show that the events A_{iζ} depend on disjoint collections of Poisson processes. Choose integers 0 < i_1 < i_2 < . . . < i_k, with i_j ∈ ζN := {ζ, 2ζ, 3ζ, . . .} for all j ∈ {1, 2, . . . , k}. It is enough to prove (3.5). Fix a vertex ν at level i_k. The event A(ν) belongs to the sigma-algebra generated by the Poisson processes attached to pairs of vertices in Λ_ν. As the two events belong to disjoint collections of independent Poisson processes, they are independent. As P(A(ν)) = P(A(ρ)), the events A(ν) and {X_{T_{i_k}} = ν} are independent, and by virtue of the self-similarity property of the regular tree we get P(A(ρ)) = P(A_{i_k}). Hence (3.7) holds. Reiterating (3.7) we get (3.5).
Lemma 3.2. Define γ_b to be the smallest positive solution of equation (3.8), where ζ and (q^{(n)}_k) have been defined at the beginning of this section. We have the bound (3.9).

Proof. Fix i ∈ N and let ν* = X_{T_i}. We adopt the following color scheme. The vertex fc(X_{T_i}) is colored blue. A descendant µ of ν* is colored blue if it is good, its parent is blue, and either
• |µ| − |ν*| is not a multiple of ζ, or
• (|µ| − |ν*|)/ζ ∈ N and µ is not a first child.
Vertices which are not descendants of ν* are not colored. Following the reasoning given in the proof of Lemma 2.2, we can conclude that the number of blue vertices at levels |ν*| + jζ, with j ≥ 1, is stochastically larger than the number of individuals in a population which evolves like a branching process with offspring distribution q^{(ζ)}, introduced at the beginning of this section. Again, from the basic theory of branching processes we know that the probability that this tree is finite equals the smallest positive solution of equation (3.8). By virtue of (3.3) we have that γ_b < 1.
The proof of the following Lemma can be found in [10] pages 26-27 and 35.
We have the following large deviations estimate, for s ∈ [0, 1]. i) Let ν be a vertex with |ν| ≥ 1. The smaller x is, the more likely it is that ν_1 is good. This is true for any child of ν. As for descendants of ν at level strictly greater than |ν| + 2, their status of being good is independent of h_1(ν, fc(ν)). Hence T(x) ⊃ T(y) for x < y. This implies that the connected component of good vertices containing ν is larger for smaller values of h_1(ν, fc(ν)). Using symmetry we get i). In order to prove ii), use i) and the fact that the distribution of h_1(ν, fc(ν)) is stochastically larger than the corresponding conditional distribution of h_1(ν, fc(ν)). Denote by [x] the largest integer not larger than x.
Theorem 3.5. For VRJP defined on G_b, with b ≥ 3, and s ∈ (0, 1), we have the stated bound, where γ_b was defined in Lemma 3.2.

Proof. By virtue of Proposition 3.1, the sequence 1l_{A_{kζ}}, with k ∈ N, consists of i.i.d. random variables. The random variable Σ_{j=1}^{[n/ζ]} 1l_{A_{jζ}} has binomial distribution with parameters P(A(ρ)) and [n/ζ]. We define the event
B_j := {the first jump after T_j is towards level j + 1, and L(X_{T_j}, T_{j+1}) < 2 and L(par(X_{T_j}), T_{j+1}) < 2}.
Let F_t be the smallest sigma-algebra generated by the collection {X_s, 0 ≤ s ≤ t}, and for any stopping time S define F_S in the usual way. The corresponding inequality holds a.s.. In fact, by time T_i the total weight of the parent of X_{T_i} is stochastically smaller than 1 plus an exponential of parameter b, independent of F_{T_{i−1}}. Hence the probability that this total weight is less than 2 is larger than 1 − e^{−b}. Given this, the probability that the first jump after T_i is towards level i + 1 is larger than b/(b + 2). Finally, the conditional probability that T_{i+1} − T_i < 1 is larger than 1 − e^{−(b+1)}. This implies, together with ζ ≥ 2, that the random variable Σ_{j=1}^{[n/ζ]} 1l_{B_j} is stochastically larger than a binomial(n, ϕ_b). For any i ∈ N, and any vertex ν with |ν| = iζ, the random variable Z and the event E are both measurable with respect to the sigma-algebra H_ν. Let f_Z be the density of Z given {h_1(ν, fc(ν)) < Z} ∩ E. Using Proposition 3.4 ii), and the independence between h_1(ν, fc(ν)) and H_ν, we get (3.13). The first equality in the last line of (3.13) is due to symmetry. Hence (3.14) holds. If A_k ∩ B_k holds, then k is a cut level. In fact, on this event, when the walk visits level k for the first time it jumps right away to level k + 1 and never visits level k again. This happens because X_{T_{k+1}} = fc(X_{T_k}) has a child which is the root of an infinite subtree of good vertices. Moreover, the total weights at X_{T_k} and its parent are less than 2.
Define e_n := Σ_{j=1}^{[n/ζ]} 1l_{A_{jζ} ∩ B_{jζ}}. By virtue of (3.9), (3.12), (3.14) and Proposition 3.1, we have that e_n is stochastically larger than a binomial([n/ζ], (1 − γ_b)ϕ_b) random variable.

Corollary 3.6. For n > 1/((1 − γ_b)ϕ_b), by choosing s = 1/n in Theorem 3.5, we obtain the tail bound (3.15).
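The tail estimate behind Theorem 3.5 is a standard binomial large-deviation bound: once the count of good levels dominates a Bin(n, p) variable, the probability of seeing fewer than an of them (with a < p) decays exponentially in n. An illustrative sketch with the classical Chernoff/relative-entropy bound (the parameters below are not the paper's constants):

```python
import math

def binom_lower_tail(n, p, k):
    """Exact P(Bin(n, p) <= k)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def chernoff_lower_tail(n, p, a):
    """Chernoff bound P(Bin(n, p) <= a*n) <= exp(-n * KL(a || p)),
    valid for 0 < a < p, where KL is the binary relative entropy."""
    kl = a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))
    return math.exp(-n * kl)
```

The exact lower tail is always below the Chernoff bound, and both decay geometrically in n, which is the mechanism giving l_1 its exponential tail.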

where the constant follows from the definition of H. The goal of this section is to prove the finiteness of the 11/5 moment of the first cut time. We adopt the following strategy:
• first we prove the finiteness of all moments of the number of vertices visited by time τ_1; then
• we prove that the total time spent at each of these sites has finite 12/5 moment.
Fix n ∈ N and let
Π_n := number of distinct vertices that X visits by time T_n,
Π_{n,k} := number of distinct vertices that X visits at level k by time T_n.
The process X_{δ(t,E)} is called the restriction of X to E.

Proposition 4.1 (Restriction principle (see [8])). Consider VRJP X defined on a tree J rooted at ρ. Assume this process is recurrent, i.e. visits each vertex infinitely often a.s.. Consider a subtree J̃ rooted at ν. Then the process X_{δ(t,J̃)} is VRJP defined on J̃. Moreover, for any subtree J* disjoint from J̃, we have that X_{δ(t,J̃)} and X_{δ(t,J*)} are independent.
Proof. This principle follows directly from the Poisson construction and the memoryless property of the exponential distribution.
Definition 4.2. Recall that P(x, y), with x, y ∈ Vert(G_b), are the Poisson processes used to generate X on G_b. Let J be a subtree of G_b. Consider VRJP V on J which is generated by using {P(u, v) : u, v ∈ Vert(J)}, the same collection of Poisson processes used to generate the jumps of X from the vertices of J. We say that V is the extension of X to J. The processes V_t and X_{δ(t,J)} coincide up to a random time, namely the total time spent by X in J.
We construct an upper bound for Π_{n,k}, with 2 ≤ k ≤ n − 1. Let G(k) be the finite subtree of G_b composed of all the vertices at level i with i ≤ k − 1, and the edges connecting them. Let V be the extension of X to G(k). This process is recurrent, because it is defined on a finite graph. The total number of first children at level k − 1 is b^{k−2}, and we order them according to when they are visited by V, as follows. Let η_1 be the first vertex at level k − 1 to be visited by V. Suppose we have defined η_1, . . . , η_{m−1}. Let η_m be the first child at level k − 1 not belonging to the set {η_1, η_2, . . . , η_{m−1}} to be visited. The vertices η_i, with 1 ≤ i ≤ b^{k−2}, are determined by V. All the other quantities and events, such as T(ν) and A(ν), with ν running over the vertices of G_b, refer to the process X. Define f_n(k) := 1 + b^2 inf{m ≥ 1 : 1l_{A(par(η_m))} = 1}.
Let J := inf{n : T(η_n) = ∞}; if the infimum is over an empty set, let J = ∞. Suppose that A(η_m) holds; then X, after time T(η_m), is forced to remain inside Λ_{η_m}, and never visits fc(η_m) again. This implies that T(η_{m+1}) = ∞. Hence, if J = m, then ∩_{i=1}^{m−1} A(par(η_i))^c holds. We conclude that f_n(k) overcounts the number of vertices at level k which are visited, i.e. Π_{n,k} ≤ f_n(k).
Recall that h 1 (ν, fc(ν)), being the minimum over a set of b independent exponentials with rate 1, is distributed as an exponential with mean 1/b.
Proof. Given ∩_{i=1}^{m−1} A(par(η_i))^c, the distribution of h(par(η_m), η_m) is stochastically smaller than an exponential with mean 1/b. Fix a set of vertices ν_i, 1 ≤ i ≤ m − 1, at level k − 1, each with a different parent. Given η_i = ν_i for i ≤ m − 1, consider the restriction of V to the finite subgraph obtained from G(k) by removing each of the ν_i and par(ν_i), with i ≤ m − 1. The restriction of V to this subgraph is VRJP, independent of ∩_{i=1}^{m−1} A(par(η_i))^c, and the total time spent by this process at level k − 2 is exponential with mean 1/b. This total time is an upper bound for h(par(η_m), η_m). This conclusion is independent of our choice of the vertices ν_i, 1 ≤ i ≤ m − 1. Finally, we use Proposition 3.4 i). Let a_n, c_n be numerical sequences. We say that c_n = O(a_n) if c_n/a_n is bounded. Proof. Consider first the case p > 1. Notice that Π_{n,0} = Π_{n,n} = 1. The case p = 1 follows from the corresponding sum, taken over the vertices of G_b. In words, Π is the number of vertices visited before τ_1.
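The overcount f_n(k) is driven by a geometric number of trials (the first index m for which A(par(η_m)) occurs), and a geometric random variable has finite moments of every order; that is what yields all moments of Π_{n,k}. A small numeric check of the geometric moments, with an illustrative success probability:

```python
def geometric_moment(p_succ, power, tol=1e-12):
    """E[G^power] for G ~ Geometric(p_succ) supported on {1, 2, ...},
    summing k^power * (1-p)^(k-1) * p until the terms are negligible."""
    total, k, q = 0.0, 1, 1.0          # q = (1 - p_succ)**(k - 1)
    while q * k**power > tol:
        total += k**power * q * p_succ
        q *= 1.0 - p_succ
        k += 1
    return total
```

For p = 0.3 this reproduces the closed forms E[G] = 1/p and E[G²] = (2 − p)/p² up to truncation error; the same summation converges for every fixed power, because geometric decay beats any polynomial factor.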
In the last inequality we used Corollary 3.6.
A ray σ is a subtree of G b containing exactly one vertex of each level of G b . Label the vertices of this ray using {σ i , i ≥ 0}, where σ i is the unique vertex at level i which belongs to σ. Denote by S the collection of all rays of G b .
Proof. By the tower property of conditional expectation, we may focus on the process restricted to {0, 1}. This restricted process is VRJP which starts at 1, with initial weights a_1 = 1 and a_0 = 1 + h_1(σ_0, σ_1), where σ_0 = ρ. By applying Lemma 4.6, and using the fact that h_1(σ_0, σ_1) is exponential with mean 1, we obtain (4.7). The Lemma follows by recursion and the restriction principle.
Next, we prove that L(ρ, T(σ_n)) ≤ L^{(σ)}(σ_0, T^{(σ)}_n). If A(ν) holds, after the first time the process hits the first child of ν, if this ever happens, it will never visit ν again, and will not increase the local time spent at the root. Roughly, our strategy is to use the extensions on paths to give an upper bound for the total time spent at the root by time T_k, and to show that the probability of ∩_{i=1}^{k} D_i^c decreases quite fast in k. Using the independence between disjoint collections of Poisson processes, we infer that the events A(ν), with |ν| = k − 2, are independent. In fact, each A(ν) is determined by the Poisson processes attached to pairs of vertices in Λ_ν. Using (4.11), Hölder's inequality (with p = 5/4) and (4.10), we obtain the following.

Lemma 4.9. For ν ≠ ρ, there exists a random variable ∆_ν which is σ(P(u, v) : u, v ∈ Vert(Λ_ν))-measurable, such that i) L(ν, ∞) ≤ ∆_ν, and ii) ∆_ν and L(ρ, ∞) are identically distributed.
Proof. Let X̃ := {X̃_t, t ≥ 0} be the extension of X to Λ_ν, and define ∆_ν in terms of X̃. By construction, this random variable satisfies i) and ii) and is σ(P(u, v) : u, v ∈ Vert(Λ_ν))-measurable.
Proof. Label the vertices at level 1 by µ_1, µ_2, . . . , µ_b. Let τ_1(µ_i) be the first cut time of the extension of X to Λ_{µ_i}. This extension is VRJP on Λ_{µ_i} with initial weights 1, hence we can apply Theorem 4.10. It remains to prove the stated bound for x ∈ [1, 2], where we used Jensen's inequality, the independence of τ_1(µ_i) and T_1, and Lemma 4.11. In fact, this follows as L(ρ, ∞) ≥ 1.

Splitting the path into one-dependent pieces
does not depend on ν. This implies that Z is a Markov chain. The self-similarity property of G_b and X yields the homogeneity.
From the previous proof, we can infer the following. Proof. We only prove (5.1), the proof of (5.2) being similar. Define C := {X_t ≠ ρ, ∀t > T_1} and fix a vertex ν. By the self-similarity property of G_b and by the proof of Lemma 2.2, we obtain sup_{x ∈ [1,2]} E[(τ_1)^{11/5} | L(ρ, T_1) = x] < ∞. Next we prove that Z satisfies the Doeblin condition.

Lemma 5.3. There exists a probability measure φ(·) and 0 < λ ≤ 1 such that, for every Borel subset B of [1, 2], the bound (5.4) holds.

Proof. As Z_i is homogeneous, it is enough to prove (5.4) for i = 1. In this proof we show that the distribution of Z_2 is absolutely continuous, and we compare it to 1 plus an exponential with parameter 1 conditioned on being less than 1. The analysis is technical because the Z_i depend on the behaviour of the whole process X. Our goal is to find a lower bound for (5.5). Moreover, we require that this lower bound is independent of z ∈ [1, 2].
Fix ε ∈ (0, 1). Our first goal is to find a lower bound for the probability of the event {Z_2 ∈ (x, y), Z_1 ∈ I_ε(z)}, where I_ε(z) := (z − ε, z + ε). Fix z ∈ [1, 2] and consider the function in (5.6). Its derivative with respect to t is non-positive for t ∈ [1, 2] and u ∈ [1, 2]. Hence, for fixed u ∈ [1, 2], the function in (5.6) is non-increasing for t ∈ [1, 2]. For 1 ≤ x < y ≤ 2, we have
e^{−(b+u)(x−1)} − e^{−(b+u)(y−1)} ≥ (b + 1)e^{−(b+2)} (e^{−(x−1)} − e^{−(y−1)}). (5.7)
We use this inequality to get a lower bound for the probability of the event {Z_2 ∈ (x, y), Z_1 ∈ I_ε(z)}. Our strategy is to calculate the probability of a suitable subset of the latter event. Consider the following event. Suppose that a) T_1 < 1; then b) the process spends at X_{T_1} an amount of time contained in (z − 1 − ε, z − 1 + ε); then c) it jumps to a vertex at level 2, spends there an amount of time t with t + 1 ∈ (x, y); and d) it jumps to level 3 and never returns to X_{T_2}.
In the event just described, levels 1 and 2 are the first two cut levels, and {Z_2 ∈ (x, y), Z_1 ∈ I_ε(z)} holds. The probability that a) holds is exactly 1 − e^{−b}. Given T_1 = s − 1, the time spent at X_{T_1} before the next jump is exponential with parameter b + s. Hence b) occurs with probability larger than inf_{s∈[1,2]} (e^{−(b+s)(z−ε)} − e^{−(b+s)(z+ε)}).
Given a) and b), the process jumps to level 2 and then to level 3 with probability larger than (b/(b + 2))(b/(b + z + ε)). The conditional probability, given a) and b), that the time gap between these two jumps lies in (x − 1, y − 1) is larger than inf_{u∈I_ε(z)} (e^{−(b+u)(x−1)} − e^{−(b+u)(y−1)}).
At this point, a lower bound for the conditional probability that the process never returns to X_{T_2} follows from the proof of Lemma 2.2. We obtain (5.8), where in the last inequality we used (5.7). Notice that there exists a constant C^{(4)}_b > 0 for which the corresponding estimate holds. Summarizing, we get a lower bound involving a constant C^{(5)}_b which depends only on b. In order to find a lower bound for (5.5), we need to prove that the supremum over ε ∈ (0, 1) of the relevant quantity is bounded by some positive constant C^{(6)}_b. To see this, recall the definition of B_j from the proof of Theorem 3.5, and of ζ from (3.3). The event that level i is not a cut level is a subset of (B_i ∩ A_i)^c (see the proof of Theorem 3.5). Let m_i := h_1(X_{T_i}, fc(X_{T_i})), which is exponential with mean 1/b. The resulting bound holds with a constant C independent of ε and z. It remains to prove that the sum on the right-hand side is bounded by a constant independent of ε. Notice that, for i > ζ, A_{i−ζ} and B_{i−ζ} are independent of m_i. Moreover, the corresponding events are independent by the proof of Proposition 3.1. Hence, combining (5.8), (5.9) and (5.11), we get (5.12) for some λ > 0. A finite measure defined on a field A can be extended uniquely to the sigma-field generated by A, and this extension coincides with the outer measure. We apply this result to prove that (5.12) holds for any Borel set C ⊂ [1, 2], using the fact that it holds on the field of finite unions of intervals. For any interval E, the right-hand side of (5.12) can be written in integral form.
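The point of the Doeblin minorization (5.4) is regeneration: writing the kernel as P(x, ·) = λφ(·) + (1 − λ)Q(x, ·), every step offers an independent λ-coin; on heads, the chain forgets its past by drawing from φ. The waiting time until regeneration is therefore Geometric(λ). A sketch of this scheme (the value of λ and the run count are illustrative, not the paper's):

```python
import random

def regeneration_times(lam, runs, seed=0):
    """For a kernel dominated below by lam * phi, regeneration occurs at
    each step independently with probability lam, so the waiting time
    until regeneration is Geometric(lam).  Returns `runs` simulated
    waiting times."""
    rng = random.Random(seed)
    times = []
    for _ in range(runs):
        n = 1
        while rng.random() >= lam:   # no regeneration at this step
            n += 1
        times.append(n)
    return times
```

The empirical mean waiting time is close to 1/λ, and two copies of the chain coupled through the same coin agree after that time, so the total-variation distance to stationarity decays at least like (1 − λ)^n.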
Fix a Borel set C ⊂ [1, 2] and ε > 0, and choose a countable collection of disjoint intervals covering C. The first inequality is true because of the extension theorem, and because the right-hand side is a lower bound for the outer measure, for a suitable choice of the E_i's. The inequality (5.4), with φ(C) = ∫_C e^{−x+1}/(1 − e^{−1}) dx, follows by sending ε to 0.
The proof of the following Proposition can be found in [2].
With a similar proof we get the following result.
Definition 5.7. A process {Y_k, k ≥ 1} is said to be one-dependent if, for every i ≥ 1, Y_{i+2} is independent of {Y_j, 1 ≤ j ≤ i}.
Proof. Given Z_{N_{i−1}}, Υ_i is independent of {Υ_j, j ≤ i − 2}. Thus, it is sufficient to prove that Υ_i is independent of Z_{N_{i−1}}. To see this, it is enough to realize that, given Z_{N_i}, Υ_i is independent of Z_{N_{i−1}}, and to combine this with the fact that Z_{N_i} and Z_{N_{i−1}} are independent. The variables Z_{N_i} are i.i.d., hence the {Υ_i, i ≥ 2} are identically distributed.
The strong law of large numbers holds for one-dependent sequences of identically distributed random variables bounded in L^1. To see this, consider separately the subsequences of random variables with even and odd indices and apply the usual strong law of large numbers to each of them.
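The even/odd splitting argument is easy to see in simulation: take Y_i = U_i + U_{i+1} with U_i i.i.d. Uniform(0, 1). This is a one-dependent, identically distributed sequence with mean 1, the odd- and even-indexed subsequences are each i.i.d., and all three averages converge to 1. An illustrative sketch (the construction is ours, not the paper's Υ_i):

```python
import random

def one_dependent_averages(n, seed=0):
    """Averages of the one-dependent sequence Y_i = U_i + U_{i+1}:
    overall, and along the two i.i.d. subsequences of even and odd
    indices, to which the classical SLLN applies separately."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n + 1)]
    y = [u[i] + u[i + 1] for i in range(n)]
    evens, odds = y[0::2], y[1::2]
    return sum(y) / n, sum(evens) / len(evens), sum(odds) / len(odds)
```

Each subsequence average is a mean of i.i.d. variables, so it converges almost surely; since both subsequences converge to the same constant, so does the full average.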
Hence, for some constants 0 < C^{(7)}_b, C^{(8)}_b < ∞, we have
lim_{i→∞} τ_{N_i}/i = C^{(7)}_b and lim_{i→∞} l_{N_i}/i = C^{(8)}_b, a.s.. (5.14)

Proof of Theorem 1.1. If τ_{N_i} ≤ t < τ_{N_{i+1}}, then by the definition of cut level |X_t| is bounded between l_{N_i} and l_{N_{i+1}}. Hence the limit is expressed through the constants in (5.14), and lim sup_{t→∞} |X_t|/t ≤ K^{(1)}_b, a.s.. Similarly, we can prove that lim inf_{t→∞} |X_t|/t ≥ K^{(1)}_b, a.s.. Now we turn to the proof of the central limit theorem. First we prove that there exists a constant C ≥ 0 such that (5.16) holds, where Normal(0, 0) stands for the Dirac mass at 0. To prove (5.16) we use a theorem from [15]. The reader can find the statement of this theorem in the Appendix, Theorem 6.1 (see also [22]). In order to apply this result we first need to prove that the quantity in (5.17) converges. The quantity in (5.17) can be written in terms of the random variables Y_i. The random variables Y_i are identically distributed, with the exception of Y_1. From the definition of K^{(1)}_b given in (5.15), the Y_i have zero mean. Hence {Y_i, i ≥ 1} is a zero-mean one-dependent process, and we get (5.18). This proves that the limit in (5.17) exists and equals E[Y_2^2] + 2E[Y_3 Y_2]. Now we face two options. If the limit is equal to zero, then using Chebyshev's inequality we get
lim_{m→∞} P(|l_{N_m} − Cτ_{N_m}|/√m > ε) = 0.
If the limit of the quantity in (5.17) is positive, then we can apply Theorem 6.1 and deduce the central limit theorem for {Y_i, i ≥ 1}, yielding (5.16).