THE SCALING LIMIT OF SENILE REINFORCED RANDOM WALK.

We prove that the scaling limit of nearest-neighbour senile reinforced random walk is Brownian motion when the time T spent on the first edge has finite mean. We show that, under suitable conditions, when T has heavy tails the scaling limit is the so-called fractional kinetics process, a random time-change of Brownian motion. The proof uses the standard tools of time-change and invariance principles for additive functionals of Markov chains.


Introduction
The senile reinforced random walk is a toy model for a much more mathematically difficult model known as edge-reinforced random walk (for which many basic questions remain open; see e.g. [15]). It is characterized by a reinforcement function f : ℕ → [−1, ∞), and has the property that only the most recently traversed edge is reinforced. As soon as a new edge is traversed, reinforcement begins on that new edge and the reinforcement of the previous edge is forgotten. Such walks may get stuck on a single (random) edge if the reinforcement is strong enough; otherwise (except for one degenerate case) they are recurrent/transient precisely when the corresponding simple random walk is [9].

Formally, a nearest-neighbour senile reinforced random walk is a sequence {S_n}_{n≥0} of ℤ^d-valued random variables on a probability space (Ω, ℱ, ℙ_f), with corresponding filtration {ℱ_n = σ(S_0, . . . , S_n)}_{n≥0}, defined by:
• For n ∈ ℕ, e_n = (S_{n−1}, S_n) is an ℱ_n-measurable undirected edge and
    m_n = max{k ≥ 1 : e_{n−l+1} = e_n for all 1 ≤ l ≤ k}.   (1.1)
• For n ∈ ℕ and x ∈ ℤ^d such that |x| = 1,
    ℙ_f(S_{n+1} = S_n + x | ℱ_n) = (1 + f(m_n))/(2d + f(m_n)) if (S_n, S_n + x) = e_n, and 1/(2d + f(m_n)) otherwise.   (1.2)

Note that the triple (S_n, e_n, m_n) (equivalently (S_n, S_{n−1}, m_n)) is a Markov chain. Hereafter we suppress the f dependence of the probability ℙ_f in the notation. The diffusion constant is defined as lim_{n→∞} n^{−1}𝔼[|S_n|²] (= 1 for simple random walk) whenever this limit exists. Let T denote the random number of consecutive traversals of the first edge traversed, and p = ℙ(T is odd). Then when 𝔼[T] < ∞, the diffusion constant is given by an explicit formula (1.3) ([9] and [11]), which is not monotone in the reinforcement. Indeed one can prove that (1.3) holds for all f (in the case d = 1 and f(1) = −1 this must be interpreted as "1/0 = ∞"). The reinforcement regime of most interest is that of linear reinforcement f(n) = Cn for some C.
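To make the dynamics concrete, here is a minimal simulation sketch (ours, not from the paper): the re-traversal probability (1 + f(m))/(2d + f(m)) is our reading of the model described above, and all function and variable names are hypothetical.

```python
import random

def senile_rrw(n_steps, d=2, f=lambda m: 0.0, rng=random):
    """Simulate a nearest-neighbour senile reinforced random walk on Z^d.

    Assumed transition rule: the current edge is re-traversed with
    probability (1 + f(m)) / (2d + f(m)), where m is the number of
    consecutive traversals of that edge; each of the other 2d - 1
    incident edges is chosen with probability 1 / (2d + f(m)).
    """
    units = [tuple(int(i == j) * s for j in range(d))
             for i in range(d) for s in (1, -1)]
    origin = tuple(0 for _ in range(d))
    step = rng.choice(units)                 # first step: uniform over 2d neighbours
    path = [origin, tuple(p + s for p, s in zip(origin, step))]
    m = 1                                    # consecutive traversals of current edge
    for _ in range(n_steps - 1):
        if rng.random() < (1.0 + f(m)) / (2 * d + f(m)):
            step = tuple(-s for s in step)   # bounce back along the current edge
            m += 1
        else:
            # pick a new edge: any direction except back over the current edge
            others = [u for u in units if u != tuple(-s for s in step)]
            step = rng.choice(others)
            m = 1
        path.append(tuple(p + s for p, s in zip(path[-1], step)))
    return path
```

With f ≡ 0 this reduces to simple random walk; with f large the walk bounces on its current edge for long stretches.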
In this case, by the second-order mean-value theorem applied to log(1 − x), x < 1, we obtain the tail asymptotics (1.4) for ℙ(T ≥ n), where u_i ∈ (0, (2d − 1)/(2d + Cj)), and γ is a constant arising from the summable infinite series and the approximation of the finite sum by a logarithm. An immediate consequence of (1.4) is that for f(n) = Cn, 𝔼[T] is finite if and only if C < 2d − 1.

A different but related model, in which the current direction (rather than the current edge) is reinforced according to the function f, was studied in [12, 10]. For such a model, T is the number of consecutive steps in the same direction before turning. In [10], the authors show that in all dimensions the scaling limit is Brownian motion when σ² = Var(T) < ∞ and σ² + 1 − 1/d > 0. In the language of this paper, the last condition corresponds to the removal of the special case d = 1 and f(1) = −1. Moreover, when d = 1 and T has heavy tails (in the sense of (2.1) below), they show that the scaling limit is an α-stable process when 1 < α < 2 and a random time change of an α-stable process when 0 < α < 1. See [10] for more details. Davis [4] showed that the scaling limit of once-reinforced random walk in one dimension is not Brownian motion (see [15] for further discussion).
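As a numerical illustration (ours, not from the paper), one can evaluate the product formula for ℙ(T > n) implied by the transition rule sketched above and check the polynomial tail with exponent (2d − 1)/C:

```python
import math

def tail_T(n, C, d):
    """P(T > n) for linear reinforcement f(m) = C*m, computed as
    prod_{j=1}^{n} (1 + C j) / (2d + C j).  This product form assumes the
    re-traversal probability (1 + f(m)) / (2d + f(m)) sketched earlier."""
    logp = 0.0
    for j in range(1, n + 1):
        logp += math.log1p(C * j) - math.log(2 * d + C * j)
    return math.exp(logp)

# P(T > n) * n^{(2d-1)/C} should approach a constant, reflecting
# alpha = (2d - 1)/C; here d = 1, C = 2 gives alpha = 1/2.
d, C = 1, 2.0
ratios = [tail_T(n, C, d) * n ** ((2 * d - 1) / C) for n in (100, 200, 400)]
```

For d = 1, C = 2 the ratios stabilize near 2/√π, consistent with a tail of index 1/2.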
In Section 2 we state and discuss the main result of this paper, which describes the scaling limit of S_n when either 𝔼[T] < ∞ or ℙ(T ≥ n) ∼ n^{−α}L(n) for some α > 0 and L slowly varying at infinity. When ℙ(T < ∞) < 1 the walk has finite range, since it traverses a random (geometric) number of edges before getting stuck on a random edge. To prove the main result, in Section 3 we observe the walk at the times at which it has just traversed a new edge, and describe the observed process as an additive functional of a particular Markov chain. In Section 4 we prove the main result assuming the joint convergence of this time-changed walk and the associated time-change process. In Section 5 we prove the convergence of this joint process.

Main result
The assumptions that will be necessary to state the main theorem of this paper are as follows:
(A1) ℙ(T < ∞) = 1, and either d > 1 or ℙ(T = 1) < 1.
(A2a) Either 𝔼[T] < ∞, or for some α ∈ (0, 1] and L slowly varying at infinity,
    ℙ(T ≥ n) ∼ n^{−α}L(n).   (2.1)
(A2b) If (2.1) holds but 𝔼[T] = ∞, then we also assume that the corresponding tail condition (2.2) holds separately along the odd and even values of T (with indices α_o, L_o and α_e, L_e respectively).

Note that both (2.1) and 𝔼[T] < ∞ may hold when α = 1 (e.g. take L(n) = (log n)^{−2}). By [6, Theorem XIII.6.2], when α < 1 there exists a slowly varying function ℓ(·) > 0 such that g_α(n) := n^{1/α}ℓ(n) may be chosen so that nℙ(T ≥ g_α(n)) → 1 (when 𝔼[T] < ∞ we take g_α(n) = n𝔼[T]). By [3, Theorem 1.5.12], there exists an asymptotic inverse function g_α^{−1}(·) (unique up to asymptotic equivalence) satisfying g_α(g_α^{−1}(n)) ∼ g_α^{−1}(g_α(n)) ∼ n, and by [3, Theorem 1.5.6] we may assume that g_α and g_α^{−1} are monotone nondecreasing.

A subordinator is a real-valued process starting at 0, with stationary, independent increments, such that almost every path is nondecreasing and right-continuous. Let B_d(t) be a standard d-dimensional Brownian motion. For α ≥ 1, let V_α(t) = t, and for α ∈ (0, 1), let V_α be a standard α-stable subordinator (independent of B_d(t)). This is a strictly increasing pure-jump Lévy process whose law is specified by the Laplace transform of its one-dimensional distributions,
    𝔼[e^{−λV_α(t)}] = e^{−tλ^α}.   (2.6)
Define the right-continuous inverse of V_α(t), and (when α < 1) the fractional-kinetics process Z_α(s), by
    V_α^{−1}(s) = inf{t ≥ 0 : V_α(t) > s},   Z_α(s) = B_d(V_α^{−1}(s)).
Since V_α is strictly increasing, both V_α^{−1} and Z_α are (almost surely) continuous. The main result of this paper is the following theorem, in which D(E, ℝ^d) is the set of càdlàg paths from E to ℝ^d. Throughout this paper =⇒ denotes weak convergence.

Theorem 2.1. Suppose that assumptions (A1) and (A2) hold for some α > 0. Then, as n → ∞, the rescaled walk (g_α^{−1}(n))^{−1/2} S_{⌊nt⌋} converges weakly to a constant multiple of Z_α(t), where the convergence is in D([0, 1], ℝ^d) equipped with the uniform topology.
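For intuition, the limit object Z_α can be approximated directly from its definition via heavy-tailed sums. The following sketch (ours; it fixes L ≡ 1, d = 1, and uses Pareto waiting times, which lie in the domain of attraction of the α-stable subordinator) composes a diffusively rescaled simple random walk with the inverse clock:

```python
import bisect
import math
import random

def fractional_kinetics_path(n, alpha, rng):
    """Discrete sketch of Z_alpha(s) = B(V_alpha^{-1}(s)).

    V_alpha is approximated by the normalized partial sums tau_m / n^{1/alpha}
    of i.i.d. Pareto(alpha) waiting times (P(T >= x) = x^{-alpha}, x >= 1);
    B is a simple random walk rescaled by sqrt(n)."""
    T = [rng.random() ** (-1.0 / alpha) for _ in range(n)]   # heavy-tailed waits
    tau = [0.0]
    for t in T:
        tau.append(tau[-1] + t)
    g = n ** (1.0 / alpha)                                   # g_alpha(n) with l = 1
    walk = [0]
    for _ in range(n):
        walk.append(walk[-1] + rng.choice((-1, 1)))

    def Z(s):
        # V_alpha^{-1}(s): index of the last partial sum not exceeding s * g
        m = bisect.bisect_right(tau, s * g) - 1
        return walk[min(m, n)] / math.sqrt(n)

    return Z
```

Sampling Z on a grid exhibits the characteristic flat stretches of the fractional kinetics process: the walk is frozen while the clock V_α jumps.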

Discussion
The limiting object in Theorem 2.1 is the scaling limit of a simple random walk jumping at random times (e.g. see [13]) that are independent of the position and history of the walk. In [1] the same scaling limit is obtained for a class of (continuous-time) trap models with d ≥ 2, where a random jump rate or waiting time is chosen initially at each site and remains fixed thereafter. In that work, when d = 1, the mutual dependence of the time spent at a particular site on successive returns remains in the scaling limit, where the time-change/clock process depends on the (local time of the) Brownian motion itself. The independence upon returning to an edge is the feature which makes our model considerably easier to handle. For the senile reinforced random walk, the direction of the steps of the walk is dependent on the clock, and we need to prove that the dependence is sufficiently weak that it disappears in the limit.

While the slowly varying functions in g_α and g_α^{−1} are not given explicitly, in many cases of interest one can use [6, Theorem XIII.6.2] and [3, Section 1.5.7] to construct them explicitly. For example, for L(n) = κ(log n)^β with β ≥ −1 (including the case α = 1), g_α and g_α^{−1} can be written explicitly in terms of powers and logarithms.

Assumption (A1) is simply to avoid the trivial cases where the walk gets stuck on a single edge (i.e. when (1 + f(n))^{−1} is summable [9]) or is a self-avoiding walk in one dimension. For linear reinforcement f(n) = Cn, (1.4) shows that assumption (A2) holds with α = (2d − 1)/C. It may be of interest to consider the scaling limit when f(n) grows like nℓ(n), where lim inf_{n→∞} ℓ(n) = ∞ but Σ_n (1 + f(n))^{−1} = ∞, so that the walk does not get stuck.

The condition (2.2) when α = 1 ensures that one can apply a weak law of large numbers. The condition holds, for example, when L(n) = (log n)^k for any k ≥ −1. For the α < 1 case, the condition (2.2) holds (with α_o = α_e and L_o = L_e) whenever there exists n_0 such that f(n) ≥ f(n − 1) − (2d − 1) for all n ≥ n_0 (so in particular when f is non-decreasing). To see this, observe that for all n ≥ n_0 we have ℙ(T = n) ≤ ℙ(T = n − 1), so the odd and even tails of T are comparable.
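Although closed forms exist in such cases, g_α^{−1} can also be evaluated numerically, since g_α may be taken monotone. A small illustrative routine (ours, with a hypothetical stand-in for g_α):

```python
import math

def inverse(g, y, hi=1.0):
    """Invert a nondecreasing function g by doubling then bisection; a
    numerical stand-in for the asymptotic inverse g_alpha^{-1}."""
    while g(hi) < y:
        hi *= 2.0                      # bracket y from above
    lo = 0.0
    for _ in range(80):                # bisect to high precision
        mid = (lo + hi) / 2.0
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return hi

# A monotone stand-in for g_alpha with alpha = 1/2 and a logarithmic factor.
g = lambda x: x ** 2 * max(math.log(x), 1.0) if x > 0 else 0.0
```

Then `inverse(g, g(x))` recovers x, mirroring g_α(g_α^{−1}(n)) ∼ n.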

Invariance principle for the time-changed walk
In this section we prove an invariance principle for any senile reinforced random walk (satisfying (A1)) observed at the stopping times τ_n defined by τ_0 = 1 and, for n ≥ 1,
    τ_n = min{m > τ_{n−1} : e_m ≠ e_{τ_{n−1}}}.
It is easy to see that τ_n = 1 + Σ_{i=1}^n T_i for each n ≥ 1, where the T_i, i ≥ 1, are independent and identically distributed random variables (with the same distribution as T), corresponding to the numbers of consecutive traversals of the successive edges traversed by the walk.

Proposition 3.1. Suppose that assumption (A1) holds. Then, as n → ∞, n^{−1/2}S_{τ_{⌊nt⌋}} =⇒ σB_d(t) in D([0, 1], ℝ^d), for some σ ∈ (0, ∞).
The process S_{τ_n} is a simpler one than S_n, and one may use many different methods to prove Proposition 3.1 (see for example the martingale approach of [11]). We give a proof based on describing S_{τ_n} as an additive functional of a Markov chain. This is not necessarily the simplest representation, but it is the most natural to the author.

Let 𝒮 denote the collection of pairs (u, v) such that v is one of the unit vectors u_i ∈ ℤ^d, for i ∈ {±1, ±2, . . . , ±d} (labelled so that u_{−i} = −u_i), and u is either 0 ∈ ℤ^d or one of the unit vectors u_i ≠ −v. The cardinality of 𝒮 is then |𝒮| = 2d + 2d(2d − 1) = (2d)². Given a senile reinforced random walk S_n with parameter p = ℙ(T odd) ∈ (0, 1], we define an irreducible, aperiodic Markov chain X_n = (X_n^{[1]}, X_n^{[2]}) with natural filtration σ(X_1, . . . , X_n) and finite state space 𝒮, as follows. For n ≥ 1, let
    X_n = (S_{τ_n −1} − S_{τ_{n−1}}, S_{τ_n} − S_{τ_n −1}),   and   Y_n = X_n^{[1]} + X_n^{[2]}.
It follows immediately that S_{τ_n} = S_1 + Σ_{m=1}^n Y_m and that
    ℙ(X_1 = (0, u_j)) = p/(2d),   ℙ(X_1 = (u_i, u_j)) = (1 − p)/(2d(2d − 1)),   for each i, j (j ≠ −i).   (3.2)
Now T_n is independent of X_1, . . . , X_{n−1}, and conditionally on T_n being odd (resp. even), S_{τ_n} − S_{τ_{n−1}} (resp. S_{τ_n} − S_{τ_n −1}) is uniformly distributed over the 2d − 1 unit vectors in ℤ^d other than −X_{n−1}^{[2]} (resp. other than X_{n−1}^{[2]}). It is then an easy exercise to verify that {X_n}_{n≥1} is a finite, irreducible and aperiodic Markov chain with initial distribution (3.2) and transition probabilities given by
    ℙ(X_n = (0, u_j) | X_{n−1} = (u, u_i)) = p/(2d − 1) for j ≠ −i,
    ℙ(X_n = (−u_i, u_j) | X_{n−1} = (u, u_i)) = (1 − p)/(2d − 1) for j ≠ i.
By symmetry, the first 2d entries of the unique stationary distribution π ∈ M_1(𝒮) are all equal (say π_a) and the remaining 2d(2d − 1) entries are all equal (say π_b), and it is easy to check that
    π_a = p/(2d),   π_b = (1 − p)/(2d(2d − 1)).
As an irreducible, aperiodic, finite-state Markov chain, {X_n}_{n≥1} has exponentially fast, strong mixing, i.e. there exist constants c > 0 and t < 1 such that the strong mixing coefficients satisfy α(k) ≤ c t^k for every k ≥ 1. Since Y_n is measurable with respect to X_n, the sequence Y_n also has exponentially fast, strong mixing. To verify Proposition 3.1, we use the following multidimensional result, which follows easily from [8, Corollary 1] using the Cramér–Wold device.
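The stationary distribution can be checked numerically. The sketch below (ours) builds the (2d)²-state chain from the odd/even transition rule just described and power-iterates it, assuming the values π_a = p/(2d) and π_b = (1 − p)/(2d(2d − 1)):

```python
def stationary_distribution(d=2, p=0.7, iters=200):
    """Power-iterate the transition matrix of the chain X_n = (X1, X2):
    with probability p (T_n odd) the next state is (0, w) with w uniform
    over the 2d - 1 unit vectors != -v, and with probability 1 - p
    (T_n even) it is (-v, w) with w uniform over those != v."""
    units = [(i, s) for i in range(d) for s in (1, -1)]     # encode u_{+-i}
    neg = lambda w: (w[0], -w[1])
    states = [((0,), v) for v in units] + \
             [(u, v) for u in units for v in units if u != neg(v)]
    idx = {s: k for k, s in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]               # row-stochastic
    for s in states:
        v = s[1]
        for w in units:
            if w != neg(v):                                 # T odd
                P[idx[s]][idx[((0,), w)]] += p / (2 * d - 1)
            if w != v:                                      # T even
                P[idx[s]][idx[(neg(v), w)]] += (1 - p) / (2 * d - 1)
    pi = [1.0 / len(states)] * len(states)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(states)))
              for j in range(len(states))]
    return states, pi
```

For d = 2, p = 0.7 the iteration settles on p/(2d) = 0.175 for the 2d states with first coordinate 0, and (1 − p)/(2d(2d − 1)) = 0.025 for the rest.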

Proof of Proposition 3.1
Since S_{τ_n} is, up to the initial step, the sum Σ_{m=1}^n Y_m with |Y_m| ≤ 2, and the sequence {Y_n}_{n≥1} has exponentially fast strong mixing, Proposition 3.1 will follow from Corollary 3.2 provided we verify the convergence of the limiting covariances in (3.7). By the Markov property for X_n, the covariances 𝔼[Y_m · Y_n] for n > m ≥ 1 can be computed explicitly from the transition probabilities; letting r denote the geometric rate appearing in (3.10), they decay like r^{n−m} for m ≥ 2. Combining these results gives (3.11). Since |r| < 1, the second sum over l in (3.11) is bounded by a constant, uniformly in n, so that (3.11) is equal to (3.12). Dividing by n and taking the limit as n → ∞ verifies (3.7) and thus completes the proof of Proposition 3.1.

Proof of Theorem 2.1
Theorem 2.1 is a consequence of the convergence of the joint distribution of the rescaled stopping-time process and the random walk at those stopping times, as in the following proposition.

Proposition 4.1. Suppose that assumptions (A1) and (A2) hold for some α > 0. Then, as n → ∞,
    (n^{−1/2}S_{τ_{⌊nt⌋}}, g_α(n)^{−1}τ_{⌊nt⌋}) =⇒ (σB_d(t), V_α(t)),   (4.1)
jointly, where B_d and V_α are independent.

Proof of Theorem 2.1 assuming Proposition 4.1. Since ⌊g_α^{−1}(n)⌋ is a sequence of positive integers such that ⌊g_α^{−1}(n)⌋ → ∞ and n/g_α(⌊g_α^{−1}(n)⌋) → 1 as n → ∞, it follows from (4.1) that (4.2) holds as n → ∞. Thus we obtain (4.3). Together with (4.4) and the fact that g_α^{−1}(n)/⌊g_α^{−1}(n)⌋ → 1, this proves Theorem 2.1.

Proof of Proposition 4.1
The proof of Proposition 4.1 is broken into two parts. The first part is the observation that the marginal processes converge, i.e. that the time-changed walk and the time-change converge to B d (t) and V α (t) respectively, while the second is to show that these two processes are asymptotically independent.

Convergence of the time-changed walk and the time-change.
Lemma 5.1. Suppose that assumptions (A1) and (A2) hold for some α > 0. Then, as n → ∞, n^{−1/2}S_{τ_{⌊nt⌋}} =⇒ σB_d(t) and g_α(n)^{−1}τ_{⌊nt⌋} =⇒ V_α(t).

Proof. The first claim is the conclusion of Proposition 3.1, so we need only prove the second claim.
Since 1/g_α(n) → 0, it is enough to show convergence of τ*_{⌊nt⌋} := (τ_{⌊nt⌋} − 1)/g_α(n). For processes with independent and identically distributed increments, a standard result of Skorokhod essentially extends the convergence of the one-dimensional distributions to a functional central limit theorem. When 𝔼[T] exists, the convergence of the one-dimensional marginals τ*_{⌊nt⌋}/(n𝔼[T]) =⇒ t is immediate from the law of large numbers. The case α < 1 is well known; see for example [6, Section XIII.6] and [16, Section 4.5.3]. The case where α = 1 but ℙ(T ≥ n) is not summable (so that 𝔼[T] = ∞) is perhaps less well known. Here the result is immediate from the following lemma (Lemma 5.2), in which the relevant condition is that nℙ(|X| > g_n) → 0.
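The two regimes can be seen in a quick simulation (ours, not from the paper): T geometric with mean 2 for the finite-mean case, Pareto with α = 1/2 for the heavy-tailed case.

```python
import math
import random

def tau(n, sample_T, rng):
    """tau_n = 1 + T_1 + ... + T_n for i.i.d. traversal counts T_i."""
    return 1 + sum(sample_T(rng) for _ in range(n))

rng = random.Random(42)

# Finite-mean case: T geometric on {1, 2, ...} with mean 2, so the law of
# large numbers gives tau_{|nt|} / (n E[T]) close to t.
geom = lambda r: 1 + int(math.log(r.random()) / math.log(0.5))
n = 100_000
lln = [tau(int(n * t), geom, rng) / (2 * n) for t in (0.25, 0.5, 1.0)]

# Heavy-tailed case (alpha = 1/2, g_alpha(n) = n^2 with l = 1):
# tau_n / g_alpha(n) stays of order one, while tau_n / n diverges.
pareto = lambda r: r.random() ** (-2.0)      # P(T >= x) = x^{-1/2}, x >= 1
m = 2_000
scaled = tau(m, pareto, rng) / m ** 2
```

In repeated runs `scaled` fluctuates wildly (it approximates a positive 1/2-stable variable) while `lln` concentrates, mirroring the dichotomy in Lemma 5.1.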

Proof of Lemma 5.2. Note that
Now by assumption (A2b),

Asymptotic independence
Tightness of the joint process in Proposition 4.1 is an easy consequence of the tightness of the marginal processes (Lemma 5.1), so we need only prove convergence of the finite-dimensional distributions (f.d.d.s). For α ≥ 1 this is simple and is left as an exercise. To complete the proof of Proposition 4.1, it remains to prove convergence of the f.d.d.s when α < 1 (hence p < 1).

Let 𝒞_1 and 𝒞_2 be convergence-determining classes of bounded, ℂ-valued functions on ℝ^d and ℝ_+ respectively, each closed under conjugation and containing a non-zero constant function. Then {g(x_1, x_2) := g_1(x_1)g_2(x_2) : g_i ∈ 𝒞_i} is a convergence-determining class for ℝ^d × ℝ_+. This follows as in [5, Proposition 3.4.6], where the closure under conjugation allows us to extend the proof to complex-valued functions. Therefore, to prove convergence of the finite-dimensional distributions in (4.1) it is enough to show that (5.7) holds for every 0 ≤ t_1 < · · · < t_r ≤ 1, k_1, . . . , k_r ∈ ℝ^d and η_1, . . . , η_r ≥ 0. From (2.6) and the fact that V_α has independent increments, the rightmost expectation in (5.7) can be written as exp{−Σ_{l=1}^r (η*_l)^α (t_l − t_{l−1})}, where η*_l = Σ_{j=l}^r η_j.

Let 𝒩_n = {i ∈ {1, . . . , n} : T_i is odd}, let t_0 = 0, and let 𝒩_{⌊n t⌋} = (𝒩_{⌊nt_1⌋} \ 𝒩_{⌊nt_0⌋}, . . . , 𝒩_{⌊nt_r⌋} \ 𝒩_{⌊nt_{r−1}⌋}). For fixed n and t, we write A = (A^{(1)}, . . . , A^{(r)}) to denote an element of the range of the random variable 𝒩_{⌊n t⌋}, where A^{(i)} ⊆ {⌊nt_{i−1}⌋ + 1, . . . , ⌊nt_i⌋} for each i ∈ {1, . . . , r}. Observe that |𝒩^{(l)}_{⌊n t⌋}| has a binomial distribution with parameters ⌊nt_l⌋ − ⌊nt_{l−1}⌋ and p. Then, for ε ∈ (0, ½) and B_n(t) := {A : ||A^{(l)}| − ⌊nt_l p⌋ + ⌊nt_{l−1} p⌋| ≤ n^{1−ε} for each l}, conditioning on 𝒩_{⌊n t⌋} shows that the left-hand side of (5.7) is equal to (5.8), where we have used the fact that S_{τ_n} is conditionally independent of the collection {T_i}_{i≥1} given I_{{T_i even}}, i = 1, . . . , n, to obtain the last equality.
Writing Σ_{i=⌊nt_{l−1}⌋+1}^{⌊nt_l⌋} T_i for the increments of the time change and using the mutual independence of the T_i, i ≥ 1, the last line of (5.8) is equal to a term o(1) plus a sum over A ∈ B_n(t) of the corresponding conditional expectations weighted by ℙ(𝒩_{⌊n t⌋} = A).
Similarly define g^e_{α_e}(n) = n^{1/α_e} ℓ_e(n). Observe that (5.11) holds, where n_l := ⌊nt_l p⌋ − ⌊nt_{l−1} p⌋ and n*_l := ||A^{(l)}| − n_l| ≤ n^{1−ε} since A ∈ B_n(t). By the definition of g_α and standard results on regular variation, we have that g_α(n_l)/g_α(n) → (p(t_l − t_{l−1}))^{1/α} and g_α(n*_l)/g_α(n) → 0. Since α = α_o ∧ α_e ≤ α_o, the term on the right of (5.11) converges in probability to 0. Thus, as in the second claim of Lemma 5.1, we obtain the required convergence of the conditioned time-change increments, which completes the proof of Proposition 4.1.
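The restriction to B_n(t) above is justified by binomial concentration: |𝒩^{(l)}| deviates from its mean by more than n^{1−ε} with vanishing probability. A quick empirical check (ours, with ε = 1/4 and hypothetical parameter values):

```python
import random

def odd_count_deviation(n, p, trials, rng):
    """Fraction of trials in which a Binomial(n, p) count (the number of
    odd T_i among n i.i.d. draws) deviates from n*p by more than n^{3/4},
    i.e. the complement of the event {A in B_n(t)} with epsilon = 1/4."""
    bad = 0
    for _ in range(trials):
        k = sum(1 for _ in range(n) if rng.random() < p)   # |N_n| ~ Bin(n, p)
        if abs(k - n * p) > n ** 0.75:
            bad += 1
    return bad / trials
```

Since n^{3/4} is many standard deviations (√(np(1−p)) ≈ n^{1/2}) for large n, the empirical deviation rate is essentially zero.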