Random walks in doubly random scenery

We provide a random walk in random scenery representation of a new class of stable self-similar processes with stationary increments introduced recently by Jung, Owada and Samorodnitsky. In the functional limit theorem they provided, only a single instance of this class arose as a limit. We construct a model in which a significant portion of processes in this new class is obtained as a limit.


Random walks in random scenery
Our model is set in the framework of random walks in random scenery. Such models were first considered in [4], where a number of limit theorems concerning their scaling limits were proved. The more specific setting in which we will be working was presented in [1]. The model considered therein can be briefly sketched as follows. Assume that there is a user moving randomly on a network (in this paper the network is simply Z) who earns random rewards (governed by the random scenery) attached to the points of the network that they visit. The quantity of interest is then the total amount of rewards collected.
To be more precise, assume that the movement of the user is a random walk on Z which, after suitable scaling, converges to the β-stable Lévy process with β ∈ (1, 2]. Furthermore, let the random scenery be given by i.i.d. random variables $(\xi_j)_{j\in\mathbb{Z}}$ which belong to the normal domain of attraction of a symmetric strictly stable distribution with index of stability α ∈ (0, 2]. Then the random walk in random scenery is given by
$$D_n = \sum_{k=1}^{n} \xi_{S_k},$$
where $S_k = \sum_{j=1}^{k} X_j$ is the random walk determining the movement of the user. If we consider a large number of independent random walkers moving in independent random sceneries, then the scaling limit in the corresponding functional limit theorem (see Theorem 1.2 in [1]) leads to the process with the integral representation
$$\left( \int_{\mathbb{R}\times\Omega'} L_t(x, \omega')\, M_\alpha(dx, d\omega') \right)_{t\ge 0}, \tag{1.2}$$
where $(L_t(x, \omega'))_{t\ge0, x\in\mathbb{R}}$ is a jointly continuous version of the local time of the symmetric β-stable Lévy motion (defined on some probability space $(\Omega', \mathcal{F}', P')$) and $M_\alpha$ is a symmetric α-stable random measure on $\mathbb{R}\times\Omega'$ with control measure $\lambda_1 \otimes P'$, itself defined on some other probability space $(\Omega, \mathcal{F}, P)$. The process (1.2) was also obtained in [6], where it arose as a limit of partial sums of a stationary and infinitely divisible process.
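The mechanism above is straightforward to simulate. The sketch below is ours (it is not code from [1]) and uses illustrative choices, a simple ±1 walk (β = 2) and standard Gaussian scenery (α = 2); it also checks the identity $\sum_{k\le n}\xi_{S_k} = \sum_x \xi_x N_n(x)$, where $N_n(x)$ counts the visits to x up to time n.

```python
import random
from collections import Counter

def rwrs(n, seed=0):
    """Random walk in random scenery on Z: returns the path (S_k),
    the realized scenery (xi_x) and the reward process D_k, k <= n."""
    rng = random.Random(seed)
    scenery = {}                      # xi_x, generated lazily per site

    def xi(x):
        if x not in scenery:
            scenery[x] = rng.gauss(0.0, 1.0)
        return scenery[x]

    s, d = 0, 0.0
    path, rewards = [], []
    for _ in range(n):
        s += rng.choice((-1, 1))      # one +-1 step of the walk
        path.append(s)
        d += xi(s)                    # collect the reward at the current site
        rewards.append(d)
    return path, scenery, rewards

path, scenery, rewards = rwrs(1000)
# D_n computed through the visit counts N_n(x) must agree with the running sum:
visits = Counter(path)
alt = sum(scenery[x] * visits[x] for x in visits)
```

The lazy generation of the scenery mirrors the model: the reward at a site is fixed once and for all on the first visit.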

The limit process
Very recently, Jung, Owada and Samorodnitsky in their paper [3], which extends the model considered in [6], introduced a new class of self-similar stable processes whose members have an integral representation (1.3), in which $(S_\gamma(t, \omega'))_{t\ge0}$ is a symmetric γ-stable Lévy motion and $(M_{\tilde\beta}(t, \omega'))_{t\ge0}$ is an independent $\tilde\beta$-Mittag-Leffler process (see Section 3 in [6] for more on the latter). Both of these processes are defined on a probability space $(\Omega', \mathcal{F}', P')$. Here we use $\tilde\beta$ instead of β so as not to confuse it with the notation we have adopted for this paper. Similarly as in the proof of (3.10) in [6], we can show that for $\tilde\beta \in (0, \frac{1}{2})$ the representation (1.4) holds, where $c_{\tilde\beta}$ is a constant depending only on $\tilde\beta$, $(L_t(x))_{t\ge0}$ is the local time of a symmetric β-stable Lévy motion independent of the process $S_\gamma$ (both defined on $(\Omega', \mathcal{F}', P')$), $\beta = (1 - \tilde\beta)^{-1}$ and $Z_\alpha$ is a symmetric α-stable random measure on $\Omega' \times \mathbb{R}$ with control measure $P' \otimes \lambda_1$.
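Since β is used for different objects in [3] and here, it may help to record how the parameter ranges match up under our reading of the relation $\beta = (1-\tilde\beta)^{-1}$ (a consistency check of ours, not a statement from [3]):

```latex
\beta = (1-\tilde\beta)^{-1}:\qquad
\tilde\beta \in \bigl(0, \tfrac12\bigr)
\;\Longleftrightarrow\;
\beta = \frac{1}{1-\tilde\beta} \in (1, 2),
\qquad\text{e.g. } \tilde\beta = \tfrac13 \;\mapsto\; \beta = \tfrac32 .
```

Thus the Mittag-Leffler index $\tilde\beta$ of [3] and the stability index β of the walk in this paper parametrize the same family.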
The limit process obtained in [3] corresponds to γ = 2 in (1.3). It is our purpose to provide a model in which the scaling limit is given by processes of the form (1.4) for any allowable choice of the parameters α, β and γ.
2 Description of the model and the result

Imagine that each x ∈ Z is associated with a reward (or punishment) given by $\xi_x$, which takes integer values. Now imagine a random walker moving on Z independently of the rewards and starting at 0. Before the movement the walker generates a strategy $Y_1, Y_2, \dots$ of i.i.d. random variables which are independent of the $\xi_x$'s and of the movement. Any time the walker visits a point x, they get a reward (or receive a punishment) given by $Y_k \times \xi_x$, where k is the number of times that the walker has stayed at x so far (including this time). Thus the amount by which a potential reward is multiplied depends only on the number of the visits. The total reward/punishment at time n in this scheme is given by
$$D_n = \sum_{x\in\mathbb{Z}} \xi_x \sum_{j=1}^{N_n(x)} Y_j,$$
where $N_n(x) = \#\{1 \le k \le n : S_k = x\}$ denotes the number of visits to the point x ∈ Z up to time n ∈ N and $S_k = X_1 + \dots + X_k$ is the random walk performed.
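A minimal simulation sketch of this doubly random mechanism (ours; the Gaussian choices for ξ and Y are only illustrative and are not prescribed by the model beyond the stability assumptions below) confirms that paying $Y_j\,\xi_x$ on the j-th visit to x yields the closed form $\sum_x \xi_x \sum_{j=1}^{N_n(x)} Y_j$.

```python
import random
from collections import Counter, defaultdict

def doubly_random_reward(n, seed=1):
    """Total reward D_n when the j-th visit to site x pays Y_j * xi_x.
    Returns both the step-by-step total and the closed-form value."""
    rng = random.Random(seed)
    xi = defaultdict(lambda: rng.gauss(0.0, 1.0))   # scenery xi_x, lazy
    Y = [rng.gauss(0.0, 1.0) for _ in range(n)]     # strategy Y_1, ..., Y_n
    s, total = 0, 0.0
    count = Counter()                               # visits so far per site
    for _ in range(n):
        s += rng.choice((-1, 1))
        count[s] += 1                               # this is the j-th visit to s
        total += Y[count[s] - 1] * xi[s]            # pay Y_j * xi_s
    # closed form: D_n = sum_x xi_x * sum_{j <= N_n(x)} Y_j
    closed = sum(xi[x] * sum(Y[:count[x]]) for x in count)
    return total, closed

total, closed = doubly_random_reward(500)
```

At most n visits can occur at any site, so the strategy list of length n always suffices.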
The specific context in which our model is investigated is an extension of the one presented in Section 1.2 of [1] and goes as follows. Let $(S_n)_{n\ge0}$ be a random walk on Z such that $\frac{1}{a_n} S_n \Rightarrow Z_\beta$, where $Z_\beta$ has a symmetric β-stable distribution with 1 < β < 2. In particular, the random walk is recurrent. In the most general setting $(a_n)_{n\ge1}$ is regularly varying at infinity with exponent 1/β. We will assume more, namely that $(S_n)$ is in the normal domain of attraction of $Z_\beta$, and take $a_n = n^{1/\beta}$. Let $\xi = (\xi_x)_{x\in\mathbb{Z}}$ be a family of i.i.d. random variables such that
$$\frac{1}{n^{1/\alpha}} \sum_{x=1}^{n} \xi_x \Rightarrow Z_\alpha,$$
where $Z_\alpha$ is a symmetric α-stable random variable with α ∈ (0, 2). What is different from the model considered in [1] is that we introduce more randomness to the model through an i.i.d. sequence $(Y_n)_{n\ge1}$ such that
$$\frac{1}{n^{1/\gamma}} \sum_{j=1}^{n} Y_j \Rightarrow Z_\gamma,$$
where $Z_\gamma$ has a symmetric γ-stable distribution with α < γ ≤ 2. In the original formulation of [1] all the $Y_n$'s are equal to one. For technical reasons we will also assume that
$$\sup_{n\ge1} E\left|\frac{Y_1 + \dots + Y_n}{n^{1/\gamma}}\right|^{\alpha\kappa} < \infty \tag{2.6}$$
for some κ > 1. The above condition can be viewed as a restriction on the distribution of $Y_1$. A sufficient condition for (2.6) to hold is given in the lemma below. We denote the characteristic function of $Y_1$ by φ.
for some r > 0 and there is a finite constant K such that $|\varphi'(\theta)| \le K|\theta|^{\gamma-1}$ for θ in some neighbourhood of zero.
The proof of Lemma 2.1 is given in the Appendix.
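For numerical experiments with this model one needs samples from symmetric stable laws. The classical Chambers-Mallows-Stuck method (a standard algorithm, not discussed in this paper), specialised to the symmetric case, can be sketched as follows.

```python
import math
import random

def sym_stable(alpha, rng):
    """One draw from a symmetric alpha-stable law, 0 < alpha <= 2,
    via the Chambers-Mallows-Stuck method with skewness zero."""
    v = rng.uniform(-math.pi / 2, math.pi / 2)   # uniform angle
    w = rng.expovariate(1.0)                     # standard exponential
    if alpha == 1.0:
        return math.tan(v)                       # Cauchy case
    return (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))

rng = random.Random(7)
sample = [sym_stable(1.5, rng) for _ in range(100_000)]
sample.sort()
median = sample[50_000]   # symmetric law, so the sample median should be near 0
```

Such draws can serve as the ξ's (index α) or the Y's (index γ) in simulations of the model.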
The basis for our study is the behaviour of the process (2.8). We also define the rescaled version (2.9) of (2.8), with normalization $r_n = n^{1/\gamma + 1/(\alpha\beta) - 1/(\gamma\beta)}$. We are interested in the scaling limit in which we consider the aggregate behaviour of a large number of independent walkers with independent strategies, each having an independent environment from which they collect the rewards. More precisely, consider an i.i.d. sequence of processes $(D^{(i)})_{i\ge1}$ and define $(G_n(t))_{t\ge0}$ by (2.10), where $c_n$ is any sequence of positive integers converging to +∞. Now we may state our result concerning the scaling limit of the above process.

Theorem 2.2. For any 0 < α < γ ≤ 2 the process $(G_n(t))_{t\ge0}$ defined by (2.10) converges (up to a multiplicative constant) as n → ∞, in the sense of finite-dimensional distributions, to the process given by (1.4).
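The exponent in the normalization $r_n = n^{1/\gamma + 1/(\alpha\beta) - 1/(\gamma\beta)}$ can be motivated by a back-of-the-envelope computation; this heuristic is ours and is not carried out in this form in the text.

```latex
% By time n the recurrent beta-stable walk has visited about n^{1/\beta}
% sites, each roughly N_n(x) \approx n^{1-1/\beta} times.  The strategy
% sums over the visits to a fixed site therefore scale like
\sum_{j=1}^{N_n(x)} Y_j \;\approx\; N_n(x)^{1/\gamma}
  \;\approx\; n^{(1-1/\beta)/\gamma},
% and, the scenery being alpha-stable, summing over the visited range gives
D_n \;\approx\; \Bigl( n^{1/\beta}\, n^{\alpha(1-1/\beta)/\gamma} \Bigr)^{1/\alpha}
  \;=\; n^{1/(\alpha\beta) + 1/\gamma - 1/(\gamma\beta)} \;=\; r_n .
```

Setting all $Y_j \equiv 1$ (formally $\gamma$-terms dropped) recovers the normalization of the model of [1].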

Proof of Theorem 2.2
For clarity we divide the proof of Theorem 2.2 into a number of lemmas. Essentially, we prove the convergence of finite-dimensional distributions by showing the convergence of the appropriate characteristic functions. We first state the lemmas and then proceed to their proofs. In order to simplify the notation we introduce, for n ∈ N and x ∈ Z, the quantities $U_n(x)$. Since we are going to work extensively with the characteristic function of $\xi_0$, we also introduce the function λ. We want to show the convergence of the characteristic function of $\sum_{j=1}^{k} \theta_j G_n(t_j)$ to the corresponding characteristic function of the process given by (1.4).
The first lemma in this section removes the first layer of randomness in our scheme and expresses the characteristic function in question solely in terms of the random walk and the sequence $(Y_k)_{k\ge1}$.
The second lemma says that in the limit only the asymptotic behaviour of λ near zero matters.
The third lemma is the backbone of the whole proof. It is evident that, given the lemmas above, Theorem 2.2 follows immediately (see the proof of Theorem 1.2 in [1]). First, however, we will show that the random variables $B_n$, n ∈ N, introduced in the formulation of Lemma 3.3, are uniformly integrable. We do this by showing that $E|B_n|^\kappa$ is bounded uniformly in n ∈ N for some κ > 1.
Then there is a constant C, independent of n ∈ N, such that $E|B_n|^\kappa \le C$ for all κ > 1 sufficiently close to 1. Notice that by Jensen's inequality, for any κ > 1, we have (3.12). Since the sequence $(Y_n)_{n\in\mathbb{N}}$ and the random walk are independent, by conditioning on the random walk we obtain a bound in terms of $R_{[nt]}$, where $R_m = \sum_{x\in\mathbb{Z}} \mathbf{1}_{\{N_m(x) \neq 0\}}$ for m ∈ N. We now claim that (3.14) is bounded uniformly in n ∈ N for all κ > 1 sufficiently close to 1. Using Hölder's inequality with $p = \frac{\gamma}{\alpha\kappa}$ and $q = \frac{\gamma}{\gamma - \alpha\kappa}$, we see that (3.14) is no bigger than (3.15), where the inequality in (3.15) follows from Hölder's inequality as long as $\kappa \le \frac{\gamma}{\gamma - \alpha}$. By Lemma 1 in [4], $E(R_{[nt]}) \le c_1 [nt]^{1/\beta}$ for some constant $c_1$ depending only on β. We thus conclude that (3.14) can be bounded by (3.16), which is bounded uniformly in n ∈ N.
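The bound $E(R_{[nt]}) \le c_1 [nt]^{1/\beta}$ from [4] can be sanity-checked numerically. For the simple walk (β = 2) the classical asymptotics give $E(R_m) \sim \sqrt{8m/\pi} \approx 1.6\sqrt{m}$; the simulation below is an illustration of ours, not part of the proof.

```python
import math
import random

def mean_range(n, trials, seed=3):
    """Average number of distinct sites visited by a simple +-1 walk
    in n steps, over the given number of independent trials."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s, seen = 0, {0}
        for _ in range(n):
            s += rng.choice((-1, 1))
            seen.add(s)
        total += len(seen)          # R_n for this trial
    return total / trials

m = mean_range(10_000, 50)
ratio = m / math.sqrt(10_000)       # should sit near sqrt(8/pi), about 1.6
```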
The proof of Claim 3.1 is the same as the proof of Lemma 3.4 in [1]; we therefore skip it and proceed directly to the proof of Claim 3.2.
Proof of Claim 3.2. The proof presented here is very similar to the proof of Lemma 3.5 in [1]. Recall that, by assumption, (3.17) holds. It is easy to see the following. Define $g(v) = |v|^{-\alpha} |\lambda(v) - \tilde\lambda(v)|$ for v ≠ 0 and g(0) = 0. Then g is bounded and continuous. With this notation, (3.19) equals (3.20). Fix any ε > 0 and choose δ > 0 such that |z| < δ implies |g(z)| < ε. Then (3.20) can be bounded by (3.21), which in turn is bounded by (3.22). Since, by Lemma 3.4, the sequence of random variables $(\sum_{x\in\mathbb{Z}} |U_n(x)|^\alpha)_{n\in\mathbb{N}}$ is uniformly integrable, the first summand in (3.22) is bounded by ε times a constant independent of n ∈ N, and the second converges to 0 as n → ∞. The choice of ε was arbitrary and hence the proof is finished.
Proof of Claim 3.3. First we are going to show that (3.7) holds. Without loss of generality we may assume that 0 ≤ t_1 ≤ … ≤ t_k. For convenience we also put t_0 = 0. We can rewrite $E(B_n)$ in terms of $Z^{(1)}(\cdot), \dots, Z^{(k)}(\cdot)$, which are i.i.d. copies of the sequence of normalized partial sums of $(Y_j)$ (we put $Z^{(j)}(0) = 0$ for convenience) and are independent of the random walk $(S_n)$. By the Skorokhod representation theorem we may assume that, for j = 1, …, k, $Z^{(j)}(m)$ converges almost surely to $Z^{(j)}$, which has an SγS distribution, and that the random variables $Z^{(j)}$ are independent. Define $C_n$ accordingly. We are going to show that $E(B_n) - E(C_n)$ converges to 0 as n → ∞. For that we will need two inequalities. Assume first that α > 1, and introduce the quantities A and B.

By the triangle inequality
Notice that by (2.6) the relevant sequence of random variables is uniformly integrable and hence, by conditioning on the random walk and using the triangle inequality once again (now for the α-norm of a random variable), we conclude that the error is controlled by a bounded function $f : \mathbb{N}\cup\{0\} \to \mathbb{R}_+$ with $\lim_{m\to\infty} f(m) = 0$. Using (2.6) again, one can easily see that both $E A^\alpha$ and $E B^\alpha$ can be bounded by a quantity involving some finite constant $c_1$ independent of n. Thus, to show that $|E(B_n) - E(C_n)|$ goes to zero as n → ∞, it remains to prove that, for any j = 1, …, k, (3.34) converges to 0 as n → ∞. The integrand in (3.34) is bounded by the function where $(L_t(x))_{t\ge0, x\in\mathbb{R}}$ is a jointly continuous version of the local time of a symmetric β-stable Lévy process. By Lemma 3.3 in [1] the convergence holds also in $L^1(\Omega)$.
Since the expected value of (3.37) converges to 0 as K → ∞ (see Lemma 2.), it suffices to show that (3.39) converges to zero as n → ∞. This is relatively easy and we will only sketch the idea. Fix any r > 0 and j = 1, …, k. The integral in (3.39) can be written as a sum of two integrals $I_1$, $I_2$, depending on whether the variable of integration exceeds r or not; estimating each part separately finishes the proof of (3.7). Now let us turn to (3.8). Define $f_n(x) := c_n(1 - \exp(-c_n^{-1} x))$ for x ∈ R, n ∈ N. Then (3.8) is equivalent to (3.44). We can write, for δ > 0, a decomposition in which (using $|f_n(x)| \le |x|$ for all x ∈ R and n ∈ N) one term converges to 0 as n → ∞ by the uniform integrability of $(B_n)_{n\ge1}$. Using this, and taking $\delta < \frac{1}{2}$, we see that (again by the uniform integrability of $(B_n)_{n\ge1}$) (3.44) holds.

Appendices
A Proof of Lemma 2.1. Take any κ > 1 such that ακ < γ. In the proof, $c_1, c_2, \dots$ will denote constants independent of k and θ. Since the random variable $Y_1$ is symmetric, we may write (using Lemma 1.) a bound with some constant $c_1$ which depends only on α and κ. Here $\varphi_k$ denotes the characteristic function of $k^{-1/\gamma}(Y_1 + \dots + Y_k)$. Recall that by φ we denote the characteristic function of $Y_1$. Since $Y_1$ is in the normal domain of attraction of $Z_\gamma$, we conclude (see [2] for proofs) that the relevant function is regularly varying at 0 with exponent γ. Changing variables $\theta' = (k-1)^{1/\gamma}\theta$ and using Theorem 10.5.6 in [7], we conclude that $\limsup_{k\to\infty} I_1 < \infty$. The fact that, for any c > 0, $I_2$ is bounded uniformly in k ∈ N follows directly from the assumptions of Lemma 2.1.