Lévy Classes and Self-Normalization

We prove a Chung-type law of the iterated logarithm for recurrent linear Markov processes. To attain this level of generality, our normalization is random. In particular, when the Markov process in question is a diffusion, we obtain the integral test corresponding to a law of the iterated logarithm due to Knight.

§1. Introduction. Suppose (B_t; t ≥ 0) is a one-dimensional Brownian motion. Let (l_t; t ≥ 0) denote its local time process evaluated at the level 0. Amongst other results, Knight [Kn, Theorem 3] has proved the following Chung-type law of the iterated logarithm: almost surely,

(1.0) lim inf_{t→∞} (ln ln t / l_t) · sup_{0≤s≤t} |B_s| = 1.
Here and throughout, ln x := log_e(x ∨ e), where log_e is the natural logarithm. (1.0) is achieved by demonstrating that for any ε > 0, almost surely, sup_{0≤s≤t} |B_s| ≥ (1 − ε) l_t / ln ln t for all t large enough, while there is (a.s.) a random sequence t_n → ∞ such that for all n, sup_{0≤s≤t_n} |B_s| ≤ (1 + ε) l_{t_n} / ln ln t_n. In other words, (1 + ε) l_t / ln ln t is in the upper Lévy class of sup_{0≤s≤t} |B_s| if ε > 0, and in the lower Lévy class of sup_{0≤s≤t} |B_s| if ε < 0. It is worth mentioning that (1.0) extends to other diffusions. Furthermore, the results of [Kn] are local, i.e., they hold for t → 0+; however, the proofs translate to the case t → ∞ with no essential changes. Finally, the results of [Kn] are stated for the maximal unreflected process sup_{0≤s≤t} B_s, and due to the choice of the speed measure, the local times in [Kn] are twice ours. The proofs go through with no essential changes.

The main goal of this paper is to extend (1.0) to a broad class of strong Markov processes while providing an exact integral test determining when a suitable function is in the upper (or lower) Lévy class of the maximal process. To this end, let (X_t; t ≥ 0) denote a recurrent, irreducible, linear strong Markov process in the sense of Blumenthal and Getoor [BG]. We shall work under the regime X_0 = 0, although this is not necessary. Furthermore, we assume that X possesses local times (L_t; t ≥ 0) at 0 (say); for details we refer the reader to [BG], Getoor and Kesten [GK] and Sharpe [Sh]. In brief, this means that (L_t; t ≥ 0) is a continuous additive functional of X whose Revuz measure is proportional to the point mass at 0. (The constant of proportionality does not play a rôle in our main results.) It is well known that the recurrence of X implies that lim_{t→∞} L_t = ∞, almost surely (cf. the remarks after Lemma (A.4) below). Hence, even at this level of generality, it is plausible to try to gauge the lower envelope of sup_{s≤t} |X_s| by the random function L_t.
As a consequence of our work, we prove in Theorem (4.2) that when X is a β-symmetric stable Lévy process for β ∈ (1, 2), a self-normalized analogue (1.1) of (1.0) holds almost surely, where the constant χ comes from the characteristic exponent of X via P exp(iξX_t) = exp(−tχ|ξ|^β). Moreover, the process L is suitably normalized; see (4.6) below, together with Getoor and Kesten [GK], for more remarks on the normalization of local times. Note also that upon letting χ = 1/2 and β = 2, one gets back (1.0). This is not surprising, since in this case X is standard Brownian motion.
The above is part of the motivation behind this paper. Indeed, it is known (cf. Dupuis [Du], Fristedt [F] and Wee [We1, We2]) that a deterministic Chung-type LIL holds for X. However, the constant c appearing there is the principal eigenvalue corresponding to the Dirichlet problem for ∆_β on [−1, 1] with a 0 exterior condition (cf. Widom [Wi]). As such, the value of c is unknown. One of the interesting aspects of (1.1) is that the corresponding problem in self-normalization is quite calculable. Furthermore, together with the LIL of Marcus and Rosen [MR], one can get reasonably good lower bounds on c via K(β).

We start with some notation. For any linear Borel set A, we let T(A) := inf{s > 0 : X_s ∈ A} denote its hitting time. As is sometimes customary, we write P for the probability measure as well as for the expectation operator. Furthermore, almost surely (with no reference to the underlying measure) means P-almost surely. Finally, define for all x > 0,

(1.2) m(x) := [P L_{T(F_x)}]^{−1}, where (1.3) F_x := (−∞, −x) ∪ (x, ∞).

Then x ↦ m(x) is a decreasing function on (0, ∞). Furthermore, by the strong Markov property, L_{T(F_x)} is an exponential random variable, and hence m(x) ∈ (0, ∞). We are ready to state our main result. For the sake of completeness, we also provide the following, which generalizes [Kn, Theorem 2]; see Motoo [Mo] and Erickson [Er] for related results.
(1.5) Theorem. Let h : (0, ∞) → (0, ∞) be a nondecreasing function such that for all x large enough,

(1.6) Remark. Suppose X is a Hunt process on a locally compact space E with a countable basis, for which there exists a distinguished point b ∈ E such that b is regular for {b} and recurrent. Let L_t denote the process of local times at b (cf. [BG] and [Sh] for details). By the Appendix, L_∞ = ∞. An inspection of the proofs of Theorems (1.4) and (1.5) shows that they immediately extend to results about (sup_{s≤t} g(X_s); t ≥ 0), where g :

(1.7) Remark. Working at the level of generality of Remark (1.6), it is possible to prove local versions of Theorems (1.4) and (1.5), i.e., where t ↓ 0. In this case, X need not have a recurrent point; we only need that L_t exists, which is the same as b being regular for {b}.
Let us finish by discussing what happens when X is a one-dimensional diffusion with scale function S and speed measure M; see Revuz and Yor [RY] for the necessary background. Define for any x > 0, This is the Green's function for the interval [−x, x]. One can normalize local times as in [RY, Chapter 7] so that for all x > 0, In particular, note that when X is symmetric (i.e., X and −X have the same finite-dimensional distributions),

The organization of this paper is as follows. In Sections 2 and 3, we prove Theorems (1.4) and (1.5), respectively. In Section 4, we show how one can make the necessary computations and use these results when (X_t) is a symmetric stable Lévy process of index β ∈ (1, 2]. When β = 2, we have Brownian motion; in this case, Theorem (1.4) provides the integral test for the Lévy class corresponding to (1.0) above. In Section 5, we discuss the discrete-time analogues of (1.4) and (1.5). More precisely, we discuss self-normalized laws of the iterated logarithm for recurrent walks in Z^d. For a special class of such walks, another self-normalization appears in the literature; see, for example, Griffin and Kuelbs [GK1, GK2] and Shao [S]; see also Lewis [Le] and the references in [S]. The advantage of self-normalizing by local times is apparent in that the class of analyzable processes is quite large. Finally, we provide an appendix which contains several general remarks about NBU random variables and local times.

§2. The Proof of Theorem (1.4). The strategy for proving (1.4) is to time-change X by the inverse local time defined in (2.1) below. We will exploit the fact that (X*_{τ(t)}; t ≥ 0) is an extreme-value process. This would in turn allow us to use facts about such processes: see Barndorff-Nielsen [BN], Robbins and Siegmund [RS] and Klass [Kl]. Indeed, embedded in our proofs, one can find a streamlined derivation of some of these results. Let us define, for t ≥ 0,

(2.1) τ(t) := inf{s > 0 : L_s > t}, (2.2) X*_t := sup_{0≤s≤t} |X_s|, and (2.3) E(t) := {ω : X*_{τ(t)} < h(t)},

where h is given by (1.4). We begin with some technical lemmas.
3) as well as the fact that excursions are always counted in the clock τ.) By [It], (N_x(t); t ≥ 0) is a Poisson process with mean P N_x(t) = t P N_x(1) = t m(x). Part (i) follows, as desired. To prove part (ii), apply the strong Markov property at time τ(t). Since X_{τ(t)} = X_{τ(t)+} = 0, and since h is increasing, we are done by part (i). ♦

Following Erdős [E], define t_n := exp(n/ln n) for all n ≥ 1. Let us begin with some useful combinatorial properties of (t_n; n ≥ 1).
(2.5) Lemma. There exists c_1 > 1 such that

Proof. By Taylor's expansion, it is easy to see that lim_{n→∞} ln n · (t_{n+1} − t_n)/t_n = 1.
Part (i) follows. To prove part (ii), notice that n ↦ t_n/ln n is increasing. Hence, by (i), the result follows. ♦

We need some more notation. For positive integers n < N, let us define, Note that the sets G^N_n (for good), B^N_n (for bad) and U^N_n (for ugly) form an integer partition of [1, N]. For integers 0 < n < N and j ≥ 1, define the discrete annuli, Finally, define

(2.9) Lemma. There exists a c_2 > 0 such that for all integers n, k, j ≥ 1,

Proof. Without loss of generality, we may suppose that B^N_n ∩ A^N_n(j) ≠ ∅ and U^N_n ∩ A^N_n(j) ≠ ∅, for otherwise there is nothing to prove. Let k ∈ B^N_n ∩ A^N_n(j). By (2.6) and Lemma (2.5)(ii), By the definition of B^N_n, t_{n+k} ≤ t_{n+[2 ln n · ln ln n]+1}. By Taylor's expansion, as n → ∞, Therefore, the following is finite: To recapitulate, whenever k ∈ B^N_n ∩ A^N_n(j), On the other hand, for our values of n, N, k and j, Since lim_{n→∞} t_{n+[ln n]}/t_n = e > 1, it follows that for some c_4 ∈ (0, ∞), (j + 1) ≥ c_4 ln n. By (2.11), this means that whenever k ∈ B^N_n ∩ A^N_n(j), k ≤ c_3 c_4^{−2} (j + 1)³. In other words, Next, suppose k ∈ U^N_n ∩ A^N_n(j). By (2.7) and (2.10), Since lim_{n→∞} t_{n+[ln n]+1}/t_n = e, we have shown that for some c_5 ∈ (0, ∞), k ≤ c_5(j + 1). In other words, #(U^N_n ∩ A^N_n(j)) ≤ c_5(j + 1) ≤ c_5(j + 1)³. Together with (2.8) and (2.12), we obtain the lemma. ♦

We are ready to prove Theorem (1.4). Define ψ(x) := x m(h(x)). Recalling (2.3) and that t_n = exp(n/ln n), let ψ_n := ψ(t_n), E_n := E(t_n) and P_n := P(E_n), for simplicity. By the argument of Erdős [E], it suffices to prove (1.4) when

(2.13) c_6 ln n ≤ ψ_n ≤ c_7 ln n,

for some 0 < c_6 < c_7 < ∞ and all n ≥ 1. With this in mind, it is easy to see that By Lemma (2.4)(i) and the definition of P_n, It clearly suffices to prove that P(E_n, i.o.) = 1. By (2.14), it follows that Σ_n P_n = ∞.
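The second-moment criterion invoked next is the lemma of Kochen and Stone; for the reader's convenience, in the form we use it states that for events A_1, A_2, ... with Σ_n P(A_n) = ∞,

```latex
P(A_n \ \text{i.o.}) \;\ge\; \limsup_{N\to\infty}
\frac{\Bigl(\sum_{n=1}^{N} P(A_n)\Bigr)^{2}}{\sum_{m,n=1}^{N} P(A_m \cap A_n)}\,.
```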
By Kochen and Stone [KS], in turn, it suffices to show that Recalling the integer partition of [1, N] given by (2.6) and (2.8), it suffices to show the following: Note that by Lemma (2.4)(i) and (ii), Thus, to prove (2.15), it suffices to show that there exists some c_8 > 0 (independent of N ≥ 1) such that However, for any k ∈ G^N_n, t_{n+k} ≥ t_{n+2[ln n · ln ln n]} ∼ t_n (ln n)². By (2.13) and monotonicity, there exist some c_8′, c_8″ ∈ (0, ∞) such that This implies (2.17) with c_8 := (c_8′)^{−1}. As mentioned before, (2.15) follows. To prove (2.16), use Lemmas (2.4)(ii) and (2.9) in the following manner: since n ↦ ψ_n is increasing. By (2.13), ψ_n ≥ c_6 ln n. Hence, by Lemma (2.9), where c_9 := c_2 Σ_{j=0}^∞ (j + 1)³ e^{−c_6 j} < ∞. This proves (2.16) and hence the divergence half of Theorem (1.4).
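As an aside (a numerical illustration, not part of the argument), the limit in Lemma (2.5) for the Erdős sequence t_n = exp(n/ln n) can be checked directly. Note that t_n itself overflows double precision long before the asymptotics become visible, so the ratio is computed through expm1:

```python
import math

def scaled_gap(n: int) -> float:
    """ln n * (t_{n+1} - t_n) / t_n for t_n = exp(n / ln n).

    Computed via expm1(delta) with delta = (n+1)/ln(n+1) - n/ln(n),
    since exp(n / ln n) itself overflows a float for large n.
    """
    delta = (n + 1) / math.log(n + 1) - n / math.log(n)
    return math.log(n) * math.expm1(delta)

for n in (10**3, 10**5, 10**7):
    print(n, scaled_gap(n))   # increases slowly towards the limit 1
```

The convergence is logarithmically slow, which is consistent with the error terms appearing in the Taylor expansion used in the proof.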
To prove the convergence half, suppose (Compare this with E_n := E(t_n) defined in (2.3).) By the proof of Lemma (2.4)(i), Applying (2.13), we see that By Lemma (2.5)(i), t_{n−1}/t_n ≥ 1 − (c_1/ln n). Therefore, for all n ≥ exp(c_1 + c_7^{−1}), by Lemma (2.4)(i). By (2.14) and the Borel-Cantelli lemma, P(E_n, i.o.) = 0. A monotonicity argument à la [E] finishes the proof. ♦

§3. The Proof of Theorem (1.5). Let s_n := 2^n and, recalling (2.1) and (2.2), define,

(3.1) Lemma. There exists ε > 0 so that for all n ≥ 1,

Proof. In the excursion theory notation of §2, using the fact that for proving (i), since s_{n+1} = 2s_n. The proof of the upper bound in (ii) is similar. To prove the lower bound in (ii), we can use the fact that e^{−x} ≤ 1 − x + x²/2 for x ≥ 0 to see that for all n large, since by assumption x m(h(x)) ≤ 1 for all x large. The result follows for ε appropriately small. ♦

We are now ready to prove Theorem (1.5). Suppose ∫_1^∞ m(h(t)) dt < ∞. By Lemma (3.1)(i), Σ_n P(F_n) < ∞. By the Borel-Cantelli lemma and monotonicity, it follows that Now for any n, k ≥ 1, consider, by the strong Markov property. By monotonicity, the second summand is bounded above by P(F_n)P(F_{n+k}). Furthermore, the first summand equals

1 − P(N_{h(s_{n+k})}(s_n) = 0) = 1 − exp(−s_n m(h(s_{n+k}))) ≤ 2^{−k} s_{n+k} m(h(s_{n+k})) ≤ 2^{−k} ε^{−1} P(F_{n+k}),

by Lemma (3.1)(ii). Hence, Together with (3.2) and the lemma of Kochen and Stone [KS], this implies that P(F_n, i.o.) = 1, which finishes the proof. ♦

§4. Examples. Throughout this section, X denotes a symmetric β-stable Lévy process with β ∈ (1, 2]. That is, X is an infinitely divisible process whose Fourier transform is given by P exp(iξX_t) = exp(−tχ|ξ|^β), where ξ ∈ R¹ and χ > 0 is arbitrary. It is well known that X is recurrent and possesses local times L at 0; cf. Getoor and Kesten [GK] for this and much more. Let X*_t := sup_{0≤s≤t} |X_s|. The main result of this section is the following explicit calculation of the function m of (1.2) in this setting.
We will use the rest of this section to prove Theorem (4.2).
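As an aside, the characteristic exponent of X can be seen empirically. The sketch below (an illustration only, not used in the proofs) samples a standard symmetric β-stable variable by the Chambers-Mallows-Stuck method, under the normalization χ = 1, and checks that the empirical value of P cos(ξX) is close to exp(−|ξ|^β):

```python
import math
import random

random.seed(0)

def symmetric_stable(beta: float) -> float:
    """Chambers-Mallows-Stuck sampler for a standard symmetric
    beta-stable variable, i.e. E exp(i xi X) = exp(-|xi|^beta)."""
    v = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    return (math.sin(beta * v) / math.cos(v) ** (1 / beta)
            * (math.cos((1 - beta) * v) / w) ** ((1 - beta) / beta))

beta, xi, n = 1.5, 1.0, 200_000
# Empirical characteristic function at xi (the sine part vanishes by symmetry).
est = sum(math.cos(xi * symmetric_stable(beta)) for _ in range(n)) / n
print(est, math.exp(-abs(xi) ** beta))   # the two numbers should be close
```

For β = 2 the same exponent describes a Brownian motion with variance parameter 2, which is one way to see why χ = 1/2 recovers the standard case.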
For α > 0, define for all x ∈ R¹, Note that v_α is a symmetric function, and in the language of [GK], the α-resolvent density, u_α, of X is given by See Bretagnolle [B] and Kesten [K]. Since 0 is used as a distinguished point, we will need one more piece of notation. Define for all α > 0, Next, we need to construct a version of the local times L. To do so, let γ(α) be an independent exponential random variable with mean α^{−1}. Since u_α(0, ·) is excessive, the Doob-Meyer decomposition implies the following, which is a special case of the work of [GK]: stopped at time γ(α). Let us start with some technical lemmas.
Hence, (4.14) implies that when β = 2, In light of (4.3), this proves Theorem (4.2) when β = 2. To finish the proof of (4.2), let β ∈ (1, 2). By Widom [Wi] (see also Blumenthal et al. [BGR] and its references), (Actually, the above references prove (4.15) for the case χ = 1; the general case follows from their work together with scaling considerations.) Moreover, by (4.14), scaling and symmetry, By (4.15) and some calculus, Contour integration shows that for all z ∈ C \ Z, Γ(z)Γ(1 − z) = π/sin(πz). Consequently, Therefore, Recall the following well-known identity: To evaluate the aforementioned integral, we can write x^{−β}(1 − cos x) as x^{2−β} times x^{−2}(1 − cos x) and perform integration by parts to see that the integral equals (β − 1)^{−1} ∫_0^∞ x^{1−β} sin x dx; the latter is computable by standard means, since ∫_0^∞ x^{s−1} sin x dx = Γ(s) sin(πs/2) for 0 < s < 1 yields ∫_0^∞ x^{1−β} sin x dx = Γ(2 − β) sin(πβ/2). Together with (4.16) and (4.17), this finishes the proof. ♦

§5. Recurrent walks in Z^d. Let Y_1, Y_2, ... be i.i.d. random vectors taking values in Z^d. The corresponding random walk (X_n; n ≥ 1) is defined by X_n := Y_1 + · · · + Y_n. We shall assume that X is a genuinely d-dimensional random walk; by recurrence, this implies that d ≤ 2. Define the local time of X as L_n := #{1 ≤ k ≤ n : X_k = 0}. Arguments similar to those presented in Section 1 show that the recurrence of X is equivalent to P(lim_{n→∞} L_n = ∞) = 1. As in §1, define for any Borel set A ⊂ Z^d, Interpreting |x| as the ℓ²-norm of x ∈ Z² \ {0}, we can define m and F_x as in (1.2) and (1.3), respectively. The main result of this section is the following analogue of Theorem (1.4): A discrete-time analogue of (1.5) is also possible; its statement (and proof) is left to the interested reader.
The proof of (5.4) is much like that of (1.4), except that one needs to use discrete excursion theory. The latter is not as well documented as continuous-time excursion theory; therefore, we will include an outline of the proof of (5.4). Before doing so, however, let us investigate two interesting examples: the simple walk in Z¹ and the simple walk in Z².
(5.5) Example. Suppose d = 1 and Y_1 = ±1 with probability 1/2 each. In order to apply (5.4), we first and foremost need to compute the function m. One can proceed in complete analogy to §4; however, there is a simpler way to compute m. For all x ∈ Z¹ \ {0}, where T_a := T({a}). This is essentially a renewal-theoretic argument; see the proof of (5.4). Let Therefore, by the gambler's ruin calculation, m(x) = 1/⌈x⌉. Theorem (5.4) then implies the following: It is possible to show that ln ln L_n ∼ ln ln n. Thus, we obtain the analogue of (1.0): almost surely,

lim inf_{n→∞} (ln ln n / L_n) max_{1≤k≤n} |X_k| = 1.
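The gambler's ruin computation above can be checked by simulation. In the sketch below (illustrative only; we take an integer level x and declare that an excursion exits once it reaches absolute value x, a boundary convention chosen for simplicity), each excursion of the simple walk from 0 exits with probability exactly 1/x:

```python
import random

random.seed(3)

def excursion_reaches(x: int) -> bool:
    """Run one excursion of the simple walk from 0 and report whether
    it reaches absolute value x before returning to 0."""
    pos = random.choice((-1, 1))     # the excursion's first step
    while pos != 0:
        if abs(pos) >= x:
            return True
        pos += random.choice((-1, 1))
    return False

x, n = 5, 100_000
p_hat = sum(excursion_reaches(x) for _ in range(n)) / n
print(p_hat, 1 / x)   # gambler's ruin: the exit probability is 1/x
```

By symmetry, an excursion starting with a step to ±1 hits ±x before 0 with probability 1/x, which is what the Monte Carlo estimate reproduces.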
(Many results of this type can be found in Révész [R], for example.) Therefore, we obtain

lim inf_{n→∞} (ln ln ln n / L_n) max_{1≤k≤n} ln |X_k| = π/2, almost surely.

An interesting consequence of the above (suggested to us by an anonymous referee) is the following: since ln max_{1≤k≤n} |X_k| ∼ (1/2) ln n, almost surely,

lim sup_{n→∞} L_n / (ln n · ln ln ln n) = 1/π.
Thus, we obtain an alternative proof of the LIL of Erdős and Taylor (cf. Révész [R, p. 202]): ♦

The proof of Theorem (5.4). As in (2.1) and (2.2), define for all n ≥ 1, Notice that τ(n) is none other than the n-th return time to 0. In analogy with (2.3), define for all n ≥ 1,

E(n) := {ω : X*_{τ(n)} < h(n)}.
Since the excursions of X from 0 are i.i.d., X*_{τ(n)} = max_{1≤j≤n} max_{τ(j−1)<k≤τ(j)} |X_k| is the maximum of n i.i.d. random variables (τ(0) := 0). Moreover, where T_a := T({a}). Hence, by renewal theory, P X . To see this, define path-valued processes e_k as follows: The e_k's are the excursions from 0 and are ordered according to their natural clock, τ, in which things are measured. Let A be any Borel set on the space of paths. (We shall not bother with the topological asides here; they are straightforward, especially as the state space is discrete.) By the strong Markov property, N(A)_m := Σ_{j=1}^m 1_A(e_j) is a Bernoulli process, in that it is a sum of m i.i.d. random variables, and its growth times are geometric random variables with parameter p(A) := P(e_1 ∈ A). Applying the optional sampling theorem, we see that for all stopping times σ (in the filtration of N(A)) and all n ≥ 1, P N(A)_{σ∧n} = p(A) P(σ ∧ n). By the monotone convergence theorem, P N(A)_σ = p(A) Pσ. Now let σ be the first growth time of N(A); in other words, σ = L_{T(A)}. Clearly, σ is an N(A)-stopping time. Since N(A)_σ = 1, it follows that p(A) = (P L_{T(A)})^{−1}.
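For the simple walk of Example (5.5), the identity p(A) = (P L_{T(A)})^{−1} can be seen concretely. With the illustrative convention that the walk is stopped once it reaches absolute value x (x a positive integer), each visit to 0 launches an excursion that exits with probability 1/x, so L_{T(A)} is geometric with mean x. A quick Monte Carlo sketch:

```python
import random

random.seed(4)

def zeros_before_exit(x: int) -> int:
    """Count visits to 0 (including time 0) of a simple walk started
    at 0, before it first reaches absolute value x."""
    pos, zeros = 0, 0
    while abs(pos) < x:
        if pos == 0:
            zeros += 1
        pos += random.choice((-1, 1))
    return zeros

x, n = 5, 20_000
mean_L = sum(zeros_before_exit(x) for _ in range(n)) / n
print(mean_L)   # geometric with success probability 1/x, so the mean is x
```

The empirical mean is close to x = 5, matching p(A) = 1/x = (P L_{T(A)})^{−1}.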
The above argument can be pushed further to show that (m, A) ↦ N(A)_m is a Bernoulli point process, in that it is a Bernoulli process for each A, and N(A) and N(B) are independent if A ∩ B = ∅. These are the essentials of discrete excursion theory. Now to see why P X τ (1) n . Without loss of generality, we can assume that h(n) → ∞ as n → ∞, for otherwise there is nothing to prove. Furthermore, just as in (2.13), we can assume that for some c_12 > 1,

c_12^{−1} ln ln k ≤ k m(h(k)) ≤ c_12 ln ln k.
Since the combinatorial aspects of the proof of Theorem (5.4) are not at all different from those of Theorem (1.4), the same proof now goes through without any further difficulties. ♦

Appendix: NBU random variables and local times. Following Shaked and Shanthikumar [SS, p. 11], we say that a positive random variable S is NBU if for all a, b > 0,

(A.1) P(S > a + b) ≤ P(S ≥ a) P(S ≥ b).
(NBU stands for New Better than Used.) As is customary, we write ‖S‖_p := (P S^p)^{1/p} for the moments of S. Let us begin with a basic L^p(P) estimate for NBU random variables.
Proof. When p = 1, this is clear. By the convexity theorem of Riesz (cf. Stein [St], Theorem B.1 for a version of this), it suffices to prove the result for all integers p ≥ 1. To this end, we proceed by induction: suppose that for some integer p ≥ 1, ‖S‖_p ≤ Γ^{1/p}(1 + p) ‖S‖_1. We will strive to prove that the same holds for p + 1. To do this, we use the induction hypothesis and integrate by parts as follows: by (A.1). Making a change of variables from (x, y) to (u, v) = (x + y, y), we see that It is not difficult to see that the above is sharp.
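Sharpness can be seen on an exponential random variable: if S is exponential with mean one, then (A.1) holds with equality (the memoryless property) and P S^p = Γ(1 + p), so that ‖S‖_p = Γ^{1/p}(1 + p) ‖S‖_1 exactly. A quick numerical check (illustrative only):

```python
import math
import random

random.seed(1)

# For S ~ Exp(1): (A.1) holds with equality by memorylessness, and
# E[S^p] = Gamma(1 + p), so ||S||_p = Gamma(1+p)^{1/p} ||S||_1 exactly.
N = 200_000
samples = [random.expovariate(1.0) for _ in range(N)]

p = 3
moment = sum(s ** p for s in samples) / N        # Monte Carlo E[S^p]
tail_2 = sum(s > 2.0 for s in samples) / N       # P(S > 2)
tail_1 = sum(s > 1.0 for s in samples) / N       # P(S > 1)

print(moment, math.gamma(1 + p))                 # E[S^3] = Gamma(4) = 6
print(tail_2, tail_1 ** 2)                       # equality in (A.1)
```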
(A.3) Lemma. Let (S_n) be a sequence of NBU random variables. Suppose further that P S_n increases to infinity as n → ∞. Then, with probability one, lim sup_{n→∞} S_n = ∞.
First, let θ ↓ 0 and then p ↓ 1 to obtain the result. ♦ The relation to local times is the following result, which is a consequence of the strong Markov property.
Since t ↦ L_t is a.s. increasing, Lemmas (A.3) and (A.4) together imply that lim_{t→∞} L_t = ∞ a.s. if and only if E L_∞ = ∞. This is the well-known condition that X is point recurrent. (Indeed, the corresponding potential measure is U(A) := ∫_0^∞ P(X_s ∈ A) ds; cf. Blumenthal and Getoor [BG] and Sharpe [Sh].)
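To see the dichotomy numerically (a rough illustration; the path and step counts below are arbitrary choices), compare the number of visits to the origin for the simple walk in Z¹, which is recurrent, with the simple walk in Z³, which is transient and makes only finitely many returns in expectation:

```python
import random

random.seed(5)

def visits_to_origin(dim: int, steps: int) -> int:
    """Count the returns to the origin of the simple walk on Z^dim,
    which moves one uniformly chosen coordinate by +-1 per step."""
    pos = [0] * dim
    visits = 0
    for _ in range(steps):
        axis = random.randrange(dim)
        pos[axis] += random.choice((-1, 1))
        if all(c == 0 for c in pos):
            visits += 1
    return visits

paths, steps = 100, 5_000
results = {}
for dim in (1, 3):
    results[dim] = sum(visits_to_origin(dim, steps) for _ in range(paths)) / paths
    print(dim, results[dim])   # dim 1 grows like sqrt(2n/pi); dim 3 stays bounded
```

In one dimension, the mean count keeps growing with the number of steps, while in three dimensions it stays near a small constant, mirroring E L_∞ = ∞ versus E L_∞ < ∞.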