On the Range of Subordinators

In this note we look in detail at the box-counting dimension of subordinators. Given that $X$ is a non-decreasing Lévy process which is not a compound Poisson process, we show that, in the limit, the minimal number of boxes of size $\delta$ needed to cover the range of $(X_s)_{s\leq t}$ is a.s. of order $t/U(\delta)$, where $U$ is the potential function of $X$. This is a more refined result than the lower and upper indices of the box-counting dimension computed in the literature, which deal with the asymptotic number of boxes at logarithmic scale.


Introduction and Results
In this note we consider the minimal number of intervals needed to cover the range of a given subordinator $X := (X_s)_{s\geq 0}$. The study of the set properties of the range of Lévy processes in general, and of subordinators in particular, has a long history. Various notions of dimension have been discussed for the range of Lévy processes. We refer to [B99] and [B96] for more information on the range of subordinators, and to the work of [KX05], [KX06] for results on more general Lévy processes. In this work we improve the results on the box-counting dimension of subordinators presented in [B99, Ch. 5] by showing the a.s. convergence of the random variable that counts the minimal number of intervals covering the range of a subordinator up to a given time $t$, rescaled by the potential function of the subordinator $X$. Previously, the behaviour of this number of intervals has only been discussed at a logarithmic scale (see Remark 2), and even then a precise convergence has not been available.
Recall that for any subordinator $X$ defined on some probability space $(\Omega, \mathcal{F}, P)$, we have that
$$\mathbb{E}\left[e^{-\lambda X_t}\right] = e^{-t\Phi(\lambda)}, \qquad \Phi(\lambda) = d\lambda + \int_0^\infty \left(1 - e^{-\lambda x}\right)\Pi(dx), \quad \lambda \geq 0,$$
where $d \geq 0$ is the linear drift of the subordinator $X$ and the measure $\Pi$, satisfying $\int_0^\infty \min\{1, x\}\,\Pi(dx) < \infty$, describes the intensity and the size of the jumps of $X$. In the sequel we shall assume either that the jumps of $X$ are infinitely many on any finite interval of time (that is, $\Pi(0, \infty) = \infty$) or, if the jumps of $X$ are finitely many on any finite interval of time (that is, $\Pi(0, \infty) < \infty$), that $d > 0$. This is equivalent to $X$ not being a compound Poisson process.
Denote by $N(t, \delta)$ the minimal number of intervals of length at most $\delta$ that are needed to cover the range of $X$ up to time $t > 0$. The most economical covering of this type is constructed in the following way: set $T_0 = 0$ and
$$T_k(\delta) = \inf\left\{s > T_{k-1}(\delta) : X_s - X_{T_{k-1}(\delta)} > \delta\right\}, \qquad k \geq 1,$$
which clearly implies that $\{N(t, \delta) \geq k\} = \{T_k(\delta) \leq t\}$. For clarity we write $T_k(\delta) = T_k$ when there is no ambiguity. The minimal covering of the range of $X$ up to any time $t > 0$ is the collection of random intervals $\left\{\left[X_{T_{n-1}}, X_{T_n-}\right]\right\}_{\{n\geq 1,\, T_{n-1} < t\}}$. In the following, we write $\eta_i := T_i - T_{i-1}$ and note that $(\eta_i)_{i\geq 1}$ is a sequence of independent, identically distributed random variables. We will frequently use the $q$-potentials of $X$ defined by
$$U^q(\delta) := \mathbb{E}\left[\int_0^\infty e^{-qt}\,\mathbf{1}_{\{X_t \leq \delta\}}\,dt\right] = \frac{1 - \mathbb{E}\left[e^{-qT_1(\delta)}\right]}{q}, \quad \text{for all } q > 0, \qquad (1.3)$$
and we abbreviate $U(\delta) := U^0(\delta) = \mathbb{E}[T_1(\delta)]$, noting that the first identity in (1.3) makes sense even when $q = 0$. Our aim is to show that $U(\delta)N(t, \delta)$ converges to $t$ almost surely. A case where this would fail is the compound Poisson case, in which it is apparent that $U(0+) > 0$. The quantity $\liminf_{\delta\to 0} \ln N(t,\delta)/\ln(1/\delta)$ is known as the lower box-counting dimension, whereas $\limsup_{\delta\to 0} \ln N(t,\delta)/\ln(1/\delta)$ is the upper box-counting dimension, see [B99, Chap. 5].
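The construction of the covering via the times $T_k$ can be illustrated with a short simulation. The sketch below is not from the paper: the helper `greedy_cover_count`, the time-grid approximation of the path, and the choice of a gamma subordinator are all assumptions made for illustration.

```python
import numpy as np

def greedy_cover_count(path_values, delta):
    # Discrete analogue of the T_k recursion: open a new interval of
    # length delta each time the non-decreasing path exceeds the
    # current anchor by more than delta.
    count = 1
    anchor = path_values[0]
    for x in path_values:
        if x > anchor + delta:
            count += 1
            anchor = x
    return count

rng = np.random.default_rng(1)
n = 100_000                 # grid points on [0, 1]
dt = 1.0 / n
# Gamma subordinator: independent Gamma(dt, 1) increments, an
# infinite-activity subordinator (Pi(0, infinity) = infinity, d = 0).
X = np.cumsum(rng.gamma(shape=dt, scale=1.0, size=n))
counts = {d: greedy_cover_count(X, d) for d in (0.1, 0.01, 0.001)}
print(counts)
```

The counts grow as $\delta$ shrinks; since the grid only sees finitely many points of the path, the computed count is a lower bound for the true $N(1, \delta)$.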
Remark 3. Note that the relation $U(\delta) \asymp 1/\Phi(1/\delta)$, that is, $C_1 \leq U(\delta)\Phi(1/\delta) \leq C_2$ for two absolute constants $0 < C_1 < C_2 < \infty$, see [B96, Ch. III, Prop. 1], which, due to Theorem 1, leads to $\lim_{\delta\to 0} \ln N(t,\delta)/|\ln U(\delta)| = 1$ almost surely, shows in an alternative way that it may happen that $\underline{\mathrm{ind}}(\Phi) < \overline{\mathrm{ind}}(\Phi)$. Theorem 1, however, further demonstrates that the scale $\ln(1/\delta)$ is not the right one for $\ln N(t, \delta)$ and shows the existence of a correct deterministic scale even for $N(t, \delta)$ itself.
Remark 4. The notion of lower and upper box dimension is tightly related to the notions of packing dimension and packing measure. In fact $\overline{\mathrm{ind}}(\Phi) = \dim_P(X)$, see e.g. [B99, Chap. 5, p. 42], where $\dim_P(X)$ is the packing dimension of the range of the subordinator $X$ for $t = 1$. Moreover, the possible packing measures generated by the measure functions $\phi$ have been extensively studied; see [FT92] for more detail. The notable conclusion is that unless the subordinator is of Cauchy type, see [FT92, Section 4], the $\phi$-packing measure is either zero or infinity. To our understanding, this does not allow these otherwise very refined results to shed light on the problem we discuss, i.e. the a.s. behaviour of $N(t, \delta)$.

Some applications
The first remark concerns the very precise a.s. behaviour that we can obtain for $N(t, \delta)$, as $\delta \to 0$, whenever $d > 0$.
Corollary 1. Let $d > 0$. Then a.s., for any $t > 0$,
$$N(t, x) \sim \frac{dt}{x}, \qquad \text{as } x \to 0.$$
The second remark is also immediate, but we formulate it for convenience. Note that the information at logarithmic level, namely the available results about $\ln N(t, \delta)$, hides a good deal of precision as to the fluctuations of $N(t, x)$, which are due to second-order variation in $\Phi(\lambda)$. This is particularly apparent when $\alpha = 0$ in the statement below.
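A minimal sanity check for the drift case (an illustrative example, not taken from the paper): for the pure-drift subordinator $X_t = dt$ with $d > 0$ and no jumps, everything can be computed by hand.

```latex
% The range of (X_s)_{s \le t} is the interval [0, dt], so
N(t,\delta) = \left\lceil \frac{dt}{\delta} \right\rceil \sim \frac{dt}{\delta},
\qquad
U(\delta) = \mathbb{E}\left[T_1(\delta)\right] = \frac{\delta}{d},
% and hence U(\delta)\, N(t,\delta) \to t, consistent with the a.s.
% convergence of U(\delta) N(t,\delta) to t claimed in the text.
```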
Corollary 2. Assume that $\Phi(\lambda) = \lambda^{\alpha} L(\lambda)$, as $\lambda \to \infty$, with $\alpha \in [0, 1]$ and $L$ a slowly varying function. Then, a.s., for any $t > 0$,
$$N(t, \delta) \sim t\,\Gamma(1+\alpha)\,\delta^{-\alpha} L(1/\delta), \qquad \text{as } \delta \to 0,$$
where $\Gamma$ is the Gamma function.
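For instance, in the $\alpha$-stable case $\Phi(\lambda) = \lambda^{\alpha}$ with $\alpha \in (0,1)$ (an illustrative specialisation with $L \equiv 1$, not stated in the paper), the corollary gives:

```latex
N(t,\delta) \sim t\,\Gamma(1+\alpha)\,\delta^{-\alpha},
\qquad \delta \to 0,
\qquad\text{and hence}\qquad
\lim_{\delta \to 0} \frac{\ln N(t,\delta)}{\ln(1/\delta)} = \alpha
\quad \text{a.s.},
```

recovering the familiar fact that the box-counting dimension of the range of an $\alpha$-stable subordinator equals $\alpha$.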
Next, we turn our attention to an interesting property that can be deduced using Theorem 1. It is known from [FP71] that, with $I_\cdot$ standing for a generic finite interval, the range of a subordinator admits efficient coverings by finite collections of intervals. The following result shows that usually an efficient covering of the set $(X_s)_{s\leq 1}$ is not achieved by intervals of length proportional to $\delta$, as $\delta \to 0$. For any finite collection of intervals $(I_i)_i$ denote $A_\delta = \{I_i : |I_i| \leq \delta\}$ and $A^c_\delta = \{I \in A_\delta : |I| > c\delta\}$, for any $c \in (0, 1)$. Then we have the following result.

Proposition 1. Let $X$ be a subordinator satisfying (2.5). Then we have that, for any $c \in (0, 1)$, (2.6) holds.

Remark 5. Note that using [B96, Chap. 3, Prop. 1] one checks that $\Phi(x)f(x) \geq 1$. Also, (2.5) may fail to hold whenever $\Phi(x) \sim xL(x)$, as $x \to \infty$, with $L$ some slowly varying function. However, if $\Phi$ is not close to linear behaviour then (2.5) does hold, and one deduces from (2.6) that the number of intervals of length proportional to $\delta$ in efficient coverings of $(X_s)_{s\leq 1}$ is of smaller magnitude than the most economical covering count $N(1, \delta)$.

Proof of Theorem 1
We start the proof by showing a couple of auxiliary results.
Lemma 1. If $X$ is a subordinator with $\Pi(0, \infty) = \infty$, or with $\Pi(0, \infty) < \infty$ and $d > 0$, then the following convergence holds:
Proof. First note that the definitions above directly imply that $P(N(t, \delta) \geq k) = P(T_k(\delta) \leq t)$. We compute the Laplace transform of the $T_k$, using the fact that the random variables $\eta_i$ are independent and identically distributed, to get
$$\mathbb{E}\left[e^{-qT_k}\right] = \left(\mathbb{E}\left[e^{-q\eta_1}\right]\right)^k = \left(1 - qU^q(\delta)\right)^k, \qquad q > 0.$$

Lemma 2.
With $N(1, \delta)$ specified as above, we have, for every integer $j > 0$,
$$N(1, \delta) = \sum_{i=1}^{j} N_i\left(\tfrac{1}{j}, \delta\right) + A_j,$$
where the $N_i\left(\tfrac{1}{j}, \delta\right)$ are i.i.d. copies of $N\left(\tfrac{1}{j}, \delta\right)$ and $-j < A_j \leq 0$ is an integer-valued random variable. Proof. The proof follows easily by induction once one observes that, for any $0 < t < 1$,
$$N(t, \delta) + N'(1-t, \delta) - 1 \;\leq\; N(1, \delta) \;\leq\; N(t, \delta) + N'(1-t, \delta),$$
where $N'(1-t, \delta)$ counts a fresh covering of the range on $(t, 1)$, and $N'(1-t, \delta)$ and $N(t, \delta)$ are independent. We argue pathwise. Up to time $t$ we have some number of intervals that cover the range of the subordinator. At time $t$ we start a new covering for the remaining time $1 - t$ and continue the old covering for the whole length $1$. The new covering of the range on $(t, 1)$ exceeds the old one by at most $1$, since in the worst case we start the new covering at $X_t$ while $X_t$ lies in an interval of the original covering of the whole range on $(0, 1)$. The next lemma is in fact [LS, Lem. 3.1], which we reproduce for clarity.
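The decomposition of Lemma 2 can be checked numerically on a sampled path; the greedy covering helper, the time grid, and the gamma model below are assumptions for this sketch. For the greedy (minimal) cover of a non-decreasing sequence of points, the sum of per-block counts can exceed the whole-path count only because each block restarts its covering, so the discrepancy lies in $(-j, 0]$, as in the lemma.

```python
import numpy as np

def greedy_cover_count(path_values, delta):
    # Minimal number of length-delta intervals covering the sampled range.
    count = 1
    anchor = path_values[0]
    for x in path_values:
        if x > anchor + delta:
            count += 1
            anchor = x
    return count

rng = np.random.default_rng(7)
n, j, delta = 90_000, 9, 0.01
# Gamma subordinator sampled on a grid over [0, 1].
X = np.cumsum(rng.gamma(shape=1.0 / n, scale=1.0, size=n))
whole = greedy_cover_count(X, delta)          # N(1, delta) on the grid
blocks = np.array_split(X, j)                 # j sub-paths of length 1/j
parts = sum(greedy_cover_count(b, delta) for b in blocks)
A_j = whole - parts                           # integer in (-j, 0]
print(whole, parts, A_j)
```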
Lemma 3. There is an absolute constant $C_a > 0$, not depending on $X$, such that the following bound holds for each $\delta > 0$ and $x > 0$.
Proof. By Markov's inequality we obtain, for arbitrary $\lambda > 0$, that
$$P\left(T_1(\delta) > x\right) = P\left(X_x \leq \delta\right) \leq e^{\lambda\delta}\,\mathbb{E}\left[e^{-\lambda X_x}\right] = e^{\lambda\delta - x\Phi(\lambda)}.$$
Using the estimates of [LS] for $\lambda = 2\Phi(1/\delta)$ we obtain the desired bound. It remains to observe that, according to [B96, Chap. 3, Prop. 1], there is an absolute constant $C_a$, not depending on the subordinator, such that $\Phi(1/\delta) \leq \frac{C_a}{U(\delta)}$.
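The Markov-inequality step can be illustrated numerically. The gamma subordinator below, for which $\Phi(\lambda) = \ln(1+\lambda)$ and $X_t \sim \mathrm{Gamma}(t, 1)$, and all parameter choices are assumptions for this sketch; the bound itself is the exponential Markov bound $P(X_t \leq \delta) \leq e^{\lambda\delta - t\Phi(\lambda)}$.

```python
import numpy as np

# Gamma subordinator: E[exp(-lam X_t)] = (1 + lam)^(-t), so
# Phi(lam) = log(1 + lam), and X_t ~ Gamma(t, 1).
t, delta, lam = 1.0, 0.1, 9.0
bound = np.exp(lam * delta - t * np.log1p(lam))   # Markov bound on P(X_t <= delta)

rng = np.random.default_rng(3)
samples = rng.gamma(shape=t, scale=1.0, size=200_000)
empirical = np.mean(samples <= delta)             # Monte Carlo estimate of P(X_t <= delta)
print(empirical, bound)
```

Here the exact probability is $1 - e^{-0.1} \approx 0.095$, comfortably below the bound $e^{0.9}/10 \approx 0.246$; optimising over $\lambda$ tightens the exponent, which is what the specific choice of $\lambda$ in the proof achieves.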
Our next aim is to estimate the variance of N (t, δ) via its second moment.
Lemma 4. For any subordinator X we have that where M and K are absolute constants not depending on X.
As the left-hand side converges to $r$ and the right-hand side converges to $1/r$ as $\delta$ tends to zero, the proof is finished, since $r$ is arbitrarily close to $1$.

Proofs for Applications
We first discuss Corollary 1.
Proof of Corollary 1. The proof follows from Theorem 1 and [DS11, Prop. 1] by simple integration and putting $q = 0$, since that result discusses the potential density $u(x) = \frac{dU(x)}{dx}$. Next we consider Corollary 2.
Proof of Corollary 2. The result is immediate from Theorem 1 and the fact that $U(x) \sim \frac{x^{\alpha}}{\Gamma(1+\alpha)L(1/x)}$, as $x \to 0$; see [B96, Ch. III, p. 75].
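The chain of estimates behind the proof of Corollary 2 can be spelled out in one line (a reconstruction under the assumptions of Corollary 2, using Theorem 1 in the last step):

```latex
\Phi(\lambda) = \lambda^{\alpha} L(\lambda), \ \lambda \to \infty
\;\Longrightarrow\;
U(x) \sim \frac{x^{\alpha}}{\Gamma(1+\alpha)\,L(1/x)}, \ x \to 0
\;\Longrightarrow\;
N(t,\delta) \sim \frac{t}{U(\delta)}
= t\,\Gamma(1+\alpha)\,\delta^{-\alpha} L(1/\delta), \quad \delta \to 0 .
```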