L¹-NORM OF INFINITELY DIVISIBLE RANDOM VECTORS AND CERTAIN STOCHASTIC INTEGRALS

Equivalent upper and lower bounds for the L¹ norm of Hilbert space valued infinitely divisible random variables are obtained and used to find bounds for different types of stochastic integrals.


L¹-norm of infinitely divisible random vectors
Let X be an infinitely divisible random vector in a separable Hilbert space H. (See e.g. [3], [7].) Assume that E‖X‖ < ∞, EX = 0 and that X does not have a Gaussian component. The characteristic function of X can be written in the form

    E exp(i⟨y, X⟩) = exp( ∫_H ( e^{i⟨y,x⟩} − 1 − i⟨y,x⟩ ) Q(dx) ),   y ∈ H,                (1)

where Q is a unique σ-finite Borel measure on H with Q({0}) = 0 and

    ∫_H ( ‖x‖² ∧ ‖x‖ ) Q(dx) < ∞.                (2)

Q is called the Lévy measure of X; it is one of the principal entities used to describe the distribution of X. We obtain bounds for E‖X‖ in terms of a functional of Q. Assume that Q(H) > 0 and let l = l(Q) > 0 be a solution of the equation

    ξ(l) := l^{−2} ∫_{‖x‖<l} ‖x‖² Q(dx) + l^{−1} ∫_{‖x‖≥l} ‖x‖ Q(dx) = 1.                (3)

It follows from (2) that l(Q) is uniquely defined. Note that (3) only depends on the projection of Q on R₊, which we denote by Q_r, i.e.

    Q_r(A) = Q( { x ∈ H : ‖x‖ ∈ A } ),   A ∈ B(R₊).                (4)

Thus we can also write (3) in the form

    ξ(l) = l^{−2} ∫_{(0,l)} s² Q_r(ds) + l^{−1} ∫_{[l,∞)} s Q_r(ds) = 1                (5)

and define l(Q_r) to be the value of l for which the integral in (5) is equal to one. Clearly l(Q) = l(Q_r).
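Since l(Q) is defined only implicitly by (3), it may help to see how it can be computed in a simple case. The following minimal numerical sketch (not from the paper) solves ξ(l) = 1 by bisection for a finitely supported Lévy measure, using the truncated form of (3) reconstructed above; the function names and the example measure are illustrative only.

```python
# Minimal sketch: compute l(Q) from equation (3) for a finitely supported Levy measure,
# represented as a list of atoms (norm, mass).  Assumes the truncated form of (3):
#   xi(l) = (1/l^2) * sum_{||x|| < l} ||x||^2 Q({x}) + (1/l) * sum_{||x|| >= l} ||x|| Q({x}).

def xi(l, atoms):
    """Left-hand side of (3) as a function of l > 0."""
    return sum(m * (r / l) ** 2 if r < l else m * r / l for r, m in atoms)

def l_of_Q(atoms, tol=1e-12):
    """Unique l > 0 with xi(l) = 1; xi is continuous and strictly decreasing in l."""
    lo, hi = 1e-12, 1.0
    while xi(hi, atoms) > 1.0:     # grow the bracket until xi(hi) <= 1
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if xi(mid, atoms) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Example: Q = lam * delta_1, the Levy measure of a centered Poisson variable.
    for lam in (0.1, 1.0, 9.0):
        print(lam, l_of_Q([(1.0, lam)]))
```

For Q = λδ₁ this gives l(Q) = λ when λ ≤ 1 and l(Q) = λ^{1/2} when λ ≥ 1, which is consistent with the Poisson example discussed in Remark 1.2 below.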
The following theorem, which bounds E‖X‖ above and below by constant multiples of l(Q) in (6), is the main result of this paper.
If Q is symmetric, the constant in the upper bound of (6) can be decreased to 1.25.
In preparation for the proof of this theorem we consider a decomposition of X. We write X = Y + Z, where Y and Z are independent mean zero random vectors with characteristic functions given by

    E exp(i⟨y, Y⟩) = exp( ∫_{‖x‖<l} ( e^{i⟨y,x⟩} − 1 − i⟨y,x⟩ ) Q(dx) ),
    E exp(i⟨y, Z⟩) = exp( ∫_{‖x‖≥l} ( e^{i⟨y,x⟩} − 1 − i⟨y,x⟩ ) Q(dx) ),

where l = l(Q). The next lemma considers some moments of Y and Z. It is well known when H = R.
When Q is symmetric, each number 2 in (10) can be replaced by 1.

Proof
Let {e_j} be an orthonormal basis in H. Let Q_l denote Q restricted to {‖x‖ < l}, the Lévy measure of Y. Since ⟨e_j, Y⟩ is a mean-zero real infinitely divisible random variable whose Lévy measure is the image of Q_l under x → ⟨e_j, x⟩, we have E⟨e_j, Y⟩² = ∫_H ⟨e_j, x⟩² Q_l(dx). Summing over j gives

    E‖Y‖² = Σ_j E⟨e_j, Y⟩² = ∫_{‖x‖<l} ‖x‖² Q(dx),

which proves (8).
To prove (9) we consider the infinitely divisible random vector in R² given by (⟨e_j, Y⟩, ⟨e_k, Y⟩). This random vector has characteristic function

    E exp( i( s⟨e_j, Y⟩ + t⟨e_k, Y⟩ ) ) = exp( ∫_H ( e^{i(s⟨e_j,x⟩ + t⟨e_k,x⟩)} − 1 − i( s⟨e_j,x⟩ + t⟨e_k,x⟩ ) ) Q_l(dx) ).

Differentiating it twice with respect to s and twice with respect to t and then setting s and t equal to zero gives

    E ⟨e_j, Y⟩² ⟨e_k, Y⟩² = ∫_H ⟨e_j, x⟩² ⟨e_k, x⟩² Q_l(dx) + ∫_H ⟨e_j, x⟩² Q_l(dx) ∫_H ⟨e_k, x⟩² Q_l(dx) + 2 ( ∫_H ⟨e_j, x⟩ ⟨e_k, x⟩ Q_l(dx) )².

Therefore

    E‖Y‖⁴ = Σ_{j,k} E ⟨e_j, Y⟩² ⟨e_k, Y⟩² = ∫_{‖x‖<l} ‖x‖⁴ Q(dx) + ( ∫_{‖x‖<l} ‖x‖² Q(dx) )² + 2 Σ_{j,k} ( ∫_H ⟨e_j, x⟩ ⟨e_k, x⟩ Q_l(dx) )².

Since the last term of this equation can be written as twice the squared Hilbert–Schmidt norm of the nonnegative operator ∫ x ⊗ x Q_l(dx), and is therefore at most 2( ∫_{‖x‖<l} ‖x‖² Q(dx) )², the proof of (9) is complete.
Let N be a Poisson random variable with mean λ = Q({‖x‖ ≥ l}), independent of {W_i}, where the W_i's are i.i.d. random vectors with common distribution λ^{−1} Q restricted to {‖x‖ ≥ l}, so that Z has the same distribution as Σ_{i=1}^N W_i − λ E W_1. Then

    E‖Z‖ ≤ E Σ_{i=1}^N ‖W_i‖ + λ ‖E W_1‖ ≤ 2 λ E‖W_1‖ = 2 ∫_{‖x‖≥l} ‖x‖ Q(dx),

which proves the upper bound in (10). (When Q is symmetric, E W_1 = 0, which gives the factor 1 instead of 2 in (10).) To obtain the lower bound in (10) we first assume that Q is symmetric. Consequently the W_i's are symmetric and we get (12). If Q is not symmetric, then we consider Z̃ = Z − Z′, where Z′ is an independent copy of Z. Z̃ has symmetric Lévy measure Q̃(A) = Q(A) + Q(−A) for A ∈ B(H), so that Q̃({‖x‖ ≥ l}) = 2λ and ∫_{‖x‖≥l} ‖x‖ Q̃(dx) = 2 ∫_{‖x‖≥l} ‖x‖ Q(dx). Applying (12) to Z̃, together with E‖Z̃‖ ≤ 2 E‖Z‖, gives the lower bound in (10) in general. This completes the proof of Lemma 1.1.
Proof of Theorem 1.1 We first obtain the upper bound in (6). Using (8) and (10) we see that

    E‖X‖ ≤ E‖Y‖ + E‖Z‖ ≤ ( ∫_{‖x‖<l} ‖x‖² Q(dx) )^{1/2} + 2 ∫_{‖x‖≥l} ‖x‖ Q(dx) ≤ (2.125) l(Q).                (13)

The last bound follows from the definition of l(Q) by elementary calculus. Since the factor 2 can be dropped in (10) when Q is symmetric, the number 2.125 in (13) can be replaced by 1.25 in this case.
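The "elementary calculus" step can be made explicit. The following worked computation (added here as a sketch; it assumes (13) in the form reconstructed above) recovers the constants 2.125 and 1.25.

```latex
% Write a = l^{-2}\int_{\|x\|<l}\|x\|^2\,Q(dx) and b = l^{-1}\int_{\|x\|\ge l}\|x\|\,Q(dx),
% so that (3) reads a + b = 1 with a, b \ge 0.  Then (13) gives
\[
  E\|X\| \;\le\; l\bigl(\sqrt{a} + 2b\bigr) \;=\; l\bigl(\sqrt{a} + 2(1-a)\bigr),
  \qquad 0 \le a \le 1 .
\]
% The function g(a) = \sqrt{a} + 2(1-a) satisfies g'(a) = \tfrac{1}{2\sqrt{a}} - 2 = 0 at a = 1/16,
% where g(1/16) = 1/4 + 15/8 = 17/8 = 2.125.  When the factor 2 is replaced by 1 (Q symmetric),
% g(a) = \sqrt{a} + (1-a) is maximized at a = 1/4, where g(1/4) = 1/2 + 3/4 = 5/4 = 1.25.
```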
We now obtain the lower bound. It follows from (3) that either
(a) ∫_{‖x‖<l} ‖x‖² Q(dx) ≥ (0.5) l², or
(b) ∫_{‖x‖≥l} ‖x‖ Q(dx) ≥ (0.5) l.
Suppose that (a) holds. By Hölder's inequality we have

    E‖Y‖² ≤ ( E‖Y‖ )^{2/3} ( E‖Y‖⁴ )^{1/3}                (14)

and by (8) and (9), since ∫_{‖x‖<l} ‖x‖⁴ Q(dx) ≤ l² ∫_{‖x‖<l} ‖x‖² Q(dx),

    E‖Y‖⁴ ≤ l² E‖Y‖² + 3 ( E‖Y‖² )².                (15)

Combining (14) and (15) we get

    E‖Y‖ ≥ ( E‖Y‖² )^{3/2} / ( E‖Y‖⁴ )^{1/2} ≥ E‖Y‖² / ( l² + 3 E‖Y‖² )^{1/2}.

Using assumption (a) and (8) we see that

    E‖Y‖ ≥ (0.5) l² / ( l² + 3 (0.5) l² )^{1/2},

because the function t / ( l² + 3t )^{1/2} is increasing for t > 0. Therefore, since Z is independent of Y and has mean zero,

    E‖X‖ ≥ E‖Y‖ ≥ (0.5) l / (2.5)^{1/2}.

This gives the lower bound in (6) when (a) holds. Suppose now that (b) holds. Then, since Y is independent of Z and has mean zero, E‖X‖ ≥ E‖Z‖. Using this in (10), together with (b), we get the lower bound in (6) in this case as well. This completes the proof of Theorem 1.1.

Since ξ is decreasing and ξ(l(Q)) = 1, one checks easily that min{ ξ(1), ξ^{1/2}(1) } ≤ l(Q) ≤ max{ ξ(1), ξ^{1/2}(1) }. Combining this observation with Theorem 1.1 gives (16). Note that (6) is homogeneous in X, i.e. if X is replaced by cX, for some constant c > 0, then the bounds also change by a factor of c. Corollary 1.1 is often useful but (16) is not homogeneous in X.

Remark 1.2
There is no reason to believe that the constants in (6) of Theorem 1.1 are best possible. However, they provide good estimates for the absolute moment of an infinitely divisible random variable. To illustrate this point, consider X = N − λ, where N is a Poisson random variable with mean λ, so that H = R and Q = λδ₁. For λ ≤ 1 we have l(Q) = λ, while E|X| = 2λe^{−λ} ∼ 2 l(Q) as λ → 0. Hence the smallest possible constant on the right hand side of (6) must be at least 2. (In Theorem 1.1 we get 2.125.) If λ is a positive integer, then l(Q) = λ^{1/2} and E|X| = 2λ^{λ+1} e^{−λ}/λ! ∼ (2/π)^{1/2} l(Q) as λ → ∞. Hence the largest possible constant on the left hand side of (6) must be smaller than 0.8. One can lower this bound further by considering the first moment of the symmetrization of X.
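These claims are easy to check numerically. The sketch below (not from the paper) uses the closed form E|N − λ| = 2 λ^{⌊λ⌋+1} e^{−λ} / ⌊λ⌋! together with the values of l(Q) obtained from the truncated form of (3) assumed above.

```python
# Numerical check of Remark 1.2 for X = N - lam, N Poisson with mean lam, Q = lam * delta_1.
import math

def mean_abs_centered_poisson(lam):
    """E|N - lam| = 2 * lam**(floor(lam)+1) * exp(-lam) / floor(lam)!  (mean absolute deviation)."""
    k = math.floor(lam)
    return 2.0 * lam ** (k + 1) * math.exp(-lam) / math.factorial(k)

def l_of_Q(lam):
    """l(Q) for Q = lam * delta_1 under the truncated form of (3)."""
    return lam if lam <= 1.0 else math.sqrt(lam)

for lam in (0.01, 0.1, 1.0, 4.0, 100.0):
    ratio = mean_abs_centered_poisson(lam) / l_of_Q(lam)
    print(f"lam = {lam:7.2f}   E|X| / l(Q) = {ratio:.4f}")

# As lam -> 0 the ratio approaches 2, so the constant on the right of (6) is at least 2;
# for large integer lam it approaches sqrt(2/pi) ~ 0.7979, so the constant on the left is below 0.8.
```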
Suppose now that X is a compensated compound Poisson random vector, X = Σ_{i=1}^N Y_i − λ E Y_1, where N is a Poisson random variable with mean λ independent of the i.i.d. sequence {Y_i}. In this case Q = λ L(Y_1) and l = l(Q) is the unique solution of the equation

    λ E( ‖Y_1‖² l^{−2} 1_{{‖Y_1‖<l}} + ‖Y_1‖ l^{−1} 1_{{‖Y_1‖≥l}} ) = 1.

One sees from this that l(Q) = K_{Y_1}(λ), where K_{Y_1} is the K-function of M. Klass, [4]. The proof of Theorem 1.1 parallels, to a certain extent, the estimates given in Section 1.4.3, [1] for the absolute moment of sums of independent identically distributed random variables in terms of the K-function. However our results seem to be easier to use when dealing with infinitely divisible random vectors. For example, let E Y_1 = 0 and H = R. A straightforward application of Proposition 1.4.10, [1] gives bounds for E|X| in terms of E K_{Y_1}(N), but l(Q) = K_{Y_1}(EN). Since K_{Y_1} is nonlinear, the passage from E K_{Y_1}(N) to K_{Y_1}(EN) is far from obvious. By working with infinitely divisible random vectors directly we avoid this difficulty.
We give another set of bounds for E‖X‖ which, in certain circumstances, may be easier to compute. Let

    ψ_r(u) = ∫_0^∞ ( 1 − cos(us) ) Q_r(ds).                (18)

(ψ_r is the Lévy exponent of a real valued symmetric infinitely divisible random variable. Indeed, when X itself is real valued and symmetric, ψ_r is the Lévy exponent of X.) Let ζ be as defined in (19).

Theorem 1.2 Let X, Q and Q_r be as in Theorem 1.1 and let ψ_r be as given in (18).
If Q is symmetric, the constant in the upper bound in (20) can be replaced by 1.25.
The proof of this theorem follows easily from the following lemma.
Proof of Theorem 1.2 It follows from (21) that if ζ(t*) ≥ 5/2 then ξ(t*) ≥ 1, and consequently t* ≤ l(Q_r). Thus the lower bound in (20) follows from the lower bound in (6). The proof of the upper bound in (20) follows similarly.
It is particularly useful to know that the first moment of an infinitely divisible random variable is finite. Then we can obtain bounds for random variables with mean zero by first obtaining bounds for their symmetrized version. We use this technique in [6] to study the continuity of a wide class of infinitely divisible processes. Of course it would also be interesting to obtain bounds for other moments. The proof we give here does not readily extend to higher moments.
In the next section we consider stochastic integrals. In Section 3 we give some simple examples of how to use Theorems 1.1 and 1.2 to estimate the expected value of the norm of Hilbert space valued infinitely divisible random variables and certain stochastic integrals.

Stochastic integrals with respect to infinitely divisible random measures
Consider the infinitely divisible H-valued random variable X defined in (1). Given X there are two natural objects to consider, ⟨f, X⟩ where f ∈ H, and FX where F is an operator from H to another Hilbert space K. In Theorem 1.1 we obtain bounds for E‖X‖. As an extension of this result and as a corollary of this theorem we can obtain bounds for E|⟨f, X⟩| and E‖FX‖. However, it is natural to consider this extension in greater generality. The "products" ⟨f, X⟩ and FX constitute natural bilinear forms taking values in L¹(Ω; R) and L¹(Ω; K), respectively. Therefore, for any measure M, defined on a measurable space S and taking values in L¹(Ω; H), integrals of the form ∫_S ⟨f(s), M(ds)⟩ and ∫_S F(s) M(ds) are well defined as Bartle type integrals. It is known that bounded deterministic functions are integrable in this setting. (See e.g. [2].) Suppose that M, in addition, is an independently scattered infinitely divisible random measure. In this section we use Theorem 1.1 to obtain nice two-sided estimates for E|∫_S ⟨f(s), M(ds)⟩| and E‖∫_S F(s) M(ds)‖ in terms of an Orlicz norm of f and F. Such bounds are relevant to our previous work [6] and can also be useful in modeling based on time-space infinitely divisible random noise. Furthermore, Theorem 1.1 gives a complete characterization of the class of L¹-integrable deterministic functions with respect to independently scattered infinitely divisible random measures. The L¹ bounds obtained in Theorem 1.1 can be generalized further to the case of stochastic integrals of random predictable integrands. This step is possible by applying decoupling inequalities along the lines of [5]. Such a generalization is not immediate, especially if one requires specific constants in the bounds. We will not consider it here. However, the L¹ bounds obtained in this section constitute a base for such a generalization.
Let g : S → H be a measurable function. We define an Orlicz space pseudo-norm ‖g‖_φ of g in (32).

Theorem 2.1 Let f : S → H be a measurable function satisfying (33).
The definition of the stochastic integral in (29) can then be extended to the integral ∫_S ⟨f, dM⟩ in (34), where E ∫_S ⟨f, dM⟩ = 0, and two-sided bounds for E|∫_S ⟨f, dM⟩| hold in terms of ‖f‖_φ.

Proof Consider (29) and note that X = ∫_S ⟨f, dM⟩ has the same form as (1), where Q is the image of the measure θ(dx, s) m(ds) under the map (s, x) → ⟨f(s), x⟩. ξ(l) computed for this Q (or Q_r) in (3) can also be written as in (36). Comparing (36) and (30) we see that (37) holds. This bound combined with Theorem 1.1 yields the asserted bounds. Since φ satisfies the ∆₂-condition, the results obtained for the normalized integrand can be extended to hold for f.
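To make the push-forward step in this proof concrete, here is a toy numerical sketch (not from the paper). It assumes a finite set S with counting measure m, finitely supported Lévy measures θ(·, s) on H = R^d, and the truncated form of (3) used in the earlier sketch; it computes l(Q) for Q the image of θ(dx, s) m(ds) under (s, x) → ⟨f(s), x⟩, after which Theorem 1.1 gives E|∫_S ⟨f(s), M(ds)⟩| ≤ (2.125) l(Q).

```python
# Toy sketch: l(Q) for the real-valued integral <f, dM> when S is finite, m is counting
# measure, and each theta(., s) is a finitely supported measure on R^d (all assumptions
# made for illustration only).

def l_of_pushforward(f, theta, tol=1e-12):
    """
    f     : dict mapping s -> vector (list of floats), the integrand f(s)
    theta : dict mapping s -> list of (atom vector, mass) pairs, the Levy measure theta(., s)
    Returns l(Q), where Q is the image of theta(dx, s) m(ds) under (s, x) -> <f(s), x>.
    """
    atoms = []  # atoms (|<f(s), x>|, mass) of the projected Levy measure Q_r
    for s, pairs in theta.items():
        for x, mass in pairs:
            r = abs(sum(fv * xv for fv, xv in zip(f[s], x)))
            if r > 0:
                atoms.append((r, mass))

    def xi(l):  # left-hand side of (3) for the projected measure
        return sum(m * (r / l) ** 2 if r < l else m * r / l for r, m in atoms)

    lo, hi = 1e-12, 1.0
    while xi(hi) > 1.0:
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if xi(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Example with S = {1, 2} and H = R^2.
f = {1: [1.0, 0.0], 2: [0.5, 0.5]}
theta = {1: [([1.0, 0.0], 0.3), ([0.0, 2.0], 0.7)], 2: [([2.0, 2.0], 0.4)]}
l = l_of_pushforward(f, theta)
print("l(Q) =", l, "so E| int <f, dM> | <= 2.125 *", l)
```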
If H = R the integral (34) is the usual stochastic integral and ‖·‖_φ is a norm.
The integral in (34) is a real valued random variable. We now consider a Hilbert space valued stochastic integral. Let F : S → L(H, K), where L(H, K) is the space of bounded linear operators from H into another separable Hilbert space K. We assume that for each x ∈ H and y ∈ K the function s → ⟨y, F(s)x⟩ is measurable, and that (38) holds. We consider the stochastic integral

    ∫_S F dM = Σ_j ( ∫_S ⟨F*(s) e_j, M(ds)⟩ ) e_j,                (39)

where F* : S → L(K, H), F*(s) being the transpose of the operator F(s), and {e_j} is an orthonormal basis for K. (This integral could be defined via simple functions as outlined in the introduction to this section; however, since we have already established the existence of the inner product integrals, it is easier to use (39).) The integral in (39) is well defined. To see this we show that the series in (39) converges and the limit does not depend on the choice of {e_j}. To begin note that |⟨F*(s)e_j, x⟩| = |⟨e_j, F(s)x⟩| ≤ ‖F(s)x‖, implying that (33) holds for f(s) = F*(s)e_j and ∫_S ⟨F*e_j, dM⟩ is well-defined. Next consider the random vector S_{k,n} = Σ_{j=k}^n ( ∫_S ⟨F*(s)e_j, M(ds)⟩ ) e_j. Let P_{k,n} y = Σ_{j=k}^n ⟨y, e_j⟩ e_j and let ξ_{k,n}(1) denote ξ(1) computed for the Lévy measure of S_{k,n}. It follows from Corollary 1.1 that E‖S_{k,n}‖ ≤ (2.125) max{ ξ_{k,n}(1), ξ_{k,n}^{1/2}(1) }.
By (38) and the Dominated Convergence Theorem we see that the series in (39) converges in L¹_K. To show that the right hand side of (39) does not depend on the choice of basis we perform the following formal computation, which can be justified in the same way as the above proof of convergence. Let {g_j} be another orthonormal basis in K; expanding each e_j in terms of {g_j} and interchanging the order of summation shows that the same sum is obtained with {g_j} in place of {e_j}. We proceed to develop bounds for the L¹ norm of the stochastic integral. With φ₀ given by (31), we define in (41) and (42) a pseudo-norm of F. It is easy to see that X = ∫_S F dM can be described by (1), in which Q is the image of the measure θ(dx, s) m(ds) under the map (s, x) → F(s)x. By the same argument used in the proof of (37) we get (43) for every F satisfying (38).

Remark 2.1
Stochastic integrals in which the integrand is real valued and the random measure is Hilbert space valued and vice versa are special cases of (39), so that (43) holds for them also.

Examples
Let X, Q and Q_r be as given in Section 1, with Q_r(dx) given by (44); in particular let p, q ∈ (1, 2). It is elementary to verify that the value of l for which (3) and (5) are satisfied is given by the solution of one of the equations (45) and (46). (The value of l is unique. If it turns out that l > 1, then this value satisfies (45) and (46) has no solution. If l < 1 the opposite occurs. When l = 1, both equations are the same.) In particular, if p = q and X is a Hilbert space valued infinitely divisible random variable with Q_r(dx) given by (44), then Theorem 1.1 yields (47). If X is symmetric, the constant 2.125 in the upper bound of (47) can be decreased to 1.25.
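As an illustration of how such equations for l arise, here is a worked special case (added here; it is not the measure in (44) and it uses the truncated form of (3) assumed earlier): take Q_r(dx) = c x^{−p−1} dx on (0, ∞) with p ∈ (1, 2) and c > 0.

```latex
% Worked special case: Q_r(dx) = c\,x^{-p-1}\,dx on (0,\infty), 1 < p < 2, c > 0,
% with (5) taken in the truncated form used earlier.  Then
\[
  \frac{c}{l^{2}}\int_0^{l} x^{1-p}\,dx \;+\; \frac{c}{l}\int_{l}^{\infty} x^{-p}\,dx
  \;=\; \frac{c\,l^{-p}}{2-p} \;+\; \frac{c\,l^{-p}}{p-1}
  \;=\; c\,l^{-p}\Bigl(\frac{1}{2-p}+\frac{1}{p-1}\Bigr) \;=\; 1 ,
\]
% so that
\[
  l(Q_r) \;=\; \Bigl( c\Bigl(\tfrac{1}{2-p}+\tfrac{1}{p-1}\Bigr) \Bigr)^{1/p},
\]
% and Theorem 1.1 bounds E\|X\| between absolute constant multiples of this quantity.
```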
There is a well known canonical way to define a mean zero, strictly p-stable, H-valued random vector X so that it has a nice characteristic function. For 1 < p < 2 we set this characteristic function as in (48), where σ is a finite measure on the unit sphere ∂U of H, called the spectral measure of X. This equation comes from (1) with the Lévy measure Q of X written in spherical coordinates as in (49). Using (1) and (49) one can compute Q_r and hence l(Q) explicitly.
It then follows from (47) that E‖X‖ admits two-sided bounds involving a constant k_p that depends only on p. Note that k_p is a continuous function of p ∈ (1, 2); k_p ∼ (2/π)(p − 1)^{−1} as p decreases to 1 and k_p → √2 as p increases to 2.
We now use Theorem 1.2 to obtain a lower bound for E‖X‖ which holds under fairly general assumptions.
Lemma 3.1 Let X and ψ_r be associated as in (18). Assume that ψ_r(u)/u² is decreasing as u increases and that ψ_r(u) is increasing as u increases. Then E‖X‖ is bounded below by a constant multiple of t*, where t* is determined by ψ_r(1/t*) = 5/2.

Proof Under the hypotheses on ψ_r we see that ζ(t) ≥ ψ_r(1/t). Therefore, if ψ_r(1/t*) = 5/2, the result follows from Theorem 1.2.