SPDEs with $\alpha$-stable L\'evy noise: a random field approach

This article is dedicated to the study of an SPDE of the form $$Lu(t,x)=\sigma(u(t,x))\dot{Z}(t,x), \quad t>0, \ x \in \cO,$$ with zero initial conditions and Dirichlet boundary conditions, where $\sigma$ is a Lipschitz function, $L$ is a second-order pseudo-differential operator, $\cO$ is a bounded domain in $\bR^d$, and $\dot{Z}$ is an $\alpha$-stable L\'evy noise with $\alpha \in (0,2)$, $\alpha\not=1$ and possibly non-symmetric tails. To give a meaning to the concept of solution, we develop a theory of stochastic integration with respect to $Z$, by generalizing the method of Gin\'e and Marcus (1983) to higher dimensions and non-symmetric tails. The idea is to first solve the equation with "truncated" noise $\dot{Z}_{K}$ (obtained by removing from $Z$ the jumps which exceed a fixed value $K$), yielding a solution $u_{K}$, and then show that the solutions $u_L$, $L>K$ coincide on the event $\{t \leq \tau_{K}\}$, for some stopping times $\tau_K \uparrow \infty$ a.s. A similar idea was used by Peszat and Zabczyk (2007) in the setting of Hilbert-space-valued processes. A major step is to show that the stochastic integral with respect to $Z_{K}$ satisfies a $p$-th moment inequality, for $p \in (\alpha,1)$ if $\alpha<1$, and $p \in (\alpha,2)$ if $\alpha>1$. This inequality plays the same role as the Burkholder-Davis-Gundy inequality in the theory of integration with respect to continuous martingales.


Introduction
Modeling phenomena which evolve in time or space-time and are subject to random perturbations is a fundamental problem in stochastic analysis. When these perturbations are known to exhibit an extreme behavior, as seen frequently in finance or environmental studies, a model relying on the Gaussian distribution is not appropriate. A suitable alternative could be a model based on a heavy-tailed distribution, like the stable distribution. In such a model, these perturbations are allowed to have extreme values with a probability which is significantly higher than in a Gaussian-based model.
In the present article, we introduce precisely such a model, given rigorously by a stochastic partial differential equation (SPDE) driven by a noise term which has a stable distribution over any space-time region, and has independent values over disjoint space-time regions (i.e. it is a Lévy noise). More precisely, we consider the SPDE
$$Lu(t,x) = \sigma(u(t,x))\,\dot{Z}(t,x), \quad t>0, \ x \in O, \tag{1}$$
with zero initial conditions and Dirichlet boundary conditions, where σ is a Lipschitz function, L is a second-order pseudo-differential operator on a bounded domain O ⊂ R^d, and $\dot{Z}(t,x) = \frac{\partial^{d+1} Z}{\partial t\, \partial x_1 \ldots \partial x_d}$ is the formal derivative of an α-stable Lévy noise with α ∈ (0, 2), α ≠ 1. The goal is to find sufficient conditions on the fundamental solution G(t, x, y) of the equation Lu = 0, which will ensure the existence of a mild solution of equation (1). We say that a predictable process u = {u(t, x); t ≥ 0, x ∈ O} is a mild solution of (1) if for any t > 0, x ∈ O,
$$u(t,x) = \int_0^t \int_O G(t-s,x,y)\,\sigma(u(s,y))\,Z(ds,dy) \quad \text{a.s.} \tag{2}$$
We assume that G(t, x, y) is a function in t (rather than a distribution), which excludes from our analysis the case of the wave equation with d ≥ 3.
As the term on the right-hand side of (2) is a stochastic integral with respect to Z, such an integral should be constructed first. Due to the poor integrability properties of the measure ν_α, this cannot be done directly from (4), using integration with respect to N and Ñ. Our construction of the integral is an extension to random fields of the construction provided by Giné and Marcus in [11] in the case of an α-stable Lévy process {Z(t)}_{t∈[0,1]}. Unlike these authors, we do not assume that the measure ν_α is symmetric.
Since any Lévy noise is related to a PRM, in a broad sense, one could say that this problem originates in Itô's papers [12] and [13] regarding the stochastic integral with respect to a Poisson noise. SPDEs driven by a compensated PRM were considered for the first time in [14], using the approach based on Hilbert-space-valued solutions. This study was motivated by an application to neurophysiology leading to the cable equation. In the case of the heat equation, a similar problem was considered in [1], [26] and [3] using the approach based on random-field solutions. One of the results of [26] shows that the heat equation
$$\frac{\partial u}{\partial t}(t,x) - \Delta u(t,x) = \int_U f(t, x, u(t, x); z)\, \tilde{N}(dt, dx, dz) + g(t, x, u(t, x))$$
has a unique solution in the space of predictable processes u satisfying sup_{(t,x)∈[0,T]×R^d} E|u(t, x)|^p < ∞, for any p ∈ (1 + 2/d, 2]. In this equation, Ñ is the compensated measure corresponding to a PRM N on R_+ × R^d × U of intensity dt dx ν(dz), for an arbitrary σ-finite measure space (U, B(U), ν) with measure ν satisfying ∫_U |z|^p ν(dz) < ∞. Because of this latter condition, this result cannot be used in our case with U = R\{0} and ν = ν_α. For similar reasons, the results of [3] also do not cover the case of an α-stable noise. However, in the case α > 1, we will be able to exploit successfully some ideas of [26] for treating the equation with "truncated" noise Z_K, obtained by removing from Z the jumps exceeding a value K (see Section 5.2 below).
The heat equation with the same type of noise as in the present article was examined in [16] and [18] in the cases α < 1, respectively α > 1, assuming that the noise has only positive jumps (i.e. q = 0). The methods used by these authors are different from those presented here, since they investigate the more difficult case of a non-Lipschitz function σ(u) = u^δ with δ > 0. In [16], Mueller removes the atoms of Z of mass smaller than 2^{−n} and solves the equation driven by the noise obtained in this way; here we remove the atoms of Z of mass larger than K and solve the resulting equation. In [18], Mytnik uses a martingale problem approach and gives the existence of a pair (u, Z) which satisfies the equation (the so-called "weak solution"), whereas in the present article we obtain the existence of a solution u for a given noise Z (the so-called "strong solution"). In particular, when α > 1 and δ = 1/α, the existence of a "weak solution" of the heat equation with α-stable Lévy noise is obtained in [18] under the condition which we encounter here as well. It is interesting to note that (5) is the necessary and sufficient condition for the existence of the density of the super-Brownian motion with "α − 1"-stable branching (see [7]). Reference [17] examines the heat equation with multiplicative noise (i.e. σ(u) = u), driven by an α-stable Lévy noise Z which does not depend on time.
To conclude the literature review, we should point out that there are many references related to stochastic differential equations with α-stable Lévy noise, using the approach based on Hilbert-space valued solutions. We refer the reader to Section 12.5 of the monograph [22], and to [21], [2], [15], [23] for a sample of relevant references. See also the survey article [20] for an approach based on the white noise theory for Lévy processes.
This article is organized as follows.
• In Section 2, we review the construction of the α-stable Lévy noise Z, and we show that this can be viewed as an independently scattered random measure with jointly α-stable distributions.
• In Section 3, we consider the linear equation (1) (with σ(u) = 1) and we identify the necessary and sufficient condition for the existence of the solution. This condition is verified in the case of some examples.
• Section 4 contains the construction of the stochastic integral with respect to the α-stable noise Z, for α ∈ (0, 2). The main effort is dedicated to proving a maximal inequality for the tail of the integral process, when the integrand is a simple process. This extends the construction of [11] to the case of random fields and of a non-symmetric measure ν_α.
• In Section 5, we introduce the process Z K obtained by removing from Z the jumps exceeding a fixed value K, and we develop a theory of integration with respect to this process. For this, we need to treat separately the cases α < 1 and α > 1. In both cases, we obtain a p-th moment inequality for the integral process for p ∈ (α, 1) if α < 1, and p ∈ (α, 2) if α > 1. This inequality plays the same role as the Burkholder-Davis-Gundy inequality in the theory of integration with respect to continuous martingales.
• In Section 6 we prove the main result about the existence of the mild solution of equation (1). For this, we first solve the equation with "truncated" noise Z_K using a Picard iteration scheme, yielding a solution u_K. We then introduce a sequence (τ_K)_{K≥1} of stopping times with τ_K ↑ ∞ a.s. and we show that the solutions u_L, L > K coincide on the event {t ≤ τ_K}. For the definition of the stopping times τ_K, we need again to consider separately the cases α < 1 and α > 1.
• Appendix A contains some results about the tail of a non-symmetric stable random variable, and the tail of an infinite sum of random variables. Appendix B gives an estimate for the Green function associated to the fractional power of the Laplacian. Appendix C gives a local property of the stochastic integral with respect to Z (or Z K ).

Definition of the noise
In this section we review the construction of the α-stable Lévy noise on R + × R d and investigate some of its properties.
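Before proceeding, we record the form of the jump intensity. The measure referred to as (3) is an α-stable Lévy measure; a standard form, consistent with the tail identity ν_α({z; |z| > K}) = K^{−α} used in Section 6 and assuming the normalization p + q = 1, is:

```latex
\nu_\alpha(dz) \;=\; \big( p\,\mathbf{1}_{(0,\infty)}(z) + q\,\mathbf{1}_{(-\infty,0)}(z) \big)\,
\alpha\, |z|^{-\alpha-1}\, dz,
\qquad p, q \ge 0, \quad p + q = 1.
```

Here q = 0 corresponds to a noise with only positive jumps, and p = q to the symmetric case assumed below when α = 1.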
Let N be a Poisson random measure on R_+ × R^d × (R\{0}), defined on a probability space (Ω, F, P), with intensity measure dt dx ν_α(dz), where ν_α is given by (3). Let (ε_j)_{j≥0} be a sequence of positive real numbers such that ε_j → 0 as j → ∞ and 1 = ε_0 > ε_1 > ε_2 > . . .. Remark 2.1 The variable L_0(B) is finite, since the sum above contains finitely many terms. To see this, we use the "one-point uncompactification" as explained in Section 6.1.3 of [25], i.e. we view N as a point process on the space R_+ × R^d × (R\{0}). Since the set Γ_0 is relatively compact in R\{0}, and the point process N has finitely many points in any relatively compact set, the sum defining L_0(B) is finite. For any j ≥ 0, the variable L_j(B) has a compound Poisson distribution with jump intensity measure |B| · ν_α|_{Γ_j}.
Recall that a random variable X has an α-stable distribution with parameters σ > 0 (scale), β ∈ [−1, 1] (skewness) and μ ∈ R (location) if its characteristic function has the form given in Definition 1.1.6 of [27]. We denote this distribution by S_α(σ, β, μ).
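For reference, in the case α ≠ 1 (the case relevant here), the characteristic function in Definition 1.1.6 of [27] reads:

```latex
E\big(e^{i\theta X}\big) \;=\;
\exp\Big\{ -\sigma^\alpha |\theta|^\alpha
\big( 1 - i\beta\,\mathrm{sgn}(\theta)\tan\tfrac{\pi\alpha}{2} \big)
+ i\mu\theta \Big\}, \qquad \theta \in \mathbb{R},
```

with scale σ > 0, skewness β ∈ [−1, 1] and location μ ∈ R.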
We first express the characteristic function (8) of Y(B) in Feller's canonical form (see Section XVII.2 of [9]). It follows that Z = {Z(B)} is an α-stable random measure, in the sense of Definition 3.3.1 of [27], with control measure m(B) = σ^α |B| and constant skewness intensity β. In particular, Z(B) has a S_α(σ|B|^{1/α}, β, 0) distribution. We say that Z is an α-stable Lévy noise. Coming back to the original construction (7) of Y(B), it follows that Z(B) can be represented as in (11). Here Ñ is the compensated Poisson measure associated to N, i.e. Ñ(A) = N(A) − E(N(A)) for any relatively compact set A in R_+ × R^d × (R\{0}). In the case α = 1, we will assume that p = q, so that ν_α is symmetric around 0, E(L_j(B)) = 0 for all j ≥ 1, and Z(B) admits the same representation as in the case α < 1.

The linear equation
As a preliminary investigation, we consider first equation (1) with σ = 1:
$$Lu(t,x) = \dot{Z}(t,x), \quad t>0, \ x \in O, \tag{13}$$
with zero initial conditions and Dirichlet boundary conditions. By definition, the process {u(t, x); t ≥ 0, x ∈ O} given by:
$$u(t,x) = \int_0^t \int_O G(t-s,x,y)\, Z(ds,dy) \tag{14}$$
is a mild solution of (13), provided that the stochastic integral on the right-hand side of (14) is well-defined. In this section, we identify the condition under which this is the case.
We define now the stochastic integral of a deterministic function ϕ: If ϕ ∈ L α (R + × R d ), this can be defined by approximation with simple functions, as explained in Section 3.4 of [27]. The process {Z(ϕ); ϕ ∈ L α (R + × R d )} has jointly α-stable finite dimensional distributions. In particular, each Z(ϕ) has a S α (σ ϕ , β, 0)-distribution with scale parameter: More generally, a measurable function ϕ : R + × R d → R is integrable with respect to Z if there exists a sequence (ϕ n ) n≥1 of simple functions such that ϕ n → ϕ a.e., and for any B ∈ B b (R + × R d ), the sequence {Z(ϕ n 1 B )} n converges in probability (see [24]).
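Consistently with the fact that Z(B) has a S_α(σ|B|^{1/α}, β, 0) distribution (take ϕ = 1_B below), the scale parameter of Z(ϕ) should be:

```latex
\sigma_\phi \;=\; \sigma \left( \int_0^\infty \int_{\mathbb{R}^d}
|\phi(t,x)|^\alpha \, dx\, dt \right)^{1/\alpha}.
```

In particular, σ_ϕ is finite exactly when ϕ ∈ L^α(R_+ × R^d), which is the integrability condition at stake in (15).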
The next result shows that the condition ϕ ∈ L^α(R_+ × R^d) is also necessary for the integrability of ϕ with respect to Z. Due to Lemma 2.2, this follows immediately from the general theory of stochastic integration with respect to independently scattered random measures developed in [24].
Condition (15) can easily be verified in several examples.
Suppose that L = ∂/∂t − A, where A = Σ_{i,j=1}^d a_{ij}(x) ∂²/∂x_i∂x_j + Σ_{i=1}^d b_i(x) ∂/∂x_i is the generator of a Markov process with values in R^d, without jumps (a diffusion). Assume that O is a bounded domain in R^d or O = R^d. By Aronson's estimate (see e.g. Theorem 2.6 of [22]), under some assumptions on the coefficients a_{ij}, b_i, there exist some constants c_1, c_2 > 0 such that G(t, x, y) is bounded above and below by Gaussian kernels, for all t > 0 and x, y ∈ O. In this case, condition (15) is implied by (5).
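The two-sided Gaussian bounds in Aronson's estimate have the classical form (our transcription, with the constants c_1, c_2 playing the roles indicated in the text):

```latex
c_1\, t^{-d/2} \exp\!\Big( -\frac{|x-y|^2}{c_1\, t} \Big)
\;\le\; G(t,x,y) \;\le\;
c_2\, t^{-d/2} \exp\!\Big( -\frac{|x-y|^2}{c_2\, t} \Big),
\qquad t > 0, \ x, y \in O.
```

The upper bound is the one used to verify the integrability condition (15).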
where G(t, x, y) is the fundamental solution of ∂u/∂t − Δu = 0 on O and g_{t,γ} is the density of the measure μ_{t,γ}, (μ_{t,γ})_{t≥0} being a convolution semigroup of measures on [0, ∞) whose Laplace transform is given by:
$$\int_0^\infty e^{-\lambda s}\, \mu_{t,\gamma}(ds) = e^{-t\lambda^\gamma}, \quad \lambda > 0.$$
Note that if γ < 1, g_{t,γ} is the density of S_t, where (S_t)_{t≥0} is a γ-stable subordinator with Lévy measure ρ_γ(dx) = \frac{γ}{Γ(1−γ)} x^{−γ−1} 1_{(0,∞)}(x) dx. If O = R^d,

then (15) holds if and only if (21) holds.
If O is a bounded domain in R^d, then G(t, x, y) ≤ Ḡ(t, x − y), where Ḡ is the fundamental solution of the same equation on R^d (by Lemma 2.1 of [16]). In this case, if α > 1, then (15) is implied by (21).

Stochastic integration
In this section we construct a stochastic integral with respect to Z by generalizing the ideas of [11] to the case of random fields. Unlike these authors, we do not assume that Z(B) has a symmetric distribution, unless α = 1.
A simple process is a linear combination of elementary processes. Note that any simple process X can be written in the form (23); without loss of generality, we assume that Y_0 = 0.
We denote by P the predictable σ-field on Ω × R_+ × R^d, i.e. the σ-field generated by all simple processes; a process is called predictable if it is P-measurable. Remark 4.1 One can show that the predictable σ-field P is the σ-field generated by the class C of processes X such that t → X(ω, t, x) is left-continuous for any ω ∈ Ω, x ∈ R^d, and (ω, x) → X(ω, t, x) is suitably measurable for any t ≥ 0. Let L^α be the class of all predictable processes X such that ‖X‖_α < ∞. We identify two processes X and Y for which ‖X − Y‖_α = 0, i.e. X = Y ν-a.e., where ν = P(dω) dt dx. In particular, we identify two processes X and Y if X is a modification of Y, i.e. X(t, x) = Y(t, x) a.s. for any t ≥ 0 and x ∈ R^d.
The space L^α becomes a metric space endowed with the metric d_α induced by ‖·‖_α: the triangle inequality follows using Minkowski's inequality if α > 1, and the inequality |a + b|^α ≤ |a|^α + |b|^α if α ≤ 1. The following result can be proved similarly to Proposition 2.3 of [29].
Proposition 4.2 For any X ∈ L α there exists a sequence (X n ) n≥1 of bounded simple processes such that X n − X α → 0 as n → ∞.
By Proposition 5.7 of [25], the process {Z(t, B) = Z([0, t] × B); t ≥ 0} has a càdlàg modification, for any B ∈ B_b(R^d). We work with these modifications. If X is a simple process given by (23), we define the integral I(X)(t, B) by (24). Note that I(X)(t, B) is F_t-measurable for any t ≥ 0. The following result will be used for the construction of the integral. It generalizes Lemma 3.3 of [11] to the case of random fields and non-symmetric measures ν_α.
Theorem 4.3 If X is a bounded simple process, then for any λ > 0,
$$P\Big(\sup_{t \in [0,T]} |I(X)(t,B)| > \lambda\Big) \;\le\; c_\alpha\, \lambda^{-\alpha}\, E\int_0^T \int_B |X(t,x)|^\alpha\, dx\, dt$$
for any T > 0 and B ∈ B_b(R^d), where c_α is a constant depending only on α.
Proof: Suppose that X is of the form (23). Since {I(X)(t, B)}_{t∈[0,T]} is càdlàg, it is separable. Without loss of generality, we assume that its separating set D can be written as D = ∪_n F_n, where (F_n)_n is an increasing sequence of finite sets containing the points (t_k)_{k=0,...,N}. Fix n ≥ 1. Denote by 0 = s_0 < s_1 < . . . < s_m = T the points of the set F_n, and say t_k = s_{i_k} for some 0 = i_0 < i_1 < . . . < i_N. Then each interval (t_k, t_{k+1}] can be written as the union of the intervals (s_i, s_{i+1}], i ∈ I_k. For any i ∈ I_k, let N_i = m_k. With this notation, and using (24), (26) and (28), it is enough to prove (29) for any λ > 0. First, note that (30) holds: this follows from the definition (23) of X and (27). We now prove (29).
For the event on the left-hand side, we consider its intersection with the event {max_{0≤i≤m−1} |W_i| > λ} and its complement. Hence the probability of this event can be bounded by the sum of two terms, I and II. We treat separately the two terms.
For the first term, I, we examine the tail of W_i. By Lemma A.1 (Appendix A), there exists a constant c*_α > 0 such that the corresponding tail estimate holds for any λ > 0, and hence I can be bounded as required. We now treat II. We consider three cases. For the first two cases we deviate from the original argument of [11], since we do not require that β = 0.
Case 1. α < 1. Using the independence between β_i and Z_i, it follows that (32) holds. Using (30) and Remark A.2 (Appendix A), we get (33). From (31), (32) and (33), the desired bound follows. Case 2. α > 1. We first treat the term II′. Note that {M_l = Σ_{i=0}^l X_i, F_{s_{l+1}}; 0 ≤ l ≤ m − 1} is a zero-mean square-integrable martingale. Using (30) and Remark A.2 (Appendix A), we get, as in Case 1, the analogous estimate; hence the process under consideration is a semimartingale and, by the submartingale inequality, the maximal bound follows. To evaluate E(Y_i), we note that, for almost all ω ∈ Ω, the independence between β_i and Z_i can be used. Case 3. α = 1. In this case we assume that β = 0.
This is a zero-mean square-integrable martingale, and by the martingale maximal inequality the required bound holds. The result follows using (34).
We now proceed to the construction of the stochastic integral. If Y = {Y(t)}_{t≥0} is a jointly measurable random process, we consider its supremum over [0, T]. Let X ∈ L^α be arbitrary. By Proposition 4.2, there exists a sequence (X_n)_{n≥1} of bounded simple processes such that ‖X_n − X‖_α → 0 as n → ∞. Let T > 0 and B ∈ B_b(R^d) be fixed. By linearity of the integral and Theorem 4.3, P(sup_{t≤T} |I(X_n)(t, B) − I(X_m)(t, B)| > λ) → 0 as n, m → ∞. In particular, the sequence {I(X_n)(·, B)}_n is Cauchy in probability in the space D[0, T] equipped with the sup-norm. Therefore, there exists a random element Y(·, B) in D[0, T] such that, for any λ > 0, P(sup_{t≤T} |I(X_n)(t, B) − Y(t, B)| > λ) → 0. The process Y(·, B) does not depend on the sequence (X_n)_n and can be extended to a càdlàg process on [0, ∞), which is unique up to indistinguishability. We denote this extension by I(X)(·, B) and we write I(X)(t, B) = ∫_0^t ∫_B X(s, x) Z(ds, dx).
For an arbitrary Borel set O ⊂ R^d (possibly O = R^d), we assume in addition that X ∈ L^α satisfies condition (38). Then we can define I(X)(·, O) as follows.
Let (O_k)_{k≥1} be an increasing sequence of sets in B_b(R^d) with ∪_k O_k = O. One shows that P(sup_{t≤T} |I(X)(t, O_k) − I(X)(t, O_l)| > λ) → 0 as k, l → ∞. This shows that {I(X)(·, O_k)}_k is a Cauchy sequence in probability in the space D[0, T] equipped with the sup-norm. We denote by I(X)(·, O) its limit. As above, this process can be extended to [0, ∞), and I(X)(t, O) is F_t-measurable for any t > 0. We denote I(X)(t, O) = ∫_0^t ∫_O X(s, x) Z(ds, dx). Similarly to Lemma 4.4, one can prove that for any X ∈ L^α satisfying (38), the tail estimate of Theorem 4.3 holds with B replaced by O.

The truncated noise
For the study of non-linear equations, we need to develop a theory of stochastic integration with respect to another process Z_K, which is defined by removing from Z the jumps whose modulus exceeds a fixed value K > 0. More precisely, for any B ∈ B_b(R_+ × R^d), we define Z_K(B) by restricting the jumps in the representation of Z(B) to {z; 0 < |z| ≤ K} (see (39)). We treat separately the cases α ≤ 1 and α > 1.

5.1 The case α ≤ 1

In this case, Z_K = {Z_K(B); B ∈ B_b(R_+ × R^d)} is an independently scattered random measure on R_+ × R^d with characteristic function given by (40). We first examine the tail of Z_K(B).
Lemma 5.1 For any B ∈ B_b(R_+ × R^d),
$$P(|Z_K(B)| > u) \;\le\; r_\alpha\, |B|\, u^{-\alpha} \quad \text{for any } u > 0, \tag{41}$$
where r_α > 0 is a constant depending only on α (given by Lemma A.3).
Proof: This follows from Example 3.7 of [11]. We denote by ν_{α,K} the restriction of ν_α to {z ∈ R; 0 < |z| ≤ K}. Note that ν_{α,K}({z ∈ R; |z| > t}) = (t^{−α} − K^{−α}) 1_{{t < K}}, and hence sup_{t>0} t^α ν_{α,K}({z ∈ R; |z| > t}) = 1. Next we observe that we do not need to assume that the measure ν_{α,K} is symmetric, since we use a modified version of Lemma 2.1 of [10], given by Lemma A.3 (Appendix A).
In fact, since the tail of ν_{α,K} vanishes if t > K, we can obtain another estimate for the tail of Z_K(B) which, together with (41), will allow us to control its p-th moment for p ∈ (α, 1). This new estimate is given below.
Proof: We use the same idea as in Example 3.7 of [11]. For each k ≥ 1, let Z_{k,K}(B) be a random variable whose characteristic function is the analogue of (40), with ν_{α,K} replaced by the finite measure μ_k below. Since {Z_{k,K}(B)}_k converges in distribution to Z_K(B), it suffices to prove the lemma for Z_{k,K}(B). Let μ_k be the restriction of ν_α to {z; k^{−1} < |z| ≤ K}.
Since μ_k is finite, Z_{k,K}(B) has a compound Poisson distribution, expressed as a Poisson mixture of the convolution powers μ_k^{*n} (where μ_k^{*n} denotes the n-fold convolution). Denoting by (η_i) the i.i.d. jumps, we consider the intersection with the event {max_{1≤i≤n} |η_i| > u} and its complement. Note that P(|η_i| > u) = 0 for any u > K. Using this fact and Markov's inequality, we obtain a bound valid for any u > K. Combining all these facts, we get the desired estimate for any u > K, and the conclusion follows from (42). Assume now that α = 1. In this case, E(η_i 1_{{|η_i|≤u}}) = 0, since η_i has a symmetric distribution. Using Chebyshev's inequality this time, we obtain the analogous bound, and the result follows as above, using the fact that the tail of ν_{α,K} vanishes for any u > K.

Lemma 5.3 If α < 1, then for any p ∈ (α, 1),
$$E|Z_K(B)|^p \;\le\; C_{\alpha,p}\, |B|\, K^{p-\alpha},$$
where C_{α,p} is a constant depending on α and p. If α = 1, then the same estimate holds with α replaced by 1, where C_p is a constant depending on p.
Proof: Note that
$$E|Z_K(B)|^p = p \int_0^\infty u^{p-1}\, P(|Z_K(B)| > u)\, du.$$
We consider separately the integrals for u ≤ K and u > K. For the first integral we use (41):
$$p \int_0^K u^{p-1}\, r_\alpha |B| u^{-\alpha}\, du = \frac{p\, r_\alpha}{p-\alpha}\, |B|\, K^{p-\alpha}.$$
For the second one we use Lemma 5.2, treating separately the cases α < 1 and α = 1. We now proceed to the construction of the stochastic integral with respect to Z_K. For this, we use the same method as for Z. Note that the relevant σ-field at time t is the σ-field generated by Z_K([0, s] × A) for all s ∈ [0, t] and A ∈ B_b(R^d). For any B ∈ B_b(R^d), we will work with a càdlàg modification of the Lévy process {Z_K(t, B) = Z_K([0, t] × B); t ≥ 0}. If X is a simple process given by (23), we define I_K(X)(t, B) by the same formula (24), with Z replaced by Z_K. The following result shows that I_K(X)(t, B) has the same tail behavior as I(X)(t, B).

Proposition 5.4 If X is a bounded simple process, then for any λ > 0,
$$P\Big(\sup_{t \in [0,T]} |I_K(X)(t,B)| > \lambda\Big) \;\le\; d_\alpha\, \lambda^{-\alpha}\, E\int_0^T \int_B |X(t,x)|^\alpha\, dx\, dt \tag{43}$$
for any T > 0 and B ∈ B_b(R^d), where d_α is a constant depending only on α.
Proof: As in the proof of Theorem 4.3, it is enough to establish the analogue of (30). This reduces to showing that U*_i = Σ_{j=1}^{N_i} x_j Z*_{ij} satisfies an inequality similar to (30) for any x ∈ R^{N_i}, i.e.
$$P(|U^*_i| > \lambda) \;\le\; d^*_\alpha\, \lambda^{-\alpha} \sum_{j=1}^{N_i} |x_j|^\alpha K_{ij} \tag{44}$$
for any λ > 0, for some d*_α > 0. We first examine the tail of Z*_{ij}: by (41), we obtain that for any u > 0, P(|Z*_{ij}| > u) ≤ r_α K_{ij} u^{−α}.

By Lemma A.3 (Appendix A), it follows that for any λ > 0 and for any sequence (b_j)_{j=1,...,N_i} of real numbers,
$$P\Big(\Big|\sum_{j=1}^{N_i} b_j\, K_{ij}^{-1/\alpha} Z^*_{ij}\Big| > \lambda\Big) \;\le\; r_\alpha^2\, \lambda^{-\alpha} \sum_{j=1}^{N_i} |b_j|^\alpha.$$
Inequality (44) (with d*_α = r_α²) follows by applying this to b_j = x_j K_{ij}^{1/α}. In view of the previous result and Proposition 4.2, for any process X ∈ L^α we can construct the integral I_K(X)(t, B) in the same manner as I(X)(t, B), and this integral satisfies (43). If in addition the process X ∈ L^α satisfies (38), then we can define the integral I_K(X)(t, O). This integral will satisfy an inequality similar to (43), with B replaced by O.
The appealing feature of I_K(X)(t, B) is that we can control its moments, as shown by the next result.
Theorem 5.5 If α < 1, then for any p ∈ (α, 1) and for any X ∈ L^p,
$$E|I_K(X)(t,B)|^p \;\le\; C_{\alpha,p}\, K^{p-\alpha}\, E\int_0^t \int_B |X(s,x)|^p\, dx\, ds \tag{45}$$
for any t > 0 and B ∈ B_b(R^d), where C_{α,p} is a constant depending on α, p.
If O ⊂ R^d is an arbitrary Borel set, and we assume in addition that the process X ∈ L^p satisfies condition (46), then inequality (45) holds with B replaced by O. Proof: Step 1. Suppose that X is an elementary process of the form (22).
We evaluate the inner integral. We split this integral into two parts, for u ≤ K|y|, respectively u > K|y|. For the first part, we use (41); for the second one, we use Lemma 5.2. Therefore, the inner integral is bounded by a constant multiple of K^{p−α} |X(s, x)|^p, which after integration yields (45) for elementary processes.
Step 2. Suppose now that X is a simple process of the form (23). Using the linearity of the integral, the inequality |a + b|^p ≤ |a|^p + |b|^p (valid since p < 1), and the result obtained in Step 1 for the elementary processes X_{ij}, we get:
$$E|I_K(X)(t,B)|^p \;\le\; C_{\alpha,p}\, K^{p-\alpha}\, E\int_0^t \int_B |X(s,x)|^p\, dx\, ds.$$
Step 3. Let X ∈ L^p be arbitrary. By Proposition 4.2, there exists a sequence (X_n)_n of bounded simple processes such that ‖X_n − X‖_p → 0. Since α < p, it follows that ‖X_n − X‖_α → 0. By the definition of I_K(X)(t, B), there exists a subsequence {n_k}_k such that {I_K(X_{n_k})(t, B)}_k converges to I_K(X)(t, B) a.s. Using Fatou's lemma and the result obtained in Step 2 (for the simple processes X_{n_k}), we get:
$$E|I_K(X)(t,B)|^p \;\le\; C_{\alpha,p}\, K^{p-\alpha}\, E\int_0^t \int_B |X(s,x)|^p\, dx\, ds.$$
By the definition of I_K(X)(t, O), there exists a subsequence (k_i)_i such that {I_K(X)(t, O_{k_i})}_i converges to I_K(X)(t, O) a.s. Using Fatou's lemma, the result obtained in Step 3 (for B = O_{k_i}) and the monotone convergence theorem, we get (45) with B replaced by O. Remark 5.6 Finding a similar moment inequality for the case α = 1 and p ∈ (1, 2) remains an open problem. The argument used in Step 2 above relies on the fact that p < 1. Unfortunately, we could not find another argument to cover the case p > 1.

The case α > 1
In this case, the construction of the integral with respect to Z_K relies on an integral with respect to N which exists in the literature. We recall briefly the definition of this integral. For more details, see Section 1.2.2 of [26], Section 24.2 of [28] or Section 8.7 of [22]. Let E = R^d × (R\{0}) be endowed with the measure μ(dx, dz) = dx ν_α(dz), and let B_b(E) be the class of bounded Borel sets in E. For a simple process Y = {Y(t, x, z); t ≥ 0, (x, z) ∈ E}, the integral I_N(Y)(t, B) is defined in the usual way, for any t > 0, B ∈ B_b(E). The process I_N(Y)(·, B) is a (càdlàg) zero-mean square-integrable martingale, whose quadratic variation and predictable quadratic variation can be computed explicitly. By approximation, this integral can be extended to the class of all P × B(R\{0})-measurable processes Y such that ‖Y‖_{2,T,B} < ∞ for any T > 0 and B ∈ B_b(E). The integral is a martingale with the same quadratic variations as above, and has the isometry property: E|I_N(Y)(t, B)|² = ‖Y‖²_{2,t,B}. If in addition ‖Y‖_{2,T,E} < ∞, then the integral can be extended to E. By the Burkholder-Davis-Gundy inequality for discontinuous martingales, one obtains the maximal inequality (47), for any p ≥ 1. The previous inequality is not suitable for our purposes. A more convenient inequality can be obtained for another stochastic integral, constructed for p ∈ [1, 2] fixed, as suggested on page 293 of [26]. More precisely, one can show that (48) holds for any bounded simple process Y, where C_p is the constant appearing in (47) (see Lemma 8.22 of [22]).
By the usual procedure, the integral can be extended to the class of all P × B(R\{0})-measurable processes Y such that [Y]_{p,T,E} < ∞. The integral is defined as an element of the space L^p(Ω; D[0, T]) and will be denoted by the same symbol I_N(Y)(t, B). Its appealing feature is that it satisfies inequality (48). From now on, we fix p ∈ [1, 2]. Based on (40), for any B ∈ B_b(R^d), we define I_K(X)(t, B) through this integral, for any predictable process X = {X(t, x); t ≥ 0, x ∈ R^d} for which the right-most integral is well-defined. Letting Y(t, x, z) = X(t, x) z 1_{{0<|z|≤K}}, we see that this is equivalent to saying that p > α and X ∈ L^p. By (48),
$$E|I_K(X)(t,B)|^p \;\le\; C_{\alpha,p}\, K^{p-\alpha}\, E\int_0^t \int_B |X(s,x)|^p\, dx\, ds, \tag{49}$$
where C_{α,p} = C_p α/(p − α). If in addition the process X ∈ L^p satisfies (46), then (49) holds with B replaced by O, for an arbitrary Borel set O ⊂ R^d. Note that (49) is the counterpart of (45) for the case α > 1. Together, these two inequalities will play a crucial role in Section 6.
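The constant C_{α,p} = C_p α/(p − α) can be traced as follows, assuming the α-stable form of ν_α with total jump weight normalized to 1:

```latex
\int_{0<|z|\le K} |z|^p\, \nu_\alpha(dz)
\;=\; \alpha \int_0^K z^{\,p-\alpha-1}\, dz
\;=\; \frac{\alpha}{p-\alpha}\, K^{p-\alpha},
```

which is finite precisely because p > α; applying (48) to Y(t, x, z) = X(t, x) z 1_{{0<|z|≤K}} then produces the factor C_p α/(p − α) K^{p−α} appearing in (49).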
The following table summarizes all the conditions: for α < 1, X ∈ L^α and X satisfies (38); for α > 1, X ∈ L^p and X satisfies (46), for some p ∈ (α, 2].

Theorem 6.1 Let α ∈ (0, 2), α ≠ 1. Assume that conditions (50), (51) and (52) hold for any T > 0, for some p ∈ (α, 1) if α < 1, or for some p ∈ (α, 2] if α > 1. Then equation (1) has a mild solution. Moreover, there exists a sequence (τ_K)_{K≥1} of stopping times with τ_K ↑ ∞ a.s. such that for any T > 0 and K ≥ 1, sup_{(t,x)∈[0,T]×O} E(|u(t, x)|^p 1_{{t≤τ_K}}) < ∞.

Example 6.2 (Heat equation) Let L = ∂/∂t − Δ, so that G(t, x, y) = G(t, x − y), where G(t, x) is the fundamental solution of Lu = 0 on R^d. Condition (52) holds if p < 1 + 2/d. If α < 1, this condition holds for any p ∈ (α, 1). If α > 1, this condition holds for any p ∈ (α, 1 + 2/d], as long as α satisfies (5). Conditions (50) and (51) hold by the continuity of the function G in t and x, by applying the dominated convergence theorem. To justify the application of this theorem, we use the trivial bound (2πt)^{−dp/2} for both G(t + h, x, y)^p and G(t, x + h, y)^p, which introduces the extra condition dp < 2. Unfortunately, we could not find another argument for proving these two conditions.

Example 6.3 (Parabolic equations) Let L be as in Section 3, with G satisfying (17). Assuming (18), we see that (52) holds if p < 1 + 2/d. The same comments as for the heat equation apply here as well. (Although in a different framework, a condition similar to (50) was probably used in the proof of Theorem 12.11 of [22] (page 217) for the claim lim_{s→t} E|J_3(X)(s) − J_3(X)(t)|^p_{L^p(O)} = 0. We could not see how to justify this claim, unless dp < 2.)

Example 6.4 (Heat equation with fractional power of the Laplacian) Let L = ∂/∂t + (−Δ)^γ for some γ > 0. By Lemma B.23 of [22], if α > 1, then condition (52) holds for any p ∈ (α, 1 + 2γ/d), provided that α satisfies (21). (This condition is the same as in Theorem 12.19 of [22], which examines the same equation using the approach based on Hilbert-space-valued solutions.) To verify conditions (50) and (51), we use the continuity of G in t and x and apply the dominated convergence theorem.
To justify the application of this theorem, we use the trivial bound C_{d,γ} t^{−dp/(2γ)} for both G(t + h, x, y)^p and G(t, x + h, y)^p, which introduces the extra condition dp < 2γ. This bound can be seen from (19), using the fact that G(t, x, y) ≤ Ḡ(t, x − y), where G and Ḡ are the fundamental solutions of ∂u/∂t − Δu = 0 on O, respectively R^d. (In the case of the same equation on R^d, elementary estimates for the time and space increments of Ḡ can be obtained directly from (20), as on p. 196 of [4]. These arguments cannot be used when O is a bounded domain.) The remaining part of this section is dedicated to the proof of Theorem 6.1. The idea is to solve first the equation with the truncated noise Z_K (yielding a mild solution u_K), and then identify a sequence (τ_K)_{K≥1} of stopping times with τ_K ↑ ∞ a.s. such that for any t > 0, x ∈ O and L > K, u_K(t, x) = u_L(t, x) a.s. on the event {t ≤ τ_K}. The final step is to show that the process u defined by u(t, x) = u_K(t, x) on {t ≤ τ_K} is a mild solution of (1). A similar method can be found in Section 9.7 of [22], using an approach based on stochastic integration of operator-valued processes with respect to Hilbert-space-valued processes, which is different from our approach.
Since σ is a Lipschitz function, there exists a constant C_σ > 0 such that:
$$|\sigma(x) - \sigma(y)| \le C_\sigma |x - y| \quad \text{for all } x, y \in \mathbb{R}. \tag{53}$$
In particular, letting D_σ = C_σ ∨ |σ(0)|, we have:
$$|\sigma(x)| \le D_\sigma (1 + |x|) \quad \text{for all } x \in \mathbb{R}. \tag{54}$$
For the proof of Theorem 6.1, we need a specific construction of the Poisson random measure N, taken from [21]. We review briefly this construction.
Let (O_k)_{k≥1} be a partition of R^d with sets in B_b(R^d), and let (U_j)_{j≥1} be a partition of R\{0} such that ν_α(U_j) < ∞ for all j ≥ 1. Let (T_i^{j,k}, X_i^{j,k}, Z_i^{j,k})_{i,j,k≥1} be independent random variables defined on a probability space (Ω, F, P), such that
$$N = \sum_{i,j,k} \delta_{(T_i^{j,k},\, X_i^{j,k},\, Z_i^{j,k})} \tag{55}$$
is a Poisson random measure on R_+ × R^d × (R\{0}) with intensity dt dx ν_α(dz). This section is organized as follows. In Section 6.1 we prove the existence of the solution of the equation with truncated noise Z_K. Sections 6.2 and 6.3 contain the proof of Theorem 6.1 when α < 1, respectively α > 1.

The equation with truncated noise
In this section, we fix K > 0 and we consider the equation:
$$Lu(t,x) = \sigma(u(t,x))\,\dot{Z}_K(t,x), \quad t>0, \ x \in O, \tag{56}$$
with zero initial conditions and Dirichlet boundary conditions. A mild solution of (56) is a predictable process u which satisfies (2) with Z replaced by Z_K. For the next result, O can be a bounded domain in R^d or O = R^d (with no boundary conditions).
Theorem 6.5 Equation (56) has a unique mild solution u_K = {u_K(t, x); t ≥ 0, x ∈ O}.

Proof: We use the same argument as in the proof of Theorem 13 of [6], based on a Picard iteration scheme. We define u_0(t, x) = 0 and
$$u_{n+1}(t,x) = \int_0^t \int_O G(t-s,x,y)\,\sigma(u_n(s,y))\,Z_K(ds,dy)$$
for any n ≥ 0. We prove by induction on n ≥ 0 that u_n satisfies properties (i)-(iv); in particular, (iii) says that u_n is (F_t)_t-adapted and (iv) says that (t, x) → u_n(t, x) is continuous in L^p(Ω). The statement is trivial for n = 0. For the induction step, assume that the statement is true for n. By an extension to random fields of Theorem 30, Chapter IV of [8], u_n has a jointly measurable modification. Since this modification is (F_t)_t-adapted (in the sense of (iii)), it has a predictable modification (using an extension of Proposition 3.21 of [22] to random fields). We work with this modification, which we also call u_n.
To prove (iv), we first show the right-continuity in t. Let h > 0. Writing the interval [0, t + h] as the union of [0, t] and (t, t + h], we obtain that u_{n+1}(t + h, x) − u_{n+1}(t, x) = I_1(h) + I_2(h), where
$$I_1(h) = \int_0^t \int_O \big( G(t+h-s,x,y) - G(t-s,x,y) \big)\, \sigma(u_n(s,y))\, Z_K(ds,dy),$$
$$I_2(h) = \int_t^{t+h} \int_O G(t+h-s,x,y)\, \sigma(u_n(s,y))\, Z_K(ds,dy).$$
Using again (54) and the moment inequality (45) (or (49)), it follows that both I_1(h) and I_2(h) converge to 0 as h → 0, using (50) for I_1(h), respectively the dominated convergence theorem and (52) for I_2(h). The left-continuity in t is similar, by writing the interval [0, t − h] as the difference between [0, t] and (t − h, t], for h > 0. For the continuity in x, similarly as above, we see that E|u_{n+1}(t, x + h) − u_{n+1}(t, x)|^p is bounded by a quantity which converges to 0 as |h| → 0, due to (51). This finishes the proof of (iv). We denote M_n(t) = sup_{x∈O} E|u_n(t, x)|^p. Similarly to (58), we obtain a recursive bound for M_{n+1}(t) in terms of M_n. By applying Lemma 15 of the Erratum to [6] with f_n = M_n, k_1 = 0, k_2 = 1 and g(s) = C J_p(s), we obtain that sup_n sup_{t∈[0,T]} M_n(t) < ∞. We now prove that {u_n(t, x)}_n converges in L^p(Ω), uniformly in (t, x) ∈ [0, T] × O. To see this, let U_n(t) = sup_{x∈O} E|u_{n+1}(t, x) − u_n(t, x)|^p for n ≥ 0. Using the moment inequality (45) (or (49)) and (53), we obtain a similar recursive bound for U_n. By Lemma 15 of the Erratum to [6], Σ_{n≥0} U_n(t)^{1/p} converges uniformly on [0, T]. (Note that this lemma is valid for all p > 0.) We denote by u(t, x) the limit of u_n(t, x) in L^p(Ω). One can show that u satisfies properties (ii)-(iv) listed above, so u has a predictable modification. This modification is a solution of (56). To prove uniqueness, let v be another solution and denote H(t) = sup_{x∈O} E|u(t, x) − v(t, x)|^p. Then H satisfies a similar recursive bound, and using (52), it follows that H(t) = 0 for all t > 0.
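For orientation, the recursive bounds behind the Picard scheme can be sketched as follows. Writing J_p(s) = sup_{x∈O} ∫_O G(s, x, y)^p dy (our reading of the quantity appearing in g(s) = C J_p(s); the exact definition in the original may differ slightly), the moment inequality (45) (or (49)) together with the growth bound (54) and the Lipschitz bound (53) gives, up to numerical constants:

```latex
M_{n+1}(t) \;\le\; C_{\alpha,p}\, K^{p-\alpha}\, D_\sigma^p
\int_0^t J_p(t-s)\,\big( 1 + M_n(s) \big)\, ds,
\qquad
U_n(t) \;\le\; C_{\alpha,p}\, K^{p-\alpha}\, C_\sigma^p
\int_0^t J_p(t-s)\, U_{n-1}(s)\, ds.
```

Bounds of this form, with ∫_0^T J_p(s) ds < ∞, are precisely the situation handled by Lemma 15 of the Erratum to [6].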
6.2 Proof of Theorem 6.1: case $\alpha < 1$

In this case, for any $t > 0$ and $B \in \mathcal{B}_b(\mathbb{R}^d)$, the random variable $Z(t,B)$ admits the representation given in (11), and its characteristic function is given by the corresponding L\'evy-Khintchine formula. Note that $\{Z(t, B)\}_{t \geq 0}$ is not a compound Poisson process, since the measure $\nu_\alpha$ is infinite.
We introduce the stopping times $(\tau_K)_{K \geq 1}$ as on page 239 of [21]: $\tau_K(B) = \inf\{t > 0 :\, |Z(t,B) - Z(t-,B)| > K\}$, where $Z(t-, B) = \lim_{s \uparrow t} Z(s, B)$. Clearly, $\tau_L(B) \geq \tau_K(B)$ for all $L > K$. We first investigate the relationship between $Z$ and $Z_K$ and the properties of $\tau_K(B)$. Using construction (55) of $N$ and definition (39) of $Z_K$, we decompose $Z(t,B)$ into the processes $Z^{j,k}(t,B)$, and we observe that each $\{Z^{j,k}(t, B)\}_{t \geq 0}$ is a compound Poisson process. Note that $\tau_K(B) > T$ means that all the jumps of $\{Z(t, B)\}_{t \geq 0}$ in $[0,T]$ are smaller than $K$ in modulus.
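The meaning of the event $\{\tau_K(B) > T\}$ can be checked on a simulation. The sketch below uses a signed Pareto jump law as an illustrative heavy-tailed stand-in (not the stable L\'evy measure): on the paths where no jump of modulus larger than $K$ occurs by time $T$, the truncated path $Z_K$ coincides with $Z$.

```python
import random

random.seed(0)

def jumps_on_interval(rate=5.0, T=1.0, alpha=0.5):
    """Jump sizes of a compound Poisson path on [0, T].

    Signed Pareto(alpha) sizes: an illustrative heavy-tailed stand-in
    for the jumps of {Z(t, B)}, not the actual stable Levy measure."""
    out, t = [], 0.0
    while True:
        t += random.expovariate(rate)
        if t > T:
            break
        size = (1.0 - random.random()) ** (-1.0 / alpha)
        out.append(size if random.random() < 0.5 else -size)
    return out

K = 10.0
on_event = 0      # paths with tau_K(B) > T (no jump of modulus > K)
agree = 0         # among those, paths where Z(T) == Z_K(T)
for _ in range(1000):
    jumps = jumps_on_interval()
    z = sum(jumps)                                  # Z(T, B)
    z_K = sum(j for j in jumps if abs(j) <= K)      # truncated Z_K(T, B)
    if all(abs(j) <= K for j in jumps):
        on_event += 1
        agree += (z == z_K)
```

On every path in the event (all jumps of modulus at most $K$), the truncation removes nothing, so the two sums agree exactly.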
Using an approximation argument and the construction of the integrals $I(X)$ and $I_K(X)$, it follows that for any $X \in L^\alpha$ and for any $L > K$, the identity (60) holds a.s. on $\{\tau_K(B) > T\}$. The next result gives the probability of the event $\{\tau_K(B) > T\}$.
Lemma 6.6 For any $T > 0$ and $B \in \mathcal{B}_b(\mathbb{R}^d)$, $P(\tau_K(B) > T) = e^{-T|B| K^{-\alpha}}$. Moreover, $\tau_K(B) \to \infty$ a.s. as $K \to \infty$.

Proof: Since $\nu_\alpha(\{|z| > K\}) = K^{-\alpha}$ and the stopping times $(\tau_K^{j,k}(B))_{j,k \geq 1}$ are independent, it is enough to prove the corresponding identity (61) for any $j, k \geq 1$. Note that $\{\tau_K^{j,k}(B) > T\} = \{\omega;\, |Z_i^{j,k}(\omega)| \leq K$ for all $i$ for which $T_i^{j,k} \leq T$ and $X_i^{j,k} \in B\}$, and $(T_n^{j,k})_{n \geq 1}$ are the jump times of a Poisson process with intensity $\lambda_{j,k}$. Conditioning on the number of these jump times in $[0,T]$ yields (61).
To prove the last statement, let $A_K^{(n)} = \{\tau_K(B) > n\}$. By (61), $P(A_K^{(n)}) \to 1$ as $K \to \infty$; since the events $A_K^{(n)}$ are non-decreasing in $K$, this gives $P(\lim_K A_K^{(n)}) = 1$ for any $n \geq 1$, and hence $P(\bigcap_{n \geq 1} \lim_K A_K^{(n)}) = 1$. Hence, with probability $1$, for any $n$ there exists some $K_n$ such that $\tau_{K_n} > n$. Since $(\tau_K)_K$ is non-decreasing, this proves that $\tau_K \to \infty$ with probability $1$.
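The exponential formula of Lemma 6.6 can be verified by Monte Carlo: since $\nu_\alpha(\{|z| > K\}) = K^{-\alpha}$, the jumps of modulus larger than $K$ arrive according to a Poisson process of intensity $|B| K^{-\alpha}$, and $\{\tau_K(B) > T\}$ is the event that this process has no point in $[0,T]$. Parameters below are illustrative.

```python
import math
import random

random.seed(42)

# Illustrative parameters: T = 1, |B| = 1, alpha = 0.5, K = 4,
# so jumps of modulus > K arrive at rate |B| * K^{-alpha} = 0.5.
T, area_B, alpha, K = 1.0, 1.0, 0.5, 4.0
rate = area_B * K ** (-alpha)

trials = 100_000
no_big_jump = 0
for _ in range(trials):
    # first arrival time of the "big jump" Poisson process
    first_big = random.expovariate(rate)
    no_big_jump += (first_big > T)

empirical = no_big_jump / trials
exact = math.exp(-T * area_B * K ** (-alpha))   # formula of Lemma 6.6
```

With these parameters the exact value is $e^{-1/2} \approx 0.6065$, and the empirical frequency matches it to Monte Carlo accuracy.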
Remark 6.7 The construction of $\tau_K(B)$ given above is due to [21] (in the case of a symmetric measure $\nu_\alpha$). This construction relies on the fact that $B$ is a bounded set. Since $Z(t, \mathbb{R}^d)$ (and consequently $\tau_K(\mathbb{R}^d)$) is not well-defined, we could not see why this construction can also be used when $B = \mathbb{R}^d$, as claimed in [21]. To avoid this difficulty, one could try to use an increasing sequence $(E_n)_n$ of sets in $\mathcal{B}_b(\mathbb{R}^d)$ with $\bigcup_n E_n = \mathbb{R}^d$, applying (60) with $B = E_n$ and letting $n \to \infty$.

In what follows, we denote $\tau_K = \tau_K(\mathcal{O})$. Let $u_K$ be the solution of equation (56), whose existence is guaranteed by Theorem 6.5.

Lemma 6.8 Under the assumptions of Theorem 6.1, for any $t > 0$, $x \in \mathcal{O}$ and $L > K$, we have $u_L(t,x) = u_K(t,x)$ a.s. on the event $\{t \leq \tau_K\}$.

Proof: By the definition of $u_L$ and (60), a.s. on the event $\{t \leq \tau_K\}$, $u_L(t,x)$ coincides with a stochastic integral with respect to $Z_K$. Using the definition of $u_K$ and Proposition C.1 (Appendix C), we obtain, with probability $1$, a bound for $M(t) = \sup_{x \in \mathcal{O}} E[\,|u_K(t,x) - u_L(t,x)|^p 1_{\{t \leq \tau_K\}}]$. Using the moment inequality (45) and the Lipschitz condition (53), we get a Gronwall-type inequality for $M$, with constant $C = C_{\alpha,p} K^{p-\alpha} C_\sigma^p$. Using (52), it follows that $M(t) = 0$ for all $t > 0$.
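The consistency-and-patching argument used here can be mimicked on a toy model in which the role of $u_K$ is played by the path driven by the jumps of modulus at most $K$ (illustrative stand-in dynamics, not the SPDE): for $L > K$ the truncated paths agree strictly before the first jump exceeding $K$, and, for fixed $t$, they stabilize once $K$ exceeds the largest jump up to time $t$.

```python
import random

random.seed(1)

def sample_path(rate=8.0, T=1.0, alpha=0.7):
    """Jump times and sizes of a compound Poisson path on [0, T].

    Signed Pareto(alpha) jumps: an illustrative heavy-tailed stand-in."""
    path, t = [], 0.0
    while True:
        t += random.expovariate(rate)
        if t > T:
            break
        size = (1.0 - random.random()) ** (-1.0 / alpha)
        if random.random() < 0.5:
            size = -size
        path.append((t, size))
    return path

def u_trunc(path, t, K):
    """Toy analogue of u_K: sum of jumps of modulus <= K up to time t."""
    return sum(z for (s, z) in path if s <= t and abs(z) <= K)

def tau_trunc(path, K):
    """First time of a jump of modulus > K (infinity if none)."""
    return min((s for (s, z) in path if abs(z) > K), default=float("inf"))

path = sample_path()
t = 0.5
# (a) consistency: u_L(t) = u_K(t) whenever t < tau_K and L > K
consistent = all(
    u_trunc(path, t, L) == u_trunc(path, t, K)
    for K in (2.0, 5.0, 10.0)
    for L in (20.0, 50.0)
    if t < tau_trunc(path, K)
)
# (b) patching: once K exceeds the largest jump up to time t,
#     u_K(t) equals the untruncated sum Z(t)
big = max((abs(z) for (s, z) in path if s <= t), default=0.0)
Z_t = sum(z for (s, z) in path if s <= t)
stabilized = (u_trunc(path, t, big + 1.0) == Z_t)
```

Point (b) is the toy analogue of defining $u(t,x)$ as the common value of the $u_K(t,x)$ for $K$ large, which is well-defined a.s. because $\tau_K \to \infty$.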

Proposition 6.9
Under the assumptions of Theorem 6.1, the process $u = \{u(t,x);\, t \geq 0, x \in \mathcal{O}\}$, defined on $\Omega^*_{t,x}$ as the common value $u_K(t,x)$ for $K$ large enough (and as $0$ off this event), is a mild solution of equation (1).
Proof: We first prove that $u$ is predictable. The process $X(\omega, t, x) = 1_{\{t \leq \tau_K\}}(\omega)$ is clearly predictable, being in the class $\mathcal{C}$ defined in Remark 4.1. By the definition of $\Omega_{t,x}$, since $u_K, u_L$ are predictable, it follows that $(\omega, t, x) \mapsto 1_{\Omega^*_{t,x}}(\omega)$ is $\mathcal{P}$-measurable. Hence, $u$ is predictable. We now prove that $u$ satisfies (2). Let $t > 0$ and $x \in \mathcal{O}$ be arbitrary. Using (60) and Proposition C.1 (Appendix C), with probability $1$, $u(t,x)$ coincides with the mild-form integral $\int_0^t \int_{\mathcal{O}} G(t-s, x, y)\, \sigma(u(s,y))\, Z(ds, dy)$.
For the second-to-last equality, we used the fact that the processes $X(s,y)$ appearing in the two stochastic integrals coincide a.s.

6.3 Proof of Theorem 6.1: case $\alpha > 1$

In this case, for any $t > 0$ and $B \in \mathcal{B}_b(\mathbb{R}^d)$, we have the representation given in (12). To introduce the stopping times $(\tau_K)_{K \geq 1}$, we use the same idea as in Section 9.7 of [22].
Let $M(t,B) = \sum_{j \geq 1} (L_j(t,B) - E L_j(t,B))$ and $P(t,B) = L_0(t,B)$, where $L_j(t,B) = L_j([0,t] \times B)$ was defined in Section 2. Note that $\{M(t, B)\}_{t \geq 0}$ is a zero-mean square-integrable martingale and $\{P(t, B)\}_{t \geq 0}$ is a compound Poisson process with $E[P(t,B)] = t|B|\mu$, where $\mu = \int_{|z|>1} z\, \nu_\alpha(dz) = \frac{\beta \alpha}{\alpha - 1}$. With this notation, $Z(t,B)$ can be expressed in terms of $M(t,B)$ and $P(t,B)$, and we set $\mu_K = \int_{1 < |z| \leq K} z\, \nu_\alpha(dz)$. Recalling definition (40) of $Z_K$, a similar expression holds for $Z_K(t,B)$. For any $K > 0$, we let $\tau_K(B)$ be the first time at which $\{P(t, B)\}_{t \geq 0}$ has a jump of modulus larger than $K$, where $P(t-, B) = \lim_{s \uparrow t} P(s, B)$. Lemma 6.6 holds again, but its proof is simpler than in the case $\alpha < 1$, since $\{P(t, B)\}_{t \geq 0}$ is a compound Poisson process. By (55), and using (62) and (63), it follows that the jumps of $Z$ of modulus larger than $K$ coincide with those of $P$. Let $p \in (\alpha, 2]$ be fixed. Using an approximation argument and the construction of the integrals $I(X)$ and $I_K(X)$, it follows that for any $X \in L^\alpha$ and for any $L > K$, the identity (64) holds a.s. on $\{\tau_K(B) > T\}$. We denote $\tau_K = \tau_K(\mathcal{O})$. We consider equation (65), with zero initial conditions and Dirichlet boundary conditions, driven by the truncated noise. A mild solution of (65) is a predictable process $u$ which satisfies the corresponding integral equation a.s.
for any $t > 0$, $x \in \mathcal{O}$. The existence and uniqueness of a mild solution of (65) can be proved similarly to Theorem 6.5; we omit the details. We denote this solution by $v_K$.

Lemma 6.10 Under the assumptions of Theorem 6.1, for any $t > 0$, $x \in \mathcal{O}$ and $L > K$, we have $v_L(t,x) = v_K(t,x)$ a.s. on the event $\{t \leq \tau_K\}$.

Proof: By the definition of $v_L$ and (64), a.s. on the event $\{t \leq \tau_K\}$, $v_L(t,x)$ coincides with the corresponding integral with respect to $Z_K$. Using the definition of $v_K$ and Proposition C.1 (Appendix C), we obtain, with probability $1$, a decomposition of $v_K(t,x) - v_L(t,x)$ into two terms, which we estimate separately. For the first term, we use the moment inequality (49) and the Lipschitz condition (53); we get a bound with constant $C = C_{\alpha,p} K^{p-\alpha} C_\sigma^p$. For the second term, we use H\"older's inequality $|\int fg\, d\mu| \leq (\int |f|^p d\mu)^{1/p} (\int |g|^q d\mu)^{1/q}$ with $f(s,y) = G(t-s, x, y)^{1/p} (\sigma(v_K(s,y)) - \sigma(v_L(s,y))) 1_{\{s \leq \tau_K\}}$ and $g(s,y) = G(t-s, x, y)^{1/q}$, where $p^{-1} + q^{-1} = 1$. Combining (66) and (67), we obtain a Gronwall-type inequality for $M(t) = \sup_{x \in \mathcal{O}} E[\,|v_K(t,x) - v_L(t,x)|^p 1_{\{t \leq \tau_K\}}]$, which implies that $M(t) = 0$ for all $t > 0$. For any $t > 0$ and $x \in \mathcal{O}$, we let $\Omega_{t,x} = \bigcap_{K < L} \big( \{v_K(t,x) = v_L(t,x)\} \cup \{\tau_K < t\} \big)$, where $K$ and $L$ are positive integers, and $\Omega^*_{t,x} = \Omega_{t,x} \cap \{\lim_{K \to \infty} \tau_K = \infty\}$. By Lemma 6.10, $P(\Omega^*_{t,x}) = 1$.
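The identity $\mu = \int_{|z|>1} z\,\nu_\alpha(dz) = \beta\alpha/(\alpha-1)$ used above can be checked numerically. The sketch below assumes the non-symmetric stable L\'evy measure $\nu_\alpha(dz) = \alpha\,(p\,1_{\{z>0\}} + q\,1_{\{z<0\}})\,|z|^{-\alpha-1}\,dz$ with $p + q = 1$ and $\beta = p - q$, a convention consistent with $\nu_\alpha(\{|z|>K\}) = K^{-\alpha}$; the parameters are illustrative.

```python
import math

alpha, p = 1.5, 0.7          # illustrative: alpha in (1, 2), p + q = 1
q = 1.0 - p
beta = p - q                 # skewness parameter

def nu_density(z):
    """Assumed density of nu_alpha (see lead-in)."""
    weight = p if z > 0 else q
    return weight * alpha * abs(z) ** (-alpha - 1.0)

# mu = int_{|z| > 1} z nu_alpha(dz), midpoint rule on a log-spaced grid
mu_numeric = 0.0
grid = [math.exp(0.01 * i) for i in range(2001)]    # z from 1 to e^20
for a, b in zip(grid, grid[1:]):
    mid = 0.5 * (a + b)
    mu_numeric += mid * nu_density(mid) * (b - a)       # part z > 1
    mu_numeric += (-mid) * nu_density(-mid) * (b - a)   # part z < -1

mu_exact = beta * alpha / (alpha - 1.0)   # = 1.2 for these parameters
```

The integral converges precisely because $\alpha > 1$ (the tail $|z|^{-\alpha}$ is integrable at infinity), which is why the drift constant $\mu$ exists only in this case.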

Proposition 6.11
Under the assumptions of Theorem 6.1, the process $u = \{u(t,x);\, t \geq 0, x \in \mathcal{O}\}$, defined on $\Omega^*_{t,x}$ as the common value $v_K(t,x)$ for $K$ large enough (and as $0$ off this event), is a mild solution of equation (1).
Proof: We proceed as in the proof of Proposition 6.9. In this case, with probability $1$, the corresponding identity holds up to a drift term involving $b_K$. The conclusion follows by letting $K \to \infty$, since $\tau_K \to \infty$ a.s. and $b_K \to 0$.

A Some auxiliary results
The following result is used in the proof of Theorem 4.3.
In the proof of Theorem 4.3 and Lemma A.3 below, we use the following remark, due to Adam Jakubowski (personal communication).
The next result is a generalization of Lemma 2.1 of [10] to the case of non-symmetric random variables. This result is used in the proofs of Lemma 5.1 and Proposition 5.4.

Lemma A.3 Let $(\eta_k)_{k \geq 1}$ be independent random variables such that
$$\sup_{\lambda > 0} \lambda^{\alpha} P(|\eta_k| > \lambda) \leq K \quad \text{for all } k \geq 1, \qquad (68)$$
for some $K > 0$ and $\alpha \in (0,2)$. If $\alpha > 1$, we assume that $E(\eta_k) = 0$ for all $k$, and if $\alpha = 1$, we assume that $\eta_k$ has a symmetric distribution for all $k$. Then for any sequence $(a_k)_{k \geq 1}$ of real numbers, we have:
$$\sup_{\lambda > 0} \lambda^{\alpha} P\Big(\Big|\sum_{k \geq 1} a_k \eta_k\Big| > \lambda\Big) \leq r_{\alpha} K \sum_{k \geq 1} |a_k|^{\alpha},$$
where $r_{\alpha} > 0$ is a constant depending only on $\alpha$.
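Condition (68) is satisfied with $K = 1$ by signed Pareto variables with $P(|\eta| > \lambda) = \lambda^{-\alpha}$ for $\lambda \geq 1$; the following Monte Carlo sketch (illustrative, not part of the proof) checks the uniform tail bound empirically.

```python
import random

random.seed(7)

alpha, n = 1.5, 200_000
# eta = +/- (1 - U)^{-1/alpha} gives P(|eta| > lam) = lam^{-alpha} for
# lam >= 1, hence sup_{lam > 0} lam^alpha P(|eta| > lam) = 1: condition
# (68) holds with K = 1
samples = []
for _ in range(n):
    val = (1.0 - random.random()) ** (-1.0 / alpha)
    samples.append(val if random.random() < 0.5 else -val)

def tail(lam):
    """Empirical P(|eta| > lam)."""
    return sum(1 for s in samples if abs(s) > lam) / n

# lam^alpha * P(|eta| > lam) should be close to 1 at every lam >= 1
checks = [lam ** alpha * tail(lam) for lam in (1.5, 2.0, 5.0, 10.0)]
```

Since $\alpha > 1$ here, the centering assumption $E(\eta_k) = 0$ of Lemma A.3 is ensured by the symmetric sign.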

C A local property of the integral
The following result is the analogue of Proposition 8.11 of [22].
Proposition C.1 Suppose that there exists an event $A \in \mathcal{F}_T$ such that condition (70) holds. Then $I(X)(T, \mathcal{O}) = 0$ a.s. on $A$, and the same holds for $I_K(X)$.

Proof: We only prove the result for $I(X)$, the proof for $I_K(X)$ being the same. Moreover, we include only the argument for $\alpha < 1$; the case $\alpha > 1$ is similar. The idea is to reduce the argument to the case when $X$ is a simple process, as in the proof of Proposition 8.11 of [22].
Step 1. We show that the proof can be reduced to the case of a bounded set $\mathcal{O}$. Let $X_n(t,x) = X(t,x) 1_{\mathcal{O}_n}(x)$, where $\mathcal{O}_n = \mathcal{O} \cap E_n$ and $(E_n)_n$ is an increasing sequence of sets in $\mathcal{B}_b(\mathbb{R}^d)$ such that $\bigcup_n E_n = \mathbb{R}^d$. Then $X_n \in L^\alpha$ satisfies (70). By the dominated convergence theorem, $[X_n - X]_\alpha \to 0$.

Step 2. We show that the proof can be reduced to the case of bounded processes. For this, let $X_n(t,x) = X(t,x) 1_{\{|X(t,x)| \leq n\}}$. Clearly, $X_n \in L^\alpha$ is bounded and satisfies (70) for all $n$. By the dominated convergence theorem, $[X_n - X]_\alpha \to 0$, and hence $I(X_{n_k})(T, \mathcal{O}) \to I(X)(T, \mathcal{O})$ a.s. for a subsequence $\{n_k\}$. It suffices to show that $I(X_n)(T, \mathcal{O}) = 0$ a.s. on $A$ for all $n$.
Step 3. We show that the proof can be reduced to the case of bounded continuous processes. Assume that $X \in L^\alpha$ is bounded and satisfies (70). For any $t > 0$ and $x \in \mathbb{R}^d$, we define
$$X_n(t,x) = n^{d+1} \int_{(t-1/n) \vee 0}^{t} \int_{(x-1/n,\, x] \cap \mathcal{O}} X(s,y)\, dy\, ds,$$