L_p ESTIMATES FOR SPDE WITH DISCONTINUOUS COEFFICIENTS IN DOMAINS

Abstract. Stochastic partial differential equations of divergence form with discontinuous and unbounded coefficients are considered in C^1 domains. Existence and uniqueness results are given in weighted L_p spaces, and Hölder-type estimates are presented.


1. Introduction
Let G be an open set in R^d. We consider parabolic stochastic partial differential equations of divergence form,

du = \big( D_i(a^{ij} u_{x^j} + b^i u + f^i) + \bar b^i u_{x^i} + c u + \bar f \big)\,dt + (\nu^k u + g^k)\,dw^k_t,    (1.1)

given for x ∈ G, t ≥ 0. Here w^k_t are independent one-dimensional Wiener processes, the indices i and j run from 1 to d, the index k runs through {1, 2, ...}, and summation over repeated indices is understood. The coefficients a^{ij}, b^i, \bar b^i, c, ν^k and the free terms f^i, \bar f, g^k are random functions depending on t > 0 and x ∈ G.
This article is a natural continuation of [15], where an L_p theory for equations with discontinuous coefficients was constructed on R^d.
Our approach is based on Sobolev spaces with and without weights, and we present unique solvability results for equation (1.1) on R^d, on R^d_+ (the half space) and on bounded C^1 domains. We show that the L_p-norm of u_x can be controlled by the L_p-norms of f^i, \bar f and g if p is sufficiently close to 2.
Pulvirenti [13] showed by example that without the continuity of a^{ij} in x one cannot fix p even for deterministic parabolic equations. For an L_p theory of linear SPDEs with continuous coefficients on domains, we refer to [1], [2] and [7].
Actually, an L_2 theory for equations of type (1.1) with bounded coefficients was developed long ago on the basis of the monotonicity method, and an account of it can be found in [14]. But our results are new even for p = 2 (and probably even for deterministic equations) since, for instance, we only assume that the functions ρ b^i, ρ \bar b^i, ρ^2 c and ρ ν^k are bounded, where ρ(x) = dist(x, ∂G). Thus we allow our coefficients to blow up near the boundary of G.
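To make the last point concrete, here is a simple illustration of ours (not taken from the original text): on the unit ball, admissible coefficients may grow like the reciprocal of the distance to the boundary.

```latex
% Illustration (ours): on G = B_1 = \{x \in \mathbb{R}^d : |x| < 1\} we have
% \rho(x) = 1 - |x|, and the (deterministic) coefficients
b^i(x) = \frac{1}{1 - |x|}, \qquad c(x) = -\frac{1}{(1 - |x|)^2}
% are admissible, since |\rho b^i| \le 1 and |\rho^2 c| \le 1 on G,
% although both blow up near \partial G.
```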
An advantage of the L_p (p > 2) theory can be found, for instance, in [16], where the solvability of some nonlinear SPDEs was obtained with the help of L_p estimates for linear SPDEs with discontinuous coefficients. Also, we will see that some Hölder-type estimates are valid only for p > 2 (Corollary 2.5).

2. Main Results
Let (Ω, F, P ) be a complete probability space, {F t , t ≥ 0} be an increasing filtration of σ-fields F t ⊂ F, each of which contains all (F, P )-null sets. By P we denote the predictable σ-field generated by {F t , t ≥ 0} and we assume that on Ω we are given independent one-dimensional Wiener processes w 1 t , w 2 t , ..., each of which is a Wiener process relative to {F t , t ≥ 0}.
In other words, there exist constants r_0, K_0 > 0 such that for any x_0 ∈ ∂G there exists a one-to-one continuously differentiable mapping Ψ of B_{r_0}(x_0) onto a domain J ⊂ R^d such that Ψ(x_0) = 0, Ψ(G ∩ B_{r_0}(x_0)) ⊂ R^d_+, Ψ(∂G ∩ B_{r_0}(x_0)) = J ∩ {y ∈ R^d : y^1 = 0}, and the derivatives of Ψ and Ψ^{−1} are bounded by K_0 and uniformly continuous.

We impose the following assumptions.

(i) The coefficients a^{ij}(t, x), b^i(t, x), \bar b^i(t, x), c(t, x) and ν^k(t, x) are predictable functions of (ω, t) for each x.
(ii) There exist constants λ, Λ ∈ (0, ∞) such that for any ω, t, x and ξ ∈ R^d,

λ|ξ|^2 ≤ a^{ij} ξ^i ξ^j ≤ Λ|ξ|^2.

(iii) For any x, t and ω, the functions ρ b^i, ρ \bar b^i, ρ^2 c and ρ ν^k are bounded by a constant K, where ρ(x) = dist(x, ∂G).

To describe the assumptions on f^i, \bar f and g we use the weighted Sobolev spaces introduced in [7], [8] and [12]. If n is a non-negative integer, then

\|u\|^p_{H^n_{p,θ}(G)} = \sum_{|α| ≤ n} \int_G |ρ^{|α|} D^α u(x)|^p ρ^{θ−d}(x)\,dx,

and for general γ ∈ R the norm of H^γ_{p,θ}(G) is given by

\|u\|^p_{H^γ_{p,θ}(G)} = \sum_{n ∈ Z} e^{nθ} \|ζ_{−n}(e^n ·) u(e^n ·)\|^p_{H^γ_p},    (2.3)

where {ζ_n : n ∈ Z} is a suitable sequence of functions ζ_n ∈ C^∞_0(G). If G = R^d_+, one can take a nonnegative ζ ∈ C^∞_0(R_+) and define ζ_n(x) = ζ(e^n x); then (2.3) becomes the norm of the space H^γ_{p,θ} := H^γ_{p,θ}(R^d_+). It is known that, up to equivalent norms, the space H^γ_{p,θ} is independent of the choice of ζ, and H^γ_{p,θ}(G) and its norm are independent of {ζ_n} if G is bounded.
We use the above notation for ℓ_2-valued functions g = (g^1, g^2, ...); for instance, \|g\|_{H^γ_{p,θ}(G, ℓ_2)} is defined by replacing the H^γ_p-norm in (2.3) with the H^γ_p(ℓ_2)-norm. Fix (see [5]) a bounded real-valued function ψ defined in \bar G such that ρ^{|α|−1} |D^α ψ| is bounded for any multi-index α and the functions ψ and ρ are comparable in a neighborhood of ∂G. As in [11], by M_α we denote the operator of multiplying by (x^1)^α, and M = M_1. Define H^γ_{p,θ}(G, τ) := L_p((0, τ] × Ω, P; H^γ_{p,θ}(G)), and define H^γ_{p,θ}(G, τ, ℓ_2) similarly. By H^γ_{p,θ}(G, τ) we denote the space of all functions u ∈ ψH^γ_{p,θ}(G, τ) such that u(0, ·) ∈ U^γ_{p,θ}(G) and, for some f ∈ ψ^{−1}H^{γ−2}_{p,θ}(G, τ) and g ∈ H^{γ−1}_{p,θ}(G, τ, ℓ_2),

du = f\,dt + g^k\,dw^k_t    (2.6)

in the sense of distributions. In other words, for any φ ∈ C^∞_0(G), the equality

(u(t, ·), φ) = (u(0, ·), φ) + \int_0^t (f(s, ·), φ)\,ds + \sum_k \int_0^t (g^k(s, ·), φ)\,dw^k_s

holds for all t ≤ τ with probability one. It is easy to check that, up to equivalent norms, the space H^γ_{p,θ}(G, τ) and its norm are independent of the choice of ψ if G is bounded. Similarly, we define the stochastic Banach space H^γ_p(τ) on R^d (and its norm) by formally taking ψ = 1 and replacing H^γ_{p,θ}(G), U^γ_{p,θ}(G) by H^γ_p, U^γ_p, respectively, in the definition of H^γ_{p,θ}(G, τ). We drop τ in the notation of the corresponding Banach spaces if τ ≡ ∞.
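The role of the weight θ can be seen in a one-dimensional computation of ours (assuming the standard weighted norm \|u\|^p_{L_{p,θ}(G)} = ∫_G |u|^p ρ^{θ−d}\,dx used for these spaces):

```latex
% Example (ours): d = 1, G = \mathbb{R}_+, so \rho(x) = x.
% For u(x) = x^{\beta}\,\mathbf{1}_{(0,1)}(x),
\|u\|_{L_{p,\theta}}^p = \int_0^1 x^{\beta p}\, x^{\theta - 1}\, dx < \infty
\iff \beta p + \theta - 1 > -1 \iff \beta > -\theta/p,
% so larger \theta admits stronger blow-up of u at \partial G.
```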
Some properties of the spaces H^γ_{p,θ}, H^γ_{p,θ}(G, τ) and H^γ_p(τ) are collected in the following lemma (see [3], [7], [8] and [12] for details); from now on, the assumptions on p and θ made there are in force. There exists a constant N depending only on d, p, γ, T (and θ) such that for any t ≤ T, estimate (2.10) holds.

(iv) Let γ − d/p = m + ν for some m = 0, 1, ... and ν ∈ (0, 1). Then for any k ≤ m, D^k u is Hölder continuous of order ν (a Sobolev-type embedding).

Here are our main results.
where the constant N is independent of f^i, \bar f, g, u and u_0. Lemma 2.3 (iv) and (v) yield the following results; it is crucial here that p is bigger than 2.
(i) For any 0 ≤ s < t ≤ τ the corresponding Hölder estimate holds; in particular, if θ ≤ d, then the function u itself is Hölder continuous in x.
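To illustrate the exponent arithmetic behind such Hölder statements, here is a classical (unweighted) example of ours:

```latex
% Example (ours), using the classical embedding
% H^{\gamma}_p(\mathbb{R}^d) \subset C^{\gamma - d/p} for \gamma - d/p \in (0,1):
% with d = 3, p = 4, \gamma = 1,
\gamma - d/p = 1 - \tfrac{3}{4} = \tfrac{1}{4},
% so every u \in H^{1}_{4}(\mathbb{R}^3) has a modification that is
% H\"older continuous of order 1/4. This is why p > 2 (indeed p > d/\gamma)
% can be essential for pointwise regularity.
```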
The following corollary shows that if some extra conditions are assumed, then the solutions are Hölder continuous in (t, x) (regardless of the dimension d).
Then there exists α = α(q, r, d, G) > 0 such that the stated estimate holds.

Proof. It is shown in [3] that under the conditions of the corollary there is a solution v ∈ H^1_{2,d,0}(G, T) satisfying (2.17). By the uniqueness result (Theorem 2.4) in the space H^1_{2,d}(G, T), we conclude that u = v, and thus v ∈ H^1_{p,d}(G, T). We will see that the proof of Theorem 2.4 also depends on the following results on R^d_+ and R^d.

Theorem 2.7. Assume that the stated conditions hold. Then there exist p_0 = p_0(λ, Λ, d) > 2, β_0 = β_0(p, d, λ, Λ) ∈ (0, 1) and
(1.1) with initial data u_0 admits a unique solution u in the class H^1_p(τ), and for this solution the corresponding estimate holds with N depending only on d, p, λ, Λ, K and T.

3. Proof of Theorem 2.7
First we prove the following lemmas.

Proof. It is well known (see [11]) that (3.1) has a unique solution u ∈ H^1_{p,d,0}(T), and the corresponding estimate holds. We will show that one can take N(2) = 1. Let Θ be the collection of functions of the form

g(t, x) = \sum_{i} 1_{(τ_i, τ_{i+1}]}(t) φ_i(x),

where the φ_i are smooth and the τ_i are stopping times with τ_i ≤ τ_{i+1} ≤ T. It is well known that the set Θ is dense in H^γ_{p,θ}(T) for any γ, θ ∈ R. Also, the collection of sequences g = (g^k) such that each g^k ∈ Θ and only finitely many of the g^k are different from zero is dense in H^γ_{p,θ}(T, ℓ_2). Thus, by an approximation argument, we may assume that f and g are of this type.
We continue f(t, x) to be an even function and g(t, x) to be an odd function of x^1. Then obviously f, g ∈ H^γ_p(T) for any γ and p. By Theorem 5.1 in [7], equation (3.1) considered in the whole space R^d has a unique solution v ∈ H^1_p, and v ∈ H^γ_p for any γ. Also, by uniqueness it follows that v is an odd function of x^1 and vanishes at x^1 = 0. Moreover, remembering that v satisfies dv = ∆v\,dt outside the support of f and g, we conclude (see the proof of Lemma 4.2 in [10] for details) that v ∈ H^γ_{p,d} for any γ. Thus both u and v satisfy (3.1) considered in R^d_+ and belong to H^1_{p,d}. By the uniqueness result (Theorem 3.3 in [11]) on R^d_+, we conclude that u = v.
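The even/odd continuations used above can be written explicitly; the following sketch of ours assumes the model equation du = (∆u + f)\,dt + g^k\,dw^k_t, which is invariant under the reflection x^1 ↦ −x^1. Writing x = (x^1, x') with x' = (x^2, ..., x^d):

```latex
% Even continuation of f and odd continuation of g^k from \mathbb{R}^d_+:
\tilde f(t, x^1, x') = f(t, |x^1|, x'), \qquad
\tilde g^k(t, x^1, x') = \operatorname{sgn}(x^1)\, g^k(t, |x^1|, x').
% If v solves the equation on \mathbb{R}^d with data (\tilde f, \tilde g),
% then (t, x) \mapsto -v(t, -x^1, x') is also a solution with the same data,
% so by uniqueness v(t, x^1, x') = -v(t, -x^1, x'); in particular
% v = 0 on \{x^1 = 0\}.
```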
Finally, we see that (3.2) follows from Itô's formula. Indeed (recall that u is infinitely differentiable and vanishes at x^1 = 0),
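For orientation, here is a sketch of ours of the standard Itô-formula computation behind such L_p estimates, for the model equation du = (∆u + f)\,dt + g^k\,dw^k_t on R^d_+ with u = 0 at x^1 = 0:

```latex
% Applying It\^o's formula to |u|^p pointwise and integrating in x:
d \int |u|^p\,dx
  = \int \Big( p|u|^{p-2}u\,(\Delta u + f)
      + \tfrac{p(p-1)}{2}\,|u|^{p-2}\,|g|_{\ell_2}^2 \Big)\,dx\,dt
  + p \int |u|^{p-2}u\, g^k\,dx\,dw^k_t.
% Integrating by parts (the boundary term vanishes since u = 0 at x^1 = 0):
\int |u|^{p-2}u\,\Delta u\,dx = -(p-1) \int |u|^{p-2}\,|u_x|^2\,dx \le 0,
% which produces the dissipative term used to absorb the lower-order ones.
```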

where N is independent of T .
Take p_0 from Lemma 3.2. The method of continuity shows that, to prove the theorem, it suffices to prove that if p ≤ p_0, then (2.19) holds true given that a solution u ∈ H^1_{p,d}(T) already exists.
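For readers unfamiliar with the method of continuity, the following schematic sketch of ours indicates why an a priori estimate upgrades solvability from the Laplacian to the general operator (the notation, including the solution space H and data space F, is ours):

```latex
% For s \in [0, 1] connect the divergence-form operator to the Laplacian:
L_s u := s\, D_i(a^{ij} u_{x^j}) + (1 - s)\,\Delta u.
% Suppose \|u\|_{\mathcal H} \le N \|L_s u\|_{\mathcal F} with N independent of s.
% If L_{s_0} is solvable, then for |s - s_0| small one solves L_s u = f by
L_{s_0} u_{n+1} = f + (L_{s_0} - L_s) u_n,
% a contraction in \mathcal H; hence solvability propagates from s = 0
% (the Laplacian) to s = 1 in finitely many steps.
```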
Step 2 (general case). By the result of Step 1, the required estimate holds. Now it is enough to choose β_0 such that the estimate remains valid for any β ≤ β_0. The theorem is proved.

4. Proof of Theorem 2.8
First we need the following result on R^d, proved in [15].
Again, to prove the theorem, we only need to show that the a priori estimate (2.20) holds for p < p_0 (see also Step 1 below).
(ii) It is enough to repeat the arguments in (i) using Theorem 2.9 in [1] instead of Theorem 5.1 in [7]. Now, to complete the proof, we repeat the arguments in [4]. Take points t_m partitioning [0, T]. Next, instead of random processes on [0, T], one considers processes given on [t_m, T] and, in a natural way, introduces the spaces H^γ_p([t_m, T]). Then one gets a counterpart of the result of Step 2 and concludes the corresponding estimate; by the induction hypothesis the estimate then extends to the whole interval. We see that the induction goes through, and thus the theorem is proved.
To proceed we need the following results.