A $W^1_2$-Theory of Stochastic Partial Differential Systems of Divergence Type on $C^1$ Domains

In this paper we study stochastic partial differential systems of divergence type on $C^1$ space domains in $\mathbb{R}^d$. Existence and uniqueness results are obtained in terms of Sobolev spaces with weights, so that the derivatives of the solution are allowed to blow up near the boundary. The coefficients of the systems are only measurable and are also allowed to blow up near the boundary.


Introduction
In this article we develop a $W^1_2$-theory of stochastic partial differential systems (SPDSs) of $d_1$ equations of divergence type:
$$du^k = \big(D_i(a^{ij}_{kr} u^r_{x^j} + \bar b^i_{kr} u^r + \bar f^{ik}) + b^i_{kr} u^r_{x^i} + c_{kr} u^r + f^k\big)\,dt + \big(\sigma^i_{kr,m} u^r_{x^i} + \nu_{kr,m} u^r + g^k_m\big)\,dw^m_t, \quad t > 0. \tag{1.1}$$
Here, $\{w^m_t : m = 1, 2, \ldots\}$ is a countable set of independent one-dimensional Brownian motions defined on a probability space $(\Omega, \mathcal{F}, P)$. The indices $i$ and $j$ run from $1$ to $d$, while $k, r = 1, 2, \cdots, d_1$ and $m = 1, 2, \cdots$. To keep the expressions simple we use the summation convention on $i, j, r, m$. The coefficients $a^{ij}_{kr}$, $\bar b^i_{kr}$, $b^i_{kr}$, $c_{kr}$, $\sigma^i_{kr,m}$, and $\nu_{kr,m}$ are measurable functions depending on $\omega \in \Omega$, $t$, and $x$. A detailed formulation of (1.1) is given in the subsequent sections.
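Spelled out without the summation convention, the $k$-th equation of (1.1) reads (all symbols as introduced above):

```latex
du^k = \Big( \sum_{i=1}^{d} D_i \Big( \sum_{j=1}^{d}\sum_{r=1}^{d_1} a^{ij}_{kr}\, u^r_{x^j}
        + \sum_{r=1}^{d_1} \bar b^i_{kr}\, u^r + \bar f^{ik} \Big)
      + \sum_{i=1}^{d}\sum_{r=1}^{d_1} b^i_{kr}\, u^r_{x^i}
      + \sum_{r=1}^{d_1} c_{kr}\, u^r + f^k \Big)\,dt
      + \sum_{m=1}^{\infty}\Big( \sum_{i=1}^{d}\sum_{r=1}^{d_1} \sigma^i_{kr,m}\, u^r_{x^i}
      + \sum_{r=1}^{d_1} \nu_{kr,m}\, u^r + g^k_m \Big)\,dw^m_t .
```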
A general theory of stochastic partial differential systems (SPDSs) is needed when we model the interactions among unknowns in natural phenomena with random behavior. For example, the motion of a random string can be modeled by means of SPDSs (see [20] and [2]).
We note that if $d_1 = 1$, then system (1.1) becomes a single stochastic partial differential equation (SPDE) of divergence type. In this case an $L_2$-theory on $\mathbb{R}^d$ was developed long ago, and an account of it can be found, for instance, in [21] and [22] (even if $d_1 > 1$, the $L_2$-theory on $\mathbb{R}^d$, Theorem 2.4, can be easily obtained by adapting the approaches of [21] and [22]). Also, an $L_p$-theory ($p \ge 2$) of such single equations on $C^1$ domains can be found in [4], [6], and [23], where weighted Sobolev spaces are used to allow the derivatives of the solutions to blow up near the boundary. For a comparison with the $L_p$-theory of SPDEs of non-divergence type, we refer to [5], [8], [14], [12], and the references therein.
The main goal of this article is to extend the results of [22], [4], [6], and [23] for single equations to the case of systems, under no smoothness assumptions on the coefficients. We prove uniqueness and existence results for system (1.1) in weighted Sobolev spaces, so that the derivatives of the solutions are allowed to blow up near the boundary. The coefficients of the system are only measurable and are allowed to blow up near the boundary (see (4.32)).
We remark that a $W^1_p$-theory, a desirable further result beyond the $W^1_2$-theory, has not yet been obtained, even under the assumption that the coefficients $a^{ij}_{kr}$ and $\sigma^i_{kr}$ are constants. This is due to the difficulties caused by considering SPDSs instead of SPDEs. For an $L_p$-theory, $p > 2$, one must overcome substantial mathematical difficulties arising in the general setting; one of the main difficulties in the case $p > 2$ is that the arguments we use in the proof of Lemma 3.3 below do not work, since in this case we obtain extra terms which we simply cannot control.
For previous works on certain nonlinear stochastic systems, such as stochastic Navier–Stokes equations, we refer the reader to [1, 16, 18, 17, 19] and the references therein.
The organization of the article is as follows. Section 2 handles the Cauchy problem. In Sections 3 and 4 we develop our theory for systems defined on $\mathbb{R}^d_+$ and on a bounded domain $\mathcal{O}$, respectively. As usual, $\mathbb{R}^d$ stands for the Euclidean space of points $x = (x^1, \ldots, x^d)$. If we write $c = c(\cdots)$, this means that the constant $c$ depends only on what is inside the parentheses.
The authors are sincerely grateful to the referee for many helpful comments and for finding a few errors in an earlier version of this article.

The systems on $\mathbb{R}^d$
In this section we establish some solvability results for linear systems defined on the space domain $\mathbb{R}^d$. These results will be used later for systems defined on $\mathbb{R}^d_+$ or on a bounded $C^1$ domain $\mathcal{O}$. Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and $\{\mathcal{F}_t\}$ a filtration such that $\mathcal{F}_0$ contains all $P$-null sets of $\Omega$; the probability space $(\Omega, \mathcal{F}, P)$ is rich enough that we can define independent one-dimensional $\{\mathcal{F}_t\}$-adapted Wiener processes $\{w^m_t\}_{m=1}^{\infty}$ on it. We let $\mathcal{P}$ denote the predictable $\sigma$-algebra on $\Omega \times (0, \infty)$.
For $p \in [2, \infty)$ and $\gamma \in (-\infty, \infty)$ we define the space of Bessel potentials $H^\gamma_p = H^\gamma_p(\mathbb{R}^d) := \{u : (1 - \Delta)^{\gamma/2} u \in L_p\}$ with the norm
$$\|u\|_{H^\gamma_p} := \|(1 - \Delta)^{\gamma/2} u\|_{L_p}, \qquad (1 - \Delta)^{\gamma/2} u := \mathcal{F}^{-1}\big[(1 + |\xi|^2)^{\gamma/2}\, \mathcal{F}(u)\big].$$
Here, $\mathcal{F}$ is the Fourier transform. Define the corresponding spaces of $d_1$-vector-valued and $\ell_2$-valued functions coordinatewise. Note that $H^\gamma_p$ are the usual Sobolev spaces for $\gamma = 0, 1, 2, \ldots$. It is well known that the first-order differentiation operators $D_i : H^\gamma_p \to H^{\gamma-1}_p$ are bounded. Using the spaces mentioned above, for a fixed time $T$ we define the stochastic Banach spaces
$$\mathbb{H}^\gamma_p(T) := L_p(\Omega \times (0, T], \mathcal{P}; H^\gamma_p).$$
Lastly, we set $U^\gamma_p := L_p(\Omega, \mathcal{F}_0; H^{\gamma - 2/p}_p)$.
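For $p = 2$ the $H^\gamma_p$-norm is simply a weighted $L_2$-norm on the Fourier side, which can be checked numerically via Parseval's identity. A minimal sketch (in one dimension; the grid, test function, and discrete Fourier conventions are illustrative choices, not from the paper):

```python
import numpy as np

# Numerical H^gamma_2 norm in d = 1 via the Fourier multiplier (1 + |xi|^2)^{gamma/2}.
N, L = 2048, 40.0                      # grid size and domain length (illustrative)
dx = L / N
x = (np.arange(N) - N // 2) * dx       # symmetric grid around 0
u = np.exp(-x**2)                      # a smooth, rapidly decaying test function

# approximate continuous Fourier transform and the frequency grid
uhat = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(u))) * dx
xi = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
dxi = xi[1] - xi[0]

def h_norm(gamma):
    # ||u||_{H^gamma_2}^2 = (2 pi)^{-1} * integral of (1 + |xi|^2)^gamma |uhat|^2 dxi
    return float(np.sqrt(np.sum((1.0 + xi**2) ** gamma * np.abs(uhat) ** 2) * dxi / (2.0 * np.pi)))

l2_norm = float(np.sqrt(np.sum(u**2) * dx))   # plain L_2 norm for comparison
```

For $\gamma = 0$ the multiplier is $1$ and the $H^0_2$-norm reduces to the $L_2$-norm, while larger $\gamma$ penalizes high frequencies more heavily.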
in the sense of distributions; that is, for any $\varphi \in C^\infty_0$ and $k = 1, 2, \cdots, d_1$, the equality
$$(u^k(t, \cdot), \varphi) = (u^k(0, \cdot), \varphi) + \int_0^t (f^k(s, \cdot), \varphi)\, ds + \sum_m \int_0^t (g^k_m(s, \cdot), \varphi)\, dw^m_s$$
holds (a.s.) for all $t \le T$. We write $f = \mathbb{D}u$ and $g = \mathbb{S}u$ to denote the deterministic part and the stochastic part of $u$, respectively; we also write $\mathbb{D}^k u = f^k$, $\mathbb{S}^k u = g^k$, and $\mathbb{S}^k_m u = g^k_m$. The norm in $\mathcal{H}^{\gamma+2}_p(T)$ is introduced by
$$\|u\|_{\mathcal{H}^{\gamma+2}_p(T)} := \|u\|_{\mathbb{H}^{\gamma+2}_p(T)} + \|\mathbb{D}u\|_{\mathbb{H}^{\gamma}_p(T)} + \|\mathbb{S}u\|_{\mathbb{H}^{\gamma+1}_p(T)} + \|u(0, \cdot)\|_{U^{\gamma+2}_p}.$$

Remark 2.2. Note that since the coefficients in system (1.1) are only measurable, the space $\mathcal{H}^{\gamma+2}_p(T)$ is not appropriate for system (1.1) unless $\gamma = -1$.
We set $A^{ij} = (a^{ij}_{kr})$, and similarly write $\bar B^i$, $B^i$, $C$, $\Sigma^i$, and $N$ for the matrices of the remaining coefficients, where in the latter cases the entries are $\ell_2$-valued. Throughout the article we assume the following.

Assumption 2.3. (i) The coefficients $a^{ij}_{kr}$, $\bar b^i_{kr}$, $b^i_{kr}$, $c_{kr}$, $\sigma^i_{kr,m}$, and $\nu_{kr,m}$ are $\mathcal{P} \times \mathcal{B}(\mathbb{R}^d)$-measurable, where $\mathcal{B}(\mathbb{R}^d)$ denotes the Borel $\sigma$-field of $\mathbb{R}^d$.
(ii) There exist finite constants $\delta, K_j$ ($j = 1, \ldots, d$), $L > 0$ such that the coercivity condition holds for any $\omega \in \Omega$ and $t > 0$, where $\xi$ is any (real) $d_1 \times d$ matrix, $\xi^i$ is the $i$-th column of $\xi$, and again the summations on $i, j$ are understood. Moreover, we assume that the corresponding boundedness condition holds for any $\omega$, $t > 0$, $x \in \mathbb{R}^d$, and $i, j = 1, \ldots, d$.

Our main theorem in this section is the following.
3. Applying the stochastic product rule $d|u^k|^2 = 2u^k\,du^k + du^k\,du^k$ for each $k$ (see Remark 2.5 below), we obtain (2.9), in which the summations on $r$ and $i$ appear. Taking expectations, integrating with respect to $x$, and integrating by parts in (2.9), we obtain, for any $\varepsilon > 0$, the first estimate; similarly, we obtain the second. Choosing $\varepsilon$ small, we arrive at bounds in which $c$ does not depend on $T$. Recalling the remark in Step 1, we see that the first inequality implies (2.7); the second inequality together with Gronwall's inequality leads us to (2.8). The theorem is proved.
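The quadratic-variation term in the product rule used here can be written out explicitly. Writing $du^k = F^k\,dt + G^k_m\,dw^m_t$ for the drift and diffusion parts of (1.1) (a shorthand, not notation used elsewhere in the paper), the independence of the $w^m$ gives

```latex
d|u^k|^2 = 2u^k\,du^k + du^k\,du^k
         = \Big( 2u^k F^k + \sum_{m} |G^k_m|^2 \Big)\,dt + 2u^k G^k_m\,dw^m_t,
\qquad
G^k_m = \sigma^i_{kr,m}\, u^r_{x^i} + \nu_{kr,m}\, u^r + g^k_m .
```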
Remark 2.5. In (2.9) we assumed that $u^k(t, x)$ has an Itô differential for each $x$; however, Itô's formula works even when $u^k$ has an Itô differential only in the sense of distributions (see Theorem 2.1 of [10]).
Alternatively, one can proceed as follows. Take a nonnegative function $\psi \in C^\infty_0(B_1(0))$ with unit integral, and for $\varepsilon > 0$ define $\psi_\varepsilon(x) := \varepsilon^{-d}\psi(x/\varepsilon)$. Applying Itô's formula to the mollified functions, integrating over $\mathbb{R}^d$, and taking expectations, for each $k$ we obtain the same estimates.

The systems on $\mathbb{R}^d_+$

In this section we present some results for systems defined on $\mathbb{R}^d_+$. In the next section these results will be modified and used to develop our theory of systems defined on $C^1$ domains.
Here we use the Banach spaces introduced in [13]. Let $\zeta \in C^\infty_0(\mathbb{R}_+)$ be a nonnegative function satisfying
$$\sum_{n \in \mathbb{Z}} \zeta(e^n x) \ge c > 0 \qquad \text{for all } x \in \mathbb{R}_+, \tag{3.15}$$
where $c$ is a constant. Note that any nonnegative function $\zeta \in C^\infty_0(\mathbb{R}_+)$ with $\zeta > 0$ on $[1, e]$ satisfies (3.15). For $\theta, \gamma \in \mathbb{R}$, let $H^\gamma_{p,\theta} = H^\gamma_{p,\theta}(\mathbb{R}^d_+)$ denote the set of all distributions $u$ on $\mathbb{R}^d_+$ such that
$$\|u\|^p_{H^\gamma_{p,\theta}} := \sum_{n \in \mathbb{Z}} e^{n\theta}\, \|\zeta\, u(e^n \cdot)\|^p_{H^\gamma_p} < \infty;$$
here, extending $u$ as zero for $x^1 \le 0$, one can regard it as a distribution defined on $\mathbb{R}^d$. If $g = (g^1, g^2, \ldots, g^{d_1})$ and each $g^k$ is an $\ell_2$-valued function, then we define the norm coordinatewise.
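As a quick sanity check of (3.15): any nonnegative $\zeta \in C^\infty_0(\mathbb{R}_+)$ that is strictly positive on $[1, e]$ makes $\sum_n \zeta(e^n x)$ bounded below on $\mathbb{R}_+$, since every $x > 0$ satisfies $e^n x \in [1, e]$ for some integer $n$. A minimal numerical sketch (the bump function and test grid are illustrative choices, not from the paper):

```python
import math

def zeta(t):
    # smooth nonnegative bump supported in (1/2, 2e), strictly positive on [1, e]
    a, b = 0.5, 2.0 * math.e
    if t <= a or t >= b:
        return 0.0
    s = (t - a) / (b - a)              # rescale the support to (0, 1)
    return math.exp(-1.0 / (s * (1.0 - s)))

def sum_zeta(x, n_range=40):
    # approximates sum over n in Z of zeta(e^n x); only finitely many terms are nonzero
    return sum(zeta(math.exp(n) * x) for n in range(-n_range, n_range + 1))

# the sum stays bounded below by a positive constant over many scales of x > 0
lower = min(sum_zeta(10.0 ** k) for k in range(-6, 7))
```

The lower bound here is simply $\min_{[1,e]} \zeta > 0$: dilating by powers of $e$ always places at least one term of the sum inside $[1, e]$.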
It is known (see [13]) that, up to equivalent norms, the space $H^\gamma_{p,\theta}$ is independent of the choice of $\zeta$. Also, for any $\eta \in C^\infty_0(\mathbb{R}_+)$ we have
$$\sum_{n \in \mathbb{Z}} e^{n\theta}\, \|\eta\, u(e^n \cdot)\|^p_{H^\gamma_p} \le c\, \|u\|^p_{H^\gamma_{p,\theta}},$$
where $c$ depends only on $d$, $d_1$, $\gamma$, $\theta$, $p$, $\eta$, $\zeta$. Furthermore, if $\gamma$ is a nonnegative integer, then
$$\|u\|^p_{H^\gamma_{p,\theta}} \sim \sum_{|\alpha| \le \gamma} \int_{\mathbb{R}^d_+} \big|(x^1)^{|\alpha|} D^\alpha u(x)\big|^p (x^1)^{\theta - d}\, dx. \tag{3.18}$$
Below we collect some other properties of the spaces $H^\gamma_{p,\theta}$. Let $M^\alpha$ be the operator of multiplication by $(x^1)^\alpha$ and $M = M^1$.
(i) Assume that $\gamma - d/p = m + \nu$ for some $m = 0, 1, \cdots$ and $\nu \in (0, 1]$. Then for any $u \in H^\gamma_{p,\theta}$ and $i \in \{0, 1, \cdots, m\}$, the corresponding pointwise estimates hold.

Proof. All the results are taken from [13]. We only comment briefly on (v), since its statement may look different there. By Remark 2.15 of [13], for any $u \in H^\gamma_{p,\theta}$ there exist $u_1, u_2, \ldots$ giving the required decomposition; thus it is enough to apply this result with $u = Mv$.

We define the following stochastic Banach spaces.
Let us denote
2. Again, as in the proof of Theorem 2.4, we apply the stochastic product rule $d|u^k|^2 = 2u^k\,du^k + du^k\,du^k$ for each $k$ (see Remark 2.5) and obtain (3.21), where the summations on $i, j, r$ are understood. Denote $c = \theta - d$. For each $k$, the first term on the right-hand side of (3.21) can be rewritten by integration by parts, and the second term can be estimated directly. Thus, summing the terms in (3.21) over $k$ and rearranging, we obtain an estimate valid for any $\kappa, \varepsilon > 0$; this is because for any vectors $v, w \in \mathbb{R}^n$ and $\kappa > 0$ a weighted Cauchy inequality holds, and consequently the mixed terms can be absorbed. Now condition (2.4), inequality (3.22), the inequality of Corollary 6.2 in [13], and Lemma 3.1 (iv) lead us to the desired bound. It is enough to take $\kappa = 2K/(d + 1 - \theta)$ and observe that (3.19) is equivalent to the resulting condition. The lemma is proved.
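The elementary inequality behind the absorption step above is the weighted Cauchy (Young) inequality: for any vectors $v, w \in \mathbb{R}^n$ and $\kappa > 0$,

```latex
2|\langle v, w\rangle| \le \kappa |v|^2 + \kappa^{-1} |w|^2,
```

which follows from $0 \le |\kappa^{1/2} v \mp \kappa^{-1/2} w|^2$. Each cross term is thus split into a piece absorbed into the coercive term and a piece controlled with the factor $\kappa^{-1}$.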
Here is the main result of this section.
Proof. As before, we only prove that the a priori estimate (3.26) holds given that a solution u already exists.
Step 1. Assume that $u$ vanishes when $x^1$ is near zero or infinity, and that $\bar b^i = b^i = c = 0$ and $\nu = 0$. In this case the a priori estimate follows from Lemma 3.3.
Step 2. Assume only that $u$ vanishes when $x^1$ is near zero or infinity. Then Step 1 applies with the lower-order terms moved to the free terms, and since $\|\cdot\|_{H^{-1}_{2,\theta}} \le \|\cdot\|_{L_{2,\theta}}$, we easily see that the resulting right-hand side admits the required bound. Now it is enough to take $\beta_0$ such that $c\beta < 1/2$ for any $\beta \le \beta_0$.
Step 3. General case. Let $\beta_0$ be from Step 2. Take a sequence of smooth functions $\eta_n(x) = \eta_n(x^1) \in C^\infty_0(\mathbb{R}_+)$ such that $\eta_n(x) \to 1$, the $M D\eta_n$ are bounded uniformly in $n$, and $\eta_n v \to v$ in $\mathbb{H}^\gamma_{2,\theta}(T)$ as $n \to \infty$ for any $v \in \mathbb{H}^\gamma_{2,\theta}(T)$ (see, for instance, the proof of Theorem 2.9 in [14]). Note that $u_n := \eta_n u$ satisfies
$$du^k_n = \big(D_i(a^{ij}_{kr} u^r_{n x^j} + \bar b^i_{kr} u^r_n + \bar f^{ik}_n) + b^i_{kr} u^r_{n x^i} + c_{kr} u^r_n + f^k_n\big)\,dt + \big(\sigma^i_{kr,m} u^r_{n x^i} + \nu_{kr,m} u^r_n + g^k_{n,m}\big)\,dw^m_t,$$
where the modified free terms $\bar f^{ik}_n$, $f^k_n$, $g^k_{n,m}$ involve $u$ and the derivatives of $\eta_n$. Then by the result of Step 2 the estimate holds for $u_n$. Finally, one gets the desired estimate by letting $n \to \infty$. Indeed, for instance, since $u \in \mathbb{H}^1_{2,\theta}(T)$ and the $M \eta_{n x}$ are bounded uniformly in $n$, the convergence follows by the dominated convergence theorem.

Remark 3.5. We do not know how sharp (3.19) is. However, if $\theta \notin (d - 1, d + 1)$, then Theorem 3.4 is false even for the heat equation $u_t = \Delta u + f$ (see [13]). We also mention that if the coefficients are sufficiently smooth in $x$, then one can obtain a considerably wider range of $\theta$. This will be shown in the subsequent article [7].

The systems on $\mathcal{O} \subset \mathbb{R}^d$
In this section we assume the following.

Assumption 4.1. The domain $\mathcal{O}$ is of class $C^1_u$. In other words, for any $x_0 \in \partial\mathcal{O}$ there exist constants $r_0, K_0 \in (0, \infty)$ and a one-to-one continuously differentiable mapping $\Psi$ of $B_{r_0}(x_0)$ onto a domain $J \subset \mathbb{R}^d$ satisfying the usual flattening conditions.

To proceed further we introduce some well-known results from [3] and [9]. To describe the assumptions on $\bar f^i$, $f$, and $g$ in (1.1) with space domain $\mathcal{O}$, we use the Banach spaces introduced in [9] and [15]. Let $\zeta \in C^\infty_0(\mathbb{R}_+)$ be a nonnegative function satisfying (3.15), and let $\psi$ be a regularized distance function comparable to $\mathrm{dist}(\cdot, \partial\mathcal{O})$. For $x \in \mathcal{O}$ and $n \in \mathbb{Z} := \{0, \pm 1, \ldots\}$ we define $\zeta_n(x) = \zeta(e^n \psi(x))$.
Then we have $\sum_n \zeta_n \ge c$ in $\mathcal{O}$. For $\theta, \gamma \in \mathbb{R}$, let $H^\gamma_{p,\theta}(\mathcal{O})$ denote the set of all distributions $u = (u^1, u^2, \cdots, u^{d_1})$ on $\mathcal{O}$ such that
$$\|u\|^p_{H^\gamma_{p,\theta}(\mathcal{O})} := \sum_{n \in \mathbb{Z}} e^{n\theta}\, \|\zeta_{-n}(e^n \cdot)\, u(e^n \cdot)\|^p_{H^\gamma_p} < \infty. \tag{4.28}$$
If $g = (g^1, g^2, \ldots, g^{d_1})$ and each $g^k$ is an $\ell_2$-valued function, then we define the norm coordinatewise. It is known (see, for instance, [15]) that, up to equivalent norms, the space $H^\gamma_{p,\theta}(\mathcal{O})$ is independent of the choice of $\zeta$ and $\psi$. Moreover, if $\gamma = n$ is a nonnegative integer, then it holds that
$$\|u\|^p_{H^n_{p,\theta}(\mathcal{O})} \sim \sum_{|\alpha| \le n} \int_{\mathcal{O}} \big|\psi^{|\alpha|}(x)\, D^\alpha u(x)\big|^p\, \psi^{\theta - d}(x)\, dx. \tag{4.29}$$
By comparing (3.18) and (4.29), one finds that the two spaces $H^\gamma_{p,\theta}(\mathbb{R}^d_+)$ and $H^\gamma_{p,\theta}(\mathcal{O})$ are different when $\psi$ is bounded. Also, it is easy to see that (4.30) holds for any nonnegative function $\xi = \xi(\psi)$. In particular, if $u(x) = 0$ for $x^1 \ge r$, then for any $\alpha \in \mathbb{R}$ we get (4.31), where $c = c(r, \alpha, \gamma, p, \theta)$. We also mention that the space $H^\gamma_{p,\theta}$ can be defined on the basis of (4.28) by formally taking $\psi(x) = x^1$, so that $\zeta_{-n}(e^n x) = \zeta(x^1)$ and (4.28) becomes the norm of $H^\gamma_{p,\theta}(\mathbb{R}^d_+)$ from Section 3. We state the following lemma, similar to Lemma 3.1.
in the sense of distributions. The norm in $\mathcal{H}^{\gamma+2}_{p,\theta}(\mathcal{O}, T)$ is introduced by analogy with the previous sections. The following result is due to N. V. Krylov (see, for instance, [11]).
In particular, for any $t \le T$, the corresponding integral estimate holds.
Assumption 4.6. There is control on the behavior of $\bar b^i_{kr}$, $b^i_{kr}$, $c_{kr}$, and $\nu_{kr}$ near $\partial\mathcal{O}$; namely, (4.32) holds.

Note that Assumption 4.6 allows the coefficients to be unbounded and to blow up near the boundary. Condition (4.32) holds if, for instance, the relevant bound is satisfied for some $c, \varepsilon > 0$.
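As an illustration of the kind of blow-up Assumption 4.6 permits (a hypothetical example, assuming (4.32) requires the $\psi$-weighted first-order coefficients to remain small near $\partial\mathcal{O}$, as the surrounding text indicates): on $\mathcal{O} = \mathbb{R}^d_+$, where $\psi(x) \sim x^1$, the choice

```latex
b^i_{kr}(x) = (x^1)^{\varepsilon - 1}, \qquad \varepsilon \in (0, 1),
```

is unbounded near the boundary, yet $\psi\,|b^i_{kr}| \sim (x^1)^{\varepsilon} \to 0$ as $x^1 \downarrow 0$.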
Here is the main result of this section.
Theorem 4.7. Let $\mathcal{O} = \mathbb{R}^d_+$ or a bounded $C^1_u$ domain. Suppose (3.19) and Assumption 4.6 hold. Then for any free terms $\bar f^i$, $f$, $g$ in the appropriate weighted spaces and $u_0 \in U^1_{2,\theta}(\mathcal{O})$, the system (1.1) admits a unique solution $u \in \mathcal{H}^1_{2,\theta}(\mathcal{O}, T)$, and for this solution the a priori estimate (4.33) holds.

To prove Theorem 4.7 we need the following a priori estimate near the boundary for the case $\partial\mathcal{O} \in C^\infty$.

Lemma 4.9. Assume that $u \in \mathcal{H}^1_{2,\theta}(\mathcal{O}, T)$ is a solution of system (1.1) such that $u(t, x) = 0$ for any $x \in \mathcal{O} \setminus B_r(x_0)$, where $x_0 \in \partial\mathcal{O}$ and $r > 0$. Then there exists a constant $r_1 \in (0, 1)$, independent of $x_0$ and $u$, such that if $r \le r_1$, then the a priori estimate (4.33) holds.
Then, in fact, Lemma 4.9 holds if $u(t, x) = 0$ for $x^1 \ge r_1$ for some $r_1$. Indeed, by (4.32) there is $r_1 > 0$ such that (4.38) holds for $x^1 \le r_1$. Now, if $u(t, x) = 0$ for $x^1 \ge r_1$, then without affecting the system we may put $\bar b^i = b^i = c = 0$ and $\nu = 0$ for $x^1 \ge r_1$, so that (4.38) holds for all $x$. Consequently, the assertion follows from Theorem 3.4 and (4.31).
Since $a^{ij}$ is only measurable, at best we get the weaker bound, and consequently (4.41) only leads us to a useless inequality. Hence, to avoid estimating the norm $\|\psi a^{ij}_{kr} u^r_{x^j} \zeta_{n x^i}\|_{\mathbb{H}^{-1}_{2,\theta}(\mathcal{O}, T)}$, we proceed as in [6]. We note that for each $k$ we have $a^{ij}_{kr} u^r_{x^j} \zeta_{n x^i} \in \psi^{-1}\mathbb{L}_{2,\theta}(\mathcal{O}, T)$.
We see that the induction goes through and thus the theorem is proved.