ON THE UNIQUE SOLVABILITY OF SOME NONLINEAR STOCHASTIC PDES

Abstract. The Cauchy problem for one-dimensional nonlinear stochastic partial differential equations is studied. The uniqueness and existence of solutions in the $\mathcal H^2_p(T)$-space are proved.


Introduction
The aim of this paper is to prove the unique solvability of the following one-dimensional nonlinear stochastic partial differential equation (SPDE):

$$du = [a(t, x, u)u_{xx} + f(t, x)]\,dt + [\sigma^k(t)u_x + g^k(t, x)]\,dw^k_t, \quad u(0, \cdot) = u_0, \qquad (1.1)$$

under suitable conditions on $a$, $\sigma$, $f$, $g^k$ and $u_0$. Apart from some special classes, there are not many works on the solvability of nonlinear SPDEs.
In [8], [9], Krylov developed an $L_p$-theory of SPDEs that includes some nonlinear equations. The main assumption there (Assumption 4.6 in [9]) is that the nonlinear terms are strictly subordinated to the linear main operators. Our equation (1.1) does not fall into this class, because the nonlinearity sits in the main operator.
There are other interesting classes of nonlinear SPDEs; among them are semilinear equations and nonlinear equations of monotone type.
Semilinear equations have been studied extensively, mostly by means of semigroup theory. They are evolution equations on some Hilbert space:

$$du = [Au + F(t, u)]\,dt + B(t, u)\,dW(t), \qquad (1.3)$$

where $A$ is the infinitesimal generator of a strongly continuous semigroup $S(t)$ and $W(t)$ is a Hilbert-space-valued Wiener process. Under conditions on the nonlinear operators $F$ and $B$ such as (local) Lipschitzness or dissipativity, one obtains the solvability of (1.3). The idea is to convert (1.3) into an integral equation using $S(t)$ and then apply a fixed-point argument. We refer the reader to Da Prato-Zabczyk [7].

The theory for nonlinear equations of monotone type has been developed by many mathematicians using the variational approach: Bensoussan-Temam [3], [4] (via time discretisation), Pardoux [16], [17], Krylov-Rozovskii [13], Ahmed [1] (via Galerkin approximation), Bensoussan [2] (via the splitting-up method). One approximates the given nonlinear infinite-dimensional equation by a sequence of "solvable" ones, and then uses the monotonicity of the nonlinear operators to pass to the limit.
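To make the semigroup approach concrete, here is a toy numerical sketch (our illustration, not from the paper): Picard iteration on the mild form of a semilinear stochastic heat equation with the globally Lipschitz nonlinearity $F(u) = \sin u$, one scalar Wiener process, and a sine spectral discretization. All parameters and the specific equation are illustrative assumptions.

```python
import numpy as np

# Toy sketch: Picard iteration on the mild form of
#     du = [u_xx + sin(u)] dt + g dW(t),   u(0) = u0,
# on (0, pi) with Dirichlet conditions. In the sine basis the semigroup
# S(t) acts diagonally as exp(-k^2 t), and the mild formulation reads
#     u(t) = S(t)u0 + int_0^t S(t-s) sin(u(s)) ds + int_0^t S(t-s) g dW(s).

K, M, T = 32, 100, 0.5                  # sine modes, time steps, horizon
dt = T / M
k = np.arange(1, K + 1)

rng = np.random.default_rng(0)
dW = rng.normal(0.0, np.sqrt(dt), M)    # increments of one Wiener process
g = np.exp(-k)                          # smooth noise coefficient (sine modes)
u0 = 1.0 / k**2                         # initial condition (sine modes)

x = np.linspace(0.0, np.pi, 128)
sines = np.sin(np.outer(k, x))          # mode -> physical space

def F(coeffs):
    """Nonlinearity sin(u): evaluate pointwise, project back (crudely)."""
    f_phys = np.sin(coeffs @ sines)
    return (f_phys @ sines.T) * (2.0 / len(x))

def picard_step(u_path):
    """One Picard iteration of the mild equation (left-point rule in time)."""
    Fv = np.array([F(u_path[j]) for j in range(M)])
    new = np.empty_like(u_path)
    for m in range(M + 1):
        acc = np.exp(-k**2 * (m * dt)) * u0
        for j in range(m):
            decay = np.exp(-k**2 * (m - j) * dt)
            acc = acc + decay * (Fv[j] * dt + g * dW[j])
        new[m] = acc
    return new

u = np.tile(u0, (M + 1, 1))             # initial guess: constant-in-time path
for _ in range(10):
    u_next = picard_step(u)
    gap = np.abs(u_next - u).max()      # sup-norm Picard increment
    u = u_next
print(f"final Picard increment: {gap:.2e}")
```

Since $\sin$ is globally Lipschitz and the noise term here does not depend on $u$, the iteration contracts on a short time interval, mirroring the fixed-point argument behind (1.3).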
We note that (1.1) is included in neither of these two theories.
We also mention two recent papers that consider equations similar to ours. Dalecky and Goncharuk [5] studied abstract quasilinear SPDEs, which include the following equation (in a very simplified form) as an example:

$$\frac{\partial u}{\partial t} = a(t, u(t))\Delta u + \int_D c(t, x, y, u(t, x))\,\xi(t, y)\,dy, \qquad (1.4)$$

where $\xi(t, y)$ is a space-time white noise and $D$ is a smooth bounded domain. But it is assumed there that $a(t, \cdot)$ does not depend on the pointwise value of the solution $u$; rather, it is a functional of $u$. Da Prato and Tubaro [6] considered equation (1.5) as an application of their theory. Under rather strong regularity assumptions on $b$ and $h$, they proved that (1.5) has a unique solution. The key idea was to transform the stochastic PDE (1.5) into a deterministic equation for almost all $\omega \in \Omega$ using the method of stochastic characteristics (see [18], [19] and the references therein). This allowed them to use the Hölder-space theory for linear PDEs with nonsmooth coefficients and for nonlinear PDEs.

We also apply this transformation technique, to reduce (1.1) to (1.2) (if the $\sigma^k$'s also depended on $x$, this transformation would not work in our approach). Note, however, that after this reduction we still have a stochastic PDE. Using the idea in [11], one could further transform (1.2) into a completely deterministic problem in the spirit of [6], but this would require a very strong regularity assumption on $g$. We therefore do not make this transformation; this is the main difference between our work and [6]. As a consequence, the Hölder-space theory is not available for (1.2), and this is the main difficulty in obtaining the solvability of nonlinear SPDEs like (1.2) in our approach. We first work in a stochastic Sobolev-space setting, made available to us by the work of Krylov [8], [9], and then apply the embedding theorems for these spaces to control the Hölder norms of the solution and its derivatives. Now we briefly describe our method and the organization of this paper.
Our approach depends heavily on Krylov's $L_p$-theory [8], [9]. Thus in Section 2 we briefly summarize some of the notation, definitions and theorems of this theory. In particular, the solvability of linear equations in the $\mathcal H^n_p(\tau)$-spaces and the embedding theorems for these spaces enable us to prove, in Section 3, the "local existence" for (1.2), by adapting the method-of-continuity argument for nonlinear PDEs (see §1.1 of [10]) to our stochastic PDE. In Section 4, we first prove the uniqueness of the solution of (1.2). Then we show that the a priori estimate for one-dimensional SPDEs with discontinuous coefficients established in [20], together with the local solvability and the uniqueness, yields the main result of this paper: the unique solvability of (1.2). The unique solvability of (1.1) follows from this. In many places we introduce a sequence of stopping times, which turns out to be a very useful technical tool.
Finally, we remark that we could consider slightly more general equations (for example, with lower-order terms added to (1.1)) and obtain the same result under slightly weaker conditions. We have not done so, in order to present the main idea with as few extra technical details as possible. We also believe that one can obtain the unique solvability of (1.1) with $\sigma^k = \sigma^k(t, x)$ by a perturbation argument, under relatively mild assumptions on the regularity of the $\sigma^k$'s. We will carry out this generalization elsewhere.

Notation and Preliminary Results
Here we present some of the notation, definitions and theorems of Krylov [8], [9], formulated in a form convenient for our purpose. We also state an a priori estimate for one-dimensional SPDEs with discontinuous coefficients, which is used in Section 4.
Let $\mathbb R^1$ be the one-dimensional Euclidean space, $T$ a fixed positive number, $(\Omega, \mathcal F, P)$ a complete probability space, $\{\mathcal F_t;\ t \ge 0\}$ an increasing filtration of $\sigma$-fields $\mathcal F_t \subset \mathcal F$ containing all $P$-null subsets of $\Omega$, and $\mathcal P$ the predictable $\sigma$-field generated by $\{\mathcal F_t\}$. Let $\{w^k_t;\ k = 1, 2, \dots\}$ be independent one-dimensional $\mathcal F_t$-adapted Wiener processes defined on $(\Omega, \mathcal F, P)$. For these standard notions we refer the reader to [12].
Let $\mathcal D$ be the set of real-valued Schwartz distributions defined on $C^\infty_0(\mathbb R^1)$. For given $p \in [2, \infty)$ and a nonnegative real number $n$, define the space $H^n_p = H^n_p(\mathbb R^1)$ (called the space of Bessel potentials, or the Sobolev space with fractional derivatives) as the space of all generalized functions $u$ such that $(1-\Delta)^{n/2}u \in L_p = L_p(\mathbb R^1)$. For $u \in H^n_p$ and $\varphi \in C^\infty_0$, by definition

$$(u, \varphi) = \big((1-\Delta)^{n/2}u, (1-\Delta)^{-n/2}\varphi\big).$$

For $u \in H^n_p$ one introduces the norm

$$\|u\|_{n,p} := \|(1-\Delta)^{n/2}u\|_p,$$

where $\|\cdot\|_p$ is the norm in $L_p$. It is known that $H^n_p$ is a Banach space with the norm $\|\cdot\|_{n,p}$ and that the set $C^\infty_0$ is dense in $H^n_p$. Recall that for integers $n \ge 0$ the space $H^n_p$ coincides with the Sobolev space $W^n_p = W^n_p(\mathbb R^1)$.

We apply the same definitions to $l_2$-valued functions $g$, where $l_2$ is the set of all real-valued sequences $g = \{g^k;\ k = 1, 2, \dots\}$ with the norm defined by $|g|^2_{l_2} := \sum_k |g^k|^2$. Specifically, $\|g\|_p := \big\||g|_{l_2}\big\|_p$ and $\|g\|_{n,p} := \big\||(1-\Delta)^{n/2}g|_{l_2}\big\|_p$.

For $n \in \mathbb R$ and a stopping time $\tau$, set $\mathbb H^n_p(\tau) := L_p(\Omega \times (0, \tau], \mathcal P; H^n_p)$ and $\mathbb H^n_p(\tau, l_2) := L_p(\Omega \times (0, \tau], \mathcal P; H^n_p(l_2))$. Every stopping time $\tau$ appearing in this paper satisfies $\tau \le T$ a.s.

Definition 2.2. We write $u \in \mathcal H^n_p(\tau)$ if $u \in \mathbb H^n_p(\tau)$, $u(0, \cdot) \in L_p(\Omega, \mathcal F_0; H^{n-2/p}_p)$, and there exist $f \in \mathbb H^{n-2}_p(\tau)$ and $g \in \mathbb H^{n-1}_p(\tau, l_2)$ such that for any $\varphi \in C^\infty_0$, with probability 1 the equality

$$(u(t, \cdot), \varphi) = (u(0, \cdot), \varphi) + \int_0^t (f(s, \cdot), \varphi)\,ds + \sum_k \int_0^t (g^k(s, \cdot), \varphi)\,dw^k_s \qquad (2.2)$$

holds for all $t \le \tau$. We also define $\mathcal H^n_{p,0}(\tau) := \{u \in \mathcal H^n_p(\tau) : u(0, \cdot) = 0\}$. If (2.2) holds, we write $f = \mathbb Du$, $g = \mathbb Su$, and we also write

$$\|u\|_{\mathcal H^n_p(\tau)} := \|u\|_{\mathbb H^n_p(\tau)} + \|\mathbb Du\|_{\mathbb H^{n-2}_p(\tau)} + \|\mathbb Su\|_{\mathbb H^{n-1}_p(\tau, l_2)} + \big(E\|u(0, \cdot)\|^p_{n-2/p,\,p}\big)^{1/p}. \qquad (2.3)$$

We always understand equations like (1.1) in the sense of Definition 2.2, which means that we look for a function $u \in \mathcal H^n_p(\tau)$ such that $\mathbb Du = a(t, x, u)u_{xx} + f$ and $\mathbb S^k u = \sigma^k(t)u_x + g^k$.

Lemma 2.3. The spaces $\mathcal H^n_p(\tau)$ and $\mathcal H^n_{p,0}(\tau)$ are Banach spaces with the norm (2.3). In addition, for $u \in \mathcal H^n_p(\tau)$,

$$E \sup_{t \le \tau} \|u(t, \cdot)\|^p_{n-2,\,p} \le N\|u\|^p_{\mathcal H^n_p(\tau)}.$$

Proof. See Theorem 2.7 of [9].

Theorem 2.4. If $u_j \in \mathcal H^n_p(T)$ and $\|u_j\|_{\mathcal H^n_p(T)} \le K$, where $K$ is a finite constant, then there exist a subsequence $j'$ and a function $u \in \mathcal H^n_p(T)$ such that (i) $u_{j'}$, $u_{j'}(0, \cdot)$, $\mathbb Du_{j'}$ and $\mathbb Su_{j'}$ converge weakly to $u$, $u(0, \cdot)$, $\mathbb Du$ and $\mathbb Su$, respectively, and (ii) $\|u\|_{\mathcal H^n_p(T)} \le K$.

Proof. See Theorem 2.11 of [9].
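For $p = 2$ the Bessel-potential norm is computable spectrally, since $(1-\Delta)^{n/2}$ is the Fourier multiplier $(1+|\xi|^2)^{n/2}$. The following numerical sketch (our illustration, not part of the paper) checks this on a Gaussian, approximating $\mathbb R^1$ by a large periodic box.

```python
import numpy as np

# Numerical sketch: for p = 2 the Bessel-potential norm
#   ||u||_{n,2} = ||(1 - Delta)^{n/2} u||_{L_2}
# is computable by FFT, since (1 - Delta)^{n/2} is the Fourier
# multiplier (1 + |xi|^2)^{n/2}. R^1 is approximated by [-L, L).

L, N = 40.0, 4096
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)      # frequency grid

def bessel_norm(u, n):
    """|| (1 - Delta)^{n/2} u ||_{L_2}, computed spectrally."""
    u_hat = np.fft.fft(u) * dx                   # approximates the Fourier transform
    w = (1.0 + xi**2) ** (n / 2.0)
    dxi = 2.0 * np.pi / (N * dx)
    # Plancherel: ||v||_2^2 = (1 / 2 pi) * int |v_hat|^2 d xi
    return np.sqrt(np.sum(np.abs(w * u_hat) ** 2) * dxi / (2.0 * np.pi))

u = np.exp(-x**2)                                # Gaussian test function

# n = 0 recovers the plain L_2 norm; for this Gaussian ||u||_2 = (pi/2)^{1/4}.
n0 = bessel_norm(u, 0.0)
print(n0, (np.pi / 2.0) ** 0.25)

# The norms increase with n, since the multiplier (1 + xi^2)^{n/2} >= 1 does.
print(bessel_norm(u, 1.0), bessel_norm(u, 2.0))
```

For integer $n$ this agrees with the usual $W^n_2$ norm up to equivalence, which is the content of the identification $H^n_p = W^n_p$ above.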
On $[0, \tau]$, consider the following equation:

$$du = [a(t, x)u_{xx} + c(t, x)u + f(t, x)]\,dt + g^k(t, x)\,dw^k_t, \qquad (2.4)$$

where $a$, $c$, $f$ are real-valued and $g$ is an $l_2$-valued function defined for $\omega \in \Omega$, $t \ge 0$, $x \in \mathbb R^1$. We consider this equation in the sense of Definition 2.2.
The functions $f(t, x)$ and $g(t, x)$ are predictable as functions taking values in $H^n_p$ and $H^{n+1}_p(\mathbb R^1, l_2)$, respectively.
Theorem 2.5. Let Assumptions 2.1-2.4 be satisfied and let $u_0 \in L_p(\Omega, \mathcal F_0; H^{n+2-2/p}_p)$. Then the Cauchy problem for equation (2.4) on $[0, \tau]$ with the initial condition $u(0, \cdot) = u_0$ has a unique solution $u \in \mathcal H^{n+2}_p(\tau)$. For this solution, we have

$$\|u\|_{\mathcal H^{n+2}_p(\tau)} \le N\Big(\|f\|_{\mathbb H^n_p(\tau)} + \|g\|_{\mathbb H^{n+1}_p(\tau, l_2)} + \big(E\|u_0\|^p_{n+2-2/p,\,p}\big)^{1/p}\Big),$$

where the constant $N$ depends only on $n$, $\lambda$, $\Lambda$, $K$, $T$ and the function $\kappa_\varepsilon$.
As a final preliminary result, we state a theorem recently established by the author [20]. In this theorem we assume only that $a$ is $\mathcal P \times \mathcal B(\mathbb R^1)$-measurable and satisfies $\lambda \le a \le \Lambda$. Moreover, under this assumption any solution $u \in \mathcal H^2_p(T)$ of (2.4) satisfies an a priori estimate of the type in Theorem 2.5 (with $n = 0$), with a constant independent of any continuity properties of $a$.

"Local" Solvability and Regularity
We consider the following one-dimensional nonlinear SPDE:

$$du = [a(t, x, u)u_{xx} + f(t, x)]\,dt + g^k(t, x)\,dw^k_t, \quad u(0, \cdot) = u_0. \qquad (3.1)$$

We make the following assumptions.

Assumption 3.1. $\lambda \le a(t, x, u) \le \Lambda$ for all values of the arguments, where $\lambda$ and $\Lambda$ are fixed positive constants.
Assumption 3.2. $q \ge 2$ and $0 < \nu \le 1$ are numbers satisfying the conditions $1/2 > \beta > \alpha > 1/q$ and $\nu > 2\beta + 1/q$, for some $\alpha$ and $\beta$.

Assumption 3.3. For any $\omega \in \Omega$, $a$ is Hölder continuous in $t$, continuously differentiable in $x$ and twice continuously differentiable in $u$; moreover, the corresponding norms are bounded by a constant $K_1$.

Assumption 3.4. $u_0 \in L_p(\Omega, \mathcal F_0; H^{2-2/p}_p)$, and the functions $f(t, x)$, $g(t, x)$ are predictable as functions taking values in $L_p \cap H^\nu_q$ and in $H^1_p \cap H^{1+\nu}_q$ ($l_2$-valued), respectively, with the corresponding norms bounded by a constant $K_2$.

Theorem 3.1 (local solvability). Let Assumptions 3.1-3.4 be satisfied. Then there exist a stopping time $\tau \le T$ with $E\tau > 0$ and a function $u \in \mathcal H^2_p(\tau)$ that is a solution of (3.1) on $[0, \tau]$.

Proof. For simplicity of presentation, we assume that $u_0 = 0$. The general case is treated in a similar way.
Step 1. Consider the continuity-method family of equations with parameter $\mu \in [0, 1]$. If $\mu = 1$, there exists a unique solution $u_1 \in \mathcal H^2_p(T) \cap \mathcal H^{2+\nu}_q(T)$ by Theorem 2.5; moreover, by Theorems 2.5 and 2.6 and Assumption 3.3, $u_1$ satisfies the estimate (3.3). Let $M$ be a positive number. By (3.3), Assumption 3.2 and the Sobolev embedding $H^{2+\nu-2\beta}_q \subset C^{2+\nu-2\beta-1/q}$, there exist a positive constant $\gamma$ and a stopping time $\tau_1 \le T$ controlling the Hölder norms of $u_1$ on $[0, \tau_1]$. Notice that we can take $\tau_1$ as close to $T$ as we wish by taking $s$ larger. Define the set $\Phi(\delta)$, and consider the following problem for a function $v$ with zero initial condition, where $w$ is a given function from $\Phi(\delta)$ and $\mu$ is a number in $[0, 1]$.
If we define the coefficients $a$, $c$, $f$ (the free term including the product $a_u(t, x, u_1)u_1 w$) and $g^k := g^k$ accordingly, then it is easy to check that $a$, $c$, $f$ and $g^k$ satisfy Assumptions 2.1-2.4 with $n = 0$. For example, let us show that $a_u(\cdot, \cdot, u_1)u_1 w \in \mathbb L_p(\tau_1)$: by Lemma 4.2(i) of [9], it suffices to check that $u_1 \in \mathbb L_p(\tau_1)$ and $a_u(\cdot, \cdot, u_1)w \in C^0$. By our assumptions, both inclusions are obvious.
Thus, by Theorem 2.5, problem (3.4) has a unique solution $v$, and in this way we can define an operator $\Psi_\mu : w \mapsto v$ on $\Phi(\delta)$. We claim that for some $\delta > 0$, and all $\mu$ sufficiently close to $1$, $\Psi_\mu$ maps $\Phi(\delta)$ into itself; this will follow from the form (3.5) of the equation satisfied by $v$. For fixed $\omega$, (3.5) is a uniformly parabolic equation on $[0, \tau_1(\omega)] \times \mathbb R^1$. By Assumption 3.2 and the definition of $\Phi(\delta)$, the functions $a(t, x, u_1)$, $a_u(u_1)$, $a(t, x, w)$, $w$, $u_1$ and their products appearing in (3.5) are Hölder continuous in $(t, x)$. Thus, Theorem 5.1 of [15, p. 320] yields a bound $\chi(\delta)$, where $\chi(\delta)$ depends only on $\delta$, $K_1$, $\lambda$, $\Lambda$, $T$, and $\chi(\delta) \to 0$ as $\delta \to 0$. Indeed, by the fundamental theorem of calculus, we can transform the equation once more, and after this transformation one easily sees that our claim follows from Assumption 3.2 and the definition of $\Phi(\delta)$.
Thus, by first taking $\delta$ sufficiently small and then choosing $\mu$ close to $1$ according to $\delta$, we can make $\Psi_\mu$ map $\Phi(\delta)$ into itself. Now we proceed to show the contraction. If $w_1, w_2 \in \Phi(\delta)$, then by the same argument as above one can show that $\|\Psi_\mu w_1 - \Psi_\mu w_2\| \le N\chi(\delta)\|w_1 - w_2\|$. By reducing $\delta$, and $\mu$ according to the new $\delta$ if necessary, we can ensure both that $\Psi_\mu : \Phi(\delta) \to \Phi(\delta)$ and that $N\chi(\delta) < 1/2$. We fix such a $\delta$ and a corresponding $\mu$; we denote this $\mu$ by $\mu_0$.
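The step above is the standard Banach fixed-point argument. As a toy illustration (a stand-in scalar map, not the operator $\Psi_{\mu_0}$ itself), once the contraction constant is below $1/2$ the Picard iterates converge geometrically:

```python
import math

# Toy illustration of the fixed-point step: once the solution map is a
# contraction with constant < 1/2 on a ball, Picard iterates converge
# geometrically. psi below is a stand-in scalar contraction
# (|psi'(x)| = |sin(x)|/2 <= 1/2), NOT the operator Psi_{mu_0} of the text.

def psi(x):
    return 0.5 * math.cos(x)

x = 0.0
gaps = []
for _ in range(30):
    x_new = psi(x)
    gaps.append(abs(x_new - x))
    x = x_new

# Successive increments decay at least like (1/2)^n, so x is numerically
# a fixed point: psi(x) = x up to rounding.
print(x, abs(psi(x) - x))
```

The same mechanism, applied in the $\mathcal H^2_p(\tau_1)$-norm instead of on the real line, produces the local solution $v$ of the next step.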
We consider the following equation for $v$. It is easy to see that one can apply Theorem 2.5 (with $n = 0$) to this equation, and we obtain a unique solution $v_k \in \mathcal H^2_p(\tau) \cap \mathcal H^2_q(\tau)$. Moreover,

$$\|v_k\|_{\mathcal H^2_q(\tau)} \le N(k, \lambda, \Lambda, T, K_1, K_2). \qquad (3.14)$$

Now we claim that $v_k(\omega, \cdot, \cdot) = u(\omega, \cdot, \cdot)$ for almost all $\omega \in \Omega_k$. Indeed, $v_k - u \in \mathcal H^2_{p,0}(\tau)$ satisfies a linear equation of the type (2.4). For fixed $\omega \in \Omega_k$, $v_k - u \in H^{1,2}_p(\tau(\omega))$ is a generalized solution of a uniformly parabolic PDE with Hölder continuous coefficients. Since the initial condition is zero, our claim follows from the uniqueness of the solution.
We apply Theorem 2.6 once more to $v_k$; then by (3.14) we obtain a corresponding estimate for $v_k$. Since $H^{2-2\beta}_q \subset C^{2-2\beta-1/q}$ and $2 - 2\beta - 1/q > 1$ by Assumption 3.2, we have (3.15) for some $\gamma > 0$. By Chebyshev's inequality and (3.15), the exceptional sets have small probability. Recall that $v_k(\omega, \cdot, \cdot) = u(\omega, \cdot, \cdot)$ for almost all $\omega \in \Omega_k$. Thus, we can make $Nk^{-p} + N(k)l^{-q}$ as small as we like by taking $k$ and $l$ large enough: we first take $k$ large enough and fix it, then choose $l$ sufficiently large according to $k$. From the above construction, we see that if we define $v_m$ in the same way, then $\|v_m\| \le N(m, \lambda, \Lambda, p, T, K_1, K_2)$, and one can show that $v_m(\omega, \cdot, \cdot) = u(\omega, \cdot, \cdot)$ for almost all $\omega \in \Omega_m$ by arguing as before. The theorem is proved.
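The Chebyshev step can be illustrated numerically. The sketch below is a toy (Brownian paths stand in for the solution, and all parameters are illustrative): a $p$-th moment bound on $\sup_t$ forces the exceptional sets to be small.

```python
import numpy as np

# Toy illustration of the Chebyshev step: a p-th moment bound
#   E sup_{t<=1} |X_t|^p <= C
# forces P(sup_t |X_t| >= k) <= C / k^p, so the exceptional sets
# can be made as small as we like by taking k large.
# Brownian paths stand in for the solution here.

rng = np.random.default_rng(1)
p, n_paths, n_steps = 4, 20000, 100
W = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n_steps),
                         (n_paths, n_steps)), axis=1)
S = np.abs(W).max(axis=1)                 # sup_t |W_t| per path

C = np.mean(S**p)                         # empirical moment "constant"
for k in (2.0, 4.0):
    print(k, np.mean(S >= k), C / k**p)   # empirical tail vs Chebyshev bound
```

The bound holds pathwise-exactly for the empirical measure, since $\mathbf 1_{\{S \ge k\}} \le (S/k)^p$ pointwise; this is the same mechanism used above to control $P(\Omega \setminus \Omega_k)$.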

Uniqueness and "Global" Solvability
Throughout this section, Assumptions 3.1-3.4 are in force.
Lemma 4.3. Let $u_1 \in \mathcal H^2_p(\tau_1)$ and $u_2 \in \mathcal H^2_p(\tau_2)$ be solutions. Then there exists a solution $u \in \mathcal H^2_p(\tau_1 \vee \tau_2)$ coinciding with $u_1$ on $[0, \tau_1]$ and with $u_2$ on $[0, \tau_2]$.

Proof. By Theorem 4.2, $u_1 = u_2 \in \mathcal H^2_p(\tau_1 \wedge \tau_2)$, so $u$ is well defined. It is clear that $u$ is a solution in the sense of distributions. Thus it suffices to show that $u \in \mathcal H^2_p(\tau_1 \vee \tau_2)$. For that, we check that the relevant norms of $u$ have finite expectation; similarly, one can show that $(a(t, x, u)u_{xx} + f, g) \in \mathcal F_p(\tau_1 \vee \tau_2)$. The lemma is proved.
We present the main theorem of this paper.

Theorem 4.4 (Unique solvability). There exists a unique solution $u \in \mathcal H^2_p(T)$ of (3.1), and $u$ satisfies the a priori estimate of Theorem 2.7.

Proof. We only need to prove the existence; the uniqueness and the estimate follow from Theorem 4.2 and Theorem 2.7.
Clearly $\tau \le \bar\tau$ a.s. But also $\bar\tau \le \tau$ a.s., for otherwise there would exist a $\tau' \in \Pi$ such that $E\tau' > E\tau = r$, contradicting the definition of the supremum. Thus $\tau = \bar\tau$ a.s., and $\bar\tau$ is a stopping time.
By the definition of $\mathcal H^2_p(\bar\tau)$ and $\mathcal H^{2+\nu}_q(\bar\tau)$, the quantities $\mathbb Dv$ and $\mathbb Sv$ are meaningful up to time $\bar\tau$. We define $\bar f := (\mathbb Dv - \Delta v)I_{t \le \bar\tau}$ and $\bar g^k := (\mathbb S^k v)I_{t \le \bar\tau}$, and consider the corresponding linear problem (4.7). By Theorem 2.5, (4.7) has a unique solution $y \in \mathcal H^2_p(T) \cap \mathcal H^{2+\nu}_q(T)$. The difference $y - v$ satisfies the heat equation on $[0, \bar\tau]$ with zero initial condition; thus it follows that $y(\omega, \cdot, \cdot) = v(\omega, \cdot, \cdot)$ on $[0, \bar\tau]$ a.s. Now we are ready to solve equation (4.8) "locally". As we showed above, $v$ has an extension as a function in $\mathcal H^2_p(T)$, so the initial condition in (4.8) makes sense; indeed, this is exactly the sense in which initial conditions are understood in Krylov's $L_p$-theory.
We apply Theorem 3.1 to (4.8). It is easy to check that the coefficient and the data satisfy the assumptions of Theorem 3.1. The only new feature is that the driving noise is $w^k_{t+\bar\tau}$; but the proofs of Theorems 2.5 and 2.6 go through without any change in this situation. Thus, repeating the argument almost word for word, we can prove the local existence for (4.8). This implies that there exists a stopping time $\sigma$ with $E\sigma > 0$ such that, if we define $U$ by patching $v$ and $z$ together, then $U$ is a well-defined function in $\mathcal H^2_p(\bar\tau + \sigma)$ and $U$ satisfies (3.1) on $[0, \bar\tau + \sigma I_\Omega]$ by the construction and the definition of $v$ and $z$.
Remark. The choices $\nu = 1$, $q = 4$ satisfy Assumption 3.2. In this case, one can give a simpler proof of the existence. Indeed, if we let $v := u - \int_0^t g^k(s, \cdot)\,dw^k_s$, then for each fixed $\omega$ the function $v$ satisfies

$$\frac{\partial v}{\partial t} = a\Big(t, x, v + \int_0^t g^k\,dw^k_s\Big)\Big(v + \int_0^t g^k\,dw^k_s\Big)_{xx} + f(t, x),$$

which is a deterministic partial differential equation. Finally, we prove the unique solvability of (1.1).
Theorem 4.5. Suppose that $\sigma(t) = \{\sigma^k(t);\ k \ge 1\}$ is an $l_2$-valued function, where the $\sigma^k$'s are $\mathcal P$-measurable, and set $\alpha(t) := \frac12|\sigma(t)|^2_{l_2}$. Then there exists a unique solution $u \in \mathcal H^2_p(T)$ of

$$du = [a(t, x, u)u_{xx} + f(t, x)]\,dt + [\sigma^k(t)u_x + g^k(t, x)]\,dw^k_t, \quad u(0, \cdot) = u_0, \qquad (4.9)$$

and $u$ satisfies the corresponding a priori estimate.

Proof. Suppose that $u$ is a generalized solution, i.e., a solution in the sense of distributions (see Definition 3.6 of [9]), of (4.9). We define a process $x_t$ and a function $v$ by

$$x_t := \int_0^t \sigma^k(s)\,dw^k_s, \qquad v(t, x) := u(t, x - x_t).$$

Then by the Itô-Wentzell formula (see [14]), we get

$$dv = \big[(a(t, x - x_t, v) - \alpha(t))v_{xx} + f(t, x - x_t) - (g^k)'(t, x - x_t)\sigma^k(t)\big]\,dt + g^k(t, x - x_t)\,dw^k_t.$$

Let $\tilde a(t, x, v) := a(t, x - x_t, v) - \alpha(t)$, $\tilde f(t, x) := f(t, x - x_t) - (g^k)'(t, x - x_t)\sigma^k(t)$, and $\tilde g^k(t, x) := g^k(t, x - x_t)$. Then $v$ satisfies

$$dv = [\tilde a(t, x, v)v_{xx} + \tilde f(t, x)]\,dt + \tilde g^k(t, x)\,dw^k_t, \quad v(0, \cdot) = u_0. \qquad (4.10)$$

By Lemma 3.7 of [9], (4.9) and (4.10) are equivalent; that is, (4.9) holds (in the sense of distributions) if and only if (4.10) holds (in the sense of distributions). Thus, it suffices to show that (4.10) has a unique solution in $\mathcal H^2_p(T)$. Because of $x_t$ in the definition of $\tilde a$, the Hölder norm of $\tilde a$ with respect to $t$ is not uniform in $\omega$, so we cannot apply Theorem 4.4 directly to (4.10). Therefore, we proceed in the following way.
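The Itô-Wentzell computation behind (4.10) can be sketched as follows. This is a heuristic outline; the identification $\alpha(t) = \tfrac12|\sigma(t)|^2_{l_2}$ is our reading, forced by matching the $dt$-terms with the formulas for $\tilde a$ and $\tilde f$ above.

```latex
% Heuristic sketch. Let u solve (4.9), x_t := \int_0^t \sigma^k(s)\,dw^k_s,
% and v(t,x) := u(t, x - x_t). The Ito-Wentzell formula gives
dv(t,x) = (du)(t, x - x_t) - u_x(t, x - x_t)\,\sigma^k(t)\,dw^k_t
        + \tfrac12\,|\sigma(t)|_{l_2}^2\,u_{xx}(t, x - x_t)\,dt
        - \sigma^k(t)\,\partial_x\!\big[\sigma^k(t)\,u_x + g^k\big](t, x - x_t)\,dt.
% The \sigma^k u_x\,dw^k-terms cancel, and collecting the dt-terms yields
dv = \big[\big(a(t, x - x_t, v) - \tfrac12\,|\sigma(t)|_{l_2}^2\big)\,v_{xx}
     + f(t, x - x_t) - (g^k)'(t, x - x_t)\,\sigma^k(t)\big]\,dt
     + g^k(t, x - x_t)\,dw^k_t,
% i.e. exactly (4.10) with the transformed coefficients defined above.
```

The cancellation of the $\sigma^k u_x\,dw^k$ terms is the reason the shift $x - x_t$ removes the gradient noise; it is also where $x$-independence of the $\sigma^k$'s is used.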