Nonlinear diffusion equations with nonlinear gradient noise

We prove the existence and uniqueness of entropy solutions for nonlinear diffusion equations with nonlinear conservative gradient noise. As particular applications our results include stochastic porous media equations, as well as the one-dimensional stochastic mean curvature flow in graph form.


Introduction
In this work we consider stochastic partial differential equations of the type (1.1), where T d is the d-dimensional torus, β k are independent R-valued Brownian motions, Φ : R → R is a monotone function (cf. Assumption 2.2 below) and the coefficients G : T d × R → R d , σ k : T d × R → R d are regular enough (cf. Assumption 2.3 below). The main results of this work are the existence and uniqueness of entropy solutions to (1.1) (Theorem 2.6 below) and the stability of (1.1) with respect to Φ (Theorem 4.1 below).
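For orientation, a schematic form of (1.1), consistent with the coefficients introduced above, reads as follows (this is a sketch; the precise formulation, including the interpretation of the noise, is fixed in Section 2):

```latex
\mathrm{d}u \;=\; \Delta \Phi(u)\,\mathrm{d}t
  \;+\; \partial_{x_i} G^i(x,u)\,\mathrm{d}t
  \;+\; \partial_{x_i} \sigma^{ik}(x,u)\circ \mathrm{d}\beta^k(t)
  \quad \text{on } [0,T]\times\mathbb{T}^d, \qquad u(0)=\xi,
```

with summation over the repeated indices i ∈ {1, ..., d} and k ∈ N, and with ∘ denoting the Stratonovich differential.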
Stochastic partial differential equations of the type (1.1) arise as limits of interacting particle systems driven by common noise, with notable relation to the theory of mean field games [LL06a,LL06b,LL07], in the graph formulation of the stochastic mean curvature/curve shortening flow [KO82,SY04,DLN01,ESvR12] and as simplified approximating models of fluctuations in non-equilibrium statistical physics [DSZ16]. We refer to [FG18b] and the references therein for more details on these applications. In particular, the results of this work imply the well-posedness of the stochastic mean curvature flow in one spatial dimension with spatially inhomogeneous noise, in the graph form (1.2), and thus extend the works [ESvR12,GR17], which were restricted to noise either satisfying a smallness condition or being independent of the spatial variable. For an alternative approach to stochastic mean curvature flow with spatially inhomogeneous noise based on stochastic viscosity solutions see [LS98a,LS98b,Sou16] and the references therein; for an account on these developments and for recent contributions we refer to [BVW15, GT14, BR15, BR17, DHV16, Ges12, GH18, FG18a] and the references therein. While linear gradient noise (cf. e.g. [DG17,Töl18,MR18]), that is, σ k (x, u) = h k (x)u in (1.1), can to some extent be treated by these methods, the nonlinear structure of the gradient noise in (1.1) requires entirely different techniques. Only in recent years, in a series of works [LPS13, LPS14, GS17a, GS17b, GS17c, GS15], a kinetic approach to (simpler versions of) (1.1) was developed based on rough path methods; cf. also [HKRSs18,GPS15,GG18,GGLS18] for numerical methods and regularity/qualitative properties of the solutions.
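For concreteness, the graph-form stochastic mean curvature flow referred to in (1.2) can be sketched as the one-dimensional SPDE (the precise form and the assumptions on the intensities h k are given in Section 7):

```latex
\mathrm{d}v \;=\; \frac{\partial_x^2 v}{1+(\partial_x v)^2}\,\mathrm{d}t
  \;+\; h_k(x)\sqrt{1+(\partial_x v)^2}\,\circ \mathrm{d}\beta^k(t),
```

where the drift term is the curvature of the graph of v(t, ·) and the noise acts in the normal direction with spatially varying intensities h k .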
In the most recent contribution [FG18b] the path-by-path well-posedness of kinetic solutions to (1.1), with Φ(u) = u|u| m−1 for m ∈ (0, ∞) (fast and slow diffusion), was proved for the first time for non-negative initial data, while for sign-changing data the uniqueness was restricted to the case m > 2. As is well known from the theory of rough paths, such path-by-path methods require stronger regularity assumptions on the diffusion coefficients than what would be expected based on probabilistic methods. More precisely, when applied to (1.1), the results of [FG18b] require σ k (x, u) ∈ C γ b (T d × R) ∀k ∈ N, for some γ > 5. Moreover, the construction of kinetic solutions presented in [FG18b] relies on the fractional Sobolev regularity of the solutions, which is available only in the particular case Φ(u) = |u| m−1 u, m ∈ (0, ∞).
The key aims of the current work are to obtain well-posedness without sign restrictions on the initial data, covering the full range of m for the slow diffusion (m > 1), to relax the regularity assumptions on the diffusion coefficients σ k , and to treat a general class of diffusion nonlinearities Φ. These aims are achieved by developing a probabilistic entropy approach to (1.1) leading to the relaxed regularity assumption (cf. Assumption 2.3 below for details) σ k (x, u) ∈ C 3 b (T d × R) ∀k ∈ N. The treatment of general diffusion nonlinearities Φ is achieved by using quantified compactness in order to prove stability of (1.1) with respect to variations in Φ. Based on this, the strong convergence of approximations can be shown without relying on the compactness arguments from [FG18b], which were restricted to the case Φ(u) = |u| m−1 u. In particular, this generalization allows the application to the stochastic mean curvature flow. The proof of stability relies on entropy techniques and a careful control of the errors arising in the corresponding doubling of variables argument; this approach was initiated in [DGG18] and is disjoint from the kinetic techniques put forward in [FG18b].
The structure of the article is as follows. In Section 2 we formulate our main results concerning equations of porous medium type. In Section 3 we gather some lemmata that are needed for the proof of our main results. In Section 4 we prove the main estimates in L 1 (T d ) leading to uniqueness and stability, and in Section 5 we show existence and uniqueness for non-degenerate equations. In Section 6 we use the results of the two previous sections in order to prove our main theorem. Finally, in Section 7, we explain the modifications that need to be made in the proof of Theorem 2.6 in order to obtain existence and uniqueness of solutions of equation (1.2).
1.1. Notation. We fix a filtered probability space (Ω, (F t ) t∈[0,T ] , P) carrying a sequence (β k (t)) k∈N,t∈[0,T ] of independent, one-dimensional, (F t )-Wiener processes. We introduce the notations Ω T = Ω × [0, T ], Q T = [0, T ] × T d . Lebesgue and Sobolev spaces are denoted in the usual way by L p and W k p , respectively. When a function space is given on Ω or Ω T , we understand it to be defined with respect to F := F T and the predictable σ-algebra, respectively. In all other cases the usual Borel σ-algebra will be used. Moreover, throughout the whole article we fix a constant m > 1.
We fix a non-negative smooth function ρ : R → R which is bounded by 2, supported in (0, 1), integrates to 1 and, for θ > 0, we set ρ θ (r) = θ −1 ρ(θ −1 r). When smoothing in time by convolution with ρ θ , the property that ρ is supported on positive times will be crucial.
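The role of the one-sided support can be seen directly from the definition: since ρ θ is supported in (0, θ), a temporal mollification only looks into the past,

```latex
(f * \rho_\theta)(t) \;=\; \int_{\mathbb{R}} f(t-s)\,\rho_\theta(s)\,\mathrm{d}s
  \;=\; \int_0^{\theta} f(t-s)\,\rho_\theta(s)\,\mathrm{d}s ,
```

so that (f * ρ θ )(t) depends only on the restriction of f to [t − θ, t]; in particular, mollifying an adapted process in time preserves adaptedness.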
For spatial regularisation this fact will be irrelevant, but for the sake of simplicity, we often use ρ ⊗d θ for smoothing in space as well. In the proofs of lemmas/theorems/propositions, we will often use the notation a ≲ b, which means a ≤ N b for a constant N which depends only on the parameters stated in the corresponding lemma/theorem/proposition. For a function g : T d × R → R we will often use the notation [g](x, r) := ∫ 0 r g(x, s) ds.
If g does not depend on x ∈ T d , then we will write [g](r). For a function g on T d × R, we will write g r , ∂ r g for the derivative of g with respect to the real variable r ∈ R, and g x i , ∂ x i g for the partial derivatives of g in the periodic variable x ∈ T d . For β ∈ (0, 1), C β will denote the usual Hölder spaces and [·] C β will denote the usual semi-norm. In addition, the summation convention with respect to integer valued indices will be in use, unless otherwise stated. Finally, when confusion does not arise, in integrals we will drop some of the integration variables from the integrands for notational convenience.

Formulation and main results
For i, j ∈ {1, ..., d}, let us set With this notation we rewrite (1.1) in Itô form (2.1). Remark 2.1. Formally, we have the following. In (2.1) we add b i /2 and then subtract it from G i in order to make the cancellations with terms coming from the Itô correction (when applying Itô's formula) apparent. Despite the fact that ∂ x i b i and ∂ x i f i are of the same nature, they will be treated slightly differently to exploit these cancellations.
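The computation behind Remark 2.1 can be sketched as follows (a formal calculation; the precise coefficients a ij , b i , f i are the ones fixed above). Reading off the martingale part of (1.1), one formally has d⟨u, β k ⟩ t = ∂ x j σ jk (x, u) dt, so the Stratonovich-to-Itô conversion gives

```latex
\partial_{x_i}\sigma^{ik}(x,u)\circ\mathrm{d}\beta^k
  \;=\; \partial_{x_i}\sigma^{ik}(x,u)\,\mathrm{d}\beta^k
  \;+\; \tfrac12\,\partial_{x_i}\!\big(\sigma^{ik}_r(x,u)\,\partial_{x_j}\sigma^{jk}(x,u)\big)\,\mathrm{d}t ,
```

and expanding ∂ x j σ jk (x, u) = σ jk x j (x, u) + σ jk r (x, u) ∂ x j u produces second-order terms with coefficients of the form (1/2) σ ik r σ jk r and first-order terms involving σ ik r σ jk x j , which is the origin of the coefficients a ij and b i .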
We will often write Π(Φ, ξ) to address equation (2.1) with initial condition ξ and nonlinearity Φ. To formulate the assumptions on Φ let us set a(r) = Φ ′ (r).
Assumption 2.2. The following hold: (a) The function Φ : R → R is differentiable, strictly increasing and odd. The function a is differentiable away from the origin, and satisfies the bounds stated below.

Assumption 2.3. There exist β ∈ (0, 1) and a constant N 0 ∈ R such that for all i, l ∈ {1, ..., d}, r ∈ R we have:

Remark 2.4. By Assumption 2.3, it follows that there exists a constant N 1 such that (2.13) below holds.

We now motivate the concept of entropy solutions. Suppose that we approximate equation (2.1) by a viscous equation, that is, in place of Φ(u) we have Φ(u) + εu for ε > 0. Let us choose a non-negative φ ∈ C ∞ c ([0, T ) × T d ) and a convex η ∈ C 2 (R). If u(= u ε ) solves the viscous version of (2.1), by Itô's formula we have (formally) (2.14). By integration by parts and the cancellations we have (2.15). Now we want to pass to the limit ε ↓ 0. Assuming for the moment that u ε converges to some u as ε ↓ 0, we may expect that the terms in (2.15) converge to their natural limits. In contrast, this may not be valid for the term I ε coming from the viscosity, since, in general, ∇u ε 2 L 2 (Q T ) ∼ ε −1 . However, since I ε ≤ 0, one may drop the term I ε from the right hand side of (2.15), replace the equality with an inequality, and then pass to the limit ε ↓ 0. This motivates the following definition.
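The sign of the discarded term can be made explicit: testing the viscous term ε∆u ε against φ η ′ (u ε ) and integrating by parts, the dissipative contribution is plausibly of the form

```latex
I_\varepsilon \;=\; -\,\varepsilon \int_{Q_T} \varphi\,\eta''(u^\varepsilon)\,|\nabla u^\varepsilon|^2\,\mathrm{d}x\,\mathrm{d}t \;\le\; 0 ,
```

where the sign follows from the convexity of η and the non-negativity of φ; this is exactly why dropping I ε and keeping an inequality is harmless for the limit procedure.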
Definition 2.5. An entropy solution of (2.1) is a predictable stochastic process u satisfying conditions (i)-(iii), the last of which is the entropy inequality (2.16).

Theorem 2.6. Let Φ, ξ satisfy Assumption 2.2 and let σ, G satisfy Assumption 2.3. Then, there exists a unique entropy solution u of equation (2.1) with initial condition ξ. Moreover, if ũ is the unique entropy solution of equation (2.1) with initial condition ξ̃, then the estimate (2.17) holds, where N is a constant depending only on N 0 , N 1 , d and T .
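Consistently with the L 1 -estimates established in Section 4, the stability estimate asserted in Theorem 2.6 is of the following form (a sketch; the precise statement is (2.17)):

```latex
\operatorname*{ess\,sup}_{t \le T}\, \mathbb{E}\,\| u(t) - \tilde u(t) \|_{L_1(\mathbb{T}^d)}
  \;\le\; N\, \mathbb{E}\,\| \xi - \tilde\xi \|_{L_1(\mathbb{T}^d)} .
```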

Auxiliary results
In this section we state and prove some tools that will be used in the proof of the main theorem. We begin with two remarks.
Remark 3.1. For any functions f : R × T d → R, u : T d → R, φ : T d → R (that are regular enough for the following expressions to make sense) and any a ∈ R we have (3.1). Lemma 3.3. Let u be an entropy solution of (2.1). Then we have the bound (3.2). Proof. We first estimate the second term on the right hand side for h ∈ [0, T ]. Take a decreasing, non-negative function γ ∈ C ∞ ([0, T ]). Furthermore, for each δ > 0, take η δ ∈ C 2 (R) defined so that η δ (r) → r 2 as δ → 0. Let y ∈ T d and a ∈ R. Then, using the entropy inequality (2.16) with a suitable test function φ(t, ·), we obtain an estimate where for the second term on the right hand side we have used (2.2), (2.7), (2.11), (2.9), (2.4), and (2.12). Notice that all the terms are continuous in a ∈ R. Upon substituting a = ξ(y), taking expectations, integrating over y ∈ T d , and using the bounds on γ, one gets a bound which in the limit δ → 0 yields the desired estimate. Consequently, by (3.2) we get the lim sup bound, from which the claim follows, since the right hand side goes to 0 as ε → 0 due to the continuity of translations in L 2 (T d ).
The proof of the following lemma can be found in [DGG18, Lemma 3.1].
Lemma 3.4. Let Assumption 2.2 hold, let u ∈ L 1 (Ω × Q T ) and for some ε ∈ (0, 1), let ̺ : R d → R be a non-negative function integrating to one and supported on a ball of radius ε. Then one has the bound where N depends on d, K and T .
We now introduce the definition of the (⋆)-property, an analog of which was first introduced in [FN08] in the context of stochastic conservation laws. It is somewhat technical but important in order to obtain the uniqueness of entropy solutions. To be more precise, as a first step, we will estimate the difference of two entropy solutions provided that one of them has the (⋆)-property.
Definition 3.6. A function u ∈ L m+1 (Ω T × T d ) is said to have the (⋆)-property if for all h, ̺, ϕ, ũ as above, and for all sufficiently small θ > 0, we have that F θ (·, ·, u) ∈ L 1 (Ω T × T d ) and that (3.5) holds with some constant N independent of θ.
Corollary 3.9. (i) Let u n be a sequence bounded in L m+1 (Ω T × T d ), satisfying the (⋆)-property uniformly in n, that is, with the constant N in (3.5) independent of n. Suppose that u n converges for almost all ω, t, x to a function u. Then u has the (⋆)-property.
By Lemma 3.8, and the fact that E t,x |F θ (t, x, 0)| < ∞, we see that the right hand side above is uniformly integrable in (ω, t, x). Hence, one can take limits on the left-hand side of (3.5) to get the corresponding inequality for u. By similar (in fact, easier) arguments one can see the convergence of the second term on the right-hand side of (3.5), and since the constant N was assumed to be independent of n ∈ N, we get the claim.
For part (ii), the claim follows directly from Lemma 3.8.
Proof. The majority of the proof is identical for (i) and (ii), so their separation is postponed to the very end.
Furthermore, for each δ > 0, let η δ ∈ C 2 (R) be defined as above. We apply the entropy inequality (2.16). Assuming that θ is sufficiently small, one has φ θ,ε (0, x, s, y) = 0, and thus we get (4.6). Notice that all the expressions in (4.6) are continuous in (a, s, y). We now substitute a = ũ(s, y), integrate over (s, y), and take expectations. For the last term in (4.6) this is justified by (3.18). All of the other terms are continuous in a and can be bounded by N (|a| m + X) with some constant N and some integrable random variable X (recall (2.2)), so that substituting a = ũ(s, y) and integrating out s, y, and ω results in finite quantities. After writing the analogous inequality with the roles of u, t, x and ũ, s, y reversed, using the symmetry of η δ , and adding both inequalities, one arrives at (4.7).
For the term containing F 1 θ on the right hand side of (4.7) we have the following: ∂ x i φ θ,ε is supported on [s, s + θ], hence the integration in t is over [s, (s + θ) ∧ T ]. Then we plug in a quantity which is F s -measurable. Therefore, this term vanishes in expectation (a rigorous justification follows from a limiting procedure similar to (3.17)). We now pass to the θ → 0 limit. For this, we use [DGG18, Proposition 3.5, see also p. 15] and the (⋆)-property with h = η ′ and ̺ = ̺ ε to get (4.8), where here and below u = u(t, x) and ũ = ũ(t, y). Notice that by (2.5) and (2.13) we have, for all x, y ∈ T d and r, r̄ ∈ R, a bound with a constant N depending only on N 0 , N 1 , and d. Under this condition and under Assumption 2.2 (a) it is shown in [DGG18, Theorem 4.1, p. 13-15, see (4.8) and (4.18) therein] that the corresponding estimate holds for all α ∈ (0, 1 ∧ (m/2)). We proceed with the estimation of the remaining terms on the right-hand side of (4.8). By Remark 3.1 (with a = ũ(t, y)), the relation ∂ x i x j φ ε = −∂ x i y j φ ε , and the identity above, we obtain (4.11). By symmetry we have (4.14). By adding (4.11) and (4.12) and using (4.13), (4.14) we obtain the combined estimate. We further set (4.17). We next estimate A + E 1 (u, ũ). By the definition of a ij and the fact that ∂ x i y j φ ε = ∂ x j y i φ ε , we obtain the required bound, where we have used Assumption 2.3. We proceed with an estimate for B 2 + E 2 (u, ũ) + E 4 (u, ũ).

Approximations
In Section 4 we showed that if we have two entropy solutions of equation (2.1) with the same initial condition, then they coincide provided that one of them satisfies the (⋆)-property. Hence, in order to conclude the existence and uniqueness of entropy solutions, it suffices to show the existence of an entropy solution possessing the (⋆)-property. To do so, we use a vanishing viscosity approximation. In order to prove the (probabilistically) strong existence of solutions for the approximating equations, we use a technique from [GK96], where a characterization of convergence in probability is used to show that weak existence combined with strong uniqueness implies strong existence. This has been used in the past in the context of SPDEs (see [Hof13,GH18] and the references therein). For the proof of the following proposition see [DGG18, Proposition 5.1].
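The characterization from [GK96, Lemma 1.1] that underlies this step can be paraphrased as follows (stated here for a general Polish space):

```latex
\textit{Let } (Z_n)_{n\in\mathbb{N}} \textit{ be random elements of a Polish space } E.
\textit{ Then } Z_n \textit{ converges in probability if and only if every pair of
subsequences } (Z_{n_q}),\,(Z_{m_q}) \textit{ admits a further subsequence along which
the joint laws of } (Z_{n_q}, Z_{m_q}) \textit{ converge weakly to a measure supported
on the diagonal } \{(z,z) : z \in E\}.
```

Below, this is applied to the Galerkin approximations: weak convergence of joint laws is upgraded to convergence in probability once any two limit points are shown to coincide, which is exactly what the uniqueness statement provides.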
Proposition 5.1. Let Φ satisfy Assumption 2.2 (a) with a constant K ≥ 1. Then, for all n there exists an increasing function Φ n ∈ C ∞ (R) with bounded derivatives, satisfying Assumption 2.2 (a) with constant 3K, such that a n (r) ≥ 2/n, and sup |r|≤n |a(r) − a n (r)| ≤ 4/n.
If u n is an L 2 -solution of Π(Φ n , ξ n ), then the following estimates hold (see Lemma A.1 in the Appendix), where the constant N depends only on N 0 , N 1 , K, T, d, p and m (but not on n ∈ N). Notice that |ξ n | is bounded by n, which implies that the right hand side of the above inequalities is finite. Moreover, by the construction of ξ n one concludes that a similar estimate holds for all p ≥ 2, with N depending only on N 0 , N 1 , K, T, d, p and m. Finally, since a n ≥ 2/n > 0, we have |∇u n | ≤ N (n)|∇[a n ](u n )|, and so by (5.5), we have the (n-dependent) bound E ∇u n p L 2 (Q T ) < ∞. (5.7) Lemma 5.3. For each n ∈ N, let u n be an L 2 -solution of Π(Φ n , ξ n ). Then, u n has the (⋆)-property. If in addition ξ L 2 (T d ) has moments of order 4, then the constant N in (3.5) is independent of n.
Proof. Fix θ > 0 small enough so that (3.6) holds. To ease notation we drop the lower index in F θ . We proceed by two approximations: first, as in Corollary 3.9 (ii), the substitution of u n (t, x) into F (t, x, ·) is smoothed, and second, u n is regularised.
For a function f ∈ L 2 (T d ) let f (γ) := (ρ γ ) ⊗d * f denote its mollification. Then, u (γ) n satisfies (pointwise) the equation as γ → 0. By (3.6) we have EF (t, x, a)X = 0 for any F t−θ -measurable bounded random variable X. Hence, . By (5.8) and Itô's formula one has t,x,a (5.10) By (3.6) and integration by parts (in x) we have λ,γ . After integration by parts with respect to a, by the Cauchy-Schwarz inequality, inequalities (3.1), (5.4) and Lemma 3.8, we have Similarly, this time integrating by parts twice in a we have for all sufficiently small θ ∈ (0, 1) .
To bound the right-hand side, note that by (5.7), ∇u (γ) n → ∇u n in L p (Ω; L 2 (Q T )) for any p, and by (5.6), ∇(Φ n (u n )) (γ) → ∇Φ n (u n ) in L 2 (Ω; L 2 (Q T )). Therefore, by (5.3), together with (5.11), we get (5.13). We now estimate C (2) λ,γ + C (4) λ,γ . After integrating by parts in x we have (5.14). Using the identity above, integration by parts (in x and a), as well as the linear growth of σ x i , b i and the boundedness of a ij , a ij x j , one derives, similarly to (5.11), the corresponding lim sup estimate. We continue with an estimate for the remaining terms. Using Remark 3.1, letting γ → 0, and integrating by parts, one easily sees that (5.16) and (5.17) hold, where E is defined in (3.4). Putting all of (3.17), (5.9), (5.10), (5.13), (5.15), (5.16), and (5.17) together, we conclude as claimed. Moreover, if E ξ 4 L 2 (T d ) < ∞, then by virtue of (5.5) and (5.6) it is clear that in (5.11), (5.12), (5.15), (5.16) we can choose N independent of n ∈ N, which completes the proof.
Proof. We fix n ∈ N, and since n is fixed, in order to ease the notation we drop the n-dependence and relabel Φ := Φ n , ξ := ξ n (Φ n is given in Proposition 5.1 and ξ n is given in (5.2)), and we look for a solution u. Let (e k ) ∞ k=1 ⊂ C ∞ (T d ) be an orthonormal basis of L 2 (T d ) consisting of eigenvectors of (I − ∆), and let Π l : W −1 2 → V l := span{e 1 , ..., e l } be the projection operator. Then, the Galerkin approximation (5.18) is an equation on V l with locally Lipschitz continuous coefficients having linear growth. Consequently, it admits a unique solution u l . Applying Itô's formula to the function u → u 2 L 2 (T d ) and using standard arguments (see for example the proof of Lemma A.1 in the Appendix), one obtains (5.19) and, for all p ≥ 2, E sup t≤T u l (t) p L 2 (T d ) ≤ N (1 + E ξ p L 2 (T d ) ). (5.20) In these inequalities the constant N is independent of l ∈ N. In W −1 2 (T d ) we have, almost surely, for all t ∈ [0, T ], the corresponding equation, and the relevant embedding is compact. It follows that for any sequences (l q ) q∈N , (l̄ q ) q∈N , the laws of (u lq , u l̄q ) are tight on X × X . Let us set β accordingly, where (e k ) ∞ k=1 is the standard orthonormal basis of l 2 . By Prokhorov's theorem, there exists a (non-relabelled) subsequence (u lq , u l̄q ) such that the laws of (u lq , u l̄q , β) on Z := X × X × C([0, T ]; l 2 ) are weakly convergent. By Skorohod's representation theorem, there exist Z-valued random variables (û, ǔ, β̂), (û lq , ǔ l̄q , β̂ q ), q ∈ N, on a probability space (Ω, F , P) such that, in Z, P-almost surely, (û lq , ǔ l̄q , β̂ q ) → (û, ǔ, β̂), (5.21) as q → ∞, and for each q ∈ N, as random variables in Z, (û lq , ǔ l̄q , β̂ q ) d = (u lq , u l̄q , β).
(5.23) Let (F t ) t∈[0,T ] be the augmented filtration of G t := σ(û(s), ǔ(s), β̂(s); s ≤ t), and let β̂ k (t) := √ 2 k (β̂(t), e k ) l 2 . It is easy to see that the β̂ k , k ∈ N, are independent, standard, real-valued F t -Wiener processes. Indeed, they are F t -adapted by definition and they are independent since the β k are. We only have to show that they are F t -Wiener processes. Let us fix s < t and let V be a bounded continuous function on the corresponding path space; using uniform integrability and passing to the limit q → ∞ then shows that β̂ k (t) is a G t -martingale. Similarly, |β̂ k (t)| 2 − t is a G t -martingale. By continuity of β̂ k (t) and |β̂ k (t)| 2 − t, and the fact that their supremum in time is integrable in ω, one can easily see that they are also F t -martingales. Hence, by Lévy's characterization theorem (see, e.g., [KS91, p. 157, Theorem 3.16]) the β̂ k are F t -Wiener processes.
We now show that û and ǔ both satisfy the limit equation. Notice that due to (5.19), we have û ∈ L 2 (Ω T ; W 1 2 (T d )). We will show that for any φ ∈ W −2 2 (T d ) and k ∈ N, the processes M̂ i , i = 1, 2, 3, defined below are continuous F t -martingales. We first show that they are continuous G t -martingales. Assume for now that φ = (I − ∆) 2 ψ, where ψ ∈ V lq . For i = 1, 2, 3, let us also define the processes M̂ i q , M i q similarly to M̂ i , but with M̂ , û, ∂ x i σ ki (·) replaced by M̂ q , û lq , Π lq ∂ x i σ ik (·) and M q , ǔ l̄q , Π l̄q ∂ x i σ ik (·), respectively. Let us fix s < t and let V be a bounded continuous function on C([0, s]; W −2 2 (T d )) × C([0, s]; l 2 ). We have the corresponding martingale identity (5.24) for the approximations. Next, notice that the relevant terms converge, where the convergence follows from (5.21) and the fact that they are uniformly integrable on Ω (which in turn follows from (5.19)). Since [a ij ](x, r), [a ij x i ](x, r) are Lipschitz continuous in r ∈ R uniformly in x (by Assumption 2.3), we get the convergence of the corresponding integrals. Similarly one shows the convergence of the remaining terms. Hence, by (5.25), (5.26), (5.27), and (5.21) we see that for each t ∈ [0, T ] the corresponding quantities converge in probability. Then, one can easily verify that M̂ i q (t) → M̂ i (t) in probability. Moreover, for any φ ∈ W −2 2 (T d ) and any p ≥ 2 we have, by (5.22) and (5.20), uniform moment bounds. From this, one easily deduces that for each i = 1, 2, 3 and t ∈ [0, T ], the M̂ i q (t) are uniformly integrable. Hence, we can pass to the limit in (5.24) to obtain (5.29). In addition, using the continuity of M̂ i (t) in φ, uniform integrability, and the fact that ∪ q (I − ∆) 2 V lq is dense in W −2 2 (T d ), it follows that (5.29) holds also for all φ ∈ W −2 2 (T d ). Hence, for all φ ∈ W −2 2 (T d ), the M̂ i are continuous G t -martingales having all moments finite. In particular, by Doob's maximal inequality, they are uniformly integrable (in t), which combined with continuity (in t) implies that they are also F t -martingales. By [Hof13, Proposition A.1] we obtain that the equation holds almost surely, for all φ ∈ W −2 2 (T d ). Notice that û(0) d = ξ, which implies that û(0) ∈ L m+1 (T d ) almost surely.
Choosing φ = (I − ∆) 2 ψ for ψ ∈ C ∞ (T d ), we obtain that the equation holds for almost all (ω, t). It follows (see [KR79]) that û is a continuous L 2 (T d )-valued F t -adapted process. Hence, û is an L 2 -solution of equation Π(Φ, ξ̂) (on (Ω, (F t ) t , P) with driving noise (β̂ k ) ∞ k=1 ), where ξ̂ := û(0). Again, by standard arguments, for all p ≥ 2 one has the corresponding moment estimate. Using this, Itô's formula (see, e.g., [Kry13]), and Itô's product rule, one can see that û is an entropy solution (on (Ω, (F t ) t , P) with driving noise (β̂ k ) ∞ k=1 ) with initial condition ξ̂ := û(0). In the exact same way, ǔ is an L 2 -solution and an entropy solution of Π(Φ, ξ̌) (again, on (Ω, (F t ) t , P) with driving noise (β̂ k ) ∞ k=1 ) with ξ̌ := ǔ(0). Further, for δ > 0 we have the corresponding estimate, and hence û and ǔ are both entropy solutions with the same initial condition. Moreover, by Lemma 5.3 they have the (⋆)-property. Hence, by Theorem 4.1 we conclude that û = ǔ. By [GK96, Lemma 1.1] we have that the initial sequence (u l ) ∞ l=1 converges in probability in X to some u ∈ X . Using this convergence and the uniform estimates on u l , it is then straightforward to pass to the limit in (5.18) and to see that the limit u is indeed an L 2 -solution.
We are ready to proceed with the proof of Theorem 2.6.

Proof of the main theorem
Proof of Theorem 2.6.
Let f ∈ C b (R). For each n, we clearly have [a n f ](u n ) ∈ L 2 (Ω T ; W 1 2 (T d )) and ∂ x i [a n f ](u n ) = f (u n )∂ x i [a n ](u n ). Also, we have |[a n f ](r)| ≤ f L∞ 3K|r| (m+1)/2 for all r ∈ R, which combined with (5.5) and (5.6) gives (6.1) and (6.2). Hence, for a subsequence we have [a n f ](u n ) ⇀ v f , [a n ](u n ) ⇀ v for some v f , v ∈ L 2 (Ω T ; W 1 2 (T d )). By (5.1) and (6.1), (6.2) it is easy to identify these limits, where for the last equality we have used that ∂ x i [a n ](u n ) ⇀ ∂ x i [a](u) (weakly) and f (u n ) → f (u) (strongly) in L 2 (Ω T ; L 2 (T d )). Hence, (ii) from Definition 2.5 is also satisfied. We now show (iii). Let η and φ be as in (iii) and let B ∈ F. By Itô's formula (see, e.g., [Kry13]) and Itô's product rule, we have (6.3). Notice that ∂ x i [ √ η ′′ a n ](u n ) = η ′′ (u n )∂ x i [a n ](u n ). As before, after passing to a subsequence, one controls the term φ η ′′ (u n )|∇[a n ](u n )| 2 .
On the basis of (6.1), (6.2) and the construction of ξ n and a n one can easily see that the remaining terms in (6.3) converge to the corresponding ones from (2.16). Hence, taking lim inf in (6.3) along an appropriate subsequence, we see that u satisfies Definition 2.5, (iii).
To summarise, we have shown that if in addition to the assumptions of Theorem 2.6 we have E ξ 4 L 2 (T d ) < ∞, then there exists an entropy solution to (2.1) which has the (⋆)-property (and is therefore also unique by Theorem 4.1). In addition, we can pass to the limit in (5.5)-(5.6) to obtain (6.4) with a constant N depending only on N 0 , N 1 , d, K, T and m.
Step 2: We now remove the extra condition on ξ. For n ∈ N, let ξ n be defined again by ξ n = (n ∧ ξ) ∨ (−n) and let u (n) be the unique solution of Π(Φ, ξ n ). Notice that by Step 1, u (n) has the (⋆)-property. Hence, by Theorem 4.1 (i) we have that (u (n) ) is a Cauchy sequence in L 1 (Ω T ; L 1 (T d )) and therefore has a limit u. In addition, the u (n) satisfy the estimates (6.4) uniformly in n ∈ N. With the arguments provided above it is now routine to show that u is an entropy solution.
We finally show (2.17), which also implies uniqueness. Let ũ be an entropy solution of Π(Φ, ξ̃). By Theorem 4.1 we have the corresponding ess sup bound with u (n) in place of u, where u (n) are as above. We then let n → ∞ to finish the proof.

Stochastic mean curvature flow
In this section we demonstrate the proof of well-posedness for the one-dimensional stochastic mean curvature flow in graph form by minor modifications of the techniques developed in the previous sections.
The stochastic mean curvature flow describes the evolution of a curve M t = φ(t, M 0 ) ⊂ R 2 , t ∈ [0, T ], given by a flow φ, where H Mt (x, y) is the mean curvature vector of M t at the point (x, y) ∈ M t and ν Mt (x, y) denotes the normal vector of M t at (x, y) ∈ M t . Assuming that M t is the level set of a function f (t, ·) : R 2 → R, one derives an SPDE for f. In the graph case, that is, when f (x, y) = y − v(x), the above equation becomes (7.1). In [ESvR12] the well-posedness of (7.1) is shown under the assumption that h 1 = ε, for some ε ≤ √ 2, and h k = 0 for k ≠ 1. Here, we assume that h k (x, y) = h k (x). Hence, taking the derivative in x in the above equation, we derive the following SPDE for u := ∂ x v. For a function Φ : R → R, let E(Φ, ξ) denote the periodic problem with initial condition ξ. Therefore, we aim to solve E(Φ, ξ) for Φ(u) = arctan(u). As mentioned above, the proofs of the statements in this section are almost identical to the corresponding ones of the previous sections. For this reason, we will restrict to pointing out the differences. For n ∈ N, let b n be the unique real function on R defined by the following properties: (1) b n is continuous and odd; (2) b n (r) = −r(1 + r 2 ) −3/2 for r ∈ [0, n]; and we have E u(t) p L 2 (T d ) < ∞ for all p > 2 .
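The reduction to the framework of (1.1) rests on the elementary identity ∂ x arctan(∂ x v) = ∂ x 2 v/(1 + (∂ x v) 2 ). Differentiating the graph equation in x, the process u := ∂ x v formally solves (a sketch consistent with the choice Φ(u) = arctan(u) above):

```latex
\mathrm{d}u \;=\; \partial_x^2 \arctan(u)\,\mathrm{d}t
  \;+\; \partial_x\!\Big( h_k(x)\sqrt{1+u^2} \Big) \circ \mathrm{d}\beta^k(t)
  \quad \text{on } [0,T]\times\mathbb{T},
```

that is, an equation of the type (1.1) with d = 1, Φ(u) = arctan(u) and σ k (x, u) = h k (x)√(1 + u 2 ).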
Choosing m = 3 in (3.7) from Lemma 3.8 gives the following.
Similarly to Corollary 3.9 one has: Corollary 7.8. (i) Let u n be a sequence bounded in L 2 (Ω T × T), satisfying the ( * )-property uniformly in n, that is, with constant N in (7.4) independent of n. Suppose that u n converges for almost all ω, t, x to a function u. Then u has the ( * )-property.
Similarly to (5.5)-(5.6), we have that if u n are L 2 -solutions to E(Φ n , ξ) for n ∈ N, then for all p ≥ 2 E sup t≤T u n p L 2 (T) + E ∂ x [a n ](u n ) p L 2 (Q T ) ≤ N (1 + E ξ p L 2 (T) ), (7.10) E sup t≤T u n 2 L 2 (T) + E ∂ x Φ n (u n ) 2 L 2 (Q T ) ≤ N (1 + E ξ 2 L 2 (T) ), (7.11) where N depends only on N 0 , T, d, and p. Using these estimates, Corollary 7.8, and Lemma 7.7, one proves the following analogue of Lemma 5.3. Lemma 7.10. Let Assumptions 7.2-7.3 hold, and for each n ∈ N, let u n be an L 2 -solution of E(Φ n , ξ). Then, u n has the ( * )-property and the constant N in (7.4) is independent of n.
Moreover, similarly to Proposition 5.4 one proves the following.
Finally, using Proposition 7.11, Lemma 7.10, and Theorem 7.9, we obtain the following theorem in a similar manner as Theorem 2.6 is concluded from Proposition 5.4, Lemma 5.3, and Theorem 4.1. where N is a constant depending only on N 0 and T .
Remark 7.13. Notice that in Theorem 7.9 (ii), there is the extra assumption that u ∈ L 2 (Ω T ; W 1 2 (T)) as compared to Theorem 4.1 (ii). However, this does not cause any complication since the approximating sequence u n of Proposition 7.11 satisfies this condition.
Appendix A.
Lemma A.1. Let Assumptions 2.2 and 2.3 hold. Let Φ n and ξ n be as in Proposition 5.1 and (5.2) respectively, let u be an L 2 -solution of Π(Φ n , ξ n ), and let p ∈ [2, ∞). Then there exists a constant N depending only on K, N 0 , N 1 , T, d, m, and p such that the estimates below hold. Proof. We start with (A.1). By Itô's formula, using that Φ n is increasing and (2.12), we obtain a first bound; for the remaining term, the last inequality uses (2.11) and the fact that [f i ] ∈ W 1 1 (T d ) for almost all (ω, t) ∈ Ω T (which in turn follows from (2.11) and (2.10)). Raising to the power p/2, taking suprema up to time t ′ , and taking expectations gives the intermediate estimate. By the Burkholder-Davis-Gundy inequality we bound the martingale term, and by Minkowski's inequality and (2.12) we bound the stochastic integrand. Consequently, we obtain a bound which, combined with (A.5), gives (A.1).