Numerical analysis of the stochastic Stefan problem

The gradient discretisation method (GDM) – a generic framework encompassing many numerical methods – is studied for a general stochastic Stefan problem with multiplicative noise. The convergence of the numerical solutions is proved by a compactness method using discrete functional analysis tools, the Skorokhod theorem and the martingale representation theorem. The generic convergence results established in the GDM framework are applicable to a range of different numerical methods, including for example mass-lumped finite elements, but also some finite volume methods, mimetic methods, lowest-order virtual element methods, etc. The theoretical results are complemented by numerical tests based on two methods that fit in the GDM framework.


Introduction
In this paper, we study the stochastic Stefan problem (SSP). The deterministic Stefan problem describes the behavior of phase transition during the evolution of two thermodynamical states, say solid and liquid. The problem is named after Jožef Stefan (1835-1893), for his research on solid-liquid phase changes in the formation of ice in polar seas [22]. Deterministic moving boundary problems have been studied extensively in the second half of the past century, see [18] and references therein. The numerical analysis of the problem is developed in [10,15,19], and a huge bibliography can be found in [23]. The convergence of a variety of gradient schemes (GS) approximating its solution is established in [11] and [6, Chapter 6].
In the past decade, several authors have started investigating stochastic perturbations of the classical Stefan problem. There are numerous examples of applications, such as climate models, diffusion of lithium ions in lithium-ion batteries, modeling of steam chambers for petroleum extraction, and oxygen diffusion in an absorbing tissue, which are described by stochastic Stefan-type problems.
A significant literature is available on the existence and uniqueness of solutions to the SSP. In [1], a semigroup approach was employed to obtain a mild solution. Using the enthalpy function with additive noise, the two-phase free boundary problem is transformed into a stochastic evolution equation of porous media type with fixed boundary conditions. Applying a coordinate transform, [17] studied the linear SSP on one-dimensional unbounded domains (d = 1), assuming a random field that is Brownian in time but correlated (colored) in space. Extending these results, [16] established the existence and uniqueness of local strong solutions by using interpolation theory. The analysis is based on the theory of mild and strong solutions proposed in [4]. A further extension of these results to Robin boundary conditions was carried out in [21] (in which the moving interface between the two phases might have unbounded variation).
However, to the best of the authors' knowledge, there is no available literature on the numerical analysis of the stochastic Stefan problem.
This paper is primarily focused on the numerical analysis of the SSP in a framework encompassing well-known discrete schemes. We use the generic framework known as the gradient discretisation method (GDM) to propose a numerical scheme approximating solutions of the SSP. The GDM covers a large class of conforming and nonconforming schemes, such as finite volume methods, Galerkin methods (including mass-lumped finite elements), mixed finite element methods, hybrid high-order and virtual element methods, etc. A thorough discussion of the analysis and applications of the GDM is given in the monograph [6]. Moreover, we prove the convergence and stability of the numerical solutions in this generic framework; there is therefore no need to prove convergence for each specific method, which simplifies the analysis.
Our convergence analysis is based on discrete functional analysis tools, the Skorokhod theorem, the Kolmogorov test, and the martingale representation theorem. We follow ideas similar to those of [8] to show the existence of a weak martingale solution to the SSP.
This paper is organised as follows: Section 2 starts with the introduction of assumptions and notations, followed by the description of the GDM framework and the related gradient scheme for the SSP; two examples of methods fitting the framework are presented (the mass-lumped P1 finite element (MLP1) method and the hybrid mimetic mixed (HMM) method), and the section concludes with the statement of the main convergence result. In Section 3 we provide a priori estimates on the approximate solutions. Section 4 details the following convergence steps: (1) tightness, in appropriate spaces, of a family of random variables representing in particular the solution to the GS and its gradient, (2) almost sure convergence (up to a change of probability space) in appropriate norms, and (3) continuity of the limits of the numerical solution and of the martingale involving the Wiener process. In Section 5, the limit of the martingale part is identified, after which the main convergence result is proved. Numerical results, based on the two examples of GDs presented in Section 2, are provided in Section 6 to validate the convergence results and their bounds.

Assumptions and notations
We first introduce the notations and assumptions that are used in the rest of the paper. Most of the time, the Lebesgue spaces we consider are those on Θ, so we write L^r instead of L^r(Θ). For a given s ∈ [0, T], we set Θs = (0, s) × Θ. Throughout this paper, we assume the following. A2. Λ : Θ → S_d(R) is a symmetric measurable tensor that is uniformly elliptic and bounded. A4. Let (Ω, F, F = (Ft)t∈[0,T], P) be a stochastic basis, that is, (Ω, F, P) is a probability space equipped with an increasing family of σ-algebras {Ft}, t ∈ [0, T], called a filtration. Assume an F-adapted Q-Wiener process W, where {W_k : k ≥ 1} is a family of independent F-adapted real-valued Wiener processes. A5. Let L(K, L^2) be the Banach space of Hilbert-Schmidt operators [4, Appendix C]. We assume that the operator f : L^2 → L(K, L^2) is continuous and that there exist C1, C2 > 0 such that a growth condition involving Ξ(w) holds for any w ∈ L^2. Throughout this article, C denotes a generic constant depending on f, T, Q, u0, Λ and ζ; any further dependencies are indicated in subscript.

Gradient scheme
We recall here the main definitions of the Gradient Discretisation Method.
Definition 2.1. D = (XD,0, ΠD, ∇D, ID, (t^(n))n=0,...,N) is a space-time gradient discretisation (GD) for homogeneous Dirichlet boundary conditions, with piecewise constant reconstruction, if the following properties hold: (i) XD,0 is a finite dimensional vector space of discrete unknowns; (ii) the linear map ΠD : XD,0 → L^∞ is a piecewise constant reconstruction operator in the following sense: there exists a basis (ei)i∈B of XD,0 and a family of disjoint subsets (Θi)i∈B of Θ such that ΠDu = Σ_{i∈B} ui 1_{Θi} for all u = Σ_{i∈B} ui ei ∈ XD,0, where 1_{Θi} is the characteristic function of Θi; (iii) the linear map ∇D : XD,0 → (L^2)^d gives a reconstructed discrete gradient such that the mapping v ↦ ‖∇Dv‖_{L^2} is a norm on XD,0; (iv) ID : L^2 → XD,0 is an interpolation operator, used to create, from an initial condition in L^2, a discrete vector in the space of unknowns. We define the filtration (F^n_N)_{0≤n≤N}. [Gradient Scheme (GS) for (1.1)] Set u^(0) := ID u0 and take random variables u = (u^(n))_n in XD,0 such that u is adapted to the filtration (F^n_N)_{0≤n≤N} and, for any φ ∈ XD,0 and for almost every ω ∈ Ω, the discrete variational equality holds. The convergence of the scheme is analysed along a sequence (Dm)m∈N of GDs (such a sequence plays the role, in standard mesh-based schemes, of a sequence of meshes with size going to zero). To establish this convergence, the sequence must satisfy a few key properties: consistency, limit-conformity and compactness. Definition 2.2. (Consistency) A sequence (Dm)m∈N of space-time GDs is said to be consistent if, for all φ ∈ H^1_0, SDm(φ) → 0 as m → ∞, and δtDm → 0 as m → ∞.
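For readers who prefer an operational view, the piecewise constant reconstruction of Definition 2.1(ii) can be sketched in a few lines. The 1D subcell layout below is a hypothetical example for illustration, not one of the paper's discretisations: each discrete unknown u_i is attached to a disjoint subcell Θi, and (ΠD u)(x) = Σ_i u_i 1_{Θi}(x).

```python
import numpy as np

def reconstruct(u, cell_edges, x):
    """Evaluate the piecewise constant ΠD u at points x;
    cell_edges[i] and cell_edges[i+1] bound the subcell Θi."""
    # np.digitize returns the index of the subcell containing each point
    idx = np.digitize(x, cell_edges) - 1
    idx = np.clip(idx, 0, len(u) - 1)
    return u[idx]

u = np.array([1.0, 3.0, 2.0])            # discrete unknowns u_i
edges = np.array([0.0, 0.25, 0.5, 1.0])  # subcells Θ0, Θ1, Θ2
x = np.array([0.1, 0.3, 0.9])
print(reconstruct(u, edges, x))          # value of the unknown of each point's subcell
```

The key property is that ΠD u takes exactly the value of the discrete unknown u_i on the subcell Θi, which is what makes the pointwise treatment of the nonlinearity ζ (and mass-lumped quadrature) possible.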
Definition 2.3. (Limit-conformity) A sequence (Dm)m∈N of space-time GDs is said to be limit-conforming if, for all sufficiently regular vector fields, the defect of the discrete Stokes formula vanishes as m → ∞. Definition 2.4. (Compactness) A sequence (Dm)m∈N of space-time GDs is said to be compact if, extending ΠDm v by 0 outside Θ, the space translates of ΠDm v are controlled uniformly by ‖∇Dm v‖_{L^2}. A limit-conforming or compact sequence of GDs also satisfies the following important property, which states a (uniform) discrete Poincaré inequality. Lemma 2.1. (Coercivity of sequences of GDs) If a sequence (Dm)m∈N of space-time GDs is compact or limit-conforming, then it is coercive: there exists a constant ρ bounding the discrete Poincaré constants of all the Dm.

Examples of gradient discretisations
We present here two examples of GDs: the mass-lumped P1 (MLP1) finite element method, and the hybrid mimetic mixed (HMM) method. Both are known to satisfy the properties above under standard mesh regularity assumptions [6].
We first specify the gradient discretisation of Definition 2.1 for MLP1. This GD is based on a conforming triangulation of the domain Θ. Let XD,0 = {v = (vs)s∈V : vs ∈ R, vs = 0 if s ∈ ∂Θ}, where V is the set of all vertices. To define the reconstruction operator ΠD, we construct a dual mesh (Θs)s∈V, which can for example be defined by taking Θs as the polygon obtained by linking the centers of the edges and cells having s as a vertex (see [6, Section 8.4] for more details). We then define the piecewise constant reconstruction ΠD on this dual mesh by: for all u ∈ XD,0 and all s ∈ V, ΠDu = us on Θs.
Further, the reconstructed gradient ∇D is the gradient of the piecewise linear function based on the vertex values, i.e., ∇Dv = Σ_{s∈V} vs ∇φs, where (φs)s∈V is the canonical basis of the conforming P1 finite element space on the triangulation. The second method, HMM, is a polytopal method and can therefore be applied to more general meshes made of polygons, polyhedra, etc. We therefore consider a polytopal mesh (see [6, Definition 7.2]). To describe the HMM gradient discretisation, let XD,0 = {v = ((vK)K∈M, (vσ)σ∈E) : vK, vσ ∈ R, vσ = 0 if σ ⊂ ∂Θ} be the space of discrete unknowns, where M and E denote the sets of cells and edges, respectively. For any v ∈ XD,0, the piecewise constant reconstruction ΠD is usually defined by: for all K ∈ M and v ∈ XD,0, (ΠDv)|K = vK. However, in the numerical tests of Section 6, we use a slightly modified version of the HMM. Specifically, the reconstruction ΠD builds the mass of the solution from both cell and edge unknowns: ΠDv is piecewise constant, equal to vK on a domain DK in K and to vσ on a domain Dσ around σ. In practice, these domains do not need to be specified; only their volumes need to be fixed, which is done by selecting a parameter 0 ≤ r ≤ 1 (where K, L are the two cells around σ). The integrals involving the piecewise constant functions thus constructed can then be computed using only these volumes. This modification of the usual HMM ensures that all unknowns (cell and edge) have a strictly positive contribution to the mass matrix, which facilitates the convergence of the Newton iterations (the total number of iterations is reduced). The gradient reconstruction ∇D is built from the consistent polytopal gradients and the stabilisation terms RK,σ, where nK,σ is the outer normal to K on σ. More precisely, denoting by DK,σ the convex hull of the center xK of K and σ, and by dK,σ the orthogonal distance between xK and σ ∈ EK, ∇D is defined piecewise on the subcells (DK,σ)σ∈EK. The stabilisation term is required to control the unknowns (vK)K∈M, which are not involved in the polytopal gradient.
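The volume bookkeeping behind the modified HMM reconstruction can be sketched as follows. Only the volumes |DK| and |Dσ| matter when assembling mass integrals; the equal-split rule used below (each cell keeps a fraction 1 − r of its volume and distributes the rest equally among its edges) is an assumption for illustration, as the paper's exact splitting formula is not reproduced in this excerpt.

```python
import numpy as np

def split_volumes(cell_vol, cell_edgelists, n_edges, r):
    """Return (|D_K|)_K and (|D_sigma|)_sigma for a parameter 0 <= r <= 1.
    Assumed rule: cell K keeps (1 - r)|K|; the remaining r|K| is split
    equally among its edges, so an interior edge collects two shares."""
    dK = {K: (1 - r) * vol for K, vol in cell_vol.items()}
    dS = np.zeros(n_edges)
    for K, edges in cell_edgelists.items():
        share = r * cell_vol[K] / len(edges)   # equal split (assumption)
        for s in edges:
            dS[s] += share
    return dK, dS

# two unit square cells sharing edge 0; edges 1..6 lie on the boundary
cell_vol = {"K": 1.0, "L": 1.0}
cell_edgelists = {"K": [0, 1, 2, 3], "L": [0, 4, 5, 6]}
dK, dS = split_volumes(cell_vol, cell_edgelists, 7, r=0.4)
# total volume is conserved: sum |D_K| + sum |D_sigma| = |Theta|
print(dK["K"], dS[0], dK["K"] + dK["L"] + dS.sum())
```

With r = 0 one recovers the usual HMM reconstruction (all mass on the cells); any r > 0 gives every edge unknown a strictly positive weight in the mass matrix, which is the point of the modification.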
, where the stochastic integral on the left-hand side is the Itô integral in L^2(Θ).
Theorem 2.2. Let (Dm)m∈N be a sequence of GDs that is consistent, limit-conforming, and compact. For every m ≥ 1, there exists a random process um solution to the GS (2.4) with D := Dm. Moreover, there exists a weak martingale solution (Ω̃, F̃, F̃, P̃, W̃, ũ) to (1.1) in the sense of Definition 2.1 such that, for a sequence (ũm)m of random processes defined on Ω̃ and sharing the same laws as (um)m, the following convergences hold P̃-a.s. up to a subsequence

A priori estimates
In the following lemma, we first provide a priori estimates for the solution u to (2.4), and then deduce its existence. The convergence analysis is carried out along a sequence (Dm)m of GDs that satisfies the consistency, limit-conformity and compactness properties. For ease of notation, in this section we drop the index m and denote by D a generic element of that sequence. In the proofs, C denotes a generic constant which may change from one line to the next but has the same dependencies as the constants appearing in the statement of the result to be proved.

Lemma 3.1.
There exists at least one solution u to the gradient scheme (Algorithm 2.1), and there exists a constant C > 0 such that the estimate (3.1) holds. Proof.
We choose the test function φ = ζ(u^(n+1)) in (2.4). Using an elementary identity, first with a = ∇Dζ(u^(n)) and b = ∇Dζ(u^(n+1)), we obtain a first estimate from (2.4). Moreover, since ζ is Lipschitz and non-decreasing, substituting a = ΠDζ(u^(n)) and b = ΠDζ(u^(n+1)) yields a second estimate. Plugging the latter into (3.6) and using (2.3), we get (3.8). Adding together (3.4) and (3.8) yields (3.9). A telescopic sum from n = 0 to n = k − 1 gives (3.10). Applying the Cauchy-Schwarz inequality and the Young inequality to the second term of the right-hand side, we obtain (3.11). Substituting (3.11) into (3.10), we get (3.12). We note that the last term of the right-hand side of (3.12) vanishes when taking its expectation, since ΠDu^(n) is F_{t^(n)}-measurable and thus independent of ∆^(n+1)W, which has zero expectation. Taking expectations, we get (3.13). From (2.2) and using the independence of the increments of the Wiener process, we bound the last term of the right-hand side; together with (3.13), this yields a discrete Gronwall-type inequality. Applying the discrete version of the Gronwall lemma together with assumption (2.1), we obtain (3.14). Step 2: bound on Ξ(ΠDu).
By taking the maximum of (3.12) over 1 ≤ k ≤ N and applying the expectation, we get (3.15). To bound the third term in the right-hand side, we treat the sum as the stochastic integral of a piecewise constant integrand and use the Burkholder-Davis-Gundy inequality [2, Theorem 2.4] with constant B, using the Young inequality in the third inequality, and the bound (2.2) together with the Poincaré inequality (2.6) in the fourth inequality. We use (3.14) to bound the last term of the right-hand side of (3.16), giving (3.17). By using (3.15)-(3.17), we deduce the bound, which completes the proof of the a priori estimates (3.1).
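The telescoping-plus-discrete-Gronwall argument used in the proof above can be illustrated numerically. The sketch below uses illustrative constants (λ, c, δt are not taken from the paper) and checks the standard discrete Gronwall bound a_n ≤ e^{λnδt}(a_0 + c n δt) against the worst-case recursion a_{n+1} = (1 + λδt) a_n + c δt.

```python
import math

def gronwall_bound(a0, lam, c, dt, n):
    # discrete Gronwall: a_{n+1} <= (1 + lam*dt) a_n + c*dt implies
    # a_n <= exp(lam * n * dt) * (a0 + c * n * dt)
    return math.exp(lam * n * dt) * (a0 + c * n * dt)

lam, c, dt, N = 2.0, 0.5, 0.01, 100   # illustrative constants
a = [1.0]
for n in range(N):
    a.append((1 + lam * dt) * a[-1] + c * dt)   # worst case: equality

ok = all(a[n] <= gronwall_bound(a[0], lam, c, dt, n) + 1e-12
         for n in range(N + 1))
print(ok)   # True: the recursion never exceeds the Gronwall bound
```

This is exactly the mechanism that turns the step-by-step estimate (after telescoping) into a bound uniform over all time levels n ≤ N.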
The existence of at least one solution u to (2.4) in Algorithm 2.1 follows from assumption A1, the estimate (3.1) and the same arguments as in [6, Corollary 6.9]. Moreover, since ζ is coercive by A1, there exist K1, K2 such that (3.18) holds. To obtain suitable compactness properties on the sequence of approximate solutions, we will also need an improved version of (3.1), in which we estimate the higher moments of Ξ(ΠDu) and ∇Dζ(u). Lemma 3.2. Let u be a solution to (2.4). Then there exists a constant C > 0 such that (3.20) holds. Remark 3.2. Following the technique in [8], we could also bound the moments of order q for any q ≥ 1, instead of q = 2 as above, but these bounds would not be useful to us here. Proof. We first introduce the notations Z^(n) and the associated quantities used below. Step 1: bound on max_n E[(Z^(n))^2].
We use the above notations to rewrite (3.9). Multiplying the resulting equation by Z^(n+1), we obtain an identity whose terms I1 and I2 we now estimate. We use the Cauchy-Schwarz and Young inequalities to estimate I1. To estimate I2, we note that, by (2.2) and (3.18), the noise term is controlled. Using the estimates on I1 and I2 together with (3.5), (3.21) and (3.22), we get (3.23). We note that the last term on the right-hand side of (3.23) vanishes when taking the expectation, while the first two terms are estimated directly. Step 2: conclusion of the proof.
Taking the maximum over k and then applying the expectation, we get (3.26). Using the Burkholder-Davis-Gundy inequality (with constant B) and (3.22), the conclusion follows. Obtaining a strong compactness result on the sequence of approximate solutions requires estimating the time translates of ζ(ΠDu). The following lemma is the key element for this estimate. Lemma 3.3. Let u be the solution of (2.4). Then there exists a constant C > 0 such that, for all admissible ℓ ≥ 1, the estimate on time translates holds. Proof. For any φ ∈ XD,0, writing (2.4) for a generic i ∈ {0, ..., ℓ} and summing over i yields (3.29). Choosing a suitable test function and taking the sum over n from 1 to N − ℓ gives an identity with terms I1 and I2. Applying (3.7) for a = ΠDζ(u^(n)) and b = ΠDζ(u^(n+1)), we first estimate the expectation of I1 by the Cauchy-Schwarz inequality, using the fact that each integral over [t^(k), t^(k+1)), for k = n, ..., N, appears at most ℓ times. To estimate the expectation of I2, we use the Young inequality. By the Itô isometry, Lemma 3.1 and (2.2), we bound the last term of the right-hand side. We set A := {φ ∈ XD,0 : ‖ΠDφ‖_{L^2} + ‖∇Dφ‖_{L^2} ≤ 1}. Proof. We apply (3.29) to a generic φ ∈ A. To estimate I1, we use Lemma 3.2 and the Hölder inequality. We estimate I2 using the Burkholder-Davis-Gundy inequality, (2.2) and (3.20). The following lemma gives a bound on the time variations of (⟨ΠDu, ΠDPDφi⟩_{L^2})_{i∈N}.
For any t ∈ [0, T], we define the process MD(t). In the following lemma, we estimate MD.
Lemma 3.6. For any β ∈ (0, 1/2) there exists a constant C > 0, depending on β, such that the estimates (3.43) hold. Proof. Using the Burkholder-Davis-Gundy inequality, we obtain a bound on the increments of MD. The first estimate then follows from (2.2) and Lemma 3.2.
The second estimate follows from (3.39).

Compactness of the solution to the scheme
From here on, we re-introduce the index m in the sequence of gradient discretisations (Dm)m, removed at the start of Section 3, and prove the tightness of the sequence (Rm)m∈N of random variables defined below. The Skorokhod theorem [14] then allows us to construct a probability space and random variables that converge almost surely.
To prove the tightness of (ΠDm ζ(um))m, we define a norm ‖·‖_{β,Dm} on X^{Nm+1}_{Dm,0}. Recalling that the functions {φi}i defined by (3.40) form a family of smooth functions whose span is dense in L^2, we define a metric dL^2_w; on bounded sets of L^2 this metric induces the weak topology. To prove the tightness of (MDm)m, we introduce the space L^r(0, T; L^2_w) for 2 ≤ r < 4, which is L^r(0, T; L^2) endowed with a similar metric. Moreover, on bounded sets of L^∞(0, T; L^2), the topology of L^r(0, T; L^2_w) coincides with the one defined by this metric.
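The weak-topology metric is not written out in this excerpt; a standard construction (assumed here) is d(u, v) = Σ_i 2^{-i} min(1, |⟨u − v, φi⟩|) over a dense family (φi)_i. The sketch below implements this with a simple quadrature on (0, 1); the sine family and the sample functions are illustrative choices.

```python
import numpy as np

def weak_metric(u, v, phis, dx):
    """d(u, v) = sum_i 2^{-i} min(1, |<u - v, phi_i>|); inner products
    approximated by a midpoint Riemann sum with spacing dx."""
    d = 0.0
    for i, phi in enumerate(phis, start=1):
        inner = np.dot(u - v, phi) * dx
        d += 2.0 ** (-i) * min(1.0, abs(inner))
    return d

n = 2000
x = (np.arange(n) + 0.5) / n                         # midpoints of (0, 1)
dx = 1.0 / n
phis = [np.sin(k * np.pi * x) for k in range(1, 6)]  # smooth dense family
u = np.sin(np.pi * x)
v = u + np.sin(4 * np.pi * x)

print(weak_metric(u, u, phis, dx))        # 0.0: the metric vanishes on the diagonal
print(weak_metric(u, v, phis, dx) > 0.0)  # positive when u and v differ
```

The geometric weights 2^{-i} make the series converge for any bounded pair (u, v), which is what allows this metric to metrise the weak topology on bounded sets only.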
Using similar arguments as in the proof of [8, Lemma A.3], we have the following result.
Lemma 4.1. Let β ∈ (0, 1). For any normed space V and v ∈ H^β(0, T; V), there exists C, depending only on v and β, such that a uniform bound on the time translates of v holds. We define the space E, where L^∞(0, T; L^2)_{w*} and L^2(0, T; L^2)^d_w denote the spaces endowed with the weak-* and weak topologies, respectively.
Lemma 4.2.The laws of (Rm) m∈N in E are tight.
Proof. Consider, for a fixed constant λ, the sets Km(λ) and K(λ). For each m ∈ N, Km(λ) is bounded in the finite dimensional space ΠDm X^{N+1}_{Dm,0} and is therefore relatively compact in L^2(0, T; L^2). We set Xm := ΠDm X^{N+1}_{Dm,0} and define the corresponding norm. By the compactness of (Dm)m∈N, (Xm)m∈N is compactly embedded in L^2 in the sense of [6, Definition C.4]. Let (νm)m∈N be a sequence such that νm ∈ Km(λ) for all m and, by definition of that space, take wm ∈ X^{Nm+1}_{Dm,0} such that ΠDm wm = νm and ‖wm‖_{β,Dm} ≤ λ. By the definitions (4.1) and (4.3) of the norms ‖·‖_{β,Dm} and ‖·‖_{Xm}, using Lemma 4.1 and the fact that ‖νm‖_{H^β(0,T;L^2)} ≤ ‖wm‖_{β,Dm} ≤ λ, we have a bound on the time translates of νm that is uniform in m ∈ N. Therefore, using [6, Proposition C.5], any sequence (νm)m∈N such that νm ∈ Km(λ) for each m ∈ N is relatively compact in L^2(0, T; L^2). Combining this with the relative compactness of each Km(λ) and invoking [8, Lemma A.4] proves that K(λ) is relatively compact in L^2(0, T; L^2). Moreover, using the Chebyshev inequality and (4.2), for any m ∈ N we obtain a tail bound. As the right-hand side does not depend on m and tends to 0 as λ → ∞, and since K(λ) is compact, this proves that the laws of (ΠDm ζ(um))m∈N are tight in L^2(0, T; L^2). Therefore, using the bounds on (ΠDm um)m∈N, (⟨ΠDm um, ΠDm PDm φi⟩_{L^2})_{i∈N} and (MDm)m∈N stated in Remark 3.3, Lemmas 3.5 and 3.6 imply the tightness of the laws of (Rm)m∈N in E.
In the following lemma, we show the almost sure convergence of (Rm)m∈N, up to a change of probability space, using the Skorokhod theorem.

Lemma 4.3.
There exist a probability space (Ω, F, P), a sequence of random variables (um, Mm, Wm)m∈N and random variables (u, M, W) on this space such that: the laws of (um, Mm, Wm) and of Rm coincide for each m ∈ N; Mm → M a.s. in L^4(0, T; L^2_w) (4.6); and um is a solution to the gradient scheme (2.4) with D = Dm and W replaced by Wm.
Furthermore, up to a subsequence as m → ∞, the stated convergences hold for almost all t ∈ (0, T) and all r < 4. Proof. By the tightness of Lemma 4.2 and the Skorokhod theorem, there exist, on a new probability space, random variables (ym, zm, Zm, Mm, Wm, (Pm,i)i∈N)m∈N taking values in E, with the same laws, for each m ∈ N, as Rm, and random variables (u, z, Z, M, W, (Pi)i∈N) in E, such that there exists a set A ⊂ Ω satisfying P(A) = 1 and, up to a subsequence, for all ω ∈ A, zm(ω) → z(ω) in L^2(0, T; L^2), and the convergences (4.6) and (4.7) hold. Since the laws of (ym, zm, Zm, (Pm,i)i∈N) and of the corresponding discrete objects are identical, there exists um ∈ XDm,0 realising these random variables as reconstructions. To prove (4.8), we first observe, using (4.12), weak-strong convergence and the definition of consistency of space-time GDs, that ⟨ΠDm um, ΠDm PDm φi⟩_{L^2} → ⟨u, φi⟩_{L^2} a.s. in L^4(0, T) for each i ∈ N; the consistency term converges to 0 since SDm(φi) → 0 as m → ∞ for each i ∈ N. From (4.15) and (4.14), we obtain the convergence (4.8). Lastly, to prove (4.9) and (4.10), we observe, for each i ∈ N, an estimate based on Remark 3.3 and the Cauchy-Schwarz inequality. As a result of (3.20), the coercivity of (Dm)m∈N and (3.43), for each i ∈ N and r < 4 the sequence (⟨ΠDm um, φi⟩_{L^2})m∈N is equi-integrable in L^r(Ω × (0, T)), and the sequences (ΠDm ζ(um))m∈N and (Mm)m∈N are equi-integrable in L^r(Ω; L^2(ΘT)) and L^r(Ω × (0, T); L^2), respectively. Using (4.4), (4.6) and (4.8), we apply the Vitali theorem to obtain the corresponding convergences. Hence, there exists a subsequence, still denoted by (Mm)m∈N, such that Mm(t) converges to M(t) for almost all t ∈ (0, T) in L^r(Ω; L^2_w), which proves (4.10). Moreover, by diagonal extraction, we can find a single subsequence, valid for all i ∈ N and still denoted by (⟨ΠDm um, φi⟩_{L^2})m∈N, such that ⟨ΠDm um(t), φi⟩_{L^2} converges to ⟨u(t), φi⟩_{L^2} for almost all t ∈ (0, T) in L^r(Ω). This implies the convergence (4.9).
The continuity of the stochastic processes u and M is shown in the following lemma. Proof. This result is a consequence of the previous estimates and the Kolmogorov test. For r < 4 and 0 ≤ s ≤ s′ ≤ T, using (3.28), (3.37) and (3.42), we obtain an increment bound involving ‖·‖_{β,Dm} and C SDm(φi)^r. Using the Jensen inequality and letting m → ∞, the last term tends to 0 by a discrete version of the dominated convergence theorem. Using (4.9) and the Fatou lemma, we infer, for almost all s, s′, that E[dL^2_w(u(s′), u(s))^r] ≤ C|s′ − s|^{r/2}, which implies the desired continuity of u by the Kolmogorov test for r < 4.
For the continuity of M, from (3.44) and reasoning as in [8, Lemma 3.5], an analogous increment bound holds for r < 4. Applying the Kolmogorov test, we obtain the continuity of M.
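The Kolmogorov test used above hinges on moment bounds of the form E|X(s′) − X(s)|^r ≤ C|s′ − s|^{r/2}. This scaling can be checked empirically for a scalar Wiener process, for which E|W(s′) − W(s)|^4 = 3|s′ − s|^2; the sample size and time points below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
s, sp, n = 0.3, 0.7, 200_000
# W(s') - W(s) ~ N(0, s' - s) for a standard Wiener process
increments = rng.normal(0.0, np.sqrt(sp - s), size=n)
moment = np.mean(increments ** 4)          # empirical fourth moment
print(moment, 3 * (sp - s) ** 2)           # the two values should be close
```

The exponent r/2 > 1 on |s′ − s| is exactly what the Kolmogorov test requires to conclude the existence of a continuous modification.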

Identification of the limit
In this section, we first identify the representation of the martingale M, and then prove the main theorem. We know that M is continuous and square integrable from Lemma 4.4 and (4.16), respectively. Following the same arguments as in the proof of [8, Lemma 5.1] for f(ζ(u)), and using (4.4), (4.10) and (4.17), the quadratic variation of M is identified for all a, b ∈ L^2 and any t ≥ 0. Therefore, applying the continuous martingale representation theorem [4, Theorem 8.2], there exist a probability space (Ω̃, F̃, P̃), a filtration {F̃t} and a Q-Wiener process W̃ defined on (Ω × Ω̃, F × F̃, P × P̃), adapted to {Ft × F̃t}, such that M(•, ω) = M̃(•, ω, ω̃) and u(•, ω) = ũ(•, ω, ω̃) a.s. in (ω, ω̃) and, for every t ≥ 0, the martingale representation holds. We are now ready to prove our main theorem.
Recalling that um solves the gradient scheme with W replaced by Wm, summing this relation from n = 0 to n = k and choosing φ := PDm ψ, where ψ ∈ H^1_0(Θ), we obtain (5.2). Using (4.9), (4.10), PDm ψ → ψ in L^2 and the consistency of (Dm)m∈N, we obtain (5.3) for almost every t. To prove the convergence of the last term of the left-hand side of (5.2), we first observe (5.4). Using the convergence (4.5) and ∇Dm PDm ψ → ∇ψ in L^2, we obtain the convergence of the first term of the right-hand side of (5.4) for any t. Lastly, the expectation of the absolute value of the last term in the right-hand side of (5.4) tends to zero as m → ∞, as a consequence of (3.1). Using (5.3)-(5.5) and (5.1), we pass to the limit in (5.2) and observe that ũ satisfies item (4) in Definition 2.5. This shows that (Ω̃, F̃, F̃, P̃, ũ(•), W̃(•)) is a weak martingale solution.

Numerical Examples
We consider the stochastic Stefan problem in dimension d = 2, with Θ = (0, 1)^2 and T = 1; an exact reference solution is used to fix the initial and boundary conditions. We choose δtD = h^2 to ensure that the truncation in time does not dominate the error, so that the spatial truncation error remains the leading term in the estimates. The gradient schemes are compared in Figures 1 and 3. In Figure 1, the error estimates are compared against the mesh sizes. All the quantities show convergence, but we note a larger magnitude of the error for ΠDΞ with the HMM scheme on hexagonal meshes; this can probably be explained by the rougher interpolation process used for this quantity (which does not use any reconstruction in the cells). The average rate of convergence remains around 1 for each quantity. We also tested the validity of the theoretical bounds on ΠDζ(u), ∇Dζ(u) and ΠDΞ(u) by plotting the respective norms of these quantities in Figure 2. Figure 3 compares the gradient schemes in terms of their algebraic complexity: we plot the error estimates for each quantity against the number of degrees of freedom (Ndofs). In computing the Ndofs for the HMM scheme, we only count the number of edges, since the cell unknowns can be eliminated by static condensation. These plots show, for a given mesh family, a slight advantage to HMM for the approximation of ΠDζ(u), and a slight advantage to MLP1 for ∇Dζ(u); the approximations of ΠDΞ(u) are similar (despite a rougher interpolation of this quantity for the HMM scheme).
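To make the structure of such a test concrete, here is a minimal, self-contained sketch of the time loop of a gradient scheme with multiplicative noise. It is not the paper's test case: it uses a 1D two-point finite-volume discretisation (a member of the GDM family), takes ζ(u) = u so that each implicit step reduces to a single linear solve, and assumes an illustrative noise coefficient f(u) = 0.1u, whereas the actual experiments use a nonlinear Lipschitz ζ with Newton iterations on 2D meshes.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 50                         # number of cells
h = 1.0 / M
dt = h * h                     # time step delta_t = h^2, as in the paper
N = round(1.0 / dt)            # final time T = 1
x = (np.arange(M) + 0.5) * h   # cell centers

# stiffness matrix of -d^2/dx^2 with homogeneous Dirichlet BCs
A = (2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h**2
lhs = np.eye(M) + dt * A       # implicit Euler operator

u = np.sin(np.pi * x)          # discrete initial condition I_D u0
for n in range(N):
    dW = rng.normal(0.0, np.sqrt(dt))           # Wiener increment
    u = np.linalg.solve(lhs, u + 0.1 * u * dW)  # f(u) = 0.1 u (assumption)

print(np.all(np.isfinite(u)), float(np.max(np.abs(u))) < 2.0)
```

The diffusion is treated implicitly while the noise is sampled explicitly from the previous time level, mirroring the structure of the gradient scheme (2.4), where the stochastic term is evaluated at u^(n) and the nonlinear diffusion at u^(n+1).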

Conclusion
We presented a generic numerical analysis, based on the GDM framework, of the stochastic Stefan problem driven by multiplicative noise. Using discrete functional analysis tools and the Skorokhod theorem, we showed the compactness of the solutions to the gradient scheme. We then proved the existence of a weak martingale solution and obtained the convergence of the approximate solutions. Although these results apply to all methods covered by the GDM framework, we chose MLP1 and HMM to illustrate them. We observed that, under the influence of multiplicative noise, the overall numerical approximations are reasonably good and corroborate the theoretical results.
A1. ζ : R → R is Lipschitz continuous with Lipschitz constant L_ζ^{-1}. We also assume that ζ is non-decreasing, satisfies ζ(0) = 0, and is coercive in the sense that there exist c, d > 0 such that ζ(s) ≥ c|s| − d for all s ∈ R. The function Ξ is defined by Ξ(z) = ∫_0^z (...) ds for all z ∈ R.

Lemma 4.4.
The stochastic process u has a continuous version in C([0, T]; L^2_w) and M has a continuous version in C([0, T]; L^2).
(3.44) Together with the first estimate in (3.43), and reasoning as in [8, Lemma A.2], we obtain the second estimate in (3.43).