Linear implicit approximations of invariant measures of semi-linear SDEs with non-globally Lipschitz coefficients

This article investigates the weak approximation of the invariant measure of semi-linear stochastic differential equations (SDEs) with non-globally Lipschitz coefficients. For this purpose, we propose a linear-theta-projected Euler (LTPE) scheme, which also admits an invariant measure, to handle the potential influence of linear stiffness. Under certain assumptions, both the SDE and the corresponding LTPE method are shown to converge exponentially to their respective invariant measures. Moreover, with time-independent regularity estimates for the corresponding Kolmogorov equation, the weak error between the numerical invariant measure and the original one is shown to converge with order one. In terms of computational complexity, the proposed ergodicity-preserving scheme, with the nonlinearity treated explicitly, has a significant advantage over the ergodicity-preserving implicit Euler method in the literature. Numerical experiments are provided to verify our theoretical findings.


Introduction
The primary objective of this paper is to study the invariant measures of semi-linear stochastic differential equations (SDEs) with multiplicative noise and their weak approximations. Given the probability space (Ω, F, P), we consider the following R^d-valued semi-linear SDE of Itô type:

dX_t = (A X_t + f(X_t)) dt + g(X_t) dW_t, X_0 = x_0, (1.1)

where A ∈ R^{d×d} represents a negative definite matrix, f : R^d → R^d is the drift coefficient function, g : R^d → R^{d×m} is the diffusion coefficient function, and W = (W^1, ..., W^m)^T : [0, T] × Ω → R^m denotes the R^m-valued standard Brownian motion with respect to {F_t}_{t∈[0,T]}. Moreover, the initial data x_0 : Ω → R^d is assumed to be F_0-measurable. This form covers a broad class of SDEs used to model real applications, for instance, the stochastic Ginzburg-Landau equation (see (6.2)), the mean-reverting model (see (6.3) or [12,18]) and space discretizations of stochastic partial differential equations (SPDEs) (see (6.5) or [19,26]).
In this paper, we pay particular attention to a class of SDEs that, under certain conditions, converge exponentially to a unique invariant measure π. Evaluating the expectation of some function ϕ with respect to that invariant measure π is of great interest in mathematical biology, physics and Bayesian statistics:

E_π[ϕ] = ∫_{R^d} ϕ(x) π(dx). (1.2)

Generally speaking, it is not easy to obtain either the analytical solutions of SDEs or an explicit expression of the invariant measure. The study of numerical approximations of π has therefore received increased attention. Previous research in this field typically focuses on SDEs whose coefficients are globally Lipschitz continuous [23]. Such a strong condition is, however, rarely satisfied by SDEs arising in applications. On the other hand, conventional numerical tools lose their power when attempting to simulate SDEs under relaxed conditions. For example, as claimed in [13,22], for a large class of SDEs with super-linearly growing coefficients, the widely used Euler-Maruyama scheme produces divergent numerical approximations in both finite and infinite time intervals. A natural question thus arises: how does one design a numerical scheme for the SDE (1.1), under the stiffness caused by the linear operator, that approximates its invariant measure π well and admits an error analysis?
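The divergence mechanism cited from [13,22] can be seen already in the noise-free part of a cubic drift: the explicit Euler map x → x − h x³ amplifies any iterate with h x² > 2, so a single large excursion triggers blow-up. The following toy demonstration is our own illustration, not taken from the paper:

```python
# Illustration (not from the paper): explicit Euler applied to the
# deterministic cubic drift f(x) = -x**3.  Whenever h * x**2 > 2, the map
# x -> x - h * x**3 strictly increases |x|, so iterates blow up.
def euler_cubic(x0, h, steps):
    x = x0
    for _ in range(steps):
        x = x - h * x**3
    return x

# A moderate initial state decays toward the equilibrium 0 ...
small = euler_cubic(0.5, 0.01, 1000)
# ... but a large one (h * x**2 = 100 > 2) explodes within a few steps.
big = euler_cubic(100.0, 0.01, 3)
print(abs(small) < 0.5, abs(big) > 1e9)
```

In the stochastic setting, the Brownian increment occasionally pushes the iterate into the amplification region, which is why the Euler-Maruyama moments diverge.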
Recent years have seen a steady growth of the literature on this topic, and it is worth mentioning that a majority of existing works analyze numerical approximations of invariant measures of SDEs via strong approximation error bounds (see [10,17,18,20,22,24]). The direct study of weak approximation errors (see [4,5,7,8]), which are of particular relevance in fields like financial engineering and statistics, is still in its early stages. In [7], the authors analyzed the backward Euler method for SDEs with piecewise continuous arguments (PCAs), where the drift is dissipative and the diffusion is globally Lipschitz, and recovered a time-independent convergence of order one. The author in [5] studied the tamed Euler scheme for ergodic SDEs with one-sided Lipschitz continuous drift coefficient and additive noise, and gave a moment bound that still depends on the terminal time. We also mention that the authors in [1] provided new sufficient conditions for a numerical method to approximate the invariant measure of an ergodic SDE with high order of accuracy, independently of the weak order of accuracy of the method.
Each class of methods exhibits drawbacks when approximating (1.2) weakly. Implicit methods by their nature have better stability, but at the price of increased complexity; explicit methods such as the tamed methods (see [14,27]), on the other hand, may not preserve the long-time properties numerically, since the taming factor has no positive lower bound. Even though the explicit projected method [25] does keep the asymptotic stability, it usually faces a severe stepsize restriction due to the stability issues of treating stiff linear systems explicitly. To apply the truncated methods [17] to approximate the invariant distribution, one has to construct a strictly increasing function that controls the growth of both drift and diffusion and to find its inverse. Besides, the weak error analysis of such schemes is, to the best of our knowledge, still an open problem. We therefore aim to propose a family of linear-implicit methods that not only address the challenges posed by stiff systems but also preserve ergodicity and achieve weak convergence towards the invariant measure admitted by the SDE (1.1).
More formally, our scheme, called the linear-theta-projected Euler (LTPE) method, with a method parameter θ ∈ [0, 1] and a uniform timestep h, is given by (1.3), where ∆W_n := W_{t_{n+1}} − W_{t_n}, n ∈ {0, 1, 2, ..., N − 1}, N ∈ N, and P : R^d → R^d is the projection operator defined in (1.4), with γ determined in Assumption 2.4 later. We point out that the scheme above can be derived from the stochastic theta methods [21,28] used to deal with different models. Also, note that the parameter θ is pre-determined. For a stiff system, we are able to treat the linear operator A implicitly (i.e., θ = 1) without sacrificing numerical efficiency; for a non-stiff system, the explicit variant (i.e., θ = 0) is more appropriate. In addition, we follow the projection technique, previously used in [2,3] for SDEs on finite time intervals, to prevent the nonlinear drift and diffusion from producing extraordinarily large values. Under certain conditions, for any ζ ∈ L^{8γ+2}(Ω, R^d), where γ is given by Assumption 2.4, the projected random variable P(ζ) converges strongly to ζ with order 2 (see Lemma 5.7 or [3]); see (1.5). Compared with the truncated method in [17], the implementation of the LTPE method (1.3) is more straightforward, since the projection operator we have chosen depends only on the growth of the drift and diffusion. Besides, when faced with linearly stiff systems, our method with θ = 1 does not suffer from an overly strict stepsize restriction.
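A minimal sketch of one LTPE step may clarify the construction. This is our own reading of (1.3) and (1.4), under two assumptions: the projection radius is h^{-1/(2γ)} (consistent with the bound ‖P(x)‖ ≤ h^{-1/(2γ)} used later), and the drift f and diffusion g are evaluated at the projected state while the linear part is split between implicit weight θ and explicit weight 1 − θ:

```python
import numpy as np

def project(x, h, gamma):
    # Projection onto the ball of radius h**(-1/(2*gamma)); our reading of (1.4).
    r = h ** (-1.0 / (2.0 * gamma))
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def ltpe_step(y, A, f, g, h, dW, theta=1.0, gamma=3):
    # One step of the linear-theta-projected Euler scheme (a sketch of (1.3),
    # not a verbatim copy): the linear operator A is treated with implicit
    # weight theta and explicit weight (1 - theta), while f and g are
    # evaluated explicitly at the projected state P(y).
    d = len(y)
    p = project(y, h, gamma)
    rhs = y + (1.0 - theta) * h * (A @ y) + h * f(p) + g(p) @ dW
    # Implicit linear part: solve (I - theta*h*A) y_{n+1} = rhs.
    return np.linalg.solve(np.eye(d) - theta * h * A, rhs)
```

Note that for θ = 1 each step only requires solving a linear system with the fixed matrix I − hA, which can be prefactorized once; this is the complexity advantage over fully implicit schemes mentioned in the abstract.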
To show the main result in Theorem 2.5, the derivations of the whole paper are organised in the following way: under Assumptions 2.1-2.4, which can be regarded as a kind of dissipativity condition, we follow [9] to present the existence and uniqueness of the invariant measures of the SDE (1.1) and the LTPE scheme (1.3), respectively, in Theorem 3.1 and Theorem 4.1; the main result regarding the weak error analysis, presented in Theorem 5.8, is derived based on the associated Kolmogorov equation (5.5) of the SDE (1.1). However, one may confront two main challenges. The first is to obtain a couple of a priori estimates that are independent of time and stepsize, including the uniform moment bounds of the LTPE method (1.3) and the time-independent regularity estimates of the Kolmogorov equation. The other is the implicitness and discontinuity of the proposed LTPE method (1.3), which results in further difficulties in handling the weak error via the Kolmogorov equation. Different techniques are used to circumvent these obstacles. A discretization strategy based on the binomial theorem is adopted to obtain the uniform moment bounds of the LTPE scheme (see Lemma 4.3), and we make use of the Itô formula and a variational approach to obtain the time-independent regularity estimates of the Kolmogorov equation (see Lemma 5.3 and Corollary 5.5). To deal with the implicitness and discontinuity of the LTPE scheme (1.3), we introduce its continuous version (1.6), where F(x) := Ax + f(x) for all x ∈ R^d. It can easily be observed that Z_n(t_{n+1}) = Y_{n+1} − θAY_{n+1}h. In order to estimate the numerical approximation error of the invariant measure, we separate the weak error, based on the associated Kolmogorov equation (see (5.5) or [6, Chapter 1]), into three parts, where, for short, we denote Z_n := Y_n − θAY_n h. Thanks to the fact that Z_{n+1} = Z_n(t_{n+1}) and the time-independent regularity estimates of the Kolmogorov equation, one can treat Error_1 and Error_2 directly and get max{Error_1, Error_2} =
O(h). For Error_3, we take full advantage of (1.6) and show a further decomposition as (1.8). The first term on the right-hand side of (1.8) is O(h) due to the regularity estimates of u(t, •) and (1.5); the second one, based on the Kolmogorov equation and the Itô formula, can also be proved to be O(h) (see more details in the proof of Theorem 5.8). Hence, we eventually obtain a weak error of order one, uniform in time, between the invariant measures admitted by the SDE (1.1) and the LTPE method (1.3). We summarize our main contributions:
• A family of linear implicit numerical methods, capable of dealing with stiff linear systems and inheriting invariant measures, is presented.
• Time-independent weak convergence between two invariant measures inherited by SDE (1.1) and LTPE scheme (1.3), respectively, is established under non-globally Lipschitz coefficients.
Some numerical tests are presented in Section 6 to illustrate our findings. Finally, the Appendix contains the detailed proofs of auxiliary lemmas.

Settings and main result
Throughout this paper, we use N to denote the set of all positive integers and let d, m ∈ N, T ∈ (0, ∞) be given. Let ‖•‖ and ⟨•, •⟩ denote the Euclidean norm and the inner product of vectors in R^d, respectively. We use max{a, b} and min{a, b} for the maximum and minimum of a and b, respectively, and sometimes we also use the shorthand a ∧ b for min{a, b}. Adopting the same notation as for the vector norm, we denote by ‖M‖ := (trace(M^T M))^{1/2} the trace norm of a matrix M ∈ R^{d×m}, where M^T represents the transpose of M. Given a filtered probability space (Ω, F, {F_t}_{t∈[0,T]}, P), we use E to denote the expectation and L^r(Ω, R^d), r ≥ 1, to denote the family of R^d-valued random variables ξ satisfying E[‖ξ‖^r] < ∞. The diffusion coefficient function g : R^d → R^{d×m} is frequently written as g = (g_{i,j})_{d×m} = (g_1, g_2, ..., g_m) for g_{i,j} : R^d → R and g_j : R^d → R^d, i ∈ {1, 2, ..., d}, j ∈ {1, 2, ..., m}. Moreover, we introduce the notation X^x_t, t ∈ [0, T], for the solution of the SDE (1.1) with initial condition X^x_0 = X_0 = x. Also, let Y^x_n, n ∈ {0, 1, ..., N}, N ∈ N, be an approximation of the solution of the SDE (1.1) with initial point Y^x_0 = x. In addition, denote by C_b(R^d) the Banach space of all uniformly continuous and bounded mappings φ : R^d → R endowed with the norm ‖φ‖_0 = sup_{x∈R^d} |φ(x)|.
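The matrix norm ‖M‖ = (trace(M^T M))^{1/2} used throughout is exactly the Frobenius norm; the following quick check is our own addition, not part of the paper:

```python
import numpy as np

# Our own sanity check: the trace norm sqrt(trace(M^T M)) coincides with
# the Frobenius norm provided by numpy.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 3))
trace_norm = np.sqrt(np.trace(M.T @ M))
print(np.isclose(trace_norm, np.linalg.norm(M, 'fro')))
```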
For a vector-valued function u : R^d → R^ℓ, u = (u^{(1)}, ..., u^{(ℓ)}), its first-order partial derivative is regarded as the Jacobian matrix. In the same manner, one can define the second-order partial derivative, and for any integer k ≥ 3 the k-th order partial derivatives of u can be defined recursively. Given Banach spaces X and Y, we denote by L(X, Y) the Banach space of bounded linear operators from X into Y. Then the partial derivatives of u can also be regarded as such operators. We remark that the partial derivatives of scalar-valued functions are covered by the case ℓ = 1. Denote by C_b^k(R^d) the space consisting of all functions φ with bounded partial derivatives D^i φ(x), 1 ≤ i ≤ k, equipped with the corresponding norm. Further, let 1_B be the indicator function of a set B. To close this part, we let C and C_A be generic constants independent of T and the stepsize; the notation C_A indicates an additional dependence on the matrix A.
We present the following assumptions required to establish our main result.
Assumption 2.1. Assume the matrix A ∈ R^{d×d} is self-adjoint and negative definite.
Assumption 2.1 immediately implies that there exist a non-decreasing sequence of positive real numbers 0 < λ_1 ≤ λ_2 ≤ ... ≤ λ_d and an orthonormal basis {e_i}_{i∈{1,...,d}} such that Ae_i = −λ_i e_i, i ∈ {1, ..., d}. Moreover, one also obtains ⟨x, Ax⟩ ≤ −λ_1 ‖x‖^2 for all x ∈ R^d. Setting y = 0 in Assumption 2.3 leads to a growth bound on the coefficients. Note that Assumption 2.3 is equivalent to the expression (2.12). Assumption 2.4 can be regarded as a kind of polynomial growth condition, and in the proofs that follow we will need some implications of this assumption, among them (2.16), where C_1 is a constant depending only on the drift f. Following the same idea, Assumption 2.4 also ensures corresponding bounds for g_j, j ∈ {1, ..., m}, and (2.21) with κ ∈ (0, 1). Theorem 2.5. Under Assumptions 2.1-2.4 with 2λ_1 > max{L_1, L_2}, the SDE (1.1) and the corresponding LTPE scheme (1.3) converge exponentially to unique invariant measures, denoted by π and π̃, respectively; moreover, for suitable test functions the weak error between π and π̃ is of order one. This theorem can be divided into three parts:
• Existence and uniqueness of the invariant measure of the SDE (1.1).
• Existence and uniqueness of the invariant measure of the LTPE scheme (1.3).
• Time-independent weak error analysis between the two invariant measures.
In the following, more details of each part will be shown.
3 Invariant measure of the semi-linear SDE

Indeed, we show the following result.
Theorem 3.1. Let Assumptions 2.1-2.3 be fulfilled with 2λ_1 > max{L_1, L_2}. Then the SDE (1.1), with the initial condition X_0 = x_0, admits a unique invariant measure π, to which the law of the solution converges exponentially fast. With the condition 2λ_1 > max{L_1, L_2}, the SDE (1.1) can be regarded as a dissipative system. We follow the standard way, as shown in [9], to prove the existence and uniqueness of the invariant measure of such systems. For completeness, we outline the central idea of the proof of Theorem 3.1, while the detailed proofs of the supporting lemmas can be found in the Appendix.
It is desirable to consider the SDE (1.1) with a negative initial time −ι, ι ≥ 0, as in (3.2), where the driving noise W̃_t is specified in the following way. Let W̄_t be another Brownian motion, independent of W_t, defined on the probability space (Ω, F, P); the two-sided noise W̃ is obtained by joining W for positive times and W̄ for negative times, with the associated filtration defined accordingly. In what follows, we write X^{s,x}_t in lieu of X_t to highlight the initial value X_s = x.
Before moving on, we introduce a useful lemma, which is a slight generalization of Lemma 8.1 in [15].
Lemma 3.2. If r(t) and m(t) are continuous on [τ, ∞), τ ∈ R, and satisfy the stated comparison inequality with a positive constant c, then the corresponding decay estimate holds.
The proof of Lemma 3.2 has been shown in [11]. It is time to present the uniform moment bounds of the SDE (3.2).
Lemma 3.3. (Uniform moment bounds of semi-linear SDEs.) Let the semi-linear SDE {X^{−ι,x_0}_t}_{t≥−ι} in (3.2) satisfy Assumptions 2.1, 2.2 with 2λ_1 > L_1. Then the moment bound holds for any p ∈ [1, p_0] and t ∈ [0, ∞).
The proof of Lemma 3.3 can be found in Appendix A.1. Note that Lemma 3.3 also covers the case p ∈ (0, 1) due to the Hölder inequality. Following Lemma 3.3, we obtain the contractive property of the SDE (1.1).
Lemma 3.4. Let Assumptions 2.1, 2.3 hold with 2λ_1 > L_2. Then there exists a constant such that the contraction estimate holds.
The proof of Lemma 3.4 can be found in Appendix A.2. The next lemma is a direct consequence of Lemma 3.3 and Lemma 3.4.
Lemma 3.5. Consider the semi-linear SDE (3.2) satisfying Assumptions 2.1-2.3 with 2λ_1 > max{L_1, L_2}. Let X^{−s_1,x_0}_t and X^{−s_2,x_0}_t with s_1, s_2 > 0 satisfying −s_1 < −s_2 ≤ t < ∞ be the solutions of the SDE (3.2) at time t starting from the same point x_0 but at different initial moments. Then, for any p ∈ [1, p_0], there exists some constant c_2 > 0 such that the corresponding exponential closeness estimate holds.
The proof of Lemma 3.5 is postponed to Appendix A.3. Equipped with the previously derived lemmas, it is not hard to show Theorem 3.1. To be precise, recalling Lemma 3.5 and sending s_1 to infinity, one directly observes that {X^{−s,x_0}_0}_{s>0} is a Cauchy sequence in L^2(Ω, R^d) and hence admits a limit. Using Lemma 3.5 again yields the convergence rate. By Lemma 3.4, we know the limit ϑ^{x_0} is independent of x_0, i.e.,
it is thus denoted by ϑ. Let π be the law of the random variable ϑ; then π is the unique invariant measure for the SDE (1.1). Moreover, since X^{x_0}_t and X^{−t,x_0}_0 have the same distribution, the exponential convergence (3.12) follows.

4 Invariant measure of the LTPE scheme

The main result of this section is provided below.
Theorem 4.1. The numerical approximation {Y^{x_0}_n}_{0≤n≤N} produced by the LTPE method (1.3) with initial point x_0 admits a unique invariant measure π̃. Moreover, there exists some positive constant C_1 such that the exponential convergence estimate (4.2) holds for suitable test functions. The theorem above can be proved in exactly the same way as Theorem 3.1, where the ergodicity of the LTPE scheme (1.3) boils down to verifying the uniform moment bounds (see Lemma 4.3) and the contractive property (see Lemma 4.4). Before proceeding further, we first establish some preliminary estimates necessary for the proof of Theorem 4.1.
Lemma 4.2. Recall the definition of P(x) in (1.4). Let Assumptions 2.2, 2.4 be fulfilled. Then for any x ∈ R^d the estimates (4.3) hold. In particular, for any integer p ≥ 1 and x ∈ R^d, the corresponding p-th power bounds hold. Moreover, for any x, y ∈ R^d, the estimates (4.5) hold with a constant depending only on f.
The proof of Lemma 4.2 can be found in Appendix B.1. The next lemma provides the uniform moment estimates for the LTPE scheme (1.3).
Lemma 4.3. (Uniform moment bounds of the LTPE method.) Let Assumptions 2.1, 2.2 and 2.4 hold with 2λ_1 > L_1. For a method parameter θ ∈ [0, 1], consider the numerical approximation Y_n produced by the LTPE method (1.3). Then, for any uniform stepsize h ∈ (0, 1) satisfying the stated restriction, the uniform moment bound holds.
Proof of Lemma 4.3. We first square both sides of (1.3) and analyze the left-hand and right-hand sides individually. Using Assumption 2.1, the left-hand side becomes (4.6); the right-hand side becomes (4.7), where B := I + (1 − θ)Ah. In the following, let us start with the estimate of (4.6).

Case I: estimate of E[‖Y_{n+1}‖^{2p}] when p = 1. Using the Young inequality yields (4.9) and (4.10). Taking expectations of (4.9) and (4.10), respectively, and applying Lemma 4.2, this in conjunction with Assumption 2.2 and 2λ_1 > L_1 leads to, for some positive constant, the one-step bound, where we use 1 − x ≤ e^{−x} for any x > 0.
Case II: estimate of E[‖Y_{n+1}‖^{2p}] when p ∈ (1, p_0) ∩ N. Proceeding to the estimates of the higher-order moments of the LTPE method (1.3), some restrictions need to be imposed on the timestep h.
Under this restriction, the matrix B is obviously positive definite with max_{i=1,...,d} λ_{B,i} = 1 − (1 − θ)λ_1 h. By the Young inequality, we get a bound for some positive constant ǫ_1 ∈ (0, (2p_0 − 2p)/(2p − 1)]. Applying the binomial expansion theorem and taking the conditional expectation with respect to F_{t_n} on both sides shows (4.17). Hence, the analysis can be divided into the following two parts. For the estimate of I_1: applying the binomial expansion theorem again, one has (4.19). Let us decompose the estimate of I_1 further into four steps.
Step I: the estimate of E[Ξ_{n+1} | F_{t_n}].
Based on the properties of Brownian motion and the fact that ∆W_n is independent of F_{t_n}, we deduce

E[Ξ_{n+1} | F_{t_n}] = (1 + ǫ_1) h ‖g(P(Y_n))‖^2 + 2h ⟨P(Y_n), f(P(Y_n))⟩. (4.20)

Step II: the estimate of E[Ξ^2_{n+1} | F_{t_n}]. Recalling the moment properties of Brownian increments, we have, for any ℓ ∈ N and each component of the increment, E[|∆W^j_n|^{2ℓ}] = (2ℓ − 1)!! h^ℓ, where (2ℓ − 1)!! := ∏_{i=1}^{ℓ} (2i − 1). Before moving on, we introduce a series of useful estimates. For any ℓ ∈ [2, ∞) ∩ N, by Lemma 4.2 and (4.21), one can achieve (4.22) with some constant. Similarly, with the Cauchy-Schwarz inequality, one gets (4.23). One needs to be careful about the estimate of the term I_3; combining with (4.19) yields (4.24). It is time to move on to the estimate of E[Ξ^2_{n+1} | F_{t_n}]. We begin with the expansion of Ξ^2_{n+1}. As claimed before, one observes that the leading terms are controlled as above, with C = C(L_1, C_f). Since, for any positive constant ℓ ∈ [2, p] ∩ N, p < p_0 and ǫ_1 ∈ (0, (2p_0 − 2p)/(2p − 1)], the corresponding bounds hold, we obtain the desired estimate.
Step III: the estimate of E[Ξ^3_{n+1} | F_{t_n}]. By a similar procedure, we can acquire the bound, where (4.19) and (4.21) are used to imply (4.34).

Step IV: the estimate of E[Ξ^ℓ_{n+1} | F_{t_n}] for ℓ ∈ [4, p] ∩ N. Bearing in mind the fact from Lemma 4.2, we proceed as follows.
Using the Young inequality yields, for some positive constants ǫ_ℓ, the bound (4.35). In light of the estimates (4.22)-(4.24) and (4.35), together with an elementary inequality, we obtain, for some constant, the bound (4.38). We mention that the following inequality holds for any ℓ ∈ [4, p] ∩ N and ǫ_1 ∈ (0, (2p_0 − 2p)/(2p − 1)]; therefore, the estimate (4.38) can be rewritten accordingly. Combining Steps I-IV shows, for some constant, the bound (4.41). Moreover, we can choose an appropriate h, which leads to the following estimate by Assumption 2.2. Hence, we deduce the desired bound for I_1. For the estimate of I_2: the key point is an estimate that is uniformly bounded, obtained by the same analysis as for I_1; that is, there exists some positive constant bounding it, which leads to the estimate of I_2. Combining the estimates of I_1 and I_2: substituting the estimates of I_1 and I_2 into (4.17) yields, for some constant, the bound (4.48). For 2λ_1 > L_1, we take expectations on both sides of (4.48) and use Lemma 4.2 and the Young inequality to show, for some ǫ_2 > 0, the bound (4.49). Then we can choose a suitable ǫ_2 to conclude, where we have used the fact that 1 − x ≤ e^{−x} for any x > 0. The proof is completed.
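Steps I-IV above repeatedly rely on the Gaussian moment identity E[|∆W^j_n|^{2ℓ}] = (2ℓ − 1)!! h^ℓ quoted in Step II. The following quick Monte Carlo sanity check of that identity is our own addition, not part of the proof:

```python
import numpy as np

def double_factorial_odd(ell):
    # (2*ell - 1)!! = 1 * 3 * 5 * ... * (2*ell - 1)
    out = 1
    for i in range(1, ell + 1):
        out *= 2 * i - 1
    return out

rng = np.random.default_rng(0)
h = 0.25
# Brownian increments over a step of size h: N(0, h) samples.
dW = rng.normal(0.0, np.sqrt(h), size=1_000_000)
for ell in (1, 2, 3):
    exact = double_factorial_odd(ell) * h**ell      # (2l-1)!! * h**l
    empirical = np.mean(dW ** (2 * ell))
    print(ell, abs(empirical - exact) / exact < 0.05)
```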
We remark that to verify the existence and uniqueness of the invariant measure of the LTPE method (1.3), the uniform estimate of the second-order moment (i.e., (4.6) with p = 1) is sufficient. The next lemma establishes the contractive property (4.52), where h is the uniform timestep satisfying the restriction (4.53).

The constant λ_f depends only on the drift f, as introduced in Lemma 4.2. In addition, let Assumptions 2.1, 2.3 and 2.4 hold with 2λ_1 > L_2; then there exists a positive constant C_1 such that, for any n ∈ {0, 1, 2, ..., N}, N ∈ N and t_n = nh, the contraction estimate holds. The proof of Lemma 4.4 is deferred to Appendix B.2.
Proof of Theorem 4.1. With Lemma 4.3 in mind, the existence of the invariant measure π̃ admitted by the LTPE scheme (1.3) follows from the Krylov-Bogoliubov theorem [9]. Further, the proof of the uniqueness of this invariant measure follows almost the same idea as Theorem 7.9 in [17], being a consequence of Lemma 4.4, so we omit it here. Then, using Lemma 4.4 and the Chapman-Kolmogorov equation yields (4.55).
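The ergodicity established in Theorems 3.1 and 4.1 is what licenses approximating E_π[ϕ] by a long-run time average of a single numerical trajectory. As an illustration of this averaging idea (our own toy example: an Ornstein-Uhlenbeck process, which is globally Lipschitz and therefore outside the paper's super-linear setting, chosen only because its invariant measure is known in closed form):

```python
import numpy as np

# Toy check of ergodic time-averaging on dX = -lam*X dt + sig*dW, whose
# invariant measure is N(0, sig**2/(2*lam)), so E_pi[x**2] = sig**2/(2*lam).
rng = np.random.default_rng(42)
lam, sig, h = 1.0, 1.0, 0.01
n_steps, burn_in = 400_000, 10_000
xi = rng.standard_normal(n_steps)
sqh = np.sqrt(h)

x, acc, count = 1.0, 0.0, 0
for n in range(n_steps):
    x += -lam * x * h + sig * sqh * xi[n]   # one Euler step (linear SDE)
    if n >= burn_in:                        # discard the transient
        acc += x * x
        count += 1

time_average = acc / count                  # approximates E_pi[phi], phi(x)=x**2
exact = sig**2 / (2.0 * lam)                # = 0.5
print(abs(time_average - exact) < 0.05)
```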

Time-independent weak error analysis
Our aim is to estimate the error between the invariant measures π and π̃, i.e.,
As claimed before, both {X_{t_n}}_{n∈N}, defined by (1.1), and {Y_n}_{n∈N}, defined by (1.3), are ergodic; hence, the error estimate boils down to a time-independent weak convergence analysis of the LTPE method (1.3). In order to carry out the error analysis, we need some a priori estimates and lemmas. The key ingredient is to introduce the function u : [0, ∞) × R^d → R defined by u(t, x) := E[ϕ(X^x_t)]. In what follows, we will show that u(•, •) is the unique solution of the associated Kolmogorov equation (5.5) with initial condition u(0, •) = ϕ(•), where we denote F(x) := Ax + f(x). To examine the regularity of u, we need the following properties.
For the matrix A ∈ R^{d×d}, the corresponding bound is apparent. Moreover, for convenience, we denote a mapping as in (5.8); obviously, these mappings are non-decreasing with respect to γ. Hence, it follows from Assumption 2.4 and its consequences that, for j ∈ {1, ..., m}, the bounds up to (5.12) hold. Correspondingly, for j ∈ {1, ..., m}, the analogous estimates hold as well. Besides, Assumptions 2.1 and 2.3 lead to a dissipativity bound with α := 2λ_1 − L_2 > 0. For random functions, let us introduce the notion of mean-square differentiability, quoted from [29], as follows, where e_i is the unit vector in R^d whose i-th element is 1. Then Ψ is said to be mean-square differentiable, with ψ = (ψ_1, ..., ψ_d) being the derivative (in the mean-square sense) of Ψ at x. We also denote D^{(i)}Ψ = ψ_i and DΨ(x) = ψ.
The above definition can be generalized to vector-valued functions in a component-wise manner. Now we are in a position to derive the uniform estimates of the derivatives of {X^{x_0}_t}_{t∈[0,T]} of (1.1) in the mean-square sense. Here, for each t, we regard X^•_t : R^d → R^d as a function of the initial value and write its derivative as DX^x_t. Higher-order derivatives D^2 X^x_t and D^3 X^x_t can be defined similarly.
The proof of Lemma 5.2 will be presented in Appendix C.1. As a consequence of Lemma 5.2, the uniform estimates of the derivatives of u(t, •) are obtained in the following lemma.
Lemma 5.3. For any x ∈ R^d and suitable random variables, the bounds (5.22) hold, where α̃_1, α̃_2 and α̃_3 are positive constants, with the latter two depending on the constants α_1, α_2 and α_3 defined in Lemma 5.2.
Remark 5.4. Bearing Lemma 5.3 in mind, we obtain that, for a suitable test function, u(t, x) is the unique solution of (5.5) (see Theorem 1.6.2 in [6]).
The proof of Lemma 5.3 can be found in Appendix C.2. Moreover, Lemma 5.3 readily yields the contractivity of u(t, •), which can also be derived from Lemma 3.4. Thus, one has the following result.
Corollary 5.5. Let Assumptions 2.1-2.4 hold with 2λ_1 > max{L_1, L_2}, and recall the constant α defined above. Before proceeding further, note that there is no guarantee that the LTPE method (1.3) is continuous on the whole time interval, since the numerical solutions are prevented from leaving a ball, whose radius depends on the timestep size, in each iteration. To address this issue and fully exploit the Kolmogorov equation, we recall the continuous version (1.6) of the LTPE scheme (1.3). The proof of Lemma 5.7 can be found in Appendix C.4. Up to this point, we have developed sufficient machinery to obtain the uniform weak error estimate between the SDE (1.1) and the LTPE scheme (1.3), as below.
To conclude, we deduce from Theorem 5.8 that the weak convergence order between π and π̃ is one, i.e.,
since the constant C A is independent of N in (5.3).

Numerical experiments
In this section, we illustrate the previous theoretical findings through three numerical examples: the scalar stochastic Ginzburg-Landau equation [16] in Example 1, the mean-reverting type model with super-linear coefficients [12,18] in Example 2, and the semi-linear stochastic partial differential equation (SPDE) [19,26] in Example 3. For all three numerical experiments, we consider a terminal time T = 5, the timesteps h = 2^{-6}, 2^{-7}, 2^{-8}, 2^{-9} and four different choices of test function ϕ(•), namely ϕ(x) ∈ {arctan(‖x‖), e^{−‖x‖^2}, cos(‖x‖), sin(‖x‖^2)}. The empirical mean of E[ϕ(X_T)] is estimated by a Monte Carlo approximation involving 10,000 independent trajectories. It is worth noting that in Example 2 we verify that the chosen terminal time T = 5 is appropriate.
Example 1. Consider the stochastic Ginzburg-Landau equation [16] from the theory of superconductivity, given by (6.2). Let α = −2, σ = 0.5 and X_0 = 1. Then all conditions in Assumptions 2.1-2.4 are met with γ = 3 and any p_0 ≥ 13. We compute the equation (6.2) numerically using the explicit projected Euler method, i.e., θ = 0 in (1.3), and the exact solutions are identified with the corresponding numerical approximations at a fine stepsize h_exact = 2^{-14}. Reference lines of slope 0.5 and 1 are also given. Figure 1 shows that the weak approximation errors of the projected Euler method decrease with a slope close to 1.
Example 2.
Consider a scalar mean-reverting type model with super-linear coefficients from financial and energy markets, given by (6.3). Set b = 0.3, α = 1, β = 0.6, σ = 0.2 and X_0 = 1. The requirements of Assumptions 2.1-2.4 can be verified with γ = 3 and any p_0 ∈ [13, 31/2]. We begin with a probability density test of the LTPE scheme (1.3) applied to the model (6.3) with three different values of θ, namely θ = 0, 0.5, 1, at the terminal time T = 5 using a stepsize h = 2^{-14}; see Figure 2. Putting the probability density lines of the three numerical schemes with different choices of θ together, we directly observe that all the probability density lines are almost the same, so the choice of time T = 5 is suitable. We then discretize the model (6.3) by the semi-linear-implicit projected Euler method (i.e., θ = 0.5 in (1.3)). To find the exact solutions, we discretize this model by the linear-implicit projected Euler method (θ = 1 in (1.3)) at a fine stepsize h_exact = 2^{-14}. In Figure 3, the weak error lines have slopes close to 1 in all cases.
Example 3. Consider the semi-linear stochastic partial differential equation (SPDE) (6.4), where g : R → R and W_• : [0, T] × Ω → R is a real-valued standard Brownian motion. Such an SPDE is usually termed the stochastic Allen-Cahn equation. Discretizing the SPDE (6.4) spatially by a finite difference method yields the system of SDEs (6.5). Here we only focus on the temporal discretization of the SDE system (6.5). In what follows we set g(u) = sin(u) + 1 and u_0(x) ≡ 1. The eigenvalues {λ_i}_{i=1}^{K−1} of the matrix A are λ_i = −4K^2 sin^2(iπ/2K) < 0 [26], resulting in a very stiff system (6.5). Further, it is easy to check that all conditions in Assumptions 2.1-2.4 are fulfilled with γ = 3 and any p_0 ≥ 13.
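The quoted eigenvalue formula can be checked directly, assuming (our assumption, since the display of (6.5) is not reproduced here) that A is the standard second-order finite-difference Laplacian on (0, 1) with mesh width 1/K, i.e. A = K²·tridiag(1, −2, 1) ∈ R^{(K−1)×(K−1)}:

```python
import numpy as np

def fd_laplacian(K):
    # (K-1)x(K-1) finite-difference Laplacian on (0,1) with mesh 1/K
    # (assumed spatial discretization behind (6.5)): K**2 * tridiag(1,-2,1).
    A = np.zeros((K - 1, K - 1))
    for i in range(K - 1):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < K - 2:
            A[i, i + 1] = 1.0
    return K**2 * A

K = 64
A = fd_laplacian(K)
computed = np.sort(np.linalg.eigvalsh(A))
i = np.arange(1, K)
exact = np.sort(-4.0 * K**2 * np.sin(i * np.pi / (2 * K)) ** 2)
print(np.allclose(computed, exact))   # eigenvalue formula from [26]
print(computed.min())                 # close to -4*K**2: very stiff
```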
Here we take the case K = 4 as an example. To deal with the stiffness, we take the linear-implicit projected Euler method, i.e., θ = 1 in (1.3), to discretize (6.5) in time, and the exact solutions are given numerically by using a fine stepsize h_exact = 2^{-14}. As can be observed from Figure 4, the weak convergence rate of the linear-implicit Euler method is 1.
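Why θ = 1 matters here can be seen on the stiffest linear mode alone (a deterministic sketch of our own, with noise and nonlinearity dropped): for y' = λy with λ ≈ −4K², the explicit Euler multiplier 1 + hλ leaves the unit interval as soon as h > 2/|λ|, while the implicit multiplier 1/(1 − hλ) contracts for every h > 0.

```python
# Deterministic stepsize-restriction sketch on y' = lam * y with the
# stiffest eigenvalue lam ~ -4*K**2 of the matrix A behind (6.5).
K = 64
lam = -4.0 * K**2                        # ~ -16384
h = 2 ** -6                              # a stepsize used in the experiments

explicit_factor = 1.0 + h * lam          # explicit Euler multiplier: -255
implicit_factor = 1.0 / (1.0 - h * lam)  # theta = 1 multiplier: in (0, 1)

# |1 + h*lam| = 255 > 1: the explicit iteration diverges, while the
# linear-implicit iteration contracts for any h > 0.
print(abs(explicit_factor) > 1, 0 < implicit_factor < 1)
```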
B Proof of Lemmas in Section 4

Owing to the fact that p_0 ∈ [1, ∞), the proof of the third estimate in (4.3) is completed. Then raising both sides to the p-th power yields the bound, where C^i_p := p!/(i!(p−i)!). As claimed, ‖P(x)‖ ≤ h^{−1/(2γ)} and ‖P(x)‖^i ≤ (1 + ‖P(x)‖^2)^{i/2} for any i ≥ 2, so the estimate follows. Turning now to the estimate (4.5), the proof of the first estimate in (4.5) can be found in Lemma 6.2 of [2]. For the second estimate, we know from (1.3), Assumption 2.4 and Lemma 4.2 that

‖f(P(x)) − f(P(y))‖ ≤ C_1 (1 + ‖P(x)‖^{γ−1} + ‖P(y)‖^{γ−1}) ‖P(x) − P(y)‖,

where one can follow the first estimate to complete the proof. The proof is completed.
B.2 Proof of Lemma 4.4

Proof of Lemma 4.4. For brevity, we introduce shorthand notation for the difference of two numerical solutions. Squaring both sides, taking expectations and using Assumption 2.1 and Assumption 2.3 yields the first bound. Using the Cauchy-Schwarz inequality and recalling Assumption 2.1, Assumption 2.3 and Lemma 4.2, we can obtain (B.9). Here we choose a suitable constant κ ∈ (0, 1) such that the resulting coefficient is contractive. As a result, there exists some positive constant C_1 satisfying the desired estimate. The proof is completed.
C Proof of Lemmas in Section 5

C.1 Proof of Lemma 5.2

Proof of Lemma 5.2. The existence of the mean-square derivatives up to the third order can be proved in a similar way as shown in [6]. Based on our assumptions, we would like to obtain time-independent estimates of the derivatives of the solutions {X^x_t}_{t∈[0,T]} given by (1.1) with respect to the initial condition x.

Figure 1: Weak convergence rates of the explicit projected Euler method for stochastic Ginzburg-Landau model (6.2)

Figure 2: Probability density of the LTPE scheme for discretizing the mean-reverting model (6.3) with different θ.

Figure 3: Weak convergence rates of the semi-linear-implicit projected Euler method for the mean-reverting model (6.3)