A Strongly Monotonic Polygonal Euler Scheme

In recent years tamed schemes have become an important technique for simulating SDEs and SPDEs whose continuous coefficients display superlinear growth. The taming method, which curbs the growth of the coefficients as a function of the stepsize, has, however, not so far been adapted to preserve the monotonicity of the coefficients. This has arisen as an issue in particular in \cite{articletam}, where the lack of a strongly monotonic tamed scheme forces strong conditions on the setting. In the present work we give a novel, explicit method for truncating monotonic functions in separable Hilbert spaces, and show how it can be used to define a polygonal (tamed) Euler scheme on finite-dimensional space that preserves the monotonicity of the drift coefficient. This method of truncation is well defined under almost no assumptions and, unlike the well-known Moreau-Yosida regularisation, does not require an optimisation problem to be solved at each evaluation. Our construction is, to our knowledge, the first infinite-dimensional method for truncating monotone functions, as well as the first explicit method in any number of dimensions.


Introduction
We adopt the setting of polygonal Euler approximations following Krylov's paradigm, see for example [5], where the coefficients of the approximate scheme depend directly on the step size. This rich class of approximations includes taming schemes such as [8] and their extensions, see [9,6,10], sampling algorithms as in [1], and discretizations of stochastic evolution equations, see [2]. Within this setting, we design a new polygonal Euler scheme that preserves the strong monotonicity property of the drift coefficient even in the presence of highly nonlinear terms.
Let (W_t)_{t∈[0,T]} be an m-dimensional Wiener martingale defined on a filtered probability space (Ω, F, (F_t)_{t∈[0,T]}, P). Let also |·| denote the Euclidean norm on R^d and the Frobenius norm on R^{d×m}. Moreover, the scalar product of two vectors x, y ∈ R^d is denoted by xy. Consider the stochastic differential equation (SDE)

    dX(t) = b(X(t))dt + σ(X(t))dW_t,   X(0) = η,   t ∈ [0, T],        (1)

where b : R^d → R^d, σ : R^d → R^{d×m} and η is an F_0-measurable random variable. We consider the case where b is continuous and superlinear, i.e. where |b(x)| is not bounded from above by any affine function of |x|. We assume additionally that:

A1. There exists a positive constant L such that, for all x, y ∈ R^d,

    (b(x) − b(y))(x − y) ≤ −L|x − y|^2.        (2)

A2. There exists a positive constant p_0 such that E|η|^{p_0} < ∞.

One denotes the property (2) as 'strong monotonicity (with constant L)'. It is a standard result that under these assumptions SDE (1) has a unique strong solution X. However, if one wishes to approximate (1) by means of a standard Euler scheme

    dX_n(t) = b(X_n(κ_n(t)))dt + σ(X_n(κ_n(t)))dW_t,   X_n(0) = η,   t ∈ [0, T],        (3)

for κ_n(t) := [nt]/n, then, as shown in [3], due to the superlinear growth of b one cannot ensure that X_n exhibits even weak L^p convergence to X. In fact, one can obtain weak L^p divergence for every p > 0, i.e.

    sup_{0≤t≤T} E|X_n(t) − X(t)|^p → ∞   as n → ∞.

This discovery led to the development of 'tamed' Euler schemes, see [4,8], which bound the drift coefficient in a manner depending on n. In particular, it is shown in [8] that if we let α ∈ (0, 1/2] and b_n : R^d → R^d be a continuous function satisfying

    |b_n(x)| ≤ Cn^α(1 + |x|),        (4)

and converging to b in a suitable sense, then the solution X_n to the SDE

    dX_n(t) = b_n(X_n(κ_n(t)))dt + σ(X_n(κ_n(t)))dW_t,        (5)

converges in a strong L^p sense to X, i.e.

    E sup_{0≤t≤T} |X_n(t) − X(t)|^p → 0   as n → ∞.
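To make the discussion above concrete, the following sketch contrasts a vanilla Euler-Maruyama step with a classically tamed one for a cubic drift. The taming choice b_n(x) = b(x)/(1 + n^{-α}|b(x)|) is one standard option from the taming literature satisfying the growth condition |b_n(x)| ≤ Cn^α(1 + |x|); it is not the monotonicity-preserving scheme constructed in this paper, and the drift, diffusion and parameters are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def b(x):
    # A continuous, superlinear drift: |b(x)| grows like |x|**3.
    return -x**3

def euler_paths(n, T=1.0, x0=1.0, n_paths=10_000, tamed=True, alpha=0.5):
    """One-dimensional (tamed) Euler-Maruyama on [0, T] with n steps."""
    h = T / n
    x = np.full(n_paths, x0)
    for _ in range(n):
        drift = b(x)
        if tamed:
            # Classical taming: |b_n(x)| <= n**alpha, so the growth
            # condition |b_n(x)| <= C n**alpha (1 + |x|) holds.
            drift = drift / (1.0 + n**(-alpha) * np.abs(drift))
        x = x + drift * h + rng.normal(0.0, np.sqrt(h), n_paths)  # sigma = 1
    return x

x_tamed = euler_paths(n=64)
print("tamed scheme, E|X_n(1)|^2 ~", np.mean(x_tamed**2))
```

With `tamed=False` the same loop gives the standard Euler scheme, whose moments cannot in general be controlled under superlinear drift growth.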
However, none of the tamed Euler schemes developed so far ensures that bn is strongly monotonic, a property which is important for extending these techniques to the simulation of SPDEs, as well as to MCMC samplers and stochastic optimization algorithms. The aim of this paper is therefore to find a choice of coefficient bn which is strongly monotonic with the same constant L as b. This is the content of the main theorem, Theorem 2.1.
The rest of the paper is dedicated to proving moment bounds and L^p convergence for the scheme (5), in a similar vein to previous work in [8] and [10].
Remark 1.1. We consider time-homogeneous coefficients in this paper, but our results generalise with little additional effort to the case where b and σ are additionally functions of t, provided we add the assumption that for every R > 0 there exists a function N_R, with N_R ∈ L^p[0, T] for every p > 0, such that sup_{|x|≤R} |b(t, x)| ≤ N_R(t).
Before proceeding with the proof of the above theorem, we state two preparatory lemmas.
and furthermore the superlinear growth of b must be contained in f .
Proof. The inequality (6) follows immediately from a simple calculation, and the superlinearity of f from the triangle inequality. Therefore to find a sequence of functions bn that are strongly monotonic it suffices to find a sequence of continuous functions fn : R d → R d satisfying (6).
The second lemma contains the observation that (6) is an entirely 'local' property; in other words, it holds everywhere so long as it holds on each of a collection of regions covering the space. For technical simplicity we consider only balls in the following lemma.
Proof. First observe that, by the continuity of f and the scalar product, (7) holds additionally for x, y ∈ Ω̄_i (the closure of Ω_i). Let x, y ∈ R^d be arbitrary and let l denote the line segment connecting them. Every sphere in R^d intersects l at most twice, so, since each Ω_j is generated by a finite number of balls, ∪_j ∂Ω_j intersects l a finite number of times. Let us write these intersections in order from x to y as s_1, s_2, ..., s_m ∈ l. Then for every 1 ≤ i ≤ m − 1 the line segment connecting s_i to s_{i+1} is entirely contained within some Ω̄_j, since otherwise there would have to be another element of (s_i)_{i=1}^m between them. Now, (7) clearly holds on Ω_1 and Ω_4 by Lemma 2.2 and the construction of f_n. For x, y ∈ B_{s_n − 1} \ B_{s_n − 2} = Ω_2 it holds since r_n(x) is increasing as a function of |x| and nonnegative. Similarly, for x, y ∈ B_{s_n} \ B_{s_n − 1} = Ω_3, one observes that ||x| − |y|| ≤ |x − y| for every x, y ∈ R^d, and the claim follows since |x|, |y| ≤ s_n. So (7) holds on Ω_1, ..., Ω_4, as required.
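The idea behind the construction can be illustrated in one dimension. In the spirit of Lemma 2.2, split b into a linear strongly monotone part and a monotone part f carrying the superlinear growth, then truncate only f in a way that keeps it monotone. The clipping-based truncation below is a simplified stand-in for the paper's annulus construction (which interpolates via r_n over the regions Ω_1, ..., Ω_4); the constants and the choice of f are illustrative.

```python
import numpy as np

L = 0.1          # monotonicity constant (illustrative)
s_n = 2.0        # truncation radius; in the paper s_n grows with n

def f(x):
    # Monotone (nonincreasing) part carrying the superlinear growth.
    return -x**3

def f_n(x):
    # Simplified 1D truncation: clip the argument.  Since clip is
    # nondecreasing and f is nonincreasing, f_n stays nonincreasing,
    # i.e. (f_n(x) - f_n(y)) * (x - y) <= 0, and f_n is bounded.
    return f(np.clip(x, -s_n, s_n))

def b_n(x):
    # b_n = -L x + f_n inherits strong monotonicity with constant L:
    # (b_n(x) - b_n(y))(x - y) <= -L (x - y)**2.
    return -L * x + f_n(x)

# Numerical check of strong monotonicity on a grid.
xs = np.linspace(-5, 5, 201)
X, Y = np.meshgrid(xs, xs)
lhs = (b_n(X) - b_n(Y)) * (X - Y)
assert np.all(lhs <= -L * (X - Y)**2 + 1e-12)
print("strong monotonicity with constant L holds on the grid")
```

The bounded f_n also gives |b_n(x)| ≤ L|x| + s_n^3, so a suitable choice of s_n in terms of n recovers a growth bound of tamed type.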

Moment Bounds
To show the convergence of X_n to X in an L^p sense we must first show that the moments of the random variables sup_{0≤t≤T} |X_n(t)| are bounded uniformly in n. These proofs rely primarily on the bound of Theorem 2.1 (ii), as well as on the property which follows from the strong monotonicity of b. From now on let C, C_1, C_2, ... denote generic positive constants, independent of n and t, which may change from line to line.
Then, substituting (11) and (12) into (10) and applying Gronwall's inequality yields the required estimate; raising to the power q/p then gives the result for any 0 < q < 4. Furthermore, since C does not depend on x, and since η is independent of the driving noise, one observes that the bound holds for every 0 ≤ p ≤ p_0.
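A quick Monte Carlo sanity check of such uniform-in-n moment bounds can be run for a one-dimensional example. The drift here is a classical tamed cubic rather than the paper's monotonicity-preserving b_n, and all parameters are illustrative; the point is only that the estimated second moment of the running supremum stays bounded as n grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def b_n(x, n, alpha=0.5):
    # Illustrative tamed drift with |b_n(x)| <= n**alpha (not the
    # paper's monotonicity-preserving construction).
    bx = -x**3
    return bx / (1.0 + n**(-alpha) * np.abs(bx))

def sup_second_moment(n, T=1.0, x0=1.0, n_paths=20_000):
    """Monte Carlo estimate of E sup_k |X_n(t_k)|^2 for the tamed scheme."""
    h = T / n
    x = np.full(n_paths, x0)
    running_sup = np.abs(x)
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(h), n_paths)
        x = x + b_n(x, n) * h + dw          # sigma = 1 (constant)
        running_sup = np.maximum(running_sup, np.abs(x))
    return np.mean(running_sup**2)

moments = {n: sup_second_moment(n) for n in (8, 32, 128)}
for n, m in moments.items():
    print(n, m)
```

The three estimates should be of comparable size, consistent with a bound that does not degrade as the step size shrinks.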

Rate of convergence estimates
In this section the rate of convergence of X_n(t) to X(t) in a strong L^p sense is obtained under a suitable Lipschitz-type assumption on b, namely:

A3. There exist positive constants H and l such that

Theorem 4.1. Let A1, A2 and A3 hold. Suppose 4(l + 1) ≤ p_0. Then for every p < p_0/(l + 2)

For strong L^2 convergence of X_n(t) to X(t) at the optimal rate of 1/2 (setting α = 1/2) one requires that p_0 ≥ 3l + 4.
To prove Theorem 4.1, one first observes that since A3 prohibits |f(x)| from growing too quickly as |x| → ∞, one can bound s_n from below. Recall that b_n is only defined for n sufficiently large that s_n > 2.
Then, by the definition of s_n it is easy to see that n^α ≤ C(1 + s_n^{l+1}); therefore, if Lemma 4.3 failed, the LHS of (16) would go to 0 as n → ∞, a contradiction. Therefore, since Lemma 4.3 holds asymptotically, i.e. for all n > N for some N ∈ N, it must hold for all n, since we can simply pick the constant C smaller than (s_n − 2)n^{−α/(l+1)} for every n < N.
We shall also require the following standard result.

Lemma 4.4. Let X be a positive random variable with E X^{q_0} < ∞. Then, for every x > 0 and 0 ≤ q < q_0, it holds that

Proof. The result follows from Hölder's and Markov's inequalities.
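The display in Lemma 4.4 is not reproduced here; the standard bound that follows from Hölder's and Markov's inequalities in this way is E[X^q 1_{X > x}] ≤ x^{q − q_0} E[X^{q_0}], and we assume this is the intended statement. The snippet below checks that form numerically on a sample with all moments finite.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed form of Lemma 4.4 (via Hölder then Markov):
#     E[X^q 1_{X > x}] <= x**(q - q0) * E[X**q0],   0 <= q < q0, x > 0.
samples = np.abs(rng.normal(size=200_000))   # positive, all moments finite
q0, q = 4.0, 2.0

for x in (0.5, 1.0, 2.0):
    lhs = np.mean(samples**q * (samples > x))
    rhs = x**(q - q0) * np.mean(samples**q0)
    assert lhs <= rhs
    print(f"x={x}: {lhs:.4f} <= {rhs:.4f}")
```

The derivation: Hölder gives E[X^q 1_{X > x}] ≤ (E X^{q_0})^{q/q_0} P(X > x)^{1 − q/q_0}, and Markov gives P(X > x) ≤ x^{−q_0} E X^{q_0}; combining the two yields the bound above.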
Now we can focus on the strong L p convergence of b(Xn(t)) to bn(Xn(t)).

The Constant Diffusion Case
In this section we extend our results to the case where the diffusion coefficient is constant and one has additional regularity on the drift coefficient b. Under these assumptions we shall be able to show that the scheme (5) converges to the solution of (1) at an improved rate of 1. In fact, in this case the scheme (5) coincides with the Milstein scheme, see [6], which is known to converge with rate 1.
A4. Let b be of class C^1, and let there exist constants S > 0, σ_0 ∈ R^{d×m} and l ≥ 1 for which

where Db is the Jacobian matrix of b.

Simulations
We provide some simulations of the strongly monotonic scheme (5) for the following one-dimensional SDEs, where η is an F_0-measurable random variable with standard normal distribution. The SDEs (39) and (40) are easily seen to be strongly monotonic with constant L = 1/10, and Theorems 4.1 and 5.2 apply respectively to the convergence of the scheme (5). The results of our simulations are plotted below on a log scale. The quantity E|X(1) − X_n(1)|^2 was estimated by E|X_{2^{17}}(1) − X_n(1)|^2, and likewise for E|Y(1) − Y_n(1)|^2; 5000 trials were performed for each value of n. It was observed when performing these trials that if one replaced the drift terms of X and Y by drift terms without a constant term and with a larger monotonicity constant L, both X and Y appeared to converge at rate 1 on the scales used above. We hypothesize that this is because for such highly coercive SDEs the solution becomes small very quickly, at which point the diffusion coefficient 1 + X(t) is approximately constant.
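Since the displays (39) and (40) are not reproduced here, the following sketch of the experiment uses an illustrative drift that is strongly monotonic with constant L = 1/10 together with the diffusion 1 + x mentioned in the text, a classical tamed drift as a stand-in for the scheme (5), and a fine-grid reference solution in place of the paper's n = 2^17 proxy (with fewer trials, for speed).

```python
import numpy as np

rng = np.random.default_rng(3)

def b(x):
    # Illustrative drift, strongly monotonic with constant L = 1/10:
    # (b(x) - b(y))(x - y) <= -(1/10)(x - y)**2.
    return 1.0 - x / 10.0 - x**3

def sigma(x):
    # Diffusion 1 + x, as mentioned in the text.
    return 1.0 + x

def tamed_drift(x, n, alpha=0.5):
    # Classical taming, used as a stand-in for scheme (5).
    bx = b(x)
    return bx / (1.0 + n**(-alpha) * np.abs(bx))

T, n_paths, n_ref = 1.0, 2000, 2**10
x0 = rng.normal(size=n_paths)                # eta ~ N(0, 1)
dw = rng.normal(0.0, np.sqrt(T / n_ref), (n_paths, n_ref))

# Reference solution on the fine grid (proxy for the exact solution).
x_ref = x0.copy()
for k in range(n_ref):
    x_ref = x_ref + tamed_drift(x_ref, n_ref) * (T / n_ref) + sigma(x_ref) * dw[:, k]

ns = [2**4, 2**5, 2**6, 2**7]
errs = []
for n in ns:
    m = n_ref // n                           # fine steps per coarse step
    x = x0.copy()
    for k in range(n):
        dW = dw[:, k * m:(k + 1) * m].sum(axis=1)   # coupled increments
        x = x + tamed_drift(x, n) * (T / n) + sigma(x) * dW
    errs.append(np.mean((x_ref - x)**2))

for n, e in zip(ns, errs):
    print(n, e)
```

Plotting log(errs) against log(ns) gives the kind of log-scale convergence graph described above; the slope estimates the strong convergence rate.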