REMARKS ON THE ASYMPTOTIC BEHAVIOR OF SCALAR AUXILIARY VARIABLE (SAV) SCHEMES FOR GRADIENT-LIKE FLOWS

We introduce a time semi-discretization of a damped wave equation by a SAV scheme with second order accuracy. The energy dissipation law is shown to hold without any restriction on the time step. We prove that any sequence generated by the scheme converges to a steady state (up to a subsequence). We notice that the steady state equation associated to the SAV scheme is a modified version of the steady state equation associated to the damped wave equation. We show that a similar result holds for a SAV fully discrete version of the Cahn-Hilliard equation and we compare numerically the two steady state equations.


Introduction
Gradient flow equations are essential in materials science and engineering and in many physically motivated problems. A gradient flow is a dynamics driven by a free energy and a dissipation mechanism. It has the general form

∂ϕ/∂t = G δE/δϕ, (1.1)

where ϕ is the state variable of the system and δE/δϕ is the variational derivative of a free energy E[ϕ]. The nonpositive symmetric operator G drives the dissipation of the energy through

d/dt E[ϕ] = (δE/δϕ, ∂ϕ/∂t) = (G δE/δϕ, δE/δϕ) ≤ 0.

In a large class of physical models, on which we focus here, G is a differential operator with constant coefficients (such as −Id, ∆, …) and the derivative δE/δϕ can be written δE/δϕ = Lϕ + F′(ϕ), where L is a linear symmetric nonnegative operator and the nonlinear part F(ϕ) is the free energy density. In such a case, the free energy reads

E[ϕ] = (1/2)(Lϕ, ϕ) + ∫_Ω F(ϕ) dx. (1.2)

The nonlinear term F′(ϕ) can also include derivatives of order lower than the order of L, so that F′(ϕ) can be seen as the variational derivative of ∫_Ω F(ϕ) dx. This framework includes the Cahn-Hilliard equation [6], the Allen-Cahn equation [2], the phase field crystal equation [8], epitaxial growth models [19], ….

† The corresponding author. Email address: morgan.pierre@math.univ-poitiers.fr (M. Pierre)
Most solutions of gradient flows cannot be obtained analytically, so that concrete and reliable solutions must be computed by numerical methods. Many efforts have been devoted to finding numerical schemes which preserve the dissipation mechanism (see [1,3,5,12,21,32] and references therein). This ensures the stability of the numerical solution on long time scales.
Regarding the time discretization of (1.1), one of the most efficient approaches is to treat the linear term implicitly and the nonlinear term explicitly in (1.2). However, for Cahn-Hilliard type equations, the assumptions on the nonlinearity F(ϕ) and the restriction on the time step remain severe [29]. In a recent paper of Shen et al. [28], these inconveniences are avoided by introducing the so-called scalar auxiliary variable (SAV). This variable adds a degree of freedom to the dynamical system.
We note that the SAV approach is an enhanced version of the invariant energy quadratization (IEQ) method (see Section 2 for details). It has been applied to a great variety of situations (see, e.g., [7,28,30]); at every time step, only two linear PDEs with constant coefficients need to be solved. This can be done very efficiently with an FFT or a spectral method. The dissipation mechanism has been proved to hold for first order and second order SAV schemes (BDF2, Crank-Nicolson) without any restriction on the time step (unconditional stability). Convergence of the SAV method as the time step goes to 0 has been proved in [24,27].
In this note, we investigate the asymptotic behavior of solutions generated by SAV schemes, as time goes to infinity. We focus on a time semi-discretization of the damped wave equation with second order accuracy, namely the backward differentiation formula of order 2 (SAV/BDF2).
Our damped wave equation includes the dissipative sine-Gordon equation and the modified Allen-Cahn equation as particular cases. It is a model problem for evolution equations of the form

∂²ϕ/∂t² + ∂ϕ/∂t = G δE/δϕ, (1.3)

with G and E as previously. In (1.3), the dissipation mechanism reads, assuming that G is invertible,

d/dt [ (1/2)(−G⁻¹ ∂ϕ/∂t, ∂ϕ/∂t) + E[ϕ] ] = (G⁻¹ ∂ϕ/∂t, ∂ϕ/∂t) ≤ 0.

We see here that a modified energy is nonincreasing, namely the sum of a kinetic energy and of the free energy E[ϕ]. Equations such as (1.3) are sometimes called "gradient-like flows." Many schemes which were developed for gradient flows have also been adapted to gradient-like flows such as (1.3) (see [1,14] and references therein). An IEQ approach has been proposed for the modified phase field crystal equation in [23]. But up to now, SAV schemes do not seem to have been considered for gradient-like flows, so we explain our approach in Section 2.
Then, we prove that the SAV/BDF2 scheme for the damped wave equation satisfies an energy dissipation law with no restriction on the time step (unconditional stability, cf. Proposition 3.1). Moreover, we show that the scheme can be implemented in an efficient way by solving two linear second-order equations with constant coefficients at each time step (Section 3.4). In Theorem 3.1, we prove that any sequence generated by the SAV scheme converges to a stationary state (up to a subsequence). To the best of our knowledge, this is the first asymptotic result for a second order time semi-discretization of the damped wave equation. In contrast, for most second order schemes which are known to preserve the dissipation mechanism in (1.3), obtaining the precompactness of trajectories is an issue [14]. We note that the standard BDF2 scheme applied to the gradient-like flow (1.3) does not preserve the dissipation, unlike what happens for the gradient flow [5].
Our analysis shows that, because of the additional scalar auxiliary variable, the steady state equation for the SAV scheme is a modified version of the steady state equation for (1.1). This is a drawback of the SAV method since it could drastically modify the longtime dynamics. We make this clear by examining several examples in Section 3.6. In the last section, we perform numerical simulations for the Cahn-Hilliard equation using a first order SAV scheme for the time discretization and a finite element space discretization. This allows a comparison with the theoretical results for a model gradient flow.

The IEQ and SAV methods for gradient flows
The SAV method is an enhanced version of the invariant energy quadratization (IEQ) method, which was introduced in [33] based on a Lagrange multiplier approach [4,15]. In these methods, a fundamental idea is to add a variable to the gradient flow (1.1) in such a way that the dissipation mechanism is preserved.
The time continuous IEQ version of the gradient flow (1.1) associated to the energy (1.2) reads

∂ϕ/∂t = G μ, μ = Lϕ + H(ϕ) q, ∂q/∂t = (1/2) H(ϕ) ∂ϕ/∂t, with H(ϕ) = F′(ϕ)/√(F(ϕ) + C₀),

where q(t, x) = √(F(ϕ) + C₀). The free energy density F(ϕ) is assumed to be bounded from below and C₀ is a constant large enough so that F(ϕ) + C₀ > 0.
For the case of SAV, the reformulated system reads

∂ϕ/∂t = G w,
w = Lϕ + r f(ϕ)/√(∫_Ω F(ϕ) dx + C₀),
dr/dt = 1/(2√(∫_Ω F(ϕ) dx + C₀)) ∫_Ω f(ϕ) ∂ϕ/∂t dx, (2.1)

where f = F′ and the variable q(t, x) is replaced by the scalar variable r(t) = √(∫_Ω F(ϕ) dx + C₀). Introducing the variable w as an unknown, which does not change the dynamical system since ∂w/∂t is not involved, is customary in Cahn-Hilliard type problems.
We note here that only ∫_Ω F(ϕ) dx is required to be bounded from below. This allows a relatively larger class of physical models to be studied, since boundedness from below of the free energy density itself holds only in some cases [28]. We can easily obtain a modified energy dissipation law by taking the inner products of the above equations with w, −∂ϕ/∂t and 2r, respectively. It reads

d/dt [ (1/2)(Lϕ, ϕ) + r² ] = (Gw, w) ≤ 0.

The SAV system (2.1) is then discretized in time with an explicit expression for the nonlinear terms and an implicit expression for the linear terms. For instance, the first order time discrete SAV scheme reads

(ϕ^{n+1} − ϕ^n)/∆t = G w^{n+1},
w^{n+1} = Lϕ^{n+1} + r^{n+1} f(ϕ^n)/√(∫_Ω F(ϕ^n) dx + C₀),
r^{n+1} − r^n = 1/(2√(∫_Ω F(ϕ^n) dx + C₀)) ∫_Ω f(ϕ^n)(ϕ^{n+1} − ϕ^n) dx. (2.2)

The discrete dissipation law is obtained by taking the inner product of the equations in (2.2) with ∆t w^{n+1}, ϕ^n − ϕ^{n+1} and 2r^{n+1}, respectively, and using the well-known identity 2a(a − b) = a² − b² + (a − b)². It yields

E^{n+1} ≤ E^n + ∆t (G w^{n+1}, w^{n+1}) ≤ E^n,

where E^n = (1/2)(Lϕ^n, ϕ^n) + (r^n)². No restriction on the time step is required. While IEQ and SAV are very similar from a numerical analysis point of view, removing the space dependency of the auxiliary variable greatly simplifies the numerical solution process. At every time step, only two linear equations with constant coefficients need to be solved for a SAV scheme, whereas the IEQ method requires solving two equations with variable coefficients. We refer the reader to Sections 3.4 and 4.3 for more details.
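To make the structure of (2.2) concrete, the following sketch advances the first order SAV scheme for a model one-dimensional L² gradient flow (G = −Id, L = −∂ₓₓ with Dirichlet boundary conditions, f(s) = s³ − s) and checks that the discrete energy E^n = (1/2)(Lϕ^n, ϕ^n) + (r^n)² is nonincreasing. This is an illustrative sketch only: the grid size, time step and constant C₀ are arbitrary choices, and the coupled linear system for (ϕ^{n+1}, r^{n+1}) is solved directly as a block system.

```python
import numpy as np

# First order SAV scheme (2.2) for the model L^2 gradient flow
#   phi_t = -(L phi + f(phi)),  L = -d^2/dx^2 (Dirichlet),  f(s) = s^3 - s.
# Hypothetical illustration: grid size, time step and C0 are arbitrary.
N, dt, C0 = 64, 0.1, 1.0
h = 1.0 / (N + 1)
xg = np.linspace(h, 1.0 - h, N)

Lmat = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
        - np.diag(np.ones(N - 1), -1)) / h**2
M = np.eye(N) + dt * Lmat                      # implicit part I + dt*L

f = lambda s: s**3 - s
F = lambda s: 0.25 * s**4 - 0.5 * s**2
ip = lambda a, b: h * np.dot(a, b)             # discrete L^2 inner product

phi = 0.5 * np.sin(np.pi * xg)
r = np.sqrt(h * F(phi).sum() + C0)
energy = lambda p, rr: 0.5 * ip(Lmat @ p, p) + rr**2

energies = [energy(phi, r)]
for n in range(40):
    s = np.sqrt(h * F(phi).sum() + C0)
    b = f(phi)
    # (2.2) is linear in (phi^{n+1}, r^{n+1}); assemble the block system
    K = np.zeros((N + 1, N + 1))
    K[:N, :N] = M
    K[:N, N] = (dt / s) * b                    # coupling through r^{n+1}
    K[N, :N] = -(h / (2.0 * s)) * b            # r update tested against phi^{n+1}
    K[N, N] = 1.0
    rhs = np.concatenate([phi, [r - ip(b, phi) / (2.0 * s)]])
    z = np.linalg.solve(K, rhs)
    phi, r = z[:N], z[N]
    energies.append(energy(phi, r))

# unconditional energy decay, even with the large time step dt = 0.1
assert all(e1 >= e2 - 1e-10 for e1, e2 in zip(energies, energies[1:]))
```

The energy decreases at every step despite the large time step, in line with the unconditional stability of the scheme.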

The SAV approach for gradient-like flows
In order to derive a SAV scheme for the gradient-like flow (1.3), we write the second order equation as a first order system and we introduce the scalar variable r(t) as previously. This reads

∂ϕ/∂t = ψ,
∂ψ/∂t + ψ = G(Lϕ + r f(ϕ)/√(∫_Ω F(ϕ) dx + C₀)),
dr/dt = 1/(2√(∫_Ω F(ϕ) dx + C₀)) ∫_Ω f(ϕ) ∂ϕ/∂t dx.

We can obtain a first order SAV time semi-discrete scheme which preserves the dissipation by treating the nonlinear term explicitly and the linear term implicitly, as in (2.2). Second order schemes based on BDF2 or Crank-Nicolson can also be derived as for gradient flows (see Section 3.2). We note that an IEQ approach has been used in a similar manner for the conservative sine-Gordon equation in [18].
for some positive constant c′. The assumption on the growth exponent p yields a Sobolev embedding [11]. We assume furthermore that there exists a constant C₀ such that ∫_Ω F(u) dx + C₀ ≥ ε for all u ∈ H¹₀(Ω), for some ε > 0 independent of u.
The growth assumption on f implies that the functional u ↦ ∫_Ω F(u) dx is of class C¹ on H¹₀(Ω) and that the (nonlinear) operator u ↦ f(u) is continuous on H¹₀(Ω). Formally, by multiplying equation (3.1) with u_t and integrating on Ω, one gets

d/dt [ (1/2)∫_Ω |u_t|² dx + (1/2)∫_Ω |∇u|² dx + ∫_Ω F(u) dx ] = −∫_Ω |u_t|² dx ≤ 0.

The SAV/BDF2 time discretization
We formally rewrite (3.1) as a first-order system and we introduce the scalar auxiliary variable r(t) = √(∫_Ω F(u) dx + C₀). A second-order time semi-discretization by BDF2 reads: given (u^{n−1}, v^{n−1}, r^{n−1}) and (u^n, v^n, r^n), find (u^{n+1}, v^{n+1}, r^{n+1}) such that

3u^{n+1} − 4u^n + u^{n−1} = 2δt v^{n+1}, (3.9)
(3v^{n+1} − 4v^n + v^{n−1})/(2δt) + v^{n+1} − ∆u^{n+1} + (r^{n+1}/s^{n+1/2}) f(ū^{n+1}) = 0, (3.10)
3r^{n+1} − 4r^n + r^{n−1} = (1/(2s^{n+1/2})) ∫_Ω f(ū^{n+1})(3u^{n+1} − 4u^n + u^{n−1}) dx, (3.11)

where ū^{n+1} = 2u^n − u^{n−1} is a second order extrapolation of u^{n+1} and s^{n+1/2} = √(∫_Ω F(ū^{n+1}) dx + C₀).

Energy stability
We define the discrete energy functional associated to the dynamical system by

E(u, v, r) = (1/2)|v|₀² + (1/2)|u|₁² + r², (3.12)

where |·|₀ denotes the L²(Ω) norm and |·|₁ = |∇·|₀ the H¹₀(Ω) seminorm. As a shortcut, we write E^n = E(u^n, v^n, r^n). We introduce the modified discrete energy associated to equations (3.9)-(3.11),

Ẽ^n = (1/4)(|v^n|₀² + |2v^n − v^{n−1}|₀²) + (1/4)(|u^n|₁² + |2u^n − u^{n−1}|₁²) + (1/2)((r^n)² + (2r^n − r^{n−1})²). (3.13)

Proposition 3.1. The scheme (3.9)-(3.11) is unconditionally stable in the sense that, for all δt > 0 and all n ≥ 1,

Ẽ^{n+1} + δt|v^{n+1}|₀² ≤ Ẽ^n. (3.14)
Proof. We multiply (3.10) by 3u^{n+1} − 4u^n + u^{n−1} = 2δt v^{n+1} and equation (3.11) by 2r^{n+1}. We integrate on Ω and sum up the two equations. The nonlinear terms cancel, yielding the following equation:

(3v^{n+1} − 4v^n + v^{n−1}, v^{n+1}) + 2δt|v^{n+1}|₀² + (∇u^{n+1}, ∇(3u^{n+1} − 4u^n + u^{n−1})) + 2r^{n+1}(3r^{n+1} − 4r^n + r^{n−1}) = 0.

The inequality (3.14) is a direct consequence of the following expansion [28]:

2a(3a − 4b + c) = a² + (2a − b)² − b² − (2b − c)² + (a − 2b + c)².
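The expansion invoked here is the standard BDF2 telescoping identity, stated for scalars and applied termwise to the inner products above. A quick numerical sanity check:

```python
import numpy as np

# BDF2 telescoping identity:
#   2a(3a - 4b + c) = a^2 + (2a-b)^2 - b^2 - (2b-c)^2 + (a-2b+c)^2
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(3)
lhs = 2.0 * a * (3.0 * a - 4.0 * b + c)
rhs = a**2 + (2*a - b)**2 - b**2 - (2*b - c)**2 + (a - 2*b + c)**2
assert abs(lhs - rhs) < 1e-12
```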

An efficient solver
It is well-known that SAV schemes can be solved very efficiently. We show that this is the case for system (3.9)-(3.11). We assume that (u^n, v^n, r^n) and (u^{n−1}, v^{n−1}, r^{n−1}) are known in H¹₀(Ω) × L²(Ω) × R. By eliminating v^{n+1} in (3.10), thanks to (3.9), we get the following expression:

A u^{n+1} + r^{n+1} q^n = b^n, (3.15)

where A = (9/(4δt²) + 3/(2δt)) Id − ∆, q^n = f(ū^{n+1})/s^{n+1/2} and b^n gathers the known terms. By (3.11), we know that

r^{n+1} = (1/3)(4r^n − r^{n−1}) + (1/2)(q^n, u^{n+1}) − (1/6)(q^n, 4u^n − u^{n−1}). (3.16)

Thus, substituting the last equation into (3.15) yields

A u^{n+1} + (1/2)(u^{n+1}, q^n) q^n = g^n, (3.17)

where g^n = b^n − [(1/3)(4r^n − r^{n−1}) − (1/6)(q^n, 4u^n − u^{n−1})] q^n is known. In order to solve (3.17), the idea is first to compute the term (u^{n+1}, q^n). This can be done by applying A⁻¹ to (3.17) and taking the inner product with q^n (we note that A is an isomorphism from H¹₀(Ω) onto (H¹₀(Ω))′ and that q^n belongs to (H¹₀(Ω))′ by (3.4)). We obtain

(u^{n+1}, q^n) = (A⁻¹ g^n, q^n) / (1 + (1/2)(A⁻¹ q^n, q^n)).

The numerical implementation to compute u^{n+1} at each iteration can be summarized as follows: (i) Compute A⁻¹ g^n. This consists in solving a linear second order equation with constant coefficients. (ii) Compute A⁻¹ q^n. This also consists in solving another linear second order equation with constant coefficients. (iii) At this stage, all terms required in (3.17) are known, so we compute (u^{n+1}, q^n) by the formula above and then easily find u^{n+1}. Finally, v^{n+1} can be explicitly computed with (3.9) and r^{n+1} with (3.11).
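Abstractly, steps (i)-(iii) amount to a Sherman-Morrison solve for an operator of the form A u + κ(q, u)q = g. A small sketch with arbitrary stand-in data (A, q, g and κ below are hypothetical placeholders, not the actual quantities of (3.17)):

```python
import numpy as np

# Sherman-Morrison structure of the SAV solve: the system
#   A u + kappa*(q, u) q = g
# is solved with two solves involving A only.
rng = np.random.default_rng(1)
n = 50
C = rng.standard_normal((n, n))
A = C @ C.T + n * np.eye(n)            # symmetric positive definite stand-in
q = rng.standard_normal(n)
g = rng.standard_normal(n)
kappa = 0.5

x1 = np.linalg.solve(A, g)             # first linear solve: A^{-1} g
x2 = np.linalg.solve(A, q)             # second linear solve: A^{-1} q
qu = np.dot(q, x1) / (1.0 + kappa * np.dot(q, x2))   # the scalar (q, u)
u = x1 - kappa * qu * x2

# agrees with a direct solve of the full (rank-one perturbed) system
u_direct = np.linalg.solve(A + kappa * np.outer(q, q), g)
assert np.allclose(u, u_direct)
```

The denominator 1 + κ(A⁻¹q, q) is positive because A is positive definite, so the division is always well defined.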

Asymptotic behavior
In this section, we assume that (u 0 , v 0 , r 0 ) and (u 1 , v 1 , r 1 ) are given in H 1 0 (Ω) × L 2 (Ω) × R. We have seen in Section 3.4 that the SAV/BDF2 scheme (3.9)-(3.11) generates a unique sequence (u n , v n , r n ) n in H 1 0 (Ω)×L 2 (Ω)×R. We are interested in the asymptotic behavior of this sequence as n goes to +∞.
It is easily seen that v^n → 0 in L²(Ω) (see below). Thus, we focus on the sequence (u^n, r^n)_n and we introduce its ω-limit set. For simplicity, we denote by Ṽ the space H¹₀(Ω) × R equipped with any standard norm (in our case, we choose ‖·‖_Ṽ = |·|₁ + |·|). The ω-limit set is

ω((u^n, r^n)_n) = { (u⋆, r⋆) ∈ Ṽ : (u^{n_k}, r^{n_k}) → (u⋆, r⋆) in Ṽ for some subsequence n_k → +∞ }.

We have the following result.

Theorem 3.1. Any sequence (u^n, v^n, r^n)_n generated by the SAV/BDF2 scheme (3.9)-(3.11) satisfies v^n → 0 in L²(Ω) and, up to a subsequence, (u^n, r^n)_n converges in Ṽ to a limit (u⋆, r⋆) which solves the modified steady state equation

−∆u⋆ + (r⋆/s⋆) f(u⋆) = 0 in Ω, with s⋆ = (∫_Ω F(u⋆) dx + C₀)^{1/2}. (3.18)

Proof. By Proposition 3.1, the modified energy (3.13) is nonincreasing along the sequence. As a consequence, the sequences (u^n), (v^n) and (r^n) are respectively bounded in H¹₀(Ω), L²(Ω) and R. We claim that the sequence (u^n) is precompact in H¹₀(Ω). The idea is to show that (u^n) is bounded in a Sobolev space W^{2,q}(Ω) for an appropriate value of q > 1.
By (3.10), we have the following expression:

−∆u^{n+1} = −(r^{n+1}/s^{n+1/2}) f(2u^n − u^{n−1}) + g(v^{n+1}, v^n, v^{n−1}),

where g is a linear combination of v^{n+1}, v^n and v^{n−1}. In particular, the sequence g(v^{n+1}, v^n, v^{n−1}) is bounded in L²(Ω) and therefore in L^q(Ω) for any 1 < q ≤ 2.
We know that (r^n) is bounded in R, and so the sequence (u^n, r^n)_n is precompact in Ṽ. It is well-known that ω((u^n, r^n)_n) is closed in Ṽ and therefore compact.
On summing (3.14) from n = 1 to +∞, we find that

Σ_{n≥1} δt |v^{n+1}|₀² ≤ Ẽ¹ < +∞.

In particular, v^n → 0 in L²(Ω). Moreover, by (3.9), 3u^{n+1} − 4u^n + u^{n−1} = 2δt v^{n+1} → 0 in L²(Ω). Thus, u^{n+1} − u^n → 0 in L²(Ω). In the same manner, we obtain that

r^{n+1} − r^n → 0 in R. (3.23)

By precompactness of the sequence (u^n) in H¹₀(Ω), we deduce that

u^{n+1} − u^n → 0 in H¹₀(Ω). (3.24)

We conclude, using a standard result, that ω((u^n, r^n)_n) is connected in Ṽ (see Lemma 3.1). Next, we write E⋆ = lim_{n→+∞} 2Ẽ^n; since v^n → 0 and the increments of (u^n) and (r^n) tend to 0, we have 2Ẽ^n − (|u^n|₁² + 2(r^n)²) → 0, and so (u, r) ↦ 2E(u, 0, r) = |u|₁² + 2(r)² is a constant equal to E⋆ on the set ω((u^n, r^n)_n).
Let now (u⋆, r⋆) ∈ ω((u^n, r^n)_n). There exists a subsequence (u^{n_k}, r^{n_k}) of (u^n, r^n)_n such that (u^{n_k}, r^{n_k}) → (u⋆, r⋆) in Ṽ. We have (recall (3.24)) 2u^{n_k} − u^{n_k−1} → u⋆ in H¹₀(Ω), so that, by continuity,

s^{n_k+1/2} = (∫_Ω F(2u^{n_k} − u^{n_k−1}) dx + C₀)^{1/2} → (∫_Ω F(u⋆) dx + C₀)^{1/2} = s⋆.

Thus, s^{n_k+1/2} → s⋆ in R. By letting n_k tend to +∞ in (3.10), we obtain the steady state equation (3.18).
For the reader's convenience, we prove the following result.
Proof. Assume that ω((u^n, r^n)_n) is not connected in Ṽ. Then we can find two nonempty closed sets K₁ and K₂ in Ṽ such that ω((u^n, r^n)_n) = K₁ ∪ K₂ and K₁ ∩ K₂ = ∅. We note that K₁ and K₂ are compact and we define

2ε = inf_{x∈K₁, y∈K₂} ‖x − y‖_Ṽ = min_{(x,y)∈K₁×K₂} ‖x − y‖_Ṽ > 0.
For simplicity, we write w^n = (u^n, r^n). By (3.23) and (3.24), there exists N ≥ 1 such that, for all n ≥ N, dist_Ṽ(w^n, K₁ ∪ K₂) < ε/2 and ‖w^{n+1} − w^n‖_Ṽ < ε. For j = 1, 2, let I_j = {n ≥ N : dist_Ṽ(w^n, K_j) < ε/2}. Since K₁ and K₂ both contain limit points of (w^n), we can always find a certain n₀ ≥ N such that n₀ ∈ I₁ and n₀ + 1 ∈ I₂. The infimum is reached, and so we write dist_Ṽ(w^{n₀}, K₁) = ‖w^{n₀} − v₁‖_Ṽ and dist_Ṽ(w^{n₀+1}, K₂) = ‖w^{n₀+1} − v₂‖_Ṽ, where v₁ ∈ K₁ and v₂ ∈ K₂. We have

2ε ≤ ‖v₁ − v₂‖_Ṽ ≤ ‖v₁ − w^{n₀}‖_Ṽ + ‖w^{n₀} − w^{n₀+1}‖_Ṽ + ‖w^{n₀+1} − v₂‖_Ṽ < ε/2 + ε + ε/2 = 2ε.

This is absurd. The set ω((u^n, r^n)_n) is therefore connected in Ṽ, as claimed.
Remark 3.1. Theorem 3.1 shows that the sequence generated by the SAV scheme converges to a steady state, up to a subsequence. We did not manage to prove that the whole sequence converges to a single equilibrium, even with additional assumptions on f. In contrast, in [26], we proved by means of a Lojasiewicz-Simon inequality that the sequence generated by the backward Euler scheme applied to (3.1) converges to a single equilibrium, for analytic nonlinearities satisfying a one-sided Lipschitz condition.

Analysis of the steady state equation for three examples
In this section, we compare the steady state equations for the damped wave equation (3.1) and for the SAV time semi-discrete version (3.9)-(3.11).

The two steady state equations
A steady state u ∈ H¹₀(Ω) for the damped wave equation is a solution of (3.1) which does not depend on time, that is, a solution of the elliptic PDE

−∆u + f(u) = 0 in Ω. (3.25)

If we consider the damped wave equation (3.1) as a first order system acting on H¹₀(Ω) × L²(Ω), a steady state is a pair (u, 0) where u solves (3.25). Concerning the SAV approach, we note the following.
A triple (u⋆, v⋆, r⋆) ∈ H¹₀(Ω) × L²(Ω) × R is a stationary state for the SAV scheme (3.9)-(3.11) (that is, a constant sequence which complies with the scheme) if and only if v⋆ = 0 and (u⋆, r⋆) satisfies the modified steady state equation (3.27). The energy E (cf. (3.12)) has a unique critical point in H¹₀(Ω) × L²(Ω) × R, which is (0, 0, 0), but there are many more steady states than (0, 0, 0). We thus have a Lyapunov functional for the dynamical system (3.9)-(3.11) which does not drive every solution to its unique critical point (0, 0, 0) (this is also true for the continuous-in-time SAV dynamical system (3.6)-(3.8), so it is not a consequence of the time discretization: it is a consequence of the additional auxiliary variable r). However, Theorem 3.1 shows that the energy dissipation implies that, up to a subsequence, every sequence generated by the SAV scheme converges to a steady state which solves (3.27).

The linear damped wave equation
We first assume that f(u) = u, in which case (3.1) is the linear damped wave equation. Equation (3.28) reads

−∆u + αu = 0 in Ω, (3.29)

with α ∈ R. This is a well-known eigenvalue problem. Let 0 < λ₁ ≤ λ₂ ≤ · · · denote the eigenvalues of −∆ with Dirichlet boundary conditions. If α = −λ_i for some i, then (3.29) has a continuum of solutions corresponding to the eigenspace for λ_i. If α ≠ −λ_i for every i, then the unique solution to (3.29) is u = 0. With this simple example, we see that the set of steady states is drastically different for the damped wave equation and its SAV discretization.
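On Ω = (0, 1), the Dirichlet eigenvalues are λ_k = (kπ)², so nonzero steady states of the SAV discretization with f(u) = u exist only at the discrete set of values α = −(kπ)². A quick finite-difference check of the eigenvalues (grid size is an arbitrary choice):

```python
import numpy as np

# Eigenvalues of the 1D Dirichlet Laplacian on (0,1) by finite differences;
# they approximate lambda_k = (k*pi)^2.
N = 200
h = 1.0 / (N + 1)
Lap = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
       - np.diag(np.ones(N - 1), -1)) / h**2
lam = np.sort(np.linalg.eigvalsh(Lap))
for k in (1, 2, 3):
    assert abs(lam[k - 1] - (k * np.pi)**2) < 1.0   # lam_k is close to (k*pi)^2
```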

The dissipative sine-Gordon equation
Next, we assume that f (u) = sin u and d ≥ 1 (cf. Example 3.1). If d = 1 we simply assume that Ω is an interval. If d ≥ 2, we assume that Ω is a bounded domain with a nonnegative mean curvature (this holds for instance if Ω is convex or star-shaped), or that Ω is an annulus of R d .

Equation (3.28) reads
−∆u + α sin u = 0 in Ω, (3.30)

with α ∈ R. If α > 0 (in particular if α = 1 as in (3.25)), then the unique solution to (3.30) is u = 0. This is a consequence of Pohozaev's identity in case Ω is star-shaped [20]. The other situations have been considered in [13]. The linearized equation of (3.30) at u = 0 reads −∆w + αw = 0 in Ω, and we recover the previous eigenvalue problem. Using a bifurcation approach (see, e.g., [22]), a bifurcation branch is likely to start for (3.30) at every singular value α = −λ_i, where λ_i is an eigenvalue of −∆ as previously. Moreover, for all α < −λ₁, the functional

I(u) = (1/2)∫_Ω |∇u|² dx + α∫_Ω (1 − cos u) dx

has at least one global minimizer in H¹₀(Ω), which therefore solves (3.30), and which cannot be identically equal to 0 because 0 is unstable (cf. the next example). Again, the set of steady states for (3.30) is therefore very different from the case α = 1.
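The existence of a nonzero minimizer for α < −λ₁ can be observed numerically. The following sketch minimizes a finite difference discretization of the functional by plain gradient descent, assuming the free energy density 1 − cos s (so that the functional is (1/2)∫|u′|² + α∫(1 − cos u) on (0, 1) with u(0) = u(1) = 0); the grid, the descent step and the value α = −15 < −π² are arbitrary illustrative choices.

```python
import numpy as np

# Gradient descent on I(u) = 1/2 int |u'|^2 + alpha int (1 - cos u),
# alpha < -lambda_1 = -pi^2, Dirichlet conditions on (0,1).
N, alpha = 50, -15.0
h = 1.0 / (N + 1)
xg = np.linspace(h, 1.0 - h, N)
Lap = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
       - np.diag(np.ones(N - 1), -1)) / h**2

I = lambda u: h * (0.5 * np.dot(Lap @ u, u) + alpha * (1.0 - np.cos(u)).sum())

u = 0.5 * np.sin(np.pi * xg)        # small nonzero start (0 is unstable)
tau = 1e-4                          # step below 2/||Lap|| for stability
for _ in range(40000):
    u = u - tau * (Lap @ u + alpha * np.sin(u))

assert I(u) < 0.0                   # below I(0) = 0: the minimizer is not 0
assert np.max(np.abs(u)) > 0.5      # the computed state is far from 0
```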

The modified Allen-Cahn equation
Last, we consider the case where f(u) = u³ − βu with β > λ₁ (> 0) and 1 ≤ d ≤ 3 (cf. Example 3.2). Equation (3.25) reads

−∆u + u³ − βu = 0 in Ω, (3.31)

and u ∈ H¹₀(Ω) solves (3.31) if and only if u is a critical point of Ê in H¹₀(Ω), where

Ê(u) = ∫_Ω (1/2)|∇u|² + (1/4)u⁴ − (β/2)u² dx.

By considering a minimizing sequence, it is easily seen that Ê has a global minimizer u⋆ in H¹₀(Ω), which is therefore a solution of (3.31). On the other hand, Ê is of class C² on H¹₀(Ω) and its Hessian at a point u reads

Ê″(u)(h, h) = ∫_Ω |∇h|² + (3u² − β)h² dx, h ∈ H¹₀(Ω).

Since β > λ₁, we see that the critical point 0 is unstable, so that u⋆ ≢ 0 on Ω. Indeed, on choosing h = e₁, an eigenvector associated to λ₁, we have

Ê″(0)(e₁, e₁) = ∫_Ω |∇e₁|² − βe₁² dx = (λ₁ − β)∫_Ω e₁² dx < 0.

Thus, 0 is not a global minimizer of Ê in H¹₀(Ω). As a consequence, the pair (u⋆, 0) ∈ H¹₀(Ω) × L²(Ω) is a global minimizer in H¹₀(Ω) × L²(Ω) of the functional Ẽ(u, v) defined by (3.5), whereas (0, 0) is a critical point of Ẽ(u, v) in H¹₀(Ω) × L²(Ω) which is not a global minimizer. In contrast, (0, 0, 0) ∈ H¹₀(Ω) × L²(Ω) × R is the unique global minimizer of the functional E(u, v, r) (cf. (3.12)) associated to the SAV time discrete scheme. This shows that the stability of a critical point can change drastically between the damped wave equation (3.1) and its SAV time discrete version.

Remark 3.2. By a suitable choice, we obtain a sequence of initial values (u_{α_k}, 0, r_{α_k}) which are steady states for the SAV scheme (3.9)-(3.11) and such that u_{α_k} → u₁ and r_{α_k}/s_{α_k} → 1, but r_{α_k}/s_{α_k} ≠ 1 for all k. In other words, the SAV scheme starts arbitrarily close to u₁ and does not end up on a steady state of the PDE (but only close to u₁, i.e. at u_{α_k}).

Asymptotic behavior of a fully discrete SAV method for the Cahn-Hilliard equation
We have seen in the previous section that the steady state equation for the SAV scheme is a modified version of the steady state equation associated to the PDE. This can be easily quantified by the ratio r ⋆ /s ⋆ in (3.18), which should be equal to 1 in an ideal situation.
In this section, we want to check this numerically. In order to underline that the situation is not restricted to the damped wave equation with Dirichlet boundary conditions, we consider the Cahn-Hilliard equation with no-flux boundary conditions (a model problem for gradient flows). For the time discretization, we use a first order SAV scheme. For the space discretization, we use a finite element method which naturally inherits the properties of the continuous problem. We first prove that an asymptotic result similar to Theorem 3.1 holds for the fully discrete scheme.

We consider the Cahn-Hilliard equation on a bounded polyhedral domain Ω of R^d (1 ≤ d ≤ 3),

∂u/∂t = ∆w in Ω, (4.1)
w = −α∆u + f(u) in Ω, (4.2)

with α > 0 and no-flux boundary conditions ∂u/∂n = ∂w/∂n = 0 on ∂Ω.

For the space discretization, we use piecewise linear continuous (P1) finite elements based on a conforming triangulation of Ω [9]. The finite element space V_h is a finite dimensional subspace of H¹(Ω) which contains the constants. We denote by (φ₁, …, φ_{N_h}) the standard basis and we seek u^n_h = Σ_{j=1}^{N_h} U^n_j φ_j ∈ V_h. The first order fully discrete SAV scheme reads: find (u^{n+1}_h, w^{n+1}_h, r^{n+1}) ∈ V_h × V_h × R such that

((u^{n+1}_h − u^n_h)/δt, φ_h) + (∇w^{n+1}_h, ∇φ_h) = 0, (4.6)
(w^{n+1}_h, χ_h) = α(∇u^{n+1}_h, ∇χ_h) + (r^{n+1}/s^n)(f(u^n_h), χ_h), (4.7)
r^{n+1} − r^n = (1/(2s^n))(f(u^n_h), u^{n+1}_h − u^n_h), (4.8)

for all φ_h, χ_h ∈ V_h, where s^n = (∫_Ω F(u^n_h) dx + C₀)^{1/2} and (·, ·) stands for the scalar product in L²(Ω).

Energy estimate and asymptotic behavior
It is well-known that the time semi-discrete SAV scheme (4.3)-(4.5) satisfies a stability estimate. We prove that this still holds for the fully discrete version (4.6)-(4.8). We choose φ_h = δt w^{n+1}_h in (4.6) and χ_h = u^{n+1}_h − u^n_h in (4.7), we multiply (4.8) by 2r^{n+1} and we add the resulting equations. The nonlinear terms cancel and we obtain

E(u^{n+1}_h, r^{n+1}) + (α/2)|∇(u^{n+1}_h − u^n_h)|₀² + (r^{n+1} − r^n)² + δt|∇w^{n+1}_h|₀² = E(u^n_h, r^n), (4.9)

where E(u, r) = (α/2)|∇u|₀² + r². This is the energy estimate, which is valid for all δt > 0 (unconditional stability). Now, we choose φ_h ≡ 1 in (4.6). We obtain that (u^{n+1}_h, 1) = (u^n_h, 1) for all n, so that the mass (u^n_h, 1) is constant, that is,

(u^n_h, 1) = (u⁰_h, 1), ∀n ≥ 0. (4.10)

A steady state for the discrete dynamical system (4.6)-(4.8) is a constant sequence, namely a triple (u⋆_h, w⋆_h, r⋆) ∈ V_h × V_h × R which complies with the scheme. Assume that the initial values are given in V_h × V_h × R. Then the fully discrete scheme (4.6)-(4.8) generates a unique sequence (u^n_h, w^n_h, r^n)_n in V_h × V_h × R (see Section 4.3). We have the following asymptotic result.

Theorem 4.1. Any sequence (u^n_h, w^n_h, r^n)_n generated by the SAV/BDF1 scheme (4.6)-(4.8) converges, up to a subsequence, in V_h × V_h × R to a steady state (u⋆_h, C⋆, r⋆) which solves (4.11) and where C⋆ is constant in Ω.
Proof. The energy estimate (4.9) shows that (E(u^n_h, r^n))_n is nonincreasing, so (|∇u^n_h|₀)_n and (r^n)_n are bounded. By preservation of the mass (cf. (4.10)), (u^n_h, 1) is constant. The Poincaré-Wirtinger inequality shows that the Hilbertian norm on H¹(Ω) defined by v ↦ (|∇v|₀² + (v, 1)²)^{1/2} is equivalent to the standard H¹(Ω) norm. Thus, (u^n_h)_n is bounded in H¹(Ω) and therefore in V_h. Since V_h has finite dimension, the Bolzano-Weierstrass theorem implies that, for a subsequence, (u^{n_k}_h, r^{n_k}) → (u⋆_h, r⋆) in V_h × R. Next, we sum the energy estimate (4.9) from n = 0 to n = +∞. We obtain

Σ_{n≥0} [ (α/2)|∇(u^{n+1}_h − u^n_h)|₀² + (r^{n+1} − r^n)² + δt|∇w^{n+1}_h|₀² ] ≤ E(u⁰_h, r⁰) < +∞.

In particular, |∇w^{n+1}_h|₀ → 0 as n → +∞. On choosing χ_h ≡ 1 in (4.7) and using the continuity of u ↦ f(u) in H¹(Ω) (cf. (3.4)), we see that the mean value (w^n_h, 1) remains bounded. This shows that (w^{n_k}_h) converges to some constant C⋆ ∈ R. By choosing n + 1 = n_k in (4.7) and letting k tend to +∞, for an arbitrary χ_h ∈ V_h, we obtain the steady state equation (4.11). The proof is complete.

Remark 4.1. As in Theorem 3.1, the ω-limit set of (u^n_h, r^n)_n is a compact and connected subset of V_h × R on which E is constant and equal to the limit E⋆ = lim_{n→+∞} E(u^n_h, r^n).

Remark 4.2. If we replace ∆w by −w in (4.1), then (4.1)-(4.2) becomes the Allen-Cahn equation. A similar asymptotic convergence result can be obtained in that case. However, since the mass is not conserved for the Allen-Cahn equation, some coercivity must be added to the linear term in order to deal with the Neumann boundary conditions [27]. This can be done for instance by writing equation (4.2) as w = −α∆u + αu + f̃(u), with f̃(u) = f(u) − αu.

The linear solver
It is well-known that the SAV scheme for the Cahn-Hilliard equation requires only the resolution of two linear systems with constant coefficients [28]. We show this for the fully discrete finite element version (4.6)-(4.8).
For this purpose, we introduce the matrix version of the scheme, which reads

B(U^{n+1} − U^n) + δt A W^{n+1} = 0,
B W^{n+1} = α A U^{n+1} + (r^{n+1}/s^n) F^n, (4.14)
r^{n+1} − r^n = (1/(2s^n)) ⟨F^n, U^{n+1} − U^n⟩.

Here, (φ₁, …, φ_{N_h}) is the usual finite element basis of V_h. We have set A = ((∇φ_j, ∇φ_i))_{1≤i,j≤N_h}, the stiffness matrix, and B = ((φ_j, φ_i))_{1≤i,j≤N_h}, the mass matrix. The nonlinear term is the column vector F^n = (F^n_1, …, F^n_{N_h})ᵗ with

F^n_i = (f(u^n_h), φ_i), 1 ≤ i ≤ N_h. (4.15)

In (4.14), we denote ⟨·, ·⟩ the usual scalar product in R^{N_h}. We first eliminate W^{n+1}. We get

B U^{n+1} + αδt A B⁻¹ A U^{n+1} + (δt r^{n+1}/s^n) A B⁻¹ F^n = B U^n, (4.16)

and, by (4.14),

r^{n+1} = r^n + (1/(2s^n)) ⟨F^n, U^{n+1} − U^n⟩. (4.17)

Next, by eliminating r^{n+1} in (4.16) thanks to (4.17), we get

A_sav U^{n+1} + (δt/(2(s^n)²)) ⟨U^{n+1}, Q^n⟩ A B⁻¹ Q^n = G^n, (4.18)

where A_sav = B + αδt A B⁻¹ A, Q^n = F^n and

G^n = B U^n − (δt/s^n)(r^n − (1/(2s^n))⟨F^n, U^n⟩) A B⁻¹ F^n.

The idea consists in computing ⟨U^{n+1}, Q^n⟩. Thus, we apply the operator A⁻¹_sav to equation (4.18) and we take the scalar product with Q^n. This yields

⟨U^{n+1}, Q^n⟩ = ⟨A⁻¹_sav G^n, Q^n⟩ / (1 + (δt/(2(s^n)²)) ⟨A⁻¹_sav A B⁻¹ Q^n, Q^n⟩). (4.20)

In order to compute U^{n+1}, we proceed as follows: (i) We first compute A⁻¹_sav G^n and A⁻¹_sav A B⁻¹ Q^n. These are two linear systems. (ii) We obtain ⟨U^{n+1}, Q^n⟩ from (4.20) by computing two scalar products and a division (cf. Lemma 4.1).
(iii) From (4.18), we can compute U^{n+1} through

U^{n+1} = A⁻¹_sav G^n − (δt/(2(s^n)²)) ⟨U^{n+1}, Q^n⟩ A⁻¹_sav A B⁻¹ Q^n.

At this stage, all the required terms are known.
The matrix A is symmetric and positive semi-definite, but it is not invertible, so we set A_ε = A + εI, which is positive definite for ε > 0. The symmetric matrix B is also positive definite. Thus, letting A_sav,ε = B + αδt A_ε B⁻¹ A_ε, we obtain a symmetric and positive definite matrix, and ⟨A⁻¹_sav,ε A_ε B⁻¹ Q, Q⟩ ≥ 0 for every Q ∈ R^{N_h}. By letting ε → 0⁺, the claim follows.
At a steady state, the matrix equations reduce to

α A U⋆ + (r⋆/s⋆) F⋆ = C⋆ B 𝟙, (4.21)

where 𝟙 = (1, …, 1)ᵗ. This is the matrix version of (4.11), since the matrix A has a kernel of dimension 1 corresponding to the constant functions in V_h.
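To make the linear algebra of this section concrete, the following sketch assembles the 1D P1 mass and stiffness matrices on a uniform grid (as in the simulations of the next section), checks that the constants span the kernel of A, and runs steps (i)-(iii). Here γ, Q and G are hypothetical stand-ins for the actual quantities of (4.18)-(4.20), and the parameters are arbitrary.

```python
import numpy as np

# 1D P1 finite element matrices on [0,1] with M nodes (no-flux case),
# plus the rank-one SAV solve, steps (i)-(iii).
M, alpha, dt = 50, 0.01, 0.01
h = 1.0 / (M - 1)

A = (np.diag(2.0 * np.ones(M)) - np.diag(np.ones(M - 1), 1)
     - np.diag(np.ones(M - 1), -1)) / h
A[0, 0] = A[-1, -1] = 1.0 / h                      # tridiagonal stiffness matrix
B = (np.diag(4.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
     + np.diag(np.ones(M - 1), -1)) * h / 6.0
B[0, 0] = B[-1, -1] = 2.0 * h / 6.0                # tridiagonal mass matrix

assert np.allclose(A @ np.ones(M), 0.0)            # ker(A) = constants

Asav = B + alpha * dt * A @ np.linalg.solve(B, A)  # SAV operator (full matrix)
rng = np.random.default_rng(2)
Q, G = rng.standard_normal(M), rng.standard_normal(M)
gamma = 0.3                                        # stand-in coefficient
ABQ = A @ np.linalg.solve(B, Q)

x1 = np.linalg.solve(Asav, G)                      # (i) first linear system
x2 = np.linalg.solve(Asav, ABQ)                    # (i) second linear system
UQ = np.dot(x1, Q) / (1.0 + gamma * np.dot(x2, Q)) # (ii) the scalar <U, Q>
U = x1 - gamma * UQ * x2                           # (iii) assemble U

# agrees with a direct dense solve of the rank-one perturbed system
U_direct = np.linalg.solve(Asav + gamma * np.outer(ABQ, Q), G)
assert np.linalg.norm(U - U_direct) <= 1e-6 * np.linalg.norm(U_direct)
```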

Numerical simulations
We perform numerical simulations in one space dimension on Ω = (0, 1) with Matlab®. The nonlinearity is f(s) = s³ − s and α = 0.01. We consider an initial datum u⁰(x) = 0.9 cos(πx). For the space discretization with P1 finite elements, we use a uniform subdivision with space step h = 1/(M − 1) and M = 400. The nonlinear term F^n_i defined by (4.15) is computed with a Gauss quadrature formula of order 5, so that the computation is exact up to double precision. We note that the matrices A and B are tridiagonal, but B⁻¹ is a full matrix. Consequently, the SAV operator A_sav = B + αδt A B⁻¹ A is also a full matrix. It would be more efficient to use a lumped (diagonal) mass matrix as an approximation of B, but this would lead to an additional consistency error that we prefer to avoid. Figure 1 illustrates that the generated sequence (u^n_h)_n converges to an equilibrium state and that the modified energy E^n = E(u^n_h, r^n) associated to (4.6)-(4.8) is nonincreasing in time (cf. (4.9)).
In order to show numerically that there is no guarantee that r⋆/s⋆ = 1 in (4.21), we study the relative difference (r⋆ − s⋆)/s⋆ = r⋆/s⋆ − 1. The results are shown in Table 1 for various choices of the time step δt and of the constant C₀.
The iterations are stopped at the final time T = 4, at which time the steady state is considered to be numerically reached. This is confirmed in Figure 2, where the difference |U^{n+1} − U^n|_∞ = ‖u^{n+1}_h − u^n_h‖_∞ is plotted as a function of the iteration n. We see that the difference |U^{n+1} − U^n|_∞ rapidly decreases from 10⁻¹ to 10⁻¹⁰, then stabilizes around 10⁻¹⁰. The time step used for this particular simulation is δt = 1/100 and the SAV constant is C₀ = 0.2. The corresponding value of r⋆/s⋆ − 1 equals 0.00065, as shown in Table 1. Table 1 confirms that the ratio r⋆/s⋆ is never exactly equal to 1. In all cases except one (in red), we have found r⋆ < s⋆. However, r⋆/s⋆ is very close to 1, all the more so since our initial value is close to a steady state. For a given constant C₀, we observe that the relative error for r⋆ is of order close to O(δt), especially for large values of C₀. This is consistent with the first order accuracy of the SAV scheme. Note that the space discretization does not appear here, because we work in a fixed space V_h. For a fixed time step δt, the absolute value of the relative difference seems to grow from C₀ = 0.1 to C₀ = 0.2, and then it monotonically decreases as C₀ increases. This does not mean that C₀ should be chosen very large, because the numerical errors due to a very large constant C₀ could significantly change the numerical solution. One should rather seek an optimal value of C₀, possibly small. Such an approach has been proposed in [25].