A Semi-Lagrangian scheme for a degenerate second order Mean Field Game system

In this paper we study a fully discrete Semi-Lagrangian approximation of a second order Mean Field Game system, which may be degenerate. We prove that the resulting scheme is well posed and, when the state dimension equals one, we establish a convergence result. Numerical simulations are provided, illustrating the convergence of the approximation and the differences between the numerical results in the degenerate and non-degenerate cases.


Introduction
Mean Field Games (MFG) systems were introduced independently by [23,24] and [27,28,29] in order to model dynamic games with a large number of indistinguishable small players. In the model proposed in [28,29] the asymptotic equilibrium is described by means of a system of two Partial Differential Equations (PDEs). The first equation, together with a final condition, is a Hamilton-Jacobi-Bellman (HJB) equation describing the value function of a representative player whose cost function depends on the distribution m of the entire population. The second equation is a Fokker-Planck (FP) equation which, together with an initial distribution m 0 , describes the fact that m evolves following the optimal dynamics of the average player. We refer the reader to the original papers [23,24,27,28,29] and the surveys [10,20] for a detailed description of the problem and to [22] for some interesting applications.
In this article we consider the following second order, possibly degenerate, MFG system:

  −∂_t v − (1/2) tr(σ(t)σ(t)^⊤ D²v) + (1/2)|Dv|² = F(x, m(t))   in R^d × (0, T),
  v(x, T) = G(x, m(T))   in R^d,
  ∂_t m − (1/2) Σ_{i,j} ∂²_{x_i x_j}[(σ(t)σ(t)^⊤)_{ij} m] − div(Dv m) = 0   in R^d × (0, T),
  m(·, 0) = m_0(·) ∈ P_1(R^d),    (1.1)

where P_1(R^d) is the set of probability measures over R^d having finite first order moment, σ : [0, T] → R^{d×r}, and F, G : R^d × P_1(R^d) → R are two functions satisfying the assumptions described in Section 2. To the best of our knowledge, existence and uniqueness results for this system have not yet been established (except in the case r = d, σ := σ̄ I_{d×d}, with σ̄ ∈ R).
Numerical methods for MFG problems have been addressed by several authors. We refer the reader to [3,26,21,2,11] for the analysis of the non-degenerate second order case and to [9,12] for the first order case. For the stationary and evolutive non-degenerate second order systems, a semi-implicit Finite-Difference method has been proposed in [2] and its convergence is studied in [3]. Using the Cole-Hopf change of variables, in [21] a non-degenerate second order MFG system with a quadratic Hamiltonian is transformed into a system of coupled linear parabolic equations; a semi-implicit Finite-Difference scheme for the linear system is then proposed and its convergence analyzed. In the second order case, the convergence results for the above mentioned methods depend crucially on the non-degeneracy of the second order operators in (1.1).
The aim of this work is to provide a fully discrete Semi-Lagrangian discretization of (1.1), to study the main properties of the scheme and to establish a convergence result. The main advantage of this approach is that it yields an explicit and conservative scheme, which allows for large time steps even when σ(t)σ(t)^⊤ is degenerate. We refer the reader to [17] and the references therein for the study of Semi-Lagrangian schemes for first order HJB equations and to [8,16] for the second order case. We stress that, while Semi-Lagrangian schemes for second order HJB equations have already been analyzed, the proposed approximation of the FP equation is new.
The line of argument in this paper is similar to the one in [12]. Given a continuous measure-valued map µ(·) and a space-time step (ρ, h), we discretize the HJB equation using a fully discrete Semi-Lagrangian scheme in the spirit of [8,16]. We then regularize the solution of the scheme by convolution with a mollifier φ_ε (ε > 0); the resulting function is called v^ε_{ρ,h}[µ]. In order to discretize the Fokker-Planck equation when σ ≠ 0, we propose a natural extension of the scheme described in [12] for the deterministic case (σ = 0). The solution of this scheme is denoted by m^ε_{ρ,h}[µ](·). The full discretization of problem (1.1) thus consists in finding µ(·) such that m^ε_{ρ,h}[µ](·) = µ(·). The existence of a solution of the discrete problem is established in Theorem 5.1 by standard arguments based on the Brouwer fixed point theorem. The convergence of the solutions of the discrete system to a solution of (1.1) is much more delicate. As a matter of fact, as in [12], we establish in Theorem 5.2 the convergence result only when the state dimension d equals one. Under suitable conditions on the discretization parameters, the proof is based on three crucial results. The first one is a relative compactness property for m^ε_{ρ,h}[µ](·), obtained as a consequence of a Markov chain interpretation of the scheme (related to the probabilistic interpretation of Semi-Lagrangian schemes for second order HJB equations [25]). The second one is the discrete semiconcavity of v^ε_{ρ,h}[µ] (see e.g. [1]), which implies a.e. convergence of Dv^ε_{ρ,h}[µ] to Dv[µ] (where v[µ] is the unique viscosity solution of (1.2)). The third one is a uniform L^∞-bound for the density of m^ε_{ρ,h}[µ](·), where the one dimensional assumption plays an important role. We remark that our convergence result proves the existence of a solution of (1.1) when d = 1. Moreover, our results are valid for more general Hamiltonians, such as the ones considered in [1] (see Remark 5.1(ii)).
However, since the proofs are already rather technical, as in [12] we prefer to present the details for the quadratic Hamiltonian. Let us note that when σ(t)σ(t)^⊤ is uniformly positive definite, (1.1) admits classical solutions (see e.g. [28,29,10]). In this non-degenerate case it would be desirable to extend our convergence result to general state dimensions and to study also the case when F and G are local, non-smoothing operators (see [28,29]). We expect to address these issues in future investigations.
The paper is organized as follows. In Section 2 we fix some notation and state our main assumptions. In Section 3 we provide the natural Semi-Lagrangian discretization for the HJB equation and prove its main properties. In Section 4 we propose a scheme for the Fokker-Planck equation and prove that the associated solutions, as functions of the discretization parameters, form a relatively compact set. In Section 5 we prove our main results: the existence of a solution of the discrete system and, if d = r = 1, the convergence to a solution of (1.1). Finally, in Section 6 we present some numerical simulations highlighting the difference between the numerical approximations of degenerate and non-degenerate systems.

Preliminaries
Let us first fix some notation. For x ∈ R^d, let |x| = √(x^⊤x) denote the Euclidean norm. Throughout the article, c > 0 denotes a generic constant which can change from line to line. For u : R^d × [0, T] → R, let ∂_t u denote the partial derivative of u (if it exists) with respect to the time variable, and let Du, D²u denote the gradient and Hessian of u (if they exist) with respect to the space variables. Let P(R^d) denote the set of Borel probability measures over R^d and, for p ≥ 1, let P_p(R^d) denote the subset of measures with finite p-th order moment, endowed with the Wasserstein distance

  d_p(µ, ν) := inf { ( ∫_{R^d × R^d} |x − y|^p dγ(x, y) )^{1/p} ; γ ∈ P(R^d × R^d) with first marginal µ and second marginal ν }.

It is well known (see e.g. [31, Theorem 1.14]) that d_1 can be expressed in the following dual (Kantorovich-Rubinstein) form:

  d_1(µ, ν) = sup { ∫_{R^d} φ(x) d(µ − ν)(x) ; φ : R^d → R 1-Lipschitz }.

Let us recall the following useful result (see e.g. [4, Chapter 7] and [10, Lemma 5.7]):

Lemma 2.1 Let p ≥ 1 and let K ⊆ P_p(R^d) be such that sup_{µ ∈ K} ∫_{R^d} |x|^q dµ(x) < ∞ for some q > p. Then K is a relatively compact set in P_p(R^d).
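In dimension one (the setting of the convergence result and of the numerical tests), d_1 also coincides with the L¹ distance between cumulative distribution functions, which makes it cheap to evaluate on a grid. The following sketch is illustrative (the function name and grid representation are ours, not the paper's):

```python
import numpy as np

def wasserstein1_1d(x, mu, nu):
    """Approximate d_1 between two probability weight vectors mu, nu on a
    common 1-D grid x, using the one-dimensional identity
    d_1(mu, nu) = integral of |F_mu(s) - F_nu(s)| ds, with F the CDF."""
    mu = mu / mu.sum()                 # normalize to probability weights
    nu = nu / nu.sum()
    F_mu = np.cumsum(mu)               # discrete CDFs on the grid
    F_nu = np.cumsum(nu)
    dx = np.diff(x)                    # cell widths
    return float(np.sum(np.abs(F_mu - F_nu)[:-1] * dx))
```

For example, for two Dirac masses at the endpoints of [0, 1] the distance is 1, the transport cost of moving one unit of mass across the interval.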
We now make the following assumptions on the data of (1.1):

(A1) (i) F and G are uniformly bounded over R^d × P_1(R^d) and, for every m ∈ P_1(R^d), the functions F(·, m) and G(·, m) are C² with first and second derivatives bounded in R^d, uniformly with respect to m; i.e., there exists c > 0 bounding these derivatives for all m ∈ P_1(R^d).

(ii) Denoting by σ_ℓ : [0, T] → R^d (ℓ = 1, …, r) the ℓ-th column of the matrix σ, we assume that each σ_ℓ is continuous.
(iii) The measure m_0 is absolutely continuous, with density still denoted by m_0. Moreover, we suppose that m_0 is essentially bounded and has compact support, i.e. there exists c > 0 such that supp(m_0) ⊆ B(0, c) and ‖m_0‖_∞ ≤ c.

We say that (v, m) is a solution of (1.1) if the first equation is satisfied in the viscosity sense (see e.g. [14,19]), while the second one is satisfied in the distributional sense (see e.g. [18]). Our aim in this work is to provide a discretization scheme for (1.1). Given h, ρ > 0, let us define a space grid G_ρ := {x_i = iρ ; i ∈ Z^d} and a time-space grid G_{ρ,h} := {(x_i, t_k) ; i ∈ Z^d, k = 0, …, N}, where t_k = kh (k = 0, …, N) and t_N = Nh = T. We call B(G_ρ) and B(G_{ρ,h}) the spaces of bounded functions defined respectively on G_ρ and G_{ρ,h}. For f ∈ B(G_ρ) and g ∈ B(G_{ρ,h}) we write f_i := f(x_i) and g_{i,k} := g(x_i, t_k). Given a regular triangulation T_ρ of R^d with vertices belonging to G_ρ, we denote by β_i the P1 basis functions on T_ρ: the β_i are continuous, affine on each element of T_ρ and satisfy β_i(x_j) = δ_{ij} for all x_j ∈ G_ρ (δ_{ij} being the Kronecker symbol). Moreover, each β_i has compact support, 0 ≤ β_i ≤ 1, and Σ_{i∈Z^d} β_i(x) = 1 for all x ∈ R^d. We consider the linear interpolation operator I[f](x) := Σ_{i∈Z^d} f(x_i) β_i(x) for f ∈ B(G_ρ). We recall two basic results about the interpolation operator I (see e.g. [13,30]).
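For d = 1 on a uniform grid, only two hat functions β_i are nonzero at any point, so the operator I reduces to standard piecewise-linear interpolation. A minimal sketch (names and the vectorized layout are our illustrative choices):

```python
import numpy as np

def interp_P1(grid, values, x):
    """Evaluate I[f](x) = sum_i f(x_i) beta_i(x) on a uniform 1-D grid,
    where beta_i are the P1 hat functions: at each query point only the
    two neighbouring nodes carry nonzero weight, and the weights are
    nonnegative and sum to one (monotone interpolation)."""
    rho = grid[1] - grid[0]                      # uniform spacing
    j = np.clip(np.floor((x - grid[0]) / rho).astype(int), 0, len(grid) - 2)
    theta = (x - grid[j]) / rho                  # local barycentric coordinate
    return (1 - theta) * values[j] + theta * values[j + 1]
```

Since the weights (1 − θ, θ) are convex, affine functions are reproduced exactly, consistent with the O(ρ²) interpolation estimate recalled above for C² functions.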
On the other hand, if φ ∈ C²(R^d) with bounded second derivatives, then there exists c > 0 such that ‖I[φ] − φ‖_∞ ≤ cρ².

3 A fully discrete Semi-Lagrangian scheme for the Hamilton-Jacobi-Bellman equation

Given µ ∈ C([0, T]; P_1(R^d)), let us consider the equation (3.1), i.e. the HJB equation in (1.1) with m(·) replaced by the fixed µ(·). We now discuss a probabilistic interpretation of (3.1). Consider a probability space (Ω, F, P), a filtration F := {F_t ; t ∈ [0, T]} and a Brownian motion W(·) adapted to F. Define the space L^{2,2}_F of square integrable progressively measurable processes α : Ω × [0, T] → R^d, where dt is the Lebesgue measure on [0, T]. For every α ∈ L^{2,2}_F, let X^{x,t}[α](·) solve dX(s) = α(s)ds + σ(s)dW(s) with X(t) = x. Then, setting

  v[µ](x, t) := inf_{α ∈ L^{2,2}_F} E[ ∫_t^T ( (1/2)|α(s)|² + F(X^{x,t}[α](s), µ(s)) ) ds + G(X^{x,t}[α](T), µ(T)) ],

standard arguments show that v[µ] is the unique viscosity solution of (3.1). Moreover, by the continuity property implied by (3.3), we can write directly the following dynamic programming principle for v[µ](·, ·) (see e.g. [6]). The fully discrete scheme Ŝ_{ρ,h}[µ] in (3.7) discretizes this principle; it was proposed in [8] for a stationary second order, possibly degenerate, Hamilton-Jacobi-Bellman equation, corresponding to an infinite horizon stochastic optimal control problem. We now prove, in our evolutive framework, some basic properties of Ŝ_{ρ,h}[µ]. Then, there exists a compact set K_L ⊆ R^d (whose diameter depends only on L) such that the infima on the r.h.s. of (3.7) are attained in the interior of K_L.
Proof. Properties (ii) and (iii) follow directly from (3.7). Now, since I[f] is bounded and continuous, we directly obtain the existence of a minimizer ᾱ of the r.h.s. of (3.7). Letting g denote the function being minimized, we have that g is Lipschitz with constant hL, and the resulting estimate implies that |ᾱ| ≤ 2L, which proves (i). Now, in order to prove (iv), let φ ∈ C_c^∞(R^d) and notice that, since I[φ(·, t)] is Lipschitz with a constant depending only on ‖Dφ(·, t)‖_∞ (and thus independent of (µ, ρ, h)), we obtain by (i) a fixed compact set K_φ ⊆ R^d (depending only on φ) such that the infima on the r.h.s. of (3.7) are attained in K_φ. Using this fact, for every ℓ = 1, …, r and α ∈ K_φ a Taylor expansion yields (3.8), where T_3 denotes the third order term of the expansion. Using the interpolation error estimate (2.4) and adding the equations in (3.8), we get (3.9). In the last equation, after summing the third order terms, the terms of order h_n^{3/2} cancel and the remaining leading term is O(h_n²). Choosing K_φ large enough, then dividing by h_n and letting h_n ↓ 0, we can pass to the limit in (3.9) to obtain the result. Finally, we define the regularized value function v^ε_{ρ,h}[µ](·, t) in (3.10), as well as a discrete version of (3.4).
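To make the structure of the scheme concrete, the following sketch implements one backward step of a Semi-Lagrangian scheme of the type of (3.7) for d = r = 1, under the standard form used in the cited literature: interpolate the value at the two characteristic feet x_i − hα ± √h σ(t_k), average, add the discretized running cost, and minimize over a discrete control set. All names (and the choice of the discrete control set `alphas`) are our illustrative assumptions, not the paper's code; consult (3.6)-(3.7) for the exact scheme.

```python
import numpy as np

def sl_hjb_step(grid, v_next, sigma_k, h, F_k, alphas):
    """One backward semi-Lagrangian step for the HJB equation with
    quadratic Hamiltonian (d = r = 1): for each node x_i, minimize over
    the controls the averaged interpolated value at the two feet
    x_i - h*alpha +/- sqrt(h)*sigma(t_k) plus h*(alpha^2/2 + F_k[i])."""
    rho = grid[1] - grid[0]

    def interp(x):
        # P1 interpolation with constant extension outside the grid
        j = np.clip(np.floor((x - grid[0]) / rho).astype(int), 0, len(grid) - 2)
        t = np.clip((x - grid[j]) / rho, 0.0, 1.0)
        return (1 - t) * v_next[j] + t * v_next[j + 1]

    v = np.empty_like(v_next)
    for i, xi in enumerate(grid):
        feet_p = xi - h * alphas + np.sqrt(h) * sigma_k
        feet_m = xi - h * alphas - np.sqrt(h) * sigma_k
        cost = 0.5 * (interp(feet_p) + interp(feet_m)) \
               + h * (0.5 * alphas**2 + F_k[i])
        v[i] = cost.min()
    return v
```

Note that the scheme is monotone and explicit: each new value is a minimum of convex combinations of values at the next time level plus a cost term, which is what drives the comparison arguments in the convergence analysis.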
Lemma 3.1 For every t ∈ [0, T], the following assertions hold true.

Proof. Using that β_m(x_{i+j} + z) = β_{m−j}(x_i + z) for every m, i, j ∈ Z^d and z ∈ R^d, for every α ∈ R^d, k = 0, …, N − 1 and ℓ = 1, …, r we have (3.12), with an analogous equality for the backward difference. Therefore, by a recursive argument using (3.12), we easily obtain (3.13), and assertion (i) follows from (3.10) and (2.3). In order to prove the second assertion note that, since G is semiconcave, the result is valid for v_{·,N}. Inductively, we suppose the result for t_{k+1} and prove its validity for t_k (k = 0, …, N − 1). Let us denote by α_{i,k} an optimal solution of the problem in (3.14). On the other hand, we have an estimate whose last inequality follows from (3.13), and an analogous one for the symmetric term. Therefore, combining (3.14), the semiconcavity of F and the above inequalities, we obtain the semiconcavity estimate at t_k and, by induction, for all k = 0, …, N, from which the result follows.
The following convergence result holds true (Theorem 3.1): using the properties of the scheme proved in Proposition 3.1, the first assertion follows by classical arguments (see [5] and [12]).

4 The fully discrete scheme for the Fokker-Planck equation

Given a compact set K ⊆ R^d, let us define the convex and compact set S^{N+1}_K of (N+1)-tuples of probability measures supported in K and, for a given µ ∈ S^{N+1}_K, its extension µ̃ to all t ∈ [0, T] defined by interpolation in time. By construction µ̃ ∈ C([0, T]; P_1(R^d)) and, without danger of confusion, we will still write µ for µ̃. Thus, given µ ∈ S^{N+1}_K, we can define v[µ](·, ·) as in Section 3. For ε > 0, i ∈ Z^d, ℓ = 1, …, r and k = 0, …, N − 1, let us define the characteristic feet Φ^{ε,ℓ,±}_{i,k}[µ] := x_i − h Dv^ε_{ρ,h}[µ](x_i, t_k) ± √(rh) σ_ℓ(t_k), on which the scheme (4.5) for the Fokker-Planck equation is based. In fact, using that m_0 has compact support and that σ and Dv^ε_{ρ,h}[µ](x_i, t_k) are uniformly bounded (by Lemma 3.2(i)), we have the existence of a constant c > 0 such that the discrete masses remain supported in a fixed compact set, which implies that the scheme is conservative.
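In dimension one, the conservative Fokker-Planck scheme described above can be sketched as a mass redistribution: each cell's mass is split into two halves, pushed to the two characteristic feet, and redistributed onto neighbouring nodes with the P1 hat weights. The code below is an illustrative reconstruction under these standard assumptions (names are ours); since the two hat weights at each foot sum to one, total mass is preserved exactly as long as the feet remain in the grid.

```python
import numpy as np

def fp_step(grid, m_k, Dv_k, sigma_k, h):
    """One explicit step of a conservative semi-Lagrangian Fokker-Planck
    scheme (d = r = 1): push half of each cell's mass to each foot
    x_i - h*Dv(x_i, t_k) +/- sqrt(h)*sigma(t_k) and redistribute it onto
    the two neighbouring grid nodes with the P1 hat weights."""
    rho = grid[1] - grid[0]
    m_next = np.zeros_like(m_k)
    for foot in (grid - h * Dv_k + np.sqrt(h) * sigma_k,
                 grid - h * Dv_k - np.sqrt(h) * sigma_k):
        j = np.clip(np.floor((foot - grid[0]) / rho).astype(int),
                    0, len(grid) - 2)
        t = np.clip((foot - grid[j]) / rho, 0.0, 1.0)
        # hat weights: (1 - t) on the left node, t on the right node
        np.add.at(m_next, j,     0.5 * (1 - t) * m_k)
        np.add.at(m_next, j + 1, 0.5 * t       * m_k)
    return m_next
```

The `np.add.at` calls perform an unbuffered scatter-add, so several cells feeding the same node accumulate correctly; nonnegativity of the weights also keeps the discrete density nonnegative.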
For all k = 0, …, N we define the measure m̂^ε_{ρ,h}[µ](·, t_k), obtained by assigning the mass m^ε_{ρ,h}[µ]_{i,k} to the grid point x_i. Clearly, {m̂^ε_{ρ,h}[µ](·, t_k) ; k = 0, …, N} ∈ P_1(R^d)^{N+1}. The following simple remark will be very useful in the sequel.

Remark 4.2 The weights {p^{(0)}_i ; i ∈ Z^d} allow us to define a probability space (Ω, F, P) and a discrete Markov chain (X_k)_{0≤k≤N}, indexed by Z^d and taking values in the grid, such that its initial distribution is given by (p^{(0)}_i)_{i∈Z^d}, the transition probabilities are given by (4.7) and the law at time t_k is given by m̂^ε_{ρ,h}[µ](·, t_k).

We have the following relation between m^ε_{ρ,h}[µ] and m̂^ε_{ρ,h}[µ]:

Lemma 4.1 There exists a constant c > 0 (independent of (ρ, h, ε, µ)) such that for all k = 0, …, N, d_1(m^ε_{ρ,h}[µ](·, t_k), m̂^ε_{ρ,h}[µ](·, t_k)) ≤ cρ.

Proof. Let φ ∈ C(R^d) be 1-Lipschitz. Then the estimate follows by definition, since for all i ∈ Z^d the mass associated with x_i is displaced by at most a quantity of order ρ.

The following result will be the key to proving a compactness property for m^ε_{ρ,h}[µ].

Proposition 4.1 Suppose that ρ = O(h). Then there exists a constant c > 0 (independent of (ρ, h, ε, µ)) such that for all 0 ≤ s ≤ t ≤ T the Hölder-in-time estimate (4.10) holds.

Proof. Let us first show that for all k, k' = 0, …, N with k' ≤ k, estimate (4.9) holds. For notational simplicity we suppose that k' = 0 and we omit the dependence on µ. Consider the Markov chain X(·) defined in Remark 4.2 and let γ ∈ P(R^d × R^d) be the joint law of X_k and X_0. By the definition of d_1 we have (4.11), where P is the probability measure introduced in Remark 4.2 and E_P(Y) = ∫_Ω Y(ω) dP(ω) for every F-measurable Y : Ω → R. We have (4.12) and, by (4.7), the corresponding bounds for the one-step increments. Using that ρ = O(h), for ℓ = 1, …, r the drift contributions are of order h, and the remaining ones are handled analogously. Therefore, by a recursive argument, we obtain a decomposition involving centered increments Z_p. Since E_P(Z_p) = 0, by independence we easily get E_P(|Σ_p Z_p|²) = h Σ_p tr(σ(t_p)σ(t_p)^⊤) and, since σ is bounded, this quantity is bounded by c(t_k − t_0) for some c > 0. Thus, combining (4.11), (4.13) and the above inequality, we obtain (4.9). Finally, by the triangle inequality, Lemma 4.1 and (4.9), using that ρ = O(h), we obtain (4.10).
Now, suppose that s ∈ (t_{k_1}, t_{k_1+1}) and t ∈ (t_{k_2}, t_{k_2+1}). Then, by the triangle inequality, we obtain (4.14), and by (4.3) and (4.10) we obtain (4.15)-(4.16). Therefore, combining (4.14) with (4.15)-(4.16), the desired estimate follows. Now, let us prove some uniform bounds for m^ε_{ρ,h}[µ](·) in P_2(R^d).
Our aim now is to obtain, when d = 1, uniform L^∞-bounds for m^ε_{ρ,h}[µ]. We remark that for d = 1 it suffices to consider r = 1 as well. In this case the notation simplifies, and the superscript ℓ will be suppressed.
Lemma 4.2 Suppose that d = 1 and consider a sequence of positive numbers (ρ_n, h_n, ε_n) converging to 0. Then, there exists a constant c > 0 (independent of (n, µ), for n large enough) such that estimate (4.19) holds for all i, j ∈ Z and k = 0, …, N − 1. As a consequence, there exists a constant c > 0 (independent of (n, µ)) such that (4.20) holds.

Proof. For the reader's convenience, we omit the µ argument. By (4.4) we have a bound which, together with Lemma 3.2(ii), yields (4.19) for Φ^{ε_n,+}_{i,k} for some c > 0. Since the same argument is valid for Φ^{ε_n,−}_{i,k}, we get (4.19). Using (4.19) and following the proof in [12, Lemma 3.8], we obtain that for all k = 0, …, N − 1 and i ∈ Z,

  Σ_{j∈Z} β_i(Φ^{ε_n,+}_{j,k}[µ]) ≤ 1 + c h_n   and   Σ_{j∈Z} β_i(Φ^{ε_n,−}_{j,k}[µ]) ≤ 1 + c h_n

for some c > 0, which implies (4.20).

Remark 4.3
Property (4.20) plays a crucial role in obtaining uniform L^∞ bounds for m^{ε_n}_{ρ_n,h_n}[µ] (see Proposition 4.3 below). Unfortunately, we have not been able to prove (4.20) in the general case. As a matter of fact, in dimension 2 it is easy to construct a counterexample showing that (4.19) is not sufficient to obtain (4.20). Therefore, we expect that finer properties of the scheme will have to be established in order to overcome this difficulty.
As a consequence we obtain the following uniform bound:

Proposition 4.3 Suppose that d = 1 and consider a sequence of positive numbers (ρ_n, h_n, ε_n) → 0. Then, there exists a constant c > 0, independent of (n, µ), such that the densities m^{ε_n}_{ρ_n,h_n}[µ] are bounded by c in the L^∞ norm.

Proof. For all k = 0, …, N − 1, the scheme and (4.20) yield a one-step bound; therefore, by recurrence, the claimed estimate follows.

We have the following existence result:

Theorem 5.1 Problem (MFG)^ε_{ρ,h} has at least one solution.
Proof. Let {µ_n}_{n∈N} ⊆ S^{N+1}_K and µ ∈ S^{N+1}_K be such that µ_n → µ. Then, as elements of C([0, T]; P_1(R^d)) (see (4.2)-(4.3)), we have sup_{t∈[0,T]} d_1(µ_n(t), µ(t)) → 0. Therefore, by assumption, the corresponding value functions converge uniformly. This implies that the map µ ∈ S^{N+1}_K ↦ m^ε_{ρ,h}[µ] is continuous, and the existence of a fixed point follows from Brouwer's fixed point theorem.

Theorem 5.2 Suppose that d = 1 and that (A1)-(A3) hold true. Consider a sequence of positive numbers (ρ_n, h_n, ε_n) satisfying ρ_n = O(h_n) and h_n = o(ε_n²). Let {m_n}_{n∈N} be a sequence of solutions of (MFG)^{ε_n}_{ρ_n,h_n} and denote by {v_n}_{n∈N} the associated sequence of discrete value functions. Then, up to a subsequence, (v_n, m_n) → (v, m), where (v, m) is a solution of (1.1); the convergence is uniform over compact sets for v_n and holds in C([0, T]; P_1) for m_n. Moreover, m_n → m in L^∞(R × [0, T])-weak-*.
In particular, if the mean field game problem (1.1) has a unique solution (v, m), then the entire sequence (v n , m n ) converges to (v, m).
Proof. By Propositions 4.1-4.2, Lemma 2.1 and the Arzelà-Ascoli theorem, there exists m ∈ C([0, T]; P_1) such that, up to a subsequence, m_n converges to m in C([0, T]; P_1). Our aim is to prove (5.2). Given t ∈ [0, T], let us set t_n := ⌊t/h_n⌋ h_n. By (4.2)-(4.5) and (2.4) we obtain (5.3). Taking |α| = 3 in the second inequality of (3.16), a Taylor expansion easily yields (5.4) for some c > 0. A further Taylor expansion then gives (5.5). Since, by the second inequality of (3.16), the term inside the integral in (5.5) is c/ε_n-Lipschitz (with c large enough) w.r.t. x, Proposition 4.1 gives that (5.6) holds for all s ∈ [t_k, t_{k+1}], with k = 0, …, N_n − 1. Therefore, combining (5.5) and (5.6), we obtain (5.7). Thus, summing from k = 0 to k = N_n − 1 and using (5.3), we obtain (5.8)-(5.9). Since m_n converges to m in L^∞(R × [0, T])-weak-*, using (5.8)-(5.9), ρ_n = O(h_n) and h_n = o(ε_n²), we can pass to the limit in (5.7) to obtain (5.2). The convergence result for v_n follows from Theorem 3.1.
Remark 5.1 (i) As the proof shows, the costly assumption h_n = o(ε_n²) comes from the a priori lack of regularity of Dv[m](x, t) w.r.t. the time variable. In fact, an argument similar to the one used for the convergence in (5.8) cannot be applied, since a priori Dv[m](x, ·) is not necessarily Riemann integrable, and hence (5.6) seems to be necessary. (ii) All the results of this paper can be extended to the more general Hamiltonians H(x, t, p) considered in [1]. In fact, one may consider the corresponding system in which the quadratic Hamiltonian is replaced by H(x, t, p).

Numerical Tests
We present some numerical simulations for the one dimensional case. In order to ease the explanation of the tests, let us recall the heuristic interpretation of the MFG system: an average player, whose dynamics are given by dX(s) = α(s)ds + σ(s)dW(s), with W(·) a standard one dimensional Brownian motion, aims to minimize, with respect to the control α(·), a functional composed of running and final costs. We will consider running costs of the form (1/2)α²(t) + f(x, t) + V_δ(x, m(t)), where f is C² and V_δ is a smoothed penalization of local concentrations of the mass, for some δ > 0 to be chosen later. We solve heuristically the fully discrete MFG system (5.1) by a fixed-point iteration method. At a generic iteration p, let us call {v^{ε,p}_{i,k}}_{i,k} and {m^{ε,p}_{i,k}}_{i,k} the sequences representing the approximated value function and mass distribution, starting from a suitable initial guess {m^{ε,0}_{i,k}}_{i,k}. Given m^{ε,p}_{i,k}, we calculate m^{ε,p+1}_{i,k} according to the following steps: m^{ε,p}_{i,k} → v^{ε,p}_{i,k} → Dv^{ε,p}(x_i, t_k) → m^{ε,p+1}_{i,k}. In the step m^{ε,p}_{i,k} → v^{ε,p}_{i,k} we compute {v^{ε,p}_{i,k}}_{i,k} by solving the scheme (3.6) with discrete mass distribution given by {m^{ε,p}_{i,k}}_{i,k}. In the step v^{ε,p}_{i,k} → Dv^{ε,p}(x_i, t_k) we compute the discrete gradient of v^{ε,p} by approximating (3.15) using a discrete convolution and then approximating the gradient by central finite differences. In the last step, Dv^{ε,p}(x_i, t_k) → m^{ε,p+1}_{i,k}, we compute m^{ε,p+1}_{i,k} by the scheme (4.5). We stop the fixed point method when the errors are below a given threshold τ or when p reaches a fixed maximum number of iterations.
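The fixed-point loop just described can be sketched compactly for d = 1. The code below is a self-contained illustrative reconstruction, not the paper's implementation: it assembles standard semi-Lagrangian forms of the HJB and FP steps (consult (3.6) and (4.5) for the exact schemes), takes the terminal cost G = 0 as in the tests, and uses hypothetical names (`running_f` for the spatial cost f + V_δ evaluated on the grid, `alphas` for a discrete control set).

```python
import numpy as np

def solve_mfg_fixed_point(grid, m0, sigma, h, N, running_f, alphas,
                          tol=1e-3, max_iter=100):
    """Fixed-point iteration for a discrete MFG system (d = 1, G = 0):
    backward HJB sweep given the mass, forward FP sweep with feedback
    control -Dv, until successive mass iterates are within `tol`."""
    rho = grid[1] - grid[0]

    def interp(vals, x):
        # P1 interpolation with constant extension outside the grid
        j = np.clip(np.floor((x - grid[0]) / rho).astype(int), 0, len(grid) - 2)
        t = np.clip((x - grid[j]) / rho, 0.0, 1.0)
        return (1 - t) * vals[j] + t * vals[j + 1]

    m = np.tile(m0, (N + 1, 1))                 # initial guess: frozen mass
    for _ in range(max_iter):
        # --- backward HJB sweep ---
        v = np.zeros((N + 1, len(grid)))        # terminal cost G = 0
        for k in range(N - 1, -1, -1):
            fk = running_f(k, m[k])             # spatial cost at t_k
            for i, xi in enumerate(grid):
                cost = 0.5 * (interp(v[k + 1], xi - h * alphas + np.sqrt(h) * sigma[k])
                              + interp(v[k + 1], xi - h * alphas - np.sqrt(h) * sigma[k])) \
                       + h * (0.5 * alphas ** 2 + fk[i])
                v[k, i] = cost.min()
        # --- forward FP sweep with feedback alpha = -Dv ---
        m_new = np.zeros_like(m)
        m_new[0] = m0
        for k in range(N):
            Dv = np.gradient(v[k], rho)
            for foot in (grid - h * Dv + np.sqrt(h) * sigma[k],
                         grid - h * Dv - np.sqrt(h) * sigma[k]):
                j = np.clip(np.floor((foot - grid[0]) / rho).astype(int),
                            0, len(grid) - 2)
                t = np.clip((foot - grid[j]) / rho, 0.0, 1.0)
                np.add.at(m_new[k + 1], j,     0.5 * (1 - t) * m_new[k])
                np.add.at(m_new[k + 1], j + 1, 0.5 * t       * m_new[k])
        err = np.abs(m_new - m).max()           # stopping criterion
        m = m_new
        if err < tol:
            break
    return v, m
```

This mirrors the three-step loop m → v → Dv → m of the text; in practice the gradient would be taken on the mollified value function v^ε rather than directly, as described above.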
So far, we have set the problem in the space domain Q = R. Clearly, to implement the numerical scheme we have to work on a bounded domain. Following [8, Section 3], we will thus formally constrain the problem to a sufficiently large bounded domain Q_b. Note that, by doing this, we are imposing a dependence on x for σ, and our results do not apply. Moreover, for the Fokker-Planck equation, in order to keep the mass m in Q_b, we will impose Neumann boundary conditions, which are not covered by our results either. Therefore, the numerical resolution of the scheme is heuristic. However, since we will consider cost functions that incite the players to remain in a bounded domain, this type of approximation is reasonable, since the influence on the cost, expressed through V_δ(x, m), of players far from Q_b is negligible.
We will show three numerical tests, comparing the behavior for different choices of the diffusion term. First we consider the case in which the diffusion term is zero (already studied in [12]), which corresponds to a deterministic MFG system; then the case of a constant, positive diffusion term, which corresponds to a non-degenerate second order MFG system (see [11]). Finally, we consider the case when the diffusion term is given by a nonnegative continuous function of time, which is positive only in a given time interval and degenerates outside it.
Test 1 (deterministic case). We consider the numerical domain Q_b × [0, T] = [0, 1] × [0, 2] and an initial mass distribution m_0 supported in Q_b. We choose as final cost G = 0, as running cost (1/2)α²(t) + f(x, t) + V_δ(x, m(t)) with δ = 0.2 and f(x, t) = 5(x − (1 − sin(2πt))/2)², and we set σ(·) ≡ 0. In the running cost, the term f(x, t) incites the agents to stay close to the point (1 − sin(2πt))/2 at each time t, while the term V_δ(x, m) penalizes high concentrations of the density distribution. The density evolution is shown in Fig. 1, computed with ρ = 3.12·10⁻³, h = ρ and ε = 0.15. The fixed point method requires 10 iterations to satisfy the stopping criterion with τ = 10⁻³. We observe that, during the whole time interval, the mass density tends to concentrate around the curve (1 − sin(2πt))/2 and no diffusive effect appears. It is important to remark that the term V_δ(x, m) has a non-negligible effect on the distribution. As a matter of fact, if this term is not present, much higher concentrations are observed (see e.g. [12, Fig. 4.8]).
Test 2 (non-degenerate diffusion). We consider the same problem as in Test 1, but now we change the diffusion term, choosing σ = 0.2. Let us note that, in this case, the scheme reduces to the one proposed in [11]. The running cost and the initial distribution are chosen as in the previous test. The density evolution is shown in Fig. 2, computed with ρ = 6.35·10⁻³, h = ρ and ε = 2√h. The fixed point method requires 6 iterations to satisfy the stopping criterion with τ = 10⁻³; note that in this case the convergence is faster than in the deterministic case of Test 1. A diffusive effect is observed during the whole time interval. It is not very strong, since it competes with the effect of the running cost, which tends to concentrate the mass density around the sinusoidal curve.

Test 3 (degenerate diffusion). We consider the same problem as in Test 1, but now we change the diffusion term, choosing the scalar function σ(t) = max(0, 0.2 − |t − 1|).
Note that σ(t) = 0 for all t ∈ [0, 0.8] ∪ [1.2, 2]. The running cost and the initial distribution are chosen as in the previous tests. The density evolution is shown in Fig. 3, computed with ρ = 6.35·10⁻³, h = ρ and ε = 2√h. The fixed point method requires 9 iterations to satisfy the stopping criterion with τ = 10⁻³; note that in this case the rate of convergence of the fixed point method lies between the rates of the two previous cases. We observe a diffusive effect during the time interval [0.8, 1.2], due to the non-zero term σ(t). When the diffusion stops acting, at time t = 1.2, the density again starts to concentrate around the curve where f is lower.

Table 1 shows the errors (6.2) computed by varying all the parameters (ρ, h, ε), with h = ρ as in the tests above: the first two columns report the space and regularizing parameters, the last two the errors for the value function and the density computed after 10 iterations of the fixed point algorithm.

Table 1: Parameters and errors between successive iterations

  ρ           ε      E(v^{ε,10})   E(m^{ε,10})
  1.25·10⁻²   0.2    1.72·10⁻⁶     9.52·10⁻⁵
  6.25·10⁻³   0.15   1.08·10⁻⁶     1.17·10⁻⁴
  3.12·10⁻³   0.1    1.82·10⁻⁶     3.26·10⁻⁴

In Fig. 4 we display the errors (6.2), in logarithmic scale on the y-axis, versus the number of fixed-point iterations on the x-axis, varying all the parameters according to Table 1. The behavior of the errors indicates linear convergence of the iterative method.