CG VERSUS MINRES: AN EMPIRICAL COMPARISON ∗

For iterative solution of symmetric systems Ax = b, the conjugate gradient method (CG) is commonly used when A is positive definite, while the minimum residual method (MINRES) is typically reserved for indefinite systems. We investigate the sequence of approximate solutions generated by each method and suggest that even if A is positive definite, MINRES may be preferable to CG if iterations are to be terminated early. In particular, we show for MINRES that the solution norms ‖x_k‖ are monotonically increasing when A is positive definite (as was already known for CG), and the solution errors ‖x* − x_k‖ are monotonically decreasing. We also show that the backward errors for the MINRES iterates are monotonically decreasing.

1. Introduction. The conjugate gradient method (CG) [11] and the minimum residual method (MINRES) [18] are both Krylov subspace methods for the iterative solution of symmetric linear equations Ax = b. CG is commonly used when the matrix A is positive definite, while MINRES is generally reserved for indefinite systems [27, p. 85]. We reexamine this wisdom from the point of view of early termination on positive-definite systems.
We assume that the system Ax = b is real with A symmetric positive definite (spd) and of dimension n × n. The Lanczos process [13] with starting vector b may be used to generate the n × k matrix V_k ≡ (v_1 v_2 ... v_k) and the (k + 1) × k Hessenberg tridiagonal matrix T_k such that AV_k = V_{k+1} T_k for k = 1, 2, ..., ℓ, and AV_ℓ = V_ℓ T_ℓ for some ℓ ≤ n, where the columns of V_k form a theoretically orthonormal basis for the kth Krylov subspace K_k(A, b) ≡ span{b, Ab, A²b, ..., A^{k−1}b}, and T_ℓ is ℓ × ℓ and tridiagonal. Approximate solutions within the kth Krylov subspace may be formed as x_k = V_k y_k for some k-vector y_k. As shown in [18], the three iterative methods CG, MINRES, and SYMMLQ may be derived by choosing y_k appropriately at each iteration. CG is well defined if A is spd, while MINRES and SYMMLQ are stable for any symmetric nonsingular A.
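The three-term recurrence behind the relation AV_k = V_{k+1} T_k can be sketched in a few lines of plain Python (a minimal illustration with hypothetical helper names, not the implementation used in the experiments reported later):

```python
import math

def matvec(A, v):
    """Dense matrix-vector product using plain lists."""
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def lanczos(A, b, k_max):
    """Run k_max Lanczos steps from starting vector b.

    Returns alphas (diagonal of T), betas (betas[j] is the subdiagonal
    entry beta_{j+2}), and the Lanczos vectors v_1, ..., v_{k_max+1}.
    """
    n = len(b)
    beta1 = norm(b)
    V = [[bi / beta1 for bi in b]]          # v_1 = b / ||b||
    alphas, betas = [], []
    v_prev = [0.0] * n
    beta = 0.0
    for k in range(k_max):
        w = matvec(A, V[k])
        w = [wi - beta * vp for wi, vp in zip(w, v_prev)]
        alpha = dot(V[k], w)
        w = [wi - alpha * vi for wi, vi in zip(w, V[k])]
        beta = norm(w)                      # assumes no breakdown (beta > 0)
        alphas.append(alpha)
        betas.append(beta)
        v_prev = V[k]
        V.append([wi / beta for wi in w])
    return alphas, betas, V

# Small spd test matrix (symmetric, strictly diagonally dominant).
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 3.0]]
b = [1.0, 2.0, 3.0]
alphas, betas, V = lanczos(A, b, 2)

# Check one column of A V_k = V_{k+1} T_k, i.e. the recurrence
# A v_2 = beta_2 v_1 + alpha_2 v_2 + beta_3 v_3.
lhs = matvec(A, V[1])
rhs = [betas[0] * V[0][i] + alphas[1] * V[1][i] + betas[1] * V[2][i]
       for i in range(3)]
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```

The columns of V stay orthonormal here only because the example is tiny; in floating point, orthogonality is gradually lost, a point the experiments below return to.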
As noted by Choi [2], SYMMLQ can form an approximation x_{k+1} = V_{k+1} y_{k+1} in the (k+1)th Krylov subspace when CG and MINRES are forming their approximations x_k = V_k y_k in the kth subspace. It would be of future interest to compare all three methods on spd systems, but for the remainder of this paper we focus on CG and MINRES.
With different methods using the same information V_{k+1} and T_k to compute solution estimates x_k = V_k y_k within the same Krylov subspace (for each k), it is commonly thought that the number of iterations required will be similar for each method, and hence that CG should be preferable on spd systems because it requires somewhat fewer floating-point operations per iteration. This view is justified if an accurate solution is required (stopping tolerance τ close to the machine precision ε). We show that with looser stopping tolerances, MINRES is sure to terminate sooner than CG when the stopping rule is based on the backward error for x_k, and by numerical examples we illustrate that the difference in iteration numbers can be substantial.
1.1. Notation. We study the application of CG and MINRES to real symmetric positive-definite (spd) systems Ax = b. The unique solution is denoted by x*. The initial approximate solution is x_0 ≡ 0, and r_k ≡ b − Ax_k is the residual vector for an approximation x_k within the kth Krylov subspace. For a vector v and matrix A, ‖v‖ and ‖A‖ denote the 2-norm and the Frobenius norm respectively, and A ≻ 0 indicates that A is spd.
2. Minimization properties of Krylov subspace methods. With exact arithmetic, the Lanczos process terminates with k = ℓ for some ℓ ≤ n. To ensure that the approximations x_k = V_k y_k improve by some measure as k increases toward ℓ, the Krylov solvers minimize some convex function within the expanding Krylov subspaces [9].

2.1. CG. When A is spd, the quadratic form φ(x) ≡ ½ xᵀAx − bᵀx is bounded below, and its unique minimizer solves Ax = b. A characterization of the CG iterates is that they minimize the quadratic form within each Krylov subspace [9], [17, §2.4], [28, §§8.8-8.9]: x_k^C = arg min over x ∈ K_k(A, b) of φ(x), which is equivalent to minimizing ‖x* − x‖_A, known as the energy norm of the error, within each Krylov subspace. For some applications, this is a desirable property [21, 24, 1, 17, 28].
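The energy-norm property is easy to observe numerically. The following pure-Python textbook CG iteration (a sketch; not the Matlab implementation the paper's experiments use) checks that ‖x* − x_k‖_A decreases strictly on a small spd system:

```python
import math

def matvec(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cg(A, b, iters):
    """Textbook CG from x0 = 0; returns the iterates x_1, ..., x_iters."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                     # r_0 = b - A x_0 = b
    p = r[:]
    rho = dot(r, r)
    xs = []
    for _ in range(iters):
        q = matvec(A, p)
        alpha = rho / dot(p, q)  # p'Ap > 0 since A is spd
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        rho_new = dot(r, r)
        beta = rho_new / rho
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rho = rho_new
        xs.append(x[:])
    return xs

def energy_error(A, x_true, x):
    e = [t - xi for t, xi in zip(x_true, x)]
    return math.sqrt(dot(e, matvec(A, e)))   # ||x* - x||_A

# Small spd system with known solution, so b = A x_true.
A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 3.0, 1.0],
     [0.0, 0.0, 1.0, 3.0]]
x_true = [1.0, -2.0, 3.0, -4.0]
b = matvec(A, x_true)

errs = [energy_error(A, x_true, x) for x in cg(A, b, 3)]
# CG's energy-norm error decreases strictly at every iteration.
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))
```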

2.2. MINRES. For nonsingular (and possibly indefinite) systems, the residual norm was used in [18] to characterize the MINRES iterates: x_k^M = arg min over x ∈ K_k(A, b) of ‖b − Ax‖. Thus, MINRES minimizes ‖r_k‖ within the kth Krylov subspace. This was also an aim of Stiefel's Conjugate Residual method (CR) [23] for spd systems (and of Luenberger's extensions of CR to indefinite systems [15, 16]). Thus, CR and MINRES must generate the same iterates on spd systems. We use this connection to prove that ‖x_k‖ increases monotonically when MINRES is applied to an spd system.

2.3. CG and CR. The two methods for solving spd systems Ax = b are summarized in Table 2.1. The first two columns give pseudocode for CG and CR with the iteration number k omitted for clarity; they match our Matlab implementations. Note that q ≡ Ap in both methods, but it is not computed as such in CR. Termination occurs when r = 0 (⇒ ρ = β = 0).
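Since Table 2.1 is not reproduced here, the following pure-Python transcription of the textbook CR recurrences (a sketch under that assumption, not a copy of the authors' Matlab code) illustrates the method, including the fact that q = Ap is maintained by recurrence rather than by an extra product with A. It also checks two properties established below: ‖r_k‖ decreases and ‖x_k‖ increases on an spd system.

```python
import math

def matvec(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def cr(A, b, iters):
    """Textbook Conjugate Residual iteration from x0 = 0 for spd A.

    Returns the residual norms ||r_k|| and solution norms ||x_k||.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]
    p = r[:]
    s = matvec(A, r)             # s = A r
    q = s[:]                     # q = A p
    rho = dot(r, s)              # rho = r'Ar > 0 while r != 0 (A spd)
    rnorms, xnorms = [], []
    for _ in range(iters):
        alpha = rho / dot(q, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = matvec(A, r)         # the only product with A per iteration
        rho_new = dot(r, s)
        beta = rho_new / rho
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        q = [si + beta * qi for si, qi in zip(s, q)]   # q = A p by recurrence
        rho = rho_new
        rnorms.append(norm(r))
        xnorms.append(norm(x))
    return rnorms, xnorms

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 3.0, 1.0],
     [0.0, 0.0, 1.0, 3.0]]
b = [1.0, 2.0, 3.0, 4.0]
rnorms, xnorms = cr(A, b, 3)

assert all(r2 < r1 for r1, r2 in zip(rnorms, rnorms[1:]))   # ||r_k|| decreasing
assert all(x2 > x1 for x1, x2 in zip(xnorms, xnorms[1:]))   # ||x_k|| increasing
```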
To prove our main result we need to introduce iteration indices; see column 3 of Table 2.1. Termination occurs when r_k = 0 for some index k = ℓ ≤ n (⇒ ρ_ℓ = β_ℓ = 0 and r_ℓ = s_ℓ = p_ℓ = q_ℓ = 0). Note: this ℓ is the same as the ℓ at which the Lanczos process theoretically terminates for the given A and b.
Theorem 2.1. The following properties hold for Algorithm CR: (a) q_iᵀ q_j = 0 (i ≠ j); (b) r_iᵀ q_j = 0 (i ≥ j + 1).

Theorem 2.2. The following properties hold for Algorithm CR: (a)
Proof. (a) Here we use the fact that A is spd. The inequalities are strict until i = ℓ (at which point r_ℓ = 0).
By construction, P ≡ span{b, Ab, ..., A^{ℓ−1} b} and Q ≡ span{Ab, ..., A^ℓ b} (since q_i = Ap_i). Again by construction, x_ℓ ∈ P, and since r_ℓ = 0 we have b = Ax_ℓ ⇒ b ∈ Q. We see that P ⊆ Q. By Theorem 2.1(a), {q_i/‖q_i‖} for i = 0, ..., ℓ−1 forms an orthonormal basis for Q. If we project p_i ∈ P ⊆ Q onto this basis, we obtain an expansion in which all coordinates are non-negative by (c). Similarly for any other p_j, j < ℓ. Therefore p_iᵀ p_j ≥ 0 for any i, j < ℓ. (e) By construction, x_i is a sum of the terms α_j p_j with j < i, and therefore x_iᵀ p_i ≥ 0 by (d) and (a). (f) Note that any r_i can be expressed as a sum of the q_j:

Thus we obtain the desired inequality, which follows from (a) and (c).
We are now able to prove our main theorem about the monotonic increase of ‖x_k‖ for CR and MINRES. A similar result was proved for CG by Steihaug [21].
Theorem 2.3. For CR (and hence MINRES) on an spd system Ax = b, ‖x_k‖ increases monotonically.
Proof. Since x_i = x_{i−1} + α_{i−1} p_{i−1}, we have ‖x_i‖² = ‖x_{i−1}‖² + 2 α_{i−1} x_{i−1}ᵀ p_{i−1} + α_{i−1}² ‖p_{i−1}‖² ≥ ‖x_{i−1}‖², where the last inequality follows from Theorem 2.2 (a), (d), and (e). Therefore ‖x_i‖ ≥ ‖x_{i−1}‖.
The error norm ‖x* − x_k‖ is known to be monotonic for CG [11]. The corresponding result for CR is a direct consequence of [11, Thm 7:5]. However, the second half of that theorem rarely holds in machine arithmetic. We give here an alternative proof that does not depend on CG.
Theorem 2.4. For CR (and hence MINRES) on an spd system Ax = b, the error ‖x* − x_k‖ decreases monotonically.
Proof. From the update rule for x_k, we can express the final solution x_ℓ = x* as x* = x_k + Σ_{j=k}^{ℓ−1} α_j p_j. Using the last two equalities above, we can write the difference ‖x* − x_{k−1}‖² − ‖x* − x_k‖² as a sum of non-negative terms, where the non-negativity follows from Theorem 2.2 (a), (d).
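The displays omitted above can be reconstructed from the update rule x_k = x_{k−1} + α_{k−1} p_{k−1}. The following is a sketch, under the assumption that Theorem 2.2(a) gives α_j > 0 and Theorem 2.2(d) gives p_iᵀ p_j ≥ 0:

```latex
x^* - x_k = \sum_{j=k}^{\ell-1} \alpha_j p_j,
\qquad
x^* - x_{k-1} = (x^* - x_k) + \alpha_{k-1} p_{k-1},
\]
so that
\[
\|x^* - x_{k-1}\|^2 - \|x^* - x_k\|^2
  = 2\alpha_{k-1}\,(x^* - x_k)^T p_{k-1} + \alpha_{k-1}^2 \|p_{k-1}\|^2
  = 2\alpha_{k-1} \sum_{j=k}^{\ell-1} \alpha_j\, p_j^T p_{k-1}
    + \alpha_{k-1}^2 \|p_{k-1}\|^2 \;\ge\; 0.
```

Every term in the final sum is non-negative, giving ‖x* − x_{k−1}‖ ≥ ‖x* − x_k‖ as claimed.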
The energy norm error ‖x* − x_k‖_A is monotonic for CG by design. The corresponding result for CR is given in [11, Thm 7:4]. We give an alternative proof here.
Theorem 2.5. For CR (and hence MINRES) on an spd system Ax = b, the error in energy norm ‖x* − x_k‖_A is strictly decreasing.
3. Backward error analysis. For many physical problems requiring numerical solution, we are given inexact or uncertain input data (in this case A and/or b). It is not justifiable to seek a solution beyond the accuracy of the data [6]. Instead, it is more reasonable to stop an iterative solver once we know that the current approximate solution solves a nearby problem. The measure of "nearby" should match the error in the input data. The design of such stopping rules is an important application of backward error analysis.
For a consistent linear system Ax = b, we think of x_k as coming from the kth iteration of one of the iterative solvers. Following Titley-Péloquin [25], we say that x_k is an acceptable solution if and only if there exist perturbations E and f satisfying (A + E) x_k = b + f with ‖E‖ ≤ α‖A‖ and ‖f‖ ≤ β‖b‖, for some tolerances α ≥ 0, β ≥ 0 that reflect the (preferably known) accuracy of the data. We are naturally interested in minimizing the size of E and f. If we define the optimization problem to have optimal solution ξ_k, E_k, f_k (all functions of x_k, α, and β), we see that x_k is an acceptable solution if and only if ξ_k ≤ 1. We call ξ_k the normwise relative backward error (NRBE) for x_k.
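The NRBE has a well-known closed form, ξ_k = ‖r_k‖ / (α‖A‖‖x_k‖ + β‖b‖). The sketch below computes it in pure Python, using the paper's norm conventions (2-norm for vectors, Frobenius norm for matrices); the helper names are illustrative only:

```python
import math

def matvec(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def fro_norm(A):
    return math.sqrt(sum(a * a for row in A for a in row))

def nrbe(A, b, x, alpha, beta):
    """Normwise relative backward error xi = ||r|| / (alpha ||A|| ||x|| + beta ||b||)."""
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    return norm(r) / (alpha * fro_norm(A) * norm(x) + beta * norm(b))

A = [[4.0, 1.0], [1.0, 3.0]]
x_true = [1.0, 1.0]
b = matvec(A, x_true)            # so x_true solves Ax = b exactly

# The exact solution has zero backward error; a crude approximation
# is unacceptable at tight tolerances (xi > 1).
assert nrbe(A, b, x_true, 1e-8, 1e-8) == 0.0
xi = nrbe(A, b, [1.1, 0.9], 1e-8, 1e-8)
assert xi > 1.0
```

An iterative solver using this rule would simply stop at the first k with nrbe(A, b, x_k, alpha, beta) <= 1.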

3.1. Monotonic backward errors.
Of interest is the size of the perturbations to A and b for which x_k is an exact solution of Ax = b. From (3.2)-(3.3), the perturbations have the norms given in (3.5)-(3.6). Since ‖x_k‖ is monotonically increasing for CG and MINRES, we see from (3.2) that φ_k is monotonically decreasing for both solvers. Since ‖r_k‖ is monotonically decreasing for MINRES (but not for CG), we have the following result.
Theorem 3.1. For CR (and hence MINRES) on an spd system Ax = b, the backward errors ‖E_k‖ and ‖f_k‖ decrease monotonically.
Proof. This follows from (3.5)-(3.6), with ‖x_k‖ increasing for both solvers and ‖r_k‖ decreasing for CR and MINRES but not for CG.
The test examples are drawn from the University of Florida Sparse Matrix Collection (Davis [5]). We experimented with all 26 cases for which A is real spd and b is supplied. In Matlab we computed the condition number of each test matrix by finding its largest and smallest eigenvalues using eigs(A,1,'LM') and eigs(A,1,'SM') respectively. For this test set, the condition numbers range from 1.7E+03 to 3.1E+13.
Since A is spd, we apply diagonal preconditioning by redefining A and b as A ← DAD, b ← Db, b ← b/‖b‖, where D = diag(1/√a_ii). The stopping rule used for CG and MINRES was (3.4) with α = 0 and β = 10⁻⁸; that is, ‖r_k‖ ≤ 10⁻⁸‖b‖ = 10⁻⁸ (but with a maximum of 5n iterations for spd systems and n iterations for indefinite systems). In the figures we plot the backward error with α > 0 and β = 0 in (3.3), even though this does not match the choice of α and β in the stopping rule (3.4). This gives φ_k = 0 and ‖E_k‖ = ‖r_k‖/‖x_k‖ in (3.5). Thus, as in Theorem 3.1, we expect ‖E_k‖ to decrease monotonically for CR and MINRES but not for CG.
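The diagonal scaling described above can be sketched as follows (a minimal pure-Python illustration; the function name is hypothetical). After scaling, diag(A) = I and ‖b‖ = 1, as used in the figures:

```python
import math

def diag_precondition(A, b):
    """Symmetric diagonal scaling: A <- DAD, b <- Db / ||Db||, D = diag(1/sqrt(a_ii))."""
    n = len(A)
    d = [1.0 / math.sqrt(A[i][i]) for i in range(n)]
    A_hat = [[d[i] * A[i][j] * d[j] for j in range(n)] for i in range(n)]
    b_hat = [d[i] * b[i] for i in range(n)]
    nb = math.sqrt(sum(x * x for x in b_hat))
    b_hat = [x / nb for x in b_hat]
    return A_hat, b_hat

A = [[4.0, 1.0, 0.0], [1.0, 9.0, 1.0], [0.0, 1.0, 16.0]]
b = [1.0, 2.0, 3.0]
A_hat, b_hat = diag_precondition(A, b)

# diag(A) = I and ||b|| = 1 after scaling, and symmetry is preserved.
assert all(abs(A_hat[i][i] - 1.0) < 1e-14 for i in range(3))
assert abs(math.sqrt(sum(x * x for x in b_hat)) - 1.0) < 1e-12
```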
In Figures 4.2 and 4.3, we plot ‖r_k‖/‖x_k‖, ‖x* − x_k‖_A, and ‖x* − x_k‖ for CG and MINRES on four different problems. For CG, the plots confirm that ‖x* − x_k‖_A and ‖x* − x_k‖ are monotonic. For MINRES, the plots confirm the prediction of Theorems 3.1, 2.5, and 2.4 that ‖r_k‖/‖x_k‖, ‖x* − x_k‖_A, and ‖x* − x_k‖ are monotonic. Figure 4.4 shows ‖r_k‖ and ‖x_k‖ for CG and MINRES on two typical spd examples. We see that ‖x_k‖ is monotonically increasing for both solvers, and the ‖x_k‖ values rise fairly rapidly to their limiting value ‖x*‖, with a moderate delay for MINRES.
Figure 4.5 shows ‖r_k‖ and ‖x_k‖ for CG and MINRES on two spd examples in which the residual decrease and the solution norm increase are somewhat slower than typical. The rise of ‖x_k‖ for MINRES is rather more delayed. In the second case, if the stopping tolerance were β = 10⁻⁶ rather than β = 10⁻⁸, the final MINRES ‖x_k‖ (k ≈ 10000) would be less than half the exact value ‖x*‖. It will be of future interest to evaluate this effect within the context of trust-region methods for optimization.
From (4.1), one can infer that if ‖r_k^M‖ decreases substantially between iterations k − 1 and k, then ‖r_k^C‖ will be roughly the same as ‖r_k^M‖. The converse also holds, in that ‖r_k^C‖ will be much larger than ‖r_k^M‖ if MINRES is almost stalling at iteration k (i.e., ‖r_k^M‖ did not decrease much relative to the previous iteration). This analysis was pointed out by Titley-Péloquin [26] in comparing LSQR and LSMR [8]. We repeat it here for CG vs MINRES and extend it to demonstrate why there is a lag in general for large problems.
With α = 0 in stopping rule (3.4), CG and MINRES stop when ‖r_k‖ ≤ β‖b‖. If this occurs at iteration ℓ, we have Π_{k=1}^{ℓ} (‖r_k^M‖/‖r_{k−1}^M‖) = ‖r_ℓ^M‖/‖b‖ ≤ β. Thus on average, ‖r_k^M‖/‖r_{k−1}^M‖ will be closer to 1 if ℓ is large. This means that the larger ℓ is (in absolute terms), the more CG will lag behind MINRES (a bigger gap between ‖r_k^C‖ and ‖r_k^M‖).
4.2. Indefinite systems. A key part of Steihaug's trust-region method for large-scale unconstrained optimization [21] (see also [4]) is his proof that when CG is applied to a symmetric (possibly indefinite) system Ax = b, the solution norms ‖x_1‖, ..., ‖x_k‖ are strictly increasing as long as p_jᵀ A p_j > 0 for all iterations 1 ≤ j ≤ k. (We are using the notation of Table 2.1.) From our proof of Theorem 2.2, we see that the same property holds for CR and MINRES as long as both p_jᵀ A p_j > 0 and r_jᵀ A r_j > 0 for all iterations 1 ≤ j ≤ k. In case future research finds that MINRES is a useful solver in the trust-region context, it is of interest now to offer some empirical results about the behavior of ‖x_k‖ when MINRES is applied to indefinite systems. First, on the nonsingular indefinite system (4.2), MINRES gives non-monotonic solution norms, as shown in the left plot of Figure 4.6. The decrease in ‖x_k‖ implies that the backward errors ‖r_k‖/‖x_k‖ may not be monotonic, as illustrated in the right plot. More generally, we can gain an impression of the behavior of ‖x_k‖ by recalling from Choi et al.
[3] the relevant properties. When A is nonsingular or Ax = b is consistent (which we now assume), y_k^M is uniquely defined for each k ≤ ℓ and the two methods compute the same iterates x_k^M (but by different numerical methods). In fact they both compute the same expanding QR factorization of T_k (with R_k upper tridiagonal), and MINRES-QLP also computes the orthogonal factorizations R_k P_k = L_k (with L_k lower tridiagonal), from which the kth solution estimate is defined. In the resulting expression for ‖x_k^M‖², it is clear that the term χ² increases monotonically. Although the last two terms are of unpredictable size, ‖x_k^M‖² tends to be dominated by the monotonic term χ², and we can expect ‖x_k^M‖ to be approximately monotonic as k increases from 1 to ℓ. Experimentally we find that for most MINRES iterations on an indefinite problem, ‖x_k‖ does increase. To obtain indefinite examples that were sensibly scaled, we used the four spd (A, b) cases in Figures 4.4-4.5.
5. Conclusions. For full-rank least-squares problems min ‖Ax − b‖, the solvers LSQR [19, 20] and LSMR [8, 14] are equivalent to CG and MINRES on the (spd) normal equation AᵀAx = Aᵀb. Comparisons in [8] indicated that LSMR can often stop much sooner than LSQR when the stopping rule is based on Stewart's backward error norm ‖Aᵀr_k‖/‖r_k‖ for least-squares problems [22].
Our theoretical and experimental results here provide analogous evidence that MINRES can often stop much sooner than CG on spd systems when the stopping rule is based on the backward error ‖r_k‖/‖x_k‖ for Ax = b (or the more general backward errors in Theorem 3.1). In some cases, MINRES can converge faster than CG by as much as 2 orders of magnitude (Figure 4.3). On the other hand, CG converges somewhat faster than MINRES in terms of both ‖x* − x_k‖_A and ‖x* − x_k‖ (same figure). For spd systems, Table 5.1 summarizes properties that were already known from Hestenes and Stiefel [11] and Steihaug [21], along with the two additional properties of MINRES that we proved here (Theorems 2.3 and 3.1).
These theorems and experiments on CG and MINRES are part of the first author's PhD thesis [7], which also discusses LSQR and LSMR and derives some new results for both solvers. Table 5.2 summarizes the known results for LSQR and LSMR (in [19] and [8] respectively) and the newly derived properties of both solvers (in [7]).

4. Numerical results. Here we compare the convergence of CG and MINRES on various spd systems Ax = b and some associated indefinite systems (A − δI)x = b.
We apply diagonal preconditioning as A ← DAD, b ← Db, b ← b/‖b‖, with D = diag(1/√a_ii). Thus in the figures below we have diag(A) = I and ‖b‖ = 1. With this preconditioning, the condition numbers range from 1.2E+01 to 2.2E+11. The distribution of condition numbers of the test-set matrices before and after preconditioning is shown in Figure 4.1.

Fig. 4.1. Distribution of condition numbers for the matrices used in the CG vs MINRES comparison, before and after diagonal preconditioning.

Figure 4.2 (left) shows problem Schenk AFE af shell8 with A of size 504855 × 504855 and cond(A) = 2.7E+05. From the plot of backward errors ‖r_k‖/‖x_k‖, we see that both CG and MINRES converge quickly in the early iterations. Then the backward error of MINRES plateaus at about iteration 80, and the backward error of CG stays about 1 order of magnitude behind MINRES. A similar pattern of fast convergence in the early iterations followed by slow convergence is observed in the energy norm error and 2-norm error plots.

Figure 4.2 (right) shows problem Cannizzo sts4098 with A of size 4098 × 4098 and cond(A) = 6.7E+03. MINRES converges slightly faster in terms of backward error, while CG converges slightly faster in terms of both error norms.

Fig. 4.2. Comparison of backward and forward errors for CG and MINRES solving two spd systems Ax = b. Left: Problem Schenk AFE af shell8 with n = 504855 and cond(A) = 2.7E+05. Note that MINRES stops significantly sooner than CG with α = 0 and β = 10⁻⁸ in (3.4). Right: Problem Cannizzo sts4098 with n = 4098 and cond(A) = 6.7E+03. MINRES stops slightly sooner than CG. Top: the values of log₁₀(‖r_k‖/‖x_k‖) plotted against iteration number k; these values define log₁₀ ‖E_k‖ when the stopping tolerances in (3.4) are α > 0 and β = 0. Middle: the values of log₁₀ ‖x* − x_k‖_A plotted against k; this is the quantity that CG minimizes at each iteration. Bottom: the values of log₁₀ ‖x* − x_k‖.

Figure 4.3 (left) shows problem Simon raefsky4 with A of size 19779 × 19779 and cond(A) = 2.2E+11. Because of the high condition number, both algorithms hit the 5n iteration limit. We see that the backward error for MINRES converges faster than for CG, as expected. For the energy norm error, CG is able to decrease over 5 orders of magnitude, while MINRES plateaus after a decrease of 2 orders of magnitude. For both the energy norm error and the 2-norm error, MINRES reaches a lower point than CG for some iterations. This must be due to numerical error in CG and MINRES (a result of loss of orthogonality in V_k). Figure 4.3 (right) shows problem BenElechi BenElechi1 with A of size 245874 × 245874 and cond(A) = 1.8E+09. The backward error of MINRES stays ahead of CG by 2 orders of magnitude for most iterations. Around iteration 32000, the backward error of both algorithms decreases rapidly and CG catches up with MINRES. Both algorithms exhibit a plateau in the energy norm error for the first 20000 iterations. The error norms for CG start decreasing around iteration 20000 and decrease even faster after iteration 30000.


4.1. Why does ‖r_k‖ for CG lag behind MINRES?. It is commonly thought that even though MINRES is known to minimize ‖r_k‖ at each iteration, the cumulative minimum of ‖r_k‖ for CG should approximately match that of MINRES; that is, min_{i≤k} ‖r_i^C‖ ≈ ‖r_k^M‖. Yet in Figures 4.2 and 4.3 we see that ‖r_k‖ for MINRES is often smaller than for CG by 1 or 2 orders of magnitude. This phenomenon can be explained by the following relation between ‖r_k^C‖ and ‖r_k^M‖ from [10, Lemma 5.4.1] and [26]:

(4.1)    ‖r_k^C‖ = ‖r_k^M‖ / sqrt(1 − (‖r_k^M‖ / ‖r_{k−1}^M‖)²).
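The relation ‖r_k^C‖ = ‖r_k^M‖ / sqrt(1 − (‖r_k^M‖/‖r_{k−1}^M‖)²) can be verified numerically. The sketch below uses textbook CG and CR recurrences (not the authors' code) and relies on the fact, stated in section 2.2, that CR produces the MINRES residual norms on an spd system:

```python
import math

def matvec(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def cg_resnorms(A, b, iters):
    """Residual norms ||r_1||, ..., ||r_iters|| for textbook CG from x0 = 0."""
    x = [0.0] * len(b); r = b[:]; p = r[:]; rho = dot(r, r); out = []
    for _ in range(iters):
        q = matvec(A, p)
        alpha = rho / dot(p, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        rho, rho_old = dot(r, r), rho
        p = [ri + (rho / rho_old) * pi for ri, pi in zip(r, p)]
        out.append(norm(r))
    return out

def cr_resnorms(A, b, iters):
    """Residual norms for textbook CR (= MINRES residual norms, A spd)."""
    x = [0.0] * len(b); r = b[:]; p = r[:]
    s = matvec(A, r); q = s[:]; rho = dot(r, s); out = []
    for _ in range(iters):
        alpha = rho / dot(q, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = matvec(A, r)
        rho, rho_old = dot(r, s), rho
        beta = rho / rho_old
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        q = [si + beta * qi for si, qi in zip(s, q)]
        out.append(norm(r))
    return out

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 3.0, 1.0],
     [0.0, 0.0, 1.0, 3.0]]
b = [1.0, 2.0, 3.0, 4.0]
rC = cg_resnorms(A, b, 3)
rM_all = [norm(b)] + cr_resnorms(A, b, 3)   # prepend ||r_0|| = ||b||

# Check (4.1) at iterations k = 1, 2, 3.
for k in range(3):
    ratio = rM_all[k + 1] / rM_all[k]
    predicted = rM_all[k + 1] / math.sqrt(1.0 - ratio * ratio)
    assert abs(rC[k] - predicted) <= 1e-6 * rC[k]
```

As the relation suggests, the CG residual is close to the MINRES residual when MINRES just made good progress (small ratio), and blows up relative to it when MINRES stalls (ratio near 1).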

Fig. 4.4. Comparison of residual and solution norms for CG and MINRES solving two spd systems Ax = b. These are typical examples. Left: Problem Simon olafu with n = 16146 and cond(A) = 4.3E+08. Right: Problem Cannizzo sts4098 with n = 4098 and cond(A) = 6.7E+03. Top: the values of log₁₀ ‖r_k‖ plotted against iteration number k. Bottom: the values of ‖x_k‖ plotted against k. The solution norms grow somewhat faster for CG than for MINRES. Both reach the limiting value ‖x*‖ significantly before x_k is close to x*.

Fig. 4.5. Comparison of residual and solution norms for CG and MINRES solving two more spd systems Ax = b. Sometimes the solution norms take longer to reach the limiting value ‖x*‖. Left: Problem Schmid thermal1 with n = 82654 and cond(A) = 3.0E+05. Right: Problem BenElechi BenElechi1 with n = 245874 and cond(A) = 1.8E+09. Top: the values of log₁₀ ‖r_k‖ plotted against iteration number k. Bottom: the values of ‖x_k‖ plotted against k. Again the solution norms grow faster for CG.

Fig. 4.6. For MINRES on the indefinite problem (4.2), ‖x_k‖ and the backward error ‖r_k‖/‖x_k‖ are both slightly non-monotonic.
the connection between MINRES and MINRES-QLP. Both methods compute the iterates x_k^M = V_k y_k^M in (2.1) from the subproblems y_k^M = arg min over y ∈ R^k of ‖T_k y − β₁e₁‖, and possibly T_ℓ y_ℓ^M = β₁e₁.
and x_k^M = W_k u_k. As shown in [3, §5.3], the construction of these quantities is such that the first k − 3 columns of W_k are the same as in W_{k−1}, and the first k − 3 elements of u_k are the same as in u_{k−1}. Since W_k has orthonormal columns, ‖x_k^M‖ = ‖u_k‖, where the first k − 2 elements of u_k are unaltered by later iterations. As shown in [3, §6.5], this means that certain quantities can be cheaply updated to give norm estimates of the form

Fig. 4.7. Residual and solution norms when MINRES is applied to two indefinite systems (A − δI)x = b, where A is one of the spd matrices used in Figure 4.4 and δ = 0.5 is large enough to make the systems indefinite. Left: Problem Simon olafu with n = 16146. Right: Problem Cannizzo sts4098 with n = 4098. Top: the values of log₁₀ ‖r_k‖ plotted against iteration number k for the first n iterations. Bottom left: the values of ‖x_k‖ plotted against k; during the n = 16146 iterations, ‖x_k‖ increased 83% of the time and the backward errors ‖r_k‖/‖x_k‖ (not shown) decreased 96% of the time. Bottom right: during the n = 4098 iterations, ‖x_k‖ increased 90% of the time and the backward errors ‖r_k‖/‖x_k‖ (not shown) decreased 98% of the time.
We applied diagonal scaling as before and solved (A − δI)x = b with δ = 0.5, where A and b are the scaled data (so that diag(A) = I). The number of iterations increased significantly but was limited to n.

Figure 4.7 shows log₁₀ ‖r_k‖ and ‖x_k‖ for the first two cases (where A is one of the spd matrices in Figure 4.4). The values of ‖x_k‖ are essentially monotonic. The backward errors ‖r_k‖/‖x_k‖ (not shown) were even closer to being monotonic (at least for the first n iterations).

Fig. 4.8. Residual and solution norms when MINRES is applied to two indefinite systems (A − δI)x = b, where A is one of the spd matrices used in Figure 4.5 and δ = 0.5 is large enough to make the systems indefinite. Left: Problem Schmid thermal1 with n = 82654. Right: Problem BenElechi BenElechi1 with n = 245874. Top: the values of log₁₀ ‖r_k‖ plotted against iteration number k for the first n iterations. Bottom left: the values of ‖x_k‖ plotted against k; there is a mild but clear decrease in ‖x_k‖ over an interval of about 10000 iterations. During the n = 82654 iterations, ‖x_k‖ increased 83% of the time and the backward errors ‖r_k‖/‖x_k‖ (not shown) decreased 91% of the time. Bottom right: the solution norms and backward errors are essentially monotonic; during the n = 245874 iterations, ‖x_k‖ increased 88% of the time and the backward errors ‖r_k‖/‖x_k‖ (not shown) decreased 95% of the time.

Figure 4.8 shows ‖x_k‖ and log₁₀ ‖r_k‖ for the second two cases (where A is one of the spd matrices in Figure 4.5). The left example reveals a definite period of decrease in ‖x_k‖. Nevertheless, during the n = 82654 iterations, ‖x_k‖ increased 83% of the time and the backward errors ‖r_k‖/‖x_k‖ decreased 91% of the time. The right example is more like those in Figure 4.7. During the n = 245874 iterations, ‖x_k‖ increased 88% of the time, the backward errors ‖r_k‖/‖x_k‖ decreased 95% of the time, and any non-monotonicity was very slight.

Table 2.1. Pseudocode for algorithms CG and CR.

Table 5.1. Comparison of CG and MINRES properties on an spd system Ax = b.

Table 5.2. Comparison of LSQR and LSMR properties on min ‖Ax − b‖.