The effect of non-optimal bases on the convergence of Krylov subspace methods

Summary

There are many settings in which non-orthogonality of the basis used by a Krylov subspace method arises naturally. Such methods usually require less storage or less computational effort per iteration than methods built on an orthonormal basis (optimal methods), but their convergence may be delayed. Truncated Krylov subspace methods and other non-optimal methods have been shown to converge in many situations, often with only a small delay, but not in all. We explore the effect of using a non-optimal basis. We prove identities for the relative residual gap, i.e., the relative difference between the residuals of the optimal and non-optimal methods. These identities and related bounds provide insight into when the delay is small and convergence is achieved. Further understanding is gained from a recently developed general theory of superlinear convergence. Our analysis confirms the observation that, in exact arithmetic, the orthogonality of the basis is unimportant; what matters is that the basis remain linearly independent. Numerical examples illustrate the theoretical results.

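To make the relative residual gap concrete, the following minimal sketch (not taken from the paper; the test matrix, right-hand side, truncation window, and iteration count are arbitrary illustrative choices) compares a minimum residual method built on an orthonormal Arnoldi basis, i.e., GMRES, with a non-optimal variant that orthogonalizes each new basis vector against only a few previous ones, in the spirit of DQGMRES. Both compute their iterates from the small least-squares problem defined by the generated basis, and the true residual norms are then compared to form the gap.

# A minimal numerical sketch (not from the paper): an "optimal" minimum
# residual method (full Arnoldi orthogonalization, as in GMRES) versus a
# "non-optimal" truncated variant that orthogonalizes against only the last
# few basis vectors. All problem data below are hypothetical choices.
import numpy as np

def minres_iterates(A, b, m, window=None):
    """Return true residual norms ||b - A x_j||, j = 1..m, where x_j solves
    the small least-squares problem associated with the (possibly truncated)
    Arnoldi-like relation A V_j = V_{j+1} H_j."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    resnorms = []
    for j in range(m):
        w = A @ V[:, j]
        # Full orthogonalization (window=None) or truncated orthogonalization
        # against the last `window` basis vectors (non-optimal basis).
        start = 0 if window is None else max(0, j - window + 1)
        for i in range(start, j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # Iterate minimizing ||beta e1 - H_j y|| (a quasi-minimal residual);
        # it coincides with the true minimal residual when V is orthonormal.
        rhs = np.zeros(j + 2)
        rhs[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], rhs, rcond=None)
        x = V[:, :j + 1] @ y
        resnorms.append(np.linalg.norm(b - A @ x))
        if H[j + 1, j] <= 1e-12 * beta:
            break  # (near) breakdown: the generated subspace stops growing
        V[:, j + 1] = w / H[j + 1, j]
    return np.array(resnorms)

# Nonsymmetric test problem: a tridiagonal Toeplitz matrix (hypothetical choice).
n = 200
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.2 * np.ones(n - 1), 1)
     + np.diag(-0.8 * np.ones(n - 1), -1))
b = np.random.default_rng(0).standard_normal(n)

m = 60
r_opt = minres_iterates(A, b, m)            # orthonormal basis (GMRES)
r_non = minres_iterates(A, b, m, window=3)  # truncated, non-orthogonal basis

# Relative residual gap: relative difference between the non-optimal and
# optimal true residual norms.
k = min(len(r_opt), len(r_non))
gap = (r_non[:k] - r_opt[:k]) / np.linalg.norm(b)
for j in range(0, k, 10):
    print(f"step {j+1:3d}:  ||r_opt|| = {r_opt[j]:.2e}  "
          f"||r_non|| = {r_non[j]:.2e}  gap = {gap[j]:.2e}")

In exact arithmetic the truncated basis still spans the same Krylov subspace as long as it remains linearly independent, so any gap printed above reflects the quasi-minimization over a non-orthogonal basis (and, in finite precision, a possible loss of independence), which is the kind of delay analyzed in the paper.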

Author information

Correspondence to Valeria Simoncini.

Cite this article

Simoncini, V., Szyld, D. The effect of non-optimal bases on the convergence of Krylov subspace methods. Numer. Math. 100, 711–733 (2005). https://doi.org/10.1007/s00211-005-0603-8
