ABSTRACT

Starting from an initial guess $x^{(0)}$, iterative methods generate a sequence $\{x^{(0)}, x^{(1)}, \ldots, x^{(k)}, \ldots\}$ which converges towards $A^{-1}b$. We define the residuals as
$$
r^{(k)} = b - A x^{(k)}, \tag{6.3}
$$
and the errors as
$$
e^{(k)} = x - x^{(k)} = A^{-1} r^{(k)}. \tag{6.4}
$$
Thus the sequence of residuals and the sequence of errors both converge towards $0$ when the iterative method converges. The rate of convergence should be fast enough that the iteration can be stopped once a sufficiently accurate approximation has been found. Each iteration requires BLAS 2 GAXPY operations, with a complexity of $O(n^2)$ flops for dense matrices and $O(n)$ flops for sparse matrices.

In practice, the system must be preconditioned with another matrix, and each iteration then requires solving a system with this matrix. The objectives of preconditioning are twofold: to reduce the number of iterations and to solve an easy system, with a linear $O(n)$ complexity for sparse matrices. Moreover, memory requirements are also linear, at $O(n)$ floating-point numbers. When convergence is fast enough, iterative methods are competitive with direct methods, which require much more memory and have a higher complexity.

There are several references on iterative methods for solving a system of linear equations, for example Kelley [42], Saad [58], Ciarlet [20], Meurant [47] and Dongarra et al. [26]. In this chapter, we discuss only two classes of iterative methods: stationary methods and Krylov projection methods. We describe some well-known methods and refer to the literature for the others. We do not discuss other iterative methods, such as multigrid methods or domain decomposition methods.
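To make the residual-based stopping criterion of (6.3)–(6.4) concrete, here is a minimal sketch of one classical stationary method, the Jacobi iteration, in Python with NumPy. The function name, the tolerance, and the small example system are illustrative assumptions, not taken from the chapter; the dense matrix-vector product `A @ x` is the $O(n^2)$ GAXPY-type operation mentioned above.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    """Jacobi iteration: x_{k+1} = x_k + D^{-1} r_k, with D = diag(A).

    Stops when the relative residual ||r_k|| / ||b|| falls below tol.
    (Illustrative sketch; parameter values are assumptions.)
    """
    d = np.diag(A)              # diagonal of A; entries must be nonzero
    x = x0.copy()
    bnorm = np.linalg.norm(b)
    for k in range(max_iter):
        r = b - A @ x           # residual r^(k) = b - A x^(k); O(n^2) for dense A
        if np.linalg.norm(r) <= tol * bnorm:
            return x, k         # converged: residual (hence error) is small
        x = x + r / d           # Jacobi update
    return x, max_iter

# Usage on a small diagonally dominant system (assumed example data):
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b, np.zeros(3))
print(iters, np.linalg.norm(b - A @ x))
```

Jacobi converges here because the example matrix is strictly diagonally dominant; the loop stops as soon as the residual test is satisfied, rather than running a fixed number of iterations.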
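The effect of preconditioning can be sketched in the same style: a preconditioned Richardson iteration replaces the update by $x \leftarrow x + M^{-1} r^{(k)}$, where $M$ is chosen so that systems with $M$ are easy to solve. Below, purely as an assumed example and not the chapter's own method, $M$ is the sparse lower-triangular part of $A$ (the Gauss–Seidel splitting), so that for a sparse tridiagonal $A$ both the residual and the preconditioner solve cost $O(n)$ flops per iteration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def preconditioned_richardson(A, b, x0, tol=1e-10, max_iter=1000):
    """Richardson iteration x_{k+1} = x_k + M^{-1} r_k, preconditioned with
    M = D + L, the lower-triangular part of A (Gauss-Seidel splitting).

    For sparse A, the residual A @ x and the triangular solve with M both
    cost O(nnz) = O(n) flops, so each iteration has linear complexity.
    (Illustrative sketch; the choice of M and all parameters are assumptions.)
    """
    M = sp.tril(A, format="csr")    # easy-to-solve triangular preconditioner
    x = x0.copy()
    bnorm = np.linalg.norm(b)
    for k in range(max_iter):
        r = b - A @ x               # sparse mat-vec: O(n) flops
        if np.linalg.norm(r) <= tol * bnorm:
            return x, k
        x = x + spsolve_triangular(M, r, lower=True)  # O(n) solve with M
    return x, max_iter

# Usage on a sparse, diagonally dominant tridiagonal system (assumed data):
n = 100
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, iters = preconditioned_richardson(A, b, np.zeros(n))
print(iters, np.linalg.norm(b - A @ x))
```

With this splitting, the iteration is mathematically the Gauss–Seidel method; the point of the sketch is only that both goals of preconditioning stated above (fewer iterations, and an easy $O(n)$ system with $M$ per iteration) appear explicitly in the loop.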