Averaged Nyström interpolants for the solution of Fredholm integral equations of the second kind

Fredholm integral equations of the second kind that are defined on a finite or infinite interval arise in many applications. This paper discusses Nyström methods based on Gauss quadrature rules for the solution of such integral equations. It is important to be able to estimate the error in the computed solution, because this allows the choice of an appropriate number of nodes in the Gauss quadrature rule used. This paper explores the application of averaged and weighted averaged Gauss quadrature rules for this purpose, and establishes new stability properties for them.


1. Introduction. Fredholm integral equations of the second kind,

f(y) + ∫_D k(x, y) f(x) dµ(x) = g(y),   y ∈ D,   (1.1)

where the kernel k and the right-hand side function g are given, the function f is to be determined, and dµ(x) is a nonnegative measure supported on a bounded or unbounded domain, arise in many applications, including image restoration (when applying Tikhonov regularization) [1], conformal mapping [2], frequency analysis [3], and tomography [4]; see also Atkinson [5] and Kress [6] for discussions of further applications. The present paper considers equations of the form (1.1) when D is a bounded or infinite interval on the real axis and the integral operator is a compact map from X to X, where X is a suitable weighted Banach space; see Section 2 for its definition. Suitable assumptions on the kernel and on the measure can be made to guarantee compactness of the integral operator K. Detailed results on this topic are available in [7, Chapter 5].
The Nyström method is one of the most popular approaches to compute an approximate solution of Fredholm integral equations of the second kind; see, e.g., Atkinson [5] or Kress [6]. The method is easy to implement and use: the integral in (1.1) is replaced by an interpolatory quadrature rule K_m with m nodes x_1 < x_2 < · · · < x_m on the interval D, and the equation

(I + K_m) f_m = g,   (1.2)

where I is the identity operator and f_m is the unknown interpolant, is required to hold at the nodes y = x_i, i = 1, 2, . . . , m. This yields the linear system of equations

Σ_{j=1}^m [δ_{ij} + c_j k(x_j, x_i)] a_j = g(x_i),   i = 1, 2, . . . , m,   (1.3)

with a coefficient matrix of order m. Here the c_j are the coefficients of the chosen quadrature rule, a_j = f_m(x_j), and δ_{ij} is the Kronecker delta, i.e., δ_{ii} = 1 and δ_{ij} = 0 for i ≠ j. Assume that the integral equation (1.1) has a unique solution in X. This is the case when the null space of the corresponding operator is trivial, that is, when N(I + K) = {0}. Then, when m is sufficiently large, the matrix of the linear system of equations (1.3) is nonsingular and its condition number can be bounded independently of m; see [5] for details.
Having computed the solution [a_1, a_2, . . . , a_m]^T ∈ R^m of the linear system of equations (1.3), the Nyström interpolant

f_m(y) = g(y) − Σ_{j=1}^m c_j k(x_j, y) a_j,   y ∈ D,   (1.4)

provides an approximate solution of (1.1) that can be evaluated at any y ∈ D.
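To make the steps above concrete, here is a minimal sketch of the Nyström method for an equation of the form (1.1) with dµ(x) = dx on [−1, 1], using a Gauss–Legendre rule. The kernel k(x, y) = xy/2, the right-hand side g(y) = 4y/3, and the exact solution f(y) = y are our own toy choices, not taken from this paper.

```python
import numpy as np

def nystrom_solve(kernel, g, m):
    """Solve f(y) + int_{-1}^{1} k(x, y) f(x) dx = g(y) by the Nystrom
    method with an m-point Gauss-Legendre rule (a sketch; the paper's
    setting allows general measures and weighted spaces)."""
    x, c = np.polynomial.legendre.leggauss(m)   # nodes x_j, weights c_j
    # Linear system (1.3): a_i + sum_j c_j k(x_j, x_i) a_j = g(x_i)
    A = np.eye(m) + c[None, :] * kernel(x[None, :], x[:, None])
    a = np.linalg.solve(A, g(x))

    # Nystrom interpolant (1.4), evaluable at any y in D
    def f_m(y):
        y = np.atleast_1d(y)
        return g(y) - np.sum(c * kernel(x, y[:, None]) * a, axis=1)

    return f_m
```

Since the chosen kernel is a polynomial of low degree, the Gauss rule integrates all required moments exactly and the interpolant reproduces the exact solution to machine precision.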
The Nyström interpolant (1.4) is known to converge to the exact solution f (y) of (1.1) with the same rate of convergence as the quadrature rule used; see, e.g., [5] for details.
An important aspect of the Nyström method is the choice of the quadrature formula. We would like the quadrature rule to be convergent in the weighted function space X determined by the measure dµ(x). For this reason, Nyström methods often are based on Gauss quadrature rules, for which there is a wide literature. Indeed, Gauss quadrature formulas associated with different measures have been applied in Nyström methods, both for a single Fredholm integral equation of the second kind [8, 9, 10, 11] and for systems of such equations [12, 13], in function spaces suited to handle possible pathologies of the solution of (1.1).
In a sequence of papers, Laurie [14] and Spalević with collaborators [15, 16, 17, 18] developed averaged and weighted averaged Gauss quadrature rules. These rules are convex combinations of two quadrature rules G_m and G^+_{m+1} with m and m + 1 nodes, respectively, where G_m is an m-point Gauss rule and G^+_{m+1} is an (m + 1)-node quadrature rule that is related to G_m. Averaged and weighted averaged rules have been applied to estimate the quadrature error in Gauss rules. It is the purpose of the present paper to explore their application to estimating the error in Nyström interpolants (1.4).
Since the averaged and weighted averaged rules have 2m + 1 nodes and weights, their straightforward application in a Nyström method requires the solution of a linear system of equations with a (2m + 1) × (2m + 1) matrix. We will explore the possibility of reducing the computational effort required when using these rules.
The averaged rule introduced by Laurie [14] is the average of an m-point Gauss rule and an (m + 1)-point anti-Gauss rule. The application of pairs of Gauss and anti-Gauss rules to the estimation of the error in Nyström interpolants has recently been described in [19].
In the first part of this paper, we analyze the weighted averaged rules described in [17, 18, 20] and show their stability and convergence in weighted function spaces. These results extend those shown for anti-Gauss rules in [19] in that they involve more general quadrature rules, weight functions, and domains. Moreover, our results are shown under less restrictive assumptions than those in [19], and include the latter results as a special case. In the second part of the paper, our discussion focuses on the use of the averaged rules of Laurie [14] and the weighted averaged rules of Spalević [17, 18] to estimate the error in the Nyström interpolant (1.4). Finally, new iterative methods are developed to solve the linear systems of equations associated with Nyström's method. These methods exploit the structure of the coefficient matrix, and their convergence is studied. To the best of our knowledge, this is the first time that such methods are proposed.
This paper is organized as follows. Section 2 reviews averaged and weighted averaged Gauss quadrature rules, and new stability results are shown. Illustrations of their performance for some classical measures are presented in Section 3. The application of averaged and weighted averaged Gauss rules to the estimation of the error in Nyström interpolants is discussed in Section 4, and several iterative methods for the computation of Nyström interpolants are described in Section 5. Computed examples are presented in Section 6, and concluding remarks can be found in Section 7.
2. Averaged and weighted averaged Gauss quadrature rules. Let

I(f) = ∫_D f(x) dµ(x)   (2.1)

for some nonnegative measure dµ with infinitely many points of support, and let {p_k}_{k=0}^∞ be the sequence of monic orthogonal polynomials associated with this measure, i.e., p_k is a polynomial of degree k with leading coefficient one such that

⟨p_j, p_k⟩ = ∫_D p_j(x) p_k(x) dµ(x) = 0   for j ≠ k.   (2.2)

The above inner product defines a Hilbert space. It is well known that the polynomials p_k satisfy a recurrence relation of the form

p_{k+1}(x) = (x − α_k) p_k(x) − β_k p_{k−1}(x),   k = 0, 1, . . . ,   p_0(x) = 1,   p_{−1}(x) = 0,   (2.3)

where the recursion coefficients are given by α_k = ⟨x p_k, p_k⟩ / ⟨p_k, p_k⟩ and β_k = ⟨p_k, p_k⟩ / ⟨p_{k−1}, p_{k−1}⟩, with β_0 = ⟨p_0, p_0⟩. The zeros of each polynomial p_k, k ≥ 1, lie in the convex hull of the support of dµ and are distinct; see, e.g., [21]. Let x_1^{(G)} < · · · < x_m^{(G)} denote the zeros of p_m. They are known to be the eigenvalues of the symmetric tridiagonal matrix J_m ∈ R^{m×m} with diagonal entries α_0, . . . , α_{m−1} and sub- and superdiagonal entries √β_1, . . . , √β_{m−1}; see [21]. This matrix has orthogonal eigenvectors. Let v_{k,1} denote the first component of a normalized real eigenvector associated with the eigenvalue x_k^{(G)}. Then the m-point Gauss rule associated with the measure dµ is given by

G_m(f) = Σ_{k=1}^m λ_k^{(G)} f(x_k^{(G)}),   (2.4)

where the weights λ_k^{(G)} (also known as Christoffel numbers) can be determined as λ_k^{(G)} = β_0 v_{k,1}²; see [21, 22]. The rule (2.4) is known to be exact for all polynomials in P_{2m−1}; see, e.g., [21]. Here and throughout this paper P_k denotes the set of polynomials of degree at most k.
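The eigenvalue characterization above (the Golub–Welsch procedure) can be sketched as follows for the Legendre weight, for which α_k = 0, β_0 = 2, and β_k = k²/(4k² − 1); this is an illustration of our own, not code from the paper.

```python
import numpy as np

def gauss_rule(alpha, beta):
    """m-point Gauss rule from the recursion coefficients alpha_0..alpha_{m-1}
    and beta_0..beta_{m-1}: the nodes are the eigenvalues of the symmetric
    tridiagonal matrix J_m and the weights are beta_0 times the squared first
    components of its normalized eigenvectors (Golub-Welsch)."""
    J = (np.diag(alpha)
         + np.diag(np.sqrt(beta[1:]), 1)
         + np.diag(np.sqrt(beta[1:]), -1))
    nodes, V = np.linalg.eigh(J)
    return nodes, beta[0] * V[0, :] ** 2

# Legendre weight on [-1, 1]: alpha_k = 0, beta_0 = 2, beta_k = k^2/(4k^2 - 1)
m = 6
x, w = gauss_rule(np.zeros(m),
                  np.array([2.0] + [k * k / (4.0 * k * k - 1.0)
                                    for k in range(1, m)]))
```

The rule has degree of exactness 2m − 1 = 11, so it reproduces, e.g., the moment ∫_{−1}^{1} x^{10} dx = 2/11 exactly.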
Consider the extended symmetric tridiagonal matrix

J̃_{m+1} = [ J_m, √(2β_m) e_m ; √(2β_m) e_m^T, α_m ],

where e_k = [0, . . . , 0, 1, 0, . . . , 0]^T stands for the kth axis vector of suitable dimension. Let {x̃_k}_{k=1}^{m+1} denote the eigenvalues of J̃_{m+1}. They are real and distinct, but they are not guaranteed to lie in the convex hull of the support of the measure dµ; see, e.g., [14] for a discussion. Let ṽ_{k,1} denote the first component of a normalized real eigenvector associated with the eigenvalue x̃_k. Then the x̃_k are the nodes and the λ̃_k = β_0 ṽ_{k,1}² are the weights of the (m + 1)-point anti-Gauss rule

G̃_{m+1}(f) = Σ_{k=1}^{m+1} λ̃_k f(x̃_k)   (2.5)

introduced by Laurie [14]. This rule satisfies

(I − G̃_{m+1})(f) = −(I − G_m)(f)   for all f ∈ P_{2m+1};   (2.6)

see [14] for details. It follows from (2.6) that the degree of exactness of the anti-Gauss rule G̃_{m+1} is at least 2m − 1.
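A small numerical check of the anti-Gauss construction and of relation (2.6), again for the Legendre weight (our own illustration): on the monomial x^{2m} ∈ P_{2m+1}, the anti-Gauss error equals the Gauss error with reversed sign, so the average of the two rules integrates x^{2m} exactly.

```python
import numpy as np

def rule_from_jacobi(diag, off, beta0):
    """Nodes and weights of the quadrature rule associated with a
    symmetric tridiagonal matrix (Golub-Welsch)."""
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, beta0 * V[0, :] ** 2

# Legendre weight on [-1, 1]: alpha_k = 0, beta_0 = 2, beta_k = k^2/(4k^2 - 1)
m = 4
beta = np.array([2.0] + [k * k / (4.0 * k * k - 1.0) for k in range(1, m + 1)])

xg, wg = rule_from_jacobi(np.zeros(m), np.sqrt(beta[1:m]), beta[0])

# Anti-Gauss: the matrix is extended by one row and column, with the new
# off-diagonal entry sqrt(2*beta_m) instead of sqrt(beta_m)
off = np.append(np.sqrt(beta[1:m]), np.sqrt(2.0 * beta[m]))
xa, wa = rule_from_jacobi(np.zeros(m + 1), off, beta[0])

f = lambda x: x ** (2 * m)          # a polynomial in P_{2m+1}
I = 2.0 / (2 * m + 1)               # exact integral of x^{2m} on [-1, 1]
eG = I - np.sum(wg * f(xg))         # Gauss quadrature error
eA = I - np.sum(wa * f(xa))         # anti-Gauss quadrature error
avg = 0.5 * (np.sum(wg * f(xg)) + np.sum(wa * f(xa)))
```

Here eA = −eG up to rounding, so avg recovers the exact value I.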
Laurie [14] also introduced the averaged rule

A_{2m+1}(f) = (1/2) (G_m(f) + G̃_{m+1}(f)),   (2.7)

which has the following properties:
1. It has 2m + 1 nodes. Its evaluation requires the calculation of the integrand f at 2m + 1 nodes. However, m of these function values also are required to evaluate the Gauss rule G_m(f). Therefore, the additional computational effort demanded when calculating the averaged rule (2.7) is only m + 1 function evaluations.
2. All its 2m + 1 weights are positive.
3. Its degree of exactness is at least 2m + 1, i.e., A_{2m+1}(f) = I(f) for all f ∈ P_{2m+1}. For some measures, the averaged rule agrees with the Gauss–Kronrod rule and its degree of exactness is higher; see, e.g., [15, 23, 24, 25].
4. For certain measures, the averaged rule (2.7) is internal, i.e., all nodes lie in the convex hull of the support of the measure dµ. For instance, when dµ(x) = x^α e^{−x} dx or dµ(x) = (1 − x)^α (1 + x)^β dx, the averaged rule is internal only for suitable values of α and β; see [14].
5. The averaged rule (2.7) furnishes an estimate for the error I(f) − G_m(f), since A_{2m+1}(f) − G_m(f) ≈ I(f) − G_m(f).

Spalević [17] constructed a symmetric tridiagonal matrix of order 2m + 1 whose eigenvalues are the nodes of the averaged rule (2.7) and whose weights can be computed from the first components of the associated normalized real eigenvectors. This construction led Spalević to the definition of the weighted averaged quadrature rule

Â_{2m+1}(f) = Σ_{k=1}^{2m+1} λ̂_k f(x̂_k)   (2.8)

associated with the Gauss rule G_m. The nodes of the rule (2.8) are the eigenvalues of the symmetric tridiagonal matrix (2.9) of order 2m + 1, which is assembled from the matrix J_{m+1}, the coefficient β_{m+1}, and the reversed matrix Z_m J_m Z_m, where Z_m ∈ R^{m×m} is the row-reversed identity matrix. The weights of the quadrature rule (2.8) are the squares of the first components of normalized real eigenvectors of (2.9), multiplied by β_0. Spalević [17] showed the following properties of the quadrature formula (2.8): (A) The formula requires 2m + 1 evaluations of the integrand f. When the nodes x_j^{(G)} of the Gauss rule (2.4) and the nodes x̂_j of the weighted averaged rule (2.8) are ordered increasingly, the two sets of quadrature nodes interlace. (B) All the weights λ̂_k are positive.
(C) The quadrature rule has degree of exactness at least 2m + 2. If the measure dµ(x) is symmetric with respect to the origin, then the degree of exactness is at least 2m + 3. For certain measures dµ, the degree of exactness is much higher; see [15, 24, 25]. (D) If dµ(x) = x^α e^{−x} dx and D = R_+, then the formula is internal when α > 1. If dµ(x) = (1 − x)^α (1 + x)^β dx and D = [−1, 1], then it is internal only for certain values of α and β. (E) The quadrature rule suggests the error estimate

I(f) − G_m(f) ≈ Â_{2m+1}(f) − G_m(f).   (2.10)

Computed examples reported in [20] show that the weighted averaged quadrature rule (2.8) gives, for many integrands, higher accuracy than suggested by its degree of exactness. This also holds, to a lesser extent, for the averaged rule (2.7). This property of the rule (2.8) implies that, for many integrands, the right-hand side of (2.10) provides an accurate estimate of the left-hand side. The present paper uses this property to determine accurate estimates of the error in computed approximate solutions of Fredholm integral equations of the second kind.
When applying the quadrature rule (2.8), one generally also evaluates the Gauss rule (2.4). Therefore, the nodes and weights of the representation

Â_{2m+1}(f) = (β_{m+1} G_m(f) + β_m G*_{m+1}(f)) / (β_m + β_{m+1})   (2.11)

can be evaluated faster than by computing the eigenvalues and the first components of normalized eigenvectors of the matrix (2.9). Here G*_{m+1} is the quadrature rule

G*_{m+1}(f) = Σ_{k=1}^{m+1} λ*_k f(x*_k),   (2.13)

determined by the matrix

J*_{m+1} = [ J_m, √(β_m + β_{m+1}) e_m ; √(β_m + β_{m+1}) e_m^T, α_m ],   (2.12)

and e*_{m+1}(f) denotes the quadrature error in G*_{m+1}(f); see [16] for a derivation of (2.11) and a discussion of the computational effort.
The representation (2.11) shows that the rule Â_{2m+1}(f) is a weighted average of the Gauss rule G_m(f) and the quadrature rule G*_{m+1}(f). It also shows that one can evaluate the error estimate (2.10) as

Â_{2m+1}(f) − G_m(f) = (β_m / (β_m + β_{m+1})) (G*_{m+1}(f) − G_m(f)).

The following properties of the rule G*_{m+1} are believed to be new.

Theorem 2.1. The degree of exactness of the quadrature rule G*_{m+1} in (2.11) is at least 2m − 1, the nodes x*_k interlace with the Gauss nodes x_k^{(G)}, and the weights λ*_k are positive.

Proof. The first part of the assertion follows from the representation (2.11) of Â_{2m+1} and the fact that the degree of exactness of both the weighted averaged rule Â_{2m+1} and the Gauss rule G_m is at least 2m − 1.
Let us consider the function q_{2m+1}(x) = p_m(x) p*_{m+1}(x). The last equality is a consequence of the well-known representation of the Lagrange polynomial associated with the quadrature node x*_j of the formula (2.11), where we note that q'_{2m+1}(x*_j) = p_m(x*_j)(p*_{m+1})'(x*_j). On the other hand, for each j there exists a polynomial q_{m−1,j} of degree m − 1 satisfying a corresponding identity. Combining the above two equalities, we obtain the expression for λ*_k, whose positivity follows from its definition in terms of the squared first component of a normalized real eigenvector. Finally, the interlacing property of the nodes follows by applying Cauchy's interlacing theorem to the matrix (2.12); see, e.g., [26, Theorem 3.3].
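The representation (2.11) and the assertions of Theorem 2.1 can be checked numerically for the Legendre weight. In the sketch below, the construction of G*_{m+1} as a bordered Jacobi-type matrix whose last off-diagonal entry is √(β_m + β_{m+1}) is our reading of the matrix (2.12) and of [16]; it reduces to the anti-Gauss matrix when β_{m+1} is replaced by β_m.

```python
import numpy as np

# Legendre weight on [-1, 1]: alpha_k = 0, beta_0 = 2, beta_k = k^2/(4k^2 - 1)
m = 4
beta = np.array([2.0] + [k * k / (4.0 * k * k - 1.0) for k in range(1, m + 2)])

def rule(off_diag):
    """Quadrature rule from a symmetric tridiagonal matrix with zero
    diagonal (alpha_k = 0 for the Legendre weight)."""
    J = np.diag(off_diag, 1) + np.diag(off_diag, -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, beta[0] * V[0, :] ** 2

xg, wg = rule(np.sqrt(beta[1:m]))                 # m-point Gauss rule
off = np.sqrt(beta[1:m + 1])
off[-1] = np.sqrt(beta[m] + beta[m + 1])          # our reading of (2.12)
xs, ws = rule(off)                                # (m+1)-point rule G*_{m+1}

def quad(x, w, f):
    return np.sum(w * f(x))

def A_hat(f):
    """Weighted average of G_m and G*_{m+1} according to (2.11)."""
    t = beta[m + 1] / (beta[m] + beta[m + 1])
    return t * quad(xg, wg, f) + (1.0 - t) * quad(xs, ws, f)
```

In this sketch the rule reproduces the moments of degree 2m and 2m + 2 exactly, consistent with the degree of exactness 2m + 3 stated in property (C) for symmetric measures, and the nodes of G*_{m+1} interlace the Gauss nodes as asserted in Theorem 2.1.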
The next lemma, which will be useful in the sequel, gives a Markov–Stieltjes-type inequality. An analogous inequality is well known for classical Gauss rules (see, for instance, [27]), but, to our knowledge, it has not previously been proved for averaged rules.
Lemma 2.2. For a fixed quadrature node x*_k of G*_{m+1}, the bounds (2.14) hold.

Proof. Let P_{2m} and Q_{2m} be two polynomials of degree 2m satisfying the required interpolation conditions. These polynomials are uniquely determined; see, for instance, [27, Lemma 1.3]. Following the proof of [27, Theorem 5.2], one obtains that, for each real x, the inequality (2.15) holds, where H_k is the shifted Heaviside function. Then, since G*_{m+1} has degree of exactness at least 2m − 1, one obtains from (2.11) an identity involving e_m(f), the quadrature error of the Gauss rule. To justify the last inequality, let us recall the error representation of the Gauss rule [21, Formula (1.4.14)]. By virtue of (2.15), as H_k(x) is a piecewise constant function and the polynomial P_{2m} has even degree, its leading coefficient has to be negative to satisfy the inequality for x → ±∞. Therefore, e_m(P_{2m}) ≤ 0. Similarly, Q_{2m} has a positive leading coefficient and, consequently, e_m(Q_{2m}) ≥ 0, which yields the bounds (2.14).

The bounds (2.14) are useful since they imply that sup_m ∥G*_{m+1}∥ < ∞. The quadrature error then tends to zero as fast as the error of best approximation by polynomials of degree less than 2m − 1. Here and in the sequel, C denotes a positive constant which may assume different values in different formulas. We write C ≠ C(a, b, . . .) to indicate that C is independent of the parameters a, b, . . . . Introduce a bounded weight function u : D → R that is positive on the support of dµ and satisfies (2.16). We define the weighted space C_u(D) as the set of all continuous functions f ∈ C(D \ ∂D) such that fu ∈ C(D), equipped with the weighted uniform norm ∥fu∥_∞. For smoother functions, we consider the Sobolev-type spaces of index r ≥ 1,

W_u^r(D) = { f ∈ C_u(D) : f^{(r−1)} ∈ AC(D \ ∂D), ∥f^{(r)} φ^r u∥_∞ < ∞ },   (2.17)

where AC(D \ ∂D) denotes the set of absolutely continuous functions on D \ ∂D and φ is the weight function given in (2.18). The following result shows the stability of the quadrature rule G*_{m+1} in C_u(D).

Theorem 2.3. Let u be a bounded weight function that is positive on the support of dµ and satisfies (2.16). Assume that (2.19) holds. Then the formula G*_{m+1} is stable, i.e., (2.20) holds.

Proof. The bounds (2.14) imply that the relevant weighted sums are uniformly bounded, as the measure dµ is supported on D. This shows the stability of the formula.
The bound (2.20) allows us to show the convergence of the quadrature rules G*_{m+1} as m increases, i.e., lim_{m→∞} e*_{m+1}(f) = 0, and that e*_{m+1}(f) goes to zero as fast as the error of best polynomial approximation. This is shown in the following corollary.
Corollary 2.4. Under the assumptions of Theorem 2.3, the bound (2.21) holds.

Proof. The inequality (2.21) can be shown following, mutatis mutandis, the proof of Theorem 5.1.7 in [28]. We report the main steps of the proof for the convenience of the reader. By the exactness of the formula G*_{m+1} and by (2.20), we obtain, for each polynomial P of degree at most 2m − 1, a bound with a constant C′ related to the constant C in (2.20). Taking the infimum with respect to P, we obtain the assertion.

Example 1. Let the function f belong to the space C_u([−1, 1]) and assume that we would like to evaluate (2.1), where the measure dµ(x) = w(x)dx is determined by the Jacobi weight function (3.1). Then the first inequality in (2.16) is satisfied under suitable conditions on the exponents, and Theorem 2.3 holds true. Now, let instead f be a smoother function, namely, let f belong to the Sobolev space W_u^r([−1, 1]) of index r ≥ 1, defined in (2.17). Then [28, p. 172] bounds the error of best polynomial approximation in terms of the function φ given in (2.18), and Corollary 2.4 yields a corresponding convergence rate.

Example 2. Let f belong to C_u(R_+) and assume that we would like to evaluate (2.1), where the measure dµ(x) = w(x)dx is determined by the Laguerre weight function. Then the first inequality in (2.16) holds under suitable conditions on the weight u.

Corollary 2.5. The weighted averaged rule Â_{2m+1} in (2.11) is stable and convergent, i.e., lim_{m→∞} (I − Â_{2m+1})(f) = 0 for all f ∈ C_u(D).

Proof. Stability follows from the representation (2.11), the stability of the Gauss rule G_m, and Theorem 2.3. Convergence can be shown similarly as in the proof of Corollary 2.4.

3. Properties and performance of the considered quadrature rules. We analyze the quadrature rules (2.5), (2.13), (2.7), and (2.8) for a few measures dµ(x) = w(x)dx with classical weight functions w(x) that commonly arise in Fredholm integral equations of the second kind.
For general α, β > −1 in (3.1), the recursion coefficients β_m tend to 1/4 as m increases, and it follows that the coefficients multiplying G_m and G*_{m+1} in (2.11) tend to 1/2 as m increases. This implies that the quadrature rules A_{2m+1}(f) and Â_{2m+1}(f) may produce significantly different results only for small values of m.
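For the Legendre case (α = β = 0), this limit is easy to check numerically (a small illustration of our own):

```python
# beta_k = k^2/(4k^2 - 1) for the Legendre weight, so beta_k -> 1/4 and the
# coefficient theta_m = beta_{m+1}/(beta_m + beta_{m+1}) in (2.11) tends to 1/2
def theta(m):
    beta = lambda k: k * k / (4.0 * k * k - 1.0)
    return beta(m + 1) / (beta(m) + beta(m + 1))
```

Already theta(2) ≈ 0.49, and the deviation from 1/2 decays rapidly with m.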
Example 3. Consider the integral I_1. It can be computed analytically. To illustrate the performance of the quadrature rules without the influence of round-off errors introduced during the computations, we carry out all computations of this section in high-precision arithmetic.
Results determined in standard double-precision arithmetic are very close to those reported. Table 3.1 displays, for the integral I_1 and several small values of m, the quadrature errors obtained with the Gauss rule G_m, the anti-Gauss formula G̃_{m+1}, the rule G*_{m+1}, the averaged formula A_{2m+1}, and the weighted averaged rule Â_{2m+1}. The weighted averaged rule Â_{2m+1} can be seen to produce a more accurate approximation of I_1 than A_{2m+1} for all values of m. It can also be observed that the anti-Gauss rule G̃_{m+1} and the rule G*_{m+1} give quadrature errors of opposite sign to that of the corresponding Gauss rule G_m. In Table 3.1, as well as in the remainder of this section, the rule Â_{2m+1} was computed according to (2.11).
The rules A_{2m+1} and Â_{2m+1} can be used to estimate the quadrature error (I − G_m)(f). A comparison of Table 3.2 with the second column of Table 3.1 shows these error estimates to be quite accurate.

We next consider the situation when the monic orthogonal polynomials {p_m}_{m=0}^∞ are generalized Laguerre polynomials [21, 28], i.e., they satisfy (2.2) with respect to the domain D = R_+ and the measure dµ(x) = x^α e^{−x} dx for some α > −1. The recursion coefficients are given by α_k = 2k + α + 1 for k ≥ 0 and β_k = k(k + α) for k ≥ 1, with β_0 = Γ(1 + α). It is easy to see that the coefficients in (2.11) tend to 1/2 as m increases.

Example 4. Consider an integral with respect to the generalized Laguerre measure, whose exact value is approximated by a Gauss rule with 1024 nodes. Table 3.3 displays quadrature errors for this integral. The averaged rules can be seen to yield one or two more correct decimal digits than the corresponding Gauss rule. In this example the averaged rule produces higher accuracy than the weighted averaged rule with the same number of nodes.

The monic Hermite orthogonal polynomials satisfy (2.3) with the coefficients α_k = 0 for k ≥ 0 and β_k = k/2 for k ≥ 1, with β_0 = √π. Then the coefficients in (2.11) also tend to 1/2 as m increases.

Example 5. Consider the integral with weight function w(x) = e^{−x²}. The exact value is approximated by a Gauss rule with 512 nodes. The quadrature errors reported in Table 3.4 show that also for the integral in the present example, the averaged rules yield higher accuracy than the underlying Gauss rules, and the weighted averaged formula is superior to the averaged one. This is due to the smoothness of the integrand.

4. Averaged and weighted averaged Nyström-type interpolants. This section describes several ways to apply the averaged and weighted averaged quadrature rules to compute and evaluate an approximate solution of the integral equation (1.1) in suitable weighted spaces C_u(D). The consideration of the equations in weighted spaces is crucial in order to include the cases when the kernel, the right-hand side, and the solution may be unbounded at some boundary points of the domain D.
Let the quadrature formula K_m employed in (1.2) be the m-point Gauss rule G_m (2.4). Then, after multiplying both sides by the weight function u, we obtain the linear system (4.1), and we determine an approximate solution of (1.1) by using the weighted Nyström interpolant (4.2). We note for future reference that the linear system of equations (4.1) can be expressed as (4.3), where I_m is the identity matrix of order m, a^{(G)} = [a_1, . . . , a_m]^T ∈ R^m, and the entries of the matrix Φ are determined by the kernel, the quadrature weights, and the weight function u. We tacitly assume that m is large enough so that the system (4.3) has a unique solution. A Nyström method based on the anti-Gauss quadrature rule (2.5) was recently described in [19], where the interpolant corresponding to the (2m + 1)-point averaged quadrature rule (2.7) was studied.
In the following, we discretize the integral equation (1.1) by the (2m + 1)-point weighted averaged quadrature rule associated with the m-point Gauss rule, and define a corresponding weighted Nyström interpolant. This interpolant gives higher accuracy than the Nyström interpolant defined by the m-point Gauss rule. The difference between the approximate solutions of (1.1) furnished by the Nyström interpolants associated with the m-point Gauss rule and with the corresponding (2m + 1)-point weighted averaged quadrature formula is used to estimate the error in the approximate solution obtained with the Gauss–Nyström interpolant. This approach to estimating the error is analogous to the technique used in Section 3.

4.1. A weighted averaged Nyström interpolant. We consider the Nyström interpolant (1.4) that is determined by the (2m + 1)-point weighted averaged quadrature rule Â_{2m+1} (2.8) associated with the m-point Gauss rule (2.4) used in (4.1). The determination of this interpolant requires the solution of the operator equation (4.5), that is, of the linear system of equations (4.6) with a matrix of order 2m + 1 and unknowns â_j = (f̂^{[1]}_{2m+1} u)(x̂_j). We assume as usual that m is large enough so that this system has a unique solution â = [â_1, â_2, . . . , â_{2m+1}]^T, which determines the weighted averaged Nyström interpolant f̂^{[1]}_{2m+1} in (4.7). We will use the difference (f̂^{[1]}_{2m+1}(y) − f^{(G)}_m(y)) u(y) as an estimate of the error in (f^{(G)}_m u)(y). Introduce the matrices D̂_{2m+1} = diag(u(x̂_1), . . . , u(x̂_{2m+1})) and Φ̂ = [φ̂_{ij}] ∈ R^{(2m+1)×(2m+1)}, whose entries φ̂_{ij}, i, j = 1, 2, . . . , 2m + 1, are determined by the kernel and the quadrature weights, and the vector ĝ = [(gu)(x̂_i)] ∈ R^{2m+1}. Then the linear system of equations (4.6) can be written as (4.8). The solution of this linear system by LU factorization requires about (16/3)m³ flops. Thus, the total computational effort required to solve both the systems (4.3) and (4.8) is about (18/3)m³ flops.

Theorem 4.1. For m sufficiently large, the system (4.8) has a unique solution, the error ∥(f − f̂^{[1]}_{2m+1})u∥_∞ tends to zero as the error of best polynomial approximation in W_u^r(D), and the ∞-norm condition number of the coefficient matrix in (4.8) is bounded independently of m.
Proof. To prove the first part of the theorem we have to show the two conditions (A) and (B). Let us first prove (A). We observe that the function k(·, y)f is in C_{u²}(D). By applying Corollary 2.5 to this function, with u² in place of u, we deduce a bound for the quadrature error. Therefore, multiplying both sides by u(y) and using [8, Equation 4.1], we obtain an estimate that tends to zero as m increases by the assumptions on k(·, y) and f. To prove (B), we show that K̂_{2m+1} maps C_u(D) into W_u^r(D). Indeed, from (4.5), the relevant derivative norm is bounded by virtue of the assumptions on the kernel and on the weight function, and by taking Corollary 2.5 into account. Hence, K̂_{2m+1} f ∈ W_u^r(D) and we can deduce the bound with positive constants c and C independent of m, M, and f. Condition (B) now follows and, consequently, ∥(K − K̂_{2m+1}) K̂_{2m+1}∥ tends to zero as m increases. The convergence estimate then follows by applying (4.9). A proof of the well-conditioning of the linear system is given in [5, p. 113]; we just have to replace the usual infinity norm by the weighted norm of the space C_u(D).

4.2. An approximate Nyström interpolant based on the splitting (2.11). The splitting (2.11) suggests the use of an approximate Nyström interpolant that is cheaper to compute than solving (4.8). The evaluation of the approximate interpolant proceeds as follows:

I. Determine and solve the linear system of equations (4.3) associated with the m-point Gauss rule. This yields the Nyström interpolant f^{(G)}_m u defined by (4.2).

II. Compute the (m + 1)-point Nyström interpolant that is associated with the quadrature rule G*_{m+1} in (2.13), with nodes and weights x*_j and λ*_j, respectively. Thus, we consider the equation (4.10). This leads to the Nyström interpolant (4.11), whose coefficients a*_j = (f*_{m+1} u)(x*_j), j = 1, 2, . . . , m + 1, are the unknowns in the linear system of equations (4.12). We can express this linear system in the form (I_{m+1} + Φ*) a* = g*, where the entries of the matrix Φ* are defined analogously to those of Φ in (4.3).

III. Combine the two interpolants according to the splitting (2.11) to obtain the approximate interpolant

f^{[2]}_{2m+1} = (β_{m+1} f^{(G)}_m + β_m f*_{m+1}) / (β_m + β_{m+1}).   (4.13)

The determination of this interpolant requires the solution of two linear systems of equations with matrices of orders m and m + 1, respectively. Their solution demands about (4/3)m³ flops. In particular, this includes the computational effort required to evaluate the Gauss–Nyström interpolant (4.2). Hence, the calculation of the interpolant (4.13) is cheaper than the computation of the interpolant (4.7). We will compare the accuracy of the approximations f̂^{[1]}_{2m+1} and f^{[2]}_{2m+1}, defined by (4.7) and (4.13), respectively, of the solution f of (1.1) in Section 6.
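Steps I and II, and the final combination, can be sketched as follows for a toy equation of the form (1.1) on [−1, 1] with dµ(x) = dx and u ≡ 1. The kernel k(x, y) = xy/2, the right-hand side g(y) = 4y/3, and the exact solution f(y) = y are our own choices, and the construction of G*_{m+1} follows our reading of the matrix (2.12).

```python
import numpy as np

def nystrom(x, w, kernel, g):
    """Solve the Nystrom linear system for a rule with nodes x and weights w,
    and return the interpolant (1.4) as a callable (unweighted case u = 1)."""
    A = np.eye(len(x)) + w[None, :] * kernel(x[None, :], x[:, None])
    a = np.linalg.solve(A, g(x))
    return lambda y: g(y) - np.sum(
        w * kernel(x, np.atleast_1d(y)[:, None]) * a, axis=1)

# Toy equation: f(y) + int_{-1}^{1} x*y/2 f(x) dx = 4y/3, solution f(y) = y
kernel = lambda x, y: x * y / 2.0
g = lambda y: 4.0 * y / 3.0

m = 8
beta = np.array([2.0] + [k * k / (4.0 * k * k - 1.0) for k in range(1, m + 2)])

# Step I: m-point Gauss rule; Step II: the (m+1)-point rule G*_{m+1}
xg, wg = np.polynomial.legendre.leggauss(m)
off = np.sqrt(beta[1:m + 1])
off[-1] = np.sqrt(beta[m] + beta[m + 1])       # our reading of (2.12)
xs, V = np.linalg.eigh(np.diag(off, 1) + np.diag(off, -1))
ws = beta[0] * V[0, :] ** 2

# Two small solves instead of one linear system of order 2m + 1
f_G = nystrom(xg, wg, kernel, g)
f_S = nystrom(xs, ws, kernel, g)

# Combine with the coefficients of the splitting (2.11)
theta = beta[m + 1] / (beta[m] + beta[m + 1])
f2 = lambda y: theta * f_G(y) + (1.0 - theta) * f_S(y)
```

For this smooth toy problem both constituent interpolants are exact, so the combination reproduces the solution to machine precision; the point of the sketch is the cost structure: two systems of orders m and m + 1 instead of one of order 2m + 1.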
We observed in Section 2 that for Chebyshev measures, the coefficients β_{m+1}/(β_m + β_{m+1}) and β_m/(β_m + β_{m+1}) in (2.11) equal 1/2. These coefficients also tend to 1/2 as m increases for other measures. When the coefficients equal 1/2, the Nyström interpolant (4.13) coincides with the averaged interpolant determined by the Gauss and anti-Gauss rules. The latter interpolant has been investigated in [19].
We conclude this subsection by showing the convergence and stability of the Nyström method based on the quadrature rule G*_{m+1}. This yields (4.11) and the convergence of the interpolant (4.13).
Theorem 4.2. Assume that N(I + K) = {0} in C_u(D), and let f be the unique solution of equation (1.1) for the right-hand side function g ∈ C_u(D). If the weight u satisfies the assumptions of Theorem 2.3 and the kernel function k is sufficiently smooth, then, for m large enough, equation (4.10) has a unique solution f*_{m+1}, and ∥(f − f*_{m+1})u∥_∞ tends to zero as m increases as fast as the error of best polynomial approximation in W_u^r(D). Finally, the condition number of the matrix (I_{m+1} + Φ*) in the ∞-norm is bounded independently of m, for m sufficiently large.
Proof. The assertions can be proved in the same way as those of Theorem 4.1, by applying Corollary 2.4 in place of Corollary 2.5.
Corollary 4.3. For m sufficiently large, ∥(f − f^{[2]}_{2m+1})u∥_∞ tends to zero as m increases, where f^{[2]}_{2m+1} is given by (4.13).

Proof. By (4.13) we can bound ∥(f − f^{[2]}_{2m+1})u∥_∞ in terms of ∥(f − f^{(G)}_m)u∥_∞ and ∥(f − f*_{m+1})u∥_∞, from which the assertion follows by considering that the coefficients tend to 1/2 as m → ∞ and by taking Theorems 4.1 and 4.2 into account.

5. Iterative methods for the evaluation of the Nyström interpolant (4.7). This section describes several iterative methods for computing approximations of the Nyström interpolant (4.7) that are more accurate than the approximation described in the previous subsection. This paper discusses some simple algorithms directly stemming from the quadrature rules used to compute (4.7). We prove their convergence under the assumption that the weight of the space C_u(D) is u(x) ≡ 1. However, in Section 6, we show by a numerical experiment that a suitable choice of the weight u may improve the rate of convergence. Other iterative methods could also be employed, such as Krylov subspace methods. Here, we focus on methods that exploit the structure of the problem.
Express the system (4.8) as the block system

[ I_m + Φ_{11}, Φ_{12} ; Φ_{21}, I_{m+1} + Φ_{22} ] [ b ; c ] = [ g_1 ; g_2 ],   (5.1)

where b = a^{(G)} ∈ R^m and c = a* ∈ R^{m+1}. The representation (5.1) suggests a few iterative solution methods. The first one we consider is a modification of the method considered in Subsection 4.2, which uses the computed LU factorizations in an iterative fashion. It is defined by the iteration (5.2), in which each step solves the two diagonal block systems in turn. Since the method is stationary, the LU factorizations of the coefficient matrices can be computed initially and then used in each iteration. Computing the vector c^{(0)} by the (m + 1)-point quadrature formula G*_{m+1} yields a quite accurate initial approximation of the second part of the solution. In actual computations, convergence is typically achieved within a fairly small number of iterations. The following results give sufficient conditions for the convergence of the method.
Theorem 5.1. Let the kernel of (1.1) satisfy condition (5.3), where β_0 = ⟨p_0, p_0⟩. Then, for sufficiently large m, the iteration process (5.2) converges to the vectors (5.4), that is, to the unique solution of system (4.8).
Theorem 5.2. Let the kernel of (1.1) satisfy condition (5.5), where M is a positive constant. Then, for sufficiently large m, the iteration process (5.2) converges to the unique solution (5.4) of system (4.8).
Proof. Proceeding as in the proof of Theorem 5.1, we can bound the error at each step. Since G_m is convergent, there is a sequence of constants ρ_m > 0, m = 1, 2, . . ., converging to 1, such that the corresponding bound holds for m sufficiently large. Recalling that θ_m converges to 1/2 as m increases, the contraction property follows. The rest of the proof is similar to that of Theorem 5.1.
The iterations (5.2) are terminated when two consecutive iterates are sufficiently close, that is, when ∥b^{(k+1)} − b^{(k)}∥_∞ and ∥c^{(k+1)} − c^{(k)}∥_∞ are below a chosen tolerance τ, or when a prescribed maximum number of iterations has been carried out. We denote the computed solution vectors by b and c, and the corresponding Nyström interpolant by f^{[3]}(y), y ∈ D.
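The iteration (5.2) can be sketched as a block Gauss–Seidel method on the system (5.1); this is our reading of the scheme, consistent with Remark 1 below (whose parallel variant replaces b^{(k+1)} by b^{(k)} in the second half-step). The random blocks in the demonstration are our own toy data, scaled so that convergence is guaranteed.

```python
import numpy as np

def block_iteration(Phi11, Phi12, Phi21, Phi22, g1, g2, tol=1e-13, maxit=200):
    """Block Gauss-Seidel sketch for the 2x2 block system (5.1). In practice
    the LU factorizations of I + Phi11 and I + Phi22 would be computed once
    and reused; solve() is called here for brevity."""
    m, n = len(g1), len(g2)
    A11, A22 = np.eye(m) + Phi11, np.eye(n) + Phi22
    b = np.zeros(m)
    c = np.linalg.solve(A22, g2)      # initial c from the second block alone
    for k in range(maxit):
        b_new = np.linalg.solve(A11, g1 - Phi12 @ c)
        c_new = np.linalg.solve(A22, g2 - Phi21 @ b_new)
        # stop when two consecutive iterates are sufficiently close
        if max(np.max(np.abs(b_new - b)), np.max(np.abs(c_new - c))) < tol:
            return b_new, c_new, k + 1
        b, c = b_new, c_new
    return b, c, maxit

# Toy data: small off-diagonal blocks guarantee a contractive iteration
rng = np.random.default_rng(0)
m, n = 5, 6
Phi11 = 0.05 * rng.standard_normal((m, m))
Phi12 = 0.05 * rng.standard_normal((m, n))
Phi21 = 0.05 * rng.standard_normal((n, m))
Phi22 = 0.05 * rng.standard_normal((n, n))
g1, g2 = rng.standard_normal(m), rng.standard_normal(n)
b, c, its = block_iteration(Phi11, Phi12, Phi21, Phi22, g1, g2)
```

The computed pair (b, c) agrees with the direct solution of the assembled block system, while only systems of orders m and m + 1 are ever factored.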
We next consider the alternative iterative method (5.6), which only requires the LU factorization of the matrix I_m + Φ_{11}. As for the method (5.2), the initial vector c^{(0)} is computed by the quadrature rule G*_{m+1}. We denote the Nyström interpolant corresponding to this iteration by f^{[4]}.
The convergence of the method (5.6) can be established by bounding its error vectors as in the proofs above. The last iterative method we consider is a Richardson-type method, defined by (5.7). Convergence can be established similarly as above. The LU factorization of a matrix is not required in this case, but convergence is the slowest among the iterative methods considered. We refer to the Nyström interpolant computed in this manner as f^{[5]}.

Remark 1. We note that the iterative methods (5.2), (5.6), and (5.7) may be implemented by replacing b^{(k+1)} by b^{(k)} in the right-hand side of the second equation of each method. This has the advantage that the two formulas can be evaluated simultaneously on a parallel computer, but it slightly decreases the rate of convergence.
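A generic Richardson-type iteration for a system Ax = g with A = I + Φ can be sketched as follows; the unit step size and the toy data are our own choices, and the iteration converges when the spectral radius of Φ is less than one.

```python
import numpy as np

def richardson(A, g, tol=1e-12, maxit=500):
    """Richardson-type iteration x_{k+1} = x_k + (g - A x_k) for A = I + Phi;
    no LU factorization is needed, and the iteration converges when the
    spectral radius of Phi is less than one."""
    x = np.zeros_like(g)
    for k in range(maxit):
        r = g - A @ x                 # residual of the current iterate
        if np.max(np.abs(r)) < tol:
            return x, k
        x = x + r                     # unit-step Richardson update
    return x, maxit

# Toy data with a small perturbation Phi, so convergence is guaranteed
rng = np.random.default_rng(1)
Phi = 0.05 * rng.standard_normal((8, 8))
A = np.eye(8) + Phi
g = rng.standard_normal(8)
x, its = richardson(A, g)
```

Since x_{k+1} = g − Φ x_k, the error contracts by a factor of about ∥Φ∥ per step, which is why this method is the slowest of the three when ∥Φ∥ is not small.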

6. Numerical examples.
To illustrate the performance of the averaged Nyström interpolants discussed in the previous sections, we apply them to the solution of three integral equations with different degrees of regularity.The computations were carried out in Matlab R2021b on a Debian GNU/Linux computer.
In the tables that report the results, the error symbols denote the difference in the uniform norm between the exact solution f and the corresponding Nyström interpolants. We approximate the uniform norm by evaluating the error at 10³ equispaced points in the interval (−1, 1).

Example 6. We first consider the equation (6.1), where g(y) = (1/32)(8 cos 2 − 4 cos 4 − 4 sin 2 + sin 4) e^y cos y + cos(3y). The exact solution is f(y) = cos 3y. We set α = β = 0 in the Jacobi weight function (3.1) and γ = δ = 0 in the weight u(x) (2.22) of the function space C_u([−1, 1]).
In the graph on the left of Figure 6.1, we plot the difference between the exact solution f and the approximations produced by the Gauss rule, the anti-Gauss formula, and the quadrature rule G*_{m+1} for m = 2. We can observe that the errors of the second and third Nyström interpolants are of opposite sign to the error in the Nyström interpolant determined by the Gauss rule for all x ∈ [−1, 1]. Table 6.1 shows the behavior of the Nyström interpolants determined by the three quadrature rules (Gauss, anti-Gauss, and G*_{m+1}), and compares them to the interpolants determined by the averaged and weighted averaged Gauss rules (f^{(A)}_{2m+1} and f̂^{[1]}_{2m+1}), and to the approximation f^{[2]}_{2m+1} of the latter. The number of quadrature nodes, m, ranges from 2 to 10.
It can be seen that, while the simple rules are essentially equivalent, the averaged rules lead to improved accuracy; in some cases the improvement is 6 significant decimal digits. The weighted averaged rule is always more accurate than the averaged rule, except when machine precision is reached, where the larger linear system to be solved increases error propagation. The approximated interpolant f^{[2]}_{2m+1} (4.13) is slightly less accurate than f^{[1]}_{2m+1} for m < 16, but it is about the same when m is large. We recall that the evaluation of f^{[2]}_{2m+1} is cheaper than the evaluation of f^{[1]}_{2m+1}. Due to propagation of round-off errors, the error in the interpolants does not decrease as m becomes larger than 128. The iterative methods prove to be more stable than the direct approaches, reaching machine precision for m = 256; see the last three columns in Table 6.3. The number of iterations decreases as the size of the problem increases, and the second iterative method appears to be more efficient than the first one, as the complexity is reduced. The accuracy of the third method is comparable, but the number of iterations grows with m, and exceeds the maximum number of allowed iterations (100) when m = 128.
To investigate the influence of the weight in the space C_u([−1, 1]), we repeat in Table 6.4 the computations for the larger values of m and γ = δ = 1.24 in (2.22). By (2.23), γ and δ should be less than 5/4; we choose their value to be slightly smaller than this bound. We observe that the accuracy is unaltered for the three iterative approaches, but the number of iterations decreases, showing that the weight u(x) acts as a preconditioner for the iterative algorithms. At the same time, the accuracy of the first direct method improves when m = 256. Table 6.5 reports the results obtained for the direct and iterative methods for computing the weighted averaged interpolant. The first two iterative methods exhibit very slow convergence for m ≤ 8, while they are fast for larger values of m. The third method diverges for m ≤ 16, but converges in just 2 iterations for larger values of m. This illustrates that convergence may occur only when m is large enough. This is not a significant drawback, since iterative methods are mainly intended for medium and large-scale problems.

7. Conclusions. This paper discusses and compares Nyström interpolants determined by Gauss, averaged Gauss, and weighted averaged Gauss quadrature rules, with a focus on the latter. The stability and accuracy of the Gauss rules used are investigated, and the convergence of the Nyström interpolants, and of iterative methods for their computation, is discussed. For many problems the interpolants based on averaged Gauss and weighted averaged Gauss rules are shown to perform well. A complete analysis of the iterative methods for a generic weight function u will be the subject of future research.

with K_{2m+1} f in place of f and by applying our assumptions on the kernel function. Therefore, by [5, Chapter 4], the operator I − K_{2m+1} is invertible in C_u(D) for m sufficiently large, and uniformly bounded. This completes the proof of the first assertion. The estimate of the error follows from [5, Theorem 4.1.2].

2, . . ., m + 1. III. Approximate the solution of the original equation (1.1) by a convex combination of the weighted Nyström interpolants computed in steps I and II, as suggested by the representation (2.11) of the quadrature rule (2.8).

Proposition 4.3. Assume that N(I + K) = {0} in C_u(D) and let f be the unique solution of equation (1.1) for the right-hand side function g ∈ C_u(D). Then, under the assumption of Theorem 2.3, the error vectors e^{(k+1)}_b = b^{(k+1)} − b and e^{(k)}_c satisfy lim_{k→∞} e^{(k)}_b = 0.
∥f^{(r)} φ^r u∥_∞. We conclude this section by showing the stability and convergence of the quadrature rule A_{2m+1}. Corollary 2.5. Under the assumptions of Theorem 2.3, the formula A_{2m+1} is stable, i.e., its quadrature sums are uniformly bounded with respect to m.
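The stability of A_{2m+1} can be checked numerically for the Legendre weight: its weights are positive and sum to μ_0 = 2 for every m, so the sums of the absolute values of the weights remain uniformly bounded. A sketch, assuming Laurie's construction of the anti-Gauss rule:

```python
import numpy as np

# Weights of the averaged rule A_{2m+1} = (G_m + anti-Gauss_{m+1}) / 2
# for the Legendre weight on [-1, 1].
def averaged_weights(m):
    def rule_weights(betas):
        # Golub-Welsch: weights from the symmetric Jacobi matrix.
        J = np.diag(np.sqrt(betas), 1)
        _, V = np.linalg.eigh(J + J.T)
        return 2.0 * V[0, :] ** 2          # mu_0 = 2 for the Legendre weight
    k = np.arange(1.0, m + 1.0)
    betas = k**2 / (4.0 * k**2 - 1.0)      # Legendre recurrence coefficients
    wg = rule_weights(betas[:-1])          # m-point Gauss weights
    betas[-1] *= 2.0                       # Laurie's anti-Gauss modification
    wa = rule_weights(betas)               # (m+1)-point anti-Gauss weights
    return np.concatenate([wg, wa]) / 2.0  # 2m+1 weights of A_{2m+1}
```

Since the weights come from squares of eigenvector components, positivity is automatic, and the sum of the weights equals μ_0 exactly for every m.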

Table 3.1. Quadrature errors for the integral I_1.

Table 3.2. Quadrature error estimates for G_m obtained by the averaged rules for the integral I_1.

Table 3.3. Quadrature errors for the integral I_2.

Table 3.4. Quadrature errors for the integral I_3.