Pollution and accuracy of solutions of the Helmholtz equation: A novel perspective from the eigenvalues

In research on the Helmholtz equation, the focus has been either on the accuracy of the numerical solution (pollution) or on accelerating the convergence of a preconditioned Krylov-based solver (scalability). While it is widely recognized that convergence properties can be investigated by studying the eigenvalues, information from the eigenvalues has not been used to study the numerical dispersion which drives the pollution error. Our aim is to bring the topics of accuracy and scalability together for the first time; instead of approaching the pollution error in the conventional sense, as the result of a discrepancy between the exact and numerical wavenumber, we show that the dispersion which drives the pollution error can also be decomposed in terms of the eigenvectors and eigenvalues. Using these novel insights, we construct sharper upper bounds for the total error independent of the grid resolution. While the pollution error can be minimized in one dimension by introducing a dispersion correction, this is not possible in higher dimensions, even for very simple model problems. For our model problem, a correction on the eigenvalues enables us to remove the pollution error and study it in full detail, in both one and two dimensions.


Introduction
The Helmholtz equation is widely used in applications ranging from geophysics to biomedical physics. Many researchers have contributed to the broad range of literature on this topic. In particular, the pollution effect has received a lot of attention due to its far-reaching consequences. In essence, the pollution effect is directly related to numerical dispersion errors caused by differences between the actual and numerical wavenumber [1][2][3][4]. This error grows with the wavenumber, as in the high-frequency range the solutions become highly oscillatory.
As a result of this discrepancy, there may be large errors between the actual solution and the obtained numerical solution. Therefore, the solution obtained using fast and efficient solvers may be severely inaccurate. The fact that the pollution effect for finite element and finite difference methods cannot be avoided in higher dimensions adds to the problem [2]. No simple solution exists, as it has been shown that for a certain accuracy, the number of grid points needed to retain that accuracy grows along with the wavenumber, albeit slower than the order of accuracy of the schemes. In particular, if we let k denote the wavenumber, n the problem size in one dimension and p the order of a finite difference or finite element scheme, then n = Ck^{(p+1)/p}, where C is a constant that only depends on the accuracy achieved [5]. Therefore, if we wish to increase k while keeping the accuracy of the same order, we need to increase n as well, which leads to larger linear systems. The literature has proposed several ways to mitigate this persistent issue. One branch has focused on formulating new higher-order discretization schemes. Among the first was a rotated 9-point finite difference scheme [6]. This method was extended by including a 'perfectly matched layer' (PML) [7]. In both works, optimal parameters for the difference scheme were computed in order to improve the accuracy of the numerical solution. A similar strategy was used for the three-dimensional Helmholtz problem, where the 9-point stencil was extended to a 27-point stencil [8]. Furthermore, some lines of work developed accurate higher-order schemes for the one- and two-dimensional Helmholtz equation, under the assumption that separation of variables can be used [9][10][11][12].
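The growth rule for the number of grid points can be illustrated with a short sketch; the constant C and the concrete wavenumbers below are illustrative choices, not values from the paper.

```python
# Sketch: grid points needed to keep the accuracy fixed for a p-th order
# scheme, following the rule n = C * k**((p+1)/p). C = 1 is an arbitrary
# illustrative choice.
def points_needed(k, p, C=1.0):
    return C * k ** ((p + 1) / p)

# For a second-order scheme (p = 2), doubling the wavenumber multiplies the
# required number of points by 2**(3/2) ~ 2.83, i.e. super-linear growth in k.
ratio = points_needed(200, 2) / points_needed(100, 2)
```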
In line with this strategy lies the use of compact finite difference schemes [5,13,14,15]. One advantage of compact schemes is that, unlike schemes with larger stencils, they require no additional boundary conditions. While both compact fourth- and sixth-order schemes were developed in the literature, it has been shown that at best sixth-order accuracy can be achieved using compact stencils for the Poisson, and thus inherently, the Helmholtz equation [16]. Apart from using compact higher-order finite difference schemes, others have incorporated wave-ray theory to obtain more accurate solutions [17] or have constructed a modified wavenumber which is closer to the exact wavenumber in order to reduce the numerical dispersion [18]. When using such strategies, all methods depend on a pre-specified propagation angle to provide an accurate solution, as the exact propagation angle is unknown. As a result, for specified angles an accurate solution can be obtained by either incorporating a modified wavenumber or by switching to a higher-order dispersion-corrected discretization. A combination of both has been studied by Cocquet et al.
[18], where the standard 5-point stencil is replaced by a parametrized 9-point difference scheme including a modified wavenumber. Very recently, using an asymptotic dispersion correction for two-dimensional constant-wavenumber problems, these methods have been shown to provide up to sixth-order accuracy for plane waves given an angle of propagation [19]. In this paper, we aim to provide a theoretical contribution to this field of research by introducing a novel perspective on the pollution error as a result of the numerical dispersion; we study the effect not explicitly by reducing it to a difference between the exact and numerical wavenumber, but by analysing the differences between the analytical and numerical eigenvalues. As we will be using the analytical and numerical eigenvalues, we will use a simple model problem from the literature with Dirichlet boundary conditions. Using this configuration, some new aspects come to light which are of paramount importance. First of all, we will be able to obtain the true characteristics and propagation of the pollution error as k grows due to numerical dispersion, instead of a linear dependence on k. As a result, we are able to construct novel yet sharp error estimates which reflect these characteristics. Second of all, we will be able to study the exact eigenmodes in higher dimensions which reflect the numerical dispersion. Consequently, for the first time, we can pinpoint the pollution effect for particular eigenmodes in one and two dimensions and minimize the pollution effect without keeping n uneconomically large. Moreover, for the two-dimensional model problem, the dispersion correction can be obtained for all angles simultaneously.
The paper is organized as follows. We start by defining the model problem in Section 2. Here we also provide the analytical solution, which we need in order to study the pollution error. Section 3 derives the pollution error in the conventional sense by looking at the difference between the analytical and numerical wavenumber. In Section 4 we construct our new upper bound using the eigenvalues as our source of information. We present numerical results in Section 5. The experiments are twofold: we show that the bound holds and we provide a way to apply a dispersion correction. We conclude by summarizing our findings in Section 6.

Problem definition
In this section we define two model problems. Following a similar approach in the literature, we use the constant-wavenumber model with Dirichlet conditions, such that the analytical solution and eigenvalues can be derived [9,10,11,12,14,19,20,21,22]. We therefore start by focusing on the following model problem, which we will refer to as MP 1. Working on the unit domain (L = 1), the second-order difference scheme with step-size h = 1/n leads to a discrete system. Using a lexicographic ordering, the linear system can be formulated exclusively on the internal grid points due to the homogeneous Dirichlet boundary conditions, and we obtain the corresponding system and eigenvalues. In order to investigate the pollution error in higher dimensions, we define MP 2 to be the two-dimensional version of the original model problem. Therefore, on the standard two-dimensional square unit domain Ω = [0, 1] × [0, 1] with constant wavenumber k we consider u(x, y) = 0, (x, y) ∈ ∂Ω,
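A minimal sketch of the second-order discretization of MP 1 follows; the grid convention h = 1/(n+1) with n interior points, and the parameter values, are assumptions of this sketch rather than choices fixed by the paper.

```python
import numpy as np

# Sketch: second-order finite difference matrix for -u'' - k^2 u = f on (0,1)
# with homogeneous Dirichlet conditions, built on n interior points.
def helmholtz_matrix_1d(n, k):
    """Return A = (1/h^2) tridiag(-1, 2, -1) - k^2 I with h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    main = np.full(n, 2.0 / h**2 - k**2)
    off = np.full(n - 1, -1.0 / h**2)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

A = helmholtz_matrix_1d(50, 10.0)
```

The eigenvalues of this matrix are known in closed form, which is what makes the spectral analysis in the following sections possible.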

Analytical solution
The general solution to our one-dimensional model problem is given in exponential form. However, apart from the general exponential form, we can also express the exact solution to MP 1 in terms of the Green's function G(x, x′), given that this contains the eigenvalues. We need to use the Green's function given that we are working with the non-homogeneous Helmholtz equation. We therefore seek a solution in terms of the Green's function, which satisfies the associated boundary value problem. To obtain the Green's function, we begin by rewriting the differential operator from MP 1 in the Sturm-Liouville form [23]. Let L(x) be the general Sturm-Liouville operator. Setting p(x) = −1 and q(x) = −k^2, we obtain the Sturm-Liouville operator for the Helmholtz boundary value problem, which we will continue to denote by L(x). Using the Sturm-Liouville operator for the Helmholtz problem, we can rewrite the problem accordingly and consider the related eigenvalue problem. Using the eigenfunction expansion, we can rewrite MP 1 (1), and normalizing with a factor √(2/L) gives the solution. Integrating over the eigenfunctions for the eigenvalue problem gives the expansion coefficients, and the Green's function for Eq. (6) follows. Consequently, on the unit interval, G(x, x′) satisfies the expansion. In the event that k^2 = j^2π^2, the eigenfunction expansion would become defective, as this would imply resonance and unbounded oscillations in the absence of dissipation. Therefore, we explicitly need to guard against the latter case and impose the extra condition k^2 ≠ j^2π^2, asserting that our Green's function exists.
Eq. (7) immediately provides us with an expression for the analytical eigenvalues. It is apparent that within the bounded domain [0, 1] there are an infinite number of eigenpairs. We employ this expression for the eigenvalues in upcoming sections, where we compare them with the numerical eigenvalues of the linear system of equations. We have expressed the exact solution to MP 1 as an eigenfunction expansion using the Green's function. A similar approach allows us to obtain the exact solution for the two-dimensional MP 2. The Green's function G(x, y, x′, y′) on the unit square follows analogously, where L(x, y) is the two-dimensional Sturm-Liouville operator corresponding to the Helmholtz equation from MP 2.
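The comparison between the analytical and numerical eigenvalues carried out in the upcoming sections can be sketched as follows; the formula λ̂_j = (2 − 2cos(jπh))/h² − k² assumes the standard second-order scheme with h = 1/(n+1), and the parameter values are illustrative.

```python
import numpy as np

# Sketch: analytical eigenvalues lambda_j = (j*pi)^2 - k^2 versus the
# numerical eigenvalues of the second-order scheme on the unit interval.
def eigenvalue_gap(n, k):
    h = 1.0 / (n + 1)
    j = np.arange(1, n + 1)
    lam = (j * np.pi) ** 2 - k**2                              # analytical
    lam_hat = (2.0 - 2.0 * np.cos(j * np.pi * h)) / h**2 - k**2  # numerical
    return lam - lam_hat  # positive: the scheme undershoots every eigenvalue

gap = eigenvalue_gap(100, 20.0)
```

Note that the leading term of the gap behaves like j⁴π⁴h²/12, which matches the truncation error of the second-order scheme and foreshadows the bounds of Lemma 1.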

Error bounds
We now briefly explain the classical error bound for the pollution error. It was mentioned that, in order to keep the pollution error at bay, the grid should be refined such that k^3h^2 < 1 [1,24]. Such a severe restriction on the step-size is necessary, as the accuracy of the numerical solution deteriorates rapidly when the wavenumber increases. In fact, the numerical wave has dispersive properties which are not present in the analytical wave. Consequently, a phase shift occurs, which forms the primary source of error in the pollution term. Thus, in the case of FEM and FDM solutions, a phase lag between the computed and the exact wave is directly related to the dispersive character of the discrete medium (i.e. the computed wave does not propagate at the speed of sound), which causes a difference between the exact and numerical wavenumber. This effect accumulates in the pollution term as k increases.

Numerical dispersion
To understand how the pollution error depends on the numerical dispersion and consequently on the wavenumber k, note the definition of the dimensionless wavenumber, where 2πf denotes the angular frequency and λ denotes the phase velocity. Discretizing the one-dimensional Helmholtz equation leads to the standard difference scheme. Moreover, a general continuous solution is given in plane-wave form. Evaluation of expression (15) in the discrete points gives the discrete plane wave. Here i denotes the imaginary unit and k̂ represents the perturbed wavenumber due to having a velocity which differs from the speed of sound. Substituting Eq. (16) into (14) results in Eq. (17), which is a good approximation of the exact solution if k̂ solves the discrete dispersion relation (18). Applying Taylor's expansion to the cosine term and substituting into Eq. (18) gives the a priori error estimate. The factor Ck^3h^2 can be decomposed as follows: O(k^2h^2) provides the error in the numerical wave speed for a wave travelling one period. The extra factor k is called the pollution error and corrects the total pollution error by scaling the error over one wave length by the total number of wave lengths travelled over the entire numerical domain [4,24].
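For the second-order scheme, the discrete dispersion relation can be solved for the numerical wavenumber in closed form; a sketch, with illustrative parameter values:

```python
import numpy as np

# Sketch: the numerical wavenumber of the second-order scheme solves
# 2*(1 - cos(k_hat*h))/h^2 = k^2, so k_hat = arccos(1 - (k*h)**2/2) / h
# (valid while k*h < 2). The leading phase error behaves like k**3 * h**2 / 24.
def numerical_wavenumber(k, h):
    return np.arccos(1.0 - (k * h) ** 2 / 2.0) / h

k, h = 50.0, 1.0 / 1000
k_hat = numerical_wavenumber(k, h)
phase_error = k_hat - k  # approximately k**3 * h**2 / 24
```

The k³h² scaling of the phase error is exactly the factor that appears in the classical pollution bound discussed below.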
In [1] it was noted that the error given in Eq. (19) mainly relates to the dispersion caused by the differing wavenumbers. The total error for the discretized one-dimensional Helmholtz operator is given in Eq. (20). While applying the rule of thumb kh ≤ 0.625 is sufficient for keeping the first term under control, it does not guard properly against the propagation of the pollution error, which grows with k even if kh is kept small enough. Thus, it has been advocated to set the grid resolution to k^3h^2 ≤ ϵ instead of kh ≤ 0.625. Deraemaeker et al. [1] and Ainsworth [4] have proved that while it is possible to eliminate the pollution effect in one-dimensional Helmholtz problems by implementing a modified wavenumber, a similar conclusion cannot be extended to higher-dimensional problems; see Section 3.2 for more details. As a result, much research has been conducted towards minimizing the pollution error. Note that the bound in Eq. (20) also holds in higher dimensions, as long as the second-order finite difference method is used. For any general pth-order scheme, we obtain the following error bound:

Dispersion correction
As mentioned earlier, it is possible to eliminate the pollution error for the one-dimensional MP 1. Recall from the previous section the discretization of MP 1 using second-order finite differences (22), with general solution (23). Evaluation of expression (23) in the discrete points leads to expression (24), which can be considered as plane-wave solutions of the discrete homogeneous Helmholtz equation, where k̂ represents the numerical wavenumber. Substituting (24) into (22) and using Euler's trigonometric identity to decompose the exponential function leads to (25). If we want to eliminate the discretization error introduced into the scheme, we need to set k̂ = k.
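A sketch of this one-dimensional correction: replacing k² by the modified value 2(1 − cos(kh))/h² in the difference scheme makes the stencil annihilate the exact discrete plane wave. The grid and parameter values below are illustrative.

```python
import numpy as np

# Sketch of the 1-D modified-wavenumber correction for the second-order scheme.
def modified_wavenumber_sq(k, h):
    """Modified k^2 such that e^{ikx} solves the discrete homogeneous problem."""
    return 2.0 * (1.0 - np.cos(k * h)) / h**2

k, h = 30.0, 0.01
km2 = modified_wavenumber_sq(k, h)

# Check: the discrete stencil applied to the exact plane wave e^{i k x_j}
# vanishes when k^2 is replaced by the modified value.
x = np.arange(1, 4) * h
u = np.exp(1j * k * x)
residual = (-u[0] + 2 * u[1] - u[2]) / h**2 - km2 * u[1]
```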
Unfortunately, this approach only works for one-dimensional problems. To see this, we look at the two-dimensional second-order finite difference scheme (26). Again, using plane-wave solutions, we can write u(x, y) = e^{i(k_1 x + k_2 y)}, with (k_1, k_2) = (k cos θ, k sin θ). Evaluating the solution in the discrete grid points (x_i, y_j) gives u(x_i, y_j) = e^{i(k̂_1 x_i + k̂_2 y_j)}, where (k̂_1, k̂_2) = (k̂ cos θ̂, k̂ sin θ̂) denotes the numerical wavenumber. Substituting these expressions into the difference scheme (26) yields Eq. (27). Generally, the direction θ of the plane waves is unavailable. This is due to the fact that plane waves propagate in an infinite number of directions, and even if there are directionally prevalent components in this decomposition, they are not necessarily known a priori [3,25]. Therefore, in order to solve for k̂ and obtain a two-dimensional dispersion correction, Eq. (27) needs to be minimized over all angles θ, which remains problematic.

Pollution and spectral properties
The vast majority of works regarding the pollution error focuses on developing numerical discretization schemes to mitigate the pollution effect. Note that in order to study the pollution error, the analytical solution must be known, which limits the scope of potential test problems. Moreover, the a priori upper bound from expression (20) shows that the pollution error can be bounded from above by a term which grows linearly with k. This bound is known to be sharp, but provides little detail regarding the underlying characteristics with respect to its dependence on the numerical dispersion. As we have seen in Section 3.2, this becomes even more problematic in higher dimensions.
Thus, in order to investigate how the numerical dispersion translates explicitly into the pollution error, we will use the information from the eigenvalues. To our current knowledge, this provides a novel theoretical perspective on the pollution error; how the pollution effect influences spectral properties and vice versa has remained an unconventional approach in researching the pollution error. We will start by looking at the differences between the exact and numerical solution of MP 1. The explicit use of the eigenvalues requires that we use a model problem with Dirichlet boundary conditions. The latter model problem has also been researched using the conventional method [11,14,20,21].

General properties
Recall from Section 2.1 the one-dimensional MP 1. We also showed that the analytical solution u(x, x′) can be expressed in terms of the Green's function. If we define u_j = u(x_j), j = 1, 2, . . ., n, where u is evaluated at the discrete grid points, we can represent the n-term finite solution as a vector u(x), where x = [x_1, x_2, . . ., x_n]^T and v_j(x) = sin(jπx)/∥sin(jπx)∥ is the jth orthonormal eigenvector corresponding to the jth eigenvalue. The eigenvectors are exact discretizations of the continuous eigenfunctions. Note that the denominator of each term in the sum consists of the analytical eigenvalues. The right-hand side function f(x) of MP 1 is known and can also be represented in the same basis of orthonormal eigenvectors. Similarly, we can write the numerical solution vector û in this basis, where λ̂_j are the numerical eigenvalues. We will proceed by using the notation u, û and f respectively.
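The spectral representation of the numerical solution described above can be sketched as follows; the grid convention h = 1/(n+1), the discrete point source, and all parameter values are assumptions of this sketch.

```python
import numpy as np

# Sketch: u_hat = sum_j (f_j / lambda_hat_j) v_j, using the exact discrete
# eigenvectors v_j ~ sin(j*pi*x) of the 1-D Dirichlet operator.
n, k = 200, 25.0
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
j = np.arange(1, n + 1)

V = np.sin(np.outer(x, j) * np.pi)       # column j: eigenvector sin(j*pi*x_i)
V /= np.linalg.norm(V, axis=0)           # orthonormalize the columns
lam_hat = (2 - 2 * np.cos(j * np.pi * h)) / h**2 - k**2   # numerical eigenvalues

f = np.zeros(n)
f[n // 2] = 1.0 / h                      # discrete point source
u_hat = V @ ((V.T @ f) / lam_hat)        # spectral solve of A u_hat = f
```

Because the discrete sine vectors are exactly orthogonal, this expansion reproduces the direct solve of the linear system, which is what makes the term-by-term pollution analysis possible.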

One-dimensional spectral properties
We now have a simple expression which can be decomposed into terms containing the eigenvalues. This allows us to identify the polluting terms of the numerical solution. We start by investigating some general properties of the differences between the analytical and numerical eigenvalues.
Lemma 1 (Difference Eigenvalues). Let λ_j be the analytical eigenvalue and λ̂_j be the numerical eigenvalue for j = 1, . . ., n, where n > π. Then the difference between the eigenvalues is bounded from above by (32) and from below by (33). Proof. We start by showing expression (32), beginning from the difference between the eigenvalues.
Substituting the power series for the cosine term and letting ζ represent our cut-off point, we obtain the bound.
This gives us an upper bound for the difference between the analytical and numerical eigenvalue. To construct the lower bound in expression (33), we need to show the reverse inequality. We again substitute the power series for the cosine term in the difference equation of the eigenvalues.
(35) Thus, in order to show that this holds for all j, we need to show that each term in parentheses is non-negative. We can write expression (35) as (36). The sum on the left-hand side of expression (36) will be greater than the right-hand side if we can prove that each grouped term is non-negative. Thus, we need to show this for each j = 1, 2, . . ., n. For a positive integer j and 0 < h < 1, this boils down to showing inequality (38) for each j = 1, 2, . . ., n.
Given that the right-hand side of inequality (38) is strictly increasing with respect to j, we can evaluate the minimum at j = 1 and the maximum at j = n to evaluate the lower bound.
Here we used that h = n^{-1} < 1, where n > π. In both cases, and already for the smallest value of l (l = 2), the statement holds. Consequently, the lower bound must hold. □ Corollary 2 (Bound for Analytical Eigenvalue). Let λ_j be the analytical eigenvalue and λ̂_j be the numerical eigenvalue for j = 1, . . ., n, where n > π. Then for each j, the analytical eigenvalue λ_j is bounded in terms of the numerical eigenvalue λ̂_j plus a correction term.
Proof. This result follows directly from Lemma 1. □ Note that the upper and lower bounds depend on the truncation error of the numerical discretization method. We will use Lemma 1 and Corollary 2 to obtain a more detailed understanding of the pollution error and how the numerical dispersion contributes to it. Moreover, we aim to find the eigenmodes which are responsible for this dispersive pattern. By writing the numerical eigenvalue as a function of the discretization error approximating the analytical eigenvalue, we can propose a dispersion correction depending on the discretization scheme (see Section 4.1).

Corollary 3 (Sum Eigenvalues). Let λ_j be the analytical eigenvalue and λ̂_j be the numerical eigenvalue for j = 1, . . ., n. Then the sum of the reciprocals of the analytical eigenvalues can be bounded in terms of the numerical eigenvalues. Proof. We use Corollary 2. By taking the minimum, we ensure that the analytical eigenvalue is bounded in terms of magnitude. This is necessary as both the continuous and discrete operators are indefinite, which leads to positive and negative eigenvalues. Taking the reciprocal and summing over all eigenvalues gives the statement. □ Lemma 1 and Corollary 3 provide us with a way to express the analytical eigenvalues in terms of the numerical eigenvalues by adding a correction term. This correction term depends on the truncation error of the discretization method. We can now construct an upper bound for the error term between the exact and numerical solution in the theorem below.
Theorem 4 (Pollution). Let u be the (exact) solution to MP 1 and let û be the numerical solution obtained by solving Aû = f, where A is a non-singular matrix obtained by using a pth-order finite difference scheme. If kh is kept constant, then the absolute error in the L^2-norm is bounded from above as stated. Proof. Using the expansion for the right-hand side function f(x), we can write the numerical solution vector û in the eigenvector basis. Note that this is based on the eigenfunctions evaluated at the discrete grid points and scaled to yield an orthonormal basis (see Section 4.1). Consequently, we can write the error in the 2-norm, where we used that the eigenvectors are orthonormal and each sine term containing the location of the source is less than one. We would like to find an upper bound for expression (41). We can use Lemma 1 and Corollary 3 to provide element-wise upper bounds. From Lemma 1 it follows that the numerator can be bounded, and for the denominator, Corollary 3 provides the required bound.

Two-dimensional spectral properties
In this section we will extend the results from Section 4.2 to the two-dimensional case for MP 2. We start by defining the error estimation for the two-dimensional case.
Lemma 5 (Difference Eigenvalues). Let λ_{i,j} be the analytical eigenvalue and λ̂_{i,j} be the numerical eigenvalue for i, j = 1, . . ., n, where n > π. Then, analogously to Lemma 1, the difference between the eigenvalues is bounded from above and from below. Proof. Similar to the one-dimensional case, substituting the power series for both the ith and jth cosine terms and letting ζ represent our cut-off point, we obtain the upper bound.
To construct the lower bound, we again substitute the power series for the cosine terms in the difference equation.
We can write this as (46). The sum on the left-hand side of expression (46) will be greater than the right-hand side if we can prove that each grouped term is non-negative. Thus, we need to show this for each i, j = 1, 2, . . ., n. For positive integers i, j and 0 < h < 1, this boils down to showing inequality (48) for each i, j = 1, 2, . . ., n and l ≥ 2. Given that the right-hand side of inequality (48) is strictly increasing with respect to i and j, we can evaluate the minimum at i, j = 1 and the maximum at i, j = n to evaluate the lower bound.
Here we used that h = n^{-1} < 1, such that nh = 1 and n > π. In both cases, and already for the smallest value of l (l = 2), the statement holds. Consequently, the lower bound must hold. Similar to the one-dimensional case, we can now bound the analytical eigenvalues in terms of the numerical eigenvalues by using the lower bound.

Corollary 6 (Sum Eigenvalues 2D). The sum of the reciprocals of the analytical eigenvalues can be bounded in terms of the numerical eigenvalues, where we let λ̃_{i,j} denote the minimum, in magnitude, of the corrected numerical eigenvalues. The proof is exactly the same as in the one-dimensional case: using the lower bound and taking the reciprocal of each respective term gives the statement after summing over all i and j. □ We can use Lemma 5 and Corollary 6 to find a similar upper bound for the two-dimensional pollution error. We proceed by extending Theorem 4 to the two-dimensional case.

Corollary 7 (Pollution 2D). Let u be the (exact) solution to MP 2 and let û be the numerical solution obtained by solving
Aû = f, where A is a non-singular matrix obtained by using a pth-order finite difference scheme. If kh is kept constant, then the absolute error in the L^2-norm is bounded from above in terms of the numerical eigenvalues λ̂_{i,j}. Proof. See the proof of Theorem 4 for the one-dimensional case and extend it to the case where the index i also runs from 1 to n. □ We now have an upper bound for the total error in terms of the numerical eigenvalues. If we compare this to the conventional pollution term, we observe that the explicit linear dependence on k has been replaced by an explicit dependence on a superposition of the numerical eigenvalues. One advantage of writing the upper bound in this way is that we can immediately observe that the pollution error can be minimized in both one and two dimensions for this model problem. Even for this simple model problem, the latter was deemed impossible due to the wave travelling in an infinite number of directions in the two-dimensional case, see Section 4.4.1. It is easy to see that if we can minimize the largest term of the sum, then all other terms, which are by definition smaller, will allow the total sum to be minimized as well. Corollary 8 (Minimized Pollution 2D). Let u be the (exact) solution to MP 2 given by expression (29) and suppose the L^2-norm of the exact solution is always smaller than 1, i.e.
∥u∥ < 1. Let (i_min, j_min) and (î_min, ĵ_min) denote the locations of the smallest analytical and numerical eigenvalues respectively. Then the relative error is bounded as stated. Proof. Note that the reciprocal of the smallest analytical eigenvalue in terms of magnitude is the largest term in the set of the reciprocals of both the analytical and numerical eigenvalues. Now, unless (i_min, j_min) = (î_min, ĵ_min) and λ_{i_min,j_min} ≈ λ̂_{i_min,j_min}, the difference between the reciprocals will be largest there, and thus it will provide the largest contribution to the sum. As a result, the bound must hold for all i, j = 1, 2, . . ., n. Each (i, j)-term can be bounded from above by the left-hand side of inequality (50). Substituting for each term in the upper bound from Corollary 7 gives the statement. □ This corollary reveals the paramount importance of the accuracy of the near-zero eigenvalues and eigenvectors. These dictate the upper bound for the remaining terms in the sum. If the near-zero eigenmodes are approximated with high accuracy, then the dispersion part of the pollution error can be minimized. This also means that if we need a rough estimate which is in the ballpark of the true error, we can simply take the reciprocal of the smallest eigenvalue in magnitude, due to its largest contribution to the entire sum. In the next section we will use these results to construct a dispersion correction for the one- and two-dimensional model problems.

Eigenvalue based dispersion correction
Using this novel perspective, we can construct a dispersion correction using the eigenvalues and eigenvectors. Note that for the one-dimensional MP 1 this can easily be constructed and produces results similar to using the modified wavenumber, see Section 4.4.1. However, one advantage we now have is that we can use the same method for the higher-dimensional problem MP 2 to explicitly study how the numerical dispersion translates into the pollution error. In the next section, we will provide numerical evidence for the accuracy, ranging from fine to very coarse grids (kh ≥ 1). The latter will allow us to solve and study the current model problem very intricately, while keeping the problem size economically feasible compared to determining the step-size according to k^3h^2 ≤ 1.

One-dimensional dispersion correction
We start by rewriting our original system as follows. Note that for our matrix A, if λ̂_j is an eigenvalue of A corresponding to eigenvector v_j, then λ̂_j + c is an eigenvalue of (A + cI). Consequently, if the analytical solution is known, a very simple remedy to obtain better accuracy according to our proposition would be to let c = −λ̂_{j_min} + λ_{j_min}. (52) This alleviates the mismatch between the exact near-zero eigenvalue and the numerical eigenvalue at index j_min. Recall from Section 3.2 that the pollution error for MP 1 can be eliminated by incorporating a modified wavenumber k̃. The latter represents an explicit correction of the wavenumber with respect to the dispersion error. Consequently, we can test for the elimination of pollution by comparing the relative error between the exact and numerical solution after solving the two corrected systems. We furthermore denote û_k̃ : Ãû_k̃ = f and û_c : A_c û_c = f.
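The shift correction (52) can be sketched directly on the eigenvalues of the second-order scheme; the parameters below are illustrative.

```python
import numpy as np

# Sketch: shift c = lambda_{j_min} - lambda_hat_{j_min}. Adding c*I moves every
# numerical eigenvalue by c, so the near-zero mode (the dominant pollution
# term) matches its analytical counterpart exactly.
n, k = 300, 40.0
h = 1.0 / (n + 1)
j = np.arange(1, n + 1)
lam = (j * np.pi) ** 2 - k**2                                  # analytical
lam_hat = (2 - 2 * np.cos(j * np.pi * h)) / h**2 - k**2        # numerical

jmin = np.argmin(np.abs(lam_hat))     # index of the near-zero eigenvalue
c = lam[jmin] - lam_hat[jmin]         # shift aligning that mode
lam_shifted = lam_hat + c             # spectrum of A + c*I
```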
For the one-dimensional case, our results from Section 4.4.1 suggest that adding the constant c is often enough to alleviate the adverse effects of numerical dispersion. However, in some cases, and especially for the two-dimensional model problem, we need a way to shift several smaller eigenvalues while keeping the corresponding eigenvectors unchanged. The reason for this is that in the two-dimensional case the smallest eigenvalue may have a higher algebraic multiplicity, with several corresponding locations (i_min, j_min), and consequently there may be more than one value for c. In order to circumvent this difficulty, we will make use of some theorems, starting with Brauer's theorem [26].
Theorem 9 (Brauer). Let A be a diagonalizable matrix with Av_j = λ_j v_j and suppose r is a vector such that r^⊺v_j = 1. Then for any scalar λ̃_j, the eigenvalues of the matrix Â = A + (λ̃_j − λ_j)v_j r^⊺ consist of those of A, except that one eigenvalue λ_j of A is replaced with λ̃_j. Moreover, the eigenvector v_j is unchanged, that is Âv_j = λ̃_j v_j.
Proof. For a proof see [26]. □ Corollary 10. Let A be a diagonalizable matrix with Av_j = λ_j v_j and take r = v_j. Then for any scalar λ̃_j, the eigenvalues of the matrix Â consist of those of A, except that one eigenvalue λ_j of A is replaced with λ̃_j. Moreover, all the eigenvectors remain unchanged.
Proof. By the diagonalization property of A, we can write A = PΣP^{-1}, where Σ is the diagonal matrix containing the eigenvalues of A. Then v_j lies in the jth column of P. Let e_j be the jth column of the identity matrix. Then we can take Â = A + (λ̃_j − λ_j)P(e_j e_j^⊺)P^{-1}, where r^⊺ = e_j^⊺P^{-1} is precisely the jth row of the matrix P^{-1}. □ Using the above theorem and corollary, we can correct each eigenvalue without shifting the eigenvectors of the previous system. Our dispersion correction for the two-dimensional case will apply the above theorem recursively, which is extended in the following lemma.
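Corollary 10 can be illustrated on a small symmetric toy matrix, where the eigenvectors are orthonormal and the rank-one update leaves them unchanged; the matrix and target value below are illustrative.

```python
import numpy as np

# Sketch of Corollary 10: for symmetric A with orthonormal eigenvectors,
# A_new = A + (target - lam_j) * v_j v_j^T replaces the j-th eigenvalue by
# `target` while leaving every eigenvector of A unchanged.
A = np.diag([1.0, 2.0, 5.0])
lam, V = np.linalg.eigh(A)           # eigenpairs of the toy matrix
jidx, target = 0, 3.5                # replace the smallest eigenvalue by 3.5
v = V[:, jidx]
A_new = A + (target - lam[jidx]) * np.outer(v, v)
lam_new = np.sort(np.linalg.eigvalsh(A_new))
```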

Numerical results
We start by examining the error estimates for the pollution error for MP 1 and MP 2. In both cases we evaluate how close our error estimates lie relative to the true error. We then continue by examining the performance of our eigenvalue-based dispersion correction for both model problems. We mentioned that the conventional approach to studying pollution focuses on the notion of a discrepancy between the numerical and exact wavenumber k. In these instances, the exact solution is generally expressed in exponential form, and the eigenvalues are not expressed explicitly. An interesting observation is that this discrepancy between the numerical and exact wavenumber manifests itself through inaccurate near-zero eigenvalues. Thus, if the numerical eigenvalues were better approximations of their continuous counterparts, then we expect the relative error to decrease. Section 4.4.1 contains the results for MP 1, while Section 4.4.2 covers MP 2. All one-dimensional systems are solved using a direct method in Matlab R2018a. For the two-dimensional model problems with large k (k > 300), we use a standard preconditioned GMRES solver to obtain the numerical solution, due to the increasing density of the coefficient matrix.

Error estimation
In Fig. 1 we plot the relative error (red) for random values of k between 100 and 2000 and the upper bound (blue) based on Theorem 4. Additionally, the dashed line is the reciprocal of the smallest numerical eigenvalue in magnitude. This allows us to assess how closely this estimate tracks the true relative error.
From Fig. 1 we can see that the upper bound always holds, as the blue line is always either above or exactly on the red line. The error (red) never exceeds the upper bound (blue), and the bound is sharp. Moreover, the figure shows that the true error behaves more erratically and has a more oscillatory nature, which is in direct relation to the smallest eigenvalue in magnitude (dashed line). For example, k = 1000 yields a true relative error of 1.493. If we use the bound where the error grows linearly with k, then the pollution term is estimated to be bounded by k³h² = 390.625. Using the information from the eigenvalues, our upper bound gives 3.238. Note that the true error (red) follows an oscillatory pattern with peaks appearing for certain k. These are instances where one of the eigenmodes is close to a resonant mode and the numerical approximation is poor. If λ_{j_min} or λ̂_{j_min} is closer to zero than its counterpart, the reciprocal becomes very large. As the intrinsic oscillatory behaviour of the actual error becomes visible, we observe that the proxy based solely on the smallest eigenvalue (dashed black line) provides a close representation of the actual relative error. Thus, a lot of information can be deduced by simply looking at the smallest eigenvalue in magnitude. Note that the proxy is meant to serve as an estimate of the true relative error and not as an upper bound; in some cases it underestimates the actual relative error.
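The spirit of this comparison can be reproduced in a short sketch (ours; it again assumes the standard second-order discretization of the one-dimensional Dirichlet problem, and the modal coefficients of the source are a random illustrative stand-in, not the paper's right-hand side):

```python
import numpy as np

k = 1000.0
n = 1600                          # kh is roughly 0.625
h = 1.0 / (n + 1)
j = np.arange(1, n + 1)

lam_num = (2.0 - 2.0 * np.cos(j * np.pi * h)) / h**2 - k**2
lam_ex = (j * np.pi) ** 2 - k**2

# Illustrative modal coefficients of the right-hand side.
rng = np.random.default_rng(1)
fhat = rng.standard_normal(n)

# Because the sine eigenvectors are orthogonal, relative errors in the modal
# coefficients equal relative errors in the solution itself.
u_num = fhat / lam_num
u_ex = fhat / lam_ex
rel_err = np.linalg.norm(u_num - u_ex) / np.linalg.norm(u_ex)

# Proxy: reciprocal of the smallest numerical eigenvalue in magnitude.
proxy = 1.0 / np.abs(lam_num).min()
```

The modal form makes visible why the near-zero eigenvalue dominates: each mode contributes |1/λ_num − 1/λ_ex|, which blows up exactly when either eigenvalue approaches zero.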

One-dimensional dispersion correction
For the one-dimensional case, we will use the dispersion correction in Eq. (52). It is also possible to correct each eigenvalue individually in order to obtain very accurate solutions. However, the simple correction with respect to the smallest eigenvalue produces results comparable to using the modified wavenumber k̃, which is known to reduce the pollution error to a satisfactory level. Thus, we start by adding the correction term c, which is based on terms of the truncation error, to the coefficient matrix A. This alleviates the mismatch between the exact near-zero eigenvalue and the numerical eigenvalue at index j_min. Recall from Section 3.2 that the pollution error for MP 1 can be eliminated by incorporating a modified wavenumber k̃, which represents an explicit correction of the wavenumber with respect to the dispersion error.
Consequently, we can test for the elimination of pollution by comparing the relative error between the exact and numerical solution after solving the following two systems.
We furthermore denote by û_k̃ the solution of Ãû_k̃ = f, where à contains the modified wavenumber k̃, and by û_c the solution of Âû_c = f, where  contains the eigenvalue correction c. Table 1 contains the results for randomly chosen wavenumbers k between 100 and 1000 using 10 grid points per wavelength (kh = 0.625) and approximately 6 grid points per wavelength (kh = 1). The latter represents the results of applying the dispersion correction on a very coarse grid. The reason we consider a coarse grid is that in the absence of dominating pollution, which has been corrected by either k̃ or c, we should be able to obtain accurate results. The results from Table 1 show that using the eigenvalue correction c leads to a significant reduction of the relative error. In some instances it provides even better accuracy than using the adjusted wavenumber k̃. Similar conclusions can be drawn from the results when letting kh = 1. While e_c occasionally exceeds e_k̃, we see that e_k̃ is much more sensitive to changes in the grid resolution. In particular for k̃, the average error appears to be fixed around 0.06 for kh = 0.625 and increases to about 0.18 for kh = 1, whereas even for kh = 1 further reductions of the error can be obtained by using the eigenvalue correction c.
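A minimal sketch of the eigenvalue correction is given below (ours; it assumes the standard second-order one-dimensional Dirichlet setup, uses random illustrative modal coefficients, and, instead of the truncated series of Eq. (52), takes c directly as the exact-minus-numerical gap at j_min, which the series approximates):

```python
import numpy as np

k = 100.0
n = 160                          # roughly 10 grid points per wavelength
h = 1.0 / (n + 1)
j = np.arange(1, n + 1)

lam_num = (2.0 - 2.0 * np.cos(j * np.pi * h)) / h**2 - k**2
lam_ex = (j * np.pi) ** 2 - k**2

# Constant correction targeting the smallest-magnitude eigenvalue:
# adding c*I shifts every eigenvalue by c and leaves the eigenvectors alone.
jmin = int(np.argmin(np.abs(lam_num)))
c = lam_ex[jmin] - lam_num[jmin]

rng = np.random.default_rng(2)
fhat = rng.standard_normal(n)    # illustrative modal source coefficients

u_ex = fhat / lam_ex
e_plain = np.linalg.norm(fhat / lam_num - u_ex) / np.linalg.norm(u_ex)
e_corr = np.linalg.norm(fhat / (lam_num + c) - u_ex) / np.linalg.norm(u_ex)
```

The constant shift corrects the near-resonant mode exactly and its neighbours approximately, since the truncation error varies slowly with the mode index there; the far modes are perturbed only negligibly because their eigenvalues are large.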

Error analysis
In this section we provide numerical results for MP 2. We start by presenting the error and the upper bound using the eigenvalues in Fig. 2. To illustrate the pollution effect, we then present the solution and error for various examples in Figs. 3 and 4.
Starting with Fig. 2, we observe that the upper bound always holds. Similar to the one-dimensional case, we again observe the oscillatory nature of the true error. The spikes in the error provide more insight than the linear relation between k and the growing error alone. From Fig. 2 we additionally notice that for almost all k, the relative error is larger than one. While the upper bound is of the same order as the true error, it is often larger than the true error. Yet, it follows the same oscillatory pattern as the true error, from which we can deduce how much each eigenmode contributes to the error. For the first time to our knowledge, we are therefore able to break down and study the dispersive property of the numerical solution in higher dimensions. The oscillatory error pattern also reveals that the largest contribution in terms of dispersion can be attributed to the smallest eigenvalues, which drive the total sum in Corollaries 6 and 7.

Table 1. Relative error e before and after dispersion correction using the eigenvalue correction c and k̃ for kh = 0.625 (left) and kh = 1 (right).
Secondly, as mentioned previously, in some cases the upper bound is much larger than the actual error. This can be understood by noting that in this model problem the source is located at the centre of the numerical domain. Thus, at all even indices j, the sine term related to the source will be zero and these terms will not be included in the sum. In cases where we see an overshoot, either the smallest numerical or analytical eigenvalue is located at an even index. While that eigenvalue does not contribute to the actual error, being eliminated by the sine term containing π/2, it is still included in our upper bound. Note that in constructing the upper bound, we do not differentiate between even and odd indices. The reason for this is that we prefer an upper bound which covers the worst-case scenario and does not depend on the location of the point source for this model problem.
To illustrate the full pollution effect, we continue by plotting solutions for several values of k. We have plotted the results for k = 50 and k = 150 in Figs. 3 and 4. Note that here we use 20 grid points per wavelength, which results in kh = 0.3125. On the x- and y-axis respectively, we have the indices i, j corresponding to the grid point (x_i, y_j). The colorbar indicates the value of u(x_i, y_j). In all subfigures, blue hues correspond to negative values, whereas red hues correspond to positive values. We can see from Fig. 3 that for a medium-size wavenumber (k = 50), the numerical solution is a fair approximation of the exact solution. We can see from the contours of both figures that most of the error does not come from numerical dispersion. If that were the case, the contour of the numerical solution would differ significantly from that of the exact solution (see Fig. 4 for example).
We repeat the analysis for a larger wavenumber, k = 150. From Fig. 4(b) we can see that the accuracy deteriorates rapidly as k increases. Fixing the resolution at kh = 0.3125 does not suffice to keep both the phase and amplitude differences under control. We can see from Fig. 4(a) that the exact and numerical solution do not coincide, forcing the conclusion that severe differences between the exact and numerical wavenumber are present. It furthermore supports the observation that increasing the number of grid points mainly resolves the amplitude differences, rather than the phase differences.

Two-dimensional dispersion correction
We now investigate the effect of applying a dispersion correction using the eigenvalues for the two-dimensional MP 2. Note that for the two-dimensional case it will not suffice to simply add the constant

c = −λ̂_{i_min, j_min} + Σ_{n=2}^{10} (−1)^n ((i_min π)^{2n} + (j_min π)^{2n}) h^{2(n−1)} / (2n)!.
There may be multiple locations (i_min, j_min) where the smallest eigenvalue is attained, and thus there may be more than one value for c. If the algebraic multiplicity of the smallest eigenvalue is exactly two, then adding the constant c will still reduce the overall error. However, in the two-dimensional case, the algebraic multiplicity may often be larger than two. Therefore, we will follow the steps described in Algorithm 1. Given that we are solving for the underlying Green's function and general solution, and that separation of variables applies, we can already correct the eigenvalues in the one-dimensional case and use those to construct the new coefficient matrix Â. As this leads to a correction which is independent of the true analytical wavenumber and of pre-specified propagation angles, the resulting coefficient matrix becomes denser and exhibits a different sparsity pattern. In Fig. 5 we have plotted the sparsity pattern of the corrected coefficient matrix  for k = 10, using kh = 1.5. It is apparent that many diagonals are added to the matrix. Additionally, we can see the formation of clear blocks in the centre of the adjusted matrix. For smaller kh, the new coefficient matrix  will contain many diagonals and larger blocks, being much denser than the original coefficient matrix A, yet still sparse.
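The construction just described can be sketched as follows (our illustration on a deliberately tiny grid; for simplicity we substitute the exact continuous eigenvalues directly, rather than the truncated-series correction, into the one-dimensional operator before forming the Kronecker sum):

```python
import numpy as np

n = 15                            # 1D interior points (tiny, for illustration)
h = 1.0 / (n + 1)
k = 10.0
j = np.arange(1, n + 1)

lam_exact_1d = (j * np.pi) ** 2   # target (continuous) 1D Laplacian eigenvalues

# Sine eigenvectors of the 1D Dirichlet Laplacian (orthogonal up to scaling).
V = np.sin(np.outer(j, j) * np.pi * h)
Vinv = np.linalg.inv(V)

# Dense 1D operator whose eigenvalues are the corrected ones,
# with the eigenvectors left untouched.
L1_hat = V @ np.diag(lam_exact_1d) @ Vinv

# Separation of variables: the 2D operator is a Kronecker sum, so its
# eigenvalues are all pairwise sums of the corrected 1D eigenvalues.
I = np.eye(n)
A2_hat = np.kron(L1_hat, I) + np.kron(I, L1_hat) - k**2 * np.eye(n * n)
```

Replacing the tridiagonal one-dimensional block by V diag(λ̂) V⁻¹ is precisely what destroys the original sparsity pattern: the corrected operator is dense along each one-dimensional line, which matches the added diagonals and blocks visible in Fig. 5.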
Before we solve the linear systems explicitly, we verify the two-dimensional dispersion correction. Irrespective of the solution method, we can use the series representation of the discrete solution with the dispersion correction to establish whether the resulting solution will indeed be dispersion-free. Thus, in Table 2 we report the results for various k and kh using the dispersion correction on the numerical eigenvalues, which we construct from the one-dimensional case. Note that we do not need to compute the two-dimensional eigenvalues and eigenvectors in Algorithm 1, and proceed until step 6 of the algorithm. We note that in almost all cases the true relative error is larger than 1 without the dispersion correction. Using the new correction for this model problem, the error is reduced significantly and is relatively independent of kh. Even when we move to very coarse grids, which allow the corresponding linear systems to be solved accurately and iteratively, the error stays almost constant despite being in the high-frequency range, which to our current knowledge is a novel theoretical result. For kh = 2, ⊘ represents a case where the smallest numerical eigenvalue without correction becomes zero and we have resonance. This shows the severity of the dispersion causing the pollution, as the corresponding analytical eigenvalue is still far away from zero.
We now assess the performance in terms of computation time and iterations. In order to make a fair comparison, we solve the linear systems using second-order finite differences with the rule k³h² = 5, as this should reduce the pollution error to some extent. We then increase k and report the relative error and number of iterations. From Table 2 we observe that we can use coarser grids to solve for the same wavenumber k, and we compare the differences. We use GMRES as the iterative solver and apply the standard Complex Shifted Laplacian preconditioner (CSLP) with a complex shift set to 1, inverted using multigrid. We use one V-cycle with one pre- and post-smoothing step. Some important remarks are in order. First of all, the accuracy achieved by the iterative solver depends on the stopping criterion, and we set the tolerance to 10⁻⁶. Second of all, higher accuracy, of order 10⁻², could have been obtained by taking k³h² smaller. However, that would lead to large linear systems, and thus we report up to N = 320². Finally, the number of iterations needed to reach convergence for GMRES remains unaffected by the increased accuracy, and a detailed study of the convergence behaviour lies beyond the scope of this work. For normal matrices in general, GMRES convergence is governed by the smallest eigenvalues in magnitude. Thus, while the resulting eigenvalues may be more accurate, they may still be small, leading to hampered convergence. Table 3 sheds light on some interesting observations made previously. Using the dispersion correction, we can solve for the same wavenumber k while using coarser grids, which leads to smaller linear systems. This is beneficial, as it implies that the pollution error can now be studied from all angles simultaneously in higher dimensions using smaller systems. If we for example look at k = 80, we note that even with N = 320², which is equivalent to using 27 grid points per wavelength (k³h² ≈ 5), the error keeps increasing, and even finer grids would be required to obtain accurate solutions. Moreover, the standard iterative solver needs 654 iterations and approximately 1386 s to reach convergence.
By contrast, using N = 40², which is equivalent to using roughly 3 grid points per wavelength (kh ≈ 2), the error is reduced to the tolerance level of GMRES.
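A rough sketch of this solver setup is given below (ours: SciPy's GMRES with a CSLP-style preconditioner; we invert the shifted operator with a sparse direct factorization instead of the multigrid V-cycle used above, and the wavenumber and grid are scaled down so the example runs quickly):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 31                                   # interior points per direction
h = 1.0 / (n + 1)
k = 20.0                                 # kh is roughly 0.6

# Second-order 5-point discretization of -Laplace - k^2 on the unit square.
L1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
L2 = sp.kron(L1, I) + sp.kron(I, L1)
A = (L2 - k**2 * sp.identity(n * n)).tocsr()

# CSLP: complex-shifted Laplacian, here with shift (1, 1), inverted exactly
# via sparse LU (a stand-in for one multigrid V-cycle).
M = (L2 - (1.0 + 1.0j) * k**2 * sp.identity(n * n)).tocsc()
M_lu = spla.splu(M)
prec = spla.LinearOperator(A.shape, matvec=M_lu.solve, dtype=np.complex128)

b = np.zeros(n * n)
b[n * n // 2] = 1.0 / h**2               # point source near the centre

x, info = spla.gmres(A.astype(np.complex128), b.astype(np.complex128),
                     restart=100, M=prec)
```

The exact LU inverse is more expensive than a single V-cycle but makes the preconditioner's effect on the spectrum easy to study in isolation; swapping in an approximate inverse changes only the `matvec` passed to `LinearOperator`.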
In Fig. 6 we have plotted the exact solution for k = 200 on a fine grid and compare it to the numerical solution computed on a very coarse grid using the eigenvalue-based dispersion correction. We can see that the accuracy and resolution for such a high wavenumber computed on a very coarse grid (kh = 2) are still satisfactory. The figures illustrate what we observed for k = 200 in Table 2; the error, after introducing the dispersion correction, is at best of order 10⁻¹⁴ and at worst of order 10⁻¹³. Even for a simple model problem such as ours, achieving an explicit dispersion correction independent of the propagation angle in higher dimensions is unprecedented.

Concluding remarks and summary
In this paper we have studied the pollution error due to numerical dispersion for the Helmholtz problem with Dirichlet conditions from an unconventional and novel perspective: the eigenvalues. We have sought to provide the first theoretical basis for defining the pollution error in terms of the eigenvalues. Our work also aims to build a bridge between the study of iterative solvers and the accuracy of numerical solutions, now that both have been expressed in terms of a common denominator: the near-zero eigenvalues. This is especially interesting because these near-zero eigenvalues, which are generally responsible for hampering the convergence of iterative solvers, are in fact indicators of the pollution effect. Furthermore, by examining the behaviour of the eigenvalues, we proposed an upper bound for the relative error. In particular, we have shown that if the near-zero eigenvalues and eigenvectors are approximated with high accuracy, then the dispersion part of the pollution error can be reduced considerably. Our results also illustrate that the error grows in an oscillatory manner, and our error bound is able to capture and reveal this effect. For higher-dimensional problems, it has been shown theoretically that the pollution error cannot be avoided [2]. However, we have provided a theoretical framework in which the pollution error can be brought to approximately zero for very large wavenumbers, irrespective of the grid resolution (kh). The basis of this approach lies in correcting the respective eigenvalues with the remainder, which depends on the order of the truncation error of the finite difference scheme.
Using our new results, we have shown that on a coarse grid it is possible to obtain pollution-free and therefore accurate one- and two-dimensional solutions. The latter in particular exhibits the novelty of our approach, as it is generally considered impossible, even for simple problems, to reduce the pollution error to such a degree. The solutions obtained account for all propagation angles simultaneously and do not rely on pre-determined angles for plane-wave propagation, which enables a detailed study of the pollution effect.

Fig. 2. 2D relative error with upper bound for various k between 10 and 425 using kh = 0.625.
Substituting the difference expression into (34) and grouping terms on the left-hand side leads to a true statement if each of the terms in parentheses is non-negative.

Table 2. Relative error (RE) and corrected relative error (CRE) for various k and kh. ⊘ represents the case where the smallest numerical eigenvalue becomes zero.