Towards Accuracy and Scalability: Combining Isogeometric Analysis with Deflation to Obtain Scalable Convergence for the Helmholtz Equation

Finding fast yet accurate numerical solutions to the Helmholtz equation remains a challenging task. The pollution error (i.e. the discrepancy between the numerical and analytical wave number k) requires the mesh resolution to be kept fine enough to obtain accurate solutions. A recent study showed that the use of Isogeometric Analysis (IgA) for the spatial discretization significantly reduces the pollution error. However, solving the resulting linear systems by means of a direct solver remains computationally expensive when large wave numbers or multiple dimensions are considered. An alternative lies in the use of (preconditioned) Krylov subspace methods. Recently, the use of the exact Complex Shifted Laplacian Preconditioner (CSLP) with a small complex shift has been shown to lead to wave number independent convergence while obtaining more accurate numerical solutions using IgA. In this paper, we propose the use of deflation techniques combined with an approximated inverse of the CSLP using a geometric multigrid method. Numerical results obtained for both one- and two-dimensional model problems, including constant and non-constant wave numbers, show scalable convergence with respect to the wave number and approximation order p of the spatial discretization. Furthermore, when kh is kept constant, the proposed approach leads to a significant reduction of the computational time compared to the use of the exact inverse of the CSLP with a small shift.

problems using IgA discretizations containing both a constant wave number and a variable wave number k(x, y). In the latter case, we focus on the performance of the solver in the presence of sharp discontinuities in the wave number and the underlying solution. For the two-dimensional model problems, we report the number of iterations and the CPU-timings to show that the use of deflation combined with a multigrid-approximated CSLP allows for substantial gains in computational efficiency while keeping scalable convergence in terms of the number of iterations. The method outperforms the exact inversion of the CSLP with a small complex shift in terms of the number of iterations and CPU-timings when a large constant or non-constant wave number is used.
The paper is organized as follows. We start with the variational formulation of the Helmholtz equation and the model problem definitions in section 2. In section 3 we discuss the deflation preconditioning technique for the Krylov subspace method. Here we introduce the use of higher-order Bézier curves as a basis for the deflation space. We then proceed by performing a spectral analysis of the preconditioned systems and various numerical experiments in section 4 in order to determine the convergence behavior. We provide CPU-timings in order to assess the computational time complexity. We present our conclusions in section 5.

Problem Definition
In order to assess the quality of the proposed solution method, we start by defining a variety of one- and two-dimensional model problems. In particular, we consider model problems involving both constant and non-constant wave numbers. Then, we proceed by presenting the variational formulation and B-spline discretization using the generalization of our two-dimensional model problem as an example.

MP 1-A
The first one-dimensional model problem, MP 1-A, is given below. Here, homogeneous Dirichlet and Sommerfeld boundary conditions are applied on the left and right boundary, respectively. The exact solution for MP 1-A is known in closed form. Model problem MP 1-A will be adopted to investigate the pollution error for various values of the approximation order p of the B-spline basis functions. It will also be used to perform a convergence factor study in order to check the robustness of the solver.

MP 1-B
Model problem MP 1-B involves an inhomogeneous source term. Furthermore, Dirichlet boundary conditions are applied on both boundaries, resulting in the following model problem. Note that, for k^2 = j^2 π^2, the eigenfunction expansion would become defective, as this would imply resonance and unbounded oscillations in the absence of dissipation. Therefore, we explicitly impose the extra condition k^2 ≠ j^2 π^2, j = 1, 2, 3, …, asserting that our Green's function exists. By imposing Dirichlet boundary conditions, the resulting system matrix exhibits the most unfavorable distribution of the eigenvalues 34 . Note that the inclusion of Sommerfeld radiation conditions already slightly shifts the eigenvalues away from the origin due to the naturally occurring damping.

Two-dimensional Model Problems

MP 2-A
In two dimensions, we consider as MP 2-A the natural extension of MP 1-B to two dimensions. Again, the analytic solution is given by the Green's function, which exists provided k^2 ≠ i^2 π^2 + j^2 π^2, i, j = 1, 2, 3, …

MP 2-B
As a final model problem, MP 2-B, we consider a non-constant wave number k = k(x, y), an inhomogeneous source function and Dirichlet boundary conditions on the entire boundary ∂Ω.
Here, k(x, y) is chosen to be a two-dimensional step function consisting of 16 different values. For a fixed value of k, the values vary between k/2 and 3k/2. Figure 1 shows the considered field k(x, y) for k = 100. This model problem uses various horizontal layers in order to test the performance of the solver when a variable wave number k(x, y) is used. This is particularly important to investigate, as in certain cases for Helmholtz-type problems the underlying solver might diverge. This has been reported for domain decomposition based preconditioners using inexact factorizations 29 .
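As an illustration, a piecewise-constant wave number field of this kind can be sketched as below. The 4x4 block layout and the randomly drawn factors are assumptions for illustration only; the actual layout of the 16 values is the one shown in Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 16 levels between 1/2 and 3/2 (the paper's exact values differ)
FACTORS = rng.uniform(0.5, 1.5, size=(4, 4))

def wavenumber(x, y, k):
    """Piecewise-constant k(x, y) on a 4x4 grid of the unit square,
    varying between k/2 and 3k/2 for a fixed base wave number k."""
    i = min(int(4 * y), 3)   # row of the step function
    j = min(int(4 * x), 3)   # column of the step function
    return k * FACTORS[i, j]

val = wavenumber(0.3, 0.7, 100.0)
```

Evaluating the field on a fine grid and plotting it reproduces the layered, discontinuous structure that makes this benchmark challenging.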

Variational Formulation
To illustrate the variational formulation, we consider the inhomogeneous Helmholtz equation in two dimensions adopting inhomogeneous Robin boundary conditions. Here, Ω is a connected Lipschitz domain, f ∈ L^2(Ω), g ∈ L^2(∂Ω) and k > 0 a constant wave number. Let us define V = H^1_0(Ω) as the space of functions in the Sobolev space H^1(Ω) that vanish on the boundary ∂Ω. The variational formulation of (8) is obtained by multiplication with a test function v ∈ V and application of integration by parts. A geometry function is then defined to parameterize the physical domain Ω by describing an invertible mapping connecting the parameter domain Ω_0 = (0, 1)^2 with the physical domain Ω.
The geometries considered throughout this paper can be described by a single geometry function; that is, the physical domain Ω is topologically equivalent to the unit square. In case of more complex geometries, a family of such geometry functions is defined, one per patch, and we refer to Ω as a multipatch geometry consisting of multiple patches.

B-spline basis functions
To discretize Equation (8), univariate B-spline basis functions are defined on the parameter domain Ω_0 by an underlying knot vector Ξ = {ξ_1, ξ_2, …, ξ_{n+p}, ξ_{n+p+1}}. Here, n denotes the number and p the order of the B-spline basis functions. Based on this knot vector, the basis functions are defined recursively by the Cox-de Boor formula 35 , starting from the constant ones. Higher-order B-spline basis functions of order p > 0 are then defined recursively. The resulting B-spline basis functions φ_{i,p} are non-zero on the interval [ξ_i, ξ_{i+p+1}) and possess the partition of unity property. Furthermore, the basis functions are C^{p−m_i}-continuous, where m_i denotes the multiplicity of knot ξ_i. Throughout this paper, we consider a uniform knot vector with knot span size h, where the first and last knot are repeated p + 1 times. As a consequence, the resulting B-spline basis functions are C^{p−1} continuous and interpolatory at both end points. Figure 2 illustrates both linear and quadratic B-spline basis functions based on such a knot vector.
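The Cox-de Boor recursion described above can be sketched directly. The helper name `bspline_basis` is illustrative, and the clamped knot vector mirrors the repeated end knots used in this paper:

```python
def bspline_basis(i, p, t, knots):
    """Evaluate the i-th B-spline basis function of order p at parameter t
    via the Cox-de Boor recursion; 0/0 terms are treated as 0."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0.0:
        left = (t - knots[i]) / d1 * bspline_basis(i, p - 1, t, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        right = (knots[i + p + 1] - t) / d2 * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# Clamped uniform knot vector: first/last knot repeated p + 1 times,
# giving n = 4 quadratic basis functions that interpolate at the end points
p = 2
knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
vals = [bspline_basis(i, p, 0.25, knots) for i in range(4)]
```

Evaluating all four functions at any t in [0, 1) and summing them illustrates the partition of unity property mentioned above.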
For the two-dimensional case, the tensor product of univariate B-spline basis functions is adopted for the spatial discretization. Let n_dof denote the total number of multivariate basis functions Φ_{i,p}. The spline space V_{h,p} can then be written accordingly. The Galerkin formulation of (10) now becomes: find u_{h,p} ∈ V_{h,p} such that the variational form holds for all basis functions. The discretized problem in (16) can be written as a linear system. Here, S_{h,p} with entries ∫_Ω ∇Φ_{j,p} ⋅ ∇Φ_{i,p} dΩ is the stiffness matrix, M_{h,p} with entries ∫_Ω Φ_{j,p} Φ_{i,p} dΩ the mass matrix and B_{h,p} with entries ∫_{∂Ω} Φ_{j,p} Φ_{i,p} dΓ the boundary mass matrix. Next, by defining A_{h,p} = S_{h,p} − k^2 M_{h,p} − i k B_{h,p}, we can write the discrete system compactly. For ease of notation, we drop the subscript (h, p) and write A from here on. Using this discretization technique, we now turn to solving the model problems described above.
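To make the structure of the resulting system concrete, the following sketch assembles A = S − k^2 M − i k B in one dimension, with a Sommerfeld condition at the right boundary. It uses linear finite elements instead of B-splines purely to keep the assembly short, so the entries differ from the IgA matrices while the overall structure (stiffness minus scaled mass minus imaginary boundary term) is the same:

```python
import numpy as np

def helmholtz_matrix_1d(n, k, L=1.0):
    """Assemble A = S - k^2 M - i k B for the 1D Helmholtz equation on (0, L),
    homogeneous Dirichlet at x = 0 and Sommerfeld at x = L, using linear
    elements as a stand-in for the B-spline discretization."""
    h = L / n
    e = np.ones(n)
    # element stiffness [1 -1; -1 1]/h and consistent mass (h/6) [2 1; 1 2]
    S = (np.diag(2 * e) - np.diag(e[:-1], 1) - np.diag(e[:-1], -1)) / h
    M = h / 6 * (np.diag(4 * e) + np.diag(e[:-1], 1) + np.diag(e[:-1], -1))
    S[-1, -1] = 1.0 / h        # only half an element touches the boundary node
    M[-1, -1] = 2 * h / 6
    B = np.zeros((n, n))
    B[-1, -1] = 1.0            # the boundary "mass" reduces to a point term in 1D
    return S - k**2 * M - 1j * k * B

A = helmholtz_matrix_1d(8, 10.0)
```

The matrix is complex symmetric (not Hermitian), and the imaginary part lives only in the boundary term, which is what makes Sommerfeld conditions mildly dissipative.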

Preconditioned Krylov Subspace Methods
For Helmholtz-type problems, the number of degrees of freedom grows with the wave number k. Consequently, for larger values of k the linear systems become very large, especially in two and three dimensions. As a result, direct solvers become unattractive and computationally expensive due to fill-in. Thus, in order to solve the model problems, an iterative method is considered. For normal matrices, the convergence of Krylov subspace methods is closely related to the underlying distribution of the eigenvalues: the more clustered the eigenvalues, the faster the method converges. For MP 1-B, we can easily deduce the analytical eigenvalues, which are given by λ_j = j^2 π^2 − k^2. It is easy to see that the resulting systems will have both positive and negative eigenvalues, rendering them indefinite. This limits our choice of Krylov subspace methods, and GMRES is often chosen as the underlying iterative solver. Many studies have investigated the performance of GMRES for the Helmholtz equation, and the use of preconditioners is necessary in order to obtain satisfactory convergence. One of these preconditioners is the CSLP, which is defined by adding a complex shift to the original coefficient matrix. Thus, in the one-dimensional case, the CSLP is given by M = A − i β_2 k^2 I and the resulting preconditioned system becomes M^{−1} A u = M^{−1} f. Here, I denotes the identity matrix and β_2 ∈ ℝ the shift. In practice, the CSLP is often applied through a fixed number of V-cycles of a (geometric) multigrid method to approximate M^{−1}. As a smoother within the multigrid method, we adopt damped Jacobi (ω = 0.6). Note that the use of standard smoothers (i.e. Jacobi or Gauss-Seidel) within a multigrid solver 36 in IgA results in p-dependent convergence. This has led to the development of non-standard smoothers to obtain p-independent convergence rates 37,38,39,40,41,42 . Their application within a multigrid method to approximate M^{−1} is, however, out of the scope of this paper.
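A single damped-Jacobi sweep, the smoother used inside the V-cycles that approximate M^{−1}, can be sketched as follows (ω = 0.6 as above; `damped_jacobi` is an illustrative helper name, and the update works unchanged for the complex-shifted matrix):

```python
import numpy as np

def damped_jacobi(A, b, x, omega=0.6, nu=1):
    """Perform nu sweeps of damped Jacobi: x <- x + omega * D^{-1} (b - A x),
    where D is the diagonal of A."""
    d = np.diag(A)
    for _ in range(nu):
        x = x + omega * (b - A @ x) / d
    return x

# A small diagonally dominant example; the residual shrinks with each sweep
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.ones(2)
x = damped_jacobi(A, b, np.zeros(2), nu=10)
```

Within the multigrid cycle this smoother is applied before and after the coarse-grid correction; its damping makes it effective on the oscillatory error components only, which is why the coarse grid is needed for the rest.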
In order for M^{−1} to remain a good preconditioner, the shift β_2 should not be too small, as otherwise multigrid will diverge 20,5 . On the other hand, the preconditioner should still remain close enough to the original coefficient matrix A, which is why β_2 should not be too large either.
While the complex shift rotates part of the unwanted spectrum into the complex plane, near-zero eigenvalues start appearing around the origin as the wave number increases, unless the shift is kept very small 43,34,31 . This effect accumulates in higher dimensions. Especially the real part of these near-zero eigenvalues is known to have a detrimental effect on the convergence behavior of the Krylov solver. One simple yet effective way to get rid of these unwanted near-zero eigenvalues is to use deflation. By using an orthogonal projection, the deflation operator, which we will denote by P, projects these unwanted eigenvalues onto zero. Thus, for a general symmetric linear system, we can define the projection matrix P as P = I − A Z E^{−1} Z^T, with E = Z^T A Z. Here the matrix Z is the deflation matrix whose columns consist of the deflation vectors and E denotes the coarse-grid variant of the original coefficient matrix A. The performance of the deflation preconditioner depends on the choice of Z. In principle, the deflation matrix is defined as the prolongation and restriction matrix from a multigrid setting using a first-order linear interpolation scheme 44,45,46,47,31,48 . While this improves the convergence significantly, the near-zero eigenvalues start reappearing for very large wave numbers k. Consequently, it has been shown recently that the use of a quadratic interpolation scheme results in close to wave number independent convergence for the two-level deflation preconditioner 33 . In fact, the use of these higher-order deflation vectors results in a smaller projection error compared to the case where a linear interpolation scheme is used.
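Applying the two-level deflation projection to a vector only requires the deflation matrix and one coarse solve. A minimal sketch, writing Z for the deflation matrix and E = Z^T A Z for the coarse-grid operator (notation assumed):

```python
import numpy as np

def deflation_apply(A, Z, r):
    """Apply the deflation projection P = I - A Z E^{-1} Z^T to a vector r,
    with coarse operator E = Z^T A Z; Z holds the deflation vectors as columns."""
    E = Z.T @ (A @ Z)                      # coarse-grid variant of A
    y = np.linalg.solve(E, Z.T @ r)        # coarse solve
    return r - A @ (Z @ y)

# P annihilates anything in the range of A Z, which is exactly how the
# unwanted part of the spectrum is projected onto zero
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)                # SPD test matrix
Z = rng.standard_normal((6, 2))
r = A @ Z[:, 0]
out = deflation_apply(A, Z, r)
```

In practice E is assembled once and factorized; only its (sparse) solve is applied per iteration, so the projection costs little more than one extra matrix-vector product.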
To construct the stencil for the deflation matrix Z, we start by introducing the rational Bézier curve.
Definition 1 (Bézier curve). A Bézier curve of degree n is a parametric curve of the form B(t) = ∑_{i=0}^{n} B_{i,n}(t) P_i, where the B_{i,n}(t) = (n choose i) t^i (1 − t)^{n−i} are known as the Bernstein basis polynomials of order n. The points P_i are called control points of the Bézier curve.
Definition 2 (Rational Bézier curve). A rational Bézier curve of degree n with control points P_0, P_1, …, P_n and scalar weights w_0, w_1, …, w_n is given by B(t) = ∑_{i=0}^{n} w_i B_{i,n}(t) P_i / ∑_{i=0}^{n} w_i B_{i,n}(t). For large k, the prolongation operator working on the even basis functions is not sufficiently accurate to map the underlying eigenvectors to their fine- and coarse-grid counterparts. We thus consider a quadratic rational Bézier curve in order to find appropriate coefficients yielding a higher-order approximation of the fine-grid functions v_h by the coarse-grid functions v_{2h}. The motivation for using the rational Bézier curve is that this formulation allows the weights to be adjusted in order to account for the higher requested accuracy at the even basis functions. In particular, if we denote the coarse-grid basis function with respect to degree of freedom j by [v_{2h}]_j, then the quadratic approximation involves [v_{2h}]_{j/2} whenever j is even. Because we wish to add more weight whenever j is even, we take weights w_0 = w_2 = 1/2, w_1 = 3/2 and evaluate at t = 1/2. When j is odd, [v_{2h}]_{(j−1)/2} and [v_{2h}]_{(j+1)/2} are associated to an even degree of freedom and the resulting stencil leads to the standard linear interpolation scheme.
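To see how such weights produce interpolation coefficients, one can evaluate a rational Bézier curve directly. With the weights w_0 = w_2 = 1/2, w_1 = 3/2 at parameter t = 1/2, the middle control point contributes 3/4 and the outer ones 1/8 each (a sketch; the paper's actual stencil construction may differ in detail):

```python
from math import comb

def rational_bezier(points, weights, t):
    """Evaluate a rational Bezier curve of degree n at parameter t:
    C(t) = sum_i w_i B_{i,n}(t) P_i / sum_i w_i B_{i,n}(t)."""
    n = len(points) - 1
    bern = [comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]
    den = sum(w * b for w, b in zip(weights, bern))
    return sum(w * b * p for w, b, p in zip(weights, bern, points)) / den

# Isolate the middle control point's contribution: with P = (0, 1, 0) the
# curve value at t = 1/2 equals the coefficient assigned to P_1
coeff_mid = rational_bezier([0.0, 1.0, 0.0], [0.5, 1.5, 0.5], 0.5)
```

Here `coeff_mid` evaluates to 3/4, and by symmetry the outer coefficients are (1 − 3/4)/2 = 1/8 each, which is the extra weight placed on the even degrees of freedom.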
Thus, with respect to the coarse-grid function v_{2h} at degree of freedom j, we can define the stencil for the prolongation and restriction operator whenever j is odd, for j = 1, …, n_dof/2. Now that we have a stencil to construct Z, we can use Equation (21) to construct the deflation preconditioner. The resulting linear system to be solved becomes P A u = P f. Often, the deflation preconditioner is combined with the CSLP to accelerate convergence, which leads to solving M^{−1} P A u = M^{−1} P f, where, as mentioned previously, M^{−1} is generally approximated using a multigrid method. Note that the operator P is never constructed explicitly but is applied as a function handle to the coefficient matrix within the GMRES algorithm. Moreover, we will refer to the operator based on the higher-order quadratic approximation as the 'Adapted Deflation Preconditioner' (ADP) to distinguish the higher-order deflation scheme from the standard deflation preconditioner using linear interpolation. Additionally, a weight-parameter can be included to further increase the accuracy of the prolongation and restriction operator 33 . In this case, the stencil for the prolongation and restriction operator changes accordingly whenever j is odd, for j = 1, …, n_dof/2. Note that the value of the weight-parameter is constant with respect to k and h and is chosen such that the projection error is minimized 33 .

Numerical Results
To assess the quality of the proposed iterative solver, we consider the model problems described in Section 2. We start by studying the pollution error for our one-dimensional model problem when adopting high-order B-spline basis functions for the spatial discretization. In 15 , a detailed first application of IgA discretizations for Helmholtz problems has been given. We therefore only show the pollution reduction for the model problems used in this paper. We proceed by conducting a spectral analysis in one dimension (MP 1-B) to investigate the effect of the proposed preconditioning techniques on the spectrum of the preconditioned operator. Finally, the convergence of the iterative solver is studied in terms of both iteration numbers and CPU timings. These are obtained for the proposed deflation based preconditioner and compared to the use of the (exactly inverted) CSLP.

Pollution Error
As a first verification of the quality of the solver, a spatial convergence test has been performed for the MP 1-A benchmark for a fixed value of the wave number (k = 1). Figure 3 shows the L^2-error under mesh refinement for different values of p obtained with a (deflated) GMRES solver. Note that, for all values of p, the observed order of convergence is O(h^{p+1}), as expected from the literature 7 . For p = 5 and a sufficiently fine mesh, the L^2-error comes close to machine precision and therefore suffers from errors in floating point operations. Detailed L^2-errors can be found in Table 1. In order to determine the effect of using B-spline basis functions on the pollution error, we present the L^2-error as a function of the wave number k as well. Note that the case p = 1 corresponds to the standard Lagrangian FEM solution. We observe that from p = 2 to p = 5 the L^2-error with respect to the analytical solution decreases. While this leads to significantly more accurate solutions, we do observe that as the wave number increases, the L^2-error increases accordingly. This is in line with the literature, as it has been proven that the pollution error cannot be avoided completely 49,50 . Moreover, as k increases, the advantage of using p = 5 over p = 4 diminishes, as both lead to similar accuracy. For standard FEM, this was already observed 51 . Furthermore, decreasing the number of degrees of freedom per wavelength from 10 (solid line) to 7.5 (dashed line) already results in lower accuracy. In fact, the accuracy achieved for p = 4 and p = 5 with 7.5 degrees of freedom per wavelength is similar to the accuracy obtained for p = 3 when 10 degrees of freedom per wavelength are used. Thus, in order to ensure sufficiently accurate numerical solutions for larger wave numbers, we will keep the grid resolution at kh = 0.625.

Spectral Analysis
We now proceed by analyzing the spectrum of the preconditioned system of MP 1-B. It is widely known that the near-zero eigenvalue distribution strongly affects the resulting convergence factor of Krylov subspace methods. In general, eigenvalues close to the origin hamper the convergence of such methods. By using Dirichlet boundary conditions, we additionally obtain the most unfavorable distribution of eigenvalues, allowing us to fully examine the potency of the preconditioner. With respect to the CSLP, many studies have confirmed that unless the complex shift is kept very small and the inversion is performed exactly, the eigenvalues cluster near the origin 34,20,17 . In this work, we do not invert the CSLP exactly and thus need a proxy for the multigrid iteration used to approximate the inverse. This can be obtained from the two-grid iteration matrix of a multigrid setting 52 . This leads to an approximation M̃^{−1} for M^{−1}, where M_{2h} denotes the coarse-grid variant of the CSLP, D the diagonal of M, and ν the number of smoothing steps. Additionally, we use damped Jacobi as a smoother with damping parameter ω = 0.6. Note that, within the multigrid cycle, the prolongation and restriction operators are the standard geometric multigrid operators based on the linear interpolation scheme. Using this approximation for M^{−1}, we study the eigenvalues of the linear system P M̃^{−1} A, where P denotes the adapted deflation preconditioner based on the quadratic Bézier scheme. Figure 5 shows the spectra of the preconditioned linear system for k = 50 (left) and k = 500 (right) for different values of p. The complex shift has been set to β_2 = 1 and one pre- and post-smoothing step has been used. Note that half of the eigenvalues of the preconditioned system will be projected onto the origin; the other half will therefore be non-zero. For k = 50 (left), the eigenvalues for a fixed value of p have a spiral shape, apart from the case p = 1.
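The two-grid proxy used for this spectral analysis can be sketched as follows. This is the standard two-grid error-propagation construction under illustrative names; details such as how the coarse-grid CSLP is formed (Galerkin product below, possibly rediscretization in the actual implementation) are assumptions:

```python
import numpy as np

def twogrid_inverse_proxy(Mcsl, P, R, omega=0.6, nu=1):
    """Proxy for the multigrid-approximated CSLP inverse: the two-grid error
    propagation matrix is T = S^nu (I - P M2h^{-1} R Mcsl) S^nu with
    damped-Jacobi smoother S = I - omega D^{-1} Mcsl, and the corresponding
    approximate inverse satisfies Mtilde^{-1} = (I - T) Mcsl^{-1}."""
    n = Mcsl.shape[0]
    D_inv = np.diag(1.0 / np.diag(Mcsl))
    S = np.eye(n) - omega * D_inv @ Mcsl
    M2h = R @ Mcsl @ P                               # Galerkin coarse operator
    CGC = np.eye(n) - P @ np.linalg.solve(M2h, R @ Mcsl)
    Spow = np.linalg.matrix_power(S, nu)
    T = Spow @ CGC @ Spow
    return (np.eye(n) - T) @ np.linalg.inv(Mcsl)

# 1D Laplacian with linear interpolation/full weighting as a small example
A = 2 * np.eye(7) - np.diag(np.ones(6), 1) - np.diag(np.ones(6), -1)
P = np.zeros((7, 3))
for j in range(3):
    P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
R = 0.5 * P.T
Minv = twogrid_inverse_proxy(A, P, R)
```

The eigenvalues of I − Mtilde^{−1} Mcsl are then exactly those of T, so the spectra in Figures 5 and 6 can be generated from small dense matrices without ever running the multigrid cycle.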
Furthermore, the angle between the eigenvalues and the real axis in Quadrant 2 becomes smaller for higher values of p. Therefore, we can expect some p-dependency of the convergence for p ≥ 2. For k = 500 this becomes even more obvious visually, as the higher number of degrees of freedom leads to more eigenvalues. As the preconditioned operator becomes too large to determine all eigenvalues, it remains unclear how the spectra develop further for large values of k. Next, in Figure 6, we fix p = 2 (left) and p = 5 (right) and let k increase from 50 to 250. Here we can clearly observe that for p = 2, the eigenvalues remain fairly clustered in a semi-circular shape. Increasing k leads to a larger radius of this semi-circle and therefore a larger spread of the eigenvalues. If we focus on the inset containing a detailed illustration of what occurs near the origin, we observe that for larger k more and more eigenvalues move closer towards the origin. Closest to the origin we can clearly see the eigenvalues for k = 250 (purple) and k = 200 (red) appearing. Although the eigenvalues seem less clustered for p = 5, the same general behavior can be observed.
Classically, deflation based preconditioners are combined with the CSLP in order to obtain faster GMRES convergence. Note that the projection matrix P projects a certain part of the spectrum of the coefficient matrix onto zero. The addition of the CSLP ensures that the remaining non-zero eigenvalues are shifted into the complex plane, which gives the spectrum its typical circular shape. However, for finite difference discretizations, the use of the CSLP is often redundant, as wave number independent convergence can already be attained by using deflation without another preconditioner. An interesting point of investigation is therefore the spectrum of the preconditioned system P A. In Figure 7, we study the spectrum of P A, where we use the weight-parameter in order to construct accurate higher-order deflation vectors. We indeed observe that half of the eigenvalues are mapped onto zero and the remaining eigenvalues stay clustered. The eigenvalues no longer cross the negative real axis, which results in the preconditioned system being positive semi-definite. Apart from a scaling factor, the spectrum for k = 50 looks similar to the spectrum for k = 250, which is illustrative of the k-independent convergence. However, if we compare p = 2 (left) to p = 5 (right), we observe that for p = 5 the eigenvalues of P A are closer to zero and have a larger spread between the smallest and largest eigenvalue. For example, for k = 250, the eigenvalues for p = 2 lie in the ballpark of 450 to 550, whereas for p = 5 the eigenvalues lie between 50 and 250.
For illustration purposes, we study the effect of interpolating and restricting the fine-grid systems with low accuracy. In Figure 8, we have plotted the spectrum of P A, where we deliberately set the weight-parameter to a value which lowers the accuracy of the interpolation scheme used to construct the deflation matrix Z. It immediately becomes apparent that the resulting preconditioned system is again indefinite, as some eigenvalues are still negative. Moreover, if we compare p = 2 (left) to p = 5 (right), we observe a larger spread for p = 2 compared to p = 5. This is the opposite of what we observed in Figure 7. In both cases, the example illustrates that using a low-order interpolation scheme to construct the prolongation and restriction operator leads to an ineffective mapping of the underlying eigenvalues and eigenvectors. As the wave number increases and the solutions become more oscillatory, an accurate mapping between the fine and coarse space becomes increasingly important. Therefore, we choose the weight-parameter such that the projection error with respect to the eigenvectors is minimized 33 .

Numerical experiments
We will now present the convergence results for our model problems using the preconditioners described above. Unless stated otherwise, we set the grid resolution at kh ≈ 0.625, which is equivalent to using 10 degrees of freedom per wavelength. We use GMRES as the underlying Krylov subspace method with a stopping criterion of 10^{−7} on the relative residual. A serial implementation is considered on an Intel(R) i7-8665 CPU @ 1.90GHz using 8GB of RAM. For the sake of completeness and clarity, we briefly introduce the notation of the preconditioners used in the experiments.
• ADP := Adapted Deflation Preconditioner + GMRES, using the weight-parameter to construct the deflation matrix. Its value has been taken from 33 and is kept constant throughout the numerical experiments.

MP 1-B
We start by numerically solving MP 1-B using the deflation preconditioner together with the multigrid approximation of the CSLP. We differentiate between deflation with and without the weight-parameter, and we vary the number of V-cycles between 1 and 10 in order to obtain a fair approximation of the inverse of the CSLP. Table 2 shows the number of GMRES iterations for the three different combinations. Starting with one V-cycle (first column), we observe that the number of iterations grows with both k and p. These results are in line with the spectral analysis from Section 4.2, in particular Figure 5 and Figure 6. There we observed that the angle the eigenvalues make with the real axis becomes smaller for increasing p, anticipating some p-dependent convergence. Similarly, in Figure 5, the radius of the circular shape of the eigenvalues grows with k, leading to the expectation that the number of iterations could grow with k. However, for very large wave numbers such as k = 10^4, we observe that the number of iterations is inversely related to p. Note that the spectrum for such large wave numbers has not been examined in this work. For the preconditioner including the weight-parameter we obtain k-independent convergence for wave numbers up to 10^6. Finally, increasing the number of V-cycles to 10 (third column) leads to p-independent convergence and shows identical results to inverting the CSLP exactly; see Table 3 and Table 4. Note, however, that applying 10 V-cycles is more expensive than applying a single V-cycle. This result is in line with the literature as regards the p-dependent convergence observed for IgA discretizations combined with multigrid. Generally speaking, more smoothing steps and/or more intricate smoothers are needed in order to counteract the increasing number of iterations for higher-order IgA discretizations.

TABLE 2
Number of (preconditioned) GMRES iterations to reach convergence for MP 1-B. Here we combine the two-level deflation (D) using quadratic Bézier curves with the CSLP. The shift β_2 has been set to 1. The CSLP has been inverted approximately using 1 and 10 V-cycles, respectively.
As mentioned previously, for a finite difference scheme it has been shown that the deflation preconditioner without CSLP can also lead to close to wave number independent convergence. Thus, analogously, we perform a similar test to examine how well the deflation preconditioner performs without any other preconditioner. We distinguish two cases: ADP without the weight-parameter and ADP with the weight-parameter. For the former, the results are reported in Table 3, where we compare the number of iterations to the number of iterations obtained by using the (exactly inverted) CSLP with shift k^{−1}. Note that the exactly inverted CSLP leads to iteration numbers independent of both k and p. In absence of the weight-parameter, the number of GMRES iterations increases with k and p for wave numbers k < 10^5. These results are similar to the ones reported in Table 2, where we observed a similar effect for one V-cycle. The observed number of iterations is also in agreement with the spectral analysis from Figure 8 in Section 4.2. It has been shown that as the accuracy of ADP decreases, the projection error increases and the eigenvalues are not accurately projected onto the origin. As a result, the number of iterations is expected to increase with k. However, we did observe that this effect is less pronounced for larger values of p, which is why we obtain better convergence for larger values of k when p ≥ 4.

TABLE 3
Number of (preconditioned) GMRES iterations to reach convergence for MP 1-B. Here we use GMRES with either two-level deflation (D) using quadratic Bézier curves or the exact inverse of the CSLP using β_2 = k^{−1}. * indicates that the maximum number of iterations (100) has been reached without convergence.

Table 4 contains the same comparison, using instead the deflation preconditioner with the weight-parameter. We report the number of (preconditioned) GMRES iterations for both preconditioners.
Note that the exactly inverted CSLP leads to iteration numbers independent of both k and p. In absence of the weight-parameter, the number of GMRES iterations adopting the deflation preconditioner increases with k and decreases with p starting from k = 10^5. These results are similar to the ones reported in Table 2, where we observed a similar effect for one V-cycle. Adding the weight-parameter significantly improves the convergence of the GMRES method with respect to its k-dependence. In particular, wave number independent convergence is observed for values of k up to 10^6. This is in line with the spectral analysis from Figure 7 in Section 4.2. There, we observed that an accurate interpolation scheme ensures that half of the eigenvalues are mapped onto the origin and that the spectrum remains as clustered as possible. However, for p = 5 we observed that the smallest and largest eigenvalue lie further apart, which could explain the p-dependent convergence, and in particular the higher number of iterations observed for p = 5. Thus, similar to multigrid solvers, deflation based solvers are also subject to p-dependent convergence. This effect can be circumvented by combining both methods and increasing the number of V-cycles.

MP 2-A
In the previous subsection, it was observed that combining the deflation preconditioner with the approximated CSLP yields the best results in terms of iteration numbers. In this subsection, we apply this preconditioner to MP 2-A, the natural extension of MP 1-B to two dimensions. In particular, CPU timings are determined to obtain a fair comparison in terms of computational costs. We compare the deflated solver using 12 V-cycles with the exactly inverted CSLP. For the former, we obtain close to k- and p-independent convergence; only for p = 5 does the number of iterations increase. Here, 3 pre- and post-smoothing steps and a shift of β_2 = 4.2 are adopted. For the exactly inverted CSLP, a shift of (3k)^{−1} has been adopted. Neither the shift k^{−1} nor the shift β_2 = (3k)^{−1} leads to wave number independent convergence. In fact, the exact CSLP uses more iterations for p < 5 in most cases. This can be explained by the fact that we are using Dirichlet boundary conditions, which are known to cause a less favorable distribution of the eigenvalues compared to the use of Sommerfeld radiation conditions 34 . In particular, keeping the shift at k^{−2} results in wave number independent convergence but leads to very uneconomical preconditioners, which are close to the original coefficient matrix. Figure 9 shows the corresponding CPU times to reach convergence with the GMRES method when applying the deflated solver and the exact CSLP as preconditioners. The CPU timings have been obtained using the Matlab 2019b 'tic toc' command. For k = 50, inverting the CSLP preconditioner exactly leads to the lowest CPU times for all values of p considered. However, from k = 150 onwards, the opposite holds: the deflated solver is computationally more efficient than the exact CSLP preconditioner. This effect becomes more pronounced as k increases. Thus, the larger k, the larger the computational speedup of the deflated preconditioned solver relative to the solver using the exact inversion of the CSLP combined with a small complex shift.

TABLE 5
Number of (preconditioned) GMRES iterations to reach convergence for MP 2-A. Here we combine the two-level deflation (D) using quadratic Bézier curves with the CSLP. The CSLP has been inverted approximately using 1 and 12 V-cycles with shifts β_2 = 1 and β_2 = 4.2, respectively. When using the exact CSLP, the shift has been set to β_2 = (3k)^{−1}.

MP 2-B
Finally, we consider model problem MP 2-B, where the wave number is non-constant and given by a two-dimensional step function. This is an important benchmark, as some solvers only perform well when a constant wave number is used. Moreover, it allows us to test whether the numerical solver can deal with sharp discontinuities in the underlying velocity, which is the main focus of this section. In Figure 10 we have plotted the variable (left) and constant (right) solution for MP 2-B and MP 2-A, respectively, using k = 100 as a base wave number. The step function used to vary k throughout the numerical domain is observed to disrupt the symmetric pattern obtained for constant k = 100 (right).

Table 6 shows the number of GMRES iterations needed to reach convergence when the deflated preconditioner with multigrid-approximated CSLP and the exactly inverted CSLP are applied as preconditioners. With respect to p-dependent convergence, the number of iterations varies slightly with p for both preconditioned systems. In contrast to MP 2-A, however, we also observe a small increase in the number of iterations as k increases for both preconditioned systems. Nevertheless, the deflated preconditioned system needs fewer iterations than the system using the exact inversion of the CSLP with a very small complex shift. Unlike the results for the constant wave number model problem, we therefore report weakly k-dependent convergence, although for p = 5 the convergence appears to resemble wave number independent convergence. We do note that, for the deflation preconditioner combined with the multigrid approximation of the CSLP, the number of iterations could be reduced further by using more V-cycles. These are relatively cheap in terms of computational costs, as they require O(n) FLOPs given that the diagonally scaled Jacobi smoother is used.
Also for MP 2-B, inverting the CSLP preconditioner exactly is more expensive in terms of CPU time. Note that, for higher values of k, the difference between both approaches also becomes more visible in terms of CPU timings. This effect will only be magnified in 3D applications.
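The qualitative effect described above, a step in the wave number destroying the symmetry of the constant-k solution, can be reproduced in a small 1D experiment. This is a hedged sketch, assuming a finite-difference discretization with Dirichlet boundary conditions and illustrative wave numbers 40 and 80 rather than the actual MP 2-B setup.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def helmholtz_1d_variable(kvals, h):
    """1D Helmholtz operator -u'' - k(x)^2 u with a spatially varying
    wave number (homogeneous Dirichlet BCs, finite differences)."""
    n = len(kvals)
    L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
    return (L - sp.diags(np.asarray(kvals, dtype=float)**2)).tocsc()

n = 401                                  # odd, so the source sits exactly mid-domain
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
b = np.zeros(n); b[n // 2] = 1.0         # centered point source

k_const = np.full(n, 40.0)               # constant-wave-number analogue (MP 2-A)
k_step = np.where(x < 0.5, 40.0, 80.0)   # step-function analogue (MP 2-B)

u_const = spla.splu(helmholtz_1d_variable(k_const, h)).solve(b)
u_step = spla.splu(helmholtz_1d_variable(k_step, h)).solve(b)

def asymmetry(u):
    """Relative deviation of |u| from its mirror image about the source."""
    return np.linalg.norm(np.abs(u) - np.abs(u[::-1])) / np.linalg.norm(u)
```

For the constant wave number, `asymmetry(u_const)` vanishes up to rounding, while the step function yields a clearly asymmetric solution, mirroring the disrupted pattern seen in Figure 10.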

Conclusion
In this work, we focus on the combination of IgA-discretized linear systems with a state-of-the-art iterative solver using deflation and a geometric multigrid method. In particular, we extend the line of research set out in [15], where it was shown that the use of IgA reduces the pollution error significantly compared to p-th order FEM. There, the authors showed that the use of the exact inverse of the CSLP preconditioner with a small complex shift yields wave number independent convergence for moderate values of k. Instead of inverting the CSLP exactly and using a small complex shift, we use a standard multigrid method to approximate its inverse and combine it with a two-level deflation preconditioner to accelerate the convergence of GMRES. We use a large complex shift in order to ensure that the multigrid algorithm does not diverge.
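The multigrid approximation of the CSLP inverse can be illustrated with a single two-grid cycle. This is a minimal sketch under simplifying assumptions: 1D finite differences, damped Jacobi smoothing, linear-interpolation transfer operators, a direct coarse solve, and an illustrative large shift β₂ = 1; the actual solver applies a standard geometric multigrid V-cycle to the IgA discretization.

```python
import numpy as np

def shifted_laplacian(k, n, beta2=1.0):
    """Dense 1D CSLP operator M = A - i*beta2*k^2*I (Dirichlet BCs)."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2 - k**2 * np.eye(n)
    return A.astype(complex) - 1j * beta2 * k**2 * np.eye(n)

def prolongation(n):
    """Linear interpolation from (n-1)//2 coarse points to n fine points."""
    nc = (n - 1) // 2
    Z = np.zeros((n, nc))
    for j in range(nc):
        i = 2 * j + 1                  # fine-grid index of coarse node j
        Z[i, j] = 1.0
        Z[i - 1, j] = 0.5
        Z[i + 1, j] = 0.5
    return Z

def two_grid_cycle(M, b, x, nu=3, omega=0.8):
    """One two-grid cycle: nu damped-Jacobi sweeps, a Galerkin coarse-grid
    correction solved directly, and nu more Jacobi sweeps."""
    D = np.diag(M)
    Z = prolongation(M.shape[0])
    for _ in range(nu):                # pre-smoothing
        x = x + omega * (b - M @ x) / D
    Mc = Z.T @ M @ Z                   # Galerkin coarse operator
    x = x + Z @ np.linalg.solve(Mc, Z.T @ (b - M @ x))
    for _ in range(nu):                # post-smoothing
        x = x + omega * (b - M @ x) / D
    return x

k, n = 20.0, 127
M = shifted_laplacian(k, n)
b = np.ones(n, dtype=complex)
x = two_grid_cycle(M, b, np.zeros(n, dtype=complex))
reduction = np.linalg.norm(b - M @ x) / np.linalg.norm(b)
```

The large imaginary shift keeps the near-zero modes of the Helmholtz operator away from zero in M, which is why the smoother and coarse correction remain convergent here, whereas the same cycle applied with a small shift (or to the original operator) may diverge.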
The use of deflation techniques is motivated by studying the spectrum of the preconditioned systems. Deflation projects the unwanted negative and near-zero eigenvalues corresponding to the smooth eigenmodes onto zero, thereby accelerating the convergence of GMRES. Our spectral analysis shows that, for increasing k and p, the spectrum remains well clustered. This is supported by the numerical results in 1D, as the number of iterations remains k- and p-independent for kh constant. If we exclude the CSLP, we obtain p-independent convergence and the number of iterations increases slightly with k.
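This spectral effect can be verified numerically in a small 1D setting. The sketch below is a hedged illustration, assuming a finite-difference Helmholtz matrix and a deflation space built from linear interpolation (the solver itself uses quadratic Bézier curves on the IgA discretization): the deflation projector maps the coarse-space modes, which contain the problematic smooth near-zero eigenmodes, exactly onto zero.

```python
import numpy as np

def helmholtz_matrix(k, n):
    """Dense 1D Helmholtz matrix for -u'' - k^2 u (Dirichlet BCs)."""
    h = 1.0 / (n + 1)
    return ((np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2 - k**2 * np.eye(n))

def deflation_projector(A, Z):
    """Two-level deflation projector P = I - A Z (Z^T A Z)^{-1} Z^T."""
    E = Z.T @ A @ Z                      # coarse Galerkin operator
    return np.eye(A.shape[0]) - A @ Z @ np.linalg.solve(E, Z.T)

n, k = 63, 10.0
A = helmholtz_matrix(k, n)

nc = (n - 1) // 2                        # dimension of the deflation space
Z = np.zeros((n, nc))                    # linear-interpolation deflation vectors
for j in range(nc):
    i = 2 * j + 1
    Z[i, j] = 1.0
    Z[i - 1, j] = 0.5
    Z[i + 1, j] = 0.5

P = deflation_projector(A, Z)
mags = np.sort(np.abs(np.linalg.eigvals(P @ A)))
num_deflated = int(np.sum(mags < 1e-8 * mags[-1]))
```

Since P A Z = 0 by construction, exactly dim(Z) eigenvalues of the deflated operator P A are zero, and GMRES effectively operates only on the remaining well-separated part of the spectrum.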
When deflation is combined with CSLP, the number of iterations depends only weakly on k and p for kh constant in the 2D case. Starting from k = 150, the deflation-based preconditioner combined with the approximate inverse of the CSLP outperforms the exact inversion of the CSLP with shift β₂ = (3k)⁻¹ in terms of CPU timings, and the obtained speedup becomes more significant as the wave number increases. Results for the highly varying non-constant wave number model show a slight dependence on k but an inversely related dependence on p as the wave number increases. Even for this model problem, the proposed solver outperforms the exact inversion of the CSLP with a small complex shift in terms of both number of iterations and CPU timings.