OVERLAPPING DOMAIN DECOMPOSITION METHODS FOR LINEAR INVERSE PROBLEMS

Abstract. We derive and propose several efficient overlapping domain decomposition methods for solving some typical linear inverse problems, including the identification of the flux, the source strength and the initial temperature in second order elliptic and parabolic systems. The methods are iterative and computationally very efficient: only local forward and adjoint problems need to be solved in each subdomain, and the local minimizations have explicit solutions. Numerical experiments are provided to demonstrate the robustness and efficiency of the methods: the algorithms converge globally, even with rather poor initial guesses, and their convergence does not deteriorate, or deteriorates only slightly, when the meshes are refined.


Introduction
Domain decomposition methods (DDMs) have proved to be one of the most successful methodologies for constructing efficient numerical solvers for boundary value and initial-boundary value problems, the so-called direct problems; see [11, 13, 14] and the references therein. DDMs usually possess two important features for solving a wide class of large-scale direct problems: first, they are natural parallel solvers and can be easily implemented on parallel computers; second, their convergence may be made nearly optimal in the sense that the resulting convergence rate is nearly independent of the mesh size.
However, not much progress has been made in the construction of efficient DDMs for solving mathematically ill-posed inverse problems, although inverse problems are usually much more challenging and time consuming than their corresponding direct problems. In [5, 10], DDMs were used indirectly for an elliptic identification problem: classical iterative optimization algorithms were first applied to the stabilized minimization system of the identification problem, and existing DDMs were then introduced to solve the direct problems and their adjoint systems involved at each iteration. As the outer global iterations of these methods are based on classical nonlinear optimization algorithms, their convergence deteriorates rapidly as the degrees of freedom of the entire optimization system increase. Newton's method was first used in [3] to solve the optimality system of the stabilized minimization of an elliptic identification problem, and an additive Schwarz type preconditioned algorithm was then applied to solve the linear system involved at each Newton iteration. As Newton's method requires evaluations of the Hessian of the corresponding objective functional, the approach of [3] is applicable only to a very special formulation of the parameter identification problem. In this work we shall develop DDMs that directly solve the stabilized minimization systems of some typical linear inverse problems, so that their convergence does not deteriorate, or deteriorates only mildly, as the total number of degrees of freedom of the optimization system grows. Next, we briefly address some major difficulties in constructing DDMs for inverse problems directly, and then point out the new contributions of this work.
We shall use q and u(q) to denote, respectively, the parameter function to be identified and the solution of the forward model system associated with the parameter q. One may then formulate a general inverse problem formally as the forward operator equation u(q) = z^δ, where z^δ is the measured data of the exact solution u in some subregion inside the physical domain, on part of the boundary, or at the terminal time t = T when the problem is time-dependent. The parameter δ is used here to emphasize the presence of noise in the measured data.
Inverse problems are usually ill-posed because at least one of the following three conditions is violated: the existence, uniqueness and stability of solutions [1, 2, 7]. Of the three conditions, stability is the most frequently encountered difficulty in the numerical solution of inverse problems. One of the most stable and effective approaches to solving general ill-posed inverse problems is to transform them into stabilized output least-squares minimizations with some appropriately selected Tikhonov regularization, namely to minimize the following type of functional over some constrained set K:

$$J(q) = \|u(q) - z^\delta\|_V^2 + \beta\, N(q), \quad (1.1)$$

where V is a Hilbert or Banach space over the measurement subregion, determined by the type of measurement data available, N(q) is the regularization term and β is a regularization parameter balancing the data fitting and the regularization. One of the major difficulties in constructing DDMs for the nonlinear minimization problem associated with J(q) lies in the global dependence of the forward solution u(q) on the parameter q: a change of q in a small subregion of the global domain Ω changes u in the entire domain Ω. This is true no matter whether u(q) is linear or nonlinear in q. Due to this global dependence, a direct application of the DDM principle to the nonlinear minimization of J(q) may not work. To illustrate this point more clearly, consider a decomposition of the global minimization of J(q) over the entire domain Ω into a set of subproblems involving only sub-minimizations of functionals J_i(q_i + q̄) on the subdomains Ω_i, where q_i has support only in Ω_i and q̄ is the known contribution from the other subdomains; then J_i(q_i + q̄) should be of the form

$$J_i(q_i + \bar q) = \|u(q_i + \bar q) - z^\delta\|_V^2 + \beta\, N(q_i + \bar q). \quad (1.2)$$

Clearly the sub-minimization of the functional J_i in (1.2) involves the solution u(q_i + q̄), which still requires solving the forward problem in the global domain Ω, even when the operator u(q) is linear and only the local quantity q_i needs updating. Hence a direct application of DDMs does not really reduce the global computations to ones on the local subdomains.
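To make the role of the regularized output least-squares formulation (1.1) concrete, here is a small self-contained discrete sketch, not taken from the paper: the blur operator, noise level and regularization parameter below are all hypothetical. It shows that solving the forward operator equation naively amplifies the data noise, while the Tikhonov-regularized minimizer remains stable:

```python
import numpy as np

def tikhonov(G, z, beta):
    """Minimize ||G q - z||^2 + beta ||q||^2 via the normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + beta * np.eye(n), G.T @ z)

# Hypothetical ill-conditioned forward operator: repeated local averaging (a blur).
n = 50
x = np.linspace(0, 1, n)
M = np.eye(n) * 0.5 + np.eye(n, k=1) * 0.25 + np.eye(n, k=-1) * 0.25
blur = np.linalg.matrix_power(M, 10)

q_true = np.sin(2 * np.pi * x)
rng = np.random.default_rng(0)
z_delta = blur @ q_true * (1 + 0.01 * rng.uniform(-1, 1, n))  # 1% multiplicative noise

q_reg = tikhonov(blur, z_delta, beta=1e-2)   # stabilized reconstruction
q_naive = np.linalg.solve(blur, z_delta)     # unregularized: amplifies the noise

err_reg = np.linalg.norm(q_reg - q_true) / np.linalg.norm(q_true)
err_naive = np.linalg.norm(q_naive - q_true) / np.linalg.norm(q_true)
```

The parameter beta trades a small reconstruction bias for stability against the noise, exactly the balance described after (1.1).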
In this study, we derive and propose several efficient overlapping DDMs for solving some typical linear inverse problems, including the identification of the source strength and the initial temperature inside a physical domain, and of the fluxes on an (inaccessible) part of the boundary of a physical domain, in second order elliptic and parabolic systems. These inverse problems are all ill-posed, and in particular unstable with respect to changes of the noise in the data [2]. The new algorithms are constructed in a way that meets the true spirit of DDMs, namely at each iteration only smaller minimizations are solved on subdomains of the original global domain, and their convergence is nearly optimal in the sense that the number of iterations required for a specified accuracy is nearly independent of (or depends only very mildly on) the refinement of the finite element meshes.
The rest of the paper is arranged as follows. In Section 2, we propose the Tikhonov regularization for identifying the source strength. In Section 2.1, the overlapping domain decomposition methods are first introduced and the local minimizations are studied; the algorithms are then further improved. In Sections 3 and 4, we derive DDMs for the reconstruction of the fluxes on part of the boundary and of the initial temperature inside a physical domain, respectively. In Section 5, numerical experiments are presented for the identification of source strengths, fluxes and initial temperatures to illustrate the efficiency and robustness of the proposed algorithms. Some concluding remarks are given in Section 6.
Throughout the paper, C is often used for a generic constant. We shall use the symbol ⟨·, ·⟩ for the general inner product, and write ∥·∥ for the norms of the spaces involved.

Domain decomposition algorithms for the reconstruction of source strengths
The major task of this work is to propose some new overlapping DDMs for solving three typical linear inverse problems: the identification of the source strength, the flux and the initial temperature. For ease of exposition, we take the inverse problem of identifying the source strength in a diffusion system as an example to derive and discuss the new DDMs in detail in this section, and address the other two inverse problems in Sections 3 and 4. Let Ω be an open bounded and connected domain in R^d (d ≥ 1), with boundary ∂Ω. We consider the following diffusion system:

$$-\nabla\cdot(a(x)\nabla u) + c(x)\,u = f \ \ \text{in } \Omega, \qquad u = g \ \ \text{on } \partial\Omega, \quad (2.1)$$

where a(x), c(x) and g(x) are all given functions, with a(x) ≥ a₁ > 0 and c(x) ≥ c₁ > 0 in Ω. Suppose that the source strength f(x) of the model system is unknown in Ω. Our inverse problem is to recover the source strength distribution f(x) in Ω when measurement data of u, denoted by z^δ, are available in Ω or in a subregion Ω̃ of Ω. For convenience, we write the solution of system (2.1) as u(f) to emphasize its dependence on the source strength f(x). This is a well-known mathematically ill-posed problem. As in (1.1), we formulate it as a mathematically stabilized minimization system of the form

$$\min_{f\in L^2(\Omega)} J(f) = \|u(f) - z^\delta\|^2_{L^2(\widetilde\Omega)} + \beta\,\|f\|^2_{L^2(\Omega)}. \quad (2.2)$$

Indeed, one can show that the minimizer of this system is stable in the sense that it depends continuously on the change of the noise in the data z^δ [8, 12].
Linearity of the forward solutions. The forward solution u(f) of system (2.1) is affine in f: it is easy to check directly that u(f) is linear in f if and only if g(x) = 0. This leads us to consider the solution U to the following system:

$$-\nabla\cdot(a(x)\nabla U) + c(x)\,U = f \ \ \text{in } \Omega, \qquad U = 0 \ \ \text{on } \partial\Omega. \quad (2.3)$$

We can verify that

$$u(f) = U(f) + u(0), \quad \text{or equivalently} \quad U(f) = u(f) - u(0). \quad (2.4)$$

From now on we view the solution U(f) of (2.3) as a mapping from L²(Ω) to L²(Ω).

Adjoint operator. It is easy to verify that the operator U(f) is self-adjoint. In fact, by integration by parts we have for any ω ∈ L²(Ω) that

$$\langle U(f), \omega\rangle = \langle f, U(\omega)\rangle. \quad (2.5)$$
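The affine splitting (2.4) and the self-adjointness (2.5) can be checked numerically on a simple discretization. The following is a hedged 1D sketch with a hypothetical finite-difference discretization of -(a u')' + c u = f on (0, 1) with a = c = 1 and Dirichlet value g₀ at both ends; it is an illustration, not the paper's finite element setup:

```python
import numpy as np

# Interior nodes of a uniform mesh on (0,1); L is the symmetric system matrix
# of the finite-difference discretization of -u'' + u.
n, h = 99, 1.0 / 100
main = 2.0 / h**2 + 1.0
L = (np.diag(np.full(n, main)) +
     np.diag(np.full(n - 1, -1.0 / h**2), 1) +
     np.diag(np.full(n - 1, -1.0 / h**2), -1))

g0 = 3.0  # nonzero boundary value, so u(f) is affine, not linear, in f

def u(f):
    rhs = f.copy()
    rhs[0] += g0 / h**2      # boundary lifting at both ends
    rhs[-1] += g0 / h**2
    return np.linalg.solve(L, rhs)

def U(f):                    # homogeneous boundary data: the linear part, cf. (2.3)
    return np.linalg.solve(L, f)

rng = np.random.default_rng(1)
f, w = rng.standard_normal(n), rng.standard_normal(n)

affine_gap = np.linalg.norm(u(f) - (U(f) + u(np.zeros(n))))  # relation (2.4)
adjoint_gap = abs(U(f) @ w - f @ U(w))                       # relation (2.5)
```

Since L is symmetric, the discrete operator U = L⁻¹ is exactly self-adjoint, mirroring (2.5) at the discrete level.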

Overlapping DDMs with explicit local solvers
Using relation (2.4) we can rewrite the minimization (2.2) as

$$\min_{f\in L^2(\Omega)} J(f) = \|U(f) - z_0^\delta\|^2_{L^2(\widetilde\Omega)} + \beta\,\|f\|^2_{L^2(\Omega)}, \quad (2.6)$$

with z₀^δ = z^δ − u(0). As U(f) is linear, J(f) is convex with respect to f, and the minimizer of (2.6) exists and is unique.
In this section, we derive some effective DDMs to solve the optimization system (2.6). We do not intend to solve this optimization system on the global domain Ω, as most existing numerical solvers do. Instead we construct DDMs so that the system (2.6) can be solved effectively on local subdomains. To do so, we divide the global domain Ω into a finite number of overlapping subdomains Ω₁, Ω₂, ..., Ω_l, where l is a positive integer. Although our new DDMs work for a general number of subdomains, we focus our discussion on 4 subdomains with a cross-point for ease of exposition; see Figure 2.1. It is well known that the case of 4 subdomains with a cross-point is a most representative case of general multiple subdomains [11, 14]. Based on the partition of Ω into 4 overlapping subdomains, we shall often need a local subspace of L²(Ω) on each subdomain Ω_i:

$$V_i = \{\, v \in L^2(\Omega);\ \mathrm{supp}(v) \subset \overline\Omega_i \,\}.$$

Next we derive some new DD algorithms for solving the optimization system (2.6). The algorithms are based on local optimizations on the subspaces V_i associated with the subdomains Ω_i. For some given f_j ∈ V_j (j = 1, 2, 3, 4), consider the following local minimization over Ω_i:

$$\min_{v_i \in V_i} J\Big(v_i + \sum_{j\neq i} f_j\Big). \quad (2.7)$$

Here and in the sequel, we often write Σ_{j=1, j≠i}^4 as Σ_{j≠i} for simplicity. By the definition of J in (2.6), each local update v_i in Ω_i still requires computing the quantity U(v_i + Σ_{j≠i} f_j), which involves solving the forward system (2.3) in the entire domain Ω. To avoid this, we construct an auxiliary functional J^s_i of J, called the surrogate functional in [6], by introducing an auxiliary variable a. For a given a ∈ V_i and f_j ∈ V_j (j = 1, 2, 3, 4), we define

$$J^s_i(f, a) = J(f) + A\,\|f_i - a\|^2_{L^2(\Omega)} - \|U(f_i) - U(a)\|^2_{L^2(\widetilde\Omega)}, \quad (2.8)$$

where A is a positive constant selected such that

$$A\,\|v\|^2_{L^2(\Omega)} \ge \|U(v)\|^2_{L^2(\widetilde\Omega)} \quad \forall\, v \in L^2(\Omega). \quad (2.9)$$

This implies for any f = Σ_{j=1}^4 f_j that J^s_i(f, a) = J(f) when a = f_i, and

$$J^s_i(f, a) \ge J(f) \quad \forall\, a \in V_i. \quad (2.10)$$

So J^s_i(f, a) can be viewed as a small perturbation of J(f) when a is close to f_i. Now we convert (2.8) into a more explicit representation. Using (2.6), (2.8) and the adjoint relation (2.5), we can rewrite J^s_i as

$$J^s_i(f, a) = (A+\beta)\,\|f_i\|^2 - 2\,\langle f_i, \tilde z_i\rangle + \Big(\Big\|U\Big(\sum_{j\neq i} f_j\Big) - z_0^\delta\Big\|^2 + \beta\,\Big\|\sum_{j\neq i} f_j\Big\|^2\Big) + \big(A\,\|a\|^2 - \|U(a)\|^2\big). \quad (2.11)$$

We can see that the last two terms in (2.11) do not depend on f_i, so dropping them will not affect the local minimization over Ω_i. This leads us to consider the following minimization for a given a ∈ V_i:

$$\min_{f_i \in V_i} (A+\beta)\,\|f_i\|^2 - 2\,\langle f_i, \tilde z_i\rangle, \quad (2.12)$$

where z̃_i is given by

$$\tilde z_i = \Big(A\,a - \beta\sum_{j\neq i} f_j + U\Big(z_0^\delta - U\Big(\sum_{j\neq i} f_j + a\Big)\Big)\Big)\Big|_{\Omega_i}. \quad (2.13)$$

Noting that (2.12) is a simple quadratic minimization, we can find its exact minimizer f*_i:

$$f_i^* = \frac{\tilde z_i}{A + \beta} \quad \text{in } \Omega_i. \quad (2.14)$$

Clearly, the new functional in (2.12) has an obvious advantage over the functional J in (2.6) or (2.7): it is completely local, and the minimization can be solved explicitly within the subdomain Ω_i. However, the solution of the local minimization (2.12) requires the data z̃_i from (2.13), which involves the evaluations of U(Σ_{j≠i} f_j + a) and U(z₀^δ − U(Σ_{j≠i} f_j + a)). Unfortunately, these two evaluations are both global and require solving the forward system (2.3) in the entire domain Ω. This is surely not acceptable in an efficient DD algorithm.
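The surrogate-functional step around (2.13) and (2.14) can be illustrated on a small algebraic model. The sketch below is hypothetical (a random self-adjoint matrix plays the role of the forward operator U, and the exact form of the local data z̃ is derived under the stated least-squares form of J); it checks the two key properties: the surrogate majorizes J, and one explicit local update never increases J:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 40, 1e-2
B = rng.standard_normal((n, n)) / n
Uop = B + B.T                                 # hypothetical self-adjoint forward operator
A = 1.1 * np.linalg.norm(Uop, 2) ** 2         # constant A with A||v||^2 >= ||U v||^2, cf. (2.9)
z0 = rng.standard_normal(n)                   # synthetic data z0^delta

J = lambda f: np.sum((Uop @ f - z0) ** 2) + beta * np.sum(f ** 2)

mask = np.zeros(n); mask[: n // 2] = 1.0      # "subdomain": first half of the indices
f = rng.standard_normal(n)                    # current global iterate, f = f_i + s
s = (1 - mask) * f                            # contribution from the other subdomains
a = mask * f                                  # auxiliary variable a = f_i

# Explicit local minimizer, cf. (2.13)-(2.14): f_i* = z_i~ / (A + beta) on the subdomain
z_tilde = mask * (A * a - beta * s + Uop @ (z0 - Uop @ (s + a)))
f_i_star = z_tilde / (A + beta)

f_new = s + f_i_star                          # replace only the local component
```

Because the surrogate lies above J and touches it at a = f_i, minimizing the surrogate locally guarantees J(f_new) ≤ J(f), which is what makes the explicit local solves safe to iterate.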
Next, we propose some techniques to get rid of the aforementioned two global evaluations, so that the resulting DD algorithm involves only local minimizations over the local subdomains. For convenience, we write the part of the boundary of Ω_i inside Ω as Γ̃_i, i.e., Γ̃_i = ∂Ω_i ∩ Ω for i = 1, 2, 3, 4. Then we introduce a local forward operator U_i(f, p) associated with the forward problem (2.3): U_i(f, p) solves

$$-\nabla\cdot(a\,\nabla U_i) + c\,U_i = f \ \ \text{in } \Omega_i, \qquad U_i = p \ \ \text{on } \widetilde\Gamma_i, \qquad U_i = 0 \ \ \text{on } \partial\Omega_i\setminus\widetilde\Gamma_i. \quad (2.15)$$

The operator U_i(·, 0) is self-adjoint on L²(Ω_i):

$$\langle U_i(f, 0), \omega\rangle = \langle f, U_i(\omega, 0)\rangle \quad \forall\, f, \omega \in L^2(\Omega_i). \quad (2.16)$$

Using the local operators U_i(f, p) in (2.15), we introduce the following local functional for f_j ∈ V_j, j = 1, 2, 3, 4:

$$J_i\Big(\sum_{j=1}^4 f_j, p\Big) = \Big\|U_i\Big(\sum_{j=1}^4 f_j, p\Big) - z_0^\delta\Big\|^2_{L^2(\widetilde\Omega\cap\Omega_i)} + \beta\,\Big\|\sum_{j=1}^4 f_j\Big\|^2_{L^2(\Omega_i)}, \quad (2.17)$$

and its surrogate functional J^s_i for any given a ∈ V_i:

$$J^s_i\Big(\sum_{j=1}^4 f_j, p, a\Big) = J_i\Big(\sum_{j=1}^4 f_j, p\Big) + A\,\|f_i - a\|^2 - \|U_i(f_i, 0) - U_i(a, 0)\|^2.$$

Using the important fact that U_i(Σ_{j=1}^4 f_j, p) = U_i(Σ_{j≠i} f_j, p) + U_i(f_i, 0) and the adjoint relation (2.16), we can write J^s_i in a form whose last term does not depend on f_i; dropping that term does not affect the local minimization over Ω_i. This leads us to consider the following minimization for a given a ∈ V_i:

$$\min_{f_i \in V_i} (A+\beta)\,\|f_i\|^2 - 2\,\langle f_i, \bar z_i\rangle, \quad (2.18)$$

where

$$\bar z_i = A\,a - \beta\sum_{j\neq i} f_j + U_i\Big(z_0^\delta - U_i\Big(\sum_{j\neq i} f_j + a,\ p\Big),\ 0\Big) \quad \text{in } \Omega_i.$$

This is a simple quadratic minimization, and we can find its exact minimizer f*_i:

$$f_i^* = \frac{\bar z_i}{A+\beta} \quad \text{in } \Omega_i. \quad (2.19)$$

We can see from this expression that, as long as the inner boundary value p is available, the minimization (2.18) does not involve any global data and is completely local. Noting the definitions of J_i(f, p) and J^s_i(f, p, a), we can connect J_i(f, p) and J^s_i(f, p, a) with the functional J(f) (cf. (2.6)) restricted to Ω_i:

$$J_i(f, p) = J(f)\big|_{\Omega_i} \quad \text{when } p = U(f)\big|_{\widetilde\Gamma_i}. \quad (2.20)$$

So using (2.18), we are now ready to apply the multiplicative or additive Schwarz iteration principle [11, 14] to establish two DD algorithms for solving the optimization system (2.6).
For the description of the algorithms, we introduce an index function for any point x ∈ Ω:

$$n(x) = \#\{\, i;\ x\in\overline\Omega_i,\ 1\le i\le 4 \,\},$$

i.e., n(x) counts the subdomains that contain x.

Algorithm 2.1 (Multiplicative Schwarz Algorithm (MSA)). Choose a tolerance parameter ε > 0 and an initial value f^(0) = Σ_{i=1}^4 f_i^(0) with f_i^(0) ∈ V_i (i = 1, 2, 3, 4), and solve (2.3) for U(f^(0)); set p_i^(0) := U(f^(0))|_{Γ̃_i} and n := 0.
Step 1. For i = 1, 2, 3, 4: take a := f_i^(n), solve the local minimization (2.18) with the inner boundary value p_i^(n) for its explicit minimizer f_i^(n+1), then solve (2.15) for U_i(Σ_{j≤i} f_j^(n+1) + Σ_{j>i} f_j^(n), p_i^(n)); update the inner boundary values on Γ̃_j for j > i if Γ̃_j ⊂ Ω_i.
Step 2. Update the inner boundary values on Γ̃_i (i = 1, 2, 3, 4); if the relative change of the iterates is below ε, stop; otherwise set n := n + 1 and go to Step 1.

We can easily see that Algorithm 2.1 is sequential, or multiplicative. The next algorithm is a parallel version of Algorithm 2.1. For this purpose, we introduce a bounded uniform partition of unity {χ_i}_{i=1}^4 such that Σ_{i=1}^4 χ_i = 1, ∥χ_i∥_∞ ≤ 1 and supp(χ_i) ⊂ Ω̄_i.

Algorithm 2.2 (Additive Schwarz Algorithm (ASA)). Choose a tolerance parameter ε > 0, a relaxation parameter λ ∈ (0, 1) and an initial value f^(0) as in Algorithm 2.1. At each iteration, solve the four local minimizations (2.18) in parallel for the minimizers f_i^(n+1), combine them through the partition of unity and the relaxation,

$$f^{(n+1)} = (1-\lambda)\,f^{(n)} + \lambda\sum_{i=1}^4 \chi_i\Big(f_i^{(n+1)} + \sum_{j\neq i} f_j^{(n)}\Big),$$

and then update all inner boundary values p_i^(n+1).
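The parallel combination through a partition of unity can be sketched on a small algebraic model of (2.6). Everything below is hypothetical (a random self-adjoint matrix as the forward operator, two overlapping index blocks as "subdomains", and the damped combination step in the form described above); it only illustrates that the relaxed additive iteration drives the iterate to the global Tikhonov minimizer:

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, lam = 60, 1e-1, 0.5
B = rng.standard_normal((n, n)) / n
Uop = B + B.T                                   # hypothetical self-adjoint forward operator
A = 1.1 * np.linalg.norm(Uop, 2) ** 2           # majorization constant, cf. (2.9)
z0 = Uop @ np.sin(np.linspace(0, 3, n))         # synthetic data

# Two overlapping "subdomains": indices [0, 40) and [20, 60).
masks = [np.r_[np.ones(40), np.zeros(20)], np.r_[np.zeros(20), np.ones(40)]]
# Partition of unity chi_i: split the overlap [20, 40) evenly.
chis = [m / (masks[0] + masks[1]) for m in masks]

J = lambda f: np.sum((Uop @ f - z0) ** 2) + beta * np.sum(f ** 2)

f = np.zeros(n)
for _ in range(400):
    updates = []
    for m in masks:                             # the local solves are independent (parallel)
        s, a = (1 - m) * f, m * f
        z_t = m * (A * a - beta * s + Uop @ (z0 - Uop @ f))
        updates.append(s + z_t / (A + beta))    # explicit local minimizer, cf. (2.14)
    f = (1 - lam) * f + lam * sum(c * g for c, g in zip(chis, updates))

f_star = np.linalg.solve(Uop.T @ Uop + beta * np.eye(n), Uop.T @ z0)
rel_err = np.linalg.norm(f - f_star) / np.linalg.norm(f_star)
```

The relaxation λ = 1/2 damps the simultaneous local corrections in the overlap, which is what keeps the parallel version stable where the multiplicative version would update sequentially.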

Domain decomposition algorithms for flux reconstruction
In this section, we propose a DD algorithm to solve the inverse problem of identifying fluxes on part of the boundary.
Let Ω ⊂ R^d (d ≥ 1) be an open bounded and connected domain, with a boundary ∂Ω that splits into two parts, ∂Ω = Γ̄₀ ∪ Γ̄₁. We consider the following elliptic system:

$$-\nabla\cdot(a(x)\nabla u) + c(x)\,u = f \ \ \text{in } \Omega, \qquad a\,\frac{\partial u}{\partial \nu} = g \ \ \text{on } \Gamma_0, \qquad a\,\frac{\partial u}{\partial \nu} = h \ \ \text{on } \Gamma_1, \quad (3.1)$$

where a(x), c(x), f(x), g(x) are all given functions, and a(x) ≥ a₁ > 0, c(x) ≥ c₁ > 0 in Ω. Suppose that the flux h(x) of the model system is unknown on the inaccessible part Γ₁ of ∂Ω. Our inverse problem is to recover the flux distribution on Γ₁ when some measurement data u^δ of u are available on the accessible part Γ₀ of ∂Ω. We shall write the solution of system (3.1) as u(h) to emphasize its dependence on the flux h(x).
As discussed in Section 2, we formulate the ill-posed inverse problem of recovering the flux as a mathematically stabilized minimization system of the form

$$\min_{h\in L^2(\Gamma_1)} J(h) = \|u(h) - u^\delta\|^2_{L^2(\Gamma_0)} + \beta\,\|h\|^2_{L^2(\Gamma_1)}. \quad (3.2)$$

This formulation is stable in the sense that the minimizer h of (3.2) depends continuously on the change of the noise in the data u^δ [12].
Similarly to the discussion in Section 2, we can write the solution u(h) of (3.1) as

$$u(h) = U(h) + u(0), \quad (3.3)$$

where U(h) is the solution of the following system:

$$-\nabla\cdot(a\,\nabla U) + c\,U = 0 \ \ \text{in } \Omega, \qquad a\,\frac{\partial U}{\partial \nu} = 0 \ \ \text{on } \Gamma_0, \qquad a\,\frac{\partial U}{\partial \nu} = h \ \ \text{on } \Gamma_1. \quad (3.4)$$

Adjoint operator of U(h). For any ω ∈ L²(Γ₀), consider the solution U*(ω) ∈ H¹(Ω) of the following system:

$$-\nabla\cdot(a\,\nabla U^*) + c\,U^* = 0 \ \ \text{in } \Omega, \qquad a\,\frac{\partial U^*}{\partial \nu} = \omega \ \ \text{on } \Gamma_0, \qquad a\,\frac{\partial U^*}{\partial \nu} = 0 \ \ \text{on } \Gamma_1. \quad (3.5)$$

Then U*(ω)|_{Γ₁} defines the adjoint operator of U, namely, it holds that

$$\langle U(h), \omega\rangle_{L^2(\Gamma_0)} = \langle h, U^*(\omega)\rangle_{L^2(\Gamma_1)} \quad \forall\, h \in L^2(\Gamma_1),\ \omega \in L^2(\Gamma_0). \quad (3.6)$$

This relation follows directly from (3.4), (3.5) and an application of integration by parts:

$$\langle U(h), \omega\rangle_{L^2(\Gamma_0)} = \int_\Omega \big(a\,\nabla U\cdot\nabla U^* + c\,U\,U^*\big)\,dx = \langle h, U^*(\omega)\rangle_{L^2(\Gamma_1)}.$$
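The boundary adjoint relation (3.6) is a reciprocity identity, and it can be checked exactly on a symmetric discretization. The following hedged 1D sketch (hypothetical setup: Ω = (0,1), Γ₀ = {0}, Γ₁ = {1}, a = c = 1, P1 finite elements) exploits the symmetry of the assembled system matrix:

```python
import numpy as np

# P1 finite elements for -u'' + u = 0 on (0,1) give a symmetric system K u = F,
# with Neumann (flux) data entering F only at the boundary nodes.
n, h_mesh = 100, 1.0 / 100
K = np.zeros((n + 1, n + 1))
ke = (np.array([[1.0, -1.0], [-1.0, 1.0]]) / h_mesh +
      h_mesh * np.array([[2.0, 1.0], [1.0, 2.0]]) / 6)
for e in range(n):                       # assemble stiffness + mass, element by element
    K[e:e + 2, e:e + 2] += ke

def forward(flux):                       # U(h): flux on Gamma_1 -> trace on Gamma_0
    F = np.zeros(n + 1); F[n] = flux
    return np.linalg.solve(K, F)[0]

def adjoint(omega):                      # U*(w): flux on Gamma_0 -> trace on Gamma_1
    F = np.zeros(n + 1); F[0] = omega
    return np.linalg.solve(K, F)[n]

h_val, w_val = 2.7, -1.3
lhs = forward(h_val) * w_val             # <U(h), w> on Gamma_0 (a point in 1D)
rhs = h_val * adjoint(w_val)             # <h, U*(w)> on Gamma_1
```

Discretely, forward and adjoint are e₀ᵀK⁻¹e_n and e_nᵀK⁻¹e₀, so (3.6) holds to rounding error because K is symmetric.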

DD algorithms with explicit local solvers
In this subsection, we follow Section 2.1 to derive some overlapping domain decomposition methods for solving the minimization (3.2). As in Section 2.1, Ω is divided into the overlapping subdomains Ω_i (i = 1, 2, 3, 4); accordingly the feasible constraint space L²(Γ₁) can be decomposed into the subspaces

$$V_i = \{\, v \in L^2(\Gamma_1);\ \mathrm{supp}(v) \subset \overline\Gamma_1\cap\overline\Omega_i \,\}.$$

Next we introduce an auxiliary surrogate functional J^s_i of J(h) in (3.2) for any given a ∈ V_i and h_j ∈ V_j (j = 1, 2, 3, 4):

$$J^s_i(h, a) = J(h) + A\,\|h_i - a\|^2_{L^2(\Gamma_1)} - \|U(h_i) - U(a)\|^2_{L^2(\Gamma_0)}. \quad (3.7)$$

By derivations similar to (2.11), but using the adjoint relation (3.6), we can rewrite J^s_i as

$$J^s_i(h, a) = (A+\beta)\,\|h_i\|^2 - 2\,\langle h_i, \tilde z_i\rangle + \Big(\Big\|U\Big(\sum_{j\neq i} h_j\Big) - z_0^\delta\Big\|^2 + \beta\,\Big\|\sum_{j\neq i} h_j\Big\|^2\Big) + \big(A\,\|a\|^2 - \|U(a)\|^2\big), \quad (3.8)$$

where z₀^δ = u^δ − u(0)|_{Γ₀}. We can see that the last two terms do not depend on h_i, so we can drop them in the minimization of J^s_i. This leads to the following local minimization for any a ∈ V_i:

$$\min_{h_i\in V_i} (A+\beta)\,\|h_i\|^2 - 2\,\langle h_i, \tilde z_i\rangle, \quad (3.9)$$

where z̃_i is given by

$$\tilde z_i = \Big(A\,a - \beta\sum_{j\neq i} h_j + U^*\Big(z_0^\delta - U\Big(\sum_{j\neq i} h_j + a\Big)\Big)\Big)\Big|_{\Gamma_1\cap\overline\Omega_i}. \quad (3.10)$$

This is a quadratic minimization, so we can find its exact minimizer h*_i:

$$h_i^* = \frac{\tilde z_i}{A+\beta}. \quad (3.11)$$

We observe that the minimization (3.9) is completely local, and its solution can be obtained explicitly within the subdomain Ω_i. However, its solution h*_i needs the data z̃_i from (3.10), which involves two global solutions of the forward and adjoint systems (3.4) and (3.5); this is clearly not acceptable in an efficient DD algorithm. Next, we propose some techniques to avoid these two global evaluations so that the resulting DD algorithm involves only local minimizations over the local subdomains. To do so, we introduce two local forward and adjoint operators U_i(h, p) and U*_i(ω, q) associated with the global forward and adjoint systems (3.4) and (3.5): U_i(h, p) solves the restriction of (3.4) to Ω_i with flux h on Γ₁ ∩ ∂Ω_i and Dirichlet value p on the inner boundary Γ̃_i = ∂Ω_i ∩ Ω, (3.12) while U*_i(ω, q) solves the restriction of (3.5) to Ω_i with flux ω on Γ₀ ∩ ∂Ω_i and Dirichlet value q on Γ̃_i. (3.13) Using the systems (3.12), (3.13) and the integration by parts formula, we derive the following important relation that will be needed later on:

$$\langle U_i(h, 0), \omega\rangle_{L^2(\Gamma_0\cap\partial\Omega_i)} = \langle h, U_i^*(\omega, 0)\rangle_{L^2(\Gamma_1\cap\partial\Omega_i)}. \quad (3.14)$$

By means of the local operators U_i(h, p) in (3.12), we introduce the local functional J_i(Σ_{j=1}^4 h_j, p) for h_j ∈ V_j (j = 1, 2, 3, 4):

$$J_i\Big(\sum_{j=1}^4 h_j, p\Big) = \Big\|U_i\Big(\sum_{j=1}^4 h_j, p\Big) - z_0^\delta\Big\|^2_{L^2(\Gamma_0\cap\partial\Omega_i)} + \beta\,\Big\|\sum_{j=1}^4 h_j\Big\|^2_{L^2(\Gamma_1\cap\partial\Omega_i)}, \quad (3.15)$$

and a surrogate functional J^s_i for a given a ∈ V_i:

$$J^s_i\Big(\sum_{j=1}^4 h_j, p, a\Big) = J_i\Big(\sum_{j=1}^4 h_j, p\Big) + A\,\|h_i - a\|^2 - \|U_i(h_i, 0) - U_i(a, 0)\|^2. \quad (3.16)$$

Using the important fact that U_i(Σ_{j=1}^4 h_j, p) = U_i(Σ_{j≠i} h_j, p) + U_i(h_i, 0) and the adjoint relation (3.14), we can rewrite J^s_i(Σ_{j=1}^4 h_j, p, a) in a form whose last term does not depend on h_i. Dropping that term, we are led to the following quadratic minimization:

$$\min_{h_i\in V_i} (A+\beta)\,\|h_i\|^2 - 2\,\langle h_i, \bar z_i\rangle, \quad (3.17)$$

where z̄_i is given by

$$\bar z_i = A\,a - \beta\sum_{j\neq i} h_j + U_i^*\Big(z_0^\delta - U_i\Big(\sum_{j\neq i} h_j + a,\ p\Big),\ 0\Big) \quad \text{on } \Gamma_1\cap\partial\Omega_i.$$

We can easily find the minimizer of the quadratic optimization (3.17) in explicit form:

$$h_i^* = \frac{\bar z_i}{A+\beta}. \quad (3.18)$$

As in Section 2.1, we are now ready to formulate two new DD algorithms for the minimization system (3.2) for identifying the flux. For the description of the DD algorithms, we introduce an index function for any point x ∈ Γ₁:

$$n(x) = \#\{\, i;\ x\in\overline\Omega_i\cap\Gamma_1,\ 1\le i\le 4 \,\}.$$

Algorithm 3.1 (Multiplicative Schwarz Algorithm (MSA)). Choose a tolerance parameter ǫ₁ > 0 and an initial value h_i^(0) ∈ V_i (i = 1, 2, 3, 4), and solve (3.4) for U(h^(0)); set p_i^(0) := U(h^(0))|_{Γ̃_i} and n := 0. At each iteration, for i = 1, 2, 3, 4 solve the local minimization (3.17) for h_i^(n+1) and the local system (3.12) for U_i(Σ_{j≤i} h_j^(n+1) + Σ_{j>i} h_j^(n), p_i^(n)); update the inner boundary values on Γ̃_j for j > i if Γ̃_j ⊂ Ω_i, and then update the inner boundary values on Γ̃_i (i = 1, 2, 3, 4).

The next algorithm is a parallel version of Algorithm 3.1. For this purpose, we introduce a uniform partition of unity {χ_i}_{i=1}^4 such that Σ_{i=1}^4 χ_i = 1, ∥χ_i∥_∞ ≤ 1 and supp(χ_i) ⊂ ∂Ω_i ∩ Γ₁.

Algorithm 3.2 (Additive Schwarz Algorithm (ASA)). Choose a tolerance parameter ǫ₁ > 0, a relaxation parameter λ ∈ (0, 1), an initial value h_i^(0) ∈ V_i (i = 1, 2, 3, 4), and solve (3.4) for U(h^(0)); set p_i^(0) := U(h^(0))|_{Γ̃_i} and n := 0.
Step 1. For i = 1, 2, 3, 4 in parallel, solve the local minimizations

$$\min_{h_i\in V_i}\ (A+\beta)\,\|h_i\|^2 - 2\,\langle h_i, \bar z_i^{(n)}\rangle \quad (3.19)$$

for the minimizers h_i^(n+1), with z̄_i^(n) evaluated at the current iterates and inner boundary values, and combine them through the partition of unity and the relaxation:

$$h^{(n+1)} = (1-\lambda)\,h^{(n)} + \lambda\sum_{i=1}^4 \chi_i\Big(h_i^{(n+1)} + \sum_{j\neq i} h_j^{(n)}\Big). \quad (3.20)$$

Step 2. Update the inner boundary values p_i^(n+1); if the relative change of the iterates is below ǫ₁, stop; otherwise set n := n + 1 and go to Step 1.

Remark 3.1. As for (3.18), we have explicit expressions for the minimizers h_i^(n+1) in (3.19) and (3.20). In our numerical implementations, we simply take the partition of unity {χ_i}_{i=1}^4 used in Algorithm 3.2 as follows: χ_i(x) = 1/n(x) for x ∈ Ω̄_i ∩ Γ₁ and χ_i(x) = 0 otherwise, where n(x) is the index function introduced above.

Domain decomposition algorithms for the reconstruction of an initial temperature
In this section, we are interested in extending the DD algorithms proposed in Sections 2 and 3 for the stationary inverse source and flux problems to a time-dependent inverse problem: the identification of the initial temperature in the following heat conduction system:

$$u_t - \nabla\cdot(a(x)\nabla u) = f \ \ \text{in } \Omega\times(0, T), \qquad u = g \ \ \text{on } \partial\Omega\times(0, T), \qquad u(\cdot, 0) = \varphi \ \ \text{in } \Omega. \quad (4.1)$$

We assume that some observation data z^δ of the temperature u(x, t) are available in Ω, or in some small subregion ω ⊂ Ω, over a time history in the range [T − σ, T]. The inverse problem of interest is to recover the initial temperature distribution φ(x) from the observation data z^δ. We shall write the solution of system (4.1) as u(φ) to emphasize its dependence on the initial temperature φ(x).
As described in Section 2, it is easy to verify that u(φ) = U(φ) + u(0), where U(φ) is linear with respect to φ and satisfies the following system:

$$U_t - \nabla\cdot(a(x)\nabla U) = 0 \ \ \text{in } \Omega\times(0, T], \qquad U = 0 \ \ \text{on } \partial\Omega\times(0, T], \qquad U(\cdot, 0) = \varphi \ \ \text{in } \Omega, \quad (4.2)$$

whose variational formulation is given by

$$\langle U_t, \psi\rangle + \langle a\,\nabla U, \nabla\psi\rangle = 0 \quad \forall\, \psi \in H_0^1(\Omega),\ \ t \in (0, T]. \quad (4.3)$$

Let z₀^δ = z^δ − u(0); then we can formulate our inverse problem as the following regularized output least-squares minimization:

$$\min_{\varphi\in L^2(\Omega)} J(\varphi) = \int_{T-\sigma}^{T}\|U(\varphi) - z_0^\delta\|^2_{L^2(\omega)}\,dt + \beta\,\|\varphi\|^2_{L^2(\Omega)}. \quad (4.4)$$

Now we introduce the adjoint system of the forward problem (4.2): U*(ω) solves

$$-U^*_t - \nabla\cdot(a(x)\nabla U^*) = \chi_{[T-\sigma, T]}\,\omega \ \ \text{in } \Omega\times[0, T), \qquad U^* = 0 \ \ \text{on } \partial\Omega\times[0, T), \qquad U^*(\cdot, T) = 0 \ \ \text{in } \Omega, \quad (4.5)$$

which is linear with respect to ω. Next we derive a very useful relation:

$$\int_{T-\sigma}^{T}\langle U(\varphi), \omega\rangle_{L^2(\omega)}\,dt = \langle \varphi,\ U^*(\omega)(\cdot, 0)\rangle_{L^2(\Omega)}. \quad (4.6)$$

To verify it, we use (4.3), take ψ = U*(ω), and integrate by parts with respect to t:

$$\langle U(T), U^*(T)\rangle - \langle U(0), U^*(0)\rangle = \int_0^T \big(\langle U_t, U^*\rangle + \langle U, U^*_t\rangle\big)\,dt = -\int_{T-\sigma}^{T}\langle U, \omega\rangle_{L^2(\omega)}\,dt;$$

the desired relation (4.6) then follows readily from the initial and terminal conditions in (4.2) and (4.5).

Next we follow Sections 2 and 3 to derive overlapping domain decomposition methods for solving the time-dependent minimization (4.4). As in Section 2.1, Ω is divided into the overlapping subdomains Ω_i (i = 1, 2, 3, 4); accordingly the feasible constraint space L²(Ω) can be decomposed into the subspaces

$$V_i = \{\, v \in L^2(\Omega);\ \mathrm{supp}(v) \subset \overline\Omega_i \,\}.$$

In order to avoid any global solution of the forward and adjoint systems (4.2) and (4.5) in our DD algorithms, we introduce their local variants, namely the solutions U_i(φ, p) and U*_i(ω, p) of the systems (4.2) and (4.5) restricted to Ω_i × (0, T), with Dirichlet value p on the inner boundary Γ̃_i = ∂Ω_i ∩ Ω and homogeneous data on ∂Ω_i ∩ ∂Ω. (4.11) Noting that U_i(φ, 0) = U*_i(ω, 0) = 0 on ∂Ω_i, we can derive, as we did for (4.6), that

$$\int_{T-\sigma}^{T}\langle U_i(\varphi, 0), \omega\rangle\,dt = \langle \varphi,\ U_i^*(\omega, 0)(\cdot, 0)\rangle. \quad (4.12)$$

Now we can define a local functional J_i(Σ_{j=1}^4 φ_j, p) for φ_j ∈ V_j (j = 1, 2, 3, 4):

$$J_i\Big(\sum_{j=1}^4\varphi_j, p\Big) = \int_{T-\sigma}^{T}\Big\|U_i\Big(\sum_{j=1}^4\varphi_j, p\Big) - z_0^\delta\Big\|^2_{L^2(\omega\cap\Omega_i)}\,dt + \beta\,\Big\|\sum_{j=1}^4\varphi_j\Big\|^2_{L^2(\Omega_i)}, \quad (4.13)$$

and introduce a surrogate functional J^s_i for any a ∈ V_i:

$$J^s_i\Big(\sum_{j=1}^4\varphi_j, p, a\Big) = J_i\Big(\sum_{j=1}^4\varphi_j, p\Big) + A\,\|\varphi_i - a\|^2 - \int_{T-\sigma}^{T}\|U_i(\varphi_i, 0) - U_i(a, 0)\|^2\,dt.$$

Using the fact that U_i(Σ_{j=1}^4 φ_j, p) = U_i(Σ_{j≠i} φ_j, p) + U_i(φ_i, 0) and the adjoint relation (4.12), we can rewrite J^s_i in a form whose last term does not depend on φ_i; dropping the term does not affect the local minimization over Ω_i. This leads us to consider the following minimization for a given a ∈ V_i:

$$\min_{\varphi_i\in V_i}\ (A+\beta)\,\|\varphi_i\|^2 - 2\,\langle\varphi_i, \bar z_i\rangle, \quad (4.14)$$

where

$$\bar z_i = A\,a - \beta\sum_{j\neq i}\varphi_j + U_i^*\Big(z_0^\delta - U_i\Big(\sum_{j\neq i}\varphi_j + a,\ p\Big),\ 0\Big)(\cdot, 0) \quad \text{in } \Omega_i.$$

Clearly the minimization (4.14) is quadratic, so we can find its exact minimizer φ*_i:

$$\varphi_i^* = \frac{\bar z_i}{A+\beta} \quad \text{in } \Omega_i. \quad (4.15)$$

With (4.14) and (4.15), Algorithms 4.1 (MSA) and 4.2 (ASA) are then formulated exactly as Algorithms 2.1 and 2.2: at each iteration one solves the local minimization (4.14) for φ_i^(n+1), solves the local forward system for U_i, and updates the inner boundary values on Γ̃_j for j > i if Γ̃_j ⊂ Ω_i, and then on Γ̃_i (i = 1, 2, 3, 4).

Numerical experiments

All inverse problems are solved on a finite element triangulation of the domain Ω, which is constructed in such a way that it is consistent with the subdomain decompositions. All the elliptic problems involved in the DD algorithms are solved by the continuous linear finite element method, while all the parabolic problems are solved by the continuous linear finite element method in space and the Crank-Nicolson scheme in time.
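The Crank-Nicolson time stepping just mentioned can be sketched as follows. This is a minimal hypothetical 1D illustration (a = 1, homogeneous Dirichlet data, and the mesh sizes below are not the paper's setup), comparing one mode of the heat equation with its exact decay:

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on (0,1), u(0,t) = u(1,t) = 0.
n, T, m = 100, 0.1, 100
h, dt = 1.0 / n, T / m
x = np.linspace(0, 1, n + 1)[1:-1]           # interior nodes
lap = (np.diag(np.full(n - 1, -2.0)) +
       np.diag(np.full(n - 2, 1.0), 1) +
       np.diag(np.full(n - 2, 1.0), -1)) / h**2
I = np.eye(n - 1)
M1 = I - 0.5 * dt * lap                       # implicit half of the step
M2 = I + 0.5 * dt * lap                       # explicit half of the step

u = np.sin(np.pi * x)                         # initial temperature phi(x)
for _ in range(m):
    u = np.linalg.solve(M1, M2 @ u)           # one Crank-Nicolson step

u_exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
err = np.max(np.abs(u - u_exact))
```

The scheme is second-order accurate in time and unconditionally stable, which is why it is a natural companion to the linear finite element discretization in space.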
The parameters involved in the DD algorithms are chosen as follows. The initial guesses are set to be identically equal to some constants, which, as we shall see, are rather poor initial guesses for all the test problems. We take the parameter A = 1 and the relaxation parameter λ = 1/2 in all the numerical experiments. The noisy data z^δ are obtained by adding some uniform random noise to the exact data, i.e., z^δ = u + δ R u, where R is a uniform random function with values in [−1, 1]. The errors shown in all the tables are the relative L²-norm errors ∥q^(k) − q∥/∥q∥, where q and q^(k) are the exact parameter and its numerical reconstruction by the DD algorithms, which are terminated when the relative L²-norm error reaches 0.1. The exact parameters and their numerically reconstructed profiles are also presented.
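The noise model and the error measure above can be written out directly; in this sketch the exact data u is a hypothetical stand-in, chosen only to exercise the formulas z^δ = u + δ R u and ∥q^(k) − q∥/∥q∥:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 201)
u = np.sin(2 * np.pi * x)                 # stand-in for the exact data

delta = 0.05                              # 5% noise level
R = rng.uniform(-1.0, 1.0, u.shape)       # uniform random function in [-1, 1]
z_delta = u + delta * R * u               # pointwise multiplicative noise

def rel_l2_error(q_k, q):
    return np.linalg.norm(q_k - q) / np.linalg.norm(q)

noise_level = rel_l2_error(z_delta, u)    # at most delta by construction
```

Because |R| ≤ 1, the relative size of the perturbation never exceeds δ, so δ directly plays the role of the noise level quoted in the experiments.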
We start with two numerical tests for the flux reconstruction on the partial boundary Γ₁ = {(x, y); x = 1, 0 ≤ y ≤ 2} in the system (3.1), where we take g(x) = 0 on Γ₀, f(x) = 0 and a(x) = c(x) = 1 in Ω. We can see from Figure 5.1 that the numerically reconstructed fluxes, with 5% noise in the data, appear to be quite satisfactory, in view of the severe ill-posedness of the inverse flux problem. More importantly, we observe from Tables 5.1 and 5.2 that the convergence of the DD algorithms is nearly optimal with respect to the refinement of the finite element mesh, i.e., the number of iterations grows only very mildly with the mesh refinement.
Next, we present three numerical examples of reconstructing the source strength f(x) in the system (2.1), with a(x) = (x + y)/100, c(x) = 1 in Ω and g(x) = 0 on ∂Ω. We start with the constant initial guess f^(0) = 0 in Ω.

Example 5.3
We take the exact source strength f(x, y) = sin(2πx) sin(2πy) and the noise level δ = 1%. Figure 5.2 shows the exact and numerically recovered source strengths, while Table 5.3 gives the number of iterations of Algorithms 2.1 (MSA) and 2.2 (ASA). We can see from Figures 5.2-5.4 that the numerically reconstructed source strengths, with 1% noise in the data, appear to be quite satisfactory, in view of the severe ill-posedness of the inverse source problem and the complicated profiles of the exact source strengths, especially in Example 5.3, where the source strength oscillates frequently between 8 peaks and valleys. More importantly, we observe from Tables 5.3-5.5 that the convergence of the DD algorithms is nearly optimal with respect to the refinement of the finite element mesh, i.e., the number of iterations grows only mildly with the mesh refinement.
Finally, we present three numerical examples for the reconstruction of the initial temperature in the heat conduction system (4.1) by the two DD algorithms, Algorithms 4.1 and 4.2, proposed in Section 4. In our experiments, we take a(x) = 1, f(x, t) = 0, the terminal time T = 4, and the constant initial guess φ^(0) = 0. We can see from Figures 5.5-5.7 that the numerically reconstructed initial temperatures, with 1% noise in the data, appear to be quite satisfactory, in view of the severe ill-posedness of the inverse initial temperature problem and the complicated profiles of the exact initial temperatures, especially in Example 5.6, where the initial temperature oscillates frequently between 8 peaks and valleys. More importantly, we observe from Tables 5.6-5.8 that the convergence of the DD algorithms is nearly optimal with respect to the refinement of the finite element mesh, i.e., the number of iterations still grows only mildly with the mesh refinement. Compared with the numerical results for the source strengths and fluxes, however, the performance of the reconstructions of the initial temperatures is less satisfactory in terms of the mesh refinement.

Concluding remarks
We have proposed several overlapping domain decomposition algorithms for solving some representative linear inverse problems, including the identification of the fluxes, the source strength and the initial temperature in second order elliptic and parabolic systems. The algorithms are constructed so that only small sub-minimizations need to be solved on subdomains of the original global domain at each iteration. It is important to observe from the many numerical examples that the convergence of the DD algorithms is nearly optimal with respect to the refinement of the finite element mesh, i.e., the number of iterations grows only mildly with the mesh refinement.
Our future work includes the extension of the proposed overlapping domain decomposition algorithms to nonlinear inverse problems, such as the reconstruction of the diffusivity coefficient, the radiative coefficient and the Robin coefficient in elliptic and parabolic systems.

Figure 5.4: Exact and numerically recovered source strengths for Example 5.5

Figure 5.6 shows the exact and numerically recovered initial temperatures, while Table 5.7 gives the number of iterations by Algorithms 4.1 (MSA) and 4.2 (ASA).

Figure 5.7 shows the exact and numerically recovered initial temperatures, while Table 5.8 gives the number of iterations by Algorithms 4.1 (MSA) and 4.2 (ASA).

Figure 5.7: Exact and numerically recovered initial temperatures for Example 5.8

Table 5.1: Iteration numbers k of MSA and ASA

Table 5.2: Iteration numbers k of MSA and ASA

Table 5.5: Number of iterations for Algorithms 2.1 (MSA) and 2.2 (ASA)

Figure 5.4 shows the exact and numerically recovered source strengths, while Table 5.5 gives the number of iterations of Algorithms 2.1 (MSA) and 2.2 (ASA).