Parallel multigrid method for solving inverse problems

In this work we consider a linear operator equation and use the Landweber method as the iterative solver. The multigrid method is then applied as an acceleration technique to obtain an accurate approximate solution quickly. A new parallel algorithm for the multigrid process has been developed. The proposed algorithm is based on a V-cycle combined with the two-grid method; this modification of the V-cycle allows the computations at each level to run in parallel. A coarse-grid operator with a residual right-hand-side vector is provided for each coarse grid. The parallel algorithm accelerates and enhances the iterative solution of ill-posed inverse problems. The computational cost of all stages and processes of the parallel V-cycle algorithm has been estimated. A numerical experiment on solving the IVP (initial value problem) for the heat equation shows that the new parallel algorithm is much more efficient than the sequential method.

• The study of iteration algorithms and numerical experiments reveals a slow rate of convergence.
• The multigrid method, an effective method for solving large systems of linear algebraic equations, is often used to speed up the convergence rate of iterative methods.
• The approximate solution of the linear algebraic equations was found by combining the parallel method with the multigrid method.


Introduction
We define a parallel multigrid algorithm for solving IP (inverse problems). In this article, we consider the initial value problem for the one-dimensional (1-D) heat equation as a model of an ill-posed inverse problem. In the early twentieth century, Hadamard [1] characterized a problem as well-posed when it fulfils the following conditions:
1. Existence: a solution exists.
2. Uniqueness: the solution is unique.
3. Stability: the solution depends continuously on the given data.
If at least one of these conditions is not fulfilled, the problem is considered ill-posed. Violations of conditions 1 and 2 can often be remedied by a small reformulation of the problem. Violations of stability are much harder to remedy because they imply that a small disturbance in the data leads to a large disturbance in the estimated solution [2-5]. Various methods and algorithms for solving inverse problems have been described and used in [6-16]. The success of these methods and algorithms is largely based on understanding and analyzing the mathematical properties of the inverse problems and identifying the specific difficulties in solving them [17-22].
One of the best-known methods of solving a linear operator equation is the iteration method. This method has several shortcomings, including the need for an initial guess, complexity, and slow convergence. The discretization of the integral equation leads to a large, sparse, high-dimensional system of linear equations or linear operator equation. Unfortunately, theoretical convergence studies and numerical experiments show that the convergence rate of these iterative algorithms is slow, leading to an increased iteration cost. The direct iteration method thus suffers from some limiting restrictions. Multigrid methods evolved from efforts to overcome these limitations. Multigrid schemes are most successful when used in conjunction with relaxation or iteration methods, and they quickly lead to accurate approximate solutions of inverse problems.
Multigrid methods [23-25] are frequently used to accelerate the convergence rate of iterative methods [26]. The main idea of this work is to apply parallel computing to the classical multigrid method and to compare the cost of the parallel and sequential computations. These steps are carried out in the following sections. In Section 2 we define the inverse problem for linear operator equations together with the necessary mathematical notions. In Section 3 the classical multigrid method is defined with some classical algorithms, and the new parallel multigrid method is defined together with its computational cost. In the last section numerical examples are presented and the results are discussed.

Problem statement
Let us consider vectors u, f ∈ H and a linear operator A on the Hilbert space H, and consider the linear operator equation

Au = f.    (1)
Here A : H → H is a linear compact operator. We are interested in cases in which A is obtained from the discretization of an ill-posed problem, such as an integral equation of the first kind. The ill-posed problem of finding an approximate solution of Eq. (1) is expressed as follows.
Suppose that for f = f_0 there exists an exact solution u_0 of problem (1) belonging to the set M_r, which is defined as the correctness class of (1), but that the vector f_0 is unknown. Instead, f_δ and an error level (measurement noise) δ are given such that

‖f_δ − f_0‖ ≤ δ.

Using the given information M_r, f_δ, and δ, we seek an estimated solution u_δ of (1) and compute its deviation ‖u_δ − u_0‖ from the exact solution.
The Landweber iterative regularization method [27] is used to find the estimated solution u_δ, defined by the formula

u^{k+1} = (E − αA*A) u^k + αA* f_δ,

where α is the regularization parameter and E is the identity operator.
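As an illustration, the Landweber iteration can be sketched as follows (a minimal NumPy sketch, not the paper's implementation; the matrix, step size, and fixed iteration count are illustrative, and α must satisfy 0 < α < 2/‖A‖² for convergence):

```python
import numpy as np

def landweber(A, f_delta, alpha, n_iter, u0=None):
    """Landweber iteration: u_{k+1} = u_k - alpha * A^T (A u_k - f_delta),
    i.e. u_{k+1} = (E - alpha A^T A) u_k + alpha A^T f_delta."""
    u = np.zeros(A.shape[1]) if u0 is None else u0.copy()
    for _ in range(n_iter):
        u = u - alpha * A.T @ (A @ u - f_delta)   # gradient step on ||Au - f||^2 / 2
    return u
```

In practice the iteration number plays the role of the regularization parameter: for noisy data f_δ it is stopped early, before the iterates start fitting the noise.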
The classical Landweber iteration (CLI) can be written as Algorithm 2, CLI(iteration number). There are two significant measures of u_δ as an approximation of u_0: the error and the residual. An iterative method for the solution of (1) can be applied directly to Eq. (1) or to an equation for the error, called the residual equation [28]. Let u_δ be an approximation of u_0; then the error e = u_0 − u_δ satisfies the equation

Ae = r,    r = f − Au_δ,    (5)

where the residual r measures the amount by which the estimate u_δ fails to satisfy the original problem (1).

Multigrid method
The multigrid method (MG) may be viewed as a class of fast iterative solvers for linear systems of equations with large sparse matrices arising from the discretization of PDEs or integral equations. Iterative solvers have nice numerical properties, such as a low memory demand of O(N). However, these methods have a serious drawback, the smoothing property, which slows down convergence as the discretization resolution increases.
Typically, iterative solvers reduce the high-frequency error components, which correspond to the most rapid variations of the error over the grid or domain. The low-frequency (slowly varying) components are not removed, so the error does not vanish but instead becomes smooth.
Moreover, as the grid size grows and the mesh width decreases, the slowly decaying smooth error becomes even more persistent. Coarsening the grid to accelerate convergence is very effective, but it spoils the discretization accuracy. We therefore define the grid Ω_h with mesh width h and the corresponding linear system of equations (linear operator equation)

A_h u_h = f_h.    (6)

One way of improving the performance of an iterative method, at least at the beginning of the iteration, is to start with a good initial iterate. For a typical problem, a good initial iterate can be found by approximately solving the problem on a coarse grid and applying a few iterations, which is called smoothing.
Let us consider a grid Ω_{2h}. In practice, uniform refinement consists in halving all intervals of Ω_{2h}, leading to the grid Ω_h; see Fig. 1.
Nested iteration is the strategy of using coarse-grid problems to improve the iterative method for solving (6):
• solve A_h u_h = f_h approximately on a very coarse grid by applying a smoother (iteration);
• solve A_h u_h = f_h on Ω_h by the iterative method with the initial iterate interpolated from the coarser grids.
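The nested-iteration strategy above can be sketched as follows (a NumPy sketch; the hierarchy layout `levels[j] = (A_j, P_j)`, with `P_j` prolonging level j+1 to level j and `None` on the coarsest level, and the smoother interface are our own illustrative assumptions):

```python
import numpy as np

def nested_iteration(levels, rhs, smoother, nu=3):
    """Nested iteration: smooth on the coarsest grid first, then prolong
    each result as the initial guess on the next finer grid.

    levels[j] = (A_j, P_j) with level 0 finest; rhs[j] is the
    right-hand side on level j."""
    u = np.zeros(levels[-1][0].shape[0])       # zero guess on the coarsest grid
    for j in range(len(levels) - 1, -1, -1):
        A_j = levels[j][0]
        u = smoother(A_j, u, rhs[j], nu)       # a few smoothing sweeps on level j
        if j > 0:
            u = levels[j - 1][1] @ u           # prolong to the next finer grid
    return u
```

The returned vector is not yet the solution; it serves as a good initial iterate for further smoothing or a V-cycle on the finest grid.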
Coarse-grid correction (the two-level method) is the second strategy; it uses the residual equation (5) (see Fig. 2). A few smoothing steps give an approximation u_δ^h of the solution which still has to be corrected appropriately. Compute the residual r_h = f_h − A_h u_h. The transfer from a coarse grid to a fine grid is called prolongation or interpolation. In many situations the simplest approach, linear interpolation, can be used. Let Ω_{2h} be divided into N/2 intervals and Ω_h into N intervals, so that node j on Ω_{2h} corresponds to node 2j on Ω_h, 0 ≤ j ≤ N/2. Let u_{2h} be given on Ω_{2h}. The linear interpolation is then defined as follows (see Fig. 3): each even node of Ω_h takes the value of the corresponding node of Ω_{2h}, and each odd node of Ω_h takes the average of the values of its two neighbouring coarse nodes.
The interpolation can be represented as an operator I_{2h}^h with a matrix representation in the standard bases of R^{N/2−1} and R^{N−1}. Suppose now that the (unknown) error is a smooth vector on the grid Ω_h, and that the coarse-grid estimate computed on Ω_{2h} is exact on the coarse grid. The interpolant of this coarse-grid estimate is a smooth vector on the fine grid, and for this reason a smooth error on the fine grid can be well approximated.
If, on the other hand, the error on the fine grid is oscillatory, then, since every interpolant of a coarse-grid estimate is a smooth function on the fine grid, we cannot expect the error on the fine grid to be corrected well. See Fig. 4.
Prolongation therefore gives good results if the error on the fine grid is smooth. Hence, prolongation is an appropriate complement to the smoother, which works most efficiently when the error is oscillatory.
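In code, the linear interpolation described above can be sketched as follows (a NumPy sketch over interior nodes with zero boundary values; the vector layout is an illustrative assumption):

```python
import numpy as np

def prolong(u_coarse):
    """Linear interpolation from grid 2h to grid h (1-D, zero boundaries).

    u_coarse holds the N/2 - 1 interior coarse values; the result holds
    the N - 1 interior fine values. Even fine nodes copy the matching
    coarse value; odd fine nodes average their two coarse neighbours."""
    n_c = len(u_coarse)
    u_fine = np.zeros(2 * n_c + 1)
    u_fine[1::2] = u_coarse                            # fine node 2j <- coarse node j
    padded = np.concatenate(([0.0], u_coarse, [0.0]))  # zero boundary values
    u_fine[0::2] = 0.5 * (padded[:-1] + padded[1:])    # odd fine nodes: averages
    return u_fine
```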
For the two-level method, the residual vector must be transferred from Ω_h to Ω_{2h} before the coarse-grid equation can be solved. This transfer is called restriction. The simplest restriction is injection, defined as follows (see Fig. 5): for each node on the coarse grid, we take the value of the grid function at the corresponding node on the fine grid.
Injection is an efficient method, but if we ignore every other node on Ω_h, the values of the residual (and of the error) at these nodes have no impact on the coarse-grid system. Consequently, these errors will generally not be corrected.
For finite difference schemes there is a weighted restriction, which uses all the nodes on the fine grid and is defined by an appropriate average:

(I_h^{2h} r_h)_j = (1/4) r_h(2j−1) + (1/2) r_h(2j) + (1/4) r_h(2j+1),    1 ≤ j ≤ N/2 − 1.

If the spaces R^{N−1} and R^{N/2−1} are equipped with the standard bases, the matrix representation of the weighted restriction reveals the important connection between weighted restriction and prolongation: I_h^{2h} = (1/2)(I_{2h}^h)^T.

The operator on a coarse grid is defined by the Galerkin projection. The derivation of an appropriate coarse-grid operator, the Galerkin operator, starts from the residual equation

A_h e_h = r_h.    (11)

Let us assume that e_h lies in the range of the prolongation operator I_{2h}^h. Then there is a vector e_{2h} defined on the next coarser grid such that

e_h = I_{2h}^h e_{2h}.    (12)

Substituting (12) into (11), we obtain

A_h I_{2h}^h e_{2h} = r_h.    (13)

Applying the restriction operator to both sides of (13) yields

I_h^{2h} A_h I_{2h}^h e_{2h} = I_h^{2h} r_h,    (14)

leading to the definition

A_{2h} e_{2h} = r_{2h},    (15)

where

A_{2h} = I_h^{2h} A_h I_{2h}^h,    r_{2h} = I_h^{2h} r_h.    (16)

The derivation of (16) depended on the assumption that the error e_h is in the range of the prolongation. In general this property does not hold. If it does, then an exact solution on the coarse grid together with the correction step provides a solution of A_h u_h = f_h. In any case, this derivation motivates defining A_{2h} as in (16).
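The weighted (full-weighting) restriction and the Galerkin coarse-grid operator of (16) can be sketched as follows (a NumPy sketch over interior nodes; representing the restriction as half the transpose of the prolongation matrix P follows the relation between the two operators):

```python
import numpy as np

def restrict_weighted(r_fine):
    """Full-weighting restriction from grid h to grid 2h (interior nodes):
    coarse node j gets r(2j-1)/4 + r(2j)/2 + r(2j+1)/4."""
    return 0.25 * r_fine[0:-2:2] + 0.5 * r_fine[1:-1:2] + 0.25 * r_fine[2::2]

def galerkin_coarse_operator(A_h, P):
    """Galerkin coarse-grid operator A_2h = R A_h P, with the restriction
    taken as R = (1/2) P^T, the matrix form of full weighting."""
    R = 0.5 * P.T
    return R @ A_h @ P
```

For the 1-D Laplacian with linear interpolation, this construction reproduces the standard coarse-grid Laplacian exactly, which is one reason the Galerkin operator is a natural choice.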
To simplify the notation, the right-hand-side vector of the residual equation will be denoted f_{2h} instead of r_{2h}. The solution on the finest grid will be denoted u_h and the current iterate v_h. Instead of denoting the solution on the coarse grid by e_{2h}, it will be denoted v_{2h}.
The multigrid V-cycle is obtained by embedding the two-level method into itself. Let us assume that there are l + 1 grids, l ≥ 0, where the finest grid has mesh width h and the width increases by a factor of 2 for each coarser grid. Let L = 2^l.
Algorithm V-cycle
• Pre-smoothing: apply the smoother ν1 times to A_h u_h = f_h with the initial guess v_h; the result is again denoted v_h.
• Compute the residual, restrict it to the coarser grid, solve the coarse-grid problem recursively, prolong the correction, and apply the smoother ν2 times (post-smoothing).
The name V-cycle comes from the movement through the hierarchy of grids (see Fig. 6).
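A recursive V-cycle along these lines can be sketched as follows (a NumPy sketch, not the paper's exact implementation; damped Jacobi as the smoother and the hierarchy layout `levels[lvl] = (A, P)`, with `P` prolonging level lvl+1 to lvl and `None` on the coarsest level, are illustrative choices):

```python
import numpy as np

def smooth(A, u, f, nu, omega=2.0 / 3.0):
    """nu sweeps of damped Jacobi (an illustrative smoother choice)."""
    D = np.diag(A)
    for _ in range(nu):
        u = u + omega * (f - A @ u) / D
    return u

def v_cycle(levels, f, u, lvl=0, nu1=2, nu2=2):
    """One V-cycle on a hierarchy levels[lvl] = (A, P), level 0 finest."""
    A, P = levels[lvl]
    if lvl == len(levels) - 1:
        return np.linalg.solve(A, f)            # direct solve on the coarsest grid
    u = smooth(A, u, f, nu1)                    # pre-smoothing
    r_c = 0.5 * P.T @ (f - A @ u)               # restrict the residual (full weighting)
    e_c = v_cycle(levels, r_c, np.zeros_like(r_c), lvl + 1, nu1, nu2)
    u = u + P @ e_c                             # coarse-grid correction
    return smooth(A, u, f, nu2)                 # post-smoothing
```

Each call performs one descent and ascent through the grid hierarchy; repeating the cycle reduces the residual by a roughly constant factor per cycle.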
The V-cycle method is based on applying the two-level algorithm recursively to solve the coarse-grid problem. The V-cycle algorithm does not prescribe how to choose the initial guess in the pre-smoothing step. There are also other cycling strategies, such as the W-cycle (see Fig. 7).
If one multigrid V-cycle is used for smoothing on each grid (except the finest), a full multigrid (FMG) V-cycle is performed (see Fig. 8).
The above methods run sequentially, which requires a great deal of computation time, especially when the fine grid is at the highest resolution. For this reason, we define parallel computing for the V-cycle method in the next section.

Parallel multigrid method
The two-grid procedure defined above can be extended to a sequence of grids through recursion, which leads to the modified FMG with V-cycle shown in Fig. 6.
We propose a parallel method which applies the two-grid method during pre-smoothing in the V-cycle. In this method, the residual for each coarse grid can be computed independently prior to smoothing, which means that all pre-smoothing computations can run in parallel, as shown in Fig. 9.
The parallel multigrid V-cycle is obtained by embedding the two-level method for each coarse grid prior to smoothing.
Algorithm PV-cycle
• Pre-smoothing:
• Apply the two-grid method to A_h u_h = f_h; the results are denoted v_h and v_{2h}.
• Apply the two-grid method to A_{4h} u_{4h} = f_{4h}; the results are denoted v_{4h} and v_{8h}.
• Continue analogously on the remaining coarse grids.
The parallel V-cycle becomes a parallel pass through the hierarchy of grids; see Fig. 9. To compute the coarse-grid operator independently for each two-grid pair, we use the Galerkin formula of (16) at each level:

A_{2^{j+1}h} = I_{2^j h}^{2^{j+1}h} A_{2^j h} I_{2^{j+1}h}^{2^j h}.
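The independent construction of the coarse-grid operators and right-hand sides can be sketched with composite transfer operators, which make each level's computation depend only on fine-grid data and hence parallelizable (a Python sketch; `ThreadPoolExecutor`, the data layout, and the 0.5^(j+1) restriction scaling matching full weighting are our illustrative choices):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def coarse_problems_parallel(A_h, f_h, prolongations):
    """Build A_{2^(j+1) h} and f_{2^(j+1) h} for every coarse level independently.

    prolongations[j] maps level j+1 to level j; chaining them gives the
    composite prolongation P_j from level j+1 to the finest grid, so each
    Galerkin operator R_j A_h P_j (with R_j = 0.5^(j+1) P_j^T) uses only
    fine-grid data and can be computed in a separate task."""
    comps, C = [], None
    for P in prolongations:                     # composite transfer operators
        C = P if C is None else C @ P
        comps.append(C)

    def build(j):
        P = comps[j]
        R = 0.5 ** (j + 1) * P.T
        return R @ A_h @ P, R @ f_h             # coarse operator and right-hand side

    with ThreadPoolExecutor() as pool:          # one task per coarse level
        return list(pool.map(build, range(len(comps))))
```

Because composing the level-by-level Galerkin products gives the same operators as this one-shot construction, the parallel version produces the identical coarse-grid hierarchy.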
The same construction is used to compute the right-hand-side vector f at each level. The multigrid algorithms above were used as general-purpose solvers of linear operator equations. In order to estimate the computation time of the parallel V-cycle, we divide it into two parts: the pre-smoothing part, which is implemented in parallel, and the post-smoothing part, which is implemented sequentially (see Fig. 10). The running time of each process P_r(fine grid, coarse grid) in the pre-smoothing phase T_pre-smooth is determined by the slowest of the parallel two-grid processes. The time for each two-grid process is the sum of the times of its stages: smoothing S_h on the fine grid, smoothing S_2h on the coarse grid, restriction, and prolongation.
Here T(S_{2^j h}) denotes the time for smoothing on grid 2^j h, and the transfer terms denote the times for the restriction and prolongation operations, respectively. Regarding the post-smoothing computation time T_post-smooth, the time necessary to complete the prolongation together with the last smoothing step can be calculated analogously.

Numerical experiment and conclusion
The IVP (initial value problem) for the heat equation was studied in [2,4]. The mathematical form of the IVP is a linear PDE system in which u(0, t) and u(l, t) are boundary conditions and u_0(x) is the initial condition (IC) that needs to be found. In this work, we apply other numerical algorithms to obtain a more accurate solution and a fast implementation for a large-scale problem. The integral form of this equation is

∫_0^l K(x, y) u_0(y) dy = f(x),

where the kernel K(x, y) is an infinite series. Since we cannot evaluate the infinite sum, we truncate the series at 10 terms. To obtain an estimated solution u(x), we reformulate the problem as a linear operator equation Au = f, and after applying the discretization algorithm described in [5] we obtain a linear system. The bounded and injective operator A is ill-conditioned, so any numerical attempt to solve (28) directly will fail.
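The discretized operator can be sketched as follows (a NumPy sketch assuming homogeneous Dirichlet boundary conditions, so the kernel is the separable sine series; the interval length l, final time T, and midpoint quadrature are illustrative choices, while the truncation at 10 terms matches the text):

```python
import numpy as np

def heat_kernel_matrix(n, l=1.0, T=0.1, n_terms=10):
    """Midpoint-rule discretization of the first-kind integral operator
    f(x) = int_0^l K(x, y) u0(y) dy, with the heat kernel truncated at
    n_terms: K(x,y) = (2/l) sum_k exp(-(k pi/l)^2 T) sin(k pi x/l) sin(k pi y/l)."""
    h = l / n
    x = (np.arange(n) + 0.5) * h                 # midpoint quadrature nodes
    K = np.zeros((n, n))
    for k in range(1, n_terms + 1):
        s = np.sin(k * np.pi * x / l)
        K += (2.0 / l) * np.exp(-(k * np.pi / l) ** 2 * T) * np.outer(s, s)
    return K * h                                 # quadrature weight
```

Applied to u_0(x) = sin(πx), this operator reproduces the heat decay exp(−π²T) sin(πx); its rapidly decaying singular values exp(−(kπ/l)²T) are what make the inversion severely ill-conditioned.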
In this numerical example we use the Landweber method, and the same iterative method as smoother in the V-cycle multigrid. We compared the three algorithms on large domains Ω_h; the results are shown in Table 1 for a domain of size h = 1024 and in Table 2 for a domain of size h = 2048.
The V-cycle algorithm has been successfully applied to obtain a good approximate solution with low computation time on large domains Ω_h; see Fig. 12.
The Landweber iteration method has been successfully applied to obtain an estimated solution. The V-cycle multigrid method has been applied to reduce the computational cost of the iterative method, and the parallel technique has been successfully applied to the V-cycle multigrid method to obtain the approximate solution of the IVP for the heat equation.

Conflicts of Interest
The authors declare no conflict of interest.

Data Availability
No data was used for the research described in the article.