Performance of the Restarted Homotopy Perturbation Method and Split Bregman Method for Multiplicative Noise Removal

In this paper, we first propose restarted homotopy perturbation methods (RHPM) for multiplicative noise removal of the RLO and AA2 models. The main difficulty in applying the RHPM to the nonlinear denoising problem is settled by using binomial series techniques. We next propose split Bregman methods for multiplicative noise removal of the RLO and AA2 models. The difficulty in applying the split Bregman method to the nonlinear denoising problem can be handled by transforming ill-conditioned linear systems into well-conditioned linear systems using splitting techniques for singular matrices. Lastly, numerical experiments for several test problems are provided to demonstrate the efficiency and reliability of the RHPM and split Bregman methods.


Introduction
Image denoising, one of the fundamental problems in image processing, is to recover an original image from a given noisy image. Let Ω be an open rectangular domain in R², let f : Ω → R be an observed noisy image to be denoised, and let u be the original unknown image to restore. The mathematical models of noisy images can be classified into additive or multiplicative ones according to the relation between the noise and the clean image, i.e., f = u + ñ or f = u ñ, where ñ denotes the unknown additive or multiplicative noise function, respectively. In this paper, we focus on multiplicative noise removal for observed images corrupted by multiplicative Gaussian white noise or Gamma noise with mean equal to one.
There have been many investigations of the additive model (1) using the following general nonlinear variational model [1, 2]: where the first term of the right-hand side is a regularizer in general form that ensures the smoothness of the desired image while preserving edges, the second term is a data fitting term which reflects the fidelity between the original image and the observed noisy image, and α > 0 is a regularization parameter.
Many approaches have been proposed to remove additive noise, such as variational approaches [3, 4], stochastic approaches [5, 6], wavelet approaches [7, 8], framelet approaches [9, 10], and RHPM approaches [11], to name a few. It is well known that the model introduced by Rudin, Osher, and Fatemi [12] (called the ROF model) is a popular model for image denoising due to its edge-preserving property. If φ(|∇u|) = |∇u| in (2), then (2) reduces to the ROF model using Total Variation (TV). Since the images restored by the ROF model lie in the space of functions of bounded variation, the ROF model preserves sharp edges and object boundaries, and it is especially effective for restoring piecewise constant images. However, the ROF model does not preserve details and textures, since it yields staircase effects.
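For concreteness, the discrete isotropic TV that underlies the ROF model can be sketched in a few lines. This is a generic illustration in Python/NumPy, not the authors' implementation; the use of forward differences with zero differences at the last row/column (a Neumann-type discretization) is an assumption.

```python
import numpy as np

def total_variation(u):
    """Discrete (isotropic) TV of a 2-D image using forward differences,
    with zero differences at the last row/column (Neumann-type boundary).
    A generic sketch, not the authors' exact discretization."""
    dx = np.zeros_like(u, dtype=float)
    dy = np.zeros_like(u, dtype=float)
    dx[:-1, :] = u[1:, :] - u[:-1, :]   # forward difference along rows
    dy[:, :-1] = u[:, 1:] - u[:, :-1]   # forward difference along columns
    return float(np.sum(np.sqrt(dx**2 + dy**2)))
```

A constant image has zero TV, while a piecewise constant image has TV proportional to the total jump across its edges, which is why TV regularization favors piecewise constant reconstructions.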

Mathematical Problems in Engineering
Multiplicative noise models occur in the study of several coherent imaging systems, such as synthetic aperture radar (SAR), synthetic aperture sonar (SAS), ultrasound or laser imaging, and magnetic resonance imaging (MRI). Research on variational models for additive noise removal has been extensive, but variational models for multiplicative noise removal have received much less attention. The difference between variational models for additive and multiplicative noise removal is that the data fitting term depends on the noise distribution, such as the Gamma, Poisson, Rayleigh, and Gauss distributions arising from different means of image acquisition. SAR images follow Gamma distributions [13], medical ultrasound or resonance images fit Rayleigh distributions [14], and astronomical microscopy images and medical SPECT/PET images fit Poisson distributions [15, 16]. If the multiplicative noise is not too strong, the abovementioned noise can be approximated by a Gaussian distribution [17].
For multiplicative Gaussian white noise with mean equal to one, Rudin, Lions, and Osher [17] proposed the following variational diffusion model (called the RLO model), where the two coefficients are weighting parameters. For multiplicative Gamma noise with mean equal to one, Jin and Yang [18] applied the exponential transformation introduced by Huang et al. [19] to the fitting term of the AA model [13] and proposed the following variational model (called the AA2 model in this paper), with a single weighting parameter. The algorithm design for the classic TV model has attracted a lot of interest since it was invented, aimed at improving computational efficiency and the quality of restored images; examples include the artificial time marching method [12], the fixed-point iterative method [20], the primal-dual method [21], the dual method [22], the split method [23], the Bregman iterative method [24], the split Bregman method [25], and the unbiased Box-Cox transformation method [26]. Among them, the split Bregman method combines the advantages of the split method (easy implementation) and the Bregman iterative method (good quality of restored images). The variational models of multiplicative noise removal are more complex than the corresponding additive ones, and previous studies focused mainly on modeling according to different noise distributions; some researchers have paid attention to algorithm design, such as [12, 20, 27, 28]. The starting point of our investigation is the RLO model (4) and the AA2 model (5) with the TV regularization term. The purpose of this paper is to propose restarted homotopy perturbation methods (RHPM) and split Bregman methods for the multiplicative noise removal models (4) and (5).
The paper is organized as follows. In Section 2, we review the TM (time marching) method for multiplicative noise removal models. In Section 3, we briefly review the restarted homotopy perturbation method (RHPM). In Section 4, we propose RHPM algorithms for the multiplicative noise removal models. In Section 5, we propose split Bregman algorithms for the multiplicative noise removal models. In Section 6, we provide numerical experiments for several test problems in order to evaluate the performance of the RHPM and split Bregman methods. Lastly, we provide concluding remarks.

Review of the TM Method for Multiplicative Noise Removal
The Euler-Lagrange equations corresponding to the RLO model (4) and the AA2 model (5) lead to the following nonlinear elliptic PDEs with homogeneous Neumann boundary conditions, respectively, where →n is the unit outward normal vector to the boundary ∂Ω.
Without loss of generality, we can assume that the grid spacings satisfy h_x = h_y = 1. Then the nonlinear PDE (10) can be approximated by the following finite difference formula for 1 ≤ i ≤ m, 1 ≤ j ≤ n, with the forward difference Δ⁺ₓ u_{i,j} = u_{i+1,j} − u_{i,j}. In a similar way as was done for the AA2 model, we can obtain the corresponding formula for the RLO model, in which the coefficients and the boundary conditions are defined the same as those in the AA2 model. Hence, we obtain TM Algorithms 1 and 2 for the AA2 and RLO models.
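A single explicit time-marching step of the kind used by the TM algorithms can be sketched as follows. This is a hedged illustration: the fidelity term below is a simple L2 term u − f standing in for the RLO/AA2 fidelity terms, and the names dt, lam, and eps are ours.

```python
import numpy as np

def tm_step(u, f, dt=0.35, lam=1.0, eps=1e-6):
    """One explicit time-marching step
        u <- u + dt * ( div( grad(u)/|grad(u)|_eps ) - lam * (u - f) ).
    Generic sketch: the simple L2 fidelity u - f stands in for the
    RLO/AA2 fidelity terms of models (4)-(5)."""
    ux = np.zeros_like(u)
    uy = np.zeros_like(u)
    ux[:-1, :] = u[1:, :] - u[:-1, :]        # forward differences
    uy[:, :-1] = u[:, 1:] - u[:, :-1]
    mag = np.sqrt(ux**2 + uy**2 + eps)       # regularized gradient magnitude
    px, py = ux / mag, uy / mag
    div = np.zeros_like(u)                   # backward-difference divergence
    div[:-1, :] += px[:-1, :]
    div[1:, :] -= px[:-1, :]
    div[:, :-1] += py[:, :-1]
    div[:, 1:] -= py[:, :-1]
    return u + dt * (div - lam * (u - f))
```

When u equals the data f and is constant, the step leaves u unchanged, which is a quick sanity check on the discretization.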
In Algorithms 1 and 2, maxit denotes the maximum number of iterations, f denotes the noisy image, u^k denotes the restored image at the kth iteration, and MAE(k) denotes the mean absolute error at the kth iteration, where ‖·‖₁ stands for the l₁-norm.

Review of the Restarted Homotopy Perturbation Method (RHPM)
We briefly describe the homotopy perturbation method (HPM) for solving a nonlinear partial differential equation A(u) − f(r) = 0, r ∈ Ω, with the boundary conditions B(u, ∂u/∂n) = 0, r ∈ Γ, where A is a general differential operator, B is a boundary operator, f(r) is a known analytic function, and Γ is the boundary of the domain Ω. The operator A can be divided into two parts L and N, where L is linear and N is nonlinear. Therefore, (16) can be written as L(u) + N(u) − f(r) = 0. By using the homotopy technique, one can construct a homotopy v(r, p) : Ω × [0, 1] → R which satisfies H(v, p) = (1 − p)[L(v) − L(u₀)] + p[A(v) − f(r)] = 0, or, equivalently, L(v) − L(u₀) + p L(u₀) + p[N(v) − f(r)] = 0, where p ∈ [0, 1] is an embedding parameter and u₀ is the initial approximation of (16) which satisfies the boundary conditions. Clearly, we have H(v, 0) = L(v) − L(u₀) = 0 and H(v, 1) = A(v) − f(r) = 0. The changing process of p from zero to unity is just that of v(r, p) changing from u₀(r) to u(r). This is called deformation, and L(v) − L(u₀) and A(v) − f(r) are called homotopic in topology. If the embedding parameter 0 ≤ p ≤ 1 is considered as a "small parameter" in the sense of the classical perturbation technique, then the solution of (20) can be expressed as a power series in p, that is, v = v₀ + p v₁ + p² v₂ + ⋯. Setting p → 1, the solution of (16) can be expressed as the sum of an infinite series, u = v₀ + v₁ + v₂ + ⋯. Generally speaking, the computation of vᵢ in the HPM becomes more complicated as i increases. To circumvent this problem, the restarted homotopy perturbation method (RHPM) proposed by Han and Yun [11] repeats the HPM process by computing only the first few terms instead of computing infinitely many terms vᵢ. This is a big advantage of the RHPM as compared with the HPM. The general procedure of the RHPM for solving a nonlinear equation is described below (see [11] for more details).
Let u₀⁰ be an initial approximation of a nonlinear equation. First, we compute the first (m+1)-term approximation u₀¹ = v₀⁰ + v₁⁰ + ⋯ + v_m⁰ from the expansions (20) and (22). Following the same computational process with u₀¹ as an initial approximation, a new approximation u₀² is computed. By repeating the above process, a new approximation u₀^{ℓ+1} can be computed from the initial approximation u₀^ℓ, which was obtained at the ℓth step. Notice that v₀^ℓ, v₁^ℓ, . . . , v_m^ℓ are the coefficients of p⁰, p¹, . . . , pᵐ, respectively. Then the exact solution u can be approximated by u₀^{ℓ+1}; that is, under some appropriate conditions, u = lim_{ℓ→∞} u₀^ℓ. The RHPM which computes only the m vectors v₁^ℓ, v₂^ℓ, . . . , v_m^ℓ at every step is called the RHPM(m) method.
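The restart mechanism can be made concrete on a scalar toy problem chosen purely for illustration. For the equation 4u + u² = 3, with L(u) = 4u, N(u) = u², and exact positive root √7 − 2, matching the coefficients of p⁰, p¹, p² in the homotopy L(v) − L(u₀) + p L(u₀) + p(N(v) − f) = 0 gives v₀ = u₀, v₁ = (3 − 4u₀ − u₀²)/4, and v₂ = −v₀v₁/2, and each restart replaces u₀ by v₀ + v₁ + v₂:

```python
import math

def rhpm2_restart(u0, n_restarts=10):
    """RHPM(2) for the toy equation 4u + u**2 = 3 (L(u)=4u, N(u)=u**2).
    Each restart computes the coefficients v0, v1, v2 of p^0, p^1, p^2
    and uses v0 + v1 + v2 as the next initial approximation."""
    for _ in range(n_restarts):
        v0 = u0
        v1 = (3.0 - 4.0 * u0 - u0**2) / 4.0   # coefficient of p^1
        v2 = -v0 * v1 / 2.0                   # coefficient of p^2
        u0 = v0 + v1 + v2                     # restart with the new sum
    return u0

# The exact positive root of u^2 + 4u - 3 = 0 is sqrt(7) - 2.
```

Starting from u₀ = 0.5, the error shrinks by roughly an order of magnitude per restart, illustrating why a few restarts with only three terms can replace the full infinite series of the plain HPM.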

RHPM Algorithms for Multiplicative Noise Removal
In this section, we describe an application of the RHPM to the multiplicative denoising problems of the RLO and AA2 models. Before doing this, we describe the binomial series which is used in the application of the RHPM. The difficulty in applying the RHPM to the TV-based image denoising problem comes from the negative fractional powers appearing in (36) and (62). In order to handle this difficulty, the following binomial series is used: (1 + x)^α = Σ_{k≥0} C(α, k) xᵏ for |x| < 1. If α = −1/2 and x = s − 1 in (27), then s^{−1/2} for 0 < s < 2 can be approximated by the first three terms of the series, as in (29). If s > 0 is large, then, by introducing a small parameter δ > 0 and using (29), one obtains (30). Similarly, if α = −3/2 and x = s − 1 in (27), then, by introducing a small parameter δ > 0, one obtains (31). Note that the small parameter δ > 0 in (30) and (31) is used to guarantee convergence of the binomial series when s > 0 is large. We first describe how to apply the RHPM to the time-dependent nonlinear parabolic PDE (32) corresponding to the AA2 model (7). For simplicity of exposition, we describe the computational process for RHPM(2) (i.e., m = 2). We use three terms of the binomial series (27) for the numerical computation of (36) and (62), so division by zero does not occur.
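The three-term binomial approximation of s^{−1/2}, including the small scaling parameter used to keep the series convergent for large s, can be sketched as follows (the function name and the symbol delta are illustrative):

```python
import math

def inv_sqrt_binomial(s, delta):
    """Three-term binomial approximation of s**(-1/2).
    Writes s^(-1/2) = sqrt(delta) * (delta*s)^(-1/2) and expands
    (1 + x)^(-1/2) with x = delta*s - 1, valid for |x| < 1, i.e. for
    0 < delta*s < 2.  delta plays the role of the small scaling
    parameter that keeps the series convergent when s is large."""
    x = delta * s - 1.0
    assert abs(x) < 1.0, "delta*s must lie in (0, 2) for convergence"
    # Binomial coefficients C(-1/2, k) for k = 0, 1, 2: 1, -1/2, 3/8.
    series = 1.0 - 0.5 * x + 0.375 * x**2
    return math.sqrt(delta) * series
```

Choosing delta so that delta*s is close to 1 makes the truncated series accurate; for example, s = 4 with delta = 0.25 reproduces 4^{−1/2} = 0.5 exactly, since the expansion point x = 0 is hit.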
Let the operators L, N, and B be defined as in (33) and (34). From (20), one obtains (35). Using the Taylor series expansion of e^{−v} up to second order and the power series expansion of v of form (22), e^{−v} can be approximated as in (38). Since the solution v of (35) is of form (22), simple computation yields (39), from which one easily obtains (40). Substituting s = (v_x)² + (v_y)² into (30) and using (39), one obtains (41); hence, the first term of the right-hand side in (36) can be written as (42). Substituting s = (v_x)² + (v_y)² into (31) and using (39), one obtains (43); hence, the second term of the right-hand side in (36) can be written as (44). From (36), (37), (42), and (44), (35) can be expressed as (45). Comparing the coefficients of p⁰, p¹, and p² in (44), given an initial approximation, one obtains (46)-(48). Notice that the initial approximation u₀ is set to f, which is the noisy image. Applying the inverse operator of L to (47), one obtains (49). From (49), the partial derivatives of v₁ are given by (51). Using (51), the terms of orders p², p⁴, and p⁶ can be expressed as in (52). Substituting (49) and (52) into (48) and applying the inverse operator of L to (48), one obtains (54). Now update u₀, whose new value is set to v₀ + v₁ + v₂. By repeating the aforementioned process with the new updated u₀, we obtain the RHPM(2) method for the AA2 model. For the numerical implementation of the RHPM(2) method, let us assume that the domain Ω has been split into m × n cells, where the grid points are located at (x_i = i h_x, y_j = j h_y), 1 ≤ i ≤ m, 1 ≤ j ≤ n. Let t_k = kΔt, where Δt and k = 1, 2, . . . refer to the time step and the iteration number, respectively. We denote the values of u(x, y; t) at the grid points (x_i, y_j; t_k) by u^{(k)}_{i,j}. Without loss of generality, we can assume that h_x = h_y = 1. The boundary conditions are defined the same as those in the TM method.
Assume that (u₀⁰)_{i,j} = f(x_i, y_j) = f_{i,j} and that (u₀^k)_{i,j} has been computed for all i and j. Then (v₀)_{i,j} = (u₀^k)_{i,j}, the discretization of (49) is given by (57), and the discretization of (54) is given by (59). We now compute u₀^{k+1}. Repeating the above process with u₀^{k+1} as an initial approximation, we obtain the RHPM(2) method, called Algorithm 3, for multiplicative noise removal of the AA2 model. We next describe how to apply RHPM(2) to the time-dependent nonlinear parabolic PDE corresponding to the RLO model, where u(x, y, 0) is given. Let the operators L, N, and B be defined as in (62). In order to handle the difficulty in applying the RHPM to the RLO model, we use three terms of the binomial series (27) for v^{−2} and v^{−3} in (62). That is, for a small parameter δ > 0, v^{−2} and v^{−3} can be approximated by (63). Using (62) and (63), the RHPM(2) method for the RLO model can be derived in a similar way as was done for the AA2 model. Comparing the coefficients of p⁰, p¹, and p² in (64), given an initial approximation, one obtains (66)-(68). Applying the inverse operator of L to (67), one obtains (69), where (v₁₀) is the constant term and (v₁₁) is the coefficient of t. Substituting (69) and (52) into (68) and applying the inverse operator of L to (68), one obtains (70), where (v₂₁) and (v₂₂) are the coefficients of t and t², respectively.
From the RHPM(2) Algorithms 3 and 4, we can easily obtain the RHPM(1) Algorithms 5 and 6, whose computational costs per iteration are much cheaper than those of Algorithms 3 and 4. In Algorithms 3-6, maxit, f, and u₀^ℓ denote the maximum number of iterations, the noisy image, and the restored image at the ℓth iteration, respectively. Notice that all images are assumed to have an intensity range of [0, 1] to guarantee numerical stability of the RHPM methods.

Split Bregman Algorithms for Multiplicative Noise Removal
From (4) and (5), the general variational models based on TV regularization for multiplicative noise removal can be written in the form (74). To derive the alternating split Bregman algorithm for the general multiplicative noise removal problem (74), we first introduce an auxiliary vector w = (w_x, w_y)ᵀ so that u and w minimize the unconstrained problem (75), where |w| = √((w_x)² + (w_y)²) and θ > 0 is a penalty parameter. The unconstrained minimization problem (75) can then be solved by the split Bregman algorithm (76) using an auxiliary vector b = (b_x, b_y)ᵀ, where b⁰ = w⁰ = 0 and u⁰ is set to f (the noisy image).
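For reference, the w-subproblem of this alternating scheme has the well-known closed-form generalized shrinkage solution of Goldstein and Osher [25]; the sketch below (with illustrative names gx, gy for the components of ∇u + b) applies it pointwise:

```python
import numpy as np

def shrink_w(gx, gy, theta):
    """Closed-form w-update of the split Bregman iteration:
        w = max(|g| - 1/theta, 0) * g / |g|,  applied pointwise,
    where g = grad(u) + b.  This generalized shrinkage formula is the
    standard Goldstein-Osher solution of the w-subproblem; the variable
    names (gx, gy, theta) are illustrative."""
    mag = np.sqrt(gx**2 + gy**2)
    # Guard against division by zero where |g| = 0 (shrinkage gives 0 there).
    scale = np.maximum(mag - 1.0 / theta, 0.0) / np.maximum(mag, 1e-12)
    return gx * scale, gy * scale
```

The shrinkage is what makes the splitting attractive: the nondifferentiable TV term is handled exactly and cheaply, leaving only a smooth problem in u.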
Minimizing (76) alternately with respect to u and w, one obtains the alternating split Bregman algorithm introduced in [25]. The Euler-Lagrange equations corresponding to (78) and (79) are given in (80) and (81), where w = (w_x, w_y)ᵀ and b = (b_x, b_y)ᵀ. First we consider how to solve (80) for u^{k+1}. For the case of the RLO model (4), the fidelity term contains the quotient f/u, and hence (80) involves the terms f/u² and f²/u³. To avoid division by zero or near-zero in these terms, u^{−2} and u^{−3} are approximated using four terms of the binomial series (27), where u needs to be scaled so that its value lies between 0 and 2 to guarantee convergence of the binomial series. Substituting this approximation into (80), one obtains the approximate PDE (86) for the RLO model. For the case of the AA2 model (5), the fidelity term is of the form (u + f e^{−u}). Using four terms of the Taylor series for e^{−u}, the derivative of the fidelity term can be approximated, and hence the corresponding term in (80) can be approximated as well. Substituting this approximation into (80), we can obtain the approximate PDE (89) for the AA2 model. Now we consider the numerical approach for solving the PDE problems of the RLO model (86) and the AA2 model (89). The discrete gradient operator ∇ can be represented by a matrix operator of size 2mn × mn built from Kronecker products, where ⊗ denotes the Kronecker product, I denotes the identity matrix, and K denotes a difference matrix of appropriate order. Using this matrix operator, it is easy to show how ∇ · ∇u and ∇ · w in (86) and (89) can be approximated. Hence, we obtain the finite difference approximate equations (94) and (95) for the RLO model (86) and the AA2 model (89), where u², (u^k)², and (u^k)³ denote elementwise exponentiations, u^k denotes the restored image at the kth iteration, all multiplications of two m × n arrays denote elementwise multiplications, and diag(u) is an mn × mn diagonal matrix whose diagonal elements are the same as vec(u).
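The Kronecker-product construction of the discrete gradient matrix can be sketched as follows; this is an illustrative Python/SciPy version under the assumption that K is the forward-difference matrix with a zero last row (homogeneous Neumann boundary):

```python
import numpy as np
from scipy import sparse

def gradient_operator(m, n):
    """Discrete gradient as a sparse matrix of size 2*m*n x m*n built
    from Kronecker products, D = [I_n (x) K_m ; K_n (x) I_m], where K
    is the forward-difference matrix whose last row is zero (Neumann
    boundary).  A sketch of the construction described above."""
    def K(p):
        B = sparse.diags([-np.ones(p), np.ones(p - 1)], [0, 1], format="lil")
        B[p - 1, p - 1] = 0.0   # zero difference at the boundary
        return B.tocsr()
    Dx = sparse.kron(sparse.identity(n), K(m))   # differences down each column
    Dy = sparse.kron(K(n), sparse.identity(m))   # differences across each row
    return sparse.vstack([Dx, Dy]).tocsr()
```

Applying the operator to a constant image gives exactly zero, reflecting that constants lie in the null space of the gradient; this null space is precisely why the resulting linear systems (94) and (95) can be singular or ill-conditioned.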

Linear systems (94) and (95) are singular or ill-conditioned problems, so direct or iterative methods may fail to produce an approximate solution for these problems.
Then linear systems (94) and (95) can be rewritten in a common form (103). In order to solve the ill-conditioned linear systems (103) for u^{k+1}, we propose the following iterative method:

Algorithm: SOLVER(ν)
  Choose x₁ = u^k
  for ℓ = 1 to s
    Solve (A + νI) x_{ℓ+1} = ν x_ℓ + c for x_{ℓ+1}
  end for
  u^{k+1} = x_{s+1}

Here u^k refers to the restored image computed at the previous kth step, and ν > 0 is a parameter which should be chosen so that the coefficient matrix A + νI is well conditioned. To study the semiconvergence of SOLVER(ν), we first introduce some important properties of such an iterative method.
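A sketch of SOLVER(ν) with an inner conjugate gradient solve is given below. Since the symbols in the listing above are partly lost in this version, the names A, c, x, and s here are illustrative reconstructions: A + νI is the shifted coefficient matrix and s is the number of inner steps.

```python
import numpy as np
from scipy import sparse

def cg_solve(M, rhs, x0, tol=1e-10, maxiter=200):
    """Plain conjugate gradient for a symmetric positive definite M."""
    x = x0.copy()
    r = rhs - M @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) < tol:
            break
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def solver_nu(A, c, u_prev, nu, s=1):
    """Sketch of SOLVER(nu): the singular/ill-conditioned system A u = c
    is handled by repeatedly solving the well-conditioned shifted system
        (A + nu*I) x_{l+1} = nu*x_l + c,
    each inner solve done with CG (names A, c, s are illustrative)."""
    M = A + nu * sparse.identity(A.shape[0], format="csr")
    x = u_prev.copy()
    for _ in range(s):
        x = cg_solve(M, nu * x + c, x)
    return x
```

At a fixed point, (A + νI)x = νx + c reduces to Ax = c, so the shift changes the conditioning of each solve without changing the limit; components in the null space of A simply retain their value from the starting vector.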

Since A + νI is symmetric positive definite for ν > 0, the linear systems in SOLVER(ν) can be solved using the CG (Conjugate Gradient) method [32, 33]. Notice that the convergence rate of the CG method depends on the condition number of the coefficient matrix A + νI. So we next provide some conditioning properties for A + νI in SOLVER(ν). For a square matrix M, let κ(M) denote the condition number of M. For a symmetric matrix M ∈ Rⁿˣⁿ, let λᵢ(M) denote the ith largest eigenvalue of M for 1 ≤ i ≤ n.
Notice that the condition λ₂(A) < λ₁(A) in Theorem 7 usually holds when A is a singular or ill-conditioned matrix. Theorem 7 implies that the condition number of A + νI is small if a large value of ν > 0 is chosen. In other words, for a suitably chosen large ν > 0, the coefficient matrix A + νI in SOLVER(ν) is well conditioned, so that the CG method for solving the linear systems in SOLVER(ν) converges quickly to the exact solution. The split Bregman method is an iterative method which monotonically decreases the residual norm between the restored image and the noisy image, so we do not have to compute an accurate solution at every iteration. For this reason, we have used s = 1 inner iteration in SOLVER(ν).
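The conditioning claim is easy to check numerically: for a symmetric positive semidefinite A, κ(A + νI) = (λ₁(A) + ν)/(λₙ(A) + ν), which decreases toward 1 as ν grows. A small illustrative sketch (not taken from the paper):

```python
import numpy as np

def cond_shifted(A, nu):
    """Condition number of A + nu*I for a symmetric positive semidefinite A:
        (lambda_max + nu) / (lambda_min + nu),
    which decreases monotonically toward 1 as nu increases."""
    lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
    return (lam[-1] + nu) / (lam[0] + nu)
```

For a singular A (smallest eigenvalue zero), the unshifted condition number is infinite, while even a modest shift ν makes the system solvable and well conditioned, at the cost of biasing each inner solve toward the previous iterate.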
Finally, one can obtain the split Bregman algorithms for the RLO and AA2 models. In Algorithms 7 and 8, maxit denotes the maximum number of outer iterations, ‖·‖₂ stands for the l₂-norm, and tol denotes the tolerance of the stopping criterion, which is set to 3 × 10⁻³ for the test problems used in this paper. Notice that all images are assumed to have an intensity range of [0, 1] to guarantee convergence of the binomial series used in the split Bregman algorithms.

Numerical Experiments
In this section, we provide numerical performance results of the TM, split Bregman, RHPM(2), and RHPM(1) methods for multiplicative noise removal problems. Numerical tests are performed using MATLAB R2016a on a personal computer with a 3.6 GHz CPU and 8 GB RAM. The multiplicative noisy images are generated by multiplying Gaussian white noise or Gamma noise into the clean image using the built-in MATLAB function randn or randg. More specifically, the multiplicative noisy images are generated as f = u ⋅ η, where u is the clean image, η = 1 + √0.01 × randn(m, n) generates Gaussian white noise with mean 1 and variance 0.01, η = 0.01 × randg(1/0.01, m, n) generates Gamma noise with mean 1 and variance 0.01, and u ⋅ η denotes the componentwise multiplication of u and η.
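The noise construction can be reproduced in Python/NumPy as follows (the function and argument names are ours; the shape/scale choice for the Gamma noise mirrors the randg call above):

```python
import numpy as np

def add_multiplicative_noise(u, var=0.01, kind="gaussian", rng=None):
    """Generate f = u .* eta with E[eta] = 1 and Var[eta] = var.
    Gaussian: eta = 1 + sqrt(var)*randn.  Gamma: eta = var*Gamma(1/var, 1),
    which has mean (1/var)*var = 1 and variance (1/var)*var^2 = var."""
    rng = np.random.default_rng(rng)
    if kind == "gaussian":
        eta = 1.0 + np.sqrt(var) * rng.standard_normal(u.shape)
    else:
        eta = var * rng.standard_gamma(1.0 / var, size=u.shape)
    return u * eta
```

Both constructions produce unit-mean noise, so the noisy image is unbiased on average; only the distribution shape differs, which is exactly what distinguishes the RLO and AA2 fidelity terms.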
In order to illustrate the efficiency and reliability of the proposed denoising algorithms (i.e., Algorithms 1-8), we provide numerical results for four test images: Caribou, Boat, Jet Plane, and Lake. The pixel size of the Caribou image is 256 × 256, and the pixel size of the other three test images is 512 × 512. To evaluate the quality of the restored images, we use the peak signal-to-noise ratio (PSNR) between the restored image and the original image, defined in terms of the Frobenius norm ‖·‖_F of the difference between the original image u and the restored image ũ, where u_{i,j} stands for the value of the original image at the pixel point (i, j) and m ⋅ n is the total number of pixels. A larger PSNR generally indicates a better quality of the denoised image.
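Under the common convention PSNR = 10 log₁₀(peak² · mn / ‖u − ũ‖_F²), which matches the Frobenius-norm definition above up to the assumed peak intensity, a minimal sketch is:

```python
import numpy as np

def psnr(u, u_tilde, peak=1.0):
    """PSNR = 10*log10(peak^2 * m*n / ||u - u_tilde||_F^2),
    i.e. 10*log10(peak^2 / MSE), for images in the range [0, peak].
    The peak value is an assumption of this sketch (1.0 for [0,1] images,
    255 for 8-bit images)."""
    mse = np.sum((u - u_tilde) ** 2) / u.size
    return 10.0 * np.log10(peak**2 / mse)
```

For example, a uniform error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB, a useful mental calibration when reading the tables.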
For the numerical experiments, the TM method uses the test images with an intensity range of [0, 255], the split Bregman and RHPM methods use the test images with an intensity range of [0, 1], and the noise is Gaussian white noise or Gamma noise with mean 1 and variance 0.01 or 0.03. For all test problems, the initial image was set to the observed noisy image f, and we have used the fixed values 200, 0.2, and 0.1 for the parameters of the RLO model and 10⁻⁴ for the parameter of the AA2 model. For the TM and RHPM methods (i.e., Algorithms 1-6), we have used Δt = 0.35, and the iteration was terminated when MAE(k), defined in Section 2, started to increase. For the split Bregman method (i.e., Algorithms 7 and 8), we have used the value 255 for its parameter, and the iteration was terminated when the stopping criterion with tol = 3 × 10⁻³ was satisfied for all test problems. Tables 1 and 2 provide numerical results for the TM, split Bregman, RHPM(1), and RHPM(2) methods corresponding to the AA2 model with multiplicative Gamma noise and the RLO model with multiplicative Gaussian white noise, respectively. In Tables 1 and 2, Psnr0 represents the PSNR values for the noisy images, Psnr represents the PSNR values for the restored images, the model parameters are chosen as the best ones found by numerical trials, Iter denotes the number of iterations, and Time denotes the elapsed CPU time in seconds. Figures 1-4 show the denoised images produced by the TM, split Bregman, RHPM(1), and RHPM(2) methods for the AA2 model with multiplicative Gamma noise of variance 0.03. As can be seen in Tables 1 and 2, the split Bregman method denoises best for variance 0.03, while RHPM(2) denoises best for variance 0.01. That is, the split Bregman method has the highest PSNR values for variance 0.03, while RHPM(2) has the highest PSNR values for variance 0.01. Observe that RHPM(1) computes two vectors v₀, v₁ every iteration, while RHPM(2) computes three vectors v₀, v₁, v₂ every iteration.
This means that the computational cost of RHPM(2) per iteration is more expensive than that of RHPM(1). However, the convergence rate of RHPM(2) is much faster than that of RHPM(1), so the total CPU time of RHPM(2) is much less than that of RHPM(1). In addition, RHPM(2) denoises much better than RHPM(1), and RHPM(2) has the smallest execution time among the methods considered in this paper. Overall, the split Bregman and RHPM(2) methods perform better than the RHPM(1) and TM methods. The split Bregman method denoises almost as well as RHPM(2) for most cases, while it denoises significantly better than RHPM(2) for the Jet Plane image with variance 0.03. In this regard, the split Bregman method performs more stably than RHPM(2).

Conclusion
In this paper, we have proposed restarted homotopy perturbation methods (RHPM) and split Bregman methods for multiplicative noise removal of the RLO and AA2 models. For the RHPM method, we have used binomial series techniques to settle the main difficulty in handling the nonlinear terms. For the split Bregman method, we have used splitting techniques for singular matrices to handle the difficulty in solving ill-conditioned linear systems. Numerical experiments have shown that these techniques work well for all test problems. More specifically, the split Bregman and RHPM methods perform better than the TM method.
Numerical experiments also showed that the split Bregman and RHPM(2) methods perform better than RHPM(1). In addition, the split Bregman method denoises almost as well as RHPM(2) for most cases, but it performs more stably than RHPM(2) for noisy images with large variance. Notice that RHPM(2) has the smallest execution time because of its fastest convergence rate. Based on these facts, the RHPM(2) and split Bregman methods are preferred over RHPM(1), and the split Bregman method is preferred over RHPM(2) for noisy images with large variance. The proposed RHPM and split Bregman methods have only been compared with the TM method, so future work will include comparison studies between the proposed methods and other existing methods for multiplicative noise removal models.

Data Availability
All the data used to support the findings of this study are included within the article.