A Modified Dai–Liao Conjugate Gradient Method with a New Parameter for Solving Image Restoration Problems

An adaptive choice for the parameter of the Dai–Liao conjugate gradient method is suggested in this paper. It is obtained with a modified quasi–Newton equation, which yields a modified Dai–Liao conjugate gradient method. The proposed method has some interesting features: (i) the value of the parameter t of the modified Dai–Liao conjugate gradient method uses both gradient and function value information; (ii) we establish the global convergence property of the modified Dai–Liao conjugate gradient method under some suitable assumptions; (iii) numerical results show that the modified DL method is effective in practical computation and in image restoration problems.


Introduction
In this paper, we consider the following unconstrained optimization problem:

min f(x), x ∈ R^n, (1)

where f: R^n → R is a continuously differentiable function. Thanks to their simplicity and low memory requirements [1][2][3][4][5], conjugate gradient (CG) methods [6][7][8][9][10][11] are useful tools for solving large-scale problems. Thus, we consider solving (1) with conjugate gradient methods. A sequence of iterates is generated from a given x_0 by the following simple iteration formula:

x_{k+1} = x_k + α_k d_k, k = 0, 1, 2, . . ., (2)

where x_k is the k-th iteration point, the step size α_k > 0 is usually chosen to satisfy certain line search conditions, and d_k is the search direction defined by

d_0 = −g_0, d_k = −g_k + β_k d_{k−1}, k ≥ 1, (3)

where β_k is a scalar. Many scholars have presented classical formulas for β_k; the corresponding methods are called the Polak–Ribière–Polyak (PRP) [12,13], Fletcher–Reeves (FR) [14], Hestenes–Stiefel (HS) [15], conjugate descent (CD) [16], Liu–Storey (LS) [17], and Dai–Yuan (DY) [18] CG methods:

β_k^PRP = g_k^T (g_k − g_{k−1}) / ‖g_{k−1}‖², β_k^FR = ‖g_k‖² / ‖g_{k−1}‖²,
β_k^HS = g_k^T (g_k − g_{k−1}) / (d_{k−1}^T (g_k − g_{k−1})), β_k^CD = −‖g_k‖² / (d_{k−1}^T g_{k−1}),
β_k^LS = −g_k^T (g_k − g_{k−1}) / (d_{k−1}^T g_{k−1}), β_k^DY = ‖g_k‖² / (d_{k−1}^T (g_k − g_{k−1})), (4)

where g_{k−1} denotes the gradient ∇f(x_{k−1}) of f(x) at the point x_{k−1} and ‖·‖ is the Euclidean norm. The global convergence properties of the CD, DY, and FR methods are relatively easy to establish, but the numerical results of these methods are not desirable in some computations. Powell [19] explained a major numerical disadvantage of the FR method: subsequent steps can be very short if a small step is generated away from the solution point. If a poor d_k occurs in practical computation, the PRP, HS, or LS method performs a restart, so these three methods perform much better than the other three in numerical tests. They are generally regarded as the most efficient conjugate gradient methods. Therefore, in recent years, many scholars have tried to develop modified conjugate gradient formulas that possess the global convergence property for general functions together with satisfactory numerical performance.
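The basic iteration (2)–(3) can be sketched as follows. This is a minimal illustration using the FR and PRP formulas above, with a simple Armijo backtracking step standing in for the Wolfe-type line searches discussed later; the test function is a hypothetical example.

```python
import numpy as np

def cg_minimize(f, grad, x0, beta_rule="PRP", tol=1e-6, max_iter=500):
    # Sketch of the CG iteration (2)-(3). A plain Armijo backtracking step
    # is used here instead of a Wolfe line search, to keep the sketch short.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                 # safeguard: restart if not a descent direction
            d = -g
        alpha, c = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if beta_rule == "FR":             # Fletcher-Reeves
            beta = g_new.dot(g_new) / g.dot(g)
        else:                             # PRP (with the common "+" truncation)
            beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage: minimize a small convex quadratic; the minimizer is the origin.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x_star = cg_minimize(lambda x: 0.5 * x.dot(A).dot(x), lambda x: A.dot(x),
                     np.array([1.0, 1.0]))
```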
Among them, Dai and Liao [20] proposed modified conjugate gradient methods with a new conjugacy condition. Interestingly, their method not only is globally convergent for general functions, but also has better numerical performance than the HS and PRP methods.
When the CG methods produce a sequence of search directions d_k, the following conjugacy condition holds:

d_m^T H d_n = 0, m ≠ n, (5)

where H is the Hessian of the objective function. The vector y_{k−1} is defined as

y_{k−1} = g_k − g_{k−1}. (6)

If the general function f is nonlinear, then, combining with the mean value theorem, we conclude that

y_{k−1} = ∇²f(x_{k−1} + τ s_{k−1}) s_{k−1}, (7)

for some τ ∈ (0, 1), where s_{k−1} = x_k − x_{k−1}. By (5) and (7), we have

d_k^T y_{k−1} = 0. (8)

As we know, Dai and Liao [20] studied (8) in depth based on quasi-Newton techniques. In the quasi-Newton method, H_{k−1} is an approximation of the Hessian ∇²f(x_{k−1}), and the new matrix H_k satisfies the secant equation

H_k s_{k−1} = y_{k−1}. (9)

In the quasi-Newton method, the search direction d_k is defined by

d_k = −H_k^{−1} g_k. (10)

According to the above two equations, we have

d_k^T y_{k−1} = −g_k^T s_{k−1}. (11)

Based on the above relations, Dai and Liao introduced the following conjugacy condition:

d_k^T y_{k−1} = −t g_k^T s_{k−1}, (12)

where t is a nonnegative parameter. In the case t = 0, (12) becomes (8); in the case t = 1, (12) becomes (11). Moreover, g_k^T s_{k−1} = 0 holds under the exact line search, in which case both (11) and (12) coincide with (8). According to the above discussion, (12) can be considered an extension of (8) and (11). Multiplying the definition of d_k by y_{k−1} and using (12), Dai and Liao introduced the new formula of the DL method as follows:

β_k^DL = g_k^T y_{k−1} / (d_{k−1}^T y_{k−1}) − t g_k^T s_{k−1} / (d_{k−1}^T y_{k−1}). (13)

In [20], the conjugate gradient method with (13) is shown to possess the global convergence property for uniformly convex functions. Furthermore, in order to ensure global convergence for general functions as well, a new formula was presented by Dai and Liao:

β_k^DL+ = max{g_k^T y_{k−1}, 0} / (d_{k−1}^T y_{k−1}) − t g_k^T s_{k−1} / (d_{k−1}^T y_{k−1}). (14)

It is easily observed that the first term in (14) is kept nonnegative. They proved that the modified DL method with (14) is globally convergent for general functions under some suitable conditions. In the DL method, different values of the parameter t lead to different numerical performance.
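As a concrete sketch, the DL coefficient (13) and its truncated variant (14) can be computed as follows, with the gradients and differences passed in as plain NumPy vectors:

```python
import numpy as np

def beta_dl(g_new, g_old, d_old, s_old, t=0.1, plus=False):
    # Dai-Liao coefficient (13); with plus=True, the DL+ variant (14),
    # which truncates the first (HS-type) term at zero.
    y_old = g_new - g_old                  # y_{k-1} = g_k - g_{k-1}
    denom = d_old.dot(y_old)
    hs_term = g_new.dot(y_old) / denom
    if plus:
        hs_term = max(hs_term, 0.0)
    return hs_term - t * g_new.dot(s_old) / denom

# Usage with illustrative vectors (t = 0.1 is an arbitrary sample value).
g_old = np.array([1.0, 2.0, -1.0])
g_new = np.array([0.5, -1.0, 0.2])
d_old = np.array([-1.0, -2.0, 1.0])
s_old = 0.1 * d_old
b = beta_dl(g_new, g_old, d_old, s_old, t=0.1)
bp = beta_dl(g_new, g_old, d_old, s_old, t=0.1, plus=True)
```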
Based on a singular value study, Hager and Zhang [21] presented new choices for t and proved that the resulting d_k satisfies the sufficient descent condition. In addition, Babaie-Kafaki and Ghanbari [22] presented a new value t_1 for t and proved that the DL method with t = t_1 has better numerical results than the methods proposed by Dai and Kou [23]. In the last several years, other scholars have also tried to find new choices for the nonnegative parameter t in (13) [24,25]. The remainder of this paper is organized as follows: in Section 2, based on the new conjugacy condition, a modified DL gradient method with a new value for the parameter is proposed. We establish the global convergence property of the presented method in Section 3. Finally, we report numerical experiments in Section 4.

New DL Conjugate Gradient Method
It is well known that, near a local minimizer, the objective function can be well approximated by a quadratic model, so, if a point x_k is close enough to a local minimizer, a good direction is the Newton direction:

d_k = −∇²f(x_k)^{−1} g_k.

Hence, from the d_k of CG methods and the formula for β_k in (13), we can compute a value of the parameter t; by some algebra, we obtain an explicit expression for t. However, because computing the Hessian matrix ∇²f(x_k) is inefficient, a quasi-Newton approach is used in this paper. In quasi-Newton methods [26], the search direction d_k is computed by solving the linear system B_k d_k = −g_k, where B_k is an approximation of the Hessian ∇²f(x_k) satisfying the secant equation B_k s_{k−1} = y_{k−1}.

Theorem 1. Assume that the function f(x) is sufficiently smooth and ‖s_{k−1}‖ is sufficiently small. Then

s_{k−1}^T y_{k−1} = s_{k−1}^T ∇²f(x_k) s_{k−1} − (1/2) T_k s_{k−1} s_{k−1} s_{k−1} + O(‖s_{k−1}‖⁴),

where T_k is the tensor of f at x_k, that is, T_k s_{k−1} s_{k−1} s_{k−1} = Σ_{i,j,l} (∂³f(x_k)/∂x^i ∂x^j ∂x^l) s_{k−1}^i s_{k−1}^j s_{k−1}^l.

Proof. Performing a Taylor expansion of the objective function f(x) about x_k and using the definition of y_{k−1}, we complete the proof.
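Theorem 1 can be checked numerically. In the same spirit, the function-value correction θ_{k−1} = 2(f_{k−1} − f_k) + (g_{k−1} + g_k)^T s_{k−1} that drives modified secant equations of this kind satisfies θ_{k−1} = (1/6) T_k s_{k−1} s_{k−1} s_{k−1} + O(‖s_{k−1}‖⁴) by Taylor expansion; for a cubic objective the identity is exact. A small check (the cubic test function is a hypothetical example):

```python
import numpy as np

# Hypothetical cubic test function, so fourth-order remainder terms vanish
# and theta equals (1/6) * T_k(s, s, s) exactly.
f = lambda x: x[0]**3 + 2.0 * x[1]**3 + x[0] * x[1]
grad = lambda x: np.array([3.0 * x[0]**2 + x[1], 6.0 * x[1]**2 + x[0]])

xk = np.array([1.0, 1.0])
ratios = []
for h in (1e-1, 1e-2):
    s = h * np.array([1.0, -1.0])            # s_{k-1} = x_k - x_{k-1}
    xk_prev = xk - s
    theta = 2.0 * (f(xk_prev) - f(xk)) + (grad(xk) + grad(xk_prev)).dot(s)
    ratios.append(theta / h**3)              # here T_k(s,s,s) = 6*s1^3 + 12*s2^3 = -6*h^3
print(ratios)                                # both ratios equal -1 (up to rounding)
```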
BFGS-type methods have been proven to enjoy global convergence for uniformly convex functions, but they may fail to converge for nonconvex functions. Therefore, a new version of the BFGS method was proposed by Wei et al. [27] to overcome this convergence failure for more general objective functions. In their method, the matrix B_k is obtained from a modified secant equation in which y_{k−1} is replaced by a corrected vector that incorporates function value information through the quantity θ_{k−1} = 2(f_{k−1} − f_k) + (g_{k−1} + g_k)^T s_{k−1}. In order to inherit the powerful theoretical and numerical properties of the modified BFGS method, we use (24) to simplify (19) and propose a new value of the parameter t. Then, to ensure that the new DL method satisfies the descent condition, we present a modified definition of the value of t similar to [28], with θ > 1/4, which yields the search direction d_k given in (27). Based on the above discussion, the advantages of these formulas can be summarized as follows: the parameter t exploits both gradient and function value information, and the resulting search direction satisfies the sufficient descent condition.
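The following is a sketch of this construction under two explicit assumptions, since the paper's displayed formulas (24)–(27) are not reproduced here: the modified secant vector is taken in the common form y* = y + (θ_{k−1} / s^T s) s, and t is taken HZ-style as t = θ ‖y*‖² / (s^T y*) with θ > 1/4. With s_{k−1} parallel to d_{k−1} (as it is along a line search), the resulting direction satisfies d_k^T g_k ≤ −(1 − 1/(4θ)) ‖g_k‖², matching the constant λ in Lemma 1 below.

```python
import numpy as np

def modified_y(f_old, f_new, g_old, g_new, s):
    # One common variant of the modified secant vector (an assumption; the
    # paper's exact form may differ):
    #   y* = y + (theta / s^T s) * s,
    #   theta = 2(f_{k-1} - f_k) + (g_{k-1} + g_k)^T s_{k-1}.
    y = g_new - g_old
    theta = 2.0 * (f_old - f_new) + (g_old + g_new).dot(s)
    return y + (theta / s.dot(s)) * s

def dl_direction(g_new, d_old, s_old, y_vec, theta=0.5):
    # DL-type direction with an HZ-style parameter (also an assumption):
    #   t = theta * ||y||^2 / (s^T y), theta > 1/4.
    denom = d_old.dot(y_vec)
    t = theta * y_vec.dot(y_vec) / s_old.dot(y_vec)
    beta = (g_new.dot(y_vec) - t * g_new.dot(s_old)) / denom
    return -g_new + beta * d_old
```

For a quadratic objective the correction θ_{k−1} vanishes identically, so y* reduces to the ordinary y; the function-value term only activates when third-order behavior is present.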

Convergence Analysis
In what follows, we make some indispensable assumptions on the objective function for the global convergence analysis of the algorithm.

Assumption 1. (i) The level set Υ = {x ∈ R^n : f(x) ≤ f(x_0)} is bounded.
(ii) In some neighbourhood Ψ of Υ, f is differentiable, and its gradient g is Lipschitz continuous, namely, ‖g(x) − g(y)‖ ≤ L‖x − y‖,

Mathematical Problems in Engineering
where L > 0 is a constant and x, y ∈ Ψ are arbitrary.
Remark 2. In order to establish the convergence property of the new method, we assume that the newly proposed parameter is bounded. For this purpose, we impose an upper bound H on it, where H is a large positive constant.

Lemma 1.
Consider the proposed modified DL method in the form of (27). If the line search procedure guarantees d_k^T y_k ≥ 0 for all k ≥ 0, then we have

g_k^T d_k ≤ −λ ‖g_k‖²,

where λ = 1 − 1/(4θ).
□

Lemma 2. Suppose that Assumption 1 holds. Consider the proposed modified DL method in the form of (27), with the step size α_k obtained by the equations in Step 2 of Algorithm 1. If f is a nonconvex function on the level set Ψ, then (32) holds.

Proof. From Lemma 1, d_k^T g_k < 0 holds. By the second equation in Step 2 of Algorithm 1 and (28), a bound on the step sizes follows. From the first equation in Step 2 of Algorithm 1, summing both sides of the resulting inequalities over k and combining with (34), we conclude that (32) holds, which completes the proof. □

Lemma 3. Suppose that Assumption 1 holds and that the sequence {x_k, d_k, α_k, g_k} is generated by Algorithm YWLDL. Then we obtain lim inf_{k→∞} ‖g_k‖ = 0.

Proof. From Lemma 1 and (37), combining with Lemma 2, the conclusion follows. The proof is complete. □
Step 6: Set k := k + 1 and go to Step 3.
Algorithm 1: the YWLDL algorithm.

Numerical Experiments
This section reports some numerical results. We carried out numerical experiments in order to investigate the performance of our proposed algorithm and compare it with others. All numerical tests in the following subsections were run on a PC with a 2.30 GHz CPU and 8.00 GB of memory under the Windows 10 operating system.

Normal Unconstrained Optimization Problems.
In this section, we test the algorithms on the unconstrained optimization problems listed in Table 1. We report various numerical results of our presented method (the YWLDL algorithm) with the modified Wolfe-Powell line search (YWL) on these problems and compare its performance with that of the following algorithms:

HZ algorithm: the DL method with the parameter t of [21] under the weak Wolfe-Powell line search.
BG algorithm: the DL method with the parameter t of [22] under the weak Wolfe-Powell line search.
DL+ algorithm: the DL method with the parameter t of [20] under the strong Wolfe-Powell line search.

The stopping rules, dimensions, and parameters of this experiment are as follows:

Stopping rules (the Himmelblau stop rule [31]): set stop1 = |f(x_k) − f(x_{k+1})|/|f(x_k)| if |f(x_k)| > e_1, and stop1 = |f(x_k) − f(x_{k+1})| otherwise. The algorithm stops if ‖g(x)‖ < ε or stop1 < e_2 is satisfied, where e_1 = e_2 = 10^{−6} and ε = 10^{−6}. The algorithm also stops if the number of iterations exceeds 1000.
Dimensions: 3000, 6000, and 9000 variables.
Parameters: δ = 0.2, δ_1 = 0.05, and σ = 0.9.

The columns of Table 1 have the following meanings: Nr: the number of the tested problem; Test problems: the name of the problem. The comparison data are as follows: NI: the number of iterations; NFG: the total number of function and gradient evaluations; CPU: the CPU time spent by the algorithm, in seconds.

We used the performance profile tool presented by Dolan and Moré [32] to analyse the efficiency of the YWLDL, HZ, BG, and DL+ algorithms; Figures 1-3 show the profiles relative to the CPU time, NI, and NFG, respectively.
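For concreteness, a weak Wolfe-Powell line search with the δ and σ values above can be implemented by a standard bisection scheme; this is a generic sketch, not the paper's modified YWL variant (whose exact equations, including the role of δ_1, are not reproduced here):

```python
import numpy as np

def weak_wolfe(f, grad, x, d, delta=0.2, sigma=0.9, max_iter=50):
    # Bisection sketch of a weak Wolfe-Powell line search:
    #   f(x + a*d) <= f(x) + delta * a * g^T d    (sufficient decrease)
    #   grad(x + a*d)^T d >= sigma * g^T d        (curvature)
    fx, gd = f(x), grad(x).dot(d)
    lo, hi, alpha = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + delta * alpha * gd:
            hi = alpha                            # decrease condition fails: shrink
        elif grad(x + alpha * d).dot(d) < sigma * gd:
            lo = alpha                            # curvature condition fails: grow
        else:
            return alpha
        alpha = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * lo
    return alpha

# Usage: one step along the steepest descent direction of f(x) = 0.5*||x||^2.
x0 = np.array([2.0, 0.0])
d0 = -x0
alpha = weak_wolfe(lambda x: 0.5 * x.dot(x), lambda x: x, x0, d0)
```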
Figure 1 shows that, in terms of CPU time, YWLDL is the best solver on about 35 percent of the test problems, HZ on about 18 percent, BG on about 24 percent, and DL+ on about 6 percent. We can conclude that YWLDL is more competitive than the HZ, BG, and DL+ algorithms, since its performance curve for the CPU time is the best. Figures 2 and 3 show that the robustness of the YWLDL algorithm is slightly worse than that of the BG algorithm; nevertheless, the YWLDL algorithm successfully solves most of the test problems. Altogether, the experimental results make clear that the YWLDL algorithm is efficient and that, with the modified Wolfe-Powell line search, it is competitive with the other three algorithms on these test problems.
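The Dolan-Moré profiles behind Figures 1-3 are easy to reproduce: for each solver s, ρ_s(τ) is the fraction of problems on which that solver's cost is within a factor τ of the best cost over all solvers. A minimal sketch with hypothetical timings (the data below are illustrative only, not the paper's results):

```python
import numpy as np

def performance_profile(T, taus):
    # Dolan-More performance profile. T[i, s] = cost (e.g. CPU time) of
    # solver s on problem i, with np.inf marking a failure. Returns
    # rho[s][j] = fraction of problems solved within ratio taus[j] of the best.
    ratios = T / T.min(axis=1, keepdims=True)
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Hypothetical timings: 4 problems x 2 solvers.
T = np.array([[1.0, 2.0],
              [3.0, 1.5],
              [2.0, np.inf],   # solver 2 fails on problem 3
              [4.0, 4.0]])
rho = performance_profile(T, taus=[1.0, 2.0, 4.0])
```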

Muskingum Model in Engineering Problems.
It is well known that effective optimization algorithms are of major importance in engineering problems, and many scholars hope to present such algorithms for solving them. Parameter estimation is one of the important tools in the study of a well-known hydrologic engineering application, the nonlinear Muskingum model. The Muskingum model [33], which depends on the water inflow and outflow, is a popular flood routing model and is defined

at time t_k, where k = 1, 2, . . . , n and n denotes the total number of time periods; x_1, x_2, and x_3 denote the storage time constant, the weighting factor, and an additional parameter, respectively; Δt denotes the time step; I_i denotes the observed inflow discharge; and Q_i denotes the observed outflow discharge. Using actual observation data of the flood runoff process at Chenggouwan and Linqing of Nanyunhe in the Haihe Basin, Tianjin, China, we set Δt = 12 (h) and the initial point x = [0, 1, 1]^T. The detailed data for I_i and Q_i in 1960, 1961, and 1964 can be found in [34]. The tested final points are listed in Table 2. From Figures 4-6, we can draw at least the following conclusion: (i) like the BFGS method and the HIWO method, the presented algorithm provides a good approximation for these data, and the YWLDL algorithm can effectively be used to study this nonlinear Muskingum model.
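As a hedged stand-in for the model equations, one common nonlinear Muskingum formulation uses the storage S_i = x_1 [x_2 I_i + (1 − x_2) Q_i]^{x_3} together with the trapezoidal discretization of the water balance dS/dt = I − Q, and fits x = (x_1, x_2, x_3) by least squares; the paper's exact displayed objective may differ in details, so treat the following as an assumption:

```python
import numpy as np

def muskingum_residuals(x, I, Q, dt):
    # Nonlinear Muskingum storage (a common form, assumed here):
    #   S_i = x1 * [x2 * I_i + (1 - x2) * Q_i] ** x3,
    # combined with the trapezoidal water balance (S_{i+1}-S_i)/dt = I - Q.
    x1, x2, x3 = x
    S = x1 * (x2 * I + (1.0 - x2) * Q) ** x3
    return (S[1:] - S[:-1]) / dt - 0.5 * ((I[1:] + I[:-1]) - (Q[1:] + Q[:-1]))

def muskingum_objective(x, I, Q, dt=12.0):
    # Sum-of-squares objective; dt = 12 (h) matches the time step in the text.
    r = muskingum_residuals(x, I, Q, dt)
    return r.dot(r)
```

A steady state (constant, equal inflow and outflow) yields zero residuals for any admissible parameter vector, which gives a quick sanity check of the implementation.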

Image Restoration Problems.
In this section, we use the proposed algorithm to recover original images from images corrupted by impulse noise, a task of important practical significance in the field of optimization. The selection of parameters is similar to that in the preceding experiments. The stopping condition is |f_{k+1} − f_k| / |f_k| < 10^{−3} or ‖x_{k+1} − x_k‖ / ‖x_k‖ < 10^{−3}. The experiments choose Barbara (512 × 512), Baboon (512 × 512), and Lena (512 × 512) as the test images. The detailed performance results are given in Figures 7-9, where the HZ, BG, DL+, and YWLDL algorithms are all tested on the restoration of these images. The CPU times are listed in Table 3 to compare the YWLDL algorithm with the other algorithms.
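Restoration in this class of experiments is typically posed as minimizing a smooth edge-preserving functional over the pixels flagged as impulse noise. The sketch below is a toy version of that setup: a synthetic image, plain gradient descent instead of the YWLDL iteration, and an assumed potential φ(t) = sqrt(t² + ε); none of these are the paper's exact choices.

```python
import numpy as np

def add_salt_pepper(img, ratio, rng):
    # Corrupt a grayscale image (values in [0, 255]) with impulse noise.
    noisy = img.copy()
    mask = rng.random(img.shape) < ratio
    noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
    return noisy, mask

def restore(noisy, mask, eps=1.0, iters=300, lr=0.4):
    # Toy restoration: adjust only the noise-flagged pixels to minimize the
    # edge-preserving potential phi(t) = sqrt(t^2 + eps) of the differences
    # to the 4 neighbours (periodic boundaries via np.roll), by plain
    # gradient descent. The paper instead minimizes with the YWLDL CG method.
    u = noisy.astype(float)
    for _ in range(iters):
        g = np.zeros_like(u)
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            diff = u - np.roll(u, shift, axis=(0, 1))
            g += diff / np.sqrt(diff**2 + eps)
        u[mask] -= lr * g[mask]          # update only pixels flagged as noisy
    return u

# Usage: a flat synthetic image with 25% impulse noise.
rng = np.random.default_rng(0)
img = np.full((16, 16), 128.0)
noisy, mask = add_salt_pepper(img, 0.25, rng)
out = restore(noisy, mask)
```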
From Table 3 and Figures 7-9, we may draw the following conclusions: (i) restoring an image with 25% noise clearly takes less time than restoring one with 65% noise; (ii) as the salt-and-pepper noise level increases, the cost of restoring the image increases; (iii) the YWLDL algorithm shows an advantage over the HZ, BG, and DL+ algorithms in image restoration problems.

Conclusion
In this paper, the YWLDL conjugate gradient algorithm, which combines a new Dai-Liao parameter with the modified WWP line search technique, is proposed. The theory of other methods for constructing the parameter t remains an interesting topic, and we believe that modified DL methods with other choices of the parameter t under suitable line searches deserve further study.

Figure 8: Restoration of the Barbara, Baboon, and Lena images by the HZ algorithm, BG algorithm, DL+ algorithm, and YWLDL algorithm. From left to right: a noisy image with 45% salt-and-pepper noise, followed by the restorations obtained by minimizing z with the HZ, BG, DL+, and YWLDL algorithms.

Figure 9: Restoration of the Barbara, Baboon, and Lena images by the HZ algorithm, BG algorithm, DL+ algorithm, and YWLDL algorithm. From left to right: a noisy image with 65% salt-and-pepper noise, followed by the restorations obtained by minimizing z with the HZ, BG, DL+, and YWLDL algorithms.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.