
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

  • Gonglin Yuan,

    Affiliations Guangxi Colleges and Universities Key Laboratory of Mathematics and Its Applications, College of Mathematics and Information Science, Guangxi University, Nanning, Guangxi, 530004, P. R. China, School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, P. R. China

  • Xiabin Duan ,

    xiabinduan@126.com

    Affiliation Guangxi Colleges and Universities Key Laboratory of Mathematics and Its Applications, College of Mathematics and Information Science, Guangxi University, Nanning, Guangxi, 530004, P. R. China

  • Wenjie Liu,

    Affiliations School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, P. R. China, Jiangsu Engineering Center of Network Monitoring, Nanjing University of Information Science & Technology, Nanjing 210044, P. R. China

  • Xiaoliang Wang,

    Affiliation Guangxi Colleges and Universities Key Laboratory of Mathematics and Its Applications, College of Mathematics and Information Science, Guangxi University, Nanning, Guangxi, 530004, P. R. China

  • Zengru Cui,

    Affiliation Guangxi Colleges and Universities Key Laboratory of Mathematics and Its Applications, College of Mathematics and Information Science, Guangxi University, Nanning, Guangxi, 530004, P. R. China

  • Zhou Sheng

    Affiliation Guangxi Colleges and Universities Key Laboratory of Mathematics and Its Applications, College of Mathematics and Information Science, Guangxi University, Nanning, Guangxi, 530004, P. R. China

Abstract

Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is designed for solving unconstrained optimization problems, and the second algorithm is designed for solving nonlinear equations. The βk formula of the first method contains two kinds of information: function values and gradient values. Both methods possess the following good properties: 1) βk ≥ 0; 2) the search direction has the trust region property without the use of any line search method; 3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

Introduction

As is well known, the conjugate gradient method is very popular and effective for solving the following unconstrained optimization problem: (1) min{f(x) ∣ x ∈ ℜn}, where f : ℜn → ℜ is continuously differentiable and g(x) denotes the gradient of f(x) at x. Problem Eq (1) can also be used to model various other problems [1–5]. The iterative formula used in the conjugate gradient method is usually given by (2) xk+1 = xk + αk dk and (3) dk = −gk + βk dk−1 (with d1 = −g1), where gk = g(xk), βk ∈ ℜ is a scalar, αk > 0 is a step length that is determined by some line search, and dk denotes the search direction. Different conjugate gradient methods correspond to different choices of βk. Some of the popular methods [6–12] used to compute βk are the DY conjugate gradient method [6], FR conjugate gradient method [7], PRP conjugate gradient method [8, 9], HS conjugate gradient method [10], LS conjugate gradient method [11], and CD conjugate gradient method [12]. The PRP parameter βk [8, 9] is defined by (4) βk = gkT yk−1/‖gk−1‖2, where ‖⋅‖ denotes the Euclidean norm and yk−1 = gk − gk−1. The PRP conjugate gradient method is currently considered to have the best numerical performance, but its convergence properties are not as good. With an exact line search, the global convergence of the PRP conjugate gradient method was established by Polak and Ribière [8] for convex objective functions. However, Powell [13] gave a counterexample proving that there exist nonconvex functions on which the PRP conjugate gradient method does not converge globally, even with an exact line search. With the weak Wolfe-Powell line search, Gilbert and Nocedal [14] proposed a modified PRP conjugate gradient method obtained by restricting βk to be nonnegative, and they proved its global convergence under the hypothesis that the search direction satisfies the sufficient descent condition. Gilbert and Nocedal [14] also gave an example showing that βk may be negative even when the objective function is uniformly convex. When the strong Wolfe-Powell line search is used, Dai [15] gave an example showing that the PRP method cannot guarantee that the search direction is a descent direction at every step, even if the objective function is uniformly convex.
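
To make the generic iteration Eqs (2)–(4) concrete, the following Python sketch implements a plain PRP conjugate gradient loop. The routine name, the Armijo backtracking rule used as a stand-in for the exact or Wolfe-type line searches discussed above, the restart safeguard, and all parameter values are illustrative assumptions rather than part of the methods studied in this paper.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Plain PRP conjugate gradient loop, Eqs (2)-(4).
    The Armijo backtracking below is only an illustrative stand-in
    for the exact/Wolfe line searches discussed in the text."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                       # d_1 = -g_1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:             # stop on a small gradient
            break
        fx, alpha, rho, c1 = f(x), 1.0, 0.5, 1e-4
        while f(x + alpha * d) > fx + c1 * alpha * g.dot(d) and alpha > 1e-12:
            alpha *= rho                         # Armijo backtracking
        x_new = x + alpha * d                    # Eq (2)
        g_new = grad(x_new)
        y = g_new - g                            # y_{k-1} = g_k - g_{k-1}
        beta = g_new.dot(y) / g.dot(g)           # PRP parameter, Eq (4)
        d = -g_new + beta * d                    # Eq (3)
        if g_new.dot(d) >= 0:                    # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

For example, prp_cg(lambda x: float(x @ x), lambda x: 2 * x, np.ones(4)) minimizes a simple quadratic in a few iterations.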

Through the above observations and [13, 14, 16–18], we know that the following sufficient descent condition (5) gkT dk ≤ −c‖gk‖2, where c > 0 is a constant, together with the condition that βk is not less than zero, is very important for establishing the global convergence of the conjugate gradient method.

The weak Wolfe-Powell (WWP) line search is designed to compute αk and is usually used in the global convergence analysis. The WWP line search requires αk to satisfy (6) f(xk + αk dk) ≤ f(xk) + σ1 αk gkT dk and (7) g(xk + αk dk)T dk ≥ σ2 gkT dk, where 0 < σ1 < σ2 < 1.
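
For completeness, a standard bracketing implementation of the WWP conditions Eqs (6) and (7), as reconstructed above, is sketched below in Python. It is a textbook construction, not necessarily the line search routine used in the paper's experiments, and the default values of σ1 and σ2 are illustrative.

```python
import numpy as np

def wwp_line_search(f, grad, x, d, sigma1=1e-4, sigma2=0.9, max_iter=60):
    """Bracketing/bisection search for a step length satisfying the
    WWP conditions Eqs (6)-(7); a standard construction with
    illustrative parameter values."""
    lo, hi = 0.0, float("inf")
    alpha = 1.0
    fx, slope = f(x), grad(x).dot(d)             # assumes d is a descent direction
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + sigma1 * alpha * slope:
            hi = alpha                           # Eq (6) fails: step too long
        elif grad(x + alpha * d).dot(d) < sigma2 * slope:
            lo = alpha                           # Eq (7) fails: step too short
        else:
            return alpha                         # both WWP conditions hold
        alpha = 2.0 * lo if hi == float("inf") else 0.5 * (lo + hi)
    return alpha
```

A routine of this kind can serve as the step-length computation in Step 2 of Algorithm 2.1 below.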

Recently, many new conjugate gradient methods ([19–28], etc.) that possess some good properties have been proposed for solving unconstrained optimization problems.

In Section 2, we state the motivation behind our approach and present a new modified PRP conjugate gradient method and a new algorithm for solving problem Eq (1). In Section 3, we prove that the search direction of the new algorithm satisfies the sufficient descent property and the trust region property; moreover, we establish the global convergence of the new algorithm with the WWP line search. In Section 4, we report numerical results for some test problems.

New algorithm for unconstrained optimization

Wei et al. [29] gave a new PRP-type conjugate gradient method that is usually called the WYL method. When the WWP line search is used, the WYL method has global convergence under the sufficient descent condition. Zhang [30] gave a modified WYL method, called the NPRP method, as follows

The NPRP method possesses better convergence properties than the WYL method. The above formula for yk−1 contains only gradient value information, but some new yk−1 formulas [31, 32] contain both gradient value and function value information. Yuan and Wei [32] proposed a new yk−1 formula as follows, where sk−1 = xk − xk−1.

Li and Qu [33] gave a modified PRP conjugate gradient method as follows

Under suitable conditions, Li and Qu [33] proved that this modified PRP conjugate gradient method has global convergence.

Motivated by the above discussions, we propose a new modified PRP conjugate gradient method as follows: (8) and (9), where u1 > 0 and u2 > 0, and the modified yk−1 of [32] is used.

It follows directly from the above formula that βk ≥ 0. Next, we present the new algorithm and its flow diagram (Fig 1) as follows.

Algorithm 2.1

Step 0: Given an initial point x1 ∈ ℜn and a tolerance ɛ1 > 0, set d1 = −∇f(x1) = −g1, k := 1.

Step 1: Calculate gk; if ‖gk‖ ≤ ɛ1, stop; otherwise, go to step 2.

Step 2: Calculate step length αk by the WWP line search.

Step 3: Set xk+1 = xk + αk dk, then calculate gk+1; if ‖gk+1‖ ≤ ɛ1, stop; otherwise, go to step 4.

Step 4: Calculate the scalar βk+1 by Eq (8) and calculate the search direction dk+1 by Eq (9).

Step 5: Set k: = k + 1; go to step 2.

Global convergence analysis

Some suitable assumptions are often used to analyze the global convergence of the conjugate gradient method. Here, we state them as follows.

Assumption 3.1

  1. The level set Ω = {x ∈ ℜn ∣ f(x) ≤ f(x1)} is bounded.
  2. In some neighborhood H of Ω, f is a continuously differentiable function, and the gradient function g of f is Lipschitz continuous; namely, there exists a constant L > 0 such that (10) ‖g(x) − g(y)‖ ≤ L‖x − y‖ for all x, y ∈ H.

By Assumption 3.1, it is easy to obtain that there exist two constants A > 0 and η1 > 0 satisfying (11)

Lemma 0.1 Let the sequence {dk} be generated by Eq (9); then, we have (12)

Proof When k = 1, we can obtain by Eq (9), so Eq (12) holds. When k ≥ 2, we can obtain

This completes the proof.

We know directly from the above lemma that our new method possesses the sufficient descent property.

Lemma 0.2 Let the sequence {xk} and {dk, gk} be generated by Algorithm 2.1, and suppose that Assumption 3.1 holds; then, we can obtain (13)

Proof By Eq (7) and the Cauchy-Schwarz inequality, we have

Combining the above inequality with Assumption 3.1(ii) yields a lower bound on the step length αk, and gkT dk < 0 is easy to know by Lemma 0.1. By combining this bound with Eq (6), we obtain

Summing up the above inequalities from k = 1 to k = ∞, we can deduce that

By Eq (6), Assumption 3.1 and lemma 0.1, we know that {fk} is bounded below, so we obtain

This finishes the proof.

Eq (13) is usually called the Zoutendijk condition [34], and it is very important for establishing global convergence.
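
Since the display of Eq (13) is not reproduced above, we note that the Zoutendijk condition of [34] is usually stated in the following standard form, which Eq (13) is assumed to take:

```latex
% Standard form of the Zoutendijk condition; Eq (13) is assumed to take this form.
\sum_{k \ge 1} \frac{\left( g_k^{T} d_k \right)^{2}}{\| d_k \|^{2}} \; < \; +\infty .
```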

Lemma 0.3 Let the sequences {βk} and {dk} be generated by Algorithm 2.1; then, we have (14).

Proof When dk = 0, we directly get gk = 0 from Eq (12). When dk ≠ 0, by the Cauchy-Schwarz inequality, we can easily obtain and

We can obtain

Using Eq (8), we have

Finally, when k ≥ 2 by Eq (9), we have

Let ; we obtain . This finishes the proof.

This lemma also shows that the search direction of our algorithm has the trust region property.

Theorem 0.1 Let the sequence {dk, gk, βk} and {xk} be generated by Algorithm 2.1. Suppose that Assumption 3.1 holds; then (15)

Proof By Eqs (12) and (13), we obtain (16)

By Eq (14), we have ; then, we obtain which together with Eq (16) can yield

From the above inequality, we can obtain . The proof is finished.

Numerical Results

When βk+1 and dk+1 are calculated by Eqs (4) and (3), respectively, in step 4 of Algorithm 2.1, we call the resulting method the PRP conjugate gradient algorithm. We test Algorithm 2.1 and the PRP conjugate gradient algorithm using some benchmark problems. The test environment is MATLAB 7.0 on a Windows 7 system. The initial parameters are given as follows. We use the following Himmelblau stopping rule.

If ∣f(xk)∣ ≤ ɛ2, let stop1 = ∣f(xk) − f(xk+1)∣; otherwise, let stop1 = ∣f(xk) − f(xk+1)∣/∣f(xk)∣. The test program is stopped if stop1 < ɛ3 or ‖g(xk)‖ < ɛ3 is satisfied, where ɛ2 = ɛ3 = 10−6. The test program is also stopped when the total number of iterations is greater than one thousand. The test results are given in Tables 1 and 2: x1 denotes the initial point, Dim denotes the dimension of the test function, NI denotes the total number of iterations, and NFG = NF + NG, where NF and NG denote the number of function evaluations and the number of gradient evaluations, respectively. The final column denotes the function value when the program is stopped. The test problems are defined as follows.

  1. Schwefel function:
  2. Langerman function:
  3. Schwefel's function:
  4. Sphere function:
  5. Griewangk function:
  6. Rosenbrock function:
  7. Ackley function:
  8. Rastrigin function:
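
To illustrate the Himmelblau stopping rule described before the list of test problems, the following Python sketch implements the test as it is reconstructed above; the function name and argument layout are our own.

```python
def himmelblau_stop(f_prev, f_curr, grad_norm, eps2=1e-6, eps3=1e-6):
    """Himmelblau-type stopping test as described in the text.
    f_prev, f_curr: f(x_k) and f(x_{k+1}); grad_norm: ||g(x_k)||."""
    if abs(f_prev) <= eps2:
        stop1 = abs(f_prev - f_curr)                 # absolute decrease
    else:
        stop1 = abs(f_prev - f_curr) / abs(f_prev)   # relative decrease
    return stop1 < eps3 or grad_norm < eps3
```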

Table 2. Test results for the PRP conjugate gradient algorithm.

https://doi.org/10.1371/journal.pone.0140071.t002

It is easy to see from Tables 1 and 2 that the two algorithms are effective for the above eight test problems. We use the performance profile tool of Dolan and Moré [35] to analyze the numerical performance of the two algorithms.

For the above eight test problems, Fig 2 shows the numerical performance of the two algorithms when the information of NI is considered, and Fig 3 shows the numerical performance of the two algorithms when the information of NFG is considered. From these two figures, it is easy to see that Algorithm 2.1 yields a better numerical performance than the PRP conjugate gradient algorithm on the whole. From Tables 1 and 2 and the two figures, we can conclude that Algorithm 2.1 is effective and competitive for solving unconstrained optimization problems.
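
The performance profiles of Dolan and Moré [35] used in Figs 2 and 3 compare, for every problem, each algorithm's cost (NI or NFG) with the best cost achieved by any algorithm; the profile value at τ is the fraction of problems solved within a factor τ of the best. A rough Python helper for computing such profiles (our own illustrative sketch, not the authors' script) is:

```python
import numpy as np

def performance_profile(costs):
    """costs: array of shape (n_problems, n_solvers) holding, e.g., NI or
    NFG values, with np.inf marking a failure.  Returns the grid of ratios
    tau and, for each solver, the fraction of problems solved within a
    factor tau of the best solver (Dolan and More [35])."""
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)          # best cost per problem
    ratios = costs / best                            # performance ratios
    taus = np.unique(ratios[np.isfinite(ratios)])    # evaluation grid
    profiles = np.array([(ratios[:, s, None] <= taus).mean(axis=0)
                         for s in range(costs.shape[1])])
    return taus, profiles
```

For instance, performance_profile(np.array([[10, 12], [30, 20], [np.inf, 15]])) reports that the second solver solves all three problems while the first fails on one.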

A new algorithm is given for solving nonlinear equations in the next section. The sufficient descent property and the trust region property of the new algorithm are proved in Section 6; moreover, we establish the global convergence of the new algorithm. In Section 7, the numerical results are presented.

New algorithm for nonlinear equations

We consider the system of nonlinear equations (17) q(x) = 0, x ∈ ℜn, where q : ℜn → ℜn is a continuously differentiable and monotone function and ∇q(x) denotes the Jacobian matrix of q(x); if ∇q(x) is symmetric, we call Eq (17) a system of symmetric nonlinear equations. Since q(x) is monotone, the following inequality holds: (q(x) − q(y))T(x − y) ≥ 0 for all x, y ∈ ℜn. If a norm function is defined by φ(x) = (1/2)‖q(x)‖2, we can define the unconstrained optimization problem (18) min{φ(x) ∣ x ∈ ℜn}.

We know directly that the problem Eq (17) is equivalent to the problem Eq (18).

The iterative formula Eq (2) is also usually used in many algorithms for solving problem Eq (17). Many algorithms ([36–41], etc.) have been proposed for solving special classes of nonlinear equations. We are particularly interested in solving large-scale nonlinear equations. By Eq (2), it is easy to see that the step size αk and the search direction dk are the two factors that matter most when dealing with large-scale problems. When dealing with large-scale nonlinear equations and unconstrained optimization problems, there are many popular methods ([38, 42–46], etc.) for computing dk, such as conjugate gradient methods, spectral gradient methods, and limited-memory quasi-Newton approaches. Some new line search methods [37, 47] have been proposed for calculating αk. Li and Li [48] proposed the following derivative-free line search method (19) where αk = max{γ, ργ, ρ2 γ, …}, ρ ∈ (0, 1), σ3 > 0, and γ > 0. This line search method is very effective for solving large-scale nonlinear monotone equations.
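
The step-size rule αk = max{γ, ργ, ρ2 γ, …} amounts to a backtracking loop that tries γ, ργ, ρ2 γ, … in turn until the acceptance test of Eq (19) is met. Because Eq (19) is not reproduced above, the test is passed in as an abstract predicate in the following illustrative Python sketch:

```python
def backtracking_step(accept, gamma=1.0, rho=0.5, max_backtracks=30):
    """Return the largest alpha in {gamma, rho*gamma, rho^2*gamma, ...}
    for which accept(alpha) holds; accept stands for the test of Eq (19)."""
    alpha = gamma
    for _ in range(max_backtracks):
        if accept(alpha):
            return alpha
        alpha *= rho
    return alpha                                 # fallback if no step is accepted
```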

Solodov and Svaiter [49] presented a hybrid projection-proximal point algorithm that can overcome some of the drawbacks that arise when the form Eq (18) is used for nonlinear equations. Yuan and Zhang [50] proposed a three-term PRP conjugate gradient algorithm that uses the projection technique introduced by Solodov and Svaiter [51] for monotone equations. The projection technique is very effective for solving nonlinear equations. It combines certain methods to compute the search direction dk with certain line search methods to calculate αk, where wk = xk + αk dk. For any x* that satisfies q(x*) = 0, since q(x) is monotone, we can obtain

Thus, the current iterate xk is strictly separated from the zeros of the system of equations Eq (17) by the hyperplane {x ∈ ℜn ∣ q(wk)T(x − wk) = 0}.

Then, the iterate xk+1 can be obtained by projecting xk onto the above hyperplane. The projection formula can be set as follows (20)
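
In the hybrid projection framework of [49, 51], projecting xk onto the hyperplane {x ∈ ℜn ∣ q(wk)T(x − wk) = 0} has a simple closed form, which we assume coincides with Eq (20); a Python sketch is:

```python
import numpy as np

def project_iterate(x_k, w_k, q_wk):
    """Project x_k onto the hyperplane {x : q(w_k)^T (x - w_k) = 0};
    assumed to coincide with the projection step Eq (20)."""
    step = q_wk.dot(x_k - w_k) / q_wk.dot(q_wk)
    return x_k - step * q_wk
```

With wk = xk + αk dk and q(wk) ≠ 0, the next iterate is obtained as xk+1 = project_iterate(xk, wk, q(wk)).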

Yuan and Zhang [50] presented a three-term Polak-Ribière-Polyak conjugate gradient algorithm in which the search direction dk is defined as follows, where yk−1 = qk − qk−1. The derivative-free line search method [48] and the projection technique are used in the algorithm of [50], which has proved to be very suitable for solving large-scale nonlinear equations. The most attractive property of the algorithm in [50] is the trust region property of dk.

Motivated by our new modified PRP conjugate gradient formula proposed in Section 2, we propose the following modified PRP conjugate gradient formula: (21) and (22), where u3 > 0 and u4 > 0. It is easy to see that βk ≥ 0. Motivated by the above observations and by [50], we present a new algorithm for solving problem Eq (17); it uses our modified PRP conjugate gradient formulas Eqs (21) and (22). Here, we list the new algorithm and its flow diagram (Fig 4) as follows.

Algorithm 5.1

Step 1: Given the initial point x1 ∈ ℜn, ɛ4 > 0, ρ ∈ (0, 1), σ3 > 0, γ > 0, u3 > 0, u4 > 0, and set k := 1.

Step 2: If ‖qk‖ ≤ ɛ4, stop; otherwise, go to step 3.

Step 3: Compute dk by Eq (22) and calculate αk by Eq (19).

Step 4: Compute the trial point wk = xk + αk dk.

Step 5: If ‖q(wk)‖ ≤ ɛ4, stop and set xk+1 = wk; otherwise, calculate xk+1 by Eq (20).

Step 6: Set k: = k + 1; go to step 2.

Convergence Analysis

When we analyze the global convergence of Algorithm 5.1, we require the following suitable assumptions.

Assumption 6.1

  1. The solution set of the problem Eq (17) is nonempty.
  2. q(x) is Lipschitz continuous; namely, there exists a constant E > 0 such that ‖q(x) − q(y)‖ ≤ E‖x − y‖ for all x, y ∈ ℜn.

By Assumption 6.1, it is easy to obtain that there exists a positive constant ζ that satisfies (23)

Lemma 0.4 Let the sequence {dk} be generated by Eq (22); then, we can obtain (24) and (25)

Proof As the proof is similar to those of Lemma 0.1 and Lemma 0.3 of this paper, we omit it here.

Similar to Lemma 3.1 of [50] and Theorem 2.1 of [51], it is easy to obtain the following lemma. We omit its proof and only state the result.

Lemma 0.5 Suppose that Assumption 6.1 holds and x* is a solution of problem Eq (17), i.e., q(x*) = 0. Let the sequence {xk} be generated by Algorithm 5.1; then, {xk} is a bounded sequence. Moreover, either {xk} is an infinite sequence, or {xk} is a finite sequence and its last iterate is a solution of problem Eq (17).

Lemma 0.6 Suppose that Assumption 6.1 holds; then, an iterate xk+1 = xk + αk dk will be generated by Algorithm 5.1 in a finite number of backtracking steps.

Proof We will prove this conclusion by contradiction. Suppose that the conclusion does not hold; then, there exists a positive constant ɛ5 that satisfies (26), and there exist some iteration indexes that do not satisfy the condition Eq (19). Then we can obtain

By Assumption 6.1 (b) and Eq (24), we find

By Eqs (23) and (25), we can obtain

Thus, we obtain which shows that is bounded below. This contradicts the definition of ; so, the lemma holds.

Similar to Theorem 3.1 of [50], we list the following theorem but omit its proof.

Theorem 0.2 Let the sequence {xk+1, qk+1} and {αk, dk} be generated by Algorithm 5.1. Suppose that Assumption 6.1 holds; then, we have (27)

Numerical results

When the following dk formula of the famous PRP conjugate gradient method [8, 9] is used to compute dk in step 3 of Algorithm 5.1, we call the resulting method the PRP algorithm. We test Algorithm 5.1 and the PRP algorithm on some problems in this section. The test environment is MATLAB 7.0 on a Windows 7 system. The initial parameters are given as follows.

When the number of iterations is greater than or equal to 1500, the test program is also stopped. The test results are given in Tables 3 and 4. As we know, when the line search cannot guarantee that dk satisfies the descent condition, an uphill search direction may be produced, and the line search method may fail in this case. To prevent this situation, when the number of searches in the inner cycle of our program is greater than or equal to fifteen, the current αk is accepted. NG and NI stand for the number of gradient evaluations and the number of iterations, respectively. Dim denotes the dimension of the test function, and cputime denotes the CPU time in seconds. GF denotes the final function norm when the program terminates. The test functions all have the form q(x) = (f1(x), f2(x), …, fn(x))T, where the concrete definitions of fi(x) are given as follows.

  1. Function 1. Exponential function 2 Initial guess:
  2. Function 2. Trigonometric function Initial guess:
  3. Function 3. Logarithmic function Initial guess: x0 = (1,1,⋯,1)T.
  4. Function 4. Broyden Tridiagonal function [[52], pp. 471–472] Initial guess: x0 = (−1,−1,⋯,−1)T.
  5. Function 5. Strictly convex function 1 [[44], p. 29]
    q(x) is the gradient of Initial guess:
  6. Function 6. Variable dimensioned function Initial guess:
  7. Function 7. Discrete boundary value problem [53]. Initial guess: x0 = (h(h−1), h(2h−1),⋯,h(nh−1))T.
  8. Function 8. Troesch problem [54] Initial guess: x0 = (0, 0, ⋯, 0)T.

By Tables 3 and 4, we see that Algorithm 5.1 and the PRP algorithm are effective for solving the above eight problems.

We use the performance profile tool of Dolan and Moré [35] to analyze the numerical performance of the two algorithms when NI, NG, and cputime are considered, and we generate three figures accordingly.

Fig 5 shows that the numerical performance of Algorithm 5.1 is slightly better than that of the PRP algorithm when NI is considered. From Figs 6 and 7, it is easy to see that the numerical performance of Algorithm 5.1 is better than that of the PRP algorithm, because the PRP algorithm needs a larger horizontal-axis value before it solves all of the problems.

Fig 7. Performance profiles of the two algorithms (cputime).

https://doi.org/10.1371/journal.pone.0140071.g007

From the above two tables and three figures, we see that Algorithm 5.1 is effective and competitive for solving large-scale nonlinear equations.

Conclusion

(i) The first new algorithm, based on the first modified PRP conjugate gradient method, is presented in Sections 1–4. The βk formula of the method includes both gradient value and function value information. The global convergence of the algorithm is established under some suitable conditions. The trust region property and the sufficient descent property of the method are proved without the use of any line search method. For some test functions, the numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems.

(ii) The second new algorithm, based on the second modified PRP conjugate gradient method, is presented in Sections 5–7. The new algorithm has global convergence under suitable conditions. The trust region property and the sufficient descent property of the method are proved without the use of any line search method. Numerical results for some test functions are reported; they show that the second algorithm is very effective for solving large-scale nonlinear equations.

Acknowledgments

This work is supported by the China NSF (Grant Nos. 11261006 and 11161003), NSFC No. 61232016, NSFC No. U1405254, the Guangxi Science Fund for Distinguished Young Scholars (No. 2015GXNSFGA139001), and the PAPD of Jiangsu advantageous disciplines. The authors wish to thank the editor and the referees for their useful suggestions and comments, which greatly improved this paper.

Author Contributions

Conceived and designed the experiments: GY XD WL. Performed the experiments: GY XD WL. Analyzed the data: GY XD WL XW ZC ZS. Contributed reagents/materials/analysis tools: GY XD WL XW ZC ZS. Wrote the paper: GY XD WL.

References

  1. Gu B, Sheng V, Feasibility and finite convergence analysis for accurate on-line v-support vector learning, IEEE Transactions on Neural Networks and Learning Systems. 24 (2013) 1304–1315.
  2. Li J, Li X, Yang B, Sun X, Segmentation-based Image Copy-move Forgery Detection Scheme, IEEE Transactions on Information Forensics and Security. 10 (2015) 507–518.
  3. Wen X, Shao L, Fang W, Xue Y, Efficient Feature Selection and Classification for Vehicle Detection, IEEE Transactions on Circuits and Systems for Video Technology. 2015.
  4. Zhang H, Wu J, Nguyen TM, Sun M, Synthetic Aperture Radar Image Segmentation by Modified Student's t-Mixture Model, IEEE Transactions on Geoscience and Remote Sensing. 52 (2014) 4391–4403.
  5. Fu Z, Achieving Efficient Cloud Search Services: Multi-keyword Ranked Search over Encrypted Cloud Data Supporting Parallel Computing, IEICE Transactions on Communications. E98-B (2015) 190–200.
  6. Dai Y, Yuan Y, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim. 10 (2000) 177–182.
  7. Fletcher R, Reeves C, Function minimization by conjugate gradients, Comput. J. 7 (1964) 149–154.
  8. Polak E, Ribière G, Note sur la convergence de méthodes de directions conjuguées, Rev. Fran. Inf. Rech. Opérat. 3 (1969) 35–43.
  9. Polyak BT, The conjugate gradient method in extreme problems, USSR Comput. Math. Math. Phys. 9 (1969) 94–112.
  10. Hestenes MR, Stiefel EL, Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand. Sect. B. 49 (1952) 409–432.
  11. Liu Y, Storey C, Efficient generalized conjugate gradient algorithms, part 1: theory, J. Comput. Appl. Math. 69 (1992) 17–41.
  12. Fletcher R, Practical Methods of Optimization, Vol. I: Unconstrained Optimization, 2nd edn., Wiley, New York, 1997.
  13. Powell MJD, Nonconvex minimization calculations and the conjugate gradient method, Lecture Notes in Mathematics, vol. 1066, Springer, Berlin, 1984, pp. 122–141.
  14. Gilbert JC, Nocedal J, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim. 2 (1992) 21–42.
  15. Dai Y, Analyses of conjugate gradient methods, Ph.D. thesis, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences (in Chinese), 1997.
  16. Ahmed T, Storey D, Efficient hybrid conjugate gradient techniques, Journal of Optimization Theory and Applications. 64 (1990) 379–394.
  17. Al-Baali A, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA Journal of Numerical Analysis. 5 (1985) 121–124.
  18. Hu YF, Storey C, Global convergence result for conjugate gradient methods, Journal of Optimization Theory and Applications. 71 (1991) 399–405.
  19. Yuan G, Wei Z, Zhao Q, A modified Polak-Ribière-Polyak conjugate gradient algorithm for large-scale optimization problems, IIE Transactions. 46 (2014) 397–413.
  20. Yuan G, Zhang M, A modified Hestenes-Stiefel conjugate gradient algorithm for large-scale optimization, Numerical Functional Analysis and Optimization. 34 (2013) 914–937.
  21. Yuan G, Lu X, Wei Z, A conjugate gradient method with descent direction for unconstrained optimization, Journal of Computational and Applied Mathematics. 233 (2009) 519–530.
  22. Yuan G, Lu X, A modified PRP conjugate gradient method, Annals of Operations Research. 166 (2009) 73–90.
  23. Li X, Zhao X, A hybrid conjugate gradient method for optimization problems, Natural Science. 3 (2011) 85–90.
  24. Yuan G, Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems, Optimization Letters. 3 (2009) 11–21.
  25. Huang H, Lin S, A modified Wei-Yao-Liu conjugate gradient method for unconstrained optimization, Applied Mathematics and Computation. 231 (2014) 179–186.
  26. Yu G, Zhao Y, Wei Z, A descent nonlinear conjugate gradient method for large-scale unconstrained optimization, Applied Mathematics and Computation. 187 (2) (2007) 636–643.
  27. Yao S, Wei Z, Huang H, A note about WYL's conjugate gradient method and its applications, Applied Mathematics and Computation. 191 (2) (2007) 381–388.
  28. Hager WW, Zhang H, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM Journal on Optimization. 16 (1) (2005) 170–192.
  29. Wei Z, Yao S, Liu L, The convergence properties of some new conjugate gradient methods, Applied Mathematics and Computation. 183 (2006) 1341–1350.
  30. Zhang L, An improved Wei-Yao-Liu nonlinear conjugate gradient method for optimization computation, Applied Mathematics and Computation. 215 (6) (2009) 2269–2274.
  31. Wei Z, Yu G, Yuan G, Lian Z, The superlinear convergence of a modified BFGS-type method for unconstrained optimization, Comput. Optim. Appl. 29 (2004) 315–332.
  32. Yuan G, Wei Z, Convergence analysis of a modified BFGS method on convex minimizations, Computational Optimization and Applications. 47 (2) (2010) 237–255.
  33. Li M, Qu A, Some sufficient descent conjugate gradient methods and their global convergence, Computational and Applied Mathematics. 33 (2) (2014) 333–347.
  34. Zoutendijk G, Nonlinear programming, computational methods, in: Abadie J. (ed.), Integer and Nonlinear Programming, North-Holland, Amsterdam, 1970, pp. 37–86.
  35. Dolan ED, Moré JJ, Benchmarking optimization software with performance profiles, Math. Program. 91 (2002) 201–213.
  36. Buhmiler S, Krejić N, Luzanin Z, Practical quasi-Newton algorithms for singular nonlinear systems, Numer. Algorithms. 55 (2010) 481–502.
  37. Gu G, Li D, Qi L, Zhou S, Descent directions of quasi-Newton methods for symmetric nonlinear equations, SIAM J. Numer. Anal. 40 (2002) 1763–1774.
  38. La Cruz W, Martínez JM, Raydan M, Spectral residual method without gradient information for solving large-scale nonlinear systems of equations, Math. Comp. 75 (2006) 1429–1448.
  39. Kanzow C, Yamashita N, Fukushima M, Levenberg-Marquardt methods for constrained nonlinear equations with strong local convergence properties, J. Comput. Appl. Math. 172 (2004) 375–397.
  40. Zhang J, Wang Y, A new trust region method for nonlinear equations, Math. Methods Oper. Res. 58 (2003) 283–298.
  41. Grippo L, Sciandrone M, Nonmonotone derivative-free methods for nonlinear equations, Comput. Optim. Appl. 37 (2007) 297–328.
  42. Cheng W, A PRP type method for systems of monotone equations, Math. Comput. Modelling. 50 (2009) 15–20.
  43. La Cruz W, Raydan M, Nonmonotone spectral methods for large-scale nonlinear systems, Optim. Methods Softw. 18 (2003) 583–599.
  44. Raydan M, The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem, SIAM J. Optim. 7 (1997) 26–33.
  45. Yu G, Guan L, Chen W, Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization, Optim. Methods Softw. 23 (2) (2008) 275–293.
  46. Yuan G, Wei Z, Lu S, Limited memory BFGS method with backtracking for symmetric nonlinear equations, Math. Comput. Modelling. 54 (2011) 367–377.
  47. Li D, Fukushima M, A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations, SIAM J. Numer. Anal. 37 (1999) 152–172.
  48. Li Q, Li D, A class of derivative-free methods for large-scale nonlinear monotone equations, IMA J. Numer. Anal. 31 (2011) 1625–1635.
  49. Solodov MV, Svaiter BF, A hybrid projection-proximal point algorithm, J. Convex Anal. 6 (1999) 59–70.
  50. Yuan G, Zhang M, A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations, Journal of Computational and Applied Mathematics. 286 (2015) 186–195.
  51. Solodov MV, Svaiter BF, A globally convergent inexact Newton method for systems of monotone equations, in: Fukushima M, Qi L (Eds.), Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Kluwer Academic Publishers, 1998, pp. 355–369.
  52. Gomez-Ruggiero M, Martínez JM, Moretti A, Comparing algorithms for solving sparse nonlinear systems of equations, SIAM J. Sci. Comput. 23 (1992) 459–483.
  53. Moré JJ, Garbow BS, Hillström K, Testing unconstrained optimization software, ACM Trans. Math. Softw. 7 (1981) 17–41.
  54. Roberts SM, Shipman JJ, On the closed form solution of Troesch's problem, J. Comput. Phys. 21 (1976) 291–304.