
The global convergence of spectral RMIL conjugate gradient method for unconstrained optimization with applications to robotic model and image recovery

  • Nasiru Salihu ,

    Contributed equally to this work with: Nasiru Salihu, Poom Kumam

    Roles Conceptualization, Formal analysis, Methodology, Software, Writing – original draft

    Affiliations Center of Excellence in Theoretical and Computational Science, Fixed Point Research Laboratory, Fixed Point Theory and Applications Research Group, Faculty of Science, King Mongkut’s University of Technology Thonburi, Bangkok, Thailand, Department of Mathematics, Faculty of Sciences, Modibbo Adama University, Yola, Nigeria, Gombe State University Mathematics for Innovative Research Group, Gombe State University, Gombe, Nigeria

  • Poom Kumam ,

    Contributed equally to this work with: Nasiru Salihu, Poom Kumam

    Roles Project administration, Supervision

    poom.kum@kmutt.ac.th

    Affiliations Center of Excellence in Theoretical and Computational Science, Fixed Point Research Laboratory, Fixed Point Theory and Applications Research Group, Faculty of Science, King Mongkut’s University of Technology Thonburi, Bangkok, Thailand, KMUTT-Fixed Point Research Laboratory, Science Laboratory Building, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi, Bangkok, Thailand, Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan

  • Aliyu Muhammed Awwal ,

    Roles Formal analysis, Methodology, Software, Writing – review & editing

    ‡AMA, IMS and TS also contributed equally to this work.

    Affiliations Center of Excellence in Theoretical and Computational Science, Fixed Point Research Laboratory, Fixed Point Theory and Applications Research Group, Faculty of Science, King Mongkut’s University of Technology Thonburi, Bangkok, Thailand, Gombe State University Mathematics for Innovative Research Group, Gombe State University, Gombe, Nigeria, Department of Mathematics, Faculty of Science, Gombe State University, Gombe, Nigeria

  • Ibrahim Mohammed Sulaiman ,

    Roles Methodology, Software, Validation

    ‡AMA, IMS and TS also contributed equally to this work.

    Affiliations School of Quantitative Sciences, Universiti Utara Malaysia, UUM Sintok, Kedah, Malaysia, Institute of Strategic Industrial Decision Modelling, Universiti Utara Malaysia, Sintok, Kedah, Malaysia

  • Thidaporn Seangwattana

    Roles Investigation, Methodology, Project administration

    ‡AMA, IMS and TS also contributed equally to this work.

    Affiliation Faculty of Science Energy and Environment, King Mongkut’s University of Technology, North Bangkok, Rayong Campus, Rayong, Thailand

Abstract

In 2012, Rivaie et al. introduced the RMIL conjugate gradient (CG) method, which is globally convergent under the exact line search. Later, Dai (2016) pointed out an abnormality in the convergence result and, as a remedy, imposed a restriction on the RMIL CG parameter. In this paper, we suggest an efficient spectral RMIL CG method. The remarkable feature of this method is that its convergence result is free from the additional condition usually imposed on RMIL. Moreover, the search direction is sufficiently descent independently of the line search technique. Numerical experiments on a set of benchmark problems indicate that the method is promising and efficient. Furthermore, the efficiency of the proposed method is demonstrated on applications arising from an arm robotic model and image restoration problems.

Introduction

In unconstrained optimization, the large-scale problem of the following form is usually considered:

min f(x), x ∈ ℝⁿ, (1)

where the function f: ℝⁿ → ℝ is smooth, meaning its gradient g(x) = ∇f(x) is obtainable. Problems of the form of Eq (1) have attracted much attention in the optimization community because of their wide variety of applications in different fields [1], including portfolio choice [2, 3], M-tensor systems [4], image restoration [5–8], signal recovery [9–11] and robotic motion [12–14], among others. Iterative methods that make use of the gradient of f are usually preferred for solving these problems. An example of such methods is the conjugate gradient (CG) method, owing to its modest memory requirement and nice theoretical properties. The method generates a sequence {xk} by taking x0 as an initial guess for the solution and using the scheme

xk+1 = xk + αkdk, k = 0, 1, 2, …. (2)

The scalar αk > 0, called the step-size, is calculated so that it approximately meets the exact minimization condition

f(xk + αkdk) = min α>0 f(xk + αdk). (3)

A practical technique for obtaining the step-size is the Wolfe rule [15]. With this technique, αk is computed at the k-th iteration in such a way that it satisfies certain defined rules [16]. The commonly used technique, referred to as the standard Wolfe line search, consists of the following inequalities:

f(xk + αkdk) ≤ f(xk) + δαkgkᵀdk, (4)

g(xk + αkdk)ᵀdk ≥ σgkᵀdk. (5)

Replacing Eq (5) with the formula

|g(xk + αkdk)ᵀdk| ≤ σ|gkᵀdk| (6)

gives the rule called the strong Wolfe condition, where 0 < δ < σ < 1. Moreover, this step-size is calculated along the spectral search direction dk, given by

d0 = −g0, dk = −θkgk + βkdk−1, k ≥ 1, (7)

in which the spectral and CG updating parameters are represented by the scalars θk and βk, respectively. The parameter βk is crucial in constructing and determining the choice of a CG method. Some early formulas for this parameter were proposed by Fletcher and Reeves (FR) [17] and Polak, Ribière and Polyak (PRP) [18, 19], with the following respective formulas:

βkFR = ‖gk‖²/‖gk−1‖², βkPRP = gkᵀyk−1/‖gk−1‖²,

with yk−1 = gk − gk−1, where ‖⋅‖ represents the Euclidean (ℓ2) norm.
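For concreteness, the FR and PRP coefficients above can be sketched in a few lines. This is an illustrative NumPy sketch, not code from the paper:

```python
import numpy as np

def beta_fr(g_new, g_old):
    # Fletcher-Reeves: ratio of squared gradient norms.
    return np.dot(g_new, g_new) / np.dot(g_old, g_old)

def beta_prp(g_new, g_old):
    # Polak-Ribiere-Polyak: uses the gradient change y_{k-1} = g_k - g_{k-1}.
    y = g_new - g_old
    return np.dot(g_new, y) / np.dot(g_old, g_old)
```

When consecutive gradients nearly coincide, yk−1 ≈ 0 and βPRP ≈ 0, which is the automatic restart behavior discussed in the next paragraph.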

Others include Hestenes and Stiefel (HS) [20], Liu and Storey (LS) [21], Dai and Yuan (DY) [22], and Conjugate Descent (CD) [23]. It is observed in theory that, if the underlying function happens to be a strongly convex quadratic and αk is determined by Eq (3), then the aforementioned schemes are equivalent. However, for the non-quadratic functions that mostly arise in practice, the behavior of the methods differs. For instance, when the difference between two consecutive gradients tends to zero, the PRP, HS and LS methods automatically perform a restart. This feature makes them numerically effective. However, they are theoretically unstable and require some variant of the Wolfe-type line search rule to converge. On the other hand, the FR, DY and CD methods have stable convergence properties but modest numerical results [24].

The convergence of the FR scheme under the exact line search was initially discussed by Zoutendijk [25]; Al-Baali [26], Touati-Ahmed and Storey [40], and Gilbert and Nocedal [27] further analyzed the theoretical result under the strong Wolfe conditions. Similar results were also recorded for the CD and DY methods. Furthermore, a survey of the literature reveals that the PRP method is one of the most reliable CG methods. However, Powell [28] showed that the method possesses an unstable global convergence property. This led to several modifications of the method by different authors; see, for example, [27, 29–32] among others.

Consider the modified version of the PRP method proposed by Rivaie et al. [33]:

βkRMIL = gkᵀyk−1/‖dk−1‖². (8)

The RMIL method has received considerable attention recently, but unfortunately its convergence result is established under the exact line search, which is sometimes difficult to implement. Dai [34] pointed out that the RMIL method may not generate descent directions for general functions, i.e. the sufficient descent condition

gkᵀdk ≤ −η‖gk‖², η > 0, (9)

may fail to hold. Subsequently, based on the condition 0 ≤ gkᵀgk−1 ≤ ‖gk‖² imposed on the RMIL method by Dai [34], Yousif [35] discussed the RMIL convergence result using the inexact line search Eqs (4) and (6). However, Dai's condition may not always hold for general functions.

Another way to ensure that a CG method produces descent search directions is to scale the first term of the search direction, −gk, by a positive parameter, usually referred to as the spectral parameter. Such modifications are known as spectral CG methods. For instance, Raydan [36] introduced a spectral gradient method with good convergence properties. Spectral FR CG methods were also suggested in [37, 38]. The theoretical feature of these methods is that, regardless of which line search technique is employed, the direction dk satisfies Eq (9). The method in [37] reduces to the classical FR parameter provided Eq (3) is satisfied, while that of [38] does not explicitly specify the line search rule used. Unfortunately, their numerical results are not promising even with inexact line minimization. More information on hybrid and spectral CG methods can be found in [27, 38–47].

Inspired by the contributions discussed so far, a spectral RMIL version is proposed in this paper. Interestingly, it is worth noting that one of the major advantages of the proposed spectral RMIL method is that it possesses the nice restart feature of the PRP method as well as the strong convergence property of the FR method. This is evident from the theoretical analysis presented in Section 3. In addition, the proposed spectral RMIL method does not require the restriction imposed on the RMIL method by Dai [34]. Combining these features, one can see that the spectral RMIL method improves on the earlier restricted versions of the RMIL method.

The key contributions of this paper are enumerated as follows:

  1. An efficient CG parameter is derived from the spectral search direction.
  2. The search direction is descent at every iteration, independent of any line search consideration.
  3. Several large-scale benchmark unconstrained optimization problems from the literature are used to demonstrate the efficiency of the suggested scheme.
  4. The algorithm is shown to converge globally under the Lipschitz continuity condition.
  5. Finally, the arm robotic model and image restoration problems are solved using the new spectral method.

The remainder of the paper is structured as follows. The new spectral CG method and its algorithm are presented next. In Section 3, the global convergence of the new scheme under the standard Wolfe line search is analysed. The numerical experiments and comparisons are discussed in Section 4, while applications of the new algorithm to arm motion control and image restoration problems are given in Sections 5 and 6, respectively. Finally, a brief conclusion is given in the last section.

Algorithm and motivation

Consider the sequence {xk} generated by the spectral CG method Eqs (2) and (7). Pre-multiplying Eq (7) by gkᵀ and using Eq (8) gives

gkᵀdk = −θk‖gk‖² + (gkᵀyk−1)(gkᵀdk−1)/‖dk−1‖². (10)

For Eq (10) to satisfy Eq (9), it is required that θk ≥ η + (gkᵀyk−1)(gkᵀdk−1)/(‖gk‖²‖dk−1‖²), where η > 0. Without loss of generality, this holds with equality, that is,

θk = η + (gkᵀyk−1)(gkᵀdk−1)/(‖gk‖²‖dk−1‖²). (11)

Applying the Cauchy-Schwarz inequality to Eq (11), we have

θk ≤ η + ‖yk−1‖/‖dk−1‖. (12)

Thus, based on the above selection of the parameter θk, the search direction dk is always descent independently of any line search rule, which means that inequality Eq (9) is fulfilled. Therefore, motivated by this nice property, in this paper we suggest the following spectral parameter:

θk = η + ‖yk−1‖/‖dk−1‖. (13)

Remark 1. The choice of the parameter θk in this form allows us to remove the computational burden and establish the global convergence of the proposed method without the condition imposed on the earlier versions of βkRMIL.
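As a numerical check of the derivation above, the sketch below builds the spectral RMIL direction using Eq (8) and a spectral parameter of the form θk = η + ‖yk−1‖/‖dk−1‖ suggested by the Cauchy-Schwarz bound in Eq (12). The reconstruction of the parameter formula is an assumption made for illustration; the point is that this choice guarantees the descent inequality gkᵀdk ≤ −η‖gk‖² for arbitrary vectors:

```python
import numpy as np

def spectral_rmil_direction(g_new, g_old, d_old, eta=0.01):
    """Spectral RMIL direction d_k = -theta_k g_k + beta_k d_{k-1} (Eq (7)),
    with beta_k from Eq (8) and theta_k = eta + ||y_{k-1}|| / ||d_{k-1}||,
    the choice suggested by the Cauchy-Schwarz bound in Eq (12)."""
    y = g_new - g_old
    beta = np.dot(g_new, y) / np.dot(d_old, d_old)          # Eq (8)
    theta = eta + np.linalg.norm(y) / np.linalg.norm(d_old)
    return -theta * g_new + beta * d_old
```

Because βk gkᵀdk−1 ≤ ‖gk‖²‖yk−1‖/‖dk−1‖ by Cauchy-Schwarz, the direction satisfies gkᵀdk ≤ −η‖gk‖² regardless of how gk, gk−1 and dk−1 were produced.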

In the following, the implementation procedure of the proposed method, NSRMIL, is described.

Algorithm 1: NSRMIL

Step 1: Choose an initial point x0 ∈ ℝⁿ and a tolerance ϵ > 0. Compute d0 = −g0 and set k = 0.

Step 2: If ‖gk‖ ≤ ϵ, then stop. Otherwise, go to Step 3.

Step 3: Compute dk by Eq (7), where βk and θk are determined by Eqs (8) and (13), respectively.

Step 4: Compute αk > 0 satisfying Eqs (4) and (6).

Step 5: Set xk+1 = xk + αkdk.

Step 6: Set k = k + 1 and return to Step 2.
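A minimal Python sketch of Algorithm 1 follows. It is illustrative only: Step 4's strong Wolfe search is replaced by simple Armijo backtracking (the descent property of dk holds for any line search), the spectral parameter uses the form θk = η + ‖yk−1‖/‖dk−1‖ reconstructed from the derivation above (an assumption), and the quadratic test function is a made-up example rather than one of the paper's benchmark problems:

```python
import numpy as np

def nsrmil(f, grad, x0, eta=0.01, eps=1e-6, max_iter=2000):
    """Sketch of Algorithm 1 (NSRMIL) with Armijo backtracking in place of
    the strong Wolfe search used in the paper."""
    x, g = x0.astype(float), grad(x0)
    d = -g                                        # Step 1: d_0 = -g_0
    for k in range(max_iter):
        if np.linalg.norm(g) <= eps:              # Step 2: stopping test
            break
        alpha = 1.0                               # Step 4 (simplified): Armijo backtracking
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * np.dot(g, d):
            alpha *= 0.5
        x_new = x + alpha * d                     # Step 5
        g_new = grad(x_new)
        y = g_new - g
        beta = np.dot(g_new, y) / np.dot(d, d)               # Eq (8)
        theta = eta + np.linalg.norm(y) / np.linalg.norm(d)  # Eq (13), reconstructed
        d = -theta * g_new + beta * d             # Step 3 / Eq (7)
        x, g = x_new, g_new                       # Step 6
    return x

# Illustrative strictly convex quadratic: f(x) = 0.5 x^T A x, minimizer at 0.
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = nsrmil(f, grad, np.array([1.0, 1.0, 1.0]))
```

Because the spectral parameter guarantees descent, the backtracking loop always terminates; with the strong Wolfe search of the paper, Step 4 would additionally enforce the curvature condition of Eq (6).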

To analyse the convergence characteristics of Algorithm 1, the following assumptions are useful.

Assumption 2

1. The function f(x) is bounded below on the level set Ω = {x ∈ ℝⁿ : f(x) ≤ f(x0)}, where x0 is the initial point.

2. In some neighborhood Γ of Ω, the function f is smooth and its gradient is Lipschitz continuous; that is, there exists a constant L > 0 such that

‖g(x) − g(y)‖ ≤ L‖x − y‖, for all x, y ∈ Γ. (14)

Under these assumptions, it is easy to see that there exists a constant γ > 0 such that

‖g(x)‖ ≤ γ, for all x ∈ Ω. (15)

Convergence analysis

We demonstrate the convergence of Algorithm 1 in this section. The following lemma, taken from [25], is an important part of the analysis.

Lemma 3. Suppose that {xk} and {dk} are sequences generated by Algorithm 1, where the search direction dk is descent and αk fulfils the Wolfe conditions. Then

Σ k≥0 (gkᵀdk)²/‖dk‖² < +∞. (16)

Theorem 4. If Assumption 2 holds and the sequence of iterates {xk} is produced by Algorithm 1, then

lim inf k→∞ ‖gk‖ = 0. (17)

Proof. If Eq (17) does not hold, then there exists a constant m > 0 such that

‖gk‖ ≥ m, for all k ≥ 0. (18)

Claim: there exists a constant P > 0 such that the search direction specified by Eq (7) is bounded, i.e.

‖dk‖ ≤ P, for all k ≥ 0. (19)

To establish the claim in Eq (19), which is crucial in showing the convergence of the NSRMIL method, we use Eqs (8), (12), (13), (14), (15) and (18). Taking the norm of the search direction given by Eq (7) and applying the Cauchy-Schwarz inequality together with Assumption 2 yields a bound of the form (20)–(21); setting P equal to this bound establishes the claim. Squaring both sides of the descent condition Eq (9) gives

(gkᵀdk)² ≥ η²‖gk‖⁴. (22)

Dividing through by ‖dk‖² and summing, using Eqs (18) and (19), gives

Σ k≥0 (gkᵀdk)²/‖dk‖² ≥ Σ k≥0 η²m⁴/P² = +∞, (23)

which contradicts Eq (16). Thus, Eq (17) must hold.

Numerical results

Here, we give the numerical output of Algorithm 1 on a set of test functions considered by Andrei [24] and Jamil and Yang [48]. The problems are presented in Table 1, and the proposed method is compared with the following methods:

  • SRMIL+ [2]: Algorithm 1 with the βk and θk of the spectral RMIL+ method in [2].
  • SPRP [49]: Algorithm 1 with the βk and θk of the spectral PRP method in [49].
  • RMIL+ [35]: Algorithm 1 with the RMIL+ parameter βk of [35] and θk = 1.
  • SCG [38]: Algorithm 1 with the βk and θk of the spectral CG method in [38].

To analyse the performance of these coefficients, we consider forty-nine (49) test problems with different dimensions. The codes are written in MATLAB 9.12 (R2022a) and run on a personal computer with an Intel(R) Core i7–1195G7 CPU at 2.90 GHz and 16 GB of RAM. In running the code, we set the parameters δ = 10−4 and σ = 10−3 in the strong Wolfe line search conditions for all the algorithms. The algorithms are set to stop when ‖gk‖ ≤ 10−6 or when 2000 iterations are reached. To determine a proper value for the modulating parameter η in the NSRMIL scheme, we tested its numerical behavior for the selected values {0.0001, 0.01, 0.5, 1, 10, 20, 50, 100}; the best result is obtained with η = 0.01.

Furthermore, the numerical results are compared using the performance profile introduced by Dolan and Moré [50]. The detailed numerical outcomes of the experiments are tabulated in S1 Table, and the results are presented graphically in Figs 1–3, where P(τ) denotes the fraction of the test functions a method solves within a factor τ of the best performance; a method with a high ratio P(τ) is regarded as the best. This ratio measures the performance of the algorithms based on the number of iterations (NI), the number of function evaluations (FE) and the CPU time required to solve the problems (CPU). The right-hand part of the curves indicates the robustness of a method. Figs 1–3 show that NSRMIL is efficient and preferable to the other four methods: the NSRMIL method solves 88% of the test problems and wins, i.e. is the best, in about 68% of the problems, followed by the SRMIL+, RMIL+, SPRP and SCG methods, respectively.
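The Dolan-Moré profile P(τ) used in Figs 1–3 can be computed as in the following sketch; the solver costs below are made-up illustrative numbers, not the paper's results:

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More performance profile.

    costs: (n_problems, n_solvers) array of a metric (NI, FE or CPU);
    np.inf marks a failure. Returns P with P[i, s] = fraction of problems
    solver s solves within a factor taus[i] of the best solver."""
    best = costs.min(axis=1, keepdims=True)   # best cost on each problem
    ratios = costs / best                     # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau)
                      for s in range(costs.shape[1])] for tau in taus])

# Hypothetical iteration counts: 4 problems, 2 solvers (inf = failed run).
costs = np.array([[10., 20.],
                  [8.,  8.],
                  [30., np.inf],
                  [50., 25.]])
P = performance_profile(costs, taus=[1.0, 2.0])
```

The value P(1) is the fraction of problems on which a solver is the outright winner, and P(τ) for large τ approaches the fraction of problems the solver manages to solve at all, which is why the right-hand part of the curve reflects robustness.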

Fig 1. Number of iterations performance profiles of the methods.

https://doi.org/10.1371/journal.pone.0281250.g001

Fig 3. Number of function evaluation performance profiles of the methods.

https://doi.org/10.1371/journal.pone.0281250.g003

Application of NSRMIL on real-time 3DOF robotic model

In this section, we illustrate the application of Algorithm 1 to the real-time motion control model of a three-joint planar robotic manipulator, as investigated in [51]. The code was implemented in MATLAB R2022a and run on a PC with an 11th Gen. Intel(R) Core i7–1195G7 CPU at 2.90 GHz and 16 GB of RAM. Briefly, the position-level discrete-time kinematics model equation is written as Eq (24), where h(⋅) is the kinematics mapping, which relates the joint angles to the orientation and position of any part of the robot; ηi (for i = 1, 2, 3) denotes the length of the i-th rod, and the joint angles determine the end-effector position. Given the desired travel vector at each time instant tk ∈ [0, tf], in the robotic model we usually minimize the nonlinear least-squares error (25) between the desired travel vector and the actual path δk. The end-effector is controlled to track a Lissajous curve [52]. The joint angles are initialized at time instant t = 0, and the task period [0, tf] is divided equally into 200 units. The results of the motion control experiment with Algorithm 1 are plotted in Figs 4–7, which show that the NSRMIL method synthesizes the robot trajectories and passes through the desired path, as shown in Figs 6 and 7, with residual errors of order 10−6, as observed in Figs 4 and 5.
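The kinematics mapping h(⋅) of Eq (24) for a three-joint planar arm, together with a least-squares residual in the spirit of Eq (25), can be sketched as follows. The unit rod lengths and the `target` vector are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def h(beta, lengths=(1.0, 1.0, 1.0)):
    """Forward kinematics of a 3-joint planar arm (Eq (24)):
    end-effector position for joint angles beta; each joint's angle is
    measured relative to the previous link, so absolute angles accumulate."""
    angles = np.cumsum(beta)
    x = sum(l * np.cos(a) for l, a in zip(lengths, angles))
    y = sum(l * np.sin(a) for l, a in zip(lengths, angles))
    return np.array([x, y])

def residual(beta, target):
    # Nonlinear least-squares objective in the spirit of Eq (25):
    # 0.5 * || h(beta) - target ||^2 for a desired waypoint `target`.
    r = h(beta) - target
    return 0.5 * np.dot(r, r)
```

In the tracking experiment, each waypoint of the Lissajous curve plays the role of `target`, and Algorithm 1 minimizes the residual over the joint angles at each of the 200 time units.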

Fig 6. End effector of NSRMIL trajectory and desired path.

https://doi.org/10.1371/journal.pone.0281250.g006

Application of NSRMIL for image recovery

The conjugate gradient (CG) method is widely used for solving large-scale smooth and non-smooth convex optimization problems owing to its efficiency and low memory requirement. A typical example of the smooth problem is the image restoration problem, which arises in medical science, mathematical biology, and many other fields. This section investigates the performance of Algorithm 1 in restoring images that have been corrupted by noise in the process of acquisition. One of the most common and frequently used noise models is impulse noise (see [53–55]). For the purpose of this study, we consider restoring the original 256 × 256 grey-level images (x) of Canal and Building that have been corrupted by salt-and-pepper impulse noise. To achieve this, we need to define the following terms. We denote the true image with M × N pixels by x and define its index set as W = {1, 2, …, M} × {1, 2, …, N}.

At the first stage, the indices of the noise pixels are detected by applying an adaptive median filter to the observed corrupted noisy image ξ. Then, to restore the noise pixels at the second stage, we minimize a regularized functional over the detected noisy pixels, where ϕα denotes an edge-preserving potential function and Vij = {(i, j − 1), (i, j + 1), (i − 1, j), (i + 1, j)} is the set of neighbours of the pixel at position (i, j).
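The two-stage setup can be illustrated as follows: corrupt a grey-level image with salt-and-pepper noise, then flag the candidate noise set as the pixels attaining the extreme levels 0 or 255. This flagging is a simplified stand-in for the adaptive median filter detector described above, used here only for illustration:

```python
import numpy as np

def add_salt_pepper(img, ratio, rng):
    """Corrupt a grey-level image with salt-and-pepper impulse noise:
    a fraction `ratio` of pixels is driven to 0 (pepper) or 255 (salt)."""
    noisy = img.copy()
    mask = rng.random(img.shape) < ratio
    noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
    return noisy

def detect_noise_pixels(noisy):
    """First stage (simplified): salt-and-pepper noise only produces the
    extreme grey levels, so flag pixels equal to 0 or 255 as candidates."""
    return (noisy == 0.0) | (noisy == 255.0)
```

Only the flagged pixels enter the second-stage minimization; the remaining pixels are kept as observed, which is what makes the restoration functional act on the detected noise set alone.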

In this part, the efficiency and relative accuracy of the proposed method are analysed based on the peak signal-to-noise ratio (PSNR), the relative error (RelErr), and the CPU time (CPUT), using 30%, 50%, and 80% noise degrees, respectively. The algorithms for this experiment were coded in MATLAB R2019a installed on an Intel(R) Core(TM) i5–3210M PC with a 2.50 GHz CPU, 4.00 GB of RAM, and a 64-bit operating system.

The detailed results are presented in Tables 2–4, and the restored images are shown in Figs 8 and 9. The results in the tables and figures demonstrate the performance of all the methods considered in removing the noise from the corrupted Canal and Building images. Considering the three metrics (PSNR, RelErr, and CPU time), it is obvious that the proposed method is very competitive: it attains the least CPUT for the greater number of noise degrees, as seen in Table 2, and similar behavior is observed for the other metrics in Tables 3 and 4. A close observation of the overall results shows that the proposed method significantly outperforms the existing algorithms in terms of CPUT, RelErr, and PSNR. The NSRMIL method is able to de-correlate the grey noise and improve the correlation in the signal with better accuracy.
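The evaluation metrics can be computed as in this sketch, using the standard definitions of PSNR (in dB, for 8-bit images) and the relative Euclidean error; this is generic code, not the paper's implementation:

```python
import numpy as np

def psnr(restored, original, peak=255.0):
    """Peak signal-to-noise ratio in dB for grey-level images:
    10 log10(peak^2 / MSE). Higher is better."""
    mse = np.mean((restored.astype(float) - original.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rel_err(restored, original):
    """Relative error ||restored - original|| / ||original||. Lower is better."""
    return np.linalg.norm(restored - original) / np.linalg.norm(original)
```

A uniform error of 10 grey levels, for instance, gives an MSE of 100 and a PSNR of about 28.1 dB, which is the order of quality typically reported for restored natural images.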

Fig 8.

Canal image corrupted by 30, 50, and 80% salt-and-pepper noise: (a,b,c), the restored images using NSRMIL: (d,e,f), SPRP: (g,h,i), RMIL+: (j,k,l), SCG (m,n,o), SRMIL+ (p,q,r).

https://doi.org/10.1371/journal.pone.0281250.g008

Fig 9.

Building image corrupted by 30, 50, and 80% salt-and-pepper noise: (a,b,c), the restored images using NSRMIL: (d,e,f), SPRP: (g,h,i), RMIL+: (j,k,l), SCG (m,n,o), SRMIL+ (p,q,r).

https://doi.org/10.1371/journal.pone.0281250.g009

Table 2. Image restoration outputs for NSRMIL, SRMIL+, SPRP, RMIL+, and SCG, based on CPUT.

https://doi.org/10.1371/journal.pone.0281250.t002

Table 3. Image restoration outputs for NSRMIL, SRMIL+, SPRP, RMIL+, and SCG, based on RelErr.

https://doi.org/10.1371/journal.pone.0281250.t003

Table 4. Image restoration outputs for NSRMIL, SRMIL+, SPRP, RMIL+, and SCG, based on PSNR.

https://doi.org/10.1371/journal.pone.0281250.t004

Conclusion

In this paper, we have presented a simple spectral CG method by incorporating the update parameter in [33]. The parameter is obtained by modifying the spectral CG direction so that the sufficient descent condition and the global convergence hold independently of any line search rule. The theoretical analysis does not rely on the assumptions of the earlier versions given in [2, 34, 35]. Promising numerical results were obtained by relaxing the computational burden associated with the aforementioned versions. In particular, with the modulating parameter η = 0.01, the proposed algorithm is efficient compared to some known existing methods. The robustness of NSRMIL is also demonstrated in solving the image restoration and planar arm robotic problems. Future work includes exploring the proposed method in two-step algorithms as presented in [56–58].

Supporting information

S1 Table. The numerical result of NSRMIL, SPRP, RMIL+ and SCG methods.

The results presented in the table are used to demonstrate the performance of all the methods and to plot the performance profiles for the three metrics shown in Figs 1–3.

https://doi.org/10.1371/journal.pone.0281250.s001

(XLSX)

Acknowledgments

The authors acknowledge the support provided by the Petchra Pra Jom Klao Scholarship of King Mongkut’s University of Technology Thonburi (Contract No. 52/2564), Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT and King Mongkut’s University of Technology North Bangkok (Contract No. KMUTNB-FF-66-36).

References

  1. Dai Zhifeng, Zhu Haoyang, and Zhang Xinhua. Dynamic spillover effects and portfolio strategies between crude oil, gold and Chinese stock markets related to new energy vehicle. Energy Economics, 109:105959, 2022.
  2. Awwal Aliyu Muhammed, Sulaiman Ibrahim Mohammed, Maulana Malik, Mustafa Mamat, Poom Kumam, and Kanokwan Sitthithakerngkiet. A spectral RMIL+ conjugate gradient method for unconstrained optimization with applications in portfolio selection and motion control. IEEE Access, 9:75398–75414, 2021.
  3. Dai Zhifeng, Li Tingyu, and Yang Mi. Forecasting stock return volatility: The role of shrinkage approaches in a data-rich environment. Journal of Forecasting, 2022.
  4. Liu Jiankun and Du Shouqiang. Modified three-term conjugate gradient method and its applications. Mathematical Problems in Engineering, 2019.
  5. Yuan Gonglin, Lu Junyu, and Wang Zhan. The PRP conjugate gradient algorithm with a modified WWP line search and its application in the image restoration problems. Applied Numerical Mathematics, 152:1–11, 2020.
  6. Waziri Mohammed Yusuf, Ahmed Kabiru, and Halilu Abubakar Sani. A modified Dai–Kou-type method with applications to signal reconstruction and blurred image restoration. Computational and Applied Mathematics, 41(6):1–33, 2022.
  7. Awwal Aliyu Muhammed, Ishaku Adamu, Halilu Abubakar Sani, Stanimirović Predrag S., Pakkaranang Nuttapol and Panyanak Bancha. Descent derivative-free method involving symmetric rank-one update for solving convex constrained nonlinear monotone equations and application to image recovery. Symmetry, 14(11):2375, 2022.
  8. Waziri Mohammed Yusuf, Ahmed Kabiru, Halilu Abubakar Sani, and Awwal Aliyu Mohammed. Modified Dai–Yuan iterative scheme for nonlinear systems and its application. Numerical Algebra, Control and Optimization, 2021.
  9. Halilu Abubakar Sani, Majumder Arunava, Waziri Mohammed Yusuf, and Ahmed Kabiru. Signal recovery with convex constrained nonlinear monotone equations through conjugate gradient hybrid approach. Mathematics and Computers in Simulation, 187:520–539, 2021.
  10. Sulaiman Ibrahim Mohammed, Awwal Aliyu Muhammed, Malik Maulana, Pakkaranang Nuttapol and Panyanak Bancha. A derivative-free MZPRP projection method for convex constrained nonlinear equations and its application in compressive sensing. Mathematics, 10(16):2884, 2022.
  11. Waziri Mohammed Yusuf, Ahmed Kabiru, Halilu Abubakar Sani, and Sabi'u Jamilu. Two new Hager–Zhang iterative schemes with improved parameter choices for monotone nonlinear systems and their applications in compressed sensing. RAIRO-Operations Research, 56(1):239–273, 2022.
  12. Halilu Abubakar Sani, Majumder Arunava, Waziri Mohammed Yusuf, Ahmed Kabiru, and Awwal Aliyu Muhammed. Motion control of the two joint planar robotic manipulators through accelerated Dai–Liao method for solving system of nonlinear equations. Engineering Computations, 2021.
  13. Sulaiman Ibrahim Mohammed, Malik Maulana, Awwal Aliyu Muhammed, Kumam Poom, Mamat Mustafa and Al-Ahmad Shadi. On three-term conjugate gradient method for optimization problems with applications on COVID-19 model and robotic motion control. Advances in Continuous and Discrete Models, 2022(1):1–22, 2022. pmid:35450201
  14. Yahaya Mahmoud Muhammad, Kumam Poom, Awwal Aliyu Muhammed, Chaipunya Parin, Aji Sani, and Salisu Sani. A new generalized quasi-Newton algorithm based on structured diagonal Hessian approximation for solving nonlinear least-squares problems with application to 3DOF planar robot arm manipulator. IEEE Access, 10:10816–10826, 2022.
  15. Salihu Nasiru, Odekunle Mathew Remilekun, Waziri Mohammed Yusuf, Halilu Abubakar Sani, and Salihu Suraj. A Dai–Liao hybrid conjugate gradient method for unconstrained optimization. International Journal of Industrial Optimization, 2(2):69–84, 2021.
  16. Hager William W and Zhang Hongchao. A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization, 2(1):35–58, 2006.
  17. Fletcher Roger and Reeves Colin M. Function minimization by conjugate gradients. The Computer Journal, 7(2):149–154, 1964.
  18. Polyak Boris T. A general method for solving extremal problems. Dokl. Akad. Nauk SSSR, 174(1):33–36, 1967.
  19. Polak Elijah and Ribière Gérard. Note sur la convergence de méthodes de directions conjuguées. Revue Française d'Informatique et de Recherche Opérationnelle, 3(16):35–43, 1969.
  20. Hestenes Magnus R and Stiefel Eduard. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 49(6):409–436, 1952.
  21. Liu Y and Storey C. Efficient generalized conjugate gradient algorithms, part 1: theory. Journal of Optimization Theory and Applications, 69(1):129–137, 1991.
  22. Dai Yu-Hong and Yuan Yaxiang. A nonlinear conjugate gradient method with a strong global convergence property. SIAM Journal on Optimization, 10(1):177–182, 1999.
  23. Fletcher Roger. Practical Methods of Optimization. A Wiley-Interscience Publication, 1987.
  24. Andrei Neculai. Nonlinear Conjugate Gradient Methods for Unconstrained Optimization. Springer Cham, 2020.
  25. Zoutendijk G. Nonlinear programming, computational methods. In: Abadie J., editor, Integer and Nonlinear Programming, North-Holland, Amsterdam, pages 37–86, 1970.
  26. Al-Baali Mehiddin. Descent property and global convergence of the Fletcher–Reeves method with inexact line search. IMA Journal of Numerical Analysis, 5(1):121–124, 1985.
  27. Gilbert Jean Charles and Nocedal Jorge. Global convergence properties of conjugate gradient methods for optimization. SIAM Journal on Optimization, 2(1):21–42, 1992.
  28. Powell Michael JD. Nonconvex minimization calculations and the conjugate gradient method. In Numerical Analysis, pages 122–141. Springer, 1984.
  29. Babaie-Kafaki Saman and Ghanbari Reza. A descent family of Dai–Liao conjugate gradient methods. Optimization Methods and Software, 29(3):583–591, 2014.
  30. Babaie-Kafaki Saman and Ghanbari Reza. An optimal extension of the Polak–Ribière–Polyak conjugate gradient method. Numerical Functional Analysis and Optimization, 38(9):1115–1124, 2017.
  31. Wei Zengxin, Yao Shengwei, and Liu Liying. The convergence properties of some new conjugate gradient methods. Applied Mathematics and Computation, 183(2):1341–1350, 2006.
  32. Zhang Li. An improved Wei–Yao–Liu nonlinear conjugate gradient method for optimization computation. Applied Mathematics and Computation, 215(6):2269–2274, 2009.
  33. Rivaie Mohd, Mamat Mustafa, Leong Wah June, and Ismail Mohd. A new class of nonlinear conjugate gradient coefficients with global convergence properties. Applied Mathematics and Computation, 218(22):11323–11332, 2012.
  34. Dai Zhifeng. Comments on a new class of nonlinear conjugate gradient coefficients with global convergence properties. Applied Mathematics and Computation, 276:297–300, 2016.
  35. Yousif Osman Omer Osman. The convergence properties of RMIL+ conjugate gradient method under the strong Wolfe line search. Applied Mathematics and Computation, 367:124777, 2020.
  36. Raydan Marcos. The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM Journal on Optimization, 7(1):26–33, 1997.
  37. Zhang Li, Zhou Weijun, and Li Donghui. Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search. Numerische Mathematik, 104(4):561–572, 2006.
  38. Liu JK, Feng YM, and Zou LM. A spectral conjugate gradient method for solving large-scale unconstrained optimization. Computers & Mathematics with Applications, 77(3):731–739, 2019.
  39. Birgin Ernesto G and Martínez José Mario. A spectral conjugate gradient method for unconstrained optimization. Applied Mathematics and Optimization, 43(2):117–128, 2001.
  40. Touati-Ahmed D and Storey C. Efficient hybrid conjugate gradient techniques. Journal of Optimization Theory and Applications, 64(2):379–397, 1990.
  41. Salihu Nasiru, Odekunle Mathew Remilekun, Saleh Also Mohammed, and Salihu Suraj. A Dai–Liao hybrid Hestenes–Stiefel and Fletcher–Reeves methods for unconstrained optimization. International Journal of Industrial Optimization, 2(1):33–50, 2021.
  42. Salihu Nasiru, Odekunle Mathew, Waziri Mohammed, and Halilu Abubakar. A new hybrid conjugate gradient method based on secant equation for solving large scale unconstrained optimization problems. Iranian Journal of Optimization, 12(1):33–44, 2020.
  43. Sun Zhongbo, Cao Xue, Guo Yingying, Ge Yuncheng, and Sun Yue. Two modified PRP conjugate gradient methods and their global convergence for unconstrained optimization. 2017 29th Chinese Control and Decision Conference (CCDC), pages 786–790, 2017.
  44. Jian Jinbao, Chen Qian, Jiang Xianzhen, Zeng Youfang, and Yin Jianghua. A new spectral conjugate gradient method for large-scale unconstrained optimization. Optimization Methods and Software, 32(3):503–515, 2017.
  45. Sun Zhongbo, Li Hongyang, Wang Jing, and Tian Yantao. Two modified spectral conjugate gradient methods and their global convergence for unconstrained optimization. International Journal of Computer Mathematics, 95(10):2082–2099, 2018.
  46. Arzuka Ibrahim, Abu Bakar Mohd R., and Leong Wah June. Scaled three-term conjugate gradient method for unconstrained optimization. Journal of Inequalities and Applications, 2016(1):1–16, 2016.
  47. Jian Jinbao, Yang Lin, Jiang Xianzhen, Liu Pengjie, and Liu Meixing. A spectral conjugate gradient method with descent property. Mathematics, 8(2):280, 2020.
  48. Jamil Momin and Yang Xin-She. A literature survey of benchmark functions for global optimization problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194, 2013.
  49. Wu Xuesha. A new spectral Polak–Ribière–Polak conjugate gradient method. ScienceAsia, 41:345–349, 2015.
  50. Dolan Elizabeth D and Moré Jorge J. Benchmarking optimization software with performance profiles. Mathematical Programming, 91(2):201–213, 2002.
  51. Renfrew Alasdair. Introduction to robotics: Mechanics and control. International Journal of Electrical Engineering and Education, 41(4):388, 2004.
  52. Zhang Yunong, He Liu, Hu Chaowei, Guo Jinjin, Li Jian, and Shi Yang. General four-step discrete-time zeroing and derivative dynamics applied to time-varying nonlinear optimization. Journal of Computational and Applied Mathematics, 347:314–329, 2019.
  53. Malik Maulana, Sulaiman Ibrahim Mohammed, Abubakar Auwal Bala, Ardaneswari Gianinna, and Sukono Firman. A new family of hybrid three-term conjugate gradient method for unconstrained optimization with application to image restoration and portfolio selection. AIMS Mathematics, 8(1):1–28, 2022.
  54. Yu Gaohang, Huang Jinhong, and Zhou Yi. A descent spectral conjugate gradient method for impulse noise removal. Applied Mathematics Letters, 23:555–560, 2010.
  55. Jiang Xianzhen, Liao Wei, Yin Jianghua, and Jian Jinbao. A new family of hybrid three-term conjugate gradient methods with applications in image restoration. Numerical Algorithms, 91:161–191, 2022.
  56. Awwal Aliyu Muhammed, Wang Lin, Kumam Poom, and Hassan Mohammad. A two-step spectral gradient projection method for system of nonlinear monotone equations and image deblurring problems. Symmetry, 12(6):874, 2020.
  57. Muhammad Abubakar Bakoji, Tammer Christiane, Awwal Aliyu Muhammed, Elster Rosalind and Ma Zhaoli. Inertial-type projection methods for solving convex constrained monotone nonlinear equations with applications to robotic motion control. Journal of Nonlinear and Variational Analysis, 5(5):831–849, 2021.
  58. Awwal Aliyu Muhammed, Kumam Poom, Wang Lin, Huang Shuang and Kumam Wiyada. Inertial-based derivative-free method for system of monotone nonlinear equations and application. IEEE Access, 8:226921–226930, 2020.