A NEW SPECTRAL CONJUGATE GRADIENT METHOD WITH DESCENT CONDITION AND GLOBAL CONVERGENCE PROPERTY FOR UNCONSTRAINED OPTIMIZATION

The spectral conjugate gradient method is an efficient method for solving large-scale unconstrained optimization problems. In this paper, we propose a new spectral conjugate gradient method whose performance is analyzed numerically. We establish the descent condition and the global convergence property under some assumptions and the strong Wolfe line search. Numerical experiments to evaluate the method's efficiency are conducted on 98 problems with various dimensions and initial points. The numerical results, based on the number of iterations and central processing unit time, show that the new method has high computational performance.


INTRODUCTION
The conjugate gradient method is an efficient and attractive method for solving large-scale optimization problems, since it requires only low storage and simple computations. For good references on studies and applications of the conjugate gradient method, see [1,2,3]. In this article, we consider the unconstrained minimization problem

(1) min f(x), x ∈ R^n,

where R^n is the n-dimensional real space and f : R^n → R is a continuously differentiable function. Generally, the conjugate gradient method is an iterative method with iterations defined as

(2) x_{k+1} = x_k + α_k d_k, k = 0, 1, 2, 3, ...,

where x_k is the k-th approximation to a solution of problem (1), x_0 is the starting point, and α_k is a step length obtained by some line search. In this article, we use the strong Wolfe line search [4]:

(3) f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_k^T d_k and |g(x_k + α_k d_k)^T d_k| ≤ σ |g_k^T d_k|,

where g_k = g(x_k) is the gradient of the function f at the point x_k, g_k^T is the transpose of g_k, and 0 < δ < σ < 1. The search direction d_k in the conjugate gradient method is defined as

(4) d_0 = -g_0 and d_k = -g_k + β_k d_{k-1} for k ≥ 1,

where β_k is a coefficient whose choice determines the different formulas [5]. The most well-known conjugate gradient methods are the Hestenes-Stiefel (HS) method [6], the Fletcher-Reeves (FR) method [7], the Conjugate Descent (CD) method [8], the Polak-Ribière-Polyak (PRP) method [9], and the Wei-Yao-Liu (WYL) method [10], where the formulas β_k for the corresponding methods are, respectively,

β_k^{HS} = g_k^T y_{k-1} / (d_{k-1}^T y_{k-1}), β_k^{FR} = ||g_k||^2 / ||g_{k-1}||^2, β_k^{CD} = -||g_k||^2 / (g_{k-1}^T d_{k-1}), β_k^{PRP} = g_k^T y_{k-1} / ||g_{k-1}||^2, β_k^{WYL} = (||g_k||^2 - (||g_k|| / ||g_{k-1}||) g_k^T g_{k-1}) / ||g_{k-1}||^2,

where y_{k-1} = g_k - g_{k-1} and ||·|| denotes the Euclidean norm.

Another common way to solve problem (1) is to use the spectral conjugate gradient method introduced by Raydan [11]; the idea of the spectral gradient method was initially proposed by Barzilai and Borwein [12]. The main difference between the spectral conjugate gradient method and the conjugate gradient method lies in the calculation of the search direction, which for the spectral conjugate gradient method takes the form

d_k = -θ_k g_k + β_k d_{k-1},

where θ_k is the spectral gradient parameter, a classical choice being θ_k = s_{k-1}^T s_{k-1} / (s_{k-1}^T y_{k-1}) with s_{k-1} = α_{k-1} d_{k-1}. In 2001, Birgin and Martinez [13] developed three kinds of spectral methods, which are combinations of spectral methods and conjugate gradient methods with corresponding choices of the parameter β_k. Based on numerical results, the three methods above are quite efficient, but the descent condition is not necessarily fulfilled. Therefore, Zhang et al. [14] made a modification of the FR method (MFR) so that the method has a proven descent direction and satisfies the global convergence property under the Armijo line search, where the search direction is defined as d_k = -θ_k g_k + β_k^{FR} d_{k-1} with θ_k = d_{k-1}^T y_{k-1} / ||g_{k-1}||^2, which can be written as

(5) d_k = -(1 + β_k^{FR} (g_k^T d_{k-1}) / ||g_k||^2) g_k + β_k^{FR} d_{k-1}.

Likewise, in 2012, Liu and Jiang [15] proposed a new kind of spectral conjugate gradient method (SCD) with their own choices of the coefficient β_k and the spectral gradient parameter θ_k. Recently, Jian et al. [16] proposed a new class of spectral conjugate gradient methods based on a new choice of the spectral parameter θ_k.
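To make the generic scheme concrete, the following minimal sketch (our illustration, not code from any of the cited papers) implements iteration (2) with direction (4). The step length comes from scipy.optimize.line_search, which enforces Wolfe-type conditions matching (3), with c1 playing the role of δ and c2 of σ; the FR and PRP coefficients are shown as examples.

    import numpy as np
    from scipy.optimize import line_search, rosen, rosen_der

    def beta_fr(g_new, g_old, d_old):
        # Fletcher-Reeves coefficient: ||g_k||^2 / ||g_{k-1}||^2
        return (g_new @ g_new) / (g_old @ g_old)

    def beta_prp(g_new, g_old, d_old):
        # Polak-Ribiere-Polyak coefficient: g_k^T y_{k-1} / ||g_{k-1}||^2
        return g_new @ (g_new - g_old) / (g_old @ g_old)

    def cg(f, grad, x0, beta_fn, eps=1e-6, max_iter=5000):
        x = x0.astype(float)
        g = grad(x)
        d = -g                                  # d_0 = -g_0
        for k in range(max_iter):
            if np.linalg.norm(g) <= eps:        # stop when the gradient is small
                return x, k
            # Wolfe-type line search for the step length alpha_k in (2)
            alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
            if alpha is None:                   # search failed; crude fallback
                alpha = 1e-4
            x = x + alpha * d                   # iteration (2)
            g_old, g = g, grad(x)
            d = -g + beta_fn(g, g_old, d) * d   # direction (4) for k >= 1
        return x, max_iter

    # Usage example on the extended Rosenbrock function from scipy:
    x_opt, iters = cg(rosen, rosen_der, np.zeros(10), beta_prp)

Any of the β_k formulas above can be dropped in through the beta_fn argument.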
The JYJLL spectral conjugate gradient method (JYJLL-SCGM) always fulfills the descent condition independently of any line search, and its global convergence properties under the Wolfe line search are also established. Numerical experiments comparing the JYJLL-SCGM with the AN1 [17], KD [18], HZ [19], and LFZ [20] methods show that JYJLL-SCGM is the most effective.
The main objective of this paper is to propose a new spectral conjugate gradient method and compare its performance with the MFR, SCD, JYJLL, and NPRP (see Zhang [21]) methods.
This paper is organized as follows. In section 2, a new spectral conjugate gradient formula and its algorithm are presented. In section 3, we show the descent condition and the global convergence property of our new method. Numerical experiments are presented in section 4. Finally, our conclusion is given in section 5.

NEW SPECTRAL CONJUGATE GRADIENT FORMULA
In this section, we first propose a new conjugate gradient coefficient based on the NPRP conjugate gradient formula in Ref. [21]. The NPRP method is a modification of the PRP method and a development of the WYL method. The coefficient β_k of the NPRP method is defined as

β_k^{NPRP} = (||g_k||^2 - (||g_k|| / ||g_{k-1}||) |g_k^T g_{k-1}|) / ||g_{k-1}||^2,

and our new coefficient is defined as

(6) β_k^{MMSMS} = max{ (||g_k||^2 - (||g_k|| / ||g_{k-1}||) |g_k^T g_{k-1}| - g_k^T g_{k-1}) / ((1 - µ) ||d_{k-1}||^2 + µ ||g_{k-1}||^2), 0 };

that is, we add a negative g_k^T g_{k-1} term to the numerator of β_k^{NPRP}, extend the denominator to (1 - µ) ||d_{k-1}||^2 + µ ||g_{k-1}||^2, and take the maximum with zero to prevent negative values, where µ = 0.9.
Secondly, we propose a new spectral parameter which has the same form as the MFR formula (5) but with our coefficient β_k^{MMSMS} in place of β_k^{FR}:

(7) θ_k^{MMSMS} = 1 + β_k^{MMSMS} (g_k^T d_{k-1}) / ||g_k||^2,

so that the search direction of our method is

(8) d_0 = -g_0 and d_k = -θ_k^{MMSMS} g_k + β_k^{MMSMS} d_{k-1} for k ≥ 1.

Here MMSMS stands for Malik, Mustafa, Sabariah, Mohammed, and Sukono.
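For readers who want to experiment, a hedged sketch of the new coefficient is given below. It follows our literal reading of the verbal description of (6); the helper name and the exact placement of the extra -g_k^T g_{k-1} term are ours, and the authoritative definition is formula (6) above.

    import numpy as np

    def beta_mmsms(g, g_old, d_old, mu=0.9):
        # Our reading of (6): the NPRP numerator with an extra -g_k^T g_{k-1}
        # term (assumed placement), a denominator blending ||d_{k-1}||^2 with
        # ||g_{k-1}||^2, and truncation at zero ("prevent negative value").
        num = (g @ g
               - np.linalg.norm(g) / np.linalg.norm(g_old) * abs(g @ g_old)
               - g @ g_old)
        den = (1.0 - mu) * (d_old @ d_old) + mu * (g_old @ g_old)
        return max(num / den, 0.0)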
In the following, we describe the algorithm of the spectral MMSMS (SpMMSMS) method for solving unconstrained optimization problems.
Step 1. Choose an initial point x_0 ∈ R^n. Given the stopping criterion ε > 0 and the parameters σ and δ. Set k = 0.
Step 2. Compute g_k. If ||g_k|| ≤ ε, then stop; x_k is the optimal point. Else, go to Step 3.
Step 3. Compute the coefficient β_k^{MMSMS} by (6) and the spectral parameter θ_k^{MMSMS} by (7).
Step 4. Compute the search direction d_k by (8).
Step 5. Compute the step length α_k using the strong Wolfe line search (3).
Step 6. Update the new point by (2). Set k := k + 1 and go to Step 2.
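Collecting Steps 1-6, a runnable sketch of the SpMMSMS loop follows. It repeats the beta_mmsms helper from the previous sketch for self-containment, and both that helper and the θ_k computation are our assumed transcriptions of (6) and (7) rather than verbatim formulas; the loop prepares the next direction (Steps 3 and 4) at the end of each pass.

    import numpy as np
    from scipy.optimize import line_search, rosen, rosen_der

    def beta_mmsms(g, g_old, d_old, mu=0.9):
        # Assumed reading of (6); see the previous sketch.
        num = (g @ g - np.linalg.norm(g) / np.linalg.norm(g_old) * abs(g @ g_old)
               - g @ g_old)
        den = (1.0 - mu) * (d_old @ d_old) + mu * (g_old @ g_old)
        return max(num / den, 0.0)

    def spmmsms(f, grad, x0, eps=1e-6, delta=1e-4, sigma=0.1, max_iter=5000):
        x = x0.astype(float)                  # Step 1: x_0, eps, delta, sigma
        g = grad(x)
        d = -g                                # d_0 = -g_0
        for k in range(max_iter):
            if np.linalg.norm(g) <= eps:      # Step 2: stopping criterion
                return x, k
            # Step 5: Wolfe-type step length for (3); delta ~ c1, sigma ~ c2
            alpha = line_search(f, grad, x, d, gfk=g, c1=delta, c2=sigma)[0]
            if alpha is None:
                alpha = 1e-4                  # crude fallback if the search fails
            x = x + alpha * d                 # Step 6: update by (2)
            g_old, g = g, grad(x)
            beta = beta_mmsms(g, g_old, d)    # Step 3: coefficient, cf. (6)
            theta = 1.0 + beta * (g @ d) / (g @ g)   # Step 3: parameter, cf. (7)
            d = -theta * g + beta * d         # Step 4: direction (8)
        return x, max_iter

    x_opt, iters = spmmsms(rosen, rosen_der, np.full(10, -1.2))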

DESCENT CONDITION AND GLOBAL CONVERGENCE PROPERTY
In this section, we analyze the descent condition and the global convergence property of the SpMMSMS method. Therefore, we need the following definitions.

Definition 3.1. If a search direction d_k satisfies

g_k^T d_k < 0 for all k ≥ 0,

then the descent condition holds.

Definition 3.2. [22]
We say that a conjugate gradient method is globally convergent if

lim inf_{k→∞} ||g_k|| = 0.

The theorem below shows that the SpMMSMS method always fulfills the descent condition.
Theorem 3.3. Suppose that the search direction d_k is generated by the SpMMSMS method. Then g_k^T d_k < 0 holds for any k ≥ 0.
Proof. For k = 0, we have g_0^T d_0 = -||g_0||^2 < 0, so the SpMMSMS method satisfies the descent condition. Now, for k ≥ 1, substituting θ_k by θ_k^{MMSMS} and β_k by β_k^{MMSMS}, we get from (8)

g_k^T d_k = -θ_k^{MMSMS} ||g_k||^2 + β_k^{MMSMS} g_k^T d_{k-1}.

Based on the value of β_k^{MMSMS} in (6), we have two cases for d_k.

Case 1: if β_k^{MMSMS} = 0, then from (6), (7), and (8) we have θ_k^{MMSMS} = 1 and d_k = -g_k. Multiplying both sides by g_k^T gives g_k^T d_k = -||g_k||^2 < 0. Hence, the descent condition holds.

Case 2: if β_k^{MMSMS} > 0, then from (7) and (8), multiplying both sides by g_k^T, we obtain

g_k^T d_k = -(1 + β_k^{MMSMS} (g_k^T d_{k-1}) / ||g_k||^2) ||g_k||^2 + β_k^{MMSMS} g_k^T d_{k-1} = -||g_k||^2 < 0,

so the descent condition holds. Hence, for any k ≥ 0, the descent condition g_k^T d_k < 0 always holds. The proof is completed.
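As a quick numerical companion to Theorem 3.3 (again under the assumed formulas of the earlier sketches, on a toy quadratic of our own choosing), one SpMMSMS step satisfies the descent condition, and with the assumed θ_k the product g_k^T d_k equals -||g_k||^2 exactly:

    # Illustrative check of the descent condition on f(x) = 0.5 x^T A x,
    # using the assumed beta/theta formulas from the earlier sketches.
    import numpy as np

    rng = np.random.default_rng(0)
    A = np.diag(np.arange(1.0, 6.0))          # simple SPD quadratic
    grad = lambda x: A @ x

    x_old = rng.standard_normal(5)
    g_old = grad(x_old)
    d_old = -g_old                            # d_0 = -g_0
    x = x_old + 0.1 * d_old                   # one fixed small step
    g = grad(x)

    mu = 0.9
    num = (g @ g - np.linalg.norm(g) / np.linalg.norm(g_old) * abs(g @ g_old)
           - g @ g_old)
    beta = max(num / ((1 - mu) * (d_old @ d_old) + mu * (g_old @ g_old)), 0.0)
    theta = 1.0 + beta * (g @ d_old) / (g @ g)
    d = -theta * g + beta * d_old             # direction (8)

    assert g @ d < 0                          # descent condition (Definition 3.1)
    print(g @ d, -(g @ g))                    # equal under the assumed theta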
The following lemma is essential to prove the global convergence of the SpMMSMS method.
Lemma 3.4. The relation

(9) 0 ≤ β_k^{MMSMS} ≤ ||g_k||^2 / ||g_{k-1}||^2

always holds for any k ≥ 1.
Proof. Clearly, from (6), β_k^{MMSMS} can be 0 or the nonnegative fraction in (6), which is bounded above by ||g_k||^2 / ||g_{k-1}||^2. Hence, the relation (9) is true. The proof is finished.
To investigate the global convergence property of the SpMMSMS method, we need the following assumption.
Assumption 3.5.
(A1) The level set Ω = {x ∈ R^n : f(x) ≤ f(x_0)} is bounded.
(A2) In some neighborhood Ω_0 of Ω, f is continuous and differentiable, and its gradient g(x) is Lipschitz continuous; in other words, there exists a Lipschitz constant L > 0 such that

||g(x) - g(y)|| ≤ L ||x - y|| for all x, y ∈ Ω_0.

In the lemma below, we state the well-known condition of Zoutendijk [23], which plays an essential role in the convergence analysis of conjugate gradient methods.
Lemma 3.6. Suppose that Assumption 3.5 holds, and let {x_k} be generated by the iterative method (2), where α_k is a step length calculated by the strong Wolfe line search (3) and the search direction d_k satisfies the descent condition g_k^T d_k < 0. Then

(10) Σ_{k=0}^∞ (g_k^T d_k)^2 / ||d_k||^2 < +∞.

In the theorem below, we establish the global convergence property of our new method.

Theorem 3.7. Suppose that Assumption 3.5 holds, and let {x_k} be generated by the SpMMSMS method, where α_k is computed by the strong Wolfe line search (3). Then

(11) lim inf_{k→∞} ||g_k|| = 0.

Proof. The proof is done by contradiction. Suppose that (11) is not true. Then there exists a positive constant φ > 0 such that ||g_k|| ≥ φ for all k ≥ 0, which means

(12) 1 / ||g_k||^2 ≤ 1 / φ^2 for all k ≥ 0.

From (8), we have d_k + θ_k^{MMSMS} g_k = β_k^{MMSMS} d_{k-1}, and squaring both sides, we get

(13) ||d_k||^2 = (β_k^{MMSMS})^2 ||d_{k-1}||^2 - 2 θ_k^{MMSMS} g_k^T d_k - (θ_k^{MMSMS})^2 ||g_k||^2.

Dividing both sides of (13) by (g_k^T d_k)^2, recalling from the proof of Theorem 3.3 that g_k^T d_k = -||g_k||^2, and combining with (9) and the fact that 2 θ_k^{MMSMS} - (θ_k^{MMSMS})^2 ≤ 1, we obtain

||d_k||^2 / ||g_k||^4 ≤ ||d_{k-1}||^2 / ||g_{k-1}||^4 + 1 / ||g_k||^2.

We know that ||d_0||^2 / ||g_0||^4 = 1 / ||g_0||^2, so applying the relation above recursively and then using (12), we have

||d_k||^2 / ||g_k||^4 ≤ Σ_{i=0}^k 1 / ||g_i||^2 ≤ (k + 1) / φ^2.

So, we have

Σ_{k=0}^∞ (g_k^T d_k)^2 / ||d_k||^2 = Σ_{k=0}^∞ ||g_k||^4 / ||d_k||^2 ≥ φ^2 Σ_{k=0}^∞ 1 / (k + 1) = +∞,

which contradicts (10) in Lemma 3.6. So, based on Definition 3.2, we conclude that the SpMMSMS method fulfills the global convergence property. The proof is completed.

NUMERICAL EXPERIMENTS AND DISCUSSION
In this section, we present the computational results of the SpMMSMS, MFR, SCD, JYJLL, and NPRP methods. Some test functions from Andrei [24] are considered to analyze the efficiency of each method. The comparison is made using 98 problems with various initial points and dimensions, as in Table 1. Most of the initial points used are suggestions from Andrei [24], and the dimensional variations follow Malik et al. [25,26], namely 2, 3, 4, 10, 50, 100, 500, 1000, 5000, and 10000.

Based on Table 2, we can summarize the results for each method in Table 3: the total number of iterations (NOI), the total CPU time, and the number of problems successfully solved.

Figure 1 and Figure 2, respectively, display the performance profiles of each method using the performance profile introduced by Dolan and Moré [27]. The formulas used to describe the profile are as follows:

r_{p,s} = τ_{p,s} / min{τ_{p,s} : s ∈ S}, ρ_s(τ) = (1/n_p) size{p ∈ P : r_{p,s} ≤ τ},

where
• S is the set of solvers,
• P is the set of test problems,
• n_s is the number of solvers,
• n_p is the number of problems,
• τ_{p,s} is the computing measure (NOI or CPU time) needed to solve problem p by solver s,
• r_{p,s} is the performance ratio of solver s on problem p,
• ρ_s(τ) is the probability for solver s that its performance ratio r_{p,s} is within a factor τ of the best possible ratio.

The ratio r_{p,s} compares the performance of solver s on problem p with the best performance by any solver on that problem. In general, solvers with high values of ρ_s(τ), that is, curves at the top of the plot, represent the best solvers.

Figure 1 and Figure 2 plot the performance profiles of the SpMMSMS, JYJLL, MFR, SCD, and NPRP methods based on the number of iterations and the CPU time, respectively. From both figures, it can be seen that the SpMMSMS method is the highest-performing method in solving the 98 test problems. We can also see in Table 3 that the SpMMSMS method has the best performance in the total NOI and the total CPU time compared to the other methods. The SpMMSMS method is able to solve all of the problems, so its success percentage reaches 100%. All comparisons, performance profiles, total NOI, total CPU time, and success percentage, indicate that the SpMMSMS method has high computational performance compared to the other methods.
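For reproducibility, the profile computation itself takes only a few lines. The sketch below (our own helper, with fabricated illustrative numbers rather than the paper's data) builds ρ_s(τ) from a matrix of measures τ_{p,s}, encoding failed runs as infinity:

    # Sketch: computing Dolan-More performance profiles rho_s(tau) from a
    # matrix of measures tau[p, s] (rows = problems, columns = solvers).
    import numpy as np

    def performance_profile(tau, taus):
        """tau: (n_p, n_s) array of NOI or CPU times, np.inf for failures.
        Returns rho: (len(taus), n_s), the fraction of problems each solver
        solves within a factor tau of the best solver."""
        n_p, n_s = tau.shape
        best = tau.min(axis=1, keepdims=True)      # best measure per problem
        r = tau / best                             # performance ratios r_{p,s}
        rho = np.array([(r <= t).sum(axis=0) / n_p for t in taus])
        return rho

    # Usage example with fabricated numbers for three solvers on four problems:
    tau = np.array([[10.0, 12.0, 15.0],
                    [ 5.0,  5.0, np.inf],          # third solver failed here
                    [ 8.0,  7.0,  9.0],
                    [20.0, 25.0, 22.0]])
    rho = performance_profile(tau, taus=np.linspace(1.0, 3.0, 50))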

CONCLUSION
In this article, we propose a new spectral conjugate gradient method, namely the SpMMSMS method. The SpMMSMS method's performance was tested by comparing it to previous methods (JYJLL, MFR, SCD, and NPRP). Our new method fulfills the descent condition and the global convergence property under the strong Wolfe line search. Across 98 test problems, the SpMMSMS method shows high computational performance compared with the JYJLL, MFR, SCD, and NPRP methods.