A MODIFIED HESTENES-STIEFEL METHOD FOR SOLVING UNCONSTRAINED OPTIMIZATION PROBLEMS

Conjugate gradient methods are among the most efficient methods for solving optimization models, owing to their simplicity, low memory requirements, and global convergence properties. Many researchers have tried to improve this technique. In this paper, we suggest a modification of the conjugate gradient parameter with global convergence properties under the exact minimization rule. Preliminary experiments were conducted on some unconstrained optimization benchmark problems. The numerical results show that the new algorithm is efficient and promising, as it performs better than other classical methods in terms of both the number of iterations and CPU time.


INTRODUCTION
The conjugate gradient (CG) method is an important tool for solving unconstrained optimization problems. It can be applied in many fields, such as industry, medicine, and economics, because of its low memory requirements and global convergence properties (see [2,11,15]).
Generally, the optimization problem can be expressed as
$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$
where $f: \mathbb{R}^n \to \mathbb{R}$ is smooth. The CG method computes its iterates, starting from an initial point $x_0 \in \mathbb{R}^n$, as
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots, \qquad (2)$$
where the step-size $\alpha_k > 0$ can be obtained using a line search method along the search direction $d_k$. The most preferred line search algorithm is the exact minimization condition
$$f(x_k + \alpha_k d_k) = \min_{\alpha \geq 0} f(x_k + \alpha d_k), \qquad (3)$$
where $d_k$ is given by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \geq 1, \end{cases} \qquad (4)$$
where $\beta_k$ is a scalar and $g_k = \nabla f(x_k)$.
The first CG algorithm was suggested by Hestenes and Stiefel (HS) [12] in 1952. Later, the Hestenes-Stiefel algorithm was improved in various ways to solve (1). The HS method is characterized by its coefficient
$$\beta_k^{HS} = \frac{g_k^T (g_k - g_{k-1})}{d_{k-1}^T (g_k - g_{k-1})}.$$
Other known CG coefficients are presented in Table 1, including the PRP coefficient (Polak, Ribière and Polyak [20,21], 1969) and the LS coefficient (Liu and Storey [16], 1991). There are several studies of the convergence properties of these methods (see [3,4,5,15,22,29,31,35]). Some convergent formulas are obtained by restricting the scalar $\beta_k$ to nonnegative values [19]. The convergence analysis of the HS, LS and PRP methods is yet to be established under other line searches (see [13,32]). Some practical applications of the optimization method can be found in [28].
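To make the scheme (2)-(4) concrete, here is a minimal sketch (ours, not from the paper) of the CG iteration with the HS coefficient on a quadratic objective, for which the exact minimization rule (3) has a closed form; the test matrix, right-hand side, starting point, and tolerance are placeholder choices for illustration only.

```python
import numpy as np

def cg_quadratic(A, b, x0, beta_rule, tol=1e-6, max_iter=500):
    """CG scheme (2)-(4) on the quadratic f(x) = 0.5*x^T A x - b^T x.

    For a quadratic with A symmetric positive definite, the exact
    minimization rule (3) has the closed form
    alpha_k = -(g_k^T d_k) / (d_k^T A d_k).
    """
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                                # g_k = grad f(x_k)
    d = -g                                       # d_0 = -g_0
    k = 0
    while np.linalg.norm(g) > tol and k < max_iter:
        alpha = -(g @ d) / (d @ (A @ d))         # exact line search (3)
        x = x + alpha * d                        # iterate update (2)
        g_new = A @ x - b
        d = -g_new + beta_rule(g_new, g, d) * d  # direction update (4)
        g = g_new
        k += 1
    return x, k

def beta_hs(g_new, g, d):
    """Hestenes-Stiefel coefficient beta_k^{HS}."""
    y = g_new - g
    return (g_new @ y) / (d @ y)

# Example run on a small ill-conditioned quadratic (placeholder data).
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_min, iters = cg_quadratic(A, b, np.zeros(3), beta_hs)
```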
Recently, many researchers have studied CG methods. Table 2 provides a list of recent CG methods.

NEW FORMULA FOR $\beta_k$
In the early 21st century, tremendous efforts were made by researchers to improve the CG methods. Researchers have suggested numerous variants of CG methods with strong convergence properties and efficient numerical results. A survey of the CG methods is given by Andrei [6].
Lately, Wei et al. [30] introduced a variant of the PRP coefficient, referred to as the WYL method, with
$$\beta_k^{WYL} = \frac{g_k^T \left( g_k - \frac{\|g_k\|}{\|g_{k-1}\|} g_{k-1} \right)}{\|g_{k-1}\|^2}.$$
Motivated by the ideas of [12,30], we introduce our coefficient, denoted $\beta_k^{TM*}$, where TM represents Tala't and Mustafa. The new $\beta_k^{TM*}$ is a variant of the HS method, given as follows:
$$\beta_k^{TM*} = \frac{g_k^T \left( g_k - \frac{\|g_k\|}{\|g_{k-1}\|} g_{k-1} \right)}{d_{k-1}^T (g_k - g_{k-1})}. \qquad (5)$$
The algorithm of the proposed coefficient is as follows:

Step 1: Given an initial point $x_0 \in \mathbb{R}^n$, set $k = 0$ and $d_0 = -g_0$.
Step 2: If $\|g_k\| \leq \epsilon$, stop.
Step 3: Compute the step-size $\alpha_k$ by the exact minimization rule (3) and update $x_{k+1}$ by (2).
Step 4: Compute $\beta_{k+1}^{TM*}$ by (5), and produce $d_{k+1}$ by (4).
Step 5: Set $k := k + 1$ and go to Step 2.
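Assuming the reconstructed form of (5) above, the proposed coefficient drops into the earlier sketch as one more beta rule; the guard against a vanishing denominator is an implementation detail of ours, not part of the stated algorithm.

```python
def beta_tm_star(g_new, g, d):
    """Proposed TM* coefficient (5): WYL-type numerator, HS denominator."""
    y = g_new - g
    denom = d @ y                # d_{k-1}^T (g_k - g_{k-1})
    if abs(denom) < 1e-30:       # safeguard only; not in the paper
        return 0.0
    scale = np.linalg.norm(g_new) / np.linalg.norm(g)
    return (g_new @ (g_new - scale * g)) / denom

# Reusing cg_quadratic, A, b from the earlier sketch:
x_min, iters = cg_quadratic(A, b, np.zeros(3), beta_tm_star)
```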

CONVERGENCE ANALYSIS
An important condition in the convergence analysis of any CG algorithm is that it satisfies the sufficient descent condition (SDC) [2,27].

Sufficient descent condition
For the SDC to hold, the search direction $d_k$ must satisfy
$$g_k^T d_k \leq -c \|g_k\|^2 \quad \text{for all } k \geq 0, \qquad (6)$$
where $c > 0$ is a constant.

Global convergence
To establish the convergence properties of the method of $\beta_k^{TM*}$, we need to simplify $\beta_k^{TM*}$ to make the proof easier. Under the exact line search (3) we have $g_k^T d_{k-1} = 0$ and $g_{k-1}^T d_{k-1} = -\|g_{k-1}\|^2$, so that
$$d_{k-1}^T (g_k - g_{k-1}) = \|g_{k-1}\|^2. \qquad (7)$$
From (5) and (7) we can see that
$$0 \leq \beta_k^{TM*} = \frac{g_k^T \left( g_k - \frac{\|g_k\|}{\|g_{k-1}\|} g_{k-1} \right)}{\|g_{k-1}\|^2} \leq \frac{2 \|g_k\|^2}{\|g_{k-1}\|^2}, \qquad (8)$$
where both bounds follow from the Cauchy-Schwarz inequality $|g_k^T g_{k-1}| \leq \|g_k\| \|g_{k-1}\|$. For the convergence of CG methods, the next assumptions are always needed.

Assumption 1.
i. $f$ is bounded below on the level set $\Omega = \{x \in \mathbb{R}^n : f(x) \leq f(x_0)\}$.
ii. The gradient $g(x)$ is Lipschitz continuous in a neighborhood $N$ of $\Omega$, i.e., there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \leq L \|x - y\|$ for all $x, y \in N$.

Under this assumption, we have the next lemma, which was proven by Zoutendijk [33].

Lemma 1.
Let Assumption 1 hold true for any CG method of the form (2) and (4), where $d_k$ is a descent direction and the step-size $\alpha_k$ fulfils (3). Then the following condition, known as the Zoutendijk condition, holds:
$$\sum_{k=0}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty.$$

Theorem 1.
Consider a CG method of the form (2) and (4), where $\beta_k$ is computed by (5) and the step-size $\alpha_k$ satisfies the exact minimization rule (3). Then the sufficient descent condition (6) holds for all $k \geq 0$.

Proof.
For $k = 0$ we have $d_0 = -g_0$, so $g_0^T d_0 = -\|g_0\|^2$ and (6) holds with $c = 1$. Let $k = 1$. Since the exact line search (3) gives $g_1^T d_0 = 0$, it follows from (4) that $g_1^T d_1 = -\|g_1\|^2 + \beta_1^{TM*} g_1^T d_0 = -\|g_1\|^2$. The same as the above proof, for the point $x_2$ we also have $g_2^T d_2 = -\|g_2\|^2$. Continuing in this manner, $g_k^T d_k = -\|g_k\|^2$ for all $k \geq 0$, so (6) holds for any $c \in (0, 1]$. The proof is completed. ∎

By Lemma 1 and using (8), we obtain the following convergence theorem.

Theorem 2
Suppose that Assumption 1 holds for any CG method of the form (2) and (4), where $\beta_k$ satisfies (8) and the step-size $\alpha_k$ is computed by the exact minimization rule (3). Then
$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof.
Suppose, by contradiction, that there exists a constant $\epsilon > 0$ such that $\|g_k\| \geq \epsilon$ for all $k \geq 0$. By Theorem 1, $g_k^T d_k = -\|g_k\|^2$, so Lemma 1 gives $\sum_{k=0}^{\infty} \|g_k\|^4 / \|d_k\|^2 < \infty$. On the other hand, bounding $\|d_k\|$ through (4), the estimate (8) and Assumption 1, in the manner of [30], yields $\sum_{k=0}^{\infty} \|g_k\|^4 / \|d_k\|^2 = \infty$, which is a contradiction. Hence, the proof is completed. ∎
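Before turning to the experiments, the bound (8) can also be checked empirically (ours, not the paper's): record $\beta_k^{TM*}$ along a run with exact steps on the placeholder quadratic from the earlier sketches and compare it against $2\|g_k\|^2 / \|g_{k-1}\|^2$, which should hold up to rounding.

```python
def check_bound_8(A, b, x0, tol=1e-8, max_iter=200):
    """Empirically verify 0 <= beta_k^{TM*} <= 2*||g_k||^2/||g_{k-1}||^2, i.e. (8)."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    d = -g
    ok = True
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))   # exact step (3)
        x = x + alpha * d
        g_new = A @ x - b
        beta = beta_tm_star(g_new, g, d)
        upper = 2.0 * (g_new @ g_new) / (g @ g)
        ok = ok and (-1e-10 <= beta <= upper + 1e-10)
        d = -g_new + beta * d
        g = g_new
    return ok

assert check_bound_8(A, b, np.zeros(3))
```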

NUMERICAL RESULTS
To illustrate the efficiency of the proposed $\beta_k^{TM*}$, we compare its performance with that of the FR, WYL and RMIL methods based on the number of iterations and CPU time. Table 3 displays the classical test problems, dimensions, and initial points considered in the experiments. Most of the selected test functions are from Andrei [6]. We choose $\epsilon = 10^{-6}$ and set the termination criterion to $\|g_k\| \leq \epsilon$, as suggested by Hillstrom [13]. Three random initial guesses are used, ranging from points near the solution to points far from it. All standard optimization test problems are tested in small to large-scale dimensions. If the line search fails to obtain a positive $\alpha_k$ in some cases, the computation is stopped [25,26]. The performance is displayed in Figure 1 and Figure 2 based on the performance profile introduced by [8].
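For readers reimplementing the comparison, the performance profile of [8] (Dolan and Moré) can be computed from a problems-by-solvers table of iteration counts or CPU times; the table below is a toy example with made-up numbers, purely to illustrate the construction.

```python
import numpy as np

def performance_profile(T):
    """T[p, s]: cost of solver s on problem p; np.inf marks a failure.

    Returns rho(s, tau): the fraction of problems that solver s solves
    within a factor tau of the best solver on that problem, as in [8].
    """
    ratios = T / T.min(axis=1, keepdims=True)
    def rho(s, tau):
        return float(np.mean(ratios[:, s] <= tau))
    return rho

# Toy table: 4 problems x 3 solvers (e.g. TM*, FR, WYL) -- illustrative only.
T = np.array([[10.0, 14.0, 12.0],
              [25.0, 30.0, np.inf],
              [ 8.0,  8.0,  9.0],
              [40.0, 55.0, 44.0]])
rho = performance_profile(T)
print(rho(0, 1.0), rho(0, 1.5))  # profile of solver 0 at tau = 1 and 1.5
```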

CONCLUSION
In this paper, we present a new modification of the CG coefficient that guarantees the sufficient descent condition and global convergence under the exact minimization rule. Numerical results on standard benchmark problems show that the proposed method is efficient and promising, as it performs better than the classical FR, WYL and RMIL methods in terms of both the number of iterations and CPU time.