DHO conjugate gradient method for unconstrained optimization

In this paper, we suggest a new conjugate gradient coefficient for nonlinear unconstrained optimization, obtained by combining two parameters: the Fletcher-Reeves (FR) parameter and the Conjugate Descent (CD) parameter. We establish a descent condition for the suggested method.


Introduction
Consider the unconstrained optimization problem:

$$\min f(x), \quad x \in \mathbb{R}^n, \qquad (1.1)$$

where $f: \mathbb{R}^n \to \mathbb{R}$ is a real-valued, continuously differentiable function. A nonlinear conjugate gradient method generates a sequence $\{x_k\}$, $k \ge 1$, starting from an initial guess $x_1 \in \mathbb{R}^n$, using the recurrence

$$x_{k+1} = x_k + \alpha_k d_k. \qquad (1.2)$$

The positive step size $\alpha_k$ is obtained by some line search, and $d_k$ is a search direction. Normally the search direction at the first iteration is the steepest descent direction, namely $d_1 = -g_1$, where $g_k = \nabla f(x_k)$, and the other search directions are defined as

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad (1.3)$$

where $\beta_k$ is a scalar parameter whose choice distinguishes the different conjugate gradient methods [8]. Classical choices include

$$\beta_k^{FR} = \frac{\lVert g_{k+1} \rVert^2}{\lVert g_k \rVert^2}, \qquad \beta_k^{PR} = \frac{g_{k+1}^T y_k}{\lVert g_k \rVert^2}, \qquad \beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad \beta_k^{CD} = -\frac{\lVert g_{k+1} \rVert^2}{d_k^T g_k},$$

where $y_k = g_{k+1} - g_k$ and $\lVert \cdot \rVert$ denotes the Euclidean norm of vectors. The most well-studied property of conjugate gradient methods is their global convergence. The convergence of conjugate gradient methods under different line searches has been studied by many authors, such as Gilbert and Nocedal [4], and Hestenes and Stiefel [5].
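As an illustration of the recurrence (1.2)-(1.3), the following minimal Fortran 90 sketch (the paper's implementation language) runs a conjugate gradient loop with the FR parameter. The Rosenbrock objective, the Armijo backtracking line search, the restart safeguard, and all tolerances here are our illustrative assumptions, not the paper's DHO method.

! Minimal nonlinear CG sketch following (1.2)-(1.3) with the FR parameter.
! Assumptions (not from the paper): Rosenbrock objective, Armijo
! backtracking line search, restart to steepest descent when needed.
program cg_sketch
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: n = 2
  real(dp) :: x(n), g(n), gnew(n), d(n), alpha, beta
  integer :: k

  x = (/ -1.2_dp, 1.0_dp /)                 ! standard Rosenbrock start
  call grad(x, g)
  d = -g                                    ! d_1 = -g_1 (steepest descent)
  do k = 1, 10000
     if (sqrt(sum(g*g)) <= 1.0e-5_dp) exit  ! stopping rule, as in Section 4
     if (sum(g*d) >= 0.0_dp) d = -g         ! safeguard: keep d a descent direction
     alpha = backtrack(x, g, d)             ! inexact (Armijo) line search
     x = x + alpha*d                        ! x_{k+1} = x_k + alpha_k d_k   (1.2)
     call grad(x, gnew)
     beta = sum(gnew*gnew)/sum(g*g)         ! beta^{FR} = ||g_{k+1}||^2 / ||g_k||^2
     d = -gnew + beta*d                     ! d_{k+1} = -g_{k+1} + beta_k d_k (1.3)
     g = gnew
  end do
  print *, 'iterations =', k-1, '  x =', x

contains

  function f(x) result(v)                   ! Rosenbrock test function
    real(dp), intent(in) :: x(:)
    real(dp) :: v
    v = 100.0_dp*(x(2)-x(1)**2)**2 + (1.0_dp-x(1))**2
  end function f

  subroutine grad(x, g)                     ! analytic gradient of f
    real(dp), intent(in)  :: x(:)
    real(dp), intent(out) :: g(:)
    g(1) = -400.0_dp*x(1)*(x(2)-x(1)**2) - 2.0_dp*(1.0_dp-x(1))
    g(2) =  200.0_dp*(x(2)-x(1)**2)
  end subroutine grad

  function backtrack(x, g, d) result(alpha) ! Armijo condition with step halving
    real(dp), intent(in) :: x(:), g(:), d(:)
    real(dp) :: alpha
    alpha = 1.0_dp
    do while (f(x+alpha*d) > f(x) + 1.0e-4_dp*alpha*sum(g*d))
       alpha = 0.5_dp*alpha
    end do
  end function backtrack

end program cg_sketch

Swapping the beta assignment for another formula (for example $\beta^{CD}$) yields the corresponding classical method.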
This paper is organized as follows: in Section 2, we suggest the new conjugate gradient method; in Section 3, we prove the descent condition of the new method; in Section 4, we report some numerical experiments with the new conjugate gradient method; in Section 5, we give the conclusion.

Proof:
The proof is by induction. The result clearly holds for $k = 0$, since $d_0 = -g_0$ gives $g_0^T d_0 = -\lVert g_0 \rVert^2 \le 0$. Now we prove that the search direction at the $(k+1)$-th iteration is a descent direction. From (1.3) and (2.5) we obtain an expression for $g_{k+1}^T d_{k+1}$, as written out below. If the step length $\alpha_k$ is chosen by an exact line search, which requires $g_{k+1}^T d_k = 0$, then the proof is complete. If the step length is chosen by an inexact line search, so that $g_{k+1}^T d_k \ne 0$, then since the (FR) parameter satisfies the descent condition, the resulting direction also satisfies the descent condition. $\blacksquare$
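For concreteness, substituting (1.3) into the inner product gives the generic identity (the specific form of $\beta_k$ from (2.5) is not reproduced here):

$$g_{k+1}^T d_{k+1} = g_{k+1}^T\left(-g_{k+1} + \beta_k d_k\right) = -\lVert g_{k+1} \rVert^2 + \beta_k\, g_{k+1}^T d_k,$$

so under an exact line search the second term vanishes and $g_{k+1}^T d_{k+1} = -\lVert g_{k+1} \rVert^2 \le 0$.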

Numerical Results
This section is devoted to testing the implementation of the new method. We compare our method with the FR and PR conjugate gradient methods; the comparative tests involve well-known nonlinear problems (standard test functions) with different dimensions $4 \le n \le 5000$. All programs are written in the FORTRAN90 language, and in all cases the stopping condition is $\lVert g_{k+1} \rVert \le 10^{-5}$. The results given in Table 1 report the number of function evaluations (NOF) and the number of iterations (NOI). The experimental results in Table 1 confirm that the new CG method is superior to the standard CG methods with respect to NOI and NOF.
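As a purely illustrative sketch of how the NOI/NOF counters and the stopping rule above can be instrumented (the quadratic objective and fixed step below are placeholders, not the paper's test problems or line search):

! Hypothetical sketch of NOI/NOF instrumentation with the stopping rule
! ||g|| <= 1e-5; the objective f(x) = 0.5*||x||^2 and the fixed step
! are placeholders for illustration only.
program counters
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer :: noi, nof
  real(dp) :: x(2), g(2)

  noi = 0; nof = 0
  x = (/ 3.0_dp, -4.0_dp /)
  g = x                                    ! gradient of f(x) = 0.5*||x||^2
  do while (sqrt(sum(g*g)) > 1.0e-5_dp)    ! stopping condition ||g_{k+1}|| <= 1e-5
     noi = noi + 1                         ! one NOI per outer iteration
     nof = nof + 1                         ! one NOF per (placeholder) evaluation
     x = x - 0.1_dp*g                      ! fixed-step steepest descent
     g = x
  end do
  print *, 'NOI =', noi, '  NOF =', nof
end program counters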

Conclusion
In this paper, we suggested a new conjugate gradient method for unconstrained optimization. The method was implemented and tested on low- and high-dimensional problems, and comparisons were made among different test functions with an inexact line search. Some of the numerical results have been reported. In future work, the new conjugate gradient method can be combined with other standard conjugate gradient methods to obtain a three-term conjugate gradient method [7].