Two New Extended PR Conjugate Gradient Methods for Solving Nonlinear Minimization Problems

In this paper we discuss and investigate two nonlinear extended PR-CG methods which use function and gradient values. The two new methods generalize the standard CG-methods and possess the sufficient descent and global convergence properties under certain conditions. We obtain encouraging numerical results by comparing the new methods with the PRCG method of Wu and Chen (2010).


Introduction
This paper considers the calculation of a local minimizer x*, say, of the problem min f(x), x in R^n, where f is a smooth nonlinear function of n variables whose gradient vector g(x) can be calculated but whose Hessian matrix is not available. At the current iterate x_k, the Conjugate Gradient (CG) method has the following form:

x_{k+1} = x_k + α_k d_k,    d_{k+1} = -g_{k+1} + β_k d_k,

where α_k is a step-length, d_k is a search direction, and β_k is a scalar conjugacy parameter.
Standard algorithms for solving this problem include CG-algorithms, which are iterative, generate a sequence of approximations to a minimizer of F(x), and have very low memory requirements. However, this paper considers a more general model than the usual quadratic function. Conditions (5) and (6) are called the "standard Wolfe" and "strong Wolfe" conditions, respectively. When quadratic functions and exact line searches are used, all of the formulas in (4) are equivalent; for general functions, however, these formulas vary. For general functions, [22] proved the global convergence of the PR method with exact line searches. On the other hand, the PR and HS methods perform similarly in terms of theoretical properties. Nevertheless, [16] showed that the PR and HS methods can cycle infinitely without approaching a solution, which implies that they are not globally convergent.
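The CG iteration just described is easy to state concretely. The following is a minimal NumPy sketch of the classical PR parameter β_k^{PR} = g_{k+1}^T(g_{k+1} - g_k)/||g_k||^2 and the direction recurrence; it implements only the standard formulas from the literature, not the paper's modified parameters.

```python
import numpy as np

def pr_beta(g_new, g_old):
    """Classical Polak-Ribiere parameter: g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2."""
    return float(g_new @ (g_new - g_old)) / float(g_old @ g_old)

def cg_direction(g_new, g_old, d_old):
    """Next CG search direction: d_{k+1} = -g_{k+1} + beta_k d_k."""
    return -g_new + pr_beta(g_new, g_old) * d_old
```

Note that when g_{k+1} = g_k (a tiny step), β_k^{PR} = 0 and the direction reduces to the steepest descent direction -g_{k+1}, which is exactly the self-restarting property of the PR family discussed below.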
In this paper we propose two new formulas that keep a key property of the PR method: if a very small step is generated, the next search direction tends to the Steepest Descent (SD) direction, preventing a sequence of tiny steps from occurring. Furthermore, finite quadratic termination is retained for the new methods. Since the sufficient descent condition is of great importance for the global convergence analysis of any CG-method, we have modified the conjugacy parameter of [21] to implement a nonquadratic rational model which satisfies the sufficient descent property and the modified Wolfe-Powell conditions introduced by Andrei [6]; we illustrate this condition in Section 4. In addition, the global convergence of the new proposed CG-methods is discussed, and a set of numerical results shows that the new proposed methods are efficient.

Extended CG-Methods For Non-Quadratic Models.
Over the years, various authors have published work in this area. In this paper, a model more general than the quadratic one is suggested as the basis for a CG-algorithm, in order to obtain a better global rate of convergence when minimization methods are applied to functions more general than quadratics. If q(x) is a quadratic function, then a function F(q(x)) is defined as a nonlinear scaling of q(x) if the invariancy condition (7) holds, where x* is the minimizer of q(x) with respect to x (for more details see [19]) and f is monotonically increasing. Such a model may represent the objective better and thus gives an advantage to methods based on it.
The following properties of F(q(x)) are immediately derived from the above condition:
• every contour line of q(x) is a contour line of F(q(x));
• if x* is a minimizer of q(x), then it is a minimizer of F(q(x)).
A CG-method that minimizes such a function in at most n steps has been described by Fried [11].
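The invariance of the minimizer under a monotone scaling can be checked numerically. The sketch below uses an arbitrarily chosen quadratic q and the monotone scaling F = exp(q); both are hypothetical illustrations, not the paper's rational model. Plain gradient descent on q and on F(q) recovers the same minimizer, since grad F = f'(q) grad q with f'(q) > 0, so the two gradients vanish at the same point.

```python
import numpy as np

# Hypothetical example: q(x) = 0.5 x^T A x - b^T x with a positive definite A,
# and F = exp(q) as a monotone increasing nonlinear scaling of q.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

q      = lambda x: 0.5 * x @ A @ x - b @ x
grad_q = lambda x: A @ x - b
grad_F = lambda x: np.exp(q(x)) * grad_q(x)   # chain rule: f'(q) = exp(q) > 0

def descend(grad, x, step=0.1, iters=2000):
    """Fixed-step gradient descent, sufficient for this small demonstration."""
    for _ in range(iters):
        x = x - step * grad(x)
    return x

x_q = descend(grad_q, np.zeros(2))   # minimizer of q
x_F = descend(grad_F, np.zeros(2))   # minimizer of F(q): the same point A^{-1} b
```

Both runs approach the unique minimizer A^{-1}b of q, illustrating the second bullet above.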

ERCG-Method (Al-Bayati,1993). [2]
Al-Bayati's (1993) non-quadratic model is defined as the quotient of two quadratic functions and thus also belongs to the class of rational functions. Al-Bayati's rational function model is given by:

The Special Cases.
In this paper we introduce two special cases of the extended CG-methods of Al-Bayati (1993) and Tassopoulos and Storey (1984), which are invariant to nonlinear scaling of quadratic rational functions. The first investigated model is defined as the quotient of two quadratic functions and therefore also belongs to the class of rational functions; this special case of Al-Bayati's rational function model is given by: ….. (11)
From (7), we can rewrite (11) as (12), where the function f is defined as a nonlinear scaling of q(x), the invariancy property (7) holds, and q(x) is the quadratic function defined in (9); the method then determines the minimizer x_min in a finite number of iterations not exceeding n. Applying the Boland theorem [10] to the one-dimensional problem along the search direction d_k, the updating process that converts the quadratic model to the non-quadratic model in (12) can be written as (13). From (12c) we have (14), and substituting (13) into (14) gives (15). Similarly, for the special case of the rational function of Tassopoulos and Storey [20] in (8a), we state the rational function of Al-Assady and Shakory [18], given by (16). From (12) we have (17), and substituting (16) into (17) gives (18). Equations (18) and (15) are the special cases of the rational function models of Tassopoulos and Storey (1984) and Al-Bayati (1993), respectively.

Two New Combinations of Rational Functions.
A. We introduce a combined rational function as a convex combination of the special case of Tassopoulos and Storey in equation (18) and Al-Bayati's (1993) rational function in equation (10); the non-quadratic model to be investigated here is given by:

B.
We introduce another combined rational function as a convex combination of the special case of Al-Bayati's model in (15) and Al-Bayati's model in (10); the non-quadratic model to be investigated here is given by:

Wu and Chen (2010) CG-Method.
In this section we present the recent work of Wu and Chen (2010), who introduced several well-known CG-formulas. The conjugacy parameters of these CG-methods are given, respectively, by making use of Powell's restarting criterion and an Armijo-type line search defined by: (22) They proved that all of these CG-methods satisfy the sufficient descent condition and have the global convergence property; for more details see [21].
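The exact search in (22) is not reproduced here, but a generic Armijo-type backtracking search of the kind Wu and Chen employ can be sketched as follows; this is a standard textbook variant, with assumed parameter values rho = 0.5 and c = 1e-4 rather than the paper's specific constants.

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, d, alpha0=1.0, rho=0.5, c=1e-4, max_iter=50):
    """Standard Armijo backtracking: shrink alpha until
    f(x + alpha d) <= f(x) + c * alpha * grad_f(x)^T d."""
    fx = f(x)
    slope = float(grad_f(x) @ d)   # directional derivative; negative for descent d
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c * alpha * slope:
            return alpha
        alpha *= rho               # backtrack
    return alpha
```

The loop always terminates with a step giving sufficient decrease whenever d is a descent direction, which is exactly why the sufficient descent condition matters for the convergence analysis below.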

A New Extended CG-Method.
4.1 Transforming the Quadratic Model to a Non-quadratic Model.
Consider the following quadratic model; proceeding as in [21] with (19)-(20), we obtain two new extended CG-methods whose conjugacy parameters are defined by:

An Acceleration Scheme of the Line Search Parameter.
In CG-methods the search directions tend to be poorly scaled, and as a consequence the line search must perform more function evaluations in order to obtain a suitable step-length α_k. To improve the performance of CG-methods, efforts have been directed toward direction-computation procedures based on second-order information. Nocedal [14] pointed out that in CG methods the step lengths may differ from 1 in a very unpredictable manner; they can be larger or smaller than 1 depending on how the problem is scaled. Numerical comparisons between CG methods and the limited-memory QN method by Liu and Nocedal [13] show that the latter is more successful [8]. Here we adopt Andrei's [7] acceleration scheme, which modifies the step length in a multiplicative manner to improve the reduction of the function values along the iterations [5,6].
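One common form of such a multiplicative acceleration can be sketched as follows. This is an interpretation of the idea under the assumption that the line-search step is rescaled by the minimizer of a quadratic model of phi(eta) = f(x + eta*alpha*d) built from two gradient evaluations; it is a sketch in the spirit of Andrei [7], not a verbatim reproduction of his scheme.

```python
import numpy as np

def accelerated_step(f_grad, x, d, alpha):
    """Rescale the step alpha*d by eta = -a/b, where a ~ phi'(0) and
    b ~ alpha^2 d^T H d is a finite-difference curvature estimate along d."""
    g  = f_grad(x)
    z  = x + alpha * d
    gz = f_grad(z)
    a = alpha * float(g @ d)          # phi'(0) = alpha g^T d  (< 0 for descent d)
    b = alpha * float((gz - g) @ d)   # ~ alpha^2 d^T H d, curvature along d
    if b > 0:                         # convex quadratic model: accept its minimizer
        eta = -a / b
        return x + eta * alpha * d
    return z                          # fall back to the unaccelerated step
```

For a quadratic objective the two-gradient model is exact, so the rescaled step lands on the exact line minimizer along d regardless of the trial alpha; for general functions it merely improves the reduction of f, which is the point of the scheme.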

Outline of the Two New Extended CG-Methods.
Step 1: Given an initial point x_1 and the parameters of the algorithm (including an index selecting which of the two new methods is used).
Step 2: Set k = 1.
Step 3: Using the modified Wolfe-Powell line search conditions fully described by Andrei (2009), determine the step length α_k; apply the acceleration scheme and compute the new iterate.
Step 4: Compute the gradient and the conjugacy parameter.
Step 5: If Powell's restarting criterion is satisfied, set the search direction to the steepest descent direction; otherwise compute it from (27). Go to Step 2.
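The outlined steps can be sketched as a driver loop. The sketch below substitutes the classical PR parameter and a simple Armijo backtracking search for the paper's β_New formulas and the modified Wolfe-Powell search with acceleration, which are not reproduced here; only the overall structure of Steps 1-5, including Powell's restart test, is illustrated.

```python
import numpy as np

def pr_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Skeleton of the outlined algorithm: PR direction update with Powell
    restarts and a backtracking line search standing in for Step 3."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                           # Step 2: initial SD direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:                  # converged
            break
        slope = float(g @ d)
        if slope >= 0:                               # safeguard: force a descent direction
            d, slope = -g, -float(g @ g)
        # Step 3 (simplified): Armijo backtracking for the step length alpha_k
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)                          # Step 4
        # Step 5: Powell restart if successive gradients are far from orthogonal
        if abs(g_new @ g) >= 0.2 * float(g_new @ g_new):
            d_new = -g_new
        else:
            beta = float(g_new @ (g_new - g)) / float(g @ g)   # classical PR
            d_new = -g_new + beta * d
        x, g, d = x_new, g_new, d_new
    return x
```

On a strongly convex quadratic this skeleton converges to the unique minimizer, which is the minimal sanity check for any implementation of the outline.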

4.2 Theoretical Properties of the Two New Extended CG-Methods.
In this section we focus on the convergence behavior of the β_k^{New1} and β_k^{New2} methods with inexact line searches. Hence, we make the following basic assumptions on the objective function.

Assumption.[21]
f is bounded below on the level set L_{x_0} = {x : f(x) ≤ f(x_0)}; f is continuously differentiable and its gradient ∇f is Lipschitz continuous on L_{x_0}, namely, there exists a constant L > 0 such that ||∇f(x) - ∇f(y)|| ≤ L ||x - y|| for all x, y in L_{x_0}.
The third term of the last equation can be simplified as shown. We now prove the theorem by contradiction, assuming that there exists a constant ε > 0 such that ||g_k|| ≥ ε for all k. For the initial direction we have d_1 = -g_1, so g_1^T d_1 = -||g_1||^2. Since our function f is uniformly convex, either in the quadratic or in the non-quadratic region, there exist a Lipschitz constant L > 0 and a constant μ > 0 satisfying the bounds above. For an inexact line search we use the Wolfe-Powell conditions (5) and (6) together with Powell's restarting criterion, defined in (45) [17]. Using (45) and multiplying the search direction in (27) by g_k^T, and again applying the Wolfe-Powell conditions (5) and (6), we conclude that the new proposed extended CG-methods have sufficient descent directions under inexact line searches, provided that Powell's restarting condition is used.
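The two ingredients of this argument, Powell's restarting criterion and the sufficient descent condition, can be stated as small standalone checks. The restart threshold ν = 0.2 and the descent constant c below are assumed typical values, not constants taken from this paper.

```python
import numpy as np

def powell_restart(g_new, g_old, nu=0.2):
    """Powell's restart test: restart (take the SD direction) when successive
    gradients are insufficiently orthogonal, |g_{k+1}^T g_k| >= nu * ||g_{k+1}||^2."""
    return abs(float(g_new @ g_old)) >= nu * float(g_new @ g_new)

def sufficient_descent(g, d, c=1e-4):
    """Sufficient descent condition: g_k^T d_k <= -c * ||g_k||^2."""
    return float(g @ d) <= -c * float(g @ g)
```

The steepest descent direction d = -g satisfies the condition trivially with c = 1, which is why a Powell restart immediately restores sufficient descent whenever the CG recurrence loses it.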

Theorem
Suppose that Assumption 4.3 holds, and consider the method (2)-(3) with the following three properties (for more details see [12]). The method then satisfies the conditions of Zoutendijk's theorem, and since the line search satisfies the strong Wolfe conditions, it follows from Gilbert and Nocedal [12] that the method is globally convergent.