A new self-scaling variable metric (DFP) method for unconstrained optimization problems

Abstract: In this study, a new self-scaling variable metric (VM) updating method for solving nonlinear unconstrained optimization problems is presented. The general strategy of the new VM update is to propose a new quasi-Newton condition used to update the usual DFP Hessian approximation a number of times, in a way specified at certain iterations, together with the preconditioned conjugate gradient (PCG) method, to improve the quality of the Hessian approximation. We show that the update produces a positive definite matrix. Experimental results indicate that the new suggested method is more efficient than the standard DFP method with respect to the number of function evaluations (NOF) and the number of iterations (NOI).


Introduction
In 1970 Broyden [1] introduced the quasi-Newton family of variable metric formulas, which is among the most efficient techniques for minimizing a nonlinear function h(x).
At each iteration one needs to update the iterate matrix H_k. Traditionally, H_{k+1} satisfies the following quasi-Newton equation: if H_k is to be viewed as an approximation to the inverse Hessian G^{-1}, it is natural to require that

H_{k+1} y_k = s_k,   (1)

where s_k = x_{k+1} − x_k, y_k = g_{k+1} − g_k, and g_k is the gradient of h evaluated at the current iterate x_k. One then computes the next iterate by

x_{k+1} = x_k + α_k d_k,  k = 0, 1, 2, …,   (2)

where d_k = −H_k g_k is the search direction and α_k > 0 is a step length satisfying the Wolfe conditions [8]:

h(x_k + α_k d_k) ≤ h(x_k) + c₁ α_k g_k^T d_k,   (3)
g(x_k + α_k d_k)^T d_k ≥ c₂ g_k^T d_k,   (4)

where c₁ ∈ (0, 1/2) and c₂ ∈ (0, 1). Now, having determined the point x_{k+1}, we obtain the improved inverse Hessian matrix H_{k+1} by merging the information generated in the last iteration. The matrix H_{k+1} is given by

H_{k+1} = H_k − (H_k y_k y_k^T H_k)/(y_k^T H_k y_k) + (s_k s_k^T)/(s_k^T y_k) + ϑ v_k v_k^T,   (7)

for the parameter vector

v_k = (y_k^T H_k y_k)^{1/2} [ s_k/(s_k^T y_k) − H_k y_k/(y_k^T H_k y_k) ],

where H_0 = I and ϑ ∈ [0, 1]. Different values of the scalar ϑ correspond to different members of Broyden's quasi-Newton family; note that if ϑ = 0 then equation (7) reduces to

H_{k+1} = H_k − (H_k y_k y_k^T H_k)/(y_k^T H_k y_k) + (s_k s_k^T)/(s_k^T y_k),   (8)

the standard DFP algorithm introduced by Davidon [13] and Fletcher and Powell [9]. In studying the theoretical behavior of this technique, Fletcher and Powell showed that, on a quadratic function with exact line searches, the standard DFP formula generates conjugate directions and hence minimizes a quadratic function in at most n iterations. Many modifications have been applied to QN methods in a bid to improve their performance. In 1974 Oren [11] developed the self-scaling VM algorithms; Oren's formula can be written as

H_{k+1} = γ_k [ H_k − (H_k y_k y_k^T H_k)/(y_k^T H_k y_k) + ϑ v_k v_k^T ] + (s_k s_k^T)/(s_k^T y_k),   (9)

where γ_k = (s_k^T y_k)/(y_k^T H_k y_k) and ϑ ∈ [0, 1]. Clearly, when γ_k = 1, formula (9) reduces to Broyden's class update defined in (7). Also, to improve the performance of VM updates, Biggs [6] proposed to choose B_{k+1} to satisfy the following modified equation: B_{k+1} s_k = t_k y_k, where t_k > 0 is a scaling parameter. The modified BFGS update is obtained by replacing y_k with t_k y_k in the standard BFGS formula, where

t_k = (6/(s_k^T y_k)) (h(x_k) − h(x_{k+1}) + s_k^T g_{k+1}) − 2.

Also, S. Shareef and A. Ibrahim [12] made a modification of the self-scaling symmetric rank-one update with the QN condition H_{k+1} ȳ_k = s_k, where ȳ_k = (1 + ρ_k(1 − φ)) y_k, with ρ_k ≥ 0, φ ∈ (0, 1), and ρ_k = (s_k^T y_k)/‖s_k‖².
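As a concrete illustration of the update formulas above, the standard DFP update (the ϑ = 0 member of the Broyden family) can be sketched in a few lines of NumPy. The function name `dfp_update` and the test vectors are our own illustrative choices, not part of the original paper; a minimal sketch only.

```python
import numpy as np

def dfp_update(H, s, y):
    """Standard DFP update of the inverse-Hessian approximation:

    H_new = H - (H y y^T H)/(y^T H y) + (s s^T)/(s^T y),

    i.e. the Broyden-family formula with parameter theta = 0.
    """
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (s @ y)

# The update reproduces the quasi-Newton condition H_new y = s.
H0 = np.eye(3)                    # H_0 = I, symmetric positive definite
s = np.array([1.0, 0.5, -0.2])    # step s_k = x_{k+1} - x_k
y = np.array([2.0, 1.0, 0.1])     # gradient difference, with s^T y > 0
H1 = dfp_update(H0, s, y)
print(np.allclose(H1 @ y, s))     # quasi-Newton condition holds
```

Because s^T y > 0 here, the resulting H1 is also symmetric positive definite, consistent with the classical DFP theory.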
We end this general introduction with the contents of this paper, which is organized as follows. In Section 2, we present the new method. In Section 3, numerical results, percentages, and a discussion are reported. In the final section, we present a conclusion. Throughout this paper, ‖·‖ denotes the Euclidean norm of a vector or matrix.

Derivation of New Self-scaling VM methods
In this section a new formula for a self-scaling VM method combined with the preconditioned conjugate gradient (PCG) method is presented. Further, Zhang et al. [4] and Zhang and Xu [5] extended this condition and derived a class of modified secant conditions with a vector parameter, of the form

B_{k+1} s_k = ỹ_k,   ỹ_k = y_k + (ϑ_k/(s_k^T u_k)) u_k,   (12)

where u_k is any vector satisfying s_k^T u_k > 0, and ϑ_k is defined by

ϑ_k = 6(h(x_k) − h(x_{k+1})) + 3(g_k + g_{k+1})^T s_k.   (13)

For the new method we have investigated a new expression for the QN condition as follows:

H_{k+1} ȳ_k = s_k,

where φ ∈ (0, 1) and ȳ_k is a modified difference vector built from y_k and s_k; for more details see [3]. Then the new self-scaling VM method becomes as in equation (14).

2.1 The Outlines of the New Self-Scaling VM Method with the PCG Method
Step (1): Set k = 0, select x_0 and a real symmetric positive definite matrix H_0 = I, and set ε = 10^{-5}.
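Since the exact form of the new update (14) is specific to this paper, the following sketch illustrates only the general shape of a self-scaling VM iteration: it uses Oren's scaling factor γ_k = (s_k^T y_k)/(y_k^T H_k y_k) from equation (9) and a simple Armijo backtracking line search as a stand-in for the full Wolfe conditions. All names here (`self_scaling_dfp`, the test quadratic) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def self_scaling_dfp(f, grad, x0, tol=1e-8, max_iter=500):
    """Self-scaling DFP iteration (Oren's gamma), sketch only."""
    H = np.eye(x0.size)              # H_0 = I, symmetric positive definite
    x = x0.astype(float)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                   # search direction d_k = -H_k g_k
        alpha = 1.0
        # Armijo backtracking: a cheap stand-in for the Wolfe conditions
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        s = alpha * d                # step s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                # gradient difference y_k
        sy = s @ y
        if sy > 1e-12:               # curvature condition keeps H positive definite
            Hy = H @ y
            gamma = sy / (y @ Hy)    # Oren's self-scaling factor
            H = gamma * (H - np.outer(Hy, Hy) / (y @ Hy)) + np.outer(s, s) / sy
        x, g = x_new, g_new
    return x

# Minimize the convex quadratic h(x) = 0.5 x^T A x - b^T x (minimizer solves A x = b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = self_scaling_dfp(lambda x: 0.5 * x @ A @ x - b @ x,
                         lambda x: A @ x - b, np.zeros(2))
```

On this quadratic the iteration drives the gradient norm below the tolerance and returns the solution of A x = b, in line with the conjugate-direction property of DFP-type updates on quadratics.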
Theorem 1: The matrix H_{k+1} generated by the new update (14) satisfies the modified quasi-Newton condition H_{k+1} ȳ_k = s_k.
Proof: Multiplying (14) by ȳ_k from the right, and noting that ȳ_k^T H_k ȳ_k and s_k^T ȳ_k are scalars, the terms collect so that H_{k+1} ȳ_k = s_k. The proof is complete. ∎
Theorem 2: In the new self-scaling VM method, if H_k is positive definite, then so is the matrix H_{k+1}.
Proof: We can write equation (14) in quadratic form: for an arbitrary vector x ≠ 0, consider x^T H_{k+1} x. By using the definitions of s_k and ȳ_k, we obtain an expression in which the scaling factor multiplies a DFP-type quadratic form. We know that s_k^T y_k and the function-value terms are positive and φ ∈ (0, 1), so the scaling factor is positive, and the fractional terms on the right-hand side of (18) are nonnegative (the first by the Cauchy–Schwarz inequality applied to H_k^{1/2} x and H_k^{1/2} ȳ_k).
Therefore, to show that x^T H_{k+1} x > 0 for x ≠ 0, we need only demonstrate that these terms do not both vanish simultaneously. The first term vanishes only if x and ȳ_k are proportional, that is, if x = λ ȳ_k for some scalar λ ≠ 0. To complete the proof it is then enough to note that if x = λ ȳ_k, the second term becomes λ²(s_k^T ȳ_k)²/(s_k^T ȳ_k) = λ² s_k^T ȳ_k > 0. Thus for all x ≠ 0, x^T H_{k+1} x > 0, and the proof is completed. ∎
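Theorem 2 can also be checked numerically: under the curvature condition s_k^T y_k > 0, a DFP-type update maps positive definite matrices to positive definite matrices. The randomized check below uses the standard DFP formula (8) as a stand-in, since the modified vector ȳ_k depends on the details of (14); the Cholesky factorization succeeds only for positive definite matrices, so it serves as the test.

```python
import numpy as np

rng = np.random.default_rng(0)
all_pd = True
for _ in range(200):
    M = rng.standard_normal((5, 5))
    H = M @ M.T + 5.0 * np.eye(5)        # random symmetric positive definite H_k
    s = rng.standard_normal(5)
    y = rng.standard_normal(5)
    if s @ y <= 0.1:                     # enforce the curvature condition s^T y > 0
        continue
    Hy = H @ y
    H1 = H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (s @ y)
    # Cholesky raises LinAlgError unless the (symmetrized) matrix is positive definite.
    try:
        np.linalg.cholesky((H1 + H1.T) / 2)
    except np.linalg.LinAlgError:
        all_pd = False
print(all_pd)
```

Every trial satisfying the curvature condition yields a positive definite H1, mirroring the argument in the proof: both nonnegative terms cannot vanish simultaneously.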

Numerical Results
This section is devoted to testing the implementation of the new VM method. The comparative tests involve well-known nonlinear problems (standard test functions) [7] given in the Appendix, with dimensions in the range 4 ≤ n ≤ 5000; all programs are written in FORTRAN95. The numerical results in Table (1) illustrate that the new VM method is more efficient than the standard DFP method with respect to NOI and NOF.

Table (1): Comparison of the standard DFP method and the new VM method (sample rows and grand totals).

n         50    100   500   1000  5000
DFP NOI   47    55    112   49    47
DFP NOF   114   132   255   119   114
New NOI   47    54    49    49    49
New NOF   113   131   117   119   119

Totals over all test functions — DFP: NOI 1928, NOF 8398; New method: NOI 1431, NOF 5571.

Conclusion
In this paper, we have offered a VM-type method for unconstrained optimization problems based on a modified quasi-Newton condition. We showed that the new method satisfies the modified quasi-Newton condition and preserves positive definiteness. It is clear from the numerical results that the new modified VM-updating formula improves on the standard DFP method by about 29% on average across NOI and NOF.