New low storage VM-algorithm for constrained optimization

In this paper a new low-storage VM (variable metric) algorithm for constrained optimization is investigated both theoretically and experimentally. The new algorithm is based on the well-known low-storage algorithm of Fletcher, in which the columns of a matrix Z are spanned by the gradient vectors g_1, g_2, ..., g_n, and on the idea of Buckley and LeNir of a combined variable-storage conjugate gradient method. The well-known SUMT algorithm is adapted to implement the new idea. The new algorithm is very robust compared with the standard low-storage Fletcher algorithm and the standard SUMT algorithm designed for solving constrained problems, and the numerical results of its application are very promising.


Interior point method (Barrier function):
In the interior penalty function method, the barrier term is defined to keep the solution from leaving the feasible region. Equality constraints cannot be handled directly in this method. Thus the original problem for this method is defined as follows:

    Minimize f(x)   subject to   c_i(x) >= 0,   for i = 1, ..., m        ...(1)

The barrier transformation adds a positive term to the objective:

    phi(x, mu) = f(x) + mu * sum_{i=1}^{m} 1/c_i(x)

where mu is a positive constant, known as the barrier parameter. It is chosen so that the inequality constraints are always satisfied (c_i(x) > 0). As we move closer to a constraint boundary, c_i(x) -> 0, causing a large term to be added to the objective function. Thus the method keeps the solution away from the constraint boundaries and hence is also known as the barrier function method (INT [1]).
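The inverse-barrier transformation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the names `f`, `c`, `barrier` and the one-dimensional test problem are assumptions introduced here.

```python
import numpy as np

def barrier(f, c, x, mu):
    """phi(x, mu) = f(x) + mu * sum_i 1/c_i(x); +inf outside the feasible region."""
    cx = np.asarray(c(x), dtype=float)
    if np.any(cx <= 0.0):          # barrier is only defined in the interior
        return np.inf
    return f(x) + mu * np.sum(1.0 / cx)

# Illustrative problem: minimize f(x) = x^2 subject to c(x) = x - 1 >= 0.
f = lambda x: float(x[0] ** 2)
c = lambda x: np.array([x[0] - 1.0])

print(barrier(f, c, np.array([2.0]), mu=0.1))   # 4 + 0.1/1 = 4.1
```

As the text says, the 1/c_i(x) term blows up near the boundary, so any descent method started in the interior stays feasible.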

Low storage methods:
Quasi-Newton (variable metric) methods, which are based on generating an approximation to the inverse of the Hessian matrix, require only the gradient of the objective function. They are advantageous due to their fast convergence and the absence of second-order derivative computations. Limited-memory quasi-Newton methods are known to be an effective technique for solving certain classes of large-scale unconstrained problems (Buckley and LeNir, 1983; Liu and Nocedal, 1989; Gilbert and Lemarechal, 1989). They make simple approximations of the Hessian matrix, which are often good enough to provide a fast rate of linear convergence, and they require minimal storage. For these reasons it is desirable to use limited-memory approximations for solving such problems (see Richard et al., 1990).
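As an illustration of the limited-memory idea, here is a sketch of the standard two-loop recursion of Liu and Nocedal (1989) for computing the quasi-Newton direction -H_k g_k from the last few stored pairs (s_i, y_i). It is the generic L-BFGS scheme, not the paper's specific variant.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns -H_k g using the stored pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (standard L-BFGS)."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    # Initial inverse-Hessian approximation H0 = gamma * I
    if s_list:
        s, y = s_list[-1], y_list[-1]
        gamma = s.dot(y) / y.dot(y)
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / y.dot(s)
        b = rho * y.dot(r)
        r += (a - b) * s
    return -r
```

With no stored pairs the direction reduces to steepest descent; with one pair the implicit H satisfies the secant condition H y = s exactly.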

Fletcher low storage algorithm:
The columns of Z span the gradient vectors g_1, g_2, ..., g_k. It is now possible to describe a BFGS-like low-storage method based on this information structure. Let storage for e vectors in Z be available. The method can be followed for e-1 iterations, after which it is possible to carry out PCG steps with H_e as the preconditioner for k >= e. These steps are continued, as in the Buckley-LeNir method, until some test is satisfied. Then Z is reset to g_k/||g_k|| and the whole process is restarted (Fletcher, 1990). The rate of improvement as e increases is rather slow, at which point the low-storage method does not perform as well as BFGS.
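The PCG phase can be illustrated with a preconditioned Fletcher-Reeves step, assuming the fixed matrix H plays the role of the preconditioner H_e above. The formula is the textbook preconditioned F/R recurrence, shown here only as a sketch; the function name and arguments are illustrative.

```python
import numpy as np

def pcg_direction(g_new, g_old, d_old, H):
    """One preconditioned Fletcher-Reeves CG step with fixed preconditioner H:
    beta = g_new' H g_new / g_old' H g_old,  d = -H g_new + beta * d_old."""
    beta = g_new.dot(H.dot(g_new)) / g_old.dot(H.dot(g_old))
    return -H.dot(g_new) + beta * d_old
```

With H = I this reduces to the ordinary Fletcher-Reeves direction update.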

Self -Scaling variable metric method:
This is a variable metric CG method developed by Buckley in 1978 for the first time; it combines the CG and QN methods in an attempt to provide their main advantages, i.e. the low storage requirements of CG methods and the rapid convergence of QN methods. The CG-QN algorithm is implemented to use a variable amount of storage depending on the available space, with a minimum requirement of storage locations. In order to reduce truncation and rounding errors, a new scalar parameter theta is added so that the method remains efficient as the problem dimension increases. Poor scaling is an imbalance between the values of the function and the changes in x: the function values may change very little even though x is changing significantly. This difficulty can sometimes be removed by a good scaling factor for the update of H, and the performance of the self-scaling method is undoubtedly favorable in some cases, especially when the number of variables is large (Scales, 1985).
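A self-scaling update of this kind can be sketched with the classical Oren-Luenberger factor theta = s'y / y'Hy standing in for the paper's scaling parameter (the paper's own theta is defined in its eq. (21) and is not reproduced here, so this choice is an assumption).

```python
import numpy as np

def self_scaling_bfgs(H, s, y):
    """Self-scaling BFGS update of the inverse-Hessian approximation H.
    theta = s'y / y'Hy is the classical Oren-Luenberger scaling factor,
    used here as a stand-in for the paper's parameter."""
    Hy = H.dot(y)
    yHy = y.dot(Hy)
    sy = s.dot(y)
    theta = sy / yHy                      # scaling factor (assumed form)
    return theta * (H - np.outer(Hy, Hy) / yHy) + np.outer(s, s) / sy
```

Whatever scalar is used for theta, the updated matrix still satisfies the secant condition H_{k+1} y = s, since the scaled part annihilates y.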

The derivation of new self-scaling parameter:
In QN methods the approximation H_k to the inverse of the Hessian can be selected to satisfy the QN condition, which can be written in the form H_{k+1} y_k = theta s_k, where theta is a scalar. We introduce suitable alternative equivalent scalars for this special case of the QN condition. The weakened form is due to the secant condition. Multiplying it by the vector g_k, we obtain the expression defined in (7); from this we know the expression defined in (8); but we also have the expression defined in (9).

New proposed low storage method:
Step 1: Find an initial approximation x_0 in the interior of the feasible region for the inequality constraints.
Step 4: Compute the search direction d_k, where the columns of Z are generated by g_1, g_2, ..., g_k.
Step 5: Set x_{k+1} = x_k + alpha_k d_k.
Step 6: If ... , go to step (6); else go to step (7).
Step 7: Check ...; if it holds then stop, else set e_k = ..., H_e = H_n, and go to step (11).
Step 8: If ...
Step 9: ...
Step 10: Check g ... >= 0; extend ...
Step 11: Update Z with the new scaling factor, which is defined in eq. (21).
Step 12: If the test is satisfied, go to step (2); else set k = k + 1, mu_{k+1} = mu_k / 10, and go to step (4).

Numerical results:
Several standard nonlinear constrained test functions were minimized to compare the new algorithm with the standard algorithms (see Appendix). FORTRAN programs were written to implement the suggested and previous algorithms. All numerical results quoted here were obtained using a Pentium 4 computer. In all cases the stopping criterion was taken to be .... All the algorithms in this paper use the same ELS, which is the cubic fitting technique fully described in (Bundy, 1989). The comparative performance of these algorithms is evaluated by considering NOF, NOI, and NOC for the following algorithms:
1. F/R low storage algorithm (1990).
2. New self-scaling F/R low storage algorithm.
In Table (1) we compare the F/R low storage algorithm with the new self-scaling F/R low storage algorithm.
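The outer SUMT structure of the proposed method, with the barrier parameter reduced by a factor of 10 at each cycle, can be sketched as follows. The inner low-storage VM minimizer is abstracted into a callback, and all names, defaults, and the convergence test are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sumt(f, c, x0, inner_minimize, mu0=1.0, shrink=10.0, tol=1e-8, max_outer=20):
    """SUMT outer loop: minimize the barrier function for decreasing mu.
    `inner_minimize(phi, x)` stands in for the low-storage VM inner solver."""
    x, mu = x0, mu0
    for _ in range(max_outer):
        # Inverse barrier function for the current mu
        phi = lambda z, mu=mu: f(z) + mu * np.sum(1.0 / c(z))
        x_new = inner_minimize(phi, x)
        if np.linalg.norm(x_new - x) < tol:   # convergence test (assumed form)
            return x_new
        x, mu = x_new, mu / shrink            # mu_{k+1} = mu_k / 10
    return x
```

Each outer cycle restarts the inner low-storage solver on a progressively less distorted barrier function, so the iterates approach the constrained minimizer from inside the feasible region.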
Step 5: x k+1 =x k +? k d k Step 6:if go to step (6) else go to step (7) Step 7: check , then stop else go to (7) e k = set H e =H n and go to step (11) Step 8: if Step 9: Step 10: check g ? ? 0 extend  Step 11: update Z to give with new scaling factor, which is define in eq. (21) Step Satisfied go to step (2) else set k=k+1 , 10 1 k k   = + and go to step (4) 6. Numerical results: Several standard nonlinear constrained test functions are minimized to compare the new algorithm with standard algorithm see (Appendix) with x c FORTRAN programs were written to implement the suggested and previous algorithms. All numerical results quoted here are obtained using (Pentium 4 computer). All cases the stopping criterion taken to be All the algorithms in his paper use the same ELS, which is the cubic fitting technique fully, described from (Bundy, 1989). The comparative performance for all these algorithms are evaluated by considering NOF , NOI and NOC , are considered as the comparative performance of the following algorithms: 1 F/R low storage algorithm (1990). 2 New selfscaling F/R low storage algorithm. In Table (1) we have compared between F/R low storage algorithm and the new selfscaling F/R low storage algorithm.