A New Method for Unconstrained Optimization Problem

Zhiguang Zhang
Department of Mathematics, Dezhou University, Dezhou, Shandong 253023, China
E-mail: zhiguangzhang@126.com

Abstract
This paper presents a new memory gradient method for unconstrained optimization problems. The method makes use of the current and previous multi-step iteration information to generate a new iterate and adds freedom in the choice of some parameters; it is therefore suitable for solving large-scale unconstrained optimization problems. Global convergence is proved under some mild conditions. Numerical experiments show that the algorithm is efficient in many situations.


Introduction
Consider the unconstrained optimization problem

    min f(x), x ∈ R^n,    (1)

where R^n is the n-dimensional Euclidean space and f : R^n → R^1 is a continuously differentiable function. Most of the well-known iterative algorithms for solving (1) take the form

    x_{k+1} = x_k + α_k d_k, k = 0, 1, 2, ...,

where d_k is a search direction of f at x_k and α_k is a step size. Many traditional methods for solving (1) are line search methods, such as the steepest descent method, Newton-type methods, and conjugate gradient methods. Generally, the conjugate gradient method is a useful technique for solving large-scale problems because it avoids the computation and storage of certain matrices. Memory gradient methods have these good qualities too (e.g., (Cantrell, J.W., 1969), (Miele, A. and Cantrell, J.W., 1969), (Yuan Yaxiang, Sun Wenyu, 1997), etc.).
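The iteration x_{k+1} = x_k + α_k d_k can be made concrete with its simplest instance: the steepest descent direction d_k = −g_k combined with a backtracking (Armijo) line search. The sketch below is illustrative only; the test function, tolerances, and backtracking parameters are our own choices, not values taken from the paper.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=1000):
    """Minimize f via x_{k+1} = x_k + alpha_k * d_k with d_k = -g_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # stopping test on the gradient
            break
        d = -g                           # steepest descent direction
        alpha, rho, c = 1.0, 0.5, 1e-4
        # Armijo backtracking: shrink alpha until sufficient decrease holds
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x = x + alpha * d
    return x

# Illustrative example: minimize the convex quadratic f(x) = ||x||^2
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x_star = steepest_descent(f, grad, np.array([3.0, -4.0]))
```

Any line search method of the form above differs only in how d_k and α_k are chosen; the memory gradient method of this paper changes the direction d_k while keeping this overall structure.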
In order to make full use of the current and previous multi-step iterative information, improve the capability of these methods, and guarantee their convergence, some scholars have studied memory gradient methods and super-memory gradient methods. These two classes of methods, like conjugate gradient methods, are suitable for solving large-scale optimization problems. They are more stable than conjugate gradient methods (e.g., (Cragg, E.E., and Levy, A.V., 1969), (Shi Zhenjun, and Shen J., 2005), (Shi Zhenjun, 2003), etc.), because they use more previous iterative information and add freedom in selecting parameters. Taking advantage of the line search rule presented in (Shi Zhenjun, and Shen J., 2005), this paper presents a new memory gradient method and proves its global convergence under some mild conditions.
The paper is organized as follows. Section 2 describes the new memory gradient algorithm. Section 3 analyzes the global convergence under some mild conditions.

New Memory Gradient Method
We assume that:
(H1) The objective function f(x) has a lower bound on the level set L_0 = {x ∈ R^n : f(x) ≤ f(x_0)}, where x_0 is given.
(H2) The gradient g(x) = ∇f(x) of f(x) is Lipschitz continuous on an open convex set B that contains L_0; that is, there exists a constant L > 0 such that ||g(x) − g(y)|| ≤ L||x − y|| for all x, y ∈ B.
Using the line search rule presented in (Shi Zhenjun, and Shen J., 2005), we present a new memory gradient algorithm.
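Before stating the algorithm, note that assumptions of this kind can be probed numerically for a concrete test function. The sketch below lower-bounds the Lipschitz constant of the gradient by sampling point pairs; the quadratic f(x) = ||x||^2 (gradient 2x, exact Lipschitz constant 2) is an illustrative choice of ours, not an example from the paper.

```python
import numpy as np

def estimate_lipschitz(grad, points):
    """Lower-bound the Lipschitz constant of grad over sampled pairs:
    any valid L must satisfy L >= ||g(x) - g(y)|| / ||x - y||."""
    L = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            x, y = points[i], points[j]
            den = np.linalg.norm(x - y)
            if den > 0:
                L = max(L, np.linalg.norm(grad(x) - grad(y)) / den)
    return L

# Illustrative: f(x) = ||x||^2 has gradient 2x, so every ratio equals 2
rng = np.random.default_rng(0)
pts = [rng.standard_normal(3) for _ in range(20)]
L_hat = estimate_lipschitz(lambda x: 2.0 * x, pts)
```

Sampling can only under-estimate the true constant, so a check like this refutes (H2) for a proposed L rather than proving it; for the quadratic above the estimate is exact.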
New Algorithm:
Step 1: If g_k = 0, then stop; else go to Step 2.
Step 2: Compute the search direction d_k from the current gradient g_k and the previous search directions.
Step 3: Compute the step size α_k by the line search rule and set x_{k+1} = x_k + α_k d_k.
Step 4: Set k := k + 1 and go to Step 1.
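The exact direction formula and the line search rule of (Shi Zhenjun, and Shen J., 2005) are not reproduced above, so the following sketch fills the four steps with generic stand-ins: a memory gradient direction d_k = −g_k + β_k d_{k−1} with a hypothetical bounded weight β_k chosen to preserve descent, and a standard Armijo backtracking line search. It illustrates the structure of such an algorithm, not the paper's precise method.

```python
import numpy as np

def memory_gradient(f, grad, x0, tol=1e-6, max_iter=5000):
    """Sketch of a memory gradient iteration: d_k mixes -g_k with d_{k-1}.

    The weight beta below is a hypothetical bounded choice that keeps
    g_k . d_k <= -0.5 ||g_k||^2, so d_k remains a descent direction;
    it is NOT the parameter rule of the paper.
    """
    x = np.asarray(x0, dtype=float)
    d_prev = None
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:                       # Step 1: stop if g_k = 0
            break
        if d_prev is None:
            d = -g                            # first step: steepest descent
        else:
            # Step 2: cap the memory term so descent is preserved
            beta = 0.5 * gnorm**2 / (abs(g.dot(d_prev)) + 1e-12)
            d = -g + min(beta, 1.0) * d_prev
        # Step 3: Armijo backtracking (stand-in for the rule of
        # Shi & Shen (2005), which is not reproduced here)
        alpha, rho, c = 1.0, 0.5, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x = x + alpha * d
        d_prev = d                            # Step 4: next iteration
    return x

# Illustrative use on a convex quadratic (not a test problem from the paper)
A = np.diag([1.0, 10.0])
f = lambda x: float(x @ (A @ x))
grad = lambda x: 2.0 * (A @ x)
x_min = memory_gradient(f, grad, np.array([2.0, 1.0]))
```

Only the previous direction d_{k−1} (a vector) is stored, so the memory cost stays O(n) per iteration, which is why such methods suit large-scale problems.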
Obviously, the algorithm has the important feature that both the search direction and the step size are determined at each iteration, which helps to find a more suitable search direction and step size. For simplicity, we denote g_k = g(x_k) and f_k = f(x_k). Therefore there exists an infinite subset.
By the mean value theorem, there exists.