A Simple SQP Algorithm for Constrained Finite Minimax Problems

A simple sequential quadratic programming (SQP) method is proposed to solve the constrained minimax problem. At each iteration, by introducing an auxiliary variable, the descent direction is obtained by solving only one quadratic programming subproblem. By solving a corresponding quadratic programming, a high-order correction direction is obtained, which avoids the Maratos effect. Furthermore, under some mild conditions, the global and superlinear convergence of the algorithm is established. Finally, the reported numerical results show that the proposed algorithm is effective.


Introduction
Consider the following constrained minimax optimization problem:

    min F(x)   s.t.  g_i(x) ≤ 0, i ∈ L,  h_l(x) = 0, l ∈ E,          (1)

where F(x) = max{ f_j(x) | j ∈ I = {1, 2, . . . , m} } and f_j(x), g_i(x), h_l(x) : R^n → R are continuously differentiable. The minimax problem is one of the most important nondifferentiable optimization problems, and it arises widely in applications (see, e.g., [1][2][3][4]). Many practical problems can be stated as minimax problems, for instance in financial decision making and engineering design, where one seeks to minimize the maximum of a collection of objective functions. Since the objective function F(x) is nondifferentiable, the classical methods for smooth optimization problems cannot be applied directly to solve such constrained optimization problems.
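As a concrete one-dimensional illustration (not taken from the paper), consider F(x) = max{x^2, (x - 2)^2}: each piece is smooth, but F has a kink where the pieces cross, so its derivative is undefined there. The sketch below verifies numerically that the one-sided slopes disagree at the crossing point x = 1.

```python
def F(x):
    """Minimax objective F(x) = max_j f_j(x) for a toy 1-D example."""
    return max(x**2, (x - 2.0)**2)

# The two pieces cross at x = 1; there the left derivative of F is
# d/dx (x-2)^2 = -2 while the right derivative is d/dx x^2 = +2.
eps = 1e-6
left = (F(1.0) - F(1.0 - eps)) / eps    # one-sided difference from the left
right = (F(1.0 + eps) - F(1.0)) / eps   # one-sided difference from the right
print(left, right)
```

The mismatched one-sided slopes (about -2 and +2) are exactly why a gradient-based smooth method cannot be applied to F directly.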
Generally speaking, many schemes have been proposed for solving minimax problems by converting (1) into the equivalent smooth constrained problem

    min_{(x,z)} z   s.t.  f_j(x) − z ≤ 0, j ∈ I,
                          g_i(x) ≤ 0, i ∈ L,  h_l(x) = 0, l ∈ E.      (2)

From problem (2), the KKT conditions of (1) can be stated as follows:

    Σ_{j∈I} λ_j ∇f_j(x) + Σ_{i∈L} μ_i ∇g_i(x) + Σ_{l∈E} ν_l ∇h_l(x) = 0,
    Σ_{j∈I} λ_j = 1,  λ_j ≥ 0,  λ_j (f_j(x) − F(x)) = 0, j ∈ I,       (3)
    μ_i ≥ 0,  μ_i g_i(x) = 0, i ∈ L,  h_l(x) = 0, l ∈ E,

where λ, μ, and ν are the corresponding multiplier vectors. Based on the equivalence between the K-T points of (2) and the stationary points of (1), many methods focus on finding a K-T point of (1), namely, solving (3), and many algorithms have been proposed to solve the minimax problem [5][6][7][8][9][10][11][12][13][14][15]. For instance, in [5][6][7][8] the minimax problems are treated with a nonmonotone line search, which can effectively avoid the Maratos effect. Combining trust-region methods with line-search and curve-search methods, Wang and Zhang [9] propose a hybrid algorithm for linearly constrained minimax problems. Many other effective algorithms for solving minimax problems are presented in, for example, [11][12][13][14][15]. The sequential quadratic programming (SQP) method is one of the most efficient algorithms for solving smooth constrained optimization problems because of its fast convergence rate; thus, it has been studied deeply and widely (see, e.g., [16][17][18][19][20]). For a typical SQP method, the standard search direction is obtained by solving the following quadratic programming:

    min_d  ∇F(x_k)^T d + (1/2) d^T H_k d
    s.t.   g_i(x_k) + ∇g_i(x_k)^T d ≤ 0, i ∈ L,                       (4)

where H_k is a symmetric positive definite matrix. Since the objective function F(x) contains the max operator, it is continuous but nondifferentiable even if every component function f_j(x) (j ∈ I) is differentiable; therefore this method may fail to reach an optimum of the minimax problem. In view of this, and combining with (2), one considers the following quadratic programming obtained by introducing an auxiliary variable z:

    min_{(d,z)}  z + (1/2) d^T H_k d
    s.t.  f_j(x_k) + ∇f_j(x_k)^T d − F(x_k) ≤ z, j ∈ I,              (5)
          g_i(x_k) + ∇g_i(x_k)^T d ≤ 0, i ∈ L.

However, it is well known that the solution of (5) may not be a feasible descent direction and cannot avoid the Maratos effect.
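The epigraph reformulation (2) can be illustrated on the toy instance F(x) = max{x^2, (x - 2)^2}. The sketch below (our own illustration, using scipy's SLSQP solver rather than the paper's method) minimizes the auxiliary variable z subject to f_j(x) ≤ z; the minimizer is x = 1 with optimal value F(1) = 1.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: minimize F(x) = max{x^2, (x-2)^2} via the smooth
# epigraph reformulation (2): min z  s.t.  f_j(x) - z <= 0.
f = [lambda v: v[0]**2, lambda v: (v[0] - 2.0)**2]

def objective(v):
    # v = (x, z); we minimize the auxiliary variable z.
    return v[1]

# scipy 'ineq' constraints require fun(v) >= 0, i.e. z - f_j(x) >= 0.
cons = [{'type': 'ineq', 'fun': (lambda v, fj=fj: v[1] - fj(v))} for fj in f]

res = minimize(objective, x0=np.array([0.0, 5.0]),
               constraints=cons, method='SLSQP')
x_star, z_star = res.x
print(x_star, z_star)   # optimum at x = 1 with F(1) = 1
```

Note that the solver never touches the nonsmooth max operator: the kink of F has been traded for the two smooth inequality constraints.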
Recently, many researchers have extended the popular SQP scheme to minimax problems (see [21][22][23][24][25][26], etc.). Jian et al. [22] and Q.-J. Hu and J.-Z. Hu [23] perform a pivoting operation to generate an ε-active constraint subset associated with the current iteration point. At each iteration of their proposed algorithms, a main search direction is obtained by solving a reduced quadratic program which always has a solution.
The method of feasible directions (MFD) (see [27, 28], etc.) is another effective way to solve smooth constrained optimization problems. An advantage of MFD over the classical SQP method is that a feasible descent direction can be obtained by solving only one quadratic programming. In this paper, to obtain a feasible descent direction and to reduce the computational cost, we construct a new quadratic programming subproblem. Suppose x_k is the current iteration point; at each iteration, the descent direction d_k is obtained by solving the following quadratic programming subproblem:

    min_{(d,z)}  z + (1/2) d^T H_k d
    s.t.  f_j(x_k) + ∇f_j(x_k)^T d − F(x_k) ≤ z, j ∈ I,              (6)
          g_i(x_k) + ∇g_i(x_k)^T d ≤ z, i ∈ L,

where H_k is a symmetric positive definite matrix and z is an auxiliary variable. In order to avoid the Maratos effect, a high-order correction direction is computed from the corresponding quadratic programming, in which the constraint functions are evaluated at x_k + d_k:

    min_{(d,z)}  z + (1/2) d^T H_k d
    s.t.  f_j(x_k + d_k) + ∇f_j(x_k)^T d − F(x_k) ≤ z, j ∈ I,        (7)
          g_i(x_k + d_k) + ∇g_i(x_k)^T d ≤ z, i ∈ L.

Under suitable conditions, the theoretical analysis shows that the convergence of our algorithm can be obtained. The plan of the paper is as follows. The algorithm is proposed in Section 2. In Section 3, we show that the algorithm is globally convergent, while the superlinear convergence rate is analyzed in Section 4. Finally, some preliminary numerical results are reported in Section 5.
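The flavor of a direction-finding subproblem of type (6) can be sketched as follows. This is a simplified illustration with names of our own choosing (no general constraints g_i, identity H_k, scipy's SLSQP as the inner QP solver), not the paper's implementation: at x_k we minimize z + (1/2) d^T H d subject to the linearized max-function constraints.

```python
import numpy as np
from scipy.optimize import minimize

def qp_direction(xk, fs, grads, H):
    """Solve a QP in the spirit of subproblem (6), without g-constraints:
         min_{d,z}  z + 0.5 d^T H d
         s.t.       f_j(xk) + grad f_j(xk)^T d - F(xk) <= z,  all j.
       Returns the search direction d and the auxiliary variable z."""
    Fk = max(fj(xk) for fj in fs)
    n = len(xk)

    def obj(v):
        d, z = v[:n], v[n]
        return z + 0.5 * d @ H @ d

    # 'ineq' means fun(v) >= 0, i.e. z - (f_j + grad_j^T d - Fk) >= 0.
    cons = [{'type': 'ineq',
             'fun': (lambda v, fj=fj, gj=gj:
                     v[n] - (fj(xk) + gj(xk) @ v[:n] - Fk))}
            for fj, gj in zip(fs, grads)]

    res = minimize(obj, np.zeros(n + 1), constraints=cons, method='SLSQP')
    return res.x[:n], res.x[n]

# Toy usage: f1 = x^2, f2 = (x-2)^2 at xk = 0, where f2 is active.
fs = [lambda x: x[0]**2, lambda x: (x[0] - 2.0)**2]
grads = [lambda x: np.array([2.0 * x[0]]),
         lambda x: np.array([2.0 * (x[0] - 2.0)])]
d, z = qp_direction(np.array([0.0]), fs, grads, np.eye(1))
print(d, z)   # here d = 1 (pointing toward the minimizer x = 1), z = -4
```

A negative optimal z signals that d is a descent direction for F; z = 0 with d = 0 corresponds to a stationary point, matching the stopping test of the algorithm.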

Description of the Algorithm
Now we state our algorithm as follows.
Step 3 (the line search). A merit function is defined as follows:

    θ(x) = F(x) + ρ P(x),

where P(x) = max{ g_i(x), i ∈ L; 0 } and ρ is a suitably large positive scalar.
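The role of the merit function in Step 3 can be sketched with a standard Armijo-type backtracking line search. This is a generic sketch under our own naming (step contraction factor beta, sufficient-decrease constant sigma); the paper's exact acceptance test may differ in detail.

```python
import numpy as np

def merit(x, F, gs, rho):
    """theta(x) = F(x) + rho * P(x), where P(x) = max{g_i(x), i in L; 0}."""
    P = max([g(x) for g in gs] + [0.0])
    return F(x) + rho * P

def armijo(x, d, F, gs, rho, beta=0.5, sigma=1e-4):
    """Backtracking: return the largest t in {1, beta, beta^2, ...} giving
       sufficient decrease of the merit function along direction d."""
    theta0 = merit(x, F, gs, rho)
    t = 1.0
    while merit(x + t * d, F, gs, rho) > theta0 - sigma * t * np.dot(d, d):
        t *= beta
        if t < 1e-12:   # safeguard against an endless loop
            break
    return t

# Toy usage: F(x) = max{x^2, (x-2)^2}, one constraint g(x) = -x <= 0,
# current point x = 0.5, trial direction d = +0.4 (toward the minimizer).
F = lambda x: max(x[0]**2, (x[0] - 2.0)**2)
gs = [lambda x: -x[0]]
t = armijo(np.array([0.5]), np.array([0.4]), F, gs, rho=10.0)
print(t)   # the full step t = 1 already gives sufficient decrease here
```

Because the penalty term P vanishes at feasible points, the merit function reduces to F there, so accepted steps decrease the minimax objective itself.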

Global Convergence of the Algorithm
In this section, we analyze the global convergence of the algorithm; for convenience, some notation is introduced first. The following general assumptions are assumed to hold throughout this paper.
The set of gradients of the active constraint functions at each feasible point is linearly independent.
Secondly, we show that x_k is a K-T point of problem (1) when z_k = 0. From problem (6), the K-T condition at (d_k, z_k) can be written out. If z_k = 0, then d_k = 0, and according to the definition of the correction direction in Step 4, it also vanishes. It then follows that x_k satisfies the K-T conditions (3) of problem (1); that is, the stated results hold.
In the remainder of this section, we establish the global convergence of the algorithm. Since the sequence {x_k, d_k, z_k, H_k} is bounded under all the above-mentioned assumptions, we may assume without loss of generality that there exist an infinite index set K and a constant z* such that (30) holds. Proof. The first statement is obvious, since the only stopping point is in Step 1. Thus, assume that the algorithm generates an infinite sequence {x_k} and that (30) holds. The cases z* = 0 and z* > 0 are considered separately.

Rate of Convergence
In this section, we analyze the convergence rate of the algorithm. For this purpose, we add the following stronger regularity assumptions.
To obtain the superlinear convergence rate of the proposed algorithm, the following additional assumption is necessary.

Numerical Experiments
In this section, we select several problems to demonstrate the efficiency of the algorithm of Section 2. Some preliminary numerical experiments were run on a computer with an Intel(R) Celeron(R) 2.40 GHz CPU. The proposed algorithm was coded in MATLAB 7.0, using the Optimization Toolbox to solve the quadratic programs (6) and (7). The results show that the proposed algorithm is efficient.
The matrix H_k is updated by the BFGS formula [16]. In the implementation, the stopping criterion of Step 1 is changed to: if ‖d_k^0‖ ≤ 10^−6, STOP. The algorithm has been tested on some problems from [10, 11, 26]. The results are summarized in Table 1. The columns of this table have the following meanings: Number: the number of the test problem in [10, 11] or [26];
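The BFGS update of H_k referenced above can be sketched as follows. This sketch adds Powell's damping, a common safeguard in SQP implementations that keeps H_k positive definite even when the raw curvature condition s^T y > 0 fails; the paper's exact update variant may differ.

```python
import numpy as np

def damped_bfgs(H, s, y):
    """Powell-damped BFGS update of a symmetric positive definite H.
       s = x_{k+1} - x_k;  y = difference of (Lagrangian) gradients."""
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    # Powell's damping: blend y with H s so the curvature condition holds.
    theta = 1.0 if sy >= 0.2 * sHs else 0.8 * sHs / (sHs - sy)
    r = theta * y + (1.0 - theta) * Hs
    return H - np.outer(Hs, Hs) / sHs + np.outer(r, r) / (s @ r)

# Toy usage: one update step starting from the identity matrix.
H = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([0.5, 0.1])
H_new = damped_bfgs(H, s, y)
print(np.linalg.eigvalsh(H_new))   # both eigenvalues remain positive
```

Positive definiteness of every H_k is exactly what subproblems (6) and (7) require for the QPs to be strictly convex.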

Concluding Remarks
In this paper, we propose a simple feasible sequential quadratic programming algorithm for inequality constrained minimax problems. With the help of the method-of-feasible-directions technique, at each iteration a main search direction is obtained by solving only one reduced quadratic programming subproblem. A correction direction is then obtained by solving another quadratic programming, in order to avoid the Maratos effect and to guarantee superlinear convergence under mild conditions. The preliminary numerical results also show that the proposed algorithm is effective.
As future work, the main search direction could be computed by other techniques, for example, the sequential systems of linear equations technique, and the strict complementarity assumption could possibly be removed.