SECOND ORDER OPTIMALITY CONDITIONS AND REFORMULATIONS FOR NONCONVEX QUADRATICALLY CONSTRAINED QUADRATIC PROGRAMMING PROBLEMS
ZIYE SHI AND QINGWEI JIN

Abstract. In this paper, we present an optimality condition which can determine whether a given KKT solution is globally optimal. This condition is equivalent to determining whether the Hessian of the corresponding Lagrangian is copositive over a set. To find the corresponding Lagrangian multiplier, two linear conic programming problems are constructed and then relaxed for computational purposes. Under the new condition, we propose a local search based scheme to find a global optimal solution and show its effectiveness on three examples.

1. Introduction. In this paper, we discuss quadratically constrained quadratic programming problems (QP in short) of the following form:

min F(x) = (1/2) x^T Q x + c^T x
s.t. G_i(x) = (1/2) x^T Q_i x + c_i^T x ≤ b_i, i = 1, ..., m,

where Q and Q_i are n × n real symmetric matrices, and c and c_i are real vectors in R^n. The QP problem is NP-hard in general and cannot be solved in polynomial time unless P = NP [7]. Even for some important special cases with convex feasible domains, such as box-constrained QP problems and standard quadratic programming problems, QP remains NP-hard [14].
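To fix notation, the following is a minimal numerical sketch of evaluating this QP form; the 2-dimensional data below is hypothetical and chosen only for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-dimensional instance of the QP form above:
#   min  F(x) = 1/2 x^T Q x + c^T x
#   s.t. G_i(x) = 1/2 x^T Q_i x + c_i^T x <= b_i,  i = 1, ..., m
Q = np.array([[2.0, 0.0], [0.0, -2.0]])   # indefinite, so the objective is nonconvex
c = np.array([-1.0, 0.0])
Qs = [np.eye(2)]                          # a single quadratic (ball) constraint
cs = [np.zeros(2)]
b = [2.0]

def F(x):
    """Objective value F(x)."""
    return 0.5 * x @ Q @ x + c @ x

def is_feasible(x, tol=1e-9):
    """Check G_i(x) <= b_i for every constraint."""
    return all(0.5 * x @ Qi @ x + ci @ x <= bi + tol
               for Qi, ci, bi in zip(Qs, cs, b))

x = np.array([1.0, 1.0])
print(F(x), is_feasible(x))
```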
There has been extensive research on nonconvex quadratic programming problems. For example, based on the canonical duality theory [5], Gao proposes a positive semidefinite condition (Theorem 2 in [6]) that provides an efficient way to verify the global optimality of a KKT solution. The canonical duality theory is also extended in [13, 20], where a more general global optimality condition for 0-1 quadratic programming is proposed. Further global optimality conditions based on the S-Lemma and L-subdifferentials have also been studied. Throughout this paper, Convex(G) denotes the convex hull of a set G, Cone(G) the convex conic hull of G, and Closure(G) the closure of G.
The Lagrangian function for QP is

L(x, λ) = F(x) + Σ_{i=1}^m λ_i ((1/2) x^T Q_i x + c_i^T x − b_i),

where λ ∈ R^m_+. By definition, for any x ∈ F and λ ∈ R^m_+, since λ_i ((1/2) x^T Q_i x + c_i^T x − b_i) ≤ 0 for any i = 1, 2, ..., m, we know L(x, λ) ≤ F(x). Let I(x*) = {i : (1/2) x*^T Q_i x* + c_i^T x* = b_i} be the active constraint set at x*. Moreover, we assume that the following condition holds:

Condition 1 (Linear Independence Constraint Qualification). The vectors {∇G_i(x*)}_{i∈I(x*)} are linearly independent.
For a QP problem, the KKT condition can be stated as follows: for a local optimal solution x* at which the Linear Independence Constraint Qualification is satisfied, there exists a Lagrangian multiplier λ* ∈ R^m_+ such that

(Q + Σ_{i=1}^m λ*_i Q_i) x* + c + Σ_{i=1}^m λ*_i c_i = 0,  λ*_i (G_i(x*) − b_i) = 0, i = 1, ..., m.

This provides a necessary condition of local optimality for a feasible solution x*. In this paper, unless otherwise specified, we always assume the Linear Independence Constraint Qualification is satisfied.
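The KKT conditions can be checked numerically for a candidate pair (x*, λ*). The sketch below (the helper name and the one-dimensional toy instance are ours, not from the paper) returns the largest violation of stationarity, primal feasibility, dual feasibility and complementarity.

```python
import numpy as np

def kkt_residual(x, lam, Q, c, Qs, cs, b):
    """Return the maximum violation of the KKT conditions at (x, lam):
    stationarity (Q + sum_i lam_i Q_i) x + c + sum_i lam_i c_i = 0,
    plus primal feasibility, dual feasibility and complementarity."""
    H = Q + sum(l * Qi for l, Qi in zip(lam, Qs))
    g = c + sum(l * ci for l, ci in zip(lam, cs))
    stat = np.linalg.norm(H @ x + g, np.inf)
    Gvals = [0.5 * x @ Qi @ x + ci @ x - bi for Qi, ci, bi in zip(Qs, cs, b)]
    primal = max([0.0] + [gv for gv in Gvals])     # positive part of G_i(x) - b_i
    dual = max([0.0] + [-l for l in lam])          # violation of lam_i >= 0
    compl = max([0.0] + [abs(l * gv) for l, gv in zip(lam, Gvals)])
    return max(stat, primal, dual, compl)

# Hypothetical toy instance: min 1/2 x^2 - x  s.t.  1/2 x^2 <= 1/2 (i.e. |x| <= 1).
Q, c = np.array([[1.0]]), np.array([-1.0])
Qs, cs, b = [np.array([[1.0]])], [np.array([0.0])], [0.5]
# x* = 1 is active; stationarity (1 + lam) * 1 - 1 = 0 holds with lam = 0.
print(kkt_residual(np.array([1.0]), [0.0], Q, c, Qs, cs, b))
```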
By the complementarity condition, L(x*, λ*) = F(x*). Based on classic optimization theory, for convex QP cases with a nonempty convex feasible domain, the KKT condition is also a sufficient global optimality condition. This classic result can be interpreted for QP as follows: since Q, Q_1, ..., Q_m are all positive semidefinite for convex cases, Q + Σ_{i=1}^m λ*_i Q_i is positive semidefinite, so the KKT solution x* is a global minimizer of L(x, λ*); combined with L(x, λ*) ≤ F(x) on F and L(x*, λ*) = F(x*), this yields the global optimality of x*. The above intuitive interpretation can be stated as the following theorem:

Theorem 2.1. Let x* be a KKT solution of the given QP with a corresponding Lagrangian multiplier λ*. If x* is a global minimizer of L(x, λ*) on F, then x* is also a global minimizer of F(x) on F.
Note that Theorem 2.1 does not assume the convexity of QP. Indeed, this result also holds in general. We define the condition of Theorem 2.1 as the basic condition of this paper.

Condition 2 (Basic Condition). The QP problem has a KKT solution x* with a Lagrangian multiplier λ* such that x* is a global optimal solution of min_{x∈F} L(x, λ*).

Remark 1.
Although the Basic Condition is a version of the classic saddle point condition, it is not easy to derive an algorithm for solving QP from this condition directly.
Based on Theorem 2.1, the following positive semidefinite condition can be derived directly.

Corollary 2.2 (see [6, 9, 15]). Let x* be a KKT solution of the given QP with a corresponding Lagrangian multiplier λ*. If Q + Σ_{i=1}^m λ*_i Q_i is positive semidefinite, then x* is a global optimal solution of QP.
Proof. It is easy to verify that x* is a global minimizer of L(x, λ*) on F. By Theorem 2.1, x* is a global optimal solution of QP.

Corollary 2.2 can be found in several articles, e.g., Theorem 2 of [6], Proposition 3.2 of [9], and [15] for QP cases with two quadratic constraints. Although the condition in Corollary 2.2 is just a special case of the Basic Condition with less generality, there exist special polynomial-time algorithms for solving cases that satisfy this condition (see [6]). In this sense, the main contribution of this stronger global optimality condition is that it uncovers a computable class of nonconvex QP problems. In this paper, we will derive a new computable class, which is more general than the above class.

Now, we transform the Basic Condition into another version. Note that the KKT solution x* is an optimal solution of min_{x∈F} L(x, λ*) if and only if L(x, λ*) − L(x*, λ*) ≥ 0 for any x ∈ F. Let d = x − x*. Since ∇_x L(x*, λ*) = 0 by the KKT conditions, we can verify

L(x, λ*) − L(x*, λ*) = (1/2) d^T (Q + Σ_{i=1}^m λ*_i Q_i) d.

Denote T(x*) = Cone({x − x* : x ∈ F}). We have the following theorem.
Theorem 2.3. Let x* be a KKT solution of the given QP with a corresponding Lagrangian multiplier λ*. If Q + Σ_{i=1}^m λ*_i Q_i is copositive over T(x*), then (x*, λ*) satisfies the Basic Condition. Besides, for cases where F is convex, the two conditions are equivalent.
Besides, for cases where F is convex, it is easy to verify that for any nonzero vector d ∈ T(x*), x* + td ∈ F for sufficiently small t > 0, so the Basic Condition in turn implies the copositivity of Q + Σ_{i=1}^m λ*_i Q_i over T(x*). Theorem 2.3 thus provides a stronger global optimality condition, and shows that for cases with a closed convex feasible domain, the Basic Condition is equivalent to the copositivity of the Hessian matrix of L(x, λ*) over T(x*). Hence, verifying the global optimality of a KKT solution can be based on verifying matrix copositivity. Besides, it is easy to show that the condition in Theorem 2.3 is weaker than that in Corollary 2.2.
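The stronger condition of Corollary 2.2 is cheap to test numerically, since positive semidefiniteness reduces to an eigenvalue computation. A minimal sketch with hypothetical data (the function name and the instance are ours):

```python
import numpy as np

def psd_certificate(lam, Q, Qs, tol=1e-9):
    """Check the sufficient condition of Corollary 2.2: the Lagrangian
    Hessian Q + sum_i lam_i Q_i is positive semidefinite, i.e. its
    smallest eigenvalue is >= -tol."""
    H = Q + sum(l * Qi for l, Qi in zip(lam, Qs))
    return bool(np.linalg.eigvalsh(H).min() >= -tol)

# Hypothetical data: an indefinite Q "convexified" by a large enough multiplier.
Q = np.array([[1.0, 0.0], [0.0, -1.0]])
Qs = [np.eye(2)]
print(psd_certificate([2.0], Q, Qs))   # H = diag(3, 1) is PSD
print(psd_certificate([0.5], Q, Qs))   # H = diag(1.5, -0.5) is not
```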
In the remaining part of the paper, we call the condition in Theorem 2.3 the Copositivity Condition.

Condition 3 (Copositivity Condition). The QP problem has a KKT solution x* with a Lagrangian multiplier λ* such that Q + Σ_{i=1}^m λ*_i Q_i is copositive over T(x*).

A direct consequence can be obtained as follows.
Theorem 2.4. Let x* be a KKT solution of the given QP with a corresponding Lagrangian multiplier λ* satisfying the Copositivity Condition. Then x* is a global optimal solution of QP. Moreover, if Q + Σ_{i=1}^m λ*_i Q_i is strictly copositive over T(x*), then x* is the unique local optimal solution of L(x, λ*) on Convex(F).

Proof. The first part of the theorem follows directly from Theorems 2.1 and 2.3. We only prove the second part.
Suppose, to the contrary, that some x ∈ Convex(F) with x ≠ x* is a local optimal solution of L(x, λ*) on Convex(F). Let d = x − x* ∈ T(x*) and f(t) = L(x* + td, λ*). Since ∇_x L(x*, λ*) = 0, we have f(t) = L(x*, λ*) + (t²/2) d^T (Q + Σ_{i=1}^m λ*_i Q_i) d, which, by strict copositivity, is continuous and strictly increasing on [0, 1]; hence f(1) = L(x, λ*) cannot be a local minimum. Therefore x* is the unique local optimal solution of L(x, λ*) on Convex(F).
Thus, if Q + Σ_{i=1}^m λ*_i Q_i is strictly copositive over T(x*), then the unique local optimal solution of L(x, λ*) over Convex(F) is also a global optimal solution. Hence, once the reformulated problem min_{x∈Convex(F)} L(x, λ*) is obtained, it can be solved locally using any conventional optimization technique, such as the gradient projection method, to obtain a global optimal solution of the original QP problem. The main question is how to obtain the reformulated problem numerically, in order to solve the original problem globally.
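Once λ* is available, the local search itself is routine. The following is a minimal sketch of the gradient projection method for min L(x, λ*), taking Convex(F) to be a box for illustration; the Hessian H, gradient g and box bounds are hypothetical stand-ins, not data from the paper.

```python
import numpy as np

def gradient_projection_box(H, g, lo, hi, x0, step=None, iters=2000):
    """Minimize 1/2 x^T H x + g^T x over the box [lo, hi]^n by projected
    gradient descent -- a simple stand-in for the local search step.
    H may be indefinite; a safe step size is derived from ||H||_2."""
    if step is None:
        step = 1.0 / (np.linalg.norm(H, 2) + 1.0)
    x = x0.astype(float)
    for _ in range(iters):
        x = np.clip(x - step * (H @ x + g), lo, hi)   # gradient step, then project
    return x

# Hypothetical Lagrangian Hessian and gradient (indefinite H, box [0, 1]^2):
H = np.array([[2.0, 0.0], [0.0, -1.0]])
g = np.array([-1.0, 0.25])
x = gradient_projection_box(H, g, 0.0, 1.0, np.array([0.5, 0.5]))
print(np.round(x, 4))
```

With an indefinite H the iteration only finds a local solution depending on the starting point, which is exactly why the preprocessing multiplier matters in the scheme below.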
Recently, verifying the copositivity of a given matrix has become an active research topic. Theorems 2.3 and 2.4 build a bridge between optimality conditions for QP and copositive matrix verification.
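Full copositivity verification is co-NP-hard in general, but a common tractable sufficient test writes the matrix as M = S + N with S positive semidefinite and N entrywise nonnegative (the cone S + N used later in the paper). The sketch below uses one simple heuristic split, so its failure proves nothing; the function name and data are ours.

```python
import numpy as np

def spn_certificate(M, tol=1e-9):
    """Sufficient test for copositivity over the nonnegative orthant:
    try the split M = S + N with N >= 0 entrywise and S PSD.
    Here N collects the positive off-diagonal entries -- one simple
    heuristic choice; if it fails, M may still be copositive."""
    N = np.where(M > 0.0, M, 0.0)
    np.fill_diagonal(N, 0.0)          # keep the diagonal in S
    S = M - N
    return bool(np.linalg.eigvalsh(S).min() >= -tol)

M1 = np.array([[2.0, 1.5], [1.5, 1.0]])    # certified: S = diag(2, 1), N >= 0
M2 = np.array([[-1.0, 0.0], [0.0, 1.0]])   # not copositive (take d = e_1)
print(spn_certificate(M1), spn_certificate(M2))
```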
3. Conic programming methods for verifying the Copositivity Condition. In this section, we show that if there exists a KKT solution whose Lagrangian multiplier satisfies the Copositivity Condition, then we can construct two conic programming problems whose unique optimal solution equals the corresponding Lagrangian multiplier. For a KKT pair (x*, λ*), define a corresponding matrix D(x*, λ*). Based on the above definitions, we give another version of the Copositivity Condition with a lifted dimension.
Theorem 3.1 shows the equivalence between the Copositivity Condition and the condition D(x * , λ * ) ∈ D F . This equivalence relationship is important for us to design a scheme.
A similar condition to Theorem 3.1 is also studied in [12], in which the authors propose an algorithm for searching for a KKT pair (x*, λ*) that satisfies D(x*, λ*) ∈ D^{n+1}. Let the closed convex cone C^{n+1} be a subset of D^{n+1}. They define the following conic approximation problem.
Let ν_d = (1/2) σ* − b^T λ* be the optimal value of COP1; the second problem, COP2, is then defined accordingly. We cite the following result from [12] directly.
Note that D_F is a subset of D^{n+1}. We apply Theorem 3.2 to D_F, replacing C^{n+1} by D_F in (COP1) and (COP2), where ν_d is the optimal value of COP1. Then, based on Theorems 2.3, 3.1 and 3.2, we have the following corollary:

Corollary 3.3. Let x* be a KKT solution of the given QP with a corresponding Lagrangian multiplier λ*. If D(x*, λ*) ∈ D_F, then λ* is the unique optimal solution of COP2.
Hence, if there exists a KKT solution with its Lagrangian multiplier that satisfies Copositivity Condition, then the Lagrangian multiplier is the unique optimal solution of COP2.
For general cases, it is not easy to verify D(x*, λ*) ∈ D_F, and problems COP1 and COP2 may not be easier than the original QP problem. For computational purposes, a computable cone C ⊆ D_F can be used as an approximation. We define the corresponding relaxed conic programming problems COR1 and COR2, where ν_opt is the optimal value of COR1; with a tractable choice of C, these problems can be easily solved.

4. Local search based scheme with preprocessing step. Theorem 2.3 and Corollary 3.3 show that for a KKT solution x* with a Lagrangian multiplier λ* such that D(x*, λ*) ∈ D_F and Q + Σ_{i=1}^m λ*_i Q_i is strictly copositive over T(x*), x* is the unique local optimal solution of L(x, λ*) on Convex(F), and λ* is the unique optimal solution of COP2. Hence, we can compute λ* by solving COP2 (the preprocessing step), and then solve min_{x∈Convex(F)} L(x, λ*) using the gradient projection algorithm to obtain its unique local optimal solution x* (the local search step), which is also a global optimal solution of QP.
Since solving the general conic programming problems COP1 and COP2 is not easy, we need to design a tractable approximation cone C that satisfies C ⊆ D_F and solve problems COR1 and COR2 instead.
Based on the above ideas, we design the following local search scheme with a preprocessing step to solve QP problem.

Local Search Scheme with Preprocessing (LSSP in short).
Step 1: For a given problem QP, choose a suitable approximation cone C, and construct the conic programming problem COR1.
Step 2: Solve the problem COR1 for its optimal value ν_opt. If this fails, stop: the problem cannot be solved by the current scheme.
Step 3: Solve the problem COR2 to obtain an optimal solution λ opt .
Step 4: Solve min x∈Convex(F ) L(x, λ opt ) using a local optimization algorithm to obtain a local optimal solution x opt .
Step 5: If F(x_opt) = ν_opt and x_opt is feasible for QP, then return x_opt as a global optimal solution of QP, with optimal value equal to ν_opt; otherwise return ν_opt as a lower bound for QP.
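The control flow of Steps 2-5 can be sketched as follows. The three solver callbacks are hypothetical stand-ins (solve_cor1/solve_cor2 for the conic relaxations COR1/COR2, local_solve for the local search over Convex(F)); this is a skeleton of the scheme's logic, not an implementation of the conic solvers.

```python
def lssp(solve_cor1, solve_cor2, local_solve, F, is_feasible, tol=1e-6):
    """Skeleton of the Local Search Scheme with Preprocessing (Steps 2-5).
    Returns (x_opt or None, lower bound or None, status message)."""
    nu_opt = solve_cor1()                     # Step 2: lower bound from COR1
    if nu_opt is None:                        # COR1 failed
        return None, None, "not solvable by this scheme"
    lam_opt = solve_cor2()                    # Step 3: multiplier from COR2
    x_opt = local_solve(lam_opt)              # Step 4: minimize L(., lam_opt)
    if is_feasible(x_opt) and abs(F(x_opt) - nu_opt) <= tol:
        return x_opt, nu_opt, "global optimum certified"     # Step 5, success
    return None, nu_opt, "nu_opt is a lower bound only"      # Step 5, fallback

# Toy stand-ins (hypothetical problem: min x^2 - 2x on [0, 2], optimum -1 at x = 1):
F = lambda x: x * x - 2.0 * x
result = lssp(lambda: -1.0,            # pretend COR1 returned nu_opt = -1
              lambda: [0.0],           # pretend COR2 returned lam_opt
              lambda lam: 1.0,         # pretend local search returned x = 1
              F, lambda x: 0.0 <= x <= 2.0)
print(result)
```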
Remark 3. If we do not obtain a global optimal solution of QP, we can use LSSP as a sub-scheme within branch and bound algorithms for QP: compute a lower bound of a branch by LSSP and compare it with the upper bound or the optimal value of another branch; if the lower bound of one branch is higher than the upper bound or the optimal value of another branch, that branch can be deleted directly. The branch and bound algorithm has been studied extensively (e.g., see [16, 10]), and this scheme can be helpful as a subroutine for solving QP. Besides, a better approximation of D_F improves the possibility of finding a solution in Step 4. In [3], the authors develop an adaptive approach to refine such approximations. In this paper, the focus is the relationship among the KKT condition, the local optimal solution and its global optimality.
Notice that Step 4 can adopt any local optimization algorithm (e.g., the gradient projection algorithm), so a local solution can be obtained very cheaply. As proved in the next theorem, the advantage of the preprocessing steps (Steps 1-3) is to increase the possibility of obtaining a global optimal solution of a given QP problem with a nonconvex objective function.
The LSSP is guaranteed to be successful under certain conditions.

Theorem 4.1. Let x* be a KKT solution of QP, with its Lagrangian multiplier being λ*. If D(x*, λ*) ∈ C, then the λ_opt obtained in Step 3 of LSSP satisfies λ_opt = λ*. Moreover, if Q + Σ_{i=1}^m λ*_i Q_i is strictly copositive over T(x*), then x_opt = x* is the unique optimal solution of L(x, λ_opt) over Convex(F).
Proof. Since C ⊆ D F , under the assumption D(x * , λ * ) ∈ C, we have D(x * , λ * ) ∈ D F . From Theorem 3.2 and Corollary 3.3, we know that λ * is the unique optimal solution of COP2, which is also the unique optimal solution of COR2. Hence, λ opt = λ * . By Theorem 3.1, the KKT pair (x * , λ * ) satisfies the Copositivity Condition.
Besides, if Q + m i=1 λ * i Q i is strictly copositive over T (x * ), then from Theorem 2.4, x opt = x * is the unique optimal solution of L(x, λ opt ) over Convex(F).
Hence, for cases where Q + Σ_{i=1}^m λ_opt,i Q_i is strictly copositive over T(x*), if the approximation cone C is large enough that D(x*, λ*) ∈ C, then LSSP successfully obtains a global optimal solution of QP. Without the preprocessing steps, however, there is no such guarantee for these sub-classes of QP problems. The condition of Theorem 4.1 is also weaker than that of Corollary 2.2; in this sense, the computable class of LSSP is more general than that based on Corollary 2.2.
To demonstrate the above process, we study the following examples (all numerical experiments were carried out with Matlab and the SeDuMi toolkit [18]). In the first example, the matrix Q is not positive semidefinite, hence the problem is not convex. We approximate the cone D_F in the conic programming problems COP1 and COP2 by S^11 + N^11, the sum of the 11 × 11 positive semidefinite cone and the cone of 11 × 11 entrywise nonnegative symmetric matrices, to construct two numerically tractable conic programming problems.
After solving the two problems, we obtain a solution λ* = [0, 0, 0, 0, 42, 20, 0, 136, 0, 0]^T, and the optimal value of the first problem is σ* = −1489.6. Besides, since Q + 2 Diag(λ*) has two negative eigenvalues, the Lagrangian function is not convex. However, solving the problem min L(x, λ*) by the gradient projection method from randomly generated initial points x_0 ∈ [0, 1]^10, we always obtain the local optimal solution x* = [1, 0.5, 1, 0.5, 0, 0, 1, 0, 0.5, 1]^T, with objective value −1489.6, which equals σ*. Hence it is a global optimal solution of this problem. Besides, it is easy to verify that (x*, λ*) satisfies the KKT condition, and D(x*, λ*) ∈ S^11 + N^11 ⊆ D_F. Hence, by Theorem 3.1, the KKT point x* satisfies the Copositivity Condition, which theoretically proves the global optimality of x*. Now we consider another example, in which each variable satisfies x_i − x_i² ≤ 0 and x_i² − x_i ≤ 0. Following a similar procedure as in the previous example, we obtain a solution with objective value −16, which is indeed the optimal solution of this example.
In the last example, we consider a 4-dimensional nonconvex problem. The feasible domain F can be verified to be convex, although the first constraint is not convex. Besides, it is easy to verify F ⊂ R^4_+. Hence, we can still choose C = S^4 + N^4 as an approximation cone for LSSP. In Step 3, we obtain λ_opt = (2, 4). Then, solving min_{x∈F} L(x, λ_opt) locally, we obtain its unique local optimal solution x_opt = (0, 0, 7, 3)^T with objective value 162.5, which equals ν_opt. Hence, x_opt is a global optimal solution of the above problem. In contrast, if we define the dual function P(λ) = min_{x∈R^4} L(x, λ) and solve max_{λ∈R^2_+} P(λ) using a dual descent algorithm, we obtain an optimal value of 120.2, with a duality gap of 42.3.

Remark 4.
The third example shows that under the condition of Theorem 4.1, the algorithm can return a global optimal solution of QP successfully, even for cases with a nonzero duality gap. The main advantage of the preprocessing step is to increase the chance of obtaining a global optimal solution of QP in a local search based algorithm, and provide a larger set of computable cases than conventional local optimization methods.

5. Conclusions. To solve nonconvex quadratic programming problems globally, we provide a new sufficient condition proving the global optimality of a local optimal solution. Based on the Basic Condition and its corresponding Copositivity Condition, two conic programming problems have been constructed. By solving these conic programming problems approximately, a solution λ_opt is obtained for reformulating the problem min_{x∈Convex(F)} L(x, λ_opt). Under the conditions of Theorem 4.1, the unique optimal solution of the reformulated problem is also a global optimal solution of the original problem.