of Applied & Computational Mathematics

In this paper, Section 1 gives a compact description of an algorithm for solving general quadratic programming problems (that is, obtaining a local minimum of a quadratic function subject to inequality constraints). Section 2 gives a practical application of the algorithm; there we discuss the computational work performed by the algorithm and try to achieve as much efficiency and stability as we can. Section 3 shows how to update the QR-factors of A_1^(K) when the tableau is complementary, and also gives the updating of the LDL^T-factors of G_A^(K). In Section 4 we do not describe a fully detailed method for obtaining an initial feasible point, since the linear programming literature is full of such techniques.


Practical Application of the Algorithm
The algorithm presented above is a general outline of a method for solving indefinite quadratic programming problems rather than an exact definition of a computer implementation. In this section we discuss the computational work performed by the algorithm and try to achieve as much efficiency and stability as we can. In doing so we follow, with slight modifications, the work of Gill and Murray, which has been applied to active set methods since the mid-seventies [7][8][9][10]. The slight modifications are made to cope with the new forms of the matrices used in the method when G is indefinite. In the case when G is positive (semi-)definite the active set methods are considered to be equivalent, as [20] pointed out; there a detailed description of that equivalence is given, and the equivalence is mentioned again in [6]. The major computational work of the algorithm is in the solution of the two linear systems of the method. We do not solve these systems directly; instead, we make use of the special structure of the matrices involved, using the matrices H, T and U defined in eqn. (5). The solution in eqn. (3) is then expressed in terms of H, U and T, which define the inverse of the upper-left partition of the basis matrix when the tableau is complementary. This calls for making them available at every complementary tableau; in other words, they are to be updated from one complementary tableau to another [12].
H, T and U are given in terms of S^(K) and Z^(K), where S^(K) and Z^(K) satisfy

    (S^(K))^T A_1^(K) = I,    (13)

    (Z^(K))^T A_1^(K) = 0.    (14)

The choice of S^(K) and Z^(K) satisfying eqns. (13) and (14), respectively, is generally open. Here we take the choice S = Q_1 R^{-T}, Z = Q_2, which, since κ(Z^T G Z) ≤ κ(G), is advantageous as far as stability is concerned. For the sake of making this section self-contained, we show how S^(K) and Z^(K) are obtained in a way suitable to this section. Let

    A_1^(K) = Q^(K) ( R^(K) ; 0 ) = ( Q_1^(K)  Q_2^(K) ) ( R^(K) ; 0 )    (15)

represent the QR factorization of A_1^(K), where Q^(K) is n×n orthogonal, R^(K) is L_K×L_K upper triangular, Q_1^(K) is n×L_K and Q_2^(K) is n×(n−L_K). From eqn. (15) we have A_1^(K) = Q_1^(K) R^(K) and (Q_2^(K))^T A_1^(K) = 0, so to satisfy eqns. (13) and (14) we define

    S^(K) = Q_1^(K) (R^(K))^{-T},    (16)

    Z^(K) = Q_2^(K).    (17)

Here Ī denotes the identity matrix whose columns are reversed. Thus we conclude by saying that the computation centres on the QR factorization of A_1^(K) (when the k-th iteration is complementary).
Updating these factors is therefore required at each iteration at which the tableau is complementary [25].
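The construction of S^(K) and Z^(K) from the QR factors in eqns. (15)-(17) can be sketched in a few lines of NumPy. This is an illustrative sketch with random data; `range_null_bases` is our name for the helper, not the paper's:

```python
import numpy as np

def range_null_bases(A1):
    """Build S = Q1 R^{-T} and Z = Q2 from the QR factorization
    A1 = (Q1 Q2)(R; 0), so that S^T A1 = I and Z^T A1 = 0."""
    n, L = A1.shape
    Q, R = np.linalg.qr(A1, mode='complete')   # Q is n x n, R is n x L
    Q1, Q2 = Q[:, :L], Q[:, L:]
    R1 = R[:L, :]                              # L x L upper triangular
    S = np.linalg.solve(R1, Q1.T).T            # S = Q1 R1^{-T}
    Z = Q2                                     # null-space basis of A1^T
    return S, Z

rng = np.random.default_rng(0)
A1 = rng.standard_normal((6, 3))   # columns = normals of 3 active constraints
S, Z = range_null_bases(A1)
```

Because Z has orthonormal columns, the reduced matrix Z^T G Z inherits a condition number no worse than that of G, which is the stability advantage mentioned above.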
Updating the QR-Factors of A_1^(K)
In this section we show how to update the QR-factors of A_1^(k) when the tableau is complementary; we also give the updating of the LDL^T-factors of G_A^(k). Following the stream of our discussion, two cases are to be considered separately: the case when the (k+1)-th iteration results in a complementary tableau, and the case when complementarity is restored at the (k+r+1)-th iteration after r successive non-complementary tableaux. In the first case the factors of A_1^(k) are updated to give those of A_1^(k+1); this is the case when a single column, a_q say, is deleted from A_1^(k). In the second case the factors of A_1^(k) are used to give those of A_1^(k+r+1); this is the case when one column, a_q say, is deleted from A_1^(k) and then r other columns are added. We follow the same steps carried out in [9], with the appropriate modification in the second case.

In the first case, let Ā_1^(k) be the n×(L_k − 1) matrix obtained by deleting the q-th column, a_q, from A_1^(k). Partition the triangular factor R^(k) of A_1^(k) about its q-th column as

    R^(k) = ( R_11  α  R_12
              0     γ  β^T
              0     0  R_22 ),

where R_11 is (q−1)×(q−1) upper triangular, R_12 is (q−1)×(L_k − q), R_22 is (L_k − q)×(L_k − q) upper triangular, α is a (q−1)-vector, β is an (L_k − q)-vector and γ is a scalar. After the q-th column is deleted, the triangular factor of Ā_1^(k) with respect to Q^(k) has the form

    ( R_11  R_12
      0     β^T
      0     R_22
      0     0    ),

which is upper triangular except for one extra band of subdiagonal elements in its last L_k − q columns. Let Q̄ be the product of the plane rotations which annihilates these subdiagonal elements; Q̄ is orthogonal, and combining it with Q^(k) and the restored triangular factor gives the QR-factorization of A_1^(k+1) = Ā_1^(k). Only the columns from the q-th to the L_k-th of Q^(k) are altered in obtaining Q^(k+1); so if Q^(k+1) is partitioned as in eqn. (15), then Q_2^(k+1), in particular, takes the form of Q_2^(k) augmented by a single column. Note also that the first q−1 columns of Q_1^(k) are not changed. This fact might be helpful as far as efficiency is concerned if we consider an alternative way of choosing q in eqn. (1): choose q so as to increase the number of columns of Q_1^(k), and rows of R^(k), which are unaltered in iteration k+1, which in turn reduces the effort, especially when L_k is relatively large.

We now consider the second case, when complementarity is restored at the (k+r+1)-th iteration. Pre-multiplying both sides of eqn. (24) by Q^(k+1) (defined in eqn. (21)) isolates a block W'_2 which is not yet triangular. Defining the QR-factorization of W'_2, with orthogonal factor Q''_2, we obtain the QR-factorization of A_1^(k+r+1), with orthogonal factor built from Q^(k+1) and Q''_2 and with the corresponding upper triangular factor.

Updating the LDL^T-Factors of G_A^(K)

When the tableau is complementary, G_A^(k) is positive definite, as we shall see; this fact counts as one of the good numerical features of the method. We consider first the case when the (k+1)-th iteration results in a complementary tableau. Here we are almost copying the work of [9]. In this case, as eqn. (25) shows, Q_2^(k+1) is Q_2^(k) augmented by a single column, and using eqn. (19) it follows that G_A^(k+1) is G_A^(k) augmented by a single row and column. When a symmetric matrix is augmented by a single row and column, its lower-triangular factor is augmented by a single row. Substituting eqns. (31) and (32) into the identity G_A^(k+1) = L^(k+1) D^(k+1) (L^(k+1))^T, we obtain the new row of L^(k+1) and the new diagonal element d_{n−L_k+1} as the solution of triangular systems of equations.

Unfortunately, in the other case, when complementarity is restored at the (k+r+1)-th iteration, we have so far been unable to find a way of using the factors of G_A^(k) to obtain those of G_A^(k+r+1). However n − L_k + 1 − r, the dimension of G_A^(k+r+1), decreases with r, so the effort of re-factorizing G_A^(k+r+1) might not be large, especially when n − L_k is itself small. This calls for choosing the starting L_1 so that n − L_1 is small. In the case when the number of constraints is greater than n, L_1 is chosen equal to n; that is, the initial guess x^(1) is a vertex. With this choice G_A^(1) = 0, and in the second iteration we might expect a constraint to be deleted from the active set (which is the case when the second iteration is complementary). Otherwise the third iteration will definitely restore complementarity at another vertex, leaving G_A^(3) = 0. In the former case the dimension of G_A^(2) is 1. In general, the dimension of G_A^(k) keeps increasing as constraints are deleted, and updating the factors is straightforward, as shown above; on the other hand, it keeps decreasing when constraints are added to the active set, and in that case we are faced with re-factorizing.

Before ending this section we show that when the k-th iteration and the (k+1)-th iteration are both complementary, G_A^(k+1) must be positive definite. Let the tableau be complementary at the k-th iteration, let A_1^(k) be the matrix whose columns correspond to the active constraints, and let x change as the method prescribes. For the next tableau (i.e., the (k+1)-th) to be complementary, u_qq must be negative, and the new value of v_q must not violate feasibility. Thus, using eqn. (35) and pre-multiplying both sides of eqn. (37) by (A_1^(k+1))^T, we find that e_q lies in the space spanned by the columns of Z^(k+1), so e_q = Z^(k+1) h for some (n − L_k + 1)-vector h. Since the curvature along e_q at x^(k+1) satisfies e_q^T G e_q = h^T G_A^(k+1) h, we therefore conclude, in the active set methods sense, that such directions are directions of positive curvature, from which we conclude that G_A^(k+1) is positive definite.
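The first-case update can be sketched as follows in plain NumPy, with Q stored by columns. The helper names are ours, and a production code would apply the rotations in factored form rather than building each 2×2 rotation explicitly:

```python
import numpy as np

def givens(a, b):
    """Plane rotation [[c, s], [-s, c]] mapping (a, b) to (r, 0)."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def qr_delete_col(Q, R, q):
    """Given A = Q @ R with Q (n x n) orthogonal and R (n x L) upper
    triangular, return QR factors of A with column q deleted.  Only
    the planes q..L-1 are touched, matching the claim above that the
    rest of the factorization is unaltered."""
    n, L = R.shape
    R = np.delete(R, q, axis=1)      # upper triangular + one subdiagonal band
    Q = Q.copy()
    for i in range(q, L - 1):        # annihilate R[i+1, i]
        c, s = givens(R[i, i], R[i + 1, i])
        G = np.array([[c, s], [-s, c]])
        R[i:i + 2, :] = G @ R[i:i + 2, :]
        Q[:, i:i + 2] = Q[:, i:i + 2] @ G.T
    return Q, R

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
Q0, R0 = np.linalg.qr(A, mode='complete')
Q1, R1 = qr_delete_col(Q0, R0, 1)
A_del = np.delete(A, 1, axis=1)
```

Each rotation is inserted as G^T G between Q and R, so the product Q R is preserved while the subdiagonal entries created by the deletion are swept out one by one.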

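The single-row augmentation of the LDL^T factors described above can be sketched as follows. This is an illustrative NumPy sketch assuming the new row and column are appended last; `ldl_append` is our name for the helper:

```python
import numpy as np

def ldl_append(L, d, c, gamma):
    """Border M = L diag(d) L^T by a new column c and diagonal gamma:
        M' = [[M, c], [c^T, gamma]].
    Only one new row of L and one new entry of d are needed:
    solve L diag(d) l = c, then delta = gamma - l^T diag(d) l."""
    l = np.linalg.solve(L, c) / d        # new strictly-lower row of L
    delta = gamma - l @ (d * l)          # new diagonal entry of D
    m = L.shape[0]
    L_new = np.zeros((m + 1, m + 1))
    L_new[:m, :m] = L
    L_new[m, :m] = l
    L_new[m, m] = 1.0
    return L_new, np.append(d, delta)

rng = np.random.default_rng(1)
m = 4
L0 = np.tril(rng.standard_normal((m, m)), -1) + np.eye(m)   # unit lower triangular
d0 = rng.uniform(1.0, 2.0, m)
M = L0 @ np.diag(d0) @ L0.T
c = rng.standard_normal(m)
gamma = 10.0
L1_, d1_ = ldl_append(L0, d0, c, gamma)
M_new = np.block([[M, c[:, None]], [c[None, :], np.array([[gamma]])]])
```

The triangular solve costs O(m^2), compared with O(m^3) for re-factorizing the bordered matrix from scratch, which is why growing G_A is cheap while shrinking it forces a re-factorization.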
Finding an Initial Feasible Point
In this section we are not going to describe a fully detailed method of obtaining an initial feasible point, since the linear programming literature is full of such techniques. The problem of finding a feasible point has been resolved in linear programming by a technique known as phase-1 simplex [27]. The basis of the technique is to define an artificial objective function F̃(x) measuring the constraint violations. In the case when m exceeds n, a non-feasible vertex is available as an initial point for phase 1, and the simplex method is applied to minimize F̃(x); this process ultimately leads to a feasible vertex [28]. Direct application of this method in the case when m is less than n is not possible since, although a feasible point may exist, a feasible vertex will not. Under these circumstances artificial vertices can be defined by adding simple bounds on the variables, but this could lead either to a poor initial point, since some of these artificial constraints must be active, or to exclusion of the feasible region. A way out of this dilemma is described in [6][7][8][9], where a number of methods, including the above one, are given. The method of Gill and Murray is advantageous in that it makes available the QR-factorization of the initial matrix of active constraints, which is then used directly in our algorithm.
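As an illustration of the phase-1 idea (not the specific variant of [27] or the Gill and Murray procedure), one can minimise the sum of artificial slack variables with an off-the-shelf LP solver; `phase1_feasible_point` and the tolerance below are our illustrative choices:

```python
import numpy as np
from scipy.optimize import linprog

def phase1_feasible_point(A, b, tol=1e-8):
    """Phase-1 sketch for finding x with A x >= b: minimise the sum of
    artificial slacks s >= 0 subject to A x + s >= b.  The optimal
    objective is zero exactly when a feasible x exists."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])    # objective: sum of slacks
    A_ub = np.hstack([-A, -np.eye(m)])               # -A x - s <= -b
    b_ub = -b
    bounds = [(None, None)] * n + [(0, None)] * m    # x free, s >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.fun < tol

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])  # x >= 1, y >= 1, x + y <= 3
b = np.array([1.0, 1.0, -3.0])
x, feasible = phase1_feasible_point(A, b)
```

Note that for any x the choice s_i = max(b_i − a_i^T x, 0) is feasible for the phase-1 problem, so the auxiliary LP always has a solution even when the original constraints do not.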