New error bound for the linear complementarity problem of S-SDDS-B matrices

Abstract: S-SDDS-B matrices form a subclass of P-matrices which contains the B-matrices. A new error bound of the linear complementarity problem for S-SDDS-B matrices is presented, which improves the corresponding result in [1]. Numerical examples are given to verify the corresponding results.


Introduction
Many fundamental problems in optimization and mathematical programming can be described as a linear complementarity problem (LCP), such as quadratic programming, the nonlinear obstacle problem, invariant capital stock, the Nash equilibrium point of a bimatrix game, optimal stopping, the free boundary problem for journal bearings, and so on; see [1][2][3]. The error bound on the distance between an arbitrary point in R^n and the solution set of the LCP plays an important role in the convergence analysis of algorithms; for details, see [4][5][6][7].
The linear complementarity problem LCP(M, q) is to find a vector x ∈ R^n such that

x ≥ 0, Mx + q ≥ 0, x^T(Mx + q) = 0,

or to prove that no such vector x exists, where M = (m_ij) ∈ R^{n×n} and q ∈ R^n. It is well known that the LCP has a unique solution for any vector q ∈ R^n if and only if M is a P-matrix. Some basic definitions for the special matrices involved are given below. A matrix M = (m_ij) ∈ R^{n×n} is called a Z-matrix if m_ij ≤ 0 for any i ≠ j; a P-matrix if all its principal minors are positive; an M-matrix if M^{-1} ≥ 0 and M is a Z-matrix; and an H-matrix if its comparison matrix ⟨M⟩ is an M-matrix, where the comparison matrix ⟨M⟩ = (m̃_ij) is given by m̃_ii = |m_ii| and m̃_ij = −|m_ij| for i ≠ j. One of the essential problems in the LCP(M, q) is to estimate

max_{d∈[0,1]^n} ||(I − D + DM)^{-1}||_∞,

which is used to bound the error ||x − x*||_∞, that is,

||x − x*||_∞ ≤ max_{d∈[0,1]^n} ||(I − D + DM)^{-1}||_∞ ||r(x)||_∞,

where x* is the solution of the LCP(M, q), r(x) = min{x, Mx + q}, D = diag(d_i) with 0 ≤ d_i ≤ 1, and the min operator denotes the componentwise minimum of the two vectors. Since real H-matrices with positive diagonal entries form a subclass of P-matrices, the error bound becomes simpler in that case (see formula (2.4) in [8]). Nowadays, many scholars are interested in research on special H-matrices, such as QN-matrices [9], S-SDD matrices [10], Nekrasov matrices [11] and Ostrowski matrices [12]. The corresponding error bounds for LCPs of QN-matrices were achieved by Dai et al. in [13] and Gao et al. in [14]. A new error bound for the LCP of Σ-SDD matrices was given in [15], which depends only on the entries of the involved matrices. When the matrix A is not an H-matrix, formula (2.4) in [8] cannot be used. However, for some subclasses of P-matrices that are not H-matrices, error bounds for LCPs have also been obtained: for SB-matrices [16], for BS-matrices [17], for weakly chained diagonally dominant B-matrices [18], for DB-matrices [19] and for MB-matrices [20]. B-matrices, as an important subclass of P-matrices, have been studied for years with fruitful results; see [18,21,22,23,24,25].
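As a concrete illustration of the quantities above, the following Python sketch computes the residual r(x) = min{x, Mx + q} and estimates max_{d∈[0,1]^n} ||(I − D + DM)^{-1}||_∞ on a grid. The 2×2 matrix M, the vector q, and the test point x are our own illustrative choices, not taken from the paper.

```python
# Hypothetical 2x2 example (not from the paper): M is a P-matrix,
# so LCP(M, q) has a unique solution for every q.
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, -1.0]
x_star = [1.0 / 3.0, 1.0 / 3.0]  # solves x >= 0, Mx+q >= 0, x^T(Mx+q) = 0

def residual(M, q, x):
    """Componentwise residual r(x) = min{x, Mx + q}."""
    Mxq = [sum(M[i][j] * x[j] for j in range(len(x))) + q[i]
           for i in range(len(x))]
    return [min(xi, wi) for xi, wi in zip(x, Mxq)]

def inf_norm_inv_2x2(A):
    """||A^{-1}||_inf for a 2x2 matrix via the explicit inverse."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    inv = [[A[1][1] / det, -A[0][1] / det],
           [-A[1][0] / det, A[0][0] / det]]
    return max(abs(row[0]) + abs(row[1]) for row in inv)

# Grid search over d in [0,1]^2 for max ||(I - D + DM)^{-1}||_inf.
steps = [k / 20.0 for k in range(21)]
bound = 0.0
for d1 in steps:
    for d2 in steps:
        A_D = [[1 - d1 + d1 * M[0][0], d1 * M[0][1]],
               [d2 * M[1][0], 1 - d2 + d2 * M[1][1]]]
        bound = max(bound, inf_norm_inv_2x2(A_D))

x = [1.0, 1.0]                                   # arbitrary test point
r = residual(M, q, x)
err = max(abs(a - b) for a, b in zip(x, x_star))  # ||x - x*||_inf
print(bound, err, max(abs(v) for v in r))
```

For this particular M the grid maximum equals 1 (here det(I − D + DM) = 1 + d1 + d2 cancels the row sums), and indeed ||x − x*||_∞ = 2/3 ≤ 1 · ||r(x)||_∞ = 1, consistent with the displayed error bound.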
In this paper, we focus on the error bound for the LCP(M, q) when M is an S-SDDS-B matrix, which is a P-matrix. In Section 2, we introduce some notations, definitions and lemmas which will be used in the subsequent analysis. In Section 3, a new error bound is presented and compared with the bound in [1]. In Section 4, we give some numerical examples and graphs to show the efficiency of the method in this paper.

Preliminaries
In this section, some notations, definitions and lemmas are recalled. Given a matrix A = (a_ij) ∈ R^{n×n} and a subset S ⊂ N = {1, . . . , n}, n ≥ 2, we denote

r_i(A) = Σ_{j∈N, j≠i} |a_ij|, i = 1, . . . , n, and r_i^S(A) = Σ_{j∈S, j≠i} |a_ij|, i ∈ N.

Here S ∪ S̄ = N, where S̄ is the complement of S in N.
In accordance with [26], a matrix A = (a_ij) ∈ R^{n×n}, n ≥ 2, is said to be S-SDD if the following conditions are fulfilled: (i) |a_ii| > r_i^S(A) for all i ∈ S; (ii) (|a_ii| − r_i^S(A))(|a_jj| − r_j^S̄(A)) > r_i^S̄(A) r_j^S(A) for all i ∈ S and j ∈ S̄. We extend the S-SDD matrices by introducing the following definitions.
Definition 2.1. [26] A matrix A = (a_ij) ∈ R^{n×n} is said to be S-SDDS (S-SDD Sparse) if the following conditions are satisfied: (i) |a_ii| > r_i^S(A) for all i ∈ S; (ii) |a_jj| > r_j^S̄(A) for all j ∈ S̄; (iii) for all i ∈ S and all j ∈ S̄ such that a_ij ≠ 0 or a_ji ≠ 0,

(|a_ii| − r_i^S(A))(|a_jj| − r_j^S̄(A)) > r_i^S̄(A) r_j^S(A).

There is an equivalent definition in [27], which is closely related to strictly diagonally dominant matrices. From Definition 2.3 we immediately know that the class of S-SDDS-B matrices contains the B-matrices. Now, we will introduce some useful lemmas.
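The conditions above can be checked mechanically. The following Python sketch verifies them for a small matrix; the function names, the test matrices, and the exact form of conditions (i)–(ii) are our own reading of the definition, chosen to match the inequalities used later in the paper.

```python
def r_sub(A, i, sub):
    """r_i^S(A): sum of |a_ij| over j in sub, j != i."""
    return sum(abs(A[i][j]) for j in sub if j != i)

def is_ssdds(A, S):
    """Illustrative check of the S-SDDS conditions (0-based indices).

    (i)   |a_ii| > r_i^S(A) for all i in S;
    (ii)  |a_jj| > r_j^Sbar(A) for all j in Sbar;
    (iii) the cross inequality for all i in S, j in Sbar
          with a_ij != 0 or a_ji != 0.
    """
    n = len(A)
    Sbar = [j for j in range(n) if j not in S]
    if any(abs(A[i][i]) <= r_sub(A, i, S) for i in S):
        return False
    if any(abs(A[j][j]) <= r_sub(A, j, Sbar) for j in Sbar):
        return False
    for i in S:
        for j in Sbar:
            if A[i][j] != 0 or A[j][i] != 0:
                lhs = ((abs(A[i][i]) - r_sub(A, i, S))
                       * (abs(A[j][j]) - r_sub(A, j, Sbar)))
                if lhs <= r_sub(A, i, Sbar) * r_sub(A, j, S):
                    return False
    return True

# A is not strictly diagonally dominant (row 1: 2 < 3),
# yet it satisfies the conditions with S = {0}.
A = [[2.0, 3.0],
     [1.0, 4.0]]
print(is_ssdds(A, [0]))
```

The example shows the point of the class: the matrix fails row-wise strict diagonal dominance but passes the weaker S-based conditions.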
Lemma 2.4. Let γ > 0 and η > 0. Then, for any x ∈ [0, 1],

ηx / (1 − x + γx) ≤ η/γ.

Lemma 2.5. [27] Let A ∈ R^{n×n} be a nonsingular M-matrix and let P be a nonnegative matrix with rank 1. Then A + P is a P-matrix.
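The scalar inequality of Lemma 2.4 (in the form standard in LCP error-bound papers, ηx/(1 − x + γx) ≤ η/γ for x ∈ [0, 1], γ > 0, η > 0) can be checked numerically; the (γ, η) pairs below are our own choices.

```python
# Numerical check of eta*x/(1 - x + gamma*x) <= eta/gamma over a grid in [0, 1].
def lemma_holds(gamma, eta, steps=1000):
    for k in range(steps + 1):
        x = k / steps
        denom = 1.0 - x + gamma * x   # > 0 on [0, 1] since gamma > 0
        if eta * x / denom > eta / gamma + 1e-12:
            return False
    return True

print(all(lemma_holds(g, e) for g in (0.5, 1.0, 3.0) for e in (0.1, 2.0)))
```

The function x ↦ ηx/(1 − x + γx) is increasing on [0, 1] (its derivative is η/(1 − x + γx)^2 > 0), so the maximum η/γ is attained at x = 1, which is what the grid confirms.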

Main results
In this section, a new error bound for the LCP(M, q) is presented when M is an S-SDDS-B matrix. First, we prove that an S-SDDS-B matrix is a P-matrix.
Proof. By Definition 2.2, C in (2.2) is a nonnegative matrix with rank 1. Since an S-SDDS matrix is an H-matrix and B+ is a Z-matrix with positive diagonal entries, B+ is a nonsingular M-matrix. We then conclude from Lemma 2.5 that A is a P-matrix.
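The structure of this proof can be sanity-checked numerically: take a Z-matrix B+ with positive diagonal that is a nonsingular M-matrix, add a nonnegative rank-1 matrix C with constant rows (as produced by the splitting A = B+ + C), and verify that every principal minor of the sum is positive. The matrices below are our own illustrative choices, not the paper's.

```python
from itertools import combinations

def det(A):
    """Determinant by cofactor expansion (fine for small matrices)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += ((-1) ** j) * A[0][j] * det(minor)
    return total

def is_p_matrix(A):
    """A P-matrix has all principal minors positive."""
    n = len(A)
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            sub = [[A[i][j] for j in idx] for i in idx]
            if det(sub) <= 0:
                return False
    return True

# B_plus: a tridiagonal nonsingular M-matrix (Z-matrix with positive diagonal).
B_plus = [[2.0, -1.0, 0.0],
          [-1.0, 2.0, -1.0],
          [0.0, -1.0, 2.0]]
# C: nonnegative with rank 1 (constant rows, as in the splitting A = B+ + C).
C = [[1.0, 1.0, 1.0],
     [0.5, 0.5, 0.5],
     [0.0, 0.0, 0.0]]
A = [[B_plus[i][j] + C[i][j] for j in range(3)] for i in range(3)]
print(is_p_matrix(A))
```

Note that A itself is not a Z-matrix (it has a positive off-diagonal entry), so the P-matrix property really comes from Lemma 2.5 rather than from M-matrix theory alone.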
Lemma 3.2. Let M be an S-SDDS matrix with positive diagonal entries and let D = diag(d_i) with 0 ≤ d_i ≤ 1. Then M̃ = I − D + DM is an S-SDDS matrix with positive diagonal entries.
Proof. From M̃ = I − D + DM = (m̃_ij), we have m̃_ii = 1 − d_i + d_i m_ii and m̃_ij = d_i m_ij for j ≠ i. Because M is an S-SDDS matrix with positive diagonal entries and D = diag(d_i) with 0 ≤ d_i ≤ 1, for any i ∈ S we get

m̃_ii = 1 − d_i + d_i m_ii > 1 − d_i + d_i r_i^S(M) ≥ d_i r_i^S(M) = r_i^S(M̃).

Similarly, for any j ∈ S̄ we have m̃_jj > r_j^S̄(M̃). For any i ∈ S and j ∈ S̄ with m̃_ij ≠ 0 or m̃_ji ≠ 0, condition (iii) of Definition 2.1 is obtained in the same way.
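Lemma 3.2 can also be illustrated numerically: fix an S-SDDS matrix M with positive diagonal entries and check that I − D + DM passes the same conditions for many random diagonal matrices D with entries in [0, 1]. The matrix M and the checker below are our own illustrative choices.

```python
import random

def r_sub(A, i, sub):
    """r_i^S(A): sum of |a_ij| over j in sub, j != i."""
    return sum(abs(A[i][j]) for j in sub if j != i)

def is_ssdds(A, S):
    """Illustrative S-SDDS check: dominance w.r.t. S and Sbar, plus the
    cross inequality for pairs with a nonzero off-diagonal entry."""
    n = len(A)
    Sbar = [j for j in range(n) if j not in S]
    if any(abs(A[i][i]) <= r_sub(A, i, S) for i in S):
        return False
    if any(abs(A[j][j]) <= r_sub(A, j, Sbar) for j in Sbar):
        return False
    for i in S:
        for j in Sbar:
            if A[i][j] != 0 or A[j][i] != 0:
                lhs = ((abs(A[i][i]) - r_sub(A, i, S))
                       * (abs(A[j][j]) - r_sub(A, j, Sbar)))
                if lhs <= r_sub(A, i, Sbar) * r_sub(A, j, S):
                    return False
    return True

M = [[2.0, 3.0], [1.0, 4.0]]   # S-SDDS with S = {0}, positive diagonal
random.seed(0)
ok = True
for _ in range(1000):
    d = [random.random() for _ in range(2)]
    M_t = [[(1 - d[i]) * (1.0 if i == j else 0.0) + d[i] * M[i][j]
            for j in range(2)] for i in range(2)]   # I - D + DM
    ok = ok and is_ssdds(M_t, [0])
print(ok)
```

Every one of the 1000 scaled matrices passes, matching the conclusion of the lemma for this particular M.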
Proof. We denote A_D = I − D + DA; then A_D = B+_D + C_D, where B+_D = I − D + DB+ and C_D = DC. Since B+ is an S-SDDS matrix with positive diagonal entries, it follows from Lemma 3.2 that B+_D is an S-SDDS matrix. The estimation of ||(B+_D)^{-1}||_∞ is given below. Writing B+_D = I − D + DB+ =: (b̃_ij) and applying Lemma 2.2, we obtain the corresponding bounds. When r_i^S(B+_D) = 0, it is easy to get r_i^S(B+) = 0 or d_i = 0 for any i ∈ N. (1) If d_i = 0 for any i ∈ N, we get the first bound. (2) If r_i^S̄(B+) = 0 for any i ∈ S, we have (3.6). (5) If r_j^S(B+_D) ≠ 0, there exists b̃_ji ≠ 0 for some i ∈ S, and we arrive at the corresponding inequality. Consequently, (3.1) holds. The proof is completed.
The bound in (3.1) also holds for B-matrices, because B-matrices form a subclass of S-SDDS-B matrices. Next, we indicate that the bound in Theorem 3.1 is better than that of Lemma 2.3 under some conditions.
Proof. From Lemma 2.3, β = min_{i∈N} {β_i} with β_i = b_ii − Σ_{j∈N, j≠i} |b_ij|. When b_ii − r_i^S(B+) < 1 and b_jj − r_j^S̄(B+) < 1, the claimed inequality is obvious, and the remaining cases follow in the same way. For any i ∈ S and j ∈ S̄, we can multiply both sides by r_i^S̄(B+) and add (b_ii − r_i^S(B+))(b_jj − r_j^S̄(B+)); the analogous inequality involving r_j^S(B+) is obtained in the same way. So the conclusion in (3.8) holds.
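For reference, assuming Lemma 2.3 is the classical B-matrix bound max_{d∈[0,1]^n} ||(I − D + DA)^{-1}||_∞ ≤ (n − 1)/min{β, 1} with β = min_i {b_ii − Σ_{j≠i} |b_ij|} (an assumption on our part, consistent with the quantities used in the proof above), it can be computed directly from the splitting A = B+ + C. The 2×2 B-matrix below is a hypothetical example of ours.

```python
def b_splitting(A):
    """B-matrix splitting A = B+ + C with c_ij = r_i^+ = max(0, max_{j!=i} a_ij)."""
    n = len(A)
    r_plus = [max(0.0, max(A[i][j] for j in range(n) if j != i))
              for i in range(n)]
    B = [[A[i][j] - r_plus[i] for j in range(n)] for i in range(n)]
    C = [[r_plus[i]] * n for i in range(n)]   # constant rows: rank <= 1
    return B, C

def b_matrix_bound(A):
    """(n-1)/min{beta, 1} with beta_i = b_ii - sum_{j != i} |b_ij|."""
    n = len(A)
    B, _ = b_splitting(A)
    beta = min(B[i][i] - sum(abs(B[i][j]) for j in range(n) if j != i)
               for i in range(n))
    return (n - 1) / min(beta, 1.0)

# Hypothetical 2x2 B-matrix (not the paper's example).
A = [[2.0, -0.5],
     [-0.5, 2.0]]
print(b_matrix_bound(A))
```

Here all off-diagonal entries are negative, so r_i^+ = 0, B+ = A, β = 1.5, and the bound evaluates to (2 − 1)/min{1.5, 1} = 1.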

An application
In this section, examples are given to show the advantage of the bound in Theorem 3.1.

Example 1. Consider the S -S DDS -B matrix
Matrix A can be split into A = B+ + C. Since A is also a B-matrix, Lemma 2.3 applies as well. The results in (4.1) and (4.2) indicate that the bound of Theorem 3.1 is better than that of Lemma 2.3.
This is shown by Figure 1, in which the 1000 matrices are given by the following MATLAB code, and the bound 18 of Theorem 3.1 is better than the bound 30 of Lemma 2.3 for max ||(I − D + DA)^{-1}||_∞. Blue stars in Figure 1 represent the computed values of ||(I − D + DA)^{-1}||_∞. MATLAB code: for i = 1:1000; D = diag(rand(5,1)); end.
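A Python analogue of the MATLAB experiment is sketched below. Since the entries of the paper's example matrix are not reproduced here, the 5×5 B-matrix A is a hypothetical stand-in; the experiment draws 1000 random diagonal matrices D and records the largest value of ||(I − D + DA)^{-1}||_∞.

```python
import random

def inv(A):
    """Matrix inverse via Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def inf_norm(A):
    return max(sum(abs(v) for v in row) for row in A)

# Hypothetical 5x5 B-matrix: strong positive diagonal, small negative
# off-diagonal entries (stand-in for the paper's example).
n = 5
A = [[3.0 if i == j else -0.25 for j in range(n)] for i in range(n)]

random.seed(1)
max_norm = 0.0
for _ in range(1000):
    d = [random.random() for _ in range(n)]                 # D = diag(rand(5,1))
    A_D = [[(1.0 - d[i] if i == j else 0.0) + d[i] * A[i][j]
            for j in range(n)] for i in range(n)]           # I - D + DA
    max_norm = max(max_norm, inf_norm(inv(A_D)))
print(max_norm)
```

For this A, each I − D + DA is strictly diagonally dominant, so by Varah's bound the sampled norms never exceed 1; the experiment simply traces the same quantity the figure plots.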
Example 2. Matrix A can be split into A = B+ + C. Taking into account that B+ is not a strictly diagonally dominant matrix, A is not a B-matrix. It is easy to check that with S = {1, 2, 3, 4} and S̄ = {5, 6, 7}, A fulfills Definition 2.2. Therefore, by Theorem 3.1, we obtain max_{d∈[0,1]^n} ||(I − D + DA)^{-1}||_∞ ≤ 2.8002.

Conclusions
In this paper, we first give a new error bound for the LCP(M, q) with S-SDDS-B matrices, which depends only on the entries of M. Then, based on the new result, we compare it with the error bound in [1]. From Figure 1, we find that our result improves that in [1].