AN ITERATIVE ALGORITHM FOR SOLVING A CLASS OF GENERALIZED COUPLED SYLVESTER-TRANSPOSE MATRIX EQUATIONS OVER BISYMMETRIC OR SKEW-ANTI-SYMMETRIC MATRICES

This paper presents an iterative algorithm for solving a class of generalized coupled Sylvester-transpose matrix equations over bisymmetric or skew-anti-symmetric matrices. When the matrix equations are consistent, the proposed algorithm yields a bisymmetric or skew-anti-symmetric solution within finitely many iteration steps, in the absence of round-off errors, for any initial bisymmetric or skew-anti-symmetric matrices. In addition, the least norm solution can be obtained by choosing special initial matrices. Finally, numerical examples are given to demonstrate that the iterative algorithm is efficient. A merit of our method is that it is easy to implement.


Introduction
Matrix equations are widely applied in many fields of science and engineering computing, such as control theory, stability theory and system theory [7,11,12,19]. For instance, in [56], the second-order linear system, where M, D, K and B are known matrices with appropriate dimensions, is closely related to a matrix equation of the form in which F and R are given matrices and (V, W) is a pair of matrices to be determined. In recent years, there have been many studies on solving matrix equations [1-6, 8-10, 13-18, 20-61]. The skew-symmetric solution of a matrix equation was obtained by constructing orthogonal residual matrices [27]. In [18], Dehghan and Shirilord proposed a generalized modified Hermitian and skew-Hermitian splitting (GMHSS) method for solving the complex Sylvester matrix equation, where A ∈ C^{m×m}, B ∈ C^{n×n} and C ∈ C^{m×n} are given large, sparse complex matrices. In [59], Zhang, Wei, Li and Zhao proposed an efficient method for the minimal norm least squares Hermitian solution of a complex matrix equation: by applying the real representations of complex matrices, the particular structure of these real representations, the Kronecker product of matrices and the Moore-Penrose generalized inverse, they obtained an expression for the minimal norm least squares solution. Here A, B, C, D, E and F are given matrices of suitable sizes, and X is an unknown matrix of suitable size over the real, complex or quaternion field.

Very recently, several researchers have studied coupled Sylvester matrix equations and generalized coupled Sylvester matrix equations of various forms [9,14,15,22,23,28,32,35,39,43,51,57,61]. With the progress of scientific research, coupled Sylvester-transpose matrix equations and generalized coupled Sylvester-transpose matrix equations have also been investigated [20,36,40,44,50].
For instance, in [50] the authors considered the generalized coupled Sylvester-transpose matrix equations

    AXB + CY^T D = S_1,    (1.5)

and gave an iterative algorithm to solve the system over reflexive (or anti-reflexive) matrices. In [24], M. Hajarian extended the conjugate direction (CD) method to obtain an efficient method for solving the general coupled Sylvester discrete-time periodic (GCSDTP) matrix equations. In [25], M. Hajarian developed the BiCOR and CORS methods to solve the coupled Sylvester-transpose matrix equations

    ∑_{k=1}^{l} (A_{1,k} X B_{1,k} + C_{1,k} X^T D_{1,k} + E_{1,k} Y F_{1,k}) = M_1,
    ∑_{k=1}^{l} (A_{2,k} X B_{2,k} + C_{2,k} X^T D_{2,k} + E_{2,k} Y F_{2,k}) = M_2.
In [26], M. Hajarian first considered the problem of solving the Sylvester-transpose matrix equation, and then the periodic Sylvester matrix equation. In this paper, we derive and analyze an efficient algorithm to solve the generalized coupled Sylvester-transpose matrix equations of the form (1.6) over bisymmetric or skew-anti-symmetric matrices, where A_1, A_2, A_3, C_1, C_2, C_3, E_1, are known constant matrices and X, Y, Z ∈ R^{n×n} are unknown matrices to be determined.

For convenience, the notation and definitions used in this paper are summarized as follows. Let R^n denote the n-dimensional real vector space and R^{m×n} the set of m × n real matrices. For A ∈ R^{m×n}, the symbols A^T, tr(A), ∥·∥ and R(A) denote the transpose, the trace, the Frobenius norm and the column space of A, respectively. For A = (a_{ij}) and B = (b_{ij}), the Kronecker product is A ⊗ B = (a_{ij}B). For A, B ∈ R^{m×n}, the inner product is defined as ⟨A, B⟩ = tr(B^T A). For A ∈ R^{m×n}, vec(A) denotes the vec operator defined by vec(A) = (a_1^T, a_2^T, ..., a_n^T)^T, where a_i is the i-th column of A. For S = [e_m, e_{m−1}, ..., e_1], where e_i is the m-dimensional unit vector whose i-th component is 1, the matrix S ∈ R^{m×m} is called the m-order quasi-identity matrix.

Definition 1.1. Let S be the m-order quasi-identity matrix. A matrix X ∈ R^{m×m} is said to be bisymmetric if X^T = X = SXS; BSR^{m×m} denotes the set of m-order bisymmetric matrices.

Definition 1.2. Let S be the m-order quasi-identity matrix. A matrix X ∈ R^{m×m} is said to be skew-anti-symmetric if X^T = X = −SXS; SASR^{m×m} denotes the set of m-order skew-anti-symmetric matrices.
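To make Definitions 1.1 and 1.2 concrete, the following sketch builds the quasi-identity matrix S and checks both structure conditions numerically. The helper names and the averaging map onto BSR^{m×m} are illustrative additions, not notation from the paper.

```python
import numpy as np

def quasi_identity(m):
    """m-order quasi-identity matrix S = [e_m, e_{m-1}, ..., e_1]."""
    return np.fliplr(np.eye(m))

def is_bisymmetric(X, tol=1e-12):
    """X is bisymmetric iff X^T = X = S X S (Definition 1.1)."""
    S = quasi_identity(X.shape[0])
    return (np.allclose(X, X.T, atol=tol)
            and np.allclose(X, S @ X @ S, atol=tol))

def is_skew_anti_symmetric(X, tol=1e-12):
    """X is skew-anti-symmetric iff X^T = X = -S X S (Definition 1.2)."""
    S = quasi_identity(X.shape[0])
    return (np.allclose(X, X.T, atol=tol)
            and np.allclose(X, -S @ X @ S, atol=tol))

def bisym_project(R):
    """Average of R over the symmetries; maps any R into BSR^{m x m}."""
    S = quasi_identity(R.shape[0])
    return (R + R.T + S @ R @ S + S @ R.T @ S) / 4.0
```

For example, `bisym_project` applied to any random matrix produces a bisymmetric matrix, and the simple diagonal matrix diag(1, 0, −1) satisfies Definition 1.2.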
The remainder of this paper is structured as follows. In Section 2, after several lemmas, we introduce an MCG (modified conjugate gradient) method for solving the matrix equations (1.6) over bisymmetric or skew-anti-symmetric matrices and prove that a solution (X*, Y*, Z*) of Eq.(1.6) can be obtained by the MCG method within finitely many iteration steps, in the absence of round-off errors, for arbitrary initial values. In Section 3, we show that the least norm solution can be obtained by choosing a special initial matrix. In Section 4, numerical examples are given to demonstrate that the introduced iterative algorithm is efficient. Finally, we conclude the paper in Section 5.

The iterative algorithm for solving the matrix Eq.(1.6)

In this section, we propose an MCG (modified conjugate gradient) iterative method for solving the generalized coupled Sylvester-transpose matrix equations (1.6) over bisymmetric or skew-anti-symmetric matrices. First, on the basis of the properties of the inner product and Theorem 4.3.8 and its corollary in [30], we give a few useful lemmas.
where P_{mn} is a permutation matrix.

Remark 2.1. From Lemma 2.1, it is easy to obtain P_{mn}^2 = P_{mn}P_{nm} = I_{mn} and P_{mn} vec(X^T) = vec(X).
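The identities in Remark 2.1 can be verified numerically. The sketch below constructs the permutation matrix under the convention P_{mn} vec(X^T) = vec(X) for X ∈ R^{m×n}, with the column-major vec operator; the function names are illustrative.

```python
import numpy as np

def vec(X):
    """Column-major vec operator: stacks the columns of X."""
    return X.reshape(-1, order="F")

def perm_matrix(m, n):
    """P_{mn} with P_{mn} vec(X^T) = vec(X) for X in R^{m x n}."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # entry (i, j) of X sits at position j*m + i in vec(X)
            # and at position i*n + j in vec(X^T)
            P[j * m + i, i * n + j] = 1.0
    return P
```

With this construction, P_{mn} P_{nm} = I_{mn}, and in the square case m = n the matrix satisfies P_{mn}^2 = I_{mn}, matching Remark 2.1.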
According to the Kronecker product, the vec operator and Lemmas 2.1-2.2, the generalized coupled Sylvester-transpose matrix equations (1.6) can be transformed into the following system of linear equations: In principle, the classical conjugate gradient method could be applied to Eq.(2.3), but this has little practical significance: the size of Eq.(2.3) is usually very large, so solving it directly would consume considerable time and computer storage. Therefore, we use the MCG method in matrix form to solve Eq.(2.3). Next, we give the MCG algorithms for solving Eq.(2.3) over bisymmetric or skew-anti-symmetric matrices.
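To illustrate the vectorization step on a single (uncoupled) model equation, note that A X B + C X^T D = M becomes (B^T ⊗ A + (D^T ⊗ C) K) vec(X) = vec(M), where K is the permutation with K vec(X) = vec(X^T). The sketch below checks this on a small random instance; the model equation and names are illustrative, not the exact system (2.3).

```python
import numpy as np

def vec(X):
    """Column-major vec operator."""
    return X.reshape(-1, order="F")

def commutation(n):
    """K with K vec(X) = vec(X^T) for X in R^{n x n}."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[i * n + j, j * n + i] = 1.0
    return K

rng = np.random.default_rng(1)
n = 4
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
X_true = rng.standard_normal((n, n))
M = A @ X_true @ B + C @ X_true.T @ D      # consistent right-hand side

# vec(A X B + C X^T D) = (B^T kron A + (D^T kron C) K) vec(X)
K = commutation(n)
G = np.kron(B.T, A) + np.kron(D.T, C) @ K
x, *_ = np.linalg.lstsq(G, vec(M), rcond=None)
X = x.reshape((n, n), order="F")
residual = np.linalg.norm(A @ X @ B + C @ X.T @ D - M)
```

Even for this toy 4 × 4 problem the Kronecker system is already 16 × 16, which is why the paper works with the matrix-form MCG iteration instead of forming G explicitly.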
Step 2. If R_k = 0, or R_k ≠ 0 and P_k = 0, stop; otherwise, compute
Step 3. Calculate (2.23)
Step 4. Set k := k + 1 and return to Step 2.

In order to prove that the sequence {X_k, Y_k, Z_k} generated by Algorithm 2.1 converges to the solution (X*, Y*, Z*), we list the following results (see [50]) and lemmas. Let A, B ∈ R^{m×n}, C ∈ R^{n×m} and X ∈ R^{n×n}; then the following equalities hold.
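The update formulas (2.23) of Algorithm 2.1 are given in the original paper; as a rough illustration of how such an MCG iteration is organized, the following sketch applies a conjugate-gradient-type scheme of the same family to the single model equation A X B + C X^T D = M over bisymmetric X. The operator, its adjoint, the projection helper, and the simplified update rule are all assumptions for illustration, not the paper's exact Algorithm 2.1.

```python
import numpy as np

def quasi_identity(n):
    """n-order quasi-identity matrix."""
    return np.fliplr(np.eye(n))

def bisym_project(R):
    """Orthogonal projection onto the bisymmetric matrices (X^T = X = SXS)."""
    S = quasi_identity(R.shape[0])
    return (R + R.T + S @ R @ S + S @ R.T @ S) / 4.0

def mcg_bisym(A, B, C, D, M, tol=1e-12, maxit=500):
    """CG-type iteration for A X B + C X^T D = M over bisymmetric X.
    A simplified single-equation stand-in for Algorithm 2.1."""
    n = A.shape[0]
    op  = lambda X: A @ X @ B + C @ X.T @ D        # the linear operator
    adj = lambda R: A.T @ R @ B.T + D @ R.T @ C    # its adjoint for <U,V> = tr(V^T U)
    X = np.zeros((n, n))                           # bisymmetric initial guess
    R = M - op(X)
    P = bisym_project(adj(R))                      # search direction in BSR^{n x n}
    r2 = np.sum(R * R)
    for _ in range(maxit):
        if np.sqrt(r2) < tol or np.linalg.norm(P) == 0:
            break
        alpha = r2 / np.sum(P * P)
        X = X + alpha * P                          # stays bisymmetric
        R = M - op(X)
        r2_new = np.sum(R * R)
        P = bisym_project(adj(R)) + (r2_new / r2) * P
        r2 = r2_new
    return X, np.sqrt(r2)
```

For a consistent right-hand side built from a bisymmetric matrix, the iterates remain bisymmetric and the residual is driven to zero, mirroring the behavior claimed for Algorithm 2.1.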
(2.25)

Lemma 2.4. If X ∈ BSR^{n×n}, R ∈ R^{n×n} and S is an n-order quasi-identity matrix, then

Proof. By the definition of the quasi-identity matrix and Definition 1.1, we obtain S^T = S and SXS = X = X^T, and the claim follows.

Proof. By direct calculation, we can get the stated identity.

Proof. We prove Eq.(2.28) by induction on k. For k = 2, the required relations follow by direct computation. Assuming that Eq.(2.28) holds for k = s, for k = s + 1 the corresponding inner products again vanish, which completes the induction.
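A lemma of this type typically asserts that, for bisymmetric X, the inner product ⟨X, R⟩ is unchanged when R is replaced by its bisymmetric part (R + R^T + SRS + SR^T S)/4, which is what lets the algorithm project its search directions without losing information. This identity can be checked numerically; the helper names are illustrative, and the exact statement of (2.25) should be taken from the original paper.

```python
import numpy as np

def quasi_identity(n):
    return np.fliplr(np.eye(n))

def bisym_project(R):
    """Bisymmetric part of R."""
    S = quasi_identity(R.shape[0])
    return (R + R.T + S @ R @ S + S @ R.T @ S) / 4.0

rng = np.random.default_rng(5)
n = 6
X = bisym_project(rng.standard_normal((n, n)))   # bisymmetric test matrix
R = rng.standard_normal((n, n))                  # arbitrary matrix

inner = lambda U, V: np.trace(V.T @ U)           # <U, V> = tr(V^T U)
lhs = inner(X, R)
rhs = inner(X, bisym_project(R))                 # equal whenever X is bisymmetric
```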
Remark 2.4. Lemmas 2.5-2.7 are obtained under the assumption that the initial matrices are bisymmetric. Analogously, if the initial matrices are skew-anti-symmetric, the same results hold; therefore, we do not restate them.

Theorem 2.1. Suppose that the generalized coupled Sylvester-transpose matrix equations (1.6) are consistent over bisymmetric matrices. Then, for any initial bisymmetric matrix triple (X_1, Y_1, Z_1), an exact solution of the system (1.6) can be obtained within at most 3m^2 + 1 iteration steps by Algorithm 2.1.
(2.30)

Therefore {R_1, R_2, ..., R_{3m^2}} is an orthogonal basis of the subspace. The proof is completed. □

Note that if the generalized coupled Sylvester-transpose matrix equations (1.6) are consistent over bisymmetric matrices, the solution may not be unique, and we then need to find the least norm solution of Eqs.(1.6). We discuss the least norm solution of Eqs.(1.6) in the following section.
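The counting argument behind Theorem 2.1 is that mutually orthogonal nonzero residuals cannot outnumber the dimension of the space, so the iteration must terminate. The same phenomenon can be observed with the classical conjugate gradient method on a small symmetric positive definite system; the following demo is an illustrative analogue, not Algorithm 2.1 itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
Q = rng.standard_normal((n, n))
Apd = Q @ Q.T + n * np.eye(n)        # symmetric positive definite matrix
b = rng.standard_normal(n)

# Plain conjugate gradient, storing every residual.
x = np.zeros(n)
r = b - Apd @ x
p = r.copy()
residuals = [r.copy()]
for _ in range(n):
    Ap = Apd @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    residuals.append(r.copy())
    if np.linalg.norm(r) < 1e-14:
        break
# The stored residuals are mutually orthogonal, so at most n of them
# can be nonzero -- exactly the counting argument for finite termination.
```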

The least norm solution
First, we give the following lemmas, which show that the least norm solution of Eqs.(1.6) can be obtained within finitely many iteration steps.

Lemma 3.1. The generalized coupled Sylvester-transpose matrix equations (1.6) have a bisymmetric solution if and only if the matrix equations
are consistent.
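For intuition about why the least norm solution is singled out: once the matrix equations are vectorized into a consistent linear system, the least Frobenius norm solution corresponds to the minimum 2-norm solution of that system, which is the Moore-Penrose pseudoinverse solution. A small generic sketch follows; the matrix G merely stands in for the vectorized coefficient matrix and is not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 5                          # fewer equations than unknowns: many solutions
G = rng.standard_normal((m, n))      # stand-in for the vectorized coefficient matrix
b = G @ rng.standard_normal(n)       # consistent right-hand side by construction

x_min = np.linalg.pinv(G) @ b        # the unique least 2-norm solution

# Any other solution differs from x_min by a null-space vector and is longer:
null_dir = np.linalg.svd(G)[2][-1]   # a right singular vector in ker(G)
x_other = x_min + null_dir
```

Because x_min lies in the row space of G, it is orthogonal to every null-space direction, so adding such a direction can only increase the norm; this is the property the special initial matrices of this section are designed to exploit.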
Numerical examples

It can be verified that the matrix equations are consistent over bisymmetric matrices. We take the initial matrices X_1 = zeros(6), Y_1 = zeros(6) and Z_1 = zeros(6); using Algorithm 2.1, after 86 iterations we obtain an approximate solution. It can be seen that the computed solutions are bisymmetric matrices. The corresponding residual and norms are ⟨R_86, R_86⟩ = 9.5088 × 10^{−12} < 10^{−11}, ∥X_86∥_F = 5.5160, ∥Y_86∥_F = 11.8934, ∥Z_86∥_F = 12.3229. We observe that the residual decreases and converges to zero as k increases.

Conclusions
In this paper, we studied an MCG algorithm for solving a class of generalized coupled Sylvester-transpose matrix equations (1.6) over bisymmetric or skew-anti-symmetric matrices. We showed that if the matrix equations (1.6) are consistent over bisymmetric matrices, an exact solution of the system (1.6) can be obtained within finitely many iteration steps by Algorithm 2.1. We also proved that, in this case, the unique least Frobenius norm bisymmetric solution can be obtained by choosing special initial bisymmetric matrices. Finally, numerical examples were presented to support our theoretical analysis.