Iterative Algorithms for Symmetric Positive Semidefinite Solutions of the Lyapunov Matrix Equations

It is well known that the stability of a first-order autonomous system can be determined by testing the symmetric positive definite solutions of the associated Lyapunov matrix equations. However, research on the constrained solutions of the Lyapunov matrix equations remains scarce. In this paper, we present three iterative algorithms for symmetric positive semidefinite solutions of the Lyapunov matrix equations. The first and second iterative algorithms are based on the relaxed proximal point algorithm (RPPA) and the Peaceman–Rachford splitting method (PRSM), respectively, and their global convergence can be ensured by corresponding results in the literature. The third iterative algorithm is based on the famous alternating direction method of multipliers (ADMM), and its convergence is subsequently discussed in detail. Finally, numerical simulation results illustrate the effectiveness of the proposed iterative algorithms.


Introduction
Linear matrix equations are often encountered in various engineering disciplines, such as control theory [1, 2], system theory [3, 4], and stability analysis [5, 6]. For example, the stability of the first-order autonomous system ẋ(t) = Ax(t) is determined by whether the associated continuous-time Lyapunov equation XA + A⊤X = −M admits a positive definite solution or not, and the stability of the discrete-time system x(k + 1) = Ax(k) can be determined by solving the associated discrete-time Lyapunov matrix equation X − A⊤XA = M, where M is a given positive definite matrix of appropriate size [7].
In this paper, we consider the following constrained (continuous-time) Lyapunov matrix equations (termed CLMEs): find a matrix X* ∈ S^m_+ such that

A⊤X* + X*A = C, (1)

where S^m_+ denotes the symmetric positive semidefinite (SPSD) matrix cone, A ∈ R^{m×m} and C ∈ R^{m×m} are two given constant matrices, and X* ∈ R^{m×m} is an unknown matrix to be determined. To solve problem (1), many iterative methods have been developed in the literature. For example, based on the hierarchical identification principle in parameter estimation, Ding et al. [8] designed a gradient-based iterative algorithm and a least-squares iterative algorithm for problem (1) without the constraint X* ∈ S^m_+. Niu et al. [9] and Xie and Ma [10] extended the method in [8] by designing accelerated gradient-based iterative algorithms for problem (1) without X* ∈ S^m_+, which often achieve better numerical performance with suitably chosen parameters. Based on the hierarchical identification principle and a new fixed point equation, Sun et al. [7] designed two least-squares iterative algorithms for problem (1) without X* ∈ S^m_+. The conjugate gradient method is an influential numerical method for nonlinear optimization, and it has also been applied to problem (1) without X* ∈ S^m_+. For example, Huang and Ma [11] proposed a conjugate gradient method to solve problem (1) whether it is consistent or not. Appealing characteristics of the method in [11] are that (i) it has the finite termination property in the absence of round-off errors, and (ii) it obtains the least Frobenius norm solution of problem (1) when a special kind of initial matrix is adopted. Furthermore, Ke and Ma [12] extended the alternating direction method of multipliers (ADMM), a famous numerical method in separable optimization, to solve problem (1) with the nonnegativity constraint X* ≥ 0, which is clearly motivated by the design principle proposed by Xu et al. [13].
Though the above iterative methods often have highly efficient numerical performance, they only solve problem (1) without the constraint X* ∈ S^m_+. However, in practice we often need to solve problem (1) with the constraint X* ∈ S^m_+. In this paper, we deal with this issue and propose three iterative algorithms, which are based on the famous relaxed proximal point algorithm (RPPA), the Peaceman–Rachford splitting method (PRSM), and the alternating direction method of multipliers (ADMM), respectively. By transforming the CLMEs into an optimization problem, we can determine the existence of solutions to the CLMEs by judging whether the optimal value of the transformed problem equals zero or not.
Before ending this section, let us introduce the notation used in this paper. For any matrix N ∈ R^{m×n}, we use N⊤ and tr(N) to denote its transpose and trace, respectively. The symbol I_m stands for the identity matrix of order m. Let ⟨A, B⟩ = tr(B⊤A) = Σ_{ij} a_{ij} b_{ij}, where the matrices A and B have the same size. The Frobenius norm ‖N‖ is defined as ‖N‖ = √(tr(N⊤N)). The symbol A ⊗ B, defined as A ⊗ B = (a_{ij}B), represents the Kronecker product of the matrices A and B. For a matrix A ∈ R^{m×n}, the vectorization operator vec(A) is defined by vec(A) = (a_1⊤, a_2⊤, . . . , a_n⊤)⊤, where a_k is the k-th column of A. According to the properties of the Kronecker product, for any matrices M, N, and X of appropriate sizes, it holds that

vec(MXN) = (N⊤ ⊗ M)vec(X). (2)

The Kronecker product satisfies the following property [14]:

Property 2. Let A ∈ R^{m×n}, C ∈ R^{n×p}, B ∈ R^{q×r}, and D ∈ R^{r×s}. Then (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).

Let P_{S^m_+}(·) denote the projection operator onto the convex cone S^m_+, i.e., P_{S^m_+}(A) = arg min_{X ∈ S^m_+} ‖X − A‖, which satisfies the following property [15]:

⟨B − P_{S^m_+}(A), A − P_{S^m_+}(A)⟩ ≤ 0, ∀B ∈ S^m_+. (6)

Furthermore, P_{S^m_+}(A) has a closed-form expression, i.e.,

P_{S^m_+}(A) = U diag(max{λ_1, 0}, . . . , max{λ_m, 0})U⊤, (7)

where (A + A⊤)/2 = U diag(λ_1, . . . , λ_m)U⊤ is a spectral decomposition, and the computational load of the above procedure is 10m³ flops.

The remainder of this paper is organized as follows. In Sections 2 and 3, we present the first and second iterative algorithms for problem (1) and analyze their global convergence. Section 4 discusses the third iterative algorithm for problem (1) and presents its global convergence. Section 5 presents some numerical results to illustrate the efficiency of the proposed algorithms. Finally, Section 6 concludes the paper and discusses future research opportunities.
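For concreteness, the closed-form projection in (7) can be implemented in a few lines; the following NumPy sketch (our own illustrative code, with the hypothetical name `proj_psd`) symmetrizes the input and clips the negative eigenvalues:

```python
import numpy as np

def proj_psd(A):
    """Projection onto the SPSD cone S^m_+ as in formula (7):
    take the symmetric part, then zero out negative eigenvalues."""
    S = (A + A.T) / 2.0                       # symmetric part of A
    lam, U = np.linalg.eigh(S)                # spectral decomposition S = U diag(lam) U^T
    return (U * np.maximum(lam, 0.0)) @ U.T   # U diag(max(lam, 0)) U^T
```

For an input already in S^m_+ the projection returns the input itself; in general the output is symmetric with nonnegative eigenvalues.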

The First Iterative Algorithm and Its Convergence
In this section, motivated by the relaxed proximal point algorithm (RPPA) in [16], we present an iterative algorithm for the constrained Lyapunov matrix equations and discuss its convergence. First, let us transform the CLMEs into the following constrained matrix optimization problem (termed CMO):

min_{X ∈ S^m_+} (1/2)‖A⊤X + XA − C‖². (8)

Remark 1. Obviously, the objective function of CMO (8) is bounded below by zero; therefore, CMO (8) has an optimal solution, and from its optimal value we can determine the existence of solutions of the CLMEs. In fact, if the optimal value of CMO (8) equals zero, the CLMEs have at least one solution; otherwise, the CLMEs have no solution.
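As an illustration of Remark 1, the optimal value test can be approximated numerically: by identity (2), the unconstrained relaxation of the objective is an ordinary least-squares problem. The following NumPy sketch (our own; the helper name `clme_residual` is hypothetical) solves it and reports the residual; a zero residual together with an SPSD solution certifies that the CLMEs are solvable:

```python
import numpy as np

def clme_residual(A, C):
    """Solve the unconstrained least-squares relaxation of the CMO via the
    Kronecker identity (2), returning (X, residual). Illustrative only."""
    m = A.shape[0]
    I = np.eye(m)
    # vec(A^T X + X A) = K vec(X), with column-stacked (Fortran-order) vec
    K = np.kron(I, A.T) + np.kron(A.T, I)
    x, *_ = np.linalg.lstsq(K, C.reshape(-1, order='F'), rcond=None)
    X = x.reshape(m, m, order='F')
    return X, np.linalg.norm(A.T @ X + X @ A - C)
```

When the Lyapunov operator is nonsingular (no two eigenvalues of A sum to zero) and the right-hand side is consistent, the least-squares solution is the exact solution.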
Since CMO (8) can be viewed as the matrix form of a traditional convex optimization problem with vector variables, we apply the relaxed proximal point algorithm (termed RPPA) proposed by Gol'shtein and Tret'yakov [16] to solve CMO (8), and obtain the following RPPA.

Mathematical Problems in Engineering
Step 3. If the stopping criterion is satisfied, then stop and return an approximate solution X^{k+1} of (8).
Step 4. Compute the new iterate by the update formula; set k := k + 1, and go to Step 2.
In the following, we specify the matrix G in the RPPA. Now, let us elaborate on the optimization subproblem in (9). For the given X^k, by some simple manipulations, the X-subproblem in (9) amounts to solving a projection problem onto S^m_+, where the second equality follows from the two properties in Section 1. Then, the new iterate can be computed by formula (7).
From [16], we have the following theorem, which shows that the iterative sequence {X^k} generated by the RPPA is Fejér monotone with respect to the solution set. The proof of its global convergence follows immediately from this property and the results in [17].

Theorem 1. Let the sequence {X^k} be generated by the RPPA. Then, for any k ≥ 0, one has

‖X^{k+1} − X*‖_G ≤ ‖X^k − X*‖_G,

where X* is a solution of CMO (8).
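To make the projection-based structure of such iterations concrete, here is a minimal projected-gradient sketch in NumPy for the objective (1/2)‖A⊤X + XA − C‖² over S^m_+. This is not the paper's RPPA (the matrix G, the relaxation factor, and the proximal term are omitted), and the step size 1/(4‖A‖²) is a conservative Lipschitz-based choice of ours:

```python
import numpy as np

def proj_psd(A):
    S = (A + A.T) / 2.0
    lam, U = np.linalg.eigh(S)
    return (U * np.maximum(lam, 0.0)) @ U.T

def solve_clme_pg(A, C, tol=1e-10, max_iter=20000):
    """Projected-gradient sketch for min_{X in S^m_+} 0.5*||A^T X + X A - C||^2.
    Illustrative only: gradient step, then projection via formula (7)."""
    m = A.shape[0]
    X = np.zeros((m, m))
    step = 1.0 / (4.0 * np.linalg.norm(A, 2) ** 2)   # 1 / (Lipschitz bound of gradient)
    for _ in range(max_iter):
        R = A.T @ X + X @ A - C          # residual of the Lyapunov equation
        G = A @ R + R @ A.T              # gradient of the objective
        X_new = proj_psd(X - step * G)
        if np.linalg.norm(X_new - X) <= tol:
            return X_new
        X = X_new
    return X
```

The gradient formula follows from ⟨R, A⊤dX + dX A⟩ = ⟨AR + RA⊤, dX⟩, and the iterates stay symmetric whenever C is symmetric.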

The Second Iterative Algorithm and Its Convergence
In this section, based on the Peaceman–Rachford splitting methods in [18, 19], we present another iterative algorithm for the constrained Lyapunov matrix equations and discuss its convergence. By introducing an auxiliary matrix variable Y ∈ R^{m×m}, CMO (8) can be rewritten as the following separable convex optimization problem (termed SCO):

min (1/2)‖A⊤X + XA − C‖², s.t. X − Y = 0, Y ∈ S^m_+. (16)

Since SCO (16) can be viewed as the matrix form of a traditional convex optimization problem with vector variables, we apply the Peaceman–Rachford splitting method in [18, 19] to solve SCO (16), and obtain the following Peaceman–Rachford splitting method (termed PRSM).

Step 2. Generate the new iterate W^{k+1} = (X^{k+1}, Y^{k+1}, Λ^{k+1}) by the updates in (17).
Step 3. If the stopping criterion is satisfied, then stop and return an approximate solution (X^{k+1}, Y^{k+1}) of (16); else set k := k + 1, and go to Step 2.
Now, let us elaborate on the two optimization subproblems in (17). For the given Y^k and Λ^k, the X-subproblem in (17) amounts to solving a normal equation, from which we obtain X^{k+1}. For the given X^{k+1} and Λ^{k+(1/2)}, the Y-subproblem in (17) amounts to solving a projection problem onto S^m_+, which can be computed by formula (7).
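For readers who want to experiment, the PRSM scheme above can be sketched in NumPy as follows. This is our illustrative code (the function name `prsm_clme` is ours), assuming SCO takes the form min (1/2)‖A⊤X + XA − C‖² s.t. X − Y = 0, Y ∈ S^m_+, with the X-subproblem solved through the vectorization identity (2) and the Y-subproblem through formula (7); the defaults follow the parameter choices α = 0.9, β = 2 reported in Section 5:

```python
import numpy as np

def proj_psd(M):
    S = (M + M.T) / 2.0
    lam, U = np.linalg.eigh(S)
    return (U * np.maximum(lam, 0.0)) @ U.T

def prsm_clme(A, C, alpha=0.9, beta=2.0, tol=1e-10, max_iter=20000):
    """PRSM sketch for min 0.5*||A^T X + X A - C||^2 s.t. X - Y = 0, Y in S^m_+."""
    m = A.shape[0]
    I = np.eye(m)
    K = np.kron(I, A.T) + np.kron(A.T, I)          # vec(A^T X + X A) = K vec(X)
    # the coefficient of the X normal equation is constant: invert it once
    Minv = np.linalg.inv(K.T @ K + beta * np.eye(m * m))
    Kc = K.T @ C.reshape(-1, order='F')
    X = np.zeros((m, m)); Y = X.copy(); Lam = X.copy()
    for _ in range(max_iter):
        rhs = Kc + beta * Y.reshape(-1, order='F') + Lam.reshape(-1, order='F')
        X = (Minv @ rhs).reshape(m, m, order='F')  # X-subproblem (normal equation)
        Lam_half = Lam - alpha * beta * (X - Y)    # first multiplier update
        Y = proj_psd(X - Lam_half / beta)          # Y-subproblem via formula (7)
        Lam = Lam_half - alpha * beta * (X - Y)    # second multiplier update
        if np.linalg.norm(X - Y) <= tol:
            break
    return X, Y
```

The two symmetric multiplier updates with factor α ∈ (0, 1) are what distinguish the PRSM from the ADMM of Section 4, which updates its multipliers only once per sweep.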

Remark 2.
Though the computation of X^{k+1} involves a matrix inverse, this matrix is invariant across iterations, so we only need to compute it once before all iterations.
From [18, 19], we have the following theorem, which shows that the iterative sequence {W^k} generated by the PRSM is Fejér monotone with respect to the solution set. Obviously, SCO is a convex optimization problem. Thus, its global convergence follows immediately from this property and the results in [17].

Theorem 2.
Let {W^k} be the sequence generated by the PRSM. Then, for any k ≥ 0, one has

‖W^{k+1} − W*‖²_H ≤ ‖W^k − W*‖²_H − ‖W^k − W̃^k‖²_N,

where H and N are two positive definite matrices, W* = (X*, Y*, Λ*) with (X*, Y*) being a solution of SCO and Λ* being the corresponding Lagrangian multiplier, and W̃^k is an auxiliary sequence defined in [18, 19].

Remark 3. Following the above procedure, some famous iterative algorithms for convex programming, such as the proximal point algorithm (PPA) and the relaxed PPA [16], can also be used to solve SCO.

The Third Iterative Algorithm and Its Convergence
In this section, motivated by the alternating direction method of multipliers (ADMM) in [12, 13], we present the third iterative algorithm for the constrained Lyapunov matrix equations and discuss its convergence. First, by introducing two auxiliary matrix variables Y and Z with the constraints X − Y = 0, Y − Z = 0, and Z ∈ S^m_+, CMO (8) can be written in the equivalent form (25). The augmented Lagrangian function of (25) is (26), where the matrices Λ, Π ∈ R^{m×m} are Lagrangian multipliers and the constants α, β > 0 are penalty parameters. Then, we can derive the third iterative algorithm (termed ADMM) for CMO by alternately minimizing L(X, Y, Z, Λ, Π) with respect to one variable while the other variables are fixed.

Algorithm 3.
The ADMM for problem (25).
Step 1. Input three parameters α, β > 0 and c ∈ (0, (1 + √5)/2).
Step 2. Generate the new iterate by the updates in (27).
Step 3. If the stopping criterion is satisfied, then stop and return an approximate solution (X^{k+1}, Y^{k+1}, Z^{k+1}) of (25); else, set k := k + 1, and go to Step 2.
Now, let us elaborate on the three optimization subproblems in (27). For the given Y^k, Z^k, Λ^k, and Π^k, the X-subproblem in (27) amounts to solving a normal equation, from which X^{k+1} is obtained. For the given X^{k+1}, Z^k, Λ^k, and Π^k, the Y-subproblem in (27) amounts to solving another normal equation, from which Y^{k+1} is obtained. For the given X^{k+1}, Y^{k+1}, Λ^k, and Π^k, the Z-subproblem in (27) amounts to solving a projection problem onto S^m_+, which can be computed by formula (7).

Remark 4.
It is obvious that the two matrices αI_m + A⊤A and (α + β)I_m + A⊤A are symmetric positive definite for any real constants α, β > 0. Since they are invariant during the iteration, we only need to compute their inverses once before all iterations.
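To illustrate the whole scheme, here is a NumPy sketch of the ADMM iteration. We assume (25) takes the hypothetical form min (1/2)‖A⊤X + YA − C‖² s.t. X − Y = 0, Y − Z = 0, Z ∈ S^m_+, a reading consistent with the constant coefficient matrices in Remark 4 (up to transposition, we obtain AA⊤ in place of A⊤A); the code and the default step length gamma = 1 (the paper's c) are ours, not the paper's:

```python
import numpy as np

def proj_psd(M):
    S = (M + M.T) / 2.0
    lam, U = np.linalg.eigh(S)
    return (U * np.maximum(lam, 0.0)) @ U.T

def admm_clme(A, C, alpha=2.0, beta=2.0, gamma=1.0, tol=1e-12, max_iter=20000):
    """ADMM sketch under an assumed form of (25):
    min 0.5*||A^T X + Y A - C||^2  s.t.  X = Y, Y = Z, Z in S^m_+."""
    m = A.shape[0]
    I = np.eye(m)
    # constant coefficient matrices: invert once before iterating (Remark 4)
    MX = np.linalg.inv(A @ A.T + alpha * I)            # X-update (left factor)
    MY = np.linalg.inv(A @ A.T + (alpha + beta) * I)   # Y-update (right factor)
    X = np.zeros((m, m)); Y = X.copy(); Z = X.copy()
    Lam = X.copy(); Pi = X.copy()
    for _ in range(max_iter):
        X = MX @ (A @ (C - Y @ A) + Lam + alpha * Y)   # X normal equation
        Y = ((C - A.T @ X) @ A.T - Lam + alpha * X + Pi + beta * Z) @ MY
        Z = proj_psd(Y - Pi / beta)                    # Z-subproblem via (7)
        Lam = Lam - gamma * alpha * (X - Y)            # multiplier updates
        Pi = Pi - gamma * beta * (Y - Z)
        if max(np.linalg.norm(X - Y), np.linalg.norm(Y - Z)) <= tol:
            break
    return X, Y, Z
```

At a fixed point, X = Y = Z and the stationarity conditions force the Lyapunov residual to vanish whenever no two eigenvalues of A sum to zero.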
The KKT conditions for problem (25) are the following, where Λ, Π, and Γ are three Lagrangian multipliers corresponding to the constraints X − Y = 0, Y − Z = 0, and Z ∈ S^m_+ in (25), respectively. Eliminating the variable Γ, the above KKT system can be written as (39). If a point (X, Y, Z, Λ, Π) satisfies the above conditions, then (X, Y, Z) is called a KKT point, and (Λ, Π) is the corresponding multiplier.
To simplify notation, we set W = (X, Y, Z, Λ, Π). Now, let us analyze the convergence of the ADMM. Since the objective function of (25) is not separable, we only obtain the following weak convergence result.

Theorem 3. Let {W^k} be the sequence generated by the ADMM which satisfies the stated condition. Then, any accumulation point of {W^k} satisfies the KKT conditions for problem (25).
In the following, we only need to prove that (39) also holds for W. From (35) and (36), the symmetry of X and C implies that the matrix Π is also symmetric. From (44) and (46), together with the projection property (6), it holds that

⟨B − Z, Π⟩ ≥ 0, ∀B ∈ S^m_+.

Setting B = 0 and B = 2Z in the above inequality, we get ⟨Π, Z⟩ ≤ 0 and ⟨Π, Z⟩ ≥ 0. Thus, ⟨Π, Z⟩ = 0. Let λ_i (i = 1, 2, . . . , m) with λ_1 ≥ λ_2 ≥ · · · ≥ λ_m be the m eigenvalues of the matrix Π. Suppose that the matrix Π is not positive semidefinite; then λ_m < 0. Let P ∈ R^{m×m} be an orthogonal matrix that diagonalizes Π, and let p_m be its column associated with λ_m. Taking B = Z + t p_m p_m⊤ ∈ S^m_+ with t > 0, we obtain ⟨B − Z, Π⟩ = tλ_m ≥ 0, which contradicts t > 0 and λ_m < 0. Thus, the matrix Π ∈ S^m_+. This completes the proof.
□

Remark 5. Based on the proximal ADMM [20], we can design an inverse-free ADMM for the studied problem by attaching proximal terms to the subproblems with respect to X and Y in the ADMM. Furthermore, based on the parallel or partially parallel splitting methods or their proximal versions [21, 22], we can design some inverse-free (partially) parallel splitting methods for the studied problem [23, 24].

Numerical Results
In this section, we evaluate the performance of the three iterative algorithms designed in this paper for solving the Lyapunov matrix equations and report some preliminary numerical results. The codes of our algorithms were written entirely in Matlab R2014a and executed on a ThinkPad notebook with a Pentium(R) Dual-Core CPU T4400 @ 2.2 GHz and 2 GB of memory. The parameters in the three tested algorithms are set as follows:

For RPPA: τ = ‖Δ⊤Δ‖, c = 1.8
For PRSM: α = 0.9, β = 2
For ADMM: α = 2, β = 2, c = 1.618

The three curves in Figure 4 indicate that for CLMEs with a nonsymmetric matrix A, RPPA and PRSM fall into premature convergence, while ADMM successfully solves this problem.
The numerical results of the three tested algorithms are plotted in Figure 5, from which we find that the performance of ADMM is a little better than that of PRSM, which in turn is clearly better than that of RPPA.

Conclusions
In this paper, three iterative algorithms have been proposed to solve the constrained Lyapunov matrix equations, and their convergence has been discussed in detail. Furthermore, preliminary numerical results have been included, which verify the efficiency of the proposed algorithms.
Among the various challenges posed by large-scale constrained Lyapunov matrix equations, high performance is a must, not an option. Therefore, one object of our future investigations is to design more efficient iterative algorithms and to propose valid criteria from which we can directly determine whether the Lyapunov matrix equations have a symmetric positive semidefinite solution or not.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.