An iterative algorithm for robust simulation of the Sylvester matrix differential equations

This paper proposes a new, effective pseudo-spectral approximation for solving the Sylvester and Lyapunov matrix differential equations. The properties of the operational matrix of derivative for the Chebyshev basis are used to convert the main equation into coupled matrix equations. An iterative algorithm is then developed for solving the resulting equations. The error analysis of the proposed method is also presented, revealing the spectral rate of convergence. Several numerical examples are given to illustrate the effectiveness of the proposed framework.

In recent years, spectral collocation methods have received the attention of many researchers [18][19][20]. The ease of implementation and the exponential rate of convergence are two main advantages of these methods [21,22]. The main contribution of the current paper is to implement the Chebyshev collocation method to solve (1). With the aid of collocation points, we obtain coupled linear matrix equations whose solution gives the unknown coefficients. In the literature, finding a solution to different kinds of linear and coupled matrix equations is a subject of interest and has been studied extensively. Several approaches have been established for solving such equations, for example conjugate gradient methods, gradient-based iterative algorithms, Paige's algorithm, and Krylov subspace methods; for more details, see [13,[23][24][25][26][27] and the references therein. We propose an iterative algorithm based on Paige's algorithm [28] to solve the obtained coupled matrix equations.
In Sect. 2, we first review some definitions and notation, together with the properties of the Chebyshev polynomials needed for the subsequent developments. In Sect. 3, we employ the Chebyshev basis to reduce problem (1) to the solution of coupled matrix equations; a new iterative algorithm is then presented for solving these equations, and an error estimate for the proposed method is given. Section 4 is dedicated to numerical simulations. Finally, conclusions are provided.

Preliminaries
We review Paige's algorithm and some basic definitions and properties of matrix algebra and the Chebyshev polynomials.
The main idea behind Paige's algorithm [27][28][29] is to use a bidiagonalization algorithm as the basis for solving least-squares problems. The solution generated by Algorithm 1 is the minimum Euclidean norm solution. The computational importance of these algorithms lies in their application to very large problems with sparse matrices: in this case, the computation per step and the storage are about as small as could be hoped. Moreover, in theory the number of steps is no greater than the minimum dimension of A.
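As a minimal sketch of the bidiagonalization idea, the following Python routine implements the classical LSQR scheme of Paige and Saunders (not the paper's Algorithm 1; the name `paige_lsqr` and the stopping tolerance are ours). Because the iterates stay in the range of A^T, for a consistent underdetermined system it converges to the minimum Euclidean norm solution:

```python
import numpy as np

def paige_lsqr(A, b, iters=50, tol=1e-12):
    """Minimal LSQR (Paige-Saunders): Golub-Kahan bidiagonalization applied
    to min ||Ax - b||; for consistent systems returns the min-norm solution."""
    m, n = A.shape
    x = np.zeros(n)
    bnorm = np.linalg.norm(b)
    beta = bnorm
    u = b / beta
    v = A.T @ u
    alpha = np.linalg.norm(v)
    v = v / alpha
    w = v.copy()
    phibar, rhobar = beta, alpha
    for _ in range(iters):
        if phibar <= tol * bnorm:      # residual estimate is negligible
            break
        # Continue the bidiagonalization: next left/right Lanczos vectors.
        u = A @ v - alpha * u
        beta = np.linalg.norm(u)
        if beta > 0:
            u = u / beta
        vn = A.T @ u - beta * v
        alpha = np.linalg.norm(vn)
        if alpha > 0:
            v = vn / alpha
        # Givens rotation eliminates the subdiagonal of the bidiagonal matrix.
        rho = np.hypot(rhobar, beta)
        c, s = rhobar / rho, beta / rho
        theta, rhobar = s * alpha, -c * alpha
        phi, phibar = c * phibar, s * phibar
        x = x + (phi / rho) * w
        w = v - (theta / rho) * w
    return x
```

On a small consistent 2x3 system the routine agrees with the pseudoinverse solution, and the number of effective steps is bounded by the rank, in line with the remark above.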
Let ω(x) be a weight function; the inner product and norm in L2_ω are defined by (f, g)_ω = ∫ f(x) g(x) ω(x) dx and ||f||_ω = (f, f)_ω^{1/2}. The Chebyshev polynomials are defined on the interval [-1, 1] by the recurrence T_0(x) = 1, T_1(x) = x, and T_{i+1}(x) = 2x T_i(x) - T_{i-1}(x) for i ≥ 1. These polynomials are orthogonal with respect to the weight ω(x) = 1/√(1 - x²). With h = t_f - t_0, the shifted Chebyshev polynomials on the interval [t_0, t_f] are obtained by the change of variable x = (2t - t_0 - t_f)/h, that is, φ_i(t) = T_i((2t - t_0 - t_f)/h) for i = 0, 1, 2, . . . . The orthogonality condition for the shifted basis functions follows from that of the T_i. Hence every f ∈ H has a unique best approximation y_m in the span of φ_0, . . . , φ_m. For more details, see [31].
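The recurrence, the shift to [t_0, t_f], and the orthogonality relation can be illustrated numerically. The sketch below (function names `cheb_T` and `shifted_cheb` are ours) checks orthogonality by Gauss-Chebyshev quadrature, which is exact for the polynomial integrands involved:

```python
import numpy as np

def cheb_T(i, x):
    # Three-term recurrence: T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1}.
    x = np.asarray(x, dtype=float)
    if i == 0:
        return np.ones_like(x)
    Tkm1, Tk = np.ones_like(x), x
    for _ in range(i - 1):
        Tkm1, Tk = Tk, 2.0 * x * Tk - Tkm1
    return Tk

def shifted_cheb(i, t, t0, tf):
    # phi_i(t) = T_i((2t - t0 - tf)/h) on [t0, tf], with h = tf - t0.
    return cheb_T(i, (2.0 * t - t0 - tf) / (tf - t0))

# Gauss-Chebyshev quadrature: integral of f(x)/sqrt(1-x^2) over [-1, 1]
# is (pi/N) * sum f(x_k) at the Chebyshev nodes, exact for deg f < 2N.
N = 20
nodes = np.cos((2.0 * np.arange(N) + 1.0) * np.pi / (2.0 * N))
inner = lambda i, j: (np.pi / N) * np.sum(cheb_T(i, nodes) * cheb_T(j, nodes))
```

The computed inner products reproduce the classical values: 0 for distinct indices, pi for i = j = 0, and pi/2 for i = j ≥ 1.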

Let P_m f be the truncated orthogonal Chebyshev series of f and let I_m f be the interpolant of f at the Chebyshev-Gauss points. Then
||f - P_m f||_ω ≤ C m^{-k} ||f||_{H^k_ω} and ||f - I_m f||_ω ≤ C' m^{-k} ||f||_{H^k_ω},
where h = t_f - t_0 and C, C' are constants independent of m and f. The derivative of the basis vector Φ(t) in (3) can be expressed as Φ'(t) = DΦ(t), where the (m + 1) × (m + 1) matrix D is called the operational matrix of derivative for the Chebyshev polynomials on [t_0, t_f]. Straightforward computations show that each entry of D = [d_ij]_{(m+1)×(m+1)}, for i, j = 1, . . . , m + 1, is given by [32]:
d_ij = 4(i - 1)/h, if 1 < j < i and i + j is odd,
d_ij = 2(i - 1)/h, if j = 1 and i is even,
d_ij = 0, otherwise.
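The entry formula for D can be verified against a direct numerical derivative of the shifted basis. A Python sketch (our construction, checked with NumPy's Chebyshev utilities; the entry pattern — 4(i-1)/h off the first column, 2(i-1)/h in the first column for even i — matches the text):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def deriv_matrix(m, t0, tf):
    # Operational matrix D with Phi'(t) = D Phi(t) for the shifted basis.
    h = tf - t0
    D = np.zeros((m + 1, m + 1))
    for i in range(1, m + 2):          # 1-based indices, as in the text
        for j in range(1, i):
            if (i + j) % 2 == 1:
                D[i - 1, j - 1] = (2 if j == 1 else 4) * (i - 1) / h
    return D

t0, tf, m = 0.0, 2.0, 5
D = deriv_matrix(m, t0, tf)
h = tf - t0
t = np.linspace(t0, tf, 7)
x = (2 * t - t0 - tf) / h              # map back to [-1, 1]
for i in range(m + 1):
    ci = np.zeros(m + 1); ci[i] = 1.0
    lhs = C.chebval(x, C.chebder(ci)) * (2.0 / h)   # phi_i'(t) by chain rule
    rhs = sum(D[i, j] * C.chebval(x, np.eye(m + 1)[j]) for j in range(m + 1))
    assert np.allclose(lhs, rhs)       # row i of D reproduces phi_i'
```

Each row of D expresses the derivative of one shifted Chebyshev polynomial in the basis itself, which is what allows (1) to be reduced to algebraic equations.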

Main results
Let us approximate each entry of P(t) = [p_ij(t)]_{p×q} in (1) by Chebyshev polynomials. Consequently, we have p_ij(t) ≈ A_ij Φ(t), where A_ij ∈ R^{1×(m+1)} are unknown row vectors and m is the order of the Chebyshev expansion. We can write P(t) ≈ A(I_q ⊗ Φ(t)), where ⊗ denotes the Kronecker product of two matrices and A ∈ R^{p×q(m+1)} collects the rows A_ij. Thus P'(t) ≈ A(I_q ⊗ DΦ(t)). By substituting equations (7) and (8) in (1), we derive the residual R_m(t). We discretize this equation at m points ξ_i (1 ≤ i ≤ m) such that R_m(ξ_i) = 0_{p×q}. The selected collocation points are the roots of φ_m(t) (the Chebyshev-Gauss nodes in [t_0, t_f]).
These m roots, which we use as collocation nodes, are given by ξ_i = (t_0 + t_f)/2 + (h/2) cos((2i - 1)π/(2m)), i = 1, . . . , m, all of which lie in [t_0, t_f]. By replacing the nodes ξ_i in (9), we obtain the coupled equations (10). Moreover, from the initial condition we set A(I_q ⊗ Φ(t_0)) = P(t_0) and define ξ_0 = t_0, C_0 = 0_{q(m+1)×q}, D_0 = I_p, E_0 = I_q ⊗ Φ(t_0), F_0 = 0_{q(m+1)×q}, and G_0 = -P(t_0). Therefore, we may solve the coupled equations (10). Using the relation [2] vec(AXB) = (B^T ⊗ A) vec(X), where the operator "vec" transforms a matrix A of size m × s into the vector a = vec(A) of size ms × 1 by stacking the columns of A, equations (10) are equivalent to the linear system Ax = b, where A_i and b_i denote the block rows of A and b, respectively, and I_p is the identity matrix of order p. This linear system could be solved by classical schemes such as Krylov subspace methods, but the size of the coefficient matrix may be too large. It is therefore more advisable to apply an iterative scheme to the coupled linear matrix equations directly rather than to the linear system.
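The vec-Kronecker identity invoked above is easy to verify numerically; column-major reshaping realizes the "stack the columns" definition of vec:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

vec = lambda M: M.reshape(-1, order="F")   # stack columns of M
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)             # vec(AXB) = (B^T kron A) vec(X)
assert np.allclose(lhs, rhs)
```

Note the size of the Kronecker factor: for these shapes it is already 6 × 20, which illustrates why forming Ax = b explicitly becomes impractical for large p, q, m.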

Solving the coupled matrix equations
We propose a new iterative algorithm to solve (10), which is essentially based on Paige's algorithm. Recently, Paige's algorithm has been extended to find the bisymmetric minimum norm solution of the coupled linear matrix equations [27] A_1 X B_1 = C_1, A_2 X B_2 = C_2. Using the vec(·) operator, the authors elaborated on how Paige's algorithm can be derived in this setting. The reported results reveal the superior convergence properties of their algorithm in comparison with algorithms derived via the extension of the conjugate gradient method [29], which were presented in the literature for solving different types of coupled linear matrix equations. This motivates us to generalize Paige's algorithm to solve (10). Let G = (G_0, G_1, . . . , G_m) with G_i ∈ R^{p×q}, i = 0, 1, . . . , m, and let D denote the linear operator associated with the coupled equations (10). In Algorithm 2, ε is a given small tolerance used in computing the unique minimum Frobenius norm solution; for computational purposes, we choose ε = 10^{-14}.
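To make the target problem concrete, the sketch below solves a coupled pair A_1 X B_1 = C_1, A_2 X B_2 = C_2 with the simpler gradient-based iterative scheme mentioned above (a stand-in for the Paige-type algorithm, which we do not reproduce here; the function name and the conservative Frobenius-norm step size are ours):

```python
import numpy as np

def coupled_gradient_solve(A1, B1, C1, A2, B2, C2, iters=2000):
    # Gradient descent on ||A1 X B1 - C1||_F^2 + ||A2 X B2 - C2||_F^2.
    X = np.zeros((A1.shape[1], B1.shape[0]))
    # Conservative step size: 1 / (sum of Frobenius-norm products) bounds
    # the largest eigenvalue of the normal-equations operator from above.
    mu = 1.0 / (np.linalg.norm(A1) ** 2 * np.linalg.norm(B1) ** 2
                + np.linalg.norm(A2) ** 2 * np.linalg.norm(B2) ** 2)
    for _ in range(iters):
        R1 = C1 - A1 @ X @ B1          # residuals of both equations
        R2 = C2 - A2 @ X @ B2
        X = X + mu * (A1.T @ R1 @ B1.T + A2.T @ R2 @ B2.T)
    return X

# Consistent pair with a known solution X_true.
X_true = np.array([[1., 2.], [3., 4.]])
A1, B1 = np.diag([2., 1.]), np.eye(2)
A2, B2 = np.eye(2), np.array([[1., 1.], [0., 1.]])
X = coupled_gradient_solve(A1, B1, A1 @ X_true @ B1, A2, B2, A2 @ X_true @ B2)
```

The iteration converges linearly when the pair has a unique solution; the Paige-type algorithm of [27] is reported to converge faster than such gradient and conjugate-gradient extensions, which is the motivation stated above.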

Implementing the method
We employ a step-by-step strategy for solving (1). To do so, starting with x_0 := t_0 and Z_0 := P(x_0), and considering the points x_i within the framework described in Sect. 3.1, we consecutively approximate P(t) by Z(t) on each subinterval. Then, to compute the approximate solution Z(t) of P(t) on the next subinterval, we set Z_{i+1} := Z(x_{i+1}) as the new initial value.
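A minimal sketch of this step-by-step strategy, with classical RK4 standing in for the collocation solve on each subinterval (the solver choice is ours, for illustration only; what matters is that each subinterval restarts from the previous endpoint value):

```python
import numpy as np

def rk4_step(f, t, P, h):
    # One classical Runge-Kutta step for the matrix-valued ODE P' = f(t, P).
    k1 = f(t, P)
    k2 = f(t + h / 2, P + h / 2 * k1)
    k3 = f(t + h / 2, P + h / 2 * k2)
    k4 = f(t + h, P + h * k3)
    return P + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def march(A, B, Q, P0, t0, tf, n_sub=10, steps=20):
    # Split [t0, tf] into subintervals; on each one, integrate
    # P' = A P + P B + Q starting from the previous endpoint value.
    f = lambda t, P: A @ P + P @ B + Q
    ts = np.linspace(t0, tf, n_sub + 1)
    P = P0.copy()
    for a, b in zip(ts[:-1], ts[1:]):
        h = (b - a) / steps
        t = a
        for _ in range(steps):
            P = rk4_step(f, t, P, h)
            t += h
    return P

# Sanity check: with nilpotent A, B and Q = 0 the closed-form solution is
# P(t) = exp(At) P0 exp(Bt) = (I + At) P0 (I + Bt).
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.]])
P0 = np.array([[1., 2.], [3., 4.]])
P1 = march(A, B, np.zeros((2, 2)), P0, 0.0, 1.0)
```

Replacing `rk4_step` by the Chebyshev collocation solve of Sect. 3.1 on each subinterval recovers the scheme described in the text.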

Error estimation
We present an error analysis based on the approach employed for Volterra-type integral equations [33,34].
Theorem 1 Consider the Sylvester matrix differential problem (1), where A(t) = [a_ij(t)]_{p×p}, B(t) = [b_ij(t)]_{q×q}, and Q(t) = [q_ij(t)]_{p×q} are given such that a_ij(t), b_ij(t), and q_ij(t) are sufficiently smooth. Also, assume that P_m = A(I_q ⊗ Φ) denotes the approximation of P. Furthermore, suppose that M_1 = max_{i,j} max_{x_l ≤ t ≤ x_{l+1}} |a_ij(t)| and M_2 = max_{i,j} max_{x_l ≤ t ≤ x_{l+1}} |b_ij(t)|. Then the following statement holds, where C is a finite constant.
Proof For ξ_0 = x_l and the Chebyshev-Gauss nodes ξ_n, 1 ≤ n ≤ m, on the interval [x_l, x_{l+1}], from (12) we obtain an identity in which E = [e_ij]_{p×q} = P_m - P. For L_n the Lagrange interpolating polynomial, we have

Subtracting this equation from (11) yields
where

∫_{x_l}^{x} ( A(t)E(t) + E(t)B(t) ) dt
Since P_m is a polynomial of degree m, we may rewrite (14) in the form (15). By applying the Gronwall inequality to (15), we obtain (16). Since A(t), B(t), and Q(t) are sufficiently smooth, the following bounds are obtained for R_1(x) and R_2(x). From Definition 4, using (5) for k = 1 together with (4), the bound (17) for R_1(x) can be deduced; similarly, the bound (18) for R_2(x) follows. Now the assertion can be concluded from (16)-(18) immediately.

Example 1 Consider the Sylvester equation [8]
where the coefficient matrices and the exact solution are as given in [8]. The results obtained by the spline method [8] and by our method with m = 5, h = 0.1 are reported in Table 1. They show that our method produces smaller errors in comparison with [8].

Conclusions
The Sylvester and Lyapunov equations have numerous notable applications in the analysis and design of control systems, and the Sylvester differential equations are useful in solving periodic eigenvalue assignment problems. The properties of the Chebyshev basis have been employed to solve the Sylvester equations within a new framework: our approach converts the main problem into coupled linear matrix equations, and an iterative algorithm was proposed for solving them. An error estimate for the method was also obtained. Numerical experiments have been presented to show the applicability of our scheme.