On the Existence of Schur-like Forms for Matrices with Symmetry Structures

Schur-like forms are developed for matrices that have a symmetry structure with respect to an indefinite inner product induced by a Hermitian and unitary Gram matrix. It is characterized under which conditions these forms can be computed by structure-preserving unitary transformations. The main result combines and generalizes two well-known results from the literature: on the one hand, any normal matrix can be unitarily diagonalized; on the other hand, a Hamiltonian matrix can be transformed to Hamiltonian Schur form via a unitary similarity transformation if and only if its purely imaginary eigenvalues satisfy certain conditions that involve the sign characteristic of the matrix under consideration.

Matrices that are normal with respect to the Euclidean inner product are well understood from a numerical point of view in the sense that a diagonalization can be performed by backward stable and structure-preserving algorithms.
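As a quick numerical illustration of this fact, the following minimal numpy sketch constructs a normal matrix from a prescribed unitary eigenbasis (a randomly generated example chosen for this illustration, not taken from the text) and checks the unitary diagonalization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# a random unitary matrix Q (via QR) and a normal matrix A = Q diag(d) Q^H
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = Q @ np.diag(d) @ Q.conj().T

# A is normal: A^H A = A A^H ...
assert np.allclose(A.conj().T @ A, A @ A.conj().T)

# ... and the unitary similarity transformation with Q diagonalizes A
assert np.allclose(Q.conj().T @ A @ Q, np.diag(d))
```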
The picture changes drastically if one considers matrices that carry a symmetry structure with respect to an indefinite inner product, i.e., with respect to a nondegenerate Hermitian form that is not necessarily positive definite or with respect to a nondegenerate skew-Hermitian form. An important example is the class of Hamiltonian matrices, i.e., matrices A ∈ R 2n,2n satisfying A T J + J A = 0, where J denotes the skew-symmetric matrix

J := [ 0 I n ; −I n 0 ]. (1)

The identity A T J = −J A can be interpreted as skew-symmetry of the matrix A with respect to the skew-symmetric inner product induced by J . The corresponding Hamiltonian eigenvalue problem arises in many applications, e.g., in system theory and the theory of algebraic Riccati equations, see [1,5,11] and the references therein. For practical reasons, one often switches to the complex version of the problem, which leads to the consideration of matrices A ∈ C 2n,2n satisfying A H J + J A = 0. Typically, these matrices are also called Hamiltonian in the Numerical Linear Algebra community and we will follow this convention in this paper. It should be noted, though, that other communities prefer the terminology J-skew-adjoint for such matrices (e.g., see [2]) in order to avoid confusion with complex matrices A ∈ C 2n,2n satisfying A T J + J A = 0, which are called Hamiltonian as well.
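The defining identity and the eigenvalue pairing of complex Hamiltonian matrices can be verified numerically. The numpy sketch below uses the block parametrization A = [E, F; G, −E^H] with F and G Hermitian, an easily checked way to generate Hamiltonian matrices that is used here only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
F = random_hermitian(n)
G = random_hermitian(n)

# Hamiltonian matrix in the block parametrization A = [E, F; G, -E^H]
A = np.block([[E, F], [G, -E.conj().T]])
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])

# defining identity: A^H J + J A = 0
assert np.allclose(A.conj().T @ J + J @ A, 0)

# the spectrum is symmetric with respect to the imaginary axis: for every
# eigenvalue e there is an eigenvalue close to -conj(e)
eig = np.linalg.eigvals(A)
D = np.abs(eig[:, None] + eig.conj()[None, :])
assert np.all(D.min(axis=1) < 1e-8)
```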
For the solution of the Hamiltonian eigenvalue problem, the so-called Hamiltonian Schur form was suggested in [12] as a target form. This is a Hamiltonian matrix of the block form

[ R D ; 0 −R H ], (2)

where R ∈ C n,n is upper triangular and D ∈ C n,n is Hermitian. It is straightforward to see that this is just a permutation of the classical upper triangular Schur form of a complex matrix and as a consequence the eigenvalues can be read off from the diagonal. It was shown in [12] that this form can always be achieved under a unitary symplectic similarity transformation provided that the given Hamiltonian matrix does not have eigenvalues on the imaginary axis. Recall that a matrix S ∈ C 2n,2n is called symplectic (following the convention in the Numerical Linear Algebra community) if S H J S = J . It is easily checked that similarity transformations with symplectic matrices preserve the Hamiltonian structure and therefore form the basis of structure-preserving algorithms for the solution of the Hamiltonian eigenvalue problem. However, since the condition number of symplectic matrices may be arbitrarily large, it is favorable to further restrict oneself to the class of unitary symplectic similarity transformations in order to guarantee numerical stability. Surprisingly, there are Hamiltonian matrices that cannot be transformed to Hamiltonian Schur form via a unitary symplectic similarity transformation. As an obvious example, consider the matrix J which is both Hamiltonian and symplectic (and even unitary). Clearly, if U ∈ C 2n,2n is unitary and symplectic then U −1 J U = U H J U = J , so J is invariant under any unitary symplectic similarity transformation and hence cannot be transformed to Hamiltonian Schur form. It is clear from the form in (2) that a necessary condition for the existence of a Hamiltonian Schur form is that the purely imaginary eigenvalues of the given matrix have even multiplicity.
Indeed, if λ is an eigenvalue of R, then −λ̄ is an eigenvalue of −R H , so any purely imaginary eigenvalue λ of R (which satisfies λ = −λ̄) will also be an eigenvalue of −R H . The example of J , however, shows that this condition is not sufficient. The long open problem of characterizing all Hamiltonian matrices that can be transformed to Hamiltonian Schur form was finally solved in [6] with the help of a newly developed structured canonical form for Hamiltonian matrices.
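The invariance of J under unitary symplectic similarity transformations can also be observed numerically. The sketch below uses the easily verified fact that diag(Q, Q) is unitary and symplectic whenever Q is unitary; this is only one convenient family of examples chosen for this illustration, not a parametrization of the whole group:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# a random unitary Q via QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# S = diag(Q, Q) is unitary and symplectic
S = np.block([[Q, np.zeros((n, n))], [np.zeros((n, n)), Q]])
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])

assert np.allclose(S.conj().T @ S, np.eye(2 * n))   # unitary
assert np.allclose(S.conj().T @ J @ S, J)           # symplectic: S^H J S = J

# hence J is a fixed point of the similarity transformation with S
assert np.allclose(np.linalg.inv(S) @ J @ S, J)
```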
At first sight, one may come to the conclusion that the nonexistence of the Hamiltonian Schur form is related to the fact that, in contrast to the case of the Euclidean inner product, the two fundamental properties of structure preservation and numerical stability are now distributed among two sets of transformation matrices instead of being united in only one. Thus, to enable both features, one has to restrict oneself to the set of unitary symplectic matrices, which is a much "smaller" subset (in terms of dimension as a manifold) than the sets of symplectic matrices or unitary matrices. This conclusion, however, turns out to be wrong, as it is well known that the Hamiltonian Schur form exists under unitary symplectic similarity transformations if and only if it exists under similarity transformations that are symplectic only. This equivalence can easily be shown by applying a structure-preserving QR decomposition to the transformation matrix, see, e.g., [6] for details. Thus, both in the case of normal matrices with respect to the Euclidean inner product and in the case of Hamiltonian matrices, the actual problem is the computation of a Schur-like form by similarity transformations from the group of matrices that are unitary with respect to the considered inner product and therefore preserve the given symmetry structure of the matrix they are acting on.
We will show in this paper that the diagonal Schur form of normal matrices and the Hamiltonian Schur form of Hamiltonian matrices are two extreme cases of a much more general Schur-like form for matrices carrying a symmetry structure with respect to an indefinite inner product. To treat the problem in full generality, we will consider a generalization of normality of matrices in an indefinite inner product space, the so-called polynomial H-normality, which will be introduced in Section 2, where we will also review the basic theory of indefinite inner products. In Section 3 we will formulate the main result of this paper and develop a Schur-like form for polynomially H -normal matrices for the case of a Hermitian and unitary Gram matrix H . Then we will discuss how the Schur form of normal matrices and the Hamiltonian Schur form can be deduced as special cases of this result. The proof of the main result will then be given in Section 4, followed by a short summary in Section 5.
Notation By J n (λ) we denote the n × n upper triangular Jordan block associated with the eigenvalue λ. The reverse identity of size n is denoted by R n , i.e., R n is the n × n matrix with ones on the antidiagonal and zeros elsewhere. If H ∈ C n,n is a Hermitian matrix, then its inertia index is denoted by (π, ν, ζ ), where π , ν, and ζ are the numbers of positive, negative, and zero eigenvalues, respectively, each counted with algebraic multiplicities.

Indefinite Inner Products and Polynomially H-normal Matrices
Let H ∈ C n,n be Hermitian and invertible. Then H defines an indefinite inner product on C n via [x, y] := x H Hy, x, y ∈ C n .
As in the case of a positive definite inner product, we will call H the Gram matrix of the indefinite inner product. If A ∈ C n,n , then the matrix A [ * ] := H −1 A H H is called the H-adjoint of A, i.e., A [ * ] is the unique matrix satisfying [Ax, y] = [x, A [ * ] y] for all x, y ∈ C n . In the following, we will restrict ourselves to Hermitian inner products, because any skew-Hermitian inner product can easily be transformed to a Hermitian inner product by multiplying the corresponding Gram matrix with the imaginary unit i. In particular, a matrix A ∈ C 2n,2n is Hamiltonian if and only if A is (iJ )-skew-adjoint, where J is the matrix as in (1). Canonical forms for H -selfadjoint, H -skew-adjoint, and H -unitary matrices are well known, see, e.g., [2,8]. More generally, one can define the set of H -normal matrices as the set of all matrices A ∈ C n,n satisfying A [ * ] A = AA [ * ] . Unfortunately, this set turns out to be "too big", because it was shown in [3] that the problem of classifying H -normal matrices is a wild problem and hence canonical forms cannot be obtained. Therefore, it was suggested in [10] to consider the set of polynomially H -normal matrices instead. A matrix A ∈ C n,n is called polynomially H-normal if there exists a polynomial p in one variable such that A [ * ] = p(A). The polynomial p is then called the H-normality polynomial of A. It is easily checked that any polynomially H -normal matrix is H -normal, but the converse is not true, see [10]. Still, the set of polynomially H -normal matrices is large enough to contain the sets of H -selfadjoint, H -skew-adjoint, and H -unitary matrices. Indeed, if the matrix A is H -selfadjoint or H -skew-adjoint, then it is polynomially H -normal with H-normality polynomial p(t) = t or p(t) = −t, respectively, and since A [ * ] = A −1 for an H -unitary matrix A and the inverse of a matrix is always a polynomial in that matrix, it follows that also any H -unitary matrix is polynomially H -normal.
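These notions can be illustrated with a minimal numerical sketch. The Gram matrix and the construction of an H-selfadjoint matrix below are hypothetical examples chosen for illustration: an H-selfadjoint matrix satisfies A^[*] = p(A) with p(t) = t and is in particular H-normal.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# an indefinite Gram matrix: Hermitian and invertible (here even unitary)
H = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)

def adjoint(A, H):
    """H-adjoint A^[*] = H^{-1} A^H H."""
    return np.linalg.solve(H, A.conj().T @ H)

# an H-selfadjoint matrix: A = H^{-1} M with M Hermitian, so that H A is Hermitian
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = (M + M.conj().T) / 2
A = np.linalg.solve(H, M)

# A is polynomially H-normal with H-normality polynomial p(t) = t ...
assert np.allclose(adjoint(A, H), A)
# ... and in particular H-normal: A^[*] A = A A^[*]
assert np.allclose(adjoint(A, H) @ A, A @ adjoint(A, H))
```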
The major advantage of polynomially H -normal matrices over H -normal matrices is the fact that a complete classification is available under the following equivalence relation.

Remark 1
Let H ∈ C n,n be Hermitian and invertible and let A ∈ C n,n be polynomially H -normal with H -normality polynomial p. If T ∈ C n,n is invertible, then T −1 AT is polynomially T H H T -normal with H -normality polynomial p. In particular, the relation

(A, H ) ∼ (T −1 AT , T H H T ), T ∈ C n,n invertible, (3)

is an equivalence relation on the set of pairs (A, H ), where H is Hermitian and invertible and A is polynomially H -normal with H -normality polynomial p.
Theorem 1 Let H ∈ C n,n be Hermitian and invertible and let A ∈ C n,n be polynomially H -normal with H -normality polynomial p. Then there exists an invertible matrix T ∈ C n,n such that

T −1 AT = A 1 ⊕ · · · ⊕ A ℓ , T H H T = H 1 ⊕ · · · ⊕ H ℓ , (4)

where A j is H j -indecomposable and where A j and H j have one of the following forms:
i) blocks associated with eigenvalues λ j ∈ C satisfying p(λ j ) = λ̄ j :

A j = λ j I n j + e iθ j T (0, 1, ir j,2 , . . . , ir j,n j −1 ), H j = σ j R n j , (5)

where n j ∈ N, σ j ∈ {1, −1}, θ j ∈ [0, π), and r j,2 , . . . , r j,n j −1 ∈ R;
ii) blocks associated with a pair (λ j , μ j ) of eigenvalues with μ̄ j = p(λ j ) and λ̄ j ≠ p(λ j ):

A j = J m j (λ j ) ⊕ p(J m j (λ j )) H , H j = [ 0 I m j ; I m j 0 ], (6)

where m j ∈ N.
Moreover, the form (4) is unique up to the permutation of blocks, and the parameters θ j and r j,2 , . . . , r j,n j −1 in (5) are uniquely determined by λ j and the coefficients of p, and they can be computed from the identity A j [ * ] = p(A j ). (We highlight that the eigenvalues λ j in i) are not necessarily pairwise distinct, i.e., the same eigenvalue may occur in different blocks. The same is true for the eigenvalues λ j , μ j in ii).) Besides the eigenvalues and their partial multiplicities, the signs σ j = ±1 attached to each Jordan block corresponding to an eigenvalue λ j satisfying λ̄ j = p(λ j ) are additional invariants under the equivalence relation (3). The list of all signs associated with a fixed eigenvalue λ j is referred to as the sign characteristic of the eigenvalue λ j , extending the terminology in [2] used for H -selfadjoint and H -unitary matrices.
The following quantities related to the sign characteristic of a fixed eigenvalue will play a crucial role in the characterization of when a structured Schur-like form exists.

Definition 1
Let H ∈ C n,n be Hermitian and invertible, let A ∈ C n,n be polynomially H-normal with H -normality polynomial p, and let λ ∈ C be an eigenvalue of A that satisfies λ̄ = p(λ). Then the sum of all signs σ j from the sign characteristic of λ that are attached to blocks of odd size is called the sign sum of λ and is denoted by signsum(λ).

To illustrate Definition 1, consider matrices A and H in the canonical form of Theorem 1 with blocks carrying the signs σ 1 , . . . , σ 6 and associated with the eigenvalue λ = 0, which satisfies 0 = p(0). The sign sum of the eigenvalue λ = 0 is then the sum of those signs among σ 1 , . . . , σ 6 that are attached to blocks of odd size. Note that in accordance with Definition 1 the values σ 2 and σ 6 do not contribute to the sign sum as they are attached to blocks of the even sizes 4 and 2, respectively.
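In code, the sign sum of Definition 1 is a one-liner once the sign characteristic is available; the encoding of the blocks as (size, sign) pairs below is a hypothetical representation chosen for this sketch:

```python
# sign sum of Definition 1: sum of the signs attached to blocks of odd size,
# for a sign characteristic given as a list of (block size, sign) pairs
def signsum(blocks):
    return sum(sigma for size, sigma in blocks if size % 2 == 1)

# blocks of sizes 3, 4, 1, 2 with signs +1, -1, +1, -1:
# only the odd sizes 3 and 1 contribute, so the sign sum is 2
assert signsum([(3, +1), (4, -1), (1, +1), (2, -1)]) == 2
```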
The sign sum has an important impact on the inertia index of the Hermitian matrix defining the indefinite inner product, as we will show in the following lemma.

Lemma 1
Let H ∈ C n,n be Hermitian with inertia index (π, ν, 0) and let A ∈ C n,n be polynomially H -normal. If λ 1 , . . . , λ r ∈ C are the pairwise distinct eigenvalues of A satisfying λ̄ j = p(λ j ), then

π − ν = signsum(λ 1 ) + · · · + signsum(λ r ).

Proof The proof follows immediately by inspection from the canonical form given in Theorem 1. Indeed, one easily checks that the matrices H j from blocks of type ii) and from blocks of type i) corresponding to an even size n j contribute equally to the positive and negative eigenvalues of H . On the other hand, the matrix H j = σ j R n j of a block as in type i) that corresponds to an odd size n j = 2k + 1 contributes k + 1 positive and k negative eigenvalues if σ j = 1, and k positive and k + 1 negative eigenvalues if σ j = −1. Summing up over all blocks yields the assertion.

Finally, we recall the concept of neutral subspaces in indefinite inner product spaces.
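The count behind Lemma 1 can be sanity-checked numerically: for a Gram matrix assembled from canonical blocks σ j R n j , the difference π − ν indeed equals the sum of the signs attached to blocks of odd size. The particular sizes and signs below are an arbitrary example chosen for this sketch:

```python
import numpy as np

def R(n):
    # reverse identity R_n (ones on the antidiagonal)
    return np.eye(n)[::-1]

def block_diag(mats):
    n = sum(m.shape[0] for m in mats)
    out = np.zeros((n, n))
    i = 0
    for m in mats:
        k = m.shape[0]
        out[i:i + k, i:i + k] = m
        i += k
    return out

# canonical Gram matrix: direct sum of blocks sigma_j * R_{n_j}
blocks = [(3, +1), (4, -1), (1, +1), (2, +1), (5, -1)]
H = block_diag([sigma * R(n) for n, sigma in blocks])

eigs = np.linalg.eigvalsh(H)
pi, nu = np.sum(eigs > 0), np.sum(eigs < 0)

# pi - nu equals the sum of the signs attached to blocks of odd size
odd_sign_sum = sum(sigma for n, sigma in blocks if n % 2 == 1)
assert pi - nu == odd_sign_sum
```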
If H ∈ C n,n is Hermitian and invertible, then a subspace V ⊆ C n is called H-neutral if [x, y] = x H Hy = 0 for all x, y ∈ V. If H has the inertia index (π, ν, 0), then the maximal possible dimension of an H -neutral subspace is min(π, ν), and H -neutral subspaces of this dimension are called maximal H-neutral subspaces.

Schur-like Forms and Invariant Maximal H-neutral Subspaces
In this section, we will develop the main result of this paper: the introduction of a structured Schur-like form for polynomially H -normal matrices combined with a characterization of its existence. As pointed out in the introduction, it was shown in [6] that a Hamiltonian matrix can be transformed to Hamiltonian Schur form via a unitary symplectic similarity transformation if and only if the same can be done via a similarity transformation that is only symplectic. An analysis of the corresponding proof in [6] reveals that the property of the matrix J in (1) to be unitary is a crucial fact in this equivalence. Therefore, we will assume throughout this section that the Gram matrix of the given indefinite inner product is not only Hermitian, but also unitary. Many important examples of Gram matrices, such as R n or [ 0 I n ; I n 0 ], satisfy this extra condition. (We mention in passing that properties of inner products whose Gram matrices are Hermitian or unitary are discussed in [7].)

Theorem 2
Let H ∈ C n,n be unitary and Hermitian with inertia index (π, ν, 0) and let A ∈ C n,n be polynomially H -normal with H -normality polynomial p. Furthermore, let m := |π − ν|. Then n − m is even and the following statements are equivalent, where k := (n − m)/2.

1) There exists an H -neutral subspace of dimension k that is A-invariant.
2) There exists a unitary matrix U ∈ C n,n such that

U H H U = [ 0 0 I k ; 0 sI m 0 ; I k 0 0 ], U −1 AU = [ B 11 B 12 B 13 ; 0 B 33 B 23 ; 0 0 B 22 ], (7)

where s = (π − ν)/|π − ν| if m ≠ 0 (and s := 1 if m = 0, in which case π − ν = 0), B 11 ∈ C k,k is upper triangular and B 33 ∈ C m,m is diagonal.
3) There exists an invertible matrix S ∈ C n,n such that S H H S and S −1 AS have the block forms as in (7). (8)
4) For every eigenvalue λ of A satisfying λ̄ = p(λ), we have s · signsum(λ) ≥ 0, where s is defined as in 2).
Proof The proof of Theorem 2 is rather long and will therefore be presented in a separate section.
As mentioned in the introduction, Theorem 2 combines and generalizes two important results from the literature that we will restate below as corollaries. The first result recovers the well-known statement on the unitary diagonalizability of (I n -)normal matrices.
Corollary 1 (Schur-form of normal matrices) Let A ∈ C n,n be a normal matrix, i.e., A H A = AA H . Then A is unitarily diagonalizable.
Proof By [4], normality with respect to the Euclidean inner product is equivalent to polynomial I n -normality and hence Theorem 2 can be applied with H = I n . Since I n has the inertia index (n, 0, 0), we find that k = 0. Thus, condition 2) of Theorem 2 states the existence of a unitary matrix U ∈ C n,n such that U −1 AU = B 33 is diagonal.

Corollary 2 (Hamiltonian Schur form) Let A ∈ C 2n,2n be a Hamiltonian matrix. Then the following statements are equivalent:
1) There exists an n-dimensional subspace of C 2n that is J -neutral and A-invariant.
2) There exists a unitary symplectic matrix Q ∈ C 2n,2n such that Q −1 AQ is in Hamiltonian Schur form, i.e., Q −1 AQ = [ B D ; 0 −B H ], where B ∈ C n,n is upper triangular. 3) There exists a symplectic matrix S ∈ C 2n,2n such that S −1 AS is in Hamiltonian Schur form, i.e., S −1 AS = [ B D ; 0 −B H ], where B ∈ C n,n is upper triangular. 4) For any purely imaginary eigenvalue λ of A, the number of odd partial multiplicities corresponding to λ with sign +1 is equal to the number of odd partial multiplicities corresponding to λ with sign −1.
Proof First, we recall that A is Hamiltonian if and only if A is H -skew-adjoint with H = iJ . In particular, A is polynomially H -normal with H -normality polynomial p(t) = −t. We will frequently make use of this fact in the following. "1) ⇒ 2)": It is trivial that the J -neutral subspace in 1) is also iJ -neutral. Moreover, the inertia index of iJ is (n, n, 0) and thus, Theorem 2 implies the existence of a unitary matrix U ∈ C 2n,2n such that U H (iJ )U = [ 0 I n ; I n 0 ] and U −1 AU is block upper triangular as in (7). Setting Q := U · diag(I n , iI n ), we obtain that Q H Q = I 2n and Q H J Q = J , i.e., Q is unitary and symplectic. This implies 2). "2) ⇒ 3)" is trivial and "3) ⇒ 4)" and "4) ⇒ 1)" follow immediately from Theorem 2 taking into account that the eigenvalues satisfying λ̄ = p(λ) = −λ are exactly the purely imaginary eigenvalues of A, and that signsum(λ) = 0 is equivalent to the statement that the number of odd partial multiplicities corresponding to λ with sign +1 is equal to the number of odd partial multiplicities corresponding to λ with sign −1.
The equivalence of 2), 3) and 4) was proved in [6], while the equivalence of 1) and 4) was proved in [13]. Clearly, the equivalence of 1) and 2) (or of 1) and 3)) immediately follows from those two results in the literature and has since been implicitly known by many researchers dealing with Hamiltonian matrices. Nevertheless, it seems that a theorem combining all four equivalent conditions into a single result was explicitly formulated only as late as in [9].

Remark 2
The two results in Corollaries 1 and 2 represent the two extreme cases k = 0 and m = 0 in Theorem 2. Whenever k, m ≠ 0, as would be the case for Gram matrices of the form diag(I p , −I q ) with p, q ≥ 1 and p ≠ q, the corresponding Schur-like form will have both triangular and diagonal parts on the block diagonal as indicated in (7).
We highlight that the transformation in Theorem 2 is a transformation that changes the inner product, but keeps the symmetry structure of the matrix U −1 AU linked to the transformed Gram matrix U H H U in the sense of Remark 1. For the practical use of Theorem 2, we advise to first transform the pair (A, H ) to a form (A′, H′), where H′ already has the form of the Gram matrix in (7). When Theorem 2 is then applied to the pair (A′, H′), it yields the existence of a unitary matrix U such that U −1 A′U is in the Schur-like form as in (7) while U H H′U = H′. The latter condition just means that the matrix U is not only unitary, but also H′-unitary.
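The preliminary transformation of the Gram matrix advised above can be sketched numerically: since H is Hermitian and unitary, its eigenvalues are ±1, and pairing eigenvectors for +1 and −1 produces H-neutral directions. The target form below (an antidiagonal coupling of size k plus a definite block of size m) is the form assumed in this sketch:

```python
import numpy as np

# a Hermitian and unitary Gram matrix: here H = R_5, the reverse identity
H = np.eye(5)[::-1]

evals, W = np.linalg.eigh(H)      # eigenvalues of a Hermitian unitary H are +-1
neg = W[:, evals < 0]             # nu eigenvectors for the eigenvalue -1
pos = W[:, evals > 0]             # pi eigenvectors for the eigenvalue +1
k = min(pos.shape[1], neg.shape[1])
m = abs(pos.shape[1] - neg.shape[1])
s = 1.0 if pos.shape[1] >= neg.shape[1] else -1.0

# pairing +1 and -1 eigenvectors yields H-neutral directions x, y with x^H H y = 1
X = (pos[:, :k] + neg[:, :k]) / np.sqrt(2)
Y = (pos[:, :k] - neg[:, :k]) / np.sqrt(2)
rest = pos[:, k:] if s > 0 else neg[:, k:]
V = np.column_stack([X, rest, Y])

# assumed target form: antidiagonal coupling of size k, definite block of size m
target = np.zeros((5, 5))
target[:k, k + m:] = np.eye(k)
target[k + m:, :k] = np.eye(k)
target[k:k + m, k:k + m] = s * np.eye(m)

assert np.allclose(V.conj().T @ V, np.eye(5))    # V is unitary
assert np.allclose(V.conj().T @ H @ V, target)   # transformed Gram matrix
```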

Proof of the Main Result
Before we prove Theorem 2, we start with a technical lemma that will be used frequently in the following.

Lemma 2 Let A = A 1 ⊕ A 2 and H = H 1 ⊕ H 2 , where A i , H i ∈ C n i ,n i and H i is Hermitian and invertible for i = 1, 2. If there exists an A i -invariant H i -neutral subspace of dimension k i for i = 1, 2, then there exists an A-invariant H -neutral subspace of dimension k 1 + k 2 .

Proof Let the vectors v 1 , . . . , v k 1 ∈ C n 1 and w 1 , . . . , w k 2 ∈ C n 2 form bases of the A i -invariant H i -neutral subspaces for i = 1, 2, respectively. Then it is straightforward to verify that the vectors (v 1 , 0), . . . , (v k 1 , 0), (0, w 1 ), . . . , (0, w k 2 ) ∈ C n 1 +n 2 form a basis of an A-invariant H -neutral subspace which obviously has dimension k 1 + k 2 .
Proof of Theorem 2 "1) ⇒ 2)": By switching to an orthonormal basis whose first k columns span the given A-invariant H -neutral subspace, we may assume that A is block upper triangular and that H has a corresponding block structure. A structured reduction of the diagonal blocks then yields unitary matrices U 2 ∈ C k,k and V 2 ∈ C n−k,n−k such that, setting Q := diag(U 2 , V 2 ), we obtain the forms in (7). "2) ⇒ 3)": This is trivial. "3) ⇒ 4)": The proof proceeds by induction on the number r of pairwise distinct eigenvalues satisfying λ̄ = p(λ). If r = 0, then the block B 33 in (8) is void, because it is diagonal and satisfies B 33 H = p(B 33 ), so all its eigenvalues satisfy λ̄ = p(λ). Thus, we have m = 0 and 4) is satisfied by the definition of the empty sum. Now assume that r > 0. Let λ := λ 1 , i.e., we have λ̄ = p(λ). Starting from 3), we can assume without loss of generality that A and H are in the form (8), where the eigenvalues on the diagonal of B 11 and B 33 are ordered in such a way that all occurrences of λ come first; refining the block structure accordingly, A takes a form with six block rows and columns in which the blocks associated with λ come first in each group. (Similar renaming steps will occur after each of the following steps without further notice.) We illustrate the A 12 -elimination step with a diagram in which nonzero block entries that were affected by the current transformation are marked as bullets. In each substep, the pair (i, j ) denotes the block entry of the transformation matrix that differs from the one in the identity matrix. Observe that the similarity transformation with such a matrix adds the i-th block column to the j -th block column, but the j -th block row to the i-th block row, while in the corresponding congruence transformation the i-th block column and i-th block row are added to the j -th block column or j -th block row, respectively. In the next step, we eliminate A 16 by applying a similarity transformation on A that is obtained from I by changing the (1, 6)-block to the solution X of the Sylvester equation XA 66 − A 11 X = A 16 . The corresponding congruence transformation on H introduces the matrix X in the (3, 6)-block position and X H in the (6, 3)-block position.
We can restore H as follows: first, we apply a congruence with a matrix that differs from I by −sX H in the (6, 3)-block position. This will annihilate the (3, 6)- and (6, 3)-block entries, but introduce the block −sXX H in the (3, 3)-block entry. The corresponding similarity transformation on A only affects the block entries A 23 and A 26 . Then, we eliminate the (3, 3)-block entry of H by a congruence transformation with the matrix that coincides with I except for having − 1/2 X H X as its (1, 3)-block. The corresponding similarity transformation on A only changes the block A 13 . We continue by eliminating A 54 with a similarity transformation that differs from I by the solution of the Sylvester equation Xp(A 22 ) H − A 55 X = A 54 in the (5, 4)-block position, followed by transformations that restore H . Since the step is analogous to the A 12 -elimination step, we restrict ourselves to the illustration via a diagram. Restoring H imposes another similarity transformation on A with the relevant entry in the (2, 3)-block position, which affects the entry A 23 only. The key observation is now that the block zero pattern obtained for A is invariant under multiplication and thus, all powers of A as well as p(A) will have the same block zero pattern. But then, the fact that A is polynomially H -normal with H -normality polynomial p can be exploited, and we obtain that A 23 = 0, A 25 = 0, A 43 = 0, and A 63 = 0. But then, after applying a block permutation, we may assume that A and H have the forms A = A 1 ⊕ A 2 and H = H 1 ⊕ H 2 , where A 1 has only one eigenvalue, which is λ 1 = λ. It immediately follows from Lemma 1 that m 1 s = signsum(λ 1 ) and hence s · signsum(λ 1 ) = m 1 ≥ 0. On the other hand, the matrix A 2 now has precisely r − 1 pairwise distinct eigenvalues λ 2 , . . . , λ r satisfying λ̄ i = p(λ i ) and we can apply the induction hypothesis to obtain that s · signsum(λ i ) ≥ 0 for i = 2, . . . , r, which proves 4). "4) ⇒ 1)": We may assume without loss of generality that A and H are in the forms of Theorem 1. Thus, by Lemma 2, we may consider some of the corresponding diagonal blocks of A and H separately in order to construct an A-invariant H -neutral subspace. We will do this by individually investigating each block in the canonical form of Theorem 1 that is of type ii) or of type i) having even dimension. Concerning the blocks of type i) having odd dimension, we will have to consider all of them together to obtain the desired dimension of the A-invariant H -neutral subspace. We proceed by first considering the following three special cases, before discussing the general case.
Special Case 1: A is a block of type ii). In this case, we have m = 0 and k = n/2, and A and H have the forms given in ii) of Theorem 1. Obviously, the first k standard basis vectors of C n span a k-dimensional A-invariant subspace that is H -neutral.
Special Case 2: A is a block of type i) of even dimension. In this case, we again have m = 0 and k = n/2, and A and H have the forms given in i) of Theorem 1, where λ̄ = p(λ), σ ∈ {1, −1}, and where θ, r 2 , . . . , r n−1 are as specified in Theorem 1. Again, it is obvious that the first k standard basis vectors of C n span a k-dimensional A-invariant subspace that is H -neutral.
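The neutrality claim in Special Cases 1 and 2 is easy to check numerically, e.g., for the reverse identity Gram matrix R n of even size; the Jordan block below corresponds to the parameter choice θ = 0 and r 2 = · · · = r n−1 = 0 and is used only as a concrete instance:

```python
import numpy as np

n, k = 6, 3
H = np.eye(n)[::-1]            # R_6, the reverse identity Gram matrix
E = np.eye(n)[:, :k]           # first k = n/2 standard basis vectors

# H-neutral: the Gram matrix of span(e_1, ..., e_k) vanishes
assert np.allclose(E.conj().T @ H @ E, 0)

# A-invariant for an upper triangular block: here a single Jordan block
lam = 2.0 + 1.0j
A = lam * np.eye(n) + np.diag(np.ones(n - 1), 1)
assert np.allclose((A @ E)[k:, :], 0)   # A maps the span into itself
```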

Special Case 3: A only consists of blocks of type i) with odd dimension.
In this case, we have A = (λI n 1 + e iθ 1 T (0, 1, ir 1,2 , . . . , ir 1,n 1 −1 )) ⊕ · · · ⊕ (λI n ℓ + e iθ ℓ T (0, 1, ir ℓ,2 , . . . , ir ℓ,n ℓ −1 )) and H = σ 1 R n 1 ⊕ · · · ⊕ σ ℓ R n ℓ , where n j = 2k j + 1 for some nonnegative integers k 1 , . . . , k ℓ . By 4), we have |signsum(λ)| = m. Without loss of generality we may assume that s = 1 (considering −H instead of H otherwise), i.e., we have that signsum(λ) = m. It follows that ℓ ≥ m. More precisely, there exists a nonnegative integer α such that ℓ = m + 2α and such that m + α signs among σ 1 , . . . , σ ℓ are positive and α are negative. Without loss of generality, we may assume that among the diagonal blocks the first 2α blocks have alternating signs starting with σ 1 = 1 and thus the last m blocks all have positive sign. Let us first consider a group of two blocks with signs +1, −1, say A j = T 2j −1 ⊕ T 2j and H j = R n 2j −1 ⊕ (−R n 2j ), where j ∈ {1, . . . , α} and where T i ∈ C n i ,n i is an upper triangular matrix having λI n i as its diagonal. Then it is easy to check that the vectors

e 1 , . . . , e k 2j −1 , e n 2j −1 +1 , . . . , e n 2j −1 +k 2j , e k 2j −1 +1 + e n 2j −1 +k 2j +1 (9)

form a basis of an A j -invariant H j -neutral subspace. Indeed, partitioning the pair (A j , H j ) accordingly, it can be transformed to the pair of matrices in (10), where all blocks have size (n 2j −1 + n 2j )/2. Observe that the first (n 2j −1 + n 2j )/2 columns of the transformation matrix that transforms (A j , H j ) to the pair of matrices in (10) coincide with the vectors in (9), possibly up to scalar multiples.
Finally, for each of the remaining m blocks, which all carry positive signs, the first k j standard basis vectors span an A j -invariant H j -neutral subspace.
In view of Lemma 2, we obtain the existence of an A-invariant H -neutral subspace of dimension

(k 1 + k 2 + 1) + · · · + (k 2α−1 + k 2α + 1) + k 2α+1 + · · · + k ℓ = k 1 + · · · + k ℓ + α = (n − m)/2 = k,

where we used that n = n 1 + · · · + n ℓ = 2(k 1 + · · · + k ℓ ) + ℓ and ℓ − 2α = m. The general case: Now let A be general, having r eigenvalues λ 1 , . . . , λ r ∈ C satisfying λ̄ i = p(λ i ), i = 1, . . . , r. Putting together the special cases above in view of our observation, we obtain that there exists an A-invariant H -neutral subspace of dimension k = (n − m)/2, which proves 1).
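The two-block construction of Special Case 3 can be verified numerically for a concrete pair of odd-sized blocks with signs +1 and −1; here both blocks have size 3 (so k 2j −1 = k 2j = 1) and the upper triangular blocks are taken to be Jordan blocks, a special instance chosen for this sketch:

```python
import numpy as np

lam = 1.5 - 0.5j
J3 = lam * np.eye(3) + np.diag(np.ones(2), 1)   # upper triangular, eigenvalue lam
R3 = np.eye(3)[::-1]                            # reverse identity R_3

# two odd-sized blocks with opposite signs +1, -1
A = np.block([[J3, np.zeros((3, 3))], [np.zeros((3, 3)), J3]])
H = np.block([[R3, np.zeros((3, 3))], [np.zeros((3, 3)), -R3]])

e = np.eye(6)
# candidate basis e_1, e_4, e_2 + e_5 as in (9)
B = np.column_stack([e[:, 0], e[:, 3], e[:, 1] + e[:, 4]])

# H-neutral: the Gram matrix of the spanned subspace vanishes
assert np.allclose(B.conj().T @ H @ B, 0)

# A-invariant: every column of A B lies in the column span of B
coeffs = np.linalg.lstsq(B, A @ B, rcond=None)[0]
assert np.allclose(B @ coeffs, A @ B)
```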

Conclusions
We have developed a Schur-like form for polynomially H -normal matrices, where H is Hermitian and unitary, and we have characterized under which conditions these forms can be obtained via structure-preserving (and unitary) similarity transformations. In particular, the result can be applied to all matrices that are selfadjoint, skew-adjoint, or unitary with respect to an indefinite inner product that has a unitary Hermitian Gram matrix. As two extreme special cases, the unitary diagonalizability of normal matrices and equivalent conditions for the existence of the Hamiltonian Schur form of Hamiltonian matrices have been recovered. While structure-preserving and numerically backward stable algorithms for the numerical computation of these forms are well known in the two special cases mentioned, it remains to develop such algorithms for the general case.
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.