On the sign characteristics of Hermitian matrix polynomials

Article history: Received 27 November 2015; Accepted 1 September 2016; Available online 14 September 2016. Submitted by F. Dopico. MSC: 15B57, 65F15, 37J25, 37J40.


Introduction
We study the sign characteristic of Hermitian matrix polynomials. The sign characteristic is an invariant associated with particular eigenvalues of structured matrices, matrix pencils, or matrix polynomials. Particular examples are Hamiltonian matrices, Hermitian and even/odd pencils, and their extensions to matrix polynomials [26]. We formulate our results in terms of Hermitian matrices, pencils or polynomials and eigenvalues on the real line; however, at least in the complex case, there are completely analogous results for Hamiltonian matrices and even pencils or polynomials, which are obtained by replacing λ with ıλ, where ı = √−1. The sign characteristic is very important for the understanding of several physical phenomena, such as the bifurcation of solutions in dynamical systems or the perturbation behavior of eigenvalues under structured perturbations. This perturbation theory is essential in the stability analysis of Hamiltonian systems and in other applications in control theory, see [5]. The sign characteristic is also closely connected to inertias of bilinear forms as well as other invariants, and it comes in different forms and flavors in many scientific fields and applications. For matrices and matrix pencils, the theory goes back to Krein, see, e.g., [21,22] and the recent survey [19], which also motivates the term Krein characteristic. The first systematic treatment of the sign characteristic for Hermitian matrix polynomials was given by Gohberg, Lancaster and Rodman in [9], where they present three equivalent descriptions of the sign (see also [10,11]). However, their theory assumes matrix polynomials with nonsingular leading matrix coefficient, i.e., regular matrix polynomials with only finite eigenvalues.
A generalization to Hermitian matrix polynomials with singular leading coefficient should be independent of specific representations of the matrix polynomial (coefficient expansions in polynomial bases such as, e.g., the monomial, Lagrange, Newton, or Chebyshev bases) and should be constructed in such a way that it allows for perturbations, so that the definition remains valid also in a small neighborhood. To achieve these goals, we discuss an extension to general Hermitian matrix polynomials of Gohberg, Lancaster and Rodman's third description of the sign characteristic. We derive a systematic approach which allows us to show that a signature constraint theorem still holds. We analyze in detail the consequences for the perturbation theory. We show that in the case of odd-degree matrix polynomials this does not lead to a uniform treatment in the neighborhood of the eigenvalue infinity. This problem of non-uniformity can be resolved by adding higher powers with zero coefficients to the matrix polynomial. We also discuss the consequences of this procedure and present several examples. Note that the first description of the sign characteristic in [9,10,11] relies on a special linearization of the matrix polynomial expressed in the monomial basis and does not easily extend to matrix polynomials with singular leading matrix coefficient or to matrix polynomials expressed in non-monomial bases.
Our approach to study the sign characteristics is analytic rather than algebraic, and hence, it is essentially basis-independent. However, for the sake of concreteness and simplicity, we have decided to present our results on matrix polynomials using the monomial basis. We note en passant that it would be straightforward to present the theory employing any other basis. The only potential exception is the notion of a leading coefficient, central in Section 3. This is only natural in a degree-graded basis. Yet, the problem is easily overcome via the notion of reversal, which is basis-independent: for the purposes of Section 3, in fact, the leading coefficient could be defined as the reversal polynomial evaluated at 0.
Let us consider a few well-known examples from [30] expressed in the framework of Hermitian pencils, see also the survey [5].

Example 1.1. In the optimal H∞ control problem, see [3,4,38], one has (in the complex case) to deal with parameterized Hermitian matrix pencils depending on a real parameter γ > 0. In the so-called γ iteration one has to determine the smallest possible γ such that the pencil has no real eigenvalues, and it is essential that this γ is computed accurately. In the limiting situation when the optimal γ is achieved, the sign characteristic of the eigenvalue(s) on the real axis (and, if E is singular, the eigenvalue infinity) plays an essential role.

Example 1.2. Consider a control system
with real or complex matrices E, A, B, C, D of sizes n × n, n × n, n × m, p × n, p × m, respectively. If all the finite eigenvalues of the pencil xE − A are in the open left half complex plane, then the system is passive, i.e., it does not generate energy, if and only if the pencil has no real eigenvalues and the eigenvalue infinity has equal algebraic and geometric multiplicity. In industrial practice, these systems arise from the discretization of partial differential equations, model reduction, realization or system identification, and often they are non-passive even though the underlying physical problem is. In this case one is interested in constructing small perturbations to E, A, B, C, D such that the system becomes passive, see, e.g., [2,5,6,12], and this requires explicit knowledge about the sign characteristic.

Example 1.3. The stability of linear second-order gyroscopic systems, see [16,24,37], can be analyzed via a quadratic eigenvalue problem for a matrix polynomial P(x) in which G, K ∈ C^{n×n}, K is Hermitian positive definite, G is nonsingular skew-Hermitian, and δ > 0 is a parameter. To stabilize the system one needs to find the smallest real δ such that all the eigenvalues of P(x) are real, which means that the gyroscopic system is stable. For the system to be robustly stable it is essential that multiple real eigenvalues do not have mixed sign characteristics.
In all these applications, and many others, see [5] for a recent survey, the location of the real eigenvalues needs to be checked numerically at different values of parameters or perturbations. In this perturbation analysis, the sign characteristic plays a very essential role.
The paper is organized as follows. After some preliminaries, in which we introduce the sign characteristic and the sign feature, we discuss the effect of transformations in Section 2. A signature constraint theorem and its applications are discussed in Section 3. In Section 4 we discuss the behavior of the sign characteristic under perturbations. A short summary concludes the paper.

Notation and preliminaries
In the following we denote by R and C the real and complex numbers, and by R^{m×n} the set of m × n matrices with elements in a ring R. For an open interval Ω ⊆ R we use the following sets: C^ω_C(Ω), the ring of complex-valued functions that are analytic on Ω; M(Ω), the field of fractions of C^ω_C(Ω), i.e., the field of functions that are meromorphic on Ω; and A_n(Ω) := C^ω_C(Ω)^{n×n}, the ring of n × n matrices with complex-valued Ω-analytic elements.
Furthermore, F[x] is the ring of univariate polynomials in x with coefficients in the field F, and F(x) is the field of fractions of F[x], i.e., the field of rational functions in x. For A(x) ∈ A_n(Ω), we denote by A(x)^* the complex conjugate transpose of A(x).
One of the key ingredients of our approach to studying the sign characteristic is a theorem of Rellich [32,33], which is used in several classical monographs, see, e.g., [11,17]: any Hermitian analytic matrix function H(x) = H(x)^* ∈ A_n(Ω) admits a decomposition H(x) = V(x)^* D(x) V(x), where V(x) ∈ A_n(Ω) is unitary for every x ∈ Ω and D(x) is a real diagonal analytic matrix function (Theorem 1.4).

Note that the nature of the proof in [11] for Ω = R is completely local, and therefore R can be safely replaced by any simply connected open subset of R, i.e., any open interval Ω. In the following we call a decomposition as in Theorem 1.4 a Rellich decomposition. It is the analytic-function analogue of the spectral theorem for complex Hermitian matrices.
The normal rank of a polynomial matrix P(x) ∈ C[x]^{n×n}, denoted by rank_{C(x)} P(x), is the rank of P(x) as a matrix over C(x). A finite eigenvalue of P(x) is an x_0 ∈ C such that the rank over C of P(x_0) ∈ C^{n×n} is strictly less than the normal rank of P(x), i.e., rank_C P(x_0) < rank_{C(x)} P(x). Similarly, the normal rank of an analytic matrix function A(x) ∈ A_n(Ω) is defined as its rank over the field M(Ω).
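In floating point, the normal rank can be estimated by sampling: for all but finitely many points x_0, the rank of P(x_0) equals the normal rank, so a few random evaluations suffice, and a finite eigenvalue is a point where the rank drops below that value. A minimal NumPy sketch (the function name, sampling interval, and tolerance are ad hoc choices, not from the paper):

```python
import numpy as np

def normal_rank(coeffs, samples=5, tol=1e-10, seed=0):
    """Estimate rank_{C(x)} P(x) for P(x) = sum_j coeffs[j] * x**j by
    evaluating P at a few random points; the maximum pointwise rank,
    attained at generic points, equals the normal rank."""
    rng = np.random.default_rng(seed)
    r = 0
    for x0 in rng.uniform(-10.0, 10.0, samples):
        Px0 = sum(C * x0**j for j, C in enumerate(coeffs))
        r = max(r, np.linalg.matrix_rank(Px0, tol=tol))
    return r

# P(x) = diag(x^2 - 1, 0): normal rank 1, and x0 = 1 is a finite
# eigenvalue because rank P(1) = 0 < 1.
coeffs = [np.diag([-1.0, 0.0]), np.zeros((2, 2)), np.diag([1.0, 0.0])]
```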
The following proposition shows that the normal rank of a Hermitian matrix function is equal to the number of nonvanishing diagonal elements of D(x) in the Rellich decomposition.
Proof. Each d_ii(x), i = 1, …, n, is analytic on Ω. Let Z_i ⊆ Ω be the set of zeros of d_ii, i.e., Z_i = {x_0 ∈ Ω | d_ii(x_0) = 0}. By the identity theorem [20, Corollary 1.2.7], either Z_i = Ω or Z_i does not have a limit point in Ω. Hence, either Z_i = Ω or Z_i is a countable set. Since I is the set of indices i such that d_ii(x) does not vanish identically on Ω, it suffices to show that its cardinality z equals r. Define Z = ⋃_{i∈I} Z_i (if z = 0, we set Z = ∅), and note that the complement of Z, denoted by Ω\Z, is not empty. If z > r, then for any x_0 ∈ Ω\Z, rank_C H(x_0) = rank_C D(x_0) = z > r, which is a contradiction. Conversely, suppose that z < r. Then for all x_0 ∈ Ω, rank_C H(x_0) = rank_C D(x_0) ≤ z < r, which again is a contradiction. □
The Rellich decomposition naturally separates the invertible and non-invertible parts of a Hermitian matrix function by means of a congruence transformation with a unitary matrix function V (x) in A n (Ω). The decomposition and the separation of the parts is actually numerically computable, see [23,Theorem 3.9], where an explicit differential equation for V (x) is derived and there exist numerical methods that can be used to compute the decomposition [7,29] . It should be noted that, even if the matrix function is polynomial, usually the factors V (x), D(x) are not polynomial.
We use the Rellich decomposition to define the sign characteristic and a related property, the sign feature, of a real eigenvalue of a (possibly singular) analytic Hermitian matrix function P (x) of normal rank r, by just considering the nonzero elements of D(x).
Definition 1.7 (Sign characteristic and sign feature). Let λ ∈ Ω be a real root of some d_ii(x) that is not identically zero on Ω, and consider a Taylor expansion d_ii(x) = ε_i^λ c_i^λ (x − λ)^{m_i^λ} + O((x − λ)^{m_i^λ + 1}), where m_i^λ ∈ N, c_i^λ ∈ R is positive, and ε_i^λ ∈ {1, −1}. We say that m_i^λ is the ith partial multiplicity of the real eigenvalue λ of H(x) and that ε_i^λ is its ith sign characteristic. Furthermore, φ_i^λ := ε_i^λ if m_i^λ is odd, and φ_i^λ := 0 if m_i^λ is even, is called the ith sign feature of λ.
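For a scalar d_ii the three quantities of this definition can be read off from the first nonvanishing derivative at the root, since d_ii^{(m)}(λ) = ε m! c. A small sketch for a polynomial diagonal entry, with a hypothetical helper name and an ad hoc tolerance:

```python
import numpy as np

def sign_data(d_coeffs, lam, tol=1e-9):
    """Partial multiplicity m, sign characteristic eps, and sign feature phi
    of a real root lam of a not-identically-zero scalar polynomial d(x),
    given by its coefficients in increasing powers."""
    p = np.polynomial.Polynomial(d_coeffs)
    m = 0
    while abs(p(lam)) < tol:   # differentiate until d^(m)(lam) != 0
        m += 1
        p = p.deriv()
    eps = int(np.sign(p(lam)))  # sign of eps * c * m!
    phi = eps if m % 2 == 1 else 0
    return m, eps, phi

# d(x) = -x^2 at 0: multiplicity 2, sign characteristic -1, sign feature 0
```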
Remark 1.8. Usually, for a real analytic A(x), the partial multiplicities of an eigenvalue are defined via the Smith normal form, which is possible since C^ω_C(Ω) is an elementary divisor domain [14], and hence, the Smith form exists. When a Smith form exists, then also a local Smith form exists, a fact that can be shown in a similar way as for the polynomial case in [11, Theorem S1.10]. However, note that in the Rellich decomposition of a Hermitian H(x) = V^*(x) D(x) V(x), we have that V(λ) is nonsingular for any λ ∈ Ω. Therefore, the local Smith forms at λ of H(x) and D(x) are the same, and hence, the partial multiplicities are also equal.
The sign characteristics have the property of being invariant under analytic congruence transformations.

Theorem 1.9 (Theorem 3.6 in [9]). Let H(x) = H(x)^* ∈ A_n(Ω), and assume that λ ∈ Ω is an eigenvalue of H(x). Let R(x) ∈ A_n(Ω) satisfy det R(λ) ≠ 0. Then H(x) and R^*(x) H(x) R(x) have the same sign characteristics at λ.
One of the main goals of this paper is to extend the concept of sign characteristic to the eigenvalue at infinity and thus to extend the classical results of Gohberg, Lancaster and Rodman in [9] to matrix polynomials with singular leading matrix coefficient. To do this in a systematic way, we need the concept of the grade of a matrix polynomial. Consider a matrix polynomial P(x) = Σ_{j=0}^k P_j x^j of degree k, i.e., the leading matrix coefficient P_k is not the zero matrix. Then we can associate with P an integer g ≥ k, called the grade of P, and express P as P(x) = Σ_{j=0}^g P_j x^j, where P_j = 0 for j > k. At first sight this looks artificial, but under some circumstances, and especially for structured matrix polynomials, it is a very useful concept, see [27,28]. In particular, and this is the main reason for using the grade instead of the degree, it has been shown in [27,31] that Möbius transformations, which play an important role in our analysis, are grade-preserving, but in general not degree-preserving. Once the grade g of a matrix polynomial P(x) = Σ_{j=0}^g P_j x^j is fixed, the reversal of P is given by rev_g P(x) := x^g P(x^{-1}) = Σ_{j=0}^g P_{g−j} x^j, and the ith partial multiplicity of the eigenvalue 0 of rev_g P(x) is defined to be the ith partial multiplicity of the eigenvalue ∞ of P(x).
Using the reversal we can then also introduce the ith sign characteristic of the eigenvalue ∞.

Definition 1.10 (Sign characteristic and sign feature of the eigenvalue infinity). Let P(x) ∈ C[x]^{n×n} be Hermitian, have grade g, and let S(v) := − rev_g P(v). If P(x) has an eigenvalue at infinity with ith partial multiplicity m_i^∞, then we say that the ith sign characteristic of infinity, ε_i^∞, is the sign characteristic of the eigenvalue 0 of S(v) having corresponding ith partial multiplicity. Furthermore, we call φ_i^∞ := ε_i^∞ if m_i^∞ + g is odd, and φ_i^∞ := 0 if m_i^∞ + g is even, the ith sign feature of the eigenvalue infinity.

To see that this definition is reasonable, we first show that it does not depend on the particular choice of the grade.

Proposition 1.11. Let P(x) ∈ C[x]^{n×n} be Hermitian, have grade g and degree k. The definition of the sign feature at ∞ does not depend on the particular choice of the grade, i.e., the ith sign feature φ_i^∞ is the same for all g > k. For g = k, the definition remains consistent provided that the corresponding partial multiplicity does not become zero (in which case there is no sign feature because infinity is not an eigenvalue anymore).
Proof. Let d_ii(v) be a diagonal term in a Rellich decomposition of S(v) for grade g_1 ≥ k. Passing to a grade g_2 > g_1 multiplies rev_{g_1} P(v), and hence each d_ii(v), by v^{g_2 − g_1}. Then, since both the ith partial multiplicity of 0 and the grade are increased by g_2 − g_1, the sign characteristic and the parity of m_i^0 + g are unchanged, and it is clear that the sign feature at infinity is independent of the choice of grade. □
Remark 1.12. Unless explicitly stated otherwise, in the following we take the grade equal to the degree. The use of a grade g > k introduces additional eigenvalues at ∞, all with partial multiplicities equal to g − k. This provides a simple way to distinguish the "original" infinite eigenvalues from the "artificial" ones, the former having partial multiplicities > g − k.
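At the level of coefficient lists (increasing powers), the g-reversal is just padding with zero coefficients up to the grade and then reversing; a hypothetical sketch (assumes grade at least the degree):

```python
import numpy as np

def rev(coeffs, grade):
    """Coefficients of rev_g P(x) = x^g P(1/x): pad the list of matrix
    coefficients (increasing powers) with zeros up to the grade, then reverse."""
    n = coeffs[0].shape[0]
    padded = list(coeffs) + [np.zeros((n, n))] * (grade + 1 - len(coeffs))
    return padded[::-1]

# P(x) = x*I with grade g = 2: rev_2 P(v) = v^2 * (1/v) * I = v*I, so 0 is an
# eigenvalue of rev_2 P with partial multiplicity 1 = g - k ("artificial" infinity).
R = rev([np.zeros((2, 2)), np.eye(2)], 2)
```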
The motivation for the minus sign in the definition of S(v) and the presence of g in the definition of the sign feature is that we aim to obtain an elegant signature constraint theorem, as we will see in the next sections. This goal could also have been achieved via the definition of an anti-reversal, x^g P(−x^{-1}). It is not clear which choice is better, but we prefer our definition, since it has been used in the previous literature [1,35].

Transformations and their effect on the sign characteristics
In this section we study the effect on the sign characteristics and the sign features of transformations of the form
E(y) = w(y) H(f(y)),   (5)
where in (5) f(y) is a diffeomorphism and w(y) is a nonvanishing function. We restrict our attention to real-analytic transformations, as we want to preserve analyticity.

Definition 2.1.
Let Ω ⊆ R be an open interval, and let f : Ω → f(Ω) be a real-valued real-analytic diffeomorphism. We say that f is orientation-preserving if f′(y) > 0 for all y ∈ Ω, and orientation-inverting if f′(y) < 0 for all y ∈ Ω. Observe that this definition makes sense, because the derivative of a real diffeomorphism must have constant sign. Note that this is a simpler version (on a one-dimensional Euclidean space) of the more general concept of an orientation-preserving diffeomorphism.

Theorem 2.2. Let H(x) = H(x)^* ∈ A_n(f(Ω)), let f : Ω → f(Ω) be a real-analytic diffeomorphism, and set E(y) := H(f(y)). Moreover, let x_0 = f(y_0) ∈ f(Ω) be an eigenvalue of H(x).

1. If f is orientation-preserving, then the sign characteristics of y_0 as an eigenvalue of E(y) are equal to the sign characteristics of x_0 as an eigenvalue of H(x).

2. If f is orientation-inverting, then the sign characteristics of y_0 as an eigenvalue of E(y) are equal to those of x_0 as an eigenvalue of H(x) for even partial multiplicities, and opposite for odd partial multiplicities.

Proof. H(x) has a Rellich decomposition H(x) = V(x)^* D(x) V(x) for any x ∈ f(Ω), and analogously, E(y) has a Rellich decomposition for any y ∈ Ω. It follows that V(x) is analytic and unitary for all x ∈ f(Ω) if and only if V(f(y)) is for all y ∈ Ω, as f is locally analytic and invertible. Hence, E(y) = V(f(y))^* D(f(y)) V(f(y)) is a Rellich decomposition of E(y), and the claims follow by expanding each diagonal element d_ii(f(y)) around y_0, since f(y) − x_0 = f′(y_0)(y − y_0) + O((y − y_0)^2). □

Theorem 2.2 emphasizes the intuitive fact that the orientation plays an important role, and that one needs to keep track of whether a change of variable is orientation-preserving or orientation-inverting.

Remark 2.3.
Note that the statement of Theorem 2.2 includes the special cases Ω = R or f(Ω) = R, i.e., a diffeomorphism from the real line to an open interval, or vice versa. This could be exploited to define the signs at infinity using, for example, the map P(x) ↦ (sin θ)^g P(cot θ). Note in fact that f(θ) = cot θ is analytic and a diffeomorphism on (0, π), so that this approach is essentially equivalent to the one via reversals (it excludes one point). However, we will not follow this approach, as we prefer to map polynomials to polynomials.
From Theorem 2.2, we can easily deduce as a corollary the effect of a reparametrization on the sign feature. As a second step we analyze the effect on the sign characteristic of multiplications by non-vanishing functions.
Theorem 2.5. Let H(x) = H(x)^* ∈ A_n(Ω) and let E(x) = w(x) H(x), with an analytic non-vanishing function w : Ω → R. Then the sign characteristics (resp., features) of an eigenvalue x_0 ∈ Ω of E(x) are equal to the sign characteristics (resp., features) of x_0 as an eigenvalue of H(x) multiplied by sign(w(x_0)).

Proof. With a Rellich decomposition H(x) = V(x)^* D(x) V(x), we obtain E(x) = V(x)^* (w(x) D(x)) V(x), which is again a Rellich decomposition, since w(x) D(x) is real, diagonal, and analytic. Expanding w(x) d_ii(x) around x_0 multiplies the leading Taylor coefficient of each d_ii(x) by w(x_0), and hence its sign by sign(w(x_0)), from which the claim follows. □

Example 2.6 (Effect of a Möbius transformation on the sign characteristics and on the sign features).
As an application of the discussed transformations, we study the effect of a real Möbius transformation on the sign characteristics of a Hermitian matrix polynomial. Suppose that P(x) ∈ C[x]^{n×n} is Hermitian and has grade g, and for α, β, γ, δ ∈ R let
Δ := det [α β; γ δ] ≠ 0.
Then for the Möbius transformation f(y) = (αy + β)/(γy + δ) we have f′(y) = Δ/(γy + δ)^2, and hence, f is a diffeomorphism on (−∞, −δ/γ) and on (−δ/γ, +∞), which is orientation-preserving if Δ > 0 and orientation-inverting if Δ < 0. Now consider the mapping
Q(y) := (γy + δ)^g P((αy + β)/(γy + δ)),
i.e., the Möbius transform of P; a finite eigenvalue µ of Q with f(µ) finite corresponds to the eigenvalue λ = f(µ) of P. Applying Theorems 2.2, 2.4, and 2.5, as well as Definition 1.10, we obtain the following results.
• If λ has ith partial multiplicity m_i^λ, sign characteristic ε_i^λ and sign feature φ_i^λ, then one has the following cases:
 – if m_i^λ is even, then by definition both λ and µ must have sign feature 0;
 – if m_i^λ is even and g is even, then λ and µ must have the same sign characteristic;
 – if m_i^λ is even and g is odd, then λ has sign characteristic ε_i^λ if and only if µ has sign characteristic sign(γµ + δ) ε_i^λ;
 – if m_i^λ is odd and g is even, then λ has sign characteristic ε_i^λ (resp. sign feature φ_i^λ) if and only if µ has sign characteristic sign(Δ) ε_i^λ (resp. sign feature sign(Δ) φ_i^λ);
 – if m_i^λ is odd and g is odd, then λ has sign characteristic ε_i^λ (resp. sign feature φ_i^λ) if and only if µ has sign characteristic sign(γµ + δ) sign(Δ) ε_i^λ (resp. sign feature sign(γµ + δ) sign(Δ) φ_i^λ).
Let us now first assume that γ ≠ 0. In this case, one has the following.

• The finite eigenvalue λ = α/γ of P is mapped to the eigenvalue µ = ∞ of Q.
• If λ has ith partial multiplicity m_i^λ, sign characteristic ε_i^λ and sign feature φ_i^λ, then one has the following cases:
 – if m_i^λ is even and g is even, then λ and µ must have opposite sign characteristics, and the sign feature of µ is by definition equal to 0;
 – if m_i^λ is even and g is odd, then λ has sign characteristic ε_i^λ if and only if µ has sign characteristic − sign(γ) ε_i^λ; moreover, µ has sign feature − sign(γ) ε_i^λ;
 – if m_i^λ is odd and g is even, then λ has sign characteristic ε_i^λ (resp. sign feature φ_i^λ) if and only if µ has sign characteristic sign(Δ) ε_i^λ (resp. sign feature sign(Δ) φ_i^λ);
 – if m_i^λ is odd and g is odd, then λ has sign characteristic ε_i^λ if and only if µ has sign characteristic sign(γ) sign(Δ) ε_i^λ; moreover, µ has sign feature 0.
• The eigenvalue λ̄ = ∞ of P is mapped to the finite eigenvalue µ̄ = −δ/γ of Q. If λ̄ has ith partial multiplicity m_i^λ̄, sign characteristic ε_i^λ̄ and sign feature φ_i^λ̄, then one has the following cases:
 – if m_i^λ̄ is even, then by definition µ̄ must have sign feature 0;
 – if m_i^λ̄ is even and g is even, then λ̄ and µ̄ must have opposite sign characteristics;
 – if m_i^λ̄ is even and g is odd, then λ̄ has sign characteristic ε_i^λ̄ if and only if µ̄ has sign characteristic sign(γ) sign(Δ) ε_i^λ̄; moreover, λ̄ has sign feature ε_i^λ̄;
 – if m_i^λ̄ is odd and g is even, then λ̄ has sign characteristic ε_i^λ̄ (resp. sign feature φ_i^λ̄) if and only if µ̄ has sign characteristic sign(Δ) ε_i^λ̄ (resp. sign feature sign(Δ) φ_i^λ̄);
 – if m_i^λ̄ is odd and g is odd, then λ̄ has sign characteristic ε_i^λ̄ if and only if µ̄ has sign characteristic − sign(γ) ε_i^λ̄; moreover, λ̄ has sign feature 0.

Conversely, if γ = 0, then the eigenvalue infinity stays at infinity. Assuming that ∞, as an eigenvalue of P, has ith partial multiplicity m_i^∞, sign characteristic ε_i^∞, and sign feature φ_i^∞, one has the following (note that α ≠ 0 ≠ δ, since otherwise Δ = 0):
 – if m_i^∞ and g are either both even or both odd, then ∞ has sign feature 0 both as an eigenvalue of P and as an eigenvalue of Q;
 – if m_i^∞ is even and g is even, then ∞ must have the same sign characteristic when seen as an eigenvalue of P and when seen as an eigenvalue of Q;
 – if m_i^∞ is even and g is odd, then ∞ has sign characteristic ε_i^∞ (resp. sign feature φ_i^∞) as an eigenvalue of P if and only if it has sign characteristic sign(α) ε_i^∞ (resp. sign feature sign(α) φ_i^∞) as an eigenvalue of Q;
 – if m_i^∞ is odd and g is even, then ∞ has sign characteristic ε_i^∞ (resp. sign feature φ_i^∞) as an eigenvalue of P if and only if it has sign characteristic sign(Δ) ε_i^∞ (resp. sign feature sign(Δ) φ_i^∞) as an eigenvalue of Q;
 – if m_i^∞ is odd and g is odd, then ∞ has sign characteristic ε_i^∞ as an eigenvalue of P if and only if it has sign characteristic sign(δ) ε_i^∞ as an eigenvalue of Q.
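On coefficient lists, the Möbius transform Q(y) = (γy + δ)^g P((αy + β)/(γy + δ)) amounts to Q = Σ_i P_i (β + αy)^i (δ + γy)^{g−i}. A sketch with hypothetical names; the reversal is recovered as the special case α = δ = 0, β = γ = 1:

```python
import numpy as np
from numpy.polynomial import polynomial as npoly

def moebius(coeffs, g, a, b, c, d):
    """Coefficient list (increasing powers) of
    Q(y) = (c*y + d)**g * P((a*y + b) / (c*y + d)) for P of grade g."""
    n = coeffs[0].shape[0]
    Q = [np.zeros((n, n), dtype=coeffs[0].dtype) for _ in range(g + 1)]
    for i, Pi in enumerate(coeffs):
        # scalar weight polynomial (b + a*y)**i * (d + c*y)**(g - i)
        w = npoly.polymul(npoly.polypow([b, a], i), npoly.polypow([d, c], g - i))
        for j, wj in enumerate(w):
            Q[j] = Q[j] + wj * Pi
    return Q

# Special case a = 0, b = 1, c = 1, d = 0 gives rev_g P: for P(x) = x*I with
# grade 1, the transform is the constant polynomial I.
Q = moebius([np.zeros((2, 2)), np.eye(2)], 1, 0.0, 1.0, 1.0, 0.0)
```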
In this section we have studied the effect of transformations on the sign feature and sign characteristic. These results will be used in the following section to derive a global constraint for these quantities.

A signature constraint theorem
In this section we discuss a conservation law for the sign feature and sign characteristic, extending to possibly singular matrix polynomials the signature constraint theorem [11]. Consider a Hermitian matrix polynomial P (x) of grade g. Then P (x) is holomorphic on the whole complex plane C, so in particular its restriction to the real line is real analytic, and the results of the previous section apply with Ω = R.
Recall that the Sylvester inertia index, or simply inertia, of a Hermitian matrix H is the triple (n + , n 0 , n − ), where n + (resp. n 0 , n − ) is the number of positive (resp. zero, negative) eigenvalues of H. Furthermore, the signature of H is defined as sig(H) = n + − n − .
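Numerically, the inertia and the signature can be read off from the eigenvalues of H, e.g. as follows (hypothetical names, ad hoc tolerance):

```python
import numpy as np

def inertia(H, tol=1e-10):
    """Sylvester inertia (n_plus, n_zero, n_minus) of a Hermitian matrix H."""
    w = np.linalg.eigvalsh(H)
    return (int(np.sum(w > tol)), int(np.sum(np.abs(w) <= tol)), int(np.sum(w < -tol)))

def sig(H, tol=1e-10):
    """Signature sig(H) = n_plus - n_minus."""
    n_plus, _, n_minus = inertia(H, tol)
    return n_plus - n_minus

H = np.diag([2.0, 0.0, -3.0, -1.0])  # inertia (1, 1, 2), signature -1
```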
To derive a signature constraint law, it is convenient to first discuss the case where P(x) has no infinite eigenvalues. A sufficient condition for this is that P(x) has a nonsingular leading matrix coefficient P_g. In this case, the proof can be found in [11, Proposition 10.12]. Note that [11, Proposition 10.12] is stated for a monic matrix polynomial, but it is easily generalized to any nonsingular leading matrix coefficient using [10, Eqn. 12.2.12]. Our signature constraint result, Theorem 3.4, is stronger, because it allows for a general Hermitian P(x), including the case that the leading matrix coefficient P_g is singular. In the following we denote by Λ_R*(P) the set of all real eigenvalues of the matrix polynomial P including ∞, and we use again the set I as defined in (3). For i ∈ I, λ ∈ Λ_R*(P) we denote by m_i^λ, φ_i^λ(P), respectively, the ith partial multiplicity and sign feature associated with λ and P.

Theorem 3.1. Let P(x) = Σ_{j=0}^g P_j x^j be a Hermitian matrix polynomial of grade g with no infinite eigenvalues. For λ ∈ Λ_R*(P) and i ∈ I, let m_i^λ be the partial multiplicities, and let φ_i^λ(P) be the corresponding sign features. Then
Σ_{λ∈Λ_R*(P)} Σ_{i∈I} φ_i^λ(P) = 0 if g is even, and Σ_{λ∈Λ_R*(P)} Σ_{i∈I} φ_i^λ(P) = sig(P_g) if g is odd.

Proof. Since there are no eigenvalues at infinity, it follows that rank P_g = r = rank_{C(x)} P(x). Observe that this implies that either P(x) ≡ 0 or that g is equal to the degree k of P. If P(x) ≡ 0, then the assertion holds trivially, so we consider the case k = g and let (n_+, n_0, n_−) be the inertia of P_g. Note that n_+ + n_− + n_0 = n and that n_0 = n − r. Then the proof follows by a counting argument on the number of zeros with odd multiplicity of d_ii(x), i ∈ I. Indeed, for i ∈ I a root λ ∈ Λ_R*(P) of d_ii(x) has odd multiplicity m_i^λ if and only if it is associated with an eigenvalue of nonzero sign feature.
In other words, the sign feature is −1 if d_ii(x) is positive to the left of the root and negative to the right, and it is +1 if it is negative to the left and positive to the right. Now let β > 0 be larger than the largest (in absolute value) real eigenvalue of P(x). Then
(sign d_ii(β) − sign d_ii(−β))/2
counts the sum of the sign features associated with that value of i. Summing over all i ∈ I we get that the sum of all the sign features is
Σ_{λ∈Λ_R*(P)} Σ_{i∈I} φ_i^λ(P) = (sig P(β) − sig P(−β))/2.
Suppose first that g is even. Then P(β) and P(−β) both have the same inertia as P_g, and therefore the right hand side is equal to (sig P_g − sig P_g)/2 = 0. If g is odd, then P(β) has the same inertia as P_g and P(−β) has the same inertia as −P_g. Therefore, the right hand side becomes (sig P_g − sig(−P_g))/2 = sig(P_g). □

To extend the result to the case where P has infinite eigenvalues, it is convenient to consider three auxiliary matrix polynomials. Let β > |λ_max|, where λ_max is the finite real eigenvalue of P of maximal absolute value. Then introduce
Q(y) := (−y)^g P(−β − 1/y)  and  R(z) := z^g P(β − 1/z).   (6)
Observe that neither Q nor R has an infinite eigenvalue, so that we can apply Theorem 3.1 to them. We have the following lemma.

Lemma 3.2.
Let P(x) = Σ_{j=0}^g P_j x^j be a Hermitian matrix polynomial of grade g. Let I be defined as in (3). If λ is a finite real eigenvalue of P(x) with partial multiplicities m_i^λ and sign features φ_i^λ(P), i ∈ I, then −1/(β+λ) is a finite eigenvalue of Q(y) with partial multiplicities m_i^λ and sign features φ_i^λ(P), i ∈ I, and in the same way, 1/(β−λ) is a finite eigenvalue of R(z) with partial multiplicities m_i^λ and sign features φ_i^λ(P), i ∈ I. If λ = ∞ is an eigenvalue of P(x) with partial multiplicities m_i^∞, i ∈ I, then 0 is an eigenvalue of both Q(y) and R(z), each with partial multiplicities m_i^∞, i ∈ I, and furthermore, if g is even, then the sign features of 0 as an eigenvalue of Q and R are the same, while if g is odd, then the sign features of 0 as an eigenvalue of Q and R are opposite in sign.
Proof. The conservation of the partial multiplicities follows immediately from [28, Theorem 5.3] or [31, Theorem 4.1]. Thus, it suffices to prove the statements on the sign features, for which we apply Theorems 2.4 and 2.5, or equivalently Example 2.6. We observe that both Möbius reparametrizations y = −1/(β+x) and z = 1/(β−x) are orientation-preserving (on the open intervals where they are a diffeomorphism), because they have determinant 1. Therefore the sign features of a finite nonzero real eigenvalue of Q (resp. R) can only differ from those of the corresponding finite real eigenvalue λ of P if g is odd and 1/(β+λ) (resp. 1/(β−λ)) is negative. But this happens if and only if λ < −β (resp. if and only if λ > β), which is impossible by the definition of β.
Finally, by comparing the two Möbius transformations in (6), we see that Q(y) = (−1)^g (2βy + 1)^g R(y/(2βy + 1)), where the reparametrization y ↦ y/(2βy + 1) fixes 0 and has positive derivative there. Using Theorem 2.4 we see that the reparametrization has no effect because it is orientation-preserving. However, by Theorem 2.5, the global factor (−1)^g comes into play, thus proving the assertions on the sign features associated with the 0 eigenvalue of Q(y) and R(z). □
A third matrix polynomial with eigenvalues at 0 is S(v) as constructed in Definition 1.10. Comparing S(v) with Q(y) and R(z) we have the following lemma.

Lemma 3.3. Let S(v) be as in Definition 1.10 and let Q(y) and R(z) be as in (6). Then the sign features of 0 as an eigenvalue of S(v) coincide with the sign features of 0 as an eigenvalue of Q(y), and are equal to (−1)^g times the sign features of 0 as an eigenvalue of R(z).

Combining these results we have the following theorem.

Theorem 3.4 (Signature Constraint Theorem). Let P(x) = Σ_{j=0}^g P_j x^j be a Hermitian matrix polynomial of grade g. Then
Σ_{λ∈Λ_R*(P)} Σ_{i∈I} φ_i^λ(P) = 0 if g is even, and Σ_{λ∈Λ_R*(P)} Σ_{i∈I} φ_i^λ(P) = sig(P_g) if g is odd.

Proof. Suppose first that g is even. Applying Theorem 3.1 to Q(y), with real eigenvalue set Λ_R(Q) = Λ(Q) ∩ R, we get that
Σ_{λ∈Λ_R*(P)\{∞}} Σ_{i∈I} φ_i^λ(P) + Σ_{i∈I} φ_i^0(Q) = 0,
using Lemma 3.2, the fact that g is even, and that m_i^0 must be odd, because otherwise φ_i^0(Q) = 0 does not contribute to the summation. The assertion follows, since by Lemma 3.3 φ_i^0(Q) = φ_i^0(S), and by definition φ_i^0(S) = φ_i^∞(P) as g is even.

The case of odd g requires some further discussion. Consider β as a parameter varying in (|λ_max|, +∞). Let A(β) (resp. B(β)) be the leading matrix coefficient of Q(y) (resp. R(z)). From the formula in [31, Proof of Proposition 3.2, second bullet] we get A(β) = (−1)^g P(−β) and B(β) = P(β). Moreover, both A and B are Hermitian matrices that depend analytically on the real parameter β, and hence, by Theorem 1.4 and Proposition 1.5, their eigenvalues are analytic functions of β, of which n − r are constantly zero, where r is the normal rank of P(x) and n is its size. In particular, since there is no eigenvalue of P(x) in the interval (|λ_max|, +∞), the number of positive and negative eigenvalues of A(β) and B(β) must be independent of β. As a consequence, their signatures are constant, and we may simply write sig(A) and sig(B), omitting β.
Being polynomial, S(b) is analytic at 0, and hence, it admits a Rellich decomposition. Setting γ = dim ker P_g + r − n, such a decomposition is given by
S(b) = V(b)^* ( 0_{n−r} ⊕ diag( ε_1^0 c_1^0 b^{m_1^0} (1 + O(b)), …, ε_γ^0 c_γ^0 b^{m_γ^0} (1 + O(b)) ) ⊕ diag( α_{γ+1}^0 + O(b), …, α_r^0 + O(b) ) ) V(b),   (7)
where 0_k is the k × k zero matrix, ⊕ denotes the direct sum, the α_j^0 are some nonzero constants, the c_j^0 are positive constants, and the ε_j^0 (resp. m_j^0) are the sign characteristics (resp. partial multiplicities) at 0 of S(b), which are, by definition, the sign characteristics (resp. partial multiplicities) at ∞ of P(x). Clearly, the signature of S(b) is the same as the signature of the diagonal matrix in (7). When |b| > 0 is small enough, then only the lowest order terms in b matter. Thus, there exists b_0 > 0 such that for 0 < b < b_0 we have that
sig S(b) = Σ_{i=1}^γ ε_i^0 − sig(P_g).
On the other hand, for 0 < b < b_0 we have sig S(b) = −sig P(1/b) = −sig(B), and hence Σ_{i=1}^γ ε_i^0 = sig(P_g) − sig(B). Using Theorem 3.1, Lemma 3.2 and Lemma 3.3 (with g odd), we moreover obtain
sig(A) + sig(B) = 2 Σ_{λ∈Λ_R*(P)\{∞}} Σ_{i∈I} φ_i^λ(P)  and  Σ_{i : m_i^0 odd} ε_i^0 = (sig(A) − sig(B))/2.
Combining the last three identities yields
Σ_{i : m_i^0 even} ε_i^0 = sig(P_g) − Σ_{λ∈Λ_R*(P)\{∞}} Σ_{i∈I} φ_i^λ(P).
The result follows by observing that, when m_i^0 is even, ε_i^0 is, by definition, the sign feature at infinity of P(x), as g is odd. □
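The odd-grade case of the signature constraint can be checked numerically on a small Hermitian pencil P(x) = A + xB with only simple real eigenvalues: each sign feature is then the sign of the slope of the analytic eigenvalue curve of P(x) that crosses zero, and the features must sum to sig(B). A sketch (the matrices and the difference step are ad hoc choices):

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0, 1.0], [1.0, -1.0]])    # indefinite leading coefficient

# real eigenvalues of the pencil = roots of det(A + x*B), here a quadratic
xs = [-1.0, 0.0, 1.0]
pol = np.polyfit(xs, [np.linalg.det(A + x * B) for x in xs], 2)
roots = np.sort(np.roots(pol).real)        # 1/3 and 1, both simple

def zero_curve(x):
    """Eigenvalue of the Hermitian matrix A + x*B closest to zero."""
    w = np.linalg.eigvalsh(A + x * B)
    return w[np.argmin(np.abs(w))]

h = 1e-6                                    # central-difference step
features = [int(np.sign(zero_curve(r + h) - zero_curve(r - h))) for r in roots]

w = np.linalg.eigvalsh(B)
sig_B = int(np.sum(w > 0) - np.sum(w < 0))
```

Here the two features come out with opposite signs, matching sig(B) = 0.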

Remark 3.5.
Observe that, when g > k, the sum of the sign features is always zero because P_g = 0. The difference occurs only when g = k: if k is even the sum is still zero, but when k is odd, the sum is sig(P_k).
However, the proof of Theorem 3.4 shows that the sign characteristics associated with partial multiplicities g − k (that are, by Remark 1.12, associated with those infinite eigenvalues that are "artificial") are the inertia indices of −P_k. Moreover, their sign features are all zero if k is even and equal to their sign characteristics if k is odd. Hence, the sum of the "extra" sign features at infinity is zero when k is even and is −sig(P_k) if k is odd, making the whole picture coherent.

Remark 3.6. Observe that Theorem 3.4 can also be obtained by defining the sign features at infinity as the sign features of the anti-reversal T(z) = z^g P(−z^{-1}). Indeed, it is immediate that T(z) = (−1)^{g+1} S(−z), and hence, the sign characteristic of a zero eigenvalue of partial multiplicity m_i^0 of T(z) is (−1)^{g+1+m_i^0} times the sign characteristic of a zero eigenvalue of S, of the corresponding partial multiplicity. In particular, when g + m_i^0 is odd, then these signs are unchanged. But given Definition 1.10, the case of g + m_i^0 odd is precisely the one that is relevant in Theorem 3.4.

Connection with the canonical form of Hermitian pencils
In this section we discuss the connection of our results to the canonical form for Hermitian pencils under congruence, see [25,36] and the references therein.
We first give a technical lemma that is useful to compute the signature of the leading matrix coefficient of a Hermitian pencil.
Proof. We just need to prove that sig(F_ℓ) = (1 − (−1)^ℓ)/2, as all the other claims follow immediately (recalling that [0 A; A 0] is similar to A ⊕ (−A)). Suppose first that ℓ = 2µ is even. Then, block-diagonalizing xI_{2µ} − F_{2µ} by an appropriate permutation similarity, it is readily seen that det(xI_{2µ} − F_{2µ}) = (x^2 − 1)^µ, yielding sig(F_{2µ}) = 0. The case of odd ℓ = 2µ + 1 can be reduced to the previous one, as by a Laplace expansion along the central row, we have det(xI_{2µ+1} − F_{2µ+1}) = (x − 1) det(xI_{2µ} − F_{2µ}), and hence, sig(F_{2µ+1}) = 1.
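Assuming F_ℓ denotes, as is standard in these canonical forms, the ℓ × ℓ antidiagonal ("sip") matrix with ones on the antidiagonal, the formula sig(F_ℓ) = (1 − (−1)^ℓ)/2 is easy to verify numerically:

```python
import numpy as np

def flip(ell):
    """The ell x ell antidiagonal ('sip') matrix F_ell."""
    return np.fliplr(np.eye(ell))

def signature(M):
    """Signature of a real symmetric matrix via its eigenvalues."""
    w = np.linalg.eigvalsh(M)
    return int(np.sum(w > 0) - np.sum(w < 0))

# sig(F_ell) is 0 for even ell and 1 for odd ell
sigs = [signature(flip(ell)) for ell in range(1, 8)]
```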
It turns out that the signs δ_1, …, δ_r, η_1, …, η_q in Theorem 3.7 determine the sign characteristics associated with the real and infinite eigenvalues, as the next results show. Note that in the literature there is a minor incoherence in the description of the exact relation between these signs and the sign characteristic, see, e.g., [25].

Theorem 3.9. The analytic Hermitian matrix pencil xF_ℓ + G_ℓ, as in (8), has a unique real eigenvalue at −α of partial multiplicity ℓ and sign characteristic (−1)^{ℓ+1}.
The analytic Hermitian matrix pencil F_k + xG_k has a unique eigenvalue at infinity, of partial multiplicity k and sign characteristic (−1)^k.
Proof. It suffices to prove the first statement, as together with Definition 1.10 it immediately implies the second. Observe that by a simple change of variable we may assume that α = 0. It is clear by direct inspection that A(x) = xF + G has an eigenvalue at 0 of partial multiplicity ℓ and geometric multiplicity 1. It remains to compute its sign characteristic.
By the definition of G, A(0) has precisely one zero eigenvalue. Therefore, using the Rellich decomposition (Theorem 1.4), it follows that the sign characteristic of A(x) at 0 is (−1)^{ℓ+1}.
Hence, we may easily obtain an alternative proof of Theorem 3.4 for the special case of pencils, i.e., g = 1. Indeed, observe that there is no loss of generality in assuming that the pencil A + Bx is in the canonical form described in Theorem 3.7, for if it is not, we may just apply Theorem 1.9 (specialized to the case where A(x) is a pencil and R(x) is constant and nonsingular).
Then, since B is block diagonal, its signature is the sum of the signatures of its diagonal blocks, which can be computed via Lemma 3.8. On the other hand, by Theorem 3.9, the sign feature of any finite real eigenvalue α_i is precisely 0 if ℓ_i is even and η_i if ℓ_i is odd, whereas the sign feature of any infinite eigenvalue is 0 if k_i is odd and δ_i if k_i is even. Therefore, we have verified that Theorem 3.4 is coherent with Theorem 3.7.

Perturbation theory and sign features: a local conservation rule
Theorem 3.4 can be interpreted as a global conservation law: if the Hermitian matrix polynomial P(x) is perturbed, then the sum of its sign features (for even g), or the sum of its sign features minus the signature of its leading matrix coefficient (for odd g), is preserved. However, as we will discuss in this section, a stronger result can be proved: the sign features of a regular self-adjoint matrix function are locally preserved. Related results are obtained in [9, Section 3.2] for polynomials with nonsingular leading matrix coefficient. Here we give a more general statement with our own proof. We will also explain why the result is false for singular analytic matrix functions. Then, we will present some applications to the perturbation theory of regular Hermitian matrix polynomials, discussing the nontrivial role of the grade.

Classical results on the smoothness of eigenvalues
Before considering the local conservation results, it is convenient to recall some basic results about the smoothness of the eigenvalues of a matrix. It is known that, for analytic perturbations, non-analyticity can only occur when eigenvalues coalesce [17, Ch. II]. Clearly, the analysis can be reduced to the problem of determining the smoothness of the roots of a polynomial, for which we have the following well-known result.

Theorem 4.1 (Theorem A in [13]). Let p(z) = z^n + Σ_{i=0}^{n−1} a_i z^i be a monic polynomial with complex coefficients and with roots r_1, …, r_n. Moreover, denote by ∼ the equivalence relation on C^n defined by v_1 ∼ v_2 if and only if v_2 is a permutation of v_1. Then the function that maps the coefficients of p(z) to its roots is a homeomorphism, when seen as a function from C^n to C^n/∼.
In [13, Theorem A] the Euclidean topology on C^n is used, whereas on C^n/∼ the quotient topology is employed [18, pp. 94-99]. An entirely different question is whether one can obtain an inverse function theorem, i.e., whether one can label n continuous functions r_i(a_0, …, a_{n−1}), i = 1, …, n, such that p(r_i(a_0, …, a_{n−1})) = 0 for all i. In general, the answer to this question is negative, as shown by the example p(z) = z² − x for a complex parameter x. Two important exceptions are discussed in [17, Section II.5.2]. First, if all the coefficients of p(z) depend continuously on a single real parameter t, then one can pick n continuous functions of t to represent the roots [17, Theorem 5.2]. Furthermore, if the coefficients of the polynomial depend analytically on t, then the n functions are analytic as well. The second important exception is when all the roots are real, or more generally, as our presentation will illustrate, when they lie on any set where the topology induced by the Euclidean topology on C becomes an order topology, e.g., a simple open curve. Essentially, the key property is the ability to continuously reorder an n-tuple. For this we introduce the reordering map χ(v) = (v_{σ(1)}, …, v_{σ(n)}), where σ is any permutation of {1, …, n} such that v_{σ(1)} ≥ ⋯ ≥ v_{σ(n)}.
Then we have the following lemma, which is implicit in [17].

Lemma 4.2. The reordering map is continuous.
Proof. Let {v_m} ⊂ R^n/∼ be any sequence satisfying lim_{m→∞} v_m = v ∈ R^n/∼. Denote by ℓ the number of distinct entries in v, i.e., suppose that there exist w_1 > ⋯ > w_ℓ ∈ R such that µ_k entries of v are equal to w_k, with Σ_{k=1}^{ℓ} µ_k = n. Let δ = min_{i≠j} |w_i − w_j|. Then, since {v_m} is a convergent sequence in the quotient topology, given any 0 < ε < δ/2, for m large enough and for any k = 1, …, ℓ, v_m has exactly µ_k components in the open interval J_k = (w_k − ε, w_k + ε). Then, for any x_i ∈ J_i, x_j ∈ J_j, we have x_i > x_j if and only if i < j. This holds because the intervals J_k are disjoint by construction, and because the Euclidean topology on R is the order topology induced by <. (Note that this is not true, e.g., for C.) Therefore, for m large enough, χ(v_m) is such that its first µ_1 components lie in J_1, the next µ_2 components lie in J_2, et cetera. Hence, lim_{m→∞} χ(v_m) = χ(v), implying that χ is continuous.
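A minimal numerical illustration (hypothetical code, not from the paper): the map χ simply sorts an n-tuple in descending order. For the family of eigenvalue pairs (x, −x), any analytic labeling swaps order at x = 0, whereas the reordered tuple χ(x, −x) = (|x|, −|x|) varies continuously:

```python
import numpy as np

def chi(v):
    """Reordering map: return the tuple sorted in descending order."""
    return np.sort(np.asarray(v, dtype=float))[::-1]

xs = np.linspace(-1.0, 1.0, 2001)
# Apply chi to the analytic labeling (x, -x) at each grid point.
ordered = np.array([chi([x, -x]) for x in xs])
# The reordered "eigenvalue paths" (|x|, -|x|) are continuous:
# consecutive jumps are of the order of the grid step.
assert np.max(np.abs(np.diff(ordered, axis=0))) < 2e-3
```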
By the above results, we have the following theorem, stated (without proof) in [17, Section II.5.7].

Theorem 4.3. Let A(x, ζ) be a matrix whose elements depend (jointly) continuously on the real parameters (x, ζ), and such that for any (x, ζ) in a certain domain Ω ⊂ R² all the eigenvalues of A(x, ζ) are real. Then there exist n jointly continuous functions f_j(x, ζ), j = 1, …, n, that are the eigenvalues of A(x, ζ) for all (x, ζ) ∈ Ω.
Proof. For the proof it suffices to compose two continuous functions: the map from (x, ζ) to the real coefficients of the characteristic polynomial of A, and the map from those coefficients to the (ordered) n-tuple χ (f 1 (x, ζ), . . . , f n (x, ζ)) ∈ R n of the eigenvalues of A(x, ζ), which is continuous by [13, Theorem A] and Lemma 4.2.

Remark 4.4. Another interesting question is whether, in the case that the coefficients of a monic polynomial are jointly analytic functions of two real parameters (x, ζ), one can find n jointly analytic functions f_1(x, ζ), …, f_n(x, ζ) that are the roots of the polynomial at each point. The answer is again negative, as the example p(z) = z² − 3xz + 2x² − ζ²(x − 1)² demonstrates; see [17, Section II.5.7] for further remarks and examples.
Note that, by Rellich's Theorem 1.4, for any fixed ζ and for any polynomial whose coefficients depend jointly analytically on x and ζ, e.g., the characteristic polynomial of a Hermitian matrix function, we can find eigenvalue functions that are analytic in x, and vice versa, for any fixed x we obtain analytic eigenvalue functions in ζ. Unfortunately, unlike for complex holomorphic functions, in the real case this separate analyticity does not imply that we have n jointly analytic functions, as the standard counterexample [20] f(x, ζ) = 2xζ/(x² + ζ²), f(0, 0) = 0, shows. Indeed, the latter function is separately analytic on R², but not even jointly continuous at (0, 0).
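The pathology of this standard counterexample is easy to observe numerically (a sketch): f vanishes identically along each coordinate axis, yet equals 1 along the whole diagonal x = ζ, so it cannot be continuous at the origin:

```python
def f(x, zeta):
    """Standard counterexample: separately analytic, not jointly continuous at (0, 0)."""
    if x == 0 and zeta == 0:
        return 0.0
    return 2 * x * zeta / (x**2 + zeta**2)

# Along the axes f vanishes, but along the diagonal f is identically 1,
# no matter how close to the origin we sample.
for t in (1.0, 1e-3, 1e-9):
    assert f(t, 0) == 0.0 and f(0, t) == 0.0
    assert abs(f(t, t) - 1.0) < 1e-15
```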
In the next subsection we will expand on this discussion and derive some perturbation theory results for regular Hermitian functions.

Perturbation theory for regular self-adjoint matrix functions
To derive our perturbation analysis for regular self-adjoint matrix functions, it is convenient to introduce some further notation. Let λ ∈ Ω ⊆ R, and let δ > 0 be such that J := [λ − δ, λ + δ] ⊂ Ω. For any nonzero f(x) that is analytic in Ω and such that f(λ − δ)f(λ + δ) ≠ 0, we define the local type of f in the interval J to be the ordered pair (sign f(λ − δ), sign f(λ + δ)). Note that since J is compact, the function f(x) can only have finitely many roots in J. Observe furthermore that, by continuity, the local type of a function determines the parity of the number of roots of odd multiplicity that f(x) has in J. It also determines the associated sign characteristics at such roots, i.e., the sign of the first nonzero derivative evaluated at the roots of odd multiplicity. More specifically, we have the following result.

Proof. We only give a proof of item 1, as the other cases are analogous. The argument is best followed by considering Figure 1 below.

[Figure 1: a function of local type (+, +) on J, positive at both endpoints, with sign characteristics −1 and +1 alternating at its roots of odd multiplicity and no sign change at the roots of even multiplicity.]

Since f is analytic, it is in particular continuous. Thus, each time f has a root of odd multiplicity at a point, say x_0 ∈ J, it must have opposite signs in an interval of real numbers strictly smaller than x_0 and in an interval of real numbers strictly larger than x_0. Conversely, for any root of even multiplicity, say x_1, there exists a neighborhood of x_1 in which f is constant in sign. Now suppose that f(x) has some roots of odd multiplicity in J, as otherwise there is nothing to prove. Let r be the smallest one. Since f(x) > 0 at the left endpoint of J, and since there are no roots of odd multiplicity smaller than r, we have f(x) > 0 in a left neighborhood of r, and hence, f(x) < 0 in a right neighborhood of r. Therefore, expanding f(x) = Σ_{k=m}^∞ c_k (x − r)^k for some odd m, we see that necessarily c_m < 0, proving that the sign characteristic at r is −1.
Repeating the argument yields the fact that the sign characteristics at the roots of odd multiplicity must alternate in sign, whereas the fact that f(x) > 0 at the right endpoint of J guarantees that the largest such root must have sign characteristic +1; hence, there is an even number of roots of odd multiplicity.
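To illustrate the alternation argument on a concrete (hypothetical) analytic function, take f(x) = (x − 1)(x − 2)²(x − 3) on J = [0, 4]. Its local type is (+, +); its roots of odd multiplicity are 1 and 3, and the signs of the first nonzero derivatives there alternate as −1, +1, summing to 0:

```python
import numpy as np

# f(x) = (x - 1) * (x - 2)**2 * (x - 3), built from its roots.
f = np.polynomial.Polynomial.fromroots([1, 2, 2, 3])

# Local type (+, +): f is positive at both endpoints of J = [0, 4].
assert f(0) > 0 and f(4) > 0

# Sign characteristics at the odd-multiplicity roots 1 and 3: sign of the
# first nonzero derivative. f'(1) < 0 and f'(3) > 0, so they alternate -1, +1.
df = f.deriv()
assert df(1) < 0 and df(3) > 0

# At the even-multiplicity root x = 2 the first derivative vanishes;
# the sign feature there is 0 in any case.
assert abs(df(2)) < 1e-12
```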
Note that, generally, nothing can be inferred from the local type about the roots of even multiplicity. Nonetheless, using Proposition 4.5, we can associate with any local type a specific value of the sum of the sign features over all the roots of f that lie in the interval J. The cases are summarized in Table 1.

Table 1. Sum of the sign features over the roots in J, by local type:
(+, +): 0
(−, −): 0
(+, −): −1
(−, +): +1

The following results illustrate why the local types are a useful tool for studying the local sum of sign features on a given interval. Denote by q_1, q_2, q_3, q_4 the number of the η_j(x), j = 1, …, n, that are of local type (·, ·) in J. Observe that, by definition, the q_i are simply related to the local types, subject to the constraints q_1 + q_2 = q_3 + q_4 = n. By Table 1, (ii) for sufficiently small ζ, the following conservation law holds: the sum of the sign features of the eigenvalues of the perturbed matrix function lying in J equals the sum of the sign features of the eigenvalues of H(x) lying in J.

Proof. Denote by η_j(x) the zeros of the polynomial p(z) = det(H(x) − zI), considered as functions of x. Clearly these are the functions d_jj(x) in the Rellich decomposition of H(x), and thus the η_j(x) are analytic functions of x; the sign characteristics at λ are the signs of a_{m_j^λ} in the series η_j(x) = Σ_{k≥m_j^λ} a_k (x − λ)^k whenever m_j^λ > 0, i.e., whenever λ is an eigenvalue of H(x) of partial multiplicity m_j^λ. We denote these signs by ε_j^λ. Now consider the perturbed Hermitian matrix function Ĥ(x) = H(x) + ζE(x), and let q(z) = det(H(x) + ζE(x) − zI).
By Theorem 4.3, we know that we can label n jointly continuous functions f_j(x, ζ) such that for any (x, ζ) in Ω × R they are the roots of q(z), i.e., the eigenvalues of Ĥ(x, ζ). Rellich's Theorem 1.4 and the uniqueness of the set of eigenvalues of a square matrix guarantee the following fact.
For any arbitrarily small ζ > 0, we see that neither f_1(x, ζ) nor f_2(x, ζ) has a root in a neighborhood of x = 0. Moreover, f_2(x, ζ) has a root of multiplicity 2 at x = 1. Therefore, the sum of the sign features is not locally preserved at 0.

Remark 4.11. One may wonder whether the sum of the sign features associated with each partial multiplicity is locally preserved. The answer is clearly negative, as is illustrated by the Hermitian matrix function [0 x; x 0], which has partial multiplicities 1, 1 at the eigenvalue 0, with sign features 1 and −1. We can perturb it to [ζ x; x 0], which for any ζ > 0 has partial multiplicity 2 at the eigenvalue 0, with sign feature 0.
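The matrix function of Remark 4.11 can be examined numerically (a sketch; the eigenvalue branches of the 2 × 2 matrix [ζ x; x 0] are written in closed form from its characteristic polynomial). Unperturbed (ζ = 0), the two branches ±x each have a simple zero at x = 0, with sign features +1 and −1; after the perturbation, the lower branch λ(x) = (ζ − √(ζ² + 4x²))/2 still vanishes at 0, but with a double zero:

```python
import numpy as np

def lower_branch(x, zeta):
    """Smaller eigenvalue of the Hermitian matrix [[zeta, x], [x, 0]]."""
    return (zeta - np.sqrt(zeta**2 + 4 * x**2)) / 2

zeta = 0.25  # chosen so that sqrt(zeta**2) is exact in floating point

# The branch still vanishes at x = 0 ...
assert lower_branch(0.0, zeta) == 0.0
# ... but now with a double zero: the branch is even in x, so its derivative
# at 0 vanishes. The two simple zeros (sign features +1 and -1) have merged
# into one zero of partial multiplicity 2, with sign feature 0.
h = 1e-6
slope = (lower_branch(h, zeta) - lower_branch(-h, zeta)) / (2 * h)
assert abs(slope) < 1e-9
```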
Finally, the subtleties described in Section 4.1 are key in arguing that, in a sense made precise by Theorem 4.12, the geometric multiplicity of an eigenvalue cannot locally increase under a small perturbation. Note that Theorem 4.12 is based on Theorem 4.3, and hence holds more generally for matrices (not necessarily Hermitian) depending continuously on a parameter. However, for simplicity we state it only for the special case that we need. Hence, by Remark 4.8, we deduce that there exists δ > 0 such that, for all x, ζ satisfying (x − λ)² + ζ² < δ², and for all j > ℓ, there is a permutation σ yielding |f_{σ(j)}(x, ζ)| > ρ/2.
It follows that for any ζ < δ there is an interval J(ζ) containing λ with the property that at most ℓ eigenvalue functions of H(x) + ζE(x) can have roots in the interval J(ζ).
We stress that the results in this section imply that, at a finite real point, a set of real eigenvalues can be removed from the real line by a perturbation if and only if the sum of their sign features is 0. This observation will be important in the next subsection.

Perturbation theory of infinite eigenvalues for regular Hermitian matrix polynomials
In this section we discuss the local invariants at infinity for a regular Hermitian matrix polynomial P (x). Assume that the perturbation E(x) is also polynomial. Note that in this situation the most natural choice for the grade might not be deg P (x), but max(deg P (x), deg E(x)), see also [28]. By Theorem 4.7, we know that, for any λ ∈ R, there exists an interval J containing λ such that, for ζ small enough, the sum of the sign features for all eigenvalues of the perturbed polynomial P (x) + ζE(x) that lie in J is equal to the sum of the sign features over all the partial multiplicities of λ seen as an eigenvalue of P (x).
To simplify expressions, we rephrase this property as: the sum of sign features is locally preserved on R. The question is whether we can extend this statement to a neighborhood of ∞, or at least whether we can find another local invariant at infinity.
If the grade is even, then this is straightforward. By Theorem 4.7 applied to S(x) = −rev_g P(x), the sign features of S(x) at 0 are locally preserved. But by Theorems 2.4 and 2.5, the sign features of small eigenvalues of a perturbed S(x) − ζ rev_g E(x) are precisely the same as those of large eigenvalues of P(x) + ζE(x). Hence, the sum of the sign features of P(x) is preserved in a neighborhood of infinity, i.e., in (−∞, −M) ∪ (M, ∞) for sufficiently large M > 0, and thus we have the following theorem.

On the other hand, if the grade is odd, a local conservation of the sign features is hopeless. Indeed, going to the reversal, what must be locally preserved is the sum of the sign characteristics (or sign features) associated with the odd partial multiplicities of S(x). In particular, for the eigenvalue zero of S(x) and the eigenvalue ∞ of P(x), the sign features corresponding to the former are associated with odd partial multiplicities, whereas those corresponding to the latter are associated with even partial multiplicities: they are totally different. Moreover, the mapping laws of the sign characteristics prescribed in Theorems 2.4 and 2.5 depend on which neighborhood of infinity (left or right) one considers. The only way to express a local conservation rule in a neighborhood of infinity is to go back to the sign features of the reversal S(x). Unfortunately, this does not yield a statement as nice as in the case of even grade.
where, in an abuse of notation, ε̂_i^∞ denotes the sign characteristics of the eigenvalues of the perturbed problem that stay at ∞, and m̂_i^∞ is the corresponding partial multiplicity.
Proof. By definition, the sign characteristics at infinity of P(x) are those of S(y) = −y^g P(1/y) at 0. Applying Theorems 2.4 and 2.5, we see that for an odd partial multiplicity m the sign characteristics of P(x) at a large λ are equal to (resp. opposite to) those of S(y) at a small λ^{−1} if and only if λ > 0 (resp. λ < 0). Applying Theorem 4.7 to S(y) and the appropriate neighborhood of 0 (which is mapped to a neighborhood of infinity for P(x)), and recalling that the sign features coincide with the sign characteristics for the odd partial multiplicities (and are 0 for the even partial multiplicities), the statement follows.
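At the scalar level, the grade-g reversal used throughout can be sketched as follows (a hypothetical helper, ascending coefficient convention): rev_g p(x) = x^g p(1/x) simply reverses the coefficient vector after padding it to length g + 1, so S(y) = −y^g p(1/y) exposes the eigenvalues at infinity of p as zeros of S at 0. For p(x) = x with grade g = 3 (the situation of Example 4.17), S(y) = −y², with a double zero at 0 matching the double infinite eigenvalue:

```python
def minus_revg(coeffs, g):
    """Ascending coefficients of S(y) = -y**g * p(1/y), for p given by `coeffs`."""
    padded = list(coeffs) + [0] * (g + 1 - len(coeffs))  # pad p up to grade g
    return [-c for c in reversed(padded)]

# p(x) = x with grade 3: S(y) = -y**2.
S = minus_revg([0, 1], 3)
assert S == [0, 0, -1, 0]
# The two lowest-order coefficients vanish: S has a zero of multiplicity 2
# at y = 0, i.e., p has a double eigenvalue at infinity when its grade is 3.
assert S[0] == 0 and S[1] == 0 and S[2] != 0
```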
The presented analysis shows that our definition of the sign features, which leads to the global constraint of Theorem 3.4, fits well with the local conservation rule at infinity if and only if the grade is even. When the grade is odd, things are more complicated. This is not a defect of our definition, but a necessary consequence of the fact that, for odd grade, the signature of the leading matrix coefficient is involved in the signature constraint theorem. This makes it impossible to obtain a definition that works well both globally and locally.
There are two possible ways out of this global/local dichotomy for odd grade Hermitian matrix polynomials. Either one always forces the grade to be even by adding another zero coefficient, at the price of allowing a larger set of perturbations (including perturbations of the zero leading matrix coefficient), or one uses Theorem 4.14, at the price of a much less elegant and more complicated rule. We give a few examples to illustrate these facts.

Example 4.15. Consider a Hermitian matrix polynomial P(x) of grade 3 with the following eigenstructure: P(x) has an eigenvalue 0 of partial multiplicity 3 with sign feature 1, an eigenvalue 0 of partial multiplicity 1 with sign feature 1, an eigenvalue ∞ of partial multiplicity 3 with sign feature 0 and sign characteristic −1, and an eigenvalue ∞ of partial multiplicity 2 with sign feature −1. As shown in Theorem 3.4, the global sum of the sign features is 1. However, an arbitrarily small perturbation can change the signature of the leading matrix coefficient. Suppose that there were a finite open cover of the compactification of the real line such that in each open subset of the cover there is a local conservation rule for the sum of the sign features. This would violate Theorem 3.4: to see this, take a perturbation that changes the signature of the leading matrix coefficient. Hence, there cannot be such an open cover. On the other hand, by Theorem 4.7, the sum of the sign features is locally preserved on all of R. Therefore, there must be a possible exception at infinity, i.e., there cannot be any open neighborhood of infinity that allows for a local conservation law of the sign features. This is illustrated by the perturbed polynomial P̂(x). Note that neither the partial multiplicities nor the sign features of the zero eigenvalue are changed by this particular perturbation. However, for any ζ > 0, there exist two real eigenvalues 1/ζ, each of partial multiplicity 1 with sign feature −1, and a real eigenvalue −1/ζ of partial multiplicity 1 with sign feature −1.
The global sum of the sign features is now −1, as expected, since the signature of the leading matrix coefficient has changed. Yet, no matter how small ζ > 0 is, the sum of the sign features in a neighborhood of infinity is −3 ≠ −1. Note that this example is coherent with Theorem 4.14, since −1/ζ < 0, and hence we must multiply its sign characteristic by −1 in the summation in the statement of Theorem 4.14.
If we had picked an even grade, say 4, for P(x) and P̂(x), then we would have a local conservation law of the sign features at infinity, as predicted by Theorem 4.13. Indeed, with this choice of the grade, the sum of the sign features at infinity for P(x) is −2, whereas P̂(x) has three extra simple eigenvalues at infinity, with sign features −1, 1, and 1, so that in a neighborhood of infinity the sum is still −2.
Example 4.16. Let p(x) = 1 have grade 1, i.e., it has a simple infinite eigenvalue with sign feature 0 and sign characteristic −1. Then any perturbation p̂(x) = 1 + ζ_0 + ζ_1 x must have a real eigenvalue. Note that the product of the sign of the perturbed eigenvalue and its sign characteristic must be −1, coherently with Theorem 4.14.
Suppose now that we take the grade to be 2; then p(x) has a double infinite eigenvalue with sign feature 0. It can be removed from the compactification of the real line by a degree 2 perturbation such as p̂(x) = 1 + ζx², ζ > 0. The reason why a degree 1, but grade 2, perturbation cannot remove it is that such a perturbation must still have a simple infinite eigenvalue; hence a complex conjugate pair of eigenvalues cannot be produced, i.e., it must also have another large real eigenvalue, of opposite sign feature.

Example 4.17. Let p(x) = x have grade 3; then it has a double infinite eigenvalue with sign feature −1 and a simple zero eigenvalue with sign feature 1. However, the perturbation p̂(x) = x + ζx³ (ζ > 0) has only one real eigenvalue, at 0, and the double infinite eigenvalue has been removed from the compactification of the real line, in spite of having a nonzero sign feature, but coherently with Theorem 4.14.
Taking the grade to be 4, there was originally a triple infinite eigenvalue with sign feature −1. In this case it is impossible to remove all three eigenvalues (counting multiplicity), although of course we may remove two of them while still locally preserving the sum of the sign features: this is precisely what happens with p̂(x).
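The root behavior in Examples 4.16 and 4.17 can be confirmed numerically (a quick sketch with ζ = 0.01, using numpy's root finder):

```python
import numpy as np

zeta = 0.01

# Example 4.16, grade 1: any degree 1 perturbation of p(x) = 1
# acquires a real eigenvalue (here at -1/zeta).
r = np.roots([zeta, 1])  # descending coefficients of zeta*x + 1
assert np.isreal(r).all()

# Example 4.16, grade 2: 1 + zeta*x**2 has no real roots, so the double
# infinite eigenvalue (sign feature 0) leaves the compactified real line.
r = np.roots([zeta, 0, 1])
assert not np.isreal(r).any()

# Example 4.17, grade 3: x + zeta*x**3 = x*(1 + zeta*x**2) keeps only the
# simple real root at 0; the double infinite eigenvalue leaves the real line.
r = np.roots([zeta, 0, 1, 0])
real_roots = r[np.abs(r.imag) < 1e-12].real
assert len(real_roots) == 1 and abs(real_roots[0]) < 1e-12
```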

Coalescence of simple real eigenvalues
An application of the discussed theory is the analysis of two nearby simple eigenvalues colliding at a point, in particular the question whether they can be removed from the compactification of the real line. Let us analyze the situation when the point is infinity. When the grade is even, infinity is not special at all, so the usual rule applies: the eigenvalues can be removed if and only if the sum of their sign features, or, equivalently in this case, of their sign characteristics, is 0, as prescribed by Theorem 4.13. For odd grade, we can apply the more complicated Theorem 4.14 to obtain the following cases.
• If both eigenvalues are finite, large, and of the same sign, then they can be removed if and only if the sum of their sign characteristics is 0;
• If both eigenvalues are finite and large, one positive and the other negative, then they can be removed if and only if the sum of their sign characteristics is nonzero, i.e., either 2 or −2;
• If one eigenvalue is infinite and the other is finite, large, and positive, then they can be removed if and only if the sum of their sign characteristics is 0;
• If one eigenvalue is infinite and the other is finite, large, and negative, then they can be removed if and only if the sum of their sign characteristics is nonzero, i.e., ±2.
Note that in this case it is the sign characteristics at infinity, and not the sign features, that determine what happens. This is because with odd grade there is no local conservation of the sign features at infinity, and hence one is forced to go to the reversal, where the sign features at zero correspond to the sign characteristics at zero.
Once again, the conclusion is that giving a simple local conservation law at infinity is not possible. One must either always see things as even grade, or alternatively, rely heavily on Theorems 2.4 and 2.5.

Conclusions
We have studied a systematic extension of the definition of the sign characteristic for Hermitian matrix polynomials to the eigenvalue ∞. The goal was to achieve a concept that is uniform with the one for finite eigenvalues and that remains valid under small perturbations. For matrix polynomials of even grade (degree) we have realized this goal, while for odd grade we have argued that the task does not seem possible, unless one resorts to increasing the grade to an even number. We have studied the change of sign characteristics under analytic reparametrizations and multiplication by scalar functions, and we have proved a signature constraint theorem and studied the invariance of this result under perturbations.