Reducibility of self-adjoint linear relations and application to generalized Nevanlinna functions

Necessary and sufficient conditions for reducibility of a self-adjoint linear relation in a Krein space are given. Then a generalized Nevanlinna function $Q$, represented by a self-adjoint linear relation $A$, is decomposed by means of the reducing subspaces of $A$. The sum of two functions $Q_i \in N_{\kappa_i}(\mathcal{H})$, $i = 1, 2$, minimally represented by the triplets $(\mathcal{K}_i, A_i, \Gamma_i)$, is also studied. For that purpose, a model $(\tilde{\mathcal{K}}, \tilde{A}, \tilde{\Gamma})$ representing $Q := Q_1 + Q_2$ in terms of $(\mathcal{K}_i, A_i, \Gamma_i)$ is created. By means of that model, necessary and sufficient conditions for $\kappa = \kappa_1 + \kappa_2$ are proven in analytic terms. Finally, it is explained how degenerate Jordan chains of the representing relation $A$ affect the reducing subspaces of $A$ and the decomposition of the corresponding function $Q$.

the Nevanlinna kernel $N_Q(z, w) := \dfrac{Q(z) - Q(w)^*}{z - \bar{w}}$, $z, w \in \mathcal{D}(Q) \cap \mathbb{C}^+$, has $\kappa$ negative squares. In other words, for arbitrary $n \in \mathbb{N}$, $z_1, \dots, z_n \in \mathcal{D}(Q) \cap \mathbb{C}^+$ and $h_1, \dots, h_n \in \mathcal{H}$, the Hermitian matrix $\left( (N_Q(z_i, z_j) h_i, h_j) \right)_{i,j=1}^{n}$ has at most $\kappa$ negative eigenvalues, and for at least one choice of $n$, $z_1, \dots, z_n$, and $h_1, \dots, h_n$, it has exactly $\kappa$ negative eigenvalues.
It is easy to verify that the Nevanlinna kernel is a Hermitian kernel, i.e. $N_Q(z, w)^* = N_Q(w, z)$, $z, w \in \mathcal{D}(Q) \cap \mathbb{C}^+$.
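With the kernel written in its standard form, the Hermitian symmetry is a one-line computation:

```latex
N_Q(z,w)^{*}
\;=\; \left(\frac{Q(z)-Q(w)^{*}}{z-\bar{w}}\right)^{\!*}
\;=\; \frac{Q(z)^{*}-Q(w)}{\bar{z}-w}
\;=\; \frac{Q(w)-Q(z)^{*}}{w-\bar{z}}
\;=\; N_Q(w,z).
```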
The following definitions of a linear relation and the basic concepts related to it can be found, for example, in [1,4,18]. In the sequel, $\mathcal{H}$, $\mathcal{K}$, $\mathcal{M}$ are inner product spaces. Recall that a set $M$ is called a linear manifold (or linear space) if for any two vectors $x, y \in M$ and any two scalars $\alpha, \beta \in \mathbb{C}$ it holds that $\alpha x + \beta y \in M$. A linear relation from $\mathcal{H}$ into $\mathcal{K}$ is a linear manifold $T$ of the product space $\mathcal{H} \times \mathcal{K}$. If $\mathcal{H} = \mathcal{K}$, $T$ is said to be a linear relation in $\mathcal{K}$. We will use the following concepts and notations for linear relations $T$ and $S$ from $\mathcal{H}$ into $\mathcal{K}$ and a linear relation $R$ from $\mathcal{K}$ into $\mathcal{M}$. Note that in the definition of the adjoint linear relation $T^+$ we use the following notation for the inner product spaces: $(\mathcal{H}, (.,.))$ and $(\mathcal{K}, [.,.])$.
If $\operatorname{mul} T := T(0) = \{0\}$, we say that $T$ is an operator, or a single-valued linear relation. A linear relation is closed if it is a closed subset of the product space $\mathcal{H} \times \mathcal{K}$.
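As a simple illustration (an example of ours, chosen for concreteness), consider in $\mathbb{C}^2$ the linear manifold

```latex
T \;=\; \operatorname{l.s.}\bigl\{\, \{(1,0),(0,0)\},\; \{(0,0),(0,1)\} \,\bigr\}
\;\subseteq\; \mathbb{C}^{2}\times\mathbb{C}^{2}.
```

This $T$ is a linear relation with $\operatorname{mul} T = T(0) = \operatorname{l.s.}\{(0,1)\} \neq \{0\}$, hence it is multivalued and is not (the graph of) an operator.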
Let $A$ be a linear relation in $\mathcal{K}$. We say that $A$ is symmetric (self-adjoint) if it holds that $A \subseteq A^+$ ($A = A^+$). Every point $\alpha \in \mathbb{C}$ for which $\{f, \alpha f\} \in A$, with some $f \neq 0$, is called a finite eigenvalue. The corresponding vectors $f$ are eigenvectors belonging to the eigenvalue $\alpha$. The set consisting of all points $z \in \mathbb{C}$ for which the relation $(A - zI)^{-1}$ is an operator defined on the entire space $\mathcal{K}$ is called the resolvent set $\rho(A)$.
Let $\kappa \in \mathbb{N} \cup \{0\}$ and let $(\mathcal{K}, [.,.])$ denote a Krein space. That is a complex vector space on which a scalar product, i.e. a Hermitian sesquilinear form $[.,.]$, is defined such that the following fundamental decomposition holds: $\mathcal{K} = \mathcal{K}^+ [\dotplus] \mathcal{K}^-$, where $(\mathcal{K}^+, [.,.])$ and $(\mathcal{K}^-, -[.,.])$ are Hilbert spaces. If the scalar product $[.,.]$ has $\kappa \, (< \infty)$ negative squares, then we call $\mathcal{K}$ a Pontryagin space of index $\kappa$. The definition of a Pontryagin space and other concepts related to it can be found e.g. in [11].
The following construction of a Pontryagin space can be found in [10,12,9], and a similar construction of a Hilbert space can be found in [14]: For any generalized Nevanlinna function $Q$, a linear space $L(Q)$ with a (possibly degenerate) indefinite inner product $[.,.]$ can be introduced as follows. Consider the set of all finite formal sums $\sum_z \varepsilon_z h_z$, where $h_z \in \mathcal{H}$ and $\varepsilon_z$ is a symbol associated with each $z \in \mathcal{D}(Q)$. Then an inner product is defined by means of the Nevanlinna kernel, $[\varepsilon_z h_z, \varepsilon_{z'} h_{z'}] := (N_Q(z', z) h_z, h_{z'})$, with the kernel extended to coinciding points by means of $Q'(z)$, $z, z' \in \mathcal{D}(Q)$.
To ease communication, let us call $L(Q)$ the state manifold of $Q$. The linear relation $A_0$, defined on $L(Q)$ by its action on the symbols $\varepsilon_z$, is symmetric. For $z_0 \in \mathcal{D}(Q)$, the operator $\Gamma_{z_0} : \mathcal{H} \to L(Q)$ is defined by $\Gamma_{z_0} h = \varepsilon_{z_0} h$. The Pontryagin space $\mathcal{K}$ is obtained by factorization of $L(Q)$ with respect to its isotropic part $L_0^0 := L(Q) \cap L(Q)^{[\perp]}$ and by completion of the factor space. It is called the state space of $Q$. In the process, $A_0$ and $\Gamma_{z_0}$ give rise to the self-adjoint relation $A$ in $\mathcal{K}$ and the bounded linear operator $\Gamma : \mathcal{H} \to \mathcal{K}$, with $z_0 \in \rho(A)$. Then the following theorem holds.
Here $A$ is a self-adjoint linear relation in some Pontryagin space $(\mathcal{K}, [.,.])$ of index $\tilde{\kappa} \geq \kappa$, and $\Gamma : \mathcal{H} \to \mathcal{K}$ is a bounded operator. (Obviously $\rho(A) \subseteq \mathcal{D}(Q)$.) This representation can be chosen to be minimal. If realization (1.1) is minimal, then $Q \in N_\kappa(\mathcal{H})$ if and only if $\tilde{\kappa}$ equals $\kappa$. In the case of a minimal representation, $\rho(A) = \mathcal{D}(Q)$ and the triplet $(\mathcal{K}, A, \Gamma)$ is uniquely determined (up to isomorphism).
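In the form commonly used in this theory (see e.g. [9,10,12]; we state it here as a reference point, up to the sign conventions of those papers), representation (1.1) and the minimality condition read:

```latex
Q(z) \;=\; Q(z_{0})^{*} \,+\, (z-\bar{z}_{0})\,
\Gamma^{+}\bigl(I+(z-z_{0})(A-z)^{-1}\bigr)\Gamma,
\qquad z \in \rho(A),
```

```latex
\mathcal{K} \;=\; \operatorname{c.l.s.}\bigl\{\, \Gamma_{z}h \;:\; z\in\rho(A),\; h\in\mathcal{H} \,\bigr\},
\qquad
\Gamma_{z} := \bigl(I+(z-z_{0})(A-z)^{-1}\bigr)\Gamma.
```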
Such operator representations were developed by M. G. Krein and H. Langer [12,13] and later converted to representations in terms of linear relations (see e.g. [9,10]).
In this paper, a point α ∈ C is called a generalized pole of Q if it is an eigenvalue of the representing relation A. It may be an isolated singularity, i.e. an ordinary pole, as well as an embedded singularity of Q. The latter may be the case only if α ∈ R.

Introduction
We start Section 2 by extending the definition of reducibility of operators in Hilbert spaces to reducibility of linear relations in Krein spaces. Then, in Lemma 2.3, we prove several statements about decompositions, i.e. about the relation matrix, of a linear relation in a Krein space $\mathcal{K}$ that we need in the proof of the main result, Theorem 2.4. In that theorem we give necessary and sufficient conditions for a self-adjoint linear relation $A$ in $\mathcal{K}$ to be reduced to the sum $A = A_1 [+] A_2$, where "$[+]$" denotes the direct and orthogonal sum of linear relations, $A_i$ are self-adjoint linear relations in the reducing subspaces $\mathcal{K}_i$, and $\mathcal{K} = \mathcal{K}_1 [\dotplus] \mathcal{K}_2$. Then, by means of reducing subspaces and reducing linear relations, we study decompositions of a generalized Nevanlinna function $Q$.
The number of negative squares $\kappa \in \mathbb{N} \cup \{0\}$ is an important feature of a generalized Nevanlinna function. If $Q_i \in N_{\kappa_i}(\mathcal{H})$, $i = 1, 2$, then the sum $Q := Q_1 + Q_2$ belongs to some generalized Nevanlinna class $N_\kappa(\mathcal{H})$ and $\kappa \leq \kappa_1 + \kappa_2$ holds. There are two basic questions: (a) how can a given function $Q$ be decomposed into a sum of generalized Nevanlinna functions, and (b) given $Q_i \in N_{\kappa_i}(\mathcal{H})$, $i = 1, 2$, is the number of negative squares preserved in the sum $Q = Q_1 + Q_2$ or not?
In other words, we investigate the circumstances under which functions $Q$, $Q_1$ and $Q_2$ that satisfy (i) and (ii) also satisfy $\kappa = \kappa_1 + \kappa_2$. The question of preservation of the number of negative squares of the sum of Hermitian kernels $K(z, w) = K_1(z, w) + K_2(z, w)$ was studied in [3]. The authors give necessary and sufficient conditions for $\kappa_1 + \kappa_2 = \kappa$ in terms of complementary reproducing kernel Pontryagin spaces $\mathcal{K}_1$, $\mathcal{K}_2$, cf. [3, Theorem 1.5.5]. We alternatively give necessary and sufficient conditions for $\kappa = \kappa_1 + \kappa_2$ in analytic terms. The question of preservation of the number of negative squares in products, sums, and some transformations of generalized Nevanlinna functions has been, among other topics, summarised in the survey [15]. In the present paper we prove analytic criteria that establish whether the sum of the indexes of the functions that comprise the sum is equal to, or greater than, the negative index of the sum.
It is very difficult to determine the negative index $\kappa$ of a given generalized Nevanlinna function. The established relation between the negative indexes in the above sum (ii) gives us information that might help in determining the negative indexes of the functions in the sum.
There are interesting results about decompositions of generalized Nevanlinna functions in [8,13], for matrix and scalar functions represented by unitary and self-adjoint operators. In those papers, the decompositions of $Q \in N_\kappa^{n \times n}$ were obtained by means of spectral families of the representing operators and their appropriate invariant spectral subspaces. The decomposing functions $Q_i$ obtained by that method must have disjoint sets of generalized poles (see [8, Proposition 3.1]). In the present article, we do not use spectral families and spectral subspaces; instead, we use the concept of reducing subspaces of the representing self-adjoint relation in the Pontryagin state space. That way we obtain decompositions where the decomposing functions $Q_i$, $i = 1, 2$, may have common generalized poles.
In Theorem 3.1, we give a general answer to question (a); we decompose the function $Q$ by means of the reducing subspaces $\mathcal{K}_i$ and reducing relations $A_i$ of the representing relation $A$.
Regarding sums of generalized Nevanlinna functions, in [8, Proposition 3.2] it has been proven that the sum of two generalized Nevanlinna matrix functions preserves the number of negative squares under the condition that the functions in the sum have disjoint sets of generalized poles. In our study we do not use that condition.
We start the study of the sum $Q := Q_1 + Q_2$ with two functions $Q_i \in N_{\kappa_i}(\mathcal{H})$, $i = 1, 2$, represented minimally in Pontryagin spaces by the triplets $(\mathcal{K}_i, A_i, \Gamma_i)$, $i = 1, 2$. Then we create the Pontryagin space $\tilde{\mathcal{K}} := \mathcal{K}_1 [\dotplus] \mathcal{K}_2$ and a representation of the function $Q := Q_1 + Q_2$ in terms of the triplets $(\mathcal{K}_i, A_i, \Gamma_i)$. That representation, denoted by (3.3) in the text, we call the orthogonal sum representation. Then, in Theorem 3.2, we describe the structure of the possibly non-minimal state space $\tilde{\mathcal{K}} := \mathcal{K}_1 [\dotplus] \mathcal{K}_2$ representing the sum $Q = Q_1 + Q_2$. In Corollary 3.4, we give necessary and sufficient conditions for $\kappa = \kappa_1 + \kappa_2$ in terms of the inner structure of the state space $\tilde{\mathcal{K}}$.
In Theorems 4.5 and 4.6 we prove some analytic criteria for $\kappa = \kappa_1 + \kappa_2$ or $\kappa < \kappa_1 + \kappa_2$. These criteria are easy to use; we do not need to know the operator representations of the functions comprising the sum. Given how impractical Definition 1.1 is to use and how difficult it is to find operator representations, our criteria are a useful tool for studying both the underlying state space and the features of the sum $Q := Q_1 + Q_2$.
In Proposition 5.1, we decompose a function $Q$ by means of Theorem 3.1, using linear spans of non-degenerate Jordan chains as reducing subspaces. Proposition 5.1 is a straightforward result that we needed in order to approach the more complicated case of degenerate chains, which we study in Proposition 5.2. In Proposition 5.2 we consider the model where the self-adjoint operator $A$ in a Pontryagin space $\mathcal{K}$ has two simple, independent, and degenerate chains (neutral eigenvectors) at $\alpha \in \mathbb{R}$. We prove that, unlike the non-degenerate chains studied in Proposition 5.1, the two degenerate chains at $\alpha \in \mathbb{R}$ cannot reduce the representing operator and cannot induce two different functions $Q_i$ in any decomposition of $Q$. The conclusion of Section 5 is given in Corollary 5.3.

2 Reducing subspaces of the self-adjoint linear relation in the Krein space

In the sequel "$[+]$", rather than "$[\dotplus]$", denotes the direct and orthogonal sum of both relations and vectors. From the context it is usually clear whether we deal with "operator-like" addition of linear relations, with addition of relations as subspaces, or with addition of vectors. If necessary, we will specify.
Lemma 2.1 Assume that $\mathcal{K}_1$ and $\mathcal{K}_2$ are Krein spaces and $A_l \subseteq \mathcal{K}_l^2$, $l = 1, 2$, are linear relations. We can define the direct orthogonal sum

$$A_1 [+] A_2 := \bigl\{ \{f_1 + f_2,\, g_1 + g_2\} : \{f_l, g_l\} \in A_l,\ l = 1, 2 \bigr\} \subseteq \mathcal{K}^2, \qquad \mathcal{K} := \mathcal{K}_1 [\dotplus] \mathcal{K}_2.$$

The linear relation $A := A_1 [+] A_2$ is symmetric (self-adjoint) in $\mathcal{K}$ if and only if the linear relations $A_l \subseteq \mathcal{K}_l^2$, $l = 1, 2$, are symmetric (self-adjoint).
Proof: This lemma is a straightforward verification and is left to the reader.

Let $A$ be a linear relation in the Krein space $\mathcal{K}$, where the nontrivial subspaces $\mathcal{K}_l$ are also Krein spaces and $E_l : \mathcal{K} \to \mathcal{K}_l$, $l = 1, 2$, are the corresponding orthogonal projections. Four linear relations $A_i^j$, $i, j = 1, 2$, can be introduced. In this notation the subscript "$i$" is associated with the domain subspace $\mathcal{K}_i$ and the superscript "$j$" is associated with the range subspace $\mathcal{K}_j$; for example, $\{h_1, h_1^2\} \in A_1^2$. Let us now extend the definition of the reducing subspaces of an unbounded operator in a Hilbert space, see e.g. [2, Section 40], to the reducing subspaces of a (multivalued) linear relation in a Krein space.

Definition 2.2 Let $(\mathcal{K}, [.,.])$ be a Krein space, let $\mathcal{K}_1 \subset \mathcal{K}$ be a nontrivial Krein subspace of $\mathcal{K}$, and let $\mathcal{K}_2 = \mathcal{K} [-] \mathcal{K}_1$. We will say that the subspaces $\mathcal{K}_1$ and $\mathcal{K}_2$ reduce the relation $A$ if there exist linear relations $A_i \subseteq \mathcal{K}_i \times \mathcal{K}_i$, $i = 1, 2$, such that $A = A_1 [+] A_2$, where $[+]$ stands for the direct orthogonal addition of relations, as defined in Lemma 2.1. The relations $A_i$ are called reducing relations of $A$.
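One natural way to write the four relations, consistent with the indexing convention just described and with the example $\{h_1, h_1^2\} \in A_1^2$ (we state a plausible form; the defining display does not survive in this text), is:

```latex
A_i^j \;:=\; \bigl\{\, \{h,\, E_j k\} \;:\; \{h, k\} \in A,\ h \in \mathcal{K}_i \,\bigr\}
\;\subseteq\; \mathcal{K}_i \times \mathcal{K}_j,
\qquad i, j \in \{1, 2\},
```

i.e. the domain is restricted to $\mathcal{K}_i$ and the range is projected onto $\mathcal{K}_j$.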
Recall that if $\mathcal{K}$ is a Pontryagin space and $\mathcal{K}_1$ is a non-degenerate closed subspace, then $\mathcal{K} = \mathcal{K}_1 [+] \mathcal{K}_2$ and both $\mathcal{K}_i$, $i = 1, 2$, are also Pontryagin spaces; see [11, Theorem 3.2 and Corollary 2].
where "$\hat{+}$" stands for operator-like addition, and "$+$" stands for addition of the subspaces, not necessarily direct.
(iii) If $B : \mathcal{K}_i \to \mathcal{K}_j$, $i, j = 1, 2$, is a closed relation, then $B = B^{[*][*]}$. (2.1)

The first two statements of the lemma follow directly from the definition of the relations $A_i^j$.

(iii) If $B$ is a linear relation in a Krein space, not necessarily closed, then the inclusion $B \subseteq B^{[*][*]}$ holds. To prove the converse inclusion ($\supseteq$) we need the assumption that $B$ is closed. Then the converse inclusion holds too, which completes the proof of (2.1).
(iv) Let us here clarify the notation that we will frequently use in this lemma and in the next theorem. In the sequel we will frequently write simply $+$ rather than $[+]$ for addition of vectors, because the notation of the vectors in the particular sums indicates when the direct orthogonal sum applies.
Let us now assume that $A$ is a symmetric relation and let us, for example, verify the first of the claimed identities. Selecting an arbitrary element and using the symmetry of $A$, the defining equations of the relations $A_i^j$ yield the desired equality. This proves the identity.
By the same token the remaining identities hold.

(v) We will prove this statement for $i = 1$, $j = 2$. Hence, we assume that $\mathcal{D}(A) \cap \mathcal{K}_1$ is dense in $\mathcal{K}_1$. Applying formula (2.1) to the (closed) relation $B = A_1^{1[*]}$ yields the claim.

In the following theorem, we give necessary and sufficient conditions for a self-adjoint linear relation in a Krein space to be reduced in the sense of Definition 2.2. The important statement is (vii). Some of the other listed statements are merely important steps in the proof of statement (vii).
(ii) $A_2^2$ is a single-valued self-adjoint relation in $\mathcal{K}_2$.
(iii) $A_2^2$ and $A_2^1$ are densely defined operators in $\mathcal{K}_2$.
Proof: By the assumption on $h_2$, we will first verify the required equation for every element. This equation is obviously equivalent to one which holds according to the assumption, and the claim follows.
(iv) Because $A$ is self-adjoint and $A_1^1 \subseteq A$, the corresponding implications hold; they also hold for $A_1^1$, see the proof of (2.1). Because $A = A^{[*]}$ is closed, we can apply formula (2.1) to $A$. According to the assumption $A(\mathcal{K}_1 \cap \mathcal{D}(A)) \subseteq \mathcal{K}_1$, it holds that $A(0) = A_1^1(0)$ and, therefore, the "$\subseteq$" signs become "$=$" signs in the above line, which proves (iv).
(vi) Let us first prove ($\Rightarrow$). Assume that $A_1^{1[*]} = A_1^1$, and observe two arbitrary elements. Because $A$ is self-adjoint, and $A_1^1$ and $A_2^2$ are symmetric, the corresponding equation reduces as in Lemma 2.3. We will first prove the required equality. According to our assumption, it holds that $[h_1, k_1^1] = [h_1^1, k_1]$. It remains to prove $0 = [h_2^1, k_1]$, which holds according to our assumption and (iv).
Hence, (2.2) is satisfied. According to (vi), the claimed decomposition then holds for an arbitrarily selected element of $A = A_1^1 [+] A_2^2$. Conversely, from Lemma 2.1 it follows that the relations $A_i^i$ are self-adjoint in the corresponding spaces $\mathcal{K}_i$.

(viii) This statement also follows from (2.1).

Direct sum representation of generalized Nevanlinna functions
3.1 Let us assume that the functions $Q_i \in N_{\kappa_i}(\mathcal{H})$ are minimally represented by the triplets $(\mathcal{K}_i, A_i, \Gamma_i)$. We define the space $\tilde{\mathcal{K}}$ as the orthogonal direct sum $\tilde{\mathcal{K}} := \mathcal{K}_1 [\dotplus] \mathcal{K}_2$. (3.1) The scalar product in $\tilde{\mathcal{K}}$ is naturally defined componentwise. In this subsection we will create a minimal state space of $Q$ within $\tilde{\mathcal{K}}$ by means of the elements $\tilde{\Gamma}_z h$. First, we will find the state manifold $L(Q)$. We start with the linear space $L$ defined by (3.2) below; the closure of $L$ in $\tilde{\mathcal{K}}$ is denoted by $\bar{L}$. It is important to note that, in general, an indefinite scalar product $[.,.]$ may degenerate on the closure of a manifold even if it does not degenerate on the given manifold; see [11, p. 39]. Later we will prove that this is not the case with $L$ and $\bar{L}$; see Lemma 4.1.
We define the operator $\tilde{\Gamma} = \begin{pmatrix} \Gamma_1 \\ \Gamma_2 \end{pmatrix} : \mathcal{H} \to \tilde{\mathcal{K}}$. Therefore $\tilde{\Gamma}^+ = (\Gamma_1^+ \ \ \Gamma_2^+) : \tilde{\mathcal{K}} \to \mathcal{H}$, where we consider $\Gamma_l^+$, $l = 1, 2$, to be extended to the whole space $\tilde{\mathcal{K}}$. Let the functions $Q_i$ again be minimally represented by (1.1). For the function $Q := Q_1 + Q_2$, consider the corresponding orthogonal sum representation (3.3). Note that $\tilde{\Gamma}_z$ in (3.3) is defined only when $\Gamma_{1z}$ and $\Gamma_{2z}$ simultaneously map the same vector $h \in \mathcal{H}$ into $\tilde{\mathcal{K}}$. That means that the manifold $L$ is the linear span of the vectors $\tilde{\Gamma}_z h = \Gamma_{1z} h [+] \Gamma_{2z} h$, $z \in \mathcal{D}(Q_1) \cap \mathcal{D}(Q_2)$, $h \in \mathcal{H}$, (3.2) where the resolvent of $\tilde{A} := A_1 [+] A_2$ acts componentwise on $\tilde{\mathcal{K}}$. Then it is easy to verify that for the function $Q = Q_1 + Q_2$ the analogous identities hold. According to these equations we can, as in [7], identify the building blocks of the state manifold $L(Q)$ with the building blocks of $L \subseteq \tilde{\mathcal{K}}$ defined by (3.2). In other words, it holds that $\varepsilon_z h = \tilde{\Gamma}_z h$ and $L = L(Q)$.

3.2
In Section 2 we proved that the relation $A$ can be reduced in the sense of Definition 2.2 if it satisfies the conditions of Theorem 2.4. In the following theorem we will describe the decomposition of $Q$ in terms of the reducing nontrivial subspaces $\mathcal{K}_i$ and reducing relations $A_i$, $i = 1, 2$, of the representing relation $A$ of $Q$.

(iii) In that case it holds that $\kappa_1 + \kappa_2 = \kappa$.
Proof: (i) We know that the negative index of the minimal state space $\mathcal{K}$ is equal to $\kappa$, the negative index of $Q$. Let $\mathcal{K}_1$ and $\mathcal{K}_2$ be nontrivial non-degenerate subspaces that reduce the representing relation $A$.
Because $A$ is a self-adjoint relation, according to Lemma 2.1 the $A_i$ are also self-adjoint relations in $\mathcal{K}_i$. Let $E_i : \mathcal{K} \to \mathcal{K}_i$ be the orthogonal projections, and let $\Gamma_i := E_i \Gamma$, $i = 1, 2$. Then the corresponding decompositions (3.5) and (3.6) hold. The constant operators $Q_i(z_0)^*$ can be arbitrarily selected as long as $Q_1(z_0)^* + Q_2(z_0)^* = Q(z_0)^*$. Hence, the minimal representation (1.1) of $Q$ can be expressed as the orthogonal sum representation (3.3). This proves (c) and (d).
Because the $A_i$ are self-adjoint linear relations in the Pontryagin spaces $\mathcal{K}_i$, the functions (3.6) are generalized Nevanlinna functions. From (3.5) and from the minimality of representation (1.1), the minimality of representations (3.6) follows.
If we keep $y_2 = 0$, we can conclude that $Q_1$ is minimally represented by $(\mathcal{K}_1, A_1, \Gamma_1)$. By the same token, $Q_2$ is minimally represented by $(\mathcal{K}_2, A_2, \Gamma_2)$. This further means that the negative indexes of the functions $Q_i$ are equal to $\kappa_i$, the negative indexes of the spaces $\mathcal{K}_i$. Hence, $Q_i \in N_{\kappa_i}(\mathcal{H})$, $i = 1, 2$. This proves (b).
From the equation $\kappa_1 + \kappa_2 = \kappa$ established for the negative indexes of $\mathcal{K}_i$ and $\mathcal{K}$, we can now conclude that the same equation holds for the negative indexes of the functions $Q_i$ and $Q$. This proves (iii).
(ii) Assume now that conditions (b), (c) and (d) are satisfied, where $\tilde{A} := A_1 [+] A_2$ is the representing relation of $Q$. Then the subspaces $\mathcal{K}_i$ and relations $A_i$ satisfy the conditions of Definition 2.2, i.e. they are reducing subspaces and reducing relations of the representing relation $\tilde{A}$ in (3.3). Because the $A_i$ are self-adjoint relations, according to Lemma 2.1 the relation $\tilde{A}$ is also self-adjoint. According to assumption (d) and Theorem 1.2, the triplet $(\tilde{\mathcal{K}}, \tilde{A}, \tilde{\Gamma})$ is uniquely determined (up to isomorphism). Hence, representation (3.3) is of the form (1.1). This proves statement (a), which completes the proof of (ii).
If the conditions of Theorem 2.4 are satisfied, then $A_2$ is a densely defined (single-valued) self-adjoint operator in $\mathcal{K}_2$. In that case the function $Q_2$ has some nice features at infinity; see e.g. [ ].

For our purpose, we need to decompose $\tilde{\mathcal{K}}$ by means of $L^0$. Obviously, $L^0$ is finite-dimensional because it is the isotropic subspace of $\bar{L} \subseteq \tilde{\mathcal{K}}$. According to [11, Theorem 3.3 and Theorem 3.4], the decompositions (3.7) hold, where $L_1$ and $L_2$ are non-degenerate subspaces and $F$ is a neutral subspace of $\tilde{\mathcal{K}}$, skewly linked to $L^0$. Then $\tilde{\kappa}_0 := \dim L^0$ is the negative index of the non-degenerate subspace $L^0 \dotplus F$. Let $\tilde{\kappa}_i$, $i = 1, 2$, denote the negative indexes of the subspaces $L_i$ in decomposition (3.7). $(\mathcal{K}, A, \Gamma)$ again denotes the triplet that minimally represents $Q = Q_1 + Q_2$. Then the subspace $L_1$ in decomposition (3.7) is unitarily equivalent to the minimal state space $\mathcal{K}$ of the function $Q = Q_1 + Q_2$. Therefore, $\mathcal{K}$ and $L_1$, including the corresponding scalar products, can be identified, i.e. $\mathcal{K} = L_1$ and $\tilde{\kappa}_1 = \kappa$.

In Subsection 3.1 we proved that we can consider $\varepsilon_z = \tilde{\Gamma}_z$, i.e. we can identify the manifold $L$ defined by (3.2) with the state manifold $L(Q)$, the starting manifold in the construction of the minimal state space $\mathcal{K}$ of the given function $Q$. Therefore, we can use the usual construction to obtain the minimal Pontryagin state space $\mathcal{K}$ of $Q$ by means of $\tilde{\Gamma}_z$ and $L$. Then we will prove that $\mathcal{K}$ is unitarily equivalent to $L_1$. Let us first prove that the minimal space $\mathcal{K}$ of $Q = Q_1 + Q_2$, which is equal to the completion of $L / L_0^0$, is also equal to the completion of $\bar{L} / L^0$. For that purpose, let us prove that the naturally defined mapping is an isometric bijection between $L / L_0^0$ and $\bar{L} / L^0$. It obviously holds that $L_0^0 \subseteq L^0$. In order to prove the converse implication, let us assume the contrary and derive a contradiction.

(iii) $L^0 = \{0\}$ is a necessary but not sufficient condition for $\kappa = \kappa_1 + \kappa_2$.
Proof: (i) Assume that $\tilde{\mathcal{K}}$ is a minimal state space of $Q$. According to the first equation of (3.7), it holds that $L_1 \subseteq \bar{L} \subseteq \tilde{\mathcal{K}}$. According to Theorem 3.2, $L_1$ is a minimal state space of $Q$. Therefore $L_1 = \bar{L} = \tilde{\mathcal{K}}$. Conversely, if $L_1 = \bar{L} = \tilde{\mathcal{K}}$ holds, then the minimality of $\tilde{\mathcal{K}}$ follows from Theorem 3.2, and $\kappa = \kappa_1 + \kappa_2$ follows from Theorem 3.1.
In Example 4.7 we will prove that there exists a case where $\tilde{\mathcal{K}} = L_1 [+] L_2$ and $L_2$ is positive.
In Example 4.8 we will show that there exists a case where $\tilde{\mathcal{K}} = L_1 [+] L_2$ and $L_2$ is a negative subspace. That is an example where $L^0 = \{0\}$ and $\kappa < \kappa_1 + \kappa_2$ hold.
4 Analytic criteria

4.1 In this section we will prove criteria that enable us to study the underlying state space and the negative index of the sum $Q := Q_1 + Q_2$ analytically, without knowing the operator representations of $Q$, $Q_1$, $Q_2$. In order to derive the equations in those criteria, we will have to use Definition 1.1 and the definitions of the scalar products in terms of formal sums; see Section 1.1.
Let us consider any function $Q \in N_\kappa(\mathcal{H})$. By definition, $\kappa$ is the maximal (finite) number of negative squares of the sesquilinear form $[.,.]$ defined by the sums

$$\sum_{i,j=1}^{n} (N_Q(z_i, z_j) h_i, h_j), \qquad (4.1)$$

where $z_l \in \mathcal{D}(Q)$, $h_l \in \mathcal{H}$, $l = 1, \dots, n$. In other words, $\kappa$ is the negative index of the state manifold $(L(Q), [.,.])$. According to Theorem 1.2, the negative index of the minimal state space $\mathcal{K}$ is also equal to $\kappa$.
Let us now focus on the sum $Q = Q_1 + Q_2$. Then sum (4.1) can be written as

$$\sum_{i,j=1}^{n} \bigl( (N_{Q_1}(z_i, z_j) + N_{Q_2}(z_i, z_j)) h_i, h_j \bigr),$$

where $z_l \in \mathcal{D}(Q_1) \cap \mathcal{D}(Q_2) =: \mathcal{D}(Q)$, $h_l \in \mathcal{H}$, $l = 1, \dots, n$. Such sums form a subset of the sums (4.2) below, which generate the inner product in $\tilde{\mathcal{K}} := \mathcal{K}_1 [+] \mathcal{K}_2$. Indeed, here $Q_1$ and $Q_2$ take the same domain points $z_l \in \mathcal{D}(Q)$, while in (4.2) $Q_1$ and $Q_2$ take domain points $z_l \in \mathcal{D}(Q_1)$ and $\zeta_l \in \mathcal{D}(Q_2)$ independently. This means that the space $\tilde{\mathcal{K}}$ created by means of the sums (4.2) may be larger than the state space $\mathcal{K}$, which is created by means of the sums (4.1).
Now we can prove the following lemma. Because the Pontryagin space $\tilde{\mathcal{K}}$ is complete, the closure $\bar{L} \subseteq \tilde{\mathcal{K}}$ is also complete. Then it holds that $L \subseteq \mathcal{K} \subseteq \bar{L}$.
As a Pontryagin space, $\mathcal{K}$ is non-degenerate. Because the completion $\mathcal{K}$ is a closed set in $\tilde{\mathcal{K}}$, and $\bar{L}$ is the smallest closed set which contains $L$, we conclude that $\mathcal{K} = \bar{L}$. Hence, $\bar{L}$ is non-degenerate. Then, according to (3.7), it holds that $\tilde{\mathcal{K}} = \mathcal{K} [+] L_2$.
4.2 By definition of $\tilde{\mathcal{K}}$, see (3.1), the negative index $\tilde{\kappa} := \kappa_1 + \kappa_2$ of $\tilde{\mathcal{K}}$ is equal to the maximal number of negative squares of the form defined by means of the sums

$$\sum_{l,m=1}^{n} \bigl( (N_{Q_1}(z_l, z_m) h_l, h_m) + (N_{Q_2}(\zeta_l, \zeta_m) f_l, f_m) \bigr), \qquad (4.2)$$

where $z_l \in \mathcal{D}(Q_1)$, $\zeta_l \in \mathcal{D}(Q_2)$, $h_l, f_l \in \mathcal{H}$, $l = 1, \dots, n$. Because the points $z_l, \zeta_l$ are arbitrarily selected in their domains, we can create the following sums out of (4.2).
where $w_l \in \mathcal{D}(Q)$, $z_l \in \mathcal{D}(Q_1)$, $\zeta_l \in \mathcal{D}(Q_2)$, and the second sum is created by vectors that satisfy the orthogonality condition below. Note that the first sum here is associated with $L$. The orthogonality condition for vectors from the second sum in (4.3) can be written in simplified notation as

$$[\Gamma_{1z} h_1 [+] \Gamma_{2\zeta} h_2,\ \tilde{\Gamma}_w g] = 0, \qquad \forall w \in \mathcal{D}(Q),\ \forall g \in \mathcal{H},$$

where $z \in \mathcal{D}(Q_1)$, $\zeta \in \mathcal{D}(Q_2)$, $h_i \in \mathcal{H}$, $i = 1, 2$. Because the scalar product $(.,.)$ in $\mathcal{H}$ is non-degenerate, this condition can be written as equation (4.4); see [9,10].

Assume, contrary to claim (i), that for some $w \in \mathcal{D}(A)$ it holds that $\Gamma_w h = 0$ while $\Gamma_z h \neq 0$. Then, according to [1, 2.11], it follows that $w \Gamma_w h \in A(\Gamma_w h)$, i.e. $w$ is an eigenvalue of $A$. This contradicts the fact that $w$ is a regular point of $A$. This proves $\ker \Gamma_z \subseteq \ker \Gamma_w$. The converse inclusion is obvious. This proves the first equation of (i). Because $\ker \Gamma_w$ is independent of $w \in \mathcal{D}(Q)$, we can introduce $\ker \Gamma := \ker \Gamma_w$, $w \in \mathcal{D}(Q)$. It is now obvious that claim (i) holds for any two points $z, w \in \mathcal{D}(Q)$. This completes the proof of (i).
The following statement is a criterion that identifies the zero-symbols $\varepsilon_z h = \Gamma_z h$, i.e. the symbols that do not play any role in the state manifold $L(Q)$.
It is easy to find regular matrix functions that satisfy (4.5), i.e. that have ker Γ = {0}.

Now we can classify the solutions of equation (4.4). According to Lemma 4.2, some solutions $\begin{pmatrix} h_1 \\ h_2 \end{pmatrix}$ of (4.4) correspond to vectors that do not exist in $\tilde{\mathcal{K}}$; let us call such solutions singular solutions. Therefore, we can exclude the singular solutions of (4.4) from the following considerations about the structure of $\tilde{\mathcal{K}}$ without loss of generality. Hence, in the following definitions we assume that we deal only with non-singular solutions. This is consistent with the standard assumption that the functions $\Gamma_z$ are injections.
Let us introduce the following expression:
For $z_1 = z_2 = z$ and $h_1 = h_2 = h$ we get an important special case of equation (4.4), namely equation (4.6). That is how a negative square is lost, i.e. how the negative index is reduced in the sum $Q_1 + Q_2$. In the underlying space $\tilde{\mathcal{K}}$ we then have the following. Assume that $(z; h)$ is a nontrivial (and non-singular) solution of (4.6). That means that there exists a nonzero vector $\tilde{\Gamma}_z h := \Gamma_{1z} h [+] \Gamma_{2z} h \in \tilde{\mathcal{K}}$. On the other hand, according to Corollary 4.3, for the symbol $\Gamma_z h$ corresponding to $Q(z)$ in the minimal state space $\mathcal{K}$ of $Q$ it holds that $\Gamma_z h = 0$. Hence, we learn that $(0 \neq)\, \tilde{\Gamma}_z h \in L_0^0 \subset \tilde{\mathcal{K}}$ corresponds to $(0 =)\, \Gamma_z h := \tilde{\Gamma}_z h + L_0^0 \in L / L_0^0 \subseteq \mathcal{K}$. Let us interpret this explanation in terms of almost Pontryagin spaces. Recall that an almost Pontryagin space is a Pontryagin space to which a finite-dimensional degenerate linear space has been added orthogonally; see [17].
As explained above, the singular solutions are excluded from the considerations. Hence, the overlap does not affect the negative index $\kappa$. However, the negative index $\kappa$ is affected by the existence of nonzero elements $\tilde{\Gamma}_z h$ in the isotropic subspace $L^0$ of the almost Pontryagin space $\bar{L}$. Those elements are characterized by the non-singular, nontrivial solutions of equation (4.6).
The following theorem gives us further analytic means to investigate the structure of the state space $\tilde{\mathcal{K}}$ and to compare the number of negative squares $\kappa$ of $Q := Q_1 + Q_2$ with the sum $\kappa_1 + \kappa_2$.

(i) By definition, the existence of a nontrivial (which is also non-singular) solution $(z_1, z_2; h_1, h_2)$ of (4.4) means that the corresponding condition holds for at least one function $Q_i$, $i = 1, 2$. In other words, the existence of a nontrivial solution $(z_1, z_2; h_1, h_2)$ of (4.4) is equivalent to the occurrence of one of the cases (b), (c), or (d), which, according to Corollary 3.4 (i), is equivalent to the claim that $\tilde{\mathcal{K}}$ is not a minimal state space of $Q$.
(ii) In Section 3.2 we showed that we can identify $\varepsilon_z h = \tilde{\Gamma}_z h$. A solution $(z; h)$ is a nontrivial solution of (4.6) if and only if $\tilde{\Gamma}_z h \in L$ and $[\tilde{\Gamma}_z h, \tilde{\Gamma}_w g] = 0$, $\forall w \in \mathcal{D}(Q)$, $\forall g \in \mathcal{H}$. This is equivalent to $0 \neq \tilde{\Gamma}_z h \in L \cap L^{[\perp]}$, i.e. $\tilde{\Gamma}_z h$ is an isotropic element of $L$.
(iv) Let us first prove the claim: if $G = \operatorname{l.s.}\{x : [x, x] > 0\}$, then $G$ is a positive manifold.
Assume now that $x$ and $y$ are two positive and linearly independent vectors. For every $\alpha = |\alpha| e^{i\varphi} \in \mathbb{C}$ it holds that $|\alpha|^2 [x, x] = [\alpha x, \alpha x] = |\alpha|^2 [e^{i\varphi} x, e^{i\varphi} x] > 0$. Because of this property, in the sequel we can consider $\alpha \in \mathbb{R}$ in the linear combinations of the form $\alpha x + y$, without loss of generality.
Then, for every $\alpha \in \mathbb{R}$ and two positive independent vectors $x, y \in G$, it holds that $[\alpha x + y, \alpha x + y] =: P(\alpha)$. The quadratic polynomial satisfies $P(\alpha) \geq 0$ because $[x, x] > 0$ and its discriminant is non-positive, according to the Cauchy-Schwarz inequality. As we know, the equality sign in the Cauchy-Schwarz inequality holds only when $x$ and $y$ are linearly dependent vectors. Hence, for the positive independent vectors $x, y \in G$ that we have here, it holds that $[\alpha x + y, \alpha x + y] > 0$. Because we have already proved that a linear combination of two linearly dependent positive vectors is positive, we can claim that a linear combination of any two positive vectors is a positive vector. The positivity of a linear combination of $n$ positive vectors then follows by induction.

Assume now that all solutions of (4.4) are positive. According to the above claim, the quadratic form in the second sum of (4.3) is positive. Then the scalar product in the corresponding subspace is non-negative or positive definite. We know that $L_2$ is a positive definite subspace; see [11, Theorem 3.3]. Therefore, if there exists a neutral vector $e \in L^{[\perp]}$, then it has to be in $L^0$. That is equivalent to $\tilde{\kappa}_1 < \kappa_1 + \kappa_2$. $L^0 \neq \{0\}$ means that $\bar{L}$ is degenerate; according to Lemma 4.1, $L(Q)$ is then also degenerate. If $L^0 = \{0\}$ we have case (c), which is equivalent to $\kappa = \kappa_1 + \kappa_2$. In Example 4.7 we will prove the existence of case (c).
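The quadratic polynomial invoked in this argument can be written out explicitly; for $\alpha \in \mathbb{R}$:

```latex
P(\alpha) \;=\; [\alpha x + y,\ \alpha x + y]
\;=\; \alpha^{2}[x,x] \;+\; 2\alpha \operatorname{Re}[x,y] \;+\; [y,y],
```

with discriminant $4\bigl((\operatorname{Re}[x,y])^{2} - [x,x][y,y]\bigr) \leq 4\bigl(|[x,y]|^{2} - [x,x][y,y]\bigr) \leq 0$, the last inequality being the Cauchy-Schwarz inequality applied as in the argument above.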
(v) If we assume, in contrast to the first claim of (v), that $(z; h)$ is a nontrivial solution of (4.6), then we get that $0 \neq \tilde{\Gamma}_z h \in L_0^0$. According to Corollary 3.4 (iii), it then holds that $\kappa < \kappa_1 + \kappa_2$. This is a contradiction that proves the first claim of (v).
In Example 4.8 we will see that even when equation (4.4) has only the trivial solution, it is possible to have $\kappa < \kappa_1 + \kappa_2$. That will prove the second claim of (v).
The following theorem gives us some analytic tools for investigating the existence of positive, negative, isotropic and neutral vectors in $\bar{L}$.

This equation has only the trivial solution $h_1 = h_2 = 0$. Hence, this is an example of a sum $Q := Q_1 + Q_2$ that has only a trivial solution of (4.4) and still does not preserve the number of negative squares. This completes the proof of Theorem 4.5 (v).
According to Theorem 4.5, the subspace $L^{[\perp]}$ should be non-positive. We will prove that $L^{[\perp]}$ is negative. This will also prove the existence of case (b) in the proof of Theorem 4.5. In order to do that, we will use the operator representations from the definitions in Section 3. For the vector $y$ constructed there, one verifies that $[y, y] < 0$; hence, the vector $y \in L^{[\perp]}$ is strictly negative. This is indeed case (b), anticipated in the proof of Theorem 4.5.
5 The final decomposition of Q

5.1 Let the function $Q \in N_\kappa(\mathcal{H})$ be minimally represented by (1.1) and let $\alpha \in \mathbb{R}$ be a generalized pole of $Q$ that is not of positive type. It is customary to say that $A$ and $\Gamma$ are closely connected if representation (1.1) is minimal. Let us decompose the function $Q$ by means of the Jordan chains of the representing relation $A$ at $\alpha$. According to [6, Lemma 1], there is no loss of generality in assuming that $\alpha \in \mathbb{R}$ is a single generalized pole that is not of positive type. In that case $A$ is an operator. For a given eigenvector $x_0$ of $A$ at $\alpha \in \mathbb{R}$, let us denote by $X$ one of the maximal Jordan chains of $x_0$, and let $S_\alpha(x_0) := \operatorname{l.s.}\{X\}$.
Let the Hilbert subspace, denoted here by $\mathcal{K}_0 \subset \mathcal{K}$, consist of all positive eigenvectors of the representing operator $A$ at $\alpha$. Let $E_0 : \mathcal{K} \to \mathcal{K}_0$ be the orthogonal projection, $E' := I - E_0$, $\mathcal{K}' := E'\mathcal{K}$ and $\Gamma_0 := E_0 \Gamma$. The subspaces $\mathcal{K}_0$ and $\mathcal{K}'$ obviously reduce the operator $A$. We define $\Gamma' := E'\Gamma$ and $A' := E'AE'$. Now let $x_0^1, \dots, x_{l_1 - 1}^1$ be a maximal non-degenerate Jordan chain of $A'$ at $\alpha$ in the Pontryagin space $\mathcal{K}'$. We define the projection $E_1 : \mathcal{K}' \to S_\alpha(x_0^1)$ and the corresponding subspace. We can repeat these steps until we exhaust all non-degenerate Jordan chains. At every step we can decompose the corresponding function as in Theorem 3.1.
Assume that there are $r > 0$ such (non-degenerate) chains at $\alpha$. We introduce the corresponding projection $E$ and the subspace $\mathcal{K}_{r+1}$. The subspaces $E\mathcal{K}$ and $\mathcal{K}_{r+1}$ obviously reduce $A$. From the construction of the Pontryagin space $\mathcal{K}_{r+1}$ we conclude that all degenerate chains of $A$ at $\alpha$ lie in $\mathcal{K}_{r+1}$. Using the above notation, we can summarize these results in the following proposition.
Proposition 5.1 Let $\alpha \in \mathbb{R}$ be a generalized pole, not of positive type, of $Q \in N_\kappa(\mathcal{H})$, where $Q$ is given by minimal representation (1.1). Then

$$\mathcal{K} = \mathcal{K}_0 \,[\dot{+}]\, \mathcal{K}_1 \,[\dot{+}]\, \cdots \,[\dot{+}]\, \mathcal{K}_r \,[\dot{+}]\, \mathcal{K}_{r+1}, \tag{5.1}$$

where $r \in \mathbb{N}$ is the number of independent non-degenerate Jordan chains of $A$ at $\alpha$; $\mathcal{K}_i$ are $A$-invariant Pontryagin subspaces of indices $\kappa_i$, $i = 0, 1, \ldots, r, r+1$, respectively; $\kappa_0 = 0$ and $\kappa = \sum_{i=1}^{r+1} \kappa_i$. For every $i = 1, 2, \ldots, r$, the subspace $\mathcal{K}_i$ is the linear span of the corresponding maximal non-degenerate Jordan chain $x_0^i, \ldots, x_{l_i-1}^i$. All positive eigenvectors are in $\mathcal{K}_0$. All degenerate chains of $A$ at $\alpha$ are in $\mathcal{K}_{r+1}$.
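As an illustration of how a non-degenerate chain contributes to the index, consider a chain of length two; the Gram matrix below is an illustrative assumption, not taken from the text. Note that $[x_0, x_0] = [(A-\alpha)x_1, x_0] = [x_1, (A-\alpha)x_0] = 0$ holds automatically for such a chain:

```latex
% Illustrative example: a non-degenerate Jordan chain x_0, x_1 of length 2
% with Gram matrix (the value \varepsilon is an assumption)
G = \begin{pmatrix} [x_0, x_0] & [x_0, x_1] \\ [x_1, x_0] & [x_1, x_1] \end{pmatrix}
  = \begin{pmatrix} 0 & 1 \\ 1 & \varepsilon \end{pmatrix},
  \qquad \varepsilon \in \mathbb{R}.
% Since \det G = -1 < 0, the span S_\alpha(x_0) has signature (1, 1),
% i.e. this chain contributes \kappa_i = 1 to \kappa = \sum_{i=1}^{r+1} \kappa_i.
```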
The corresponding nontrivial decomposition of $Q$ is $Q = Q_0 + Q_1 + \cdots + Q_{r+1}$, where $Q_i \in N_{\kappa_i}(\mathcal{H})$, $i = 0, 1, \ldots, r+1$.

5.2
Because $\mathcal{K}_i$, $i = 1, \ldots, r$, is the linear span of a maximal Jordan chain, it does not contain a nontrivial $A$-invariant subspace. In Proposition 5.1 we separated the non-degenerate maximal Jordan chains $X_i$ and $X_j$ by $A$-invariant disjoint subspaces $\mathcal{K}_i$ and $\mathcal{K}_j$, i.e. $X_i \subset \mathcal{K}_i$, $X_j \subset \mathcal{K}_j$, $\mathcal{K}_i \cap \mathcal{K}_j = \{0\}$ for all $i \neq j$. The following natural question arises: Is it possible to separate degenerate Jordan chains in a similar way? More precisely: Let $A$ be a self-adjoint operator in a Pontryagin space $\mathcal{K}$. Given two degenerate maximal Jordan chains $X_i$, $i = 1, 2$, at an eigenvalue $\alpha \in \mathbb{R}$, is it possible to find an $A$-invariant non-degenerate subspace $\mathcal{K}_1$ such that $X_1 \subset \mathcal{K}_1$ and $X_2 \cap \mathcal{K}_1 = \emptyset$?
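The obstruction behind this question is visible already for a single degenerate chain of length one; the following computation is a sketch under the stated assumption that $x_0$ is a neutral eigenvector:

```latex
% If x_0 is a neutral eigenvector, [x_0, x_0] = 0, then its span satisfies
L = \mathrm{l.s.}\{x_0\} \subseteq L^{[\perp]},
% so L is degenerate. Hence any non-degenerate A-invariant subspace K_1
% with x_0 \in K_1 must also contain a vector f with [x_0, f] \neq 0.
```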
In order to address this question, we introduce the following model with two independent degenerate chains at $\alpha = 0$ of the first order, i.e. two neutral eigenvectors. We denote by $\mathrm{l.s.}\{k\}$ the linear span of a vector $k$. (iv) If the operator $A_{11}$ is irreducible, then the operator $A$ does not have any invariant non-degenerate subspace that contains one eigenvector $x_0^i$ and not the other, $x_0^j$, $i \neq j$; $i, j = 1, 2$.
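For orientation, the argument below uses only the pairing between $x_0^i$ and $f_i$; the following Gram relations are an assumption consistent with $x_0^1, x_0^2$ being neutral eigenvectors, not a quotation of the model:

```latex
% Assumed (for orientation only) inner-product structure on the span
% l.s.\{x_0^1, f_1, x_0^2, f_2\} complementing the Hilbert space H:
[x_0^i, x_0^j] = 0, \qquad [x_0^i, f_j] = \delta_{ij}, \qquad i, j = 1, 2,
% so that each pair (x_0^i, f_i) spans a non-degenerate two-dimensional
% subspace, while each eigenvector x_0^i alone is neutral.
```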
Statements (i) and (ii) follow by straightforward verification.
(iii) Contrary to the statement, assume that the operator $A$ has an eigenvalue $\beta \neq 0$ with eigenvector $k = h + \delta_1 x_0^1 + \delta_2 x_0^2 + \gamma_1 f_1 + \gamma_2 f_2$, $h \in \mathcal{H}$. Then

$$Ak = \begin{pmatrix} A_{11}h + a_1\gamma_1 + a_2\gamma_2 \\ (h, a_1) + \alpha_1\gamma_1 \\ (h, a_2) + \alpha_2\gamma_2 \\ 0 \\ 0 \end{pmatrix} = \beta \begin{pmatrix} h \\ \delta_1 \\ \delta_2 \\ \gamma_1 \\ \gamma_2 \end{pmatrix}.$$

Hence, $\gamma_i = 0$, $i = 1, 2$. Therefore, $A_{11}h = \beta h$ with $h \neq 0$. This means that the operator $A_{11}$ has a nonzero eigenvalue $\beta$. Because $A_{11}$ is a bounded self-adjoint operator on $\mathcal{H}$, it is reduced by the subspace spanned by the eigenvector $h \in \mathcal{H}$; see also the definition of reducibility in [2, Section 40]. That contradicts the assumption that $A_{11}$ is irreducible. This proves (iii).
(iv) Contrary to the statement, assume that the operator $A_{11}$ is irreducible in $\mathcal{H}$ and that there exists a non-degenerate, $A$-invariant, nontrivial subspace $\mathcal{K}_1$ of $\mathcal{K}$ such that $x_0^1 \in \mathcal{K}_1$ and $x_0^2 \notin \mathcal{K}_1$. Then $\mathcal{K}_1$ must contain $f_1$; otherwise, according to (5.2), the subspace $\mathcal{K}_1$ would be degenerate. Similarly, $\mathcal{K}_1$ cannot contain $f_2$, because then $\mathcal{K}_1$ without $x_0^2$ would be degenerate. Hence, vectors from $\mathcal{K}_1$ must satisfy $\delta_2 = 0$, $\gamma_2 = 0$, and $\mathcal{H}_1 := \mathcal{H} \cap \mathcal{K}_1$ must contain vectors of the form $A_{11}h + a_1\gamma_1 \in \mathcal{H}$. This means that $\mathcal{K}_1$ is of the form (5.3), where $\mathcal{H}_1 \neq \{0\}$. It is easy to verify that $x_0^2 \in \mathcal{K}_1^{[\perp]}$. For an arbitrarily selected $k_1 \in \mathcal{K}_1$ we compute $Ak_1$. Because $\mathcal{K}_1$ is $A$-invariant, $Ak_1$ must be of the form (5.3). Hence, it must hold that $(h, a_2) = 0$ for all $h \in \mathcal{H}_1$, and $A_{11}h + a_1\gamma_1 \in \mathcal{H}_1$ for all $h \in \mathcal{H}_1$ and all $\gamma_1 \in \mathbb{C}$.
If we set $\gamma_1 = 0$ in the second condition, we conclude that $A_{11}h \in \mathcal{H}_1$ for all $h \in \mathcal{H}_1$. Therefore, $\mathcal{H}_1$ is an $A_{11}$-invariant, nontrivial subspace of $\mathcal{H}$. Because $A_{11}$ is a bounded self-adjoint operator on the Hilbert space $\mathcal{H}$, the operator $A_{11}$ is reduced by $\mathcal{H}_1$; see again [2, Section 40]. That contradicts the assumption of irreducibility of $A_{11}$ and proves (iv).
This example shows that, in general, there does not exist an $A$-invariant non-degenerate subspace that contains one but not the other degenerate eigenvector of $A$ at $\alpha$.

Corollary 5.3
There is no nontrivial decomposition of $\mathcal{K}_{r+1}$ and $Q_{r+1}$, i.e. decomposition (5.1) of $\mathcal{K}$ and the corresponding decomposition of $Q$ are final.