Triple Variational Principles for Self-Adjoint Operator Functions

For a very general class of unbounded self-adjoint operator functions we prove upper bounds for eigenvalues which lie within arbitrary gaps of the essential spectrum. These upper bounds are given by triple variations. Furthermore, we find conditions which imply that a point is in the resolvent set. For norm resolvent continuous operator functions we show that the variational inequality becomes an equality.


Introduction
In many applications of operator and spectral theory eigenvalue problems appear which are nonlinear in the eigenvalue parameter, e.g. polynomially or rationally. Very often such problems can be dealt with by introducing a function of the spectral parameter whose values are linear operators in a Hilbert space. To be more specific, let T(·) be an operator function that is defined on some set ∆ ⊂ C and whose values are closed linear operators in a Hilbert space (H, ⟨·, ·⟩); for each λ ∈ ∆ the domain of the operator T(λ) is denoted by dom(T(λ)). A number λ ∈ ∆ is called an eigenvalue of the operator function T if there exists an x ∈ dom(T(λ)) \ {0} such that T(λ)x = 0, i.e. 0 is in the point spectrum of the operator T(λ). The spectrum, essential spectrum, discrete spectrum and resolvent set of T are defined as follows:

σ(T) := {λ ∈ ∆ : 0 ∈ σ(T(λ))},
σ_ess(T) := {λ ∈ ∆ : 0 ∈ σ_ess(T(λ))} = {λ ∈ ∆ : T(λ) is not Fredholm},
σ_dis(T) := σ(T) \ σ_ess(T),
ρ(T) := {λ ∈ ∆ : 0 ∈ ρ(T(λ))};

note that a closed operator is called Fredholm if the dimension of the kernel and the (algebraic) co-dimension of the range are finite. A trivial example is given by T(λ) = A − λI, where A is a closed operator; in this case the spectra of the operator function T and the operator A clearly coincide. More complicated examples are operator polynomials or Schur complements of block operator matrices; see, e.g. [21,25] and references therein; see also the survey article [24] about numerical methods for eigenvalues of quadratic matrix polynomials.
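As a concrete finite-dimensional illustration (our own sketch, not taken from the paper): for a matrix-valued function T(λ), the eigenvalues of the operator function are exactly the zeros of λ ↦ det T(λ). The 2×2 quadratic matrix polynomial below is a hypothetical example chosen so that the determinant factorises; one eigenvalue is isolated by bisection.

```python
# Hedged sketch (hypothetical coefficients, not from the paper): eigenvalues
# of a matrix function T(lambda) are the solutions of det T(lambda) = 0.
# Here T(lambda) = lambda^2*M + lambda*D + K is a 2x2 quadratic polynomial.

def det2(A):
    """Determinant of a 2x2 matrix given as nested lists."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def T(lam, M, D, K):
    """Evaluate T(lambda) = lambda^2*M + lambda*D + K entrywise."""
    return [[lam * lam * M[i][j] + lam * D[i][j] + K[i][j]
             for j in range(2)] for i in range(2)]

def bisect_root(f, a, b, tol=1e-12):
    """Zero of f in [a, b] by bisection; f must change sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

M = [[1.0, 0.0], [0.0, 1.0]]   # "mass" matrix (identity)
D = [[3.0, 0.0], [0.0, 5.0]]   # "damping"
K = [[2.0, 0.0], [0.0, 6.0]]   # "stiffness"

# det T(lambda) = (lambda^2+3lambda+2)(lambda^2+5lambda+6); the interval
# [-1.5, -0.5] brackets exactly the eigenvalue at -1.
f = lambda lam: det2(T(lam, M, D, K))
ev = bisect_root(f, -1.5, -0.5)
print(ev)   # close to -1.0
```

The same sign-change bracketing is what makes the scalar functions λ ↦ ⟨T(λ)x, x⟩ used later in the paper computable in practice.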
It is our aim to show spectral enclosures and variational principles for eigenvalues of operator functions. In the 1950s R. J. Duffin [5] proved a variational principle for eigenvalues of certain quadratic matrix polynomials, which was generalised to infinite-dimensional spaces and more general operator functions in the following decades; see, e.g. [1,21,23,27]. Basically, the following situation was considered. Let T be a function defined on an interval [α, β] whose values are bounded self-adjoint operators in a Hilbert space H such that T(α) ≫ 0 (i.e. T(α) is uniformly positive), T(β) ≪ 0 and T is differentiable with T′(λ) ≪ 0 for λ ∈ [α, β]. For every x ∈ H \ {0} the scalar function λ ↦ ⟨T(λ)x, x⟩ has exactly one zero in (α, β), which we denote by p(x). The mapping x ↦ p(x) is called a generalised Rayleigh functional. The eigenvalues of T below the essential spectrum of T can accumulate at most at the bottom of σ_ess(T); if they are denoted by λ_1 ≤ λ_2 ≤ · · ·, then they are characterised by the following variational principle:

λ_n = min_{L ⊂ H, dim L = n} max_{x ∈ L \ {0}} p(x);    (1.1)

here L denotes finite-dimensional subspaces of H. If T(λ) = A − λI where A is a bounded self-adjoint operator, then (1.1) reduces to the standard variational principle for eigenvalues of a self-adjoint operator; the generalised Rayleigh functional is then just the classical Rayleigh quotient: p(x) = ⟨Ax, x⟩/‖x‖². In [2] the assumption that T(α) is uniformly positive was relaxed and replaced by the assumption that the negative spectrum of T(λ) consists of only a finite number κ of eigenvalues (counted with multiplicities), in which case n has to be replaced by n + κ in the variations over the subspaces; also the generalised Rayleigh functional has to be slightly modified (see Definition 2.1 (i) below). In [7] also functions whose values are unbounded operators were allowed. Moreover, in these papers the monotonicity assumption on T was weakened; see Assumption (A3) below in Section 2.
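For the linear pencil T(λ) = A − λI the principle above is the classical min-max principle, and in two dimensions the extreme eigenvalues are simply the minimum and maximum of the Rayleigh quotient over the unit circle. A minimal numerical sketch (our own, with an arbitrary symmetric 2×2 matrix):

```python
# Hedged sketch: for T(lambda) = A - lambda*I with A symmetric, the
# generalised Rayleigh functional is the classical Rayleigh quotient
# p(x) = <Ax, x>/||x||^2, and the variational principle reduces to the
# usual min-max characterisation.  The matrix is an arbitrary example.
import math

A = [[2.0, 1.0], [1.0, 3.0]]   # symmetric; eigenvalues (5 -+ sqrt(5))/2

def rayleigh(x):
    """Classical Rayleigh quotient <Ax, x>/||x||^2 for x in R^2."""
    ax = [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]
    return (ax[0]*x[0] + ax[1]*x[1]) / (x[0]*x[0] + x[1]*x[1])

# Sample unit vectors x = (cos t, sin t) over a half circle; in two
# dimensions lambda_1 = min p(x) and lambda_2 = max p(x).
vals = [rayleigh((math.cos(k * math.pi / 10000), math.sin(k * math.pi / 10000)))
        for k in range(10000)]
lam1, lam2 = min(vals), max(vals)
print(lam1, lam2)   # approx (5 - sqrt(5))/2 and (5 + sqrt(5))/2
```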
The main aim of our paper is to remove the assumption of the finiteness of the negative spectrum of T(α) and to allow also the characterisation of eigenvalues in gaps of the essential spectrum. In order to do this, a third variation is needed; see Theorem 5.1, the main result of the paper. This theorem greatly sharpens and extends [6, Theorem 2.4], where only an inequality was shown for operator functions and where it was assumed that the values are bounded operators (for some quadratic polynomials equality was proved). As part of the proof of Theorem 5.1 we also show such an inequality (Theorem 2.3) for a class of operator functions with weaker continuity assumptions than needed in Theorem 5.1. We believe that the first triple variational principle appeared in [22], where eigenvalues of positive operators in Krein spaces were characterised.
Our second main result, Theorem 2.2, is connected with the inequality in Theorem 2.3 and gives a sufficient condition for points being in the resolvent set of an operator function. In Theorem 3.2 this is used to obtain the existence of spectral gaps for perturbed self-adjoint operators. In a forthcoming paper [20] we will also apply Theorem 2.2 to prove spectral inclusions for certain block operator matrices.
Let us give a brief synopsis of the paper. In Section 2 we state and prove the result about points in the resolvent set of an operator function (Theorem 2.2) and the variational inequality (Theorem 2.3); an inequality for the essential spectrum is also obtained there. In Section 3 we consider self-adjoint operators, which need not be semi-bounded, and prove a variational principle for eigenvalues in arbitrary gaps of the essential spectrum (Theorem 3.1). Moreover, the above-mentioned perturbation result for spectral gaps is proved there (Theorem 3.2). In Section 4 we prove a decomposition of the Hilbert space into a direct sum of three subspaces, one being the span of the eigenvectors corresponding to eigenvalues in an interval and the other two being spectral subspaces connected with the operators at the two endpoints of the interval (Theorem 4.1). This decomposition is the main ingredient in the proof of the other inequality of the variational principle in Theorem 5.1. Further, in Section 4 we prove that eigenvalues cannot accumulate outside the essential spectrum of an analytic operator function (Proposition 4.2). Finally, in Section 5 we prove the triple variational principle for eigenvalues of norm resolvent continuous operator functions.
Throughout this paper the term 'subspace' refers to a linear manifold, which is not necessarily closed. Moreover, ∔ denotes a direct sum of two subspaces.

A general variational inequality
In this section we consider a rather general class of self-adjoint operator functions and prove variational inequalities for eigenvalues. Moreover, we give sufficient conditions for points to belong to the resolvent set of such operator functions.
Let A be a self-adjoint operator in a Hilbert space H and let E be its spectral measure. We define the corresponding sesquilinear form a by

a[x, y] := ∫_ℝ t d⟨E(t)x, y⟩,   x, y ∈ dom(a) := dom(|A|^{1/2}).
Note that, for x ∈ dom(A) and y ∈ dom(a), we have a[x, y] = ⟨Ax, y⟩. If A is bounded below, then this definition clearly coincides with the definition in [12, §IV.1.5]. For more information on non-semi-bounded forms see, e.g. [8,11]. Let L be a (not necessarily closed) subspace of dom(a). We say that L is a-non-negative if a[x] := a[x, x] ≥ 0 for all x ∈ L; L is called maximal a-non-negative if it cannot be extended to a larger subspace with the same property. Throughout the paper we denote by L_∆(A) the spectral subspace for A corresponding to a Borel set ∆ ⊂ R, i.e. L_∆(A) = ran E(∆).
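In finite dimensions the spectral subspace L_∆(A) = ran E(∆) is simply the span of the eigenvectors of A whose eigenvalues lie in ∆, and E(∆) is the sum of the corresponding rank-one projections. A minimal sketch (our own, with an arbitrary symmetric 2×2 matrix):

```python
# Hedged finite-dimensional sketch: for a symmetric matrix A, the spectral
# projection E(Delta) is the sum of outer products v v^T over normalised
# eigenvectors v whose eigenvalues lie in Delta; then L_Delta(A) = ran E(Delta).
import math

def eig2_sym(a11, a12, a22):
    """Eigenpairs of [[a11, a12], [a12, a22]] (a12 != 0 assumed)."""
    tr, det = a11 + a22, a11 * a22 - a12 * a12
    disc = math.sqrt(tr * tr - 4 * det)
    pairs = []
    for lam in [(tr - disc) / 2, (tr + disc) / 2]:
        v = (a12, lam - a11)                 # (A - lam)v = 0
        n = math.sqrt(v[0] * v[0] + v[1] * v[1])
        pairs.append((lam, (v[0] / n, v[1] / n)))
    return pairs

def spectral_projection(pairs, interval):
    """E(Delta) = sum of v v^T over eigenvalues in Delta = (lo, hi)."""
    lo, hi = interval
    P = [[0.0, 0.0], [0.0, 0.0]]
    for lam, v in pairs:
        if lo < lam < hi:
            for i in range(2):
                for j in range(2):
                    P[i][j] += v[i] * v[j]
    return P

pairs = eig2_sym(2.0, 1.0, 3.0)              # eigenvalues (5 -+ sqrt(5))/2
P = spectral_projection(pairs, (0.0, 2.0))   # only (5 - sqrt(5))/2 lies here
# P is a rank-one orthogonal projection: trace(P) = 1 and P^2 = P = P^T.
```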

Assumptions (A1)-(A3).
Let T be an operator function defined on some interval ∆ ⊂ R whose values are operators in a Hilbert space H. We assume that the following conditions are satisfied:

(A1) T(λ) is self-adjoint for every λ ∈ ∆ with corresponding quadratic form t(λ);
(A2) dom(t(λ)) = dom(|T(λ)|^{1/2}) is independent of λ and denoted by D;
(A3) for each x ∈ D \ {0}, the function λ ↦ t(λ)[x] is continuous and decreasing at value 0, i.e. if t(λ_0)[x] = 0 for some λ_0 ∈ ∆, then t(λ)[x] > 0 for λ ∈ ∆ with λ < λ_0 and t(λ)[x] < 0 for λ ∈ ∆ with λ > λ_0.

Occasionally (in particular, when the essential spectrum is involved) we need the following condition, which is named after A. Virozub and V. Matsaev (see [26] and also [18,16]):

(VM−) for every u ∈ D, the function t(·)[u] is differentiable on ∆ and, for every compact subinterval I of ∆, there exist ε, δ > 0 such that, for all x ∈ D with ‖x‖ = 1 and all λ ∈ I,

|t(λ)[x]| ≤ ε  ⟹  t′(λ)[x] ≤ −δ.

Obviously, for fixed λ ∈ I, this condition is equivalent to the condition that

|t(λ)[x]| ≤ ε‖x‖²  ⟹  t′(λ)[x] ≤ −δ‖x‖²

for all x ∈ D.
In [26,18,16] the Virozub-Matsaev condition was studied with t ′ (λ) ≥ δ instead of t ′ (λ) ≤ −δ. Moreover, the definition was slightly different but equivalent to ours (apart from the different sign) for the functions considered in [16], which were assumed to have bounded operators as values and to be continuously differentiable in the operator norm, cf. [16,Lemma 3.6].
Next we define the notion of a generalised Rayleigh functional. First note that, by Assumption (A3), the function λ → t(λ)[x] has at most one zero for a given x ∈ D \ {0}. If it has a zero, we define a generalised Rayleigh functional p(x) to be equal to this zero; otherwise, we assign a value outside ∆. More precisely, we define a generalised Rayleigh functional as follows.
Definition 2.1. Let T be an operator function defined on ∆ that satisfies Assumptions (A1)-(A3) and let t(λ) be the corresponding forms.
In [2], [6] and [7] generalised Rayleigh functionals were defined such that p(x) = −∞ and p(x) = +∞ in the second and third case in (i) above. This does not change the results, but our definition gives more flexibility in applications; the choice with ±∞ is also allowed in our definition. Note that, for all λ ∈ ∆ and x ∈ D \ {0}, we have t(λ)[x] > 0 if λ < p(x), and t(λ)[x] < 0 if λ > p(x). Moreover, if λ_0 is an eigenvalue of T with eigenvector x_0, i.e. T(λ_0)x_0 = 0, then p(x_0) = λ_0. The next two theorems are the main results of this section. The first one can be used to show that some point is in the resolvent set of an operator function. The second one, which is a generalisation of [6, Theorem 2.4], gives triple variational inequalities for eigenvalues and the bottom of the essential spectrum of an operator function.
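The defining property of p can be made concrete numerically: by (A3) the function λ ↦ t(λ)[x] crosses 0 at most once, from above, so p(x) is computable by bisection once a sign change is bracketed. The following sketch is our own toy model (the cubic form family is hypothetical, chosen only because it is strictly decreasing in λ and hence satisfies (A3)):

```python
# Hedged sketch of a generalised Rayleigh functional: here
# t(lambda)[x] = <Ax, x> - lambda*||x||^2 - 0.1*lambda^3*||x||^2
# is a hypothetical quadratic-form family on R^2; it is strictly
# decreasing in lambda, so Assumption (A3) holds and p(x) is the
# unique zero of lambda -> t(lambda)[x].

A = [[2.0, 1.0], [1.0, 3.0]]

def t(lam, x):
    """Evaluate t(lambda)[x] for the toy family above."""
    ax0 = A[0][0]*x[0] + A[0][1]*x[1]
    ax1 = A[1][0]*x[0] + A[1][1]*x[1]
    quad = ax0*x[0] + ax1*x[1]         # <Ax, x>
    nrm2 = x[0]*x[0] + x[1]*x[1]       # ||x||^2
    return quad - lam*nrm2 - 0.1*lam**3*nrm2

def p(x, lo=-10.0, hi=10.0, tol=1e-12):
    """Generalised Rayleigh functional: unique zero of lam -> t(lam)[x]."""
    assert t(lo, x) > 0 > t(hi, x)     # sign change is bracketed
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if t(mid, x) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

x = (1.0, 0.0)
lam0 = p(x)
print(lam0, t(lam0, x))   # t(p(x))[x] is (numerically) zero
```

In particular, if T(λ_0)x_0 = 0 then t(λ_0)[x_0] = 0, so the bisection recovers p(x_0) = λ_0 for eigenvectors.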
then µ_2 ∈ ρ(T). Moreover, let p be a generalised Rayleigh functional for T on ∆, let γ ∈ ρ(T) with γ < β, and set Let (λ_j)_{j=1}^N, N ∈ N_0 ∪ {∞}, be a finite or infinite sequence of eigenvalues of T in the interval (γ, λ_e) in non-decreasing order such that, for each set of k coinciding eigenvalues, say λ_i = λ_{i+1} = . . . = λ_{i+k−1}, one has dim ker T(λ_i) ≥ k. Then Moreover, if T satisfies the condition (VM−) and σ_ess(T) ∩ (γ, β) = ∅, then The spectrum of T consists only of eigenvalues, but the variations on the left-hand side of (2.5) are equal to 0 for all n ∈ N if one chooses, e.g. γ = 0. (iv) Note that in the last statement of the theorem the condition (VM−) is necessary, as can be seen from the example given in [7, Remark 2.10].
Before we prove the theorems, we need a couple of lemmas.
where C_M and C_{M′} are bounded operators from K_+ to K_− with dom(C_M) = K_+ and dom(C_{M′}) ⊂ K_+. This implies that M is isomorphic to K_+ and M′ is isomorphic to a subspace of K_+. From this the claim is immediate.
Proof. We can estimate Since a + c > 0, the quadratic form in u and v is positive definite if and only if As this inequality is true by assumption, the expression in (2.7) is positive unless both u and v are zero.
In the next lemmas T is an operator function defined on an interval ∆.
Then there exists a subspace M′ such that M ∔ M′ ⊂ D and Theorem 2.2 is now an immediate consequence of the previous lemma.
Before we prove Theorem 2.3 we need two more lemmas.
Proof. We prove only the assertion in (i); the statement in (ii) is proved analogously.
Since t(ν)[y] ≥ 0 and T satisfies Assumption (A3), we have t(λ m )[y] ≥ 0. Using the fact that λ m is an eigenvalue of T with eigenvector u m , i.e. that T (λ m )u m = 0, we obtain Finally, we can once more use Assumption (A3) to prove the claim.
Then there exist linearly independent vectors u 1 , . . . , u m such that u j is an eigenvector of T corresponding to λ j , j = 1, . . . , m.
Proof. For every λ_j choose an eigenvector u_j such that for coinciding eigenvalues the eigenvectors are linearly independent. Assume that there exist numbers α_1, . . . , α_m ∈ C, not all equal to 0, such that α_1 u_1 + . . . + α_m u_m = 0. Let α_n be the last non-zero coefficient, i.e. α_n ≠ 0 and α_j = 0 for j > n. Because the u_j are chosen to be linearly independent for coinciding eigenvalues, we have λ_1 < λ_n. Let k be such that λ_k < λ_{k+1} = . . . = λ_n. Since u_{k+1}, . . . , u_n are linearly independent and α_n ≠ 0, it follows that α_{k+1} u_{k+1} + . . . + α_n u_n ≠ 0. From Lemma 2.10 (ii) with y = 0 we obtain that The fact that α_{k+1} u_{k+1} + . . . + α_n u_n is an eigenvector corresponding to the eigenvalue λ_n implies that t(λ_n)[α_{k+1} u_{k+1} + . . . + α_n u_n] = 0. Hence by (A3), which is a contradiction to (2.10).
Note that, without assumption (A3), the statement of the previous lemma is false in general; see, e.g. the example in [18,Remark 7.7]. Now we can turn to the proof of Theorem 2.3.
Proof of Theorem 2.3. First we show Remark 2.4 (ii). Let n ∈ N and assume that T has at least n eigenvalues in (γ, λ_e) counted with multiplicities. It follows from Lemma 2.11 that there exist linearly independent eigenvectors u_1, . . . , u_n of T corresponding to λ_1, . . . , λ_n. By Lemma 2.10 (i) the space span{u_1, . . . , u_n} is t(γ)-non-negative and can be extended to a maximal t(γ)-non-negative subspace by Zorn's lemma, which shows the statement concerning (2.5). For the analogous statement for (2.6) let µ_2 ∈ σ_ess(T) ∩ (γ, β) and a, b, c be as in Lemma 2.8, where ε, δ are such that (2.1) is valid on [γ, µ_2]. If n ∈ N, then there exists an n-dimensional subspace M′ of dom(T(µ_2)) such that ‖T(µ_2)v‖ ≤ b‖v‖ for v ∈ M′. It follows from Lemma 2.8 with u = 0 that the space M′ is t(γ)-non-negative. Again we can extend this space to a maximal t(γ)-non-negative subspace.
Suppose that the inequality in (2.5) is false for some n. According to the definition of λ_e there exists a number µ_1 ∈ σ_ess(T) such that λ_e ≤ µ_1 < µ_2. It follows from (2.13) that t(µ_2)[x] ≥ 0 for x ∈ M ⊖ L and hence from Lemma 2.7 that where a := min{ε, δ(µ_2 − µ_1)} and ε, δ are such that (2.1) is valid for all λ ∈ [γ, µ_2] and x ∈ D. Set c := min{ε, δ(µ_1 − γ)} and choose b > 0 such that b < a and ac > b(a + b + 3c). Since µ_1 ∈ σ_ess(T), i.e. 0 ∈ σ_ess(T(µ_1)), there exists an n-dimensional subspace V of dom(T(µ_1)) ⊂ D such that Set M′ := (M ⊖ L) + V; the sum is direct because of (2.14), (2.15) and the inequality b < a. It follows from (2.14), (2.15) and Lemma 2.8 that

In the following theorem eigenvalues in a gap of the essential spectrum are characterised by a triple variational principle. This result is a generalisation of [6, Theorem 3.1] to unbounded operators. For other types of variational principles for eigenvalues of self-adjoint operators in gaps of the essential spectrum see, e.g. [4,10,13,17], where a given decomposition of the space is used. Note that Theorem 3.1 is not a corollary of Theorem 5.1 below since there we assume that the values of the operator function are operators that are bounded from below, which is not assumed in Theorem 3.1. Moreover, let (λ_j)_{j=1}^N, N ∈ N_0 ∪ {∞}, be the finite or infinite sequence of eigenvalues in (γ, λ_e) in non-decreasing order and counted according to their multiplicities: λ_1 ≤ λ_2 ≤ · · ·. Then Moreover, if N is finite and σ_ess(A) ∩ (γ, ∞) = ∅, then

The following theorem shows that, for a non-negative perturbation of a self-adjoint operator, a spectral gap can close only from one side.

Theorem 3.2. Let
A be a self-adjoint operator with corresponding quadratic form a and α, β ∈ R such that (α, β) ⊂ ρ(A). Moreover, let b be a non-negative quadratic form with dom(b) ⊃ dom(a) such that a + b with domain dom(a) is the quadratic form of a self-adjoint operator C, and assume that

b[x] ≤ a‖x‖² + b a[x],   x ∈ dom(a),

with some constants a, b ≥ 0. If α̂ < β, where α̂ := α + a + bα, then (α̂, β) ⊂ ρ(C).
If A is bounded from below and b is a non-negative form with dom(b) ⊃ dom(a), then a + b with domain dom(a) is a closed form that is bounded from below. If B is a bounded non-negative operator, the assertion of Theorem 3.2 follows, e.g. from the considerations in [3, Section 9.4].
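The one-sided behaviour asserted in Theorem 3.2 can be observed in a small matrix experiment (our own illustration, with arbitrary 2×2 matrices, not an example from the paper): a non-negative perturbation may push eigenvalues up into the gap from below, but by Weyl's inequality λ_k(A + B) ≥ λ_k(A) no eigenvalue crosses the upper gap edge β from above.

```python
# Hedged illustration: A = diag(0, 2) has the spectral gap (0, 2); adding a
# non-negative rank-one perturbation s*B moves eigenvalues upward only, so
# the gap can close only from its lower edge.
import math

def eig2(a11, a12, a22):
    """Eigenvalues of the symmetric 2x2 matrix [[a11, a12], [a12, a22]]."""
    tr, det = a11 + a22, a11 * a22 - a12 * a12
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

alpha, beta = 0.0, 2.0   # gap (alpha, beta) of A = diag(0, 2)
for s in [0.5, 1.0, 5.0, 50.0]:
    # C = A + s*B with B = [[1, 1], [1, 1]]/2 (non-negative, rank one)
    lo, hi = eig2(s / 2, s / 2, 2 + s / 2)
    print(s, lo, hi)
    # the lower eigenvalue may enter (alpha, beta) from below;
    # the upper eigenvalue stays >= beta for every s >= 0
    assert lo >= alpha and hi >= beta
```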

A spectral decomposition
In this section we consider operator functions that are continuous in the norm resolvent sense and whose spectrum on some interval [α, β] consists only of a finite number of eigenvalues. The main result is a decomposition of the space into three components: two components are connected with the endpoints α, β, and the third component is the span of the eigenvectors corresponding to the eigenvalues in [α, β]. This decomposition result is an analogue of [18, Theorem 7.3], where analytic operator functions whose values are bounded operators were considered but arbitrary spectrum was allowed in [α, β]; cf. also similar results for Schur complements of block operator matrices in [19] and [15]. The decomposition in the following theorem is also used in the next section to prove a variational principle.
In Proposition 4.2 we prove that, for holomorphic functions of type (B), no accumulation of eigenvalues outside the essential spectrum can occur, so that the discreteness assumption of Theorem 4.1 is automatically satisfied outside the essential spectrum.
Theorem 4.1. Let T be an operator function defined on the interval [α, β], where α, β ∈ R, α < β, which satisfies Assumptions (A1)-(A3), is continuous in the norm resolvent sense, and is such that T(λ) is bounded from below for each λ ∈ [α, β]. Assume that α, β ∈ ρ(T) and that

σ(T) ∩ (α, β) = {λ_1, . . . , λ_n},

where λ_1 ≤ · · · ≤ λ_n are repeated according to their multiplicities. If u_1, . . . , u_n is a corresponding set of linearly independent eigenvectors (which exists by Lemma 2.11), then

H = L_(−∞,0)(T(α)) ∔ span{u_1, . . . , u_n} ∔ L_(0,∞)(T(β)).

The next proposition gives a sufficient condition for σ(T) having no accumulation point in an interval. Note that, without any further continuity assumption, functions satisfying (A1)-(A3) may have a sequence of eigenvalues that accumulates outside the essential spectrum; see, e.g. the example in Remark 2.4 (iii). Recall that an operator function T defined on a domain U ⊂ C is said to be holomorphic of type (B) if T(λ) is m-sectorial for every λ ∈ U, dom(t(λ)) ≡ D is independent of λ and t(·)[x] is holomorphic on U for every x ∈ D. Instead of (A3) we assume a slightly stronger assumption, (A3)'. In [16] this condition with the reverse inequality for the derivative was called (vm). Note that, without any assumption of type (A3) or (A3)', the result would be incorrect, as the zero function on a finite-dimensional space shows.

Lemma 4.3. Let L_1 and L_2 be subspaces of H with L_1 ∩ L_2 = {0}. Then the direct sum L_1 ∔ L_2 is not closed if and only if there exist x_n ∈ L_1, y_n ∈ L_2, n ∈ N, such that

‖x_n‖ = 1 for all n ∈ N,   ‖x_n + y_n‖ → 0 as n → ∞.   (4.2)
Proof. If L_1 ∔ L_2 is not closed, then, by [9, Theorem 2.1.1], there exist x_n ∈ L_1, y_n ∈ L_2 such that ‖x_n + y_n‖ < (1/n)(‖x_n‖ + ‖y_n‖).
Clearly, x_n ≠ 0 for all n ∈ N. Without loss of generality we can choose x_n such that ‖x_n‖ = 1. The relation (1/n)(‖x_n‖ + ‖y_n‖) > ‖x_n + y_n‖ ≥ ‖y_n‖ − ‖x_n‖ implies that ‖y_n‖ ≤ (n+1)/(n−1) for n ≥ 2 and hence that ‖x_n + y_n‖ → 0, which is (4.2). Conversely, assume that there exist x_n ∈ L_1, y_n ∈ L_2 that satisfy (4.2). Then, clearly, there exists no K > 0 such that ‖x_n + y_n‖ ≥ K(‖x_n‖ + ‖y_n‖) for all n ∈ N.
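The criterion (4.2) can be seen at work in the standard example of two subspaces of ℓ² whose "angle" degenerates (our own illustration, not an example from the paper): with L_1 = span{e_{2k} : k ∈ N} and L_2 = span{e_{2k} + (1/k)e_{2k+1} : k ∈ N}, the vectors x_k = e_{2k} and y_k = −(e_{2k} + (1/k)e_{2k+1}) satisfy ‖x_k‖ = 1 while ‖x_k + y_k‖ = 1/k → 0, so the direct sum is not closed.

```python
# Hedged illustration of condition (4.2) in l^2, using a sparse dictionary
# representation {index: coefficient} for finitely supported vectors.
import math

def norm(vec):
    """Euclidean norm of a sparsely represented vector."""
    return math.sqrt(sum(c * c for c in vec.values()))

def pair(k):
    """Return (||x_k||, ||x_k + y_k||) for the k-th pair of the example."""
    x = {2 * k: 1.0}                         # x_k = e_{2k} in L1
    y = {2 * k: -1.0, 2 * k + 1: -1.0 / k}   # y_k = -(e_{2k} + e_{2k+1}/k) in L2
    s = {i: x.get(i, 0.0) + y.get(i, 0.0) for i in set(x) | set(y)}
    return norm(x), norm(s)

for k in [1, 10, 100, 1000]:
    nx, ns = pair(k)
    print(k, nx, ns)    # ||x_k|| = 1 while ||x_k + y_k|| = 1/k -> 0
```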
In Lemmas 4.4-4.6 we assume that the assumptions of Theorem 4.1 are satisfied. Assume that the first sum in (4.3) is not closed; then, by Lemma 4.3, there exist x_n ∈ H_1 and y_n ∈ L_(0,∞)(T(b)), n ∈ N, such that ‖x_n‖ = 1 and ‖x_n + y_n‖ → 0 as n → ∞.
Let E be the spectral measure associated with the operator T (b). Then, for all n ∈ N, we have it follows that the right-hand side of (4.4) is positive for all sufficiently large n. This contradiction shows that the first sum in (4.3) is closed.
Since the second sum can be written as it is closed by the first part of the proof and the fact that ker(T(b)) is finite-dimensional.

Proof. Since b ∈ σ_dis(T) ∪ ρ(T), the sum in (4.6) is direct by Lemma 4.4 and there exists δ > 0 such that [−δ, 0) ⊂ ρ(T(b)). It follows from [12, Theorem VI.5.10] that there exists ε > 0 such that [−δ, −δ/3] ⊂ ρ(T(µ)) for all µ ∈ [b − ε, b]. By [12, Theorem VII.4.2] the operators T(µ) + 2δ/3 are uniformly bounded from below on [b − ε, b], say T(µ) + 2δ/3 ≫ M. Then If Γ is a circle passing through 1/M and 3/δ, then Since T is continuous in the norm resolvent sense, the family of spectral projections P(µ) is uniformly continuous on the interval [b − ε, b]. Now let x_0 ∈ H. We show that x_0 is contained in the set on the right-hand side of (4.6). To this end, let b_n ∈ (b − ε, b) for n ∈ N with b_n → b. By (4.5) we can write x_0 = x_n + y_n with x_n ∈ H_1, y_n ∈ L_(0,∞)(T(b_n)). Suppose that (y_n) is not bounded. Without loss of generality assume that ‖y_n‖ → ∞, which implies that ‖x_n‖ → ∞. Clearly, P(b_n)y_n = y_n since y_n ∈ L_(0,∞)(T(b_n)) ⊂ L_(−2δ/3,∞)(T(b_n)). Set ŷ_n := P(b)y_n ∈ L_[0,∞)(T(b)). Since δ_n := ‖P(b_n) − P(b)‖ → 0 as n → ∞, we have This relation together with ‖x_n‖ → ∞ yields that the sum is not closed, which contradicts Lemma 4.4. Hence the sequences (x_n) and (y_n) are uniformly bounded and therefore ‖y_n − ŷ_n‖ → 0. Setting we obtain ‖x_0 − x_0^{(n)}‖ = ‖(P(b_n) − P(b))y_n‖ → 0. This implies that x_0 ∈ H_1 ∔ L_[0,∞)(T(b)) since the latter space is closed by Lemma 4.4. Then there exists an ε > 0 such that

Proof. First we prove that the sum on the right-hand side of (4.8) is direct and closed for all µ ∈ [µ_0, β]. It follows from Lemma 4.4 that the sum H_1 + L_(0,∞)(T(µ)) is direct and closed. Since ker(T(µ_0)) is finite-dimensional, the sum on the right-hand side of (4.8) is closed; see [9, Corollary 2.1.1]. Assume that it is not direct. Then there exist u ∈ H_1, v ∈ ker(T(µ_0)), w ∈ L_(0,∞)(T(µ)) such that u + v + w = 0 and w ≠ 0.
By Lemma 2.10 (ii) we have t(µ)[u + v] ≤ 0, which contradicts w ∈ L_(0,∞)(T(µ)) with w ≠ 0. Hence the sum on the right-hand side of (4.8) is direct and closed. Next we show that there exists a K > 0 such that

‖w‖ ≤ K ‖u + v + w‖,   u ∈ H_1, v ∈ ker(T(µ_0)), w ∈ L_(0,∞)(T(µ_0)).   (4.9)

Assume that this is not true. Then there exist x_n ∈ H_1, y_n ∈ ker(T(µ_0)), w_n ∈ L_(0,∞)(T(µ_0)) such that ‖x_n + y_n + w_n‖ = 1 and ‖w_n‖ → ∞. In this case also ‖y_n + w_n‖ → ∞ and hence ‖x_n‖ → ∞. Since

x_n/‖x_n‖ + (y_n + w_n)/‖x_n‖ = (x_n + y_n + w_n)/‖x_n‖ → 0,

Lemma 4.3 implies that the sum H_1 ∔ L_[0,∞)(T(µ_0)) is not closed, which contradicts (4.7). Hence a K > 0 with the desired property exists. Let P(µ) be the orthogonal projection onto L_(0,∞)(T(µ)) for µ ∈ [µ_0, β]. Similarly as in the proof of the previous lemma one shows that δ_µ := ‖P(µ) − P(µ_0)‖ → 0 as µ ↓ µ_0. Hence there exists an ε > 0 such that δ_µ K < 1 for all µ ∈ [µ_0, µ_0 + ε). We show that (4.8) holds for all such µ. Assume that this is not the case. Then, for some µ ∈ [µ_0, µ_0 + ε), there exists an x_0 ∈ H with ‖x_0‖ = 1 which is orthogonal to the right-hand side of (4.8). Since (4.7) is true by assumption, we can write x_0 = u + v + w with u ∈ H_1, v ∈ ker(T(µ_0)), w ∈ L_(0,∞)(T(µ_0)). By (4.9) we have ‖w‖ ≤ K. Now set y := u + v + P(µ)w, which is contained in the right-hand side of (4.8). Then

‖x_0 − y‖ = ‖(P(µ_0) − P(µ))w‖ ≤ δ_µ K < 1,

which is a contradiction to the facts that x_0 ⊥ y and ‖x_0‖ = 1.
In order to prove Proposition 4.2, we first need the following lemma.
Lemma 4.7. Let T be a holomorphic family of operators of type (B) defined on the complex domain U ⊂ C with closed forms t such that dom t(λ) = D for all λ ∈ U . Moreover, let x(λ) ∈ D for λ ∈ U such that x(·) is holomorphic.
We conclude that the function which shows (4.11).

Variational principles for norm resolvent continuous operator functions
In this section we prove that under stronger continuity assumptions on the operator function we have equality in the variational principle from Theorem 2.3.
Theorem 5.1. Let ∆ ⊂ R be an interval with right endpoint β ∈ R ∪ {∞} and let T be an operator function defined on ∆ which satisfies Assumptions (A1)-(A3) on ∆, is continuous in the norm resolvent sense on ∆, and is such that T(λ) is bounded from below for each λ ∈ ∆. Moreover, let p be a generalised Rayleigh functional for T on ∆, let γ ∈ ρ(T) ∩ ∆ with γ < β, let M_γ^+ be defined as in Definition 2.1 and let λ_e be as in (2.4).
Assume that the spectrum of T in (γ, λ e ) has no accumulation point in [γ, λ e ), i.e. σ(T ) ∩ [γ, λ e ) is empty or consists of a finite or infinite non-decreasing sequence of eigenvalues (λ n ) N n=1 with N ∈ N ∪ {∞}, counted according to their multiplicities, which can accumulate at most at λ e .
If σ(T ) ∩ (γ, λ e ) = ∅, then Remark 5.2. If T is a holomorphic family of type (B) in a neighbourhood of ∆, then one does not have to assume that the eigenvalues cannot accumulate in [γ, λ e ), but this follows from Proposition 4.2. Theorem 5.1 and Proposition 4.2 can be applied, e.g. to operator polynomials and Schur complements of certain block operator matrices; for the latter see [20].
Now let x ∈ M \ {0} such that x ⊥ L = (I − P)M. Then x ∈ D, and the relation x = Px + (I − P)x implies that

‖x‖² = ⟨Px, x⟩ + ⟨(I − P)x, x⟩ = ‖Px‖²,

which shows that x ∈ ran P = K. It follows from Lemma 2.10 (i) that t(λ_n)[x] ≥ 0, which proves (5.3).
In order to prove (5.2), assume that N is finite and let n > N. Moreover, let µ ∈ (λ_N, λ_e) be arbitrary and let P be the orthogonal projection in H onto L_(0,∞)(T(µ)). Similarly to the first part of the proof we can choose M := span{u_1, . . . , u_N} + (L_(0,∞)(T(µ)) ∩ D), which is in M_γ^+. The space L′ := (I − P)M is an N-dimensional subspace of M, which can be seen as above. Extend L′ to an (n − 1)-dimensional subspace L of M. Then

inf { p(x) : x ∈ M \ {0}, x ⊥ L } ≥ µ,

which shows (5.2) since µ ∈ (λ_N, λ_e) was arbitrary.