First order least-squares formulations for eigenvalue problems

In this paper we discuss spectral properties of operators associated with the least-squares finite element approximation of elliptic partial differential equations. The convergence of the discrete eigenvalues and eigenfunctions towards the corresponding continuous eigenmodes is studied and analyzed with the help of appropriate $L^2$ error estimates. A priori and a posteriori estimates are proved.


Introduction
Least-squares finite element formulations have been successfully used for the approximation of several problems described in terms of partial differential equations.
In particular, we consider formulations that approximate simultaneously scalar (potential) and vector (flux) variables in the spirit of first-order system least squares [4]. While least-squares schemes possess an inherent error control and are particularly suited for problems involving coupling conditions, other approaches involving mixed or hybrid schemes [6] enjoy good conservation properties. The closedness property from [10] shows how results from one approach can sometimes be transferred to the other one.
Only a few papers deal with eigenvalue problems associated with least-squares formulations. In [8] the authors apply their theory to a second order least-squares formulation of a Dirichlet eigenvalue problem. In [9] a first order least-squares formulation is introduced for the approximation of the eigenvalues of Maxwell's equations.
In this paper we aim at investigating the least-squares finite element approximation of the eigensolutions of operators associated with second order elliptic equations. Even if the proposed method may not be competitive with other solution techniques, the presented analysis sheds some light on fundamental properties of least-squares formulations, in particular in connection with the simulation of evolution problems.
We start by presenting several least-squares formulations for the approximation of the eigensolutions of the Dirichlet Laplace problem. For each formulation we characterize the eigenmodes obtained after finite element discretization and we describe the structure of the underlying algebraic systems.
We then discuss the convergence of the discrete solutions towards the continuous eigenmodes. We use the standard theory of the approximation of compact operators (see [3,5] and the references therein); it can be easily seen that standard energy estimates (in the graph norm) are not enough to guarantee the uniform convergence of the discrete solution operator sequence to the continuous solution operator. This is a consequence of the known lack of compactness of the solution operator in the energy norm; for this reason, we consider the solution operator in L^2(Ω) and we discuss various L^2(Ω) error estimates. It turns out that in the case of div formulations for FOSLS (first-order system least squares) and LL^* formulations, if the flux variable is approximated with Raviart-Thomas spaces (or, in general, with other mixed spaces [6]), then the presented approximations are optimally convergent. On the other hand, the corresponding div-curl formulations suffer, as expected, from serious issues when applied to singular solutions such as those occurring when the computational domain has reentrant corners; in this case continuous finite elements cannot correctly approximate the flux, which is not H^1(Ω) regular, and the corresponding eigenvalues converge to wrong values.
A priori and a posteriori error estimates are presented and rigorously proved for the proposed formulations.
Several numerical tests conclude the paper, confirming our results and investigating situations not covered by the theory.

The Laplace eigenvalue problem
The problem we are considering is to find λ ∈ R and u non vanishing such that
  −Δu = λu in Ω,
  u = 0 on ∂Ω.
Our problem can be written in the following standard first order formulation: find λ ∈ R and u non vanishing such that for some σ
  σ − ∇u = 0 in Ω,
  −div σ = λu in Ω,
  u = 0 on ∂Ω.
More general symmetric elliptic problems in divergence form could be considered, as well as different homogeneous boundary conditions. Since all our analysis applies with standard modifications to more general situations, we describe our theory in the simplest possible setting.
2.1. FOSLS formulation. The simplest least-squares formulation for the source problem −Δu = f is given by the minimization of the following functional [20]:
  F(τ, v; f) = ‖div τ + f‖² + ‖τ − ∇v‖²,
where the L^2(Ω) norm is considered. This leads to the following variational formulation: find σ ∈ H(div; Ω) and u ∈ H^1_0(Ω) such that
  (σ, τ) + (div σ, div τ) − (∇u, τ) = −(f, div τ)  ∀τ ∈ H(div; Ω)    (1)
  −(σ, ∇v) + (∇u, ∇v) = 0  ∀v ∈ H^1_0(Ω).
This formulation can be used in a natural way to consider the following eigenvalue problem: find λ ∈ C and u ∈ H^1_0(Ω) with u ≠ 0 such that for some σ ∈ H(div; Ω) it holds
  (σ, τ) + (div σ, div τ) − (∇u, τ) = −λ(u, div τ)  ∀τ ∈ H(div; Ω)    (F1)
  −(σ, ∇v) + (∇u, ∇v) = 0  ∀v ∈ H^1_0(Ω).
Even if the formulation is not symmetric, it can be easily shown that the eigenvalues are real. We state this result in the next proposition since its proof might have interesting consequences for the numerical approximation of our problem.
Proposition 1. Problem (F1) admits a sequence of positive eigenvalues diverging to +∞. The corresponding eigenspaces span the space H^1_0(Ω).
Proof. The result follows from the simple observation that the solution operator associated with problem (1) is exactly the same as for the standard Laplace equation. We would like however to show explicitly that the eigenvalues of (F1) are real, since this has interesting implications for the finite element discretization.
The (non-symmetric) operator form of problem (F1), with natural notation, is given by
  [A, B^T; B, C] [x; y] = λ [0, D; 0, 0] [x; y].    (2)
After integration by parts, thanks to the boundary conditions, we have D = −B^T, so that (2) can be reduced to the following equivalent symmetric Schur complement formulation
  (A − B^T C^{-1} B) x = λ B^T C^{-1} B x,    (3)
where we have used the equality y = −C^{-1} B x. Another possible way of observing that (2) corresponds to a symmetric problem is to rearrange its terms as follows:
  [A, B^T; B, C] [x; y] = λ [0, −B^T; −B, −C] [x; y].
Remark 1. One might think that problem (F1) (see in particular formulation (2)) gives rise to a number of infinite eigenvalues; however, in our formulation of problem (F1) the eigenfunctions we are looking for correspond to the component u of the solution only. We will come back to this remark later when the approximation of (F1) is considered.
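The fact that the reduction to the Schur complement produces a symmetric positive definite pencil, and hence real positive eigenvalues, can be illustrated numerically. The sketch below is not a finite element computation: it builds a random symmetric positive definite block matrix mimicking the structure of the least-squares Gram matrix [A, B^T; B, C] and checks the sign of the resulting eigenvalues.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5  # size of each block (hypothetical, not coming from a mesh)

# Random SPD block matrix [[A, B^T], [B, C]] mimicking the least-squares Gram matrix
R = rng.standard_normal((2 * n, 2 * n))
G = R @ R.T + 2 * n * np.eye(2 * n)
A, Bt = G[:n, :n], G[:n, n:]
B, C = G[n:, :n], G[n:, n:]

# Symmetric Schur complement formulation: (A - B^T C^{-1} B) x = lambda (B^T C^{-1} B) x
S = Bt @ np.linalg.solve(C, B)
lam = eigh(A - S, S, eigvals_only=True)  # generalized symmetric-definite solver

print(np.all(lam > 0))  # prints True: the eigenvalues are real and positive
```

Both matrices of the pencil are symmetric, and A − S is the Schur complement of an SPD matrix, so `eigh` applies and all eigenvalues are real and positive.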
2.2. The transpose FOSLS formulation. Since our problem is self-adjoint, another possibility is to consider the transpose of (F1): find λ ∈ R and σ ∈ H(div; Ω) with σ ≠ 0 such that for some u ∈ H^1_0(Ω) it holds
  (σ, τ) + (div σ, div τ) − (∇u, τ) = 0  ∀τ ∈ H(div; Ω)    (F1*)
  −(σ, ∇v) + (∇u, ∇v) = −λ(div σ, v)  ∀v ∈ H^1_0(Ω).
This leads to the following operator form
  [A, B^T; B, C] [x; y] = λ [0, 0; −B, 0] [x; y].    (4)
Proposition 2. Problems (F1) and (F1*) are equivalent to the reduced problem
  C y = (λ + 1) B A^{-1} B^T y.    (5)
Proof. The equivalence can be seen, for instance, by solving the matrix problem (2) for x, thus obtaining x = A^{-1}(λ D y − B^T y), which gives C y = (λ + 1) B A^{-1} B^T y, that is (5).
2.3. The LL^* formulation. Another popular choice for the approximation of the problem under consideration is the so-called LL^* formulation [11]. One of the reasons for its introduction is the possibility to deal with less regular right hand sides; moreover, it gives rise to an intrinsically symmetric formulation, which makes it appealing for the application to eigenvalue problems. In the case of the source problem it reads: find χ ∈ H(div; Ω) and p ∈ H^1_0(Ω) such that
  (χ, ξ) + (div χ, div ξ) − (∇p, ξ) = 0  ∀ξ ∈ H(div; Ω)    (6)
  −(χ, ∇q) + (∇p, ∇q) = (f, q)  ∀q ∈ H^1_0(Ω).
It turns out that this formulation is related to our original Laplace problem by the following relation: the solution u of the Laplace problem with datum f satisfies
  u = −div χ,  ∇u = ∇p − χ.    (7)
The eigenvalue problem associated with (6) is: find µ ∈ R and p ∈ H^1_0(Ω), with p ≠ 0, such that for some χ ∈ H(div; Ω) it holds
  (χ, ξ) + (div χ, div ξ) − (∇p, ξ) = 0  ∀ξ ∈ H(div; Ω)    (LL*)
  −(χ, ∇q) + (∇p, ∇q) = µ(p, q)  ∀q ∈ H^1_0(Ω).
As already anticipated, this problem is symmetric and it can be written in operator form in terms of the underlying operators. By using the links between the LL^* formulation and the original problem, as stated in (7), we can see how to relate the eigenvalues of (LL*) to the ones of the problem we are interested in.
Proposition 3. The eigenvalues µ of (LL*) are in one-to-one correspondence with the eigenvalues λ of the Laplace eigenproblem through the relation µ = λ²/(1 + λ). Moreover, the eigenfunctions u of the Laplace eigenproblem are given, up to the sign, by div χ and their gradients ∇u are equal to ∇p − χ.
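The correspondence stated in Proposition 3 can be recovered by a direct formal computation from (LL*); the following sketch assumes smooth eigenfunctions and the sign conventions of (LL*) as written above.

```latex
\text{Set } \nabla u := \nabla p - \chi.
\text{ The second equation of (LL*) gives }
(\nabla u, \nabla q) = \mu (p, q) \quad \forall q \in H^1_0(\Omega),
\text{ i.e. } -\Delta u = \mu p.
\text{ The first equation, integrated by parts, gives } \operatorname{div}\chi = -u,
\text{ hence } \Delta p = \Delta u + \operatorname{div}\chi = \Delta u - u.
\text{ If } -\Delta u = \lambda u, \text{ then } \mu p = \lambda u
\text{ yields } p = (\lambda/\mu)\, u, \text{ so that}
-\frac{\lambda^2}{\mu}\, u = \Delta p = -(1+\lambda)\, u,
\qquad \text{and therefore } \mu = \frac{\lambda^2}{1+\lambda}.
```

The relation is quadratic in λ, so it is insensitive to the sign normalization of the eigenfunctions.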

2.4. Enriching the formulations with curl σ. Since σ is a gradient, it satisfies curl σ = 0; a commonly used modification of the FOSLS methods consists in using a least-squares functional that contains the term curl σ, that is,
  F_curl(τ, v; f) = ‖div τ + f‖² + ‖curl τ‖² + ‖τ − ∇v‖².
With natural modifications the two corresponding formulations read: find λ ∈ R and u ∈ H^1_0(Ω) with u ≠ 0 such that for some σ ∈ H(div; Ω) ∩ H(curl; Ω) it holds
  (σ, τ) + (div σ, div τ) + (curl σ, curl τ) − (∇u, τ) = −λ(u, div τ)  ∀τ ∈ H(div; Ω) ∩ H(curl; Ω)    (F1curl)
  −(σ, ∇v) + (∇u, ∇v) = 0  ∀v ∈ H^1_0(Ω),
and: find λ ∈ R and σ ∈ H(div; Ω) ∩ H(curl; Ω) with σ ≠ 0 such that for some u ∈ H^1_0(Ω) it holds
  (σ, τ) + (div σ, div τ) + (curl σ, curl τ) − (∇u, τ) = 0  ∀τ ∈ H(div; Ω) ∩ H(curl; Ω)    (F1*curl)
  −(σ, ∇v) + (∇u, ∇v) = −λ(div σ, v)  ∀v ∈ H^1_0(Ω),
which lead to reduced formulations analogous to the previous ones with an appropriate modification of the matrix A.
Remark 2. Sometimes formulation (F1curl) is presented in the literature with a different choice of functional spaces, namely σ, τ ∈ H^1(Ω) instead of σ, τ ∈ H(div; Ω) ∩ H(curl; Ω). Although for smooth domains the two spaces coincide, this is not the case in the presence of singular solutions, which may belong to H(div; Ω) ∩ H(curl; Ω) but not to H^1(Ω).
In a natural way it is possible to consider the LL^* formulation associated with the formulation enriched with curl σ: find χ ∈ H(div; Ω) ∩ H(curl; Ω), p ∈ H^1_0(Ω), and z ∈ H^1(Ω), such that the corresponding optimality system is satisfied. The associated eigenvalue problem (LL*curl) is then: find λ ∈ R and p ∈ H^1_0(Ω), with p ≠ 0, such that the analogous eigenvalue equations hold for some χ ∈ H(div; Ω) ∩ H(curl; Ω) and z ∈ H^1(Ω). The operator structure of this problem is modified accordingly.

Galerkin discretization
We now discuss the Galerkin discretization of the problems we have introduced in the previous section.

3.1. Approximation of the FOSLS formulations. Let Σ_h ⊂ H(div; Ω) and U_h ⊂ H^1_0(Ω) be conforming finite element spaces. The discretization of (F1) reads: find λ_h ∈ C and u_h ∈ U_h with u_h ≠ 0 such that for some σ_h ∈ Σ_h it holds
  (σ_h, τ) + (div σ_h, div τ) − (∇u_h, τ) = −λ_h(u_h, div τ)  ∀τ ∈ Σ_h    (F1h)
  −(σ_h, ∇v) + (∇u_h, ∇v) = 0  ∀v ∈ U_h.
Analogously, the approximation of (F1*) has the following form: find λ_h ∈ C and σ_h ∈ Σ_h with σ_h ≠ 0 such that for some u_h ∈ U_h it holds
  (σ_h, τ) + (div σ_h, div τ) − (∇u_h, τ) = 0  ∀τ ∈ Σ_h    (F1*h)
  −(σ_h, ∇v) + (∇u_h, ∇v) = −λ_h(div σ_h, v)  ∀v ∈ U_h.
After introducing basis functions of Σ_h and U_h, the matrix structure of Problems (F1h) and (F1*h) is the one already anticipated in (2) and (4); it will be repeated in the next two propositions, where we characterize their eigensolutions. We will then show that the relevant eigenmodes of the two formulations are identical.
Before giving a characterization of the eigenvalues of our discrete formulation, we discuss in the following remark the solution of (possibly degenerate) generalized eigenvalue problems.
Remark 3. In general our discrete problems have the form of a generalized eigenvalue problem
  A x = λ B x,    (8)
where the matrices A and/or B may be singular. The solution of this problem satisfies the following properties.
(1) If the matrix B is invertible, then (8) is equivalent to the standard eigenvalue problem B^{-1}A x = λ x.
(2) If K = ker A ∩ ker B is not trivial, then the eigenvalue problem is degenerate and the vectors in K do not correspond to any eigenvalue of (8).
(3) If the matrix B has a non-trivial kernel ker(B) which does not contain any nonzero vector of ker(A), then it is conventionally assumed that (8) has an eigenvalue λ = ∞ with eigenspace equal to ker(B).
(4) If B is singular and A is not (which is the most common situation in our framework), then it may be convenient to switch the roles of the two matrices and to consider the problem
  B x = µ A x.
Then (µ, x) with µ = 0 corresponds to the eigenmode (∞, x) of (8); the remaining eigenmodes are (λ, x) with λ = 1/µ.
The next proposition is related to the eigensolutions of (F1h).
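Items (3) and (4) of the remark can be illustrated with a tiny example; the diagonal matrices below are arbitrary samples, chosen only so that B is singular while A is invertible. The QZ algorithm used by scipy reports the eigenvalue λ = ∞, and switching the roles of the matrices turns it into µ = 0.

```python
import numpy as np
from scipy.linalg import eig

# A is invertible, B is singular: one finite eigenvalue and one infinite one
A = np.diag([2.0, 3.0])
B = np.diag([1.0, 0.0])

lam = eig(A, B, right=False)   # generalized eigenvalues of A x = lambda B x
mu = eig(B, A, right=False)    # switched problem B x = mu A x

print(np.sort(lam.real))       # one entry is 2.0, the other is inf
print(np.sort(mu.real))        # 0.0 and 0.5: mu = 0 corresponds to lambda = inf
```

Note that the finite eigenvalues of the two problems are related by λ = 1/µ (here 2 = 1/0.5), exactly as in item (4).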
Proposition 4. Let us consider the following matrices associated with Problem (F1h).
• A is the matrix associated with the bilinear form (σ, τ) + (div σ, div τ),
• B is the matrix associated with the bilinear form −(σ, ∇v),
• C is the matrix associated with the bilinear form (∇u, ∇v).
Then the following generalized eigenvalue problem (see (2))
  [A, B^T; B, C] [x; y] = λ [0, −B^T; 0, 0] [x; y]
has three families of eigenvalues. More precisely:
(1) λ_h = ∞ with multiplicity dim Σ_h, corresponding to eigenvectors of the form (x, 0);
(2) λ_h = ∞ with multiplicity dim ker(B^T), corresponding to eigenvectors of the form (0, y) with y ∈ ker(B^T);
(3) rank(B^T) finite eigenvalues.
Proof. The dimension of the eigenproblem is dim Σ_h + dim U_h, which is clearly equal to the number of eigenvalues in the three families since dim U_h = dim ker(B^T) + rank(B^T). The eigenvalues of the first and of the second family are associated with eigenvectors in the kernel of the matrix on the right hand side. Those are of the form (x, y) with x corresponding to any element in Σ_h and y corresponding to elements of U_h in ker(B^T).
The eigenvalues of the third family are characterized by looking at the Schur complement
  (A − B^T C^{-1} B) x = λ_h B^T C^{-1} B x.
The following proposition is related to the eigensolutions of (F1*h).
Proposition 5. Let A, B, and C be the matrices introduced in Proposition 4. Then the following generalized eigenvalue problem associated with Problem (F1*h)
  [A, B^T; B, C] [x; y] = λ [0, 0; −B, 0] [x; y]
has three families of eigenvalues. More precisely:
(1) λ_h = ∞ with multiplicity dim U_h, corresponding to eigenvectors of the form (0, y);
(2) λ_h = ∞ with multiplicity dim ker(B), corresponding to eigenvectors of the form (x, 0) with x ∈ ker(B);
(3) rank(B) finite eigenvalues.
Proof. The proof is analogous to the one of Proposition 4 by considering the corresponding Schur complement
  C y = (λ_h + 1) B A^{-1} B^T y.
Since we started from a self-adjoint problem, it is not surprising that formulations (F1h) and (F1*h) are indeed equivalent. This is shown in the next proposition.
Proposition 6. The finite eigenvalues of (F1h) and (F1*h) coincide.
Proof. Solving the matrix formulation of (F1h) (see (2) and Proposition 4) for x gives x = −A^{-1}(λ_h B^T y + B^T y), yielding
  C y = (λ_h + 1) B A^{-1} B^T y,
that is, the Schur complement of the matrix formulation of (F1*h) (see the proof of Proposition 5).
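The equivalence of the finite eigenvalues of (F1h) and (F1*h) is a purely algebraic statement and can be checked numerically on any symmetric positive definite block matrix with the block structure of (2): the eigenvalues λ of the reduced form (A − B^T C^{-1} B)x = λ B^T C^{-1} B x and the values µ − 1, with C y = µ B A^{-1} B^T y, coincide. The matrices below are random samples (not assembled from finite elements).

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 4

# Random SPD block matrix [[A, B^T], [B, C]] with (generically) invertible B
R = rng.standard_normal((2 * n, 2 * n))
G = R @ R.T + 2 * n * np.eye(2 * n)
A, Bt, B, C = G[:n, :n], G[:n, n:], G[n:, :n], G[n:, n:]

# (F1h) reduced form: (A - B^T C^{-1} B) x = lambda (B^T C^{-1} B) x
S = Bt @ np.linalg.solve(C, B)
lam_f1 = np.sort(eigh(A - S, S, eigvals_only=True))

# (F1*h) reduced form: C y = (1 + lambda) (B A^{-1} B^T) y
Gm = B @ np.linalg.solve(A, Bt)
lam_f1star = np.sort(eigh(C, Gm, eigvals_only=True)) - 1.0

print(np.allclose(lam_f1, lam_f1star))  # prints True
```

Both reductions come from eliminating one block of unknowns in the same two-by-two block system, which is why the spectra agree exactly (up to round-off).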
We conclude this section with a remark on the formulations enriched with the curl.
Remark 4. The analysis presented in this section applies without modifications to the formulations enriched with the curl. The only change is the definition of the matrix A, which corresponds to the bilinear form (σ, τ) + (div σ, div τ) + (curl σ, curl τ).

3.2. Approximation of the LL^* formulation. The discretization of the LL^* formulation (LL*) is obtained after introducing discrete spaces Σ_h ⊂ H(div; Ω) and U_h ⊂ H^1_0(Ω). As already observed, this problem is symmetric and it can be written in the following matrix form
  [A, B^T; B, C] [x; y] = µ_h [0, 0; 0, M] [x; y]    (9)
after introducing in a natural way the following matrices:
• A associated with the bilinear form (χ, ξ) + (div χ, div ξ),
• B associated with the bilinear form −(χ, ∇q),
• C associated with the bilinear form (∇p, ∇q),
• M associated with the bilinear form (p, q).
The Schur complement associated with the LL^* formulation is easily seen to be equal to
  (C − B A^{-1} B^T) y = µ_h M y.
The next proposition, whose proof is immediate, characterizes the eigenvalues of the LL^* formulation.
Proposition 7. The generalized eigenvalue problem (9) has the following two families of eigensolutions: (1) µ_h = +∞ with multiplicity equal to dim Σ_h; (2) a number of positive eigenvalues µ_h equal to dim U_h.

Convergence analysis
The convergence analysis of the proposed schemes can be performed within the standard abstract setting presented in [3] (see also [5]). We first consider the convergence of the eigenmodes (and absence of spurious modes), then we discuss the rate of convergence.
4.1. Analysis of the FOSLS formulations. We start with the analysis of the first formulation that we have considered in (F1). Thanks to the equivalence shown in Proposition 6, the same analysis applies to formulation (F1*) as well.
We introduce a suitable solution operator T_{F1}: L^2(Ω) → L^2(Ω) associated with the FOSLS formulation presented in (F1). Given f ∈ L^2(Ω), we define T_{F1}f ∈ H^1_0(Ω) as the second component of the solution of (1), so that it solves the following problem for some σ ∈ H(div; Ω):
  (σ, τ) + (div σ, div τ) − (∇(T_{F1}f), τ) = −(f, div τ)  ∀τ ∈ H(div; Ω)
  −(σ, ∇v) + (∇(T_{F1}f), ∇v) = 0  ∀v ∈ H^1_0(Ω).
It is easily seen that the operator T_{F1} is compact (its range is included in H^1_0(Ω), which is compactly embedded in L^2(Ω)) and self-adjoint (it is the solution operator associated with the Laplace problem). We enumerate the reciprocals of its non-vanishing eigenvalues in increasing order, so that they form a sequence tending to +∞:
  0 < λ_1 ≤ λ_2 ≤ · · · ≤ λ_i ≤ · · · .
The corresponding eigenfunctions are denoted by u_i, i = 1, 2, . . . . We consider eigenfunctions normalized in L^2(Ω) and we repeat the λ_i's according to their multiplicities.
Let Σ_h ⊂ H(div; Ω) and U_h ⊂ H^1_0(Ω) be conforming finite element spaces. The discrete counterpart of T_{F1} is the operator T_{F1,h}: L^2(Ω) → L^2(Ω) defined as follows. Given f ∈ L^2(Ω), we define T_{F1,h}f ∈ U_h as the second component of the solution of the Galerkin approximation of (1), so that it solves the following problem for some σ_h ∈ Σ_h:
  (σ_h, τ) + (div σ_h, div τ) − (∇(T_{F1,h}f), τ) = −(f, div τ)  ∀τ ∈ Σ_h
  −(σ_h, ∇v) + (∇(T_{F1,h}f), ∇v) = 0  ∀v ∈ U_h.
Since U_h is finite dimensional, the operator T_{F1,h} is compact; moreover, it is self-adjoint (see, for instance, the equivalent matrix characterizations presented in the previous section). We denote the reciprocals of its non-vanishing eigenvalues in analogy to what we have done for the continuous operator T_{F1}:
  0 < λ_{1,h} ≤ λ_{2,h} ≤ · · · ≤ λ_{N(h),h}.
We summarize in the following proposition what is needed in order to show the convergence of the discrete eigenmodes to the continuous ones (see [3] and [5]).
Proposition 8. Let us assume that the operator sequence T_{F1,h} converges in norm to T_{F1} as h goes to zero, that is,
  ‖T_{F1} − T_{F1,h}‖_{L(L^2(Ω))} ≤ ρ(h)    (10)
with ρ(h) tending to zero as h goes to zero. Let λ_i = λ_{i+1} = · · · = λ_{i+m−1} be an eigenvalue of multiplicity m associated with the operator T_{F1}. Then, for h small enough, so that N(h) ≥ i + m − 1, the m discrete eigenvalues λ_{j,h} (j = i, . . . , i + m − 1) associated with the operator T_{F1,h} converge to λ_i. Moreover, the corresponding eigenfunctions converge, that is,
  δ(E, E_h) → 0 as h → 0,
where δ denotes as usual the gap between Hilbert subspaces, E is the continuous eigenspace spanned by {u_i, . . . , u_{i+m−1}}, and E_h is its discrete counterpart spanned by {u_{i,h}, . . . , u_{i+m−1,h}}.
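The gap δ between two finite dimensional subspaces appearing in Proposition 8 can be computed, once the subspaces are described by spanning vectors, via principal angles. The sketch below uses hypothetical coordinate vectors in place of finite element eigenfunctions, just to illustrate the quantity being measured.

```python
import numpy as np
from scipy.linalg import subspace_angles

# Two hypothetical eigenspaces of dimension 2 in R^4, given by their columns
E = np.eye(4)[:, :2]                       # span{e1, e2}
Eh = np.column_stack([np.eye(4)[:, 0],     # span{e1, (e2 + e3)/sqrt(2)}
                      (np.eye(4)[:, 1] + np.eye(4)[:, 2]) / np.sqrt(2)])

# The gap between subspaces of equal dimension is the sine of the largest
# principal angle between them
gap = np.sin(np.max(subspace_angles(E, Eh)))
print(round(float(gap), 4))  # prints 0.7071, i.e. sin(pi/4)
```

Here one direction is shared (angle 0) and the other is tilted by 45 degrees, so the gap is sin(π/4).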
We recall the standard a priori error estimate for the solution of the source problem (F1). Since the formulation is coercive, it follows with standard arguments that
  ‖σ − σ_h‖_{H(div;Ω)} + ‖u − u_h‖_{H^1(Ω)} ≤ C ( inf_{τ∈Σ_h} ‖σ − τ‖_{H(div;Ω)} + inf_{v∈U_h} ‖u − v‖_{H^1(Ω)} ).    (11)
Let us assume that the domain is a Lipschitz polygon/polyhedron; then we know that if f is in L^2(Ω), then the solution u belongs to H^{1+s}(Ω) for some s ∈ (1/2, 1]. Unfortunately, estimate (11) is not enough to obtain the uniform convergence (10) of T_{F1,h} to T_{F1}. Take, for instance, standard finite element spaces for the best approximation properties on the right hand side of (11): clearly, the regularity of σ is not enough to guarantee a rate of convergence, since div σ = −f cannot be assumed more regular than L^2(Ω), whence σ in general is not in H^{1+s}.
The approximation of σ can be improved by using a more natural discretization of H(div; Ω), such as the Raviart-Thomas spaces. However, also in this case we cannot get a rate of convergence out of this estimate, for the same reason as before.
What we have observed is a well known fact due to the lack of compactness of the problem we are studying, when considered in terms of both components of the solution.
On the other hand, the a priori estimate (11) is a very strong result, since it involves the error in the H(div; Ω) norm of σ and the error in the H^1(Ω) norm of u combined together. For the uniform convergence it is enough to estimate the error in the L^2(Ω) norm of the component u only. This can be done by using a standard duality argument, and the corresponding result is stated in the next lemma.
Lemma 9. Let u ∈ H^{1+s}(Ω) (s > 1/2) be the second component of the solution to (1) and u_h ∈ U_h the corresponding numerical solution. Assume that the finite element spaces Σ_h and U_h satisfy suitable approximation properties (satisfied, in particular, by Raviart-Thomas spaces and standard continuous elements, respectively). Then the following estimate holds true:
  ‖u − u_h‖_{L^2(Ω)} ≤ C h^s ( ‖σ − σ_h‖_{H(div;Ω)} + ‖u − u_h‖_{H^1(Ω)} ).
Proof. This proof has essentially been presented already in [2, Sec. 7], in a different context and for convex domains (see also [12]).
We aim at providing a refined L^2 estimate of the error u − u_h for the formulation (1) and its discretization (with an appropriate choice of the finite element spaces). The error will be estimated in terms of the natural error in the graph norm. We consider the following dual problem (which is closely related to the formulation (6)): find χ ∈ H(div; Ω) and p ∈ H^1_0(Ω) such that
  (χ, ξ) + (div χ, div ξ) − (∇p, ξ) = 0  ∀ξ ∈ H(div; Ω)    (12)
  −(χ, ∇q) + (∇p, ∇q) = (g, q)  ∀q ∈ H^1_0(Ω).
If the domain is convex (or, in general, if the domain is smooth enough so that the Poisson problem has H^2 regularity), the solution of the above problem enjoys the additional regularity (13), so that, in particular, div χ = g; moreover, a corresponding stability bound is valid. In the case of the regularity assumed here (s > 1/2), (13) is valid in variational form with g ∈ H^{1+s}(Ω) ∩ H^1_0(Ω), and we obtain the stability bound (14). Taking ξ = σ − σ_h and q = u − u_h as test functions in (12), summing the two equations, and using the error equations related to (1) and its discretization, we obtain the desired duality identity; using the approximation estimates assumed for Σ_h and U_h and the bound (14), we finally get the claimed estimate. The result of the previous lemma directly gives the uniform convergence that implies the convergence of the eigenvalues according to Proposition 8.
Theorem 10. Under the same hypotheses as in Lemma 9 the uniform convergence (10) holds true.
Proof. Given f ∈ L^2(Ω), Lemma 9 combined with the stability of the continuous and discrete source problems gives
  ‖T_{F1}f − T_{F1,h}f‖_{L^2(Ω)} ≤ C h^s ‖f‖_{L^2(Ω)},
whence (10) holds with ρ(h) = C h^s.
Let us now move to the analysis of the rate of convergence. We start with the estimate of the eigenfunctions. Standard Babuška-Osborn theory (see [3] or [5, Th. 9.10]) implies the following result.
Theorem 11. With the notation of Proposition 8, the following bound holds:
  δ(E, E_h) ≤ C sup_{f ∈ E, ‖f‖_{L^2(Ω)} = 1} ‖(T_{F1} − T_{F1,h}) f‖_{L^2(Ω)}.    (15)
In order to bound the right hand side in (15) we can use the standard energy norm estimate for (1), which for data in the (finite dimensional and regular) eigenspace E provides a rate of convergence. The final estimate is summarized in the following theorem.
Theorem 12. Let λ_i = λ_{i+1} = · · · = λ_{i+m−1} be an eigenvalue of multiplicity m; denote by E = span{u_i, . . . , u_{i+m−1}} its eigenspace and by E_h = span{u_{i,h}, . . . , u_{i+m−1,h}} the space generated by the corresponding discrete eigenfunctions. Then for all j = i, . . . , i + m − 1 there exists u_h ∈ E_h such that the optimal error estimate (16) holds.
Once we have the optimal estimate for the eigenfunctions, it is straightforward to obtain the analogous optimal estimate for the eigenvalues. In this case, since we have seen that our formulation is symmetric (see for instance the Schur complement formulation (3)), we obtain as usual a doubled order of convergence.
Theorem 13. Let λ_i = λ_{i+1} = · · · = λ_{i+m−1} be an eigenvalue of multiplicity m and denote by λ(h) the quantity appearing on the right hand side of estimate (16). Then the error of the discrete eigenvalues is bounded by the square of this quantity, that is |λ_i − λ_{j,h}| ≤ C λ(h)² for j = i, . . . , i + m − 1.
Remark 5. One of the most commonly used schemes for the approximation of (1), based on Raviart-Thomas spaces, is RT_{k−1} − P_k (k ≥ 1). In this case the rate of convergence predicted by (16) is O(h^k), provided u belongs to H^{k+1}(Ω). In particular, for the lowest order choice, u ∈ H^2(Ω) implies first order convergence O(h) for the eigenfunctions and second order convergence O(h²) for the eigenvalues.
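The convergence of the lowest order choice can be checked in a one-dimensional analogue, where H(div; (0,1)) = H^1(0,1) and the lowest order Raviart-Thomas space reduces to continuous piecewise linears. The sketch below assembles by hand the matrices A, B, C of Proposition 4 for −u'' = λu on (0,1) and computes the smallest eigenvalue through the reduced form Cy = (1 + λ)BA^{-1}B^T y of (5); the assembly is a simplified illustration, not code from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def fosls_min_eig(n):
    """Smallest FOSLS eigenvalue of -u'' = lambda*u on (0,1), P1 spaces, n cells."""
    h = 1.0 / n
    # Sigma_h: continuous P1 on all n+1 nodes (1D analogue of lowest-order RT)
    main = np.full(n + 1, 4.0); main[0] = main[-1] = 2.0
    M = h / 6.0 * (np.diag(main) + np.diag(np.ones(n), 1) + np.diag(np.ones(n), -1))
    main = np.full(n + 1, 2.0); main[0] = main[-1] = 1.0
    K = 1.0 / h * (np.diag(main) - np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1))
    A = M + K  # matrix of (sigma, tau) + (sigma', tau')

    # U_h: interior P1 hat functions; C = (u', v'), B = -(sigma, v')
    C = 1.0 / h * (2.0 * np.eye(n - 1) - np.diag(np.ones(n - 2), 1)
                   - np.diag(np.ones(n - 2), -1))
    B = np.zeros((n - 1, n + 1))
    for i in range(1, n):            # interior node i
        B[i - 1, i - 1] = -0.5       # -(phi_{i-1}, phi_i') = -1/2
        B[i - 1, i + 1] = 0.5        # -(phi_{i+1}, phi_i') = +1/2

    G = B @ np.linalg.solve(A, B.T)
    mu = eigh(C, G, eigvals_only=True)   # C y = mu (B A^{-1} B^T) y, mu = 1 + lambda
    return np.min(mu) - 1.0

err20 = abs(fosls_min_eig(20) - np.pi ** 2)
err40 = abs(fosls_min_eig(40) - np.pi ** 2)
print(err40 < err20)  # the error with respect to pi^2 decreases under refinement
```

The smallest computed eigenvalue approaches π², the first eigenvalue of the 1D Dirichlet Laplacian, and the error decreases when the mesh is refined.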
Remark 6. If standard (nodal) finite elements are used for the definition of Σ_h, then the approximation properties assumed in Lemma 9 are no longer valid. It is not clear in this case whether the uniform convergence (10) is satisfied and whether the eigenmodes are well approximated. We present some numerical experiments in Section 6 showing that the method seems to work in simple cases.

4.2. Analysis of the LL^* formulation. The convergence analysis for the LL^* formulation can be performed in a similar way as for the FOSLS formulation. We consider the solution operator T_{LL*}: L^2(Ω) → L^2(Ω) associated with the LL^* formulation: given f ∈ L^2(Ω), T_{LL*}f ∈ H^1_0(Ω) solves the following problem for some χ ∈ H(div; Ω):
  (χ, ξ) + (div χ, div ξ) − (∇(T_{LL*}f), ξ) = 0  ∀ξ ∈ H(div; Ω)
  −(χ, ∇q) + (∇(T_{LL*}f), ∇q) = (f, q)  ∀q ∈ H^1_0(Ω).
The corresponding discrete operator T_{LL*,h} is defined by T_{LL*,h}f ∈ U_h solving the analogous discrete problem for some χ_h ∈ Σ_h. As for the FOSLS formulation, the uniform convergence of T_{LL*,h} to T_{LL*} is related to an L^2(Ω) estimate for the LL^* formulation, which can be derived by using a duality argument based on the following auxiliary problem: find χ̃ ∈ H(div; Ω) and p̃ ∈ H^1_0(Ω) such that
  (χ̃, ξ) + (div χ̃, div ξ) − (∇p̃, ξ) = 0  ∀ξ ∈ H(div; Ω)
  −(χ̃, ∇q) + (∇p̃, ∇q) = (g, q)  ∀q ∈ H^1_0(Ω).
Then the following theorem can be proved as in Lemma 9.
Theorem 14. Let us assume the same regularity for the solution of our problem as in Lemma 9. Then the following uniform convergence holds true:
  ‖T_{LL*} − T_{LL*,h}‖_{L(L^2(Ω))} ≤ ρ(h),
where ρ(h) tends to zero as h goes to zero.
Remark 7. Using the previous theorem and the abstract results about the approximation of eigenvalue problems (see Proposition 8 and [3,5]), together with the equivalence stated in Proposition 3, theorems analogous to Theorems 12 and 13 can be obtained.

4.3. Remarks on the formulation enriched with curl σ. In this section we recall some issues related to the formulations presented in Subsection 2.4. First of all, we observe that in this case it is not possible to use Raviart-Thomas elements for the definition of Σ_h. Indeed, the conformity in H(div; Ω) implies the continuity of the normal trace across elements (which is compatible with Raviart-Thomas elements), while the conformity in H(curl; Ω) requires the continuity of the tangential trace. In practice, if Σ_h contains piecewise polynomials, it must be made of continuous elements, so that we have Σ_h ⊂ H^1(Ω).
A duality argument leading to a refined L^2(Ω) estimate for the div-curl source problem associated with formulation (F1curl) was presented in [19], where, under certain hypotheses on the domain, an error estimate of order O(h^t) with some t > 1 was shown. On the other hand, in [14] it was shown that the space H^1(Ω) ∩ H_0(curl; Ω) is closed in H(div; Ω) ∩ H_0(curl; Ω). This fact has negative consequences for the finite element approximation of the solution of (F1curl) and of (LL*curl) when σ does not belong to H^1(Ω); this has been observed, in the case of least-squares finite element methods, in [18,16] and later in the case of the finite element approximation of Maxwell's eigenvalues in [15].
In Section 6 we show an example of the bad behavior of the discrete solution in the presence of singularities. We believe that a modification of the scheme in the spirit of what has been proposed in [18,16] and [15] could lead to good results.

A posteriori analysis
In this section we show how it is possible to define a residual based a posteriori error estimator and to prove its equivalence with the actual error. For simplicity, we discuss only the case of the FOSLS formulation (F1), even if analogous constructions can be performed for the other formulations.
Usually, least-squares finite element formulations come with an intrinsic a posteriori estimator based on the functional used for the definition of the method. However, in the case of the eigenvalue problem we presented, we are computing eigensolutions of the operator associated with the least-squares formulation of the source problem. It follows that the construction and the analysis of our a posteriori error estimator will be performed in a more conventional way, as for standard variational formulations.
The analysis we are presenting uses arguments that have already been adopted in the literature for analogous problems. We refer, in particular, to [17] for the approximation of the standard Laplace eigenproblem and to [1,13] for the source Laplace problem in mixed form. The interested reader is also referred to [7] for the Laplace eigenproblem in mixed form.
We consider an estimator η_T defined on each single element T, which gives as usual the global estimator η² = Σ_T η_T². The next theorem shows the reliability of the proposed error indicator. For the sake of readability we state the result in the case of a simple eigenvalue; more general situations can be handled with standard arguments. We consider the approximation of (F1) where Σ_h is one of the standard mixed families (Raviart-Thomas, Brezzi-Douglas-Marini, etc.) and U_h is a standard finite element space of continuous piecewise polynomials in H^1_0(Ω). We do not impose any condition on the polynomial order of Σ_h and U_h.
Theorem 15 (Reliability). Let λ ∈ R be a simple eigenvalue of (F1) with eigenfunction u ∈ H^1_0(Ω) and let σ ∈ H(div; Ω) be the other component of the solution. Consider the approximation λ_h of λ with eigenfunction u_h ∈ U_h converging to u (this can be obtained by appropriate normalization and choice of the sign) and let σ_h ∈ Σ_h be converging analogously to σ. Then there exists a constant C, depending only on the choice of the spaces Σ_h and U_h and on the shape of the elements, such that the error is bounded by the estimator, up to the multiplicative constant C.
Proof. Let us start with the estimate of ‖σ − σ_h‖_{L^2(Ω)}. We consider the Helmholtz decomposition of the error σ − σ_h into a gradient part ∇z and a rotational part curl β. It is then standard to estimate ∇z in terms of the estimator, where z_I denotes an approximation of z in U_h and where we use the error equation associated with our formulation. The estimate of curl β is performed as usual by considering the Scott-Zhang interpolant β_I of β; we observe that we have
  (curl β, curl β_I) = −(σ − σ_h, curl β_I) = 0,
which follows by choosing τ = curl β_I in the corresponding error equation. Let us now move to the estimate of ∇(u − u_h). From our error equation we obtain an identity valid for all v ∈ U_h, which we use with w ∈ U_h, the Scott-Zhang interpolant of u − u_h. The second term in the resulting expression can easily be bounded by ‖σ − σ_h‖_{L^2(Ω)} ‖∇(u − u_h)‖_{L^2(Ω)}, so that we only have to estimate the first one. By standard arguments we obtain a bound for ‖∇(u − u_h)‖_{L^2(Ω)}, which together with the obtained estimate for ‖σ − σ_h‖_{L^2(Ω)} implies the result.
The efficiency of the proposed estimator can be shown, as is standard, by local inverse inequalities and the use of suitable bubble functions. Without giving any additional detail we state the final result.
Theorem 16 (Efficiency). Under the same hypotheses as for the reliability result, the error is, up to a multiplicative constant, an upper bound for our estimator.

Numerical examples
In this section we report some numerical examples that confirm the theoretical results of this paper. Moreover, we shall show how the a posteriori analysis developed in Section 5 can be used in the framework of an adaptive scheme.
6.1. A priori convergence: FOSLS formulation. In order to confirm the convergence rates stated in Theorems 12 and 13, we first consider a square domain Ω = ]0, 1[², where the solution of the Laplace eigenvalue problem is well known. We compare the solutions computed with a standard finite element formulation (continuous Lagrangian elements of order one), a standard mixed finite element formulation (based on lowest order Raviart-Thomas elements), and the FOSLS formulation (F1h), where we have made three choices for the space Σ_h: Raviart-Thomas elements, Brezzi-Douglas-Marini elements, and standard Lagrangian elements of lowest order; in all cases we use continuous piecewise linear polynomials for the space U_h in the FOSLS formulation. It turns out that the results are pretty much comparable and that also in the case of the FOSLS formulation with Lagrangian elements, which is not covered by our theory, we obtain reasonable results. Figure 1 shows various error quantities related to the approximation of the smallest eigenvalue with the considered numerical schemes.
6.2. The case of an L-shaped domain. We computed the eigenvalues of our problem on an L-shaped domain with continuous piecewise polynomials for both variables. From the convergence plots, shown in Figure 2, it is clear that the first five eigenvalues have different convergence properties. In particular, the first (singular) eigenvalue is not converging correctly; it can actually be shown that it converges optimally towards a wrong value. This behavior is similar to what has been previously observed for other formulations involving div and curl of σ (see, for instance, [18,16,15]).
6.3. A posteriori analysis and adaptive algorithm. The a posteriori error estimator studied in Section 5 can be naturally used in order to drive an adaptive scheme within the usual SOLVE-ESTIMATE-MARK-REFINE cycle, when Dörfler marking is adopted.
We used the FOSLS formulation with Raviart-Thomas elements in order to approximate the fundamental mode of the Laplace eigenvalue problem on an L-shaped domain. Figure 3 shows the error plots as a function of the number of degrees of freedom corresponding to different choices of the Dörfler bulk parameter ϑ. Uniform refinement corresponds to the choice ϑ = 1. The results show that the choice ϑ = 0.3 gives optimal convergence.
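For completeness, the MARK step with Dörfler (bulk) marking can be sketched as follows: given elementwise indicators η_T, it selects a minimal set M of elements with Σ_{T∈M} η_T² ≥ ϑ Σ_T η_T² (conventions with ϑ² also appear in the literature). The indicator values below are made up for illustration.

```python
import numpy as np

def dorfler_mark(eta, theta):
    """Return indices of a minimal set M with sum_M eta^2 >= theta * sum eta^2."""
    eta2 = np.asarray(eta) ** 2
    order = np.argsort(eta2)[::-1]           # largest indicators first
    cumulative = np.cumsum(eta2[order])
    m = int(np.searchsorted(cumulative, theta * eta2.sum())) + 1
    return order[:m]

eta = np.array([0.5, 0.1, 0.4, 0.2])         # made-up elementwise indicators
marked = dorfler_mark(eta, theta=0.6)
print(sorted(marked.tolist()))               # prints [0, 2]
```

With ϑ = 1 all elements are marked, which reproduces uniform refinement, consistently with the convention used in the experiments above.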