Unusual square roots in the ghost-free theory of massive gravity

A crucial building block of ghost-free massive gravity is the square root function of a matrix, a problematic entity from the viewpoint of existence and uniqueness properties. We accurately describe the freedom in choosing a square root of a (non-degenerate) matrix; it has discrete and (in special cases) continuous parts. When continuous freedom is present, the usual perturbation theory in terms of matrices can be critically ill defined for some choices of the square root. We consider a new formulation of massive and bimetric gravity which deals directly with eigenvalues (in the guise of elementary symmetric polynomials) instead of matrices. It allows for a meaningful discussion of perturbation theory in such cases, even though certain non-analytic features arise.


Introduction
Over the last century the theory of general relativity has been confronted with experiment at ever increasing precision across a wide range of scales. It works very well in many different circumstances, from Solar System tests and even GPS technology to cosmological perturbation theory. However, in the latter case there are a number of issues that might hint at modifications of GR, including the missing mass and accelerated expansion problems. This motivates a whole variety of approaches to modify or extend the theory of gravitational interactions. Massive (and bimetric) gravity represents a rather popular option: one which is theoretically challenging but promising and relatively well studied.
The idea of a massive graviton dates back to the classical work of Fierz and Pauli in 1939 [1], which at the time was more of a theoretical exercise. At later times, the naive expectation

JHEP06(2017)130
was that the Yukawa exponent should give rise to an effective cut-off for the gravitational force at large scales, leading to a possible explanation of the Dark Energy phenomenon. In reality, of course, it turned out not to be that simple. And, even at the most formal level, one of the earliest issues was the infamous vDVZ discontinuity (van Dam-Veltman-Zakharov [2,3]) discovered in 1970: when the graviton mass tends to zero, the (by definition, linearised) Fierz-Pauli theory does not restore massless GR since the scalar mode does not completely decouple. This problem can be resolved via non-linear effects known as the Vainshtein mechanism [4], proposed in 1972; see also ref. [5].
However, the second significant problem was precisely about building a non-linear realisation of the Fierz-Pauli model. In GR ten independent variables are reduced to two degrees of freedom because the lapse and shifts play the role of four Lagrange multipliers enforcing four first class constraints in the spatial sector of four otherwise independent variables. If one modifies the theory, the non-dynamical variables generically would enter the action non-linearly leading to disappearance of constraints in the spatial sector and to six degrees of freedom being left. Five of them are the usual polarisations of massive spin two, while the remaining one is a ghost, the Boulware-Deser ghost [6]. Therefore, a modified theory needs to have a constraint for elimination of the ghost. A beautiful solution was suggested in 2010 by de Rham, Gabadadze and Tolley [7,8]. It has a pair of second class constraints for the spatial sector [9,10].
Obviously, such models require a second metric on top of the physical one (in order to have an invariant non-derivative interaction potential, in the first place). Initially, the model made use of Minkowski metric η µν which serves as a proper background for perturbation theory in the weak gravity limit. However, it was quickly generalised to an arbitrary non-dynamical symmetric tensor f µν [11,12], and even to the full-fledged bimetric gravity [12,13] with two dynamical metrics g µν and f µν . Massive gravity and its various extensions have been widely applied in cosmological settings. Having a viable cosmology with massive gravity appears non-trivial [14] but not impossible [15,16]. Actually, it might even be good enough for reproducing effects of Dark Matter [17]. Nice reviews of massive and bimetric gravity can be found in refs. [18,19].
As we have already stated above, massive gravity has two metrics, one of which (g_µν) is the physical one which couples to matter, while the second is an auxiliary (fiducial) metric which can be either fixed or dynamical. They enter through an interaction potential made of √(g⁻¹f), the square root of the matrix X^µ_ν ≡ g^{µα} f_{αν}. Among the very many papers devoted to different aspects of bimetric gravity, only very few [20][21][22][23][24] deal with the formal aspects of solving the matrix equation A² = X which arises in the theory and defines the interaction between the metrics. This paper aims at filling this gap, based on our new approach [25] which deals with eigenvalues instead of matrix components.
The paper is organised as follows. In section 2 we review bimetric gravity. In section 3 (and appendix A) we describe square roots of matrices and the related problems of massive gravity. In section 4 we describe our new formulation of the model. In section 5 we apply it to perturbation theory around unusual square roots with continuous ambiguity. Finally, in section 6 we discuss implications and conclude.

Bimetric theory
Let us first review the basic foundations of bimetric theory. The action can be written (in four dimensions) as

S = M_g² ∫ d⁴x √−g R(g) + M_f² ∫ d⁴x √−f R(f) − 2m² M_eff² ∫ d⁴x √−g Σ_{n=0}^{4} β_n e_n(√(g⁻¹f)),   (2.1)

with the elementary symmetric polynomials e_n ≡ Σ_{i1<i2<...<in} λ_{i1} λ_{i2} · · · λ_{in} of the eigenvalues λ_i of the matrix √(g⁻¹f) ≡ √X. By definition we take e_0(M) = 1, and then the following recurrence relation (Newton's identity) can be proven:

e_n(M) = (1/n) Σ_{k=1}^{n} (−1)^{k+1} e_{n−k}(M) tr(M^k).
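The recurrence above is easy to check numerically. The following is a minimal sketch (the helper name `elem_sym_polys` is ours, not from the paper), computing the e_n of a matrix's eigenvalues from traces of its powers:

```python
import numpy as np

def elem_sym_polys(M):
    """Elementary symmetric polynomials e_0..e_n of the eigenvalues of M,
    computed via Newton's identities from the traces of powers of M."""
    n = M.shape[0]
    e = [1.0]                      # e_0 = 1 by definition
    Mk = np.eye(n)
    traces = []
    for k in range(1, n + 1):
        Mk = Mk @ M
        traces.append(np.trace(Mk))
    for m in range(1, n + 1):
        # Newton's identity: m e_m = sum_{k=1}^m (-1)^{k+1} e_{m-k} tr(M^k)
        e.append(sum((-1) ** (k + 1) * e[m - k] * traces[k - 1]
                     for k in range(1, m + 1)) / m)
    return e

M = np.diag([1.0, 2.0, 3.0])
e = elem_sym_polys(M)
# for a diagonal matrix the e_n are sums of products of eigenvalues:
# e_1 = 6, e_2 = 11, e_3 = 6
```

For a diagonal matrix the result can be read off directly, which makes the identity easy to verify by hand.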

Similarly for f_µν one finds the corresponding equation of motion from eq. (2.7). In this paper we consider the massive gravity regime with the auxiliary metric f_µν = η_µν, and therefore the R_f term in the action should be ignored. However, the issues we discuss are generic for any situation in massive and bimetric gravity in the metric formulation.

The constraint analysis
We are interested in how far the good properties of the model, including the absence of the Boulware-Deser ghost, extend to unusual square roots. Initially, the counting of degrees of freedom was done in perturbation theory around the trivial square root of the unit matrix. However, this left open the question of self-consistency beyond the perturbative level, which requires the methods of non-perturbative Hamiltonian analysis.
The latter was first done by Hassan and Rosen in a series of papers starting from [10], with the simplest case of the β1 model and Minkowski fiducial metric, which is enough for the purposes of our discussion. They used the ADM decomposition for g_µν,

g_µν dx^µ dx^ν = −N² dt² + γ_ij (dx^i + N^i dt)(dx^j + N^j dt),

where the lapse N, the shift N^i, and the spatial metric γ_ij are the usual ADM variables for g_µν, and δ_ij is the Euclidean spatial metric (as a part of the fiducial Minkowski η_µν). The first-order Lagrangian of the β1 model can be written in these variables, with a number 3 in the potential whose sole purpose is to compensate the effective cosmological constant from the β1 term; R_0 and R_i are the usual constraints of GR, and π^ij is the momentum canonically conjugate to γ_ij. The crucial feature of generic massive gravity is that the lapse and shift enter the potential term non-linearly, and therefore they fail to serve as Lagrange multipliers. Their variations give algebraic equations for themselves instead of constraining the spatial sector. That is how one gets 6 degrees of freedom: 5 of the massive spin two field, and 1 scalar ghost. A way out would be a very special potential which allows one (primary) constraint to survive in the spatial sector. Hassan and Rosen have shown that one can make a redefinition of the shift vector, linear in the lapse, such that the Lagrangian density of the β1 model is rendered linear in the lapse.

It allows the lapse variable to serve as a Lagrange multiplier and constrain the spatial sector, while the new shift n^i is determined by its own (algebraic) equation of motion. Namely, one can check by direct calculation [10] what the matrices A and B are, provided that the transformation of shifts is obtained by solving equation (2.8). The equation of motion for n^i factorises; the second factor is the Jacobian of the transformation (2.8) and cannot be equal to zero, while the vanishing of the first factor determines the shift:

n^i = −R_j δ^{ji} (4m⁴ det γ + R_k δ^{kl} R_l)^{−1/2}.

The variation with respect to N then gives, after substitution of the shifts, the constraint equation for the spatial sector. Its conservation in time would amount to another constraint of a second class pair. At the final step of the Hamiltonian analysis the value of the lapse would be determined. Let us note that this proof allows for at least some freedom in choosing the square root √X. Indeed, the 3 × 3-dimensional square root remains unspecified, and the freedom of changing the sign of the temporal eigenvalue can be realised by simply changing the overall sign of √X together with the explicit three-dimensional freedom. However, we see that changing the signs of A and B does not alter the conclusions beyond simply changing the sign of the shift: n^i = +R_j δ^{ji} (4m⁴ det γ + R_k δ^{kl} R_l)^{−1/2}. Therefore, it seems that the proof of Hassan and Rosen has pretty much generic consequences, beyond the standard choice of the square roots. One can also give independent arguments which look even more generic [28]. It is possible to write down the β1 term as β1 √γ [F] with the constraint, enforced by Lagrange multipliers, that F² = N X. Then one can show that a constraint appears in the spatial sector [28]. Therefore it might be sensible to consider massive gravity with non-standard square roots, and we must understand the mathematical structures behind them.

The structure of square roots
The mathematics of extracting square roots of matrices amounts to solving the simple matrix equation

A² = X,
where X is a given n × n matrix (g⁻¹f in our case), and A is an unknown which we denote by A ≡ √X. Note that the operation of squaring a matrix is respected by similarity transformations, in the sense that (S A S⁻¹)² = S A² S⁻¹ for any non-singular matrix S. As usual, it is more convenient to start the discussion over the field of complex numbers. As is well known [27], any matrix can be brought to the Jordan normal form by a similarity transformation. Therefore, it is enough to understand how to take square roots of matrices in Jordan normal form, for otherwise a suitable similarity transformation can be applied first. It can be shown, see appendix A and ref. [27], that a square root of a Jordan normal form is obtained by taking square roots of the separate Jordan blocks,¹ which is always possible (recall that we are working with complex numbers) apart from the case of degenerate (λ = 0) Jordan blocks of dimension greater than one. The latter case does not concern us much since our matrices are by definition non-degenerate (see however [21,22]). The details are given in appendix A, while here we assume for simplicity that the matrix X is diagonalisable, i.e. all its Jordan blocks are 1-dimensional.
For any diagonalisable matrix X = S diag(λ_1, …, λ_n) S⁻¹, the square roots √X = S diag(±√λ_1, …, ±√λ_n) S⁻¹ feature discrete freedom which corresponds to the 2^n elements of (Z_2)^n. If some eigenvalues are equal then a larger freedom also appears. Indeed, if we have chosen +√λ in one place and −√λ in another, then a non-trivial similarity transformation in the subspace spanned by the corresponding eigenvectors would change the square root √X but not the matrix X itself, for the latter is proportional to the identity in this subspace.
Suppose that the continuous freedom is present. With a given choice of eigenvalues for √X, the set of different solutions for √X naturally has the structure of a smooth manifold. Indeed, the group of similarity transformations can be viewed as the adjoint action of the general linear group GL(n, C), given by L_S : X → S X S⁻¹. The stabiliser subgroup (or centraliser) of a given matrix X is, by definition, the subgroup which maps X to itself, S_X ≡ {S ∈ GL(n, C) : L_S X = X}. Obviously, S_√X ⊆ S_X, and the continuous freedom arises when S_√X ≠ S_X, which is true if and only if equal eigenvalues of X were chosen to have unequal square roots. In this case S_X naturally acts on the set of square roots, and the freedom of choosing the square root is parametrised by the homogeneous space S_X/S_√X [26].
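Both kinds of freedom can be seen numerically. The sketch below (helper name `all_square_roots` is ours) enumerates the 2^n discrete roots of a diagonalisable matrix, and then exhibits the continuous freedom for X = I with the sign choice diag(1, −1):

```python
import itertools
import numpy as np

def all_square_roots(X):
    """All square roots of a diagonalisable matrix obtained from the 2^n
    discrete sign choices on the eigenvalue square roots (complex allowed)."""
    lam, S = np.linalg.eig(X.astype(complex))
    roots = []
    for signs in itertools.product([1, -1], repeat=len(lam)):
        A = S @ np.diag(np.array(signs) * np.sqrt(lam)) @ np.linalg.inv(S)
        roots.append(A)
    return roots

X = np.diag([4.0, 9.0])
roots = all_square_roots(X)          # diag(+-2, +-3): four discrete roots
ok = all(np.allclose(A @ A, X) for A in roots)

# continuous freedom: for X = I with the choice diag(1, -1), a similarity
# transformation changes the root but still squares to the identity
S = np.array([[1.0, 1.0], [0.0, 1.0]])
B = S @ np.diag([1.0, -1.0]) @ np.linalg.inv(S)
ok2 = np.allclose(B @ B, np.eye(2)) and not np.allclose(B, np.diag([1.0, -1.0]))
```

Here X = diag(4, 9) has distinct eigenvalues, so only the four discrete roots exist; for the identity matrix the equal eigenvalues open up the continuous family.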

On conditions of reality
Of course, we are interested in classifying the real square roots. In this case, one has to bear in mind that a real matrix can have a pair of complex eigenvalues, complex conjugate to each other. As an example, note that the matrix ((a, b), (−b, a)) has the normal form diag(a + ib, a − ib). In this case we need to choose the square roots √(a ± ib) to be complex conjugate to each other if we want the inverse similarity transformation to bring the root back to real values. This narrows the freedom.
Note also that if the matrix X has negative eigenvalues, then their square roots are imaginary. However, if they come in pairs, or more generally if there is an even number of identical Jordan blocks with negative eigenvalues, then the √λ-s can be chosen in complex conjugate pairs (which of course invokes the continuous freedom), and the matrix √X can be made real by a suitable similarity transformation from those allowed by the continuous freedom. Otherwise, a real square root does not exist. See also ref. [29] for a geometrical and physical understanding of the reality conditions.
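A minimal numerical illustration of this reality mechanism (the particular matrix S is our own choice): for X = −I the imaginary eigenvalue roots ±i, taken as a conjugate pair and conjugated by a suitable S, combine into a real square root.

```python
import numpy as np

# X = -I has a pair of equal negative eigenvalues; choosing the eigenvalue
# square roots +i and -i (a complex-conjugate pair) and conjugating by a
# suitable S yields a real matrix square root.
X = -np.eye(2)
S = np.array([[1.0, 1.0], [1.0j, -1.0j]])   # mixes the two eigendirections
A = S @ np.diag([1.0j, -1.0j]) @ np.linalg.inv(S)
A = A.real                                   # imaginary parts cancel
# a single unpaired negative eigenvalue admits no real square root
```

The result is the rotation-like matrix ((0, 1), (−1, 0)), whose square is indeed −I.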

The problem with variations
In the standard approach to massive gravity we make the assumption that the object δ√X ≡ √(X + δX) − √X exists as an infinitesimal variation of √X. This assumption can break down for non-standard square roots [23]. This is somewhat unusual. Indeed, usually we can even have a series expansion in powers of δX, at least around a solution with proportional metrics for which X = c · I with c ∈ R. However, even for proportional backgrounds, if we choose √I ≠ I then a generic δX does not commute with √I, and we cannot write the usual expansion for √(I + δX) in terms of powers of δX. This makes the standard variations problematic. And actually, in certain cases, the problem is more profound than a mere technicality: in cases of continuous freedom a smooth variation of √X does not exist at all. Consider the 2 × 2 unit matrix I. It has two isolated roots ±I, together with a continuous family of roots with eigenvalues +1 and −1; by contrast, a matrix with distinct eigenvalues, say diag(1, 4), has only four isolated roots diag(±1, ±2). Let us recapitulate the situation. If the freedom is only of discrete type, then it is basically a technical issue of finding a proper series expansion when X and δX do not commute [30]. For example, the linear correction solves an equation of Sylvester type:

√X δ√X + δ√X √X = δX.

The problem is to find δ√X explicitly. It can be done at the expense of using clever field redefinitions [31].
Continuous freedom presents a much worse problem. Typically, in symmetric set-ups we can have coincident eigenvalues in the background matrix X, and therefore freedom of continuous type in √X. A generic perturbation δX would split the eigenvalues, and only some discrete subset of the possible solutions for √X would be smoothly related to those for √(X + δX). Perturbation theory in terms of matrices is not well defined. Note that the approach with the Sylvester equation works well if and only if the spectra of √X and −√X² have empty intersection [31,32]. Obviously, it fails precisely when some equal eigenvalues have different square roots and continuous freedom exists. Note though that the continuous freedom does not affect the eigenvalues, which are the only thing that enters the action. Therefore, it should be better to perform the variations at the level of eigenvalues. We have recently proposed one such method [25]. In this approach we are not interested in square root matrices; instead we define the polynomials e_i in the action (2.1) as solutions of algebraic equations in terms of elementary symmetric polynomials of the g⁻¹f or f⁻¹g matrix. Effectively, this eliminates the continuous freedom, since the eigenvalues do not change under similarity transformations.

² Introducing new variables b± ≡ (b₁ ± b₂)/√2, we see that the manifold is the surface a² + (b₊² − b₋²)/2 = 1, a hyperboloid.
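The solvability criterion for the Sylvester-type equation can be made concrete by vectorising it into an ordinary linear system (the helper name `sylvester_correction` is ours; `scipy.linalg.solve_sylvester` would serve equally well in the solvable case):

```python
import numpy as np

def sylvester_correction(A, dX):
    """First-order correction dA solving A dA + dA A = dX, written as a
    Kronecker-product linear system. Solvable iff spec(A) does not
    intersect spec(-A)."""
    n = A.shape[0]
    # vec(A dA + dA A) = (I kron A + A^T kron I) vec(dA), column-major vec
    K = np.kron(np.eye(n), A) + np.kron(A.T, np.eye(n))
    return np.linalg.solve(K, dX.reshape(-1, order='F')).reshape(n, n, order='F')

dX = np.array([[0.1, 0.2], [0.3, 0.4]])

# standard root sqrt(I) = I: the correction exists (and equals dX/2)
dA = sylvester_correction(np.eye(2), dX)

# unusual root sqrt(I) = diag(1, -1): spec(A) = {1, -1} meets spec(-A),
# the Kronecker matrix is singular and no smooth correction exists
K_bad = (np.kron(np.eye(2), np.diag([1.0, -1.0]))
         + np.kron(np.diag([1.0, -1.0]), np.eye(2)))
singular = abs(np.linalg.det(K_bad)) < 1e-12
```

This makes explicit how the unusual square roots of the identity are exactly the cases where the linearised equation degenerates.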

New approach to the action principle
As we have seen, working in terms of eigenvalues could allow one to overcome the problems with perturbation theory which arise when continuous freedom is present. However, explicitly computing the eigenvalues of a generic matrix in massive gravity might be a very unpleasant task. In our previous paper [25] we offered a better method. The action (2.1) depends only on elementary symmetric polynomials of the eigenvalues, and we used the relation (4.1) between the symmetric polynomials of a matrix and those of its square to find how the symmetric polynomials of X and √X are related to each other. From now on we denote for brevity

e_n ≡ e_n(√X),   p_n ≡ e_n(X),

and get the desired relation from (4.1) with M = √X. In four dimensions we have, more explicitly,

p1 = e1² − 2e2,
p2 = e2² − 2e1e3 + 2e4,   (4.2)
p3 = e3² − 2e2e4,
p4 = e4².

The interaction Lagrangian takes the form √−g Σ β_i e_i, and we can perform the variation via the chain rule through the p_n, where one easily gets the variations of the p_n with respect to the metric. Indeed, for n = 1 we trivially see that g^{µα} (∂/∂g^{αν}) [g⁻¹η] = X^µ_ν (square brackets denoting the trace), and more generally g⁻¹ (∂/∂g⁻¹) [X^n] = n X^n; one then proceeds by induction using p_n = (1/n) Σ_{k=1}^{n} (−1)^{k+1} [X^k] p_{n−k}. In any case, when the standard approach in terms of matrices works well, it is guaranteed that these equations of motion coincide with the usual ones, since we are doing literally the same variation, though without having the matrix elements of √X explicitly. Let us illustrate how it works in the simplest case of two dimensions, where equations (4.2) take the form

p1 = e1² − 2e2,   p2 = e2²,

and therefore

∂p/∂e = ((2e1, −2), (0, 2e2)).

After a simple calculation we obtain from (4.7) and (4.8) the equation of motion. Multiplying this relation by g⁻¹, we need to prove that M² = X for

M = (X + e2 I)/e1,

where we have used the Cayley-Hamilton theorem (see appendix A.5), X² − p1 X + p2 I = O, for the matrix X. The proof is a direct calculation employing again the Cayley-Hamilton theorem.
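The relations (4.2) between the p_n and the e_n can be verified numerically; a minimal sketch (the helper name `esp` is ours), reading the elementary symmetric polynomials off the characteristic polynomial coefficients:

```python
import numpy as np

def esp(vals):
    """Elementary symmetric polynomials e_0..e_n of a list of values,
    read off the characteristic polynomial coefficients."""
    c = np.poly(vals)            # x^n - e1 x^{n-1} + e2 x^{n-2} - ...
    return [(-1) ** k * c[k] for k in range(len(c))]

lam = np.array([1.0, 2.0, 3.0, 4.0])     # eigenvalues of sqrt(X)
e = esp(lam)                              # e_n of sqrt(X)
p = esp(lam ** 2)                         # p_n = e_n of X

# the 4D relations between p_n and e_n, eqs. (4.2)
checks = [
    np.isclose(p[1], e[1] ** 2 - 2 * e[2]),
    np.isclose(p[2], e[2] ** 2 - 2 * e[1] * e[3] + 2 * e[4]),
    np.isclose(p[3], e[3] ** 2 - 2 * e[2] * e[4]),
    np.isclose(p[4], e[4] ** 2),
]
```

Each identity is polynomial in the eigenvalues, so a generic numerical check of this kind is already quite convincing.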
In four dimensions things become rather bulky; for the β1 term we have to check again that M² = X for the appropriate matrix M, where we have used, and need to further use in computing M², the Cayley-Hamilton theorem, which in four dimensions reads

X⁴ − p1 X³ + p2 X² − p3 X + p4 I = O.

Once we have an expression for √X, all other terms can be treated straightforwardly.

Obviously, this can be done in any dimension, and it works well as long as det(∂p/∂e) ≠ 0. However, we will not go into this combinatorics, and we only write down the determinant in the three-dimensional case for future use:

det(∂p/∂e) = 8 e3 (e1 e2 − e3).

Perturbation theory
Obviously, it is not always possible to have an exact solution, and one often uses perturbation theory. In particular, for the weak gravity limit the background solution corresponds to X = I with the values of the polynomials p1 = 4, p2 = 6, p3 = 4, p4 = 1. With the trivial choice of the square root √X = I we have e_i = p_i, and one can check that det(∂p/∂e) ≠ 0. For perturbative calculations around Minkowski space in massive gravity it is convenient to use the symmetry of massive gravity (2.3) and work with η⁻¹g instead of g⁻¹η: having worked out a symmetric polynomial, one does not need to multiply its perturbative expansion by the expansion of √−g. In the previous paper [25] we have shown how this goes, and reproduced the standard result that Minkowski space is a vacuum solution if and only if β0 + 3β1 + 3β2 + β3 = 0, and the quadratic action is then of the Fierz-Pauli form.
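The non-degeneracy of the trivial background, and its failure for unusual roots, can be checked directly from the Jacobian of the 4D relations (4.2) (the helper name `jacobian_p_e` is ours):

```python
import numpy as np

def jacobian_p_e(e1, e2, e3, e4):
    """Jacobian d(p_n)/d(e_n) of the 4D relations
    p1 = e1^2-2e2, p2 = e2^2-2e1e3+2e4, p3 = e3^2-2e2e4, p4 = e4^2."""
    return np.array([
        [2 * e1,      -2,       0,        0],
        [-2 * e3,  2 * e2, -2 * e1,       2],
        [0,       -2 * e4,  2 * e3, -2 * e2],
        [0,             0,       0,  2 * e4],
    ])

# trivial root sqrt(I) = I: e_n = binomial(4, n) = (4, 6, 4, 1), det != 0
det_trivial = np.linalg.det(jacobian_p_e(4, 6, 4, 1))

# unusual root diag(-1, 1, 1, 1): e_n = (2, 0, -2, -1), det = 0
det_unusual = np.linalg.det(jacobian_p_e(2, 0, -2, -1))
```

The vanishing determinant at the unusual root is exactly the degeneracy responsible for the non-analytic perturbation theory discussed below.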

Perturbation theory around unusual roots
Now let us consider perturbation theory around unusual roots. Note that in those cases when an odd number of eigenvalues is taken with reversed signs, we need to be more accurate about the symmetry transformation (2.3), since any such choice entails det √(η⁻¹g) < 0. This is not a big deal, but unfortunately it is not the end of the story.
Unusual roots are precisely the cases when det(∂p/∂e) = 0, and perturbation theory ceases to be analytic and nice. Let us first look at a toy example in 2D. The unusual square roots of the identity are given by √I = S diag(1, −1) S⁻¹, for which e1 = 0 and e2 = −1. Solving the 2D version of (4.2) with e2 = −√p2 gives e1 = ±√(p1 − 2√p2): we see that there are two solutions for e1. Moreover, at the lowest order we have

e1 = ±√(h1²/4 − h2),

which scales linearly with H but is not analytic. The reason is simple. For a generic perturbation H the two eigenvalues are different, and the sign of e1 depends on which of them (the smaller or the larger) is taken with the minus sign.
In the interaction potential we have then L_int = −β0 e2 − β1 e1, which at the linear order contains an analytic part proportional to h1 (from e2) and a non-analytic part proportional to e1. We see that the vanishing of the linear variation with arbitrary H around this solution requires two conditions, for the analytic and non-analytic parts separately. In 2D this is too restrictive and sets both β-s to zero. Obviously, non-analytic features should be seen in more complicated cases, too. The number of solutions is given by the number of possible ways to distribute the signs among the square roots of eigenvalues. We will now check this explicitly in three and four dimensions.
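The 2D non-analyticity is easy to exhibit numerically. In the sketch below the eigenvalues l1, l2 of the perturbation are our own illustrative choice; the first-order piece ±√(h1²/4 − h2) equals ±(l1 − l2)/2 and scales linearly under rescaling of the perturbation, yet it is a square root of a quadratic in the matrix entries:

```python
import numpy as np

# Around sqrt(I) = diag(1, -1) in 2D, the first-order piece of
# e1 = sqrt(1+l1) - sqrt(1+l2) is (l1 - l2)/2, which can be written as
# +sqrt(h1^2/4 - h2) with h1 = l1 + l2, h2 = l1*l2: linear in the
# perturbation scale but not analytic in the matrix.
l1, l2 = 0.03, -0.01                     # eigenvalues of a small perturbation H
h1, h2 = l1 + l2, l1 * l2
exact = np.sqrt(1 + l1) - np.sqrt(1 + l2)
first_order = np.sqrt(h1 ** 2 / 4 - h2)  # the "+" branch, equals (l1 - l2)/2
# halving the perturbation halves the first-order piece (linear scaling)
half = np.sqrt((h1 / 2) ** 2 / 4 - (h2 / 4))
```

The agreement between `exact` and `first_order` is up to second-order corrections in the perturbation.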

The 3D case
Let us now turn to three dimensions. Since we are interested in the Minkowski background, we need to discuss square roots of the three-dimensional unit matrix. In terms of eigenvalues, we choose the signs of the three square roots of unity. Barring the overall sign of the potential term, there are two distinct options: either all three square roots have equal signs, which is the standard solution without continuous freedom, or one eigenvalue has a different sign. We see that in three dimensions it suffices to study the following unusual square root of I:

√I = diag(−1, 1, 1),   (5.1)

and all its similarity transformations; the background values are e1 = 1, e2 = −1, e3 = −1. We have the interaction term

L_int = −(β0 e3 + β1 e2 + β2 e1),

where the prefactor of m² M²_Pl is omitted (or absorbed into the β-s). The β3 term has also been omitted since it does not depend on g. The difference from the standard approach is that we now treat the quantities e_i as solutions of the algebraic equations (4.2), with no regard to square root matrices, which effectively eliminates the continuous freedom of similarity transformations. Equations (4.2) take the form (5.2), (5.3), with p_i ≡ e_i(η⁻¹g) = e_i(I + H):

e1² − 2e2 = 3 + h1,   e2² − 2e1e3 = 3 + 2h1 + h2,   e3² = 1 + h1 + h2 + h3,

where h_i ≡ e_i(H) are the elementary symmetric polynomials of the matrix H^µ_ν = η^{µα} · δg_{αν}. We now build the perturbation theory in the form of the expansions (5.5), (5.6),

e_i = e_i^{(0)} + e_i^{(1)} + e_i^{(2)} + … .

Now we substitute the expansions into the system (5.2), (5.3) and keep only the terms of the first order. The system is obviously degenerate: we find e3^{(1)} = −h1/2 and e2^{(1)} = e1^{(1)} − h1/2, and, after taking this into account, the second equation (5.3) finally allows one to know the first order completely:

(e1^{(1)})³ − (h1/2)(e1^{(1)})² + (h2 − h1²/4) e1^{(1)} + h1³/8 − h1h2/2 + h3 = 0.   (5.9)

This is a generic cubic equation (its three coefficients depend on the three parameters h_i). Its three roots correspond to the freedom of choosing which one of the three eigenvalues of the perturbed matrix X goes with the reversed sign. It is worth performing a consistency check here. Let λ_i, i = 1, 2, 3 be the eigenvalues of H. Then we have h1 = Σ_i λ_i, h2 = Σ_{i<j} λ_i λ_j, h3 = λ1 λ2 λ3.
One can easily check by Vieta's formulas that the three solutions of the cubic equation (5.9) are ½(−λ1 + λ2 + λ3), ½(−λ2 + λ3 + λ1), and ½(−λ3 + λ1 + λ2). These are indeed the three possible first order contributions to e1 for the eigenvalues ±√(1 + λ_i) of the square root with our choice of signs.
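This consistency check can also be run numerically; the eigenvalues below are our own illustrative choice:

```python
import numpy as np

# In 3D around sqrt(I) = diag(-1, 1, 1), the three branches of the
# first-order piece of e1 correspond to which perturbed eigenvalue carries
# the minus sign: e1 = -sqrt(1+lk) + sqrt(1+li) + sqrt(1+lj)
#              ~= 1 + (-lk + li + lj)/2.
lam = np.array([0.02, -0.01, 0.005])
branches_exact = []
for k in range(3):
    others = [i for i in range(3) if i != k]
    e1 = sum(np.sqrt(1 + lam[i]) for i in others) - np.sqrt(1 + lam[k])
    branches_exact.append(e1 - 1.0)               # first-order piece
branches_predicted = [(lam.sum() - 2 * lam[k]) / 2 for k in range(3)]
max_err = max(abs(a - b) for a, b in zip(branches_exact, branches_predicted))
```

The residual `max_err` is of second order in the perturbation, confirming that the three Vieta roots are exactly the three branches.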
After we have reproduced the full freedom of choosing the eigenvalues, the system (5.2), (5.3) becomes non-degenerate, giving two new pieces of information at each order. In particular, at the fourth order the first equation contains e1^{(1)} both quadratically and linearly. However, after taking the first equation (5.10) into account, one gets a linear equation for e1^{(2)} which can be unequivocally solved. The explicit expressions are rather cumbersome, and we do not go into them in this paper.³ The first order contribution to the interaction Lagrangian is

δ^{(1)} L_int = ½(β0 + β1) h1 − (β1 + β2) e1^{(1)}.
If Minkowski space is to be a solution of the equations of motion, then we need to impose δ^{(1)} L_int = 0. Since e1^{(1)} is generically irrational for rational values of h_i, we need the two parts, with h1 and with e1^{(1)}, to vanish independently:

β0 + β1 = 0,   β1 + β2 = 0.   (5.11)

It is interesting to note that one can formally plug Minkowski space with our choice (5.1) of √(g⁻¹η) into the naive equation of motion (2.5), and deduce the same condition (5.11) for the β-s. Now we look at the second order Lagrangian for perturbations. We see that it contains the root e1^{(1)} of the cubic equation (5.9), and therefore it is not analytic in H. If we enumerate the eigenvalues of H such that e1^{(1)} = ½(−λ1 + λ2 + λ3), then the three branches of the solution correspond to the three possible choices of two out of three eigenvalues to be put into δ^{(2)} L_int.

Square roots in the physical dimension
In four dimensions the system of equations (4.2) is slightly more complicated. The unusual square root analogous to (5.1) is

√I = diag(−1, 1, 1, 1),   (5.16)

for which e1 = 2, e2 = 0, e3 = −2, e4 = −1, and one easily checks that the Jacobian (4.9) of the transformation from e_i to p_i vanishes.

As before, the last polynomial (5.15) is easy to find: e4 = −√(1 + h1 + h2 + h3 + h4). The remaining equations (5.12)-(5.14) give at the first order:

e2^{(1)} = 2e1^{(1)} − h1/2,   e3^{(1)} = e1^{(1)} − h1,   e4^{(1)} = −h1/2.

The system is degenerate, and we can only express e2^{(1)} and e3^{(1)} through e1^{(1)}, which itself remains undetermined. The degeneracy gets broken at the fourth order, producing a generic (in terms of independent parameters) quartic equation for e1^{(1)}. Let us now have a look at the interaction Lagrangian, −Σ_n β_n · e_{4−n}(√(η⁻¹g)), the first order contribution to which is

δ^{(1)} L_int = ½(β0 + 2β1 + β2) h1 − (β1 + 2β2 + β3) e1^{(1)}.
Again, Minkowski space is a solution when δ^{(1)} L_int = 0, and we require the two parts, with h1 and with e1^{(1)}, to vanish independently:

β0 + 2β1 + β2 = 0,   β1 + 2β2 + β3 = 0.   (5.17)

As in 3D, one can formally plug Minkowski space with (5.16) into the naive equation of motion and deduce the same condition (5.17) for the β-s.

Now we look at the second order Lagrangian for perturbations. If we enumerate the eigenvalues of H such that e1^{(1)} = ½(−λ1 + λ2 + λ3 + λ4), then we get a structure which is basically the same as for the Fierz-Pauli term, though built with only a part of the full set of eigenvalues. We remind the reader that for such square roots the usual perturbation theory in terms of matrices is not well defined unless the perturbation respects the separation into two subspaces (of positive and negative eigenvalues). In our case, in this and the previous subsections, the latter occurs if H_{1i} = 0 for i ≥ 2; then the quadratic Lagrangian retains its Fierz-Pauli-like structure. Otherwise the perturbation theory is not analytic and has three (respectively four, or D) branches according to the solutions of the cubic (respectively quartic, or degree D) equation.
An interesting fact is that the background equation of motion is correctly reproduced by naively applying formula (2.5), which is not even well defined in such cases. Apparently, this has to do with the fact that X ∝ I can be treated as a limiting case of a cosmological matrix diag(N(t), a(t), . . . , a(t)) for which the perturbations are well defined. Note though that this limit is singular in the sense that it features the appearance of non-analyticity in perturbations, and is not defined at all in the language of matrices (compare to [23], where non-analyticity is also mentioned in the case of a non-zero "new shift" variable, which is probably the first indication in the literature of the phenomena we have thoroughly studied in the present paper).

4D: the second case
Another option available in four dimensions can be represented by the root

√I = diag(1, 1, −1, −1),   (5.19)

for which e1 = 0, e2 = −2, e3 = 0, e4 = 1, together with all its similarity transformations.

At the first order the system determines e2^{(1)} = −h1/2 and e4^{(1)} = h1/2, while e1^{(1)} and e3^{(1)} drop out. At the second order one finds

−2e1^{(1)} (e1^{(1)} + e3^{(1)}) = 0,

which gives a finite value for e1^{(1)} + e3^{(1)}, namely e3^{(1)} = −e1^{(1)}. Then, at the fourth order, the system of equations (4.3)-(4.5) becomes non-degenerate, and the first equation gives relation (5.21).

Two other equations are very cumbersome. However, subtracting the second (4.4) from the third (4.5), and using the first one (5.21), we get a very nice equation (5.22) which fully determines the first order of perturbations: its solutions are e1^{(1)} = ½(λ_i + λ_j − λ_k − λ_l) for the different splittings of the eigenvalues of H into two pairs, and, given also (5.20), those values of e1^{(1)} are indeed the solutions of (5.22). Let us now have a look at the interaction Lagrangian, the first order contribution to which is again a combination of h1 and e1^{(1)}.
We look for Minkowski space to be a solution, δ^{(1)} L_int = 0, and demand that the parts with h1 and with e1^{(1)} vanish independently (5.24). Probably, at this point the reader will not be much surprised if we say that the naive equation of motion entails the same condition (5.24) for the β-s. The second order Lagrangian for perturbations, given the identifications (5.20) and (5.23) above, can be brought to a much simpler form.

The structure is basically the same as for the Fierz-Pauli term, though with two separate parts of the full set of eigenvalues. As we have already mentioned, the usual perturbation theory in terms of matrices is not well defined unless the perturbation respects the separation into two subspaces (of positive and negative eigenvalues). In our case the latter occurs if H_{ij} = 0 for i ≤ 2 and j ≥ 3, or vice versa. Then the quadratic Lagrangian has two Fierz-Pauli-like terms, one for each subspace. Otherwise the perturbation theory is not analytic, and its six branches correspond to the different separations of the four eigenvalues into two groups of two.
Note that this kind of unusual root can also be related to bimetric cosmology; however, making it regular would require at least an axially symmetric Bianchi I spacetime. In bimetric Friedmann cosmology, the classical equations of motion are well defined only for square roots of g⁻¹f of the type ±{±W, Y/a, Y/a, Y/a}, since these do not make equal eigenvalues different. But we must be careful if choosing another square root, {W, Y/a, −Y/a, −Y/a}, which would give rise to non-analyticity of the type considered in subsection 5.1. The standard perturbation theory would then not be well defined (compare to [23]).
In higher dimensions the concrete calculations become more and more involved, and we do not aim at tackling these issues in the present paper. However, it seems reasonable to conjecture that the quadratic Lagrangian around Minkowski space would always be a sum of Fierz-Pauli-like structures with constant coefficients C1 and C2, built from the sets of indices I− and I+ corresponding to negative and positive eigenvalues of √I respectively.
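The counting of branches in the two-two split can be checked directly; in the sketch below the perturbation eigenvalues are our own illustrative choice:

```python
import numpy as np
from itertools import combinations

# Around sqrt(I) = diag(1, 1, -1, -1) in 4D, e1 of the perturbed root is
# +sqrt(1+l) over one pair of eigenvalues minus sqrt(1+l) over the other,
# giving C(4,2) = 6 branches, one per way of splitting the four eigenvalues
# into two pairs.
lam = np.array([0.01, -0.02, 0.015, 0.005])
branches = set()
for plus in combinations(range(4), 2):
    e1 = sum(np.sqrt(1 + lam[i]) * (1 if i in plus else -1) for i in range(4))
    branches.add(round(e1, 10))
n_branches = len(branches)
```

For generic eigenvalues all six values are distinct, matching the six branches of the non-analytic perturbation theory found above.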

Discussion and conclusions
It is amazingly difficult to promote the simple action of Fierz and Pauli to a full-fledged non-linear theory of massive graviton. In the metric formulation one needs to compute the square root function of the matrix g −1 f . This procedure is definitely not among the simplest recipes of modern theoretical physics. And even in perturbation theory it is not so easy to deal with perturbative expansion of the square root, unless around a matrix proportional to identity. In some cases, it is not a mere technicality but rather a severe obstacle for correctly introducing small perturbations. There is some freedom in taking the square root of a matrix. It can be of two types. The first type is absolutely generic. It gives discrete freedom which basically refers to the usual two branches of the square root function in complex analysis. The peculiarity is that those branches can be independently chosen for different Jordan blocks. The second type is continuous and happens only in special cases. If we have taken different branches for Jordan blocks with equal eigenvalues, then a manifold of solutions arises given by those similarity transformations which change the square root without affecting the initial matrix.

In the presence of continuous freedom, the naive perturbation theory in terms of matrices breaks down. A generic perturbation collapses the manifold of square roots to a finite number of points which depend on the chosen perturbation; perturbation theory around such square roots is not well defined. A positive observation on top of that is that the similarity transformations which generate the manifold do not change the eigenvalues, and the eigenvalues are the only quantities which affect the Lagrangian. Of course, working directly with eigenvalues is very difficult, but fortunately everything can be done in terms of their elementary symmetric polynomials, and the task becomes accomplishable.
In this paper we have described the mathematical issues of working with square roots in massive gravity and developed a new perturbation theory in terms of eigenvalues and their elementary symmetric polynomials. The latter appear to be a suitable tool for perturbation theory, even around unusual roots. Our method allows one to meaningfully define perturbations in cases of continuous freedom, even though with non-analyticity. Note that the origins of the non-analyticity are very clear; however, its implications remain to be understood, as well as its possible relations to the vielbein formulations of massive gravity.

Acknowledgments
FS thanks Nicolay Gordeev and Yelena Yashina for useful discussions of some mathematical questions related to this paper. AG enjoyed many inspiring discussions about square roots and other topics of massive gravity with Fawad Hassan and Mikica Kocic. AG is grateful to the Dynasty Foundation and to the Saint Petersburg State University travel grant 11.42.687.2017 for support; support from the Russian Science Foundation (project 16-11-10218) is also gratefully acknowledged.

A Some known facts from the theory of matrices
In this appendix we briefly review some well known facts from the theory of matrices over the field of complex numbers.

A.1 Spectral theorem
In finite-dimensional spaces one can give the full spectral classification of linear operators, without any requirements of self-adjointness or unitarity. In the language of matrices, an n × n matrix X can be brought by a similarity transformation to the Jordan normal form, which is the block-diagonal form
$$SXS^{-1} = J_{k_1}(\lambda_1) \oplus J_{k_2}(\lambda_2) \oplus \cdots$$
with $k_i$ standing for the dimensionality of the i-th block; the latter is given by
$$J_k(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$$
with the complex number λ being the eigenvalue (of geometric multiplicity 1 and algebraic multiplicity k in this particular block). In components, the non-zero elements are $J_{i,i} = \lambda$
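A small numerical sketch of the defective case (toy matrices, not from the paper): a matrix similar to $J_2(5)$ has eigenvalue 5 with algebraic multiplicity 2 but only one independent eigenvector, so it cannot be diagonalised.

```python
import numpy as np

def jordan_block(lam, k):
    """J_k(lam): lam on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

# A defective matrix, similar to J_2(5) via an arbitrary invertible S:
S = np.array([[1.0, 2.0], [3.0, 7.0]])         # det S = 1
X = np.linalg.inv(S) @ jordan_block(5.0, 2) @ S

# Both eigenvalues equal 5 ...
assert np.allclose(np.sort(np.linalg.eigvals(X).real), [5.0, 5.0])

# ... but rank(X - 5I) = 1, so the eigenspace is only 1-dimensional:
assert np.linalg.matrix_rank(X - 5.0 * np.eye(2), tol=1e-8) == 1
```

If all Jordan blocks were 1-dimensional, $X - \lambda I$ would have rank $n - m$ for an eigenvalue of multiplicity $m$; the rank deficit diagnoses diagonalisability.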

and $J_{i,i+1} = 1$. If all Jordan blocks are 1-dimensional, the matrix is diagonalisable. Note also that a similarity transformation can be viewed as simply changing the basis of the linear space in which the operator acts: the matrix S maps one basis onto another.
Various proofs of this theorem can be found in numerous textbooks on matrices. Here we only sketch a simple one. Given an n × n matrix X over the complex numbers, one can always find an eigenvector, and choosing it as the n-th element of the basis we make $X_{n,k} = 0$ for k < n. After that we do the same for the upper left (n − 1) × (n − 1) corner of the new matrix, and so on. By induction, we bring X to upper triangular form. Obviously, the numbers on the diagonal are the eigenvalues, i.e. the roots of the characteristic polynomial of X. By renumbering the elements of the basis, we can group them into sets of equal eigenvalues (if there are such). Let us choose one of the eigenvalues, $\lambda_1$. Then we have
$$X - \lambda_1 I = \begin{pmatrix} \tilde X & M \\ O & X_0 \end{pmatrix}$$
where $\tilde X$ is non-degenerate and $X_0$ is nilpotent. If we make a similarity transformation by the matrix $\begin{pmatrix} I & C \\ O & I \end{pmatrix}$, then the matrix $X - \lambda_1 I$, and also the matrix X, becomes block-diagonal if $\tilde X C - C X_0 = M$, which can be solved as $C = \sum_{i \geq 0} \tilde X^{-i-1} M X_0^{i}$; this is a well-defined finite sum since $\tilde X$ is non-degenerate and $X_0$ is nilpotent. By induction, we separate all blocks with different eigenvalues. Now we only have to understand the structure of a matrix $X_{\lambda_1}$ with all eigenvalues equal to a given number $\lambda_1$. Obviously, it suffices to study the nilpotent matrix $X_0 = X_{\lambda_1} - \lambda_1 I$ of a given size k × k. Actually, what we need now is a very generic statement which does not rely on the nice properties of the complex numbers: for any nilpotent linear operator on any (finite-dimensional) linear space, one can decompose the vector space into a direct sum of cyclic subspaces. Let us take the minimal number $k_1 \leq k$ such that $X_0^{k_1} = 0$. Then there is a vector $e_0$ for which $X_0^{k_1 - 1} e_0 \neq 0$. This vector together with all the vectors $X_0^{i} e_0$ generates a subspace $V_0$ on which the matrix $X_0$ acts as a Jordan block with λ = 0. If $k_1 = k$, we are done. Otherwise, let us take a suitable vector $e_1$ from a linear complement to $V_0$. We want to build a new cyclic subspace from the vectors $X_0^{i} e_1$.
This strategy might fail if for some i we get $0 \neq X_0^{i} e_1 \in V_0$. Then the vector $X_0^{i} e_1$ is a linear combination of the vectors $X_0^{j} e_0$ with $j \geq i$ (for otherwise, acting with a suitable power of $X_0$ would annihilate the left-hand side, since $X_0^{k_1} = 0$, but not the right-hand side). Therefore, there is some $\tilde e \in V_0$ such that $X_0^{i} \tilde e = X_0^{i} e_1$, and one can substitute $e_1$ by $e_1' = e_1 - \tilde e$ to overcome this difficulty. This procedure can be repeated if needed. By induction, we prove that $X_0$ can be brought to the form of a direct sum of nilpotent Jordan blocks of sizes $k_i$ such that $\sum_i k_i = k$. The spectral theorem is proven.

A.2 Analytic functions of matrices
Similarity transformations preserve polynomial relations: if P is a polynomial, then obviously $P(SXS^{-1}) = S P(X) S^{-1}$. Of course, this extends to any function of matrices defined by a norm-convergent series. The Jordan normal form also gives a very convenient tool for representing functions of matrices:
$$f(X) = S^{-1} \left( f(J_{k_1}(\lambda_1)) \oplus f(J_{k_2}(\lambda_2)) \oplus \cdots \right) S.$$
One can easily compute the powers of a Jordan block and see that, for an analytic function f given by its Taylor series,
$$f(J_k(\lambda)) = \sum_{j=0}^{k-1} \frac{f^{(j)}(\lambda)}{j!} \left( J_k(0) \right)^{j},$$
i.e. the entry at position (i, i + j) equals $f^{(j)}(\lambda)/j!$. This allows one to extend the definition to the class of $C^{k-1}(U)$ functions, where k is the maximal size of the Jordan blocks and $U \subset \mathbb{R}$ is a domain which contains the full spectrum of the matrix.
In particular, since $f(x) = \sqrt{x}$ is smooth for $x \neq 0$, we have a definition of the square root of any non-degenerate matrix. Since the square root is a multivalued function, the definition of $\sqrt{X}$ admits the discrete freedom of choosing the sign of $\sqrt{\lambda}$ in each Jordan block: one obtains a matrix from the similarity class of $J_k(\pm\sqrt{\lambda})$ for each $J_k(\lambda)$ with $\lambda \neq 0$. We will see below that this is not the end of the story if we look at square roots as solutions of the matrix equation $A \cdot A = X$.
Before doing that, let us comment on degenerate matrices. One can easily see that
$$(J_3(0))^2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
Therefore, $(J_3(0))^2$ is equivalent to $J_2(0) \oplus J_1(0)$ under renumbering of the basis elements: the Jordan block splits into two. Obviously, the splitting is related to the fact that $f'(\lambda) = 0$ for $f(x) = x^2$ at $\lambda = 0$; note that it never occurs if $f'(\lambda) \neq 0$. Given also that $(J_2(0))^2 = O$, we see that a 3 × 3 matrix A such that $A^2 = J_3(0)$ does not exist. For some degenerate matrices, square roots do not exist at all, even over the field of complex numbers.
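Both facts can be checked directly. The sketch below verifies the splitting of $(J_3(0))^2$ by its rank, and records (in comments) the contradiction showing $J_3(0)$ has no square root:

```python
import numpy as np

N3 = np.diag(np.ones(2), 1)     # the nilpotent block J_3(0)

# Squaring splits the block: (J_3(0))^2 has a single non-zero entry,
# so it has rank 1 and vanishing square -- the signature of J_2(0) ⊕ J_1(0).
sq = N3 @ N3
assert np.linalg.matrix_rank(sq) == 1
assert np.allclose(sq @ sq, 0)

# And J_3(0) itself has no square root: any A with A^2 = J_3(0) would be
# nilpotent (its eigenvalues square to 0), so A^3 = 0 for a 3x3 matrix,
# forcing A^4 = (A^2)^2 = (J_3(0))^2 = 0 -- but (J_3(0))^2 is non-zero.
assert not np.allclose(sq, 0)
```
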

A.3 Commuting matrices
In order to determine all solutions of the quadratic equation $A \cdot A = X$, we will need the full set of solutions of the commutation equation $AX = XA$.
Let us work in the basis in which the given matrix X acquires the canonical Jordan form $X = \oplus_i J_{k_i}(\lambda_i)$. This is a block-diagonal form, so let us consider the matrix A in block form, composed of $k_i \times k_j$ matrices $A_{i,j}$. Then the equation AX = XA takes the form $A_{i,j} J_{k_j}(\lambda_j) = J_{k_i}(\lambda_i) A_{i,j}$ (no summation).
If $\lambda_i \neq \lambda_j$ we get $(\lambda_i - \lambda_j) A_{i,j} = A_{i,j} J_{k_j}(0) - J_{k_i}(0) A_{i,j}$. The matrices J(0) on the right-hand side are nilpotent. We can multiply this equation by $\lambda_i - \lambda_j$ as many times as we want, obtaining ever higher powers of nilpotent matrices on the right-hand side. Finally, we get $A_{i,j} = 0$.
If $\lambda_i = \lambda_j \equiv \lambda$, then we have the simple equation $A_{i,j} J_{k_j}(\lambda) = J_{k_i}(\lambda) A_{i,j}$. The Jordan blocks have a very simple structure, and one can easily check that if $k_i = k_j \equiv k$ (for example, for the diagonal blocks i = j) then $A_{i,j}$ is an upper triangular matrix with components $(A_{i,j})_{a,a+b} = C_b$ given by arbitrary numbers $C_b$ for b = 0, 1, . . . , k − 1. If $k_i \neq k_j$ then the solution is either
$$A_{i,j} = \begin{pmatrix} O & \tilde A_{i,j} \end{pmatrix} \;\; (k_i < k_j) \qquad \text{or} \qquad A_{i,j} = \begin{pmatrix} \tilde A_{i,j} \\ O \end{pmatrix} \;\; (k_i > k_j),$$
where $\tilde A_{i,j}$ is an upper triangular matrix of the same form as presented above and of size $\min(k_i, k_j) \times \min(k_i, k_j)$.
If the matrix X was diagonalisable, then all the blocks are 1 × 1-dimensional, and we simply have that the matrix element $A_{ij}$ is arbitrary if $\lambda_i = \lambda_j$, and $A_{ij} = 0$ if $\lambda_i \neq \lambda_j$. This is very easy to understand: in those subspaces where the matrix X is proportional to the identity, it commutes with anything. In general, the two matrices need to be simultaneously diagonalisable, which is well known in quantum mechanics as the condition for two quantities to be simultaneously measurable.
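The dimension of the space of commuting matrices can be computed numerically by vectorising $AX - XA = 0$ into a Kronecker-product linear system, $(X^{T} \otimes I - I \otimes X)\,\mathrm{vec}(A) = 0$. A minimal sketch (toy matrices for illustration):

```python
import numpy as np

def commutant_dim(X):
    """Dimension of the solution space of AX = XA, via vec(AX - XA) = 0."""
    n = X.shape[0]
    K = np.kron(X.T, np.eye(n)) - np.kron(np.eye(n), X)
    return n * n - np.linalg.matrix_rank(K)

# Repeated eigenvalue: a whole 2x2 block of A is arbitrary -> 2*2 + 1 = 5.
assert commutant_dim(np.diag([1.0, 1.0, 2.0])) == 5

# Distinct eigenvalues: only diagonal A commute -> dimension 3.
assert commutant_dim(np.diag([1.0, 2.0, 3.0])) == 3
```

The jump from 3 to 5 when two eigenvalues coincide is exactly the extra freedom that feeds the continuous family of square roots discussed below in A.4.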

A.4 Square roots of matrices
Let us now assume that we are given a non-degenerate matrix X and need to solve the quadratic equation $A^2 = X$ for A. The matrix A can also be decomposed into a direct sum of non-degenerate Jordan blocks, which do not split upon squaring the matrix. Therefore there is a one-to-one correspondence between the Jordan blocks of X and A. And, obviously, since $(J_k(\pm\sqrt{\lambda}))^2$ is similar to $J_k(\lambda)$ for $\lambda \neq 0$, instead of the square of $J_k(\pm\sqrt{\lambda})$ we can use $J_k(\lambda)$ in the spectral decomposition for $A^2$:
$$A^2 = T^{-1} \left( \bigoplus_i J_{k_i}(\lambda_i) \right) T,$$
where T gives a similarity transformation to be found, with the simplest solution being T = S.
Given that $A^2 = X$, we have $T^{-1}\left(\oplus_i J_{k_i}(\lambda_i)\right) T = S^{-1}\left(\oplus_i J_{k_i}(\lambda_i)\right) S$. This easily translates into the commutation property $ST^{-1}\left(\oplus_i J_{k_i}(\lambda_i)\right) = \left(\oplus_i J_{k_i}(\lambda_i)\right) ST^{-1}$. One solution is always T = S, while the general solution is that $TS^{-1}$ commutes with $\oplus_i J_{k_i}(\lambda_i)$. In other words, T is the product of S and an arbitrary matrix which commutes with $\oplus_i J_{k_i}(\lambda_i)$. Of course, this gives a new square root only if $TS^{-1}$ does not commute with $\oplus_i J_{k_i}(\pm\sqrt{\lambda_i})$. Now we can see where the continuous freedom in square roots appears. If there are two Jordan blocks in X with equal eigenvalues, $\lambda_i = \lambda_j$, then a similarity transformation matrix S which leaves X invariant can have non-trivial components in the $S_{i,j}$ block, as described above in A.3. At the same time, if we chose opposite branches, $\sqrt{\lambda_i} = -\sqrt{\lambda_j}$, then such a transformation does change the square root $\sqrt{X}$. This is how we get the continuous freedom corresponding to the manifold of similarity transformations which change $\sqrt{X}$ while leaving X intact.
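The mechanism can be sketched in the simplest possible setting (two 1 × 1 "Jordan blocks" with equal eigenvalues; toy values, not from the paper): every invertible S leaves $X = 4I$ invariant, but when opposite branches are chosen, conjugation sweeps out a continuous family of square roots.

```python
import numpy as np

X = np.diag([4.0, 4.0])      # two blocks with equal eigenvalues
A0 = np.diag([2.0, -2.0])    # opposite branches: sqrt(4) = +2 and -2

def root(t):
    # S X S^{-1} = X automatically (X is proportional to the identity),
    # but S does not commute with A0, so this changes the square root.
    S = np.array([[1.0, t], [0.0, 1.0]])
    return S @ A0 @ np.linalg.inv(S)

for t in (0.0, 1.0, -2.5):
    assert np.allclose(root(t) @ root(t), X)    # always a square root of X

assert not np.allclose(root(0.0), root(1.0))    # genuinely different roots

# By contrast, the same-branch choice diag(2, 2), proportional to the
# identity, commutes with every S: no continuous freedom there.
```
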

A.5 Cayley-Hamilton theorem
Let P(λ) be the characteristic polynomial of X. The theorem states that P(X) = O. It trivially follows from our considerations above. Indeed, if there is a Jordan block $J_{k_i}(\lambda_i)$ in the spectral decomposition of X, then $\lambda_i$ is a root of P of order not less than $k_i$. Therefore $P(J_{k_i}(\lambda_i)) = O$ for every block in the spectral representation of X.
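A quick numerical confirmation of the theorem (toy matrix; `np.poly` of a square matrix returns its characteristic-polynomial coefficients, here evaluated at X by Horner's scheme):

```python
import numpy as np

X = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Characteristic polynomial coefficients: [1, -tr X, det X] = [1, -5, 6].
c = np.poly(X)

# Evaluate P(X) = X^2 - 5X + 6I by Horner's scheme:
P = np.zeros_like(X)
for coeff in c:
    P = P @ X + coeff * np.eye(2)

assert np.allclose(P, 0)    # Cayley-Hamilton: P(X) = O
```
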
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.