Cosmology in doubly coupled massive gravity: constraints from SNIa, BAO and CMB

Massive gravity in the presence of a doubly coupled matter field via an effective composite metric yields an accelerated expansion of the universe. It has recently been shown that the model admits stable de Sitter attractor solutions and could be used as a dark energy model. In this work, we perform a first analysis of the constraints imposed by the SNIa, BAO and CMB data on the massive gravity model with the effective composite metric and show that all the background observations are mutually compatible with the model at the one sigma level.


I. INTRODUCTION
Last year was the centenary of Einstein's General Theory of Relativity. This remarkable theory has survived a hundred years with great successes and is still the fundamental theory describing the underlying gravitational physics over a vast range of scales. Despite a great deal of scrutiny, it has outlived most of its alternative competitors. One of the first predictions of the theory was the right amount of gravitational deflection of light, which was confirmed by Arthur Eddington very soon after its inception. Nowadays the direct application of this effect in the form of gravitational lensing is one of the indispensable tools in astrophysics and cosmology. Another powerful prediction of General Relativity is the existence of gravitational waves. These constitute ripples of space-time itself that travel outward from massive objects in the form of waves. Their discovery was a breathtaking event [1].
Granting all this, there remain unsolved problems. Attempts to describe the gravitational interactions by the principles of quantum mechanics have persistently failed. The absence of a meaningful application of renormalizability techniques diminishes the predictive power of the theory at large energy scales. This problem has motivated the investigation of ultraviolet modifications of General Relativity with the aim of successfully implementing the quantum behaviour of gravity. Furthermore, the presence of black hole and cosmological singularities is an unwanted pathology of the theory. It could be that the new physics of quantum gravity automatically takes care of these singularities, regularizing curvature divergences. Alternatively, one could also consider classical modifications in which curvature scalars are regular not due to quantum effects but rather due to a different behaviour of the gravitational force at high energies implemented in the modifications. This can also have important consequences for the early universe, allowing alternatives to the standard inflationary scenario [2,3].
Another challenge arose with the discovery of the accelerated expansion of the universe, which is still one of the most intriguing problems in modern cosmology. This detection has by now been confirmed with high precision by many different observations like Type Ia supernovae (SNIa), Baryon Acoustic Oscillations (BAO) and the Cosmic Microwave Background (CMB) temperature power spectrum. Taking General Relativity for granted, the expectation for the evolution of the universe would rather be a deceleration. Therefore one is forced to inject some sort of unknown non-ordinary energy into the theory. The inclusion of a cosmological constant indeed accounts for most of the observations with very good precision, requiring the value of the cosmological constant to be of the order of 10^{-47} GeV^4. The fact that it corresponds to this very tiny value poses theoretical problems if one assumes that it corresponds to the energy density of the vacuum of space, which is of the order of ∼ 10^{120} larger. This constitutes the worst fine-tuning problem and is known as the cosmological constant problem [4]. Even if one fine-tunes the value of the cosmological constant to be very small at the classical level, this value is not radiatively stable. Quantum corrections in terms of matter loops renormalize the value of the cosmological constant proportionally to the mass of the matter field, and hence one has to fine-tune the value at each loop order. This renders the theory unnatural. Similarly to the ultraviolet modifications that cure the pathologies at high energies, one can consider infrared modifications that could either tackle the old cosmological constant problem or provide a mechanism that accounts for the right dark energy phenomenology.
In order to fit observations there is also the need for cold dark matter. Its origin is still a mystery as well, and it has not been detected yet despite many efforts. Together with the cosmological constant it builds the standard model of cosmology, the Λ-CDM model. It prevails against all the alternative models and explains, for example, perfectly well the observed fluctuations of the CMB and the structures on large scales. Despite the great agreement with the observations, some reported anomalies call for attention, even though they are statistically not very significant yet. Moreover, the model might have problems accounting for the observations of dark matter at galactic scales and below; for example, it fails to describe the tight correlations between dark and luminous matter in galaxy halos [5,6]. This might be due to the lack of a complete understanding of the astrophysical phenomenology. However, in this respect modifications in the form of Modified Newtonian Dynamics have been pursued [7], even though their successful extrapolation to cosmological scales is problematic and calls for a better implementation of the theory into some sort of hybrid model [8][9][10][11].
To address the above mentioned challenges one can consider modifications of gravity in the form of scalar-tensor [12][13][14][15][16][17][18], vector-tensor [19][20][21][22][23][24][25][26] or tensor-tensor theories [27,28]. As a concrete infrared modification of General Relativity, the framework of massive gravity has witnessed promising developments [27,29] that could be used either for dark energy [16] or for the cosmological constant problem [30]. The theory requires the mass of the graviton to be small, but this small value is technically natural in the sense that it is stable under quantum corrections [31][32][33]. Among the possible scenarios, the formulation of the model in the presence of a doubly coupled matter field via an effective composite metric yields interesting cosmological solutions with stable perturbations [34,35].
In a previous work we have shown the presence of stable de Sitter attractor solutions [36]. The presence of a de Sitter attractor does not, however, guarantee a good fit to observations. We would therefore like to test the model using background observations like SNIa and distance priors. We first introduce the framework under consideration in section II and state the background equations of motion. For our analysis, the important modification is encoded in the Hubble function, which we compute in section III by expressing the equations of motion in terms of redshift before integrating them. After this preliminary analysis we first compare the modified Hubble function with the supernova data in section IV and put constraints on the model parameters. We further include the constraints coming from the BAO data in section V and from the CMB data in section VI. Finally, we compare the combined constraints on the model parameters in section VII and show that all these background observations are mutually compatible with our model at the one sigma level.

II. MASSIVE GRAVITY WITH EFFECTIVE COMPOSITE METRIC
The model that we would like to compare with background observations is the doubly coupled massive gravity model proposed in [34], where a matter field of the dark sector is coupled to an effective composite metric built out of the dynamical and the fiducial metric, whereas the standard matter fields couple only to the dynamical metric. The cosmological consequences of this model were already discussed in [34,35,37]. In a previous work we further showed the presence of stable de Sitter attractor solutions, making the model viable for dark energy studies. The action of the model contains the Einstein-Hilbert term with the Ricci scalar of the dynamical metric and U[K] denoting the allowed ghost-free potential interactions between the two metrics g and f [27,38], with [..] denoting the trace; we already neglect the tadpole U_1 and cosmological constant U_0 contributions. The effective composite metric is defined as [34] g^eff_{μν} = α² g_{μν} + 2αβ g_{μλ} X^λ_ν + β² f_{μν}, with X ≡ √(g^{-1}f) and the two arbitrary free parameters α and β. Actually, without loss of generality one can fix α = 1, since the interesting dependence will be in the form of the ratio between the two parameters. This effective composite metric is special in the sense that its volume element corresponds exactly to the allowed potential interactions. For the matter field of the dark sector that couples minimally to the effective composite metric we will assume a fluid with energy density ρ̃, pressure P̃ and sound speed c̃_s² encoded in L^eff_matter(g_eff, ρ̃, P̃, c̃_s²). Note that its pressure can be very small, but the important requirement is that it is non-vanishing. For the standard matter fields we will assume dust- and radiation-type matter fields that live on the dynamical metric, represented by L_matter(g, ρ, P, c_s²).
We shall assume that the dynamical metric is of the form of the homogeneous and isotropic flat FLRW metric ds²_g = −N² dt² + a² δ_ij dx^i dx^j, and similarly the fiducial metric as ds²_f = −ḟ² dt² + a_0² δ_ij dx^i dx^j. Hence the effective metric is simply ds²_eff = −N²_eff dt² + a²_eff δ_ij dx^i dx^j, with N_eff ≡ α N + β ḟ and a_eff ≡ α a + β a_0 being the effective lapse and scale factor respectively. The modified Friedmann equation in this model relates the Hubble function to the energy density ρ of the standard matter fields, the energy density ρ̃ of the matter field that lives on the effective composite metric, and the dimensionless effective energy density from the mass term ρ_A = Σ_{n=2}^{4} α_n (1 − A)^n, where A stands for the ratio of the scale factors, A ≡ a_0/a. The acceleration equation of the system involves J = (1/3) ∂_A ρ_A and r ≡ (a ḟ)/(a_0 N). The matter fields living on the dynamical and the effective composite metric obey their corresponding conservation equations. Last but not least, we have the Stueckelberg equation as a constraint equation. For the purpose of our present work, it will be convenient to express the background equations in terms of the redshift. In the next section we shall bring the relevant equations into the form that is most suitable for data comparison. Furthermore, we will assume N = 1.

III. MODIFIED HUBBLE FUNCTION
We will first solve the constraint equation (8) for the pressure of the matter field in the dark sector and use the Friedmann equation (5) to solve for its energy density ρ̃, where ρ_m and ρ_r are the energy densities of matter and radiation respectively, living on the dynamical metric g. After using these two equations, the explicit dependence on the matter field in the dark sector disappears. Next, we plug the expressions for ρ̃ and P̃ into the acceleration equation (6), which simplifies to an equation involving only the Hubble function and the standard matter sector, with P_m and P_r being the pressures of the matter fields that live on the dynamical metric. We replace all the time dependent variables by their corresponding expressions in redshift and their time derivatives by V̇ = −(1 + z) H(z) dV(z)/dz, where V stands for any of the time dependent variables like ρ_A(t), H(t), ρ_r(t), etc. For the matter field we assume zero pressure P_m = 0, and solving the continuity equation gives for its energy density ρ_m = Ω_m (1 + z)³, with Ω_m being the density parameter of matter. For radiation we assume P_r = (1/3) ρ_r, and solving its continuity equation gives this time ρ_r = Ω_r (1 + z)⁴, with the corresponding density parameter for radiation Ω_r. Hence the acceleration equation in redshift space becomes a first-order differential equation for H(z), where we have introduced the combinations of parameters κ_1 = 3(α_2 + α_3) + α_4, κ_2 = −2(α_2 + 2α_3 + α_4) and κ_3 = α_3 + α_4 for convenience. We can simply integrate this equation and obtain the evolution of the Hubble function, with c_1 being an integration constant. Furthermore, it will be convenient to introduce the normalizations Ω_m h², Ω_r h², etc., with H(z = 0) = 100 h km s⁻¹ Mpc⁻¹. Note also that the density parameter for radiation contains the contribution of photons as well as of the relativistic neutrinos, Ω_r h² = Ω_γ h² (1 + 0.2271 N_eff), where Ω_γ h² = 2.469 × 10⁻⁵ at the CMB temperature T_CMB = 2.725 K and N_eff = 3.04 stands for the effective number of relativistic neutrino species.
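The machinery described above can be illustrated with a minimal numerical sketch. Since the model-specific right-hand side of the acceleration equation is not reproduced here, we use the ΛCDM expression as an assumed stand-in; only the integration of a first-order equation for H(z) (equivalent to the substitution V̇ = −(1+z)H dV/dz) and the radiation density including relativistic neutrinos are generic.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Radiation density parameter including relativistic neutrinos:
# Omega_r h^2 = Omega_gamma h^2 (1 + 0.2271 N_eff), with
# Omega_gamma h^2 = 2.469e-5 at T_CMB = 2.725 K and N_eff = 3.04.
h = 0.7                      # illustrative value
Orh2 = 2.469e-5 * (1.0 + 0.2271 * 3.04)
Om, Or = 0.3, Orh2 / h**2
OL = 1.0 - Om - Or           # flat universe, Omega_k = 0

def dH_dz(z, H):
    # Stand-in right-hand side (ASSUMPTION: LambdaCDM, not the massive
    # gravity model): obtained by differentiating
    # H^2 = Om (1+z)^3 + Or (1+z)^4 + OL in units of H0.
    # The model's acceleration equation (12) would replace this line.
    return [(3*Om*(1+z)**2 + 4*Or*(1+z)**3) / (2.0*H[0])]

# integrate outward from z = 0 (where H = H0 = 1) up to z = 2
sol = solve_ivp(dH_dz, (0.0, 2.0), [1.0], dense_output=True,
                rtol=1e-9, atol=1e-12)

def H_numeric(z):
    return sol.sol(z)[0]

def H_closed_form(z):
    # closed form of the same placeholder model, used as a cross-check
    return np.sqrt(Om*(1+z)**3 + Or*(1+z)**4 + OL)
```

The numerically integrated H(z) agrees with the closed form to the solver tolerance, which validates the redshift-space integration procedure before it is applied to the modified acceleration equation.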
In the following sections we shall compare our model with the background observations, namely SNIa, BAO and CMB. Note that we will assume Ω_k = 0 throughout the paper. Furthermore, without loss of generality we shall put α = 1 but keep β arbitrary. Also, since the integration constant c_1 in (13) is related to Ω_m through the Friedmann equation, we will be marginalizing over c_1.

IV. CONSTRAINTS FROM SNIA
Supernovae of Type Ia are used as standard candles with known brightness to infer physical distances. The logarithm of the luminosity of an astronomical object seen from a distance of 10 parsecs gives its absolute magnitude, which in turn can be used to obtain its brightness. We shall use the distance modulus µ to relate the expansion history of the universe to the apparent magnitude of a supernova at a given redshift. It is defined as the difference between the apparent magnitude m and the absolute magnitude M of the supernova and relates to the distance through µ = m − M = 5 log_10 D_L + µ_0, with µ_0 = 42.38 and the dimensionless luminosity distance D_L = H_0 d_L. The luminosity distance on the other hand is given by d_L = (1 + z) r(z), with r(z) standing for the comoving distance r(z) = ∫_0^z dz′/H(z′). Once we have the distance modulus of our model, we can directly compare it with the supernova data and compute the χ² estimator χ² = Σ_i [µ(z_i) − µ_obs(z_i)]²/σ_i². Since the supernova dataset lies below redshift 2, we have neglected the contribution coming from radiation, hence we have set Ω_r = 0 in this section. Since h is degenerate with the absolute magnitude, we marginalised over h. Furthermore, a careful analysis of the likelihood reveals a degeneracy in the β parameter. A detailed scrutiny of the likelihood would require an MCMC method applied to the four dimensional parameter space, which is beyond the scope of the present analysis. We instead adopted a grid-wise exploration of the likelihood in the different directions and explored a local minimum. Due to the degeneracy in the β direction, without loss of generality we fixed β to a given number and set κ_1 and the integration constant c_1 to their values at the local minimum for this given β, whereas we marginalised over h. The local minimum of the likelihood that we considered here is approximated by β ∼ 10, c_1 ∼ 0.27 and κ_1 ∼ −0.02. Furthermore, Ω_m is related to the integration constant through the Friedmann equation.
Thus out of the higher dimensional parameter space {Ω_m, Ω_r, h, c_1, κ_1, κ_2, κ_3, β}, after marginalizing over and/or fixing {Ω_m, Ω_r, h, c_1, κ_1, β} at the minimum values, we were left with the two parameters {κ_2, κ_3}. Through a grid-wise exploration of the two dimensional likelihood we constrained the remaining parameter space {κ_2, κ_3} using the supernova data. We use the Union data set [39]. The 68%, 95% and 99% C.L. regions for the supernova data are shown in Fig. 1. The local minimum of the χ² estimator is around (κ_2 ∼ −0.5, κ_3 ∼ −1.1). We could have chosen any other combination of pairs of parameters to compare with the data; since there is a degeneracy in the β direction and the integration constant c_1 is related to the matter energy density, any combination of the κ parameters could be equally good. We could not explore this possibility further, since that would require a full MCMC analysis of the model parameters. Our choice serves as a proof of principle of how the supernova data can be used to constrain the model parameters.

FIG. 1: We plot the marginalised χ² estimator in the κ_2 and κ_3 parameter space. The 68%, 95% and 99% C.L. regions for the SNIa Union dataset [39] are shown by the colour gradient. Recall that κ_1 = 3(α_2 + α_3) + α_4, κ_2 = −2(α_2 + 2α_3 + α_4) and κ_3 = α_3 + α_4, so this plot can also be seen as constraints on the α_n parameters. We have chosen the κ representation for convenience.
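The χ² comparison described in this section can be sketched as follows. The Hubble function below is a ΛCDM placeholder (an assumption, standing in for the model's modified H(z) with Ω_r = 0), and the synthetic data points are purely illustrative; the analytic marginalization over a constant magnitude offset implements the marginalization over h, which is degenerate with the absolute magnitude.

```python
import numpy as np
from scipy.integrate import quad

def E(z, Om=0.3):
    # dimensionless Hubble function H(z)/H0; LambdaCDM placeholder
    # (ASSUMPTION), radiation dropped since the SNIa sample lies
    # below redshift ~2
    return np.sqrt(Om*(1+z)**3 + (1.0 - Om))

def mu_model(z, Om=0.3):
    # distance modulus mu = 5 log10(D_L) + mu0 with mu0 = 42.38 and the
    # dimensionless D_L = (1+z) * int_0^z dz'/E(z') for a flat universe
    r, _ = quad(lambda zp: 1.0/E(zp, Om), 0.0, z)
    return 5.0*np.log10((1.0+z)*r) + 42.38

def chi2_sn(zs, mu_obs, sigma, Om=0.3):
    # chi^2 marginalised analytically over a constant offset, which
    # absorbs the degeneracy between h and the absolute magnitude M
    d = np.array([mu_model(z, Om) for z in zs]) - mu_obs
    A = np.sum((d/sigma)**2)
    B = np.sum(d/sigma**2)
    C = np.sum(1.0/sigma**2)
    return A - B**2/C

# purely illustrative synthetic data drawn from the placeholder model
zs = np.array([0.1, 0.3, 0.5, 0.8, 1.2])
sigma = np.full(5, 0.15)
mu_obs = np.array([mu_model(z) for z in zs])
```

Note that shifting all data points by a constant magnitude leaves this χ² unchanged, which is exactly the property that allows h to be marginalised away.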

V. CONSTRAINTS FROM BAO
The density of baryonic matter exhibits periodic fluctuations referred to as baryon acoustic oscillations, which are the outcome of the counteracting forces of pressure and gravity. The pressure released by the photons after decoupling creates a shell of baryonic matter at the sound horizon. The measurement of these baryonic oscillations yields the distance-redshift relation d_z ≡ r_s(z_d)/D_V(z) at the redshifts z = 0.2 and z = 0.35 [40], with the sound horizon expressed as r_s(z) = ∫_z^∞ c_s(z′) dz′/H(z′) and the dilation scale as D_V(z) = [(1 + z)² d_A(z)² z/H(z)]^{1/3}. The redshift z_d represents the epoch at which the baryons were released from the photons and is given by the fitting formula [41] z_d = 1291 (Ω_m h²)^{0.251} / [1 + 0.659 (Ω_m h²)^{0.828}] × [1 + b_1 (Ω_b h²)^{b_2}], with the parameters b_1 = 0.313 (Ω_m h²)^{−0.419} [1 + 0.607 (Ω_m h²)^{0.674}] and b_2 = 0.238 (Ω_m h²)^{0.223} standing for short-cut notations. The corresponding BAO data vector and χ² estimator are built with the inverse covariance matrix given in [40]. We are now ready to compare our model with the BAO data points. We can proceed in the same way as for the SNIa data. However, note a crucial difference: the χ² estimator for BAO depends directly on the density parameter of the baryons Ω_b, hence we need to marginalise over this parameter as well. The parameters κ_1 and β have been fixed to the values of the local minimum, whereas we marginalised over h again. The marginalised χ² estimator over the parameters (κ_2, κ_3) is given in Fig. 2.
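A sketch of the BAO observable under stated assumptions: the fitting formula for z_d as quoted above, a ΛCDM placeholder for H(z) (the model's modified Hubble function would be used instead), the standard baryon-photon sound speed, and illustrative parameter values. The measured d_z values and the covariance matrix of [40] are not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

Om, Ob, h = 0.2565, 0.0440, 0.719        # illustrative parameter values
Omh2, Obh2 = Om*h**2, Ob*h**2
Or = 2.469e-5*(1.0 + 0.2271*3.04)/h**2   # photons + relativistic neutrinos
OL = 1.0 - Om - Or

def z_drag(Omh2, Obh2):
    # fitting formula for the baryon drag epoch z_d [41]
    b1 = 0.313*Omh2**(-0.419)*(1.0 + 0.607*Omh2**0.674)
    b2 = 0.238*Omh2**0.223
    return 1291.0*Omh2**0.251/(1.0 + 0.659*Omh2**0.828)*(1.0 + b1*Obh2**b2)

def E(z):
    # H(z)/H0; LambdaCDM placeholder (ASSUMPTION)
    return np.sqrt(Om*(1+z)**3 + Or*(1+z)**4 + OL)

def sound_horizon(z):
    # r_s(z) = int_z^inf c_s dz'/H in units of c/H0, integrated in the
    # scale factor a; c_s = 1/sqrt(3(1 + R_b a)) with the standard
    # baryon-to-photon ratio R_b = 31500 * Obh2 * (T_CMB/2.7)^{-4}
    Rb = 31500.0*Obh2*(2.725/2.7)**(-4)
    f = lambda a: 1.0/(a**2*E(1.0/a - 1.0)*np.sqrt(3.0*(1.0 + Rb*a)))
    rs, _ = quad(f, 1e-10, 1.0/(1.0 + z))
    return rs

def D_V(z):
    # dilation scale D_V = [(1+z)^2 d_A^2 z/H]^{1/3}; in a flat universe
    # (1+z) d_A equals the comoving distance r(z)
    r, _ = quad(lambda zp: 1.0/E(zp), 0.0, z)
    return (r**2*z/E(z))**(1.0/3.0)

zd = z_drag(Omh2, Obh2)
d_z = {z: sound_horizon(zd)/D_V(z) for z in (0.2, 0.35)}
```

Working in units of c/H0 makes h cancel everywhere except inside Ω_m h² and Ω_b h², which is why the BAO χ² retains an explicit dependence on the baryon density.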

VI. CONSTRAINTS FROM CMB
Next we would like to confront our model with the CMB data. For this purpose we closely follow the distance priors method of Komatsu et al [42], which relies on the use of two distance ratios. The first distance ratio, the acoustic scale l_A = (1 + z_*) π d_A(z_*)/r_s(z_*), constitutes the ratio between the angular diameter distance to the decoupling epoch and the comoving sound horizon size r_s at the decoupling epoch, with the decoupling redshift z_* given by the fitting function of Hu and Sugiyama. The second distance ratio, the shift parameter R = √(Ω_m H_0²) (1 + z_*) d_A(z_*), is the one between the angular diameter distance and the Hubble horizon size at the decoupling time. Following Komatsu et al [42], we take the following values for the distance priors: l_A = 302.10 ± 0.86, R = 1.710 ± 0.019 and z_* = 1090.04 ± 0.93, with the CMB data vector and the inverse covariance matrix taken from [42]. The corresponding χ² estimator of the CMB is constructed in the same way as before. Note that these CMB distance priors are applicable only under the assumption that the dark energy component is not relevant at the decoupling time. Since in our model of massive gravity the modifications become appreciable only at small redshifts, the usage of these priors is justified. In contrast to the previous analyses, the CMB distance priors are very sensitive to the radiation component, hence we reinstate the explicit dependence on Ω_r in the Hubble function. Furthermore, we have to marginalize over Ω_b h² and h. We show the 68%, 95% and 99% C.L. regions for the distance priors of the CMB in Fig. 3.
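The distance priors can be sketched as follows. The z_* fitting function of Hu and Sugiyama and the definitions of l_A and R are standard; the ΛCDM Hubble function is again an assumed placeholder for the model's modified H(z), and, since the inverse covariance matrix of [42] is not reproduced above, the χ² below uses a diagonal approximation built from the quoted one sigma errors (an assumption for illustration only).

```python
import numpy as np
from scipy.integrate import quad

Om, Ob, h = 0.2565, 0.0440, 0.719        # illustrative parameter values
Obh2, Omh2 = Ob*h**2, Om*h**2
Or = 2.469e-5*(1.0 + 0.2271*3.04)/h**2   # photons + relativistic neutrinos
OL = 1.0 - Om - Or

def E(z):
    # H(z)/H0; LambdaCDM placeholder (ASSUMPTION)
    return np.sqrt(Om*(1+z)**3 + Or*(1+z)**4 + OL)

def z_star(Omh2, Obh2):
    # Hu & Sugiyama fitting function for the decoupling redshift
    g1 = 0.0783*Obh2**(-0.238)/(1.0 + 39.5*Obh2**0.763)
    g2 = 0.560/(1.0 + 21.1*Obh2**1.81)
    return 1048.0*(1.0 + 0.00124*Obh2**(-0.738))*(1.0 + g1*Omh2**g2)

def comoving_distance(z):
    # r(z) = int_0^z dz'/E(z') in units of c/H0 (flat universe)
    r, _ = quad(lambda zp: 1.0/E(zp), 0.0, z)
    return r

def sound_horizon(z):
    # comoving sound horizon in units of c/H0, integrated in the scale factor
    Rb = 31500.0*Obh2*(2.725/2.7)**(-4)
    f = lambda a: 1.0/(a**2*E(1.0/a - 1.0)*np.sqrt(3.0*(1.0 + Rb*a)))
    rs, _ = quad(f, 1e-10, 1.0/(1.0 + z))
    return rs

zs = z_star(Omh2, Obh2)
lA = np.pi*comoving_distance(zs)/sound_horizon(zs)   # acoustic scale
R = np.sqrt(Om)*comoving_distance(zs)                # shift parameter

def chi2_cmb(vec):
    # ASSUMPTION: diagonal errors in place of the inverse covariance of [42]
    data = np.array([302.10, 1.710, 1090.04])
    sig = np.array([0.86, 0.019, 0.93])
    return np.sum(((np.asarray(vec) - data)/sig)**2)

chi2 = chi2_cmb([lA, R, zs])
```

Because l_A and R are built from distance ratios in units of c/H0, h enters only through Ω_m h² and Ω_b h², consistent with the marginalizations described above.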

VII. CONCLUSIONS
This work was devoted to a detailed study of the background evolution of massive gravity in the presence of the composite effective metric, to which the matter fields in the dark sector couple. Using the constraint and Friedmann equations, we have seen that the direct dependence on the matter field in the dark sector disappears. The resulting modified Hubble function only depends on the model parameters and on the fluid dynamics of the standard matter fields that live on the space-time metric. Clearly, in order to constrain the matter field in the dark sector, one would need to go beyond background observations and consider the implications for the perturbations. This shall be investigated in a future work. In a previous work, we had shown the existence of attractor de Sitter critical points and studied the stability of perturbations. In this work, we have studied the constraints on the parameters of the theory coming from the SNIa, BAO and CMB data. Since the model contains too many parameters, we either fixed or marginalised over some of them, leaving two parameters free. The two free parameters appearing in the effective composite metric enter such that one of them can be fixed to unity. The remaining parameter β introduces a degeneracy in the likelihood. The model consists of six parameters {Ω_m, h, β, κ_1, κ_2, κ_3}. We first obtained the constraints coming from the SNIa data. For this we explored the likelihood grid-wise and fixed the two parameters β and κ_1 to the values of a local minimum, whereas we marginalised over h, leaving the two parameters κ_2 and κ_3 free. In a similar way we compared our model to the BAO and CMB data as well. As can be seen in Fig. 4, the contours of the SNIa data agree nicely with those of the BAO and CMB data, even in the simple grid-wise exploration of the likelihood. The one sigma contours of the three observations overlap, and the preferred values for the two parameters are {κ_2 ∼ −0.6, κ_3 ∼ −0.5}; thus the agreement is at the one sigma level. In a future work, a detailed analysis of the background observations will be performed using an MCMC method, together with the constraint analysis imposed by observations of the perturbations.

FIG. 4: This plot shows the 68%, 95% and 99% C.L. regions for the SNIa, BAO and CMB data. The contours of the SNIa data set are in nice agreement with the BAO and CMB data sets.