Nonparametric reconstruction of the O m diagnostic to test Λ CDM



INTRODUCTION
At present, numerous projects and surveys are either underway or being proposed [Amendola et al. (2013)] to discover the underlying cause of the accelerated expansion, which is well established by present observations: Supernovae Type Ia (SNIa) [Riess et al. (1998)], Baryon Acoustic Oscillations (BAO) [Eisenstein et al. (2005)], Cosmic Microwave Background Radiation (CMBR) anisotropies [Spergel et al. (2003)], Large Scale Structure formation [Tegmark et al. (2004)] and Weak Lensing [Jain & Taylor (2003)]. The current standard cosmological model, consistent with these observations, is the ΛCDM or concordance model, in which the accelerated behaviour is driven by a cosmological constant Λ in a Universe filled with Cold Dark Matter (CDM). This Λ is usually related to an extra component in the Universe, the so-called Dark Energy (DE), with w = −1. Despite its simplicity, the ΛCDM model has a couple of theoretical loopholes (e.g. the fine-tuning and coincidence problems [Perivolaropoulos (2011)]) which have led to alternative proposals that either modify General Relativity or consider a landscape with a dynamical DE. In this way, DE can be described by an equation of state (EoS) written in terms of the redshift, w(z), but until now we have no precise evidence of the value and/or evolution of this quantity. Since its properties are still under research, a wide zoo of DE parameterizations has been proposed to help discern the dynamics of this component [Gong & Zhang (2005)].
* Email: cescamilla@mctp.mx
† Email: fabris@pq.cnpq.br
Despite the efforts to solve the theoretical loopholes of the concordance model, no strong alternative has emerged yet. In this context, it is useful to test the consistency of ΛCDM with cosmological observations and to compare it with alternative models or parameterizations. However, this mainstream approach is unlikely to reveal any new physics beyond this scenario; to reveal such possible new physics it is essential to avoid prior knowledge of a cosmological model when searching for an EoS that reliably describes the available astrophysical data. An important goal along the same line is to differentiate the ΛCDM model from other DE models in a scenario with as few priors as possible because, as we have experienced over the years, incorrect priors on w(z) or on the density parameters can lead to incorrect cosmological results. An interesting null test of DE, called the Om diagnostic, was proposed in [Sahni, Shafieloo & Starobinsky (2008)]. The elegance of this proposal lies in its theoretical form, which is constructed using only the Hubble parameter H(z), a quantity that can be measured directly from observations. This procedure allows one to differentiate the cosmological constant (flat ΛCDM) from a dynamical model (curved ΛCDM) only by assuming as a prior the value of Ωm. Even if the value of Ωm is not accurately known, the authors of the previous reference give some interesting insights in [Shafieloo, Sahni & Starobinsky (2012)] using an extension of the Om diagnostic called the two-point difference. As a step forward, in [Seikel et al. (2012)] a curved ΛCDM was analyzed, in which the diagnostic function Om^(2) includes first derivatives of H(z) and a new parameter related to the curvature, Ok, enters the scene.
These tests are quite helpful because we have a scenario in which the diagnostic function can tell us whether the previous DE assumptions are in agreement with the ΛCDM model or deviate from it towards an alternative DE or a modified gravity model.
One of the most useful astrophysical tools is the luminosity distance of SNIa observations, which has the advantage of leading to H(z) via its first derivative. So far, there are two astrophysical samples that provide direct measurements of H(z): first, the Cosmic Chronometers (C-C), a compilation of H(z) measurements estimated from the differential evolution of passively evolving early-type galaxies [Simon, Verde & Jimenez (2005)]; second, the radial BAO scale in the galaxy distribution, a relic of the pre-recombination universe [Blake et al. (2012); Gaztanaga, Cabre & Hui (2009)].
The mentioned diagnostic has been tested with these astrophysical samples, providing a smoothed, model-independent account of the cosmic acceleration via Gaussian processes [Holsclaw et al. (2010); Shafieloo, Kim & Linder (2012)], but the price we pay for this is a strong constraint on the statistical process and the assumption of an initial-guess cosmological model.
In the light of these issues, in [Montiel et al. (2014)] the use of two statistical techniques was proposed: Locally Weighted Scatterplot Smoothing (Loess) [Cleveland (1979)] and the Simulation and Extrapolation method (Simex) [Apanasovich, Carroll & Maity (2009)], in order to address a nonparametric scenario with the fewest possible priors, obtain a smooth reconstruction of the parameter H(z) and, of course, recover the well-established cosmic acceleration. Two novel achievements of these statistical techniques are: (1) we do not need any DE parameterization as a prior; instead, we apply the full astrophysical sample directly in the code, and the evolution of the cosmological parameters is given by the smooth curve suggested by the observations; (2) we do not require any functional distribution for the analysis. There are only a couple of restrictions, both related to the statistical analysis: (a) the size of the data window in which we develop a fitting routine based on a polynomial of a specific degree [Press et al. (1992); Daly & Djorgovski (2003)]; (b) a weight function that gives each data point some importance with respect to the other observations around it. We emphasize that this factory is a cosmological-model-independent method due to its relaxed use of information about cosmological parameters in comparison to Gaussian processes, where strong constraints such as spatial flatness are required [Holsclaw et al. (2010)]. In this research, we follow these ideas to restrict even further the use of priors via the Loess-Simex factory and reconstruct h(z) and its derivative to test the ΛCDM model.
This paper is organised as follows. In Sect. 2 we give an overview of the quantities used to test the ΛCDM model. In Sects. 3 and 4 we derive the equations for the Om diagnostic by considering a constant EoS and present the cases of a flat and a curved universe. In Sect. 5 we describe the astrophysical samples for H(z). In the following two sections we describe our methodology with the Loess-Simex factory to reconstruct h(z) and the Om diagnostic. We conclude in Sect. 7 with a discussion of the results obtained.

ΛCDM BACKGROUND
The dark energy reconstruction starts by assuming the validity of the FLRW metric, which gives the Friedmann equation

H²(z) = H0² [Ω0m(1+z)³ + Ω0k(1+z)² + Ω0DE f(z)],   (1)

where

h(z) ≡ H(z)/H0,   f(z) = exp[3 ∫0^z (1+w(z'))/(1+z') dz'],   (2)

and Ω0m, Ω0k are the matter and curvature densities at the present epoch, respectively, with Ω0DE = 1 − Ω0m − Ω0k. The EoS that characterizes DE can be obtained by introducing Eq.(1) and differentiating, to obtain its characteristic expression

w(z) = [2(1+z) h h′ − 3h² + Ω0k(1+z)²] / 3[h² − Ω0m(1+z)³ − Ω0k(1+z)²],   (3)

where h′(z) is the first derivative of the normalized Hubble parameter with respect to the redshift z. Here we can notice that, depending on the values of the density parameters, there is a strong restriction on w(z). The simplest explanation for DE is when this parameter acquires the value w = −1, which corresponds to a cosmological constant Λ. Other interesting cases emerge when w > −1 (w < −1), which point to a quintessence (phantom) scenario, respectively. However, the models are still restricted by the values of the density parameters, and a distinction between them is quite difficult at this point. This issue motivated a diagnostic to differentiate between DE models in scenarios where w could be constant (and flat) or dynamical (and non-flat). The Om diagnostic outlines a test with which we can discriminate between DE models according to whether the value of Om is constant or not. Following these lines, let us start our study by describing the Om diagnostic for a flat ΛCDM model as an example. Afterwards, we will proceed with the presentation of the dynamical (non-flat) diagnostic.

THE OM DIAGNOSTIC BACKGROUND
Let us begin with the distance-redshift relation

D(z) ≡ H0 dL(z) / [c (1+z)],   (4)

where

dL(z) = c (1+z) ∫0^z dz′/H(z′)   (5)

is the luminosity distance.

[Table 1: Features of the Om diagnostic with respect to the value of Ω0m, which can be taken from recent Planck results [Ade et al. (2015)], and a constant EoS w = w0, for the models considered (e.g. phantom, cosmic strings).]

Differentiating Eqs.(4)-(5) and considering a flat universe (Ω0k = 0), one finds D′(z) = H0/H ≡ h⁻¹. In this flat background with a constant DE EoS, w = w0, Eq.(1) can be expressed as:

h²(z) = Ω0m(1+z)³ + (1 − Ω0m)(1+z)^{3(1+w0)},   (6)

from which we can define a function that characterizes this diagnostic,

Om^(1)(z) ≡ [h²(z) − (1+z)^{3(1+w0)}] / [(1+z)³ − (1+z)^{3(1+w0)}],   (7)

where the upper index '(1)' indicates that only the first derivative of the luminosity distance dL is involved.
To test the ΛCDM model using direct observations of the Hubble rate H(z), we set w0 = −1 in Eq.(7) [Sahni, Shafieloo & Starobinsky (2008)]:

Om(z) = [h²(z) − 1] / [(1+z)³ − 1].   (8)

At this point, we can distinguish the ΛCDM model from other DE models by rewriting Eq.(8) using Eq.(6), obtaining

Om(z) = Ω0m + (1 − Ω0m) [(1+z)^{3(1+w0)} − 1] / [(1+z)³ − 1],   (9)

where, on one hand, if w0 = −1 then Om(z) = Ω0m at every redshift; on the other hand, deviations from constancy trace alternative behaviours of Om^(1) (see Figure 1), obtained by considering a specific value for w0, e.g. non-interacting cosmic strings with w0 = −1/3 [Alam et al. (2003)] and static domain walls with w0 = −2/3 [Friedland, Murayama & Perelstein (2003)]. To distinguish between these models we require the introduction of the Om diagnostic at first order in h′, which is related to the dynamical test.
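As a quick numerical illustration (our own sketch, not part of the original analysis; the function name om_flat is ours), Eq.(8) can be checked against a fiducial flat ΛCDM model, for which the diagnostic must return a constant equal to Ω0m:

```python
import numpy as np

def om_flat(z, h):
    """Zeroth-order Om diagnostic for a flat universe, Eq.(8):
    Om(z) = (h^2 - 1) / ((1+z)^3 - 1).
    For flat LCDM this is constant and equal to Omega_0m."""
    z = np.asarray(z, dtype=float)
    h = np.asarray(h, dtype=float)
    return (h**2 - 1.0) / ((1.0 + z)**3 - 1.0)

# Sanity check: for flat LCDM with Omega_0m = 0.3,
# h^2 = 0.3(1+z)^3 + 0.7, so Om(z) is 0.3 at every z.
z = np.linspace(0.1, 2.0, 5)
h = np.sqrt(0.3 * (1 + z)**3 + 0.7)
print(om_flat(z, h))  # each value ~0.3
```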

THE DYNAMICAL OM DIAGNOSTIC
A more meticulous analysis based on the above-mentioned features takes into account a curved model, where first derivatives of h(z) come into the scene. Expressions for this case can be obtained by considering Ω0k ≠ 0 and w = w0 in Eq.(1):

h²(z) = Ω0m(1+z)³ + Ω0k(1+z)² + (1 − Ω0m − Ω0k)(1+z)^{3(1+w0)},   (10)

from which, together with its first derivative, two expressions Om^(2)(z) and Ok(z) can be found (Eqs.(11)-(12)), where the upper index '(2)' indicates the existence of a second derivative of the luminosity distance. The calculations are explained in Appendix A. For w0 = −1 these reduce to

Om^(2)(z) = 2[(1+z)(1 − h²) + z(2+z) h h′] / [z²(3+z)(1+z)],   (13)

Ok(z) = [3(1+z)²(h² − 1) − 2z(z²+3z+3) h h′] / [z²(3+z)(1+z)].   (14)
To perform the distinction between DE models we can rewrite Eq.(11) using Eq.(10) and its derivative, which gives Om^(2)(z) = Ω0m and Ok(z) = Ω0k for ΛCDM; constancy of both quantities thus implies a ΛCDM model.
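The dynamical diagnostics can be sketched in the same spirit. The following snippet (ours; it assumes the standard w0 = −1 curved-ΛCDM forms of the diagnostics, with function names om2 and ok of our choosing) verifies that a fiducial curved ΛCDM model with Ω0m = 0.3 and Ω0k = 0.05 is recovered at every redshift:

```python
import numpy as np

def om2(z, h, hp):
    """First-order (dynamical) Om diagnostic; constant and equal to
    Omega_0m for curved LCDM. hp = dh/dz."""
    return 2.0 * ((1 + z) * (1 - h**2) + z * (2 + z) * h * hp) \
        / (z**2 * (3 + z) * (1 + z))

def ok(z, h, hp):
    """Curvature diagnostic; constant and equal to Omega_0k for curved LCDM."""
    return (3 * (1 + z)**2 * (h**2 - 1) - 2 * z * (z**2 + 3*z + 3) * h * hp) \
        / (z**2 * (3 + z) * (1 + z))

# Sanity check with curved LCDM (Omega_0m = 0.3, Omega_0k = 0.05):
z = np.linspace(0.2, 2.0, 5)
h = np.sqrt(0.3 * (1 + z)**3 + 0.05 * (1 + z)**2 + 0.65)
hp = (0.9 * (1 + z)**2 + 0.1 * (1 + z)) / (2 * h)  # dh/dz from h^2
print(om2(z, h, hp), ok(z, h, hp))  # ~0.3 and ~0.05 everywhere
```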

OBSERVATIONS OF THE HUBBLE RATE
To perform the diagnostic analysis we require the observed H(z) data at hand. This parameter has become an effective probe in cosmology in comparison with SNIa, BAO and CMB data. In fact, it is more rewarding to study the observational H(z) data directly, because all those tests use a distance-scale measurement (e.g. the luminosity distance dL, the shift parameter R, or the distance parameter A) to determine the values of the cosmological parameters, which involves an integral of H(z) and therefore loses some important information about this quantity.
H(z) depends on the differential age as a function of redshift z in the form H(z) = −(1 + z)⁻¹ dz/dt, which gives a direct measurement of H(z) through the change of redshift in cosmic time. For this measurement we consider two independent samples: (i) Cosmic Chronometers (C-C) data. This kind of sample gives a measurement of the expansion rate without relying on the nature of the metric between the chronometer and us. We employ the data sets presented in [Simon, Verde & Jimenez (2005)]. A full compilation of the latter, which includes 28 measurements of H(z) in the range 0.07 < z < 2.3, is reported in [Farooq & Ratra (2013)]. The normalized parameter h(z) can be easily determined by adopting the value H0 = 67.31 ± 0.96 km s⁻¹ Mpc⁻¹ [Ade et al. (2015)].
(ii) Data from BAO. Unlike the angular-diameter-distance dA measurements given by the transverse BAO scale, the H(z) data can be extracted from measurements of the line-of-sight BAO scale. Because the BAO distance scale is calibrated by the CMB, its constraint on the DE parameters is strongest at low redshift. The samples we consider consist of 3 data points from [Blake et al. (2012)] and 3 more from [Gaztanaga, Cabre & Hui (2009)], measured at six redshifts in the range 0.24 < z < 0.73. This data set is shown in Table 2.

NONPARAMETRIC RECONSTRUCTIONS
Following the same methodology proposed in [Montiel et al. (2014)], we are going to reconstruct the normalized Hubble parameter h using the Loess-Simex factory.

Reconstruction of h(z)
Step A1. Windows and subsample selection. First, we select the proportion of observations falling in a specific window. Each selection consists of some percentage of the total number of observations, and to each subsample a specific weighted least-squares local polynomial fit is assigned. The subsample is set via a quantity usually known in statistical jargon as the smoothing parameter or span s: we use k = ns, where k is the number of observations per window, rounded up to the next largest integer, n is the total number of observations, and s typically takes values between 0 and 1. We calculated the values s = 0.9 for the C-C sample, s = 0.85 for the BAO sample and s = 0.4 for the C-C+BAO total sample, which correspond to 90, 85 and 40 percent of the data in each window, respectively. These values were found using the cross-validation process detailed in [Montiel et al. (2014)].
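A minimal sketch of the window-size rule k = ns (rounded up), assuming nothing beyond the text above (the function name is ours):

```python
import math

def window_size(n, s):
    """Number of observations per Loess window: k = n*s rounded up."""
    return math.ceil(n * s)

# For the 28 C-C measurements with span s = 0.9 each local fit uses 26
# points; for the 6 BAO points with s = 0.85 it uses all 6.
print(window_size(28, 0.9))  # 26
print(window_size(6, 0.85))  # 6
```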
Step A2. Weighted subsamples. Having selected the amount of data in each window, we consider that data points near each other are more related than points significantly far away, which receive a null weight. This idea is encoded in the weight function described by a tricube kernel:

W(z̃) = (1 − |z̃|³)³ for |z̃| < 1,  W(z̃) = 0 otherwise,

where z̃i = (zi − z0)/d indicates the distance between the predictor redshift value of the i-th observation and the focal redshift z0, and d is the maximum distance between the point of interest and the elements inside the window.
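The tricube kernel is straightforward to implement; this sketch (function name ours) reproduces the weighting described above:

```python
import numpy as np

def tricube(z, z0, d):
    """Tricube kernel weights: W(u) = (1 - |u|^3)^3 for |u| < 1, else 0,
    with u = (z - z0)/d."""
    u = np.abs((np.asarray(z, dtype=float) - z0) / d)
    return np.where(u < 1.0, (1.0 - u**3)**3, 0.0)

# The focal point gets weight 1; points at the window edge get 0.
print(tricube([0.5, 0.7, 1.0], z0=0.5, d=0.5))  # [1.0, ~0.82, 0.0]
```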
Step A3. Regression analysis. Following the Loess technique, we consider a low-degree polynomial to perform a local fit of the subsample in each window. Higher-degree polynomials might work, but for a simple nonparametric regression model we perform the analysis with a polynomial of the form

H(z) = a0 + a1 z.   (16)

The first term corresponds to the fitting coefficient of H(z), calculated by evaluating at z = 0, so that H(0) = a0. A similar fitting routine was proposed in [Daly & Djorgovski (2003)]. The second term on the r.h.s. is related to H′, a parameter that we will reconstruct in Sect. 6.2. The reconstructed quantity is a weighted sum of the observations H(z), represented as

Ĥ(zi) = Σj Wij H(zj),

where the weights in this regression are Wij = W[(xi − x0)/d] and j = 1, …, k.
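A minimal sketch of the weighted least-squares local linear fit of Eq.(16) with tricube weights; this is our illustration of the idea, not the authors' code:

```python
import numpy as np

def local_linear_fit(z, H, z0, d):
    """Weighted least-squares fit of H(z) = a0 + a1*z inside a window
    centred at z0, using tricube weights; returns (a0, a1)."""
    z, H = np.asarray(z, float), np.asarray(H, float)
    u = np.abs((z - z0) / d)
    w = np.where(u < 1.0, (1.0 - u**3)**3, 0.0)
    sw = np.sqrt(w)
    # Weighted design matrix: rows scaled by sqrt(weight).
    A = np.column_stack([np.ones_like(z), z]) * sw[:, None]
    coeffs, *_ = np.linalg.lstsq(A, H * sw, rcond=None)
    return coeffs  # (a0, a1)

# Noiseless linear data is recovered exactly:
z = np.linspace(0.0, 1.0, 11)
H = 67.0 + 30.0 * z
a0, a1 = local_linear_fit(z, H, z0=0.5, d=0.6)
print(a0, a1)  # ~67.0, ~30.0
```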
Step A4. Simulated data sample. The Simex method offers an algorithm to estimate the true parameters in situations where the covariate data are noisy. Basically, this step consists of adding to the data set an additional measurement error as follows:

ηi(λ) = Hi + √λ σHi εi,  εi ∼ N(0, 1),

where ηi(λ) denotes the simulated data points and σ²Hi is the measurement-error variance of each H(z) observation. The resulting measurement-error variance is (1 + λ)σ²Hi, so we can extrapolate the data sample to an error-free zone at λ = −1. This zone is reached after performing a standard regression, using a quadratic polynomial in λ, on the data sets computed for different values of λ. Specifically, we consider values from λ = 0.5 up to λ = 2, increasing in steps of 0.1.
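The Simex mechanics can be sketched as follows; in this deliberately degenerate illustration (ours) the "estimator" applied to each perturbed data set is the data point itself, averaged over many simulations, so the quadratic extrapolation to λ = −1 should simply return values close to the input:

```python
import numpy as np

def simex(H, sigma_H, lambdas=np.arange(0.5, 2.01, 0.1),
          n_sim=50000, seed=1):
    """Sketch of the Simex step: for each lambda, add extra noise
    sqrt(lambda)*sigma to the data, average the perturbed values, then
    extrapolate the trend in lambda back to lambda = -1 quadratically."""
    rng = np.random.default_rng(seed)
    H, sigma_H = np.asarray(H, float), np.asarray(sigma_H, float)
    est = []
    for lam in lambdas:
        sims = H + np.sqrt(lam) * sigma_H * rng.standard_normal((n_sim, H.size))
        est.append(sims.mean(axis=0))
    est = np.array(est)                        # shape (n_lambda, n_data)
    out = np.empty(H.size)
    for i in range(H.size):
        c = np.polyfit(lambdas, est[:, i], 2)  # quadratic in lambda
        out[i] = np.polyval(c, -1.0)           # extrapolate to lambda = -1
    return out

H = np.array([69.0, 83.0, 95.0])
sig = np.array([12.0, 8.0, 17.0])
print(simex(H, sig))  # close to the input H (the added noise averages out)
```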
Step A5. Starting the reconstruction. After performing the latter extrapolation step, the data set is reduced to the same length as the initial data and, finally, these simulated data sets are normalized by H0, giving as a result the reconstruction of h(z). All the above steps are repeated for all the data points in the astrophysical sample. The connection between the Loess-Simex reconstructed data points is represented by a curve due to the lack of parameter estimates. The reconstructed normalized Hubble parameter h(z) gives the general trend of the model.
Step A6. About the confidence regions. To obtain the confidence regions of the reconstructed parameter h(z) we transfer the uncertainties via error propagation. Since

h(z) = H(z)/H0,   (18)

the propagated variance is

σh² = σH²/H0² + (H²/H0⁴) σH0².   (19)

With this expression we can calculate the uncertainty of the Om diagnostic,

σOm = [2h / ((1+z)³ − 1)] σh.   (20)

For the dynamical Om diagnostic we have the corresponding uncertainties, obtained by propagating Eqs.(13)-(14) with respect to h and h′,

σ²Om(2) = (∂Om^(2)/∂h)² σh² + (∂Om^(2)/∂h′)² σh′²,  σ²Ok = (∂Ok/∂h)² σh² + (∂Ok/∂h′)² σh′².   (21)

As this set implies, we need to find the value of σH. Let us start with the fitted value Ĥ(z) obtained in Step A3. For nonparametric regression models we estimate the error variance as

σ̂² = Σi ri² / (n − df_mod),   (23)

where ri = Hi − Ĥi is the residual of the i-th observation and df_mod is the equivalent number of degrees of freedom of the model, which in our case is equal to two. With this we can compute the variance of the fitted value Ĥ(z) at z = zi,

V(Ĥi) = σ̂² Σj Wij².   (24)

The results of the latter are used to compute the propagated values σh in Eq.(19). Finally, the 68% and 95% confidence intervals are given by hi ± √V(Ĥi)/H0 and hi ± 2√V(Ĥi)/H0, respectively, with hi = Ĥi/H0.
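Eqs.(23)-(24) translate directly into code; a sketch (ours), with a toy residual vector and weight matrix:

```python
import numpy as np

def loess_variance(residuals, weights_matrix, df_mod=2):
    """Error variance sigma^2 = sum(r_i^2) / (n - df_mod), Eq.(23), and
    variance of each fitted value V(H_hat_i) = sigma^2 * sum_j W_ij^2,
    Eq.(24)."""
    r = np.asarray(residuals, float)
    W = np.asarray(weights_matrix, float)
    n = r.size
    sigma2 = np.sum(r**2) / (n - df_mod)
    V = sigma2 * np.sum(W**2, axis=1)
    return sigma2, V

# Toy example: 4 residuals, uniform smoothing weights W_ij = 1/4.
r = [1.0, -1.0, 2.0, -2.0]
W = np.full((4, 4), 0.25)
s2, V = loess_variance(r, W)
print(s2, V)  # 5.0, [1.25 1.25 1.25 1.25]
```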

Reconstruction of h′(z)
The logic here follows the steps explained above. Nonetheless, we now proceed with a data set that only includes the coefficients related to the first derivative of H(z).
Step B1. Reconstruction of h′(z). Let us proceed as in Steps A1 to A3, where in the latter we performed a linear fit using Eq.(16). The fitting coefficient of interest is determined by evaluating the derivative of the polynomial at z = 0, i.e. H′(0) = a1, where the prime denotes differentiation with respect to z. The new data set consists of these a1 coefficients for the 28 simulated data points, to which we apply a least-squares fit and then extrapolate to λ = −1, giving us the data set that we normalize to obtain the values of h′(z) and its respective curve, as in Step A5.
Step B2. About the error propagation. Estimating the errors of h′(z) and constructing a step similar to Eqs.(23)-(24) can be a little tricky, and it is necessary to be careful with the following methodology. This can be seen from the form of Eq.(18), an expression that can be used similarly for h′(z) if we have at hand the values of H′(z) (already obtained from the linear fit performed in Step B1). The next question is: how can we compute the uncertainties of H′(z)? We need to start from Step A4, where we perform a least-squares fit; the polynomial whose uncertainty we now need to propagate is the quadratic extrapolation

P(λ) = c0 + c1 λ + c2 λ²,  so that  σ²H′ = σ²c0 + σ²c1 + σ²c2 at λ = −1,

where the σ-values are the diagonal elements of the covariance matrix obtained from the fit to the H′(z) data set.
With the new set [H′(z), σH′], we are ready to reproduce the same steps starting from Eq.(18), computing the error and variance via Eqs.(23)-(24). Up to now we have not yet taken into account any normalization of H′(z), an aspect that is implicit in the following propagation of errors:

σ²h′ = σ²H′/H0² + (H′²/H0⁴) σ²H0.

Finally, using this error propagation and its respective h′(z) value, we can construct the confidence regions as in Step A6.

Nonparametric reconstruction of the Om diagnostic
On the one hand, regarding the Om diagnostic for the flat ΛCDM model, Eq.(8), it is straightforward to compute the Om data set using the Loess-Simex estimated values of h(z) calculated in Sect. 6: the values of Om follow directly from the new data set ĥ(z). On the other hand, the uncertainty calculations are easy to perform via Eq.(20). Thereupon, we construct the 68% and 95% confidence intervals using the expressions Ôm ± σÔm and Ôm ± 2σÔm, respectively. As we discussed, the existence of a non-flat universe brings h′(z) and Ok into the scene. In this case the system is given by Eqs.(13)-(14), which are independent of the values of the cosmological parameters Ωm and Ωk, implying a model that relies only on the values of our reconstructed h(z) and h′(z).
The confidence regions are computed using the error propagation Eqs.(20)-(21) and the expressions Ô(2)m ± σÔ(2)m and Ôk ± σÔk, together with their 2σ counterparts.
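The flat-case confidence bands follow from Eqs.(8) and (20); a sketch (ours, with a hypothetical function name) propagating σh into σOm:

```python
import numpy as np

def om_with_error(z, h, sigma_h):
    """Om(z) from the reconstructed h(z) with its propagated 1-sigma error,
    sigma_Om = 2 h sigma_h / ((1+z)^3 - 1), i.e. Eq.(20) applied to Eq.(8)."""
    z, h, sigma_h = (np.asarray(a, float) for a in (z, h, sigma_h))
    denom = (1.0 + z)**3 - 1.0
    om = (h**2 - 1.0) / denom
    return om, 2.0 * h * sigma_h / denom

# Single reconstructed point: z = 1, h = 1.8 +/- 0.1.
om, s = om_with_error(1.0, 1.8, 0.1)
print(om, s)  # ~0.32, ~0.051
```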

DISCUSSION AND CONCLUSIONS
We developed the Loess-Simex factory to achieve two interesting goals. First, we performed the reconstruction of the normalized Hubble parameter h(z); the results are represented by red dots (red line) in Figures 2, 3 and 4. As well, in the upper plots of Figure 2 we illustrate the original H(z) data set, represented by blue dots with their respective error bars, together with its nonparametric reconstruction (red dots/line). It is interesting to remark on the comparison between these reconstructed points and the ΛCDM model, represented by a dotted green line.
Our second goal was the reconstruction of the Om diagnostic and of the Om^(2), Ok diagnostics, which was done by considering two options: (I) using the already reconstructed h values (top of Figure 3), and (II) performing the reconstruction of the diagnostic directly (bottom of Figure 3).
Let us discuss the results for each case. For the C-C sample, the nonparametric reconstruction has the same trend as the one reported in [Montiel et al. (2014)]. However, in our case we worked with the normalized Hubble parameter h, whose behaviour is analogous to the previous case, as expected. The direct reconstruction of the Om diagnostic appears to be in good agreement with ΛCDM at z > 1. It is interesting to notice that in this case the confidence regions look smaller than when we use the reconstructed h data.
For the BAO sample, unlike the other proposals mentioned above, our results show a ΛCDM model that lies within our Om confidence-contour reconstructions at 2σ, even when performing the reconstruction with the few values in this data set. As in the previous sample, the direct reconstruction of this diagnostic gives a concordance model within 1σ to 2σ. The reconstructions of Om^(2) and Ok require the reconstruction of h′, and the analysis shows large uncertainties; even so, the reconstructions at high redshift show a trend that possibly converges to ΛCDM at z > 0.7 (see Figure 4, middle row).
For the C-C+BAO sample, we observe that the reconstruction is very similar to the C-C case, clearly due to the amount of data in the first sample in comparison to the second. The concentration of data points at z < 0.5 is related to the effects of evaluating the reconstructed data in Eq.(8). In the Om^(2) analysis we observed a pull of the reconstructed curve upwards at z < 0.3, which probably shows the important relationship between the derivatives of the data and the model itself. The direct reconstruction at zeroth order approaches ΛCDM up to z = 1, but since it is not constant over the entire redshift range we need to consider a dynamical test.
In order to find the DE model in best agreement with the reconstructions, we performed a probability comparison using the reconstructed Om^(2) values. The deviation of Om^(2) between this data and each DE model corresponds to 8% for the phantom model and 3% for static domain walls, making the latter the model in better agreement with the reconstructed data.
Forthcoming studies along the lines of this analysis promise to greatly improve with the use of high-quality observations, making this nonparametric Om diagnostic more accurate and a very useful tool for testing alternative DE parameterizations and modified gravity proposals.

[Figure caption: Top: Om^(2) reconstructed at low redshifts (z < 0.5). Bottom: Probability comparison between DE models. The green bars represent the DE models (phantom and static domain walls) and the red bars represent the amount of reconstructed data. A 62% fraction of the reconstructed data lies in Om^(2) < 0; in this range, the Om^(2) deviation between the data and each DE model corresponds to 8% for the phantom model and 3% for static domain walls. These probabilities support the result obtained above.]
• For w = w0,

D′⁻²(z) = Ωm(1+z)³ + Ωk(1+z)² + (1 − Ωm − Ωk)(1+z)^{3(1+w0)}.   (A7)

From Eq.(A7), with Ωk = 0, we obtain the first generalized equation for the Om diagnostic, described by Eq.(7). When we consider a non-flat universe, Ωk arises and we need a system of two equations: the first one given by Eq.(10), and the second the EoS obtained by rearranging Eq.(3). After straightforward calculations, and redefining Ωm ≡ Om^(2) and Ωk ≡ Ok, we obtain the generalized equations for a non-flat universe and a constant dark energy EoS, described by Eqs.(11)-(12).