1 Introduction

Recent observations suggest that the universe can be modelled by the six-parameter \(\Lambda \)CDM model, where structure formation is explained by cold dark matter (CDM) physics and the recent acceleration of the universe is explained by vacuum energy \(\Lambda \), the candidate for dark energy. There are, however, possible extensions to standard \(\Lambda \)CDM. The cosmic neutrino background and inflationary gravitational waves (IGWs/tensors) are both theoretically well motivated. Among them, the cosmic neutrino background (\(\mathrm C\nu \mathrm B\)) is indirectly confirmed by the CMB measurements of the Planck satellite: the current preferred value of the effective number of relativistic species at recombination, \(N_{\text {eff}} = 2.92^{+0.36}_{-0.37}\) (95%, TT,TE,EE+lowE) [1] in a minimal \(\Lambda \text {CDM}+ N_{\text {eff}}\) model, strongly excludes \(N_{\text {eff}} = 0\). The theoretically predicted value of \(N_{\text {eff}} = 3.045\) [2], obtained by considering three active neutrinos as the only relativistic species apart from photons at recombination, is fully compatible with this bound, implying consistency with \(\Lambda \)CDM. In the standard model of particle physics, neutrinos are massless, but terrestrial neutrino oscillation experiments have firmly established that neutrinos have small masses. While the strongest upper bounds on the sum of the masses of the three active neutrino mass eigenstates (\(\sum m_{\nu }\)) come from cosmology, cosmology is still unable to provide any lower bound, so the standard model assumption of \(\sum m_{\nu } = 0\) remains consistent with current data. For instance, the Planck 2018 results [1] provide a bound of \(\sum m_{\nu } < 0.12\) eV (95% CL) in the minimal \(\Lambda \text {CDM}+\sum m_{\nu }\) model for the TT,TE,EE+lowE+lensing+BAO data combination.
There are numerous other analyses with different datasets which provide bounds of \(\sum m_{\nu } \lesssim 0.15\) eV (95% CL) [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]; i.e., cosmological data is becoming more and more effective in constraining neutrino masses. See also [22, 23] for detailed reviews on the current status and future prospects of constraining neutrino masses and determining their ordering from cosmology and other data.

Again, while the \(\mathrm C\nu \mathrm B\) has been indirectly detected, the existence of IGWs is yet to be confirmed. The main probe of IGWs is the CMB B-mode polarization, and the corresponding important parameter is the tensor-to-scalar ratio r. Currently available observations can only put an upper bound on it: \(r_{0.05} < 0.06\) (95% CL; at a pivot scale of \(k_* = 0.05~\text {h}/\text {Mpc}\)) [24], implying that \(r=0\) is consistent with current data. While \(\Lambda \)CDM has its successes, there are also parameter tensions between CMB and non-CMB data within the \(\Lambda \)CDM model. One of the most important is that high redshift (CMB) and low redshift (local universe) measurements give different values of the Hubble constant. The Planck 2018 results [1] provide \(H_0=67.27\pm 0.60\) km/s/Mpc (68% CL) for TT,TE,EE + lowE in \(\Lambda \)CDM (with \(\sum m_{\nu }\) fixed at 0.06 eV), whereas a recent direct measurement gives \(H_0=73.24\pm 1.74\) km/s/Mpc (68% CL, hereafter HST) [25]. There is roughly a \(3\sigma \) inconsistency between these datasets. Recent strong lensing observations from the H0LiCOW program [26] provide \(H_0=71.9^{+2.4}_{-3.0}\) km/s/Mpc (68% CL) and partially confirm the tension. CMB data also show \(\sim 2\sigma \) tensions in the measurements of \(\Omega _{m}\) and \(\sigma _8\) with X-ray galaxy cluster measurements [27] and cosmic shear surveys like CFHTLenS [28] and KiDS-450 [29]. For instance, the KiDS-450 survey measures a combined quantity \(S_8 \equiv \sigma _8 \sqrt{\Omega _m/0.3} = 0.745 \pm 0.039\) (68% CL), which is in more than \(2\sigma \) tension with Planck 2018, which prefers a much higher value of \(S_8 = 0.834 \pm 0.016\) (68% CL; TT,TE,EE + lowE).
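As a quick illustration of the size of this tension, one can compute \(S_8\) and the naive Gaussian discrepancy between the two quoted measurements. The Python sketch below uses only the numbers quoted above; the simple quadrature formula assumes the two measurements are independent and Gaussian, ignoring any correlation between the experiments.

```python
import math

def s8(sigma8, omega_m):
    """Combined amplitude parameter S8 = sigma8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

def tension_sigma(x1, err1, x2, err2):
    """Naive Gaussian tension between two independent measurements."""
    return abs(x1 - x2) / math.sqrt(err1 ** 2 + err2 ** 2)

# KiDS-450: S8 = 0.745 +/- 0.039; Planck 2018 (TT,TE,EE+lowE): S8 = 0.834 +/- 0.016
t = tension_sigma(0.745, 0.039, 0.834, 0.016)
print(f"S8 tension: {t:.1f} sigma")  # just over 2 sigma
```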

Apart from inconsistencies between high and low redshift datasets, there are several internal inconsistencies in the Planck data itself. Parameter estimates in \(\Lambda \)CDM differ when the small scale (\(l\ge 1000\)) and large or intermediate scale (\(l<1000\)) temperature data are considered separately [30]. This is especially true for the measured value of \(H_0\), which is much lower when obtained from the \(l\ge 1000\) data than from the \(l<1000\) data. Another puzzling inconsistency in \(\Lambda \)CDM with Planck data is that the latest measurement of the lensing amplitude by Planck 2018, \(A_{\text {lens}}=1.180\pm 0.065\) (68% CL) in a \(\Lambda \text {CDM} + A_{\text {lens}}\) model [1], is higher than the \(\Lambda \)CDM prediction of \(A_{\text {lens}}=1\) at the 2.8\(\sigma \) level. See also [31, 32] on the \(A_{\text {lens}}\) problem.

A possible explanation for these tensions is systematics in the observations. But it is also possible that we need physics beyond \(\Lambda \)CDM and the standard model of particle physics. These inconsistencies between the \(\Lambda \text {CDM}\) model and different datasets have motivated several studies of cosmological scenarios in extended parameter spaces [10, 21, 33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61]. Recent studies have analyzed models with as many as twelve parameters [33,34,35]. The motivation behind studying such a large parameter space is that \(\Lambda \text {CDM}\) currently seems to be an over-simplification. Indeed, there is no reason to fix \(\sum m_{\nu }\) at 0.06 eV, since this is only approximately the minimum sum of masses required for the normal hierarchy of neutrinos, and the true value might differ. Massive neutrinos produce distinct effects on CMB and large scale structure data, which have been widely studied [62,63,64,65,66,67]. Again, the discrepancy between Planck and HST might be explained by a dark radiation species contributing to \(N_{\text {eff}}\) [25]. Similarly, the existence of tensor perturbations is theoretically well motivated, and there seems no reason not to include them in an analysis.

Apart from massive neutrinos and tensors, another extension to \(\Lambda \)CDM which has recently received a lot of attention is dynamical dark energy, where the dark energy (DE) equation of state (EoS) is not fixed at \(w=-1\) or some other constant but varies with time [51]. Dark energy is one of the biggest puzzles not only in cosmology but in the whole of physics. In this era of precision cosmology, currently available datasets can provide much tighter bounds on the DE equation of state than was previously possible. It therefore seems simplistic and unnecessary to assume that dark energy is just a cosmological constant, especially since, from the quantum field theoretic point of view, the cosmological constant has been very difficult to explain [68]. Hence, in this work, with massive neutrinos, tensors, and dynamical dark energy included, we consider a greatly extended cosmology compared to the standard one.

However, we do not include the full dynamical dark energy range. The \(w=-1\) line divides the dynamics of dark energy into two distinct regions: phantom (\(w<-1\)) and non-phantom (\(w\ge -1\)). In this work, we discard the phantom region, as first done in [54] in the context of cosmological neutrino mass constraints, and specifically consider a non-phantom scenario, since in a universe with phantom dark energy (\(w<-1\)) the dark energy density reaches infinity in finite time, leading to the dissociation of all bound states, i.e., the so-called Big Rip, and seems unphysical in that sense [69, 70]. From a field theory perspective, dark energy models with a single scalar field cannot cross the \(w=-1\) line (the phantom barrier), and more general models that allow crossing require extra degrees of freedom to ensure gravitational stability [71]. Field theories that accommodate phantom dark energy are usually plagued by one or more problems such as Lorentz violation, an unstable vacuum, superluminal modes, ghosts, non-locality, or instability to quantum corrections. On the other hand, it is possible to construct theories free of such pathologies by invoking effects like photon-axion conversion or modified gravity, which lead to an apparent \(w<-1\) (see [72] for a brief review), or a vacuum phase transition [73], which produces a phantom behaviour of the DE EoS. Nonetheless, there are single scalar field theories, like quintessence [74,75,76], which are relatively well motivated theoretically and are non-phantom in nature. So, in this work we limit ourselves to such theories. Our main motivation has been to study how effective the currently available datasets are in constraining the cosmological parameters (especially the sum of neutrino masses) in a non-phantom dynamical dark energy scenario instead of a cosmological constant, with minimal assumptions about the other parameters coming from the massive neutrino and tensor sectors.

In this work we have first considered a 12 parameter extended scenario with the 6 usual \(\Lambda \text {CDM}\) parameters, two dynamical dark energy parameters (the \(w_0\)–\(w_a\) approach, CPL parametrization) with \(w(z)\ge -1\), two neutrino parameters (\(N_{\text {eff}}\) and \(\sum m_{\nu }\)), and two inflationary parameters (\(r_{0.05}\) and the running of the spectral index, \(n_{run} \equiv dn_s/d\ln k\)). We performed a Bayesian analysis to constrain parameters using different combinations of the latest available datasets: (1) cosmic microwave background temperature and polarization data from Planck 2015; (2) the latest data released by the BICEP2/Keck Collaboration for the BB mode of the CMB spectrum (BK14); (3) baryon acoustic oscillation measurements from SDSS III BOSS DR12, MGS, and 6dFGS; (4) supernovae Type Ia luminosity distance measurements from the newly released Pantheon sample; (5) Planck 2015 lensing data; and (6) the HST Gaussian prior (\(H_0=73.24\pm 1.74\) km/s/Mpc (68% CL)) on the Hubble constant. Next we turned off the tensor perturbations (i.e., removed \(r_{0.05}\)) and constrained this 11 parameter scenario with the same datasets except BK14. Finally we added a new parameter, \(A_{\text {lens}}\), and again constrained this 12 parameter extended space with the mentioned datasets. We emphasize that this is the first time the non-phantom dark energy scenario has been evaluated in a 12 parameter extended space. Our main focus in this paper is on the sum of neutrino masses; however, we provide the constraints on all the varying parameters. We would also like to emphasize that we take the datasets at face value, i.e., any discrepancy or tension between datasets in our model is assumed to have a physical origin and not to be due to unknown systematics in the experiments.
It is also worth pointing out that the best bounds on the sum of neutrino masses presented here are strong: comparable to or better than the bounds provided by the recently released Planck 2018 results [1] in the \(\Lambda \text {CDM}+\sum m_{\nu }\) model. Hence our results remain very much relevant even though we have used the Planck 2015 data.

We should also mention three recent papers which helped build the motivation for this work, and the differences between our analyses and theirs. In [35], the authors constrained the dark energy dynamics in an extended 12 parameter model, but they included both the phantom and non-phantom sectors of dark energy and did not consider any tensor modes. In our analysis, we also use 12 parameters, but we have included tensor perturbations, used newer datasets, and, more importantly, discarded the phantom DE sector as explained above. This greatly affects the bounds on \(\sum m_{\nu }\): they become far stronger than in the case where phantom DE is included. Bounds on other cosmological parameters also improve. The fact that the neutrino mass bounds from cosmology improve greatly in a non-phantom dark energy scenario, and are stronger even than in the minimal \(\Lambda \text {CDM}+\sum m_{\nu }\) case, was shown by two recent papers [21, 54]. However, the analyses in both of these papers were done in smaller parameter spaces, and neither paper has \(N_{\text {eff}}\) and \(A_{\text {lens}}\) as free parameters as we do. Consequently, they have not touched on issues like the possibility of extra radiation species and the \(A_{\text {lens}}\) problem. Reference [54] also uses older datasets. In this paper, we have, for the first time, shown that neutrino mass bounds can indeed be stronger than in the minimal \(\Lambda \text {CDM} + \sum m_{\nu }\) model even in a 12 parameter extended scenario if one considers non-phantom dark energy, even though one would expect the bounds to relax in such a large extended space. We have also shown that it is possible to effectively constrain cosmological parameters with reasonable 1-\(\sigma \) ranges with current cosmological data in a 12 parameter extended scenario with non-phantom dark energy.

This paper is arranged as follows: in Sect. 2 we describe the cosmological models used in this paper and the prior ranges of parameters used, along with a brief description of the CPL parametrization. In Sect. 3 we briefly describe the datasets used in this work. In Sect. 4 we present our analysis results. In Sect. 5, we further discuss how the neutrino mass bounds will change in the three models with new values of \(\tau \) and \(A_{\text {lens}}\) obtained by the new Planck 2018 collaboration [1]. We provide a discussion and summary in Sect. 6. The main results are in Tables 2, 4, and 5.

2 Models

In this work we have considered three different cosmological scenarios to obtain bounds on the cosmological parameters. Below we list the vector of parameters to vary in each of these cosmological scenarios.

For NPDDE11+r model with 12 parameters:

$$\begin{aligned} \theta \equiv \left[ \omega _c, ~\omega _b, ~\Theta _s, ~\tau , ~n_s, ~\ln [10^{10} A_s], ~w_0, ~w_a, ~N_{\text {eff}}, ~\sum m_{\nu }, ~r_{0.05}, ~n_{run} \right] . \end{aligned}$$
(2.1)

For NPDDE11 model with 11 parameters:

$$\begin{aligned} \theta \equiv \left[ \omega _c, ~\omega _b, ~\Theta _s, ~\tau , ~n_s, ~\ln [10^{10} A_s], ~w_0, ~w_a, ~N_{\text {eff}}, ~\sum m_{\nu }, ~n_{run} \right] . \end{aligned}$$
(2.2)

For NPDDE11+\(A_{\text {lens}}\) model with 12 parameters:

$$\begin{aligned} \theta \equiv \left[ \omega _c, ~\omega _b, ~\Theta _s, ~\tau , ~n_s, ~\ln [10^{10} A_s], ~w_0, ~w_a, ~N_{\text {eff}}, ~\sum m_{\nu }, ~n_{run}, ~A_{\text {lens}} \right] . \end{aligned}$$
(2.3)

The first model, NPDDE11+r, comprises six additional parameters on top of the \(\Lambda \)CDM model. The six parameters of \(\Lambda \)CDM are: the present day cold dark matter energy density \(\omega _c \equiv \Omega _c h^2\); the present day baryon energy density \(\omega _b \equiv \Omega _b h^2\); the reionization optical depth \(\tau \); the spectral tilt and amplitude of the primordial scalar power spectrum, \(n_s\) and \(A_s\) (evaluated at the pivot scale \(k_*=0.05~\text {h}/\text {Mpc}\)); and \(\Theta _s\), the ratio between the sound horizon and the angular diameter distance at decoupling. To these we add the following parameters: two dark energy parameters \(w_0\) and \(w_a\); the effective number of relativistic species at recombination \(N_{\text {eff}}\); the total neutrino mass \(\sum m_{\nu }\); the tensor-to-scalar ratio \(r_{0.05}\) (evaluated at the pivot scale \(k_*=0.05~\text {h}/\text {Mpc}\)); and the running of the spectral index of the primordial power spectrum \(n_{run}\) (\(\equiv dn_s/d\ln k\)). In this model, the gravitational lensing amplitude of the CMB angular spectra, \(A_{\text {lens}}\), is fixed at the \(\Lambda \)CDM predicted value of unity.

We also consider two other scenarios. In the NPDDE11 model, we do not include tensor perturbations and constrain the parameter space with scalar perturbations only. In the NPDDE11+\(A_{\text {lens}}\) model we also allow the \(A_{\text {lens}}\) parameter to vary. This is because the cause of the \(A_{\text {lens}}\) anomaly is unknown, and it is therefore important to examine the effect of varying \(A_{\text {lens}}\) on the constraints on the rest of the parameter space.

CPL Parametrization For the dark energy dynamics we use the well-known Chevallier–Polarski–Linder (CPL) parametrization [77, 78], which expresses a varying equation of state in terms of the redshift z and two parameters \(w_0\) and \(w_a\):

$$\begin{aligned} w (z) \equiv w_0 + w_a (1 - a) = w_0 + w_a \frac{z}{1+z}. \end{aligned}$$
(2.4)

This is the Taylor expansion of the equation of state in powers of the scale factor \(a = 1/(1+z)\), truncated at the first two terms. Here \(w(z=0) = w_0\) is the dark energy EoS today, whereas \(w(z\rightarrow \infty ) = w_0+w_a\) is the dark energy EoS in the early universe; w(z) is a monotonic function between these two limits. Therefore, to restrict to the NPDDE region of the parameter space, i.e. \(w(z) \ge -1\) at all redshifts, it is enough to apply the hard priors:

$$\begin{aligned} w_0 \ge -1;\quad w_0+w_a\ge -1. \end{aligned}$$
(2.5)
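The CPL form and the hard priors of Eq. 2.5 are simple to state in code. Below is a minimal Python sketch (the function names are ours, purely illustrative) of the equation of state and the non-phantom admissibility check; since \(w(z)\) is monotonic in z, testing the two endpoints suffices.

```python
def w_cpl(z, w0, wa):
    """CPL equation of state: w(z) = w0 + wa * z / (1 + z)."""
    return w0 + wa * z / (1.0 + z)

def is_non_phantom(w0, wa):
    """Hard priors of Eq. (2.5): w(z) >= -1 for all z >= 0.
    w(z) is monotonic in z, so checking z = 0 and z -> infinity suffices."""
    return w0 >= -1.0 and (w0 + wa) >= -1.0

print(is_non_phantom(-0.9, 0.1))   # True: non-phantom at all redshifts
print(is_non_phantom(-0.9, -0.5))  # False: w -> -1.4 at early times
print(w_cpl(1.0, -0.9, 0.1))       # -0.85, halfway between w0 and w0 + wa
```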

For the cosmological parameters listed in Eqs. 2.1–2.3, we have assumed the flat priors listed in Table 1, along with the hard priors given in Eq. 2.5. We obtain the constraints using the Markov Chain Monte Carlo (MCMC) sampler CosmoMC [79], which uses CAMB [80] as the Boltzmann code and the Gelman and Rubin statistic [81] to estimate the convergence of chains. All our chains reached the convergence criterion of \(R-1<0.01\).
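For intuition, the Gelman–Rubin diagnostic for a single parameter can be sketched in a few lines of Python. This is a simplified, single-parameter version; CosmoMC's actual \(R-1\) criterion is computed from the worst-constrained direction in parameter space, which differs in detail.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R for one parameter.
    `chains` is an (m, n) array: m chains of n samples each."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

# Chains sampling the same distribution give R - 1 close to 0 ...
rng = np.random.default_rng(42)
good = rng.normal(0.0, 1.0, size=(4, 50000))
print(gelman_rubin(good) - 1.0)

# ... while mutually offset (unconverged) chains do not
bad = good + np.arange(4)[:, None]
print(gelman_rubin(bad) - 1.0)
```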

Table 1 Flat priors on the main cosmological parameters constrained in this paper

3 Datasets

Below, we provide a description of the datasets used in our analyses. We have used different combinations of these datasets.

Cosmic Microwave Background: Planck 2015 We have used measurements of the CMB temperature, polarization, and temperature-polarization cross-correlation spectra from the Planck 2015 data release [82, 83]. We use a combination of the high-l (30 \(\le \) l \(\le \) 2508) and low-l (2 \(\le \) l \(\le \) 29) TT likelihood. Along with that, we also include the high-l (30 \(\le \) l \(\le \) 1996) EE and TE likelihood and the low-l (2 \(\le \) l \(\le \) 29) polarization likelihood. We refer to this whole dataset as Planck.

Baryon Acoustic Oscillations (BAO) Measurements We use measurements of the BAO signal obtained from different galaxy surveys. We include the SDSS-III BOSS DR12 consensus sample [84], which includes the LOWZ and CMASS galaxy samples at effective redshifts \(z_{\text {eff}} = 0.38\), 0.51, and 0.61. Along with it, we include the DR7 MGS at \(z_{\text {eff}} = 0.15\) [85] and the 6dFGS survey at \(z_{\text {eff}} = 0.106\) [86]. We denote this full combination as BAO.

Luminosity Distance Measurements from Type Ia Supernovae (SNe Ia) We also use supernovae Type Ia (SNe Ia) luminosity distance measurements from the Pantheon sample [87]. It comprises data from 279 Pan-STARRS1 (PS1) Medium Deep Survey SNe Ia (\(0.03< z < 0.68\)) and distance estimates of SNe Ia from SDSS, SNLS, and various low-z and HST samples. The combined sample comprises data from a total of 1048 SNe Ia in the redshift range \(0.01< z < 2.3\) and is the largest to date. We refer to this data as PAN from now on. This dataset supersedes the Joint Light-curve Analysis (JLA) sample, which comprises information from 740 spectroscopically confirmed SNe Ia [88].

BB Mode Spectrum of CMB We use the latest data available from BICEP2/Keck collaboration for the B mode polarization of CMB, which includes all data (range \(20< l < 330\)) taken up to and including 2014 [89]. This dataset is denoted as BK14.

Hubble Parameter Measurements We use a Gaussian prior of \(73.24 \pm 1.74\) km/s/Mpc (68% CL) on \(H_0\). This result is a recent 2.4% determination of the local value of the Hubble parameter by [25] which combines the anchor NGC 4258, Milky Way and LMC Cepheids. We denote this prior as HST.

While we use HST in most cases, we also provide some results with a prior with a lower value of \(H_0 = 71.6\pm 2.7\) km/s/Mpc, based on the determination of the Hubble constant by the H0LiCOW programme [26]. We call this prior H071p6. This allows us to compare what happens when we use an \(H_0\) prior that is in less tension with Planck than HST.

Planck Lensing Measurements We also use the lensing potential measurements via reconstruction through the four point functions of Planck 2015 measurements of CMB [83]. We simply refer to this data as lensing.

4 Results

We have split the results into three subsections for the three different models we have studied. The descriptions of the models and datasets are given in Sects. 2 and 3 respectively. We present the results in the following order: first the NPDDE11+r model, then the NPDDE11 model, and lastly the NPDDE11+\(A_{\text {lens}}\) model. All marginalized limits quoted in the text or tables are at 68% CL, whereas upper limits are quoted at 95% CL.

4.1 NPDDE11+r model

Bounds on the NPDDE11+r model parameters are presented in Table 2, while the bounds on the \(\Lambda \text {CDM}\) model parameters are presented in Table 3. We do not include the bounds from CMB-only data, as they are not strong enough in the NPDDE11+r model, a finding that corroborates a recent study [35] which varied the dark energy EoS in both the phantom and non-phantom regions. However, adding either BAO or HST to the CMB data provides strong bounds on the cosmological parameters. Comparing with the bounds on the parameters in the \(\Lambda \text {CDM}\) model, we see that the 68% CL spreads of the relevant parameters have increased to different degrees for different parameters. This is expected, given that the number of parameters has been doubled. Overall, the six \(\Lambda \text {CDM}\) parameters are estimated in the NPDDE11+r model with reasonable spreads, showing that it is possible to constrain cosmology effectively in a large parameter space with current datasets.

We also find tight bounds on \(\sum m_{\nu }\) in this model. The 1-D posteriors for \(\sum m_{\nu }\) and \(N_{{\text {eff}}}\) are given in Fig. 1. Our most aggressive bound in this paper is found in this model with the Planck+BAO dataset: \(\sum m_{\nu }<\) 0.123 eV (95% CL), which is very close to the minimum mass of \(\sum m_{\nu } \simeq \) 0.1 eV required for the inverted hierarchy of neutrinos (for the normal hierarchy, the minimum \(\sum m_{\nu }\) required is around 0.06 eV) [90]. Although we are in such an extended parameter space, this bound is stronger than the bound of \(\sum m_{\nu }<\) 0.158 eV (95% CL) obtained in \(\Lambda \text {CDM}+\sum m_{\nu }\) with Planck+BAO [21]. Without the BAO data, Planck and BK14 together provide a bound of \(\sum m_{\nu } < 0.414\) eV (95% CL), whereas Planck alone in the same model gives a bound of \(\sum m_{\nu } < 0.509\) eV (95% CL), which is incidentally very close to the bound of \(\sum m_{\nu } < 0.49\) eV (95% CL) reported by the Planck collaboration [83] using the same data in the minimal \(\Lambda \text {CDM}+\sum m_{\nu }\) model. Recent studies [21, 54] in smaller parameter spaces have shown that models comprising NPDDE provide stronger bounds on \(\sum m_{\nu }\) than \(\Lambda \text {CDM}+\sum m_{\nu }\), because of a degeneracy between the dark energy EoS w and \(\sum m_{\nu }\) [91] which leads the phantom region of the dark energy parameter space to prefer larger masses and the non-phantom region to prefer smaller masses. However, cosmological datasets usually prefer the phantom region when the dark energy EoS is allowed to vary in both the phantom and non-phantom regions, which usually leads to weaker bounds on \(\sum m_{\nu }\). This work shows that even as a 12 parameter model, NPDDE11+r is very efficient in constraining \(\sum m_{\nu }\), unlike the 12 parameter model of [35], where the bounds on the neutrino mass sum loosen considerably.
Contrary to what happens in lower dimensional parameter spaces, the HST prior does not lead to stronger bounds on \(\sum m_{\nu }\), as the magnitude of correlation between \(H_0\) and \(\sum m_{\nu }\) is very small in this model.
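The hierarchy thresholds quoted above follow directly from the neutrino oscillation mass-squared splittings. The sketch below uses representative splitting values (approximate, illustrative, and not taken from this paper) to reproduce the roughly 0.06 eV and 0.1 eV minima for the normal and inverted hierarchies.

```python
import math

# Representative mass-squared splittings from oscillation experiments (eV^2);
# approximate illustrative values, not from this paper.
dm21_sq = 7.4e-5  # "solar" splitting
dm31_sq = 2.5e-3  # "atmospheric" splitting (magnitude)

# Normal hierarchy: lightest state massless (m1 = 0)
sum_nh = math.sqrt(dm21_sq) + math.sqrt(dm31_sq)

# Inverted hierarchy: lightest state massless (m3 = 0)
sum_ih = math.sqrt(dm31_sq) + math.sqrt(dm31_sq + dm21_sq)

print(f"minimum sum, normal hierarchy:   {sum_nh:.3f} eV")  # ~0.06 eV
print(f"minimum sum, inverted hierarchy: {sum_ih:.3f} eV")  # ~0.10 eV
```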

Table 2 Bounds on cosmological parameters in the NPDDE11+r model. Marginalized limits are given at 68% CL whereas upper limits are given at 95% CL. Note that \(H_0\) and \(\sigma _8\) are derived parameters
Table 3 Bounds on cosmological parameters in the \(\Lambda \text {CDM}\) model. Marginalized limits are given at 68% CL whereas upper limits are given at 95% CL. Note that \(H_0\) and \(\sigma _8\) are derived parameters

This small correlation can be explained via the mutual degeneracies between \(H_0\), \(\sum m_{\nu }\), and the DE EoS w. When w is kept constant in a flat \(\Lambda \text {CDM}+ \sum m_{\nu }\) universe, \(H_0\) and \(\sum m_{\nu }\) are strongly anti-correlated, in order to keep the distance to the last scattering surface, \(\chi (z_{dec})\), unchanged. Here \(z_{dec}\) is the redshift of photon decoupling. \(\chi (z_{dec})\) is sensitive to changes in the values of \(H_0\) and \(\sum m_{\nu }\), and, as shown in [21], any change to \(\chi (z_{dec})\) due to an increase in \(\sum m_{\nu }\) can be compensated by decreasing \(H_0\). This causes the anti-correlation. On the other hand, \(H_0\) and w are also degenerate, as both control the late time expansion rate of the universe. Thus, when we consider a varying DE EoS, a change in \(H_0\) can now be compensated by a change in w instead of \(\sum m_{\nu }\). This leads to the decreased degeneracy between \(H_0\) and \(\sum m_{\nu }\) in our NPDDE models.
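A toy numerical version of this argument, under heavy simplifications (flat universe, radiation neglected, w fixed at -1, and neutrinos treated as an extra matter-like component with \(\Omega_\nu h^2 = \sum m_\nu / 93.14\), a standard late-time approximation), shows the two compensating directions. It illustrates only the sign of the degeneracy, not precise CMB distances; all parameter values below are illustrative.

```python
import math

def comoving_distance(H0, omega_m, sum_mnu, z_dec=1090.0, n=20000):
    """Comoving distance (Mpc) to photon decoupling in a toy flat universe.
    omega_m excludes neutrinos; sum_mnu is in eV; radiation is ignored."""
    h = H0 / 100.0
    omega_nu = sum_mnu / 93.14 / h ** 2  # late-time neutrino density fraction
    omega_l = 1.0 - omega_m - omega_nu   # flatness fixes the dark energy part
    c, total, dz = 299792.458, 0.0, z_dec / n
    for i in range(n):  # midpoint rule for the integral of dz / E(z)
        z = (i + 0.5) * dz
        E = math.sqrt((omega_m + omega_nu) * (1 + z) ** 3 + omega_l)
        total += dz / E
    return c / H0 * total

d_ref = comoving_distance(67.3, 0.31, 0.06)
d_heavy = comoving_distance(67.3, 0.31, 0.30)  # raising sum_mnu shrinks chi
d_comp = comoving_distance(66.0, 0.31, 0.30)   # lowering H0 grows it back
print(d_heavy < d_ref < d_comp)  # the two shifts pull chi in opposite directions
```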

However, we found that a strong positive correlation is still present with \(N_{{\text {eff}}}\), which leads to a large increase in the value of \(N_{\text {eff}}\) when the HST prior is used (the correlations can be visualized in Fig. 2). Indeed, while Planck+BK14+BAO prefers \(H_0 = 66.64^{+1.38}_{-1.37}\) km/s/Mpc (68% CL) and \(N_{\text {eff}} = 3.082_{-0.211}^{+0.209}\) (68% CL), the inclusion of the HST prior in this data combination leads to higher values of both: \(H_0 =69.13_{-1.08}^{+1.09}\) km/s/Mpc (68% CL) and \(N_{\text {eff}} = 3.392_{-0.186}^{+0.188}\) (68% CL). The standard value of \(N_{\text {eff}} = 3.045\) is excluded at 68% CL, i.e., the data favour a dark radiation component, but only very mildly, since \(N_{\text {eff}} = 3.045\) is included at 95% CL. Thus this exclusion of \(N_{\text {eff}} = 3.045\) at 68% CL should not be considered of great significance. In this model, this is a general feature of all the dataset combinations that include the HST prior, solely due to the large tension between Planck and HST. The HST prior also prefers higher values of \(\sigma _8\). This model does not alleviate the conflict between Planck and CFHTLenS regarding the value of \(\sigma _8\); a visual depiction can be found in Fig. 3 in the \(\sigma _8\)–\(\Omega _m\) plane. Inclusion of the lensing data leads to a worsening of the mass bounds, whereas the bounds on \(N_{{\text {eff}}}\) are almost unaffected. This dataset does, however, lower the preferred \(\sigma _8\) values.

The use of the H071p6 prior, which has a lower central value of \(H_0\) than HST, leads to lower values of \(N_{\text {eff}}\), due to the smaller tension between Planck and H071p6. In particular, with Planck + BK14 + BAO + H071p6, we get \(N_{\text {eff}} = 3.202^{+0.200}_{-0.202}\) (68% CL). Thus \(N_{\text {eff}} = 3.045\) is no longer excluded at 68% CL in this case.

Fig. 1 Comparison of 1-D marginalized posterior distributions for \(\sum m_{\nu }\) (eV) and \(N_{{\text {eff}}}\) for various data combinations in NPDDE11+r

Fig. 2 1\(\sigma \) and 2\(\sigma \) marginalized contours for \(H_0\) (km/s/Mpc) vs. \(\sum m_{\nu }\) (eV) and \(H_0\) (km/s/Mpc) vs. \(N_{{\text {eff}}}\) for Planck+BK14+HST in the NPDDE11+r model, showing only a small correlation between \(H_0\) and \(\sum m_{\nu }\) but a strong positive correlation between \(H_0\) and \(N_{{\text {eff}}}\)

The SNe Ia luminosity distance measurements provide information about the evolution of the luminosity distance as a function of redshift (\(0.01<z<2.3\) for the Pantheon sample). This can be used to measure the evolution of the scale factor [92] and is helpful in constraining the dark energy EoS. We find that adding the PAN data does constrain the dark energy parameters more tightly. For Planck+BK14+BAO, we have a bound of \(w_0 < -0.859\) (95% CL), which shrinks to \(w_0 < -0.933\) (95% CL) with the addition of PAN. Similarly, Planck+BK14+BAO produces \(w_a = 0.013^{+0.065}_{-0.077}\) (68% CL), whereas Planck+BK14+BAO+PAN leads to \(w_a = 0.033^{+0.036}_{-0.063}\) (68% CL); the 68% spreads of \(w_a\) have shrunk. This is also depicted in Fig. 4. The HST prior has a similar but weaker effect: with Planck+BK14+BAO+HST we have \(w_0 < -0.908\) (95% CL) and \(w_a = 0.028^{+0.046}_{-0.065}\) (68% CL). In all cases we find that the cosmology is compatible with a cosmological constant (i.e., \(w_0 = -1\), \(w_a = 0\)).

As far as the tensor-to-scalar ratio is concerned, we find that if we run the chains without the BK14 data, we get a bound of \(r_{0.05}<\) 0.155 (95% CL) with Planck+BAO, which is weaker than the bound of \(r_{0.05}<\) 0.12 (95% CL) set by the Planck collaboration [83]. However, inclusion of the BK14 data leads to a bound of \(r_{0.05}<\) 0.075 (95% CL), close to the \(r_{0.05} < 0.07\) (95% CL) limit set by the BICEP2/Keck collaboration [89]. The bound on \(r_{0.05}\) remains almost unchanged across all the datasets as long as the BK14 data is included.

Fig. 3 1\(\sigma \) and 2\(\sigma \) marginalized contours in the \(\sigma _8\)–\(\Omega _m\) plane showing that the NPDDE11+r model is ineffective in reducing the tension between CFHTLenS and Planck 2015

Fig. 4 Comparison of 1-D marginalized posterior distributions for \(w_0\) and \(w_a\) for different data combinations in NPDDE11+r

Table 4 Bounds on cosmological parameters in the NPDDE11 model. Marginalized limits are given at 68% CL whereas upper limits are given at 95% CL. Note that \(H_0\) and \(\sigma _8\) are derived parameters

4.2 NPDDE11 model

In this section we consider the NPDDE11 model, where we turn off the tensor perturbations and also do not include the BK14 data. Comparing Table 4 with Table 2 shows that this does not affect the bounds much, which verifies the stability of the results in a smaller parameter space.

The 1-D posteriors for \(\sum m_{\nu }\) and \(N_{\text {eff}}\) for selected datasets are given in Fig. 5. We again find strong bounds on the sum of neutrino masses. We notice that removing the BK14 data has a small effect on \(\sum m_{\nu }\), which persists over different datasets. For instance, in NPDDE11+r, for Planck+BAO we find \(\sum m_{\nu } < 0.131\) eV (95% CL), which is reduced to \(\sum m_{\nu } < 0.123\) eV (95% CL) when we add the BK14 data. In NPDDE11, the bound is \(\sum m_{\nu } < 0.126\) eV (95% CL) with Planck+BAO, which is our best bound in this model. This is again stronger than the bound obtained in \(\Lambda \text {CDM}+\sum m_{\nu }\) with Planck+BAO, as in the previous NPDDE11+r model, and a large improvement over the bounds presented in [35], which varied the dark energy parameters in both the phantom and non-phantom ranges.

The strengthening of the bound from NPDDE11+r to NPDDE11 with Planck+BAO might simply be due to the reduction in the parameter space volume. On the other hand, it seems that BK14 prefers a lower \(\sum m_{\nu }\); even then, however, the changes are small. The BK14 data also seems to prefer slightly larger values of \(\sigma _8\), thereby increasing the tension with CFHTLenS. The inclusion of the HST prior again discards the standard value of \(N_{\text {eff}} = 3.045\) at 68% CL but not at 95% CL, and, as before in the NPDDE11+r model, it does not lead to stronger bounds on \(\sum m_{\nu }\), owing to a large positive correlation between \(H_0\) and \(N_{\text {eff}}\) but only a small correlation between \(H_0\) and \(\sum m_{\nu }\). This can be visualized in Fig. 6. The PAN dataset provides stricter bounds on \(w_0\) and \(w_a\), as before. We depict this in Fig. 7.

The use of the H071p6 prior instead of HST, here again, leads to lower values of \(N_{\text {eff}}\). For instance, with Planck+BAO+H071p6, we get a bound of \(N_{\text {eff}} = 3.193^{+0.197}_{-0.199}\) (68%). Thus, \(N_{\text {eff}} = 3.045\) is no longer excluded at 68% CL in this model either.

4.3 NPDDE11+\(A_{\text {lens}}\) model

We present the limits on the cosmological parameters in Table 5. A number of important changes occur with the introduction of the new varying parameter \(A_{\text {lens}}\). Since our main goal in this paper is to constrain neutrino masses, the most notable is a substantial relaxation in the bounds on \(\sum m_{\nu }\). In the previous cases we had fixed \(A_{\text {lens}} = 1\). Now that \(A_{\text {lens}}\) is varied, we find that the data prefers a large \(A_{\text {lens}}\) and discards the \(\Lambda \text {CDM}\) value of \(A_{\text {lens}} = 1\) at more than 95% CL (except when the Planck lensing data is included, which prefers a much lower \(A_{\text {lens}}\), implying a tension between Planck and lensing). Increasing the lensing amplitude \(A_{\text {lens}}\) has the same effect as decreasing \(\sum m_{\nu }\) [93]. Increasing \(A_{\text {lens}}\) leads to smearing of the high-\(l\) peaks in the CMB temperature and polarization angular power spectra (\(C_l^{TT}\), \(C_l^{TE}\), \(C_l^{EE}\), \(C_l^{BB}\)) due to increased gravitational lensing. Massive neutrinos, on the other hand, reduce this smearing: their large thermal velocities prevent them from clustering, so increasing \(\sum m_{\nu }\) suppresses the matter power spectrum on small scales [64], which in turn decreases the gravitational lensing of the CMB photons. This leads to a strong positive correlation between \(A_{\text {lens}}\) and \(\sum m_{\nu }\): to compensate for an increase in \(A_{\text {lens}}\), the neutrino masses are also increased. The 1-D plots for \(\sum m_{\nu }\) and \(N_{\text {eff}}\) for selected datasets are given in Fig. 8. In this model, the Planck data is almost insensitive to neutrino masses below 0.6 eV.
Our tightest bound of \(\sum m_{\nu } < 0.239\) eV (95% CL) again comes from the Planck+BAO data. This bound, while weaker than those in the previous models we have discussed, is still close to the \(\sum m_{\nu } < 0.23\) eV (95% CL) bound provided by the Planck collaboration [83], and still a large improvement over the bounds presented in [35], which varied the dark energy parameters in both the phantom and non-phantom ranges and found \(\sum m_{\nu }<\) 0.557 eV (95% CL) with Planck+BAO, demonstrating the large difference between phantom and non-phantom dark energies as far as neutrino masses are concerned. The preferred \(N_{\text {eff}}\) values are also higher in NPDDE11+\(A_{\text {lens}}\) than in the previous cases. The addition of the HST data leads to even higher \(N_{\text {eff}}\), with the \(N_{\text {eff}}=3.045\) value being disallowed even at 95% CL with Planck+HST, for which the 68% and 95% limits are \(N_{\text {eff}}=3.517_{-0.216}^{+0.196}\) and \(N_{\text {eff}}=3.517_{-0.396}^{+0.424}\), respectively. This signifies a tension between Planck and HST in this model, as in the previous models.
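The magnitude of this neutrino effect can be illustrated with a standard back-of-the-envelope estimate; the following sketch uses textbook relations and assumed fiducial values (\(\Omega _m = 0.31\), \(h = 0.68\)), not numbers from our chains. The neutrino density obeys \(\Omega _\nu h^2 \simeq \sum m_{\nu }/(93.14\ \text {eV})\), and free streaming suppresses the small-scale linear matter power spectrum by roughly \(\Delta P/P \simeq -8\,\Omega _\nu /\Omega _m\):

```python
def power_suppression(sum_mnu_ev, omega_m=0.31, h=0.68):
    """Approximate fractional suppression of the small-scale linear matter
    power spectrum from massive neutrinos: Delta P / P ~ -8 * f_nu.

    omega_m and h are assumed fiducial values, not fitted parameters.
    """
    omega_nu = sum_mnu_ev / 93.14 / h**2   # Omega_nu from the mass sum
    f_nu = omega_nu / omega_m              # neutrino fraction of total matter
    return -8.0 * f_nu

# For sum(m_nu) = 0.12 eV the suppression is roughly 7%:
print(f"{power_suppression(0.12):.3f}")
```

A mass sum near 0.1 eV thus changes the small-scale power, and hence the CMB lensing smearing, at the several-percent level, which is what makes the lensed CMB spectra sensitive to \(\sum m_{\nu }\) and degenerate with \(A_{\text {lens}}\).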

Fig. 5

Comparison of 1-D marginalized posterior distributions for \(\sum m_{\nu }\) (eV) and \(N_{{\text {eff}}}\) for various data combinations in NPDDE11

Fig. 6

1\(\sigma \) and 2\(\sigma \) marginalized contours for \(H_0\) (km/s/Mpc) vs. \(\sum m_{\nu }\) (eV) and \(H_0\) (km/s/Mpc) vs. \(N_{{\text {eff}}}\) for Planck+HST in the NPDDE11 model, showing a negligible correlation between \(H_0\) and \(\sum m_{\nu }\) but a strong positive correlation between \(H_0\) and \(N_{{\text {eff}}}\)

Fig. 7

Comparison of 1-D marginalized posterior distributions for \(w_0\) and \(w_a\) for different data combinations in NPDDE11

The use of the H071p6 prior again leads to lower values of \(N_{\text {eff}}\). In particular, with Planck + BAO + H071p6, we get a bound of \(N_{\text {eff}} = 3.329^{+0.207}_{-0.227}\) (68%). Thus, \(N_{\text {eff}} = 3.045\) is not excluded at 95% CL in this model, but only at 68% CL.

Another important change is in the bounds on the optical depth to reionization, \(\tau \). With Planck+BAO, the NPDDE11 model preferred \(\tau = 0.092\pm 0.018\) (68% CL), whereas this model prefers \(\tau = 0.059^{+0.021}_{-0.022}\) (68% CL), which is closer to the bound of \(\tau = 0.055 \pm 0.009\) (68% CL) given by the Planck 2016 intermediate results [94]. This was previously observed in [35], which performed the analysis varying the dark energy parameters in both the phantom and non-phantom sectors. This implies that the main effect comes through the degeneracy between \(\tau \) and \(A_{\text {lens}}\) and does not have much to do with dark energy. Again, while the NPDDE11+r and NPDDE11 models failed to reconcile Planck with weak lensing measurements like CFHTLenS, the NPDDE11+\(A_{\text {lens}}\) model prefers lower values of \(\sigma _8\) and the agreement with CFHTLenS improves considerably. This can be visualized in Fig. 9. This was also previously seen in [35], and hence we can again infer that it happens because of varying \(A_{\text {lens}}\). The bounds on the dynamical dark energy parameters are, however, weaker than in the other two models. The cosmological constant nevertheless remains compatible with the data even in this model (Fig. 10).

5 \(\tau \) and \(A_{\text {lens}}\): implications for Planck 2018

Both \(\tau \) and \(A_{\text {lens}}\) are correlated with \(\sum m_{\nu }\), and with each other. In particular, when \(A_{\text {lens}}\) is fixed, an increase in \(\sum m_{\nu }\) reduces the smearing in the damping tail of the CMB power spectra, which can be compensated by increasing \(\tau \) [10, 21]; hence they have a positive correlation. On the other hand, increasing \(A_{\text {lens}}\) increases the smearing of the damping tail, i.e., it has a negative correlation with \(\tau \). The measurement of \(\tau \) has improved significantly from Planck 2015 to Planck 2018. Thus we impose a prior on the optical depth to reionization, \(\tau = 0.055\pm 0.009\), taken from [95], in which the Planck collaboration removed previously unexplained systematic effects in the large angular scale (low-l) polarization data of the Planck HFI. We refer to this prior as \(\tau 0p055\) hereafter. We use \(\tau 0p055\) as a substitute for the low-l polarization data, and thus we discard the lowP data whenever we apply the \(\tau 0p055\) prior, to avoid any double counting. This prior is very close to the bound \(\tau = 0.0544^{+0.0070}_{-0.0081}\) (68%) obtained with the Planck 2018 temperature and polarization data [1]. Hence, imposing \(\tau = 0.055\pm 0.009\) should produce bounds on \(\sum m_{\nu }\) close to those that would be obtained with Planck 2018 (instead of Planck 2015) in the models that we have considered.
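Operationally, a Gaussian prior such as \(\tau 0p055\) simply adds a quadratic penalty to the total \(\chi ^2\) of the fit; a minimal sketch (the function name is ours, for illustration only, not from any public likelihood code):

```python
def tau0p055_chi2(tau, mean=0.055, sigma=0.009):
    """Gaussian-prior contribution to chi^2: adding this term to -2 ln L
    is equivalent to multiplying the likelihood by a Gaussian in tau."""
    return ((tau - mean) / sigma) ** 2

# A tau value one sigma away from the mean costs Delta chi^2 = 1, so the
# tau ~ 0.09-0.1 values preferred without the prior are heavily penalized:
print(tau0p055_chi2(0.095))   # roughly 20
```

This is why the prior pulls the preferred \(\tau \) down and, through the \(\tau \)-\(\sum m_{\nu }\) degeneracy, tightens the neutrino mass bounds discussed below.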

Table 5 Bounds on cosmological parameters in the NPDDE11+\(A_{\text {lens}}\) model. Marginalized limits are given at 68% CL whereas upper limits are given at 95% CL. Note that \(H_0\) and \(\sigma _8\) are derived parameters
Fig. 8

Comparison of 1-D marginalized posterior distributions for \(\sum m_{\nu }\) (eV) and \(N_{{\text {eff}}}\) for various data combinations in NPDDE11+\(A_{\text {lens}}\)

Fig. 9

1\(\sigma \) and 2\(\sigma \) marginalized contours in the \(\sigma _8-\Omega _m\) plane showing that the NPDDE11+\(A_{\text {lens}}\) model is effective in reducing the tension between CFHTLenS and Planck 2015

We find that in the NPDDE11+r model, with Planck + BK14 + BAO + \(\tau 0p055\), we get \(\sum m_{\nu }<\) 0.097 eV (95%), i.e., an improvement over the \(\sum m_{\nu }<\) 0.123 eV limit with Planck + BK14 + BAO in Table 2. This bound is actually lower than \(\sum m_{\nu }\simeq 0.1\) eV, the minimum mass sum required for the inverted mass hierarchy of neutrinos. At the same time, in the NPDDE11 model, with Planck + BAO + \(\tau 0p055\), we get \(\sum m_{\nu }<\) 0.107 eV (95%), which is also an improvement over the \(\sum m_{\nu }<\) 0.126 eV (95%) bound with Planck + BAO (see Table 4). This happens because in both of these models the mean value of \(\tau \) hovers around 0.09–0.1. The \(\tau 0p055\) prior partially breaks the degeneracy between \(\tau \) and \(\sum m_{\nu }\), and produces lower values of \(\sum m_{\nu }\) by lowering the preferred \(\tau \) values.

On the other hand, in the NPDDE11+\(A_{\text {lens}}\) model with Planck + BAO + \(\tau 0p055\), we found \(\sum m_{\nu } < \) 0.237 eV, which is almost identical to the bound of \(\sum m_{\nu } < \) 0.239 eV (95%) with Planck + BAO (see Table 5). This happens because all three parameters, \(\tau \), \(A_{\text {lens}}\), and \(\sum m_{\nu }\), are varied together. Since the data prefers \(A_{\text {lens}}\) values higher than the \(\Lambda \)CDM value in this model, the degeneracy between \(A_{\text {lens}}\) and \(\tau \) leads to a much lower value of \(\tau \), and thus the correlation between \(\tau \) and \(\sum m_{\nu }\) is already much smaller in this model than in the other two. Consequently, \(\tau 0p055\) has little effect on the neutrino mass bounds in this model.

We also obtained limits on \(A_{\text {lens}}\) in a \(\Lambda \text {CDM} + A_{\text {lens}}\) model with the Planck 2015 full temperature and polarization data, finding \(A_{\text {lens}} = 1.15^{+0.072}_{-0.082}\) (68% CL). In the Planck 2018 cosmological parameters paper [1], for similar data and the same model, the quoted value is \(A_{\text {lens}} = 1.18\pm 0.065\) (68%) (see their equation 36b). This shows that there is only a very small change in \(A_{\text {lens}}\) from Planck 2015 to Planck 2018. Thus, as far as \(A_{\text {lens}}\) is concerned, it is likely that there will not be any considerable changes in the limits on the other cosmological parameters with the Planck 2018 data.

Fig. 10

Comparison of 1-D marginalized posterior distributions for \(w_0\) and \(w_a\) for different data combinations in NPDDE11+\(A_{\text {lens}}\)

6 Summary

In this work we have studied three different extended cosmological scenarios with non-phantom dynamical dark energy (NPDDE), with a focus on constraining the sum of neutrino masses. We have presented bounds on all the varying parameters in these extended scenarios and described the main effects we observed. In the first model, NPDDE11+r, we consider 12 parameters: the 6 \(\Lambda \text {CDM}\) parameters, two dynamical dark energy parameters with the CPL parametrization (\(w_0\) and \(w_a\)) with hard priors to satisfy the non-phantom requirement, the effective number of relativistic neutrino species at recombination (\(N_{\text {eff}}\)) and the sum of neutrino masses (\(\sum m_{\nu }\)), and the running of the inflationary spectral index (\(n_{run}\)) and the tensor-to-scalar ratio (\(r_{0.05}\)). We used different combinations of recent datasets including the Planck 2015 temperature and polarization data, the CMB B-mode spectrum data from the BICEP2/Keck collaboration (BK14), the BAO SDSS III BOSS DR12, MGS and 6dFS data, the SNe Ia Pantheon sample (PAN), and the HST prior (\(H_0 = 73.24\pm 1.74\) km/s/Mpc (68% CL)). We found that CMB-only data is not very effective in constraining the cosmological parameters. The 1\(\sigma \) spreads of the parameters were, however, increased in this model compared to \(\Lambda \text {CDM}\), due to the doubling of the number of parameters. Our best bound on neutrino masses in this model came from Planck+BK14+BAO: \(\sum m_{\nu } < 0.123\) eV (95% CL), a strong bound close to the minimum mass sum of \(\simeq \) 0.1 eV required for the inverted hierarchy of neutrino masses, and stronger than the bound of \(\sum m_{\nu }<\) 0.158 eV (95% CL) obtained in \(\Lambda \text {CDM}+\sum m_{\nu }\) with Planck+BAO [21] (see also [54] for a similar conclusion in a smaller parameter space).
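The non-phantom hard prior mentioned above can be made concrete with a small sketch (our own illustration, not the code used in the analysis): since the CPL equation of state \(w(a) = w_0 + w_a(1-a)\) is linear in the scale factor \(a\), demanding \(w(z) \ge -1\) at all redshifts reduces to checking the two endpoints \(a = 1\) (today) and \(a \to 0\) (early times):

```python
def is_non_phantom(w0, wa):
    """True if the CPL w(a) = w0 + wa*(1 - a) satisfies w >= -1 for all
    0 < a <= 1, i.e. the dark energy never enters the phantom regime."""
    # w is linear in a, so it suffices to check both endpoints:
    return w0 >= -1.0 and (w0 + wa) >= -1.0

# The cosmological constant (w0 = -1, wa = 0) lies exactly on the boundary:
assert is_non_phantom(-1.0, 0.0)
# A model that is non-phantom today but phantom in the past is rejected:
assert not is_non_phantom(-0.95, -0.2)
```

In the MCMC analysis this amounts to the hard priors \(w_0 \ge -1\) and \(w_0 + w_a \ge -1\) on the sampled parameters.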
We also found that the inclusion of the HST prior leads to a preference for dark radiation at 68% CL but not at 95% CL, while without the HST prior the data is completely consistent with the standard value of \(N_{\text {eff}} = 3.045\). This preference, however, is driven by the more than 3\(\sigma \) tension between Planck and HST regarding the value of \(H_0\) and should be interpreted cautiously. This model did not improve the \(\sigma _8\) tension present in the \(\sigma _8-\Omega _m\) plane between Planck and CFHTLenS. The Pantheon sample improved the bounds on the dark energy parameters. All combinations of data are also compatible with a cosmological constant (\(w_0=-1, w_a = 0\)). However, this is mostly because we are restricting the parameter space to \(w(z)\ge -1\); [35] had found that the data mostly prefers the phantom region in such an extended parameter space when both the phantom and non-phantom regions are allowed.

We tested the stability of these results in a smaller parameter space (model: NPDDE11), where we turned off the tensor perturbations and also did not use the BK14 data. We found that the general conclusions made for NPDDE11+r also hold in this model. The tightest bound of \(\sum m_{\nu }<\) 0.126 eV (95% CL) in this model also came from Planck + BAO.

Finally, we studied the NPDDE11+\(A_{\text {lens}}\) model, where we also varied the lensing amplitude. We found that, except when the Planck lensing data is included, the \(A_{\text {lens}} = 1\) value predicted by \(\Lambda \text {CDM}\) was rejected at more than 95% CL by the datasets. Due to this, the \(\sum m_{\nu }\) bounds also worsened, with our best result in this model, \(\sum m_{\nu } < 0.239\) eV (95% CL), again coming from Planck+BAO. This result is, however, still close to the \(\sum m_{\nu } < 0.23\) eV (95% CL) bound by the Planck collaboration [83], showing that the cosmological data is effective in constraining neutrino masses in a cosmology with NPDDE. The HST prior also preferred a dark radiation component, but this time at the 95% CL level as well, as this model prefers higher values of \(N_{\text {eff}}\). On the other hand, we found that this model helps relieve the \(\sigma _8\) tension between Planck and CFHTLenS considerably.

The recent Planck 2018 results [1] put a bound of \(\sum m_{\nu }<\) 0.13 eV (95% CL) in \(\Lambda \text {CDM}+\sum m_{\nu }\) with Planck+BAO. The aggressive bound of \(\sum m_{\nu }<\) 0.123 eV (95% CL) (Planck + BK14 + BAO) is stronger than this Planck 2018 bound, and hence our results remain very much relevant even though the analysis uses the Planck 2015 dataset. In fact, when we use the Gaussian prior on the optical depth to reionization, \(\tau = 0.055\pm 0.009\), from the 2016 Planck intermediate results, and discard the low-l polarization data, the bound on neutrino masses improves to \(\sum m_{\nu }<\) 0.097 eV (95%), which is less than the 0.1 eV mass sum required for the inverted hierarchy of active neutrino masses.

While we have used the CPL parameterization in this paper, it is not the only parameterization available for non-phantom dark energy, and a change of parameterization can change the bounds obtained on the sum of neutrino masses. For instance, if we set the \(w_a\) parameter to zero, i.e., if we consider only a simple \(w(z) = w_0\) parameterization, we find that the bounds on \(\sum m_{\nu }\) relax slightly. In the NPDDE11 model, with \(w_a = 0\) and \(w(z)=w_0\), and using Planck + BAO data, we found \(\sum m_{\nu }<\) 0.141 eV (95%), instead of \(\sum m_{\nu }<\) 0.126 eV (95%) when we vary both \(w_0\) and \(w_a\). In the NPDDE11+\(A_{\text {lens}}\) model, with \(w_a = 0\) and \(w(z)=w_0\), and using Planck + BAO, we obtained \(\sum m_{\nu }<\) 0.261 eV (95%), instead of \(\sum m_{\nu }<\) 0.239 eV (95%). Other parameterizations that can be considered include the Logarithmic parameterization [96] (\(w(a) = w_0 - w_a\ln a\)) and the Jassal–Bagla–Padmanabhan (JBP) parameterization [97] (\(w(a) = w_0 + w_a\, a(1-a)\)), among others. Analyses involving these parameterizations are beyond the scope of this paper. However, we would like to point the reader to [41], where the authors found similar limits on \(\sum m_{\nu }\) with the CPL and Logarithmic parameterizations for the case of degenerate hierarchy, whereas in the case of JBP the bound on \(\sum m_{\nu }\) was found to be significantly stronger. While [41] does not discard the phantom region, it is possible that results from analyses with only non-phantom dark energy will likewise vary with the parameterization used, as far as neutrino masses are concerned.
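For concreteness, the three parameterizations discussed above can be written out and compared numerically; this is an illustrative sketch with arbitrary example values of \(w_0\) and \(w_a\), not values taken from our chains:

```python
import math

# The three w(a) parameterizations discussed above (a = scale factor):
def w_cpl(a, w0, wa):   # CPL:          w(a) = w0 + wa (1 - a)
    return w0 + wa * (1.0 - a)

def w_log(a, w0, wa):   # Logarithmic:  w(a) = w0 - wa ln(a)
    return w0 - wa * math.log(a)

def w_jbp(a, w0, wa):   # JBP:          w(a) = w0 + wa a (1 - a)
    return w0 + wa * a * (1.0 - a)

# All three reduce to w0 today (a = 1) but diverge in the past:
w0, wa = -0.9, 0.3
for w in (w_cpl, w_log, w_jbp):
    print(w.__name__, round(w(0.5, w0, wa), 3))
```

The growing differences at early times are what allow the same data to yield different neutrino mass bounds under different parameterizations.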

We would like to add a final remark that we have obtained these bounds taking the datasets at face value. Unresolved systematics present in the datasets could have affected our results and conclusions. For instance, the tension between Planck and the HST prior can be due to a dark radiation species, but it can also be due to systematics present in either dataset. Thus there is still a lot to learn, both about the robustness of the datasets and about the dynamics of dark energy.