1 Introduction

Dark matter (DM) accounts for the majority of the matter in the Universe, but its nature remains a mystery. It has been known for some time [1,2,3] that GeV-scale particle DM can accurately reproduce the observed relic abundance of DM, provided that it has an interaction strength with standard model (SM) particles that is comparable to that of the weak force. This is the Weakly Interacting Massive Particle (WIMP) paradigm.

The simplest WIMP model is the “scalar singlet” or scalar “Higgs-portal” scenario, in which one adds to the SM a massive real scalar field S uncharged under the SM gauge group [4,5,6]. S is stabilised by a \(\mathbb {Z}_2\) symmetry, and never obtains a vacuum expectation value (VEV). The only renormalisable interaction between the singlet and the SM allowed by these symmetries arises from a Lagrangian term of the form \(S^2|H|^2\). This term gives the singlet a so-called “Higgs portal” for interacting with the SM, leading to a range of possible phenomenological consequences. These include thermal production in the early Universe and present-day annihilation signals [7,8,9], direct detection and \(h\rightarrow SS\) decays [10]. A number of recent papers have investigated prospects for relaxing these constraints by adding further scalars [11,12,13]. The singlet has also been implicated in inflation [14,15,16] and baryogenesis [17,18,19].

The simplicity of the scenario and the discovery of the Higgs boson in 2012 [20, 21] have focussed much attention on the singlet model in recent years. XENON100 and WMAP constraints were applied in Ref. [22], and an early global fit of the model using a similar range of data was performed in Ref. [23]. LHC Run I constraints from a CMS vector boson fusion analysis, and monojet and mono-Z analyses were shown to be very weak [24]; indeed, monojet constraints on all minimal Higgs-portal models (i.e. scalar, fermion or vector DM interacting with the SM only via the Higgs portal) are weak [25]. Implications of the Higgs mass measurement and a detailed treatment of direct and indirect detection were explored in Ref. [26], followed by the application of direct limits from the LUX and PandaX experiments [27,28,29]. Anti-proton data can be important in the region of the Higgs resonance [27, 30], and competitive with the LUX limits at higher DM masses, but are ultimately prone to substantial cosmic ray propagation uncertainties. Discovery prospects at future colliders have been explored for the 14 TeV LHC and a 100 TeV hadron collider [31, 32], and the International Linear Collider [33].

The most comprehensive recent studies were presented in Refs. [26, 34, 35]. The first pair of papers examined the scalar singlet scenario in light of recent (and projected) LHC Higgs invisible width measurements [36,37,38], the Planck relic density measurement [39], Planck and WMAP CMB constraints on DM annihilation at the time of recombination [39,40,41], Fermi-LAT analysis of gamma rays in the direction of 15 dwarf spheroidal galaxies using 6 years of Pass 8 data [42], and LUX limits on the spin-independent WIMP-nucleon scattering cross-section [43]. These studies also investigated the prospects for detection in gamma rays by the Cherenkov Telescope Array (CTA; [44,45,46]), and for direct detection by XENON1T [47]. Reference [35] presented a global fit to determine the regions of the scalar singlet model space that can explain the apparent excess of gamma rays observed by Fermi towards the Galactic centre, frequently interpreted as evidence for DM annihilation [48,49,50,51,52,53,54]. This included a treatment of the Planck relic density constraint, LHC invisible Higgs width constraints, direct search data from LUX (and projections for XENON1T and DARWIN), and constraints from Fermi-LAT searches for DM annihilation in dwarf galaxies and \(\gamma \)-ray lines at the Galactic centre.

Although lines and signals from the Galactic centre in the context of this model have received a reasonable amount of attention [8, 35, 55,56,57], in general these signals are relevant only if the singlet is produced non-thermally, as the regions of parameter space where such signals are substantial have quite low thermal relic abundances [26]. Fitting the excess at the Galactic centre requires relatively large couplings, which in turn imply too little DM from thermal freeze-out. Some previous studies have solved this issue by assuming an unspecified additional production mechanism. This reduces the predictability of the theory, as the cosmological abundance of scalar singlets ceases to be a prediction. We will take a different approach, allowing for the possibility that the scalar singlet constitutes only a sub-component of DM, and permitting a different species (e.g. axions) to make up the rest. Indeed, as we show in this paper, experiments are now so sensitive to DM signals that they can probe singlet models constituting less than a hundredth of a percent of the total DM.

The purpose of the present paper is twofold. First and foremost, we provide the most comprehensive study yet of the scalar singlet scenario, in a number of ways. We augment the particle physics model parameters with a series of nuisance parameters characterising the DM halo distribution, the most important SM masses and couplings, and the nuclear matrix elements relevant for the calculation of direct search yields. These are included in the scan as free parameters, and are constrained by a series of likelihoods derived from the best current knowledge of each observable (and in some cases, their correlations). Compared to the constraints used in Refs. [26, 34], we add improved direct detection likelihoods [58] from LUX [59], PandaX [60], SuperCDMS [61] and XENON100 [62], as well as IceCube limits on DM annihilation to neutrinos in the core of the Sun [63, 64]. We also test some benchmark models obtained in our scan for stability of the electroweak vacuum. Given the recent preference for astrophysical explanations of the Fermi-LAT Galactic centre excess [65,66,67,68,69,70,71], we do not add this to the scan as a positive measurement of DM properties, unlike in Ref. [35]. We explore the extended parameter space in more detail than has previously been attempted, using four different scanning algorithms, and more stringent convergence criteria than previous studies. The secondary purpose of this paper is to provide an example global statistical analysis using the Global and Modular Beyond-Standard Model Inference Tool (GAMBIT) [72], for a DM model where extensive comparison literature exists.

In Sect. 2, we describe the Lagrangian and parameters of the scalar singlet model, discuss our astrophysical assumptions, and define the nuisance parameters that we include in our global fit. Sect. 3 gives details of our scan, including the likelihood terms that we include for each constraint, the sampling algorithms we employ, and their settings. We present the latest status of the singlet model in Sect. 4, before concluding in Sect. 5. All input files, samples and best-fit benchmarks produced for this paper are publicly accessible from Zenodo [73].

2 Physics framework

2.1 Model definition

The renormalisable terms involving a new real singlet scalar S, permitted by the \(\mathbb {Z}_2\), gauge and Lorentz symmetries, are

$$\begin{aligned} {\mathcal {L}} = \frac{1}{2} \mu _{\scriptscriptstyle S}^2 S^2 + \frac{1}{2}\lambda _{h\scriptscriptstyle S}S^2|H|^2 + \frac{1}{4}\lambda _{\scriptscriptstyle S}S^4 + \frac{1}{2}\partial _\mu S \partial ^\mu S. \end{aligned}$$
(1)

From left to right, these are: the bare S mass, the Higgs-portal coupling, the S quartic self-coupling, and the S kinetic term. Because S never obtains a VEV, the model has only three free parameters: \(\mu _{\scriptscriptstyle S}^2\), \(\lambda _{h\scriptscriptstyle S}\) and \(\lambda _{\scriptscriptstyle S}\). Following electroweak symmetry breaking, the portal term induces \(h^2S^2\), \(v_0hS^2\) and \(v_0^2S^2\) terms, where h is the physical Higgs boson and \(v_0 = 246\) GeV is the VEV of the Higgs field. The additional \(S^2\) term leads to a tree-level singlet mass

$$\begin{aligned} m_{\scriptscriptstyle S}= \sqrt{\mu _{\scriptscriptstyle S}^2 + \frac{1}{2}{\lambda _{h\scriptscriptstyle S}v_0^2}}. \end{aligned}$$
(2)
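
For orientation, Eq. 2 can be evaluated directly. The following minimal Python sketch does so; the function name, interface and default VEV value are illustrative only, and not part of any GAMBIT interface.

```python
import numpy as np

def singlet_mass(mu_S_sq, lambda_hS, v0=246.0):
    """Tree-level singlet mass of Eq. 2, in GeV.

    mu_S_sq   -- bare mass-squared parameter mu_S^2 [GeV^2]
    lambda_hS -- Higgs-portal coupling
    v0        -- Higgs vacuum expectation value [GeV]
    """
    return np.sqrt(mu_S_sq + 0.5 * lambda_hS * v0**2)

# For a small portal coupling the bare mass term dominates
print(singlet_mass(mu_S_sq=62.5**2, lambda_hS=6.5e-4))  # ~62.66 GeV
```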

Dark matter phenomenology is driven predominantly by \(m_{\scriptscriptstyle S}\) and \(\lambda _{h\scriptscriptstyle S}\), with viable solutions known to exist [26, 34] in a number of regions:

  1. the resonance region around \(m_{\scriptscriptstyle S}\sim m_h/2\), where couplings are very small (\(\lambda _{h\scriptscriptstyle S}<10^{-2}\)) but the singlet can nevertheless constitute all of the observed DM,

  2. the resonant “neck” region at \(m_{\scriptscriptstyle S}=m_h/2\), with large couplings but an extremely small relic S density, and

  3. a high-mass region with order unity couplings.

The parameter \(\lambda _{\scriptscriptstyle S}\) remains relevant when considering DM self-interactions (e.g. [74]), and the stability of the electroweak vacuum. In the SM, the measured values of the Higgs and top quark masses indicate that the electroweak vacuum is not absolutely stable, but rather meta-stable [75]: although the present vacuum is not the global minimum of the scalar potential, its expected lifetime exceeds the age of the Universe. While meta-stability is therefore not in conflict with observations, one appealing feature of scalar extensions of the SM is that the expected lifetime can be extended significantly, or the stability problem solved entirely, by making the current vacuum the global minimum.

The stability of the electroweak vacuum has been a consideration in many studies of scalar singlet extensions to the SM [14, 76,77,78,79,80,81,82,83,84,85,86], typically appearing alongside constraints from perturbativity, direct detection experiments and the relic abundance of DM. As such, vacuum stability is an interesting aspect of the scalar singlet model (and indeed, of any UV-complete model) to study. In this paper, however, we primarily treat the scalar singlet DM model as a low-energy effective theory, and do not consider \(\lambda _{\scriptscriptstyle S}\) as a relevant parameter. In a future fit, we plan to explore renormalisation of the scalar singlet model over the full range of scales, from electroweak to Planck, including full calculations of perturbativity and the lifetime of the electroweak vacuum. Here, for the sake of interest, we simply check the stability of the electroweak vacuum for a few of our highest-likelihood parameter points.

2.2 Relic density and Higgs invisible width

In order to calculate the relic density of S, we need to solve the Boltzmann equation [87]

$$\begin{aligned} \frac{\mathrm{d}n_{\scriptscriptstyle S}}{\mathrm{d}t}+3Hn_{\scriptscriptstyle S}=-\langle \sigma v_{\mathrm {rel}}\rangle (n_{\scriptscriptstyle S}^2-n_{{\scriptscriptstyle S},{\mathrm {eq}}}^2), \end{aligned}$$
(3)

where \(n_{\scriptscriptstyle S}\) is the DM number density, \(n_{{\scriptscriptstyle S},\mathrm {eq}}\) is the number density if the DM population were in chemical equilibrium with the rest of the Universe, H is the Hubble rate, and \(\langle \sigma v_{\mathrm {rel}}\rangle \) is the thermally averaged self-annihilation cross-section times the relative velocity of the annihilating DM particles (technically the Møller velocity). The non-averaged cross-section \(\sigma v\) depends on the centre-of-mass energy of the annihilation \(\sqrt{s}\), and the thermal average depends on temperature T. The average is given by

$$\begin{aligned} \langle \sigma v_{\mathrm {rel}}\rangle =\int _{4m_{\scriptscriptstyle S}^2}^{\infty }\mathrm{d}s\, \frac{s\sqrt{s-4m_{\scriptscriptstyle S}^2}K_1(\sqrt{s}/T)\sigma v_{\mathrm {cms}}}{16 T m_{\scriptscriptstyle S}^4K_2^2(m_{\scriptscriptstyle S}/T)} , \end{aligned}$$
(4)

where for convenience we have expressed the result in terms of the relative velocity of the annihilating S particles in the centre-of-mass frame, \(v_\mathrm{cms}=2\sqrt{1-4m_{\scriptscriptstyle S}^2/s}\). For the case of the scalar singlet model, the non-averaged cross-section for annihilation into all final states except hh is [26]

$$\begin{aligned} \sigma v_{\mathrm {cms}}=\frac{2\lambda _{h\scriptscriptstyle S}^2v_0^2}{\sqrt{s}} \frac{\varGamma _h(\sqrt{s}) }{ (s-m_h^2)^2+m_h^2\varGamma _h^2(m_h)}. \end{aligned}$$
(5)

For \(m_{\scriptscriptstyle S}>m_h\), this expression needs to be supplemented with the partial annihilation cross-section into hh, given in Eq. A4 of Ref. [26].
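
As an illustration of Eqs. 4 and 5, the short Python sketch below performs the thermal average by direct numerical integration. It omits the \(hh\) channel, and the off-shell Higgs width is a crude stand-in (a simple rescaling of the on-shell width) rather than the tabulated DecayBit result; all function and variable names are ours, not part of any GAMBIT module.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv   # modified Bessel functions K_n

V0, MH, GAMMA_H_MH = 246.0, 125.09, 4.07e-3   # GeV

def sigma_v_cms(s, lam_hs, gamma_h):
    """Eq. 5: sigma * v_cms for annihilation into all non-hh final states."""
    rs = np.sqrt(s)
    return (2.0 * lam_hs**2 * V0**2 / rs * gamma_h(rs)
            / ((s - MH**2)**2 + MH**2 * GAMMA_H_MH**2))

def sigma_v_thermal(T, mS, lam_hs, gamma_h):
    """Eq. 4: thermally averaged <sigma v_rel> at temperature T [GeV], in GeV^-2."""
    def integrand(s):
        return (s * np.sqrt(s - 4.0 * mS**2) * kv(1, np.sqrt(s) / T)
                * sigma_v_cms(s, lam_hs, gamma_h))
    # The Boltzmann factor K_1(sqrt(s)/T) makes the integrand negligible
    # well before this finite upper limit.
    s_max = (2.0 * mS + 40.0 * T)**2
    num, _ = quad(integrand, 4.0 * mS**2, s_max, limit=200)
    return num / (16.0 * T * mS**4 * kv(2, mS / T)**2)

# Crude stand-in for the off-shell SM Higgs width Gamma_h(sqrt(s)); a real
# calculation would interpolate the tabulated width instead.
gamma_h = lambda m: GAMMA_H_MH * (m / MH)**3

print(sigma_v_thermal(T=70.0 / 25.0, mS=70.0, lam_hs=0.02, gamma_h=gamma_h))
```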

We use the SM Higgs boson width \(\varGamma _h(\sqrt{s})\) as a function of the invariant mass of the resonance \(m_{h^*}=\sqrt{s}\), as implemented in DecayBit [88]. For \(m_{\scriptscriptstyle S}<m_h/2\), the Higgs can also decay invisibly to a pair of singlets; at tree level, the partial width for this decay is

$$\begin{aligned} \varGamma _{h\rightarrow {\scriptscriptstyle SS}} = \frac{\lambda _{h\scriptscriptstyle S}^2 v_0^2}{32\pi m_h}(1 -4 m_{\scriptscriptstyle S}^2/m_h^2)^{1/2}. \end{aligned}$$
(6)

This is the standard method for calculating the relic density. It assumes that kinetic decoupling of DM from other species occurs well after chemical freeze-out. If this is not the case, one must solve a coupled system of differential equations rather than the single Boltzmann equation (Eq. 3) [90]. For the scalar singlet model, the standard approach is very accurate except at and below the Higgs resonance, where \(m_{\scriptscriptstyle S}\sim m_h/2\). There, \(\sigma v_{\mathrm {rel}}\) is resonantly enhanced, so even small values of \(\lambda _{h\scriptscriptstyle S}\), for which DM kinetically decouples early, can avoid thermal overproduction; in this regime a more accurate treatment can change the relic density by up to an order of magnitude in the range \(53~\mathrm{GeV}\lesssim m_{\scriptscriptstyle S}\lesssim 63\) GeV [91]. Although this effect should arguably be included for the sake of completeness in future fits, it has little impact on our final results, because it affects only a relatively small mass range.
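
The standard calculation can be illustrated with a simple freeze-out solver. The sketch below rewrites Eq. 3 in terms of the yield \(Y=n_{\scriptscriptstyle S}/s\) and \(x=m_{\scriptscriptstyle S}/T\), assuming for brevity a constant \(\langle \sigma v_{\mathrm {rel}}\rangle \) and a constant number of relativistic degrees of freedom. It is a textbook-style stand-in for the DarkSUSY Boltzmann solver used in our actual fits, not the DarkBit implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kv

M_PL = 1.22e19   # Planck mass [GeV]
GSTAR = 100.0    # relativistic degrees of freedom, held constant here for simplicity

def Y_eq(x, g=1.0):
    """Equilibrium yield n_eq/s for a Maxwell-Boltzmann species with g internal d.o.f."""
    return 45.0 * g / (4.0 * np.pi**4 * GSTAR) * x**2 * kv(2, x)

def omega_h2(mS, sigma_v, x_start=10.0, x_end=1000.0):
    """Relic abundance for singlet mass mS [GeV] and constant <sigma v_rel> [GeV^-2]."""
    # lambda = s(T=mS) * <sigma v> / H(T=mS) for a radiation-dominated Universe
    lam = (2.0 * np.pi**2 / 45.0) * GSTAR / (1.66 * np.sqrt(GSTAR)) * mS * M_PL * sigma_v

    def rhs(x, Y):
        return -lam / x**2 * (Y[0]**2 - Y_eq(x)**2)

    sol = solve_ivp(rhs, (x_start, x_end), [Y_eq(x_start)],
                    method="LSODA", rtol=1e-8, atol=1e-30)
    Y_inf = sol.y[0, -1]
    return 2.74e8 * mS * Y_inf   # Omega h^2 = mS * s_0 * Y_inf / (rho_crit / h^2)

# A thermal-scale cross-section (~3e-26 cm^3/s, i.e. ~2.6e-9 GeV^-2) gives
# Omega h^2 of order 0.1, as expected for a WIMP.
print(omega_h2(mS=100.0, sigma_v=2.6e-9))
```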

2.3 Direct detection

The predicted number of events in a direct detection experiment is

$$\begin{aligned} N_{\mathrm {p}} = MT \int _0^{\infty } \phi (E) \frac{\mathrm{d}R}{\mathrm{d}E}(E)\,\mathrm{d}E, \end{aligned}$$
(7)

with M the detector mass, T the exposure time, and \(\phi (E)\) the detector response function. The latter encodes the fraction of recoil events of some energy E that are expected to be detected, within some analysis region.

The differential recoil rate \(\frac{\mathrm{d}R}{\mathrm{d}E}\) depends on the nuclear scattering cross-section. The scalar singlet model has no spin-dependent interactions with nuclei. The spin-independent WIMP-nucleon cross-section is

$$\begin{aligned} \sigma _{\mathrm{SI}}=\frac{m_N^4}{4\pi (m_{\scriptscriptstyle S}+ m_N)^2}\frac{\lambda _{h\scriptscriptstyle S}^2f_N^2}{m_h^4}, \end{aligned}$$
(8)

where \(m_N\) is the nucleon mass, and \(f_N\) is the effective Higgs–nucleon coupling

$$\begin{aligned} f_N=\frac{2}{9}+\frac{7}{9}\sum _{q=u,d,s}f^{(N)}_{Tq}. \end{aligned}$$
(9)

The three light quark nuclear matrix elements \(f^{(N)}_{Tq}\) can be calculated from the nuclear matrix elements that describe the quark content of the proton and neutron,

$$\begin{aligned} \sigma _l\equiv m_l \langle N | \bar{u}u + \bar{d}d | N \rangle , \end{aligned}$$
(10)
$$\begin{aligned} \sigma _s\equiv m_s \langle N | \bar{s}s | N \rangle , \end{aligned}$$
(11)

where \(m_l \equiv (1/2) (m_u + m_d)\), and \(N \in \{p,n\}\). See Ref. [26] for details.
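
As a concrete illustration of Eqs. 8 and 9, the short sketch below evaluates the spin-independent cross-section for a given set of \(f^{(N)}_{Tq}\) values. The numerical \(f^{(N)}_{Tq}\) inputs shown are purely illustrative placeholders, not the fitted nuisance values of our scans.

```python
import numpy as np

GEV2_TO_CM2 = 3.894e-28   # conversion of a cross-section from GeV^-2 to cm^2

def f_N(fTu, fTd, fTs):
    """Effective Higgs-nucleon coupling of Eq. 9."""
    return 2.0 / 9.0 + 7.0 / 9.0 * (fTu + fTd + fTs)

def sigma_SI(mS, lam_hs, fN, mN=0.939, mh=125.09):
    """Spin-independent WIMP-nucleon cross-section of Eq. 8, returned in cm^2."""
    sigma = mN**4 / (4.0 * np.pi * (mS + mN)**2) * lam_hs**2 * fN**2 / mh**4
    return sigma * GEV2_TO_CM2

# Illustrative (placeholder) light-quark matrix elements
fN_val = f_N(fTu=0.02, fTd=0.03, fTs=0.05)
print(fN_val, sigma_SI(mS=62.5, lam_hs=6.5e-4, fN=fN_val))
```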

Halo uncertainties can have a significant impact on the interpretation of direct searches for DM [92]. For the DM halo in the Milky Way, we assume a generalised NFW profile, with a local Maxwell–Boltzmann speed distribution truncated at the local Galactic escape velocity. The only parameter of the density profile that we retain as a nuisance parameter is \(\rho _0\), the local DM density, although GAMBIT makes it straightforward to also include uncertainties arising from the DM velocity profile in future fits. For the scans of this paper, we assume a most probable speed \(v_0=235\) km s\(^{-1}\) [93, 94], and an escape velocity of \(v_{\mathrm {esc}}=550\) km s\(^{-1}\) [95]. See Ref. [58] for details.

2.4 Indirect detection

The flux of gamma rays from DM annihilation factorises into a part \(\varPhi \) that depends only on the particle physics properties of the DM and a part J that depends only on its astrophysical distribution.

For the gamma-ray flux in energy bin i with width \(\varDelta E_i \equiv E_{\text {max},i} - E_{\text {min},i}\), the particle physics factor is

$$\begin{aligned} \varPhi _i = \sum _j\frac{\langle \sigma v\rangle _{0,j}}{8\pi m_{\scriptscriptstyle S}^2}\int _{E_{\text {min},i}}^{E_{\text {max},i}}\mathrm{d}E \, \frac{\mathrm{d}N_{\gamma ,j}}{\mathrm{d}E}, \end{aligned}$$
(12)

where \(\mathrm{d}N_{\gamma ,j}/\mathrm{d}E\) is the differential gamma-ray multiplicity for single annihilations into final state j, and \(\langle \sigma v\rangle _{0,j}\equiv \sigma v_j|_{v\rightarrow 0} \equiv \sigma v_j|_{s\rightarrow 4m_{\scriptscriptstyle S}^2}\) is the zero-velocity limit of the partial annihilation cross-section into final state j. This is the final-state-specific equivalent of Eq. 5. We compute the partial annihilation cross-sections for the singlet model using the expressions of Appendix A of Ref. [26], as implemented in DarkBit [58]. We obtain the predicted spectra \({\mathrm{d}N_{\gamma }}/{\mathrm{d}E}\) for each model point by using a Monte-Carlo showering simulation, detailed in Ref. [58].
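
The binned particle-physics factor of Eq. 12 is straightforward to evaluate once the spectra are tabulated. The sketch below does so for a single annihilation channel, assuming the tabulated spectrum and the zero-velocity cross-section are supplied externally (e.g. by a showering code); the interface and the toy spectrum are illustrative only.

```python
import numpy as np

def phi_bins(sv0, mS, E_tab, dNdE_tab, bin_edges):
    """Particle-physics factor Phi_i of Eq. 12 for a single annihilation channel.

    sv0             -- zero-velocity cross-section <sigma v>_0 [cm^3 s^-1]
    mS              -- singlet mass [GeV]
    E_tab, dNdE_tab -- tabulated photon spectrum dN/dE per annihilation [GeV, 1/GeV]
    bin_edges       -- gamma-ray energy bin edges [GeV]
    Multiplying the result by J in GeV^2 cm^-5 gives a flux in cm^-2 s^-1.
    """
    prefactor = sv0 / (8.0 * np.pi * mS**2)
    phis = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        E = np.geomspace(lo, hi, 200)
        dNdE = np.interp(E, E_tab, dNdE_tab, left=0.0, right=0.0)
        phis.append(prefactor * np.sum(0.5 * (dNdE[1:] + dNdE[:-1]) * np.diff(E)))
    return np.array(phis)

# Toy spectrum with a cutoff at mS, purely for demonstration
E_tab = np.geomspace(0.5, 62.5, 500)
dNdE_tab = 10.0 * E_tab**-1.5 * np.exp(-E_tab / 62.5)
print(phi_bins(3e-26, 62.5, E_tab, dNdE_tab, np.array([0.5, 1.0, 10.0, 50.0])))
```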

The astrophysics factor for a given target k is

$$\begin{aligned} J_k = \int _{\varDelta \varOmega _k} \mathrm{d}\varOmega \int _\text {l.o.s.} \mathrm{d}s \, \rho _{\scriptscriptstyle S}^2. \end{aligned}$$
(13)

Here, \(\varDelta \varOmega _k\) denotes the solid angle over which the signal is integrated, l.o.s. indicates that the line element \(\mathrm{d}s\) runs along the line of sight to the target object, and \(\rho _{\scriptscriptstyle S}\) is the DM mass density within it.
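
For a spherical halo, Eq. 13 reduces to a two-dimensional integral over the line of sight and the opening angle. The sketch below evaluates it for an NFW profile and a cone of half-angle \(\alpha \) towards a dwarf-like target; the distance, scale radius and normalisation used in the example are illustrative numbers, not the \(J\)-factors adopted in our likelihoods.

```python
import numpy as np
from scipy.integrate import dblquad

KPC_IN_CM = 3.086e21

def rho_nfw(r, rho_s, r_s):
    """NFW density profile [GeV cm^-3] at radius r [kpc]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

def J_factor(d, rho_s, r_s, alpha, r_max=250.0):
    """Eq. 13 over a cone of half-angle alpha [rad] towards a halo at distance d [kpc].

    Returns J in GeV^2 cm^-5 (line-of-sight element converted from kpc to cm).
    """
    def integrand(s, theta):   # s = line-of-sight distance [kpc], theta = offset angle
        r = np.sqrt(s**2 + d**2 - 2.0 * s * d * np.cos(theta))
        if r > r_max:          # truncate the halo at r_max
            return 0.0
        return 2.0 * np.pi * np.sin(theta) * rho_nfw(r, rho_s, r_s)**2

    # Modest tolerances are plenty for an illustration like this
    val, _ = dblquad(integrand, 0.0, alpha, lambda th: 0.0, lambda th: d + r_max,
                     epsrel=1e-4)
    return val * KPC_IN_CM

# Purely illustrative dwarf-like parameters: d = 80 kpc, r_s = 1 kpc, rho_s = 1.5 GeV/cm^3
print(np.log10(J_factor(80.0, 1.5, 1.0, np.deg2rad(0.5))))   # log10(J) of order 18-19
```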

Neutrino telescopes also place bounds on DM models by searching for high-energy neutrinos from DM annihilation. The most likely signal in this respect comes from DM gravitationally captured by the Sun and concentrated to its core, where it would annihilate [64, 96]. This channel predominantly tests the mass and couplings of DM to nuclei rather than the annihilation cross-section, as the nuclear scattering leading to capture is the rate-limiting step for most models. Because the singlet model has no spin-dependent couplings to nuclei, neutrino telescope searches for annihilation in the Sun provide constraints only on the spin-independent scattering cross-section.

Owing to the uncertainties associated with cosmic ray propagation, we do not consider constraints from charged cosmic rays (primarily anti-protons and positrons). Radio signals coming from synchrotron emission by DM annihilation products generated in strong magnetic fields are not included in our analysis, as the associated field strengths are highly uncertain. Nor are CMB limits on DM annihilation, as Fermi dwarf limits are stronger at all masses of interest for this model. Finally, we do not consider limits implied by gamma-ray observations of the Galactic centre, whether by Fermi or ground-based gamma-ray telescopes, owing to the uncertainties involved in modelling the DM profile and astrophysical gamma-ray emission of the central Milky Way.

Table 1 Scalar singlet model parameters varied in our fits, along with their associated ranges and prior types
Table 2 Names and ranges of Standard Model, halo and nuclear nuisance parameters that we vary simultaneously with scalar singlet parameters in our fits. We assign a flat prior to all these parameters

3 Scan details

3.1 Parameters and nuisances

A summary of the parameter ranges that we scan over for this paper is given in Tables 1 and 2.

Table 1 gives the singlet model parameters, along with the scanning priors that we use. We carry out two main types of scan: one over the full range of masses from 45 GeV to 10 TeV, intended to sample the entire parameter space, and another centred on lower masses at and below the Higgs resonance \(m_{\scriptscriptstyle S}\sim m_h/2\), in order to obtain a more detailed picture of the resonance region.

In addition to the effect of the singlet parameters, we also consider the effects of varying a number of SM, astrophysical and nuclear parameters within their allowed experimental uncertainties. Table 2 gives the full ranges of the 13 nuisance parameters that we vary in our scans, along with their central values. We assign flat priors to all nuisance parameters in Table 2, as they are all sufficiently well constrained that their priors are effectively irrelevant.

We allow for \({\pm }3\sigma \) excursions from the best estimates of the nuclear couplings. For the local DM density, we scan an asymmetric range about the central value, reflecting the log-normal likelihood that we apply to this parameter (Sect. 3.7). Detailed references for the central values and uncertainties of these parameters can be found in Ref. [58].

The central values of the up and down quark masses come from the 2014 edition of the PDG review [97]; we allow these parameters to vary by \({\pm }20\%\) in our fits, so as to encompass the approximate \(3\sigma \) range of correlated uncertainties associated with the mass ratio likelihoods implemented in PrecisionBit [88]. Given the large impact that the Higgs mass can have on the phenomenology of this model, we scan an extended range for this parameter, covering more than \({\pm }4\sigma \) around the central value quoted in the 2015 update to the PDG review [98] (\(m_h=125.09\pm 0.24\) GeV; see Sect. 3.7). The central value and \(\pm 3\sigma \) scan range for the top quark pole mass come from Ref. [99], and for all other SM nuisance parameters from Ref. [97].

We include the local DM density and nuclear matrix elements as nuisance parameters because of their impacts on direct detection and capture of singlet particles by the Sun. The strong coupling, Higgs VEV (determined by \(G_F\)), Higgs mass and quark masses all enter into the cross-sections for annihilation and/or scattering of S [26]. The electromagnetic coupling does not impact our fit beyond its own nuisance likelihood, but has a small effect on renormalisation of other parameters and therefore vacuum stability, which we investigate for a few benchmarks and will explore in detail in a follow-up paper.

3.2 Scanning procedure

Although 13 of the directions in the 15-dimensional parameter space are well constrained, efficiently sampling all 15 parameters simultaneously requires sophisticated scanning algorithms. We explore this space primarily with two different scanning packages interfaced via ScannerBit: a differential evolution sampler Diver, and an ensemble Markov Chain Monte Carlo (MCMC) known as T-Walk [100]. Both algorithms are the current state of the art when it comes to scaling with dimension [100], and thus are the natural choice for this study.

Both of these algorithms are particularly well suited for multimodal distributions, and each serves a purpose in this study. T-Walk allows efficient and accurate calculation of the Bayesian posterior distribution for the target model. The package can also be used for frequentist studies if the sampling density is amplified by a judicious choice of run parameters. However, T-Walk is far less efficient at sampling the profile likelihood in high-dimensional spaces than Diver [100]. Because we vary 15 parameters in total, we use Diver to produce high-quality profile likelihoods. Having identified all likelihood modes, and therefore all possible locations that might meaningfully contribute to the posterior, we then use T-Walk to produce posterior distributions, checking that it does not fail to locate any of the modes identified by Diver.

In addition to the ensemble MCMC and differential evolution scans, we also combine our results with those from a more traditional MCMC, GreAT, and the nested sampling algorithm MultiNest. These are also interfaced to ScannerBit [100]. Although it is not typically necessary to combine results from four different algorithms, here we demonstrate the power of the GAMBIT package, which allows us to use a range of scanning procedures on the same composite likelihood, in order to produce the most robust results possible.

As discussed in Sect. 2.1, the singlet parameter space features a viable region at \(m_{\scriptscriptstyle S}\approx m_h/2\). In this region, the annihilation of singlet DM to SM particles via s-channel Higgs exchange is resonantly enhanced, and a lower portal coupling is required to achieve the observed relic density. This region is not yet excluded by direct detection. However, probing this region of the parameter space over a large-mass range is difficult, even when using a logarithmic prior on the mass. To properly sample this region, we run a second scan with each sampler, using a flat prior over the range \(m_{\scriptscriptstyle S}\in [45,70]\) GeV. We also carry out an additional specially focussed low-mass scan with Diver in the “neck” region of the resonance, in order to obtain well-sampled contours in the most localised part of the allowed parameter space. We do this by excluding all points outside the range \(m_{\scriptscriptstyle S}\in [61.8,63.1]\) GeV.

The convergence criteria, population size and chain details are controlled by various settings for each sampler. The settings that we use in this paper are presented in Table 3. We chose these settings after extensive testing [100], to give the most stringent convergence and best exploration possible with each scanner and region. To a certain extent, some of these settings are overkill for the problem at hand, and the same physical inference could be achieved with fewer samples. However, the scans that we present here took only 26,000 core hours in total to compute, and the scan that dominates most of the contours (the full-range Diver scan) took just 3 h on 10 \(\times \) 24-core nodes, i.e. around 700 core hours. Compared to the time required to compute fits that include direct LHC simulations [101,102,103], the additional sampling we do here costs practically nothing, and noticeably improves the resolution of our results. We refer the reader to Ref. [100] for further details of the scanners, their settings and underlying algorithms.

The profile likelihoods that we present in this paper are based on the combination of all samples from all scans, which contain \(5.7\times 10^7\) valid samples altogether. In contrast, the posteriors that we show come exclusively from the full-range T-Walk scan.

We compute and plot profile likelihoods and posteriors using pippi [104], obtaining profile likelihoods by maximising the log-likelihood in parameter bins over all other parameters not shown in a given plot, and posteriors by integrating the posterior density over the parameters not shown in each plot. We compute confidence regions and intervals by determining the appropriate iso-likelihood contour relative to the best-fit likelihood for 1 or 2 degrees of freedom, corresponding to 1D and 2D plots, respectively. We compute Bayesian credible regions and intervals as parameter ranges containing the relevant posterior mass according to the maximum posterior density requirement. Further details can be found in Ref. [104].
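
For concreteness, the core of this post-processing step (binning samples, profiling out hidden parameters, and converting \(\varDelta \ln {\mathcal {L}}\) into confidence contours) can be sketched as below. This is a minimal stand-in for what pippi does, not its actual implementation.

```python
import numpy as np
from scipy.stats import chi2

def profile_2d(x, y, lnL, bins=100):
    """2D profile likelihood: maximum lnL over all hidden parameters in each (x, y) bin."""
    xe = np.linspace(x.min(), x.max(), bins + 1)
    ye = np.linspace(y.min(), y.max(), bins + 1)
    prof = np.full((bins, bins), -np.inf)
    ix = np.clip(np.digitize(x, xe) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(y, ye) - 1, 0, bins - 1)
    np.maximum.at(prof, (ix, iy), lnL)     # profiling = maximising within each bin
    return xe, ye, prof

def delta_lnL_threshold(n_sigma, dof=2):
    """Iso-likelihood threshold: bins with lnL_max - lnL below this lie inside the contour."""
    p = chi2.cdf(n_sigma**2, df=1)         # 1 sigma -> 68.3%, 2 sigma -> 95.4%
    return 0.5 * chi2.ppf(p, df=dof)

print(delta_lnL_threshold(1), delta_lnL_threshold(2))   # ~1.15 and ~3.09 for 2D plots
```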

Table 3 Parameters of each sampler for carrying out global fits of the scalar singlet model in this paper

3.3 Relic density likelihood

To determine the thermal S relic density for each parameter combination, we solve the Boltzmann equation (Eq. 3) numerically with DarkBit [58], taking the partial annihilation rates for different final states from Eq. 5 supplemented at \(m_{\scriptscriptstyle S}> m_h\) with the expression for \(\langle \sigma v\rangle _{0,hh}\) from Appendix A of Ref. [26]. For \(m_{\scriptscriptstyle S}< 150\) GeV we use the SM Higgs partial widths contained in DecayBit (from Ref. [89]), whereas for \(m_{\scriptscriptstyle S}> 150\) GeV we revert to the tree-level expressions from Appendix A of Ref. [26], to avoid the impact of large 1-loop corrections to the Higgs self-interaction. We determine the effective invariant rate \(W_\mathrm{eff}\) from the partial annihilation cross-sections, and pass it on to the numerical Boltzmann solver of DarkSUSY  [105] in order to obtain \(\varOmega _{\scriptscriptstyle S} h^2\).

We implement the relic density likelihood as an upper limit only, permitting models where the thermal abundance makes S a fraction of DM. Comparing with the relic abundance measured by Planck [39] (\(\varOmega _\text {DM} h^2 = 0.1188\pm 0.0010\), at \(1\sigma \)), we compute a marginalised Gaussian upper limit likelihood as described in Sect. 8.3.4 of Ref. [72]. Models that predict less than the measured relic density are assigned a likelihood contribution equal to that assigned to models that predict the observed value exactly. Models predicting more than the measured relic density are penalised according to a Gaussian function centred on the observed value. We adopt the DarkBit default value of 5% for the theoretical uncertainty on the relic density prediction, adding it in quadrature to the experimental uncertainty on the observed value. We note that this is a very conservative estimate of the theoretical uncertainty for this model, except in the resonance region (see Sect. 2.2).
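
A simplified one-sided version of this likelihood (flat at and below the measured value, Gaussian above it, with the theory error added in quadrature) is sketched below; the full DarkBit treatment additionally marginalises over the theory uncertainty, as described in Ref. [72].

```python
import numpy as np

OMEGA_OBS, SIGMA_OBS = 0.1188, 0.0010   # Planck relic density and its 1 sigma error
THEORY_FRAC = 0.05                      # DarkBit default theoretical uncertainty

def lnL_relic_upper_limit(omega_pred):
    """One-sided Gaussian upper-limit likelihood on Omega_S h^2 (simplified sketch)."""
    # theory error taken here as 5% of the prediction, added in quadrature
    sigma = np.hypot(SIGMA_OBS, THEORY_FRAC * omega_pred)
    if omega_pred <= OMEGA_OBS:
        return 0.0                       # under-abundant models are not penalised
    return -0.5 * ((omega_pred - OMEGA_OBS) / sigma)**2

# An under-abundant point is treated as well as a perfect one; over-production is penalised
print(lnL_relic_upper_limit(0.02), lnL_relic_upper_limit(0.15))
```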

For models that underpopulate the observed relic density, we rescale all direct and indirect signals to account for the fraction of DM that is detectable using the properties of the S boson. This is internally consistent from the point of view of the model, and conservative in the sense that it suppresses direct and indirect signals in regions where the thermal abundance is less than the Planck value.

3.4 LHC Higgs likelihoods

When \(m_{\scriptscriptstyle S}<m_h/2\), the decay \(h\rightarrow S S\) is kinematically allowed, with a partial width given by Eq. 6. This decay is entirely invisible at hadron colliders. Constraints can be placed on the scalar singlet model parameters from measurements of Higgs production and decay rates, and the implied limit on invisible decay channels of the Higgs. For the case of SM-like couplings, the 95% confidence level upper limit on the invisible branching fraction of the Higgs from LHC and Tevatron data is presently 19% [36]. We use the DecayBit implementation of the complete invisible Higgs likelihood, based on an interpolation of Fig. 8 of Ref. [36].

3.5 Direct detection likelihoods

The dominant constraints on the scalar singlet model come from the LUX [43, 59] and PandaX [60] experiments, with weaker limits also available from DarkBit based on SuperCDMS [61] and XENON100 [62]. We use the DarkBit interface to DDCalc to evaluate a Poisson likelihood for observing \(N_{\mathrm {o}}\) events in a given experiment, given a predicted number of signal events \(N_{\mathrm {p}}\) (Eq. 7),

$$\begin{aligned} {\mathcal {L}}(N_{\mathrm {p}}|N_{\mathrm {o}}) = \frac{(b+N_{\mathrm {p}})^{N_{\mathrm {o}}} \, \mathrm{e}^{-(b+N_{\mathrm {p}})}}{N_{\mathrm {o}}!}. \end{aligned}$$
(14)

Here b is the expected number of background events in the analysis region. We model detector efficiency and acceptance effects by interpolating between values in pre-calculated tables contained in DDCalc.
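
In log form (as used in the fit), Eq. 14 reads as in the short sketch below, where the predicted signal count \(N_{\mathrm {p}}\) is assumed to have already been produced by the detector-response folding of Eq. 7 (handled by DDCalc in our fits).

```python
import numpy as np
from scipy.special import gammaln

def lnL_direct_detection(N_obs, N_pred, b):
    """Log of the Poisson likelihood of Eq. 14.

    N_obs  -- observed number of events
    N_pred -- predicted number of signal events (Eq. 7)
    b      -- expected number of background events
    """
    mu = b + N_pred
    return N_obs * np.log(mu) - mu - gammaln(N_obs + 1)

# Example: an analysis expecting ~2 background events that observes 2 events
print(lnL_direct_detection(N_obs=2, N_pred=0.5, b=2.0))
```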

3.6 Indirect detection likelihoods

The lack of evidence for anomalous gamma-ray emission from dwarf spheroidal galaxies in data collected by the Fermi-LAT experiment allows stringent constraints to be placed on the DM annihilation cross-section [42]. We use the Pass 8 analysis of the 6-year dataset, with the composite likelihood

$$\begin{aligned} \ln {\mathcal {L}}_\text {exp} = \sum _{k=1}^{N_\text {dSph}}\sum _{i=1}^{N_\text {ebin}} \ln {\mathcal {L}}_{ki}(\varPhi _i \cdot J_k), \end{aligned}$$
(15)

where \(N_\text {dSph}\) and \(N_\text {ebin}\) are the number of considered dSphs and the number of energy bins, respectively. The partial likelihoods \({\mathcal {L}}_{ki}\) are functions of the signal flux, and hence of the quantities \(\varPhi _i\) and \(J_k\) defined in Eqs. 12 and 13, respectively.

The main results of Ref. [42] were obtained by profiling over the \(J_k\) as nuisance parameters, yielding a combined profile likelihood of

$$\begin{aligned} \ln {\mathcal {L}}_{\text {dwarfs}}^{\text {prof.}}(\varPhi _i) = \max _{J_1\dots J_k}(\ln {\mathcal {L}}_{\text {exp}} + \ln {\mathcal {L}}_J), \end{aligned}$$
(16)

where

$$\begin{aligned} \ln {\mathcal {L}}_J = \sum _{k=1}^{N_{\text {dSph}}} \ln {\mathcal {N}}(\log _{10} J_k | \log _{10} \hat{J}_k, \sigma _k). \end{aligned}$$
(17)

Here \(\hat{J}_k\) and \(\sigma _k\) are the measured mean and uncertainty of \(\log _{10} J_k\) for dwarf k; a log-normal distribution is a good approximation to the uncertainty on \(J_k\). The tabulated binned likelihoods \({\mathcal {L}}_{ki}\) have been provided by the Fermi-LAT experiment, and are implemented in DarkBit via the gamLike package.
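
A sketch of the profiling in Eq. 16 is shown below. It assumes that the tabulated per-dwarf likelihoods \({\mathcal {L}}_{ki}\) are supplied as callables (as gamLike provides them in our fits) and simply maximises over each \(\log _{10}J_k\) with a bounded scalar optimiser; the interface is illustrative, not the gamLike API.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lnL_dwarfs_profiled(phi, lnL_tab, logJ_hat, sigma_logJ):
    """Eqs. 15-17: dwarf likelihood profiled over the J-factors.

    phi        -- array of Phi_i (Eq. 12), one entry per energy bin
    lnL_tab    -- list of callables; lnL_tab[k](fluxes) = sum_i ln L_ki for dwarf k,
                  a stand-in for the tabulated Fermi-LAT likelihoods
    logJ_hat   -- measured log10 J-factor for each dwarf
    sigma_logJ -- its uncertainty for each dwarf
    """
    total = 0.0
    for k, lnL_k in enumerate(lnL_tab):
        def neg_obj(logJ, k=k, lnL_k=lnL_k):
            lnL_J = (-0.5 * ((logJ - logJ_hat[k]) / sigma_logJ[k])**2
                     - np.log(np.sqrt(2.0 * np.pi) * sigma_logJ[k]))
            return -(lnL_k(phi * 10.0**logJ) + lnL_J)
        res = minimize_scalar(neg_obj, method="bounded",
                              bounds=(logJ_hat[k] - 5.0 * sigma_logJ[k],
                                      logJ_hat[k] + 5.0 * sigma_logJ[k]))
        total -= res.fun
    return total
```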

The strongest neutrino indirect detection constraints on DM-nucleon scattering currently come from the IceCube search for annihilation in the Sun [64, 106]. We access the 79-string results via the DarkBit interface to the nulike package [63, 107], which constructs a fully unbinned likelihood using event-level energy and angular information available in the published 79-string IceCube dataset, marginalised over detector systematics. We obtain predicted neutrino spectra at the Earth using WimpSim [108] yield tables contained in DarkSUSY [105]. Although IceCube limits on spin-independent scattering are not competitive with those from LUX or PandaX, for many points in the singlet parameter space they provide constraints stronger than those from SuperCDMS, and almost as strong as XENON100.

Note that the methods that we use for marginalising or profiling out additional systematic uncertainties in neutrino and \(\gamma \)-ray likelihoods are only applicable because the systematics are uncorrelated; the same cannot be done with a common systematic that impacts many experiments, such as the local density of DM (which affects every direct detection experiment).

The dwarf likelihood gives an identical result to what we would obtain if we were to include each of the J factors as nuisance parameters in our own fit, and profile over them. The same is true of the IceCube detector systematics treated by nulike, although in that case the equivalent result would be the Bayesian one, where the corresponding nuisance parameter was marginalised over. Ideally, one would include all such nuisance parameters in the same fit, and then be free to choose at the end of a scan to profile over them all to produce profile likelihoods, or marginalise over them all to produce posterior probability densities. In practice, however, the gain in accuracy achieved by doing so is generally minimal, whereas the speed gain from the ‘inline’ treatment is substantial.

Fig. 1 Profile likelihoods for the scalar singlet model, in the plane of the singlet parameters \(\lambda _{h\scriptscriptstyle S}\) and \(m_{\scriptscriptstyle S}\). Contour lines mark out the \(1\sigma \) and \(2\sigma \) confidence regions. The left panel shows the resonance region at low singlet mass, whereas the right panel shows the full parameter range scanned. The best-fit (maximum likelihood) point is indicated with a white star, and edges of the allowed regions corresponding to solutions where S constitutes 100% of dark matter are indicated in orange

Fig. 2 Profile likelihoods for the scalar singlet model, in various planes of observable quantities against the singlet mass. Contour lines mark out the \(1\sigma \) and \(2\sigma \) confidence regions. Greyed regions indicate values of observables that are inaccessible to our scans, as they correspond to non-perturbative couplings \(\lambda _{h\scriptscriptstyle S}>10\), which lie outside the region of our scan. Note that the exact boundary of this region moves with the values of the nuisance parameters, but we have simply plotted this for fixed central values of the nuisances, as a guide. The best-fit (maximum likelihood) point is indicated with a white star, and edges of the allowed regions corresponding to solutions where S constitutes 100% of dark matter are indicated in orange. Left: late-time thermal average of the cross-section times relative velocity; centre: spin-independent WIMP-nucleon cross-section; right: relic density

Fig. 3 Profile likelihoods of nuclear scattering (left) and annihilation (right) cross-sections for the scalar singlet model, scaled for the singlet relic abundance and plotted as a function of the singlet mass. Here we rescale the nuclear scattering and annihilation cross-sections by \(f\equiv \varOmega _{\scriptscriptstyle S} / \varOmega _{\text {DM}}\) and \(f^2\) respectively, in line with the linear and quadratic dependence of scattering and annihilation rates on the dark matter density. Contour lines mark out the \(1\sigma \) and \(2\sigma \) confidence regions. The best-fit (maximum likelihood) point is indicated with a white star

Fig. 4 One-dimensional profile likelihoods and posterior distributions of the scalar singlet parameters, and all nuisance parameters varied in our fits. Posterior distributions are shown in blue and profile likelihoods in red. Dashed lines indicate \(1\sigma \) and \(2\sigma \) confidence and credible intervals on parameters

3.7 Nuisance likelihoods

Following Ref. [58], we take the likelihood terms for the hadronic matrix elements \(\sigma _s\) and \(\sigma _l\) to be Gaussian, with central values and \(1\sigma \) uncertainties of \(43\pm 8\) MeV and \(58\pm 9\) MeV, respectively.

The canonical value of the local DM density \(\rho _0\) is \(\bar{\rho }_0=0.4\) GeV/cm\(^3\) (e.g. [109]), but this depends on assumptions such as spherical symmetry in the halo. We remain relatively agnostic with respect to this assumption by choosing a log-normal distribution for the likelihood of \(\rho _0\), and assuming an uncertainty of \(\sigma _{\rho _0} = 0.15\) GeV cm\(^{-3}\), such that

$$\begin{aligned} \mathcal {L}_{\rho _0} = \frac{1}{\sqrt{2\pi } \sigma '_{\rho _0} \rho _0} \exp \left( - \frac{\ln ^2(\rho _0 / \bar{\rho }_0)}{2 \sigma ^{\prime 2}_{\rho _0}} \right) , \end{aligned}$$
(18)

where \(\sigma '_{\rho _0} = \ln (1 + \sigma _{\rho _0}/\bar{\rho }_0)\). More details can be found in Ref. [58].
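
Equation 18 translates into a few lines of code; the sketch below evaluates the log-likelihood, with the width defined relative to the central value \(\bar{\rho }_0\).

```python
import numpy as np

RHO0_BAR, SIGMA_RHO0 = 0.4, 0.15   # GeV cm^-3

def lnL_rho0(rho0):
    """Log of the log-normal likelihood for the local DM density, Eq. 18."""
    sigma_p = np.log(1.0 + SIGMA_RHO0 / RHO0_BAR)
    return (-np.log(np.sqrt(2.0 * np.pi) * sigma_p * rho0)
            - 0.5 * (np.log(rho0 / RHO0_BAR) / sigma_p)**2)

# Compare the central value with lower densities
print(lnL_rho0(0.4), lnL_rho0(0.3), lnL_rho0(0.2))
```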

We use the PrecisionBit implementation of SM nuisance parameter likelihoods. For the \(\overline{MS}\) light quark (uds) masses at \(\mu =2\) GeV, we use a single joint Gaussian likelihood function, combining likelihoods on \(m_u/m_d\), \(m_s/(m_u+m_d)\), and \(m_s\). We take the experimental measurements of these quantities and their uncertainties from the PDG [97]. We use Gaussian likelihoods for \(G_F\), based on the measured value \(G_{F} = (1.1663787 \pm 0.0000006) \times 10^{-5}\) GeV\(^{-2}\), \(\alpha _\mathrm{EM}\), based on the observed \(\alpha _{\mathrm {EM}}(m_{Z})^{-1} = 127.940 \pm 0.014\) (\(\overline{MS}\) scheme) [97], and \(\alpha _{s}\), using the value \(\alpha _{s}(m_{Z}) = 0.1185 \pm 0.0005\) (\(\overline{MS}\) scheme), as obtained from lattice QCD [97]. We use the quoted uncertainties as \(1\,\sigma \) confidence intervals, and apply no additional theoretical uncertainty. We also apply a simple Gaussian likelihood with no theoretical uncertainty to the Higgs mass, based on the 2015 PDG result of \(m_h= 125.09 \pm 0.24\) GeV [98].

4 Results

4.1 Profile likelihoods

Results of our global fit analysis with all nuisances included are presented as 2D profile likelihoods in the singlet parameters in Fig. 1, and in terms of some key observables in Figs. 2 and 3. We also show the one-dimensional profile likelihoods for all parameters in red in Fig. 4.

The viable regions of the parameter space agree well with those identified in the most recent comprehensive studies [26, 34]. Two high-mass, high-coupling solutions exist, one strongly threatened from below by direct detection, the other mostly constrained from below by the relic density. The leading \(\lambda _{h\scriptscriptstyle S}^2\)-dependence of \(\sigma _\text {SI}\) and \(\sigma v\) approximately cancels when direct detection signals are rescaled by the predicted relic density, suggesting that the impact of direct detection should simply be to exclude models below a given mass. However, the relic density does not scale exactly as \(\lambda _{h\scriptscriptstyle S}^{-2}\), owing to its dependence on the freeze-out temperature, resulting in an extension of the sensitivity of direct detection to larger masses than might be naïvely expected, for sufficiently large values of \(\lambda _{h\scriptscriptstyle S}\). This is the reason for the division of the large-mass solution into two sub-regions; at large coupling values, the logarithmic dependence of the relic density on \(\lambda _{h\scriptscriptstyle S}\) enables LUX and PandaX to extend their reach up to singlet masses of a few hundred GeV. This is also slightly enhanced by additional \(\lambda _{h\scriptscriptstyle S}^3\) and \(\lambda _{h\scriptscriptstyle S}^4\) terms in \(\langle \sigma v \rangle _{0,hh}\), which are responsible for the ‘kink’ seen in the border of the grey regions at \(m_{\scriptscriptstyle S}\sim 600\) GeV in the left and right panels of Fig. 2.

The resonance region persists, despite being beset from all sides: invisible Higgs from above, relic density from below, indirect detection from higher masses, and direct detection from lower masses. We find a narrow “neck” of degenerate maximum likelihood directly on the resonance, with a best fit located at \(m_{\scriptscriptstyle S}=62.51\) GeV, \(\lambda _{h\scriptscriptstyle S}= 6.5\times 10^{-4}\). The width of this region is set by a number of things:

  1. the actual separation between the areas allowed by the invisible width and direct detection constraints, which press in from \(m_{\scriptscriptstyle S}<m_h/2\) and \(m_{\scriptscriptstyle S}>m_h/2\), respectively,

  2. the uncertainty on the Higgs mass, which blurs the exact \(m_{\scriptscriptstyle S}\) value of the resonance by \(\sim \)480 MeV at the level of the \(2\sigma \) contours, and

  3. the width of the bins into which we sort samples for plotting, which prevents anything from being resolved on scales below \(\varDelta m_{\scriptscriptstyle S}\sim 170\) MeV in the left panel of Fig. 1.

In addition to correctly identifying the allowed region of the parameter space, we obtain additional information from the global fit analysis beyond that seen from pure exclusion studies. Using the relic density as an upper limit, all points for which \(\varOmega _{\scriptscriptstyle S}h^2\le \varOmega _\text {DM} h^2\) have a null log-likelihood contribution, and are thus treated equally above the line in parameter space where \(\varOmega _{\scriptscriptstyle S}h^2=0.1188\). Where \(\varOmega _{\scriptscriptstyle S}h^2<\varOmega _\text {DM} h^2\), we rescale the local DM density (\(\rho _0\)) as well as that in dwarfs \(\rho _{\scriptscriptstyle S}\), so the direct and indirect detection likelihoods are not flat within the allowed region. Were we not to rescale signals self-consistently for the predicted relic density, the areas excluded by direct and indirect detection in the first two panels of Fig. 2 would instead closely track the standard direct and indirect sensitivity curves that many readers will be familiar with. This can be seen more clearly in Fig. 3, where we plot cross-sections rescaled by the appropriate power of \(\varOmega _{\scriptscriptstyle S} / \varOmega _\text {DM}\), together with the experimental constraints from Fermi-LAT, LUX and PandaX.

Table 4 Contributions to the log-likelihood at the best-fit point, compared to an ‘ideal’ case. The ideal is defined as the central observed value for detections, and the background-only likelihood for exclusions. Note that each likelihood is dimensionful, so its absolute value is less meaningful than any offset with respect to another point (see Sect. 8.3 of Ref. [72] for more details of the normalisation used). The best-fit point has \(\lambda _{h\scriptscriptstyle S}= 6.5\times 10^{-4}\), \(m_{\scriptscriptstyle S}= 62.51\) GeV

Were we to instead restrict our fits to only those models that reproduce all of the DM via thermal production to within the Planck uncertainties, we would be left with a narrow band along a small number of edges of the allowed regions we have found. These edges are indicated with orange annotations in Figs. 1 and 2. At high singlet masses, the value of the late-time thermal cross-section (Eq. 4 in the limit \(T\rightarrow 0\)) corresponding to this strip is equal to the canonical ‘thermal’ scale of 10\(^{-26}\) cm\(^3\) s\(^{-1}\). At low masses, this strip runs along the lower edge of the resonance ‘triangle’ only, as indirect detection rules out models with \(\varOmega _Sh^2=0.119\) near the vertical edge (at \(m_{\scriptscriptstyle S}= 62\) GeV).

In Fig. 2, we also show in grey the regions corresponding to Higgs-portal couplings above our maximum considered value, \(\lambda _{h\scriptscriptstyle S}=10\), in order to give some rough idea of the area of these plots that we have not scanned (and the area that should almost certainly be excluded on perturbativity grounds were we to do so). We note that at large \(m_{\scriptscriptstyle S}\), the highest-likelihood regions are all at quite large coupling values, where the annihilation cross-section is so high, and the resulting relic density is so low, that all direct and indirect signals are essentially absent—but where perturbativity of the model begins to become an issue.

Table 5 Details of the best-fit points and posterior means, differentiated into the two main likelihood modes. Best fits are given for the case where the singlet relic density is within 1\(\sigma \) of its observed value, and for the case where singlet particles may be a sub-dominant component of dark matter. We omit the values of the 13 nuisance parameters, as they do not deviate significantly from the central values of their associated likelihood functions

4.2 Best-fit point

Our best-fit point is located within the low-mass resonance region, at \(\lambda _{h\scriptscriptstyle S}= 6.5\times 10^{-4}\), \(m_{\scriptscriptstyle S}=62.51\) GeV. This point has a combined log-likelihood of \(\log ({\mathcal {L}})=4.566\), shown broken down into its various likelihood components in Table 4. To put this into context, we also provide the corresponding likelihood components of a hypothetical ‘ideal’ fit, which reproduces positive measurements exactly, and has likelihood equal to the background-only value for those observables with only a limit. The overall combined ideal likelihood is \(\log ({\mathcal {L}})=4.673\), a difference of \(\varDelta \ln {\mathcal {L}}=0.107\) with respect to our best fit. The best fit above the resonance is at \(\lambda _{h\scriptscriptstyle S}=9.9\), \(m_{\scriptscriptstyle S}=132.5\) GeV, with \(\log ({\mathcal {L}})=4.540\), \(\varDelta \ln {\mathcal {L}}=0.133\).

Interpreting \(\varDelta \ln {\mathcal {L}}\) defined this way is somewhat fraught, as we do not know its distribution under the hypothesis that the best fit is correct. However, its definition is almost identical to half the “likelihood \(\chi ^2\)” of Baker and Cousins [110], which is known to follow a \(\chi ^2\) distribution in the asymptotic limit. Our \(\varDelta \ln {\mathcal {L}}\) differs from half the likelihood \(\chi ^2\) only in that some of the components of the ideal likelihood come from the likelihood of a pure-background model, rather than from setting all predictions to their observed values. Assuming that \(2\varDelta \ln {\mathcal {L}}\) follows a \(\chi ^2\) distribution, estimating the effective number of degrees of freedom would still be difficult, as our likelihoods include many upper limits and Poisson terms, some of which have already been conditioned on the background expectation, and some of which have not. The difference between the ideal and the best-fit likelihood does nonetheless give some indication of the degree to which the singlet DM model can simultaneously explain all data in a consistent way, and how much worse it does than the ideal model. In this sense, it gives information similar in character to the modified p-value method known as CLs [111,112,113], which conditions on the background in order to avoid excluding models to which the data are not actually sensitive. Were one to approximate the distribution of \(2\varDelta \ln {\mathcal {L}}\) as \(\chi ^2\) with e.g. 1–2 effective degrees of freedom, this would correspond to a rough p value of between 0.6 and 0.9 in both the resonance and the high-mass region: a perfectly acceptable fit.
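
The rough p values quoted here follow directly from treating \(2\varDelta \ln {\mathcal {L}}\) as \(\chi ^2\)-distributed, as in this short check:

```python
from scipy.stats import chi2

delta_lnL = 0.107   # best fit in the resonance region relative to the 'ideal' likelihood
for dof in (1, 2):
    print(dof, chi2.sf(2.0 * delta_lnL, df=dof))   # ~0.64 and ~0.90
```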

Next we consider parameter combinations where the singlet constitutes the entire observed relic density of DM, by restricting the discussion to points with \(\varOmega _Sh^2\) within \(1\sigma \) of the Planck value \(\varOmega _{\text {DM}}h^2 = 0.1188\pm 0.006\) (the uncertainty includes theoretical and observational contributions added in quadrature). In this case, the best fit occurs at the bottom of the resonance, at \(\lambda _{h\scriptscriptstyle S}=2.9 \times 10^{-4}\), \(m_{\scriptscriptstyle S}= 62.27\) GeV. This point has \(\log ({\mathcal {L}})=4.431\), which translates to \(\varDelta \ln {\mathcal {L}}=0.242\) compared to the ideal model. In the high-mass region, the best fit able to reproduce the entire observed relic density is at \(\lambda _{h\scriptscriptstyle S}= 3.1\), \(m_{\scriptscriptstyle S}= 9.79\) TeV, and has \(\log ({\mathcal {L}})=4.311\) (\(\varDelta \ln {\mathcal {L}}=0.362\)). If we were to approximate the distribution of \(2\varDelta \ln {\mathcal {L}}\) as \(\chi ^2\) with 1–2 degrees of freedom, this would correspond to p values of between 0.5 and 0.8 for the resonance point, and between 0.4 and 0.7 for the high-mass point. Again, these would suggest that the fit is perfectly reasonable. This indicates that there is no significant preference from data for scalar singlets to make up either all or only a fraction of the observed DM.

The four best-fit points and the corresponding relic densities are presented in Table 5.

4.3 Bayesian posteriors

By using multiple scanning algorithms in our fits, we are also able to consider marginalised posterior distributions for the singlet parameters. In Fig. 4 we plot, in blue, one-dimensional marginalised posteriors for all parameters, from our full-range posterior scan with the T-Walk sampler. The one-dimensional posterior for \(m_{\scriptscriptstyle S}\) shows that although the full-range scan has managed to detect the resonance region, this area has been heavily penalised by its small volume in the final posterior, arising from the volume effect of integrating over nuisance parameters to which points in this region are rather sensitive, such as the mass of the Higgs. The penalty is sufficiently severe that this region drops outside the \(2\sigma \) credible region in the \(m_{\scriptscriptstyle S}\)–\(\lambda _{h\scriptscriptstyle S}\) plane. We therefore focus only on the high-mass modes in the right-hand panel of Fig. 5, where we show the posterior from the full-range scan.

Fig. 5 Marginalised posterior distributions of the scalar singlet parameters, in low-mass (left) and full-range (right) scans. White contours mark out \(1\sigma \) and \(2\sigma \) credible regions in the posterior. The posterior mean of each scan is shown as a white circle. Grey contours show the profile likelihood \(1\sigma \) and \(2\sigma \) confidence regions, for comparison. The best-fit (maximum likelihood) point is indicated with a grey star

Because it is restricted to the resonance region, the low-range scan (left panel of Fig. 5) shows the expected relative posterior across this region. The fact that the resonance is so strongly disfavoured in the full-range posterior scan is an indication of its heavy fine-tuning, a property that is naturally penalised in a Bayesian analysis. This mode of the posterior accounts for less than 0.4% of the total posterior mass, indicating that it is disfavoured at almost \(3\sigma \) confidence.
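
The quoted significance corresponds to the usual two-sided Gaussian conversion of a 0.4% tail probability, as the short check below shows.

```python
from scipy.stats import norm

# Two-sided Gaussian equivalent of a 0.4% posterior mass outside the favoured region
print(norm.isf(0.004 / 2.0))   # ~2.9 sigma
```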

For the sake of understanding the prior dependence of our posteriors, we also carried out a single scan of the full parameter range with flat instead of log priors on \(m_{\scriptscriptstyle S}\) and \(\lambda _{h\scriptscriptstyle S}\), using MultiNest with the same full-range settings as in Table 3. Unsurprisingly, the resulting posterior is strongly driven by this (inappropriate) choice of prior, concentrating all posterior mass into the corner of the parameter space at large \(\lambda _{h\scriptscriptstyle S}\) and \(m_{\scriptscriptstyle S}\). The \(1\sigma \) region lies above \(\lambda _{h\scriptscriptstyle S}\sim 3\), \(m_{\scriptscriptstyle S}\sim 3\) TeV, and the \(2\sigma \) region above \(\lambda _{h\scriptscriptstyle S}\sim 1\), \(m_{\scriptscriptstyle S}\sim 1\) TeV.

4.4 Vacuum stability

Finally, we check vacuum stability for some interesting benchmark points.

So far, our calculations have not required any renormalisation group evolution or explicit computation of pole masses. We have simply taken the tree-level expression for \(m_{\scriptscriptstyle S}\) (Eq. 2) to indicate the pole mass, and varied it and \(\lambda _{h\scriptscriptstyle S}\) as free parameters. To test vacuum stability using \(\overline{MS}\) renormalisation group equations (RGEs), we instead need to use these parameters, along with the values of the nuisance parameters, to set up boundary conditions for a set of \(\overline{MS}\) RGEs. We determine values for the \(\overline{MS}\) parameters that give consistent pole masses using FlexibleSUSY 1.5.1 [116], with SARAH 4.9.1 [117,118,119,120]. In doing this, it becomes necessary to specify the parameter \(\lambda _{\scriptscriptstyle S}\), which we set to zero at the renormalisation scale \(m_Z\). SpecBit can then evolve the \(\overline{MS}\) parameters to higher scales, using the two-loop RGEs of FlexibleSUSY, in order to test vacuum stability and also perturbativity.

For our best-fit point, the Higgs-portal coupling \(\lambda _{h\scriptscriptstyle S}\) is too small to make a noticeable positive contribution to the running of the Higgs self-coupling, which reaches a minimum value of \(-0.0375935\) at \(2.523\times 10^{17}\) GeV. The electroweak vacuum remains meta-stable for this point, with no substantial change in phenomenology compared to the SM, where for the same Higgs and top quark masses the quartic Higgs coupling has a minimum of \(-0.037631\) at \(2.514\times 10^{17}\) GeV.

Next we consider a high-mass point within our \(1\sigma \) allowed region: \(\lambda _{h\scriptscriptstyle S}= 0.5\), \(m_{\scriptscriptstyle S}= 1.3\) TeV. This point has a large enough coupling \(\lambda _{h\scriptscriptstyle S}\) that the minimum quartic Higgs coupling is positive: 0.0522133 at \(1.40006\times 10^9\) GeV. We see that it is certainly possible to stabilise the electroweak vacuum within the singlet model whilst respecting all current constraints.

4.5 Comparison to existing results

The most recent study of the scalar singlet model with a \(\mathbb {Z}_2\) symmetry and a wide range of experimental constraints was that of Beniwal et al. [34]. That study is an ideal candidate with which to compare our results, in order to check for consistency and determine the impacts of the newest experimental constraints. There are two important differences between the ingredients of our study and that of Beniwal et al. First, we include stronger DM direct detection constraints from LUX [59] and PandaX [60], which exclude a large part of the parameter space. Second, we scan many relevant nuisance parameters, whereas previous studies have taken them as fixed. The effect of this can be seen along the boundaries of the confidence intervals, where the viable regions are always at least as large in a scan where the nuisances are allowed to vary as in one where they are fixed.

Considering these differences, we see consistency between the results of this paper and Fig. 4 of Beniwal et al. [34], both in the low- and high-mass parts of the \(\lambda _{h\scriptscriptstyle S}\), \(m_{\scriptscriptstyle S}\) parameter space. The increased size of the allowed region resulting from the variable nuisance parameters is evident along all contour edges. The behaviour of the stronger direct detection constraint is also visible, in the top left corner of the triangular part of the allowed region in the left panel of Fig. 1, and on the right side of the “neck”. In the high-mass area of the parameter space (right panel of Fig. 1), we also see LUX and PandaX cutting a large triangular region into the allowed parameter space, essentially separating the high-mass solutions into two separate likelihood modes.

5 Conclusion

The extension of the Standard Model by a scalar singlet stabilised by a \(\mathbb {Z}_2\) symmetry is still a phenomenologically viable dark matter model, whether one demands that the singlet constitutes all of dark matter or not. However, the parameter space is being continually constrained by experimental dark matter searches. This is evident in the global fit that we have presented here, combining the latest experimental results and likelihoods to provide the most stringent constraints to date on the parameter space of this model. Direct detection experiments will fairly soon probe the entire high-mass region of the model, with XENON1T expected to access all but a very small part of each of the high-mass islands [26]. The resonance region will prove more difficult, though some hope certainly exists for ton-scale direct detection to improve constraints from the low-\(m_{\scriptscriptstyle S}\) direction, and for future colliders focussed on precision Higgs physics to probe the edge of the region at \(\lambda _{h\scriptscriptstyle S}\sim 0.02\).

We have seen that the best-fit point found in our scan does not have a notable impact on the stability of the electroweak vacuum, due to its rather small value of \(\lambda _{h\scriptscriptstyle S}\). We have shown that larger values of the portal coupling can completely solve the meta-stability of the electroweak vacuum, even whilst satisfying all experimental constraints. However, the couplings required to do this are not far below where perturbativity starts to become an issue. Investigating how these competing constraints impact the allowed and preferred regions of the singlet model in a global fit will be one of the aims of our follow-up study.

In this study we have demonstrated some powerful features of the GAMBIT framework. It is now possible to easily combine likelihoods and observables from GAMBIT and existing packages in a consistent and computationally efficient way. We have varied 13 nuisance parameters in addition to the two parameters of the scalar singlet model. We have searched this parameter space using the most modern scanning algorithms available, to provide both frequentist and Bayesian statistical interpretations. By using parallel computing resources, we have achieved this with, for example, a maximum runtime for any Diver scan presented in this paper of just 3 h. Finally, due to the modularity and flexibility of the GAMBIT system, it will be possible to include new likelihoods and/or change parts of the calculation at any time in the future, in order to quickly update the analysis to take into account new experimental developments. All input files, samples and best-fit benchmarks produced for this paper are publicly accessible from Zenodo [73].