Global fits of GUT-scale SUSY models with GAMBIT

We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the Constrained Minimal Supersymmetric Standard Model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.


Introduction
Although the Standard Model (SM) of particle physics has long provided a spectacularly successful description of physics at and below the electroweak scale, it remains incomplete. Explaining dark matter (DM), the asymmetry between matter and antimatter, the hierarchy between the Planck and electroweak scales, the origin of the fundamental forces and charges, or anomalies in low-energy precision and flavour measurements, requires extending the SM by adding one or more new particles.
Even though the MSSM framework is predictive, the Lagrangian terms responsible for softly breaking SUSY contain over a hundred new parameters, which impairs the practical predictivity of the model. Mediation of supersymmetry breaking by Planck-scale physics is a popular and viable motivation for reducing the free parameters to a small number at the Grand Unified Theory (GUT) scale [103-108]. Here we analyse three scenarios motivated by gravity mediation: the Constrained MSSM (CMSSM) [109] and two of its Non-Universal Higgs Mass (NUHM1, NUHM2) extensions [110-114].
Opinions differ regarding the phenomenological feasibility of these models, especially the CMSSM. Global fits of the CMSSM after Run I of the Large Hadron Collider (LHC) indicate that its experimentally-viable parameter space has been pushed to regions with superpartners heavier than 1 TeV. The relic density of DM in these scenarios is set by neutralino-stau co-annihilation, resonant annihilation through a heavy Higgs, or a large Higgsino component. Relaxing the assumption of scalar soft-mass universality at the GUT scale provides much more flexible phenomenology [120,134,144-154]. Global fits of the NUHM1 show that the tension that exists in the CMSSM between the measured Higgs mass and precision/flavour observables is reduced, owing to the decoupling of the Higgs sector from the squark and slepton sectors, and that a new region of chargino co-annihilation opens up for dark matter [117,119,120,126,134,145,148]. Extending the parameter space to the NUHM2 [117,145,148] relaxes the constraints on the scalar masses even further.
In this work we use the GAMBIT framework [155-160] to scan and assess the viability of the parameter spaces of each of these three GUT-scale scenarios in detail. We also carry out a detailed comparison of our results with previous ones, to understand the impact of the improved theoretical calculations and updated experimental data that we include, and as a verification of our new computational framework. We have also carried out similar analyses of scalar singlet DM [161] and 'phenomenological' (weak-scale) SUSY models [162] with the GAMBIT framework.
There are several important features of our study that make it the most definitive exploration of the CMSSM, NUHM1 or NUHM2 to date:
1. We apply the DM relic density constraint as an upper bound only, requiring that the cosmological density of the lightest neutralino does not exceed the observed density of DM. This is a conservative option from the point of view of excluding a light mass spectrum, as it admits more possibilities for light Higgsino and light Higgsino-bino DM than studies where a lower bound is also applied.
2. We include a significantly larger number of observables in our combined likelihood than has been done before. These include rates in multiple direct and indirect searches for DM, a wide range of LHC sparticle searches and Higgs observables, and an up-to-date set of flavour physics observables and electroweak precision measurements.
3. In addition to improving the quantity of data included in the fit, we have also improved the quality of the typical simulation treatments, including direct Monte Carlo simulation of LHC observables during the global fit, event-level indirect search likelihoods, and direct DM search limits based on rigorous simulation of the relevant experiments.
4. Using GAMBIT allows us to pursue a thorough theoretical and statistical approach, where theoretical assumptions are treated consistently across different observables and experimental searches. This includes the accurate treatment, via nuisance parameters, of uncertainties associated with the local DM distribution, the nuclear matrix elements relevant for direct detection, and SM parameters.
5. GAMBIT includes an interface to Diver [160], a new scanner based on differential evolution, which provides significantly improved sampling performance compared to conventional techniques. This allows us to more accurately locate and more comprehensively map small regions of high likelihood.
6. The public, open-source nature of GAMBIT (gambit.hepforge.org) makes our study transparent, reproducible and extendible by the reader.
In Section 2, we introduce the CMSSM, NUHM1 and NUHM2, along with their parameters, the ranges and priors over which we vary those parameters, and the algorithms and settings that we use for sampling them. Section 3 contains a summary of the experimental data, observables and likelihood calculations that go into each fit. We then present our results in Section 4, before looking at the implications of our scans for future searches for the models in question, and concluding in Section 5.
All input files, samples and best-fit benchmarks produced for this paper are publicly accessible from Zenodo [163].

Model definitions and parameters
From a statistical standpoint, there is no fundamental difference between models that describe SM physics (and astrophysics) and physics beyond the SM (BSM).
GAMBIT therefore treats BSM models on exactly the same footing as models that describe nuisance parameters, which are designed to quantify uncertainties on better-constrained quantities. The only difference is that nuisance models are generally more strongly constrained by the likelihood than BSM models.
In this paper, we simultaneously sample from four models in each scan: one GUT-scale SUSY model (CMSSM, NUHM1 or NUHM2; Sec. 2.1.1), and three specific nuisance models. The first nuisance model includes the parameters of the SM (Sec. 2.1.2), the second parameterises the density and velocity distribution of the DM halo (Sec. 2.1.3), and the third encapsulates the nuclear uncertainties relevant for DM direct detection (Sec. 2.1.4).

SUSY models
The definitions of the MSSM superpotential and soft-breaking Lagrangian that we use are specified in Sec. 5.4.3 of Ref. [155], and we follow the conventions established there. All the BSM models that we investigate in this paper are subsets of the GAMBIT model MSSM63atMGUT [155], which is the most general formulation of the CP-conserving MSSM, with the soft masses defined at the scale where the gauge couplings g_1 and g_2 unify (the GUT scale).
The complexity of the MSSM can be reduced considerably if one makes simplifying assumptions about the values of the soft masses at the GUT scale.

CMSSM
All scalar soft masses are unified to a common value m_0 at the GUT scale, all gaugino masses to m_1/2, and all trilinear couplings to A_0. The parameters of the CMSSM are thus m_0, m_1/2, A_0, tan β(m_Z) and sgn(µ).

NUHM1
The GUT-scale soft masses of the two Higgs doublets are decoupled from m_0, but remain equal to each other at a common value m_H. This acts as the boundary condition under which the correct shape of the Higgs potential must be radiatively generated at the electroweak scale. The parameters of the NUHM1 are m_0, m_1/2, A_0, tan β(m_Z), sgn(µ) and m_H.

NUHM2
The constraint on the soft Higgs masses is further relaxed, so that m_Hu and m_Hd become independent, real, dimension-one parameters at the GUT scale. As in the NUHM1, m²_Hu and m²_Hd are always positive at the GUT scale, and the correct shape of the Higgs potential at the electroweak scale must be radiatively generated. The parameters are thus m_0, m_1/2, A_0, tan β(m_Z), sgn(µ), m_Hu and m_Hd.
We assume throughout that R-parity is conserved, making the lightest supersymmetric particle (LSP) stable. In this paper we consider only the possibility of neutralino LSPs, assigning zero likelihood to all parameter combinations where this is not the case. Sneutrino DM in the MSSM [164] is now essentially ruled out by direct detection, though it remains viable in MSSM extensions (see Ref. [165] for a review). Gravitino LSP scenarios (e.g. [166,167]) are still viable even in the CMSSM, so adding such models to the results that we present here would be an interesting future extension.
The parameter ranges that we scan over for the CMSSM, NUHM1 and NUHM2 can be found in Table 1. We allow the magnitudes of all dimensionful parameters to vary between 50 GeV and 10 TeV. The lower cutoff is motivated by the constraints on sparticle masses from existing searches. The upper cutoff is somewhat arbitrary, but designed to encompass the mass range interesting for solving the hierarchy problem, and for leading to potentially-observable phenomenology. We consider both positive and negative µ, and the full range of tan β over which particle spectra can be consistently calculated and EWSB achieved in such models.

Standard Model
Here we define the SM as per SLHA2 [168], sampling from the GAMBIT model StandardModel_SLHA2 [155]. We identify the strength of the strong coupling at the scale of the Z mass, α_s(m_Z), and the top quark pole mass, m_t, as the most relevant nuisance parameters within this model. Both affect the running of the soft-breaking masses from the GUT scale. The mass of the SM-like Higgs boson is also very sensitive to the top quark mass, and has a strong influence on the scan through the Higgs likelihood (see Sec. 3.11).
In all our fits, we allow both these parameters to vary within ±3σ of their observed central values [169,170]. The resulting parameter ranges are shown in Table 2. We adopt flat priors on both α s and m t ; their values are sufficiently well-determined that the prior has no impact on results. The values of other SM parameters that we keep fixed in our scans are also shown in Table 2.

Dark matter halo model
The density and velocity distributions that characterise the DM halo of the Milky Way constitute an important source of uncertainty for astrophysical observations, particularly direct and indirect searches for DM. In this paper, we employ the GAMBIT model Halo_gNFW_rho0 [155] to describe the halo. This consists of a generalised NFW [171] spatial profile, paired with a local Maxwell-Boltzmann velocity distribution and an input value of the local density ρ_0.
Because we do not employ any observables in our fits that depend on the Milky Way density profile, the spatial part of this model plays no role. The local distribution of DM velocities \(\mathbf{v}\) is given by
\[ \tilde{f}(\mathbf{v}) = \frac{1}{N_{\rm esc}} \exp\left(-\frac{|\mathbf{v}|^2}{v_0^2}\right) \Theta(v_{\rm esc} - |\mathbf{v}|), \]
where v_esc is the local Galactic escape velocity, v_0 is the most probable particle speed, and N_esc is the normalisation factor induced by truncating the distribution at v_esc.
In the Earth's rest frame, DM particles have a velocity distribution given by
\[ f(\mathbf{v}) = \tilde{f}(\mathbf{v} + \mathbf{v}_{\rm obs}), \]
where v_obs is the velocity of the Earth relative to the Milky Way DM halo. This is given by
\[ \mathbf{v}_{\rm obs} = \mathbf{v}_{\rm LSR} + \mathbf{v}_{\odot,{\rm pec}} + \mathbf{V}_\oplus(t), \]
where v_{⊙,pec} = (11, 12, 7) km s⁻¹ is the peculiar velocity of the Sun, which is known with very high precision [172]. The Local Standard of Rest (LSR) in Galactic coordinates moves with a velocity v_LSR = (0, v_rot, 0), while V_⊕(t), with |V_⊕| = 29.78 km s⁻¹ [173], denotes the velocity of the Earth in the solar rest frame.
For an NFW profile, v_0 lies within 10% of v_rot. As shown in Table 2, we set both of these parameters to 235 km s⁻¹ [156,174,175] in all our scans. Similarly, we adopt a fixed value of 550 km s⁻¹ for the local escape speed [176].
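The truncated Maxwell-Boltzmann halo model can be sketched numerically; the function name and the normalisation check below are illustrative assumptions, not DarkBit code:

```python
import math
import numpy as np

V0, VESC = 235.0, 550.0  # km/s, the values fixed in our scans

def f_speed(v, v0=V0, vesc=VESC):
    """Normalised truncated Maxwell-Boltzmann distribution for the DM
    speed in the Galactic rest frame (illustrative sketch)."""
    x = vesc / v0
    # analytic normalisation of exp(-v^2/v0^2) truncated at vesc
    n = math.pi**1.5 * v0**3 * (math.erf(x) - 2*x/math.sqrt(math.pi)*math.exp(-x*x))
    v = np.asarray(v, dtype=float)
    return np.where(v <= vesc, np.exp(-(v/v0)**2) / n, 0.0)

# numerical check that 4*pi * integral of f(v) v^2 dv equals 1
v = np.linspace(0.0, VESC, 200001)
norm = np.sum(4*math.pi * f_speed(v) * v**2) * (v[1] - v[0])
```

The distribution vanishes identically above the escape speed, which is why the normalisation differs from that of an untruncated Gaussian.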
Because it has a substantial impact on direct detection and high-energy solar neutrino signals from DM, we vary the local density of DM as a nuisance parameter in all scans (Table 2). Here we adopt an asymmetric range of ρ_0 = 0.4^{+0.4}_{−0.2} GeV cm⁻³ around the canonical value, reflecting the log-normal form of the likelihood that we apply to this parameter (see Sec. 3.1.2). The prior on ρ_0 has no impact because it is sufficiently well constrained by the associated nuisance likelihood; we choose to make it flat.
See Refs. [156,177] for further discussion and details of the DM halo model, parameters and uncertainties.

Nuclear model
A final class of uncertainty relevant for direct detection and neutralino capture by the Sun arises from the effective nuclear couplings in WIMP-nucleon cross-sections. For spin-independent interactions, these depend on the light-quark composition of the proton and the neutron. We scan the GAMBIT model nuclear_params_sigmas_sigmal, parameterising the 6 individual hadronic matrix elements in terms of just two nuclear matrix elements, which we take to be identical for N = p and N = n [178]. These two parameters, σ_l and σ_s, respectively describe the light-quark and strange-quark content of the nucleon. We vary σ_l and σ_s over their ±3σ ranges in all fits. Discussion of the values and uncertainties of these parameters can be found in Sec. 3.1.3 and the DarkBit paper [156]. Like all other nuisance parameters listed in Table 2, the nuclear matrix elements are sufficiently well constrained that the prior is irrelevant, so we choose it to be flat.
The spin-dependent couplings are described by the spin content of the proton and neutron, Δ_q^(N), for each light quark q ∈ {u, d, s}. As the values for the proton and neutron are related, only three of these parameters are independent. As listed in Table 2, we specify the values for the proton, and set them to the central values discussed in Ref. [156].

Scanning methodology
In this paper we carry out a number of different scans of each of the three GUT-scale models, employing multiple priors, sampling algorithms and settings. We then merge the results of all scans for each model, in order to obtain the most complete sampling of the profile likelihood possible. We leave the discussion and presentation of Bayesian posteriors for a future paper, as they remain strongly dominated by the choice of prior even in such low-dimensional versions of the MSSM, and a detailed analysis of their implications for fine-tuning and naturalness (e.g. [179,180]) is beyond the scope of the current paper.
The parameter ranges and priors that we employ in scans of each model are listed in Table 1. We repeat every scan for positive and negative values of µ. We carry out scans with both flat and logarithmic priors on all dimensionful parameters. In the case of the trilinear coupling A_0, which may be positive or negative, our log-prior scans employ a hybrid prior (log_flat_join in the language of Ref. [160]), consisting of a symmetric logarithmic prior at large |A_0|, truncated to a flat prior at |A_0| < 100 GeV.
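Such a hybrid prior can be illustrated with a small inverse-transform sketch; the function below is a toy version of the log_flat_join mapping (with the join point at 100 GeV and the range from Table 1 assumed), not the ScannerBit implementation:

```python
import math

def log_flat_join(u, a=100.0, b=1.0e4):
    """Map a uniform sample u in [0,1] to A0 in [-b, b] GeV under a prior
    that is flat in A0 for |A0| < a and flat in log|A0| for |A0| >= a,
    with the density matched continuously at |A0| = a (illustrative)."""
    L = math.log(b / a)
    Z = 2.0 * (1.0 + L)        # total (unnormalised) prior mass
    t = u * Z - (1.0 + L)      # symmetric coordinate in [-(1+L), 1+L]
    s = 1.0 if t >= 0 else -1.0
    t = abs(t)
    if t <= 1.0:
        return s * a * t                 # flat core, |A0| < a
    return s * a * math.exp(t - 1.0)     # logarithmic tails
```

The matched density avoids the divergence of a pure log prior at A_0 = 0 while preserving scale-invariant sampling at large |A_0|.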
We use two different samplers for our scans: Diver 1.0.0 [160] and MultiNest 3.10 [181]. The settings that we use for each can be found in Table 3.
Diver is a self-adaptive sampler based on differential evolution [182]. It samples the profile likelihood far more efficiently than traditional algorithms [160], allowing high-quality profile likelihoods to be computed in a fraction of the time of previous SUSY global fits, using significantly fewer likelihood samples. As a result, the majority of our results are driven by the Diver scans. Following the extensive tests discussed in Ref. [160], for most scans we choose a population size of NP = 19 200 and a convergence threshold of convthresh = 10⁻⁵. The latter is defined in terms of the smoothed fractional improvement in the mean likelihood across the entire population. Other than these two parameters, we employ Diver with the default settings defined in ScannerBit [160]. In particular, this includes the λjDE version (introduced in Ref. [160]) of the self-adaptive jDE algorithm [183].
MultiNest is an implementation of nested sampling [184], a method optimised for the calculation of the Bayesian evidence. As a by-product, it also produces posterior samples, which it obtains via likelihood evaluation. It can therefore be very useful for sampling profile likelihoods as well, especially for smoothly mapping isolikelihood contours. However, it typically requires a rather long runtime to properly find the global best fit and any highly-localised likelihood modes [160,185,186]. We employ it here mainly to bulk out our sampling of the main likelihood mode of each scan a little, in regions where the profile likelihood is comparatively flat. For this purpose, we run MultiNest with relatively loose settings, choosing nlive = 5000 live points and a stopping tolerance of tol = 0.1. The tolerance is given in terms of the estimated fractional remaining unsampled evidence.
For more details on the performance of the two scanning algorithms, and comparisons to others, please see the ScannerBit paper [160].
To more densely sample the narrow strips in parameter space where neutralino-sfermion co-annihilations play an important role in determining the relic density of DM, we also carry out two specially-targeted versions of each log-prior scan. In these scans, we restrict the mass of either the lightest slepton or the lightest squark to within 50% of the mass of the lightest neutralino, i.e.
\[ 1 \le m_{\tilde{f}_1} / m_{\tilde{\chi}_1^0} \le 1.5, \]
with \(\tilde{f}_1\) the lightest slepton or squark.
Although these additional scans are not necessary for finding the sfermion co-annihilation regions (Diver typically uncovers these regions anyway in untargeted scans), they are useful for ensuring that the boundaries of these regions are mapped thoroughly.
We carry out the additional scans using Diver only, building the initial population exclusively from models that satisfy the mass-ratio cut, before evolving it as usual. As it takes many random draws from the prior to successfully build such an initial population, we run these scans with a reduced population of NP = 6000, and a looser convergence criterion (convthresh = 10⁻⁴) than the untargeted equivalents.
In all profile likelihood plots that we show in the paper, we sort the samples of our scans into 60 bins across the range of data values that they cover in each direction. We then interpolate with a bilinear scheme to a finer resolution of 500 when plotting [187]. This expressly avoids any smoothing of the resulting profile likelihoods, which would amount to manipulation of the likelihoods of our samples. The binning and interpolation process can produce some cosmetic artefacts, in particular a sawtooth pattern in regions where the likelihood drops off sharply. Using a fixed number of bins across the data range (rather than the plot range) can also sometimes produce surprising effects when plotting multiple regions on the same axes, as the same region will typically appear smaller and 'smoother' for subsets of the data where all samples lie within a small range of parameter values than for subsets with samples spread across the entire plot plane (as the size of an individual bin is much larger in the latter case). This should be kept in mind especially when viewing plots of mass correlations for multiple co-annihilation mechanisms and comparing preferred regions to LHC sensitivity curves (e.g. Fig. 15).
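The binning-and-profiling step can be sketched as follows; profile_2d is a hypothetical helper, and the real plotting pipeline [187] additionally interpolates bilinearly to a finer grid:

```python
import numpy as np

def profile_2d(x, y, loglike, nbins=60):
    """Profile samples onto an nbins x nbins grid: each cell keeps the
    maximum log-likelihood of the samples that fall inside it. Bins span
    the data range, not the plot range (sketch of the plotting procedure)."""
    ix = np.clip(((x - x.min()) / (np.ptp(x) or 1.0) * nbins).astype(int), 0, nbins - 1)
    iy = np.clip(((y - y.min()) / (np.ptp(y) or 1.0) * nbins).astype(int), 0, nbins - 1)
    grid = np.full((nbins, nbins), -np.inf)   # empty cells stay at -inf
    np.maximum.at(grid, (ix, iy), loglike)    # profile: take the max per cell
    return grid
```

Taking the maximum per cell, rather than an average, is what distinguishes profiling from posterior binning, and is why smoothing the result would distort the likelihood.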

Standard Model
We include independent Gaussian likelihoods for each of the two SM nuisance parameters in our scans. We evaluate the strong coupling α_s at the scale µ = m_Z in the \overline{MS} scheme, and compare with α_s(m_Z) = 0.1185 ± 0.0006 from lattice QCD [170]. We interpret the quoted uncertainty as a 1σ confidence interval, and do not incorporate any additional theoretical uncertainty.
For the top quark pole mass m t , we compare with the combined measurements of experiments at the Tevatron and LHC: m t = 173.34 ± 0.27(stat) ± 0.71(syst) GeV, with a total uncertainty of 0.76 GeV [169]. We do not assign any separate systematic error to our interpretation of the experimental result as the top pole mass.
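Both nuisance likelihoods are simple Gaussians; a minimal sketch using the central values and uncertainties quoted above (function name ours):

```python
import math

def gauss_loglike(pred, mu, sigma):
    """ln of a Gaussian likelihood, normalisation constant included."""
    return -0.5 * ((pred - mu) / sigma)**2 - math.log(sigma * math.sqrt(2.0 * math.pi))

# central values and 1-sigma uncertainties quoted in the text
lnL_alphas = gauss_loglike(0.1185, 0.1185, 0.0006)  # alpha_s(mZ) at its central value
lnL_mtop   = gauss_loglike(173.34, 173.34, 0.76)    # m_t [GeV] at its central value
```

A prediction one standard deviation from the central value costs exactly 0.5 units of log-likelihood, as expected for a Gaussian.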

Local halo model
The canonical local density of DM extracted from fits to stellar kinematic data is ρ̄_0 = 0.4 GeV cm⁻³ (see e.g. [188,189]). Because arbitrarily small or negative densities are unphysical, we adopt a log-normal distribution for the likelihood of ρ_0,
\[ \mathcal{L}(\rho_0) = \frac{1}{\sqrt{2\pi}\,\sigma_{\ln}\,\rho_0} \exp\left[ -\frac{\ln^2(\rho_0/\bar{\rho}_0)}{2\sigma_{\ln}^2} \right], \]
where σ_ln = ln(1 + σ_ρ0/ρ̄_0) and σ_ρ0 is taken to be 0.15 GeV cm⁻³. We refer the reader to Ref. [155] for additional implementation details of the GAMBIT lognormal likelihood, and Refs. [156,177,190] for a more extended discussion of the central value and uncertainty on this parameter.
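A minimal sketch of such a log-normal likelihood, assuming the σ_ρ0 = 0.15 GeV cm⁻³ width quoted above (not the GAMBIT implementation itself):

```python
import math

RHO0_BAR  = 0.4    # GeV/cm^3, canonical local DM density
SIGMA_RHO = 0.15   # GeV/cm^3, adopted uncertainty
SIGMA_LN  = math.log(1.0 + SIGMA_RHO / RHO0_BAR)

def rho0_loglike(rho0):
    """ln of a log-normal likelihood for rho0: vanishes smoothly at small
    densities and forbids negative ones (illustrative sketch)."""
    if rho0 <= 0.0:
        return -math.inf
    return (-0.5 * (math.log(rho0 / RHO0_BAR) / SIGMA_LN)**2
            - math.log(SIGMA_LN * rho0 * math.sqrt(2.0 * math.pi)))
```

Unlike a Gaussian, this distribution automatically assigns zero probability to unphysical densities and is asymmetric about the central value.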

Nuclear matrix elements
We constrain the nuclear matrix elements σ s and σ l using Gaussian likelihood functions, with central ± 1σ values of 43 ± 8 MeV and 58 ± 9 MeV, respectively. The former is based on lattice calculations [191], whereas the latter is a weighted average of a number of different results in the literature [156].
Higgs mass
Large logarithms appear in the Higgs mass calculation when the supersymmetric spectrum is very heavy. To improve precision, these can be resummed using techniques from effective field theory (EFT) [44,46,201-204]. This method has been implemented in several public codes [44,46,203]. However, the hierarchical spectrum assumed in the EFT calculation only appears in small subspaces of the models over which we scan in this paper, and the public codes that implement this calculation, and that are suitable for cluster-scale parameter scans, are rather new. It was also pointed out in Ref. [44] that the accuracy of the fixed-order Higgs mass prediction in FlexibleSUSY is much better than one would naively expect at large sparticle masses, due to accidental cancellations. We have therefore retained the fixed-order calculation in FlexibleSUSY for the Higgs mass in this paper.

Relic density of dark matter
The thermal relic abundance of the lightest neutralino is a strong constraint on the MSSM. Many parameter combinations lead to more DM than the cosmological abundance observed by Planck, Ω_c h² = 0.1188 ± 0.0010 [205]. For the relic density not to exceed this value when the lightest neutralino is heavier than ∼100 GeV, typically one or more specific depletion mechanisms must be active. These include co-annihilation of light sfermions or charginos with the lightest neutralino, and resonance or 'funnel' effects, where the lightest neutralino has a mass very close to half that of another neutral species.
We compute the relic density of each model taking into account DM annihilation to all two-body final states, including full co-annihilation [206,207], thermal and resonance effects, using the native DarkBit relic density calculator [156], connected to various subroutines of DarkSUSY 5.1.3 [208]. We obtain the effective annihilation rate W eff from DarkSUSY, passing all spectrum, decay and SM information from GAMBIT, and considering co-annihilations with particles up to 60% heavier than the lightest neutralino. We also employ the DarkSUSY Boltzmann solver, setting the option fast = 1. This ensures that the relic density calculation for most models takes less than a second. This setting controls the convergence criteria of the Boltzmann solver, and is the recommended option unless accuracy of better than 1% is required.
The likelihood that we employ penalises only models that predict more than the observed relic density. The likelihood function is a half Gaussian (see Ref. [156] and Sec. 8.3 of Ref. [155]), centred on the Planck observation but treating it as an upper limit. Consistent with our choice of the fast parameter for the Boltzmann solver, we retain the DarkBit default theoretical uncertainty of 5% on the relic density, adding it in quadrature to the observational error. Further discussion of this number in the context of higher-order corrections can be found in Refs. [3,18,156,209-213].
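The upper-limit form can be sketched as a half-Gaussian in the predicted relic density, with the 5% theory error added in quadrature; this illustrates the shape only (function name ours), not the DarkBit code:

```python
import math

OMEGA_OBS, SIGMA_OBS = 0.1188, 0.0010  # Planck value and its error [205]
THEORY_FRAC = 0.05                     # 5% theory error on the prediction

def relic_upper_loglike(omega_pred):
    """Half-Gaussian ln-likelihood (up to a constant) that penalises only
    predictions above the observed relic density; theory and observational
    errors are combined in quadrature (illustrative sketch)."""
    sigma = math.hypot(SIGMA_OBS, THEORY_FRAC * omega_pred)
    if omega_pred <= OMEGA_OBS:
        return 0.0  # no penalty below the Planck value
    return -0.5 * ((omega_pred - OMEGA_OBS) / sigma)**2
```

The flat plateau below the observed value is what allows under-abundant neutralinos, e.g. light Higgsinos, to survive this constraint.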

Gamma rays from dark matter annihilation
Neutralino annihilation in astrophysical objects would produce a variety of final states, leading to both prompt gamma rays and gamma rays from the decays of other final-state particles. Dwarf spheroidal galaxies are particularly important targets, as they are strongly dominated by dark rather than visible matter, and exhibit little or no astrophysical gamma-ray emission. Limits from gamma-ray observations of dwarf galaxies have therefore played an increasingly important role in global fits, e.g. [132,214-216].
For most neutralino masses, the most stringent gamma-ray limits on DM annihilation come from joint analyses of multiple Milky Way satellite galaxies [217][218][219][220][221] using data from the Fermi Large Area Telescope (Fermi-LAT). We employ likelihoods from the analysis of six years of Pass 8 data in the direction of 15 dwarf spheroidal galaxies [221], as implemented in gamLike [156].
gamLike constructs a composite likelihood from gamma-ray data sorted into N_dSph fields of view (one for each dwarf) and N_ebin energy bins. The partial likelihoods L_ki give the probability of obtaining the observed number of photons in the ith energy bin from the kth dwarf, as a function of the predicted integrated flux Φ_i J_k. The energy-dependent factor Φ_i depends on the MSSM model, whereas the astrophysical 'J-factor' J_k is a model-independent property of each dwarf galaxy.
These factors are
\[ \Phi_i = \frac{\langle\sigma v\rangle_0}{8\pi m_\chi^2} \int_{\Delta E_i} \frac{\mathrm{d}N_\gamma}{\mathrm{d}E}\, \mathrm{d}E, \qquad J_k = \int_{\Delta\Omega_k} \mathrm{d}\Omega \int_{\rm l.o.s.} \rho_\chi^2(s)\, \mathrm{d}s. \]
Here dN_γ/dE is the differential gamma-ray multiplicity per annihilation, ⟨σv⟩_0 ≡ ⟨σv⟩|_{v→0} is the zero-velocity annihilation cross-section, m_χ is the DM mass, and ρ_χ(s) is the DM density along the line-of-sight parameter s in a given dwarf. ΔE_i is the width of the ith energy bin, and ΔΩ_k is the solid angle around the position of the kth dwarf over which gamma-ray data are considered.
As in [221], gamLike profiles over the J_k-factors as nuisance parameters, giving a final likelihood of
\[ \mathcal{L} = \prod_{k=1}^{N_{\rm dSph}} \max_{J_k} \left[ \mathcal{L}_k(J_k) \prod_{i=1}^{N_{\rm ebin}} \mathcal{L}_{ki}(\Phi_i J_k) \right], \]
where the probability distribution for each J_k is assumed to follow an independent log-normal distribution with mean \(\hat{J}_k\) and width σ_k,
\[ \mathcal{L}_k(J_k) = \frac{1}{\ln(10)\, J_k \sqrt{2\pi}\, \sigma_k} \exp\left[ -\frac{(\log_{10} J_k - \log_{10}\hat{J}_k)^2}{2\sigma_k^2} \right]. \]
We compute the predicted spectrum dN_γ/dE for each model by combining tabulated two-body annihilation spectra from DarkSUSY [208] with yields computed on the fly with the DarkBit Fast Cascade Monte Carlo [156]. To this, we add the dominant contribution from photon internal bremsstrahlung [222]. For each parameter combination, we rescale the expected gamma-ray flux by the squared ratio of the predicted relic density to the observed value, allowing for the fact that neutralinos may constitute only a fraction of DM. We limit this scaling factor to 1, not rescaling signals when the predicted relic density is greater than the observed value.
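The rescaling of annihilation signals can be sketched as follows; the quadratic scaling reflects the density-squared dependence of the annihilation rate, with the factor capped at 1 as described above (function name ours):

```python
OMEGA_OBS = 0.1188  # observed relic density used in the text

def indirect_scaling(omega_pred):
    """Suppression factor for annihilation signals when neutralinos make up
    only a fraction f of the DM: rates scale as f^2, capped at 1
    (illustrative sketch of the rescaling described in the text)."""
    f = min(omega_pred / OMEGA_OBS, 1.0)
    return f * f
```

A model predicting half the observed relic density thus has its gamma-ray signal suppressed by a factor of four, while over-abundant models (excluded elsewhere by the relic density likelihood) are not rescaled.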

High-energy neutrinos from dark matter annihilation in the Sun
The Sun is expected to capture neutralinos from the local halo by nuclear scattering. Subsequent neutralino annihilation and interaction of the annihilation products in the solar core would produce GeV-energy neutrinos, which may be detectable at the Earth. The rate-limiting step for all MSSM models is the capture process, which depends sensitively on both the spin-dependent and spin-independent nuclear scattering cross-sections. The dominant constraints on intermediate and high-mass neutralino annihilation in the solar interior currently come from the IceCube experiment [223,224]. We use the DarkBit interface [156] to nulike 1.0.4 [7,225], which computes an unbinned likelihood from the event-level energy and angular information contained in the three independent event selections of the 79-string IceCube dataset [223]. We predict the neutrino spectra at Earth using DarkSUSY 5.1.3, which contains tabulated results previously obtained from WimpSim [226] (available at www.fysik.su.se/~edsjo/wimpsim/). Slightly stronger limits are also available from the 86-string dataset [224], but not in a format that allows them to be accurately applied to MSSM models.

Direct detection of dark matter
The dominant direct DM constraints on the models in this paper come from the LUX [227-229], PandaX [230] and PICO [231,232] experiments. We also include likelihoods from XENON100 [233], SuperCDMS [234] and SIMPLE [235]. A new analysis from PICO-60 [236] appeared after much of this paper was already finalised, but the majority of MSSM models susceptible to that limit are already probed in our scans by the IceCube 79-string likelihood (Sec. 3.5). We do not include the recent XENON1T result [237], but given that its sensitivity improvement relative to LUX is smaller than the error in our likelihood approximation [156], this will not impact our results. For each experimental search and combination of MSSM, halo and nuclear parameters, we use the likelihood functions contained in DDCalc [156] to compute a Poisson likelihood,
\[ \mathcal{L}_i = \frac{(b_i + N_{p,i})^{N_{o,i}}\, e^{-(b_i + N_{p,i})}}{N_{o,i}!}. \]
Here N_{o,i} is the number of observed events in the analysis region of the ith experiment, b_i is the expected number of background events, and N_{p,i} is the expected number of signal events. DDCalc computes the latter by interpolating in pre-computed efficiency tables, which include both detector and acceptance effects. The signal prediction takes into account both the spin-dependent and spin-independent interactions expected from each MSSM model. We compute the DM-nucleon couplings for each MSSM model using DarkSUSY 5.1.3 [208]. We scale the direct detection yields for each parameter combination by the ratio of the predicted relic density to the value observed by Planck [205], allowing for the fact that neutralinos may not constitute all of DM. We do not rescale direct detection rates when the predicted relic density is larger than the observed value.
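The Poisson likelihood evaluated for each experiment can be sketched as follows; this is a toy version of the form computed by DDCalc, with the background and signal expectations assumed given:

```python
import math

def dd_loglike(n_obs, b, s):
    """ln of the Poisson likelihood L = (b+s)^n * exp(-(b+s)) / n! for a
    direct-detection analysis region (illustrative sketch)."""
    mu = b + s
    return n_obs * math.log(mu) - mu - math.lgamma(n_obs + 1)
```

Using math.lgamma for ln(n!) keeps the expression numerically stable even for large event counts; predicting more signal than the data allow drives the log-likelihood down, which is how exclusion limits arise.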

Electroweak precision observables
We include likelihoods from PrecisionBit [159] for the W mass and the anomalous magnetic moment of the muon, a_µ. These functions construct a basic Gaussian likelihood from the difference between the calculated and measured values, combining theoretical and experimental uncertainties in quadrature.
The W mass must be recalculated using the details of the SUSY spectrum. In the present scans, the value of m_W comes from FlexibleSUSY. SpecBit assigns a theoretical uncertainty of 10 MeV to this quantity, based on the size of two-loop corrections [159]. PrecisionBit compares this prediction to m_W = 80.385 ± 0.015 GeV [170], based on mass measurements and uncertainties from the Tevatron and LEP experiments.
For a_µ, we assume an SM contribution of a_µ,SM = (11659180.2 ± 4.9) × 10⁻¹⁰, which comes from theoretical calculations based on e⁺e⁻ data [238]. We evaluate the supersymmetric contribution using GM2Calc 1.3.0 [92], which determines an uncertainty on its result by estimating the magnitude of neglected higher-order corrections using the two-loop Barr-Zee corrections [239]. The total predicted value is the sum of the SM and MSSM contributions, and the total uncertainty the sum in quadrature of their individual uncertainties. We compare this with the experimental measurement of a_µ = (11659208.9 ± 6.3) × 10⁻¹⁰ [240,241], where the experimental error is the sum in quadrature of the systematic (3.3 × 10⁻¹⁰) and statistical (5.4 × 10⁻¹⁰) contributions.
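The Gaussian likelihood construction described in this section can be sketched as follows. The SM and experimental a_µ values are those quoted above (in units of 10⁻¹⁰); the SUSY contribution and its uncertainty are invented placeholders:

```python
import math

def gaussian_loglike(pred, obs, sigma_theory, sigma_exp):
    # Theoretical and experimental uncertainties combined in quadrature
    sigma2 = sigma_theory ** 2 + sigma_exp ** 2
    return -0.5 * (pred - obs) ** 2 / sigma2 - 0.5 * math.log(2 * math.pi * sigma2)

# a_mu in units of 1e-10, using the numbers quoted in the text;
# the SUSY contribution and its error are hypothetical
a_sm, sig_sm = 11659180.2, 4.9
a_susy, sig_susy = 25.0, 2.0
a_exp = 11659208.9
sig_exp = math.hypot(3.3, 5.4)   # systematic and statistical in quadrature (~6.3)
lnL = gaussian_loglike(a_sm + a_susy, a_exp, math.hypot(sig_sm, sig_susy), sig_exp)
```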

Flavour physics
Scans in this paper include 59 flavour observables from FlavBit [158]. These are sorted into four different categories for likelihood calculation:
1. Tree-level leptonic and semi-leptonic B and D meson decays (8 observables). Branching fractions for these decays; here either µ or e may be substituted for the light lepton ℓ, as both are effectively massless in the B-meson system.
2. Electroweak penguin decays (48 observables). Eight observables (A_FB, F_L, S_3, S_4, S_5, S_7, S_8 and S_9) for the decay B⁰ → K*⁰ µ⁺µ⁻, measured in six kinematic bins.
All observable predictions draw on SuperIso 3.6 [242,243]. We have not included the B_s–B̄_s meson mass difference ∆M_s, because it is only calculable within FlavBit via FeynHiggs, which we otherwise avoided for the scans of this paper in the interests of speed, and because of concerns about the accuracy of its most recent versions in parts of the parameter space (some details of which have been mentioned earlier in this section). Recent LHCb results in the exclusive modes have already provided substantial additional constraints compared to the available inclusive results from the B factories. In particular, several angular observables in the B⁰ → K*⁰ µ⁺µ⁻ decay have been measured for the first time.
We construct a separate likelihood function for observables in each of the four categories above, including correlated uncertainties on observables within each category wherever warranted. The likelihood functions treat correlations between experimental measurement errors separately from correlations between theoretical errors (arising from e.g. common scale or form factor uncertainties), and sum the two contributions to obtain the final covariance matrix. FlavBit then computes the likelihood within each category using a χ² approximation,

ln L = −(1/2) Σ_ij (x_i − y_i)(V⁻¹)_ij (x_j − y_j),

where x_i and y_i are the experimental measurements and theoretical predictions, respectively, and V is the covariance matrix.
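A minimal illustration of this covariance-based χ² for a two-observable category (all covariance entries are invented; this is not the FlavBit implementation):

```python
def chi2_correlated(x, y, V):
    """chi^2 = (x - y)^T V^{-1} (x - y) for a 2x2 covariance V,
    with V = V_exp + V_theory (correlations summed, as in the text)."""
    d0, d1 = x[0] - y[0], x[1] - y[1]
    (a, b), (c, e) = V
    det = a * e - b * c
    # inverse of the 2x2 matrix contracted with the residual vector
    return (e * d0 * d0 - (b + c) * d0 * d1 + a * d1 * d1) / det

V_exp = [[0.04, 0.01], [0.01, 0.09]]   # illustrative experimental covariance
V_th = [[0.01, 0.00], [0.00, 0.01]]    # illustrative theory covariance
V = [[V_exp[i][j] + V_th[i][j] for j in range(2)] for i in range(2)]
chi2 = chi2_correlated([1.0, 2.0], [0.9, 2.2], V)
lnL = -0.5 * chi2   # chi-squared approximation to the log-likelihood
```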

Searches for superpartners at LEP
Even though they are typically overshadowed by constraints from the LHC, LEP searches can have a significant impact in some parts of the parameter spaces that we consider in this paper. This is especially true for light, highly-degenerate spectra. Direct limits on sparticle production at LEP have typically taken the form of hard lower limits on sparticle masses, at e.g. 95% CL, computed with model-dependent assumptions [208,267].
ColliderBit instead uses the individual cross-section limits on pair production of neutralinos, charginos and sleptons from the ALEPH, L3 and OPAL experiments, as a function of the sparticle masses.
For each MSSM parameter combination, we compute the pair-production cross-sections at LEP for the processes given in Table 4, using the cross-section calculations included in ColliderBit, based on the results of Refs. [268-271]. We take the relevant sparticle decay branching fractions from DecayBit (choosing to obtain widths from a suitably-patched SUSY-HIT 1.5 [159,272]), and calculate the product of the cross-section and branching fraction for each process. This number can then be compared to digitised LEP cross-section limits in the plane of m_{χ̃_1^0} and the mass of the directly-produced sparticle, interpolating when the masses do not fall exactly on a grid point. This takes care of the mass-dependent experimental acceptance for each parameter point. We then calculate the likelihood of the experimental result assuming a Gaussian form, accounting for the dominant theoretical uncertainty on the signal prediction by varying the mass of the pair-produced sparticles within the uncertainties provided by SpecBit. Finally, we multiply the likelihoods from the various experiments and channels, taking them as independent measurements.
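The limit-interpolation step described above can be sketched as follows. The digitised grid values and the predicted σ × BR are invented, and a single mass axis is used for brevity (the real grids are two-dimensional, over the sparticle and neutralino masses):

```python
import bisect

def interp_limit(mass, grid, limits):
    """Linearly interpolate a digitised cross-section limit between grid points."""
    i = bisect.bisect_left(grid, mass)
    if i == 0:
        return limits[0]
    if i == len(grid):
        return limits[-1]
    w = (mass - grid[i - 1]) / (grid[i] - grid[i - 1])
    return (1 - w) * limits[i - 1] + w * limits[i]

# hypothetical digitised 95% CL limit (pb) versus sparticle mass (GeV)
grid = [90.0, 95.0, 100.0, 103.0]
limits = [0.60, 0.35, 0.20, 0.10]
lim = interp_limit(97.5, grid, limits)   # interpolated limit at this point
sigma_x_br = 0.25                        # predicted cross-section x BR (invented)
excluded_at_face_value = sigma_x_br > lim
```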
Further details of the cross-section and likelihood calculations can be found in the ColliderBit paper [157]. These are typically the most constraining searches for direct production of charginos and neutralinos, and the 2-lepton search is also sensitive to slepton pair production and decay.
4. Dark matter searches (CMS, Run I). We include the CMS monojet search [287], which constrains supersymmetric particle production in the case of compressed mass spectra.
We use ColliderBit to calculate the expected signal yield for each combination of model parameters, in each analysis region, using the external Monte Carlo (MC) event generator Pythia 8 [288,289], the native ColliderBit detector parameterisation BuckFast [157], and the ColliderBit implementation of the analysis cuts applied in each LHC paper. ColliderBit contains a number of code optimisations of the Pythia 8 routines, including parallelisation of the main event loop via OpenMP. These modifications make it feasible to run 20 000 MC events per parameter combination during the global fit itself, as we do here. Due to the computational cost of calculating next-to-leading-order (NLO) cross-sections, we normalise the signal yields using leading-order (LO) plus leading-log (LL) cross-sections only, as provided by Pythia 8. For a more exhaustive discussion of this choice, see the ColliderBit paper [157].
In a specific signal region with a predicted number of signal events s and an expected number of background events b, the likelihood of observing n events is described in ColliderBit by a marginalised form of the Poisson likelihood [214,225,290],

L(n | s, b) = ∫₀^∞ P(n | ξ(s + b)) p(ξ) dξ,

where ξ is a scaling variable with a probability distribution p(ξ) centred on 1, designed to describe the effective rescaling of the signal + background prediction due to systematic uncertainties. Marginalising over ξ in this way makes it possible to include the effects of fractional systematic uncertainties on both the signal prediction (σ_s) and the background estimate (σ_b).⁵ We assume a log-normal distribution for ξ, with σ_ξ² = σ_s² + σ_b². We compute this integral using the highly-optimised implementation in nulike 1.0.4 [7,225].
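A numerical sketch of this marginalised likelihood, using simple trapezoidal integration over a log-normal ξ centred on 1 (nulike uses a far more optimised scheme; this is purely illustrative):

```python
import math

def marg_poisson_loglike(n, s, b, sigma_s, sigma_b, steps=2000):
    """Poisson likelihood marginalised over a log-normal scaling xi,
    with sigma_xi^2 = sigma_s^2 + sigma_b^2, as described in the text."""
    sig = math.sqrt(sigma_s ** 2 + sigma_b ** 2)
    # trapezoidal integration over u = ln(xi) on +-5 sigma
    lo, hi = -5 * sig, 5 * sig
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        u = lo + k * h
        mu = math.exp(u) * (s + b)                 # rescaled expectation
        ln_pois = n * math.log(mu) - mu - math.lgamma(n + 1)
        ln_prior = -0.5 * (u / sig) ** 2 - 0.5 * math.log(2 * math.pi * sig ** 2)
        w = 0.5 if k in (0, steps) else 1.0
        total += w * math.exp(ln_pois + ln_prior) * h
    return math.log(total)
```

In the limit of vanishing systematics, this reduces to the plain Poisson likelihood, which provides a simple cross-check.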
The analyses listed above are statistically independent, either because they use a completely independent dataset (based on collisions at ATLAS versus CMS, or during Run I versus Run II), or because they utilise signal regions that have no overlap with the signal regions of any of the other searches. This allows us to simply multiply the likelihoods of all analyses in order to arrive at a combined likelihood.
However, within each analysis, signal regions are not always orthogonal, i.e. some contain events or significant systematics in common. Given that there is no public information describing the correlations across these signal regions,⁶ we calculate the likelihood for an analysis based on the signal region that is expected to give the strongest limit. We determine the expected limit from each signal region by computing the expected ratio between the signal-plus-background and background-only likelihoods, in the hypothetical scenario where the observed number of events is exactly equal to the background expectation,

∆ln L = ln L(n = b | s + b) − ln L(n = b | b).

Taking the difference with respect to the background log-likelihood prevents erroneous model-to-model jumps in the likelihood function (see Ref. [157] for more details).

⁵ Due to our use of LO cross-sections, including a signal systematic associated with finite MC statistics is in practice rather pointless, as with 20 000 simulated events this is basically always dwarfed by the systematic error associated with neglecting NLO corrections. Considering that the LO cross-sections in the MSSM are known to almost always lie significantly below the NLO cross-section, our approach is in any case very conservative. In the present scan we have thus set σ_s = 0. For details, see the ColliderBit paper [157].
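The expected-limit criterion for choosing a signal region can be sketched as follows; the signal-region contents and the dictionary layout are invented for illustration:

```python
import math

def poisson_loglike(n, mu):
    # lgamma handles non-integer "observed" counts (n is set to b here)
    return n * math.log(mu) - mu - math.lgamma(n + 1)

def best_expected_region(regions):
    """Pick the signal region with the strongest *expected* limit: the most
    negative ln L(n=b | s+b) - ln L(n=b | b), i.e. with the observed count
    set equal to the background expectation, as described in the text."""
    def expected_dll(r):
        s, b = r["s"], r["b"]
        return poisson_loglike(b, s + b) - poisson_loglike(b, b)
    return min(regions, key=expected_dll)

regions = [
    {"name": "SR1", "s": 2.0, "b": 10.0},   # hypothetical yields
    {"name": "SR2", "s": 6.0, "b": 50.0},
]
chosen = best_expected_region(regions)
```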
Given the absence of published correlations between the yields (and uncertainties) in the various signal regions, this is arguably the best possible treatment, and it has the added merit of giving conservative results. Because no significant excess has been observed in any of the LHC searches that we include, we restrict the combined LHC Run I and combined Run II log-likelihood each to a maximum of 0, i.e. forbidding mildly better fits than the SM (which are achievable via statistical fluctuations in the data or Monte Carlo simulation, at a little less than the 1σ level).
We included all Run I searches listed above directly in our main scans of the CMSSM, NUHM1 and NUHM2. We then applied the likelihoods associated with the 13 TeV, 13 fb⁻¹ Run II ATLAS and CMS 0-lepton searches in a postprocessing step, using the ScannerBit postprocessor scanner (see Sec. 6 of Ref. [160]). These searches uncovered no excesses, and therefore do not change the regions preferred by our scans, except to disfavour a strip of additional models (compared to the Run I searches) at sparticle masses of a few hundred GeV. The accuracy of our sampling is therefore unaffected by their inclusion via postprocessing rather than in the original scans.⁷

Higgs physics
We use likelihoods from HiggsBounds 4.3.1 [293-295] and HiggsSignals 1.4.0 [296], as interfaced via ColliderBit [157]. These provide two likelihood terms: one based on limits from LEP, and the other on measurements of Higgs masses and signal strengths at the LHC (plus some subdominant contributions from the Tevatron).
The combined LEP Higgs likelihood is an approximate Gaussian likelihood, valid in the asymptotic limit. HiggsBounds constructs this from the full CL_{s+b} distribution, accounting for the effect of varying production cross-sections and Higgs masses by interpolating in a grid of pre-calculated values.
The LHC Higgs likelihood is based on mass and signal-strength measurements reported by ATLAS and CMS. The mass and signal-strength data contribute separate χ² terms to the overall LHC Higgs log-likelihood. For each channel where a mass measurement is available, a χ² contribution is calculated for the hypothesis that each neutral Higgs particle is responsible for the observed 125 GeV boson [297,298]. Only the minimum value enters the final likelihood. This minimisation allows for the possibility that multiple resonances exist at 125 GeV with near-degenerate masses. The signal-strength contribution to the χ² uses a covariance matrix that contains all published experimental uncertainties on all measurements of signal strengths, including their correlations.
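The minimisation over neutral Higgs states in the mass χ² can be sketched as follows; the measured mass and uncertainty below are illustrative values, not the exact inputs used by HiggsSignals:

```python
def higgs_mass_chi2(masses, m_obs=125.09, sigma=0.24):
    """chi^2 for the hypothesis that *some* neutral Higgs state is the
    observed 125 GeV boson: one term per state, keeping only the minimum,
    as described in the text. m_obs and sigma are illustrative."""
    return min(((m - m_obs) / sigma) ** 2 for m in masses)

# in a near-decoupled spectrum, the light h dominates the minimisation
chi2 = higgs_mass_chi2([124.9, 1800.0, 1800.4])
```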
As discussed in Section 3.2, we obtain theoretical predictions of Higgs masses from FlexibleSUSY, adopting an uncertainty of 2 GeV on the mass of the lightest neutral Higgs, and 3% on all other Higgses [159]. We compute Higgs decay rates and branching fractions using SUSY-HIT 1.5 [272] via DecayBit [159]. To obtain the neutral Higgs boson production cross sections, we employ an effective coupling approximation, assuming that the BSM-to-SM ratios of Higgs production cross sections are equal to the ratios of the relevant squared couplings. We determine the coupling ratios using the partial width approximation, in which the ratios of squared BSM-to-SM couplings are taken to be equal to the ratios of the equivalent partial decay widths. To obtain branching fractions for SM-like Higgs bosons of equivalent mass to those in our MSSM models, we use lookup tables computed with HDECAY 6.51 [299,300]. More details can be found in the DecayBit paper [159].
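The effective-coupling and partial-width approximations described above amount to a simple rescaling, sketched here with invented partial widths and a hypothetical reference cross-section:

```python
# Effective-coupling approximation: the BSM/SM ratio of a Higgs production
# cross-section is taken equal to the ratio of squared couplings, which is
# itself approximated by the ratio of the corresponding partial decay widths
# (the partial-width approximation described in the text).
def production_ratio(width_bsm, width_sm):
    return width_bsm / width_sm

# illustrative (invented) h -> gg partial widths in GeV, and a hypothetical
# SM reference cross-section in pb for the corresponding production mode
r_ggh = production_ratio(width_bsm=0.30e-3, width_sm=0.34e-3)
sigma_bsm = r_ggh * 48.6
```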

CMSSM
In the left panel of Fig. 1, we show the joint profile likelihood ratio for the mass of the lightest neutralino and the relic density in the CMSSM. In the right panel, we show the same 95% CL regions colour-coded according to the possible mechanisms by which different models may avoid exceeding the observed relic density of DM. We classify these regions by simple mass and composition relations: stau co-annihilation requires the lighter stau to lie just above the mass of the lightest neutralino, with analogous near-degeneracy and composition conditions defining the stop co-annihilation, chargino co-annihilation and A/H-funnel regions; in the funnel condition, 'heavy' may be A⁰ or H⁰, i.e. a model qualifies if either Higgs is in range.
We emphasise that this classification is not exclusive. The labels that we give to these regions are merely a convenient shorthand for the precise mass/composition relations given above. In particular, they should not be interpreted as definitive indications that a specific mechanism is solely (or even predominantly) responsible for setting the relic density of the neutralino. These relations indicate necessary but not sufficient conditions for a given mechanism to play a significant role in setting the relic density. The colour-coding in Fig. 1 (right) is done on the basis of the subset of the points in the 2σ region of the full scan that fulfil each of the mass/composition relations, and the resulting shading of regions is overlaid. In many cases, as we will show, single parameter combinations can satisfy two or more of the mass/composition conditions, and can thus be classified as members of multiple regions. In these cases, one of the mechanisms sometimes dominates over the others. Hybrid sub-regions also exist where the relic density is controlled by two or more mechanisms. For clarity, we make no attempt to show any of these cases as separate regions, nor to colour according to which (if any) mechanism dominates in overlapping regions. For specific cases of interest, we do, however, attempt to clarify these finer issues in our discussion of the results.
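The non-exclusive tagging described above can be sketched as follows; the thresholds and dictionary keys are placeholders, not the paper's exact mass/composition cuts:

```python
def mechanism_tags(point):
    """Tag a point with every relic-density mechanism whose mass/composition
    relation it satisfies; tags are deliberately not exclusive.
    All thresholds here are illustrative placeholders."""
    tags = set()
    m_chi = point["m_neutralino1"]
    if point["m_stau1"] <= 1.2 * m_chi:
        tags.add("stau co-annihilation")
    if point["m_stop1"] <= 1.2 * m_chi:
        tags.add("stop co-annihilation")
    if point["higgsino_fraction"] >= 0.7:
        tags.add("chargino co-annihilation")
    if abs(point["m_A0"] - 2 * m_chi) <= 0.4 * m_chi:
        tags.add("A/H funnel")
    return tags

# a single (invented) point can carry several tags at once
example = {"m_neutralino1": 1000.0, "m_stau1": 5000.0, "m_stop1": 1100.0,
           "higgsino_fraction": 0.9, "m_A0": 2100.0}
tags = mechanism_tags(example)
```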
Even within individual regions, readers should be wary of the need for nuance in interpreting the "relic density mechanism" labels. Points labelled "chargino co-annihilation" will typically exhibit co-annihilation of the lightest neutralino with both the lightest chargino and the next-to-lightest neutralino, as small χ̃_1^0–χ̃_2^0 and χ̃_1^0–χ̃_1^± mass splittings are an automatic consequence of a predominantly Higgsino LSP. Nevertheless, both these co-annihilation processes are outweighed in many models simply by boosted χ̃_1^0–χ̃_1^0 annihilation, brought about by the dominance of the Higgsino component in the lightest neutralino. Similarly, A/H-funnel points will exhibit resonant annihilation through both the CP-odd Higgs, A⁰, and the heavy CP-even Higgs, H⁰, which are close to degenerate in mass in the CMSSM (and NUHM models). The CP-odd Higgs resonance dominates at the present day, however, as s-channel annihilation via the CP-even state is velocity-suppressed.
In contrast to previous studies of the CMSSM, we apply the relic density measurement as an upper limit only, allowing for the possibility that thermal neutralinos do not constitute all of DM. This has important consequences for the resulting phenomenology.
Higgsino LSPs are automatically nearly degenerate with the lightest chargino and next-to-lightest neutralino, leading to efficient co-annihilation and an under-abundant relic density for m_χ ≲ 1 TeV. In isolation, this effect naturally gives the observed relic density at neutralino masses of about a TeV, and lower and higher values at smaller and larger neutralino masses, respectively.⁸ This effect can be seen in the low-mass yellow strip in Fig. 1. If the LSP is instead a "well-tempered" [301] admixture of Higgsino and bino,⁹ then the efficiency of the co-annihilation effect can be tuned to give the exact observed relic density, even at very low neutralino masses. Such scenarios are, however, heavily constrained by recent LUX [228,229] and PandaX [230] limits on the spin-independent scattering cross-section [309-311]. As we see in the low-mass section of Fig. 1, however, relaxing the demand that the neutralino explain all of DM allows models to be more Higgsino-dominated, leading to subdominant neutralino DM. The reduced relic density also helps Higgsino models avoid limits from spin-dependent nuclear scattering, which would otherwise prove rather constraining.
Similarly, at masses above 1 TeV, the not-quite-efficient-enough Higgsino co-annihilation can be supplemented by additional resonant annihilation through the heavy Higgs funnel, bringing the relic density down to the observed value, or lower. These models can be seen as overlapping yellow and orange regions at m_χ ≳ 1 TeV in the right panel of Fig. 1.
We now see that relaxing the relic density constraint to an upper limit opens up a much richer set of phenomenologically-viable scenarios, with lighter Higgsino or mixed Higgsino-bino LSPs. From the perspective of global fits, treating the relic density as an upper bound is a conservative approach, and allows us to test whether the preference for heavy spectra found in recent studies [115,145,312] persists even when a greater variety of light LSPs is permitted.
The right panel of Fig. 1 shows that at 95% CL, all of the identified annihilation mechanisms (stop co-annihilation, A/H funnel and chargino co-annihilation) permit solutions where the measured relic density is fully accounted for, as well as scenarios where only a very small fraction of the DM relic abundance is explained in the CMSSM. The fit does not demonstrate any clear preference for the relic density to be under-abundant or very close to the measured value. Looking at the top of this plot, we indeed see the established picture for chargino co-annihilation discussed above, where a pure Higgsino DM candidate should have a mass of around 1 TeV to fit the observed relic density.
In Fig. 2, we show 2D CMSSM joint profile likelihoods for m_0 and m_1/2, as well as for tan β and A_0. Here the plots include both positive and negative µ, and are again coloured by relic density mechanism. We see a large region of high likelihood at large m_0 and m_1/2, consisting of overlapping chargino co-annihilation and A/H-funnel points. The A/H-funnel region is concentrated at high tan β, as is well known from previous studies of the CMSSM (e.g. Ref. [313]). The chargino co-annihilation region disfavours large negative A_0, in agreement with existing results in the literature.¹⁰ At lower m_0 and m_1/2, a stop co-annihilation region appears, with a light stop very close in mass to the lightest neutralino. Due to constraints from direct searches, as well as Higgs-mass measurements at the LHC, which push up the sfermion masses, these scenarios can only be obtained through very large stop mixing. This restricts the stop co-annihilation region to very large and negative A_0 values, and low-to-moderate tan β, as can be seen in the bottom panels of Fig. 2. This region has not been seen in most of the recent global fit literature, as revealing it requires not only consideration of large, negative A_0 values, but also very careful scanning of the parameter space.¹¹ The preference for large and negative A_0 in stop co-annihilation could lead to colour- or charge-breaking minima in the scalar potential. We have investigated the presence of such problems for points in the stop co-annihilation region, using several conditions that have been proposed in the literature, among them the requirement A_t² < 3.4(m²_{Q̃,33} + m²_{ũ,33}) + 60(m²_{H_u} + µ²), based on the results in Ref. [79]; the remaining conditions follow Ref. [315].
We found that whilst some points in this region do violate one or more of these conditions, removing all points that do so neither modifies the shapes of the likelihood contours in our plots, nor the fact that the best fit occurs in the stop co-annihilation region. This question could in principle be investigated further by calculating the tunnelling probability for each point, e.g. using Vevacious [316]. However, it is not possible to do this in a reasonable amount of time for the large number of points in our scans. The conditions above are not definitive, being neither necessary nor sufficient to establish that the vacuum of the theory breaks gauge invariance; but neither is studying stability with tools such as Vevacious, owing to the large number of scalar fields in the MSSM and the resulting difficulty of finding all relevant minima of the potential. We therefore leave detailed investigation of such issues for a future paper.

Fig. 2: The profile likelihood ratio in the CMSSM, for m_0 and m_1/2 (top) and tan β and A_0 (bottom), with explicit 68% and 95% CL contour lines drawn in white, and the best-fit point indicated by a star. Right: colour-coding shows the mechanisms active in models within the 95% CL contour for avoiding thermal overproduction of neutralino dark matter, through either chargino co-annihilation, resonant annihilation via the A/H funnel, or stop co-annihilation. Other potential mechanisms (e.g. stau co-annihilation) are not present, as they do not lie within the 95% CL contour.
Looking at the lower-right panel of Fig. 2, the stop co-annihilation region undoubtedly extends to even lower values of A_0 than we have considered here. Combined with possible impacts of Sommerfeld enhancement on the relic density [317], this would have the effect of allowing stop co-annihilation to extend to very large values of m_0 (Ref. [317] found stop co-annihilation models with m_0 as large as 13 TeV). However, as A_0 becomes more negative, colour- and charge-breaking vacua become an ever-increasing concern.
In contrast with previous results, we do not find a stau co-annihilation region inside the 95% CL region surrounding our overall best fit. We do find stau co-annihilation solutions in the same region of parameter space as seen in the literature¹² when we look at 4σ confidence regions. We also see small islands of stau co-annihilation appear inside the 2σ contours if we remove the LHC Run II likelihood (leaving only Run I analyses), although these are much smaller than seen in the previous literature. The likelihood of the stau co-annihilation strip is therefore suppressed in our results relative to those in the literature. Beyond the LHC Run II likelihood, which has also been shown to impact this region in Ref. [115], the suppression compared to other regions of good fit comes mostly from the LHC Higgs likelihood. This is influenced by the following differences in our analysis compared to existing analyses (for recent examples see Refs. [115,145,312]): i) relaxation of the relic density constraint to an upper bound, allowing light Higgsino DM scenarios and consequently relaxing the constraint on µ; ii) differences in the Higgs mass calculation (FlexibleSUSY rather than FeynHiggs) and branching ratio calculations (HDECAY rather than FeynHiggs); iii) a wider prior mass range than some previous scans; and iv) an improved scanning technique, which finds a modestly better fit in the other regions, relative to the stau co-annihilation region.
The plots in Fig. 2 combine scans for µ < 0 and µ > 0. As the sign of µ is a discrete parameter, it is useful to also investigate each sign independently; these results are shown in Fig. 3. A preference for positive µ, driven by the anomalous magnetic moment of the muon a_µ, has been reported previously (e.g. Ref. [145]). In our results, the Higgs observables and LHC sparticle search likelihoods (and the large allowed range for dimensionful parameters) push up the mass scale of the preferred sparticle spectrum, minimising the impact of the a_µ likelihood and removing the preference for positive µ. We see a mild preference for µ < 0, which has a best-fit log-likelihood of −263.75, as compared to −265.00 for µ > 0. The negative-µ results also exhibit an enhanced stop co-annihilation region at low mass, and a reduced A/H-funnel region at higher mass, relative to the positive-µ results.
The larger, better-fitting stop co-annihilation region at µ < 0 is driven entirely by the Higgs signal likelihood, in particular the fit to the gauge boson signal strengths. Positive µ in this region suppresses the h → WW, ZZ, γγ branching fractions to below the observed values, leading to a best-fit likelihood worse than the µ < 0 equivalent by ∆ln L = 1.3. Indeed, the µ < 0 fit is actually slightly better than the fit of the SM to the Higgs data, by ∆ln L = 0.9 units. Although the implied preference for µ < 0 over µ > 0 is weak, at just 1.1σ (in 2D), this demonstrates that precision Higgs physics has now reached the stage where it can directly constrain the parameters of supersymmetric scenarios.
The vastly different size of the heavy Higgs funnel for µ < 0 and µ > 0 is due to differences in LSP composition. For µ > 0, the A/H funnel contains many Higgsino LSP points, which combine with the chargino co-annihilation mechanism to form an extensive hybrid region. In contrast, the bino solutions that do exist in this region are somewhat more concentrated in the m 0 -m 1/2 plane. For µ < 0, the heavy Higgs funnel region is almost exclusively bino, leaving the chargino co-annihilation to exist mostly as a pure mechanism, and resulting in an upper limit on the mass of the LSP of ∼1.2 TeV. Although there are some relatively isolated points in the µ < 0 scan exhibiting hybrid funnel-chargino co-annihilation behaviour, it seems difficult to obtain valid solutions to the RGEs with such spectra when µ is negative.
The best-fit points for each relic density mechanism (with positive and negative µ results combined) are given in Table 5. These are also shown in the figures of this section, as stars coloured by their corresponding region. We also give the mass spectrum for the global best-fit CMSSM point (column 5 of Table 5) in Fig. 4, demonstrating that the only light superpartners are the lightest stop, the lightest two neutralinos and the lightest chargino, which is almost exactly degenerate in mass with the χ̃_2^0. The χ̃_1^0 is a pure bino for this point, whereas the χ̃_2^0 and χ̃_1^± are pure wino. The point generates a relic density within the allowed range through stop co-annihilation, but with a t̃_1–χ̃_1^0 mass difference of 40 GeV. This mass difference should ensure prompt stop decay and potential visibility in future compressed-spectrum searches at the LHC.
In Table 5, we also give a detailed breakdown of the likelihood contributions from the different searches discussed in Sec. 3, and compare to an 'ideal' reference likelihood. The ideal likelihood is defined as the best likelihood that a model could be expected to achieve, were it to perfectly predict all detections, and make no additional contribution beyond that predicted from background for all other searches. Computing this is straightforward for most likelihood components, as it follows directly from setting the model prediction to either the observed value (e.g. m W , Ω c h 2 , a µ , any nuisance parameters) or the background-only prediction (e.g. direct DM, LHC and neutrino searches). In some cases however, where multiple sub-observables are involved and the background-only or SM prediction can be improved on by including a BSM contribution, a more nuanced calculation is required. This is the case for the LHC Higgs and electroweak penguin (B 0 → K * 0 µ + µ − ) likelihoods. For these components, we define the ideal likelihood to be the highest value possible in a more general phenomenological scenario. In the flavour sector, we use the B 0 → K * 0 µ + µ − likelihood at the best fit point of the scan of the flavour EFT shown in Ref. [158]. In the Higgs sector, we take the best-fit likelihood obtainable by allowing the mass, width and decay branching fractions of a single scalar to vary freely in order to fit the full set of data contained in HiggsSignals.
The log-likelihood difference between the best fit in the CMSSM and the ideal likelihood is ∆ln L_BF = 36.820. This difference is largely driven by known anomalies that cannot be explained by either the SM or the MSSM, including the magnetic moment of the muon (∆ln L = 6.289; see Ref. [159]) and the B⁰ → K*⁰ µ⁺µ⁻ angular observables (∆ln L = 11.828; see Ref. [158]). The largest contribution to ∆ln L_BF comes from anomalies in tree-level B and D decays, in particular the partially-correlated branching fraction ratios R_D and R_D*. These have values of 0.308 and 0.248 respectively at the best-fit point of the scan. For comparison, the SM predictions are 0.300 and 0.252, whereas the observed values are 0.403 ± 0.047 and 0.310 ± 0.017. Further discussion and details can be found in the FlavBit paper [158].
The log-likelihood difference of a point relative to the ideal log-likelihood can be used to give some indication of the goodness of fit, as its definition is very similar to half of the "likelihood χ²" of Baker & Cousins [318]. The likelihood χ² is known to follow a χ² distribution in the asymptotic limit. The main difficulty in using this fact is estimating the effective degrees of freedom of the fit, as carrying out simulations to find the true distribution of our test statistic is computationally intractable.¹³ Given the number of observables that actively constrain the fit, a reasonable guess for the effective degrees of freedom is probably something in the range of 30-50, leading to a p-value of between 2 × 10⁻⁵ and 0.02 for the CMSSM; neither a particularly good fit nor catastrophically bad, given the uncertainties involved in the estimate of the p-value. Taking 40 as a canonical estimate of the degrees of freedom, for the sake of later comparison with the NUHM1 and NUHM2, the p-value would be 9.4 × 10⁻⁴.

Table 5: Best-fit points in the CMSSM, for each of the regions characterised by a specific mechanism for suppressing the relic density of dark matter. Here we show the likelihood contributions, parameter values at each point, and some quantities relevant for the interpretation of mass spectra at the different best fits. We also give likelihood components for a canonical 'ideal' likelihood (see text), along with its offset from the global best-fit point. SLHA1 and SLHA2 files corresponding to the best-fit point in each region can be found in the online data associated with this paper [163].

¹³ We note that a similar thing has been done in the CMSSM [312], but using a likelihood function far quicker to compute than ours, based on interpolation in a 2D grid of LHC signal yields rather than explicit simulation for each parameter combination.
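The quoted p-value can be reproduced from the likelihood χ² ≈ 2∆ln L_BF with 40 effective degrees of freedom; for even degrees of freedom the χ² survival function has a simple closed form:

```python
import math

def chi2_sf(x, dof):
    """Survival function P(chi^2 > x) for even dof, via the closed-form
    Poisson sum: exp(-x/2) * sum_{i < dof/2} (x/2)^i / i!."""
    assert dof % 2 == 0
    half = x / 2.0
    term = math.exp(-half)
    total = term
    for i in range(1, dof // 2):
        term *= half / i
        total += term
    return total

# likelihood chi^2 ~ 2 * Delta lnL_BF = 2 * 36.820, with 40 effective dof
p = chi2_sf(2 * 36.820, 40)   # ~ 9.4e-4, matching the value quoted in the text
```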

NUHM1
The main results from the NUHM1 scan are shown in Figs. 5 and 6. Fig. 5 shows results in the m 0 -m 1/2 and tan β-A 0 planes, with plots of the profile likelihood ratio on the left and the DM annihilation mechanisms, defined as in the previous subsection, on the right. In comparison to the CMSSM equivalent, Fig. 2, one can see that the additional freedom in the NUHM1 substantially extends the likelihood contours, so that much of the parameter space is now allowed.
In particular, we now find a stau co-annihilation region, which was absent in the CMSSM results. The extra freedom present in the Higgs sector of the NUHM1 avoids the penalty from the LHC Higgs likelihood seen in stau co-annihilation models in the CMSSM. Furthermore, we now see chargino co-annihilation solutions within the 2σ contours that extend to arbitrarily low m_0. Both are a consequence of the fact that once m_0 is decoupled from m_H, the former can be pushed low without impacting EWSB. This allows light staus to exist, making stau co-annihilation viable, and also means that |µ| can be low at arbitrarily small m_0, leading to Higgsino LSPs.
Such low values of m_0 in chargino co-annihilation scenarios suggest that the first- and second-generation squarks may be light enough to be constrained directly by collider searches. However, a detailed examination reveals that their masses remain above 2 TeV, and out of reach of LHC limits, for all models within our 2σ contours.
A similar expansion of the chargino co-annihilation region¹⁴ has been seen in the previous literature comparing the CMSSM and NUHM1 models (see e.g. Fig. 6 of Ref. [129] and Fig. 1 of Ref. [145]), although the contours do not reach arbitrarily low m_0 for all m_1/2 in those studies. This difference can be explained by the additional freedom associated with applying the relic density measurement only as a one-sided limit. We checked that demanding that neutralinos make up all of DM removes some low-m_0 scenarios from the 2σ contours, bringing our results into better agreement with those in the literature.
These low m 0 values also mean that the NUHM1 admits significantly lighter squarks within the chargino co-annihilation region than the CMSSM does.
We give results in the m H -m 1/2 and m H -m 0 planes for the NUHM1 in Fig. 6. These plots show a sharp cut-off in the likelihood near the diagonals m 0 ≈ m H and m 1/2 ≈ m H , such that m H must be greater than both m 0 and m 1/2 . This structure emerges from the combination of several effects. Reducing m H with respect to m 0 and m 1/2 causes m 2 Hu to run to a more negative value, which in turn leads to a larger µ through the EWSB conditions. Past this boundary in Fig. 6, µ is then always significantly larger than the bino mass, M 1 . As a result the neutralino LSP is always bino-dominated in these scenarios, and requires either an A/H-funnel or a sfermion co-annihilation mechanism to reduce the relic density to the measured value or below. The A/H-funnel mechanism also requires µ to be small, as µ 2 contributes to the pseudoscalar mass; if µ is much larger than 2M 1 , the funnel relation m A ≈ 2m χ̃ 0 1 ≈ 2M 1 cannot be achieved. Although we do find majority-bino LSPs annihilating through an A/H funnel, these have smaller values of µ than can be achieved when m H is less than either m 0 or m 1/2 . Finally, sfermion co-annihilation can be effective in this region, but only for lower values of m 0 and m 1/2 . In those scenarios, if m H ≫ m 0 , m 1/2 , the likelihood is suppressed by the LHC Higgs likelihood, because it is difficult to fit the 125 GeV Higgs mass there.
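The link between a more negative m 2 Hu and a larger µ follows from the tree-level EWSB minimisation condition of the MSSM (shown schematically; loop corrections shift it quantitatively):

```latex
\frac{m_Z^2}{2} \;=\; \frac{m_{H_d}^2 - m_{H_u}^2\tan^2\beta}{\tan^2\beta - 1} \;-\; \mu^2
\;\;\xrightarrow{\;\tan\beta \,\gg\, 1\;}\;\;
\mu^2 \;\simeq\; -\,m_{H_u}^2 \;-\; \frac{m_Z^2}{2}\,,
```

so driving m 2 Hu more negative at the SUSY scale directly enlarges µ 2 .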
We investigated charge- and colour-breaking minima in the NUHM1 in the same way as in the CMSSM (Sec. 4.1). As in the CMSSM, a number of models within our 95% CL regions are affected by one or more of the three proposed conditions, but removing all such parameter combinations does not move the best fit to a different region, nor substantially change the regions of parameter space preferred by our fits.
As we did for the CMSSM, in Table 6 we show the best-fit points for each mechanism for depleting the relic density of DM. The best-fit point in the chargino co-annihilation region has small χ̃ 0 1 , χ̃ 0 2 and χ̃ ± 1 masses, but escapes LHC exclusion due to the highly compressed mass spectrum of these sparticles. As in the CMSSM, the overall best-fit point lies in the stop co-annihilation region. Its mass spectrum is shown in Fig. 7. There are important differences to the CMSSM case, however. Firstly, the stop is heavier, now sitting just above 1 TeV in mass. The t̃ 1 −χ̃ 0 1 mass difference is once again roughly 40 GeV, ensuring prompt decay of the stop. The heavier stop mass is accompanied by a heavier mass spectrum in general, with no sparticles lighter than 800 GeV. The χ̃ 0 1 is pure bino, but the χ̃ 0 2 , χ̃ 0 3 and χ̃ ± 1 are now predominantly Higgsino in character, leaving the χ̃ 0 4 and χ̃ ± 2 mostly wino. Discovery of this point would be very challenging at the LHC in the near future, due to the heavy weakly-coupled states and the lack of light coloured states with a large mass splitting from the χ̃ 0 1 .

Fig. 7: Sparticle mass spectrum of the NUHM1 best-fit point.

For the NUHM1, ∆ ln L BF = 36.702, slightly better than what we found in the CMSSM. For the sake of comparison with the CMSSM (p = 9.4 × 10 −4 if computed with 40 degrees of freedom), we can compute a p-value assuming one less degree of freedom, i.e. 39. This gives 7.1 × 10 −4 , slightly worse than the CMSSM. Despite the improvement in the fit, the gain in ∆ ln L BF is not large enough to outweigh the penalty associated with the introduction of the additional parameter.

NUHM2
The NUHM2 results are shown in Figs. 8 and 9. Figure 8 shows results in the m 0 -m 1/2 and tan β-A 0 planes, with plots of the profile likelihood ratio on the left and the annihilation mechanism, defined at the start of section 4.1, on the right.
In comparison to the NUHM1 results, the stau co-annihilation region is significantly extended, covering higher values of m 1/2 , and lower A 0 and tan β. This is due to the modification of the RG running of the soft scalar stau masses that occurs when the soft Higgs masses are split at the GUT scale. A similar effect has been observed and discussed for this model in Ref. [148]. As in the NUHM1, despite the low values of m 0 , models within our 2σ contours do not generate first- and second-generation squark masses below ∼2 TeV.
In Fig. 9 we show the structure of the m Hu -m 1/2 and m Hd -m 1/2 planes, in the same format as the m 0 -m 1/2 and tan β-A 0 planes. The m Hu -m 1/2 plot is quite similar to the corresponding m H -m 1/2 plane (Fig. 6) for the NUHM1 model discussed in section 4.2, while in contrast there is not much structure in the m Hd -m 1/2 plane. 15 This could be anticipated from the NUHM1 results (again Fig. 6), as the structure there was caused by the fact that a smaller m Hu at the GUT scale leads to µ ≫ M 1 at the SUSY scale, making bino DM the only possibility. As in the NUHM1 case, the A/H-funnel mechanism for the bino again does not work, because µ is too large to allow m A ≈ 2M 1 . However, the extra freedom in the Higgs sector from splitting m Hu and m Hd at the GUT scale does allow a better LHC Higgs likelihood at smaller m 0 and m 1/2 , so that the stau and stop co-annihilation regions can be found when m Hu ≳ m 0 , m 1/2 .

Table 6: Best-fit points in the NUHM1, for each of the regions characterised by a specific mechanism for suppressing the relic density of dark matter. Here we show the likelihood contributions, parameter values at each point, and some quantities relevant for the interpretation of mass spectra at the different best fits. We also give likelihood components for a canonical 'ideal' likelihood (see text), along with its offset from the global best-fit point. SLHA1 and SLHA2 files corresponding to the best-fit point in each region can be found in the online data associated with this paper [163].
As with the NUHM1, we checked the three charge- and colour-breaking conditions mentioned in Sec. 4.1. The results were as in the NUHM1: some individual parameter combinations are affected, but the overall inference is not.
We give a table of best-fit NUHM2 points in Table 7, with the mass spectrum for the overall best fit shown in Fig. 10. Once again, the overall best fit is obtained for a point that satisfies the relic density bound through stop co-annihilation. The t̃ 1 mass is 950 GeV, but the mass difference with the χ̃ 0 1 is now less than 20 GeV, making this very difficult to resolve at the LHC. The neutralino-chargino sector features a pure bino χ̃ 0 1 , wino-dominated χ̃ 0 2 and χ̃ ± 1 , and Higgsino-dominated heavier states. The heavy spectrum, combined with the smallness of the t̃ 1 −χ̃ 0 1 mass difference, makes this a particularly challenging scenario for collider searches.

Fig. 10: Sparticle mass spectrum of the NUHM2 best-fit point.
For the NUHM2, ∆ ln L = 36.362, indicating a better fit than either the CMSSM or NUHM1. This is expected to some extent, as the NUHM2 has one more free parameter than the NUHM1. Indeed, accounting for the extra freedom in the fit (i.e. adopting a canonical 38 degrees of freedom instead of 39 or 40; see previous subsections) and computing the implied p-value, the result is just 5.9 × 10 −4 . This actually disfavours the NUHM2 compared to the NUHM1 (p = 7.1 × 10 −4 for 39 dof) and the CMSSM (p = 9.4 × 10 −4 for 40 dof), because its additional parameter does not provide a sufficiently large improvement to the overall fit.
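The p-values quoted here and in the previous subsection follow from treating 2∆ ln L as χ²-distributed with the stated number of degrees of freedom. A minimal stdlib sketch of that conversion (the recurrence-based survival function below is our own illustration, not the code used in the paper):

```python
import math

def chi2_sf(x, k):
    """Survival function P(X > x) of a chi-square variable with k d.o.f.,
    i.e. the regularised upper incomplete gamma function Q(k/2, x/2),
    using only the standard library."""
    s, t = k / 2.0, x / 2.0
    if k % 2 == 0:
        # even k: Q(m, t) = exp(-t) * sum_{i<m} t^i / i!  (finite Poisson sum)
        term, total = math.exp(-t), 0.0
        for i in range(k // 2):
            total += term
            term *= t / (i + 1)
        return total
    # odd k: upward recurrence from Gamma(1/2, t) = sqrt(pi) * erfc(sqrt(t)),
    # using Gamma(a+1, t) = a * Gamma(a, t) + t**a * exp(-t)
    g, a = math.sqrt(math.pi) * math.erfc(math.sqrt(t)), 0.5
    while a < s:
        g = a * g + t ** a * math.exp(-t)
        a += 1.0
    return g / math.gamma(s)

p_nuhm1 = chi2_sf(2 * 36.702, 39)  # ~7e-4, NUHM1 with 39 d.o.f.
p_nuhm2 = chi2_sf(2 * 36.362, 38)  # ~6e-4, NUHM2 with 38 d.o.f.
```

Both values reproduce the numbers quoted in the text at the stated precision.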
We have not commented so far on the ability of any of the models to explain the large discrepancy between the measured value of a µ ≡ (g−2) µ /2 and that predicted by the SM [238,319]. This is because, with the heavy spectra found, a sizeable supersymmetric contribution to ∆a µ is not expected. However, in Ref. [148] (see right panel of Fig. 12 in that paper), it was found that although the best fit for the NUHM2 predicts a very small a µ , there are points within the 2σ contours that predict significantly larger values of around 2 × 10 −9 , which may give some grounds for optimism. In contrast, the MSSM contribution to a µ within the 2σ confidence regions of our CMSSM, NUHM1 and NUHM2 fits is below 5 × 10 −10 . We show this visually in Fig. 11. Therefore, with the latest data and using GM2Calc to obtain the most precise calculation available of the supersymmetric contributions to the anomalous magnetic moment of the muon, we find that none of the GUT-scale models that we consider can make a significant contribution to resolving this discrepancy.
A similar point can be made about the flavour anomalies associated with the angular observables in B 0 → K * 0 µ + µ − decays and the ratios R D and R D * ; none of the best-fits or 95% CL regions of our scans indicate any ability for these data to be explained within the CMSSM, NUHM1 or NUHM2.

Discovery Prospects
In the following we discuss the discovery prospects of the CMSSM, NUHM1 and NUHM2, given current constraints. We first address prospects at the LHC, followed by direct and indirect detection of DM.

LHC
In Fig. 12, we show the 1D profile likelihood ratio for the masses of the gluino, lightest (third generation) squarks, lightest stau, lightest chargino and lightest neutralino in the CMSSM, NUHM1 and NUHM2. The likelihood is generally low for coloured sparticles light enough to be in reach of LHC Run II, but there is an interesting peak of high likelihood at low stop masses for all three models, centred on the best-fit masses of 592, 1030 and 950 GeV for the CMSSM, NUHM1 and NUHM2 respectively. At least naively, this appears worthy of further investigation for each model, in terms of the potential for discovery at the LHC.
Concentrating first on the profile likelihood for m t̃ 1 in the CMSSM, the first consideration is the mass difference m t̃ 1 − m χ̃ 0 1 for models with a low stop mass, as experimental prospects generally deteriorate rapidly for more compressed spectra. The CMSSM 1D profile likelihood ratio for the mass difference m t̃ 1 − m χ̃ 0 1 is shown in red in the top panels of Figure 13, while Figure 14 shows the 2D profile likelihood in the t̃ 1 −χ̃ 0 1 mass plane. The low-mass stop solutions all satisfy the relic density constraint through stop co-annihilation, giving stop-neutralino mass differences below ∼50 GeV. For very small mass differences, below the mass of the b quark, these points could be probed by long-lived particle searches at the LHC. We defer a detailed study of this to future work.
If the stop decays promptly, however, this region can in principle be probed by LHC compressed-spectrum searches, particularly the recent Run II updates that were not included in our initial scan. Although we plan a detailed analysis of the full range of recent LHC results in a forthcoming paper, some insight can be gained by examining the recent 36 fb −1 simplified model limits presented by the CMS experiment [322-326] at 13 TeV. They carried out stop searches in a variety of final states, and interpreted them in terms of a model in which stop pair production is immediately followed by decay to a (possibly off-shell) top quark and the lightest neutralino. Although this is not necessarily the case for our models, the simplified model limit acts as a guide to the strongest possible exclusion potential of these Run II searches. We show this limit in Fig. 14 as a red line. The low-mass part of our 2σ best-fit region remains out of reach of the latest CMS search. We have also checked that the models in this region emerge almost unscathed when compared to recent ATLAS limits on compressed stop scenarios [327-329], 16 but there is some hope that at least the lower parts of this region will be probed in the near future.

16 We note that the ATLAS limit assumes a 100% branching fraction for the process t̃ 1 → c χ̃ 0 1 . We have checked that this agrees closely with the branching fractions returned by DecayBit and SUSY-HIT for our best-fit stop co-annihilation point.

Table 7: Best-fit points in the NUHM2, for each of the regions characterised by a specific mechanism for suppressing the relic density of dark matter. Here we show the likelihood contributions, parameter values at each point, and some quantities relevant for the interpretation of mass spectra at the different best fits. We also give likelihood components for a canonical 'ideal' likelihood (see text), along with its offset from the global best-fit point. SLHA1 and SLHA2 files corresponding to the best-fit point in each region can be found in the online data associated with this paper [163].

Completely excluding the stop co-annihilation region in the CMSSM would require probing compressed spectra in lightest stop decays up to a stop mass of approximately 900 GeV. Although finding such models is challenging at the LHC, stop pair production is within the kinematic reach of a multi-TeV linear collider for the whole region, and dedicated analyses, similar to searches for Higgsino-dominated neutralinos, should be effective in constraining such models.

This picture changes in the NUHM1, which is most easily seen by examining which mechanism for obeying the relic density constraint is active in each region of the t̃ 1 −χ̃ 0 1 mass plane. Figure 15 shows that, whereas the entire CMSSM 95% CL region at low stop masses arises from stop co-annihilation, the extra freedom in the NUHM1 model allows the existence of points with low stop mass that generate the required relic density through either the stau co-annihilation or chargino co-annihilation mechanisms (or indeed some combination thereof). There is hence a region with stop masses below 1 TeV that would exhibit larger t̃ 1 −χ̃ 0 1 mass differences, making future discovery at the LHC an easier prospect. Indeed, comparison with the most recent CMS simplified model limits demonstrates that part of this chargino co-annihilation region may already have been probed [320]. Still, the region of highest likelihood is the stop co-annihilation region, with m t̃ 1 − m χ̃ 0 1 ≲ 50 GeV, as can be seen in blue in the top panels of Figure 13. Excluding the stop co-annihilation mechanism entirely in the NUHM1 is more difficult than in the CMSSM, requiring the ability to probe compressed spectra for t̃ 1 masses up to approximately 1700 GeV, as seen in Figure 15. The situation in the NUHM2 model (not shown) is qualitatively similar. Figure 12 also shows the presence of a region with relatively small τ̃ 1 masses, particularly in the NUHM1 and NUHM2. These masses are, however, already too large to lead to substantial stau production, given the small direct production cross-section [331] at 13 TeV.

Fig. 15: 95% CL 2D profile likelihoods in the t̃ 1 −χ̃ 0 1 mass plane, coloured according to the mechanism(s) active in depleting the relic density. Left: the CMSSM. Right: the NUHM1. Superimposed in red is the latest CMS Run II simplified model limit for t̃ 1 pair production and subsequent decay to t quarks and the lightest neutralino [320]. This limit should be interpreted with caution (for details see main text).
Finally, we consider the prospects for discovery of charginos and neutralinos in the CMSSM, NUHM1 and NUHM2, as there are high-likelihood model points with relatively low χ̃ 0 1 and χ̃ ± 1 masses in all three models. In Figure 16, we show the profile likelihood ratio in the χ̃ ± 1 −χ̃ 0 1 mass plane for the CMSSM, as well as a version colour-coded by the mechanism for depleting the relic density. For low masses, there is always a strict correlation between the χ̃ ± 1 and χ̃ 0 1 masses in the CMSSM. The m χ̃ ± 1 ∼ m χ̃ 0 1 correlation is a consequence of a Higgsino-dominated lightest neutralino, which is always accompanied by a Higgsino-dominated chargino with a similar mass, leading to chargino co-annihilation. In the stop co-annihilation region, the neutralino is dominantly bino, and the chargino is mostly wino, with a mass about twice that of the neutralino. This comes from the approximate 2 : 1 ratio between the low-scale wino and bino mass parameters, produced by RGE running from the common GUT-scale input value m 1/2 . We also show the envelope of the latest CMS simplified model interpretations for χ̃ ± 1 production. These limits should be interpreted with caution rather than read as a strict statement of exclusion power, as we have not performed a detailed examination of our model points to check that the EW gaugino mixing matrices and decay branching ratios match the CMS assumptions.

Fig. 16 (right): Colour-coding shows the mechanism(s) that allow models within the 95% CL region to avoid exceeding the observed relic density of DM. Superimposed in red is the latest CMS Run II simplified model limit for χ̃ ± 1 χ̃ 0 1 pair production and decay with decoupled sleptons [321]. This limit should be interpreted with caution (for details see main text).

Fig. 17 (right): Colour-coding shows the mechanism(s) that allow models within the 95% CL region to avoid exceeding the observed relic density of DM. Superimposed in red is the latest CMS Run II simplified model limit for χ̃ ± 1 χ̃ 0 1 pair production and decay via sleptons [330]. This limit should be interpreted with caution (for details see main text).
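The approximate 2 : 1 wino : bino ratio quoted above follows from the one-loop result that M i /g i 2 is RG-invariant, so M i (m Z ) ≈ (α i (m Z )/α GUT ) m 1/2 . A rough numerical check, using approximate m Z -scale couplings and an illustrative α GUT (this is a back-of-the-envelope sketch, not output from the spectrum generator):

```python
# One-loop estimate of low-scale gaugino masses from a unified m_1/2.
# Coupling values are approximate m_Z-scale inputs; alpha_GUT is illustrative.
ALPHA_EM = 1 / 127.9        # electromagnetic coupling at m_Z (approximate)
SIN2_THETA_W = 0.231        # weak mixing angle (approximate)

alpha2 = ALPHA_EM / SIN2_THETA_W                       # SU(2) coupling
alpha1 = (5.0 / 3.0) * ALPHA_EM / (1 - SIN2_THETA_W)   # GUT-normalised U(1)

def low_scale_gaugino_masses(m12, alpha_gut=1 / 24.0):
    """Approximate one-loop low-scale bino (M1) and wino (M2) masses."""
    return (alpha1 / alpha_gut) * m12, (alpha2 / alpha_gut) * m12

M1, M2 = low_scale_gaugino_masses(1000.0)  # m_1/2 = 1 TeV, masses in GeV
ratio = M2 / M1                            # independent of alpha_gut
```

The ratio comes out very close to 2, independently of the assumed α GUT , which is why a mostly-wino chargino sits at roughly twice the mass of a bino LSP.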
Only the low-mass tip of the stop co-annihilation part of our 2σ region has been probed by the most recent CMS analyses, and the best-fit point is far beyond the current LHC reach.

Figure 17 shows the profile likelihood ratio in the χ̃ ± 1 −χ̃ 0 1 mass plane for the NUHM1. The low-mass region now sees a contribution from stau co-annihilation in addition to chargino co-annihilation. This is interesting for LHC searches, as the assumption of decoupled sleptons clearly no longer applies. The recent CMS simplified model interpretations include a model in which the sleptons are not decoupled, but the interpretation is even more fraught than that of the previous simplified models we have considered: the slepton masses are fixed in these scenarios, and one can generically expect the strength of the exclusions to decrease as one departs from both the mass and the branching ratio assumptions. Nevertheless, we show this limit in Figure 17 in order to demonstrate the most optimistic possible exclusion, compared to our 2D profile likelihood. Almost the entire region with compressed spectra remains unprobed. As the bottom right plot of Figure 13 shows, the highest likelihood in the degenerate region is obtained for chargino-neutralino mass differences of less than 15 GeV, which is small enough to escape the CMS searches. However, the part of our low-mass 95% CL region without degenerate χ̃ ± 1 −χ̃ 0 1 masses may be within current LHC reach, and there is hope that the LHC will prove capable of exploring part of the 68% CL region in the near future. The situation in the NUHM2 is shown in Figure 18; the main difference with the NUHM1 is that the low-mass region potentially explored by the recent CMS analysis updates falls within our 68% CL preferred region.

Direct detection
Now we turn to the discovery prospects at future direct DM detection experiments. In Figs. 19 and 20, we show the spin-independent (SI) and spin-dependent (SD) nuclear scattering cross-sections of the lightest neutralino as a function of its mass, scaled by the fraction of the local density of DM in neutralinos. We give the full profile likelihood for the CMSSM only. The other panels show 2σ confidence regions for each model, with colour-coded mechanisms for reducing the relic density to or below the observed value.
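The rescaling referred to here can be sketched as follows. Direct detection event rates scale linearly with the fraction f of the local DM density in neutralinos; the values below are illustrative, not taken from the scans:

```python
# Schematic rescaling applied to direct detection cross-sections:
# sigma_eff = f * sigma_SI, with f = Omega_chi / Omega_DM (capped at 1).
OMEGA_DM_OBS = 0.1188   # observed Omega_DM h^2 (approximate)

def rescaled_si_cross_section(omega_chi, sigma_si):
    """Effective SI cross-section compared against experimental limits."""
    f = min(omega_chi / OMEGA_DM_OBS, 1.0)
    return f * sigma_si
```

A point whose neutralinos make up only half the observed relic density therefore sits a factor of two below its nominal cross-section when compared against the experimental exclusion curves.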
In the CMSSM, the chargino co-annihilation and A/H-funnel regions largely overlap, predicting SI cross-sections in the range 10 −46 -10 −44 cm 2 and neutralino masses from 200 to 2500 GeV. This region will be almost fully probed by XENON1T after two years of data-taking (long dashed curve), and could be completely excluded by XENONnT or LZ with 1-3 years of data (short dashed curve). The stop co-annihilation region has significantly lower SI cross-sections, with an LSP that can be almost pure bino in nature. Most of this region is outside the projected reach of future multi-tonne detectors, although the proposed ∼50-tonne DARWIN experiment may probe it slightly (dotted curve). As discussed in Sec. 4.4.1, this region is also difficult to see at the LHC, but may be within the reach of a future linear collider.

Fig. 19: Colour-coding shows the active mechanism(s) by which CMSSM models avoid exceeding the observed relic density of DM, through either chargino co-annihilation, the A/H funnel, or stop co-annihilation. Top Right: Colour-coded regions in the NUHM1, now also featuring stau co-annihilation (blue). Bottom Right: Colour-coded regions of the NUHM2. 90% CL exclusion limits are overlaid from the complete LUX exposure [229], the projected reach of XENON1T with two years of exposure, the projected reach of XENONnT/LZ with 20 tonne-years of exposure [336] (around 1-3 years of data), and the projected reach of DARWIN with 200 tonne-years of exposure [337] (around 3-4 years of data). The "neutrino floor", where the coherent neutrino background starts to limit the experimental sensitivity, is indicated by the dashed grey line [338]. The exact position of this limit is subject to several caveats; see [338] for further details.
The NUHM1 and NUHM2 display similar properties, with large parts of the chargino co-annihilation and A/H-funnel regions able to be tested in the near future, including models with very heavy LSPs (in the NUHM2 case, up to 4.5 TeV). Some of the chargino co-annihilation region at relatively low LSP masses (< 1250 GeV) will remain untested by future direct detection experiments until DARWIN. Parts of both the stop and stau co-annihilation regions will escape all direct detection, even DARWIN, although other parts of this region at higher masses will be easily detected or excluded by XENON1T.
In Fig. 20 we show the rescaled spin-dependent neutralino-proton scattering cross-sections for the CMSSM, NUHM1 and NUHM2. Here we overplot current PICO limits not included in our scan [339], and sensitivity estimates for PICO-250, a scaled-up version of PICO-60 [340]. We also show the IceCube 79-string limits from Ref. [7], for two different annihilation final states.

Fig. 20: Colour-coding shows the active mechanism(s) by which CMSSM models avoid exceeding the observed relic density of DM, through either chargino co-annihilation, the A/H funnel, or stop co-annihilation. Top Right: Colour-coded regions in the NUHM1, now also featuring stau co-annihilation (blue). Bottom Right: Colour-coded regions of the NUHM2. 90% CL exclusion limits are overlaid from the 79-string IceCube search for DM [7,223], assuming dark matter annihilation in the Sun to b̄b (yellow solid) and τ + τ − (red solid) final states, from PICO-60 [339] (green solid), and projected limits from PICO-250 [340] (green dashes).
We can see that the preferred regions are relatively far from the current limits, so future direct detection experiments are unlikely to probe them much further. However, the proposed neutrino telescopes IceCube-PINGU [341] and KM3NeT-ORCA [342] may have sufficient additional sensitivity to test the models with the largest SD cross-sections. Although estimates of the expected sensitivity of these experiments to σ SD exist, those estimates do not (yet) extend above DM masses of 100 GeV, so at present they can tell us little about prospects for discovery of the CMSSM, NUHM1 or NUHM2.

Fig. 19 shows the 2σ allowed region for the SI cross-section extending to substantially lower values in the CMSSM than in the NUHM1 or NUHM2. This seems surprising, as the CMSSM is a subspace of the NUHM1 and NUHM2, so all viable CMSSM models are also viable NUHM1 and NUHM2 models. The improvement in the best-fit likelihood in the NUHM1 compared to the CMSSM is not sufficient to explain this effect. The smallest scattering cross-sections are caused by cancellations in the tree-level matrix elements, which can be tuned to essentially arbitrary accuracy. A consequence of this is that models become steadily more fine-tuned as the cross-section asymptotically approaches zero, and therefore steadily more difficult for sampling algorithms to find. What we see here is evidence of the additional numerical difficulty of finding such points in the NUHM1 and NUHM2, due to the additional challenge of dealing with more dimensions and a more diverse set of viable regions of parameter space. However, in models where the mass parameters unify at a high scale, loop corrections [343,344] are expected to spoil such carefully-tuned cancellations anyway, holding cross-sections well above the lowest values that we see in the CMSSM [345].
The fact that we have found scattering cross-sections as low as 10 −60 cm 2 in the CMSSM, but not at quite such low values in the NUHM1 and NUHM2, is therefore ultimately of little physical significance. Reaching such values in the CMSSM is nonetheless a remarkable numerical feat, made possible only by our use of Diver. It increases our confidence in the completeness of our sampling in the rest of the parameter space, and in fits of weak-scale MSSM models [162].

Indirect detection
To assess the discovery prospects for future indirect searches for DM, in Fig. 21 we show the rescaled zero-velocity annihilation cross-section f 2 · σv 0 , as a function of the mass of the lightest neutralino. Here f is again the ratio of the neutralino relic density in the model to the observed relic density of DM. Note that we implicitly assume that if neutralinos are not all of the DM, the other component(s) of DM cluster in the same way as neutralinos, leading to the same f cosmologically, in dwarf galaxies and in the local halo. Although this need not be true in general, the requirements that DM be cold and (almost) non-interacting mean that this should be a reasonably good approximation.
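A minimal sketch of this rescaling, with illustrative numbers; note the quadratic dependence on f, in contrast to the linear scaling relevant for direct detection:

```python
# Schematic rescaling used for indirect detection: the annihilation signal
# scales as f^2 * <sigma v>_0, with f = Omega_chi / Omega_DM (capped at 1),
# because the rate depends on the square of the neutralino number density.
OMEGA_DM_OBS = 0.1188   # observed Omega_DM h^2 (approximate)

def rescaled_sigmav(omega_chi, sigmav0):
    """f^2 * <sigma v>_0, the quantity plotted against indirect limits."""
    f = min(omega_chi / OMEGA_DM_OBS, 1.0)
    return f ** 2 * sigmav0

# An underabundant neutralino with half the observed relic density has its
# predicted annihilation signal suppressed by a factor of four:
signal = rescaled_sigmav(0.5 * OMEGA_DM_OBS, 3e-26)  # cm^3 s^-1
```

This quadratic suppression is why underabundant points sit well below the thermal benchmark in Fig. 21 even when their unrescaled σv 0 is large.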
The upper left panel of Fig. 21 shows the profile likelihood for the CMSSM, and the remaining panels show the mechanisms by which models in the CMSSM (bottom left), NUHM1 (top right) and NUHM2 (bottom right) avoid producing too much thermal DM. In the same figure we also indicate, for comparison, current limits from dwarf galaxy observations by the Fermi-LAT [221], assuming photon spectra for DM annihilation to b̄b and τ + τ − final states. We also show projected Fermi limits for b̄b final states [346], assuming 15 years of data on 60 dwarf galaxies (vs 6 yr and 15 dwarfs in the current limits). Lastly, we show the projected sensitivity of CTA after 500 hours of observation of the Galactic halo, also assuming b̄b final states [347]. We note that the actual (projected) limits depend on the final state, and hence the lines shown are only indicative for general points in the CMSSM parameter space. However, as long as the final states are hadronic, the expected variations remain within a factor of about three [348].
In general, the largest annihilation cross-sections are expected for the A/H funnel region, where resonant annihilation boosts σv. All models with annihilation cross-sections above the canonical thermal value (3 × 10 −26 cm 3 s −1 ) exhibit resonant annihilation through the A funnel (note that in the zero-velocity limit, due to the CP properties of the initial state, only the pseudoscalar resonances can contribute). As one would expect, all regions above this value in Fig. 21 are indeed identified as being part of the A/H funnel region, indicated by the fact that they are shaded orange. Some parts of these regions are also shaded yellow and/or blue, as some of the parameter points identified as belonging to the A/H funnel region also satisfy the necessary mass/composition conditions to be counted as part of the stau and/or chargino coannihilation regions. However, in all regions of overlap above f 2 · σv 0 ∼ 3 × 10 −26 cm 3 s −1 , resonant annihilation via the heavy Higgs bosons is the dominant mechanism in setting the relic density. Most such models exhibit a relic density below the observed value. 17 Indeed, most of the high-mass models identified in our scans as having stau and/or chargino co-annihilation also exhibit resonant annihilation through the heavy Higgs funnel. 18 Indeed, as discussed at the beginning of Sec. 4 in the context of Fig. 1, above DM masses of around a TeV, chargino co-annihilation in the CMSSM is unable to deplete the relic density to the thermal value or below without additional assistance from the A/H resonance -so in fact, all chargino co-annihilation models above about a TeV are hybrid models of some kind. 
This point is confirmed by careful study of all other CMSSM, NUHM1 and NUHM2 plots in this section: for m χ ≳ 1 TeV, chargino co-annihilation regions only appear in the presence of the heavy Higgs funnel, indicating that all high-mass chargino co-annihilation models are in fact either hybrids with, or completely dominated by, the heavy Higgs funnel. The NUHM1 and NUHM2 plots also show that the same situation holds for stau co-annihilation at m χ ≳ 1.5 TeV, which at such masses appears only in combination with the heavy Higgs funnel.

Fig. 21: Colour-coding shows the active mechanism(s) by which CMSSM models avoid exceeding the observed relic density of DM, through either chargino co-annihilation, the A/H funnel, or stop co-annihilation. Top Right: Colour-coded regions in the NUHM1, now also featuring stau co-annihilation (blue). Bottom Right: Colour-coded regions of the NUHM2. 95% CL exclusion limits are overlaid from the 6-year Fermi-LAT search for DM annihilation in 15 satellite dwarf galaxies [221], assuming dark matter annihilation to b̄b (yellow solid) and τ + τ − (red solid) final states. We also show the projected improvement for b̄b final states with 15 years of LAT data and four times as many dwarfs [346] (dashed yellow), and an optimistic projection of the sensitivity to b̄b final states of a Galactic halo search for DM annihilation by the upcoming Cherenkov Telescope Array, assuming 500 hr of observations and no systematic uncertainties [347] (green dashes).
In the CMSSM, Fermi will generally only probe the low-likelihood tails of the A/H funnel region and its co-annihilation hybrid, with the exception of ∼ 1 TeV Higgsinos (see below). Taking the optimistic predictions of Ref. [347] at face value, CTA will significantly cut into this region. However, this ignores the impact of detector and background systematics. Adding a systematic uncertainty of 1%, the sensitivity would degrade by a factor of ∼6 [349], and hence probe a significantly smaller part of the funnel and/or chargino co-annihilation region. The stop co-annihilation region, on the other hand, will remain largely unconstrained in the CMSSM, even with future indirect detection missions.
Let us stress, however, that indirect detection prospects will generally be better than indicated by this general discussion. One aspect not taken into account here is the impact of radiative corrections to the annihilation rate, which are particularly relevant for large neutralino masses [222,350,351]. In the stop co-annihilation region, for example, the inclusion of Higgs-strahlung off fermion final states can increase the total annihilation rate by a factor of a few compared to what we have implemented [351]. Antiprotons can also be an efficient complementary probe of compressed mass spectra [352]. For heavy neutralinos with mass-degenerate charginos, another example is a distinct feature from W⁺W⁻γ final states [353], which adds to the already large monochromatic line signal from such models [354-356]; such a signal is much more easily distinguished from astrophysical backgrounds than the spectrum from b̄b final states assumed for CTA in Fig. 21. This signal would also appear in observations of the Galactic Centre well before any dwarf observations shown in Fig. 21 or taken into account in our scans. Last but not least, let us mention the Sommerfeld effect [357-359], which leads to a large enhancement in particular for ∼1 TeV Higgsino DM [359-361], and which we have not (yet) included in GAMBIT.
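To give a sense of the scale of the effect mentioned above, the following sketch evaluates the Sommerfeld enhancement factor in the simple Coulomb limit, S = (πα/v) / (1 − e^(−πα/v)). This is an illustrative toy formula only, not the full electroweak multi-state calculation relevant for Higgsino dark matter; the coupling strength and velocities below are assumed round numbers, not values from our fits.

```python
import math

def sommerfeld_coulomb(alpha_eff: float, v: float) -> float:
    """Coulomb-limit Sommerfeld enhancement of an s-wave annihilation
    cross section: S = x / (1 - exp(-x)), with x = pi * alpha_eff / v.
    alpha_eff is an illustrative effective coupling; v is the relative
    velocity in units of c."""
    x = math.pi * alpha_eff / v
    return x / (1.0 - math.exp(-x))

# Electroweak-strength coupling at a typical freeze-out velocity, v ~ 0.3c:
# the enhancement is mild (a ~15% effect).
print(sommerfeld_coulomb(0.03, 0.3))

# At present-day halo velocities, v ~ 1e-3 c, the enhancement is large,
# which is why indirect detection limits can strengthen dramatically.
print(sommerfeld_coulomb(0.03, 1e-3))
```

The qualitative point is the strong velocity dependence: the same coupling that gives a percent-level correction at freeze-out can enhance present-day annihilation by orders of magnitude.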
In the NUHM1 and NUHM2, Fermi appears to be just beginning to constrain A-funnel models at masses of around 1.7 TeV. The various mechanisms to suppress the relic density are not as well-separated in the ⟨σv⟩₀–m_χ plane in NUHM models as in the CMSSM, however. As a consequence, all these mechanisms can be (partially) tested with CTA. We note that this includes the stau co-annihilation region. The sensitivity curves shown here are overly conservative for such models, because τ⁺τ⁻γ final states will be much more constraining than b̄b spectra [362]. Still, even with the most optimistic assumptions, large parts of the viable parameter spaces of all of the GUT-scale models that we consider here will remain impossible to probe with indirect DM searches.

Conclusions
In this paper we have presented state-of-the-art profile likelihood global fits to three constrained versions of the minimal supersymmetric standard model, using GAMBIT. We have incorporated updated experimental data, additional observables and improved calculations for many quantities compared to previous global fits. We have also fully explored the parameter space in which the models are not excluded by any experimental measurements, specifically including areas where the neutralino only constitutes a fraction of the dark matter in the Universe.
In the CMSSM, we show that the stau co-annihilation region is finally ruled out at more than 95% CL. This results from Run II LHC constraints, the difficulty of fitting the Higgs mass in this region, and an overall lifting of the isolikelihood contours defining the boundaries of this region, brought about by our improved sampling in this paper and the resulting discovery of what is effectively a better best fit than in previous works. The NUHM1 and NUHM2 allow more freedom, permitting lighter staus and a re-appearance of the stau co-annihilation region as a source of fits equally good as those from other mechanisms for depleting the relic density. Those include stop co-annihilation, chargino co-annihilation and resonant annihilation through the A/H funnel. We find that the chargino co-annihilation region also widens substantially in the NUHM1 and NUHM2 compared to the CMSSM, extending to arbitrarily low values of m_0. Current constraints from the LHC push superpartner masses towards the multi-TeV regime, even if one does not demand that the lightest neutralino is the only DM species. The important exceptions are the lightest neutralinos and charginos, which can still have masses as low as ∼100 GeV without violating any experimental constraints, the lightest stau, which can be as light as ∼200 GeV, and the lightest stop, which can be as light as ∼500 GeV.
Despite very heavy spectra in many parts of the parameter space, future direct detection experiments will fully explore the chargino co-annihilation region, encompassing the so-called 'focus point'.
We find a region of good fits at large negative trilinear coupling, where the neutralino relic abundance is set by co-annihilation with the lightest stop. The trilinear couplings in this region raise questions about colour- and charge-breaking vacua, but our tests indicate that large parts of this region remain unaffected by such considerations. More detailed investigation would, however, be interesting. This region has been properly seen only in very recent fits performed contemporaneously with this one [115]. Models in this region feature quite light stops (∼500 GeV), making it very appealing from the point of view of electroweak naturalness. However, the stop co-annihilation mechanism requires the neutralino-stop mass difference to be quite small, which may constitute a fine-tuning in itself. A more detailed analysis of naturalness considerations, including a full Bayesian treatment of the fit, would be illuminating. Models in this region will be challenging to discover at the LHC, and next to impossible at direct detection experiments, but are promising targets for a future linear collider.
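The sensitivity of co-annihilation to the neutralino-stop mass splitting can be illustrated with the standard Griest-Seckel Boltzmann weight, which controls how much a heavier co-annihilating partner contributes to the effective annihilation cross section at freeze-out. The freeze-out parameter and mass splittings below are assumed illustrative values, not outputs of our fits.

```python
import math

def coann_weight(delta: float, g_ratio: float = 1.0, x_f: float = 25.0) -> float:
    """Relative thermal weight of a co-annihilating partner in the
    Griest-Seckel effective cross section:
        w = g_ratio * (1 + delta)**1.5 * exp(-x_f * delta),
    where delta = (m_partner - m_chi) / m_chi is the fractional mass
    splitting, g_ratio the ratio of internal degrees of freedom, and
    x_f = m_chi / T_f (typically ~20-25 at freeze-out)."""
    return g_ratio * (1.0 + delta) ** 1.5 * math.exp(-x_f * delta)

# A ~5% neutralino-stop splitting keeps the stop thermally relevant...
print(coann_weight(0.05))   # O(0.3)
# ...while a ~30% splitting suppresses its contribution to O(1e-3).
print(coann_weight(0.30))
```

This exponential suppression is why the mechanism only operates for nearly degenerate spectra, and hence why it may be regarded as a fine-tuning in its own right.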
We began this study mainly intending to validate the new generic beyond-the-Standard-Model global fitting framework GAMBIT. In the end, however, we have found quite a few genuinely new and interesting results. This serves to illustrate the utility of a modern and adaptive global fitter such as GAMBIT, where the impacts of different searches on different models can easily be examined and compared whilst retaining a consistent treatment of theoretical assumptions, systematics, nuisances, scanning algorithms, statistical approaches, experimental analyses and external code interfaces. All input files, samples and best-fit benchmarks produced for this paper are publicly accessible from Zenodo [163]. The GAMBIT software is available from gambit.hepforge.org.