Search for associated production of a $Z$ boson with an invisibly decaying Higgs boson or dark matter candidates at $\sqrt{s}=13$ TeV with the ATLAS detector

A search for invisible decays of the Higgs boson as well as searches for dark matter candidates, produced together with a leptonically decaying $Z$ boson, are presented. The analysis is performed using proton-proton collisions at a centre-of-mass energy of 13 TeV, delivered by the LHC, corresponding to an integrated luminosity of 139 fb$^{-1}$ and recorded by the ATLAS experiment. Assuming Standard Model cross-sections for $ZH$ production, the observed (expected) upper limit on the branching ratio of the Higgs boson to invisible particles is found to be 19% (19%) at the 95% confidence level. Exclusion limits are also set for simplified dark matter models and two-Higgs-doublet models with an additional pseudoscalar mediator.


Introduction
Understanding the nature of dark matter (DM) is one of the most important goals in particle physics today. Experiments at particle colliders such as the Large Hadron Collider (LHC) might provide sensitivity that complements searches for naturally occurring DM particles or their decay products [1][2][3][4][5][6][7][8] by attempting to produce and detect them in the laboratory. The nature of DM particles remains largely unknown, and there is no obvious candidate in the Standard Model (SM) of particle physics. One of the most widely studied hypotheses is that DM candidates are weakly interacting massive particles (WIMPs, denoted by the symbol $\chi$). At hadron colliders, searches for WIMP-like DM production rely on one or more visible particles being produced in association with the invisible DM candidates, whose experimental signature would be the missing transverse momentum ($E_{\mathrm{T}}^{\mathrm{miss}}$) in the collision event. Several models have been proposed in the past decades, with different assumptions about the DM couplings to SM particles and the related production processes. As DM particles must be massive, searches for physics beyond the SM in which DM couples to the Higgs boson are strongly motivated. The Higgs boson was discovered in 2012 by the ATLAS and CMS collaborations at the LHC, with a mass of approximately 125 GeV [9,10]. If the DM particles are in the right mass range, they could even be produced in decays of the Higgs boson. The SM branching ratio prediction for invisible Higgs boson decays, $H \to ZZ^* \to 4\nu$, is only 0.1% [11]. Assuming SM production of the Higgs boson, its branching ratio to invisible particles, $\mathcal{B}(H \to \mathrm{inv})$, can be constrained in the absence of a significant excess of such events above the expected background.
Searches for an excess in events with two electrons or muons and missing transverse momentum ($E_{\mathrm{T}}^{\mathrm{miss}}$) are sensitive to Higgs boson decays into DM if the Higgs boson is produced in association with a $Z$ boson that decays into leptons. The analysis is also sensitive to as-yet-undiscovered heavier scalars produced together with the $Z$ boson and decaying into invisible particles.
Searches in the $\ell\ell + E_{\mathrm{T}}^{\mathrm{miss}}$ final state can also be used to probe simplified DM models [12,13], in which DM is produced through a mediator particle that also couples to quarks, and for which searches for dijet resonances are therefore the most sensitive [14,15]. The analysis presented here uses benchmark models that consider s-channel production of DM particles through vector or axial-vector mediators. The models are defined by five parameters: the mediator and DM particle masses, and the mediator couplings $g_\chi$, $g_q$, and $g_\ell$ to the DM particles, quarks, and leptons, respectively. Exclusion limits are set in a plane spanning the DM and mediator masses, for chosen values of the mediator couplings.
Furthermore, the analysis tests two-Higgs-doublet models (2HDM) that include an additional pseudoscalar mediator, referred to as 2HDM+$a$ [16][17][18]. The two Higgs doublets in the model are CP-conserving and of type II [19][20][21][22], and the lighter CP-even scalar is identified as the observed 125 GeV Higgs boson. Four benchmark scenarios are probed in various planes as a function of the mass of the pseudoscalar Higgs boson, $m_A$; the mass of the additional pseudoscalar, $m_a$; the ratio of the vacuum expectation values of the two Higgs doublets, $\tan\beta$; and $\sin\theta$, where $\theta$ is the mixing angle between the two CP-odd weak spin-0 eigenstates [17]. Example Feynman diagrams for all probed models are shown in Figure 1.
The $\ell\ell + E_{\mathrm{T}}^{\mathrm{miss}}$ channel is one of the most sensitive for the $H \to \mathrm{inv}$ search and the most sensitive channel over much of the parameter space for 2HDM+$a$ searches. Previous results in the $\ell\ell + E_{\mathrm{T}}^{\mathrm{miss}}$ final state were obtained with partial Run-2 datasets by the ATLAS Collaboration. Using 36 fb$^{-1}$ of data, ATLAS set a 95% confidence level (CL) upper limit of 67% on the Higgs boson branching ratio to invisible particles, with an expected limit of 39% in the absence of signal [23]. The same paper also reported exclusion limits for the simplified DM models mentioned above. Combining different Higgs boson production channels and different LHC runs, including 36 fb$^{-1}$ from Run 2, ATLAS set an upper limit of 26% (17% expected) on the Higgs boson branching ratio to invisible particles [24]. The CMS Collaboration published results in the $\ell\ell + E_{\mathrm{T}}^{\mathrm{miss}}$ final state based on 137 fb$^{-1}$ of Run-2 data, including limits on the branching ratio for invisible decays of the Higgs boson (29% observed vs 25% expected), simplified DM models and 2HDM+$a$ models [25]. CMS also combined results from different production modes and LHC runs to set an upper limit of 19% (15% expected) on the Higgs boson branching ratio to invisible particles [26]. Constraints on 2HDM+$a$ models were also set by the ATLAS and CMS collaborations using various final states [18,25,27,28].
The analysis strategy is outlined briefly in the following. Using the full LHC Run-2 dataset of 139 fb$^{-1}$ recorded with the ATLAS detector, events in this search are required to have two oppositely charged electrons or muons, consistent with originating from a $Z$ boson decay, as well as significant $E_{\mathrm{T}}^{\mathrm{miss}}$. The same event selection is applied in both the $ZH \to \ell\ell + \mathrm{inv}$ search and the other DM searches. A boosted decision tree (BDT) is trained so that its output can be used as the discriminating observable in the search for invisible Higgs boson decays, while the searches in the context of simplified DM models and 2HDM+$a$ models are based on an observable representing the transverse mass of the dominant background. Background distributions are estimated using simulated samples, and a simultaneous fit is performed in the signal region and three background control regions to constrain the systematic uncertainties and determine the normalization of some of the backgrounds. The sensitivity of the analysis is considerably improved in comparison with a projection of the previous analysis scaled to the present integrated luminosity, mainly due to the use of the BDT and the simultaneous fit of the signal and background control regions.

ATLAS detector
The ATLAS experiment [29] at the LHC is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry and nearly $4\pi$ coverage in solid angle. It consists of an inner tracking detector surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic and hadronic calorimeters, and a muon spectrometer. The inner tracking detector covers the pseudorapidity range $|\eta| < 2.5$. It consists of silicon pixel, silicon microstrip, and transition radiation tracking detectors. Lead/liquid-argon (LAr) sampling calorimeters provide electromagnetic (EM) energy measurements with high granularity. A steel/scintillator-tile hadronic calorimeter covers the central pseudorapidity range ($|\eta| < 1.7$). The endcap and forward regions are instrumented with LAr calorimeters for both the EM and hadronic energy measurements up to $|\eta| = 4.9$. The muon spectrometer surrounds the calorimeters and is based on three large superconducting air-core toroidal magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector. The muon spectrometer includes a system of precision chambers for tracking and fast detectors for triggering. A two-level trigger system is used to select events [30]. The first-level trigger is implemented in hardware and uses a subset of the detector information to accept events at a rate below 100 kHz. This is followed by a software-based trigger that reduces the accepted event rate to 1 kHz on average, depending on the data-taking conditions. An extensive software suite [31] is used in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.

Data and simulated event samples
The presented analysis is performed using data from proton-proton collisions, produced at $\sqrt{s} = 13$ TeV by the LHC and recorded with the ATLAS detector between 2015 and 2018. Data quality requirements [32] are applied to ensure that all detector subsystems were operational. The data correspond to an integrated luminosity of 139 fb$^{-1}$. The data sample was collected using a set of single-electron [33] and single-muon [34] triggers which require the presence of an electron (muon) with transverse energy $E_{\mathrm{T}}$ (transverse momentum $p_{\mathrm{T}}$) above thresholds in the range of 20-26 GeV, depending on the lepton flavour and data-taking period [35]. The trigger selections also impose object quality and isolation requirements. There must be a geometrical match between a trigger lepton and a lepton selected in the offline analysis as described in Section 4.

ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upwards. Cylindrical coordinates $(r, \phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta = -\ln \tan(\theta/2)$. Angular distance is measured in units of $\Delta R \equiv \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$.

Simulated Monte Carlo (MC) samples are used to optimize the analysis selection, to estimate the signal, and as input to the background estimation. Unless otherwise mentioned, the NNPDF3.0 parton distribution function (PDF) set [36] was used for the hard interaction, and the events were passed through the ATLAS detector response simulation [37] within the Geant4 framework [38].
The profiles of the additional inelastic interactions (pile-up) in the simulation match those of each dataset between 2015 and 2018, and were obtained by overlaying minimum-bias events simulated using the soft QCD processes of Pythia 8.186 [39].

Associated production of a Higgs boson and a $Z$ boson was simulated with Powheg Box v2 [44], including both the $q\bar{q}/qg$ and $gg$ initial states. A Higgs boson mass of 125 GeV was assumed. The $q\bar{q}/qg \to ZH$ process was calculated at next-to-leading order (NLO) in QCD, and the MiNLO technique [45] was used to merge 0-jet and 1-jet events. The $gg \to ZH$ contribution was modelled at leading order (LO) in QCD. The parton-level events were passed to Pythia 8.212 [46] with the AZNLO tune [47] to model the Higgs boson decay into four neutrinos as invisible decays, and also the parton showering, hadronization, and multiple parton interactions (MPI). The samples were normalized to next-to-next-to-leading order (NNLO) in QCD with electroweak (EW) corrections (for $q\bar{q}/qg \to ZH$) or to NLO plus next-to-leading logarithms (NLO+NLL) in QCD (for $gg \to ZH$) [11,[48][49][50][51][52][53][54][55]. In addition, parameterized EW corrections were applied as a function of the transverse momentum of the $Z$ boson for the $q\bar{q}/qg \to ZH$ process [11]. Allowing the $Z$ boson to decay into electron, muon or $\tau$-lepton pairs, and assuming a Higgs boson branching ratio to invisible particles of 100%, the production cross-section times branching ratio is 77.0 ± 1.5 fb for $q\bar{q}/qg \to ZH$ and 12.4 ± 2.8 fb for $gg \to ZH$.

For the simplified DM models, s-channel $pp \to Z(\ell\ell)\chi\bar{\chi}$ events were simulated with MadGraph5_aMC@NLO 2.2.2 at NLO in QCD [56], and fed into Pythia 8.212 with the A14 tune [57]. Vector and axial-vector mediator samples were produced for various mediator and DM masses, with the mediator couplings to DM and quarks set to $g_\chi = 1.0$ and $g_q = 0.25$, respectively [12,13]. Only the leptophobic case is probed, i.e. the mediator coupling to leptons was set to $g_\ell = 0$.
These samples were passed through a faster detector simulation using a parameterization of the calorimeter response [37].

MadGraph5_aMC@NLO was used to generate the 2HDM+$a$ signals, from both the $gg$ and $b\bar{b}$ initial states, at LO in QCD, in combination with Pythia 8.244 and the A14 tune. The model contains 14 parameters, including the DM mass, the masses of the five Higgs bosons, the ratio of the vacuum expectation values of the two Higgs doublets ($\tan\beta$), various couplings, and the mixing angles $\alpha$ and $\theta$ between the CP-even and the CP-odd weak eigenstates, respectively. The $b\bar{b}$-initiated production process is particularly important for high values of $\tan\beta$. Parameters and scanning planes were chosen following the recommendations in Ref. [17].

For the $ZZ$ background, NLO EW corrections [59,[64][65][66][67]] were applied as a function of $E_{\mathrm{T}}^{\mathrm{miss}}$ or the $p_{\mathrm{T}}$ of one of the $Z$ bosons in the $4\ell$ final state, using the average of additive and multiplicative approaches [67]. The $gg \to ZZ$ process was modelled at LO for up to one jet. NLO corrections for the $gg \to ZZ \to \ell\ell\nu\nu$ continuum process are calculated using the MCFM package [60,68,69] with improvements in the calculation of the loop amplitudes [70]. The $WZ$, $WW$, and other diboson contributions were modelled as well, with the same matrix-element accuracies as the $ZZ$ contributions. The generation includes off-shell effects, $gg \to H \to ZZ$ contributions and interference between processes [60].
The $Z$ + jets background was modelled with Sherpa 2.2.1, with matrix elements calculated at NLO in QCD for up to two jets and at LO for three or four jets. Top-quark pair ($t\bar{t}$) and single top-quark ($Wt$, s-channel and t-channel) production was simulated using Powheg Box v2 (v1 for t-channel production) at NLO in QCD [44,[71][72][73][74][75], interfaced to Pythia 8.230 with the A14 tune. The predictions were normalized to NNLO plus next-to-next-to-leading-logarithm (NNLO+NNLL) cross-section calculations [76][77][78][79]. Associated production of a top-quark pair and a $W$ or $Z$ boson, $t\bar{t}+V$, was simulated with MadGraph5_aMC@NLO 2.2.3, and interfaced to Pythia 8.210 with the A14 tune. In this case, predictions were normalized to NLO cross-section calculations [11,56].

Object selection
Selected events are required to contain at least one vertex with a minimum of two associated tracks with $p_{\mathrm{T}} > 500$ MeV [80]. The primary vertex is chosen to be the vertex reconstructed with the largest $\Sigma p_{\mathrm{T}}^2$ of its associated tracks. The event quality is checked in order to remove events with noise bursts or coherent noise in the calorimeters [32].
Electron candidates are reconstructed by matching inner-detector tracks to clusters of energy deposited in the EM calorimeter. Electrons must have $E_{\mathrm{T}} > 7$ GeV and $|\eta| < 2.47$. The associated track must have $|d_0|/\sigma_{d_0} < 5$ and $|z_0 \sin\theta| < 0.5$ mm, where $d_0$ ($z_0$) is the transverse (longitudinal) impact parameter relative to the primary vertex, $\sigma_{d_0}$ is the uncertainty in $d_0$, and $\theta$ is the polar angle of the track. Candidates are identified with a likelihood method and must satisfy 'medium' identification criteria [81], while 'loose' criteria are used to veto additional electrons in each analysis region. The likelihood relies on the shape of the EM shower measured in the calorimeter, the quality of the track reconstruction, and the quality of the match between the track and the cluster. The energy of the EM clusters associated with the electrons is calibrated in successive steps using a combination of simulation-based and data-driven correction factors. The electron reconstruction, identification, and energy calibration algorithms, as well as their performance, including the associated systematic uncertainties, are studied in Ref. [81].

Muon candidates are reconstructed in the range $|\eta| < 2.5$ by combining tracks in the inner detector with tracks in the muon spectrometer. All muon candidates must have $p_{\mathrm{T}} > 7$ GeV, $|d_0|/\sigma_{d_0} < 3$, and $|z_0 \sin\theta| < 0.5$ mm. In order to improve the momentum resolution, further quality requirements are placed on the muons: 'medium' quality requirements [82] are used for candidate muons and 'loose' criteria are used to veto additional muons. The algorithms and efficiency of the muon reconstruction and identification, as well as the momentum calibration, including the associated systematic uncertainties, are estimated as described in Refs. [82,83]. To suppress hadronic and non-prompt lepton backgrounds, electron and muon candidates are required to satisfy particle-flow isolation criteria, which are based on tracking and calorimeter measurements [83].
Jets in the range $|\eta| < 4.5$ and with $p_{\mathrm{T}} > 20$ GeV are reconstructed with a particle-flow algorithm, which combines energy deposits in the calorimeter with inner-detector tracks [84], using the anti-$k_t$ algorithm [85,86] with a radius parameter of 0.4. A jet-vertex-tagging technique based on a multivariate likelihood [87] is applied to jets with $|\eta| < 2.4$ and $p_{\mathrm{T}} < 60$ GeV to suppress jets that are not associated with the primary vertex of the event. Jets are further calibrated according to in situ measurements of the jet energy scale [88]. Jets in the range $|\eta| < 2.5$ are identified as $b$-jets with the MV2c10 algorithm, described in Ref. [89]. The $b$-jet identification efficiency is about 85%, with a rejection factor of about 25 for light-flavour jets, as measured in a sample of simulated $t\bar{t}$ events.
Overlaps between reconstructed objects are accounted for with a removal procedure that is mainly based on the angular separation between the different final-state objects. The procedure is the same as the one described in Ref. [90].
The $\vec{E}_{\mathrm{T}}^{\mathrm{miss}}$ of the event is computed as the negative vectorial sum of the transverse momenta of electrons, muons, jets and a track-based soft term [91] that accounts for the contribution from prompt particles that are not contained in the other objects. The $E_{\mathrm{T}}^{\mathrm{miss}}$ significance [92] is defined as $\mathcal{S} = E_{\mathrm{T}}^{\mathrm{miss}} / \sqrt{\sigma_{\mathrm{L}}^2 (1 - \rho_{\mathrm{LT}}^2)}$, where the parameters $\sigma_{\mathrm{L}}$ and $\rho_{\mathrm{LT}}$ are calculated from MC simulation and shown to describe the data well; the quantity $\sigma_{\mathrm{L}}$ denotes the resolution of the $p_{\mathrm{T}}$ of the system, and $\rho_{\mathrm{LT}}$ is a correlation factor between the resolutions of the $p_{\mathrm{T}}$ components parallel (longitudinal, L) and perpendicular (transverse, T) to the $\vec{E}_{\mathrm{T}}^{\mathrm{miss}}$ vector.
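As a minimal numerical sketch of the significance definition $\mathcal{S} = E_{\mathrm{T}}^{\mathrm{miss}} / \sqrt{\sigma_{\mathrm{L}}^2 (1 - \rho_{\mathrm{LT}}^2)}$, the following snippet evaluates $\mathcal{S}$ for invented inputs; the function name and the numerical values are purely illustrative and not part of the analysis software:

```python
import math

def met_significance(met, sigma_l, rho_lt):
    """Object-based E_T^miss significance S = met / sqrt(sigma_L^2 (1 - rho_LT^2)).

    met     : missing transverse momentum magnitude [GeV]
    sigma_l : resolution of the p_T component parallel (longitudinal) to the
              E_T^miss direction [GeV]
    rho_lt  : correlation factor between the parallel and perpendicular
              resolution components
    """
    return met / math.sqrt(sigma_l**2 * (1.0 - rho_lt**2))

# Invented example: 120 GeV of E_T^miss with a 12 GeV resolution and, for
# simplicity, no correlation between the two resolution components.
s = met_significance(met=120.0, sigma_l=12.0, rho_lt=0.0)
print(s)  # -> 10.0
```

A larger correlation factor $\rho_{\mathrm{LT}}$ shrinks the effective resolution in the denominator and therefore increases the significance for the same $E_{\mathrm{T}}^{\mathrm{miss}}$.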
Further related quantities used in the analysis are $H_{\mathrm{T}}$, the scalar sum of $p_{\mathrm{T}}$ for all leptons and those jets with $p_{\mathrm{T}} > 30$ GeV, and $f_{\mathrm{soft}}$, a measure of the fraction of the event's transverse momentum carried by the soft term; the latter is computed from the soft term together with $\vec{p}_{\mathrm{T}}^{\,\mathrm{jets}}$, the vectorial sum of the transverse momenta of all jets in the event with $p_{\mathrm{T}} > 30$ GeV, and $p_{\mathrm{T}}^{\ell\ell}$, the $p_{\mathrm{T}}$ of the $\ell\ell$ system.

Event selection
Events are selected in a signal region (SR), designed to capture as many signal events as possible, and in three control regions (CRs), which are used to constrain the most important background processes and are expected to have negligible contamination from signal events.
Events in the SR are required to have exactly two oppositely charged electrons or muons with an invariant mass consistent with the mass of the $Z$ boson. The leptons must have $p_{\mathrm{T}}^{\ell} > 20, 30$ GeV when ordered in increasing $p_{\mathrm{T}}$. The lepton pair is required to have an invariant mass $m_{\ell\ell}$ in the range $76 < m_{\ell\ell} < 106$ GeV. In order to select events in the SR consistent with invisible particles recoiling against the $Z$ boson, events are required to have $E_{\mathrm{T}}^{\mathrm{miss}} > 90$ GeV, $E_{\mathrm{T}}^{\mathrm{miss}}$ significance greater than 9, and an angular separation $\Delta R_{\ell\ell} < 1.8$ between the leptons.
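The SR requirements above can be collected into a single predicate. This is only a sketch: the event-record fields (`lepton_pts`, `m_ll`, etc.) are hypothetical, and the $b$-jet veto applied in all regions (described later in the text) is included for completeness:

```python
def passes_signal_region(ev):
    """Sketch of the SR selection; `ev` is a hypothetical event record."""
    if len(ev["lepton_pts"]) != 2 or not ev["opposite_charge"]:
        return False
    lead, sub = sorted(ev["lepton_pts"], reverse=True)
    return (
        lead > 30.0 and sub > 20.0          # lepton p_T thresholds [GeV]
        and 76.0 < ev["m_ll"] < 106.0       # Z-mass window [GeV]
        and ev["met"] > 90.0                # E_T^miss [GeV]
        and ev["met_significance"] > 9.0    # object-based significance
        and ev["delta_r_ll"] < 1.8          # lepton angular separation
        and ev["n_bjets"] == 0              # top-quark suppression veto
    )
```

Events failing any single requirement, e.g. a low-$E_{\mathrm{T}}^{\mathrm{miss}}$ Drell-Yan-like event, are rejected.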

An $e\mu$ CR is defined in exactly the same way as the SR, apart from requiring the two leptons to have different flavours. This CR helps constrain the sum of the non-resonant backgrounds from $WW$, $t\bar{t}$, single top-quark, and $Z \to \tau\tau$ events. The CR is about 98% pure in non-resonant background.
A CR containing exactly four leptons, the $4\ell$ CR, is used to constrain the $ZZ$ background. Its events must contain two pairs of same-flavour, oppositely charged leptons, with the four leptons required to have $p_{\mathrm{T}}^{\ell} > 7, 15, 15, 27$ GeV when ordered in increasing $p_{\mathrm{T}}$. If all four leptons are of the same flavour, the chosen pairing combination is the one minimizing the quantity $|m_{\ell\ell,1} - m_Z| + |m_{\ell\ell,2} - m_Z|$, where the indices 1 and 2 denote the lepton pairs and $m_Z$ is taken to be 91.19 GeV. Both lepton pairs must satisfy $76 < m_{\ell\ell} < 106$ GeV. In order to mimic the SR, the quantities $E_{\mathrm{T}}^{\mathrm{miss}}$ and its significance are recalculated in the same way as in the SR, but with one pair of leptons, chosen at random, treated as invisible and excluded from the calculation. The selection criteria are the same as for the SR, i.e. recalculated $E_{\mathrm{T}}^{\mathrm{miss}} > 90$ GeV, recalculated significance greater than 9, and $\Delta R_{\ell\ell} < 1.8$, where the final requirement is imposed on the remaining lepton pair. The $4\ell$ CR is almost 100% pure in $ZZ$ events.
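The pairing criterion for same-flavour $4\ell$ events is a simple minimization over the possible pairings; a sketch (with a hypothetical input format of per-pairing invariant-mass tuples) follows:

```python
def choose_pairing(m_ll_candidates, m_z=91.19):
    """Pick the pairing minimizing |m_ll1 - m_Z| + |m_ll2 - m_Z|.

    m_ll_candidates: list of (m_ll1, m_ll2) tuples in GeV, one tuple per
    possible same-flavour opposite-charge pairing (hypothetical format).
    """
    return min(m_ll_candidates,
               key=lambda pair: abs(pair[0] - m_z) + abs(pair[1] - m_z))

# Two possible pairings of an illustrative 4e event (masses in GeV):
best = choose_pairing([(91.0, 88.0), (40.0, 130.0)])
print(best)  # -> (91.0, 88.0)
```

The pairing with both masses near $m_Z$ wins, as its summed distance (about 3.4 GeV) is far smaller than the alternative (about 90 GeV).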
A CR containing exactly three leptons, the $3\ell$ CR, is used to constrain the $WZ$ background. In order to select the $Z$ boson in the event, two of the leptons must have opposite charge, the same flavour, $p_{\mathrm{T}}^{\ell} > 20, 30$ GeV and $76 < m_{\ell\ell} < 106$ GeV. If two combinations pass these requirements, the one with mass closer to $m_Z$ is taken. To select events consistent with a $W$ boson decay, it is required that the third lepton has $p_{\mathrm{T}}^{\ell} > 20$ GeV, the event has $E_{\mathrm{T}}^{\mathrm{miss}} > 30$ GeV and $E_{\mathrm{T}}^{\mathrm{miss}}$ significance greater than 3, and the transverse mass of the $W$ boson candidate, $m_{\mathrm{T}}^{W} = \sqrt{2\, p_{\mathrm{T}}^{\ell_3} E_{\mathrm{T}}^{\mathrm{miss}} (1 - \cos\Delta\phi)}$, passes a minimum requirement, where $\Delta\phi$ is the azimuthal angle between the third lepton and $\vec{E}_{\mathrm{T}}^{\mathrm{miss}}$. The $3\ell$ CR is about 93% pure in $WZ$ events.
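The $W$ transverse mass used in the $3\ell$ CR is the standard two-body transverse mass; a small numerical sketch (with invented kinematics) illustrates its behaviour:

```python
import math

def mt_w(pt_lep, met, dphi):
    """Transverse mass m_T^W = sqrt(2 p_T^lep E_T^miss (1 - cos dphi))."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# Invented example: a back-to-back lepton and E_T^miss (dphi = pi) give the
# kinematic endpoint m_T = p_T^lep + E_T^miss for equal magnitudes:
print(round(mt_w(40.0, 40.0, math.pi), 1))  # -> 80.0
```

For on-shell $W \to \ell\nu$ decays the $m_{\mathrm{T}}^{W}$ distribution peaks near the $W$ mass, which is what makes it a useful discriminant in this CR.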
Events with one or more identified $b$-jets are removed in all regions to suppress events containing top quarks.
Considering all signal events in which the $Z$ boson decays into an electron, muon or $\tau$-lepton pair, the SR selection has an acceptance times efficiency of ∼8% (∼19%) for quark-induced (gluon-induced) $ZH \to \ell\ell + \mathrm{inv}$, resulting in an expectation of 120 events for $\mathcal{B}(H \to \mathrm{inv}) = 10\%$. For an example signal point in the simplified DM model ($m_\chi = 1$ GeV, $m_{\mathrm{med}} = 900$ GeV), the acceptance times efficiency is ∼20%, with 145 events expected. For a $gg$-induced 2HDM+$a$ signal point ($\tan\beta = 1.0$, $\sin\theta = 0.7$, $m_A = 600$ GeV, $m_a = 400$ GeV, $m_\chi = 10$ GeV), it is ∼32%, with 182 events expected. For all signal models the acceptance times efficiency is very similar for the dielectron and dimuon selections.
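The quoted ∼120-event expectation can be cross-checked from numbers given earlier in the text ($N = L \cdot \sigma \cdot \mathcal{B} \cdot A\varepsilon$, summed over the two production modes). Since the acceptance-times-efficiency values are only approximate (∼8%, ∼19%), the result is a consistency check rather than the analysis calculation:

```python
# Cross-check of the expected ZH -> ll + inv yield for B(H -> inv) = 10%.
lumi = 139.0    # fb^-1, integrated luminosity
br_inv = 0.10   # assumed invisible branching ratio
# Cross-sections below are quoted in the text for B(H -> inv) = 100%,
# so they are scaled by br_inv; acc_eff values are the approximate A*eps.
modes = [
    {"name": "qq/qg -> ZH", "xsec_fb": 77.0, "acc_eff": 0.08},
    {"name": "gg -> ZH",    "xsec_fb": 12.4, "acc_eff": 0.19},
]

n_exp = sum(lumi * br_inv * m["xsec_fb"] * m["acc_eff"] for m in modes)
print(round(n_exp))  # -> 118, consistent with the quoted ~120 events
```

The small difference from 120 simply reflects the rounding of the acceptance-times-efficiency values.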

Background estimates and signal extraction
The signal and all backgrounds are estimated through simultaneous likelihood fits in the signal and control regions, using MC simulation as input. The dominant background in the SR is the $ZZ$ background, followed by $WZ$, $Z$ + jets, and the non-resonant backgrounds ($WW$, $t\bar{t}$, single top-quark and $Z \to \tau\tau$), which are treated together. Small contributions arising from triboson production, $t\bar{t}+V$ and $ZZ \to 4\ell$, where two of the leptons are not reconstructed, are referred to as 'Other' in the following. The $ZZ$ and non-resonant backgrounds have the same topology as the signal (two leptons and $E_{\mathrm{T}}^{\mathrm{miss}}$). The SM Higgs boson decay $H \to ZZ^* \to 4\nu$ is included as part of the signal, while other decays are estimated to be small and are neglected. The $WZ$ background enters the SR if one of the leptons is not reconstructed. In most of the $Z$ + jets background, significant $E_{\mathrm{T}}^{\mathrm{miss}}$ arises through mismeasurement of the energy of the jets, or is due to neutrinos or muons from semileptonic heavy-flavour decays that are missed by the reconstruction. The normalization of the MC simulation prediction for $Z$ + jets was cross-checked using events with $E_{\mathrm{T}}^{\mathrm{miss}}$ significance below 9, a selection with high purity for this process. A sample of $\gamma$ + jets events, which has production diagrams similar to those of $Z$ + jets, was used to check how well the MC simulation describes the shape of the data. In both cases the MC simulation prediction and the data were observed to agree within the statistical and systematic uncertainties.
The data are compared with expectation by performing simultaneous maximum-likelihood fits, using the HistFitter framework [93], to distributions in the signal and control regions. A separate fit is performed for each signal hypothesis. Confidence intervals are based on a profile-likelihood-ratio test statistic [94], assuming asymptotic distributions for the test statistic. The CL$_s$ method [94] is used to set exclusion limits. The systematic uncertainties affecting the signal and background distribution normalizations and shapes across categories are parameterized by making the likelihood function depend on dedicated nuisance parameters, constrained by additional Gaussian probability terms, which are correlated between the regions. The normalizations of the signal, of the $WZ$ background and of the sum of non-resonant backgrounds are allowed to float in the fits, as their respective CRs have large numbers of events. The individual components of the non-resonant background are allowed to vary independently within their systematic uncertainties. As the $4\ell$ CR has low statistics, the normalization of the $ZZ$ background, like those of the other backgrounds ($Z$ + jets, triboson production, $t\bar{t}+V$, $ZZ \to 4\ell$), can vary only within the systematic uncertainties.
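The core idea of a floating background normalization constrained by a control region can be illustrated with a deliberately tiny toy: one counting signal region plus one counting control region sharing a background scale factor. All numbers are invented, and the real fit (HistFitter, many binned regions, Gaussian-constrained nuisance parameters) is far richer than this sketch:

```python
import math

def nll(mu_s, mu_b, data_sr, data_cr, s_sr, b_sr, b_cr):
    """Negative log-likelihood for two Poisson-counting regions.

    mu_s scales the signal (SR only); mu_b scales the background in both
    the SR and the CR, which is how the CR constrains it.
    """
    total = 0.0
    for n, lam in ((data_sr, mu_s * s_sr + mu_b * b_sr),
                   (data_cr, mu_b * b_cr)):
        total += lam - n * math.log(lam)  # Poisson NLL up to a constant
    return total

# Crude grid scan for the minimum (invented yields, illustration only):
best = min(
    ((nll(ms, mb, data_sr=105, data_cr=200, s_sr=10, b_sr=100, b_cr=200), ms, mb)
     for ms in (0.0, 0.25, 0.5, 0.75, 1.0)
     for mb in (0.9, 0.95, 1.0, 1.05, 1.1)),
    key=lambda t: t[0])
print(best[1], best[2])  # -> 0.5 1.0
```

The CR pins the background scale at $\mu_b = 1.0$ (200 observed vs 200 predicted), after which the SR excess of 5 events over the scaled background is absorbed by $\mu_s = 0.5$.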
The sensitivity of the $H \to \mathrm{inv}$ search is increased by using a BDT, implemented in the TMVA package [95], to improve the separation between signal and background events. The BDT uses a set of kinematic distributions that are selected because they have a different shape for signal and background. These are combined into a single output variable that provides better signal/background separation than any of the individual inputs alone. Eight variables are used: $E_{\mathrm{T}}^{\mathrm{miss}}/H_{\mathrm{T}}$, $E_{\mathrm{T}}^{\mathrm{miss}}$, $H_{\mathrm{T}}$, $f_{\mathrm{soft}}$, $m_{\ell\ell}$, $\Delta R_{\ell\ell}$, $y_{\ell\ell}$ (the rapidity of the $\ell\ell$ system), and $\Delta\phi(\ell\ell, \vec{E}_{\mathrm{T}}^{\mathrm{miss}})$ (the azimuthal angle between the $p_{\mathrm{T}}$ of the $\ell\ell$ system and $\vec{E}_{\mathrm{T}}^{\mathrm{miss}}$). Other variables were considered, but they did not improve the sensitivity significantly. The BDT was trained in the SR using simulated events for the $ZH \to \ell\ell + \mathrm{inv}$ signal and for the sum of all backgrounds. The BDT output distribution is used in the profile likelihood fit for the SR and the $e\mu$ CR. The $E_{\mathrm{T}}^{\mathrm{miss}}$ distribution is used for the $3\ell$ and $4\ell$ CRs, as the shapes of many of the BDT input variables for the background processes differ significantly between these CRs and the SR. Example post-fit distributions in the CRs are shown in Figures 2(a)-2(c), after the $ZH \to \ell\ell + \mathrm{inv}$ simultaneous fit to the SR and CRs. The data are well described by the expectation. The fit uses the same binning as shown in these plots.
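The principle of combining several partially discriminating variables into a single BDT output can be sketched with scikit-learn as a stand-in for TMVA. The two toy variables below loosely mimic the roles of the $E_{\mathrm{T}}^{\mathrm{miss}}$ significance and $\Delta R_{\ell\ell}$; all distributions are invented, and the real analysis uses eight kinematic inputs in TMVA:

```python
# Illustrative stand-in for the analysis BDT (not the actual TMVA setup).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
# Toy "background": lower significance-like variable, wider lepton pair.
bkg = np.column_stack([rng.normal(6.0, 2.0, n), rng.normal(2.0, 0.6, n)])
# Toy "signal": higher significance-like variable, more collimated pair.
sig = np.column_stack([rng.normal(12.0, 2.0, n), rng.normal(1.0, 0.4, n)])
X = np.vstack([bkg, sig])
y = np.concatenate([np.zeros(n), np.ones(n)])

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)
# The classifier output (here a per-event probability) plays the role of the
# single discriminating observable entering the profile-likelihood fit.
print(bdt.score(X, y) > 0.9)  # -> True for these well-separated toys
```

In the fit, it is the binned distribution of this single output, not the individual inputs, that is compared between data and expectation.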
For the searches considering the simplified DM models or 2HDM+$a$ models, the transverse mass distribution $m_{\mathrm{T}}$ is used in the maximum-likelihood fits for the SR and the $e\mu$ CR. The quantity $m_{\mathrm{T}}$ is the transverse mass of the dominant $ZZ$ background, and gives good separation between signal and background for the majority of the DM and 2HDM+$a$ signals considered in this analysis. BDTs were not used for the DM and 2HDM+$a$ searches, due to the complexity of training over many mass points; moreover, the $m_{\mathrm{T}}$ distribution was found to provide better sensitivity over much of the probed parameter space than a BDT trained for one signal point. The $m_{\mathrm{T}}$ distribution is only used in the fit for $m_{\mathrm{T}} > 200$ GeV. Figure 2(d) shows that the background estimate is in good agreement with the data for the $m_{\mathrm{T}}$ distribution in the $e\mu$ CR after a simultaneous fit to the signal and control regions in the context of the simplified DM models (axial-vector signal with $(m_\chi, m_{\mathrm{med}}) = (150, 900)$ GeV). As in the $H \to \mathrm{inv}$ search, the $E_{\mathrm{T}}^{\mathrm{miss}}$ distribution is used for the $3\ell$ and $4\ell$ CRs.

Systematic uncertainties
Signal and background expectations are subject to statistical, detector-related and theoretical uncertainties. The shapes of the observable distributions, the acceptances in signal and control regions, and the overall background sample normalizations that are not free to float in the fit can all be affected when the simulation is varied to estimate the impact of systematic uncertainties. As discussed in Section 6, the uncertainties are treated as nuisance parameters in the fits and correlated between signal and control regions, which helps to constrain many of them and reduces their impact. The post-fit impact of the systematic uncertainties can be found in Section 8.

An uncertainty in the parameterized EW corrections is estimated from the deviation of the additive and multiplicative approaches from their average [67]. For all diboson backgrounds, uncertainties due to missing higher orders in the QCD calculation are estimated by varying the renormalization and factorization scales by a factor of two, either independently or in a correlated way, and taking the largest variation as the uncertainty, while the effects of PDF and $\alpha_s$ uncertainties are calculated using the PDF4LHC prescription [96]. The impact of the choice of parton shower and hadronization model is evaluated by comparing the samples from the nominal generator set-up with samples produced with a varied resummation scale and CKKW matching procedure [97,98], and with a different recoil scheme in Sherpa.
The effects of QCD uncertainties on the $Z$ + jets background are also non-negligible. They are estimated by varying the renormalization and factorization scales by a factor of two; PDF uncertainties are also included. Modelling uncertainties in the $t\bar{t}$ and single-top-quark backgrounds are subdominant; they are estimated by varying the generator used for the hard scatter, as well as the renormalization and factorization scales, the parton shower model, $\alpha_s$, and the PDF sets. The modelling uncertainties of the remaining background processes are also evaluated, but they have no impact on the final result.
For the $ZH \to \ell\ell + \mathrm{inv}$ signal, the uncertainty due to missing higher orders in QCD is estimated in bins of the kinematic observables.

Figure 3 shows the BDT and $m_{\mathrm{T}}$ distributions of the selected data events, compared with the estimated backgrounds after the simultaneous fits described in Section 6 are performed. For the $ZH \to \ell\ell + \mathrm{inv}$ case, Table 1 gives the number of observed events as well as the signal and background expectations after the simultaneous fit. The numbers for the DM fits are very similar, with some differences in the $e\mu$ CR as the $m_{\mathrm{T}}$ distribution is only used in the fit for $m_{\mathrm{T}} > 200$ GeV.

Results
The data are in good agreement with the background expectation. Assuming SM Higgs boson production cross-sections and their uncertainties, the best-fit branching ratio of Higgs boson decays into invisible particles is $\mathcal{B}(H \to \mathrm{inv}) = (0.3 \pm 9.0)\%$, which corresponds to an observed 95% CL upper limit of 19%. The corresponding expected upper limit is 19%. The analysis improvements and reduced systematic uncertainties lead to this limit being 45% lower than a projection of the previous expected limit [23] scaled to the present integrated luminosity. The observed normalization factors for the $WZ$ and non-resonant backgrounds are 0.971 ± 0.060 and 0.92 ± 0.10, respectively. Table 2 shows the contributions of the statistical and systematic uncertainties to the total uncertainty of the best-fit $\mathcal{B}(H \to \mathrm{inv})$. As discussed in Section 7, the dominant uncertainty is due to the $ZZ$ modelling. Uncertainties in the jet reconstruction are important as well.

Table 1: Summary of signal and control region yields after the simultaneous fit for the $ZH \to \ell\ell + \mathrm{inv}$ signal. Also given is the total post-fit uncertainty of each number. As defined in the text, 'Non-resonant' includes $WW$, $t\bar{t}$, single top-quark and $Z \to \tau\tau$ processes. Note that the uncertainty on the total expectation does not equal the sum of the uncertainties of the individual contributions added in quadrature, due to correlations between the uncertainties.

Table 2: Summary of the uncertainties $\Delta\mathcal{B}$ on the best-fit $\mathcal{B}(H \to \mathrm{inv})$, obtained by fixing the corresponding nuisance parameters to their best-fit values and subtracting the square of the resulting uncertainty from the square of the total uncertainty to evaluate $(\Delta\mathcal{B})^2$. The statistical uncertainty component is obtained by fixing all nuisance parameters to their best-fit values. Note that the total uncertainty does not equal the sum of the individual contributions added in quadrature, due to correlations between the systematic uncertainties.

Eight scans are produced for the 2HDM+$a$ models, following the recommendations in Ref.
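The quadrature-subtraction procedure used for the Table 2 breakdown is a one-line computation; the numbers in this sketch are invented for illustration:

```python
import math

# Uncertainty-breakdown procedure described for Table 2: fix a group of
# nuisance parameters, refit, and subtract the resulting uncertainty from
# the total in quadrature. Values below are invented, in percent.
total = 9.0   # total uncertainty on B(H -> inv)
fixed = 6.5   # uncertainty with one group of nuisance parameters fixed

component = math.sqrt(total**2 - fixed**2)
print(round(component, 1))  # -> 6.2
```

Because the nuisance parameters are correlated after the fit, components extracted this way do not add in quadrature back to the total, exactly as noted in the Table 2 caption.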
[17], including the requirement $m_A = m_H = m_{H^\pm}$. They are shown in Figures 5 and 6. The hatched red area indicates that the width of one of the Higgs bosons is larger than 20% of its mass [16]. The experimental exclusion in those areas is subject to additional theoretical uncertainties, as the dependence of the width on the virtuality of the additional Higgs bosons could significantly alter the inclusive production cross-sections (one of the limitations of the models). For all presented scans, the exclusion limit is stronger for a mixing parameter value of $\sin\theta = 0.7$, because the cross-sections are larger than for $\sin\theta = 0.35$. In Figure 5(a), the contours now extend upwards beyond $\tan\beta = 3$ due to the inclusion of $b\bar{b}$-induced signal contributions (see Figure 1(d) for an example diagram). The interplay between those and the $gg$-fusion processes (Figure 1(c)) affects the shapes, as the $\tan\beta$ dependence of the coupling of $a/A/H$ to top quarks (present in $gg$-fusion) differs from that to bottom quarks. The relative difference between $m_a$ and $m_A$ also affects the shape through the $A \to a + Z$ process.
Within the context of appropriate models, both the $\mathcal{B}(H \to \mathrm{inv})$ limit and the limits on the simplified DM model (Figure 4) can be compared with limits from direct-detection DM experiments. For consistent comparisons, the 90% CL is used, which corresponds to $\mathcal{B}(H \to \mathrm{inv}) = 16\%$. The translation of the $H \to \mathrm{inv}$ result into a WIMP-nucleon scattering cross-section $\sigma_{\mathrm{WIMP-N}}$ in the Higgs portal model [104] relies on an effective field theory approach, and assumes that Higgs boson decays into a pair of DM particles are kinematically possible and that the DM particle is a scalar or a Majorana fermion [105][106][107]. In this translation, the nuclear form factor $f_N = 0.308 \pm 0.018$ [108] is used. The simplified models with an axial-vector mediator can be translated to spin-dependent WIMP-proton scattering, while the vector mediator induces spin-independent WIMP-nucleon interactions [12]. Figures 7 and 8 compare the results from direct-detection DM experiments [1][2][3][4][5][6][7][8] with the exclusion obtained in the simplified DM models in the plane of the dark matter mass and (a) the spin-dependent WIMP-proton scattering cross-section or (b) the spin-independent WIMP-nucleon scattering cross-section [12]. The area within the shaded lines is excluded by this analysis in the context of the simplified models.

Conclusion
This article presents a search for invisible decays of the Higgs boson, as well as searches for dark matter candidates, produced together with a leptonically decaying $Z$ boson. The analysis was performed using proton-proton collisions at a centre-of-mass energy of 13 TeV, delivered by the LHC, corresponding to an integrated luminosity of 139 fb$^{-1}$ and recorded by the ATLAS experiment. Assuming Standard Model cross-sections for $ZH$ production, the upper limit on the branching ratio of the Higgs boson to invisible particles was found to be 19% at the 95% confidence level. The corresponding expected limit of 19% represents an improvement of about 45% in comparison with a projection of the previous analysis scaled to the present integrated luminosity. Exclusion limits were also set for simplified dark matter models and 2HDM+$a$ models for a number of benchmark parameters, improving on previously set combined constraints obtained with a 36 fb$^{-1}$ dataset.