1 Introduction

This paper presents a search for signatures of supersymmetry in events containing two energetic isolated photons and large missing transverse momentum (with magnitude denoted \(E_{\text {T}}^{\text {miss}}\)) in \(3.2{~\mathrm{fb}^{-1}}\) of proton–proton (pp) collision data at \(\sqrt{s}=13\) TeV recorded with the ATLAS detector at the Large Hadron Collider (LHC) in 2015. The results are interpreted in the context of general gauge mediation (GGM) [1, 2] models that include the production of supersymmetric partners of Standard Model (SM) particles that possess color charge. In all models of GGM, the lightest supersymmetric particle (LSP) is the gravitino \(\tilde{G}\) (the partner of the hypothetical quantum of the gravitational field), with a mass significantly less than 1 GeV. In the GGM model considered here, the decay of the supersymmetric states produced in pp collisions would proceed through the next-to-lightest supersymmetric particle (NLSP), which would then decay to the \(\tilde{G}\) LSP and one or more SM particles, with a high probability of decay into \(\gamma \) + \(\tilde{G}\). All accessible supersymmetric states with the exception of the \(\tilde{G}\) are assumed to be short-lived, leading to prompt production of SM particles that would be observed in the ATLAS detector. These results extend those of prior studies with 8 TeV collision data from Run 1 by the ATLAS [3] and CMS [4] experiments.

Supersymmetry (SUSY) [5–10] introduces a symmetry between fermions and bosons, resulting in a SUSY particle (sparticle) with identical quantum numbers, with the exception of a difference of half a unit of spin relative to its corresponding SM partner. If SUSY were an exact symmetry of nature, each sparticle would have a mass equal to that of its SM partner. Since no sparticles have yet been observed, SUSY would have to be a broken symmetry. Assuming R-parity conservation [11], sparticles are produced in pairs. These would then decay through cascades involving other sparticles until the stable, undetectable LSP is produced, leading to a final state with significant \(E_{\text {T}}^{\text {miss}}\).

Experimental signatures of gauge-mediated supersymmetry-breaking models [12–14] are largely determined by the nature of the NLSP. For GGM, the NLSP is often formed from an admixture of any of the SUSY partners of the electroweak gauge and Higgs boson states. In this study the NLSP, assumed to be electrically neutral and purely bino-like (the SUSY partner of the SM U(1) gauge boson), is the lightest gaugino state \(\tilde{\chi }^{0}_{1}\). In this case, the final decay in each of the two cascades in a GGM event would be predominantly \(\tilde{\chi }^{0}_{1}\rightarrow \gamma +\tilde{G}\), leading to final states with \(\gamma \gamma +E_{\text {T}}^{\text {miss}}\).

In addition to the bino-like \(\tilde{\chi }^{0}_{1}\) NLSP, a degenerate octet of gluinos (the SUSY partner of the SM gluon) is taken to be potentially accessible with 13 TeV pp collisions. Both the gluino and \(\tilde{\chi }^{0}_{1}\) masses are considered to be free parameters, with the \(\tilde{\chi }^{0}_{1}\) mass constrained to be less than that of the gluino. All other SUSY masses are set to values that preclude their production in 13 TeV pp collisions. This results in a SUSY production process that proceeds through the creation of pairs of gluino states, each of which subsequently decays via a virtual squark (the 12 squark flavour/chirality eigenstates are taken to be fully degenerate) to a quark–antiquark pair plus the NLSP neutralino. Additional SM objects (jets, leptons, photons) may be produced in these cascades. The \(\tilde{\chi }^{0}_{1}\) branching fraction to \(\gamma \) + \(\tilde{G}\) is 100 % for \(m_{\tilde{\chi }^{0}_{1}} \rightarrow 0\) and approaches \(\cos ^2 \theta _W\) for \(m_{\tilde{\chi }^{0}_{1}} \gg m_Z\), with the remainder of the \(\tilde{\chi }^{0}_{1}\) sample decaying to Z + \(\tilde{G}\). For all \(\tilde{\chi }^{0}_{1}\) masses, then, the branching fraction is dominated by the photonic decay, leading to the diphoton-plus-\(E_{\text {T}}^{\text {miss}}\) signature. For this model with a bino-like NLSP, a typical production and decay channel for strong (gluino) production is exhibited in Fig. 1. Finally, it should be noted that the phenomenology relevant to this search has a negligible dependence on the ratio \(\tan \beta \) of the two SUSY Higgs-doublet vacuum expectation values; for this analysis \(\tan \beta \) is set to 1.5.
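
As a numerical illustration of the two limits quoted above, the following minimal Python sketch interpolates between them by weighting the Z + \(\tilde{G}\) channel with an assumed quartic phase-space suppression; the suppression factor, the function name and the numerical inputs are illustrative assumptions rather than quantities taken from the GGM model files.

```python
import math

SIN2_THETA_W = 0.231   # weak mixing angle, sin^2(theta_W)
M_Z = 91.19            # Z boson mass [GeV]

def br_photon(m_nlsp):
    """Approximate branching fraction of a purely bino-like NLSP to gamma + gravitino.

    The photon channel carries weight cos^2(theta_W) and is always open; the
    Z channel opens above the Z mass and is weighted here by sin^2(theta_W)
    times an assumed (illustrative) quartic phase-space suppression.
    """
    w_gamma = 1.0 - SIN2_THETA_W
    if m_nlsp <= M_Z:
        return 1.0  # Z channel closed: the photonic decay saturates the width
    w_z = SIN2_THETA_W * (1.0 - (M_Z / m_nlsp) ** 2) ** 4  # assumed suppression
    return w_gamma / (w_gamma + w_z)

for m in (50, 100, 300, 1300):
    print(f"m_NLSP = {m:5d} GeV  ->  BR(gamma + gravitino) ~ {br_photon(m):.2f}")
```

For \(m_{\tilde{\chi }^{0}_{1}} \rightarrow 0\) the sketch returns 1.0, and for \(m_{\tilde{\chi }^{0}_{1}} \gg m_Z\) it tends to \(\cos ^2 \theta _W \approx 0.77\), reproducing the limiting behaviour stated in the text.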

Fig. 1

Typical production and decay-chain processes for the gluino-pair production GGM model for which the NLSP is a bino-like neutralino

2 Samples of simulated processes

For the GGM models under study, the SUSY mass spectra and branching fractions are calculated using SUSPECT 2.41 [15] and SDECAY 1.3b [16], respectively, inside the package SUSY-HIT 1.3 [17]. The Monte Carlo (MC) SUSY signal samples are produced using Herwig++ 2.7.1 [18] with CTEQ6L1 parton distribution functions (PDFs) [19]. Signal cross sections are calculated to next-to-leading order (NLO) in the strong coupling constant, including, for the case of strong production, the resummation of soft gluon emission at next-to-leading-logarithmic accuracy (NLO+NLL) [20–24]. The nominal cross section and its uncertainty are taken from an envelope of cross-section predictions using different PDF sets and factorization and renormalization scales [25]. At fixed centre-of-mass energy, SUSY production cross sections decrease rapidly with increasing SUSY particle mass. At \(\sqrt{s} = 13\) TeV, the gluino-pair production cross section is approximately 25 fb for a gluino mass of 1.4 TeV and falls to below 1 fb for a gluino mass of 2.0 TeV.

While most of the backgrounds to the GGM models under examination are estimated through the use of control samples selected from data, as described below, the extrapolation from control regions (CRs) to the signal region (SR) depends on simulated samples, as do the optimization studies. Diphoton, photon+jet, \(W\gamma \), \(Z\gamma \), \(W\gamma \gamma \) and \(Z\gamma \gamma \) SM processes are generated using the SHERPA 2.1.1 simulation package [26], making use of the CT10 PDFs [27]. The matrix elements are calculated with up to three parton emissions at leading order (four in the case of photon+jet samples) and merged with the SHERPA parton shower [28] using the ME+PS@LO prescription [29]. The \(t\bar{t}\gamma \) process is generated using MadGraph5_aMC@NLO [30] with the CTEQ6L1 PDFs [19], in conjunction with PYTHIA 8.186 [31] with the NNPDF2.3LO PDF set [32, 33] and the A14 set [34] of tuned parameters.

All simulated samples are processed with a full ATLAS detector simulation [35] based on GEANT4 [36]. The effect of additional pp interactions per bunch crossing (“pile-up”) as a function of the instantaneous luminosity is taken into account by overlaying simulated minimum-bias events according to the observed distribution of the number of pile-up interactions in data, with an average of 13 interactions per event.

3 ATLAS detector

The ATLAS experiment records pp collision data with a multipurpose detector [37] that has a forward-backward symmetric cylindrical geometry and nearly 4\(\pi \) solid angle coverage. Closest to the beam line are solid-state tracking devices comprising layers of silicon-based pixel and strip detectors covering \(\left| \eta \right| <2.5\) and straw-tube detectors covering \(\left| \eta \right| <2.0\), located inside a thin superconducting solenoid that provides a 2 T magnetic field. Outside of this “inner detector”, fine-grained lead/liquid-argon electromagnetic (EM) calorimeters provide coverage over \(\left| \eta \right| < 3.2\) for the measurement of the energy and direction of electrons and photons. A presampler, covering \(\left| \eta \right| < 1.8\), is used to correct for energy lost upstream of the EM calorimeter. A steel/scintillator-tile hadronic calorimeter covers the region \(|\eta | < 1.7\), while a copper/liquid-argon medium is used for hadronic calorimeters in the end-cap region \(1.5< |\eta | < 3.2\). In the forward region \(3.2< |\eta | < 4.9\) liquid-argon calorimeters with copper and tungsten absorbers measure the electromagnetic and hadronic energy. A muon spectrometer consisting of three superconducting toroidal magnet systems, each comprising eight toroidal coils, tracking chambers, and detectors for triggering, surrounds the calorimeter system. The muon system reconstructs penetrating tracks over a range \(|\eta | < 2.7\) and provides input to the trigger system over a range \(|\eta | < 2.4\). A two-level trigger system [38] is used to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to reduce the accepted rate to less than 100 kHz. This is followed by a software-based “high-level” trigger (HLT) that reduces the recorded event rate to approximately 1 kHz.

4 Event reconstruction

Primary vertices are formed from sets of two or more tracks, each with transverse momentum \(p_\mathrm{T}^{\mathrm {track}}>\) 400 MeV, that are mutually consistent with having originated at the same three-dimensional point within the luminous region of the colliding proton beams. When more than one such primary vertex is found, the vertex with the largest sum of the squared transverse momenta of the associated tracks is chosen.
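
A minimal sketch of this vertex choice, assuming each vertex is represented simply by the list of transverse momenta (in GeV) of its associated tracks:

```python
def choose_primary_vertex(vertices):
    """Pick the primary vertex with the largest sum of squared track pT.

    Each vertex is assumed to be a list of track pT values (GeV) for tracks
    already satisfying pT > 0.4 GeV; at least two tracks are required.
    """
    candidates = [v for v in vertices if len(v) >= 2]
    return max(candidates, key=lambda tracks: sum(pt ** 2 for pt in tracks))

# Example: three candidate vertices with their associated track pT values
vertices = [[0.6, 1.2, 0.8], [45.0, 38.0, 5.1, 0.9], [2.2, 1.7]]
print(choose_primary_vertex(vertices))  # -> the hard-scatter-like vertex
```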

Electron candidates are reconstructed from EM calorimeter energy clusters consistent in transverse shape and longitudinal development with having arisen from the impact of an electromagnetic particle (electron or photon) upon the face of the calorimeter. For the object to be considered an electron, it is required to match a track identified by a reconstruction algorithm optimized for recognizing charged particles with a high probability of bremsstrahlung [39]. The energy of the electron candidate is determined from the EM cluster, while its direction is determined from the associated reconstructed track. Electron candidates are required to have \(p_{\text {T}}> 25~\text{GeV}\) and \(|\eta | < 2.37\), and to be outside the transition region \(1.37< \left| \eta \right| < 1.52\) between the central and forward portions of the EM calorimeter. Finally, the electron track is required to be consistent with originating from the primary vertex in both the \(r-z\) and \(r-\phi \) planes. Further details of the reconstruction of electrons can be found in Refs. [40] and [41].

Electromagnetic clusters are classified as photon candidates provided that they either have no matched track or have one or more matched tracks consistent with having arisen from a photon conversion. Based on the characteristics of the longitudinal and transverse shower development in the EM calorimeter, photons are classified as “loose” or “tight”, with the tight requirements leading to a more pure but less efficient selection of photons relative to that of the loose requirements [42]. Photon candidates are required to have \(p_{\text {T}}> 25~\text{GeV}\), to be within \(\left| \eta \right| < 2.37\), and to be outside the transition region \(1.37< \left| \eta \right| < 1.52\). Additionally, an isolation requirement is imposed: after correcting for contributions from pile-up and the deposition ascribed to the photon itself, the energy within a cone of \(\Delta R = 0.4\) around the cluster barycentre is required to be less than \(2.45~\text{GeV} + 0.022 \times p_{\mathrm {T}}^{\gamma }\), where \(p_{\mathrm {T}}^{\gamma }\) is the transverse momentum of the cluster. In the case that an EM calorimeter deposition identified as a photon overlaps the cluster of an identified electron within a cone of \(\Delta R = 0.4\), the photon candidate is discarded and the electron candidate is retained. Further details of the reconstruction of photons can be found in Ref. [42].
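
The isolation requirement quoted above can be expressed compactly; the sketch below assumes the cone energy has already been corrected for pile-up and for the photon's own deposition, with all quantities in GeV.

```python
def passes_photon_isolation(iso_energy_gev, photon_pt_gev):
    """Apply the DeltaR = 0.4 cone isolation requirement quoted in the text."""
    return iso_energy_gev < 2.45 + 0.022 * photon_pt_gev

# For an 80 GeV photon the threshold is 2.45 + 0.022*80 = 4.21 GeV
print(passes_photon_isolation(iso_energy_gev=3.5, photon_pt_gev=80.0))  # True
```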

Muon candidates make use of reconstructed tracks from the inner detector as well as information from the muon system [43]. Muons are required to be either “combined”, for which the muon is reconstructed independently in both the muon spectrometer and the inner detector and then combined, or “segment-tagged”, for which the muon spectrometer is used to tag tracks as muons, without requiring a fully reconstructed candidate in the muon spectrometer. Muons are required to have \(p_{\text {T}}> 25~\text{GeV}\) and \(|\eta |<2.7\), with the muon track required to be consistent with originating from the primary vertex in both the \(r-z\) and \(r-\phi \) planes.

Jets are reconstructed from three-dimensional energy clusters [44] in the electromagnetic and hadronic calorimeters using the anti-\(k_t\) algorithm [45] with a radius parameter R = 0.4. Each cluster is calibrated to the electromagnetic scale prior to jet reconstruction. The reconstructed jets are then calibrated to particle level by the application of a jet energy scale derived from simulation and in situ corrections based on 8 TeV data [46, 47]. In addition, the expected average energy contribution from pile-up clusters is subtracted using a factor dependent on the jet area [46]. Track-based selection requirements are applied to reject jets with \(p_{\text {T}}{} < 60\) GeV and \(|\eta | < 2.4\) that originate from pile-up interactions [48]. Once calibrated, jets are required to have \(p_{\text {T}}>\) 40 GeV and \(|\eta | < 2.8\).

To resolve the ambiguity that arises when a photon is also reconstructed as a jet, if a jet and a photon are reconstructed within an angular distance \(\Delta R = 0.4\) of one another, the photon is retained and the jet is discarded. If a jet and an electron are reconstructed within an angular distance \(\Delta R = 0.2\) of one another, the electron is retained and the jet is discarded; if \(0.2< \Delta R < 0.4\) then the jet is retained and the electron is discarded. Finally, in order to suppress the reconstruction of muons arising from showers induced by jets, if a jet and a muon are found with \(\Delta R < 0.4\) the jet is retained and the muon is discarded.
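
A minimal sketch of this overlap-removal precedence, assuming lightweight reconstructed objects that expose only \(\eta \) and \(\phi \) (represented here by a simple namedtuple):

```python
import math
from collections import namedtuple

Obj = namedtuple("Obj", ["eta", "phi"])  # minimal stand-in for a reconstructed object

def delta_r(a, b):
    """Angular distance between two objects, with the phi difference wrapped into [-pi, pi]."""
    dphi = (a.phi - b.phi + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a.eta - b.eta, dphi)

def remove_overlaps(photons, electrons, muons, jets):
    """Apply the photon/electron/muon versus jet precedence rules described above."""
    # A jet within DeltaR < 0.4 of a photon is discarded; the photon is kept
    jets = [j for j in jets if all(delta_r(j, ph) >= 0.4 for ph in photons)]
    # A jet within DeltaR < 0.2 of an electron is discarded; the electron is kept
    jets = [j for j in jets if all(delta_r(j, el) >= 0.2 for el in electrons)]
    # An electron within 0.2 <= DeltaR < 0.4 of a surviving jet is discarded
    electrons = [el for el in electrons if all(delta_r(el, j) >= 0.4 for j in jets)]
    # A muon within DeltaR < 0.4 of a jet is discarded; the jet is kept
    muons = [mu for mu in muons if all(delta_r(mu, j) >= 0.4 for j in jets)]
    return photons, electrons, muons, jets
```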

The missing transverse momentum \(\varvec{p}_{\mathrm {T}}^\mathrm{miss}\) is defined as the negative vector sum of the \(p_{\text {T}}\) of all reconstructed physics objects in the event, with an extra term added to account for soft energy in the event that is not associated with any of the objects. This “\(E_{\text {T}}^{\text {miss}}\) soft term” is calculated from inner-detector tracks with \(p_{\text {T}}\) above 400 MeV matched to the primary vertex to make it less dependent upon pile-up contamination [49, 50]. The scalar observable \(E_{\text {T}}^{\text {miss}}\) is defined to be the magnitude of the resulting \(\varvec{p}_{\mathrm {T}}^\mathrm{miss}\) vector.
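
A minimal sketch of this definition, assuming each physics object and each soft-term track is represented by a \((p_{\text {T}}, \phi )\) pair in (GeV, rad):

```python
import math

def missing_transverse_momentum(objects, soft_tracks):
    """Return (E_T^miss, phi) as the magnitude and azimuth of the negative vector
    sum of the pT of all reconstructed objects plus the track-based soft term."""
    px = -sum(pt * math.cos(phi) for pt, phi in objects + soft_tracks)
    py = -sum(pt * math.sin(phi) for pt, phi in objects + soft_tracks)
    return math.hypot(px, py), math.atan2(py, px)
```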

Several additional observables are defined to help in the discrimination of SM backgrounds from potential GGM signals. The total visible transverse energy, \(H_{\text {T}}\), is calculated as the scalar sum of the transverse momenta of the reconstructed photons and any additional leptons and jets in the event. The “effective mass”, \(m_{\mathrm {eff}}\), is defined as the scalar sum of \(H_{\text {T}}\) and \(E_{\text {T}}^{\text {miss}}\). The minimum jet–\(\varvec{p}_{\mathrm {T}}^\mathrm{miss}\) separation, \(\Delta \phi _{\mathrm {min}}(\mathrm {jet},\varvec{p}_{\mathrm {T}}^\mathrm{miss}{})\), is defined as the minimum azimuthal angle between the missing transverse momentum vector and the two leading (highest-\(p_{\text {T}}\)) jets with \(p_{\text {T}}> 75\) GeV in the event, if they are present. If no such jets exist, no requirement is placed on this observable.
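
The three observables can be sketched as follows, using the same \((p_{\text {T}}, \phi )\) representation as above; the treatment of events without qualifying jets follows the convention stated in the text.

```python
import math

def event_observables(photons, leptons, jets, met, met_phi):
    """Compute H_T, m_eff and the minimum jet-MET azimuthal separation.

    photons/leptons/jets are lists of (pt, phi) pairs in (GeV, rad);
    met and met_phi describe the missing transverse momentum vector.
    """
    h_t = sum(pt for pt, _ in photons + leptons + jets)
    m_eff = h_t + met
    # Only the two leading jets with pT > 75 GeV enter Delta-phi_min
    leading = sorted((j for j in jets if j[0] > 75.0), key=lambda j: -j[0])[:2]
    dphi_min = None  # no requirement is applied if no such jets exist
    if leading:
        dphi_min = min(abs((phi - met_phi + math.pi) % (2 * math.pi) - math.pi)
                       for _, phi in leading)
    return h_t, m_eff, dphi_min
```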

5 Event selection

The data sample is selected by an HLT requirement of two loose photons, each with \(p_{\text {T}}\) greater than 50 GeV. Offline, two tight photons with \(p_{\text {T}}> 75~\text{GeV}\) are required. In order to ensure that \(E_{\text {T}}^{\text {miss}}\) is measured well, events are removed from the data sample if they contain jets likely to be produced by beam backgrounds, cosmic rays or detector noise [51].

To exploit the significant undetectable transverse momentum carried away by the gravitinos, a requirement on \(E_{\text {T}}^{\text {miss}}\) is imposed on the diphoton event sample. To take advantage of the high production energy scale associated with signal events near the expected reach of the analysis, an additional requirement on \(m_{\mathrm {eff}}\) is applied. To further ensure the accurate reconstruction of \(E_{\text {T}}^{\text {miss}}\) and to suppress backgrounds associated with the mismeasurement of hadronic jets, a requirement of \(\Delta \phi _{\mathrm {min}}(\mathrm {jet},\varvec{p}_{\mathrm {T}}^\mathrm{miss}{})> 0.5\) is imposed. Figure 2 shows the \(E_{\text {T}}^{\text {miss}}\) and \(m_{\mathrm {eff}}\) distributions of the diphoton sample after the application of requirements of \(p_{\mathrm {T}}^{\gamma }> 75\) GeV on each selected photon and of \(\Delta \phi _{\mathrm {min}}(\mathrm {jet},\varvec{p}_{\mathrm {T}}^\mathrm{miss}{})> 0.5\), but with no requirements yet imposed on \(E_{\text {T}}^{\text {miss}}\) and \(m_{\mathrm {eff}}\). Also shown are the expected contributions from SM processes, estimated using the combination of Monte Carlo and data-driven estimates discussed in Sect. 6.

As discussed in Sect. 1, the GGM signal space is parameterized by the masses of the gluino (\(m_{\tilde{g}}\)) and bino-like NLSP (\(m_{\tilde{\chi }^{0}_{1}}\)). The sensitivity of this analysis was optimized for two signal scenarios near the expected reach in \(m_{\tilde{g}}\): high and low neutralino-mass benchmark points were chosen with \((m_{\tilde{g}},m_{\tilde{\chi }^{0}_{1}}) = (1500,1300)\) GeV and \((m_{\tilde{g}},m_{\tilde{\chi }^{0}_{1}}) = (1500,100)\) GeV, respectively.

Fig. 2

Distributions of \(E_{\text {T}}^{\text {miss}}\) (left) and \(m_{\mathrm {eff}}\) (right) for the diphoton sample after the application of requirements of \(p_{\mathrm {T}}^{\gamma }> 75\) GeV on each selected photon and of \(\Delta \phi _{\mathrm {min}}(\mathrm {jet},\varvec{p}_{\mathrm {T}}^\mathrm{miss}{})> 0.5\), but with no requirements imposed on \(E_{\text {T}}^{\text {miss}}\) and \(m_{\mathrm {eff}}\). The expected contributions from SM processes are estimated using the combination of Monte Carlo and data-driven estimates discussed in Sect. 6. Uncertainties (shaded bands for MC simulation, error bars for data) are statistical only. The yellow band represents the uncertainty in the data/SM ratio that arises from the statistical limitations of the estimates of the various expected sources of SM background. Also shown are the expected contributions from the GGM signal for the two benchmark points, \((m_{\tilde{g}},m_{\tilde{\chi }^{0}_{1}}) = (1500,1300)\) GeV and \((m_{\tilde{g}},m_{\tilde{\chi }^{0}_{1}}) = (1500,100)\) GeV. The final bin of each plot includes the ‘overflow’ contribution that lies above the nominal upper range of the plot

Based on background estimates derived from the MC samples described in Sect. 2, the selection requirements were optimized as a function of \(E_{\text {T}}^{\text {miss}}\), \(m_{\mathrm {eff}}\) and \(p_{\mathrm {T}}^{\gamma }\) by maximizing the expected discovery sensitivity of the analysis, for each of the two signal benchmark points. The selected values of the minimum requirements on all three optimization parameters were found to be very similar for the low and high neutralino-mass benchmark points, leading to the definition of a single signal region (SR). The selection requirements for this SR are shown in Table 1.
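
A minimal sketch of the resulting SR selection, using the thresholds quoted elsewhere in the text (two tight, isolated photons with \(p_{\text {T}}> 75\) GeV, \(\Delta \phi _{\mathrm {min}}(\mathrm {jet},\varvec{p}_{\mathrm {T}}^\mathrm{miss}{})> 0.5\), \(E_{\text {T}}^{\text {miss}}> 175\) GeV and \(m_{\mathrm {eff}}> 1500\) GeV) as stand-ins for the entries of Table 1:

```python
def passes_signal_region(photon_pts, met, m_eff, dphi_min):
    """Sketch of the SR selection; photon_pts are the pT values (GeV) of photons
    already satisfying the tight identification and isolation requirements."""
    if sum(1 for pt in photon_pts if pt > 75.0) < 2:
        return False
    if dphi_min is not None and dphi_min <= 0.5:  # no requirement if no qualifying jets
        return False
    return met > 175.0 and m_eff > 1500.0
```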

Table 1 Requirements defining the signal region (SR) and the \(W\gamma \gamma \) CR referred to in Sect. 6

6 Background estimation

Processes that contribute to the Standard Model background of diphoton final states can be divided into three primary components. The largest contribution to the inclusive diphoton spectrum is the “QCD background”, which can be further divided into a contribution from two real photons produced in association with jets, and a “jet-faking-photon” contribution arising from \(\gamma \)+jet and multijet events for which one or both reconstructed photons are faked by a jet, typically by producing a \(\pi ^{0}\rightarrow \gamma \gamma \) decay that is misidentified as a prompt photon. An “electron-faking-photon background” arises predominantly from W, Z, and \(\; t\bar{t}\) events, possibly accompanied by additional jets and/or photons, for which an electron is misidentified as a photon. Electron-to-photon misidentification is due primarily to instances for which an electron radiates a high-momentum photon as it traverses the material of the ATLAS inner detector. Last, an “irreducible background” arises from \(W\gamma \gamma \) and \(Z\gamma \gamma \) events. These backgrounds are estimated with a combination of data-driven and simulation-based methods described as follows.

The component of the QCD background arising from real diphoton events (\(\gamma \gamma \)) is estimated directly from diphoton MC events, rescaled as a function of \(E_{\text {T}}^{\text {miss}}\) and the number of selected jets to match the respective distributions for the inclusive diphoton sample in the range \(E_{\text {T}}^{\text {miss}}<\) 100 GeV. While this background dominates the inclusive diphoton sample, it falls very steeply in \(E_{\text {T}}^{\text {miss}}\), making it small relative to backgrounds with real \(E_{\text {T}}^{\text {miss}}\) for \(E_{\text {T}}^{\text {miss}}\gtrsim \) 100 GeV, independent of the reweighting.

The component of the QCD background arising from jets faking photons and the background arising from electrons faking photons are both estimated with a data-driven “fake-factor” method, for which events in data samples enriched in the background of interest are weighted by factors parameterizing the misidentification rate.
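
Schematically, both estimates reduce to weighting a background-enriched control sample by a kinematics-dependent fake factor; the function below and the flat 20 % value used in the example are purely illustrative.

```python
def fake_factor_estimate(control_events, fake_factor):
    """Weight each event in a background-enriched control sample by its fake factor.

    control_events: list of (pt, eta) for the object failing the nominal selection
    (e.g. a non-isolated photon or a reconstructed electron);
    fake_factor(pt, eta): measured misidentification rate for that kinematic bin.
    """
    return sum(fake_factor(pt, eta) for pt, eta in control_events)

# Illustrative flat 20% fake factor (the measured values vary between 10 and 30%)
estimate = fake_factor_estimate([(80.0, 0.5), (120.0, 1.1)], lambda pt, eta: 0.20)
print(estimate)  # -> 0.4 events
```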

To estimate the jet-faking-photon fake factor, the jet-faking-photon background is enriched by using an inverted isolation requirement, selecting events only if they contain one or more non-isolated photons. The relative probability of an energy cluster being reconstructed as an isolated, rather than non-isolated, photon is known as the photon-isolation fake factor, and is measured in an orthogonal “non-tight” sample of photons. The selection of this sample requires that all the tight photon identification requirements be satisfied, with the exception that at least one of the requirements on the calorimeter variables defined only with the first (strip) layer of the electromagnetic calorimeter fails. This leads to a sample enriched in identified (non-tight) photons that are actually \(\pi ^0\)s within jets. The correlation between the isolation variable and the photon identification requirements was found to be small and to have no significant impact on the estimation of the jet-faking-photon fake factor. The fake factors depend upon \(p_{\text {T}}\) and \(\eta \), and vary between 10 and 30 %. The jet-faking-photon background is then estimated by weighting events with non-isolated photons by the applicable photon-isolation fake factor.

The electron-faking-photon background is estimated with a similar fake-factor method. For this case, the electron-faking-photon background is enriched by selecting events with a reconstructed electron instead of a second photon. Fake factors for electrons being misidentified as photons are then measured by comparing the ratio of reconstructed \(e\gamma \) to ee events arising from Z bosons decaying to electron–positron pairs, selected within the mass range of 75–105 GeV. The electron-faking-photon background is then estimated by weighting selected \(e\gamma \) events by their corresponding fake factors, which are typically a few percent.

The irreducible background from \(W\gamma \gamma \) events is estimated with MC simulation; however, because it is a potentially dominant background contribution, the overall normalization is derived in an \(\ell \gamma \gamma \) control region (\(W\gamma \gamma \) CR) as follows. Events in the \(W\gamma \gamma \) CR are required to have two tight, isolated photons with \(p_{\text {T}}> 50\) GeV, and exactly one selected lepton (electron or muon) with \(p_{\text {T}}> 25\) GeV. As with the SR, events are required to have \(\Delta \phi _{\mathrm {min}}(\mathrm {jet},\varvec{p}_{\mathrm {T}}^\mathrm{miss}{})> 0.5\), so that the direction of the missing transverse momentum vector is not aligned with that of any high-\(p_{\text {T}}\) jet. To ensure that the control sample has no overlap with the signal region, events are discarded if \(E_{\text {T}}^{\text {miss}}> 175\) GeV. While these requirements target \(W\gamma \gamma \) production, they are also expected to select appreciable backgrounds from \(t\bar{t}\gamma \), \(Z\gamma \) and \(Z\gamma \gamma \) events, and thus additional requirements are applied. To suppress \(t\bar{t}\gamma \) contributions to the \(W\gamma \gamma \) CR, events are discarded if they contain more than two selected jets. To suppress \(Z\gamma \) contributions, events are discarded if there is an e\(\gamma \) pair in the event with \(83< m_{e\gamma } < 97\) GeV. Finally, to suppress \(Z\gamma \gamma \) contributions, events with \(E_{\text {T}}^{\text {miss}}<50\) GeV are discarded. The event selection requirements for the \(W\gamma \gamma \) CR are summarized in Table 1. A total of seven events are observed in this \(W\gamma \gamma \) control region, of which 1.6 are expected to arise from sources other than \(W\gamma \gamma \) production. The MC expectation for the \(W\gamma \gamma \) process is 1.9 events, leading to a \(W\gamma \gamma \) scale factor of \(2.9 \pm 1.4\), assuming that no GGM signal events contaminate the \(W\gamma \gamma \) CR. This scale factor is consistent with that of the corresponding \(\sqrt{s} =\) 8 TeV analysis [3], and is attributed to a large and uncertain NLO correction to the \(W\gamma \gamma \) production cross section that depends strongly upon the momentum of the \(W\gamma \gamma \) system [52]. When setting limits on specific signal models, a simultaneous fit to the control region and the signal region is performed, allowing both the signal and \(W\gamma \gamma \) contributions to float to their best-fit values.
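
The central value of the scale factor follows directly from the numbers quoted above; the uncertainty computed below retains only the Poisson term of the CR yield and is therefore a simplified illustration of the quoted \({\pm }1.4\), and the small difference in the central value reflects the rounding of the inputs.

```python
import math

n_data = 7        # events observed in the W-gamma-gamma CR
n_other = 1.6     # expected contribution from sources other than W-gamma-gamma
n_wgg_mc = 1.9    # MC expectation for the W-gamma-gamma process

scale_factor = (n_data - n_other) / n_wgg_mc
stat_unc = math.sqrt(n_data) / n_wgg_mc  # Poisson term only; other sources omitted

# ~2.8 +/- 1.4 with these rounded inputs, consistent with the quoted 2.9 +/- 1.4
print(f"scale factor = {scale_factor:.1f} +/- {stat_unc:.1f}")
```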

Last, the irreducible background from \(Z(\rightarrow \nu \nu )\gamma \gamma \) events, the only background without a data-derived normalization, is estimated with simulation and found to be 0.02 events. A \({\pm }100~\%\) uncertainty is conservatively applied to account for modelling uncertainties [53].

A summary of the background contributions to the signal region is presented in Table 2. The QCD background is estimated to be at the level of a few hundredths of an event at high \(E_{\text {T}}^{\text {miss}}\) and high \(m_{\mathrm {eff}}\), although no events are observed for either the diphoton Monte Carlo or the jet-faking-photon control sample when the full signal region requirements are applied. Relaxing the \(m_{\mathrm {eff}}\) requirement, and using a conservative extrapolation of the expected QCD background as a function of \(m_{\mathrm {eff}}\), the combined QCD background is estimated to be \(0.05^{+0.20}_{-0.05}\) events. The estimate of the electron-faking-photon background is established by the presence of two \(e\gamma \) events in the background model passing the SR requirements, yielding a background estimate of \(0.03 \pm 0.02\) events after application of the fake-factor weights. Summing all background contributions, a total of \(0.27^{+0.22}_{-0.10}\) SM events are expected in the SR, with the largest contribution, \(0.17 \pm 0.08\) events, expected to arise from \(W\gamma \gamma \) production. The background modelling was found to agree well with data in several validation regions, including the inclusive high-\(p_{\text {T}}\) diphoton sample, as well as event selections with relaxed \(m_{\mathrm {eff}}\) and \(E_{\text {T}}^{\text {miss}}\) requirements relative to those of the SR.

Table 2 Summary of background estimates by source, and total combined background, in the signal region. The uncertainties shown include the total statistical and systematic uncertainty. Also shown is the expected number of signal events for the benchmark points \((m_{\tilde{g}},m_{\tilde{\chi }^{0}_{1}}) = (1500,100)\) and \((m_{\tilde{g}},m_{\tilde{\chi }^{0}_{1}}) = (1500,1300)\), where all masses are in GeV

7 Signal efficiencies and uncertainties

GGM signal acceptances and efficiencies are estimated using MC simulation for each simulated point in the gluino–bino parameter space, and vary significantly across this space due to variations in the photon \(p_{\text {T}}\), \(E_{\text {T}}^{\text {miss}}\), and \(m_{\mathrm {eff}}\) spectra. For example, for a gluino mass of 1600 GeV, the acceptance-times-efficiency product varies between 14 and 28 %, reaching a minimum as the NLSP mass approaches the Z boson mass, below which the photonic branching fraction of the NLSP rises to unity. Table 3 summarizes the contributions to the systematic uncertainty of the signal acceptance-times-efficiency, which are discussed below.

Making use of a bootstrap method [54], the efficiency of the diphoton trigger is determined to be greater than 99 %, with an uncertainty of less than 1 %. The uncertainty in the integrated luminosity is \({\pm }2.1~\%\). It is derived, following a methodology similar to that detailed in Ref. [55], from a calibration of the luminosity scale using x–y beam-separation scans performed in August 2015.

The reconstruction and identification efficiency for tight, isolated photons is estimated with complementary data-driven methods [42]. Photons selected kinematically as originating from radiative decays of a Z boson (\(Z \rightarrow \ell ^+ \ell ^- \gamma \) events) are used to study the photon reconstruction efficiency as a function of \(p_{\text {T}}\) and \(\eta \). Independent measurements making use of a tag-and-probe approach with \(Z \rightarrow ee\) events, with one of the electrons used to probe the calorimeter response to electromagnetic depositions, also provide information about the photon reconstruction efficiency. For photons with \(p_{\text {T}}> 75\) GeV, the identification efficiency varies between 93 and 99 %, depending on the values of the photon \(p_{\text {T}}\) and \(|\eta |\) and whether the photon converted in the inner detector. The uncertainty also depends upon these factors, and is generally no more than a few percent.

Uncertainties in the photon and jet energy scales lead to uncertainties in the signal acceptance-times-efficiency that vary across the GGM parameter space, and contribute the dominant source of acceptance-times-efficiency uncertainty in certain regions of the parameter space. The photon energy scale is determined using samples of \(Z \rightarrow ee\) and \(J/\psi \rightarrow ee\) events [56]. The jet-energy scale uncertainty is constrained from an assessment of the effect of uncertainties in the modelling of jet properties and by varying the response to differing jet flavour composition in MC simulations, as well as from in situ measurements with 8 TeV dijet data [46, 47].

Uncertainties in the values of whole-event observables, such as \(E_{\text {T}}^{\text {miss}}\) and \(m_{\mathrm {eff}}\), arise from uncertainties in the energy of the underlying objects from which they are constructed. Uncertainties in the \(E_{\text {T}}^{\text {miss}}\) soft term due to uncertainties in hadronic fragmentation, detector material modelling and energy scale were found to introduce an uncertainty of less than 0.1 % in the signal acceptance-times-efficiency. The uncertainty due to pile-up is estimated by varying the mean of the distribution of the number of interactions per bunch crossing overlaid in the simulation by \({\pm }11~\%\).

Including the contribution from the statistical limitations of the MC samples used to model the GGM parameter space, the quadrature sum of the individual systematic uncertainties in the signal reconstruction efficiency is, on average, about 4 %. Adding the uncertainty in the integrated luminosity gives a total systematic uncertainty of about 5 %.
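
Using the averaged values quoted above, the combination is a simple quadrature sum:

```python
import math

reco_unc = 4.0   # average signal-reconstruction systematic uncertainty [%]
lumi_unc = 2.1   # integrated-luminosity uncertainty [%]

total = math.sqrt(reco_unc ** 2 + lumi_unc ** 2)
print(f"total systematic uncertainty ~ {total:.1f}%")  # ~4.5%, i.e. about 5%
```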

Table 3 Summary of individual and total contributions to the systematic uncertainty of the signal acceptance-times-efficiency. Relative uncertainties are shown in percent, and as the average over the full range of the (\(m_{\tilde{g}}\),\(m_{\tilde{\chi }^{0}_{1}}\)) grid. Because the individual contributions are averaged over the grid only for that particular source, the average total uncertainty is not exactly equal to the quadrature sum of the individual average uncertainties

8 Results

An accounting of the numbers of events observed in the SR after the successive application of the selection requirements is shown in Table 4 along with the size of the expected SM background. After the full selection is applied, no events are observed in the SR, to be compared to an expectation of \(0.27^{+ 0.22}_{-0.10}\) SM events.

Table 4 Numbers of events observed in the SR after the successive application of the selection requirements, as well as the size of the expected SM background

Based on the observation of zero events in the SR and the magnitude of the estimated SM background expectation and uncertainty, an upper limit is set on the number of events from any scenario of physics beyond the SM, using the profile likelihood and \(CL_s\) prescriptions [57]. The various sources of experimental uncertainty, including those in the background expectation, are treated as Gaussian-distributed nuisance parameters in the likelihood definition. Assuming that no events due to physical processes beyond those of the SM populate the \(\ell \gamma \gamma \) CR used to estimate the \(W(\rightarrow \ell \nu )+\gamma \gamma \) background, the observed 95 % confidence-level (CL) upper limit on the number of non-SM events in the SR is found to be 3.0. Taking into account the integrated luminosity of \(3.2{~\mathrm{fb}^{-1}}\), this number-of-event limit translates into a 95 % CL upper limit on the visible cross section for new physics, defined by the product of cross section, branching fraction, acceptance and efficiency, of 0.93 fb.
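
The translation from the event-count limit to the visible cross-section limit is a simple ratio; with the rounded inputs below it reproduces the quoted 0.93 fb up to rounding.

```python
n_limit = 3.0      # 95% CL upper limit on non-SM events in the SR
luminosity = 3.2   # integrated luminosity [fb^-1]

sigma_vis_limit = n_limit / luminosity   # limit on cross section x BR x acceptance x efficiency
print(f"visible cross-section limit ~ {sigma_vis_limit:.2f} fb")  # ~0.94 fb (quoted: 0.93 fb)
```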

By considering, in addition, the value and uncertainty of the acceptance-times-efficiency of the selection requirements associated with the SR, as well as the NLO+NLL GGM cross section [20–24], which varies steeply with gluino mass, 95 % CL lower limits may be set on the mass of the gluino as a function of the mass of the lighter bino-like neutralino, in the context of the GGM scenario described in Sect. 1. The resulting observed limit on the gluino mass is exhibited, as a function of neutralino mass, in Fig. 3. For the purpose of establishing these model-dependent limits, the \(W(\rightarrow \ell \nu )+\gamma \gamma \) background estimate and the limit on the possible number of events from new physics are extracted from a simultaneous fit to the SR and \(W(\rightarrow \ell \nu )+\gamma \gamma \) control region, although for a gluino mass in the range of the observed limit the signal contamination in the \(W(\rightarrow \ell \nu )+\gamma \gamma \) control sample is less than 0.03 events for any value of the neutralino mass. Also shown in this figure is the expected limit, including its statistical and background uncertainty range, as well as observed limits for a SUSY model cross section \({\pm }1\) standard deviation of theoretical uncertainty from its central value. Because the background expectation is close to zero and no events are observed in data, the expected and observed limits nearly overlap. The observed lower limit on the gluino mass is roughly independent of the neutralino mass, reaching a minimum value of approximately 1650 GeV at a neutralino mass of 250 GeV. Within the context of this model, gluino masses as low as 400 GeV have been excluded in a prior analysis making use of 7 TeV ATLAS data [58].

Fig. 3

Exclusion limits in the neutralino–gluino mass plane at 95 % CL. The observed limits are exhibited for the nominal SUSY model cross section, as well as for a SUSY cross section increased and lowered by one standard deviation of the cross-section systematic uncertainty. Also shown is the expected limit, as well as the \({\pm }1\) standard-deviation range of the expected limit, which is asymmetric because of the low count expected. Because the background expectation is close to zero and the observed number of events is zero, the expected and observed limits nearly overlap. The previous limit from ATLAS using 8 TeV data [3] is shown in grey. Within the context of this model, gluino masses as low as 400 GeV have been excluded in a prior analysis making use of 7 TeV ATLAS data [58]

9 Conclusion

A search has been made for a diphoton + \(E_{\text {T}}^{\text {miss}}\) final state using the ATLAS detector at the Large Hadron Collider in \(3.2{~\mathrm{fb}^{-1}}\) of proton–proton collision data taken at a centre-of-mass energy of 13 TeV in 2015. At least two photon candidates with \(p_{\text {T}}> 75~\text{GeV}\) are required, as well as minimum values of 175 and 1500 GeV of the missing transverse momentum and effective mass of the event, respectively. The resulting signal region targets events with pair-produced high-mass gluinos each decaying to either a high-mass or low-mass bino-like neutralino. Using a combination of data-driven and direct Monte Carlo approaches, the SM background is estimated to be \(0.27^{+0.22}_{-0.10}\) events, with most of the expected background arising from the production of a W boson in association with two energetic photons. No events are observed in the signal region; considering the expected background and its uncertainty, this observation implies model-independent 95 % CL upper limits of 3.0 events (0.93 fb) on the number of events (visible cross section) due to physics beyond the Standard Model. In the context of a generalized model of gauge-mediated supersymmetry breaking with a bino-like NLSP, this leads to a lower limit of 1650 GeV on the mass of a degenerate octet of gluino states, independent of the mass of the lighter bino-like neutralino. This extends the corresponding limit of 1340 GeV derived from a similar analysis of 8 TeV data by the ATLAS Collaboration.