Measurement of the cross section for inclusive isolated-photon production in pp collisions at √s = 13 TeV using the ATLAS detector

Inclusive isolated-photon production in pp collisions at a centre-of-mass energy of 13 TeV is studied with the ATLAS detector at the LHC using a data set with an integrated luminosity of 3.2 fb⁻¹. The cross section is measured as a function of the photon transverse energy above 125 GeV in different regions of photon pseudorapidity. Next-to-leading-order perturbative QCD and Monte Carlo event-generator predictions are compared to the cross-section measurements and provide an adequate description of the data.


Introduction
The production of prompt photons in proton-proton (pp) collisions, pp → γ + X, provides a testing ground for perturbative QCD (pQCD) with a hard colourless probe. All photons produced in pp collisions that are not secondaries from hadron decays are considered as "prompt". Two processes contribute to prompt-photon production in pp → γ + X: the direct process, in which the photon originates directly from the hard interaction, and the fragmentation process, in which the photon is emitted in the fragmentation of a high transverse momentum (p_T) parton [1,2]. Measurements of inclusive prompt-photon production were used recently to investigate novel approaches to the description of parton radiation [3] and the importance of the resummation of threshold logarithms in QCD and of the electroweak corrections [4]. Comparisons of prompt-photon data and pQCD are usually limited by the theoretical uncertainties associated with the missing higher-order terms in the perturbative expansion. The extension of the recent next-to-next-to-leading-order (NNLO) pQCD calculations for jet production [5] to prompt-photon production 1 will allow a more stringent test of pQCD. To make such a test with small experimental and theoretical uncertainties, it is optimal to perform measurements of prompt-photon production at high photon transverse energies and at the highest possible centre-of-mass energy of the colliding particles.
Since the dominant production mechanism in pp collisions at the LHC proceeds via the qg → qγ process, measurements of prompt-photon production are sensitive at leading order (LO) to the gluon density in the proton [7][8][9][10][11][12][13][14][15][16]. Although prompt-photon data were initially included in the determination of the proton parton distribution functions (PDFs), their use was abandoned some years ago. Since then, theoretical developments [13,14] have shown ways to improve the description of the data in terms of pQCD, and a recent study quantified the impact of prompt-photon data from hadron colliders on the gluon density in the proton [15]. New measurements of prompt-photon production at higher centre-of-mass energies are expected to further constrain the gluon density in the proton when combined with previous data.
1 After completion of the work presented here, first NNLO calculations for prompt-photon production have been completed [6].
These measurements can also be used to tune the Monte Carlo (MC) models to improve the understanding of prompt-photon production. In addition, precise measurements of these processes aid those searches for which they are an important background, such as the search for new phenomena in final states with a photon and missing transverse momentum.
Measurements of prompt-photon production at a hadron collider require isolated photons to avoid the large contribution of photons from decays of energetic π 0 and η mesons inside jets.
The production of inclusive isolated photons in pp collisions has been measured previously at lower centre-of-mass energies. In this analysis, cross sections as functions of the photon transverse energy 2 (E_T^γ) are measured in the range E_T^γ > 125 GeV for different regions of the photon pseudorapidity (η^γ). The threshold in E_T^γ is chosen so as to avoid the low-E_T^γ region, where both the systematic and theoretical uncertainties increase. Next-to-leading-order (NLO) pQCD and MC event-generator predictions are compared to the measurements.

The ATLAS detector
The ATLAS detector [23] is a multi-purpose detector with a forward-backward symmetric cylindrical geometry. It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters, and a muon spectrometer incorporating three large superconducting toroid magnets. The inner-detector system is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector is closest to the interaction region and provides four measurements per track; the innermost layer, known as the insertable B-layer [24], was added in 2014 and provides high-resolution hits at small radius to improve the tracking performance. The pixel detector is followed by the silicon microstrip tracker, which typically provides four three-dimensional measurement points per track. These silicon detectors are complemented by the transition radiation tracker, which enables radially extended track reconstruction up to |η| = 2.0. The calorimeter system covers the range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) electromagnetic calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters; for |η| < 2.5 the LAr calorimeters are divided into three layers in depth. Hadronic calorimetry is provided by a steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadronic endcap calorimeters, which cover the region 1.5 < |η| < 3.2. The solid angle coverage is completed out to |η| = 4.9 with forward copper/LAr and tungsten/LAr calorimeter modules, which are optimised for electromagnetic and hadronic measurements, respectively. 
Events are selected using a first-level trigger implemented in custom electronics, which reduces the maximum event rate of 40 MHz to a design value of 100 kHz using a subset of detector information. Software algorithms with access to the full detector information are then used in the high-level trigger to yield a recorded event rate of about 1 kHz [25].

Data selection
The data used in this analysis were collected with the ATLAS detector during the pp collision running period of 2015, when the LHC operated with a bunch spacing of 25 ns and a centre-of-mass energy of √s = 13 TeV. Only events taken in stable beam conditions and satisfying detector and data-quality requirements are considered. The total integrated luminosity of the collected sample amounts to 3.16 ± 0.07 fb⁻¹ [26,27]. Events were recorded using a single-photon trigger with a transverse energy threshold of 120 GeV. The trigger efficiency for isolated photons with E_T^γ > 125 GeV and |η^γ| < 2.37, excluding 1.37 < |η^γ| < 1.56, is higher than 99%. Events are required to have a reconstructed primary vertex. Primary vertices are formed from sets of two or more reconstructed tracks, each with p_T > 400 MeV and |η| < 2.5, that are mutually consistent with having originated at the same three-dimensional point within the luminous region of the colliding proton beams. If multiple primary vertices are reconstructed, the one with the highest sum of the p_T² of the associated tracks is selected as the primary vertex.
2 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Angular distance is measured in units of ΔR ≡ √((Δη)² + (Δφ)²).
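The coordinate conventions in the footnote and the primary-vertex choice can be illustrated with a minimal Python sketch. This is not part of the ATLAS software; the representation of a vertex as a bare list of track p_T values is a simplification for illustration only.

```python
import math

def pseudorapidity(theta):
    """Pseudorapidity from the polar angle: eta = -ln tan(theta/2)."""
    return -math.log(math.tan(theta / 2.0))

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance sqrt((d_eta)^2 + (d_phi)^2), with phi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def select_primary_vertex(vertices):
    """Pick the vertex with the highest sum of pT^2 of its associated tracks.
    Each vertex is represented here simply as a list of track pT values (a toy model)."""
    return max(vertices, key=lambda tracks: sum(pt * pt for pt in tracks))
```

Note that η = 0 corresponds to θ = π/2 (perpendicular to the beam), and the azimuthal difference must be wrapped before computing ΔR.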
Photon and electron candidates are reconstructed from clusters of energy deposited in the electromagnetic calorimeter. Candidates without a matching track or reconstructed conversion vertex 3 in the inner detector are classified as unconverted photons [28]. Those with a matching reconstructed conversion vertex or a matching track consistent with originating from a photon conversion are classified as converted photons. Those matched to a track consistent with originating from an electron produced in the beam interaction region are classified as electrons.
The photon identification is based primarily on shower shapes in the calorimeter [28]. An initial selection is derived using the information from the hadronic calorimeter and the lateral shower shape in the second layer of the electromagnetic calorimeter, where most of the photon energy is contained. The final tight selection applies stringent criteria [28] to these variables, different for converted and unconverted photon candidates. It also places requirements on the shower shape in the finely segmented first calorimeter layer to ensure the compatibility of the measured shower profile with that originating from a single photon impacting the calorimeter. When applying the photon identification criteria to simulated events, corrections are made for small differences in the average values of the shower-shape variables between data and simulation. The efficiency of the photon identification varies in the range 92–98% for E_T^γ = 125 GeV and 86–98% for E_T^γ = 1 TeV, depending on η^γ and on whether the photon candidate is classified as unconverted or converted [28,29]. For E_T^γ > 125 GeV, the uncertainty in the photon identification efficiency varies between 1% and 5%, depending on η^γ and E_T^γ. The photon energy measurement is made using calorimeter and tracking information. A dedicated energy calibration [30] is then applied to the candidates to account for upstream energy loss and for both lateral and longitudinal leakage; a multivariate regression algorithm to calibrate electron and photon energy measurements was developed and optimised on simulated events. The calibration of the layer energies in the calorimeter is based on measurements performed with 2012 data.

Table 1
Kinematic requirements and number of selected events in data for each phase-space region.
Photons with E_T^γ > 125 GeV and |η^γ| < 2.37 are selected. Candidates in the region 1.37 < |η^γ| < 1.56, which includes the transition region between the barrel and endcap calorimeters, are not considered.
The photon candidate is required to be isolated based on the amount of transverse energy inside a cone of size ΔR = 0.4 in the η–φ plane around the photon candidate, excluding an area of size Δη × Δφ = 0.125 × 0.175 centred on the photon. The isolation transverse energy is computed from topological clusters of calorimeter cells [31] and is denoted by E_T^iso. The measured value of E_T^iso is corrected for leakage of the photon's energy into the isolation cone and for the estimated contributions from the underlying event (UE) and additional inelastic pp interactions (pile-up). The latter two corrections are computed simultaneously on an event-by-event basis [18] and the combined correction is typically 2 GeV. The combined correction is computed using a method suggested in Refs. [32,33]: the k_t jet algorithm [34,35] with jet radius R = 0.5 is used to reconstruct all jets, taking as input topological clusters of calorimeter cells; no explicit transverse momentum threshold is applied. The ambient transverse-energy density for the event (ρ), from pile-up and the underlying event, is computed as the median of the distribution of the ratio of the jet transverse energy to the jet area. Finally, ρ is multiplied by the area of the isolation cone to compute the correction to E_T^iso. In addition, for simulated events, data-driven corrections to E_T^iso are applied such that the peak position in the E_T^iso distribution coincides in data and simulation. After all these corrections, E_T^iso is required to be below an E_T^γ-dependent threshold [20]. The isolation requirement significantly reduces the main background, which consists of multi-jet events where one jet typically contains a π⁰ or η meson that carries most of the jet energy and is misidentified as a photon because it decays into an almost collinear photon pair.
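The jet-area method for the ambient-density correction can be sketched as follows. This is an illustrative Python fragment, not the analysis code: the dictionary representation of jets and the subtraction of the excluded photon core from the cone area are simplifying assumptions.

```python
import math
from statistics import median

def ambient_density(jets):
    """Event-by-event ambient transverse-energy density rho:
    the median of pT/area over the reconstructed jets (no pT threshold)."""
    return median(j["pt"] / j["area"] for j in jets)

def ue_pileup_correction(jets, cone_r=0.4, core=(0.125, 0.175)):
    """rho times the effective isolation-cone area.
    Removing the excluded photon-core window from the cone area is an assumption here."""
    area = math.pi * cone_r ** 2 - core[0] * core[1]
    return ambient_density(jets) * area
```

The median makes the estimate robust against the few hard jets in the event, which would bias a mean-based density upwards.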
A small fraction of the events contain more than one photon candidate satisfying the selection criteria. In such events, the highest-E_T^γ (leading) photon is considered for further study. The total number of data events selected using the requirements discussed above amounts to 1 253 508. A summary of the kinematic requirements, as well as the number of selected events in data in each |η^γ| region, is given in Table 1. The selected sample of events is used to unfold the distribution in E_T^γ separately for each of the four regions in |η^γ| indicated in Table 1; the unfolding is performed using the samples of MC events described in Section 4.1 and the results are compared to the predictions from the Pythia and Sherpa generators as well as to the predictions from NLO pQCD (see Section 8).

Monte Carlo simulations
Samples of MC events were generated to study the characteristics of signal events. The MC programs Pythia 8.186 [36] and Sherpa 2.1.1 [37] were used to generate the simulated events. In both generators, the partonic processes were simulated using tree-level matrix elements, with the inclusion of initial- and final-state parton showers. Fragmentation into hadrons was performed using the Lund string model [38] in the case of Pythia, and in Sherpa events by a modified version of the cluster model [39]. The LO NNPDF2.3 [40] PDFs were used for Pythia (NLO CT10 [41] for Sherpa) to parameterise the proton structure. Both samples include a simulation of the UE. The event-generator parameters were set according to the "A14" tune for Pythia [42] and the "CT10" tune for Sherpa. All the samples of generated events were passed through the Geant4-based [43] ATLAS detector- and trigger-simulation programs [44]. They were reconstructed and analysed by the same program chain as the data. Pile-up from additional pp collisions in the same and neighbouring bunch crossings was simulated by overlaying each MC event with a variable number of simulated inelastic pp collisions generated using Pythia 8 with the A2 tune [45]. The MC events were weighted to reproduce the distribution of the average number of interactions per bunch crossing (μ) observed in the data, referred to as "pile-up reweighting"; in this procedure, the μ value in the data is divided by a factor of 1.16 ± 0.07, a rescaling which improves the agreement between data and simulation in the observed number of primary vertices and recovers the fraction of the visible cross section of inelastic pp collisions as measured in the data [46].
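The pile-up reweighting, including the 1.16 rescaling of the data μ values, can be illustrated with a toy Python sketch. This is a histogram-ratio weighting under simplifying assumptions: binning by rounding to integer μ is an illustrative choice, not the ATLAS procedure.

```python
from collections import Counter

def pileup_weights(mu_data, mu_mc, rescale=1.16):
    """Per-mu event weights that make the simulated mu distribution match the data.
    The data mu values are first divided by the rescale factor (1.16 in the text),
    then both samples are binned (here: rounded to the nearest integer)."""
    data = Counter(round(m / rescale) for m in mu_data)
    mc = Counter(mu_mc)
    n_data, n_mc = len(mu_data), len(mu_mc)
    # weight = (data frequency) / (MC frequency) in each mu bin
    return {mu: (data[mu] / n_data) / (count / n_mc) for mu, count in mc.items()}
```

Applying these weights event by event makes any μ-dependent quantity in the simulation (e.g. the number of reconstructed vertices) agree with data on average.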
The Pythia simulation of the signal includes LO photon-plus-jet events from both direct processes (the hard subprocesses qg → qγ and qq̄ → gγ, called the "hard" component) and photon bremsstrahlung in QCD dijet events (called the "bremsstrahlung" component). The Sherpa samples were generated with LO matrix elements for photon-plus-jet final states with up to three additional partons (2 → n processes with n from 2 to 5); the matrix elements were merged with the Sherpa parton shower [47] using the ME+PS@LO prescription. While the bremsstrahlung component was modelled in Pythia by final-state QED radiation arising from calculations of all 2 → 2 QCD processes, it was accounted for in Sherpa through the matrix elements of 2 → n processes with n ≥ 3; in the generation of the Sherpa samples, a requirement on the photon isolation at the matrix-element level was imposed using the criterion defined in Ref. [48]. 5 The predictions of the MC generators at particle level are defined using those particles with a lifetime τ longer than 10 ps; these particles are referred to as "stable". The particles associated with the overlaid pp collisions (pile-up) are not considered. The particle-level isolation requirement on the photon was built by summing the transverse energy of all stable particles, except for muons and neutrinos, in a cone of size ΔR = 0.4 around the photon direction after the contribution from the UE was subtracted; the same subtraction procedure as used on data was applied at the particle level. Therefore, the cross sections quoted from MC simulations refer to photons satisfying this particle-level isolation requirement.
5 This criterion, commonly called Frixione's criterion, requires the total transverse energy inside a cone of size V around the generated final-state photon, excluding the photon itself, to be below a threshold E_max(V) = ε E_T^γ [(1 − cos V)/(1 − cos R)]^n for all V ≤ R. The parameters were chosen to be R = 0.3, n = 2 and ε = 0.025.
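The smooth-cone criterion of footnote 5 can be sketched in Python as follows. This is an illustrative implementation, with particles represented simply as (E_T, ΔR-to-photon) pairs; the threshold function follows the form quoted in the footnote.

```python
import math

def frixione_isolated(photon_et, particles, R=0.3, n=2, eps=0.025):
    """Smooth-cone (Frixione) isolation: for every sub-cone V <= R, the
    accumulated transverse energy must stay below
    E_max(V) = eps * photon_et * ((1 - cos V) / (1 - cos R))**n.
    `particles` is a list of (et, delta_r_to_photon) pairs, photon excluded."""
    acc = 0.0
    for et, dr in sorted(particles, key=lambda p: p[1]):
        if dr > R:
            break  # outside the isolation cone: no constraint
        acc += et
        e_max = eps * photon_et * ((1 - math.cos(dr)) / (1 - math.cos(R))) ** n
        if acc > e_max:
            return False
    return True
```

Because E_max(V) → 0 as V → 0, the criterion vetoes any hard energy collinear with the photon while still admitting arbitrarily soft radiation, which is what makes the fragmentation contribution infrared-safe to remove at the matrix-element level.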

Next-to-leading-order pQCD predictions
The NLO pQCD predictions presented in this paper are computed using the program Jetphox 1.3.1_2 [49,13]. This program includes a full NLO pQCD calculation of both the direct and fragmentation contributions to the cross section for the pp → γ + X process.
The number of massless quark flavours is set to five. The renormalisation scale μ_R (at which the strong coupling is evaluated), the factorisation scale μ_F (at which the proton PDFs are evaluated) and the fragmentation scale μ_f (at which the fragmentation function is evaluated) are all chosen to be equal to E_T^γ. The calculations are performed using the MMHT2014 [50] parameterisations of the proton PDFs and the BFG set II of parton-to-photon fragmentation functions at NLO [51]. The strong coupling constant is calculated at two loops with α_s(m_Z) = 0.120. Predictions based on other proton PDF sets, namely CT14 [52] and NNPDF3.0 [53], are also computed. The calculations are performed using a parton-level isolation criterion which requires the total transverse energy from the partons inside a cone of size ΔR = 0.4 around the photon direction to be below the threshold used in the measurement. The NLO pQCD predictions refer to the parton level while the measurements refer to the particle level. Since the data are corrected for pile-up and UE effects and the distributions are unfolded to a phase-space definition in which the requirement on E_T^iso at particle level is applied after subtraction of the UE, the parton-to-hadron corrections to the NLO pQCD predictions are expected to be small. This is confirmed by computing the ratio of the particle-level cross section for a Pythia sample with UE effects to the parton-level cross section without UE effects 6: the ratio is consistent with unity within 1% over the measured range in E_T^γ. Therefore, no correction is applied to the NLO pQCD predictions and an uncertainty of 1% is assigned.

Background estimation and signal extraction
A non-negligible background contribution remains in the selected sample, even after imposing the tight identification and isolation requirements on the photon. This background originates mainly from multi-jet processes in which a jet is misidentified as a photon.
The background subtraction relies on a data-driven method based on signal-suppressed control regions. The background contamination in the selected sample is estimated using the same two-dimensional sideband technique as in previous analyses [17,18,54,20,55] and then subtracted bin by bin from the observed yield. In this method, the photon is classified as:
• "isolated", if E_T^iso is below the isolation threshold used in the signal selection;
• "non-isolated", if E_T^iso exceeds that threshold by a fixed gap and E_T^iso < 50 GeV;
• "tight", if it satisfies the tight photon identification criteria;
• "non-tight", if it fails at least one of four tight requirements on the shower-shape variables computed from the energy deposits in the first layer of the electromagnetic calorimeter, but satisfies the tight requirement on the total lateral shower width in the first layer and all the other tight identification criteria [28].
In the two-dimensional plane formed by E_T^iso and the photon identification variables, which are chosen because they are expected to be independent for the background, four regions are defined:
• A: the "signal" region, containing tight, isolated photon candidates;
• B: the "non-isolated" background control region, containing tight, non-isolated photon candidates;
• C: the "non-tight" background control region, containing isolated, non-tight photon candidates;
• D: the background control region containing non-isolated, non-tight photon candidates.
6 The effects of hadronisation and UE are also studied separately; the effects of including the UE do not cancel those of hadronisation and are dominant.
The signal yield N_A^sig in region A is estimated using the relation
N_A^sig = N_A − R_bg · (N_B − c_B N_A^sig)(N_C − c_C N_A^sig) / (N_D − c_D N_A^sig),  (1)
where N_K, with K = A, B, C, D, is the number of events in region K, the c_K are the fractions of signal leaking into the control regions, estimated from simulation, and R_bg ≡ N_A^bg N_D^bg / (N_B^bg N_C^bg) is taken to be unity under the assumption that the isolation and identification variables are uncorrelated for the background. There is an additional background from electrons misidentified as photons, mainly produced in Drell–Yan Z(*)/γ* → e⁺e⁻ and W(*) → eν processes. Such misidentified electrons are largely suppressed by the photon selection. The remaining electron background is estimated using MC techniques and found to be negligible in the phase-space region of the analysis presented here.
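The sideband relation is implicit in the signal yield, since the leakage-corrected control-region counts depend on it. A small fixed-point iteration illustrates how such an equation can be evaluated; this is a sketch, with c_K denoting the fractions of signal leaking into the control regions (zero leakage reduces it to the familiar N_A − R·N_B·N_C/N_D form).

```python
def sideband_signal(n_a, n_b, n_c, n_d,
                    c_b=0.0, c_c=0.0, c_d=0.0, r_bg=1.0, iterations=100):
    """Solve S = N_A - R_bg * (N_B - c_B*S)(N_C - c_C*S)/(N_D - c_D*S)
    for the signal yield S in region A by fixed-point iteration.
    Converges for the small leakage fractions typical of this method."""
    s = n_a  # start from the raw count in the signal region
    for _ in range(iterations):
        s = n_a - r_bg * (n_b - c_b * s) * (n_c - c_c * s) / (n_d - c_d * s)
    return s
```

With no leakage, 1000 events in A and 200, 100, 50 events in B, C, D give S = 1000 − 200·100/50 = 600; a 5% leakage into B pulls the estimate up, since part of the "background" in B is actually signal.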

Unfolding
The isolated-photon cross section is measured as a function of E_T^γ in different regions of |η^γ|. The phase-space regions are listed in Table 1. The data distributions, after background subtraction, are unfolded to the particle level using bin-by-bin correction factors determined from the MC samples. These correction factors take into account the efficiency of the selection criteria and the purity and efficiency of the photon reconstruction. The data distributions are unfolded to the particle level via the formula
dσ/dE_T^γ(i) = N_i^sig · C_i / (L · ΔE_T^γ(i)),
where N_i^sig is the background-subtracted yield in bin i, C_i is the correction factor, given by the ratio of the number of particle-level MC events to the number of reconstructed MC events in that bin, L is the integrated luminosity and ΔE_T^γ(i) is the bin width.
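The bin-by-bin unfolding above can be sketched in a few lines of Python; this is an illustrative fragment (not the analysis code), with the correction factor taken as the ratio of particle-level to reconstructed MC yields per bin.

```python
def unfold_bin_by_bin(n_sig, mc_particle, mc_reco, lumi, bin_widths):
    """Differential cross section per bin: background-subtracted yield times
    the MC correction factor C_i = N_i^particle / N_i^reco, divided by the
    integrated luminosity and the bin width."""
    result = []
    for n, p, r, w in zip(n_sig, mc_particle, mc_reco, bin_widths):
        c = p / r  # bin-by-bin correction factor
        result.append(n * c / (lumi * w))
    return result
```

Bin-by-bin unfolding is adequate when bin-to-bin migrations are small, as is the case here given the good E_T^γ resolution relative to the bin widths.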

Experimental uncertainties
The primary sources of systematic uncertainty that affect the measurements are investigated. These sources include photon identification, photon energy scale and resolution, background subtraction, modelling of the final state, pile-up, MC sample statistics, trigger and luminosity.
• Photon identification efficiency. The uncertainty in the photon identification efficiency is estimated from the effect of differences between shower-shape variable distributions in data and simulation. From the studies presented in Ref.
[28], this procedure is found to provide a conservative estimate of the uncertainties. 7 The resulting uncertainty in the measured cross sections increases from 1–2% at E_T^γ = 125 GeV to 2–6% at high E_T^γ.
• Definition of the background control regions. To study the dependence on the specific choices, these definitions are varied over a wide range. The lower limit on E_T^iso in regions B and D is varied by ±1 GeV, which is larger than any difference between data and simulation and still provides a sufficient sample to perform the data-driven subtraction. The upper limit on E_T^iso in regions B and D is removed. The resulting uncertainty in the measured cross sections is negligible. Likewise, the choice of inverted photon identification variables is varied. The analysis is repeated using different sets of variables: tighter (looser) identification criteria are defined by applying tight requirements to an extended (restricted) set of shower-shape variables in the first calorimeter layer. The resulting uncertainty in the measured cross sections is typically smaller than 2%.
• Photon identification and isolation correlation in the background. The photon isolation and identification variables used to define the plane in the two-dimensional sideband method to subtract the background are assumed to be independent for background events (R bg = 1 in Eq. (1)). Any correlation between these variables affects the estimation of the purity of the signal and leads to systematic uncertainties in the background-subtraction procedure. A range in R bg is set to cover the deviations from unity observed for the estimations based on subtracting the signal leakage with either Pythia or Sherpa MC samples. The resulting range in R bg , which is taken as the uncertainty, is 0.8 < R bg < 1.2 for 0.6 < |η γ | < 1.37 and 1.81 < |η γ | < 2.37; for the region |η γ | < 0.6 (1.56 < |η γ | < 1.81), the range is 0.8 < R bg < 1.2 (0.75 < R bg < 1.25) at low E γ T and increases to 0.65 < R bg < 1.35 (0.6 < R bg < 1.4) at high E γ T . The resulting uncertainty in the measured cross sections is typically smaller than 2%.
• Parton-shower and hadronisation model dependence. The effects due to the parton-shower and hadronisation models in the signal purity and correction factors are studied separately; the effects are estimated as the differences observed between the nominal results and those obtained using Sherpa MC samples either for the determination of the signal leakage fractions or the unfolding correction factors. The resulting uncertainties in the measured cross sections are typically smaller than 2%.
• Photon isolation modelling. The differences between the nominal results and those obtained without applying the datadriven corrections to E iso T in simulated events are taken as systematic uncertainties in the measurements due to the modelling of E iso T in the MC simulation. The resulting uncertainty in the measured cross sections is smaller than 2%.
• Signal modelling. The MC simulation of the signal is used to estimate the signal leakage fractions in the two-dimensional sideband method for background subtraction and to compute the bin-by-bin correction factors. The Pythia simulation is used with the mixture of the hard and bremsstrahlung components as predicted by the generator to yield the background-subtracted data distributions and to compute the correction factors; in the predicted mixture, the relative contribution of the bremsstrahlung component amounts to ≈ 30%.
The uncertainty related to the simulation of the hard and bremsstrahlung components is estimated by performing the background subtraction and the calculation of the correction factors using a mixture with either two or zero times the amount of the bremsstrahlung component. The resulting uncertainty in the measured cross sections is typically smaller than 1%.
• Pile-up. The uncertainty is estimated by changing the nominal rescaling factor of 1.16 from 1.09 to 1.23 and re-evaluating the reweighting factors. The resulting uncertainty in the measured cross sections is typically smaller than 0.5%.
The total systematic uncertainty is computed by adding in quadrature the uncertainties from the sources listed above and the statistical uncertainty of the MC samples as well as the uncertainty in the trigger efficiency. The uncertainty in the integrated luminosity is 2.1% [27]. This uncertainty is fully correlated in all bins of all the measured cross sections and is shown separately. The total systematic uncertainty is smaller than 5% for |η^γ| < 1.37. For 1.56 < |η^γ| < 1.81 (1.81 < |η^γ| < 2.37), it starts at ≈ 8% (4%) at low E_T^γ and increases with E_T^γ.
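The quadrature combination of independent uncertainty sources described above can be sketched as follows (an illustrative one-liner; the luminosity uncertainty is kept out of the sum, as in the text).

```python
import math

def total_systematic(components):
    """Quadrature sum of independent systematic components (e.g. in %).
    The luminosity uncertainty is quoted separately and not included here."""
    return math.sqrt(sum(c * c for c in components))
```

For example, independent components of 3% and 4% combine to a 5% total.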

Theoretical uncertainties
The following sources of uncertainty in the theoretical predictions are considered:
• The uncertainty in the NLO pQCD predictions due to terms beyond NLO, estimated by repeating the calculations with the renormalisation, factorisation and fragmentation scales varied around the nominal value.
• The uncertainty in the NLO pQCD predictions due to imperfect knowledge of the proton PDFs, estimated by repeating the calculations using the 50 sets from the MMHT2014 error analysis [50] and applying the Hessian method [57,58] for evaluation of the PDF uncertainties.
• The uncertainty in the NLO pQCD predictions due to that in the value of α s (m Z ) is estimated by repeating the calculations using two additional sets of proton PDFs from the MMHT2014 analysis, for which different values of α s (m Z ) were assumed in the fits, namely α s (m Z ) = 0.118 and 0.122; in this way, the correlation between α s and the PDFs is preserved.
• An uncertainty of 1% is assigned due to the non-perturbative effects of hadronisation and UE (see Section 4.2).
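The Hessian evaluation of the PDF uncertainty over the MMHT2014 error sets can be sketched as follows. This is an illustrative fragment using the symmetric-Hessian form over eigenvector pairs; the asymmetric variant used in practice differs only in how the plus and minus variations are combined.

```python
import math

def hessian_pdf_uncertainty(eigen_pairs):
    """Symmetric Hessian PDF uncertainty on an observable x from its values
    (x_plus, x_minus) for each eigenvector pair:
    delta_x = 0.5 * sqrt(sum_i (x_plus_i - x_minus_i)^2)."""
    return 0.5 * math.sqrt(sum((p - m) ** 2 for p, m in eigen_pairs))
```

A single eigenvector pair giving predictions of 11 and 9 around a central value of 10 thus yields an uncertainty of 1.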
The dominant theoretical uncertainty is that arising from the terms beyond NLO and amounts to 10–15% for all η^γ regions. The uncertainty arising from those in the PDFs increases from 1% at E_T^γ = 125 GeV to 3–4% at high E_T^γ. The uncertainty arising from the value of α_s(m_Z) is below 2%. The total theoretical uncertainty is obtained by adding in quadrature the individual uncertainties listed above and amounts to 10–15%. The predictions of the Pythia and Sherpa MC models are compared to the measurements in Fig. 1. These predictions are normalised to the measured integrated cross section in each η^γ region. The difference in normalisation between data and Pythia (Sherpa) is ∼ +10% (+30%) and is attributed to the fact that these generators are based on tree-level matrix elements, which are affected by a large normalisation uncertainty due to missing higher-order terms. The predictions of both Pythia and Sherpa give a good description of the shape of the measured cross-section distributions for E_T^γ ≲ 500 GeV in the range |η^γ| < 1.37 and in the whole measured E_T^γ range for 1.56 < |η^γ| < 2.37. The NLO pQCD predictions are compared to the measurements in Fig. 3. The predictions based on MMHT2014, CT14 and NNPDF3.0 are very similar, the differences being much smaller than the theoretical scale uncertainties. For most of the points, the theoretical uncertainties are larger than those of experimental origin. Differences of up to 10–15% are observed between data and the predictions, depending on E_T^γ and |η^γ|; since the theoretical uncertainties are 10–15% and cover those differences, it is concluded that the NLO pQCD predictions provide an adequate description of the measurements.

Results
The measured cross sections are larger than those at lower centre-of-mass energies. At both centre-of-mass energies the NLO theoretical uncertainties are of similar size and comparable to the differences between the predictions and the data; since, in addition, the experimental uncertainties are smaller than those differences, the inclusion of NNLO pQCD corrections might improve the description of the two sets of measurements. The fiducial cross section for inclusive isolated-photon production is measured in the phase-space region given by E_T^γ > 125 GeV and |η^γ| < 2.37 (excluding the region 1.37 < |η^γ| < 1.56), including the isolation requirement on E_T^iso; "exp." denotes the sum in quadrature of the statistical and systematic uncertainties and "lumi." denotes the uncertainty due to that in the integrated luminosity, details of which are listed in Table 2.
The fiducial cross section predicted at NLO in pQCD by Jetphox using the MMHT2014 PDFs is 12% lower than the measurement, but consistent with it within the experimental and theoretical uncertainties. In the figures, the inner (outer) error bars represent the statistical uncertainties (the statistical and systematic uncertainties, excluding that in the luminosity, added in quadrature). For most of the points, the inner error bars are smaller than the marker size and thus not visible.

Table 2
Uncertainties (in pb) in the fiducial cross section: photon identification ("γ ID"), photon energy scale and resolution ("γ ES+ER"), lower limit in E iso T in regions B and D ("E iso T Gap"), removal of upper limit in E iso T in regions B and D ("E iso T upp. lim."), variation of the inverted photon identification variables ("γ invert. var."), correlation between γ ID and isolation in the background ("R bg "), signal leakage fractions of Sherpa ("Leak. Sherpa"), unfolding with Sherpa ("Unf. Sherpa"), modelling of E iso T in MC simulation ("E iso T MC"), mixture of hard and bremsstrahlung components in MC samples ("Hard and brem"), pile-up ("Pile-up"), statistical uncertainty in MC samples ("MC stat."), trigger ("Trigger"), statistical uncertainty in data ("Data stat.") and luminosity ("Luminosity").
The experimental systematic uncertainties are evaluated such that the correlations with previous ATLAS measurements of prompt-photon production can be used in fits of the proton parton distribution functions. A combined NNLO pQCD fit of the measurements in pp collisions at centre-of-mass energies of 8 and 13 TeV that takes into account the correlated systematic uncertainties has the potential to constrain the proton PDFs further than either set of measurements alone.
The predictions of the Pythia and Sherpa Monte Carlo models give a good description of the shape of the measured cross-section distributions except for E_T^γ ≳ 500 GeV in the regions |η^γ| < 0.6 and 0.6 < |η^γ| < 1.37. The next-to-leading-order pQCD predictions, using Jetphox and based on different sets of proton PDFs, provide an adequate description of the data within the experimental and theoretical uncertainties. For most of the phase space the theoretical uncertainties are larger than those of experimental nature and are dominated by the terms beyond NLO, from which it is concluded that NNLO pQCD corrections are needed to make an even more stringent test of the theory.

Acknowledgements
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.