Measurement of the cross section for isolated-photon plus jet production in pp collisions at √s = 13 TeV using the ATLAS detector

The dynamics of isolated-photon production in association with a jet in proton–proton collisions at a centre-of-mass energy of 13 TeV are studied with the ATLAS detector at the LHC using a dataset with an integrated luminosity of 3.2 fb⁻¹. Photons are required to have transverse energies above 125 GeV. Jets are identified using the anti-k_t algorithm with radius parameter R = 0.4 and required to have transverse momenta above 100 GeV. Measurements of isolated-photon plus jet cross sections are presented as functions of the leading-photon transverse energy, the leading-jet transverse momentum, the azimuthal angular separation between the photon and the jet, the photon–jet invariant mass and the scattering angle in the photon–jet centre-of-mass system. Tree-level plus parton-shower predictions from Sherpa and Pythia as well as next-to-leading-order QCD predictions from Jetphox and Sherpa are compared to the measurements.


Introduction
The production of prompt photons in association with at least one jet in proton–proton (pp) collisions provides a testing ground for perturbative QCD (pQCD). In pp collisions, all photons that are not secondaries from hadron decays are considered to be "prompt". Measurements of angular correlations between the photon and the jet can be used to probe the dynamics of the hard-scattering process, whose dominant contribution in pp collisions at the LHC is the qg → qγ process. These measurements are also useful for tuning Monte Carlo (MC) models and testing t-channel quark exchange [1,2]. Furthermore, precise measurements of these processes validate the generators used for background studies in searches for physics beyond the Standard Model involving photons, such as the search for new phenomena in final states with a photon and a jet [3,4].
The production of pp → γ + jet + X events proceeds via two processes: direct, in which the photon originates from the hard process, and fragmentation, in which the photon arises from the fragmentation of a coloured high-transverse-momentum (p_T) parton [5,6].¹ The direct and fragmentation contributions are only well defined at leading order (LO) in QCD; at higher orders this distinction is no longer possible. These two processes exhibit distinct behaviours in the observables considered here. Precise measurements test the interplay of direct and fragmentation processes.

Measurements of prompt-photon production in a final state with accompanying hadrons necessitate an isolation requirement on the photon to avoid the large contribution from neutral-hadron decays into photons. The production of isolated photons in association with jets in pp collisions at √s = 7 and 8 TeV was studied by the ATLAS [1,2,7] and CMS [8–10] Collaborations. The increase in the centre-of-mass energy of pp collisions at the LHC to 13 TeV allows the exploration of the dynamics of photon+jet production in a new regime, with the goal of testing the pQCD predictions at higher energy transfers than achieved before. It is also possible to investigate whether the data in the new energy regime are well described by the predictions of parton-shower event generators, such as Sherpa [11] and Pythia [12].

The dynamics of the underlying processes in 2 → 2 hard collinear scattering can be investigated using the variable θ*, where cos θ* ≡ tanh(Δy/2) and Δy is the difference between the rapidities of the two final-state particles. The variable θ* coincides with the scattering polar angle in the centre-of-mass frame for collinear scattering of massless particles, and its distribution is sensitive to the spin of the exchanged particle. For processes dominated by t-channel quark exchange, such as the direct component of photon+jet production, the cross section is expected to rise more slowly as a function of |cos θ*| than for processes dominated by t-channel gluon exchange. The measurements presented in this Letter use the √s = 13 TeV pp collision data recorded by ATLAS. Next-to-leading-order (NLO) QCD predictions from Jetphox [13,14] and Sherpa as well as the tree-level predictions of Pythia and Sherpa are compared to the measurements.

¹ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). The angular distance is measured in units of ΔR ≡ √((Δη)² + (Δφ)²). The rapidity is defined as y = 0.5 ln[(E + p_z)/(E − p_z)], where E is the energy and p_z is the z-component of the momentum, and transverse energy is defined as E_T = E sin θ.
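The construction of cos θ* ≡ tanh(Δy/2) from the photon and leading-jet rapidities can be sketched as follows (an illustrative snippet, not ATLAS analysis code; the example kinematics are invented):

```python
import math

def rapidity(e, pz):
    """Rapidity y = 0.5 * ln[(E + pz) / (E - pz)]."""
    return 0.5 * math.log((e + pz) / (e - pz))

def cos_theta_star(y_photon, y_jet):
    """cos(theta*) = tanh(|dy|/2), with dy the rapidity difference.

    For collinear 2 -> 2 scattering of massless particles this coincides
    with the cosine of the scattering angle in the photon-jet
    centre-of-mass frame.
    """
    return math.tanh(0.5 * abs(y_photon - y_jet))

# Hypothetical example: a central photon and a forward jet (GeV units).
y_gamma = rapidity(e=200.0, pz=50.0)
y_jet = rapidity(e=300.0, pz=250.0)
print(cos_theta_star(y_gamma, y_jet))
```

Because cos θ* depends only on the rapidity difference, it is invariant under longitudinal boosts, which is why the fiducial cuts on η^γ + y^jet-lead can be imposed independently.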

ATLAS detector
The ATLAS detector [15] is a multi-purpose detector with a forward-backward symmetric cylindrical geometry. It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters, and a muon spectrometer incorporating three large superconducting toroid magnets. The inner-detector system is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector is closest to the interaction region and provides four measurements per track; the innermost layer, known as the insertable B-layer [16], provides high-resolution hits at small radius to improve the tracking performance. The pixel detector is followed by the silicon microstrip tracker, which typically provides four three-dimensional space point measurements per track. These silicon detectors are complemented by the transition radiation tracker, which enables radially extended track reconstruction up to |η| = 2.0. The calorimeter system covers the range |η| < 4.9. Within the region |η| < 3.2, electromagnetic (EM) calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) EM calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters; for |η| < 2.5 the EM calorimeter is divided into three layers in depth. Hadronic calorimetry is provided by a steel/scintillator-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadronic endcap calorimeters, which cover the region 1.5 < |η| < 3.2. The solid-angle coverage is completed out to |η| = 4.9 with forward copper/LAr and tungsten/LAr calorimeter modules, which are optimised for EM and hadronic measurements, respectively. 
Events are selected using a first-level trigger implemented in custom electronics, which reduces the maximum event rate of 40 MHz to a design value of 100 kHz using a subset of detector information. Software algorithms with access to the full detector information are then used in the high-level trigger to yield a recorded event rate of about 1 kHz [17].

Data sample and Monte Carlo simulations
The data used in this analysis were collected with the ATLAS detector during the pp collision running period of 2015, when the LHC operated with a bunch spacing of 25 ns and at a centre-of-mass energy of 13 TeV. Only events taken during stable beam conditions and satisfying detector and data-quality requirements, which include the calorimeters and inner tracking detectors being in nominal operation, are considered. The average number of pp interactions per bunch crossing in the dataset is 13. The total integrated luminosity of the collected sample is 3.16 ± 0.07 fb⁻¹.
The uncertainty in the integrated luminosity is 2.1% and is derived, following a methodology similar to that detailed in Ref. [18], from a calibration of the luminosity scale using x-y beam-separation scans performed in August 2015.
Samples of MC events were generated to study the characteristics of signal events. The MC programs Pythia 8.186 and Sherpa 2.1.1 were used to generate the simulated events. In both generators, the partonic processes were simulated using tree-level matrix elements, with the inclusion of initial- and final-state parton showers. Fragmentation into hadrons was performed using the Lund string model [19] in the case of Pythia, and in Sherpa by a modified version of the cluster model [20]. The LO NNPDF2.3 [21] parton distribution function (PDF) set was used for Pythia while the NLO CT10 [22] PDF set was used for Sherpa to parameterise the proton structure. Both samples include a simulation of the underlying event (UE). The event-generator parameters were set according to the ATLAS 2014 tune series (A14 tune) for Pythia [23] and to the tune developed in conjunction with the NLO CT10 PDF set for Sherpa. The Pythia simulation of the signal includes LO photon-plus-jet events from both direct processes (the hard subprocesses qg → qγ and qq̄ → gγ, called the "hard" component) and photon bremsstrahlung in LO QCD dijet events (called the "bremsstrahlung" component). The bremsstrahlung component is modelled by final-state QED radiation arising from calculations of all 2 → 2 QCD processes. In the particle-level phase space of the presented measurements, the fraction of the bremsstrahlung component decreases from 35% at E_T^γ = 125 GeV to 15% at E_T^γ = 1 TeV. The Sherpa samples were generated with LO matrix elements for photon-plus-jet final states with up to three additional partons (2 → n processes with n from 2 to 5); the matrix elements were merged with the Sherpa parton shower using the ME+PS@LO prescription [24]. The bremsstrahlung component is accounted for in Sherpa through the matrix elements of the 2 → n processes with n ≥ 3.
In the generation of the Sherpa samples, a requirement on the photon isolation at the matrix-element level was imposed using the criterion defined in Ref. [25]. This criterion, commonly called Frixione's criterion, requires the total transverse energy inside a cone of size δ around the generated final-state photon, excluding the photon itself, to be below a certain threshold, E_T^max(δ) = ε E_T^γ [(1 − cos δ)/(1 − cos R)]^n, for all δ ≤ R. The parameters of the threshold were chosen to be R = 0.3, n = 2 and ε = 0.025. The Sherpa predictions from this computation are referred to as LO Sherpa.
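Frixione's smooth-cone criterion can be sketched as follows (a minimal illustration assuming the standard published functional form of the threshold; the particle list and energies are invented):

```python
import math

def frixione_threshold(delta, et_photon, R=0.3, n=2, eps=0.025):
    """Maximum transverse energy allowed inside a sub-cone of size delta
    around the photon, for the smooth-cone (Frixione) isolation:
    eps * ET_photon * ((1 - cos delta) / (1 - cos R))**n.
    The threshold vanishes smoothly as delta -> 0, forbidding hard
    collinear partons while allowing soft radiation anywhere in the cone."""
    return eps * et_photon * ((1.0 - math.cos(delta)) / (1.0 - math.cos(R))) ** n

def is_isolated(particles, et_photon, R=0.3, n=2, eps=0.025):
    """particles: list of (distance_from_photon, ET) pairs.
    The criterion must hold for every sub-cone delta <= R; it suffices
    to check it at each particle's distance from the photon."""
    deltas = sorted(d for d, _ in particles if d < R)
    for delta in deltas:
        et_in_cone = sum(et for d, et in particles if d <= delta)
        if et_in_cone > frixione_threshold(delta, et_photon, R, n, eps):
            return False
    return True
```

At δ = R the threshold reduces to ε E_T^γ, so with ε = 0.025 a 1 TeV photon tolerates at most 25 GeV of hadronic activity in the full cone.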
All the samples of generated events were passed through the Geant4-based [26] ATLAS detector and trigger full simulation programs [27]. They are reconstructed and analysed with the same program chain as the data. Pile-up from additional pp collisions in the same and neighbouring bunch crossings was simulated by overlaying each MC event with a variable number of simulated inelastic pp collisions generated using Pythia 8.153 with the ATLAS set of tuned parameters for minimum-bias events (A2 tune) [28]. The MC events are weighted to reproduce the distribution of the average number of interactions per bunch crossing (⟨μ⟩) observed in the data, a procedure referred to as "pile-up reweighting". In this procedure, the ⟨μ⟩ value from the data is divided by a factor of 1.16 ± 0.07, a rescaling which makes the number of reconstructed primary vertices agree better between data and simulation and reproduces the visible cross section of inelastic pp collisions as measured in the data [29].

Event selection
Events were recorded using a single-photon trigger, with a transverse energy threshold of 120 GeV and "loose" identification requirements based on the shower shapes in the second layer of the EM calorimeter as well as on the energy leaking from the EM calorimeter into the hadronic calorimeter [17]. Events are required to have a reconstructed primary vertex. If multiple primary vertices are reconstructed, the one with the highest sum of the squared transverse momenta (Σ p_T²) of the associated tracks is selected as the primary vertex.
Photon candidates are reconstructed from clusters of energy deposited in the EM calorimeter and classified [30] as unconverted photons (candidates without a matching track or matching reconstructed conversion vertex in the inner detector) or converted photons (candidates with a matching reconstructed conversion vertex or a matching track consistent with originating from a photon conversion). The measurement of the photon energy is based on the energy collected in calorimeter cells in an area of size Δη × Δφ = 0.075 × 0.175 in the barrel and 0.125 × 0.125 in the endcaps. A dedicated energy calibration [31] is then applied to the candidates to account for upstream energy loss and both lateral and longitudinal leakage. The photon identification is based primarily on shower shapes in the calorimeter [30]. An initial selection is derived using the information from the hadronic calorimeter and the lateral shower shape in the second layer of the EM calorimeter, where most of the photon energy is contained. The final tight selection applies stringent criteria [30] to the same variables used in the initial selection, separately for converted and unconverted photon candidates. It also places requirements on the shower shape in the finely segmented first calorimeter layer to ensure the compatibility of the measured shower profile with that originating from a single photon impacting the calorimeter. When applying the photon identification criteria to simulated events, corrections are made for small differences in the average values of the shower-shape variables between data and simulation. Events with at least one photon candidate with calibrated E_T^γ > 125 GeV, where the trigger is maximally efficient, and |η^γ| < 2.37 are selected. Candidates in the region 1.37 < |η^γ| < 1.56, which includes the transition region between the barrel and endcap calorimeters, are not considered.
The photon candidate is required to be isolated based on the amount of transverse energy inside a cone of size ΔR = 0.4 in the η–φ plane around the photon candidate, excluding an area of size Δη × Δφ = 0.125 × 0.175 centred on the photon. The isolation transverse energy is computed from topological clusters of calorimeter cells [32] and is denoted by E_T^iso. Topological clusters are built from neighbouring calorimeter cells containing energy significantly above a noise threshold that is estimated from measurements of calorimeter electronic noise and simulated pile-up noise. The measured value of E_T^iso is corrected for the expected leakage of the photon energy into the isolation cone as well as for the estimated contributions from the UE and pile-up [33,34]. The corrections for pile-up and the UE are computed simultaneously on an event-by-event basis using the jet-area method [35,36] as follows: the k_t jet algorithm [37,38] with jet radius R = 0.5 is used to reconstruct all jets, taking topological clusters of calorimeter cells as input; no explicit transverse momentum threshold is applied. The ambient transverse energy density for the event (ρ), from pile-up and the UE, is computed as the median of the distribution of the ratios of the jets' transverse energies to their areas. Finally, ρ is multiplied by the area of the isolation cone to compute the correction to E_T^iso. The combined correction is typically 2 GeV and depends weakly on E_T^γ. In addition, for simulated events, data-driven corrections to E_T^iso are applied such that the peak position of the E_T^iso distribution coincides in data and simulation. After all these corrections, E_T^iso is required to be less than E_T,cut^iso ≡ 4.2·10⁻³·E_T^γ + 4.8 GeV [39].
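The ambient-density correction and the isolation requirement above can be sketched as follows (illustrative code, not the ATLAS implementation; only the pile-up/UE part of the correction is shown, and the jet list is a toy input):

```python
import math

def ambient_density(jets):
    """Jet-area method: median of ET/area over all k_t jets in the event.
    jets: list of (jet_ET, jet_area) pairs."""
    ratios = sorted(et / area for et, area in jets)
    n = len(ratios)
    mid = n // 2
    return ratios[mid] if n % 2 else 0.5 * (ratios[mid - 1] + ratios[mid])

def corrected_isolation(et_iso_raw, jets, cone_radius=0.4):
    """Subtract the pile-up + UE contribution: rho times the cone area.
    (The photon-leakage correction applied in the analysis is not sketched.)"""
    rho = ambient_density(jets)
    return et_iso_raw - rho * math.pi * cone_radius ** 2

def isolation_cut(et_photon):
    """E_T^iso requirement quoted in the text: 4.2e-3 * E_T^gamma + 4.8 GeV."""
    return 4.2e-3 * et_photon + 4.8
```

Using the median rather than the mean makes ρ robust against the few hard jets of the hard scattering, which would otherwise bias the pile-up estimate upwards.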
The isolation requirement significantly reduces the main background, which consists of multi-jet events where one jet typically contains a π 0 or η meson that carries most of the jet energy and is misidentified as a photon because it decays into an almost collinear photon pair. A small fraction of the events contain more than one photon candidate satisfying the selection criteria. In such events, the highest-E γ T (leading) photon is considered for further study.
Jets are reconstructed using the anti-k_t algorithm [40,41] with a radius parameter R = 0.4, using topological clusters as input.
The calorimeter cell energies are measured at the EM scale, corresponding to the energy deposited by electromagnetically interacting particles. The jet four-momenta are computed from the sum of the jet-constituent four-momenta, treating each as a four-vector with zero mass. The jets are then further calibrated using the method described in Ref. [42] and are referred to as detector-level jets. The four-momentum of each jet is recalculated to point to the selected primary vertex of the event rather than the centre of the detector. The contribution from the UE and pile-up is then subtracted on a jet-by-jet basis using the jet-area method. A jet-energy calibration is derived from MC simulations as a correction relating the calorimeter response to the true jet energy. To determine these corrections, the jet reconstruction procedure applied to the topological clusters is also applied to the generated stable particles, which are defined as those with a decay length of cτ > 10 mm, excluding muons and neutrinos; these jets are referred to as particle-level jets. In addition, sequential jet corrections, derived from MC simulated events and using global properties of the jet, such as tracking information, calorimeter energy deposits and muon spectrometer information, are applied. For the measurements as functions of m_γ−jet and |cos θ*|, the additional constraints |η^γ + y^jet-lead| < 2.37, |cos θ*| < 0.83 and m_γ−jet > 450 GeV are imposed to remove the bias [1,2] due to the rapidity and transverse-momentum requirements on the photon and the jet³; the number of events selected in the data after these additional requirements is 137 738.
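The construction of jet four-momenta from massless constituents, as described above, can be sketched as follows (an illustrative snippet with invented constituent kinematics, not the ATLAS reconstruction code):

```python
import math

def four_vector(et, eta, phi):
    """Massless four-vector (E, px, py, pz) from (ET, eta, phi)."""
    px = et * math.cos(phi)
    py = et * math.sin(phi)
    pz = et * math.sinh(eta)
    e = et * math.cosh(eta)  # massless particle: E = |p|
    return (e, px, py, pz)

def jet_four_momentum(constituents):
    """Sum the constituent four-vectors.  The jet acquires a non-zero
    invariant mass even though each constituent is treated as massless."""
    e = px = py = pz = 0.0
    for et, eta, phi in constituents:
        de, dx, dy, dz = four_vector(et, eta, phi)
        e += de; px += dx; py += dy; pz += dz
    return (e, px, py, pz)

def mass(p):
    """Invariant mass sqrt(E^2 - |p|^2), clipped at zero for rounding."""
    e, px, py, pz = p
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))
```

A single constituent yields a massless jet, while two constituents separated in angle give the jet an invariant mass; the same summation at particle level defines the particle-level jets used in the calibration.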

Background estimation and signal extraction
After the requirements on photon identification and isolation are applied to the sample of events, there is still a residual background contribution, which arises primarily from jets identified as photons in multi-jet events. This background contribution is estimated and subtracted bin-by-bin using a data-driven technique which makes use of the same two-dimensional sideband method employed in a previous analysis [47]. In this approach, the photon candidate is classified according to its isolation as:
• "Isolated", if E_T^iso satisfies the isolation requirement;
• "Non-isolated", if E_T^iso exceeds the isolation requirement by more than 2 GeV but lies below an upper bound; the non-isolated region is separated by 2 GeV from the isolated region to reduce the signal contamination, and the upper bound is applied to avoid highly non-isolated photons;⁴
and, independently, according to its identification as:
• "Tight", if it satisfies the tight photon identification criteria;
• "Non-tight", if it fails at least one of four tight requirements on the shower-shape variables computed from the energy deposits in the first layer of the EM calorimeter, but satisfies the tight requirement on the total lateral shower width in the first layer and all the other tight identification criteria in the other layers [30].
The distributions of E_T^iso for tight and non-tight photon candidates with |η^γ| < 0.6 in the data are shown separately in Fig. 1. The MC simulation of the prompt-photon signal using Pythia is also shown, together with a fit of the sum of the distributions of the Pythia signal photons and the non-tight photon candidates to the distribution of the tight photon candidates. A clear signal of prompt photons, centred at E_T^iso around zero, is observed.

³ The first two constraints avoid the bias induced by requirements on η^γ and y^jet-lead, yielding slices of cos θ* with the same length along the η^γ + y^jet-lead axis. The third constraint avoids the bias due to the E_T^γ > 125 GeV requirement.

⁴ In this way, the determination of the signal yield does not depend on the description by the MC generators of the distribution of E_T^iso for prompt photons with high values of E_T^iso.
For the estimation of the background contamination in the signal region, a two-dimensional plane is formed by E_T^iso and a binary variable ("tight" vs. "non-tight" photon candidate), since these two variables are expected to be largely uncorrelated for background events. In this plane, four regions are defined: the "signal" region (A), containing tight and isolated photon candidates; the "non-isolated" background control region (B), containing tight and non-isolated photon candidates; the "non-tight" background control region (C), containing isolated and non-tight photon candidates; and the background control region containing non-isolated and non-tight photon candidates (D).
The signal yield N_A^sig in region A is estimated by using the relation

N_A^sig = N_A − R_bg · (N_B − ε_B N_A^sig)(N_C − ε_C N_A^sig)/(N_D − ε_D N_A^sig),   (1)

where N_K (K = A, B, C, D) is the number of events in region K, ε_K ≡ N_K^sig/N_A^sig are the signal leakage fractions into the control regions, extracted from MC simulation, and R_bg ≡ N_A^bg N_D^bg/(N_B^bg N_C^bg), with N_K^bg the number of background events in region K. The only assumption underlying Eq. (1) is that the isolation and identification variables are uncorrelated for background events, thus R_bg = 1. This assumption is verified in data typically within ±10% in validation regions,⁵ which are dominated by background. Deviations of R_bg from unity in the validation regions, after accounting for signal leakage using either the Pythia or LO Sherpa simulations, are propagated through Eq. (1) and taken as systematic uncertainties. The signal purity, defined as N_A^sig/N_A, is evaluated in each measurement bin.
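Assuming the standard two-dimensional sideband (ABCD) form, in which the background satisfies N_A^bg = R_bg N_B^bg N_C^bg / N_D^bg and the signal leaks into the control regions with fractions ε_K taken from simulation, the signal yield can be extracted by fixed-point iteration (an illustrative sketch with toy counts; the analysis's exact numerical procedure is not specified in the text):

```python
def abcd_signal_yield(n, eps, r_bg=1.0, iterations=100):
    """Solve S = N_A - r_bg * (N_B - eps_B*S)(N_C - eps_C*S)/(N_D - eps_D*S)
    for the signal yield S in region A by fixed-point iteration.

    n:   dict of observed counts in regions 'A', 'B', 'C', 'D'
    eps: dict of signal leakage fractions for 'B', 'C', 'D'
    """
    s = 0.0
    for _ in range(iterations):
        bg_a = r_bg * ((n['B'] - eps['B'] * s) * (n['C'] - eps['C'] * s)
                       / (n['D'] - eps['D'] * s))
        s = n['A'] - bg_a
    return s

# Toy example with negligible leakage: the plain ABCD estimate.
counts = {'A': 1000.0, 'B': 200.0, 'C': 300.0, 'D': 600.0}
no_leak = {'B': 0.0, 'C': 0.0, 'D': 0.0}
print(abcd_signal_yield(counts, no_leak))  # 1000 - 200*300/600 = 900
```

With non-zero leakage fractions the control-region counts are themselves corrected for signal contamination, which is why the equation must be solved self-consistently rather than evaluated once.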

Table 1
Summary of the requirements at particle level that define the fiducial phase-space region of the measurements.

Requirements on photons: E_T^γ > 125 GeV; |η^γ| < 2.37, excluding 1.37 < |η^γ| < 1.56; E_T^iso < 4.2·10⁻³·E_T^γ + 10 GeV
Requirements on jets: anti-k_t algorithm with R = 0.4; the leading jet with p_T^jet-lead > 100 GeV, |y^jet| < 2.37 and ΔR^γ−jet > 0.8 is selected
Additional requirements for dσ/dm_γ−jet and dσ/d|cos θ*|: |η^γ + y^jet-lead| < 2.37, |cos θ*| < 0.83 and m_γ−jet > 450 GeV

Using either Pythia or LO Sherpa to extract the signal leakage fractions leads to similar signal purities; the difference in the signal purity is taken as a systematic uncertainty.
The background from electrons misidentified as photons is also studied. Such misidentified electrons are largely suppressed by the photon selection. This background is estimated using MC samples of fully simulated events and is found to be negligible in the phase-space region of the analysis presented here. These background processes were simulated using the Sherpa 2.2.1 generator; matrix elements were calculated for up to two additional partons at NLO and up to four partons at LO [48,49].

Fiducial phase space
The cross sections are unfolded to a phase-space region close to the applied event selection. The fiducial phase-space region is defined at the particle level. A summary of the requirements at particle level that define the fiducial phase-space region of the measurements is given in Table 1. The cross sections as functions of |cos θ*| and m_γ−jet are measured in a fiducial phase-space region with additional requirements, as detailed in the last row of Table 1. The particle-level isolation requirement on the photon is built by summing the transverse energy of all stable particles (see Section 4), except for muons and neutrinos, in a cone of size ΔR = 0.4 around the photon direction after the contribution from the UE is subtracted. The same underlying-event subtraction procedure used at the reconstruction level is applied at the particle level. The particle-level requirement on E_T^iso is optimised to best match the acceptance at reconstruction level using the Pythia and LO Sherpa MC samples by comparing the calorimeter isolation transverse energy with the particle-level isolation transverse energy on an event-by-event basis. The particle-level requirement on E_T^iso thus optimised is E_T^iso (particle) < 4.2·10⁻³·E_T^γ + 10 GeV; the same requirement is obtained whether Pythia or LO Sherpa is used. Particle-level jets are reconstructed using the anti-k_t jet algorithm with radius parameter R = 0.4 and are built from stable particles, excluding muons and neutrinos. At particle level, the particles associated with the overlaid pp collisions (pile-up) are not considered.

Unfolding
The distributions of the background-subtracted signal yield as functions of E_T^γ, p_T^jet-lead, Δφ^γ−jet, m_γ−jet and |cos θ*| are used to measure the corresponding differential cross sections for isolated-photon plus jet production. The distributions are unfolded to the particle level using MC samples of events via a bin-by-bin technique which corrects for resolution effects and the efficiency of the photon and jet reconstruction through the formula

dσ/dO|_i = N_i^sig · C_i / (L · ΔO_i),

where dσ/dO|_i is the cross section as a function of observable O in bin i, N_i^sig is the background-subtracted signal yield in bin i, C_i is the correction factor in bin i, taken from MC simulation as the ratio of the particle-level to detector-level yields, L is the integrated luminosity and ΔO_i is the width of bin i.

Systematic uncertainties

Parton-shower and hadronisation model dependence. The effects due to the parton-shower and hadronisation models on the signal purity and detector-to-particle-level correction factors are studied separately. The effects on the signal purity are estimated as the differences observed between the nominal results and those obtained using either the (non-adjusted) Pythia or LO Sherpa MC samples for the determination of the signal leakage fractions. The difference between the nominal results and those obtained using the Pythia-adjusted MC samples for the determination of the unfolding correction factors is taken as a systematic uncertainty. The resulting uncertainties in the measured cross sections are typically smaller than 2%.
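The bin-by-bin unfolding can be sketched as follows (an illustrative snippet assuming a standard correction-factor form, with C_i defined as the particle-to-detector yield ratio from simulation; the numbers are invented):

```python
def unfold_bin_by_bin(n_sig, mc_particle, mc_detector, lumi, bin_widths):
    """Differential cross section per bin:
        dsigma/dO |_i = N_i^sig * C_i / (L * dO_i),
    with correction factors C_i = N_i^MC(particle) / N_i^MC(detector)
    accounting for resolution effects and reconstruction efficiency."""
    result = []
    for n, mp, md, w in zip(n_sig, mc_particle, mc_detector, bin_widths):
        c = mp / md  # >1 when reconstruction loses events, <1 when it gains
        result.append(n * c / (lumi * w))
    return result
```

A bin-by-bin technique is adequate when bin-to-bin migrations are small compared with the bin widths; otherwise a full response-matrix unfolding would be needed.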
Photon identification efficiency. The uncertainty in the photon identification efficiency is estimated from the effect of differences between shower-shape variable distributions in data and simulation. From the studies presented in Refs. [30,51], this procedure is found to provide a conservative estimate of the uncertainties. The resulting uncertainty in the measured cross sections is in the range 1-2%. The effects on the measured cross sections due to the uncertainty in the photon reconstruction efficiency, which are evaluated by repeating the full analysis using a different detector simulation with increased material in front of the calorimeter, are found to be negligible.
Photon isolation modelling. The differences between the nominal results and those obtained without applying the data-driven corrections to E iso T in simulated events are taken as systematic uncertainties in the measurements due to the modelling of E iso T in the MC simulation. The resulting uncertainty in the measured cross sections is less than 1.1%.

Definition of the background control regions.
The estimation of the background contamination in the signal region is affected by the choice of background control regions. The control regions B and D are defined by the lower and upper limits on E iso T and the choice of inverted photon identification variables used in the selection of non-tight photons. To study the dependence on the specific choices, these definitions are varied over a wide range. The lower limit on E iso T in regions B and D is varied by ±1 GeV, which is larger than any difference between data and simulations and still provides a sufficient sample to perform the data-driven subtraction. The upper limit on E iso T in regions B and D is removed. The resulting uncertainty in the measured cross sections is negligible. Likewise, the choice of inverted photon identification variables is varied. The analysis is repeated using different sets of variables: tighter (looser) identification criteria are defined by applying tight requirements to an extended (restricted) set of shower-shape variables in the first calorimeter layer [30,51]. The resulting uncertainty in the measured cross sections is smaller than 1.3%.

Photon identification and isolation correlation in the background.
The photon isolation and identification variables used to define the plane in the two-dimensional sideband method to subtract the background are assumed to be independent for background events (R bg = 1 in Eq. (1)). Any correlation between these variables affects the estimation of the purity of the signal sample and leads to systematic uncertainties in the background-subtraction procedure. A range in R bg is set to cover the deviations from unity measured in the validation regions after subtracting the signal leakage with either Pythia-adjusted or LO Sherpa MC samples. The resulting uncertainty in all measured cross sections is less than 2%.
Pile-up. The uncertainty is estimated by changing the nominal rescaling factor of 1.16 to 1.09 or 1.23 and re-evaluating the reweighting factors. The resulting uncertainty in the measured cross sections is typically less than 0.5%.
Unfolding procedure. The uncertainty is estimated by comparing the nominal results with those obtained by unfolding with LO Sherpa MC samples reweighted to match the data distributions. The resulting uncertainty in the measured cross sections is negligible.
The total systematic uncertainty is computed by adding in quadrature the uncertainties from the sources listed above, the statistical uncertainty of the MC samples, the uncertainty in the trigger efficiency (1%) and the uncertainty in the integrated luminosity, which is fully correlated between all bins of all the measured cross sections. There are large correlations in the systematic uncertainties across bins of one observable, particularly in the uncertainties due to the photon and jet energy scales, which are dominant. The total systematic uncertainty, excluding that in the luminosity, is less than 5% for E_T^γ, 4% for Δφ^γ−jet, 6% for m_γ−jet and 4% for |cos θ*|, and increases from 4% at p_T^jet-lead = 100 GeV to 10% at p_T^jet-lead ∼ 1.5 TeV. Fig. 2 shows the total systematic uncertainty for each measured cross section, excluding that in the luminosity; the dominant components are shown separately in this figure. The systematic uncertainty dominates the total experimental uncertainty for E_T^γ ≲ 700 GeV and m_γ−jet ≲ 1.5 TeV, whereas for higher E_T^γ and m_γ−jet values, the statistical uncertainty of the data limits the precision of the measurements. For p_T^jet-lead, Δφ^γ−jet and |cos θ*|, the systematic uncertainty dominates over the whole measured range.
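The quadrature combination described above amounts to the following (a trivial sketch; the component values shown are hypothetical, not the analysis's actual uncertainties):

```python
import math

def total_systematic(components):
    """Combine independent systematic uncertainty components in quadrature.
    components: iterable of relative uncertainties (e.g. 0.02 for 2%)."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical illustration: energy scale 4%, purity 2%, trigger 1%, MC stats 1%.
print(total_systematic([0.04, 0.02, 0.01, 0.01]))
```

Because the sum is in quadrature, the largest component dominates: doubling a 1% source barely changes a total driven by a 4% source.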

Theoretical predictions
The NLO pQCD predictions presented in this Letter are computed using two programs, namely Jetphox 1.3.1_2 and Sherpa 2.2.2. The Jetphox program includes a full NLO pQCD calculation of both the direct and fragmentation contributions to the cross section for the pp → γ + jet + X process. The number of massless quark flavours is set to five. The renormalisation scale μ_R, factorisation scale μ_F and fragmentation scale μ_f are chosen to be μ_R = μ_F = μ_f = E_T^γ [14]. The calculations are performed using the MMHT2014 [52] PDF set and the BFG set II of parton-to-photon fragmentation functions [53], both at NLO. The strong coupling constant is set to α_s(m_Z) = 0.120. The calculations are performed using a parton-level isolation criterion which requires the total transverse energy from the partons inside a cone of size ΔR = 0.4 around the photon direction to be below 4.2·10⁻³·E_T^γ + 10 GeV.
The NLO pQCD predictions from Jetphox are at the parton level while the measurements are at the particle level. Thus, there can be differences between the two levels concerning the photon isolation as well as the photon and jet four-momenta. Since the data are corrected for pile-up and UE effects and the distributions are unfolded to a phase-space definition in which the requirement on E_T^iso at particle level is applied after subtraction of the UE, the parton-to-hadron corrections to the NLO pQCD predictions are expected to be small. Correction factors to the Jetphox predictions are estimated by computing the ratio of the particle-level cross section for a Pythia sample with UE effects to the parton-level cross section without UE effects. These factors are close to unity within ±5% for the observables studied, except for p_T^jet-lead ≳ 600 GeV; in this region, which is dominated by the bremsstrahlung component, the factors can differ by up to 30% from unity, since hadronisation of a nearby parton can significantly change the particle-level isolation compared to the parton-level isolation.
The Sherpa 2.2.2 program consistently combines parton-level calculations of γ + (1,2)-jet events at NLO and γ + (3,4)-jet events at LO [48,49], supplemented with a parton shower [54], while avoiding double-counting effects [55]. A requirement on the photon isolation at the matrix-element level is imposed using Frixione's criterion with R = 0.1, n = 2 and ε = 0.1. Dynamic factorisation and renormalisation scales are adopted, as well as a dynamical merging scale with Q_cut = 20 GeV [56]. The strong coupling constant is set to α_s(m_Z) = 0.118. Fragmentation into hadrons and simulation of the UE are performed using the same models as for the LO Sherpa samples. The next-to-next-to-leading-order (NNLO) NNPDF3.0 PDF set [57] is used in conjunction with the corresponding Sherpa tuning. All the NLO Sherpa predictions are based on the particle-level observables from this computation after applying the requirements listed in Table 1.

Uncertainties in the predictions
The uncertainty in the NLO pQCD predictions from Jetphox due to terms beyond NLO is estimated by repeating the calculations with the values of μ_R, μ_F and μ_f scaled by factors of 0.5 and 2.
The three scales are varied simultaneously, individually, or by fixing one and varying the other two. In all cases the condition 0.5 ≤ μ_A/μ_B ≤ 2 is imposed, where A, B = R, F, f. The final uncertainty is taken as the largest deviation from the nominal value among the 14 possible variations. In the case of the NLO Sherpa prediction, which does not include the fragmentation contribution, μ_R and μ_F are varied as above and the largest deviation from the nominal value among the 6 possible variations is taken as the uncertainty.
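The counts of 14 and 6 surviving variations follow directly from the constraint 0.5 ≤ μ_A/μ_B ≤ 2: combinations mixing the factors 0.5 and 2 are excluded. As a cross-check, a minimal sketch (illustrative only, not ATLAS analysis code) that enumerates the allowed combinations:

```python
from itertools import product

def scale_variations(n_scales):
    """Enumerate combinations of scale factors (0.5, 1, 2), one per scale,
    keeping those in which every pairwise ratio mu_A / mu_B lies in [0.5, 2]
    and excluding the nominal point where all factors equal 1."""
    allowed = []
    for combo in product((0.5, 1.0, 2.0), repeat=n_scales):
        if combo == (1.0,) * n_scales:
            continue  # skip the nominal scale choice
        if all(0.5 <= a / b <= 2.0 for a in combo for b in combo):
            allowed.append(combo)
    return allowed

# Jetphox varies three scales (mu_R, mu_F, mu_f): 14 variations survive.
print(len(scale_variations(3)))
# NLO Sherpa varies two scales (mu_R, mu_F): 6 variations survive.
print(len(scale_variations(2)))
```

The constraint removes the 12 (of 26) three-scale combinations that contain both a factor 0.5 and a factor 2, whose ratio of 4 would lie outside the allowed band.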
The uncertainty in the NLO pQCD predictions from Jetphox due to the choice of proton PDFs is estimated by repeating the calculations using the 50 sets from the MMHT2014 error analysis [52] and applying the Hessian method [58] for evaluation of the PDF uncertainties. In the case of NLO Sherpa, it is estimated using 100 replicas from the NNPDF3.0 analysis [57].
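The two PDF-uncertainty prescriptions differ in form: the Hessian method combines deviations of eigenvector pairs in quadrature, while the replica method takes a standard deviation over the ensemble. A schematic sketch of both (symmetric Hessian variant; the numbers are illustrative, not values from the analysis):

```python
import math

def hessian_uncertainty(eigen_pairs):
    """Symmetric Hessian PDF uncertainty from eigenvector pairs (X_up, X_dn):
    dX = 0.5 * sqrt( sum_i (X_up_i - X_dn_i)**2 )."""
    return 0.5 * math.sqrt(sum((up - dn) ** 2 for up, dn in eigen_pairs))

def replica_uncertainty(replicas):
    """Monte Carlo replica PDF uncertainty: sample standard deviation of the
    observable evaluated over the replica ensemble."""
    n = len(replicas)
    mean = sum(replicas) / n
    return math.sqrt(sum((x - mean) ** 2 for x in replicas) / (n - 1))

# Illustrative: 25 eigenvector pairs (MMHT2014 provides 50 error sets).
pairs = [(1.0 + 0.001 * i, 1.0 - 0.001 * i) for i in range(1, 26)]
print(hessian_uncertainty(pairs))

# Illustrative: a toy replica ensemble (NNPDF3.0 provides 100 replicas).
print(replica_uncertainty([0.98, 1.00, 1.02, 1.01, 0.99]))
```

In both cases the quantity varied is the predicted cross section in each measurement bin, recomputed once per error set or replica.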
The uncertainty in the NLO pQCD predictions from Jetphox (NLO Sherpa) due to the uncertainty in α_s is estimated by repeating the calculations using two additional sets of proton PDFs from the MMHT2014 (NNPDF3.0) analysis, in which different values of α_s(m_Z) were assumed in the fits, namely 0.118 (0.117) and 0.122 (0.119); in this way, the correlation between α_s and the PDFs is preserved. The uncertainty in the parton-to-hadron correction is estimated by comparing the values obtained with different Pythia tunes: the ATLAS set of tuned parameters for the underlying event (AU2 tune) [28] with the CTEQ6L1 PDF set [59], the A14 tune with the LO NNPDF2.3 PDF set, and the variations of the latter in which the parameter settings related to the modelling of the UE are changed [23]. Larger differences are obtained from the comparison of the two central tunes than from the variations around the A14 tune. The nominal correction is therefore taken as the average of the corrections from the two central tunes, and the uncertainty as half of the difference between them.
The uncertainty arising from the value of α_s(m_Z) is below 2% (5%). The uncertainty in the parton-to-hadron correction is in the range 1–3%, except for p_T^jet-lead ≳ 600 GeV, where it increases to 20% at p_T^jet-lead = 1.5 TeV; this uncertainty is included in the Jetphox predictions, but not for NLO Sherpa, since the latter is a particle-level Monte Carlo generator.⁷ The total theoretical uncertainty is obtained by adding in quadrature the individual uncertainties listed above and, in the case of Jetphox (NLO Sherpa), is 10–15% (15–25%), except for p_T^jet-lead, where it is in the range 10–40% (15–30%); for the NLO Sherpa prediction of dσ/dΔφ^γ-jet, the total uncertainty is 10–40%.
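The combination steps above (a nominal correction from the average of the two central tunes, half their difference as its uncertainty, and a quadrature sum over the individual sources) amount to simple arithmetic; a sketch with illustrative inputs, not values from the paper:

```python
import math

def tune_envelope(c_tune_a, c_tune_b):
    """Nominal parton-to-hadron correction as the average of the corrections
    from two central tunes, with half their difference as the uncertainty."""
    nominal = 0.5 * (c_tune_a + c_tune_b)
    uncertainty = 0.5 * abs(c_tune_a - c_tune_b)
    return nominal, uncertainty

def total_uncertainty(components):
    """Total theoretical uncertainty: quadrature sum of the individual
    (relative) uncertainty sources."""
    return math.sqrt(sum(u * u for u in components))

# Illustrative inputs only (hypothetical correction factors and fractions):
nominal, d_hadr = tune_envelope(0.97, 1.01)
# scales, PDFs, alpha_s, hadronisation -- combined in quadrature
d_total = total_uncertainty([0.10, 0.03, 0.02, d_hadr])
print(nominal, d_hadr, d_total)
```

The quadrature sum assumes the individual sources are uncorrelated, which is the convention stated in the text.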

Results
The measurements presented here apply to isolated prompt photons with E_T^iso < 4.2 × 10⁻³ · E_T^γ + 10 GeV at particle level and to jets of hadrons. The measured fiducial cross section for isolated-photon plus one-jet production in the phase-space region given in Table 1 is σ_meas = 300 ± 10 (exp.) ± 6 (lumi.) pb, where "exp." denotes the sum in quadrature of the statistical and systematic uncertainties excluding that due to the luminosity, and "lumi." denotes the uncertainty in the integrated luminosity. The measured dσ/dp_T^jet-lead decreases by more than four orders of magnitude from p_T^jet-lead = 100 GeV to the highest measured value, p_T^jet-lead ≈ 1.5 TeV. The measurement of dσ/dΔφ^γ-jet is restricted to Δφ^γ-jet > π/2 to avoid the phase-space region dominated by photon production in association with a multi-jet system.
The tree-level predictions of the Pythia and LO Sherpa MC models are compared to the measurements in Fig. 3. These predictions are normalised to the measured integrated fiducial cross section. The difference in normalisation between data and Pythia (LO Sherpa) is ∼ +10% (+40%) and is attributed to the fact that these generators are based on tree-level matrix elements, which are affected by a large normalisation uncertainty due to missing higher-order terms; for this reason, the theoretical uncertainties are not included in Fig. 3. Both predictions give an adequate description of the shape of the measured dσ/dE_T^γ, although Pythia is slightly better than LO Sherpa for E_T^γ ≳ 600 GeV. For dσ/dp_T^jet-lead, the prediction from LO Sherpa gives an adequate description of the data over the whole measured range, whereas that from Pythia overestimates the data for p_T^jet-lead ≳ 200 GeV; the overestimation is attributed to a large contribution from photon bremsstrahlung predicted by the tune used in Pythia (see Section 3). The prediction from LO Sherpa gives a good description of the measured dσ/dΔφ^γ-jet, whereas Pythia underestimates the data for 3π/5 < Δφ^γ-jet < 4π/5 rad; this is expected from the inclusion of additional partons in the matrix elements in Sherpa, whereas in Pythia additional partons must come from the parton shower. Both predictions give a good description of the data for m^γ-jet < 1.25 TeV and over the whole measured |cos θ*| range.
⁷ An uncertainty related to the modelling of the hadronisation process should also be assigned to the NLO Sherpa predictions, but no tune other than the default one is available. This uncertainty is expected to be of similar size to that evaluated using Pythia.
To illustrate the sensitivity to t-channel quark or gluon exchange, the cross-sections dσ/d|cos θ*| predicted by Jetphox for the LO direct and fragmentation processes are compared to the measurement in Fig. 4. Even though the two components are no longer distinguishable at NLO, the LO calculations are useful in illustrating the basic differences in the dynamics of the two processes. The contribution from fragmentation, dominated by gluon exchange, shows a steeper increase as |cos θ*| → 1 than that from direct processes, dominated by quark exchange. The shape of the measured cross-section dσ/d|cos θ*| is closer to that of the direct processes than to that of fragmentation, consistent with the dominance of processes in which the exchanged particle is a quark. The predictions of the fixed-order NLO QCD calculations of Jetphox, based on the MMHT2014 proton PDF set and corrected for hadronisation and UE effects as explained in Section 8, are compared to the measurements in Fig. 5, as are the predictions of the multi-leg NLO QCD plus parton-shower calculations of Sherpa based on the NNPDF3.0 PDF set; both describe the measured cross sections within the experimental and theoretical uncertainties. For the cross section as a function of Δφ^γ-jet, the only well-founded prediction is that of NLO Sherpa, which is able to reproduce the data down to Δφ^γ-jet = π/2 owing to the inclusion of the matrix elements for 2 → n processes with n = 4 and 5. For most of the points, the theoretical uncertainties are larger than those of experimental origin. Predictions from Jetphox (NLO Sherpa) are also obtained with other PDF sets, namely NLO CT14 [60] and NLO NNPDF3.0 (CT14 and MMHT2014), and differ by less than 5% from those using MMHT2014 (NNPDF3.0). Thus, the description of the data achieved by the predictions does not depend significantly on the specific PDF set used.
It is concluded that the NLO pQCD predictions provide an adequate description of the measurements within the uncertainties. The tree-level plus parton-shower MC models of Pythia and LO Sherpa give a satisfactory description of the shapes of the data distributions, except for p_T^jet-lead in the case of Pythia. The fixed-order NLO QCD calculations of Jetphox, corrected for hadronisation and UE effects, and the multi-leg NLO QCD plus parton-shower calculations of Sherpa describe the measured cross sections within the experimental and theoretical uncertainties. The comparison of predictions based on different parameterisations of the proton PDFs shows that the description of the data does not depend significantly on the specific PDF set used. The only well-founded prediction for dσ/dΔφ^γ-jet is that of NLO Sherpa, which is able to reproduce the data down to Δφ^γ-jet = π/2 owing to the inclusion of the matrix elements for 2 → n processes with n = 4 and 5. The measured dependence on |cos θ*| is consistent with the dominance of processes in which a quark is exchanged. All these studies provide tests of the pQCD description of the dynamics of isolated-photon plus jet production in pp collisions at √s = 13 TeV. The experimental uncertainties are, in general, much smaller than those in the predictions; thus, calculations with higher precision will allow more stringent tests of the theory.