Measurement of the charge asymmetry in top-quark pair production in association with a photon with the ATLAS experiment

A measurement of the charge asymmetry in top-quark pair ($t\bar{t}$) production in association with a photon is presented. The measurement is performed in the single-lepton $t\bar{t}$ decay channel using proton-proton collision data collected with the ATLAS detector at the Large Hadron Collider at CERN at a centre-of-mass energy of 13 TeV during the years 2015-2018, corresponding to an integrated luminosity of 139 fb$^{-1}$. The charge asymmetry is obtained from the distribution of the difference of the absolute rapidities of the top quark and antiquark using a profile-likelihood unfolding approach. It is measured to be $A_\text{C} = -0.003 \pm 0.029$, in agreement with the Standard Model expectation.


Introduction
The top quark is the heaviest known elementary particle and the only quark that decays before hadronisation, which allows direct access to its properties in production and decay. Measurements of top-quark properties, predicted by the Standard Model (SM), provide important input to test theoretical calculations and have the potential to reveal deviations from the SM predictions. One of the relevant properties is related to the slight difference between the rapidity distributions of top quarks and top antiquarks produced in pairs ($t\bar{t}$). This asymmetry, referred to as charge asymmetry, is defined in proton-proton collisions as follows [1][2][3][4]:
$$A_\text{C} = \frac{N(\Delta|y| > 0) - N(\Delta|y| < 0)}{N(\Delta|y| > 0) + N(\Delta|y| < 0)}, \qquad \Delta|y| = |y_t| - |y_{\bar{t}}|,$$
where $N$ is the number of events and $y_t$ ($y_{\bar{t}}$) the rapidity of the top quark (top antiquark). The production of $t\bar{t}$ events is predicted to be symmetric under the exchange of top quark and antiquark, i.e. $A_\text{C} = 0$, at leading-order (LO) accuracy in perturbative QCD. However, at next-to-leading order (NLO), quark-antiquark-initiated $t\bar{t}$ production is asymmetric in the top-quark rapidity distribution, owing to interference between processes with initial- and final-state gluon emission and between the Born diagram and the box diagram at $\mathcal{O}(\alpha_\text{s}^4)$. The total asymmetry from the sum of all effects is expected to be positive [5]. Previous measurements of the asymmetry in $t\bar{t}$ production by the ATLAS and CMS Collaborations at centre-of-mass energies ($\sqrt{s}$) of 7, 8 and 13 TeV [6][7][8][9][10][11][12][13][14][15][16] agree with the SM expectation. The most recent measurement of the inclusive and differential charge asymmetry at 13 TeV by the ATLAS Collaboration [17] reported evidence for a non-zero asymmetry in $t\bar{t}$ production (measured as $A_\text{C} = 0.0068 \pm 0.0015$ and in agreement with the SM prediction [17,18]). While $A_\text{C}$ at the LHC corresponds to a central-forward asymmetry, the $t\bar{t}$ production asymmetry manifests itself as a forward-backward asymmetry at the Tevatron. Early measurements of this asymmetry showed deviations from NLO QCD predictions, particularly at large values of the $t\bar{t}$ invariant mass [19]. However, more recent results by the CDF and D0 Collaborations [20][21][22] are compatible with the improved SM predictions including NLO electroweak (EW) and higher-order QCD corrections [23,24].
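The counting definition of $A_\text{C}$ can be illustrated as a simple ratio of event counts. A minimal sketch, using toy values of $\Delta|y|$ rather than analysis data:

```python
def charge_asymmetry(delta_abs_y):
    """Counting-ratio charge asymmetry A_C from per-event values of
    Delta|y| = |y_t| - |y_tbar| (toy illustration, not analysis code)."""
    n_pos = sum(1 for d in delta_abs_y if d > 0)
    n_neg = sum(1 for d in delta_abs_y if d < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)

# Toy sample: 505 events with Delta|y| > 0 and 495 with Delta|y| < 0
toy = [0.3] * 505 + [-0.2] * 495
print(charge_asymmetry(toy))  # → 0.01
```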
Figure 1: Representative diagrams for the $t\bar{t}$ and $t\bar{t}\gamma$ production asymmetry.

The $t\bar{t}$ charge asymmetry is diluted at the LHC owing to the large fraction of gluon-gluon-initiated $t\bar{t}$ events, which are symmetric under the exchange of the top quark and antiquark. However, it is enhanced in other topologies where the fraction of quark-antiquark-initiated production is larger, such as in associated production of $t\bar{t}$ with a photon ($t\bar{t}\gamma$) [1,2]. Interference effects among QCD diagrams at NLO, similar to those in $t\bar{t}$ events, are predicted in $t\bar{t}\gamma$ production. However, the dominant contribution to the asymmetry in $t\bar{t}\gamma$ arises from interference between QED initial-state radiation, Figure 1 (left), and final-state radiation, Figure 1 (right), which yields a larger asymmetry of negative sign. In addition, other QCD-EW higher-order contributions can have a sizeable effect on the observed asymmetry [25]. The overall asymmetry in $t\bar{t}\gamma$ at $\sqrt{s} = 13$ TeV is expected to have a negative value of 1%-2%, depending on the phase space, according to SM predictions [25,26], and can be modified by 'beyond-the-SM' contributions. Contributions from an $s$-channel colour octet or a $Z'$ boson would, for instance, result in a smaller $A_\text{C}$ absolute value [1].
These sources of asymmetry are only present in the $t\bar{t}\gamma$ events where the photon is radiated from an initial-state parton or one of the top quarks (hereafter referred to as $t\bar{t}\gamma$ production). The asymmetry is diluted by $t\bar{t}\gamma$ events where the photon arises from any of the charged decay products of the $t\bar{t}$ system ($t\bar{t}\gamma$ decay in the following). Therefore, only $t\bar{t}\gamma$ production events are considered as signal in this analysis.
This paper presents the first measurement of the charge asymmetry of the top-quark pairs in $t\bar{t}\gamma$ production in $t\bar{t}\gamma$ single-lepton final states, which have one high-$p_\text{T}$ lepton and at least four jets, two of which arise from $b$-quarks. It is performed using the full 139 fb$^{-1}$ data set recorded with the ATLAS detector between 2015 and 2018 at $\sqrt{s} = 13$ TeV, referred to as Run 2. In order to extract the asymmetry, the top quarks are reconstructed using a kinematic likelihood fit. The separation between signal and background processes is enhanced using a neural-network (NN) approach. The output distribution of the NN is used to define two regions, one enriched in background events and one in signal events. The $A_\text{C}$ value is determined by means of a maximum-likelihood fit to the distribution of the difference of the absolute rapidities of the top quark and antiquark. This is also referred to as 'maximum-likelihood unfolding'.
The paper is organised as follows. The ATLAS detector is briefly introduced in Section 2. The simulation of signal and background processes is summarised in Section 3. The event reconstruction, selection, and estimation of the background processes are presented in Sections 4 and 5. The systematic uncertainties are described in Section 6. The analysis strategy is discussed in Section 7, followed by the result in Section 8. Finally, a summary is given in Section 9.

ATLAS detector
The ATLAS detector [27][28][29] is a multipurpose detector with a forward-backward symmetric cylindrical geometry with respect to the LHC beam axis.¹ The innermost layers consist of tracking detectors in the pseudorapidity range $|\eta| < 2.5$. This inner detector (ID) is surrounded by a thin superconducting solenoid that provides a 2 T axial magnetic field. It is enclosed by the electromagnetic and hadronic calorimeters, which cover $|\eta| < 4.9$. The outermost layers of ATLAS consist of an external muon spectrometer within $|\eta| < 2.7$, incorporating three large toroidal magnet assemblies with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T m for most of the acceptance. The muon spectrometer includes precision tracking chambers and fast detectors for triggering. A two-level trigger system [30] reduces the recorded event rate to an average of 1 kHz. An extensive software suite [31] is used in data simulation, in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.

Simulation of signal and background processes
Monte Carlo (MC) event generators were used to estimate the contributions from the expected signal and background processes. The response of the ATLAS detector was simulated [32] with Geant4 [33]. A fast simulation (AtlFast-II), which relies on a parameterisation of the calorimeter response [34], was used in samples employed to estimate uncertainties related to the $t\bar{t}\gamma$ and $tW\gamma$ modelling. The additional $pp$ collisions in the same or neighbouring bunch crossings, referred to as pile-up, were generated with Pythia 8.186 [35] using a set of tuned parameters called the A3 tune [36] and the NNPDF2.3 parton distribution function (PDF) set [37].
The signal $t\bar{t}\gamma$ production events were simulated with MadGraph5_aMC@NLO 2.7.3 [38] as a $2 \to 3$ process at NLO accuracy in QCD. The interference effects between initial-state and final-state photon radiation were considered in this process. The final-state top quarks in the $t\bar{t}\gamma$ production sample are on-shell and were decayed at LO using MadSpin [39,40] to preserve spin correlations.
The background $t\bar{t}\gamma$ decay events, where the photon arises from any of the decay products of the top quarks or one of the on-shell top quarks, were simulated with the same version of MadGraph5_aMC@NLO but at LO precision as a $2 \to 2$ process followed by the decay of the top quarks, also simulated at LO precision. Both samples were generated using the NNPDF3.0 [41] PDF set and interfaced to Pythia 8.240 [42], which used the A14 tune [43] and the NNPDF2.3 PDF set to model the parton shower, hadronisation, fragmentation and underlying event. The renormalisation and factorisation scales were set to $0.5 \times \sum_i \sqrt{m_i^2 + p_{\text{T},i}^2}$, where $m_i$ and $p_{\text{T},i}$ are the masses and transverse momenta of the particles generated from the matrix element (ME) calculation. Photons are required to have $p_\text{T} > 15$ GeV and to be isolated according to a smooth-cone hadronic isolation criterion with $\delta_0 = 0.1$, $\epsilon = 0.1$ and $n = 2$, defined in Ref. [44], which avoids infrared divergences. The top-quark mass in the $t\bar{t}\gamma$ sample and all other samples involving top quarks was set to 172.5 GeV, and the decays of bottom and charm hadrons were simulated using the EvtGen 1.6.0 program [45].
The $t\bar{t}\gamma$ production sample is normalised to the NLO cross section given by the MC simulation, while the normalisation of the $t\bar{t}\gamma$ decay sample is corrected by an NLO/LO inclusive $k$-factor of 1.5. This $k$-factor was derived by comparing the normalisation of the sum of the NLO $t\bar{t}\gamma$ production sample and the LO $t\bar{t}\gamma$ decay sample with the normalisation of a LO inclusive $2 \to 7$ $t\bar{t}\gamma$ sample corrected with the $k$-factor obtained in Ref. [46] using the calculation described in Ref. [47].
The $t\bar{t}$ events were simulated at NLO accuracy in QCD using Powheg Box v2 [48][49][50] and the NNPDF3.0 PDF set. The parton shower was generated with Pythia 8.230 using the A14 tune [51]. The $t\bar{t}$ events are normalised to a cross-section value calculated with the Top++ 2.0 program at next-to-next-to-leading order (NNLO) in perturbative QCD, including soft-gluon resummation to next-to-next-to-leading-logarithm order (see Ref. [52] and references therein).
The $tW\gamma$ events were generated at LO accuracy with the MadGraph5_aMC@NLO 2.7.3 generator in the five-flavour scheme. To simulate this process, two complementary samples were generated: one as a $2 \to 3$ process assuming a stable top quark, and the other as a $2 \to 2$ process, where the photon is radiated from any other charged final-state particle. To avoid infrared divergences, the photon was required to have $p_\text{T} > 15$ GeV and $|\eta| < 5.0$ and to be separated by $\Delta R > 0.2$ from any parton. Both samples make use of the NNPDF2.3 PDF set and were interfaced to Pythia 8.212 for parton showering using the A14 tune.
Single-top-quark $s$- and $t$-channel production and inclusive $tW$ production were simulated at ME level at NLO in QCD with Powheg Box v2 and the NNPDF2.3 PDF set. The event generator was interfaced to Pythia 8.230, which used the A14 tune. The events are normalised to the NNLO cross sections [53][54][55].
Events with $W\gamma$ and $Z\gamma$ final states (with additional jets) were simulated with Sherpa 2.2.8 [56] at NLO in QCD using the NNPDF3.0 PDF set. The samples are normalised to the cross sections given by the MC simulation. The Sherpa generator performs all steps of the event generation, from the hard process to the observable particles. Events with inclusive $W$- and $Z$-boson production in association with additional jets were simulated with Sherpa 2.2.1 [56,57] at NLO in QCD. The NNPDF3.0 PDF set was used together with a dedicated tune provided by the Sherpa authors. The samples are normalised to the NNLO cross section in QCD [58].
Diboson processes, $WW$, $WZ$ and $ZZ$, were generated with Sherpa 2.2.2 (leptonic decays) and Sherpa 2.2.1 (non-leptonic final states) at LO in QCD. The NNPDF3.0 PDF set was used with a dedicated tune provided by the Sherpa authors. The samples are normalised to NLO cross sections in QCD [59].
Events with a $t\bar{t}$ pair and an associated $W$ or $Z$ boson ($t\bar{t}V$) were simulated at NLO at the ME level with MadGraph5_aMC@NLO using the NNPDF3.0 PDF set. The ME generator was interfaced to Pythia 8.210, for which the A14 tune was used in conjunction with the NNPDF2.3 PDF set. The samples are normalised to NLO in QCD and electroweak theory [60].
A procedure was applied to remove the overlap between the samples in which events were generated at ME level without explicitly including a photon in the final state and the dedicated samples where photons were included in the ME-level event-generation step ($t\bar{t}\gamma$ and $tW\gamma$ final states, as well as $W\gamma$ and $Z\gamma$ final states with additional jets). Events in the inclusive samples are discarded if they contain a parton-level photon that fulfils $p_\text{T}(\gamma) > 15$ GeV and $\Delta R(\gamma, \ell) > 0.2$, where $p_\text{T}(\gamma)$ is the transverse momentum of the photon and $\Delta R(\gamma, \ell)$ is the angular distance between the photon and any charged lepton.
Corrections to the pile-up profile, the trigger, reconstruction and selection efficiencies, and the energy scales and resolutions, are applied to the MC simulation samples to improve the description of the data.

Event reconstruction and selection
The analysis uses a data set that passes stringent quality requirements and corresponds to an integrated luminosity of 139 fb$^{-1}$ collected with the ATLAS detector during Run 2 of the LHC. Events are required to have at least one primary vertex reconstructed from at least two associated tracks, and only events where at least one single-electron [61] or single-muon [62] trigger was fired are selected.
Muons are reconstructed by combining a track in the muon spectrometer with a track in the ID. The reconstruction, identification and calibration methods are described in Ref. [63]. The muon track is also required to originate from the primary collision vertex. Muons are required to be isolated according to measurements of nearby track $p_\text{T}$ and calorimeter energy. Only muons with calibrated $p_\text{T} > 25$ GeV and $|\eta| < 2.5$ that pass 'medium' quality requirements are considered.
Electrons are reconstructed from energy deposits in the electromagnetic calorimeter (ECAL) associated with reconstructed tracks in the ID. The origin of the electron track also has to be compatible with the primary vertex. Electrons are identified with a combined likelihood technique [64] using a 'tight' working point, and are required to be isolated according to measurements of nearby calorimeter energy and track $p_\text{T}$. Electrons are calibrated with the method described in Ref. [64] and are selected if they fulfil $p_\text{T} > 25$ GeV and $|\eta_\text{clus}| < 2.47$, excluding the ECAL barrel/endcap transition region $1.37 < |\eta_\text{clus}| < 1.52$, where $\eta_\text{clus}$ refers to the pseudorapidity of the calorimeter energy cluster associated with the electron.
Photons are reconstructed from energy deposits in the central region of the ECAL [64]. Photons are required to fulfil tight identification and isolation requirements. The latter is defined as $E_\text{T}^{\text{iso},\,\Delta R<0.4} < 0.022 \cdot p_\text{T}(\gamma) + 2.45$ GeV in conjunction with $p_\text{T}^{\text{iso},\,\Delta R<0.2} < 0.05 \cdot p_\text{T}(\gamma)$, where $E_\text{T}^\text{iso}$ refers to the calorimeter isolation within a cone of size $\Delta R = 0.4$ around the direction of the photon candidate and $p_\text{T}^\text{iso}$ is the track isolation within a $\Delta R = 0.2$ cone [65]. Photons are required to have transverse energy $E_\text{T} > 20$ GeV and $|\eta_\text{clus}| < 2.37$, excluding the calorimeter transition region. They are separated into two categories: one where the cluster is not matched to any reconstructed track in the ID (unconverted photons), and one where the cluster is matched to one or two reconstructed tracks that are consistent with originating from a photon conversion and, in addition, a conversion vertex can be found (converted photons).
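The two-part photon isolation requirement quoted above is a simple pair of linear cuts in the photon $p_\text{T}$. A minimal sketch (function and variable names are illustrative, not the analysis-software API; all quantities in GeV):

```python
def passes_photon_isolation(pt_gamma, calo_iso_dr04, track_iso_dr02):
    """Photon isolation as quoted in the text: the calorimeter isolation
    in a dR < 0.4 cone and the track isolation in a dR < 0.2 cone must
    both lie below pT-dependent thresholds."""
    calo_ok = calo_iso_dr04 < 0.022 * pt_gamma + 2.45
    track_ok = track_iso_dr02 < 0.05 * pt_gamma
    return calo_ok and track_ok

# A 50 GeV photon with 3.0 GeV calo and 2.0 GeV track isolation passes
# (thresholds: 0.022*50 + 2.45 = 3.55 GeV and 0.05*50 = 2.5 GeV)
print(passes_photon_isolation(50.0, 3.0, 2.0))  # → True
```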
Jets are reconstructed using the anti-$k_t$ algorithm [66] in the FastJet implementation [67] with a distance parameter $R = 0.4$. Their reconstruction is performed on particle-flow objects [68]. The jet energy scale and jet energy resolution are calibrated using an energy- and $\eta$-dependent calibration scheme based on simulation, with in situ corrections obtained from data [69]. Only jets with $p_\text{T} > 25$ GeV and $|\eta| < 2.5$ are considered in the analysis. Jets with a large contribution from pile-up vertices are identified with the jet vertex tagger (JVT) [70] and rejected.
Jets arising from $b$-quark hadronisation, referred to as $b$-jets, are identified using the DL1r $b$-tagging algorithm [71], which is based on an artificial deep neural network combining information from other algorithms using track impact parameters and secondary vertices, and a multi-vertex reconstruction algorithm. The flavour-tagging efficiency for $b$-jets, as well as for $c$-jets and light-flavour jets, is calibrated as described in Ref. [72]. The working point used to select the $b$-jets corresponds to a selection efficiency of 77% in simulated $t\bar{t}$ events.
The magnitude of the reconstructed missing transverse momentum ($E_\text{T}^\text{miss}$) [73,74] is calculated from the negative vector sum of the $p_\text{T}$ of all calibrated physics objects and the remaining unclustered energy, also called the soft term. This term is estimated from low-$p_\text{T}$ tracks associated with the primary vertex but not with any reconstructed object.
An overlap-removal procedure is implemented to avoid reconstructing the same energy clusters or tracks as different objects. Electron candidates that share their track with a muon candidate are removed, and jets within a $\Delta R = 0.2$ cone around any of the remaining electrons are excluded. If the distance between an electron and any remaining jet is $\Delta R < 0.4$, the electron is subsequently removed. In the next step, muon candidates within $\Delta R = 0.4$ of a jet are removed if the jet has more than two associated tracks; otherwise the jet is discarded. In the final step, photons within a $\Delta R = 0.4$ cone around any remaining electron or muon are excluded, and then jets within a $\Delta R = 0.4$ cone around any remaining photon are removed.
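The last step of this sequence can be sketched with plain $\Delta R$ arithmetic. A minimal illustration, assuming objects are simple (eta, phi) tuples rather than the analysis-software objects:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt(deta^2 + dphi^2), with dphi
    wrapped into [-pi, pi]."""
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
    return math.hypot(eta1 - eta2, dphi)

def final_overlap_step(photons, leptons, jets, cone=0.4):
    """Sketch of the final overlap-removal step described above:
    photons within the cone of any lepton are dropped, then jets
    within the cone of any surviving photon are dropped."""
    kept_photons = [p for p in photons
                    if all(delta_r(*p, *l) >= cone for l in leptons)]
    kept_jets = [j for j in jets
                 if all(delta_r(*j, *p) >= cone for p in kept_photons)]
    return kept_photons, kept_jets
```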
Events are selected if they have exactly one electron or one muon that is matched to the corresponding trigger-level object. The $p_\text{T}$ thresholds for the leptons are 25 GeV in 2015 data, 27 GeV in 2016 data, and 28 GeV in 2017 and 2018 data, which are at least 1 GeV above the $p_\text{T}$ thresholds of the single-lepton triggers. This avoids differences due to the calibration of the objects used in the trigger logic and of the objects used in the physics analysis. Only events containing exactly one reconstructed photon fulfilling the condition $\Delta R(\ell, \gamma) > 0.4$ are considered in the measurement. Additionally, events where the invariant mass of the electron-photon system is within 5 GeV of the $Z$-boson mass are rejected. The event is also required to have at least four jets, at least one of which must be $b$-tagged.
The kinematic properties, in particular the rapidities of the top quark and antiquark, are determined by means of a constrained kinematic fitting algorithm, KLFitter [75], based on a maximum-likelihood approach applied to the four-momenta of the selected lepton, up to five leading jets ($p_\text{T}$-ordered) and $E_\text{T}^\text{miss}$, the latter representing the transverse momentum of the neutrino. The likelihood is constructed as the product of transfer functions that relate the energies of the reconstructed objects and parton-level objects and Breit-Wigner distributions that match the selected objects to the $W$ bosons and the top quarks. The fit is constrained to reconstruct two $W$ bosons, each with a mass of 80.4 GeV. In addition, the reconstructed masses of the top quark and antiquark are constrained to 172.5 GeV. The combination of jets that gives the highest likelihood is selected, and those jets are used to reconstruct the hadronically and leptonically decaying top quark or antiquark. In the case of the leptonic top-quark decay, the invariant mass of the lepton-neutrino system is constrained to be the $W$-boson mass and a quadratic equation for the neutrino's longitudinal momentum is obtained. For real solutions of the equation, the solution that results in the top-quark mass closer to 172.5 GeV is chosen, while in the case of complex solutions, the real part of the solution is considered. The fraction of top quarks that are reconstructed within $\Delta R = 1.0$ of the parton-level top quarks is about 64% for hadronically decaying top quarks and 72% for leptonically decaying top quarks.
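The quadratic for the neutrino longitudinal momentum follows from the $W$-mass constraint $m_W^2 = (p_\ell + p_\nu)^2$ with a massless lepton. A minimal sketch of how such a quadratic is typically solved (an illustration of the constraint, not the KLFitter implementation):

```python
import math

def neutrino_pz(lep, met_x, met_y, m_w=80.4):
    """Solve m_W^2 = (p_lep + p_nu)^2 for the neutrino pz, treating the
    lepton as massless. `lep` = (px, py, pz, E); the MET components give
    the neutrino transverse momentum. Returns the two solutions; for a
    negative discriminant (complex solutions) the real part is returned
    twice, as described in the text. Illustrative sketch only."""
    px, py, pz, e = lep
    pt2 = px * px + py * py
    mu = 0.5 * m_w ** 2 + px * met_x + py * met_y
    a = mu * pz / pt2
    disc = a * a - (e * e * (met_x ** 2 + met_y ** 2) - mu * mu) / pt2
    if disc < 0:
        return a, a  # complex solutions: keep the real part
    r = math.sqrt(disc)
    return a - r, a + r
```

By construction, combining the lepton with a neutrino built from either solution reproduces an invariant mass of 80.4 GeV.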

Background estimation
Each background process is assigned to one of several categories depending on the origin of the reconstructed photon, whether it is a prompt photon or whether another object mimics a photon signature.The estimation of background events from different sources closely follows the methods employed in Ref. [46].
After the event selection, the largest background contribution is that of $t\bar{t}\gamma$ decay events (about 30% of the total number of events). The prompt-photon background category, which contains any other type of background process with a prompt photon, constitutes about 15% of the selected events. Both contributions are estimated using MC simulation. Agreement between data and simulation of the prompt-photon background was validated in dedicated regions. A validation region enriched in the $Z\gamma$ process is defined by selecting events with exactly one photon, two same-flavour opposite-sign leptons, fewer than four jets and no $b$-tagged jets. Events in the $W\gamma$ validation region fulfil the same lepton and photon requirements as in the signal region. The simulation of $W\gamma$ is known to underestimate the number of events with heavy-flavour jets in data. Thus, to define a region orthogonal to the signal region while selecting events with heavy-flavour content, the events in this validation region are additionally required to have fewer than four jets, at least one of which must pass the $b$-tagging working point with a selection efficiency of 85% but fail the one with 70% efficiency in simulated $t\bar{t}$ events. The expected fractions of $Z\gamma$ and $W\gamma$ events in the corresponding regions are about 95% and 50%, respectively. Another significant contribution arises from processes with an electron mimicking a photon signature in the detector, referred to as e-fake. This background contribution amounts to 16% of the total number of selected events, and it is estimated from data by applying a tag-and-probe method to $Z \to e^+e^-$ events [65]. Two control regions are defined in order to determine the e-fake photon rate in data and simulation: one region contains events with an electron-positron pair, and the other, enriched in e-fake photon events, contains events with an electron and a photon satisfying the object selection criteria described in Section 4.
Additionally, the invariant mass of the pair of objects is required to be in the range [40, 140] GeV, and their angular separation in $\phi$ must exceed 2.62 rad to suppress contributions from events where the photon is radiated from the electron. Background contributions not originating from $Z$-boson events are subtracted in data with a fit of the invariant mass distribution. The $Z \to e^+e^-\gamma$ contribution with prompt photons, where one of the electrons is not reconstructed or identified, is subtracted using simulation. The e-fake photon rate is obtained as the ratio of the event yield after background subtraction in the e-fake-enriched control region to the yield in the electron-positron control region. The calculated ratio is binned in the $p_\text{T}$ (three bins) and $|\eta|$ (four bins) of the photon in the $e\gamma$ events and of either the electron or the positron in the $e^+e^-$ events, selected randomly to avoid biasing the selection, and separately for converted and unconverted photons. Scale factors are calculated to correct the e-fake background estimate in the signal region, based on a comparison between the e-fake photon rates obtained using either data or simulation. The systematic uncertainties in the scale factors account for possible mismodelling of the signal and background processes in the fit. The values of the scale factors vary from 0.8 to 1.4, with uncertainties between 5% and 20%.
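The rate and scale-factor construction described above reduces to simple yield ratios per ($p_\text{T}$, $|\eta|$, conversion-type) bin. A numeric sketch with made-up yields for a single bin:

```python
def fake_rate(n_egamma, n_ee, n_background, n_prompt):
    """e-fake photon rate: the e-gamma yield after subtracting the
    fitted non-Z background and the Z->ee+gamma prompt contribution,
    divided by the e+e- control-region yield. Yields are made up."""
    return (n_egamma - n_background - n_prompt) / n_ee

# Made-up yields for one (pT, |eta|) bin, in data and in simulation
rate_data = fake_rate(n_egamma=1200.0, n_ee=50000.0,
                      n_background=150.0, n_prompt=50.0)
rate_mc = fake_rate(n_egamma=900.0, n_ee=45000.0,
                    n_background=0.0, n_prompt=45.0)
scale_factor = rate_data / rate_mc  # corrects the simulated e-fake yield
print(round(scale_factor, 3))  # → 1.053
```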
The background contribution from events where the photon signature arises from hadronic energy depositions in the ECAL or from hadron decays such as $\pi^0 \to \gamma\gamma$, generically referred to as h-fake, constitutes about 7% of the events. The h-fake background is estimated from data by using the so-called ABCD method. Three orthogonal regions enriched in h-fake photon events are defined by inverting the photon isolation selection and the requirements on four variables related to the shower shape in the first layer of the ECAL, which are part of the tight photon identification criteria. These variables are chosen because of their small correlation with the photon isolation and their power to discriminate between prompt and h-fake photons. Events are selected for regions A and B if their photon fails at least two of the four identification requirements, while satisfying all other identification criteria, and passes or fails the isolation requirements, respectively. Region C contains events where the photon fails the isolation requirements but satisfies the tight identification criteria. Additionally, the sum of the $p_\text{T}$ of all tracks within $\Delta R = 0.2$ of the photon is required to be larger than 3 GeV to further suppress the prompt-photon contribution in regions B and C. The h-fake background contribution in the signal region is estimated as the product of the numbers of events in regions A and C divided by the number of events in region B.
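The ABCD extrapolation described above amounts to a single ratio per bin. A minimal sketch with made-up yields:

```python
def abcd_estimate(n_a, n_b, n_c, correlation=1.0):
    """ABCD background estimate in the signal region: if the
    identification and isolation criteria are uncorrelated,
    N_signal = N_A * N_C / N_B. `correlation` stands in for the
    MC-derived correction factor mentioned in the text (1.0 means
    fully uncorrelated). Yields are made up."""
    return correlation * n_a * n_c / n_b

# Made-up yields: 200 events in region A, 80 in B, 120 in C
print(abcd_estimate(200.0, 80.0, 120.0))  # → 300.0
```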
The estimate is corrected for the correlation between the criteria, which is obtained using MC simulations. The scale factors are obtained separately for converted and unconverted photons and as a function of the photon $p_\text{T}$ (two bins) and $|\eta|$ (four bins). The considered sources of systematic uncertainty in the h-fake background contribution include the modelling of the $t\bar{t}$ process, which contributes about 90% of the h-fake events, the shower shapes, and the normalisation uncertainties of the background processes. The scale factors range from 0.6 to 1.5, with uncertainties of 30%-60%.
The contribution from events with a non-prompt or misidentified lepton, referred to as lepton fake, is obtained using the data-driven approach known as the matrix method [76]. Events are separated into two categories based on tighter or looser lepton identification and isolation requirements, and thus enriched in events with real leptons or with non-prompt/fake leptons, respectively. The contribution in the signal region is estimated from the data events passing the loose lepton selection requirements, corrected by a weight that depends on the real- and fake-lepton efficiencies obtained from the two event categories described above. The efficiencies are parameterised as a function of the lepton kinematic properties. The uncertainties are estimated by using different parameterisations and tighter control regions, and by including the normalisation uncertainty of the prompt-lepton background. This background contribution amounts to around 1% of all selected events and is dominated by events with a misidentified electron.
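In its simplest inclusive form, the matrix method inverts a 2x2 system relating loose and tight yields to the real- and fake-lepton components. A minimal sketch with made-up efficiencies and yields (the analysis applies this per event as a weight, not inclusively):

```python
def fake_yield_in_tight(n_loose, n_tight, eff_real, eff_fake):
    """Matrix-method sketch: invert
        N_tight = eff_real * N_real + eff_fake * N_fake
        N_loose = N_real + N_fake
    to obtain the fake-lepton yield passing the tight selection.
    Efficiencies and yields are illustrative."""
    n_fake_loose = (eff_real * n_loose - n_tight) / (eff_real - eff_fake)
    return eff_fake * n_fake_loose

# Made-up inputs: 1000 loose events, 870 tight, eff_real=0.9, eff_fake=0.2
print(round(fake_yield_in_tight(1000.0, 870.0, 0.9, 0.2), 2))  # → 8.57
```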

Systematic uncertainties
The precision of the measured asymmetry $A_\text{C}$ is affected by several sources of systematic uncertainty, arising from detector effects or theoretical assumptions, as well as by uncertainties due to the limited number of events in the MC simulations. The different sources of systematic uncertainty, discussed in the following, affect the event yields, the shape of the distribution of the observable of interest, or both. The sources of systematic uncertainty affecting the shape of the distribution typically have a larger impact on the precision of the result because global normalisation uncertainties cancel out in the ratio of event yields that defines $A_\text{C}$.
The experimental systematic effects include uncertainties in the integrated luminosity and the simulation of pile-up events, as well as effects related to the reconstruction and identification of the physics objects in the analysis.
The uncertainty in the total integrated luminosity is estimated to be 1.7% [77], using the LUCID-2 detector [78] for the primary luminosity measurements. The uncertainty associated with the modelling of pile-up is determined by varying the pile-up reweighting in the simulation within its uncertainties.
The photon and lepton identification and isolation efficiencies, momentum scale and resolution [79,80], and lepton trigger efficiencies in simulation are corrected using scale factors to better describe the corresponding values in data. These corrections, which typically depend on $p_\text{T}$ and $\eta$, are varied within their uncertainties to estimate the corresponding systematic uncertainty.
The jet energy scale (JES) uncertainty is derived from a combination of simulations, test-beam data and in situ measurements [69]. Contributions from jet-flavour composition, $\eta$-intercalibration, punch-through, single-particle response, calorimeter response to different jet flavours, and pile-up are also taken into account, yielding a total of 30 uncorrelated JES uncertainty subcomponents, of which 29 are non-zero in a given event depending on the type of simulation used. The jet energy resolution in simulation is smeared by its corresponding uncertainty [81], split into eight uncorrelated sources. The uncertainty associated with the JVT discriminant for pile-up jet rejection is obtained by varying the efficiency correction factors.
The uncertainties in the $b$-jet tagging calibration are determined separately for $b$-jets, $c$-jets and light-flavour jets [82][83][84]. For each jet category, the uncertainties are decomposed into several uncorrelated components.
The uncertainty in $E_\text{T}^\text{miss}$ arises from the propagation of the energy scales and resolutions of photons, leptons and jets, and from the modelling of its soft term [74].
The signal and background modelling uncertainties include those owing to the choice of QCD scales, parton shower, amount of QCD initial-state radiation (ISR), and PDF set.
The effect of the QCD scale uncertainty for each of the $t\bar{t}\gamma$, $tW\gamma$ and $t\bar{t}$ processes is evaluated independently by separately halving and doubling the renormalisation and factorisation scales relative to the default scale choice. The uncertainty from the parton shower and hadronisation for those processes is estimated by comparing the nominal simulated samples interfaced with Pythia 8 with alternative samples interfaced to Herwig 7 [85,86]. Uncertainties due to the value of $\alpha_\text{s}$ used in the ISR parton-shower modelling are estimated by comparing the nominal $t\bar{t}\gamma$, $tW\gamma$ and $t\bar{t}$ simulations with alternative samples simulated with higher or lower radiation parameter settings in the A14 tune, controlled by the var3c parameter implemented in Pythia 8. An additional ISR uncertainty is obtained for the $t\bar{t}$ process by comparing the nominal sample with one in which the $h_\text{damp}$ parameter, which controls the $p_\text{T}$ of the first additional emission, is varied by a factor of two [87]. The uncertainty in the PDFs for the signal and background $t\bar{t}\gamma$ processes is estimated using the 30 PDF variations of the PDF4LHC15 prescription [88]. The PDF variations are propagated by using alternative generator weights, and each of them is considered as a separate nuisance parameter in the fit.
For the e-fake, h-fake, and lepton-fake background contributions, the total uncertainties associated with the corrections obtained using data are propagated to the final result. A normalisation uncertainty of 20% is assigned to the $t\bar{t}\gamma$ decay process, based on the estimated uncertainty in the NLO $k$-factor [46]. A 50% normalisation uncertainty is assigned to $W\gamma$, based on the differences between data and simulation observed in the dedicated validation region, and to the minor background processes contributing to the prompt-photon category, i.e. single top quark, $t\bar{t}V$, diboson, and $Z\gamma$.

Signal discrimination
A multivariate analysis using a neural network is performed to further separate the $t\bar{t}\gamma$ signal from the background processes. The NN is fully connected and consists of three hidden layers. The first two layers consist of 96 nodes each and are followed by a batch-normalisation layer. The third layer has 16 nodes. The hidden layers use a parametric ReLU activation function, while the output node uses a sigmoid activation function. The training is performed with Keras [89] with the TensorFlow [90] backend, using binary cross-entropy as the loss function. The events are split into a training and validation set (85%) and a testing set (15%). The first set of events is used in a 5-fold cross-validation: events are split randomly into five folds, the model is trained on four folds, and one fold is used for validation of the NN configuration. This procedure is repeated five times. The event weights are applied to the events in the training, testing and validation samples. The NN uses a total of 21 variables related to the kinematic properties of individual objects, such as the photon $p_\text{T}$ and $\eta$, event-shape variables (e.g. $E_\text{T}^\text{miss}$ and the scalar sum of the $p_\text{T}$ of the jets in the event), the number of $b$-tagged jets, the pseudo-continuous binned $b$-tagging discriminant [72], the photon conversion type, and invariant masses and angular separations of different objects in the event (e.g. the invariant mass and $\Delta R$ of the lepton or the photon and the closest $b$-tagged jet). Example variables from among those with the most discriminating power are shown in Figure 2.
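The layer sequence described above (two 96-node PReLU layers followed by batch normalisation, a 16-node PReLU layer and a sigmoid output over 21 input variables) can be sketched as a plain NumPy forward pass. Layer sizes follow the text; the weights, the PReLU slope and the batch-normalisation constants are placeholders, not the trained Keras model:

```python
import numpy as np

rng = np.random.default_rng(0)

def prelu(x, alpha=0.25):
    """Parametric ReLU; alpha is a placeholder for the learned slope."""
    return np.where(x > 0, x, alpha * x)

def batch_norm(x, eps=1e-5):
    """Illustrative normalisation over the batch dimension."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

# Random placeholder weights for the 21 -> 96 -> 96 -> 16 -> 1 topology
w = [rng.normal(scale=0.1, size=s)
     for s in [(21, 96), (96, 96), (96, 16), (16, 1)]]

def nn_output(x):
    """Forward pass mirroring the described layer sequence."""
    h = prelu(x @ w[0])
    h = batch_norm(prelu(h @ w[1]))  # batch norm after the 96-node layers
    h = prelu(h @ w[2])
    return 1.0 / (1.0 + np.exp(-(h @ w[3])))  # sigmoid score in (0, 1)

scores = nn_output(rng.normal(size=(8, 21)))  # 8 toy events, 21 variables
print(scores.shape)  # → (8, 1)
```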
The MC simulation describes the data within the uncertainties. The largest contributions to the MC uncertainty band are the normalisation uncertainties associated with the backgrounds with prompt photons. The resulting NN discriminant output, $O_{\text{NN}}$, shown in Figure 3, is used to divide the events into a background-enriched region and a signal-enriched region, defined by $O_{\text{NN}} < 0.6$ and $O_{\text{NN}} \geq 0.6$, respectively. This threshold was optimised to give the smallest expected uncertainty in $A_{\text{C}}$. The observed and expected signal and background event yields in the two regions are summarised in Table 1. The slight underestimate of the data by the SM prediction, observed in Figure 2, is reflected in Figure 3 at large values of $O_{\text{NN}}$; it is expected to come partially from the normalisation of the $t\bar{t}\gamma$ production simulation, which is a free parameter in the profile likelihood unfolding described in the following.

Results
The value of $A_{\text{C}}$ is extracted from the $|y_t| - |y_{\bar{t}}|$ distribution in a fiducial region defined at particle level. The top quark and antiquark are defined at parton level in the MC simulation after final-state radiation but before decay. The fiducial region at particle level is defined by applying selection requirements similar to those at reconstruction level to the stable particles after the event generation and before the detector simulation. The fiducial phase space is defined by requiring exactly one photon, exactly one electron or muon, and at least four jets, of which at least one must be a $b$-jet, defined as follows. Photons are required to not originate from a hadron decay, to have $p_{\text{T}} > 20$ GeV and $|\eta| < 2.37$, and to be isolated such that the sum of the transverse momenta of all charged particles surrounding the photon within $\Delta R \leq 0.2$ must be less than 5% of the photon's own $p_{\text{T}}$. Muons and electrons must have $p_{\text{T}} > 25$ GeV and $|\eta| < 2.5$, and must not originate from hadron decays. The momenta of nearby photons, within a $\Delta R = 0.1$ cone, are added to the lepton before applying the selection. Jets are clustered with the anti-$k_t$ algorithm with a radius parameter of $R = 0.4$. All stable particles are considered in the clustering, except for the selected electrons, muons and photons, and the neutrinos originating from the top quarks. Jets are required to have $p_{\text{T}} > 25$ GeV and $|\eta| < 2.5$. A particle-level jet is identified as a $b$-jet if a hadron with $p_{\text{T}} > 5$ GeV containing a $b$-quark is matched to the jet through a ghost-matching method [91]. Jets within $\Delta R = 0.4$ of lepton or isolated-photon candidates are removed.
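As an illustration, the particle-level photon isolation requirement amounts to a simple cone sum. The dictionary-based object representation below is a hypothetical structure for this sketch, not the event model of the actual analysis software.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation ΔR = sqrt(Δη² + Δφ²), with Δφ wrapped to [-π, π]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def photon_is_isolated(photon, charged_particles):
    """Particle-level isolation described above: the scalar sum of the pT of
    charged particles within ΔR <= 0.2 of the photon must stay below 5% of
    the photon pT. Objects are dicts with 'pt', 'eta', 'phi' (illustrative)."""
    cone_sum = sum(
        p["pt"]
        for p in charged_particles
        if delta_r(photon["eta"], photon["phi"], p["eta"], p["phi"]) <= 0.2
    )
    return cone_sum < 0.05 * photon["pt"]
```

For a 100 GeV photon, for instance, the charged-particle activity in the cone must stay below 5 GeV for the photon to enter the fiducial selection.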
The $A_{\text{C}}$ value is obtained by means of a simultaneous maximum-likelihood unfolding of the $|y_t| - |y_{\bar{t}}|$ distributions in the two regions defined by the NN output discriminant. The efficiency of selecting and reconstructing an event that is generated in the fiducial phase space is about 30% in the two regions, while the fraction of events that fulfil the selection at reconstruction level but fail the particle-level requirements is about 20%. The fraction of events that are reconstructed in the $|y_t| - |y_{\bar{t}}|$ bin where they were generated is approximately 75%. No regularisation is applied. The systematic uncertainties are taken into account via nuisance parameters in the likelihood function. They are symmetrised by taking half of the difference between the upward and downward variations as the uncertainty. If both variations have the same sign, the average of the difference between each variation and the nominal value is chosen as the two-sided uncertainty. If only one variation is available (e.g. the uncertainty from the parton shower or the PDF uncertainties), the difference from the nominal value is taken as both the upward and downward variations for the corresponding source. In addition, a pruning procedure is implemented in order to remove the smallest systematic uncertainties.
In addition to the NLO/LO $K$-factor applied to correct the normalisation of the $t\bar{t}\gamma$ decay process, the $|y_t| - |y_{\bar{t}}|$ template is reweighted to account for the asymmetry in $t\bar{t}$ production. The weight is obtained at parton level by considering the central value of the prediction for the inclusive $t\bar{t}$ asymmetry, $A_{\text{C}}^{t\bar{t}} = 0.0064^{+0.0005}_{-0.0006}$, calculated at NNLO accuracy in QCD with EW corrections at NLO [17,18]. The weight is then propagated to the distribution at reconstruction level. For consistency, the MC templates obtained with the NLO $t\bar{t}$ samples are also reweighted to match that asymmetry value. To probe the robustness of the method, and in particular to verify that the unfolding procedure does not bias the result towards the asymmetry in the signal MC simulation, the measurement was repeated with pseudodata. Several pseudodata sets were obtained by reweighting the $|y_t| - |y_{\bar{t}}|$ distribution of the signal $t\bar{t}\gamma$ production sample to correspond to different $A_{\text{C}}$ values and by adding the background contributions. The unfolding procedure was repeated using the nominal simulation. The resulting values of the asymmetry agree with the true $A_{\text{C}}$ asymmetry of each pseudodata set within the statistical precision and do not indicate any bias. The $|y_t| - |y_{\bar{t}}|$ distributions after the fit are shown in Figure 4. The $A_{\text{C}}$ of the $t\bar{t}\gamma$ production process is expected to have a negative sign, while the asymmetry of the background contributions with a top-quark pair, i.e. $t\bar{t}\gamma$ decay and $t\bar{t}$, is expected to be positive, as discussed in Section 1. Good agreement is observed between the data and the prediction after the fit. As a result of the fit, a few nuisance parameters are slightly constrained: the uncertainties in the normalisation of the $t\bar{t}\gamma$ decay and $W\gamma$ backgrounds are reduced by 30% and 15%, respectively. In all cases, the best-fit values of the nuisance parameters are well within one standard deviation of their initial values.
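The text does not spell out the exact reweighting used to build the pseudodata sets. A minimal sketch, assuming a single per-sign weight that sets the inclusive asymmetry while preserving the total yield, could look as follows; the real analysis may well reweight differentially in $|y_t| - |y_{\bar{t}}|$.

```python
def weights_for_target_ac(n_pos, n_neg, target_ac):
    """Per-event weights for the positive and negative halves of the
    |y_t| - |y_tbar| distribution such that the reweighted sample has
    A_C = (N+ - N-)/(N+ + N-) = target_ac with the total yield unchanged.
    (Illustrative assumption about the procedure, not the analysis code.)"""
    total = n_pos + n_neg
    w_pos = 0.5 * total * (1.0 + target_ac) / n_pos
    w_neg = 0.5 * total * (1.0 - target_ac) / n_neg
    return w_pos, w_neg
```

Applying these two weights to the signal template and then adding the unmodified background contributions yields one pseudodata set per injected $A_{\text{C}}$ value.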
The asymmetry is found to be $A_{\text{C}} = -0.003 \pm 0.029 = -0.003 \pm 0.024\,(\text{stat}) \pm 0.017\,(\text{syst})$, assuming the SM $t\bar{t}$ charge asymmetry of $A_{\text{C}}^{t\bar{t}} = 0.0064$ [18]. The systematic uncertainty is derived from its squared value, calculated as the difference of the squares of the total uncertainty and the statistical uncertainty, the latter obtained from a fit without systematic uncertainties. The $A_{\text{C}}$ value is compatible with the value obtained from the MadGraph5_aMC@NLO MC simulation in the same phase space, $A_{\text{C}} = -0.014 \pm 0.001\,(\text{scale})$. The precision of the result is limited by the statistical uncertainty. The impact of the different sources of systematic uncertainty, grouped in categories, is summarised in Table 2. The most relevant sources of systematic uncertainty are the MC statistical uncertainty of the prompt-photon background and the experimental systematic sources related to jets and $E_{\text{T}}^{\text{miss}}$. The dependence of the measured $t\bar{t}\gamma$ $A_{\text{C}}$ on $A_{\text{C}}^{t\bar{t}}$ is estimated by repeating the measurement for different values of the $t\bar{t}$ asymmetry in the range between 0 and $2 \times A_{\text{C}}^{t\bar{t}}$. The dependence is found to be linear and can be parameterised as $A_{\text{C}} = -0.57 \times A_{\text{C}}^{t\bar{t}} + 0.0005$.
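The quadrature subtraction and the linear parameterisation quoted above can be checked numerically. With the rounded published values the recovered systematic component comes out near 0.016 rather than the quoted 0.017, presumably because the quoted breakdown is computed from unrounded uncertainties.

```python
import math

# Systematic component from subtracting the statistical uncertainty in
# quadrature from the total uncertainty, using the rounded quoted values:
total, stat = 0.029, 0.024
syst = math.sqrt(total**2 - stat**2)  # about 0.0163 with rounded inputs

# Linear dependence of the measured ttbar-gamma asymmetry on the assumed
# inclusive ttbar asymmetry, as parameterised in the text:
def ac_ttgamma(ac_ttbar):
    return -0.57 * ac_ttbar + 0.0005

# Inserting the SM value A_C^ttbar = 0.0064 reproduces the measured
# central value of -0.003 after rounding.
central = ac_ttgamma(0.0064)
```

The parameterisation also shows that even switching off the $t\bar{t}$ asymmetry entirely ($A_{\text{C}}^{t\bar{t}} = 0$) would shift the result by well under the quoted uncertainty.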

Conclusion
This paper presents a measurement of the top-quark pair charge asymmetry in $t\bar{t}\gamma$ events using 139 fb$^{-1}$ of $pp$ collision data at a centre-of-mass energy of 13 TeV collected by the ATLAS experiment at the LHC. The selected events have exactly one photon, one lepton, and at least four jets, of which at least one is $b$-tagged.
The inclusive charge asymmetry yields $A_{\text{C}} = -0.003 \pm 0.029 = -0.003 \pm 0.024\,(\text{stat}) \pm 0.017\,(\text{syst})$, which is compatible with the Standard Model prediction within the uncertainties. The precision is limited by the statistical uncertainty.

Figure 1 :
Figure 1: Example Feynman diagrams of $t\bar{t}\gamma$ production contributing to the charge asymmetry.

Figure 2 :
Figure 2: Distributions of the photon $p_{\text{T}}$ (left), the angular separation of the lepton and the closest $b$-tagged jet (middle), and the transverse mass of the leptonically decaying $W$ boson (right) before the fit. The uncertainty band includes all experimental and modelling systematic uncertainties (cf. Section 6) added in quadrature. Overflow events are included in the last bin of each distribution. The lower part of the plot shows the ratio of the data to the prediction.

Figure 3 :
Figure 3: Distribution of the NN output discriminant before the fit. The uncertainty band includes all experimental and modelling systematic uncertainties (cf. Section 6) added in quadrature. The lower part of the plot shows the ratio of the data to the prediction.
The parameters of interest, which float freely in the fit, are the signal strength of the bin $|y_t| - |y_{\bar{t}}| > 0$ and $A_{\text{C}}$, which replaces the signal strength of the other bin through the expression $A_{\text{C}} = (\mu_+ N_+ - \mu_- N_-)/(\mu_+ N_+ + \mu_- N_-)$. The signal strength $\mu_+$ ($\mu_-$) is defined as the ratio of the measured cross section to the value expected from the SM simulation in the bin $|y_t| - |y_{\bar{t}}| > 0$ ($< 0$) at generator level. The variable $N_{+/-}$ represents the number of $t\bar{t}\gamma$ production events at particle level in the corresponding bin.
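In code, the relation between the parameters of interest is a direct transcription of the expression above; the yields used in the check are invented for illustration.

```python
def charge_asymmetry(mu_pos, mu_neg, n_pos, n_neg):
    """A_C = (mu+ N+ - mu- N-) / (mu+ N+ + mu- N-), where mu+/- are the
    signal strengths and N+/- the particle-level ttbar-gamma yields in the
    |y_t| - |y_tbar| > 0 (< 0) bins."""
    num = mu_pos * n_pos - mu_neg * n_neg
    den = mu_pos * n_pos + mu_neg * n_neg
    return num / den
```

With both signal strengths at their SM value of 1, the expression reduces to the asymmetry of the simulated yields, so any measured deviation is absorbed by $\mu_\pm$.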

Figure 4 :
Figure 4: The distributions of $|y_t| - |y_{\bar{t}}|$ after the fit in the two regions defined by the NN output. Underflow and overflow events are included in the corresponding bins of the distributions. The uncertainty band represents the total post-fit uncertainties. Correlations among uncertainties are taken into account as determined in the fit. The lower part of the plot shows the ratio of the data to the prediction.

Table 1 :
Event yields before the profile likelihood unfolding, after the full selection, in the two regions defined by the NN discriminant value. The quoted uncertainties correspond to all statistical and systematic uncertainties (cf. Section 6) added in quadrature.

Table 2 :
Summary of the impact of the systematic uncertainties on $A_{\text{C}}$, grouped into different categories. The quoted uncertainties are obtained by repeating the fit with certain sets of nuisance parameters fixed to their post-fit values, and calculating the squared uncertainties as the difference of the squares of the full-fit and repeated-fit uncertainties. The category "Other experimental" includes uncertainties associated with leptons, pile-up and luminosity.