Search for flavour-changing neutral-current couplings between the top quark and the photon with the ATLAS detector at √s = 13 TeV

This letter documents a search for flavour-changing neutral currents (FCNCs), which are strongly suppressed in the Standard Model, in events with a photon and a top quark with the ATLAS detector. The analysis uses data collected in pp collisions at √s = 13 TeV during Run 2 of the LHC, corresponding to an integrated luminosity of 139 fb⁻¹. Both FCNC top-quark production and decay are considered. The final state consists of a charged lepton, missing transverse momentum, a b-tagged jet, one high-momentum photon and possibly additional jets. A multiclass deep neural network is used to classify events either as signal in one of the two categories, FCNC production or decay, or as background. No significant excess of events over the background prediction is observed, and 95% CL upper limits are placed on the strength of left- and right-handed FCNC interactions. The 95% CL bounds on the branching fractions for the FCNC top-quark decays, estimated from both top-quark production and decay, are B(t → uγ) < 0.85 × 10⁻⁵ and B(t → cγ) < 4.2 × 10⁻⁵ for a left-handed tqγ coupling, and B(t → uγ) < 1.2 × 10⁻⁵ and B(t → cγ) < 4.5 × 10⁻⁵ for a right-handed coupling.


Introduction
Flavour-changing neutral currents (FCNCs) are forbidden at tree level in the Standard Model (SM) and strongly suppressed at higher orders via the GIM mechanism [1], but several extensions to the SM include additional sources of FCNCs. In particular, some of these models predict the branching fractions (BRs) of top-quark decays via FCNCs to be orders of magnitude larger [2] than those predicted by the SM, which are of the order of 10⁻¹⁴ [2]. Examples are R-parity-violating supersymmetric models [3][4][5][6] and models with two Higgs doublets [7,8], which allow FCNC processes involving top quarks to have measurable rates.
This letter presents a search for FCNCs in processes with a top quark (t) and a photon (γ) based on the full dataset of √s = 13 TeV proton-proton collisions collected by the ATLAS experiment [9] during Run 2 of the LHC. The analysis is optimised to search for the production of a single top quark in association with a photon as well as for the decay of a top quark into an up (u) or charm (c) quark in association with a photon in the case of pair-produced top quarks (tt̄). Tree-level Feynman diagrams for these processes are shown in Figure 1, where in both cases exactly one top quark decays via the SM-favoured coupling. FCNC contributions to the production (Figure 1 left) and decay (t → qγ, Figure 1 right) modes can be parameterised in terms of effective coupling parameters [10,11]. Following the notation of Refs. [12,13], the relevant dimension-six operators are O_uB^(ij) and O_uW^(ij), where i ≠ j are indices for the quark generation. In general, left-handed (LH) and right-handed (RH) couplings could exist, resulting in different helicities for the top quark in the production mode, which leads to different kinematic properties for the final-state particles from the weak decay of the top quark.
The CMS Collaboration has searched for the production mode using data taken at √s = 8 TeV [14]. The ATLAS Collaboration also performed an analysis that was optimised for the production mode using 81 fb⁻¹ of data at √s = 13 TeV [15], resulting in the strongest upper limits to date. The limits on the LH (RH) effective coupling parameters were translated into BR upper limits of 2.8 × 10⁻⁵ (6.1 × 10⁻⁵) for t → uγ and 22 × 10⁻⁵ (18 × 10⁻⁵) for t → cγ. The search presented in this letter supersedes the one in Ref. [15], uses the full 139 fb⁻¹ Run 2 dataset at √s = 13 TeV and is optimised for both the decay and production modes by training a neural-network (NN) classifier to separate the decay mode, the production mode and the SM background.
The ATLAS detector

ATLAS [9,16,17] is a multipurpose particle detector designed with a forward-backward symmetric cylindrical geometry and nearly full 4π coverage in solid angle. It consists of an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic and hadronic calorimeters, and a muon spectrometer (MS). The ID covers the pseudorapidity range |η| < 2.5 and is composed of silicon pixel, silicon microstrip, and transition radiation tracking (TRT) detectors. Lead/liquid-argon (LAr) sampling calorimeters provide electromagnetic (EM) energy measurements with high granularity. Hadronic calorimetry is provided by the steel/scintillator-tile calorimeter covering the central pseudorapidity range (|η| < 1.7). The endcap and forward regions are instrumented with LAr calorimeters for both the EM and hadronic energy measurements up to |η| = 4.9. The MS surrounds the calorimeters and is based on three large air-core toroidal superconducting magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T·m across most of the detector. The MS includes a system of precision tracking chambers and fast detectors for triggering. A two-level trigger system is used to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to keep the accepted event rate below 100 kHz [18]. This is followed by a software-based trigger that reduces the accepted event rate to 1 kHz on average. An extensive software suite [19] is used in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.

Analysis strategy
A signal region (SR) is defined by selecting events that contain one high-momentum photon and the decay products of a semileptonically decaying top quark, i.e. an electron or muon, a b-tagged jet and missing transverse momentum. Additional jets may be present in the final state, as one such jet is expected at leading order for signal in the decay mode and additional jets may result from initial- or final-state radiation in both signal modes. The main backgrounds stem from events with prompt photons (mostly tt̄γ events in the lepton+jets channel and Wγ+jets events), from events with an electron that is misidentified as a photon (referred to as e → γ fakes, mostly in dileptonic tt̄ events), and from events with hadrons that are misidentified as photons (referred to as h → γ fakes, mostly in semileptonic tt̄ events). Backgrounds with prompt photons are modelled by Monte Carlo (MC) simulations, and control regions (CRs) are defined for the tt̄γ and the Wγ+jets processes. The tt̄γ CR is based on the presence of additional jets, especially an additional b-tagged jet. The Wγ+jets CR is constructed by requiring that the b-tagged jet fulfils only a looser b-tagging requirement and not the tight b-tagging requirement that is used in the SR. The contributions from e → γ and h → γ fakes are modelled by MC simulations but are corrected with data-driven scale factors (SFs). In the SR, signal and background events are distinguished using a NN with three output nodes, one for each signal mode and one for the SM background. The three nodes are combined into a one-dimensional discriminant to separate the total signal from the background. The signal contribution is then estimated with a binned profile-likelihood fit to this discriminant, with systematic uncertainties modelled as nuisance parameters. Separate NNs are trained for the tuγ and tcγ couplings because the signal processes differ in their kinematic properties, due to differences between the up- and charm-quark parton distribution functions (PDFs), and also in their b-tagging probabilities. The different b-tagging properties stem from the jets initiated by the c-quark from the FCNC top-quark decay in the case of the tcγ coupling.

ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Angular distance is measured in units of ΔR ≡ √((Δη)² + (Δφ)²).

Data and simulation
The proton-proton (pp) collision data analysed for this search were recorded with the ATLAS detector from 2015 to 2018 at a centre-of-mass energy of √s = 13 TeV. Events were selected using single-lepton triggers [18,20,21] and are required to have at least one reconstructed primary vertex with at least three associated tracks with transverse momenta greater than 500 MeV. After the application of data-quality requirements [22], the data sample corresponds to an integrated luminosity of 139 fb⁻¹, as determined using the LUCID-2 detector [23] for the primary luminosity measurements.
Monte Carlo simulated event samples are used in the analysis to optimise the event selection, to train the NN and to predict contributions from various SM processes. The effect of multiple interactions in the same and neighbouring bunch crossings (pile-up) was modelled by overlaying the simulated hard-scattering event with inelastic pp events generated by Pythia 8.186 [24] using the NNPDF2.3 set of PDFs [25] and parameter values set according to the A3 tune [26]. After the event generation, the ATLAS detector response was simulated [27] using the Geant4 toolkit [28] with either the full simulation of the ATLAS detector or the fast-simulation package [29]. In all processes, the top-quark mass was set to 172.5 GeV. For all samples of simulated events, except those generated using Sherpa, the decays of bottom and charm hadrons were performed by EvtGen [30].
The signal in the production mode (single-top-quark production in association with a photon, with t → Wb → ℓνb) and in the decay mode (tt̄ production with one top quark decaying into qγ and the other via t → Wb → ℓνb) was simulated with the MadGraph5_aMC@NLO 2.4.3 generator [31] with the UFO model TopFCNC [11,32] at next-to-leading order (NLO) in QCD, using the NNPDF3.0 PDF set [33]. The scale of new physics was set to Λ = 1 TeV. Interference effects between the production and decay FCNC processes can be neglected [34]. For SM decays of the top quark, the spin correlation was preserved using MadSpin [35]. The parton showering and hadronisation were simulated using Pythia 8.212 [36] with the A14 tune [37] and the NNPDF2.3 PDF set. In the production mode, four samples were generated with different couplings set to non-zero values: tuγ LH, tuγ RH, tcγ LH and tcγ RH. In the decay mode, only samples with a LH tuγ coupling and a LH tcγ coupling were simulated, since the kinematic properties of the LH and RH couplings were found to be very similar. The production and decay processes contribute similarly for the tuγ coupling, while for the tcγ coupling the decay is the dominant process. For given values of the Wilson coefficients, the cross-section for the production process is calculated with MadGraph5_aMC@NLO at NLO using the TopFCNC model. Then, for given values of the Wilson coefficients, the BR is calculated from the LO relation between the BR and the Wilson coefficients as given in Ref. [11].
Two dedicated MC samples are used to model tt̄γ events. In the first sample, photon radiation in tt̄ production was generated at NLO precision in QCD. In the second sample, photons radiated in the top-quark decay were generated at LO precision in QCD, while removing the overlap with the former sample. Events in both samples were modelled using the MadGraph5_aMC@NLO 2.7.3 [38] generator with the NNPDF2.3 PDF set. The events were interfaced with Pythia 8.240 [36], which used the A14 tune and the NNPDF2.3 PDF set. To combine the two samples, a k-factor is applied to scale the predicted cross-section of the decay sample, while no k-factor is applied to the production sample. The nominal value of the k-factor for the decay sample is chosen so that the inclusive cross-section agrees with the theoretical cross-section calculated at NLO in QCD in Ref. [39], resulting in a k-factor of 1.67 for the decay sample.
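The k-factor combination described above reduces to simple arithmetic and can be sketched numerically; the function and all cross-section values below are hypothetical illustrations, chosen only so that the quoted value of 1.67 comes out.

```python
# Illustrative sketch (hypothetical cross-sections, in pb): the k-factor for
# the decay sample is fixed by requiring that the combined inclusive
# cross-section matches the NLO theory prediction, with no k-factor applied
# to the production sample.
def decay_k_factor(sigma_theory_nlo, sigma_production, sigma_decay_lo):
    """k such that sigma_production + k * sigma_decay_lo = sigma_theory_nlo."""
    return (sigma_theory_nlo - sigma_production) / sigma_decay_lo

# Hypothetical numbers chosen only so that the quoted k = 1.67 comes out:
k = decay_k_factor(sigma_theory_nlo=4.62, sigma_production=1.28, sigma_decay_lo=2.0)
print(round(k, 2))  # 1.67
```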
The production of tt̄ events was modelled using the Powheg Box v2 [40][41][42][43] generator at NLO with the NNPDF3.0 PDF set and the h_damp parameter set to 1.5 times the top-quark mass [44]. The events were interfaced to Pythia 8.230 [36] to model the parton shower, hadronisation, and underlying event, with parameters set according to the A14 tune and using the NNPDF2.3 set of PDFs. The tt̄ sample is normalised to the cross-section prediction at next-to-next-to-leading order (NNLO) in QCD including the resummation of next-to-next-to-leading-logarithm (NNLL) soft-gluon terms calculated using Top++ 2.0 [45][46][47][48][49][50][51].
Simulated events for the SM tqγ process, with photon radiation in the production of the top quark, were generated at NLO in the four-flavour scheme using the MadGraph5_aMC@NLO 2.6.2 event generator interfaced with Pythia 8.240 for parton showering. Contributions with photon radiation from top-quark decay products were modelled using the SM single-top-quark t-channel sample described below. The top quark was decayed using MadSpin.
The single-top-quark samples are split into three processes: t-channel, s-channel and tW-channel. These samples were modelled using the Powheg Box v2 [52] generator at NLO in QCD using the four-flavour (five-flavour) scheme for the t-channel (s-channel and tW-channel) and the corresponding NNPDF3.0 set of PDFs. In the case of the tW-channel, the diagram removal scheme [53] was used. To avoid an overlap with the SM tqγ sample, t-channel events with a prompt photon were only selected when the photon is radiated from the top-quark decay products. The events were interfaced with Pythia 8.230, which used the A14 tune and the NNPDF2.3 set of PDFs.
The production of V+jets (V = W, Z) final states was simulated with the Sherpa 2.2.8 [54] generator. Matrix elements (MEs) at NLO QCD accuracy for up to one additional parton and at LO accuracy for up to three additional parton emissions were matched and merged with the Sherpa parton shower based on Catani-Seymour dipole factorisation [55,56] using the MEPS@NLO prescription [57][58][59][60]. The virtual QCD corrections for MEs at NLO accuracy were provided by the OpenLoops library [61][62][63][64]. Samples were generated using the NNPDF3.0 set of PDFs, along with the dedicated set of tuned parton-shower parameters developed by the Sherpa authors.
The production of diboson final states was simulated with the Sherpa 2.2 [54] generator, with the version depending on the process, including off-shell effects and Higgs boson contributions where appropriate. Fully leptonic final states and semileptonic final states, where one boson decays leptonically and the other hadronically, were generated using MEs at NLO accuracy in QCD for up to one additional parton and at LO accuracy for up to three additional parton emissions. The NNPDF3.0 set of PDFs was used, along with the dedicated set of tuned parton-shower parameters developed by the Sherpa authors.
An overlap-removal scheme was applied to remove double-counting of events stemming from photon radiation in samples in which a photon was not explicitly required in the final state, similarly to Ref. [66]. This procedure was applied to the tt̄, W+jets, Z+jets and single-top t-channel samples, to avoid overlaps with the tt̄γ, Wγ+jets, Zγ+jets and SM tqγ samples, respectively.

Object and event selection
Electron candidates are reconstructed from clusters of energy deposits in the electromagnetic calorimeter matched to charged-particle tracks in the ID. Jets containing b-hadrons are identified with the DL1r neural network. The inputs of the DL1r neural network include discriminating variables constructed by a recurrent neural network, which exploits the spatial and kinematic correlations between tracks originating from the same b-hadron. The outputs of the neural network represent the probabilities of the jet to originate from a light-flavour quark or gluon, a c-quark or a b-quark, which are then combined into a single discriminant.
The missing transverse momentum vector, with magnitude E_T^miss, is defined as the negative sum of the transverse momenta of the reconstructed and calibrated physical objects and a soft term built from all tracks that are associated with the primary vertex but not with these objects [77].
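As a minimal illustration of this definition, the following sketch computes the negative vector sum of the transverse momenta; the object kinematics are hypothetical, and the separate calibration of the soft track term is ignored.

```python
import numpy as np

# Minimal sketch of the missing-transverse-momentum definition: the negative
# vector sum of the transverse momenta (pT, phi) of the calibrated objects
# plus a soft track term (all inputs hypothetical, pT in GeV).
def met(pt, phi):
    px = np.sum(pt * np.cos(phi))
    py = np.sum(pt * np.sin(phi))
    return np.hypot(-px, -py), np.arctan2(-py, -px)  # (magnitude, azimuth)

pt = np.array([45.0, 30.0, 25.0])   # e.g. lepton, photon, jet
phi = np.array([0.3, 2.1, -2.8])
magnitude, direction = met(pt, phi)
```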
To avoid double-counting of detector signatures, an overlap-removal procedure is applied to the reconstructed objects. SFs are used to correct the efficiencies in simulation in order to match the efficiencies measured in data for the electron [68][78][79][80] and muon [80] trigger, reconstruction, identification, and isolation criteria, as well as for the photon identification [68] and isolation requirements. SFs are also applied for the JVT requirement [81] and for the b-tagging efficiencies for jets that originate from the hadronisation of b-quarks [75], c-quarks [82], and u-, d-, s-quarks or gluons [83].
The selected events have exactly one electron or muon, exactly one photon, at least one jet and E_T^miss > 30 GeV. The lepton must be matched, with ΔR < 0.15, to the lepton reconstructed by the trigger. Events meeting these criteria are further separated into three orthogonal regions: the SR, the tt̄γ CR and the Wγ+jets CR. In the SR, exactly one b-tagged jet identified with the 60% efficiency working point (WP) is required, while vetoing events with additional jets that pass the 77% WP, to ensure orthogonality to the CRs. Events in the tt̄γ CR are required to have at least four jets with at least two b-tagged jets identified at the 77% WP, and at least one of these must also be b-tagged at the 70% WP. In order to select a Wγ+jets event sample with jets originating from light-flavour and heavy-flavour quarks in proportions similar to those in events satisfying the SR criteria, events in the Wγ+jets CR are required to have exactly one b-tagged jet identified at the 77% WP, with no jet passing the 70% WP. Additionally, events with an electron-photon pair invariant mass of 80-100 GeV are excluded from the Wγ+jets CR to suppress Z+jets events with e → γ fakes. Table 1 summarises the selection criteria for the individual analysis regions. The selection efficiency of the signal events in the SR is about 8% for the FCNC production mode and about 6.5% for the FCNC decay mode for the LH coupling. The composition of the SM backgrounds after the selection is presented in Figure 2. Around 60% (35%) of the jets in the signal region originate from the hadronisation of b-quarks (c-quarks), while around 10% (75%) of the jets in the Wγ+jets CR originate from the hadronisation of b-quarks (c-quarks). The residual differences between the kinematics of jets originating from the hadronisation of b- and c-quarks were found to be negligible. Setting the FCNC couplings to the 95% CL limits measured in Ref. [15], the signal contamination in the CRs is less than 0.3% (1%) for the tt̄γ CR (Wγ+jets CR).

Table 1: Summary of the analysis region definitions. While the requirements on photons, leptons and E_T^miss are shared, the regions differ in their jet and b-tagged jet requirements. The latter ensure orthogonality. All jets that pass the 60% b-tagging WP automatically pass the looser b-tagging WPs. A hyphen indicates that no criterion has to be fulfilled.

[Table 1 — recovered entries only: columns Object / SR / tt̄γ CR / Wγ+jets CR; row: Photon, pT > 20 GeV in all regions]

Data-driven estimate of misidentified photons

Electrons misidentified as photons
Electrons can be misidentified as photons, for example, if the electron track is not reconstructed or if the track is not matched to the energy clusters in the electromagnetic calorimeter. These misidentified photons are referred to in the following as e → γ fakes. The probability for an electron to be misidentified as a photon, ε(e → γ), is measured from data and simulation following the methodology used previously [15,84]. The ratio of the probabilities measured in data and simulation is then applied to correct the simulation.
Two regions are defined in order to measure ε(e → γ). The eγ region is defined by requiring exactly one electron and one photon with an electron-photon invariant mass in the range 70-110 GeV. The ee region is defined by requiring exactly two electrons with opposite electric charge, no photons, and a dielectron invariant mass in the range 70-110 GeV. Both regions are required to have E_T^miss < 30 GeV, and a veto on the presence of b-tagged jets is applied. Templates from MC simulation for the invariant mass of the dielectron (electron-photon) pair are estimated from a binned-likelihood fit in the ee (eγ) region. The templates also include the SM backgrounds originating from W+jets and Z+jets events. For the eγ region, third-order Bernstein polynomials are used to create templates for the remaining SM backgrounds. The ratio of the integrals of the aforementioned fitted signal templates is calculated in order to estimate 2·ε(e → γ), where the factor of two accounts for the two electrons in Z → ee events that may be misidentified as a photon. The e → γ probabilities are measured independently in six bins in photon |η| and separately for the different reconstruction types for converted photons, defined by the number of hits in the silicon detectors and in the TRT.
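The extraction of the misidentification probability from the two fitted template integrals is a one-line ratio; the sketch below uses hypothetical yields, and the helper name is ours, not the analysis code's.

```python
# Sketch of the fake-rate extraction (hypothetical template integrals): the
# ratio of the fitted Z-peak yields in the e-gamma and e-e regions estimates
# 2 * eps(e -> gamma), the factor of two accounting for either electron of
# the Z -> ee pair being the one that fakes a photon.
def egamma_fake_rate(n_egamma, n_ee):
    return 0.5 * n_egamma / n_ee

eps = egamma_fake_rate(n_egamma=1.2e4, n_ee=1.5e5)
print(eps)  # 0.04
```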
The following systematic uncertainties in the estimated e → γ probabilities are assessed: removing the third-order Bernstein polynomial functions used to describe the background contributions from the fits, removing the MC templates for W+jets and Z+jets from the fits, narrowing the fit range from 70-110 GeV to 80-100 GeV, increasing and decreasing the photon energy by 1%, and considering modelling uncertainties of the Z+jets events, as the Z+jets process is the only process that contributes significantly. SFs are defined by the ratios of the measured and simulated values of ε(e → γ), and range from about 0.8 for forward photons to 2.5 for central photons for one of the reconstruction types for converted photons. The total uncertainties differ between the bins and the reconstruction types and vary from about 1.5% to about 30%, dominated by systematic uncertainties arising from the removal of the Bernstein polynomials and from the Z+jets modelling uncertainties in most bins. The validity of the measured e → γ SFs was cross-checked using a selection similar to the eγ region but requiring at least one b-tagged jet to be present. Good agreement between the corrected prediction and the data was observed. The main processes contributing to the analysis regions via e → γ fakes are tt̄ with about 65% and Z+jets with about 25%.

Hadrons misidentified as photons
A jet can be misreconstructed as a photon, e.g. when a hadron inside the jet decays into two photons that are reconstructed as a single one. The number of events with misidentified hadrons is estimated from data. SFs, defined as the ratio of the numbers of misidentified hadrons in data and in simulation, are applied to the simulation to correct the normalisation of the events with misidentified hadrons. The shapes of the distributions for these processes are estimated from the MC simulation with their uncertainties.
Three hadron fake regions (HFRs) are defined by the same criteria as the SR but with modified photon identification and isolation requirements, following the same prescription as in Ref. [15]. Assuming no correlation between the identification and isolation variables used to define the hadron fake regions, the contribution of this background in the SR can be estimated using the ABCD method (see e.g. Ref. [ ]). The corrected number of simulated events represents the number of events the MC simulation predicts in the SR after the correction for the non-zero correlations of the identification and isolation variables. The SFs are estimated independently in six photon |η| bins and two photon pT bins, and separately for converted and unconverted photons.
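The ABCD estimate itself reduces to a single ratio; here is a generic sketch with hypothetical region yields (the region labels A-D are our own, with A standing in for the SR and B, C, D for the three hadron fake regions).

```python
# Generic ABCD sketch (hypothetical yields): if the photon identification and
# isolation variables are uncorrelated, the fake yield in the signal region A
# satisfies N_A / N_B = N_C / N_D, so N_A = N_B * N_C / N_D, where B, C and D
# fail one or both of the photon requirements.
def abcd_estimate(n_b, n_c, n_d):
    return n_b * n_c / n_d

n_fakes_sr = abcd_estimate(n_b=200.0, n_c=150.0, n_d=600.0)
print(n_fakes_sr)  # 50.0
```

In the analysis this uncorrelated-variables assumption does not hold exactly, which is why the correlation correction and its associated uncertainty described in the text are applied on top.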
The following sources of systematic uncertainty are considered for the estimation of the SFs: the finite number of events in the data and MC simulated samples, variations of the e → γ SFs used to subtract the e → γ background in the different regions by one standard deviation, variations of the correlation between identification and isolation variables, estimated as the maximum correlation seen in the simulation across all pT and η bins, and variations in the prompt-photon subtraction in the non-tight regions, by considering normalisation uncertainties for the individual processes and modelling uncertainties of the tt̄γ and SM tqγ processes.
The measured data-driven SFs range from about 0.6 for high-pT central unconverted photons up to 2.2 for high-pT forward unconverted photons. The total uncertainties in the SFs vary between about 45% and 65%, where in most of the bins the dominant uncertainty arises from the assumptions about the correlation of the photon identification and isolation variables. The main processes contributing to the analysis regions via h → γ fakes are tt̄ with about 80% and single top with about 10%.

Neural network for discrimination between signal and background
The signal is distinguished from the sum of the background processes by a fully connected feed-forward NN with backpropagation, implemented in Keras [86] with the TensorFlow [87] back end. Separate NNs are trained for FCNC processes with a tuγ or a tcγ vertex, reflecting the differences stemming mainly from the different PDFs involved in the production mode. Differences between the LH and RH couplings were found to mostly impact the acceptance of the event selection, while no significant impact on the discrimination power of the network was found. Thus, the LH and RH couplings are not separated in the network.
An optimised set of 37 variables is used as the input to the NN; these were selected by removing the input variables with negligible impact on the separation power of the final discriminant. The input variables include the pT and η of the charged leptons, photons, b-tagged jets and the two leading non-b-tagged jets, E_T^miss and the photon conversion status. High-level variables, such as invariant masses and angular distances between the objects, jet multiplicities and the b-tagging information, are also included. All variables are transformed using scikit-learn's [88] StandardScaler.
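The standardisation step can be sketched with the standard library alone; this mirrors what scikit-learn's StandardScaler does per feature (shift to zero mean, scale to unit variance using the population standard deviation), applied here to a single hypothetical input column.

```python
from statistics import mean, pstdev

# Stdlib sketch of the input standardisation: each input variable is shifted
# to zero mean and scaled to unit variance, feature by feature, mirroring
# scikit-learn's StandardScaler (which uses the population std, ddof = 0).
def standardise(column):
    mu, sigma = mean(column), pstdev(column)
    return [(x - mu) / sigma for x in column]

# Hypothetical photon-pT column in GeV:
scaled = standardise([20.0, 40.0, 60.0, 80.0])
```

In practice the scaler is fitted on the training sample only and the same shift and scale are then applied to all other events.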
The NN consists of six hidden layers with 512, 256, 128, 64, 32 and 16 nodes, respectively. The network architecture was optimised, using the expected limit without considering systematic uncertainties, to provide the best separation of the simulated FCNC signals and the SM background. The output of the NN consists of three nodes representing the three classes: the FCNC production signal, the FCNC decay signal and the SM background. The softmax function in three dimensions, σ(z)_i = exp(z_i) / Σ_{j=1}^{3} exp(z_j) for the i-th class, is used for the activation of the output nodes. Consequently, the target vector of the NN in the training is (1, 0, 0) for FCNC production mode events, (0, 1, 0) for the decay mode, and (0, 0, 1) for all background processes. The NN is trained with the Adam optimiser [89]. The output of the multiclass discriminator is illustrated in Figure 3, showing good separation between the three output classes. The strongest separation is achieved between the FCNC production class and the SM background class.
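The three-node softmax activation is straightforward to write out explicitly; the raw node outputs below are hypothetical.

```python
from math import exp

# Sketch of the three-node softmax output layer of the multiclass NN:
# sigma(z)_i = exp(z_i) / sum_j exp(z_j) turns the three raw node outputs
# into class probabilities for FCNC production, FCNC decay and SM background
# that are positive and sum to one.
def softmax(z):
    e = [exp(v) for v in z]
    s = sum(e)
    return [v / s for v in e]

p_prod, p_dec, p_bkg = softmax([2.0, 0.5, -1.0])  # hypothetical node outputs
```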
From the three-dimensional NN output, a one-dimensional discriminant, D, is formed as D = ln[(α · NN_prod + (1 − α) · NN_dec) / NN_bkg], where NN_prod (NN_dec) represents the NN output for the FCNC production (decay) class and NN_bkg represents the NN output for the SM background class. The discriminant D is inspired by the log-likelihood ratio, with an optimisable parameter α ∈ (0, 1) that changes the relative contribution of the NN outputs for the signal modes to the discriminant. The discriminant takes values in the range (−∞, +∞). The optimal value of the parameter α was found to be 0.3 for the tuγ NN and 0.2 for the tcγ NN, reflecting the smaller contribution of the production mode in the case of charm-quark-initiated FCNC production. The multiclass NN was found to outperform a simpler binary NN that discriminates only between the FCNC signal and the SM background: the expected upper limit on the signal was found to be up to 30% lower in the case of the multiclass network.
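A sketch of such a log-likelihood-ratio-inspired combination, assuming the α-weighted signal nodes are divided by the background node inside a logarithm (a plausible reading of the description above; the softmax outputs used here are hypothetical):

```python
from math import log

# Sketch of the one-dimensional discriminant built from the three softmax
# outputs: a tunable weight alpha shares the numerator between the two
# signal nodes (alpha = 0.3 quoted for the tu-gamma NN). Signal-like events
# give large positive D, background-like events give negative D.
def discriminant(p_prod, p_dec, p_bkg, alpha=0.3):
    return log((alpha * p_prod + (1.0 - alpha) * p_dec) / p_bkg)

d_sig = discriminant(0.7, 0.2, 0.1)    # signal-like event  -> D > 0
d_bkg = discriminant(0.05, 0.05, 0.9)  # background-like    -> D < 0
```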

Systematic uncertainties
Systematic effects may change the expected numbers of events from the signal and background processes and the shape of the fitted discriminants in the SR and the CRs. These effects are evaluated by varying each source of systematic uncertainty by ±1σ and taking the resulting deviation from the nominal expectation as the uncertainty.
Uncertainties due to the modelling of the signal are estimated by considering independent variations of the renormalisation and factorisation scales by factors of 2 and 0.5, while normalising the signal to the nominal cross-section. Additionally, an uncertainty due to the choice of parton-shower generator is estimated by considering the change when using an alternative MC sample based on the Herwig 7.2.1 [98,99] prediction with the H7UE set of tuned parameters [99] and the MMHT2014 PDF set [100] instead of the Pythia 8.212 generator. Uncertainties due to the PDFs are estimated by using the NNPDF set of replicas.
For the background processes, uncertainties due to the renormalisation and factorisation scales and from the PDFs are estimated separately for each process, following the same procedure as for the signal. An additional uncertainty due to the chosen h_damp parameter value for the tt̄ process is estimated by doubling it to three times the top-quark mass. For the tt̄, tt̄γ and single-top processes, an uncertainty is estimated from an independent variation of the A14 tune to its Var3c up and down variants [101]. For the tt̄ and single-top processes, an uncertainty due to the final-state radiation modelling is estimated from an independent variation of the renormalisation scale for emissions from the parton shower by factors of 2.0 and 0.5. In addition, a variation of the k-factor for decay-sample tt̄γ events from 1.67 to 1.97 is considered in order to account for uncertainties in the extrapolation to the search phase-space. The variation is motivated by the dedicated calculation yielding the inclusive NLO k-factor of 1.30 used in Ref. [66], assuming no k-factor for the production sample. This uncertainty is decorrelated between the regions to account for the phase-space dependence of the inclusive k-factor. The uncertainty due to the choice of parton-shower generator is estimated by comparing the nominal predictions with an alternative set using MadGraph5_aMC@NLO interfaced with Herwig for the tt̄γ and SM tqγ processes, and Powheg Box v2 interfaced with Herwig for the tt̄ and single-top processes. The parton-shower uncertainty for the tt̄ and tt̄γ processes is fully decorrelated between the individual analysis regions to relax the constraints from these uncertainties by taking into account possible phase-space differences. The impact on the tt̄ process is further split between e → γ and h → γ fakes. Additionally, for the tt̄γ CR, the impact of the parton-shower modelling of h → γ fakes is split into distribution normalisation and residual uncertainties where the normalisation difference is removed. Furthermore, an uncertainty due to the choice of generator is considered for the tt̄ and single-top processes by comparing the nominal prediction with a prediction from MadGraph5_aMC@NLO interfaced with Pythia 8. Finally, for the single-top process, an uncertainty due to the overlap with the tt̄ process is estimated by replacing the diagram removal scheme with the diagram subtraction scheme [53].
An uncertainty of 1.7% in the integrated luminosity is considered [102] for all processes. The uncertainty due to pile-up is determined by varying the average number of interactions per bunch-crossing by 3% in the simulation. The uncertainties due to the SFs for electrons and hadrons that are misidentified as photons are determined as described in Section 6.
The uncertainty originating from the limited number of simulated MC events is implemented via the Barlow-Beeston approach [106]. Two uncertainties for each bin of the fitted distributions are considered, one for the uncertainty originating from the SM backgrounds and one for the combined production and decay FCNC signal.

Results
The normalisations of the signal contribution and of the two contributions from tt̄γ and Wγ+jets production are obtained from a simultaneous binned profile-likelihood fit to the NN discriminant distribution in the SR and the photon pT distribution in the tt̄γ and Wγ+jets CRs, with systematic uncertainties included as nuisance parameters. Figure 4 shows the corresponding post-fit distribution of the NN discriminant for the tuγ signal. The qualitative features of these distributions are similar for the tcγ NN.
The data and SM predictions agree within uncertainties and no significant FCNC contributions are observed. From the 95% confidence level (CL) upper limits on the signal contribution, derived using the CLs method [107], the corresponding limits on the effective coupling parameters are calculated [108]. From these, limits on the BRs are also derived. The background contributions from 𝑡𝑡̄𝛾 and 𝑊𝛾+jets production are scaled by normalisation factors, estimated from the fit for the LH 𝑢𝛾 coupling to be 1.00 ± 0.10 and 1.15 ± 0.15, respectively. The normalisation values determined in the fit for the LH 𝑐𝛾 coupling are 0.97 ± 0.10 for the 𝑡𝑡̄𝛾 contribution and 1.16 ± 0.15 for 𝑊𝛾+jets. The normalisation values are similar for the fits with the RH FCNC couplings. The K-factors for the 𝑡𝑡̄𝛾 decay sample are fitted to values of about 1.57 in the SR, about 1.61 in the 𝑡𝑡̄𝛾 CR and about 1.73 in the 𝑊𝛾+jets CR. The smallest post-fit uncertainty on the K-factor, about 0.24, is found in the 𝑡𝑡̄𝛾 CR. The observed and expected 95% CL limits on the effective coupling strengths and the BRs are presented in Table 2 as well as in Figure 5. The fitted normalisations of the FCNC signal are larger than zero, although consistent with zero within one standard deviation. Thus, the observed 95% CL limits for the couplings are larger than the expected 95% CL limits. The dominant source of uncertainty in the signal contribution for the 𝑢𝛾 couplings is the statistical uncertainty; all systematic uncertainties together worsen the limit by only about 20%. For the 𝑐𝛾 couplings, the effect of the systematic uncertainties is larger, worsening the limit by about 40%, but the statistical uncertainties also play an important role. The sources of systematic uncertainty with the largest impact on the estimated signal contribution depend on the coupling studied.
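The CLs prescription used for the upper limits can be illustrated with a toy one-bin counting experiment: CLs = CL_s+b / CL_b is computed from pseudo-experiments, and the 95% CL upper limit is the signal yield at which CLs falls below 0.05. Since the signal yield scales with the square of the effective coupling, a yield limit maps onto a coupling limit via a square root. All numbers below are invented, and the analysis itself uses the full profile-likelihood test statistic rather than a raw event count.

```python
import numpy as np

rng = np.random.default_rng(1)

def cls_value(n_obs, b, s, n_toys=200_000):
    """CLs = CL_{s+b} / CL_b for a one-bin counting experiment,
    using the observed count itself as the test statistic."""
    toys_sb = rng.poisson(s + b, n_toys)   # pseudo-experiments under s+b
    toys_b = rng.poisson(b, n_toys)        # pseudo-experiments under b only
    cl_sb = np.mean(toys_sb <= n_obs)      # low counts disfavour s+b
    cl_b = np.mean(toys_b <= n_obs)
    return cl_sb / cl_b

# Illustrative inputs: background expectation of 10 and 10 observed events.
b, n_obs = 10.0, 10

# Scan the signal yield until CLs drops below 0.05 -> 95% CL upper limit.
limit = next(s for s in np.arange(0.5, 25.0, 0.5)
             if cls_value(n_obs, b, s) < 0.05)

# The signal yield scales as |C|^2, so a yield limit becomes a coupling
# limit via a square root (reference values here are invented).
c_ref, s_ref = 1.0, 25.0   # hypothetical coupling giving s_ref expected events
c_limit = c_ref * np.sqrt(limit / s_ref)
```

Dividing by CL_b protects against excluding signals to which the experiment has no sensitivity when the data fluctuate below the background expectation, which is why CLs is the standard choice for such searches.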
For the 𝑢𝛾 couplings, the dominant systematic uncertainties arise from the cross-section uncertainty of the SM 𝑡𝛾 process, the uncertainty in the electron–photon energy scale and the decay K-factor uncertainty in the 𝑡𝑡̄𝛾 process. For the 𝑐𝛾 couplings, the dominant systematic uncertainties originate from the cross-section uncertainty of the SM 𝑡𝑡̄𝛾 process, the uncertainty in the hadron-to-photon SFs and the limited number of simulated events for the SM backgrounds. No nuisance parameters are significantly pulled or constrained.

Figure 4 (caption fragment): "… 𝑊𝛾+jet CRs (bottom right). The last bin of the distributions contains the overflow and, in the case of the SR, the first bin also contains the underflow. In addition, in the SR, the expected LH signal is overlaid, with the number of events corresponding to the observed 95% CL limit scaled by a factor of ten. The lower panels show the ratios of the data ('Data') to the background prediction ('Bkg.'). The uncertainty band includes both the statistical and systematic uncertainties in the background prediction. Correlations among uncertainties were taken into account as determined in the fit."

Conclusion
A search for flavour-changing neutral currents (FCNCs) in events with one top quark and a photon is presented, using 139 fb⁻¹ of √𝑠 = 13 TeV 𝑝𝑝 collision data collected with the ATLAS detector at the LHC. Events with a photon, an electron or muon, exactly one 𝑏-tagged jet, missing transverse momentum and possibly additional jets are selected. The contribution from events with electrons or hadrons that are misidentified as photons is estimated from simulation with data-driven corrections, and the two main background processes with a prompt photon are estimated with control regions. Multiclass neural networks are used to distinguish events with FCNCs in the production process, events with FCNCs in the decay process, and background events. The outputs are combined into a one-dimensional discriminant to search for the FCNC signal. The data are consistent with the background-only hypothesis. Limits are set on the strength of effective operators that introduce a left- or right-handed flavour-changing coupling of the top quark with an up-type quark. The 95% CL bounds on the corresponding branching fractions for the FCNC top-quark decays are B(𝑡 → 𝑢𝛾) < 0.85 × 10⁻⁵ and B(𝑡 → 𝑐𝛾) < 4.2 × 10⁻⁵ for a left-handed 𝑡𝑞𝛾 coupling, and B(𝑡 → 𝑢𝛾) < 1.2 × 10⁻⁵ and B(𝑡 → 𝑐𝛾) < 4.5 × 10⁻⁵ for a right-handed coupling. The obtained limits are the most stringent to date and improve on the previous ATLAS limits by factors of 3.3 to 5.4. The improvements originate mainly from considering events with more than one jet, the optimisation of the signal separation and the increased integrated luminosity.