Search for pair-production of vector-like quarks in lepton+jets final states containing at least one 𝑏-tagged jet using the Run 2 data from the ATLAS experiment

A search is presented for the pair-production of heavy vector-like quarks in the lepton+jets final state using 140 fb⁻¹ of proton–proton collisions at √𝑠 = 13 TeV collected with the ATLAS detector. The search is optimised for vector-like top-quarks (𝑇) that decay into a 𝑊 boson and a 𝑏-quark, with one 𝑊 boson decaying leptonically and the other hadronically. Other vector-like quark flavours and decay modes are also considered. Events are selected with one high transverse-momentum electron or muon, large missing transverse momentum, a large-radius jet identified as a 𝑊 boson, and multiple small-radius jets, at least one of which is 𝑏-tagged. Vector-like 𝑇-quarks with 100% branching ratio to 𝑊𝑏 are excluded at 95% CL for masses below 1700 GeV. These limits also apply to vector-like 𝑌-quarks, which decay exclusively into a 𝑊 boson and a 𝑏-quark. Isospin singlets with B(𝑇 → 𝑊𝑏 : 𝐻𝑡 : 𝑍𝑡) = 1/2 : 1/4 : 1/4 are excluded for masses below 1360 GeV.


Introduction
The Standard Model (SM) of particle physics is extremely successful in describing elementary particles and their interactions, yet its many shortcomings reveal that it is incomplete. In particular, naturalness [1] suggests that a more complete theory will provide an explanation for how radiative divergences in the Higgs boson mass from top-quark loops are cancelled out. Extensions of the SM, such as extra dimensions [2], composite Higgs bosons [3,4] and Little Higgs boson [5] models, predict the existence of vector-like quarks (VLQs) that could mitigate these large radiative corrections to the Higgs boson mass. The VLQs are spin-1/2 colour triplets that have the same weak isospin for left- and right-handed chiralities. Thus, VLQs would not acquire mass via the Higgs boson [6], allowing them to evade the limits that exclude additional SM-like quarks [7]. To cancel out the Higgs boson mass divergence from top-quark loops, the VLQs must couple preferentially to third-generation quarks. VLQs could appear as different types of multiplets: SU(2) singlets, doublets, or triplets of 𝑇, 𝐵, 𝑋 or 𝑌, where 𝑇 and 𝐵 have the same electric charge as the SM top- and bottom-quarks, while 𝑋 and 𝑌 have electric charges 5/3 and −4/3, respectively. In the simplest models, the mass difference between VLQs in a given SU(2) multiplet must be small to satisfy constraints from precision electroweak measurements [6], excluding cascade decays of one VLQ into another within the same multiplet. The VLQs can decay via a flavour-changing neutral current or a charged current, allowing the 𝑇 and 𝐵 to each have three possible decays: 𝑇 → 𝑊𝑏/𝑍𝑡/𝐻𝑡 and 𝐵 → 𝑊𝑡/𝑍𝑏/𝐻𝑏. The 𝑋 and 𝑌 have no SM partners, so these can only decay via 𝑋 → 𝑊𝑡 and 𝑌 → 𝑊𝑏. Decays into final states with first- and second-generation quarks, though not forbidden, are not favoured as they would not address the hierarchy problem. Examples of Feynman diagrams for VLQ pair production and decay are shown in Figure 1. The branching ratio of the VLQ decay to SM particles is not fixed by the theory; therefore, all possible VLQ decays and branching ratios need to be probed in searches for VLQs. A combination of searches for pair-production of VLQs by ATLAS using 36 fb⁻¹ of Run 2 data collected at the Large Hadron Collider (LHC) from 2015 to 2016 at a centre-of-mass energy of √𝑠 = 13 TeV excludes VLQs with masses below 1310 GeV for any branching ratio to the assumed decays [8]. ATLAS searches for pair-produced VLQs using the full proton–proton (𝑝𝑝) collision data sample of 140 fb⁻¹ collected during Run 2 of the LHC between 2015 and 2018 at √𝑠 = 13 TeV include a search in final states with one lepton, jets and large missing transverse momentum [9], mostly sensitive to the decays 𝑇 → 𝑍𝑡 and 𝐵 → 𝑊𝑡, and a search in final states with at least one leptonically decaying 𝑍 boson and a third-generation quark [10], which is mainly sensitive to 𝑇 → 𝑍𝑡 and 𝐵 → 𝑍𝑏. CMS published searches for pair production [11,12] of vector-like quarks using 137 fb⁻¹ of data collected during the LHC Run 2.
ATLAS has previously performed a search for pair-produced VLQs in the one-lepton+jets final state using 36 fb⁻¹ of Run 2 data, optimised for the 𝑇 → 𝑊𝑏 decay [13]. That search excluded 𝑇-quark masses below 1350 GeV in the scenario B(𝑇 → 𝑊𝑏) = 1 and 𝑇-quark masses below 1170 GeV in the SU(2) isospin singlet scenario with branching ratios B(𝑇 → 𝑊𝑏 : 𝐻𝑡 : 𝑍𝑡) = 1/2 : 1/4 : 1/4. The recent search for pair-produced VLQs by CMS in leptonic final states using the full CMS Run 2 data sample excludes the scenario B(𝑇 → 𝑊𝑏) = 1 for masses below 1540 GeV [12].
Singly produced VLQs have also been searched for by ATLAS [14,15] and CMS [16–18] using the full LHC Run 2 data sample, but the interpretation of those results depends on an additional coupling constant to the electroweak bosons [19]. This paper presents a search for the pair production of VLQs decaying into third-generation quarks using the 𝑝𝑝 collision data collected at the LHC from 2015 to 2018 at a centre-of-mass energy of 13 TeV with the ATLAS experiment, increasing the data sample relative to the previous search in this decay channel from 36 fb⁻¹ to 140 fb⁻¹. The analysis is optimised for the 𝑇𝑇̄ → 𝑊𝑏𝑊𝑏 channel with one 𝑊 boson decaying leptonically and the other hadronically, resulting in a final state with exactly one lepton, missing transverse momentum and jets, but the search is also sensitive to the other VLQ flavours and decay modes. Targeting events with a leptonically decaying 𝑊 boson (𝑊 → ℓ𝜈 with ℓ = 𝑒, 𝜇) suppresses SM processes with purely hadronic final states, while the hadronically decaying 𝑊 boson provides a large branching ratio. Vector-like 𝑇 candidates are reconstructed such that the mass difference between the leptonically and hadronically decaying 𝑇 candidates is minimised. The reconstructed mass of the leptonically decaying 𝑇 is then used as the discriminating variable to test for the presence of a VLQ signal. The dominant background processes are 𝑡𝑡̄, 𝑊+jets, and single-top-quark production. Control regions enhanced in 𝑡𝑡̄ or 𝑊+jets events are used to correct the mismodelling observed in Monte Carlo (MC) simulations of those processes. Finally, a profile-likelihood fit is performed on the observed distributions of the reconstructed VLQ mass to test for the presence of VLQ signals for various signal models, varying the VLQ mass and the decay branching ratios. The data-driven corrections to 𝑡𝑡̄ and 𝑊+jets derived in dedicated control regions are new with respect to previous analyses. The statistical analysis also includes additional control regions to better constrain the dominant backgrounds characterised by substantial modelling uncertainties, while maintaining a high signal acceptance and achieving a narrow peak in the reconstructed VLQ mass distribution. Further improvements come from improved object identification, especially 𝑊-boson tagging.

ATLAS detector
The ATLAS experiment [20] at the LHC is a multipurpose particle detector with a forward–backward symmetric cylindrical geometry and a near 4π coverage in solid angle.¹ It consists of an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic and hadron calorimeters, and a muon spectrometer. The inner tracking detector covers the pseudorapidity range |𝜂| < 2.5. It consists of silicon pixel, silicon microstrip, and transition radiation tracking detectors. Lead/liquid-argon (LAr) sampling calorimeters provide electromagnetic (EM) energy measurements with high granularity. A steel/scintillator-tile hadron calorimeter covers the central pseudorapidity range (|𝜂| < 1.7). The endcap and forward regions are instrumented with LAr calorimeters for both the EM and hadronic energy measurements up to |𝜂| = 4.9. The muon spectrometer (MS) surrounds the calorimeters and is based on three large superconducting air-core toroidal magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector.

¹ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the 𝑧-axis along the beam pipe. The 𝑥-axis points from the IP to the centre of the LHC ring, and the 𝑦-axis points upwards. Polar coordinates (𝑟, 𝜙) are used in the transverse plane, 𝜙 being the azimuthal angle around the 𝑧-axis. The pseudorapidity is defined in terms of the polar angle 𝜃 as 𝜂 = −ln tan(𝜃/2). Angular distance is measured in units of Δ𝑅 ≡ √((Δ𝜂)² + (Δ𝜙)²).
The muon spectrometer includes a system of precision tracking chambers and fast detectors for triggering.
A two-level trigger system is used to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to accept events at a rate below 100 kHz. This is followed by a software-based trigger that reduces the accepted event rate to 1 kHz on average, depending on the data-taking conditions. An extensive software suite [21] is used in data simulation, in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.
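The coordinate conventions defined in the footnote above translate directly into code. A minimal sketch in Python (function names are ours, not part of the ATLAS software):

```python
import math

def pseudorapidity(theta):
    """Pseudorapidity from the polar angle theta: eta = -ln tan(theta/2)."""
    return -math.log(math.tan(theta / 2.0))

def delta_phi(phi1, phi2):
    """Azimuthal difference wrapped into [-pi, pi]."""
    return (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance DR = sqrt((Delta eta)^2 + (Delta phi)^2)."""
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))
```

Note that the azimuthal difference must be wrapped into [−π, π] before entering Δ𝑅; a naive subtraction would overestimate the distance for objects on opposite sides of the 𝜙 = ±π boundary.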

Data and simulated event samples
The data sample analysed is from 𝑝𝑝 collisions at a centre-of-mass energy of √𝑠 = 13 TeV. The data were collected between 2015 and 2018 with the ATLAS detector and correspond to an integrated luminosity of 140 fb⁻¹ [22,23]. A set of single-electron [24] and single-muon [25] triggers was used, with lowest transverse-momentum (𝑝T) thresholds in the range of 20–26 GeV depending on the lepton flavour and data-taking period. In addition, data were recorded using a trigger targeting events with large missing transverse momentum (𝐸T^miss) [26], with thresholds of 70, 90 or 110 GeV, depending on the data-taking period. All detector subsystems are required to be operational during data taking and to satisfy data-quality requirements [27].
Signal events and SM backgrounds with at least one prompt lepton are modelled by MC simulation. Contributions from processes surviving the selection requirements due to non-prompt leptons or hadronic jets misidentified as leptons (dominated by QCD multijet events) are estimated with a data-driven method, with MC samples serving as cross-checks. All samples were produced using the ATLAS simulation infrastructure [28] and Geant4 [29]. The estimation of some systematic uncertainties used a faster detector simulation employing a parameterisation of the calorimeter response [30]. Unless specified otherwise, the parton shower, hadronisation, and underlying event are modelled using Pythia 8.230 [31], with parameters set according to the A14 set of tunable parameters (the A14 "tune") [32] and using the NNPDF2.3lo set of parton distribution functions (PDFs) [33]. In the simulation, the top-quark and SM Higgs boson masses were set to 172.5 GeV and 125 GeV, respectively. The effect of additional 𝑝𝑝 interactions per bunch crossing (pile-up) is accounted for by overlaying the hard-scattering process with minimum-bias events simulated with Pythia 8.186 [34] using the A3 tune [35] and NNPDF2.3lo PDFs. The distribution of the mean number of interactions per bunch crossing in simulation is reweighted to match that measured in the data.
Signal events for the production and decay of 𝑇𝑇̄ and 𝐵𝐵̄ were simulated at leading order using Protos v2.2 [36] with VLQ masses from 1000 GeV to 1800 GeV in steps of 100 GeV, and with a mass of 2000 GeV. The samples were produced for the singlet model, with alternative branching-ratio scenarios obtained by reweighting the events based on the decay mode. Samples for the doublet model with a mass of 1.2 TeV were also produced to confirm that kinematic differences due to isospin have a negligible impact on the results. The signal cross-sections were calculated with Top++ 2.0 [37] at next-to-next-to-leading order (NNLO) in QCD, including the resummation of next-to-next-to-leading-logarithmic (NNLL) soft-gluon terms.
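The branching-ratio reweighting of the singlet samples can be illustrated with a short sketch: the per-event weight is the product, over the two VLQ decays in the event, of the target-to-generated branching-ratio ratios. The decay labels and the generated branching ratios (the 1/2 : 1/4 : 1/4 singlet scenario quoted in the text) are assumptions for this illustration:

```python
# Generated branching ratios of the singlet signal samples
# (the 1/2 : 1/4 : 1/4 scenario); an assumption for illustration.
SINGLET_BR = {"Wb": 0.5, "Ht": 0.25, "Zt": 0.25}

def br_weight(decay1, decay2, target_br, generated_br=SINGLET_BR):
    """Per-event weight to reinterpret a sample generated with one set of
    branching ratios under another: the product over the two VLQ decays
    of target/generated BR."""
    w = 1.0
    for d in (decay1, decay2):
        w *= target_br[d] / generated_br[d]
    return w
```

For example, reweighting a 𝑊𝑏–𝑊𝑏 event from the singlet scenario to B(𝑇 → 𝑊𝑏) = 1 gives a weight of (1.0/0.5)² = 4.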
The dominant background arises from the production of top-quark pairs (𝑡𝑡̄). Events with a single top-quark (henceforth referred to as "single top") or a 𝑊 boson produced in association with jets (𝑊+jets) also make significant contributions. Finally, events containing a 𝑍 boson and jets (𝑍+jets), three or four top-quarks, multi-boson events, and top-quark pairs produced in association with a vector boson or a Higgs boson (𝑡𝑡̄𝑋 with 𝑋 = 𝑊, 𝑍 and 𝐻, respectively) make small contributions.
The production of 𝑡𝑡̄ events is modelled using the Powheg Box v2 [38–41] generator at NLO accuracy in QCD with the NNPDF3.0nlo [42] PDFs and the ℎdamp parameter set to 1.5 𝑚t [43], where 𝑚t denotes the top-quark mass. The dependence on the parton shower and hadronisation models is evaluated by comparing the nominal 𝑡𝑡̄ sample with an alternative sample also produced with the Powheg Box v2 generator using the NNPDF3.0nlo PDFs, but instead interfaced to Herwig 7.04 [44,45] using the MMHT2014lo PDFs [46] and the H7UE tune [45]. An uncertainty in the matching of the NLO matrix elements (ME) to the parton shower is assessed by comparing the nominal sample to one simulated with MadGraph5_aMC@NLO 2.6.0 [47] using the NNPDF3.0nlo PDFs. The effect of possibly under-estimating the initial-state radiation (ISR) is evaluated by comparing the nominal sample to one produced with ℎdamp increased to 3 𝑚t, the renormalisation and factorisation scales divided by two, and using the "Var3cUp" weight of the A14 tune. The effect of possibly over-estimating the ISR is evaluated by comparing the nominal sample to one obtained by doubling the renormalisation and factorisation scales and choosing the "Var3cDown" weight of the A14 tune [48]. The uncertainty in the modelling of final-state radiation (FSR) is evaluated by doubling or halving the renormalisation scale for emissions from the parton shower.
The associated production of a top-quark with a 𝑊 boson (𝑡𝑊) accounts for the vast majority of single-top events. These events were modelled with the Powheg Box v2 generator at NLO in QCD using the five-flavour scheme and the NNPDF3.0nlo PDFs. The nominal sample uses the diagram removal (DR) scheme [49] to remove the contributions from Feynman diagrams already included in 𝑡𝑡̄ production. An alternative sample simulated using the diagram subtraction (DS) scheme [43,49] is used to evaluate an uncertainty due to the choice of 𝑡𝑡̄/𝑡𝑊 overlap-removal scheme. The uncertainty due to the parton shower and hadronisation models is evaluated by comparing the nominal sample with events simulated with the Powheg Box v2 generator interfaced to Herwig 7.04 using the H7UE tune and the MMHT2014lo PDFs. To assess the uncertainty in the matching of the ME to the parton shower, the nominal 𝑡𝑊 sample is compared with a sample simulated with the MadGraph5_aMC@NLO 2.6.2 generator at NLO in QCD using the five-flavour scheme and the NNPDF2.3lo PDFs [42].
The small contributions from single-top 𝑡-channel and 𝑠-channel production were modelled using the Powheg Box v2 [39–41, 50, 51] generator at NLO in QCD using the four-flavour scheme (𝑡-channel) or the five-flavour scheme (𝑠-channel) and the NNPDF3.0nlo PDFs. The uncertainty due to the parton shower and hadronisation models is evaluated by comparing the nominal samples with events simulated with the Powheg Box v2 generator at NLO accuracy in QCD using the five-flavour scheme and interfaced to Herwig 7.04 using the H7UE tune and the MMHT2014lo PDFs. To assess the uncertainty in the matching of the NLO-accuracy ME to the parton shower, the nominal samples are compared with samples simulated with the MadGraph5_aMC@NLO 2.6.2 generator at NLO accuracy in QCD using the five-flavour scheme and the NNPDF2.3nlo or NNPDF3.0nlo PDFs for the 𝑡- and 𝑠-channel, respectively.
The production of 𝑉+jets (𝑉 = 𝑊, 𝑍) was simulated with the Sherpa 2.2.1 [52] generator at NLO accuracy in QCD for up to two partons, and at LO accuracy for up to four additional partons. The samples are normalised to the NNLO predictions [53]. Samples of diboson events (𝑉𝑉) were simulated with the Sherpa 2.2.1 or 2.2.2 [52] generator at NLO accuracy in QCD for up to one additional parton and at LO accuracy for up to three additional parton emissions, including off-shell effects and Higgs-boson contributions where appropriate. The ME calculations for the Sherpa samples were matched and merged with the parton shower based on Catani–Seymour dipole factorisation [54,55] using the MEPS@NLO prescription. The virtual QCD corrections were provided by the OpenLoops library [56–58]. The NNPDF3.0nnlo set of PDFs was used, along with the dedicated tune developed by the Sherpa authors.
The production of 𝑡𝑡̄𝑍 and 𝑡𝑊𝑍 events was modelled using the MadGraph5_aMC@NLO 2.3.3 generator at NLO with the NNPDF3.0nlo PDFs, interfaced to Pythia 8.210 (Pythia 8.211) for 𝑡𝑡̄𝑍 (𝑡𝑊𝑍) with the A14 tune and the NNPDF2.3lo PDFs for the parton shower and hadronisation. The diagram removal scheme described in Ref. [49] was employed to treat the overlap between 𝑡𝑊𝑍 and 𝑡𝑡̄𝑍, and was applied to the 𝑡𝑊𝑍 sample. The production of 𝑡𝑡̄𝐻 events is modelled using the Powheg Box v2 generator at NLO with the NNPDF3.0nlo PDF set. The production of 𝑡𝑡̄𝑊, three- and four-top-quark, and 𝑡𝑍 events was simulated with MadGraph5_aMC@NLO 2.2.2 interfaced to Pythia 8.186 using the A14 tune. The small contributions from 𝑡𝑍, 𝑡𝑊𝑍, 𝑡𝑡̄𝑊, 𝑡𝑡̄𝑍 and multi-top production are summarised as "rare top" processes. Multijet MC samples were simulated using the Sherpa 2.1.1 generator. The matrix-element calculation was included for the 2 → 2 process at leading order, and the default Sherpa parton shower based on Catani–Seymour dipole factorisation was used for the showering with 𝑝T ordering, using the CT10 PDF set [59].
Decays of 𝑏- and 𝑐-hadrons were simulated by EvtGen 1.6.0 [60] in all simulations except those performed with Sherpa, for which the default configuration recommended by the Sherpa authors was used.

Object reconstruction
Events are required to contain at least one vertex with two or more associated tracks with 𝑝T > 500 MeV. Among all vertices, the vertex with the highest sum of the squared transverse momenta (Σ𝑝T²) of the associated tracks is taken as the primary vertex (PV) [61].
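The primary-vertex choice can be sketched as follows, in a simplified stand-in where each vertex is represented only by the list of its track 𝑝T values in GeV:

```python
def primary_vertex(vertices):
    """Pick the primary vertex: among vertices with at least two tracks
    of pT > 500 MeV (0.5 GeV), take the one with the highest sum of
    squared track pT.  Each vertex is a list of track pT values in GeV."""
    candidates = [v for v in vertices
                  if sum(1 for pt in v if pt > 0.5) >= 2]
    return max(candidates, key=lambda v: sum(pt**2 for pt in v))
```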
Electrons are reconstructed from clusters in the EM calorimeter matched with ID tracks and are calibrated [62]. Electron candidates must be in the central region of the detector (|𝜂| < 2.47) with 𝑝T > 27 GeV and match a track with |𝑧0 sin 𝜃| < 0.5 mm and |𝑑0|/𝜎(𝑑0) < 5, where 𝑑0 is the impact parameter between the track and the beam line, 𝜎(𝑑0) is the measured uncertainty in 𝑑0, and 𝑧0 is the minimum distance in 𝑧 between the track and the beam spot. Any candidates in the transition region between the barrel and endcap calorimeters (1.37 < |𝜂| < 1.52) are removed. "Baseline electrons" must satisfy the medium likelihood identification criteria [62], with no selection on the isolation. "Tight electrons" must satisfy the tight likelihood identification criteria, plus the following isolation requirements in both the calorimeter and the ID [62]. The first isolation requirement is 𝐸T,cone^isol/𝑝T^𝑒 < 0.2, where 𝑝T^𝑒 is the electron candidate 𝑝T and 𝐸T,cone^isol is the energy deposited in the calorimeter within a cone of size Δ𝑅 = 0.2 around the candidate direction; any leakage energy and energy from pile-up are subtracted. The second isolation requirement is 𝑝T,var^isol/𝑝T^𝑒 < 0.15, where 𝑝T,var^isol is the scalar sum of the track 𝑝T, excluding the electron candidate, in a cone of radius Δ𝑅 = min(10 GeV/𝑝T^𝑒, 0.2). Scale factors are used to correct for differences between data and simulation in the reconstruction, identification, isolation and trigger selection efficiencies [62].
Muons are reconstructed [63] from combined tracks in the MS and the ID. "Baseline muons" must satisfy the loose identification criteria, with no selection on the isolation, while "tight muons" must satisfy the tight identification criteria [63] and the track-based isolation requirements defined by the TightTrackOnly working point. This working point uses the scalar sum of the 𝑝T of all tracks within a cone of size Δ𝑅 = min(0.3, 10 GeV/𝑝T^𝜇) around the muon candidate, where 𝑝T^𝜇 is the candidate muon 𝑝T. The track matched to the muon candidate under consideration is excluded from the sum. The muon is selected if this sum is less than 6% of 𝑝T^𝜇. Finally, all muon candidates are required to satisfy |𝑧0 sin 𝜃| < 0.5 mm and |𝑑0|/𝜎(𝑑0) < 3. Muons are calibrated [64] and are required to have 𝑝T > 15 GeV and to be reconstructed within |𝜂| < 2.5. Scale factors are used to correct for differences in the muon reconstruction, identification, vertex-matching, isolation and trigger efficiencies between simulation and data [63].
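The variable-cone isolation described for electrons and muons can be summarised in a few lines (the helper names are illustrative, not the ATLAS implementation):

```python
def electron_iso_cone(pt_gev):
    """Variable cone size for electron track isolation: min(10 GeV/pT, 0.2)."""
    return min(10.0 / pt_gev, 0.2)

def muon_iso_cone(pt_gev):
    """Variable cone size for muon track isolation: min(0.3, 10 GeV/pT)."""
    return min(0.3, 10.0 / pt_gev)

def muon_passes_isolation(pt_gev, track_pts_in_cone):
    """TightTrackOnly-style cut: the scalar sum of track pT in the cone
    (the muon's own track excluded) must be below 6% of the muon pT."""
    return sum(track_pts_in_cone) < 0.06 * pt_gev
```

The cone shrinks as 1/𝑝T at high momentum, which keeps the decay products of a boosted heavy particle from spoiling the isolation of a genuinely prompt lepton nearby.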
Small-radius (small-𝑅) jet candidates are built from particle-flow objects [65], using the anti-𝑘t algorithm [66,67] with a radius parameter of 𝑅 = 0.4. The particle-flow algorithm combines information about tracks in the ID and energy deposits in the calorimeters to form the inputs for the jet reconstruction. The jet energy is calibrated to the particle-level scale by a sequence of corrections, including simulation-based corrections and in situ calibrations [68]. Jets are required to have 𝑝T > 25 GeV and |𝜂| < 2.5. To reject jets originating from pile-up interactions, jet candidates with |𝜂| < 2.4 and 𝑝T < 60 GeV are required to satisfy the "tight" jet-vertex-tagger (JVT) criterion [69]. An algorithm based on deep and recurrent neural networks, called DL1r, is used to identify small-𝑅 jets containing a 𝑏-hadron decay [70]. Jets are considered 𝑏-tagged if they satisfy the criteria of the operating point with an efficiency of 77% and mistag rates for charm and light jets of 17.7% and 0.52%, respectively, as determined in simulated 𝑡𝑡̄ events. The 𝑏-quark tagging efficiencies in simulation, and the charm and light-jet mistag rates, are corrected to match the efficiencies in data [71–73].
An overlap-removal procedure is applied to prevent double counting of ambiguous reconstructed objects, using the baseline lepton definitions. First, electron–muon overlap is handled by removing muons sharing an ID track with an electron if the muon is calorimeter-tagged [63], and otherwise removing the electron. Next, overlap between jets and leptons is removed by rejecting any jets within Δ𝑅 = 0.2 of an electron, followed by the rejection of any electrons within Δ𝑅 = 0.4 of the remaining jets. Similarly, jets are discarded if they have fewer than three matched tracks and are within Δ𝑅 = 0.2 of a muon candidate. Otherwise, the muon is rejected if it lies within Δ𝑅 = min(0.4, 0.04 + 10 GeV/𝑝T^𝜇) of a jet. The missing transverse momentum, with magnitude 𝐸T^miss, is defined as the negative vectorial sum of the transverse momenta of all calibrated objects in an event, plus a track-based soft term which takes into account energy depositions matched to the primary vertex but not matched with any calibrated object [74].
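The geometric jet–lepton steps of this sequence can be sketched as follows (the electron–muon track-sharing step is omitted, and the inputs are simplified dicts carrying an (eta, phi) direction; this is an illustration, not the ATLAS implementation):

```python
import math

def dr(a, b):
    """Angular distance between two (eta, phi) tuples, with phi wrapping."""
    dphi = (a[1] - b[1] + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(a[0] - b[0], dphi)

def jet_lepton_overlap_removal(electrons, jets, muons):
    """Sequential jet-lepton overlap removal (simplified sketch):
    1) drop jets within DR < 0.2 of an electron;
    2) drop electrons within DR < 0.4 of a surviving jet;
    3) drop jets with < 3 matched tracks within DR < 0.2 of a muon;
    4) drop muons within DR < min(0.4, 0.04 + 10 GeV/pT) of a surviving jet."""
    jets = [j for j in jets
            if all(dr(j["dir"], e["dir"]) >= 0.2 for e in electrons)]
    electrons = [e for e in electrons
                 if all(dr(e["dir"], j["dir"]) >= 0.4 for j in jets)]
    jets = [j for j in jets
            if not (j["n_trk"] < 3 and any(dr(j["dir"], m["dir"]) < 0.2 for m in muons))]
    muons = [m for m in muons
             if all(dr(m["dir"], j["dir"]) >= min(0.4, 0.04 + 10.0 / m["pt"]) for j in jets)]
    return electrons, jets, muons
```

The ordering matters: each step acts only on objects that survived the previous one, so a jet removed in step 1 can no longer veto an electron in step 2.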
Finally, large-radius (large-𝑅) jets are constructed from noise-suppressed topological calorimeter-cell clusters calibrated using local hadronic cell weighting [75] and the anti-𝑘t algorithm with 𝑅 = 1.0. To reduce the impact of soft radiation, a grooming algorithm called "trimming" [76] is applied: constituent small-𝑅 jets, reclustered with the 𝑘t algorithm with a radius parameter 𝑅 = 0.2, are removed if their 𝑝T is less than 5% of the large-𝑅 jet 𝑝T. The large-𝑅 jets are required to have 𝑝T > 200 GeV, |𝜂| < 2.0 and a mass larger than 50 GeV. The jet energy scale (JES) and resolution (JER) and the mass scale (JMS) and resolution (JMR) of large-𝑅 jets are calibrated [77,78]. A 𝑊-boson tagging algorithm identifies high-𝑝T hadronically decaying 𝑊 bosons, whose decay products are collimated into a single large-𝑅 jet [79], with an efficiency of 80%. The 𝑊-tagging algorithm applies criteria to the mass of the large-𝑅 jet, the number of inner-detector tracks associated with the jet, and an energy correlation function ratio.

Event selection, reconstruction and categorisation
This search targets final states with exactly one electron or muon, missing transverse momentum, and jets. Events are selected by a single-lepton or 𝐸T^miss trigger, as described in Section 3. If the event is selected by a single-lepton trigger, the selected lepton must have 𝑝T > 27 GeV and be matched to the object that triggered the event. If the event is selected by the 𝐸T^miss trigger, it must have 𝐸T^miss > 200 GeV. The 𝐸T^miss trigger selection is added to compensate for the efficiency loss of the single-muon triggers at high muon 𝑝T. These lepton and 𝐸T^miss requirements ensure full trigger efficiency over all data-taking periods.
Events are required to have exactly one tight muon with 𝑝T > 27 GeV or one tight electron with 𝑝T > 60 GeV. Increasing the 𝑝T threshold for electrons to 60 GeV facilitates the estimation of the background from non-prompt electrons. The fake-lepton background estimation relies on the fact that, compared to real leptons, fake leptons that satisfy the baseline lepton selection have a much lower efficiency for satisfying the tight lepton criteria. The single-electron trigger selection, which is part of both the baseline and tight electron selections, already imposes very strict requirements on electrons with 𝑝T < 60 GeV, making it difficult to define a selection tighter than the baseline electron criteria. Studies showed that increasing the electron 𝑝T threshold to 60 GeV does not impact the analysis sensitivity. Events with additional leptons with 𝑝T > 25 GeV satisfying the baseline criteria are vetoed. In addition, all events must have 𝐸T^miss > 60 GeV and contain at least three small-𝑅 jets, at least one of which must be 𝑏-tagged. The criteria described above are referred to as the preselection.
The focus is on 𝑇-quarks with a mass above 1.35 TeV, which corresponds to the lower mass limit set by the previous analysis [13]. The large mass of the 𝑇 leads to decay products with large 𝑝T, which in turn have collimated decay products. In particular, the hadronically decaying 𝑊 boson will produce a large-𝑅 jet, so events are required to contain at least one 𝑊-tagged large-𝑅 jet. If more than one large-𝑅 jet is 𝑊-tagged, the one with a mass closest to the 𝑊-boson mass (𝑚𝑊 = 80.38 GeV) is selected as the hadronically decaying 𝑊 boson (𝑊had) for the reconstruction of the hadronically decaying 𝑇-quark candidate. The leptonically decaying 𝑊 boson (𝑊lep) is reconstructed from the system of the selected lepton and the reconstructed neutrino. The transverse momentum vector of the reconstructed neutrino is determined from the missing transverse momentum, and the 𝑝𝑧 of the reconstructed neutrino is calculated by assuming that the lepton–neutrino system has the invariant mass of the 𝑊 boson. This leads to a quadratic equation. When two real solutions are obtained, the one with the smaller absolute value of the reconstructed neutrino 𝑝𝑧 is used. When the solutions are complex, a real solution is obtained by adjusting the 𝑥 and 𝑦 components of the neutrino momentum to minimise a 𝜒² parameter that includes the uncertainties in the neutrino and lepton momenta and the 𝑊-boson mass. Finally, the angular distance between the lepton and the reconstructed neutrino is required to be Δ𝑅(ℓ, 𝜈) < 0.7.
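Imposing the 𝑊-boson mass on the lepton–neutrino system gives a quadratic in the neutrino 𝑝𝑧. A minimal sketch for a massless lepton is shown below; note that the 𝜒² adjustment of the missing-momentum components used in the paper for complex solutions is replaced here by simply taking the real part, a common simplification:

```python
import math

MW = 80.38  # W-boson mass in GeV (value quoted in the text)

def neutrino_pz(lep_px, lep_py, lep_pz, lep_e, met_x, met_y):
    """Solve m(lep, nu) = mW for the neutrino pz (massless lepton assumed).
    Returns the real solution with the smaller |pz|; for a negative
    discriminant, the real part of the complex solutions is returned
    (a simplified stand-in for the chi^2 adjustment described in the text)."""
    mu = 0.5 * MW**2 + lep_px * met_x + lep_py * met_y
    pt2 = lep_px**2 + lep_py**2
    a = mu * lep_pz / pt2
    disc = a**2 - (lep_e**2 * (met_x**2 + met_y**2) - mu**2) / pt2
    if disc < 0.0:
        return a  # complex solutions: take the real part (simplification)
    root = math.sqrt(disc)
    return a - root if abs(a - root) < abs(a + root) else a + root
```

By construction, either real solution reproduces the 𝑊-boson mass exactly when combined with the lepton four-momentum; choosing the smaller |𝑝𝑧| is the conventional tie-breaker.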
Reconstruction of the two 𝑇 candidates is done by pairing each of the 𝑊 candidates, 𝑊lep and 𝑊had, with a small-𝑅 jet. The small-𝑅 jets are selected for pairing with the candidate 𝑊 bosons by minimising the difference, Δ𝑚VLQ, between the hadronic and leptonic reconstructed 𝑇 masses. If the event contains one 𝑏-tagged jet, only combinations that include the 𝑏-tagged jet are considered. If the event contains two or more 𝑏-tagged jets, only combinations with the two 𝑏-tagged jets leading in 𝑝T are considered. Signal events are expected to have VLQs with low 𝑝T and high mass, causing the jet paired with the 𝑊had, referred to as 𝑏had, to be well separated from the 𝑊had. By contrast, 𝑡𝑡̄ events will have high-𝑝T top-quarks that lead to a small opening angle between the resulting 𝑊 and 𝑏. Therefore, a requirement of Δ𝑅(𝑏had, 𝑊had) > 1.0 is imposed to reduce the 𝑡𝑡̄ background. Well-reconstructed signal events should have small values of Δ𝑚VLQ, with both reconstructed 𝑇-quark masses near the true 𝑇-quark mass, while 𝑡𝑡̄ events tend to have large values of Δ𝑚VLQ, as the VLQ reconstruction is not optimised for the decay products of top-quarks. Detector effects, final-state radiation, and mis-reconstruction can also broaden the reconstructed mass peaks for signal and 𝑡𝑡̄ events. In contrast, a smooth distribution in Δ𝑚VLQ is expected for background processes such as 𝑊+jets and single-top production. The variable Δ𝑚VLQ is therefore used to separate out regions enriched in single-top background with kinematic properties similar to those of the single-top events in the signal region.
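The jet–𝑊 pairing can be sketched as a minimisation over the allowed jet assignments. The `mass_lep`/`mass_had` helpers, which in a real analysis would build the four-vector sums, are assumed inputs here:

```python
from itertools import permutations

def best_pairing(jets, mass_lep, mass_had, btag_indices):
    """Choose the (had-jet, lep-jet) assignment minimising
    Delta m_VLQ = |m(T_had) - m(T_lep)|.  mass_had(i) / mass_lep(j) are
    assumed helpers returning the VLQ candidate mass when jet i (j) is
    paired with W_had (W_lep).  With exactly one b-tagged jet it must be
    used; with two or more, only the two leading (pT-ordered) b-tags
    are considered."""
    allowed = btag_indices[:2] if len(btag_indices) >= 2 else range(len(jets))
    best = None
    for i, j in permutations(allowed, 2):
        if len(btag_indices) == 1 and btag_indices[0] not in (i, j):
            continue  # the single b-tagged jet must enter the pairing
        dm = abs(mass_had(i) - mass_lep(j))
        if best is None or dm < best[0]:
            best = (dm, i, j)
    return best  # (Delta m_VLQ, had-jet index, lep-jet index)
```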
The scalar sum 𝐻T of the 𝑝T of the selected small-𝑅 jets, the lepton 𝑝T and the 𝐸T^miss is a powerful discriminating quantity, due to the large expected mass of the 𝑇. The signal regions require 𝐻T > 1900 GeV, while regions used to constrain the background nuisance parameters and normalisations require 1400 GeV < 𝐻T < 1900 GeV. The control and signal regions are further divided by Δ𝑚VLQ, as illustrated in Figure 2. Events in these regions are used in the final likelihood fit, as described in Section 8. The signal region is defined by 𝐻T > 1900 GeV and Δ𝑚VLQ < 500 GeV, and is further divided into SR1 with Δ𝑚VLQ < 200 GeV and SR2 with 200 GeV < Δ𝑚VLQ < 500 GeV. The region SR1 is designed to capture the best-reconstructed VLQ events, which exhibit a narrow peak in the 𝑚T^lep and 𝑚T^had distributions. The region SR2 captures less-well-reconstructed VLQ events, resulting in broader mass distributions; the fraction of VLQ events expected to enter this region is nevertheless non-negligible. A control region with 𝐻T > 1900 GeV and Δ𝑚VLQ > 500 GeV, denoted high-𝐻T Δ𝑚CR, has low signal contamination for signal masses that are not yet excluded and is dominated by single-top production. This control region provides an opportunity to constrain the modelling of the single-top background using events with kinematic properties similar to those in the signal region. The region with 1400 GeV < 𝐻T < 1900 GeV is divided into two control regions in Δ𝑚VLQ: the 𝑡𝑡̄CR with Δ𝑚VLQ < 500 GeV and the low-𝐻T Δ𝑚CR with Δ𝑚VLQ > 500 GeV. The 𝑡𝑡̄CR is rich in 𝑡𝑡̄ events, with a purity of nearly 70%, allowing the normalisation and modelling of top-quark pair production to be constrained. The low-𝐻T Δ𝑚CR provides a second region for constraining the single-top background, but with kinematic properties similar to those of the 𝑡𝑡̄CR. A summary of all regions is provided in Table 1. The regions not included in the fit, the 𝑊+jetsCR and the 𝑡𝑡̄RWR (𝑡𝑡̄ reweighting region), serve to derive data-driven corrections to the background modelling and are discussed in the following section.
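The region boundaries in 𝐻T and Δ𝑚VLQ described above can be expressed as a simple categorisation (the region labels are shorthand, not the paper's notation):

```python
def assign_region(ht, delta_m):
    """Categorise an event by HT and Delta m_VLQ (both in GeV) following
    the region boundaries quoted in the text."""
    if ht > 1900.0:
        if delta_m < 200.0:
            return "SR1"
        if delta_m < 500.0:
            return "SR2"
        return "high-HT DmCR"        # single-top control region
    if 1400.0 < ht < 1900.0:
        return "ttbarCR" if delta_m < 500.0 else "low-HT DmCR"
    return None  # outside the fit regions
```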

Background estimate
Contributions from the dominant background processes, 𝑡𝑡̄ and 𝑊+jets production, are estimated using MC simulation with data-driven corrections derived in dedicated control regions dominated by the respective process. The SM backgrounds containing a prompt lepton (single top, 𝑍+jets, diboson, etc.) are also estimated using MC simulations. Finally, the small contribution from multijet events is estimated using a data-driven approach.
A correction for 𝑊+jets events is derived in a dedicated control region referred to as the 𝑊+jetsCR. This region is constructed to be orthogonal to the signal region, in particular through an inverted jet-mass requirement in the hadronic 𝑊-boson tagging algorithm. Possible biases in the modelling of 𝑊+jets events due to the inversion of this requirement are investigated and taken into account with a dedicated uncertainty, as detailed below. The 𝑊+jetsCR also requires 900 GeV < 𝐻T < 1900 GeV to reduce possible contamination from VLQ signal events. All other selection requirements are the same as in the signal region, as shown in Table 1.
Requiring at least one b-tagged jet, similar to the signal-region requirement, causes the W+jets CR to have a significant contribution from tt̄ events. This requirement helps to validate the correction to simulated W+jets events, as the modelling of W+jets production depends on the heavy-flavour requirement [82]. The estimate of the W+jets correction is based on the charge asymmetry of W+jets events in the W+jets CR, which is present due to the asymmetry in the u- and d-quark content of the proton [83]. The charge asymmetry is defined as A = N⁺ − N⁻, where N⁺ and N⁻ are the numbers of events with a positively or negatively charged lepton, respectively. The correction factor for the W+jets normalisation is calculated by comparing A in data and MC simulation in the W+jets CR. This approach decouples the charge-asymmetric W+jets process from possible mis-modelling of tt̄ production, which is charge-symmetric. The method also separates the charge-asymmetric W+jets events from the VLQ signal, which is likewise charge-symmetric. The contributions to A from charge-symmetric backgrounds, such as tt̄ and multijet production, cancel out. The small contributions from other charge-asymmetric backgrounds, such as single-top, diboson and W-associated production, are accounted for using MC samples. The dependence of the W+jets normalisation on S_T and the b-tagging requirements was investigated in control-region sidebands (with relaxed or tightened requirements on S_T or the b-tag selection in the W+jets CR) to ensure the applicability of this correction in the signal region. As a check, the W+jets correction is derived as a function of S_T up to S_T = 4 TeV, and no dependence of the correction on the event S_T is observed. An additional uncertainty is added to cover the difference between the normalisation correction found in the W+jets CR and in the sidebands with varied b-tag requirements. The dependence of the W+jets normalisation on other relevant event and kinematic variables is checked and no significant dependencies are observed. Thus, a single normalisation factor is applied to the W+jets MC prediction. The W+jets normalisation correction amounts to SF_W+jets = 0.915 ± 0.09 (stat.) ± 0.54 (syst.); the largest systematic uncertainties arise from the modelling of the single-top background and from inverting the requirement of the W-boson tagging selection.
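As an illustration, the normalisation factor follows directly from the charge asymmetries; the function names and toy numbers below are invented for this sketch and are not the analysis code.

```python
def charge_asymmetry(n_plus, n_minus):
    """A = N+ - N-: difference in event counts between positively
    and negatively charged leptons."""
    return n_plus - n_minus

def wjets_norm_factor(data, wjets_mc, asym_bkg_mc):
    """W+jets normalisation from the charge asymmetry in the W+jets CR.

    Charge-symmetric contributions (ttbar, multijet, VLQ signal) cancel
    in A; the small charge-asymmetric backgrounds (single top, diboson,
    ...) are subtracted using their MC prediction.  Each argument is a
    (N_plus, N_minus) pair of event counts.
    """
    a_data = charge_asymmetry(*data)
    a_bkg = charge_asymmetry(*asym_bkg_mc)
    a_wjets = charge_asymmetry(*wjets_mc)
    return (a_data - a_bkg) / a_wjets

# Toy numbers: (500 - 20) / 500 = 0.96
sf = wjets_norm_factor(data=(1500, 1000), wjets_mc=(800, 300),
                       asym_bkg_mc=(60, 40))
```

The division by the MC asymmetry rather than by an event count is what makes the factor insensitive to any charge-symmetric mis-modelling.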
The tt̄ background estimate from MC simulation is known to overestimate the number of events at high top-quark p_T [84], directly impacting the modelling of S_T, which is essential for discriminating signal from background. Therefore, a data-driven reweighting is applied to the tt̄ MC events to correct for the difference between MC and data in the S_T distribution. The correction is derived in the tt̄ reweighting region (tt̄RWR), defined in Table 1, which has a tt̄ purity of 89%, with the next most significant contribution from the single-top background at 6%. The main selection criteria differentiating the tt̄RWR from the signal region are ΔR(b-tagged jet, W_had) < 1.0 and S_T > 800 GeV. These criteria are motivated by the small angular distance between the W boson and the b-quark from the decay of a high-p_T top-quark. To ensure that the contamination by signal events in the tt̄RWR is low, the invariant masses of the reconstructed VLQ candidates, m(VLQ_lep) and m(VLQ_had), are required to be smaller than 700 GeV. Figure 3(a) shows the S_T distribution in the tt̄RWR before the reweighting is derived.
Figure 3: Distributions of S_T for data (dots) and predictions (histograms with various colours) in the tt̄RWR (a) before and (b) after applying the reweighting (see text). The prediction is normalised to the data to illustrate the discrepancies in the shape of the distribution and the effect of the S_T shape reweighting. The error bars include the statistical uncertainty and the shaded band represents the total systematic uncertainty. The last bin includes the overflow.
The reweighting is constructed to correct the shape of the S_T distribution, but not its normalisation, which varies freely in the final likelihood fit and is ultimately constrained by the data in the tt̄CR. The contributions to the tt̄RWR from processes other than tt̄ are subtracted from the data, and the ratio of the resulting S_T distribution to the S_T distribution of the tt̄ prediction is calculated. The correction factor is then derived by fitting this ratio with the function of Eq. (1), where a_i (with i = 0, 1, 2, 3) are the free parameters of the fit. Other functional forms were also studied, but the one in Eq. (1) is found to describe the data/MC ratio most accurately. The fit is also found to be consistent for different fit intervals and bin choices. S_T and the jet multiplicity, N_jets, are correlated, so the correction to S_T could also depend on N_jets. Therefore, separate fits are performed for events with 3 ≤ N_jets ≤ 6 and N_jets ≥ 7. Figure 3(b) shows the S_T distribution in the tt̄RWR after the application of the reweighting correction.
The statistical uncertainty of the fit is propagated as an uncertainty on the reweighting correction. The impact of the uncertainty in the single-top modelling is taken into account by varying the single-top contribution by 100% in the tt̄RWR. Uncertainties in other processes have a negligible impact on the extracted reweighting. The offset parameter a_3 in Eq. (1) causes the function to approach an asymptote at high S_T, at 0.64 for 3 ≤ N_jets ≤ 6 and 0.62 for N_jets ≥ 7, where the statistical uncertainty is too large to determine the exact S_T dependence of the correction with confidence. In addition, an uncertainty of 58% (38%), covering the large statistical uncertainty in the high-S_T tail, is added for S_T ≥ 2.5 TeV for events with 3 ≤ N_jets ≤ 6 (N_jets ≥ 7). Finally, the S_T correction is tested in a dedicated tt̄-dominated validation region, where the tt̄ MC estimate is found to be in good agreement with data after application of the S_T-dependent correction.
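The fitting step can be sketched as follows. The exact functional form of Eq. (1) is not reproduced in this excerpt, so an exponential-plus-offset shape with four free parameters a0..a3 is assumed here purely for illustration, with a3 playing the role of the high-S_T asymptote described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def correction(st, a0, a1, a2, a3):
    """Assumed illustrative form: exponential falloff plus offset a3,
    which sets the high-S_T asymptote (S_T in TeV)."""
    return a0 * np.exp(-(a1 * st + a2 * st**2)) + a3

def fit_correction(st_centres, ratio, ratio_err):
    """Fit the (data minus non-ttbar) / ttbar-MC ratio versus S_T.
    Separate fits would be run for 3 <= N_jets <= 6 and N_jets >= 7."""
    popt, pcov = curve_fit(correction, st_centres, ratio,
                           sigma=ratio_err, p0=(1.0, 0.5, 0.0, 0.6),
                           absolute_sigma=True)
    return popt, pcov
```

The fitted function is then evaluated at each event's S_T to obtain a per-event weight for the tt̄ MC.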
Events from multijet production can satisfy the lepton requirement of the signal region through a hadronic jet mis-identified as a lepton or through a non-prompt lepton from a heavy-flavour decay, collectively referred to as non-prompt leptons. The contribution from multijet production is small, accounting for about 5% of the events in the signal region. Background from these processes is estimated using a data-driven "Matrix Method" (MM) [85]. This method uses a classification of leptons into "loose" and "tight" categories to predict the number of non-prompt leptons. Tight leptons correspond to signal leptons, as described in Section 5. Loose leptons satisfy the baseline criteria defined in Section 5 but fail to meet the tight lepton requirements. A "loose region" enriched in non-prompt leptons is created by changing the lepton requirement of the signal-region selection from tight to loose. The number of events with prompt (non-prompt) leptons in the signal region is related to the number of events with prompt (non-prompt) leptons in the loose region by the efficiency for prompt (non-prompt) leptons that satisfy the baseline lepton requirements to also satisfy the tight requirements, referred to as the "real efficiency" ("fake efficiency"). Once the real and fake efficiencies are determined, the number of events with non-prompt leptons (the multijet events) in the signal region can be calculated, as described in Ref. [86].
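In its simplest two-category form, the Matrix Method amounts to inverting a 2×2 system; the sketch below is illustrative and not the analysis implementation.

```python
def multijet_yield_tight(n_tight, n_loose_not_tight, eff_real, eff_fake):
    """Two-category Matrix Method (sketch).

    eff_real (eff_fake) is the probability for a baseline prompt
    (non-prompt) lepton to also pass the tight selection.  Solving
        N_tight    = eff_real * N_prompt + eff_fake * N_nonprompt
        N_baseline = N_prompt + N_nonprompt
    for N_nonprompt gives the non-prompt (multijet) yield passing the
    tight selection.
    """
    n_baseline = n_tight + n_loose_not_tight
    n_nonprompt = (eff_real * n_baseline - n_tight) / (eff_real - eff_fake)
    return eff_fake * n_nonprompt
```

As a closure check: 1000 prompt and 200 non-prompt baseline leptons with eff_real = 0.9 and eff_fake = 0.2 give 940 tight and 260 loose-not-tight events, from which the method recovers the 0.2 × 200 = 40 non-prompt events in the tight selection.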
The tight lepton criteria reduce the contribution of the multijet background in the signal and control regions to very low levels, such that the shapes of differential distributions estimated by the MM are dominated by statistical fluctuations. An accurate prediction of the multijet background is particularly important for the m(VLQ_lep) distribution, as it is used in the final likelihood fit. As the background events do not contain real VLQ decays, the distribution of reconstructed masses is largely determined by the kinematic constraints from the event selection. As a result, the m(VLQ_lep) distributions for W+jets and multijet events are very similar, but the W+jets prediction is statistically much more stable. Therefore, in the final fit, the shape of the m(VLQ_lep) template for the multijet background is taken from W+jets events, while the total predicted yield is determined from the MM calculation.
For reconstructed m(VLQ_lep) below 200 GeV, the MM estimate for multijet events is extremely sensitive to the amount of tt̄ that is subtracted from the data. Therefore, an iterative method is employed to determine the total predicted multijet yield used to scale the W+jets template. First, the MM prediction is evaluated using the nominal normalisation for the subtracted tt̄. Second, the m(VLQ_lep) template (taken from W+jets events) is scaled to have the same yield as the MM prediction for events with m(VLQ_lep) > 200 GeV. Third, the normalisation of the tt̄ events is adjusted to bring the MM prediction into agreement with the m(VLQ_lep) template for m(VLQ_lep) > 200 GeV. The second and third steps are then repeated until the predicted multijet yield changes by less than 1% relative to the previous iteration. An uncertainty of 100% is assigned to the multijet yield to account for the impact of systematic uncertainties in the modelling of prompt-lepton processes and in the fake-lepton efficiency. An additional, uncorrelated uncertainty of 100% is applied to the multijet prediction for m(VLQ_lep) < 200 GeV to account for the uncertainty of the MM prediction in that region due to the normalisation of the tt̄ background. To model the multijet shape in other variables and analysis regions, the MC sample for multijet production, scaled to the MM estimate, is used.
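The iteration above is a fixed-point loop; a minimal sketch, with invented helper functions and a toy linear model in place of the real Matrix-Method and template machinery, could look like this:

```python
def iterate_multijet_yield(mm_yield, update_ttbar_norm, tol=0.01,
                           max_iter=50):
    """Fixed-point iteration over the multijet yield (sketch).

    mm_yield(ttbar_norm): Matrix-Method multijet yield above 200 GeV
        after subtracting ttbar scaled by ttbar_norm.
    update_ttbar_norm(ttbar_norm, yield_): adjusts the ttbar
        normalisation to bring the MM prediction into agreement with
        the scaled template above 200 GeV.
    Stops when the yield changes by less than tol (1%).
    """
    norm = 1.0                      # start from the nominal ttbar norm
    prev = mm_yield(norm)
    for _ in range(max_iter):
        norm = update_ttbar_norm(norm, prev)
        cur = mm_yield(norm)
        if abs(cur - prev) / prev < tol:
            return cur, norm
        prev = cur
    return prev, norm

# Toy model: MM yield depends linearly on the ttbar normalisation;
# the loop converges towards the fixed point at norm = 1/1.2.
yield_, norm = iterate_multijet_yield(
    mm_yield=lambda n: 100.0 - 20.0 * n,
    update_ttbar_norm=lambda n, y: y / 100.0)
```

The 1% stopping criterion mirrors the convergence requirement quoted in the text.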

Systematic uncertainties
Systematic uncertainties are broadly grouped into the following two classes: experimental uncertainties, which are related to the modelling of the detector response and reconstruction of physics objects, and to the data-driven background estimate or the data-driven corrections to the simulated backgrounds; and theoretical uncertainties, which are related to the modelling of the physics processes by simulation.Unless stated otherwise, uncertainties from a common source are correlated across processes and regions in the final statistical analysis and the impact of the uncertainties is allowed to be constrained by the likelihood fit to the data.
The largest systematic uncertainties are related to the modelling of the tt̄ and single-top backgrounds. The uncertainties in the tt̄ prediction include the choice of the NLO MC generator; the ISR, FSR and parton-shower model; and the matching scheme between the matrix element and the parton shower. Each of these uncertainties is evaluated by comparing the nominal tt̄ prediction to the alternative MC samples described in Section 3. Modelling uncertainties in the single-top prediction include the choice of the parton shower and of the matching of the NLO matrix element to the parton shower, which are evaluated using alternative samples, as described in Section 3. Similarly, an uncertainty due to the choice of the tt̄/Wt overlap-removal scheme is evaluated by comparing the nominal single-top MC sample, produced with the diagram-removal (DR) scheme, to an alternative sample produced with the diagram-subtraction (DS) scheme [87]. The DR scheme is found to be more compatible with the data at lower energies, while at higher energies the DS scheme provides a better description of the data. To avoid extrapolation of the DS-versus-DR uncertainty from regions defined at low S_T to those at high S_T, this uncertainty is split into a high-S_T component, applied to regions with S_T > 1.9 TeV, and a low-S_T component, applied to regions with 1.4 TeV < S_T < 1.9 TeV. For both single-top and tt̄ production, uncertainties in the PDFs are obtained using the PDF4LHC15 combined PDF set [88]. The effect of QCD scale uncertainties is estimated by independently doubling or halving the renormalisation and factorisation scales for single-top and tt̄ production. The QCD scale variation with the largest impact on the fit discriminant, m(VLQ_lep), is used.
To simplify the statistical analysis, the smaller backgrounds from tt̄+V production, Z+jets, rare top processes and diboson production are grouped into a single category named "other", and a single overall normalisation uncertainty of 50% is applied. This value is adopted from the uncertainty in Z+jets events, which has the largest relative uncertainty among the processes contained in the "other" category. The uncertainty in the Z+jets contribution was estimated in the previous analysis [13] by investigating the mis-modelling of the jet multiplicity of Z+jets events.
Experimental uncertainties include an uncertainty of 0.83% on the integrated-luminosity measurement of the combined 2015–2018 data [22], obtained using the LUCID-2 detector [23] for the primary luminosity measurements. Uncertainties for leptons arise from potential mis-modelling of the electron and muon energy scales and resolutions [62, 64] and from uncertainties in the correction factors for the electron and muon trigger, reconstruction, identification and isolation efficiencies [62, 63].
Small-radius jets have uncertainties in the JES, the JER [68] and the JMS [77]. An uncertainty is assigned to the JVT selection efficiency [69] and to the reweighting factors that correct the pile-up profile in MC simulation to match that in data. Similarly, JES, JER [77] and JMS uncertainties, and in addition JMR [78] uncertainties, are assigned to large-radius jets. An uncertainty related to the scale and resolution of the track soft term in the E_T^miss calculation [74] is applied.
Uncertainties in the b-tagging selection efficiency include uncertainties in the b-jet selection efficiency, in the c-jet and light-jet mistag-rate correction factors, and additional components from the extrapolation of the calibrations to high p_T and from c-jets to τ-leptons [71-73]. An uncertainty is assigned to the efficiency correction of the W-boson tagging algorithm [79, 81].
Uncertainties in the data-driven corrections to the simulated backgrounds and in the data-driven multijet estimate are briefly listed below and detailed in Section 6. The uncertainty in the data-driven correction for simulated W+jets events is 60%. This is among the dominant systematic uncertainties, though still much smaller than the statistical uncertainty of the data set. The uncertainty in the tt̄ S_T shape reweighting consists of a statistical component and a systematic component from the uncertainty in the single-top contribution in the tt̄RWR. The two components affecting the multijet background estimate are a global 100% uncertainty in the MM estimate and an additional 100% uncertainty for reconstructed VLQ invariant masses m(VLQ_lep) below 200 GeV.

Statistical analysis and results
The presence of a VLQ signal is tested with a fit to the reconstructed mass distributions of the leptonically decaying T candidate, m(VLQ_lep), in the signal regions SR1 and SR2 and in the control regions tt̄CR, low-S_T ΔCR and high-S_T ΔCR, collectively denoted fit regions. The mass of the leptonically decaying T-quark is chosen because it is found to have better resolution and sensitivity than the mass of the hadronically decaying T-quark. The fit maximises a binned likelihood function, L(μ, θ), constructed as a product of Poisson probabilities for all bins considered in the search; it depends on the parameter of interest μ and a vector of nuisance parameters (NPs) θ. The parameter of interest is the signal strength μ = σ_test/σ_theory, where σ_test is the value of the VLQ cross-section being tested and σ_theory is the theoretical prediction. Each NP θ_i encodes a systematic uncertainty with a Gaussian prior, except the normalisations of the tt̄ and single-top backgrounds, which are unconstrained, and the statistical uncertainties due to the finite size of the MC samples, which are included as one additional NP per bin accounting for the statistical uncertainty of the total estimated yield in that bin. If an uncertainty would impact the normalisation or shape of all bins for a given process by less than 1%, the corresponding NP is removed from the fit for that process. The test statistic q_μ is defined as the profile likelihood ratio, q_μ = −2 ln(L(μ, θ̂_μ)/L(μ̂, θ̂)), where μ̂ and θ̂ are the values of the parameters that maximise the likelihood function (with the constraint 0 ≤ μ̂ ≤ μ), and θ̂_μ are the values of the nuisance parameters that maximise the likelihood function for the given value of μ. The compatibility of the observed data with the background-only hypothesis is tested by setting μ = 0 in the profile likelihood ratio. Upper limits on the cross-section times branching ratio for a given signal are derived by using q_μ in the modified frequentist CLs method [89, 90], approximated using the asymptotic
formulae [91]. For a given signal, the cross-section times branching ratio is excluded at ≥ 95% confidence level (CL) when CLs < 0.05. No significant excess above the background expectation is found. Upper limits at the 95% CL on the TT̄ production cross-section are set for two benchmark scenarios as a function of the T-quark mass m_VLQ and compared with the theoretical prediction from Top++ 2.0 (see Figure 5). The resulting lower limit on m_VLQ is determined using the central value of the theoretical cross-section prediction.
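The test statistic above can be sketched in a few lines. Nuisance parameters are omitted here for brevity (the analysis profiles them), so the toy likelihood depends on the signal strength μ only; this is an illustrative simplification, not the analysis code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(mu, n_obs, s, b):
    """Negative log-likelihood for a binned Poisson model with
    expected yield mu*s + b per bin (constant terms dropped)."""
    lam = mu * s + b
    return float(np.sum(lam - n_obs * np.log(lam)))

def q_mu(mu, n_obs, s, b):
    """Profile-likelihood-ratio test statistic, with the constraint
    0 <= mu_hat <= mu as described in the text."""
    res = minimize_scalar(nll, args=(n_obs, s, b),
                          bounds=(0.0, mu), method="bounded")
    return 2.0 * (nll(mu, n_obs, s, b) - res.fun)
```

Data equal to the background expectation yield a positive q_μ at μ = 1 (disfavouring the signal), while data equal to signal plus background give q_μ ≈ 0.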
For B(T → Wb) = 1 and T-quark masses from 1000 GeV to 2000 GeV, TT̄ production cross-sections greater than 10.1 fb to 0.27 fb, respectively, are excluded at 95% CL. Compared with the theory cross-section, this results in an observed (expected) lower limit on the T-quark mass of m_VLQ > 1700 GeV (1570 GeV) for this scenario. For branching ratios corresponding to the SU(2) singlet T scenario, the observed (expected) 95% CL lower mass limit is m_VLQ > 1360 GeV (1360 GeV). The sensitivity of the analysis is limited by the statistical uncertainty of the data sample. Including all systematic uncertainties for the B(T → Wb) = 1 scenario degrades the expected cross-section limit by less than 3.7% for any of the tested masses and reduces the expected mass limit by only 10 GeV.
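The translation of a cross-section limit into a mass limit is a crossing-point calculation: the excluded mass is where the upper limit crosses the (steeply falling) theory curve. A minimal sketch, interpolating linearly in log(σ) between tested mass points (the interpolation scheme is an assumption, not stated in the text):

```python
import numpy as np

def mass_limit(masses, sigma_limit, sigma_theory):
    """Lower mass limit: mass where the upper limit on the cross-section
    crosses the central theory prediction (log-linear interpolation)."""
    d = np.log(np.asarray(sigma_theory)) - np.log(np.asarray(sigma_limit))
    for i in range(len(masses) - 1):
        if d[i] >= 0 > d[i + 1]:          # excluded -> allowed
            f = d[i] / (d[i] - d[i + 1])
            return masses[i] + f * (masses[i + 1] - masses[i])
    # no crossing: everything excluded, or nothing excluded
    return masses[-1] if d[-1] >= 0 else masses[0]
```

With two toy points where the theory/limit ratio falls symmetrically through unity, the crossing lands at the midpoint.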
In the signal regions, the prediction is found to slightly overestimate the data in the signal-sensitive high-mass tail of the m(VLQ_lep) distribution. Therefore, the observed limits on the signal cross-section, and thus also on the signal mass m_VLQ, are generally more stringent than the expected limits. Large bin-to-bin variations between 1000 GeV and 1600 GeV in the m(VLQ_lep) distribution are observed in SR1, with an excess relative to the prediction in one bin and deficits in three bins. However, no signal model is compatible with the narrow excess in SR1, as the m(VLQ_lep) distribution for any of the signal models would be much wider than the observed one-bin excess. Additionally, no such excess or large bin-to-bin variations of the data are observed in SR2. The narrow excess in SR1 does lead to weaker observed cross-section limits in an m_VLQ interval from around 1250 GeV to 1500 GeV relative to the rest of the m_VLQ spectrum.
In the previous ATLAS search, performed on a fraction of this data set corresponding to an integrated luminosity of 36 fb⁻¹ [13], observed (expected) 95% CL mass limits of 1350 GeV (1310 GeV) for the B(T → Wb) = 1 scenario and 1170 GeV (1080 GeV) for the singlet T were found. The present search thus extends the observed mass limits by 350 GeV and 190 GeV for the B(T → Wb) = 1 and singlet T cases, respectively. The increase in the mass limits can almost entirely be attributed to the increase in the size of the data sample from 36 fb⁻¹ to 140 fb⁻¹, but about 15% (20%) of the improvement in the mass limits for the B(T → Wb) = 1 (singlet T) scenario is expected to come from changes in the analysis strategy, especially from the improvements to the W-boson tagging and the splitting of the signal region into two regions, SR1 and SR2, according to Δ_VLQ.
To check that the results do not depend on the weak isospin of the T-quark in the simulated signal events, a sample of TT̄ events with a mass of 1.2 TeV was generated for an SU(2) doublet T-quark and compared with the nominal sample of the same mass generated with an SU(2) singlet T-quark. Both the expected number of events and the expected excluded cross-section are consistent between the two samples, so the limits obtained are also applicable to VLQ models with non-zero weak isospin. As there is no explicit use of charge identification, the B(T → Wb) = 1 limits are applicable to the pair production of vector-like Y-quarks of charge −4/3, which decay exclusively into Wb.
In addition to the benchmark scenarios, other combinations of T branching ratios are tested by reweighting the relative contributions of the three T-quark decay modes. Figure 6 shows the expected and observed lower limits on the T-quark mass as a function of B(T → Wb) and B(T → Ht). For each point in the figure, the branching ratio for the T → Zt decay is determined by the requirement B(T → Wb) + B(T → Ht) + B(T → Zt) = 1. Although the analysis is designed to search for TT̄ production, it also has sensitivity to BB̄ production. Figure 7 shows the expected and observed lower limits on the B-quark mass as a function of B(B → Wt) and B(B → Hb), with B(B → Zb) = 1 − B(B → Wt) − B(B → Hb). For the B(B → Wt) = 1 case, a lower limit on the B-quark mass of 1345 GeV (1196 GeV) is observed (expected). Similarly to the B(T → Wb) = 1 case, the B(B → Wt) = 1 limits are applicable to the pair production of vector-like X-quarks.
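The branching-ratio scan can be sketched as a per-event reweighting: each pair-production event carries two VLQ decays, and its weight is the product over both decays of the tested branching ratio divided by the one used in generation. The generator branching ratios of 1/3 per mode are an illustrative assumption here, not stated in the text.

```python
# Assumed (illustrative) generator branching ratios: 1/3 per decay mode.
GEN_BR = {"Wb": 1 / 3, "Ht": 1 / 3, "Zt": 1 / 3}

def br_weight(decays, br_test, br_gen=GEN_BR):
    """Per-event weight for a tested branching-ratio point.

    decays: the decay modes of the two VLQs in the event,
            e.g. ("Wb", "Zt").
    """
    w = 1.0
    for mode in decays:
        w *= br_test[mode] / br_gen[mode]
    return w

# A ("Wb", "Ht") event reweighted to the singlet benchmark
# B(Wb:Ht:Zt) = 1/2 : 1/4 : 1/4 gets weight 1.5 * 0.75 = 1.125.
singlet = {"Wb": 0.5, "Ht": 0.25, "Zt": 0.25}
```

Summing these weights over the simulated events reproduces the signal yield for any point in the branching-ratio plane without generating new samples.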

Conclusion
Pair-produced vector-like T-quarks are searched for in events with exactly one electron or muon, missing transverse momentum and jets, using the full Run 2 ATLAS data set. Final states compatible with the decay of a pair of heavy vector-like quarks are selected, and a combined fit to the mass of the reconstructed vector-like quark in three control regions and two signal regions is performed. No significant excess over the background expectation is observed; 95% CL upper limits are set on the vector-like quark cross-section and lower bounds are set on the vector-like quark mass. The search is optimised for TT̄ → WbWb, so the most stringent limits are set for B(T → Wb) = 1. For this scenario, masses below 1700 GeV (1570 GeV) are observed (expected) to be excluded at 95% CL, which is the current strongest limit for this production and decay mode. The limits for B(T → Wb) = 1 also apply to a vector-like Y-quark with charge −4/3, which decays exclusively into Wb. The observed (expected) lower mass limit for the weak-isospin singlet T model is 1360 GeV (1360 GeV), and limits for other T-quark branching ratios are presented in the plane of B(T → Wb) versus B(T → Ht). The analysis is also used to set limits on the mass of B-quarks as a function of their branching ratios. This search improves on the previous result, which used 36 fb⁻¹ of data.

Figure 1 :
Figure 1: Representative tree-level Feynman diagrams for (a) TT̄ and (b) BB̄ production, in which at least one of the VLQs decays into a W boson.
The W-boson tagging algorithm is described in Refs. [79, 80]. Scale factors correct the W-boson tagging efficiency in simulation to match the efficiency measured in data [81].

Figure 2:
Figure 2: Illustration of the two-dimensional plane in S_T and Δ_VLQ on which the signal and control regions, included in the final combined likelihood fit, are defined.

Figure 4
Figure 4 shows the m(VLQ_lep) distribution in each of the five fit regions after the simultaneous fit of the background-only hypothesis to the data. The m(VLQ_lep) distributions of the SM backgrounds show a steep decline towards high m(VLQ_lep), whereas the signal m(VLQ_lep) distribution is expected to peak close to the simulated m_VLQ, at high m(VLQ_lep) in SR1 and SR2, effectively distinguishing it from the SM backgrounds. The corresponding event yields are listed in Table 2. The uncertainty in the total prediction does not equal the sum in quadrature of the individual components due to correlations between the fit parameters. The expected numbers of TT̄ signal events in each of the five fit regions are given in Table 3 for various VLQ scenarios. The compatibility of the data with the background-only hypothesis is estimated by integrating the distribution of the test statistic above the observed value of q_0. This value is computed for each signal scenario considered, defined by the assumed mass of the T-quark and the three decay branching ratios. These results assume that the T-quark has a narrow width.

Figure 4 :
Figure 4: Distributions of m(VLQ_lep) for data (dots) and predictions (histograms with various colours) in the five fit regions ((a) low-S_T ΔCR, (b) high-S_T ΔCR, (c) tt̄CR, (d) SR1 and (e) SR2) after the simultaneous fit to data under the background-only hypothesis (post-fit). The binning in m(VLQ_lep) was optimised to maximise the sensitivity to the signal in the signal regions while ensuring that each bin of the fit regions contains a reasonable number of data events to guarantee good fit behaviour. The lower panel of each plot shows the ratio of data to predicted background yields; an arrow indicates that the ratio point is outside the ordinate range. Overflow events are included in the last bin. The error bars include the statistical uncertainty and the shaded band represents the systematic uncertainty after the likelihood fit (see text). The dashed green histograms in (d) SR1 and (e) SR2 show the shape for a signal with m_VLQ = 1400 GeV and B(T → Wb) = 1, scaled to the total background prediction. The expected signal event yields in each of the fit regions are displayed in Table 3.

Figure 5 :
Figure 5: Expected (dashed black line) and observed (solid black line) upper limits at the 95% CL on the TT̄ cross-section as a function of the T-quark mass for (a) the B(T → Wb) = 1 scenario and (b) the SU(2) singlet T scenario. The green and yellow bands correspond to ±1 and ±2 standard deviations around the expected limit, respectively. The thin red line shows the theoretical prediction.

Figure 6: (a) Expected and (b) observed 95% CL lower limits on the mass of the T-quark in the branching-ratio plane of B(T → Wb) versus B(T → Ht). Contour lines are provided to guide the eye. The white region is due to the lower mass limit falling below 1000 GeV, the lowest signal mass considered in this search. The yellow marker indicates the branching ratios for the SU(2) singlet T scenario.
Figure 7: The corresponding expected and observed 95% CL lower limits on the mass of the B-quark in the plane of B(B → Wt) versus B(B → Hb).


Table 1 :
Overview of the signal and control regions used in the analysis.

Table 2 :
Event yields in the five fit regions after the fit to the data under the background-only hypothesis.The uncertainties include statistical and systematic uncertainties.The uncertainties in the individual background components can be larger than the uncertainty in the sum of the backgrounds due to correlations.

Table 3 :
Expected TT̄ signal event yields in the five fit regions for various VLQ scenarios. The uncertainties include statistical and systematic uncertainties.