A search for an unexpected asymmetry in the production of $e^+ \mu^-$ and $e^- \mu^+$ pairs in proton-proton collisions recorded by the ATLAS detector at $\sqrt s = 13$ TeV

This search, a type not previously performed at ATLAS, uses a comparison of the production cross sections for $e^+ \mu^-$ and $e^- \mu^+$ pairs to constrain physics processes beyond the Standard Model. It uses $139\,\mathrm{fb}^{-1}$ of proton-proton collision data recorded at $\sqrt{s} = 13$ TeV at the LHC. Targeting sources of new physics which prefer final states containing $e^{+}\mu^{-}$ to $e^{-}\mu^{+}$, the search contains two broad signal regions which are used to provide model-independent constraints on the ratio of cross sections at the 2% level. The search also has two special selections targeting supersymmetric models and leptoquark signatures. Observations using one of these selections are able to exclude, at 95% confidence level, singly produced smuons with masses up to 640 GeV in a model in which the only other light sparticle is a neutralino when the $R$-parity-violating coupling $\lambda'_{231}$ is close to unity. Observations using the other selection exclude scalar leptoquarks with masses below 1880 GeV when $g_{\text{1R}}^{eu}=g_{\text{1R}}^{\mu c}=1$, at 95% confidence level. The limit on the coupling reduces to $g_{\text{1R}}^{eu}=g_{\text{1R}}^{\mu c}=0.46$ for a mass of 1420 GeV.



Introduction
Under the Standard Model of particle physics, the cross section for producing a positron and a muon together ($e^+\mu^-$) in a proton-proton interaction is expected to be very similar to that for producing an electron and an antimuon ($e^-\mu^+$). This similarity is a consequence of the lepton flavour universality of the electroweak boson interactions that produce these leptons, in combination with charge conservation and the relatively low masses of these leptons with respect to the masses of the electroweak bosons. As was explored in Ref. [1], measuring the ratio of these cross sections can serve as an experimental test of this aspect of the Standard Model (SM) and could have sensitivity to physics Beyond the Standard Model (BSM). Specifically, the ratio
$$\rho = \frac{\sigma(pp \to e^+\mu^- + X)}{\sigma(pp \to e^-\mu^+ + X)}$$
is defined, where the leptons are all taken to be promptly produced in the primary interaction¹. Scenarios are considered where the presence of a BSM process would bias $\rho$ to be significantly greater than one (more $e^+\mu^-$ than $e^-\mu^+$). Two concrete examples of such BSM models are considered in this Letter. The first is an $R$-parity-violating supersymmetry model, in which a singly produced smuon decays into a muon and a neutralino. The second model considered to drive $\rho > 1$ is a scalar leptoquark $S_1$ with couplings permitting the decays $S_1 \to e^- u$ and $S_1 \to \mu^- c$. In that case, processes initiated by the proton's up-quark content, such as $gu \to S_1 e^+$ followed by $S_1 \to \mu^- c$, yield more $e^+\mu^-$ than $e^-\mu^+$ pairs. This Letter describes a measurement of $\rho$ using the ATLAS detector at the LHC, which is used as a test for the existence of BSM physics that would bias $\rho$ above one. While related BSM processes could also bias $\rho$ to be significantly less than one, sensitivity to such processes is not considered in this Letter because of the existence of experimental effects that predominantly bias the measured ratio downwards. Examples of these effects are explored further in Ref.
[1], but one such example is the presence of hadronic jets being incorrectly reconstructed as an electron more frequently than as a muon. This bias for fake electrons over fake muons, in combination with the predominance of $W^+$ over $W^-$ bosons from the positively charged $pp$ initial state [5,6], would manifest as a bias for $e^-\mu^+$ (with a fake $e^-$) over $e^+\mu^-$ (with a fake $\mu^-$), and hence bias the measured $\rho$ downwards. The analysis strategy presented in this Letter includes estimating corrections to account for some of these biasing experimental effects (the primary motivation being the enhancement of sensitivity to BSM physics that increases $\rho$, rather than an improvement in the accuracy of the measurement), and estimating systematic uncertainties to account for any remaining biases. In particular, for the model-independent statistical analysis of the measured $\rho$ in this Letter it is assumed that the SM hypothesis covers values of $\rho$ less than or equal to one, an assumption that is checked in data in the analysis event selections². Thus evidence for new physics would only be claimed if the BSM contribution is strong enough to overcome any residual biases that have not been accounted for. For calculating limits on signal model parameters, the strength of the residual biases is estimated from an SM-enriched control region.
The structure of this Letter is as follows: Section 2 describes the ATLAS detector and Section 3 details the datasets used in the analysis. Section 4 describes the object reconstruction, and Section 5 defines the regions of phase space in which $\rho$ is measured and explains how experimental effects that could impact its measurement are corrected for, including the verification of these corrections with the measurement of $\rho$ in SM-enriched control regions. Section 6 presents the measurement of $\rho$, first in generic signal regions designed to have broad sensitivity to BSM physics that would bias $\rho$ upwards, then in optimised signal regions designed to have sensitivity to the two BSM models described above, alongside a statistical interpretation of the result in the parameter space of these models.

ATLAS detector
The ATLAS experiment [7] at the LHC is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry and a near-$4\pi$ coverage in solid angle.³ It consists of an inner tracking detector surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic and hadron calorimeters, and a muon spectrometer. The inner tracking detector covers the pseudorapidity range $|\eta| < 2.5$. It consists of silicon pixel, silicon microstrip, and transition radiation tracking detectors. Lead/liquid-argon (LAr) sampling calorimeters provide electromagnetic (EM) energy measurements with high granularity. A steel/scintillator-tile hadron calorimeter covers the central pseudorapidity range ($|\eta| < 1.7$). The endcap and forward regions are instrumented with LAr calorimeters for both the EM and hadronic energy measurements up to $|\eta| = 4.9$. The muon spectrometer surrounds the calorimeters and is based on three large superconducting air-core toroidal magnets with eight coils each. The field integral of the toroids ranges between 2.0 and 6.0 T·m across most of the detector. The muon spectrometer includes a system of precision chambers for tracking and fast detectors for triggering. A two-level trigger system is used to select events.

² See the definitions of the control regions in Section 5, and the estimates of fake-lepton contributions to Standard Model expectations.
³ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upwards. Cylindrical coordinates $(r, \phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta = -\ln \tan(\theta/2)$. Angular distance is measured in units of $\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$.
The first-level trigger is implemented in hardware and uses a subset of the detector information to accept events at a rate below 100 kHz. This is followed by a software-based trigger that reduces the accepted event rate to 1 kHz on average depending on the data-taking conditions.
An extensive software suite [8] is used in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment.

Data and Monte Carlo samples
The proton-proton collisions analysed in this Letter are those collected at a centre-of-mass energy of $\sqrt{s} = 13$ TeV with a 25 ns interbunch spacing between 2015 and 2018. They correspond to an integrated luminosity of $139\,\mathrm{fb}^{-1}$. The uncertainty in the combined 2015-2018 integrated luminosity is 1.7% [9], obtained using the LUCID-2 detector [10] for the primary luminosity measurements. In any given data-taking period the unprescaled two-lepton triggers (specifically $ee$, $e\mu$ or $\mu\mu$) with the lowest per-lepton $p_T$ thresholds were used [11][12][13]. These thresholds ranged from 10 GeV to 24 GeV.
$R$-parity-violating (RPV) [14] models of supersymmetry [15][16][17][18][19][20] were tested using simulated events with $t\mu^-\tilde\chi^0_1$ or $\bar t\mu^+\tilde\chi^0_1$ in the final state, where $\tilde\chi^0_1$ is the lightest neutralino. This neutralino is considered stable enough on detector scales that it can only be detected through missing transverse momentum, unless its mass approaches or exceeds the top-quark mass, when it can decay through the RPV coupling. These events were generated at leading order using the Monte Carlo (MC) program MadGraph5_aMC@NLO [21] version 2.61 together with the RPV MSSM UFO model [22]. Shower evolution and hadronisation were performed by Pythia 8 [23] version 8.23. The NNPDF2.3 [24] PDF set was used with a set of tuned parameters called the A14 tune [25]. All RPV couplings except $\lambda'_{231}$ were set to zero, so only the smuon RPV interaction is considered. Supersymmetric particles other than the neutralino and smuon were decoupled by setting their masses to a large value. The MadGraph hard processes permitted at most one additional light parton in the final state, and they were matched to the Pythia parton shower using the CKKW-L [26] merging scheme. Use of the matching scale $\mu_{\mathrm{MS}} = \frac{1}{4}\left[m(\tilde\mu) + m(\tilde\chi^0_1)\right]$ gives a smooth transition between the matrix-element and parton-shower regimes, and distributions with little dependence on the exact scale value. Event samples were generated for a two-dimensional grid of points, distributed in a plane of smuon and neutralino masses, all with a coupling of $\lambda'_{231} = 1.0$. Samples for other values of the coupling were obtained by weighting the cross sections of the first set in proportion to the square of the desired value of $\lambda'_{231}$ and changing the branching ratio for the smuon decay. The branching ratio for the desired smuon-to-muon decay is 2% (70%) at $\lambda'_{231} = 1$ (0.1), whilst the remaining smuons decay via the RPV vertex. Leptoquark events were generated at leading order using MadGraph5_aMC@NLO 2.61 together with the '$S_1$' model of Ref.
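The coupling reweighting described above can be sketched as a toy calculation. This is an illustration, not the analysis code: it assumes the competing RPV decay width grows as $\lambda'^2_{231}$, and the normalisation `k_rpv` is chosen by hand so that the quoted ~2% (70%) branching ratio at $\lambda'_{231} = 1$ (0.1) is approximately reproduced.

```python
def xsec_scale(lam, lam_ref=1.0):
    # single-smuon production cross-section scales with the coupling squared
    return (lam / lam_ref) ** 2

def br_mu_chi(lam, gamma_gauge=1.0, k_rpv=42.9):
    # BR(smuon -> mu + neutralino): the gauge-decay width is fixed while the
    # competing RPV width grows as lam^2.  gamma_gauge and k_rpv are
    # illustrative normalisations, tuned to give ~2% (70%) at lam = 1 (0.1).
    return gamma_gauge / (gamma_gauge + k_rpv * lam ** 2)

def sample_weight(lam, lam_ref=1.0):
    # per-sample weight: cross-section scaling times branching-ratio change
    return xsec_scale(lam, lam_ref) * br_mu_chi(lam) / br_mu_chi(lam_ref)
```

With this parameterisation, weighting a $\lambda'_{231} = 1$ sample to another coupling value requires only the squared-coupling cross-section factor and the branching-ratio ratio, as in the text.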
[27], which is implemented as a FeynRules [28] package named 'LO_LQ_S1' available online [29] and described in Ref. [30]. Shower evolution and hadronisation were performed by Pythia 8.23. The NNPDF2.3 PDF set was used with the A14 tune. All leptoquark couplings were set to zero, apart from two flavours of the $g_{1R}$ coupling of Ref. [27], which couples leptoquarks to leptons and quarks in weak singlet states. Specifically, $g_{1R}^{eu}$ and $g_{1R}^{\mu c}$ were assigned a common non-zero value simply denoted '$g$'. The leptoquark signal events were generated for a set of leptoquark masses, all with a coupling of $g = 1.0$. Charm-quark-initiated processes are neglected since they provide no charge-flavour asymmetry, and their cross section is only $O(5\%)$ of that for up-quark-initiated processes. The hard processes specified no additional light jets in the final state, and they were matched to the Pythia parton shower using the CKKW-L [31] merging scheme. The merging scale $\mu_{\mathrm{MS}}$ was chosen to be $\frac{1}{4}\left[m(\ell) + m(S_1)\right]$, where $m(S_1)$ is the mass of the leptoquark.
Next-to-leading order (NLO) cross sections are used for the samples and were calculated using MadGraph5_aMC@NLO 2.94 with a narrow-width (nw) approximation and the NNPDF2.3 PDF set. To include the effect of the finite width (fw) in the samples, the cross sections are corrected as
$$\sigma_{\mathrm{NLO}}^{\mathrm{fw}} = \sigma_{\mathrm{NLO}}^{\mathrm{nw}} \times \frac{\sigma_{\mathrm{LO}}^{\mathrm{fw}}}{\sigma_{\mathrm{LO}}^{\mathrm{nw}}},$$
following an approach similar to that used in Ref. [32]. The narrow-width leading-order (LO) cross sections are calculated using MadGraph5_aMC@NLO 2.94 and the NNPDF2.3 PDF set. The finite-width LO cross sections are calculated using the same set-up as the generated samples described above. Theoretical uncertainties in the NLO cross sections are calculated by MadGraph5_aMC@NLO using an envelope of nine variations of the renormalisation and factorisation scales, each taking values of 0.5, 1.0 or 2.0 times their nominal value.
Samples for other values of the coupling were obtained by weighting the cross sections of the first set in proportion to the square of the desired value of $g$ and accounting for the change in leptoquark width. A two-dimensional grid of samples for a variety of leptoquark masses and couplings was thereby obtained. All signal sample events were then simulated using ATLFAST-II [33], a fast simulation of the ATLAS detector.
MC simulations of SM processes are not used in the final result of this analysis, but were used to guide the signal-region choices, to study the validity of the analysis strategy, to assist in deriving efficiencies and uncertainties for the fake-lepton background estimate, and to perform cross-checks (see Appendix A for MC sample details). Instead, measurements of $\rho$ are based entirely on comparisons between $e^+\mu^-$ and $e^-\mu^+$ data, and contributions from jets misidentified as leptons, muon corrections and even expected SM yields (see Section 6) are also estimated primarily from data.

Reconstructed objects
Reconstructed objects (electrons, muons, jets, missing transverse momentum) are the building blocks of any analysis. In this analysis, leptons and jets exist in two forms: 'Baseline' and 'Signal'. The former are used to define the missing transverse momentum and in the procedure to resolve ambiguities between objects with overlapping constituents; otherwise the analysis regions are built exclusively on the latter.

Baseline electrons are required to have $|\eta| < 2.47$ and $p_T > 10$ GeV, and to pass the Loose likelihood-based identification working point defined in Ref. [34]. The same $p_T$ and $|\eta|$ demands are placed on Baseline muons, which are also required to pass the Medium identification working point as defined in Ref. [35]. The anti-$k_t$ algorithm [36,37] with a radius parameter of $R = 0.4$ is used to reconstruct jets with a four-momentum recombination scheme, using 'particle-flow' objects [38] as inputs. Baseline jets are required to have $p_T > 20$ GeV and $|\eta| < 4.5$. The missing transverse momentum, $\vec{p}_T^{\,\mathrm{miss}}$, is calculated from the Baseline leptons and jets as described in Ref. [39], using the Tight working point and the 'particle-flow track-based soft term' defined therein.
An overlap removal procedure is applied to Baseline jets and Baseline leptons to avoid double-counting. Firstly, any electron which shares a track with a muon is rejected. Any jet whose angular distance $\Delta R$ from an electron is less than 0.2 is removed, as is any jet with fewer than three tracks that lies within $\Delta R = 0.4$ of a muon. Finally, electrons and muons within $\Delta R = 0.4$ of the remaining jets are discarded.
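The ordered steps above can be sketched as follows. This is a simplified illustration, not the ATLAS software: objects are plain dictionaries, `track_id` stands in for the shared-track matching, and `ntracks` for the number of tracks associated with a jet.

```python
import math

def delta_r(a, b):
    # angular distance, with the azimuthal difference wrapped into (-pi, pi]
    dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a["eta"] - b["eta"], dphi)

def overlap_removal(electrons, muons, jets):
    # 1) reject electrons sharing a track with a muon
    mu_tracks = {m["track_id"] for m in muons}
    electrons = [e for e in electrons if e["track_id"] not in mu_tracks]
    # 2) remove jets within dR < 0.2 of a surviving electron
    jets = [j for j in jets if not any(delta_r(j, e) < 0.2 for e in electrons)]
    # 3) remove jets with fewer than three tracks within dR < 0.4 of a muon
    jets = [j for j in jets
            if not (j["ntracks"] < 3 and any(delta_r(j, m) < 0.4 for m in muons))]
    # 4) discard leptons within dR < 0.4 of the remaining jets
    electrons = [e for e in electrons if all(delta_r(e, j) >= 0.4 for j in jets)]
    muons = [m for m in muons if all(delta_r(m, j) >= 0.4 for j in jets)]
    return electrons, muons, jets
```

The ordering matters: a jet removed in steps 2-3 can no longer cause the rejection of a nearby lepton in step 4.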
The 'jet vertex tagger' (JVT) [40] is applied to jets with $|\eta| < 2.4$ and $p_T < 120$ GeV, and helps to veto jets that are likely to have originated from pile-up (additional collisions in the same or nearby bunch crossings). A similar 'forward jet vertex tagger' (fJVT) is used to help identify and remove pile-up jets with $|\eta| > 2.5$ [41]. Jets surviving the overlap removal procedure are deemed Signal jets if they pass the JVT or fJVT requirements and have $|\eta| < 2.8$.
Those leptons which remain are then given Signal status if they meet the following five criteria: (i) they must have $p_T > 25$ GeV and $|\eta| < 2.47$, and (ii) be consistent with the hard-scatter vertex, through $|d_0/\sigma(d_0)| < 3$ and $|z_0 \sin\theta| < 0.3$ mm, where $d_0$ and $z_0$ are their transverse and longitudinal impact parameters; (iii) electrons must pass the Tight likelihood-based identification working point defined in Ref. [34], and have charge misidentification suppressed through the use of the boosted-decision-tree-based discriminant described in Ref. [34]; (iv) electrons with $p_T < 200$ GeV ($p_T > 200$ GeV) are required to pass the Tight (HighPtCaloOnly) isolation working point of Ref. [34] to reduce contamination by electrons from heavy-flavour decays or misidentified light hadrons; and (v) muons with $p_T < 200$ GeV ($p_T > 200$ GeV) are required to pass the Tight (FixedCutHighPtTrackOnly) isolation working point of Ref. [35] to reduce contamination by muons from semileptonic heavy-flavour and hadron decays.
In the rest of this Letter, leptons and jets are assumed to be only those with Signal status, unless stated otherwise.

Analysis
The tests of whether $\rho$ is significantly greater than one are made in four signal regions. Two of them, the inclusive and jet general signal regions, aim to provide sensitivity to general sources of charge-flavour violation, while the other two, the RPV and leptoquark signal regions, are less inclusive versions of their partners and target the specific RPV supersymmetry and leptoquark theories mentioned in the introduction. These regions are summarised in Figure 1.
The selection common to all signal regions requires the presence of exactly one electron and exactly one muon, of opposite charge. As there are few other constraints on the forms that the signal regions should take, and as the ATLAS experiment has not previously published a charge-flavour asymmetry search, the approach taken in this first search is to prioritise simple selections over complex ones. With this principle in mind:
• when defining the inclusive general signal region, the only requirement added to the common selection is $\Sigma(m_T) > 200$ GeV, where $\Sigma(m_T)$ is the sum of the transverse masses of the two leptons and $m_T$ is the usual transverse mass, $m_T = \sqrt{2\, p_T E_T^{\mathrm{miss}} \left(1 - \cos\Delta\phi\right)}$, with $\Delta\phi$ the azimuthal angle between the lepton and the missing transverse momentum;
• and when defining the jet general signal region (a subset of the inclusive one), the only additional requirement is that events must contain at least one jet with transverse momentum greater than 20 GeV.
The primary purpose of the $\Sigma(m_T)$ requirement is to leave the low-$\Sigma(m_T)$ region available for studying the SM behaviour before unblinding the data in the signal regions. The benchmark BSM models have a low yield at $\Sigma(m_T) < 200$ GeV, and it can be assumed that they are representative of other general sources of charge-flavour violation, so BSM sensitivity is not reduced substantially by excluding this region from the signal regions. As shown in Figure 1, the region $\Sigma(m_T) < 200$ GeV is designated as the inclusive control region. If the further requirement of at least one jet with $p_T > 20$ GeV is placed, the region is designated the jet control region, used to study the SM behaviour in a region more kinematically similar to the jet signal region.
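A minimal sketch of this selection variable, assuming $\Sigma(m_T)$ is the sum of the transverse masses of the electron and the muon, each computed with respect to the missing transverse momentum (function and argument names here are illustrative, not taken from the analysis software):

```python
import math

def transverse_mass(pt, phi, met, met_phi):
    # m_T = sqrt(2 * pT * ETmiss * (1 - cos(dphi))) for a massless lepton
    return math.sqrt(max(0.0, 2.0 * pt * met * (1.0 - math.cos(phi - met_phi))))

def sum_mt(ele_pt, ele_phi, mu_pt, mu_phi, met, met_phi):
    # Sigma(m_T): sum of the two lepton transverse masses
    return (transverse_mass(ele_pt, ele_phi, met, met_phi)
            + transverse_mass(mu_pt, mu_phi, met, met_phi))
```

With momenta in GeV, an event would then enter the general signal regions when `sum_mt(...) > 200`.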
The signal-optimised regions make use of three more flavour-symmetric event variables: $\mathcal{S}$, $m_{T2}$ and $\Sigma(|p_T|)$.
• $\mathcal{S}$ is the so-called 'object-based $\vec{p}_T^{\,\mathrm{miss}}$ significance' defined in Eq. (15) of Ref. [42]. It is a dimensionless measure of the degree to which the apparent missing transverse momentum in the event is 'real' (i.e. attributable to momentum carried away by invisible particles) rather than due to object mismeasurement or pile-up.
• $m_{T2}$ is the 'stransverse mass' of Ref. [43], defined as
$$m_{T2} = \min_{\vec{q}_a + \vec{q}_b = \vec{p}_T^{\,\mathrm{miss}}} \left[ \max\left( m_T(\vec{p}_T^{\,\ell_1}, \vec{q}_a),\; m_T(\vec{p}_T^{\,\ell_2}, \vec{q}_b) \right) \right],$$
where $\vec{q}_a$ and $\vec{q}_b$ represent the contributions to $\vec{p}_T^{\,\mathrm{miss}}$ from each semi-leptonic decay of a pair-produced particle, and all possible values that sum to the observed $\vec{p}_T^{\,\mathrm{miss}}$ are minimised over. It is evaluated using the algorithm of Ref. [44].
• $\Sigma(|p_T|)$ is a simple sum of the magnitudes of the transverse momenta of the two leptons and the most energetic jet in the event.
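The stransverse-mass construction can be illustrated with a brute-force grid scan over splittings of the missing transverse momentum. This toy (massless invisible particles, an arbitrary fixed grid) only demonstrates the min-max structure; the published analysis uses the dedicated algorithm of Ref. [44].

```python
import math
import itertools

def m_t(p, q):
    # transverse mass of a massless visible/invisible pair, from 2-vectors
    pt = math.hypot(*p)
    qt = math.hypot(*q)
    return math.sqrt(max(0.0, 2.0 * (pt * qt - p[0] * q[0] - p[1] * q[1])))

def mt2_scan(l1, l2, met, half_range=100.0, steps=101):
    """Scan the invisible momentum q_a assigned to lepton 1 on a grid;
    q_b is then fixed by q_a + q_b = met.  Returns the min-max value."""
    best = float("inf")
    axis = [-half_range + 2.0 * half_range * i / (steps - 1) for i in range(steps)]
    for qx, qy in itertools.product(axis, axis):
        qb = (met[0] - qx, met[1] - qy)
        best = min(best, max(m_t(l1, (qx, qy)), m_t(l2, qb)))
    return best
```

Two sanity properties follow directly from the definition: with no missing momentum the minimum is zero (both invisible momenta can vanish), and the result can never exceed the transverse mass obtained by assigning all of the missing momentum to one leg.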
The RPV signal region is defined to require $\mathcal{S} > 10$ and $m_{T2} > 100$ GeV. The first requirement anticipates that the neutralinos ($\tilde\chi^0_1$) of the supersymmetric signals should carry away missing transverse momentum, while the second suppresses SM $W^+W^-$ backgrounds. In all other respects, the RPV signal region is identical to the inclusive general signal region.
In contrast, the targeted leptoquark model processes have no invisible particles in the final state, so the leptoquark signal region requires $\mathcal{S} < 6$. Furthermore, SM backgrounds in this region are suppressed by requiring events to have $\Sigma(|p_T|) > 1$ TeV. In all other respects, the leptoquark signal region is the same as the jet general signal region.
Once the analysis regions are defined, potential biases to the measurement must be considered. The largest source of strictly one-sided charge-flavour bias in the ratio measurement is the misreconstruction of jets as light leptons in $W$+jet events. In particular: (i) more $W^+$ than $W^-$ are produced in LHC proton collisions, and (ii) jets misreconstructed as 'fake' leptons are more likely to appear as electrons than as muons. If uncorrected, these two factors would cause events with a fake $e^-$ and a real $\mu^+$ to be more prevalent than events with a fake $\mu^-$ and a real $e^+$, and therefore the so-called 'fake' background would favour $\rho < 1$. To remove the bias, a data-driven estimate of the number of fake-lepton events passing any particular selection is determined, separately for each charge combination, and is subtracted from the raw data counts before the ratio of $e^+\mu^-$ to $e^-\mu^+$ counts is calculated.
The fake-lepton estimate itself is determined using a Likelihood Matrix Method approach of the form described in Ref. [45], or 'Method B' of Ref. [46]. This method predicts the fake-lepton background where either one or both leptons are fake, and relies on two lepton definitions with different stringencies. The tighter selection corresponds to the Signal definition used in the rest of the analysis, while the looser 'Loose' definition relaxes this by removing the isolation and vertex requirements, and by loosening the electron identification requirement to the Loose likelihood-based working point defined in Ref. [34]. Real-lepton efficiencies are derived in $e^\pm e^\mp$ and $\mu^\pm\mu^\mp$ regions, dominated by $Z \to \ell\ell$ events, as the number of events with two Signal leptons divided by the number of events with a given lepton loosened to pass the Loose requirements. The fake-lepton efficiencies are derived using a muon tag-and-probe method, using $\mu^\pm\mu^\pm$ pairs for the muon efficiency and $e^\pm\mu^\pm$ pairs for the electron efficiency. The 'tag' lepton must pass the Signal requirements as well as $p_T > 50$ GeV, and the efficiency is defined as the fraction of Loose probe leptons also passing the Signal requirements, once the small SM real-lepton background (estimated from MC) has been subtracted. The requirement for same-sign rather than opposite-sign charges increases the contribution from events with fake leptons, dominantly $W$+jets events, relative to real SM processes. The efficiencies are calculated separately for each lepton flavour and charge (such that the flavour and charge match the lepton that has its selection loosened), and are binned in lepton $p_T$. These efficiencies, together with event counts in regions orthogonal to the signal regions where one lepton is required to pass only the Loose selection, are used to calculate a prediction for the yield of fake-lepton events in the signal regions.
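The core of any matrix method can be sketched in the one-lepton case (a simplified illustration, not the two-lepton likelihood formulation used in the analysis): given the efficiencies r and f for real and fake Loose leptons to also pass the Signal selection, the observed Loose and Signal counts can be inverted to estimate the fake contribution to the Signal sample.

```python
def fake_in_tight(n_tight, n_loose_total, r, f):
    """Estimate the fake-lepton yield inside the tight (Signal) selection.

    n_tight:       events where the lepton passes the tight selection
    n_loose_total: all events in the loose sample (tight ones included)
    r, f:          efficiencies for real/fake loose leptons to pass tight
    """
    if r <= f:
        raise ValueError("method requires r > f")
    # Solve  n_tight = r*N_real + f*N_fake  and  n_loose_total = N_real + N_fake
    n_fake_loose = (r * n_loose_total - n_tight) / (r - f)
    return f * n_fake_loose
```

For example, with r = 0.9 and f = 0.2, a loose sample built from 1000 real and 500 fake leptons yields 1000 tight events, and the inversion recovers the 100 fake events expected inside the tight selection.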
The fake-lepton estimate accounts for $O(2\%)$ of the events in the signal regions used for the ratio measurement, and $O(6\%)$ or $O(17\%)$ of the events in the signal region used for the RPV supersymmetry or leptoquark interpretation, respectively. As can be seen in Figure 2, the fake-lepton estimate in $e^-\mu^+$ events is indeed generally higher than in $e^+\mu^-$ events.
The uncertainty in the fake-lepton estimate includes two components: one propagated from the uncertainty in the values of the efficiencies, and a 'non-closure' uncertainty. The non-closure uncertainty is derived in the region used to calculate electron fake efficiencies: an $e^\pm\mu^\pm$ region (with the electron failing the Signal selection but passing the Loose selection, and the muon passing the 'tag' selection described above), which, like the signal regions, has fake leptons originating mostly from $W$+jet events. The region is split into two bins, with either $\Sigma(m_T) < 200$ GeV or $\Sigma(m_T) > 200$ GeV. The non-closure uncertainty is taken as the fractional difference in event counts in these bins between the total background estimate and the data. Here, the total background estimate includes the Likelihood Matrix Method fake-background estimate, as well as the real-lepton background estimated using SM MC. It was checked that the real-lepton background contamination in this region has no significant impact on the uncertainty value. The non-closure uncertainty has a magnitude of 21% (13%) for events with $\Sigma(m_T) > 200$ GeV ($\Sigma(m_T) < 200$ GeV), which is applied to the signal (control) regions.
Only two other sources of potential charge-flavour bias motivate the application of an explicit correction to the data. Firstly, in certain regions of the detector there are small differences between the reconstruction (and trigger) efficiencies for positively and negatively charged muons. These are largely a result of the toroidal magnetic field that the muons move through while traversing the muon spectrometer, which increases the relative acceptance of muons of one charge in certain regions, usually anti-symmetrically in $\eta$. To remove these differences, weights (depending on muon charge, transverse momentum and pseudorapidity, and derived from $Z \to \mu\mu$ samples following the tag-and-probe approach described in Ref. [35]) are applied to events after data-taking but before any other use in the analysis. These weights correct for the bias by taking the efficiency values back to the charge-averaged values. Approximately two thirds of these weights have values within 1% of unity. In addition to introducing an overall acceptance change of ∼0.05%, these weights are responsible for event yields acquiring non-integer values. Uncertainties associated with this correction are obtained by propagating the statistical uncertainty of the charge-bias measurement. Secondly, a small correction is applied to the data to account for the muon sagitta bias, which is derived in accord with Ref. [47], and comes with associated uncertainties which are also applied to the data. This charge-dependent bias in the muon momentum is caused by misalignment of the detector, and is found to be very minor: 68% of muons have a bias of less than 0.2% of the muon $p_T$.
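The charge-averaging weight described above can be sketched as follows. This is an illustrative reading of the procedure, not the ATLAS implementation; in practice the efficiencies are binned in charge, $p_T$ and $\eta$.

```python
def charge_weight(eff_plus, eff_minus, charge):
    # Weight taking the per-charge efficiency back to the charge-averaged
    # value, so that mu+ and mu- acceptances no longer differ.
    eff_avg = 0.5 * (eff_plus + eff_minus)
    eff = eff_plus if charge > 0 else eff_minus
    return eff_avg / eff
```

After weighting, a sample whose yields are proportional to the per-charge efficiencies becomes charge-symmetric, since both charges are scaled to the common average efficiency.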
Before unblinding the signal regions, the hypothesis that the proton-proton initial state and experimental effects lead to a bias favouring $\rho \le 1$ in the SM was confirmed by measuring $\rho$, binned in the 'stransverse mass' $m_{T2}$ [43], in the inclusive control region. Whilst the ratio is consistent with one within uncertainties, the maximal deviation from one is used to define a 2% 'residual-bias' uncertainty encompassing small remaining uncorrected detector biases. The extrapolation of this uncertainty to high $\Sigma(m_T)$ was validated by inspecting its impact on the measurement in the two control regions when binned in $\Sigma(m_T)$.

Results
The observed data and fake-lepton background estimates in the $e^+\mu^-$ and $e^-\mu^+$ channels of the inclusive and jet signal regions are shown in bins of $m_{T2}$ and $\Sigma(|p_T|)$, respectively, in Figure 2. Benchmark RPV-supersymmetry and leptoquark signal yields are included to demonstrate that these BSM models favour $e^+\mu^-$ over $e^-\mu^+$. In the lower panels of Figure 2, an estimate of the proportion of SM background processes in each bin is given, showing that $t\bar t$ is expected to dominate in most bins apart from the tails, where the fake-lepton, diboson, and single-top backgrounds become proportionally more important.
The ratio, $\rho$, is measured in bins, $i$, of $m_{T2}$ ($\Sigma(|p_T|)$) in the inclusive (jet) signal region by maximising a parameterised likelihood model of the observed yields, $N^{+-/-+}_{\mathrm{obs},i}$. The likelihood model assumes an independent Poisson distribution for each bin and channel, where the expectation in each bin is the combination of a fake-lepton background estimate, $f^{+-/-+}_i$, and a total irreducible (real-lepton) SM expectation, $s_{\mathrm{exp},i}$, which is a floating parameter in the likelihood; the $e^+\mu^-$ expectation is scaled by $\rho_i$. Uncertainties associated with the Likelihood Matrix Method estimate and the non-closure uncertainty in the fake-lepton background estimate are included by parameterising $f^{+-/-+}_i$ with Gaussian-constrained nuisance parameters, $\vec\theta$. Muon charge and sagitta-bias corrections are already applied to the observed yields, $N^{+-/-+}_{\mathrm{obs},i}$, with the relative uncertainties on these corrections included in the expected yields through corresponding Gaussian-constrained nuisance parameters. The 'residual-bias' uncertainty is included in the same manner. A global ratio measurement, combining all bins in a region, gives $\rho = 0.987^{+0.022}_{-0.021}$ for the inclusive and jet signal regions. The binned measured ratios (maximum-likelihood estimators of $\rho_i$) are shown in Figure 3. In the lower bins of these variables the residual-bias uncertainty dominates; in the final two bins of $m_{T2}$ and three bins of $\Sigma(|p_T|)$ the fake-lepton and statistical uncertainties dominate.

Figure 3: A summary of the ratio measurement in the full Run-2 data for the inclusive signal region binned in $m_{T2}$, and the jet signal region binned in $\Sigma(|p_T|)$. Muon charge and sagitta-bias corrections are applied to the data along with corresponding uncertainties, and the Likelihood Matrix Method is used to estimate the charge-flavour-biased fake-lepton background so that it can be subtracted from the data. A 2% uncertainty in $\rho$, encompassing remaining observed detector biases, is also included. The lower panel shows the $p$-value for a one-sided discovery test to reject the SM hypothesis that $\rho \le 1$.
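A stripped-down, single-bin version of such a likelihood fit can be sketched as follows. This is illustrative only: it has no nuisance parameters and uses a coarse grid scan instead of a proper minimiser, with the assumed model N⁺⁻ ~ Pois(f⁺⁻ + ρ·s) and N⁻⁺ ~ Pois(f⁻⁺ + s), where s is the floating SM expectation.

```python
import math

def nll(rho, s, n_pm, n_mp, f_pm, f_mp):
    # negative log-likelihood of the two Poisson counts (constants dropped)
    mu_pm = f_pm + rho * s
    mu_mp = f_mp + s
    return (mu_pm - n_pm * math.log(mu_pm)) + (mu_mp - n_mp * math.log(mu_mp))

def fit_rho(n_pm, n_mp, f_pm, f_mp):
    # coarse grid scan for the maximum-likelihood (rho, s)
    best = (None, None, float("inf"))
    for i in range(50, 151):                       # rho in [0.5, 1.5]
        rho = i / 100.0
        for s in range(1, 2 * max(n_pm, n_mp) + 1):
            v = nll(rho, float(s), n_pm, n_mp, f_pm, f_mp)
            if v < best[2]:
                best = (rho, float(s), v)
    return best[0], best[1]
```

With no fake background, the maximum-likelihood solution is simply the ratio of observed counts, ρ̂ = N⁺⁻/N⁻⁺ with ŝ = N⁻⁺, which the scan reproduces when both values lie on the grid.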
Figure 3 also shows one-sided $p$-values for a hypothesis test of $\rho = 1$, using a modified profile-likelihood-ratio test statistic that equals zero when $\rho \le 1$, calculated using asymptotic approximations [48]. No significant upward deviation from one is seen in any bin, meaning that the SM hypothesis of $\rho \le 1$ is not excluded anywhere. The largest upward deviation of $\rho$ from one has a local significance of $1\sigma$. The largest downward deviation from one is $\rho = 0.929^{+0.023}_{-0.022}$, in the 80-100 GeV bin of $m_{T2}$, with a local significance of $3.1\sigma$. The goodness-of-fit significance for the model with $\rho = 1$ in all bins is $1.6\sigma$, estimated using a likelihood-ratio test statistic with the asymptotic approximation.
The CL$_s$ method [49] is used to obtain 95% confidence level (CL) upper limits on the number of possible signal events entering the inclusive and jet signal regions, with a given fraction entering the $e^+\mu^-$ channel, and these are shown in Figure 4 for a range of values of this fraction. These limits are calculated using a profile-likelihood-ratio test statistic with the likelihood function from Eq. (3) after fixing the ratio values to $\rho_i = 1$ and adding signal terms. Having seen consistency with the SM hypothesis in the ratio measurement, limits are placed on the parameters of the two benchmark BSM models. RPV-supersymmetry model exclusion limits are calculated with a one-sided profile-likelihood-ratio test statistic, where a likelihood function is defined for the observed yields in the $e^+\mu^-$ and $e^-\mu^+$ channels of both the RPV signal region and a corresponding control region. This likelihood function is similar to Eq. (3), with signal yield terms added to the Poisson expectations (these signal terms are scaled by a signal strength parameter $\mu_{\mathrm{sig}}$) and with the index labelling the regions instead of the bins (each region in this model has a single bin). The per-bin ratios $\rho_i$ are also replaced by a single $\rho$ that is common to both regions. In this respect, the control region drives the measurement of $\rho$ and the signal region determines the value of $\mu_{\mathrm{sig}}$, which is used as the parameter of interest in the test statistic. The variance of $\rho$ across the ($\mathcal{S}$, $m_{T2}$)-plane ('RPV-plane') outside of the signal region is estimated with measurements of the ratio in bins of the plane-axis observables; the binning was chosen to approximately match the statistical precision of the signal region. The estimated variance of 6% is added to the likelihood model as an uncertainty in $\rho$ to cover the modelling assumption that $\rho$ is invariant across the plane. This uncertainty covers both statistical and systematic sources of variance.
The model is validated by finding good agreement between the observed and post-fit expected yields in the $e^+\mu^-$ channel of validation regions orthogonal to the RPV signal and control regions (defined in Figure 1), where the $e^-\mu^+$ channel of these regions is included in the fit along with both channels of the control region. These validation regions are not included in the test-statistic calculations when computing the limits. Figure 5(a) shows the observed and expected yields after the fit in the RPV-plane, demonstrating good agreement in the validation regions. This analysis is repeated in the ($\Sigma(|p_T|)$, $\mathcal{S}$)-plane ('LQ-plane') for the leptoquark benchmark models (see Figure 1), with an estimated 9% variance of $\rho$ across the plane. Figure 5(b) shows the fit result in the LQ-plane, again demonstrating good agreement in the validation regions. Uncertainties are included in the signal terms for lepton reconstruction efficiency, energy scale and resolution, and trigger efficiency differences between MC simulation and data; uncertainties in the jet-energy scale and resolution [50], the modelling of the $\vec{p}_T^{\,\mathrm{miss}}$ soft term [39], and electron charge identification [51] are also included. The RPV signal model yields include theoretical uncertainties in the signal acceptance due to the choice of parton-shower model and the factorisation and renormalisation scales. The LQ signal models include the effects of factorisation and renormalisation scale uncertainties on the NLO cross-section prediction, which form the $\pm 1\sigma$ theory band on the observed limit.

Figure 5: Distributions of data in the $e^+\mu^-$ channel of the signal, control, and validation regions, used for the $R$-parity-violating supersymmetry model and leptoquark model limit fits. Note that 'VR-LQ-x' has a lower bound of $\Sigma(|p_T|) > 600$ GeV, as it is not necessary to validate the ratio extrapolation any further away, kinematically, from the signal region. The data have the muon charge and sagitta-bias corrections applied.
The fake-lepton background estimate and the data-driven real-SM background prediction derived from the fit are also shown. The sum of these is the total SM background estimate; its error bar includes contributions from the fake-lepton background estimate and from the muon charge-bias and sagitta-bias uncertainties on the denominator data yields used to fit the SM background estimate. The lower panel shows the significance of any deviation between the observed data yield and the total SM background prediction in each bin, defined as in Ref. [52].
As shown in Table 1, no statistically significant deviations of the data from the total SM background prediction are seen in the $e^+\mu^-$ channels of either signal region. By construction, since the expected yield in each bin is a freely floating parameter, good agreement is seen between the data and the SM prediction in the $e^-\mu^+$ channels after a fit excluding the $e^+\mu^-$ channels. These are included in Table 1 for completeness and for comparison with the benchmark signal yields.

Table 1: Observed yields, and (post-fit) expected yields for the data-driven SM estimates in the case where $\mu_\text{sig} = 0$ and the fit excludes the $e^+\mu^-$ signal region; in such a fit the post-fit expected yields in the $e^-\mu^+$ channel are constrained to match the data exactly. Additionally, signal yields are shown for the benchmark RPV-supersymmetry signal points in the RPV signal region and for the leptoquark signal points in the LQ signal region, obtained from a fit excluding the $e^+\mu^-$ signal region and setting $\mu_\text{sig} = 1$. Small weights correcting for muon charge biases affect all rows except that containing the fake-lepton estimate. These weights, $w$, cause non-integer yields. The uncertainties, $\sqrt{\sum w^2}$, are given for data to support the choice made to model the yields with a Poisson distribution.
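The statement that the post-fit expected yields in the $e^-\mu^+$ channel match the data exactly follows from a general property of the Poisson likelihood: when an expected yield is a freely floating parameter, its maximum-likelihood value equals the observed count. A toy numerical check (with an arbitrary, purely illustrative observed yield):

```python
import math

def pois_lnL(n, nu):
    """Poisson log-likelihood of observing n events given mean nu
    (the constant factorial term is dropped)."""
    return n * math.log(nu) - nu

# Scan candidate means: the maximum sits exactly at nu = n, i.e. the
# post-fit expectation reproduces the observed yield by construction.
n_obs = 42  # hypothetical observed count
best_lnL, best_nu = max((pois_lnL(n_obs, nu), nu) for nu in range(1, 200))
assert best_nu == n_obs
```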
Figure 6: Expected and observed exclusion limits for RPV-supersymmetry models which allow the production of a single smuon (decaying into a muon and a neutralino) in association with a top quark (decaying leptonically). The expected limit is calculated with the asymptotic approximation, by considering when there is a 50% chance of exclusion at 95% CL$_\text{s}$ under the SM-only hypothesis. The smuon is produced through the $\lambda'_{231}$ coupling, which is fixed at unity. All limits are computed at 95% CL and all uncertainties are included. Dotted lines indicate the two kinematic limits for the RPV process considered.

The observed and expected RPV-supersymmetry limits are shown in Figure 6 for the case where the $\lambda'_{231}$ coupling is fixed at one, and in Figure 7 where the coupling takes values between 0.1 and 1.5. The perturbative upper limit for the $\lambda'_{231}$ coupling is 1.12 at the $Z$-boson mass [53], and increases with the energy scale. For coupling values above 1, the limit at high smuon mass becomes constant, since the increase in the cross section and the decrease in the branching ratio cancel each other out. Neutralino masses near and above the top-quark mass are not excluded, as there the neutralino can decay through the RPV coupling and no real $E_\text{T}^\text{miss}$ remains in the final state. For the largest coupling value considered, smuon masses up to 650 GeV are excluded.

Figure 8 shows the observed and expected limits on the leptoquark models considered. Since the energy required to produce a pair of leptoquarks is always double that required to make a single one, the suppression of high centre-of-mass energies by steeply falling parton distribution functions naturally leads to regions where this analysis has better reach in leptoquark mass than analyses which have targeted pair production. Notably, leptoquark couplings of $g_\text{1R}^{eu} = g_\text{1R}^{\mu c} > 0.46$ are newly excluded for masses above 1420 GeV, up to a value of unity (the largest coupling considered) for a leptoquark mass of 1880 GeV.
The comparison with the pair-production limits of Ref. [54] also assumes that the narrow-width approximation is valid for leptoquarks over the range of coupling values shown.
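The cancellation noted above for coupling values above 1 can be seen in a toy calculation: single-smuon production scales as $\lambda'^2_{231}$, while the branching ratio into a muon and a neutralino scales as $1/\lambda'^2_{231}$ once the RPV decay width dominates, so their product plateaus. The constants below are purely illustrative, not physical values:

```python
def sigma_times_br(lam, k=1.0, gamma0=1.0, c=1.0):
    """Toy model: production cross section grows as lambda'^2, while the
    branching ratio to mu + neutralino falls once the RPV width c*lambda'^2
    dominates the fixed gauge-decay width gamma0."""
    sigma = k * lam**2                    # single-smuon production
    br = gamma0 / (gamma0 + c * lam**2)   # BR(smuon -> mu neutralino)
    return sigma * br

# The product approaches the constant k*gamma0/c at large coupling,
# which is why the mass limit flattens for couplings above ~1.
vals = [sigma_times_br(lam) for lam in (5.0, 10.0, 20.0)]
assert abs(vals[-1] - 1.0) < 0.01
```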

Conclusion
To search for evidence of new physics, this analysis compares the production cross sections for $e^+\mu^-$ and $e^-\mu^+$ by investigating the ratio $\rho = \sigma(pp \to e^+\mu^- + X)/\sigma(pp \to e^-\mu^+ + X)$ in a variety of signal regions. New-physics processes could potentially raise or lower $\rho$ from one, but even though the largest Standard Model effect known to lower $\rho$ was subtracted, the model-independent tests presented in the first half of this analysis look only for evidence of the 'unexpected' scenario of $\rho > 1$. No significant model-independent evidence for $\rho > 1$ was seen when analysing 139 fb$^{-1}$ of proton-proton collision data recorded at $\sqrt{s} = 13$ TeV by the ATLAS detector at the LHC.
Further observations were conducted in more exclusive regions optimised for particular signals beyond the Standard Model. These regions targeted: (i) $R$-parity-violating supersymmetric models with non-zero $\lambda'_{231}$ couplings, with smuons and stable neutralinos, and (ii) scalar leptoquark models with $g_\text{1R}^{eu} = g_\text{1R}^{\mu c}$. The secondary measurements were then used to create exclusions in planes of the sparticle or leptoquark model spaces. The search was able to exclude singly produced smuons in certain models in which the only other light sparticle is a neutralino, albeit with those exclusions dependent on the existence of $\lambda'_{231}$ $R$-parity-violating couplings. Scalar leptoquarks with $g_\text{1R}^{eu} = g_\text{1R}^{\mu c} \leq 1$ were excluded for masses below 1880 GeV. This coupling limit reduces to $g_\text{1R}^{eu} = g_\text{1R}^{\mu c} = 0.46$ for a mass of 1420 GeV (close to the limits obtained in analyses based on leptoquark pair production).

Acknowledgements
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.

Appendix A
The top-pair and single-top backgrounds were modelled using Powheg Box [56] v2 interfaced to Pythia 8 [23] and EvtGen [57], with the NNPDF2.3 [24] PDF set. A dilepton filter was applied to the $t\bar{t}$ and $Wt$ processes.
The diboson backgrounds were modelled using Sherpa [58]. Hard processes with no or one additional jet in the final state were simulated at NLO, while up to three additional jets were included at LO. Sherpa 2.2.2 was used for the fully leptonic final states ($\ell\ell\ell\ell$, $\ell\ell\ell\nu$, $\ell\ell\nu\nu$ and $\ell\nu\nu\nu$) together with the CT10 PDF set [59]. For the semileptonic final states ($\ell\ell qq$ and $\ell\nu qq$), Sherpa 2.2.1 was used with the NNPDF [24] PDF set. The loop-induced processes ($gg \to \ell\ell\ell\ell$, $gg \to \ell\ell\nu\nu$, $\ell\ell\ell\ell jj$, $\ell\ell\ell\nu jj$ and same-sign $\ell\ell jj$) were generated using Sherpa 2.1.1 and the CT10 PDF set.
The $Z$+jets background was modelled using Sherpa 2.2.1 with the NNPDF PDF set. Up to two jets were generated at NLO and up to four at LO. The $t\bar{t}+V$ processes were simulated using MadGraph5_aMC@NLO + Pythia 8. The EvtGen program was used to model the properties of bottom- and charm-hadron decays. The events are normalised to their respective NLO cross sections.