Study of the underlying event in top quark pair production in pp collisions at 13 TeV

Measurements of normalized differential cross sections as functions of the multiplicity and kinematic variables of charged-particle tracks from the underlying event in top quark and antiquark pair production are presented. The measurements are performed in proton-proton collisions at a center-of-mass energy of 13 TeV, and are based on data collected by the CMS experiment at the LHC in 2016 corresponding to an integrated luminosity of 35.9 fb−1. Events containing one electron, one muon, and two jets from the hadronization and fragmentation of b quarks are used. These measurements characterize, for the first time, properties of the underlying event in top quark pair production and show no deviation from the universality hypothesis at energy scales typically above twice the top quark mass.

Submitted to the European Physical Journal C. © 2018 CERN for the benefit of the CMS Collaboration. CC-BY-4.0 license. *See Appendix B for the list of collaboration members. arXiv:1807.02810v1 [hep-ex] 8 Jul 2018


Introduction
At the LHC, top quark and antiquark pairs (tt) are dominantly produced in the scattering of the proton constituents via quantum chromodynamics (QCD) at an energy scale (Q) of about twice the t quark mass (m_t). The properties of the t quark can be studied directly from its decay products, as it decays before hadronizing. Mediated by the electroweak interaction, the t quark decay yields a W boson and a quark, the latter carrying the QCD color charge of the parent particle. Given the large branching fraction for the decay into a bottom quark, B(t → Wb) = 0.957 ± 0.034 [1], in this analysis we assume that each t or t quark yields a corresponding bottom (b) or antibottom (b) quark in its decay. Other quarks may also be produced, in a color-singlet state, if a W → qq decay occurs. Being colored, these quarks fragment and hadronize, giving rise to an experimental signature with jets. Thus, when performing precision measurements of t quark properties at hadron colliders, an accurate description of the fragmentation and hadronization of the quarks from the hard scattering process, as well as of the "underlying event" (UE), defined below, is essential. First studies of the fragmentation and hadronization of the b quarks in tt events have been reported in Ref. [2]. In this paper, we present the first measurement of the properties of the UE in tt events.
The UE is defined as any hadronic activity that cannot be attributed to the particles stemming from the hard scatter, in this case from the tt decays. Because of energy-momentum conservation, the UE constitutes the recoil against the tt system. In this study, the hadronization products of initial- and final-state radiation (ISR and FSR) that cannot be associated with the particles from the tt decays are probed as part of the UE, even if they can be partially modeled by perturbative QCD. The main contribution to the UE comes from the color exchanges between the beam particles and is modeled in terms of multiparton interactions (MPI), color reconnection (CR), and beam-beam remnants (BBR), whose model parameters can be tuned to minimum bias and Drell-Yan (DY) data.
The study of the UE in tt events provides a direct test of its universality at higher energy scales than those probed in minimum bias or DY events. This is relevant as a direct probe of CR, which is needed to confine the initial QCD color charge of the t quark into color-neutral states. The CR mainly occurs between one of the products of the fragmentation of the b quark from the t quark decay and the proton remnants. This is expected to induce an ambiguity in the origin of some of the final states present in a bottom quark jet [3][4][5]. The impact of these ambiguities on the measurement of t quark properties is evaluated through phenomenological models that need to be tuned to the data. Recent examples of the impact that different model parameters have on m_t can be found in Refs. [6,7].
The analysis is performed using final states where both of the W bosons decay to leptons, yielding one electron and one muon with opposite charge sign, and the corresponding neutrinos. In addition, two b jets are required in the selection, as expected from the tt → (eνb)(µνb) decay. This final state is chosen because of its expected high purity and because the products of the hard process can be distinguished with high efficiency and small contamination from objects not associated with t quark decays, e.g., jets from ISR. The measurements presented in this paper characterize, for the first time, the properties of the UE at Q ≥ 2m_t.
After discussing the experimental setup in Section 2, and the signal and background modeling in Section 3, we present the strategy employed to select the events in Section 4 and to measure the UE contribution in each selected event in Section 5. The measurements are corrected to a particle-level definition using the method described in Section 6 and the associated systematic uncertainties are discussed in Section 7. Finally, in Section 8, the results are discussed and compared to different Monte Carlo (MC) simulation calculations. The measurements are summarized in Section 9.

The CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T parallel to the beam direction.
Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. A preshower detector, consisting of two planes of silicon sensors interleaved with about three radiation lengths of lead, is located in front of the endcap regions of the ECAL. Hadron forward calorimeters, using steel as an absorber and quartz fibers as the sensitive material, extend the pseudorapidity coverage provided by the barrel and endcap detectors from |η| = 3.0 to 5.2. Muons are detected in the window |η| < 2.4 in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid.
Charged-particle trajectories with |η| < 2.5 are measured by the tracker system. The particle-flow algorithm [8] is used to reconstruct and identify individual particles in an event, with an optimized combination of information from the various elements of the CMS detector. The energy of the photons is directly obtained from the ECAL measurement, corrected for zero-suppression effects. The energy of the electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The energy of the muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.
Events of interest are selected using a two-tiered trigger system [9]. The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100 kHz within a time interval of less than 4 µs. The second level, known as the high-level trigger, consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1 kHz before data storage.
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [10].

Signal and background modeling
This analysis is based on proton-proton (pp) collision data at a center-of-mass energy √s = 13 TeV, collected by the CMS detector in 2016 and corresponding to an integrated luminosity of 35.9 fb−1 [11].
The tt process is simulated with the POWHEG (v2) generator in the heavy quark production (hvq) mode [12][13][14]. The NNPDF3.0 next-to-leading-order (NLO) parton distribution functions (PDFs) with the strong coupling parameter α_S = 0.118 at the Z boson mass scale (M_Z) [15] are utilized in the matrix-element (ME) calculation. The renormalization and factorization scales, µ_R and µ_F, are set to m_T = √(m_t² + p_T²), where m_t = 172.5 GeV and p_T is the transverse momentum in the tt rest frame. Parton showering is simulated using PYTHIA8 (v8.219) [16] and the CUETP8M2T4 UE tune [17]. The CUETP8M2T4 tune is based on the CUETP8M1 tune [18] but uses a lower value of α_S^ISR(M_Z) = 0.1108 in the parton shower (PS); this value leads to a better description of jet multiplicities in tt events at √s = 8 TeV [19]. The leading-order (LO) version of the same NNPDF3.0 set is used in the PS and MPI simulation in the CUETP8M2T4 tune. The cross section used for the tt simulation is 832 +20/−29 (scale) ± 35 (PDF + α_S) pb, computed at next-to-next-to-leading-order (NNLO) plus next-to-next-to-leading-logarithmic accuracy [20].
Throughout this paper, data are compared to the predictions of different generator settings for the tt process. Table 1 summarizes the main characteristics of the setups and abbreviations used in the paper. Among other UE properties, CR and MPI are modeled differently in the alternative setups considered, hence the interest in comparing them to the data. Three different signal ME generators are used: POWHEG, MADGRAPH5 aMC@NLO (v2.2.2) with the FxFx merging scheme [21,22] for jets from the ME calculations and PS, and SHERPA (v2.2.4) [23]. The latter is used in combination with OPENLOOPS (v1.3.1) [24], and with the CS parton shower based on the Catani-Seymour dipole subtraction scheme [25]. In addition, two different HERWIG PS versions are used and interfaced with POWHEG: HERWIG++ [26] with the EE5C UE tune [27] and the CTEQ6 (L1) [28] PDF set, and HERWIG 7 [26,29] with its default tune and the MMHT2014 (LO) [30] PDF set. Dedicated PYTHIA8 setups are also compared to the data: these include alternative CR models [4,31], an alternative MPI model based on the "Rope hadronization" framework describing Lund color strings overlapping in the same area [32,33], variations of the choice of α_S(M_Z) in the parton shower, and variations of the values that constitute the CUETP8M2T4 tune, according to their uncertainties.
Background processes are simulated with several generators. The WZ, W+jets, and ZZ → 2ℓ2q (where ℓ denotes generically electrons, muons, and τ leptons) processes are simulated at NLO, using MADGRAPH5 aMC@NLO with the FxFx merging. Drell-Yan production, with dilepton invariant mass, m(ℓℓ), greater than 50 GeV, is simulated at LO with MADGRAPH5 aMC@NLO using the so-called MLM matching scheme [34] for jet merging. The POWHEG (v2) program is furthermore used to simulate the WW and ZZ → 2ℓ2ν processes [35,36], while POWHEG (v1) is used to simulate the tW process [37]. The single top quark t-channel background is simulated at NLO using POWHEG (v2) and MADSPIN contained in MADGRAPH5 aMC@NLO (v2.2.2) [38,39]. The residual tt+V backgrounds, where V = W or Z, are generated at NLO using MADGRAPH5 aMC@NLO. The cross sections of the DY and W+jets processes are normalized to the NNLO prediction, computed using FEWZ (v3.1.b2) [40], and single top quark processes are normalized to the approximate NNLO prediction [41]. Processes containing two vector bosons (hereafter referred to as dibosons) are normalized to the NLO predictions computed with MADGRAPH5 aMC@NLO, with the exception of the WW process, for which the NNLO prediction [42] is used.
All generated events are processed through the GEANT4-based [43][44][45] CMS detector simulation and the standard CMS event reconstruction. Additional pp collisions per bunch crossing (pileup) are included in the simulations, with the same multiplicity distribution as that observed in data, i.e., about 23 simultaneous interactions, on average, per bunch crossing.

Event reconstruction and selection
The selection targets events in which each W boson decays to a charged lepton and a neutrino. Data are selected online with single-lepton and dilepton triggers. The particle-flow (PF) algorithm [8] is used for the reconstruction of final-state objects. The offline event selection is similar to the one described in Ref. [46]. At least one PF charged-lepton candidate with p_T > 25 GeV and another one with p_T > 20 GeV, both having |η| < 2.5, are required. In the transition region between the ECAL barrel (EB) and endcaps (EE), p_T > 20 GeV is required, differently from the selection in Ref. [46]. The two leptons must have opposite charges and an invariant mass m(ℓ±ℓ∓) > 12 GeV. When extra leptons are present in the event, the dilepton candidate is built from the two highest-p_T leptons in the event.
Events with e ± µ ∓ in the final state are used for the main analysis, while ee and µ ± µ ∓ events are used to derive the normalization of the DY background. The simulated events are corrected for the differences between data and simulation in the efficiencies of the trigger, lepton identification, and lepton isolation criteria. The corrections are derived with Z → e + e − and Z → µ + µ − events using the "tag-and-probe" method [47] and are parameterized as functions of the p T and η of the leptons.
Jets are clustered using the anti-k_T jet finding algorithm [48,49] with a distance parameter of 0.4 and all the reconstructed PF candidates in the event. The charged hadron subtraction algorithm is used to mitigate the contribution from pileup to the jets [50]. At least two jets with p_T > 30 GeV, |η| < 2.5, and identified by a b-tagging algorithm are required. The b-tagging is based on a "combined secondary vertex" algorithm [51] characterized by an efficiency of about 66%, corresponding to misidentification probabilities for light-quark and c-quark jets of 1.5 and 18%, respectively. A p_T-dependent scale factor is applied to the simulations in order to reproduce the efficiency of this algorithm, as measured in data.
The reconstructed vertex with the largest value of summed physics-object p_T² is taken to be the primary pp interaction vertex. The physics objects are the jets, clustered using the jet finding algorithm [48,49] with the tracks assigned to the vertex as inputs, and the associated missing transverse momentum, p_T^miss, taken as the negative vector sum of the p_T of those jets. The latter is defined as the magnitude of the negative vector sum of the momenta of all reconstructed PF candidates in an event, projected onto the plane perpendicular to the direction of the proton beams.
All backgrounds are estimated from simulation, with the exception of the DY background normalization. The latter is estimated making use of the so-called R_out/in method [52], in which events with same-flavor leptons are used to normalize the yield of eµ pairs from DY production of τ lepton pairs. The normalization of the simulation is estimated from the number of events in the data within a 15 GeV window around the Z boson mass [52]. For eµ events, we use the geometric mean of the scale factors determined for ee and µµ events. With respect to the simulated predictions, a scale factor of 1.3 ± 0.4 is obtained from this method, with statistical and systematic uncertainties added in quadrature. The systematic uncertainty is estimated from the differences found in the scale factor for events with 0 or 1 b-tagged jets, in the same-flavor channels.
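The combination of the same-flavor scale factors for the eµ channel reduces to a geometric mean; a minimal sketch in Python (the function name and the numerical inputs are purely illustrative, not the measured values):

```python
import math

def emu_dy_scale_factor(sf_ee, sf_mumu):
    """Geometric mean of the ee and mumu Drell-Yan scale factors,
    used to normalize the DY background in the e-mu channel."""
    return math.sqrt(sf_ee * sf_mumu)

# illustrative inputs only; the measured combined value is 1.3 +/- 0.4
combined = emu_dy_scale_factor(1.2, 1.4)
```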
We select a total of 52 645 eµ events with an expected purity of 96%. The data agree with the expected yields within 2.2%, a value smaller than the uncertainty in the integrated luminosity alone, 2.5% [11]. The tW events are expected to constitute 90% of the total background.
In the simulation, the selection is mimicked at the particle level with the techniques described in Ref. [53]. Jets and leptons are defined at the particle level with the same conventions as adopted by the RIVET framework [54]. The procedure ensures that the selections and definitions of the objects at particle level are consistent with those used in the RIVET routines. A brief description of the particle-level definitions follows:
• prompt charged leptons (i.e., not produced as a result of hadron decays) are reconstructed as "dressed" leptons with nearby photon recombination using the anti-k_T algorithm with a distance parameter of 0.1;
• jets are clustered with the anti-k_T algorithm with a distance parameter of 0.4 using all particles remaining after removing both the leptons from the hard process and the neutrinos;
• the flavor of a jet is identified by including B hadrons in the clustering.
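The dressed-lepton definition above can be approximated by summing the four-momenta of photons found within a small cone around the prompt lepton; a sketch assuming a plain ∆R < 0.1 cone (the actual analysis uses anti-k_T clustering via RIVET, and all names here are illustrative):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with the azimuthal difference wrapped to [-pi, pi]."""
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
    return math.hypot(eta1 - eta2, dphi)

def dress_lepton(lepton, photons, cone=0.1):
    """Add to the lepton four-momentum (px, py, pz, E) all photons within
    delta-R < cone; a simple cone approximation of the R = 0.1 recombination."""
    px, py, pz, e = lepton["p4"]
    for ph in photons:
        if delta_r(lepton["eta"], lepton["phi"], ph["eta"], ph["phi"]) < cone:
            ppx, ppy, ppz, pe = ph["p4"]
            px, py, pz, e = px + ppx, py + ppy, pz + ppz, e + pe
    return (px, py, pz, e)
```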
Using these definitions, the fiducial region of this analysis is specified by the same requirements that are applied offline (reconstruction level) for leptons and jets. Simulated events are categorized as fulfilling only the reconstruction-based, only the particle-based, or both selection requirements. If a simulated event passes only the reconstruction-level selection, it is considered in the "misidentified signal" category, i.e., it does not contribute to the fiducial region defined in the analysis and thus is considered as a background process. In the majority of the bins of each of the distributions analyzed, the fraction of signal events passing both the reconstruction- and particle-level selections is estimated to be about 80%, while the fraction of misidentified signal events is estimated to be less than 10%.

Characterization of the underlying event
In order to isolate the UE activity in data, the contribution from both pileup and the hard process itself must be identified and excluded from the analysis. The contamination from pileup events is expected to yield soft particles in time with the hard process, as well as tails in the energy deposits from out-of-time interactions. The contamination from the hard process is expected to be associated with the two charged leptons and two b jets originating from the tt decay chain.
In order to minimize the contribution from these sources, we use the properties of the reconstructed PF candidates in each event. The track associated with a charged PF candidate is required to be compatible with originating from the primary vertex. This condition reduces the contamination from pileup in the charged-particle collection to a negligible amount. A simple association by proximity in z with respect to the primary vertex of the event is expected to yield a pileup-robust, high-purity selection. For the purpose of this analysis, all charged PF candidates are required to satisfy the following requirements:
• p_T > 900 MeV and |η| < 2.1;
• the associated track must either be used in the fit of the primary vertex or be closer to it in z than to any other reconstructed vertex in the event.
After performing the selection of the charged PF candidates, we check which ones have been used in the clustering of the two b-tagged jets and which ones match the two charged-lepton candidates within a cone of ∆R = √((∆η)² + (∆φ)²) = 0.05, where φ is the azimuthal angle in radians. All PF candidates failing the kinematic requirements, matched to another primary vertex in the event, or matched to the charged leptons or b-tagged jets, are removed from the analysis. The UE analysis proceeds by using the remaining charged PF candidates. Figure 1 shows, in a simulated tt event, the contribution from charged and neutral PF candidates, the charged component of the pileup, and the hard process. The charged PF candidates that are used in the study of the UE are represented after applying the selection described above.
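The candidate-selection logic above can be sketched as follows, assuming candidates are represented as simple dictionaries with a unique id (the function and field names are illustrative, not the CMS software API):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with the azimuthal difference wrapped to [-pi, pi]."""
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
    return math.hypot(eta1 - eta2, dphi)

def select_ue_candidates(charged_pf, leptons, bjet_constituents):
    """Keep charged PF candidates with pT > 0.9 GeV and |eta| < 2.1 that are
    neither clustered into the two b-tagged jets (ids in bjet_constituents)
    nor within delta-R < 0.05 of a selected lepton."""
    kept = []
    for c in charged_pf:
        if c["pt"] <= 0.9 or abs(c["eta"]) >= 2.1:
            continue
        if c["id"] in bjet_constituents:
            continue
        if any(delta_r(c["eta"], c["phi"], l["eta"], l["phi"]) < 0.05
               for l in leptons):
            continue
        kept.append(c)
    return kept
```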
Various characteristics, such as the multiplicity of the selected charged particles, the flux of momentum, and the topology or shape of the event have different sensitivity to the modeling of the recoil, the contribution from MPI and CR, and other parameters.
The first set of observables chosen in this analysis is related to the multiplicity and momentum flux in the event:
• charged-particle multiplicity: N_ch;
• magnitude of the p_T of the charged-particle recoil system, computed from the vector sum of the transverse momenta of the selected charged particles;
• average p_T (or p_z) per charged particle, computed from the ratio between the corresponding scalar sum and the charged-particle multiplicity.
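The observables above can be sketched in a few lines, assuming each selected candidate is given by its momentum components (px, py, pz); taking |p_z| for the longitudinal scalar sum is an assumption of this sketch:

```python
import math

def ue_observables(cands):
    """Compute N_ch, the recoil magnitude |vector sum of pT|, the scalar sum
    of pT, and the average pT and |pz| per charged particle, from a list of
    (px, py, pz) momenta of the selected UE candidates."""
    n_ch = len(cands)
    sum_px = sum(c[0] for c in cands)
    sum_py = sum(c[1] for c in cands)
    recoil_pt = math.hypot(sum_px, sum_py)               # |vector sum of pT|
    sum_pt = sum(math.hypot(c[0], c[1]) for c in cands)  # scalar sum of pT
    mean_pt = sum_pt / n_ch if n_ch else 0.0
    mean_pz = sum(abs(c[2]) for c in cands) / n_ch if n_ch else 0.0
    return n_ch, recoil_pt, sum_pt, mean_pt, mean_pz
```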
The second set of observables characterizes the UE shape and is computed from the so-called linearized sphericity tensor [55,56],

S^{µν} = (∑_i p_i^µ p_i^ν / |p_i|) / (∑_i |p_i|),

where the index i runs over the particles associated with the UE, as for the previous variables, and the µ and ν indices refer to the (x, y, z) components of the momentum of the particles.

Figure 1: Distribution of all PF candidates reconstructed in a PW+PY8 simulated tt event in the η-φ plane. Only particles with p_T > 900 MeV are shown, with a marker whose area is proportional to the particle p_T. The fiducial region in η is represented by the dashed lines.
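The linearized sphericity tensor can be computed with a few lines of NumPy; a sketch assuming the standard normalization by the scalar sum of |p_i|, so that the eigenvalues sum to one (event-shape variables are then built from the ordered eigenvalues):

```python
import numpy as np

def linearized_sphericity_tensor(momenta):
    """momenta: iterable of (px, py, pz). Returns the 3x3 tensor
    S^{mu nu} = sum_i p_i^mu p_i^nu / |p_i|, normalized by sum_i |p_i|."""
    p = np.asarray(momenta, dtype=float)
    norms = np.linalg.norm(p, axis=1)
    s = np.einsum("im,in,i->mn", p, p, 1.0 / norms)
    return s / norms.sum()

# an isotropic three-particle configuration as a usage example
S = linearized_sphericity_tensor(
    [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)])
eigenvalues = np.sort(np.linalg.eigvalsh(S))[::-1]
```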
Further insight can be gained by studying the evolution of the two sets of observables in different categories of the tt system kinematic quantities. The categories chosen below are sensitive to the recoil or the scale of the energy of the hard process, and are expected to be reconstructed with very good resolution. Additionally, these variables minimize the effect of migration of events between categories due to resolution effects.
The dependence of the UE on the recoil system is studied in categories that are defined according to the multiplicity of additional jets with p_T > 30 GeV and |η| < 2.5, excluding the two selected b-tagged jets. The categories with 0, 1, or more than 1 additional jet are used for this purpose. The additional jet multiplicity is partially correlated with the charged-particle multiplicity and helps to factorize the contribution from ISR. The distribution of the number of additional jets is shown in Fig. 2 (upper left).
In addition to these categories, the transverse momentum of the dilepton system, p_T(ℓℓ), is used, as it preserves some correlation with the transverse momentum of the tt system and, consequently, with the recoil of the system. The p_T(ℓℓ) direction is used to define three regions in the transverse plane of each event. The regions are denoted as "transverse" (60° < |∆φ| < 120°), "away" (|∆φ| > 120°), and "toward" (|∆φ| < 60°). Each reconstructed particle in an event is assigned to one of these regions, depending on the difference of its azimuthal angle with respect to the p_T(ℓℓ) vector. Figure 3 illustrates how this classification is performed on a typical event. This classification is expected to enhance the sensitivity of the measurements to the contributions from ISR, MPI, and CR in different regions. In addition, the magnitude, p_T(ℓℓ), is used to categorize the events, and its distribution is shown in Fig. 2 (upper right). The p_T(ℓℓ) variable is estimated with a resolution better than 3%. Lastly, the dependence of the UE on the energy scale of the hard process is characterized by measuring it in different categories of the m(ℓℓ) variable. This variable is correlated with the invariant mass of the tt system, but not with its p_T. The m(ℓℓ) distribution is shown in Fig. 2 (lower). A resolution better than 2% is expected in the measurement of m(ℓℓ).
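The region assignment relative to the dilepton direction can be sketched as follows (the function name is illustrative):

```python
import math

def ue_region(phi_particle, phi_ll):
    """Classify a particle as 'toward', 'transverse', or 'away' from the
    absolute azimuthal difference with respect to the pT(ll) direction."""
    dphi = abs(math.atan2(math.sin(phi_particle - phi_ll),
                          math.cos(phi_particle - phi_ll)))
    dphi_deg = math.degrees(dphi)
    if dphi_deg < 60.0:
        return "toward"
    if dphi_deg <= 120.0:
        return "transverse"
    return "away"
```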
Although both p_T(ℓℓ) and m(ℓℓ) are only partially correlated with the tt kinematic quantities, they are expected to be reconstructed with very good resolution. Because of the two escaping neutrinos, the kinematics of the tt pair can only be reconstructed by using the p_T^miss measurement, which has poorer experimental resolution than the lepton momenta. In addition, given that p_T^miss is correlated with the UE activity, using it would introduce a bias in the definition of the categories and the observables studied to characterize the UE. Hence the choice to use only dilepton-related variables.

Figure 3: Display of the transverse momentum of the selected charged particles, the two leptons, and the dilepton pair in the transverse plane, corresponding to the same event as in Fig. 1. The p_T of the particles is proportional to the length of the arrows, and the dashed lines represent the regions that are defined relative to the p_T(ℓℓ) direction. For clarity, the p_T of the leptons has been rescaled by a factor of 0.5.

Corrections to the particle level
Inefficiencies of the track reconstruction due to the residual contamination from pileup, nuclear interactions in the tracker material, and accidental splittings of the primary vertex [58] are expected to cause a slight bias in the observables described above. The correction for these biases is estimated from simulation and applied to the data by means of an unfolding procedure. At particle (generator) level, the distributions of the observables of interest are binned according to the resolutions expected from simulation. Furthermore, we require that each bin contains at least 2% of the total number of events. The migration matrix (K), used to map the reconstruction-level to the particle-level distributions, is constructed using twice the number of bins at the reconstruction level as are used at particle level. This procedure ensures almost diagonal matrices, which have a numerically stable inverse. The matrix is extended with an additional row that is used to count the events failing the reconstruction-level requirements, but found in the fiducial region of the analysis, i.e., passing the particle-level requirements. The inversion of the migration matrix is made using a Tikhonov regularization procedure [59], as implemented in the TUNFOLDDENSITY package [60]. The unfolded distribution is found by minimizing a χ² function

χ²(λ) = (y − Kλ)^T V_yy^{−1} (y − Kλ) + τ² ||L(λ − λ_0)||²,

where y are the observations, V_yy is an estimate of the covariance of y (calculated using the simulated signal sample), λ is the particle-level expectation, ||L(λ − λ_0)||² is a penalty function (with λ_0 being estimated from the simulated samples), and τ > 0 is the so-called regularization parameter. The latter regulates how strongly the penalty term contributes to the minimization of χ². In our setup we choose the function L to be the curvature, i.e., the second derivative, of the output distribution. The chosen value of the τ parameter is optimized for each distribution by minimizing its average global correlation coefficient [60]. Small values, i.e., τ < 10^−3, are found for all the distributions; the global correlation coefficients are around 50%.
The statistical coverage of the unfolding procedure is checked by means of pseudo-experiments based on independent PW+PY8 samples. The pull of each bin in each distribution is found to be consistent with that of a standard normal distribution. The effect of the regularization term in the unfolding is checked in the data by folding the measured distributions and comparing the outcome to the originally-reconstructed data. In general the folded and the original distributions agree within 1-5% in each bin, with the largest differences observed in bins with low yield.
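Since the regularized χ² is quadratic in the particle-level expectation λ, its minimum can be written in closed form; a NumPy sketch of the minimization (TUnfold additionally handles the extra row for events failing the reconstruction-level selection, which is omitted here):

```python
import numpy as np

def tikhonov_unfold(y, K, V_yy, L, lam0, tau):
    """Minimize chi2(lam) = (y - K lam)^T Vyy^{-1} (y - K lam)
                          + tau^2 ||L (lam - lam0)||^2
    analytically, as a sketch of a Tikhonov-regularized unfolding."""
    Vinv = np.linalg.inv(V_yy)
    LtL = L.T @ L
    A = K.T @ Vinv @ K + tau**2 * LtL
    b = K.T @ Vinv @ y + tau**2 * LtL @ lam0
    return np.linalg.solve(A, b)
```

For τ → 0 the result reduces to the unregularized solution, while a very large τ pulls the result toward the prior λ_0.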

Systematic uncertainties
The impact of different sources of uncertainty is evaluated by unfolding the data with alternative migration matrices, which are obtained after changing the settings in the simulations as explained below. The observed bin-by-bin differences are used as estimates of the uncertainty. The impact of the uncertainty in the background normalization is the only exception to this procedure, as detailed below. For the sources with several contributions, an envelope of the differences in each bin is used. Variations in the event yields are fully absorbed by the normalization of the measured cross sections; thus only the sources of uncertainty that change the shapes have a non-negligible impact.
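The per-bin envelope construction for sources with several contributions can be sketched as:

```python
def per_bin_envelope(nominal, variations):
    """Per-bin envelope: the largest absolute difference between any
    variation and the nominal prediction, evaluated bin by bin."""
    return [max(abs(v[i] - nominal[i]) for v in variations)
            for i in range(len(nominal))]
```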

Experimental uncertainties
The following experimental sources of uncertainty are considered: Pileup: Although pileup is included in the simulation, there is an intrinsic uncertainty in modeling its multiplicity. An uncertainty of ±4.6% in the inelastic pp cross section is used and propagated to the event weights [61].
Trigger and selection efficiency: The scale factors used to correct the simulation for different trigger and lepton selection efficiencies in data and simulation are varied up or down, according to their uncertainty. The uncertainties in the muon track and electron reconstruction efficiencies are included in this category and added in quadrature.

Lepton energy scale:
The corrections applied to the electron energy and muon momentum scales are varied separately, according to their uncertainties. The corrections and uncertainties are obtained using methods similar to those described in Refs. [62,63]. These variations lead to a small migration of events between the different p_T(ℓℓ) or m(ℓℓ) categories used in the analysis.
Jet energy scale: A p T -and η-dependent parameterization of the jet energy scale is used to vary the calibration of the jets in the simulation. The corrections and uncertainties are obtained using methods similar to those described in Ref. [50]. The effect of these variations is similar to that described for the lepton energy scale uncertainty; in this case the migration of events occurs between different jet multiplicity categories.

Jet energy resolution:
Each jet is further smeared up or down depending on its p T and η, with respect to the central value measured in data. The difference with respect to data is measured using methods similar to those described in Ref. [50]. The main effect induced in the analysis from altering the jet energy resolution is similar to that described for the jet energy scale uncertainty.
b tagging and misidentification efficiencies: The scale factors used to correct for the difference in performance between data and simulation are varied according to their uncertainties and depending on the flavor of the jet [51]. The main effect of this variation is to move jets into the candidate b jets sample or remove them from it.

Background normalization:
The impact of the uncertainty in the normalization of the backgrounds is estimated by computing the difference obtained with respect to the nominal result when these contributions are not subtracted from data. This difference is expected to cover the uncertainty in the normalization of the main backgrounds, i.e., DY and the tW process, and the uncertainty in the normalization of the tt events that are expected to pass the reconstruction-level requirements but fail the generator-level ones. The total expected background contribution is at the level of 8-10%, depending on the bin. The impact from this uncertainty is estimated to be <5%.
Tracking reconstruction efficiency: The efficiency of track reconstruction is found to be more than 90%. It is monitored using muon candidates from Z → µ+µ− decays, and the ratio of the four-body final-state D0 → K−π+π−π+ decay to the two-body D0 → K−π+ decay. The latter is used to determine a data-to-simulation scale factor (SF_trk) as a function of the pseudorapidity of the tracks, and for the different data-taking periods used in this analysis. The envelope of the SF_trk values, with uncertainties included, ranges from 0.89 to 1.17 [64], and it provides an adequate coverage for the residual variations observed in the charged-particle multiplicity between different data-taking periods. The impact of the variation of SF_trk by its uncertainty is estimated by using the value of |1 − SF_trk| as the probability to remove a reconstructed track from the event or to promote an unmatched generator-level charged particle to a reconstructed track, depending on whether SF_trk < 1 or > 1, respectively. Different migration matrices, reflecting the different tracking efficiencies obtained from varying SF_trk within its uncertainty, are obtained by this method and used to unfold the data. Although the impact is non-negligible on variables such as N_ch or ∑p_T, it is very small (<1%) on variables such as the average p_T and p_z.
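The removal/promotion procedure described above can be sketched as follows (the names are illustrative; the outcome is random in general and deterministic only at the extreme values used in the example):

```python
import random

def vary_tracks(reco_tracks, unmatched_gen, sf_trk, rng=random):
    """With probability |1 - SF_trk|, remove a reconstructed track
    (SF_trk < 1) or promote an unmatched generator-level charged particle
    to a reconstructed track (SF_trk > 1)."""
    p = abs(1.0 - sf_trk)
    if sf_trk < 1.0:
        return [t for t in reco_tracks if rng.random() >= p]
    promoted = [g for g in unmatched_gen if rng.random() < p]
    return list(reco_tracks) + promoted
```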

Theoretical uncertainties
The following theoretical uncertainties are considered:
Scale choices: µ_R and µ_F are varied individually in the ME by factors of 0.5 and 2, excluding the extreme combinations (µ_R, µ_F) = (2, 0.5) and (0.5, 2) relative to the nominal scales, according to the prescription described in Refs. [65,66].
Resummation scale and α_S used in the parton shower: In POWHEG, the real emission cross section is scaled by a damping function, parameterized by the so-called h_damp variable [12-14]. This parameter controls the ME-PS matching and regulates high-p_T radiation by reducing the real emissions generated by POWHEG by a factor of h_damp²/(p_T² + h_damp²). In the simulation used to derive the migration matrices, h_damp = 1.58 m_t, and the uncertainty in this value is evaluated by changing it by +42% or −37%, a range determined from jet multiplicity measurements in tt events at √s = 8 TeV [19]. Likewise, the uncertainty associated with the choice of α_S^ISR(M_Z) = 0.1108 (α_S^FSR(M_Z) = 0.1365) for space-like (time-like) showers in the CUETP8M2T4 tune is evaluated by varying the scale at which it is computed, M_Z, by a factor of 2 or 1/2.
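The damping factor quoted above suppresses hard real emissions while leaving soft ones untouched; a minimal sketch (the function name and the numerical m_t value are illustrative):

```python
def powheg_damping(pt, hdamp):
    """Damping factor h_damp^2 / (pT^2 + h_damp^2) applied to POWHEG
    real emissions: -> 1 for soft emissions (pT << h_damp),
    -> 0 for hard emissions (pT >> h_damp)."""
    return hdamp**2 / (pt**2 + hdamp**2)

mt = 172.5            # top quark mass in GeV (illustrative value)
hdamp = 1.58 * mt     # nominal choice quoted in the text
```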

UE model:
The dependence of the migration matrix on the UE model assumed in the simulation is tested by varying the parameters that model the MPI and CR within the range of values corresponding to the uncertainty envelope associated with the CUETP8M2T4 tune. The uncertainty envelope has been determined using the same methods as described in Ref. [18]. In the following, these will be referred to as UE up/down variations. The dependence on the CR model is furthermore tested using other models besides the nominal one, which is the MPI-based CR model in which the tt decay products are excluded from reconnections to the UE. A dedicated sample in which reconnections to resonant decay products are enabled (hereafter designated as ERDon) is used to evaluate possible differences in the unfolded results. In addition, alternative CR models are tested: one sample utilizing the "gluon move" model [4], in which gluons can be moved to another string, and another utilizing the "QCD-based" model with string formation beyond LO [31]. In both samples, reconnections to resonant decay products are enabled. The envelope of the differences is considered as a systematic uncertainty.

Summary of systematic uncertainties
The uncertainties in the measurement of the normalized differential cross sections are dominated by the systematic components, although the statistical uncertainty also contributes significantly. The experimental uncertainties have, in general, a small impact; the most relevant is the tracking reconstruction efficiency for the N_ch, ∑p_T, ∑p_z, and |∑p_T| observables. Other observables are affected at the sub-percent level by this uncertainty. Theory uncertainties affect the measurements more significantly, a fact that underlines the need for better tuning of the model parameters.
Event shape observables are found to be the most robust against this uncertainty, while ∑p_T, ∑p_z, and |∑p_T| suffer the most from it. Other sources of theoretical uncertainty typically have a smaller effect.
To further illustrate the impact of the different sources on the observables considered, we list in Table 2 the uncertainties in the average of each observable. Only systematic uncertainties that change the average of at least one observable by 0.5% or more are included. The total uncertainty in the average of a given quantity ranges from 1 to 8%, and hence a quantitative comparison with model predictions can be carried out.

Inclusive distributions
The normalized differential cross sections measured as functions of N_ch, ∑p_T, p_T, |∑p_T|, ∑p_z, p_z, sphericity, aplanarity, C, and D are shown in Figs. 4-13, respectively. The distributions are obtained after unfolding the background-subtracted data and normalizing the result to unity. The results are compared to the simulations, whose settings are summarized in Table 1 and in the appendix. For the predictions, the statistical uncertainty is represented as an error bar.
In the specific case of the PW+PY8 setup, the error bar represents the envelope obtained by varying the main parameters of the CUETP8M2T4 tune according to their uncertainties. The envelope includes the variation of the CR model, α_S^ISR(M_Z), α_S^FSR(M_Z), the h_damp parameter, and the µ_R/µ_F scales at the ME level. Thus, the uncertainty band shown for the PW+PY8 setup should be interpreted as the theory uncertainty in that prediction. For each distribution we give, in addition, the ratio between the different predictions and the data.
In tt events the UE is found to contain typically O(20) charged particles with p_T ∼ p_z ≈ 2 GeV, vectorially summing to a recoil of about 10 GeV. The UE events are anisotropic (the sphericity is < 1) and close to planar (the aplanarity peaks at low values of ≈0.1), and the C variable, which identifies three-jet topologies, peaks at around 0.75. The D variable, which identifies four-jet topologies, is instead found to have values closer to 0. The three-prong configuration in the energy flux of the UE described by the C variable can be identified with two of the eigenvectors of the linearized sphericity tensor being correlated with the direction of the b-tagged jets, and the third being determined by energy conservation. When an extra jet with p_T > 30 GeV is selected, we measure a change in the profile of the event shape variables, with average values lower by 20-40% with respect to the distributions in which no extra jet is found. Thus, when an extra jet is present, the event has a dijet-like topology instead of an isotropic shape.
Figure 4: The normalized differential cross section as a function of N_ch is shown in the upper panel. The data (colored boxes) are compared to the nominal PW+PY8 predictions and to the expectations obtained from PW+PY8 setups with varied α_S^ISR(M_Z) or α_S^FSR(M_Z) (markers). The panels in the lower display show the ratio between each model tested (see text) and the data. In both cases the shaded (hatched) band represents the total (statistical) uncertainty of the data, while the error bars represent either the total uncertainty of the PW+PY8 setup, computed as described in the text, or the statistical uncertainty of the other MC simulation setups.
The UE description shows little dependence on the ME generator with which the parton shower is interfaced, i.e., PW+PY8 and MG5_aMC yield similar results.
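The event shape observables discussed above follow from the eigenvalues of the linearized sphericity tensor; their standard definitions can be sketched as follows (illustrative code, not the analysis implementation):

```python
import numpy as np

def event_shapes(momenta):
    """Event-shape observables from the linearized sphericity tensor.

    momenta: (N, 3) array of charged-particle 3-momenta. The tensor
    M_ab = sum_i p_ia p_ib / |p_i|, normalized by sum_i |p_i|, has
    eigenvalues l1 >= l2 >= l3 with l1 + l2 + l3 = 1. Then:
      sphericity S = 3/2 (l2 + l3),  aplanarity A = 3/2 l3,
      C = 3 (l1 l2 + l1 l3 + l2 l3), D = 27 l1 l2 l3.
    """
    p = np.asarray(momenta, dtype=float)
    norms = np.linalg.norm(p, axis=1)
    M = np.einsum('ia,ib,i->ab', p, p, 1.0 / norms) / norms.sum()
    l1, l2, l3 = sorted(np.linalg.eigvalsh(M), reverse=True)
    return {'sphericity': 1.5 * (l2 + l3),
            'aplanarity': 1.5 * l3,
            'C': 3.0 * (l1 * l2 + l1 * l3 + l2 * l3),
            'D': 27.0 * l1 * l2 * l3}
```

For instance, a perfectly planar three-prong ("Mercedes") configuration yields C = 0.75 and D = 0, consistent with the peak observed in the C distribution.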
In all distributions the contribution from MPI is strong: switching off this component in the simulation has a drastic effect on the predictions of all the variables analyzed.
Color reconnection effects are more subtle to identify in the data. In the inclusive distributions, CR effects are needed to improve the theory accuracy for p_T < 3 GeV or p_z < 5 GeV. The differences between the CR models tested (discussed in detail in Section 3) are nevertheless small and almost indistinguishable in the inclusive distributions. In general, the PW+PY8 setup is found to be in agreement with the data when the total theory uncertainty is taken into account. In most of the distributions it is the variation of α_S^FSR(M_Z) that dominates the theory uncertainty, as this variation leads to the most visible changes in the UE. The other parton shower setups tested do not describe the data as accurately, but they were not tuned to the same level of detail as PW+PY8. Because of their different hadronization model, the HERWIG-based setups show trends with respect to the data distinct from those observed in any of the PYTHIA8-based setups. While describing the UE event shape variables fairly well, HERWIG++ and HERWIG 7 disagree with the N_ch, p_T, and p_z measurements. The SHERPA predictions disagree with the data in most of the observables.
For each distribution the level of agreement between the theory predictions and the data is quantified by means of a χ² variable defined as

χ² = ∑_{i,j=1}^{n} δy_i (Cov⁻¹)_{ij} δy_j,    (3)

where δy_i (δy_j) is the difference between the data and the model in the i-th (j-th) bin, n is the total number of bins, and Cov⁻¹ is the inverse of the covariance matrix. Table 3 summarizes the values obtained for the main models with respect to each of the distributions shown in Figs. 4-13.
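Equation (3) can be evaluated as in the following sketch (function and argument names are illustrative):

```python
import numpy as np

def chi2(data, model, cov):
    """chi^2 = sum_ij (data_i - model_i) Cov^-1_ij (data_j - model_j).

    Solving the linear system Cov x = dy instead of forming the explicit
    inverse is numerically more stable for nearly singular covariances.
    """
    dy = np.asarray(data, float) - np.asarray(model, float)
    return float(dy @ np.linalg.solve(np.asarray(cov, float), dy))
```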

Profile of the UE in different categories
The differential cross sections as functions of the different observables are measured in the event categories introduced in Section 5. We report the profile, i.e., the average of the measured differential cross sections in the different event categories, and compare it to the expectations from the different simulation setups. Figures 14-23 summarize the results obtained. Additional results for p_T, profiled in categories of p_T(ℓℓ) and/or jet multiplicity, are shown in Figs. 24 and 25, respectively. In all figures, the pull of a simulated distribution with respect to the data, defined as the difference between the model and the data divided by the total uncertainty, is used to quantify the level of agreement.
The average charged-particle multiplicity and the averages of the momentum flux observables vary significantly when extra jets are found in the event or at higher p_T(ℓℓ) values. The same set of variables varies very slowly as a function of m(ℓℓ). Event shape variables are mostly affected by the presence of extra jets in the event, while varying slowly as functions of p_T(ℓℓ) or m(ℓℓ). The average sphericity increases significantly when no extra jets are present, showing that the UE is slightly more isotropic in these events. A noticeable change is also observed for the other event shape variables in the same categories.
For all observables, the MPI contribution is crucial: most of the pulls are observed to be larger than 5 when MPI is switched off in the simulation. Color reconnection effects are, on the other hand, more subtle and are more relevant for p_T, specifically when no additional jet is present in the event. This is illustrated by the fact that the pulls of the setup without CR are larger for events belonging to these categories. Event shape variables also show sensitivity to CR. All other variations of the UE and CR models tested yield smaller variations of the pulls, compared to the ones discussed.
Although a high pull value of the PW+PY8 simulation is obtained for several categories, when the different theory variations are taken into account the envelope encompasses the data. The variations of α_S^FSR(M_Z) and α_S^ISR(M_Z) account for the largest contribution to this envelope. As already noted in the previous section, the HERWIG-based and SHERPA models tend to be in worse agreement with the data than PW+PY8, indicating that further tuning of these models is needed.

Sensitivity to the choice of α S in the parton shower
The sensitivity of these results to the choice of α_S(M_Z) in the parton shower is tested by scanning the χ² value defined by Eq. (3) as a function of α_S^ISR(M_Z) or α_S^FSR(M_Z). While no sensitivity is found in the former case, most observables are influenced by the choice of α_S^FSR(M_Z). The most sensitive variable is found to be p_T, and the corresponding variation of the χ² function is reported in Fig. 26. A polynomial interpolation is used to determine the minimum of the scan (best fit), and the points at which the χ² function increases by one unit are used to derive the 68% confidence interval (CI). The degree of the polynomial is selected by a stepwise regression based on an F-test statistic [70]. A value of α_S^FSR(M_Z) = 0.120 ± 0.006 is obtained, which is lower than the one assumed in the Monash tune [71] and used in the CUETP8M2T4 tune. This value is compatible with the ones obtained from the differential cross sections measured as a function of p_T in different p_T(ℓℓ) regions or in events with different additional jet multiplicities. Table 4 summarizes the results obtained. From the inclusive results, we conclude that the energy-scale range corresponding to the 5% uncertainty attained in the determination of α_S^FSR(M_Z) can be approximated by a [√2, 1/√2] variation, improving considerably over the canonical [2, 0.5] scale variations.
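The scan and the Δχ² = 1 interval extraction can be sketched as follows (illustrative Python; the actual analysis selects the polynomial degree via an F-test, here it is fixed to a quadratic by default):

```python
import numpy as np

def alphas_scan(alphas_values, chi2_values, deg=2):
    """Fit a polynomial to (alpha_S, chi^2) scan points, locate the
    minimum (best fit), and take the points where the polynomial rises
    one unit above the minimum as the 68% CI bounds."""
    poly = np.poly1d(np.polyfit(alphas_values, chi2_values, deg))
    # minimum of the fitted polynomial among its real critical points
    crit = [r.real for r in poly.deriv().roots if abs(r.imag) < 1e-9]
    best = min(crit, key=poly)
    # crossings of chi2_min + 1 give the 68% confidence interval
    crossings = sorted(r.real for r in (poly - (poly(best) + 1.0)).roots
                       if abs(r.imag) < 1e-9)
    lo = max(r for r in crossings if r < best)
    hi = min(r for r in crossings if r > best)
    return best, lo, hi
```

For a parabolic χ², the interval returned is exactly best ± 1/√a, with a the curvature of the fitted polynomial.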

Summary
The first measurement of the underlying event (UE) activity in tt dilepton events produced at hadron colliders has been reported. The measurement makes use of √s = 13 TeV proton-proton collision data collected by the CMS experiment in 2016, corresponding to 35.9 fb⁻¹. Using particle-flow reconstruction, the contribution from the UE has been isolated by removing, from the set of reconstructed charged particles in each event, those associated with the decay products of the tt event candidates or with pileup interactions. The measurements are expected to be valid for other tt final states, and can be used as a reference for complementary studies, e.g., of how different color reconnection (CR) models compare to data in the description of the jets from W → qq decays. The chosen observables and categories enhance the sensitivity to the modeling of multiparton interactions (MPI), CR, and the choice of the strong coupling at the Z boson mass (α_S^FSR(M_Z)) in the PYTHIA8 parton shower Monte Carlo simulation. These parameters have a significant impact on the modeling of tt production at the LHC. In particular, the compatibility of the data with different choices of the α_S^FSR(M_Z) parameter in PYTHIA8 has been quantified, resulting in a lower value than the one considered in Ref. [71].
The majority of the distributions analyzed indicate fair agreement between the data and the POWHEG+PYTHIA8 setup with the CUETP8M2T4 tune [17], but disfavor the setups in which MPI and CR are switched off or in which α_S^FSR(M_Z) is increased. The data also disfavor the default configurations in HERWIG++, HERWIG 7, and SHERPA. It has furthermore been verified that, as expected, the choice of the next-to-leading-order matrix-element generator does not significantly impact the expected characteristics of the UE, by comparing predictions from POWHEG and MadGraph5_aMC@NLO, both interfaced with PYTHIA8.
The present results test the hypothesis of UE universality at an energy scale typically higher than those at which models have previously been studied. The UE model is tested up to a scale of twice the top quark mass, and the measurements in categories of dilepton invariant mass indicate that it should be valid at even higher scales. In addition, the results can be used to improve the assessment of systematic uncertainties in future top quark analyses. They show that a value of α_S^FSR(M_Z) = 0.120 ± 0.006 is consistent with the data; the corresponding uncertainty translates to a variation of the renormalization scale by a factor of √2.