Study of the underlying event in top quark pair production in pp collisions at 13 TeV

Measurements of normalized differential cross sections as functions of the multiplicity and kinematic variables of charged-particle tracks from the underlying event in top quark and antiquark pair production are presented. The measurements are performed in proton-proton collisions at a center-of-mass energy of 13 TeV, and are based on data collected by the CMS experiment at the LHC in 2016 corresponding to an integrated luminosity of 35.9 fb^-1. Events containing one electron, one muon, and two jets from the hadronization and fragmentation of b quarks are used. These measurements characterize, for the first time, properties of the underlying event in top quark pair production and show no deviation from the universality hypothesis at energy scales typically above twice the top quark mass.


Introduction
At the LHC, top quark and antiquark pairs (tt) are dominantly produced in the scattering of the proton constituents via quantum chromodynamics (QCD) at an energy scale (Q) of about two times the t quark mass (m_t). The properties of the t quark can be studied directly from its decay products, as it decays before hadronizing. Mediated by the electroweak interaction, the t quark decay yields a W boson and a quark, the latter carrying the QCD color charge of the mother particle. Given the large branching fraction for the decay into a bottom quark, B(t → Wb) = 0.957 ± 0.034 [1], in this analysis we assume that each t quark or antiquark yields a corresponding bottom (b) quark or antiquark in its decay. Other quarks may also be produced, in a color-singlet state, if a W → qq' decay occurs. Being colored, these quarks fragment and hadronize, giving rise to an experimental signature with jets. Thus, when performing precision measurements of t quark properties at hadron colliders, an accurate description of the fragmentation and hadronization of the quarks from the hard scatter process, as well as of the "underlying event" (UE), defined below, is essential. First studies of the fragmentation and hadronization of the b quarks in tt events have been reported in Refs. [2,3]. In this paper, we present the first measurement of the properties of the UE in tt events at a scale Q ≥ 2m_t.
The UE is defined as any hadronic activity that cannot be attributed to the particles stemming from the hard scatter, and in this case from tt decays. Because of energy-momentum conservation, the UE constitutes the recoil against the tt system. In this study, the hadronization products of initial- and final-state radiation (ISR and FSR) that cannot be associated with the particles from the tt decays are probed as part of the UE, even if they can be partially modeled by perturbative QCD. The main contribution to the UE comes from the color exchanges between the beam particles and is modeled in terms of multiparton interactions (MPI), color reconnection (CR), and beam-beam remnants (BBR), whose model parameters can be tuned to minimum bias and Drell-Yan (DY) data.
The study of the UE in tt events provides a direct test of its universality at higher energy scales than those probed in minimum bias or DY events. This is relevant as a direct probe of CR, which is needed to confine the initial QCD color charge of the t quark into color-neutral states. The CR mainly occurs between one of the products of the fragmentation of the b quark from the t quark decay and the proton remnants. This is expected to induce an ambiguity in the origin of some of the final states present in a bottom quark jet [4][5][6]. The impact of these ambiguities on the measurement of t quark properties is evaluated through phenomenological models that need to be tuned to the data. Recent examples of the impact that different model parameters have on m_t can be found in Refs. [7,8].
The analysis is performed using final states where both of the W bosons decay to leptons, yielding one electron and one muon with opposite charge sign, and the corresponding neutrinos. In addition, two b jets are required in the selection, as expected from the tt → (eνb)(μνb) decay. This final state is chosen because of its expected high purity.

Events of interest are selected using a two-tiered trigger system [10]. The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100 kHz within a time interval of less than 4 µs. The second level, known as the high-level trigger, consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1 kHz before data storage.
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [11].

Signal and background modeling
This analysis is based on proton-proton (pp) collision data at a center-of-mass energy √s = 13 TeV, collected by the CMS detector in 2016 and corresponding to an integrated luminosity of 35.9 fb^-1 [12].
The tt process is simulated with the powheg (v2) generator in the heavy quark production (hvq) mode [13][14][15]. The NNPDF3.0 next-to-leading-order (NLO) parton distribution functions (PDFs) with the strong coupling parameter α_S = 0.118 at the Z boson mass scale (M_Z) [16] are utilized in the matrix-element (ME) calculation. The renormalization and factorization scales, μ_R and μ_F, are set to m_T = √(m_t² + p_T²), where m_t = 172.5 GeV and p_T is the transverse momentum in the tt rest frame. Parton showering is simulated using pythia8 (v8.219) [17] and the CUETP8M2T4 UE tune [18]. The CUETP8M2T4 tune is based on the CUETP8M1 tune [19] but uses a lower value of α_S^ISR(M_Z) = 0.1108 in the parton shower (PS); this value leads to a better description of jet multiplicities in tt events at √s = 8 TeV [20]. The leading-order (LO) version of the same NNPDF3.0 set is used in the PS and MPI simulation in the CUETP8M2T4 tune. The cross section used for the tt simulation is 832 +20/−29 (scale) ± 35 (PDF+α_S) pb, computed at next-to-next-to-leading order (NNLO) plus next-to-next-to-leading-logarithmic accuracy [21].
Throughout this paper, data are compared to the predictions of different generator settings for the tt process. Table 1 summarizes the main characteristics of the setups and abbreviations used in the paper. Among other UE properties, CR and MPI are modeled differently in the alternative setups considered, hence the interest in comparing them to the data. Three different signal ME generators are used: powheg, MadGraph5_amc@nlo (v2.2.2) with the FxFx merging scheme [22,23] for jets from the ME calculations and PS, and sherpa (v2.2.4) [24]. The latter is used in combination with OpenLoops (v1.3.1) [25], and with the CS parton shower based on the Catani-Seymour dipole subtraction scheme [26]. In addition, two different herwig PS versions are used. Further alternative setups include other CR models [5,32], an alternative MPI model based on the "Rope hadronization" framework describing Lund color strings overlapping in the same area [33,34], variations of the choice of α_S(M_Z) in the parton shower, and variations of the values that constitute the CUETP8M2T4 tune, according to their uncertainties.
Background processes are simulated with several generators. The WZ, W+jets, and ZZ → 2ℓ2q processes (where ℓ denotes any of the charged leptons e/μ/τ) are simulated at NLO, using MadGraph5_amc@nlo with the FxFx merging. Drell-Yan production, with dilepton invariant mass m(ℓℓ) greater than 50 GeV, is simulated at LO with MadGraph5_amc@nlo using the so-called MLM matching scheme [35] for jet merging. The powheg (v2) program is furthermore used to simulate the WW and ZZ → 2ℓ2ν processes [36,37], while powheg (v1) is used to simulate the tW process [38]. The single top quark t-channel background is simulated at NLO using powheg (v2) and MadSpin contained in MadGraph5_amc@nlo (v2.2.2) [39,40]. The residual tt+V backgrounds, where V = W or Z, are generated at NLO using MadGraph5_amc@nlo. The cross sections of the DY and W+jets processes are normalized to the NNLO prediction, computed using fewz (v3.1.b2) [41], and single top quark processes are normalized to the approximate NNLO prediction [42]. Processes containing two vector bosons (hereafter referred to as dibosons) are normalized to the NLO predictions computed with MadGraph5_amc@nlo, with the exception of the WW process, for which the NNLO prediction [43] is used.

[Fig. 1: Distribution of all PF candidates reconstructed in a Pw+Py8 simulated tt event in the η-φ plane. Only particles with p_T > 900 MeV are shown, with a marker whose area is proportional to the particle p_T. The fiducial region in η is represented by the dashed lines.]

[Fig. 2 caption fragment: ...the sum of the expectations for the signal and backgrounds. The shaded band represents the uncertainty associated with the integrated luminosity and the theoretical value of the tt cross section.]
All generated events are processed through the Geant4-based [44][45][46] CMS detector simulation and the standard CMS event reconstruction. Additional pp collisions per bunch crossing (pileup) are included in the simulations, with the same multiplicity distribution as that observed in data, i.e., about 23 simultaneous interactions per bunch crossing, on average.

Event reconstruction and selection
The selection targets events in which each W boson decays to a charged lepton and a neutrino. Data are selected online with single-lepton and dilepton triggers. The particle flow (PF) algorithm [9] is used for the reconstruction of final-state objects. The offline event selection is similar to the one described in Ref. [47]. At least one PF charged lepton candidate with p_T > 25 GeV and another one with p_T > 20 GeV, both having |η| < 2.5, are required. The two leptons must have opposite charges and an invariant mass m(ℓ±ℓ∓) > 12 GeV. When extra leptons are present in the event, the dilepton candidate is built from the highest p_T leptons in the event. Events with e±μ∓ in the final state are used for the main analysis, while e±e∓ and μ±μ∓ events are used to derive the normalization of the DY background. The simulated events are corrected for the differences between data and simulation in the efficiencies of the trigger, lepton identification, and lepton isolation criteria. The corrections are derived with Z → e±e∓ and Z → μ±μ∓ events using the "tag-and-probe" method [48] and are parameterized as functions of the p_T and η of the leptons.

[Fig. 3: Display of the transverse momentum of the selected charged particles, the two leptons, and the dilepton pair in the transverse plane, corresponding to the same event as in Fig. 1. The p_T of the particles is proportional to the length of the arrows, and the dashed lines represent the regions defined relative to the p_T(ℓℓ) direction. For clarity, the p_T of the leptons has been rescaled by a factor of 0.5.]
Jets are clustered using the anti-k_T jet finding algorithm [49,50] with a distance parameter of 0.4 and all the reconstructed PF candidates in the event. The charged hadron subtraction algorithm is used to mitigate the contribution from pileup to the jets [51]. At least two jets with p_T > 30 GeV and |η| < 2.5, identified by a b-tagging algorithm, are required. The b-tagging is based on a "combined secondary vertex" algorithm [52] characterized by an efficiency of about 66%, corresponding to misidentification probabilities for light quark and c quark jets of 1.5 and 18%, respectively. A p_T-dependent scale factor is applied to the simulations in order to reproduce the efficiency of this algorithm, as measured in data.
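For illustration, the anti-k_T inter-particle and beam distances with the distance parameter R = 0.4 used here can be sketched as below. This is a minimal sketch of the distance measure only, not the FastJet implementation used by CMS, and the function names are assumptions.

```python
import math

R = 0.4  # distance parameter used in the analysis

def dij(pt_i, y_i, phi_i, pt_j, y_j, phi_j):
    """Anti-kT inter-particle distance: min(pt_i^-2, pt_j^-2) * dR^2 / R^2."""
    # Wrap the azimuthal difference into [0, pi].
    dphi = math.pi - abs(abs(phi_i - phi_j) % (2.0 * math.pi) - math.pi)
    dr2 = (y_i - y_j) ** 2 + dphi ** 2
    return min(pt_i ** -2, pt_j ** -2) * dr2 / R ** 2

def diB(pt_i):
    """Anti-kT particle-beam distance: pt_i^-2."""
    return pt_i ** -2
```

In the clustering loop, a pair is merged when its dij is smaller than every diB; the pt^-2 weighting is what makes the algorithm cluster around hard particles first.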
The reconstructed vertex with the largest value of summed physics-object p_T² is taken to be the primary pp interaction vertex. The physics objects are the jets, clustered using the jet finding algorithm [49,50] with the tracks assigned to the vertex as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the p_T of those jets. The missing transverse momentum of the event, p_T^miss, is defined as the magnitude of the negative vector sum of the momenta of all reconstructed PF candidates in an event, projected onto the plane perpendicular to the direction of the proton beams.
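The p_T^miss definition above is a negative vector sum in the transverse plane; a minimal sketch (the function name and the (pt, phi) input format are illustrative assumptions, not CMS software):

```python
import math

def ptmiss(candidates):
    """Magnitude of the negative vector sum of the transverse momenta.

    candidates: iterable of (pt, phi) for all reconstructed PF candidates.
    """
    px = -sum(pt * math.cos(phi) for pt, phi in candidates)
    py = -sum(pt * math.sin(phi) for pt, phi in candidates)
    return math.hypot(px, py)
```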
All backgrounds are estimated from simulation, with the exception of the DY background normalization. The latter is estimated making use of the so-called R_out/in method [53], in which events with same-flavor leptons are used to normalize the yield of eμ pairs from DY production of τ lepton pairs. The normalization of the simulation is estimated from the number of events in the data within a 15 GeV window around the Z boson mass [53]. For eμ events, we use the geometric mean of the scale factors determined for ee and μμ events. With respect to the simulated predictions, a scale factor of 1.3 ± 0.4 is obtained from this method, with statistical and systematic uncertainties added in quadrature. The systematic uncertainty is estimated from the differences found in the scale factor for events with 0 or 1 b-tagged jets in the same-flavor channels. We select a total of 52 645 eμ events with an expected purity of 96%. The data agree with the expected yields within 2.2%, a value smaller than the uncertainty in the integrated luminosity alone, 2.5% [12]. The tW events are expected to constitute 90% of the total background.
In the simulation, the selection is mimicked at the particle level with the techniques described in Ref. [54]. Jets and leptons are defined at the particle level with the same conventions as adopted by the rivet framework [55]. The procedure ensures that the selections and definitions of the objects at particle level are consistent with those used in the rivet routines. A brief description of the particle-level definitions follows:
- prompt charged leptons (i.e., not produced as a result of hadron decays) are reconstructed as "dressed" leptons with nearby photon recombination, using the anti-k_T algorithm with a distance parameter of 0.1;
- jets are clustered with the anti-k_T algorithm with a distance parameter of 0.4, using all particles remaining after removing both the leptons from the hard process and the neutrinos;
- the flavor of a jet is identified by including B hadrons in the clustering.
Using these definitions, the fiducial region of this analysis is specified by the same requirements that are applied offline (at the reconstruction level) for leptons and jets. Simulated events are categorized as fulfilling only the reconstruction-based, only the particle-based, or both selection requirements. If a simulated event passes only the reconstruction-level selection, it is considered in the "misidentified signal" category, i.e., it does not contribute to the fiducial region defined in the analysis and thus is treated as a background process. In the majority of the bins of each of the distributions analyzed, the fraction of signal events passing both the reconstruction- and particle-level selections is estimated to be about 80%, while the fraction of misidentified signal events is estimated to be less than 10%.

Characterization of the underlying event
In order to isolate the UE activity in data, the contribution from both pileup and the hard process itself must be identified and excluded from the analysis. The contamination from pileup events is expected to yield soft particles in time with the hard process, as well as tails in the energy deposits from out-of-time interactions. The contamination from the hard process is expected to be associated with the two charged leptons and two b jets originating from the tt decay chain.
In order to minimize the contribution from these sources, we use the properties of the reconstructed PF candidates in each event. The track associated with a charged PF candidate is required to be compatible with originating from the primary vertex. This condition reduces the contamination from pileup in the charged-particle collection to a negligible amount. A simple association by proximity in z with respect to the primary vertex of the event is expected to yield a pileup-robust, high-purity selection. For the purpose of this analysis, all charged PF candidates are required to satisfy the following requirements:
- p_T > 900 MeV and |η| < 2.1;
- the associated track must either be used in the fit of the primary vertex or be closer to it in z than to any other reconstructed vertex in the event.
After performing the selection of the charged PF candidates, we check which ones have been used in the clustering of the two b-tagged jets and which ones match the two charged lepton candidates within a cone of ΔR = √((Δη)² + (Δφ)²) = 0.05, where φ is the azimuthal angle in radians. All PF candidates failing the kinematic requirements, matched to another primary vertex in the event, or matched to the charged leptons or b-tagged jets are removed from the analysis. The UE analysis proceeds by using the remaining charged PF candidates. Figure 1 shows, in a simulated tt event, the contribution from charged and neutral PF candidates, the charged component of the pileup, and the hard process. The charged PF candidates that are used in the study of the UE are represented after applying the selection described above.
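The ΔR matching to the lepton candidates can be sketched as follows (helper names and the (eta, phi) tuple format are hypothetical; the 0.05 cone is the one quoted above):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance dR = sqrt(dEta^2 + dPhi^2), with dPhi wrapped to [0, pi]."""
    dphi = math.pi - abs(abs(phi1 - phi2) % (2.0 * math.pi) - math.pi)
    return math.hypot(eta1 - eta2, dphi)

def is_lepton_matched(cand, leptons, cone=0.05):
    """cand and each lepton are (eta, phi) pairs; True if any lepton is inside the cone."""
    return any(delta_r(cand[0], cand[1], l[0], l[1]) < cone for l in leptons)
```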
Various characteristics, such as the multiplicity of the selected charged particles, the flux of momentum, and the topology or shape of the event have different sensitivity to the modeling of the recoil, the contribution from MPI and CR, and other parameters.
The first set of observables chosen in this analysis is related to the multiplicity and momentum flux in the event:
- charged-particle multiplicity: N_ch;
- magnitude of the p_T of the charged-particle recoil system, i.e., of the vector sum of the charged-particle p_T;
- scalar sum of the p_k of the charged particles, where k = T or z;
- average p_T (or p_z) per charged particle, computed from the ratio between the scalar sum and the charged multiplicity.
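These flux observables follow directly from the selected charged particles. A minimal numpy sketch, with assumed dictionary keys; taking the scalar p_z sum over |p_z| is an assumption consistent with the definitions above:

```python
import numpy as np

def flux_observables(pt, pz, phi):
    """pt, pz, phi: 1D arrays for the selected UE charged particles."""
    n_ch = len(pt)
    # Recoil: magnitude of the vector sum of the transverse momenta.
    recoil = np.hypot((pt * np.cos(phi)).sum(), (pt * np.sin(phi)).sum())
    sum_pt, sum_pz = pt.sum(), np.abs(pz).sum()
    return {"n_ch": n_ch, "recoil": recoil,
            "sum_pt": sum_pt, "mean_pt": sum_pt / n_ch,
            "sum_pz": sum_pz, "mean_pz": sum_pz / n_ch}
```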
The second set of observables characterizes the UE shape and is computed from the so-called linearized sphericity tensor [56,57]:

S^{μν} = (Σ_i p_i^μ p_i^ν / |p_i|) / (Σ_i |p_i|),

where the index i runs over the particles associated with the UE, as for the previous variables, and the μ and ν indices refer to the (x, y, z) components of the momentum of the particles. The eigenvalues (λ_i) of S^{μν} are sorted in decreasing order, i.e., with λ_1 the largest one, and are used to compute the following observables [58]:
- Aplanarity: A = (3/2) λ_3 measures the p_T component out of the event plane, defined by the two leading eigenvectors. Isotropic (planar) events are expected to have A = 1/2 (0).
- Sphericity: S = (3/2)(λ_2 + λ_3) measures the p_T² with respect to the axis of the event. An isotropic (dijet) event is expected to have S = 1 (0).
- C = 3(λ_1λ_2 + λ_1λ_3 + λ_2λ_3), which vanishes for dijet-like events.
- D = 27 λ_1λ_2λ_3, which vanishes for planar events.
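Under the tensor definition above, the event shapes can be sketched with numpy as follows; this is an illustration, with the standard event-shape formulas for C and D assumed:

```python
import numpy as np

def event_shapes(p):
    """p: (N, 3) array of (px, py, pz) for the UE charged particles.

    Linearized sphericity tensor: S^{mu nu} = sum_i p_i^mu p_i^nu / |p_i|,
    normalized by sum_i |p_i| so that the three eigenvalues sum to 1.
    """
    norms = np.linalg.norm(p, axis=1)
    S = np.einsum('im,in,i->mn', p, p, 1.0 / norms) / norms.sum()
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]  # lambda_1 >= lambda_2 >= lambda_3
    return {"aplanarity": 1.5 * lam[2],
            "sphericity": 1.5 * (lam[1] + lam[2]),
            "C": 3.0 * (lam[0] * lam[1] + lam[0] * lam[2] + lam[1] * lam[2]),
            "D": 27.0 * lam[0] * lam[1] * lam[2]}
```

For a perfectly isotropic momentum configuration all eigenvalues equal 1/3, giving S = 1, A = 1/2, C = 1, and D = 1, while a back-to-back dijet topology gives zero for S, C, and D.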
Further insight can be gained by studying the evolution of the two sets of observables in different categories of the tt system kinematic quantities. The categories chosen below are sensitive to the recoil or the scale of the energy of the hard process, and are expected to be reconstructed with very good resolution. Additionally, these variables minimize the effect of migration of events between categories due to resolution effects.
The dependence of the UE on the recoil system is studied in categories that are defined according to the multiplicity of additional jets with p_T > 30 GeV and |η| < 2.5, excluding the two selected b-tagged jets. The categories with 0, 1, or more than 1 additional jet are used for this purpose. The additional jet multiplicity is partially correlated with the charged-particle multiplicity and helps to factorize the contribution from ISR. The distribution of the number of additional jets is shown in Fig. 2 (upper left).
In addition to these categories, the transverse momentum of the dilepton system, p_T(ℓℓ), is used, as it preserves some correlation with the transverse momentum of the tt system and, consequently, with the recoil of the system. The p_T(ℓℓ) direction is used to define three regions in the transverse plane of each event. The regions are denoted as "transverse" (60° < |Δφ| < 120°), "away" (|Δφ| > 120°), and "toward" (|Δφ| < 60°). Each reconstructed particle in an event is assigned to one of these regions, depending on the difference of its azimuthal angle with respect to the p_T(ℓℓ) vector. Figure 3 illustrates how this classification is performed on a typical event. This classification is expected to enhance the sensitivity of the measurements to the contributions from ISR, MPI, and CR in different regions. In addition, the magnitude p_T(ℓℓ) is used to categorize the events, and its distribution is shown in Fig. 2 (upper right). The p_T(ℓℓ) variable is estimated with a resolution better than 3%.
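The region assignment by azimuthal difference can be sketched as (the function name is an assumption; the boundaries are the ones quoted above):

```python
import math

def ue_region(phi_particle, phi_ll):
    """Assign a particle to the toward/transverse/away region w.r.t. pT(ll)."""
    # Wrap the azimuthal difference into [0, 180] degrees.
    dphi = math.degrees(
        math.pi - abs(abs(phi_particle - phi_ll) % (2.0 * math.pi) - math.pi))
    if dphi < 60.0:
        return "toward"
    if dphi > 120.0:
        return "away"
    return "transverse"
```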
Lastly, the dependence of the UE on the energy scale of the hard process is characterized by measuring it in different categories of the m(ℓℓ) variable. This variable is correlated with the invariant mass of the tt system, but not with its p_T. The m(ℓℓ) distribution is shown in Fig. 2 (lower). A resolution better than 2% is expected in the measurement of m(ℓℓ).
Although both p_T(ℓℓ) and m(ℓℓ) are only partially correlated with the tt kinematic quantities, they are expected to be reconstructed with very good resolution. Because of the two escaping neutrinos, the kinematics of the tt pair can only be reconstructed by using the p_T^miss measurement, which has poorer experimental resolution compared to the leptons. In addition, given that p_T^miss is correlated with the UE activity, as it stems from the balance of all PF candidates in the transverse plane, it could introduce a bias in the definition of the categories and the observables studied to characterize the UE. Hence the choice to use only dilepton-related variables.

Corrections to the particle level
Inefficiencies of the track reconstruction due to the residual contamination from pileup, nuclear interactions in the tracker material, and accidental splittings of the primary vertex [59] are expected to cause a slight bias in the observables described above. The correction for these biases is estimated from simulation and applied to the data by means of an unfolding procedure. At particle (generator) level, the distributions of the observables of interest are binned according to the resolutions expected from simulation. Furthermore, we require that each bin contains at least 2% of the total number of events. The migration matrix (K), used to map the reconstruction-level to the particle-level distributions, is constructed using twice the number of bins at the reconstruction level as are used at particle level. This procedure ensures almost diagonal matrices, which have a numerically stable inverse. The matrix is extended with an additional row that is used to count the events failing the reconstruction-level requirements, but found in the fiducial region of the analysis, i.e., passing the particle-level requirements. The inversion of the migration matrix is made using a Tikhonov regularization procedure [60], as implemented in the TUnfoldDensity package [61]. The unfolded distribution is found by minimizing a χ² function of the form

χ²(λ) = (y − Kλ)ᵀ V_yy⁻¹ (y − Kλ) + τ² |L(λ − λ₀)|²,

where y are the observations, V_yy is an estimate of the covariance of y (calculated using the simulated signal sample), λ is the particle-level expectation, |L(λ − λ₀)|² is a penalty function (with λ₀ being estimated from the simulated samples), and τ > 0 is the so-called regularization parameter. The latter regulates how strongly the penalty term contributes to the minimization of χ². In our setup, we choose the operator L to be the curvature, i.e., the second derivative, of the output distribution. The value of the τ parameter is optimized for each distribution by minimizing its average global correlation coefficient [61]. Small values, i.e., τ < 10⁻³, are found for all the distributions; the global correlation coefficients are around 50%. After unfolding, the distributions are normalized to unity.
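A regularized χ² of this kind has a closed-form minimizer. The sketch below uses a discrete second-difference matrix for the curvature operator L; it is an illustration of the Tikhonov approach under these assumptions, not the TUnfoldDensity implementation:

```python
import numpy as np

def tikhonov_unfold(y, K, Vyy, lam0, tau):
    """Minimize (y - K lam)^T Vyy^-1 (y - K lam) + tau^2 |L(lam - lam0)|^2.

    L is the discrete second-derivative (curvature) matrix; the minimizer
    solves the normal equations of the regularized least-squares problem.
    """
    n = K.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = (1.0, -2.0, 1.0)  # second-difference stencil
    Vinv = np.linalg.inv(Vyy)
    A = K.T @ Vinv @ K + tau ** 2 * (L.T @ L)
    b = K.T @ Vinv @ y + tau ** 2 * (L.T @ L @ lam0)
    return np.linalg.solve(A, b)
```

With τ = 0 this reduces to plain matrix inversion of the (weighted) migration problem; increasing τ pulls the solution toward the smooth reference λ₀.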
The statistical coverage of the unfolding procedure is checked by means of pseudo-experiments based on independent Pw+Py8 samples. The pull of each bin in each distribution is found to be consistent with that of a standard normal distribution. The effect of the regularization term in the unfolding is checked in the data by folding the measured distributions and comparing the outcome to the originally reconstructed data. In general, the folded and the original distributions agree within 1-5% in each bin, with the largest differences observed in bins with low yield.
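The logic of such a pull check can be illustrated with a toy study (purely illustrative; here the "unfolding" is the identity and the pseudo-data are Gaussian):

```python
import numpy as np

# Toy pull study: draw pseudo-experiments around a known truth and check that
# (estimate - truth) / sigma is consistent with a standard normal distribution,
# i.e., pull mean near 0 and pull width near 1.
rng = np.random.default_rng(42)
truth, sigma, n_toys = 100.0, 10.0, 5000
estimates = rng.normal(truth, sigma, size=n_toys)
pulls = (estimates - truth) / sigma
print(f"pull mean = {pulls.mean():.3f}, pull width = {pulls.std():.3f}")
```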

Systematic uncertainties
The impact of different sources of uncertainty is evaluated by unfolding the data with alternative migration matrices, which are obtained after changing the settings in the simulations as explained below. The effect of a source of uncertainty in non-fiducial tt events is included in this estimate by updating the background prediction. The observed bin-by-bin differences are used as estimates of the uncertainty. The impact of the uncertainty in the background normalization is the only exception to this procedure, as detailed below. The covariance matrices associated with each source of uncertainty are built using the procedure described in detail in Ref. [62]. In case several sub-contributions are used to estimate a source of uncertainty, the corresponding differences in each bin are treated independently, symmetrized, and used to compute individual covariance matrices, which preserve the normalization. Variations in the event yields are fully absorbed by normalizing the measured cross sections. Thus, only the sources of uncertainty that yield variations in the shapes have a non-negligible impact.

Experimental uncertainties
The following experimental sources of uncertainty are considered:
- Pileup: Although pileup is included in the simulation, there is an intrinsic uncertainty in modeling its multiplicity. Varying this modeling within its uncertainty induces a small migration of events between the different p_T(ℓℓ) or m(ℓℓ) categories used in the analysis.
- Jet energy scale: A p_T- and η-dependent parameterization of the jet energy scale is used to vary the calibration of the jets in the simulation. The corrections and uncertainties are obtained using methods similar to those described in Ref. [51]. The effect of these variations is similar to that described for the lepton energy scale uncertainty; in this case the migration of events occurs between different jet multiplicity categories.
- Jet energy resolution: Each jet is further smeared up or down depending on its p_T and η, with respect to the central value measured in data. The difference with respect to data is measured using methods similar to those described in Ref. [51]. The main effect induced in the analysis from altering the jet energy resolution is similar to that described for the jet energy scale uncertainty.
- b tagging and misidentification efficiencies: The scale factors used to correct for the difference in performance between data and simulation are varied according to their uncertainties and depending on the flavor of the jet [52]. The main effect of this variation is to move jets into the candidate b jet sample or remove them from it.
- Background normalization: The impact of the uncertainty in the normalization of the backgrounds is estimated by computing the difference obtained with respect to the nominal result when these contributions are not subtracted from the data. This difference is expected to cover the uncertainty in the normalization of the main backgrounds, i.e., DY and the tW process, and the uncertainty in the normalization of the tt events that are expected to pass the reconstruction-level requirements but fail the generator-level ones.
The total expected background contribution is at the level of 8-10%, depending on the bin. The impact from this uncertainty is estimated to be < 5%.
- Tracking reconstruction efficiency: The efficiency of track reconstruction is found to be greater than 90%. It is monitored using muon candidates from Z → μ⁺μ⁻ decays, and the ratio of the four-body D⁰ → K⁻π⁺π⁻π⁺ decay to the two-body D⁰ → K⁻π⁺ decay. The latter is used to determine a data-to-simulation scale factor (SF_trk) as a function of the pseudorapidity of the tracks, and for the different data-taking periods used in this analysis. The envelope of the SF_trk values, with uncertainties included, ranges from 0.89 to 1.17 [66], and it provides adequate coverage for the residual variations observed in the charged-particle multiplicity between different data-taking periods. The impact of the variation of SF_trk by its uncertainty is estimated by using the value of |1 − SF_trk| as the probability to remove a reconstructed track from the event or to promote an unmatched generator-level charged particle to a reconstructed track, depending on whether SF_trk < 1 or SF_trk > 1, respectively. Different migration matrices, reflecting the different tracking efficiencies obtained from varying the uncertainty in SF_trk, are obtained by this method and used to unfold the data. Although the impact is non-negligible on variables such as N_ch or the p_T sum, it has very small impact (< 1%) on variables such as the average p_T and p_z per particle.
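The SF_trk variation procedure described above (drop a track with probability 1 − SF_trk, or promote an unmatched generator-level particle with probability SF_trk − 1) can be sketched as follows; function and argument names are illustrative:

```python
import random

def apply_tracking_sf(reco_tracks, unmatched_gen, sf, rng=None):
    """Vary the tracking efficiency in simulation by a scale factor sf.

    For sf < 1, each reconstructed track is dropped with probability 1 - sf;
    for sf > 1, each unmatched generator-level charged particle is promoted
    to a reconstructed track with probability sf - 1.
    """
    rng = rng or random.Random(1)  # fixed seed for reproducibility of the sketch
    if sf < 1.0:
        return [t for t in reco_tracks if rng.random() >= 1.0 - sf]
    return reco_tracks + [g for g in unmatched_gen if rng.random() < sf - 1.0]
```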

Theoretical uncertainties
The following theoretical uncertainties are considered:
- Scale choices: μ_R and μ_F are varied individually in the ME by factors between 0.5 and 2, excluding the extreme opposite-direction combinations in which (μ_R, μ_F) are scaled by (2, 0.5) or (0.5, 2), according to the prescription described in Refs. [67,68].
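The six scale variations implied by this prescription can be enumerated directly (a small illustrative snippet, with 1.0 denoting the nominal scale):

```python
# muR and muF varied by factors 0.5 and 2 around the nominal (1.0), excluding
# the nominal point itself and the opposite-direction extremes (2, 0.5), (0.5, 2).
factors = [0.5, 1.0, 2.0]
variations = [(r, f) for r in factors for f in factors
              if (r, f) != (1.0, 1.0) and (r, f) not in ((2.0, 0.5), (0.5, 2.0))]
```

This leaves the six standard combinations: the two correlated variations (0.5, 0.5) and (2, 2), plus the four single-scale variations.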
- Resummation scale and α_S used in the parton shower: In powheg, the real emission cross section is scaled by a damping function, parameterized by the so-called h_damp variable [13][14][15]. This parameter controls the ME-PS matching and regulates high-p_T radiation by reducing real emissions generated by powheg by a factor of h_damp²/(p_T² + h_damp²). The corresponding tune variations have been determined using the same methods as described in Ref. [19]. In the following, these will be referred to as UE up/down variations.
- Color reconnection: The dependence on the CR model is furthermore tested using other models besides the nominal one, which is the MPI-based CR model where the tt decay products are excluded from reconnections to the UE. A dedicated sample where the reconnections to resonant decay products are enabled (hereafter designated as ERDon) is used to evaluate possible differences in the unfolded results. In addition, alternative models for the CR are tested: one sample utilizing the "gluon move" model [5], in which gluons can be moved to another string, and another utilizing the "QCD-based" model with string formation beyond LO [32]; in these samples, reconnections to resonant processes are enabled. The envelope of the differences is considered as a systematic uncertainty.
- t quark p_T: The effect of reweighting the simulated t quark p_T (p_T(t)) distribution to match the one reconstructed from data [69,70] is added as an additional uncertainty. This has the most noticeable effect on the fraction of events that do not pass the reconstruction-level requirements and migrate out of the fiducial phase space.
- t quark mass: An additional uncertainty is considered, related to the value of m_t = 172.5 GeV used in the simulations, by varying this parameter by ±0.5 GeV [71].
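The powheg damping function quoted above is a simple rational factor of the emission p_T; for illustration (generic numbers, not the tuned h_damp value):

```python
def damping_factor(pt, hdamp):
    """powheg real-emission suppression: h_damp^2 / (pT^2 + h_damp^2).

    Approaches 1 for soft emissions (pt << h_damp) and falls off as pt^-2
    for hard emissions, which are then handed over to the ME description.
    """
    return hdamp ** 2 / (pt ** 2 + hdamp ** 2)
```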
Any possible uncertainty from the choice of the hadronization model is expected to be significantly smaller than the theory uncertainties described above. This has been explicitly tested by comparing the results at reconstruction level and after unfolding the data with the Pw+Py8 and Pw+Hw++ migration matrices. The latter relies on a different hadronization model, but it also folds in other modeling differences, such as the underlying event tune and the parton shower. Thus it can only be used as a test setup to validate the measurement.

Summary of systematic uncertainties
The uncertainties in the measurement of the normalized differential cross sections are dominated by the systematic uncertainties, although in some bins of the distributions the statistical uncertainties are a large component. The experimental uncertainties have, in general, a small impact; the most relevant is the track reconstruction efficiency for the N ch , p T , p z , and | p T | observables. Other observables are affected at a sub-percent level by this uncertainty. Theory uncertainties affect the measurements more significantly, a fact that underlines the need for better tuning of the model parameters.
Event shape observables are found to be the most robust against this uncertainty, while p T , p z , and | p T | are the ones that suffer most from it. Other sources of theoretical uncertainty typically have a smaller effect.
To further illustrate the impact of the different sources on the observables considered, we list in Table 2 the uncertainties in the average of each observable. Only systematic uncertainties that change the average of at least one of the observables by at least 0.5% are included. The total uncertainty in the average of a given quantity ranges from 1 to 8%, and hence the comparison with model predictions can be carried out using these averages alone.

Inclusive distributions
The normalized differential cross sections measured as functions of N ch , p T , p̄ T , | p T |, p z , p̄ z , sphericity, aplanarity, C, and D are shown in Figs. 4, 5, 6, 7, 8, 9, 10, 11, 12 and 13, respectively. The distributions are obtained after unfolding the background-subtracted data and normalizing the result to unity. The result is compared to the simulations, whose settings are summarized in Table 1 and in the appendix. For the predictions, the statistical uncertainty is represented as an error bar. In the specific case of the Pw+Py8 setup, the error bar represents the envelope obtained by varying the main parameters of the CUETP8M2T4 tune according to their uncertainties. The envelope includes the variation of the CR model, α ISR S (M Z ), α FSR S (M Z ), the h damp parameter, and the μ R /μ F scales at the ME level. Thus, the uncertainty band shown for the Pw+Py8 setup should be interpreted as the theory uncertainty in that prediction. For each distribution we give, in addition, the ratio between the different predictions and the data.
In tt events the UE contribution is determined to have typically O(20) charged particles with p T ∼ p z ≈ 2 GeV, vectorially summing to a recoil of about 10 GeV. The distribution of the UE activity is anisotropic (as the sphericity is < 1), close to planar (as the aplanarity peaks at low values of ≈0.1), and peaks at around 0.75 in the C variable, which identifies three-jet topologies. The D variable, which identifies the four-jet topology, is instead found to have values closer to 0. The three-prong configuration in the energy flux of the UE described by the C variable can be identified with two of the eigenvectors of the linearized sphericity tensor being correlated with the direction of the b-tagged jets, and the third one being determined by energy conservation.

Table 4 The first rows give the best fit values for α FSR S for the Pw+Py8 setup, obtained from the inclusive distribution of different observables, and the corresponding 68 and 95% confidence intervals. The last two rows give the preferred value of the renormalization scale in units of M Z , and the associated ±1σ interval that can be used as an estimate of its variation to encompass the differences between data and the Pw+Py8 setup
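The event shape variables discussed above all derive from the eigenvalues λ1 ≥ λ2 ≥ λ3 of the linearized sphericity tensor; a minimal sketch of their computation, assuming the standard eigenvalue definitions S = 3/2 (λ2 + λ3), A = 3/2 λ3, C = 3 (λ1 λ2 + λ1 λ3 + λ2 λ3), and D = 27 λ1 λ2 λ3:

```python
import numpy as np

def event_shapes(momenta):
    """Sphericity, aplanarity, C and D from the eigenvalues of the
    linearized sphericity tensor
        M^{ab} = sum_i p_i^a p_i^b / |p_i|  /  sum_i |p_i| ,
    whose eigenvalues lambda1 >= lambda2 >= lambda3 sum to 1.
    `momenta` is an (N, 3) array of charged-particle momenta."""
    p = np.asarray(momenta, dtype=float)
    norms = np.linalg.norm(p, axis=1)
    m = np.einsum('ia,ib,i->ab', p, p, 1.0 / norms) / norms.sum()
    l1, l2, l3 = sorted(np.linalg.eigvalsh(m), reverse=True)
    return {
        'sphericity': 1.5 * (l2 + l3),
        'aplanarity': 1.5 * l3,
        'C': 3.0 * (l1 * l2 + l1 * l3 + l2 * l3),
        'D': 27.0 * l1 * l2 * l3,
    }

# A perfectly back-to-back (pencil-like) event has S = C = D = 0,
# while a fully isotropic one approaches S = C = D = 1.
pencil = event_shapes([(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)])
```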
When an extra jet with p T > 30 GeV is selected, we measure a change in the profile of the event shape variables, with average values lower by 20-40% with respect to the distributions in which no extra jet is found. Thus when an extra jet is present, the event has a dijet-like topology instead of an isotropic shape. The results obtained with pythia8 for the parton shower simulation show negligible dependence on the ME generator with which it is interfaced, i.e., Pw+Py8 and MG5_aMC yield similar results. In all distributions the contribution from MPI is strong: switching off this component in the simulation has a drastic effect on the predictions of all the variables analyzed. Color reconnection effects are more subtle to identify in the data. In the inclusive distributions, CR effects are needed to improve the theory accuracy for p T < 3 GeV or p z < 5 GeV. The differences between the CR models tested (as discussed in detail in Sect. 3) are nevertheless small and almost indistinguishable in the inclusive distributions. In general the Pw+Py8 setup is found to be in agreement with the data, when the total theory uncertainty is taken into account. In most of the distributions it is the variation of α FSR S (M Z ) that dominates the theory uncertainty, as this variation leads to the most visible changes in the UE.
The other parton shower setups tested do not describe the data as accurately, but they were not tuned to the same level of detail as Pw+Py8. The Pw+Hw++- and Pw+Hw7-based setups show trends with respect to the data that are distinct from those observed in any of the pythia8-based setups. While describing the UE event shape variables fairly well, herwig++ and herwig 7 disagree with the N ch , p T , and p z measurements. The sherpa predictions disagree with the data in most of the observables.
For each distribution the level of agreement between the theory predictions and the data is quantified by means of a χ 2 variable, defined as

χ 2 = Σ ij δy i (Cov −1 ) ij δy j , (3)

where δy i (δy j ) are the differences between the data and the model in the i-th ( j-th) bin, the sums run over the bins of the distribution, n represents the total number of bins, and Cov −1 is the inverse of the covariance matrix.
Given that the distributions are normalized, the covariance matrix becomes invertible only after removing its first row and column. In the calculation of Eq. (3) we assume that the theory uncertainties are uncorrelated with those assigned to the measurements. Table 3 summarizes the values obtained for the main models with respect to each of the distributions shown in Figs. 4, 5, 6, 7, 8, 9, 10, 11, 12 and 13. The values presented in the table quantify the level of agreement of each model with the measurements. Low values of the χ 2 per number of degrees of freedom (dof) are obtained for the Pw+Py8 setup when all the theory uncertainties of the model are taken into account, in particular for the event shape variables. This indicates that the theory uncertainty envelope is conservative.
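As a sketch of this χ 2 computation, with the first bin dropped as described above (function and variable names are illustrative):

```python
import numpy as np

def chi2_normalized(data, model, cov):
    """chi^2 between a normalized measured distribution and a model
    prediction, dropping the first bin: its content is fixed by the
    normalization, so the full covariance matrix is singular."""
    delta = (np.asarray(data, float) - np.asarray(model, float))[1:]
    cov_red = np.asarray(cov, float)[1:, 1:]   # remove first row and column
    return float(delta @ np.linalg.inv(cov_red) @ delta)

# toy example: three normalized bins with uncorrelated unit covariance
chi2 = chi2_normalized([0.50, 0.25, 0.25], [0.50, 0.20, 0.30], np.eye(3))
```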

Profile of the UE in different categories
The differential cross sections as functions of the different observables are measured in the event categories introduced in Sect. 5. We report the profile, i.e., the average of the measured differential cross sections in the different event categories, and compare it to the expectations from the different simulation setups. The corresponding results are shown in Figs. 14 through 25. In all figures, the pull of the simulation distributions with respect to the data, defined as the difference between the model and the data divided by the total uncertainty, is used to quantify the level of agreement. The average charged-particle multiplicity and the average of the momentum flux observables vary significantly when extra jets are found in the event or for higher values of the dilepton transverse momentum p T (ℓℓ). The same set of variables varies very slowly as a function of the dilepton invariant mass m(ℓℓ). Event shape variables are mostly affected by the presence of extra jets in the event, while varying slowly as a function of p T (ℓℓ) or m(ℓℓ). The average sphericity increases significantly when no extra jets are present in the event, showing that the UE is slightly more isotropic in these events. A noticeable change is also observed for the other event shape variables in the same categories.
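For concreteness, the pull used in these comparisons is a one-line quantity (the function name is illustrative):

```python
def pull(model, data, sigma_total):
    """Pull of a model prediction with respect to the measured value:
    the difference (model - data) in units of the total uncertainty,
    combining statistical and systematic contributions."""
    return (model - data) / sigma_total

# e.g. a prediction 2 total standard deviations above the measurement
p = pull(model=12.0, data=10.0, sigma_total=1.0)  # 2.0
```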
For all observables, the MPI contribution is crucial: most of the pulls are observed to be larger than 5 when MPI is switched off in the simulation. Color reconnection effects are on the other hand more subtle and are more relevant for p T , specifically when no additional jet is present in the event. This is illustrated by the fact that the pulls of the setup without CR are larger for events belonging to these categories. Event shape variables also show sensitivity to CR. All other variations of the UE and CR models tested yield smaller variations of the pulls, compared to the ones discussed.
Although high pull values are obtained for the Pw+Py8 simulation in several categories, the envelope encompasses the data when the different theory variations are taken into account. The variations of α FSR S (M Z ) and α ISR S (M Z ) account for the largest contribution to this envelope. As already noted in the previous section, the Pw+Hw++, Pw+Hw7, and sherpa models tend to be in worse agreement with the data than Pw+Py8, indicating that further tuning of the first two is needed.

Sensitivity to the choice of α S in the parton shower
The sensitivity of these results to the choice of α S (M Z ) in the parton shower is tested by scanning the χ 2 value defined by Eq. (3) as a function of α ISR S (M Z ) or α FSR S (M Z ), keeping all other parameters of the generator fixed. A more complete treatment could only be achieved with a fully tuned UE, which lies beyond the scope of this paper. While no sensitivity is found to α ISR S (M Z ), most observables are influenced by the choice of α FSR S (M Z ). The most sensitive variable is found to be p T , and the corresponding variation of the χ 2 function is reported in Fig. 26. A polynomial interpolation is used to determine the minimum of the scan (best fit), and the points at which the χ 2 function increases by one unit are used to derive the 68% confidence interval (CI). The degree of the polynomial is selected by a stepwise regression based on an F-test statistic [72]. A value of α FSR S (M Z ) = 0.120 ± 0.006 is obtained, which is lower than the one assumed in the Monash tune [73] and used in the CUETP8M2T4 tune. This value is compatible with the ones obtained from the differential cross sections measured as a function of p T in different p T (ℓℓ) regions or in events with different additional jet multiplicities.
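The Δχ 2 = 1 extraction can be sketched with a simple quadratic interpolation; a fixed parabola is assumed here for brevity, whereas the analysis selects the polynomial degree with an F-test.

```python
import numpy as np

def scan_minimum(alphas, chi2s):
    """Fit a parabola to (alpha_S, chi^2) scan points and return the
    best-fit value together with the 68% CI half-width obtained from
    the Delta(chi^2) = 1 condition."""
    a, b, c = np.polyfit(alphas, chi2s, 2)   # chi2 ~ a*x^2 + b*x + c
    best = -b / (2.0 * a)                    # vertex of the parabola
    half_width = 1.0 / np.sqrt(a)            # a*dx^2 = 1  =>  dx = 1/sqrt(a)
    return best, half_width

# toy scan points generated from an exactly parabolic chi^2 (illustrative)
xs = [0.10, 0.11, 0.12, 0.13, 0.14]
ys = [50.0 * (x - 0.12) ** 2 + 3.0 for x in xs]
best, hw = scan_minimum(xs, ys)              # recovers 0.12 exactly
```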

Summary
The first measurement of the underlying event (UE) activity in tt dilepton events produced at hadron colliders has been reported. The measurement makes use of √ s = 13 TeV proton-proton collision data collected by the CMS experiment in 2016, corresponding to 35.9 fb −1 . Using particle-flow reconstruction, the contribution from the UE has been isolated by removing, from the set of reconstructed charged particles in each event, those associated with the decay products of the tt event candidates as well as with pileup interactions. The measurements performed are expected to be valid for other tt final states, and can be used as a reference for complementary studies, e.g., of how different color reconnection (CR) models compare to data in the description of the jets from W → qq decays. The chosen observables and categories enhance the sensitivity to the modeling of multiparton interactions (MPI), CR, and the choice of the strong coupling at the mass of the Z boson (α FSR S (M Z )) in the pythia8 parton shower Monte Carlo simulation. These parameters have a significant impact on the modeling of tt production at the LHC. In particular, the compatibility of the data with different choices of the α FSR S (M Z ) parameter in pythia8 has been quantified, resulting in a lower value than the one considered in Ref. [73].
The majority of the distributions analyzed indicate fair agreement between the data and the powheg+pythia8 setup with the CUETP8M2T4 tune [18], but disfavor the setups in which MPI and CR are switched off, or in which α FSR S (M Z ) is increased. The data also disfavor the default configurations in powheg+herwig++, powheg+herwig7, and sherpa. It has furthermore been verified, by comparing predictions from powheg and MadGraph5_aMC@NLO, both interfaced with pythia8, that the choice of the next-to-leading-order matrix-element generator does not, as expected, significantly impact the characteristics of the UE.
The present results test the hypothesis of UE universality at an energy scale typically higher than the ones at which models have been studied so far. The UE model is tested up to a scale of twice the top quark mass, and the measurements in categories of the dilepton invariant mass indicate that it should be valid at even higher scales. In addition, the results can be used to improve the assessment of systematic uncertainties in future top quark analyses. They show that a value of α FSR S (M Z ) = 0.120 ± 0.006 is consistent with the data; the corresponding uncertainty translates to a variation of the renormalization scale by a factor of √ 2.