Constraining new physics with searches for long-lived particles: Implementation into SModelS

We present the implementation of heavy stable charged particle (HSCP) and R-hadron signatures into SModelS v1.2. We include simplified-model results from the 8 and 13 TeV LHC and demonstrate their impact on two new physics scenarios motivated by dark matter: the inert doublet model and a gravitino dark matter scenario. For the former, we find sensitivity up to dark matter masses of 580 GeV for small mass splittings within the inert doublet, while missing energy searches are not able to constrain any significant part of the cosmologically preferred parameter space. For the gravitino dark matter scenario, we show that both HSCP and R-hadron searches provide important limits, allowing us to constrain the viable range of the reheating temperature.


Introduction
Exploring physics beyond the standard model (BSM) is one of the key scientific goals of the LHC. Simplified models have turned out to provide useful benchmarks for interpreting LHC results and investigating their implications for the open questions of today's fundamental physics. The public code SModelS [1,2] provides a very efficient framework for this reinterpretation by decomposing the signal of an arbitrary new physics model (respecting a Z2 symmetry or a larger symmetry with a Z2 subgroup) into simplified-model topologies. This then allows us to apply the simplified-model cross-section upper limits or efficiency maps provided by the experimental collaborations (typically as a function of the BSM masses involved) in the context of the new model.1 So far, only BSM searches in final states with missing transverse momentum (MET) could be taken into account within SModelS. However, it has widely been recognized recently that well-motivated BSM theories can contain non-neutral long-lived particles (LLPs) leading to distinct signatures, which often provide great sensitivity at the LHC [7]. A method for (re)using LLP simplified-model constraints was developed previously by some of us in [8]. In this Letter, we present the implementation of HSCP and R-hadron signatures in SModelS v1.2 and demonstrate its use for constraining arbitrary BSM scenarios containing such signatures. The models considered as illustrative examples are the inert doublet model (IDM) and the minimal supersymmetric standard model (MSSM) with a gravitino as the lightest supersymmetric particle (LSP) and a stau next-to-LSP (NLSP). In both cases we concentrate on the cosmologically interesting regions of parameter space which satisfy dark matter constraints.

* Corresponding author. E-mail addresses: heisig@physik.rwth-aachen.de (J. Heisig), sabine.kraml@lpsc.in2p3.fr (S. Kraml), andre.lessa@ufabc.edu.br (A. Lessa).

1 A certain degree of approximation results from neglecting properties like the exact production mechanism and the spin of the particles in decays; see Refs. [3][4][5][6] for a corresponding discussion.
The IDM provides one of the simplest dark matter models supplementing the standard model by just another SU(2) doublet. While MET searches are scarcely sensitive to the cosmologically allowed region of parameter space, we show that, for small mass splittings within the inert doublet, a large range of dark matter masses can be tested with HSCP searches.
Our second showcase, the gravitino dark matter model, is a cosmologically attractive scenario: it alleviates the gravitino problem and accommodates large reheating temperatures T_R ∼ 10^9 GeV in the early Universe while respecting bounds from big bang nucleosynthesis (BBN) [9]. The complexity of the model gives rise to a larger number of contributing topologies, including R-hadron signatures relevant for both squarks and gluinos when their decays are 3- or 4-body suppressed. We show that the LLP results have the potential to be competitive with cosmological constraints and impact the allowed range for the reheating temperature.
With respect to previous versions, SModelS v1.2 includes an additional step in the decomposition, accounting for the probabilities of BSM particles to decay promptly or to appear long-lived. Our implementation goes beyond the method presented in [8] by incorporating a better treatment of finite lifetimes (for cτ of the order of the detector size) and by including CMS 13 TeV results. Concretely, we added the experimental cross-section upper limits for the direct production of HSCPs from [10,11] and R-hadrons from [11], and we developed and included recast efficiency maps for the 13 TeV analysis [11]. SModelS v1.2 is publicly available at http://smodels.hephy.at.
In the following, Section 2 briefly describes the implementation of LLP signatures into SModelS. The application to the IDM and gravitino dark matter scenarios is presented in Section 3. Section 4 contains our conclusions. Finally, in Appendix A and Appendix B we provide, respectively, details about the recasting of the HSCP analyses and a discussion about the treatment of intermediate lifetimes.

Implementation in SModelS
Given the BSM particle masses, quantum numbers, total decay widths, branching ratios (BRs) and total production cross-sections, SModelS v1.2 performs a decomposition of the input model into a coherent sum of simplified-model topologies. Because of the required Z2 symmetry, the simplified-model topologies have a two-branch structure resulting from the production of two BSM states followed by their respective cascade decays. We require that each cascade decay terminates in a long-lived particle (which may lead to MET or a visible LLP signature), while all intermediate decays are required to be prompt. By long-lived we mean, here and in the following, that the particle traverses the whole detector.
At each step in the cascade we compute the probability for the BSM particle to decay promptly (F_prompt) or to decay outside the detector (F_long), which can be relevant for metastable particles with proper decay lengths between a few cm and a few m [8]. The respective weight of a topology, σ, is hence given by:

σ = σ_prod × F_long(X) F_long(Y) × ∏_i F_prompt(i),   (1)

where σ_prod is the production cross-section of the mother particles and X, Y are the BSM final states of the two decay chains; the index i runs over all intermediate BSM particles. Using the particle proper lifetime (τ), F_prompt and F_long can be approximated by:

F_prompt ≃ 1 − exp[−(ℓ_inner/γβ)_eff / (cτ)]   (2)

and

F_long ≃ exp[−(ℓ_outer/γβ)_eff / (cτ)],   (3)

where we choose (ℓ_inner/γβ)_eff = 1 mm and (ℓ_outer/γβ)_eff = 7 m. A detailed motivation of the above expressions and values is given in Appendix B. Note that this choice is conservative, since the signature of particles decaying inside the detector may provide additional sensitivity. However, such cases introduce a dependence on the decay products of the long-lived particle, which is left for future work. Topologies terminating in neutral LLPs (including stable neutral BSM particles) lead to the conventional MET signatures, which constitute the bulk of supersymmetry (SUSY) searches already included in the SModelS database [1,2,12]. In order to also be able to test topologies terminating in electrically or colour-charged BSM states, we have added the CMS searches for lepton-like HSCPs, colour-octet (gluino-like) R-hadrons and colour-triplet (squark-like) R-hadrons at 8 TeV [13] and 13 TeV [11] centre-of-mass energies.
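The prompt and long-lived fractions introduced at the beginning of this section can be evaluated in a few lines. Below is a minimal sketch (our own illustration, not actual SModelS code), assuming the effective travel lengths quoted above:

```python
import math

# Effective lengths quoted in the text:
# (l_inner / gamma*beta)_eff = 1 mm, (l_outer / gamma*beta)_eff = 7 m.
L_INNER_EFF = 1e-3  # [m]
L_OUTER_EFF = 7.0   # [m]

def f_prompt(ctau):
    """Probability that a particle with proper decay length ctau [m]
    decays promptly, i.e. within ~1 mm of effective travel."""
    return 1.0 - math.exp(-L_INNER_EFF / ctau)

def f_long(ctau):
    """Probability that the particle traverses the whole detector
    (~7 m of effective travel) without decaying."""
    return math.exp(-L_OUTER_EFF / ctau)

# Example: a metastable particle with ctau = 2 m decays inside the
# detector most of the time, so both fractions are well below one.
ctau = 2.0
print(f_prompt(ctau), f_long(ctau))
```

For very short decay lengths f_prompt approaches one and the particle contributes only to prompt topologies, while for cτ far beyond the detector size f_long approaches one and the particle is treated as detector-stable.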
For the 8 TeV HSCP results, we make use of the recasting provided in Ref. [10], while a dedicated recasting was performed for the 13 TeV results (see Appendix A for details). This allowed us to compute efficiency maps for the eight simplified-model topologies introduced in Ref. [8], which are included in the SModelS v1.2 database. For the R-hadron searches we consider only the direct production topology and make use of the cross-section upper limits from Refs. [11,13] for the cloud hadronization model (assuming a 50% probability for gluino-gluon bound state formation). These limits are also included in the SModelS v1.2 database. The relevant topologies and their notation in SModelS v1.2 are summarized in Fig. 1.

Physics applications
To demonstrate the physics impact, in the following we apply SModelS v1.2 to two BSM scenarios and derive the LHC constraints on their parameter space. As mentioned in the introduction, we consider the IDM as well as the MSSM with a gravitino LSP and a stau NLSP as illustrative examples.

The inert doublet model
The IDM is a two-Higgs-doublet model with an exact Z2 symmetry, under which all standard model fields (including the Higgs doublet H) are assumed to be even, while the second scalar doublet, Φ, is odd. It supplements the standard model Lagrangian by the gauge kinetic terms for Φ as well as additional terms in the scalar potential, which now reads

V = μ1² |H|² + μ2² |Φ|² + λ1 |H|⁴ + λ2 |Φ|⁴ + λ3 |H|² |Φ|² + λ4 |H†Φ|² + (λ5/2) [(H†Φ)² + h.c.].

After electroweak symmetry breaking the model contains five physical scalar states, the SM-like Higgs h0 and the inert scalars H0, A0 and H±, with masses given by

m_h0² = 2 λ1 v²,  m_H0² = μ2² + λ_L v²,  m_A0² = μ2² + λ_S v²,  m_H±² = μ2² + (λ3/2) v²,

where λ_{L,S} = (λ3 + λ4 ± λ5)/2 and v ≈ 246 GeV is the Higgs vacuum expectation value. Imposing m_h0 = 125.09 GeV [14], we are left with five free physical parameters: m_H0, m_A0, m_H±, λ_L and λ2.
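For illustration, the tree-level mass relations of the inert scalars can be coded up directly. The sketch below assumes the standard IDM conventions with λ_{L,S} = (λ3 + λ4 ± λ5)/2; the function name and parametrization are ours and conventions vary in the literature:

```python
import math

V = 246.0  # Higgs vacuum expectation value [GeV]

def idm_masses(mu2_sq, lam3, lam4, lam5):
    """Tree-level inert-scalar masses [GeV], assuming the standard IDM
    convention m^2 = mu2^2 + lambda_i v^2 with
    lambda_{L,S} = (lambda3 + lambda4 +- lambda5)/2."""
    lam_L = 0.5 * (lam3 + lam4 + lam5)
    lam_S = 0.5 * (lam3 + lam4 - lam5)
    m_H0 = math.sqrt(mu2_sq + lam_L * V**2)
    m_A0 = math.sqrt(mu2_sq + lam_S * V**2)
    m_Hpm = math.sqrt(mu2_sq + 0.5 * lam3 * V**2)
    return m_H0, m_A0, m_Hpm
```

In particular, for λ4 = λ5 = 0 the three inert states are degenerate, so the small splittings Δm relevant below are controlled by λ4 and λ5.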
Despite its simplicity, the IDM leads to a rich phenomenology and provides a viable dark matter candidate with observable signatures in direct and indirect detection experiments. For recent accounts see, e.g., [15][16][17]. At the LHC the IDM is extremely difficult to observe via MET searches [18][19][20][21][22][23]. For instance, a reinterpretation of dilepton plus MET searches at 8 TeV [24] provides sensitivity up to m_H0 ≈ 55 GeV only [23]. We checked that current 13 TeV analyses [25,26] do not significantly change this picture. However, in this low-mass region, the H0 thermal relic density, Ω_IDM, is above the observed dark matter density, Ω_CDM. There are in fact three regions where the IDM can account for the entire observed relic density (55 GeV ≲ m_H0 ≤ m_h0/2, m_H0 ≈ 72 GeV and m_H0 ≳ 500 GeV) and a region where it can account for only a fraction of Ω_CDM (72 ≲ m_H0 ≲ 500 GeV) [17,27].
Considering non-MET signatures, the situation is, however, different. As pointed out in [16], the mass-degenerate region Δm ≡ m_H± − m_H0 ≲ 1 GeV is potentially accessible at the LHC up to much larger m_H0 in searches for long-lived particles. Note also that in the third region mentioned above (m_H0 ≳ 500 GeV) a rather small mass splitting (|m_H0 − m_A0/H±| below a few GeV) is required in order to match the relic density constraint [16,28]. It is therefore interesting to focus on this region with small mass splittings, Δm ≲ 1 GeV, and use SModelS v1.2 to reinterpret the LHC limits from HSCP searches within the IDM. For this purpose we perform a scan over the 5-dimensional parameter space. The relic density is computed with micrOMEGAs [29,30], and we assume a 10% uncertainty on the prediction. In addition, following Ref. [17], we take into account constraints from Higgs invisible decays [31], electroweak precision observables [32,33], searches for charginos and neutralinos at LEP-II [18,34], indirect detection limits from γ-ray observations of dwarf spheroidal galaxies [35], and theoretical constraints on unitarity, perturbativity and vacuum stability computed with 2HDMC [33]. We use the nested-sampling algorithm Multinest [37,38] to efficiently explore the parameter space. In the following we only consider points allowed within the 2σ region of the global fit [17].
For the allowed parameter space we compute the decay tables and production cross-sections with MadGraph5_aMC@NLO [39] and evaluate the LHC constraints with SModelS v1.2. For each parameter space point, the constraining power of LHC searches can be conveniently parametrized by the ratio of the relevant signal cross-section, σ th , to the corresponding analysis upper limit, σ UL : r ≡ σ th /σ UL (see Ref. [2] for details). If r ≥ 1 for at least one analysis, we consider the point as excluded.
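The exclusion criterion can be expressed compactly. The following sketch is illustrative only; the dictionary layout is our own and not the actual SModelS output format:

```python
def r_value(sigma_th, sigma_ul):
    """Ratio of the predicted signal cross-section to the analysis
    upper limit; r >= 1 means the analysis excludes the point."""
    return sigma_th / sigma_ul

def is_excluded(results):
    """results: {analysis_id: (sigma_th, sigma_ul)} in consistent units.
    A point is considered excluded if r >= 1 for at least one analysis."""
    return any(r_value(s, ul) >= 1.0 for s, ul in results.values())
```

In practice one usually inspects the full list of r-values, since the analysis with the largest r identifies the most sensitive search for a given point.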
The results are shown in Fig. 2. In the left panel we display the signal strength r in the m_H± vs. cτ plane, while the right panel shows the dark matter fraction R in the m_H0 vs. Δm plane. Although r depends on all the model parameters, it is mostly driven by the mass and lifetime of the charged Higgs. We hence show an approximate exclusion curve in the plot (dark dashed curve). In all the parameter space considered, we have verified that the exclusion is completely dominated by the HSCP searches; even though the MET constraints were also applied, they could not exclude any of the points.
In the quasi-stable limit (cτ ≳ 10³ m), HSCP searches exclude H± masses up to 580 GeV. This limit goes beyond the 13 TeV LHC limit for direct production of detector-stable staus, which reaches m_τ̃ = 360 GeV [11]. The reason for the higher reach is the appearance of the additional W-mediated production channel pp → H0 H±, as well as (to a lesser extent and depending on m_A0) further electroweak production channels; the resulting signal cross-section of about 14 fb compares with σ(τ̃τ̃) ≈ 0.15 fb [11] for a stau of the same mass.
At the lower edge of our scan range, for m_H± ≈ m_H0 ≈ 100 GeV, HSCP searches are able to constrain decay lengths down to around cτ ≈ 2 m. Here the significant exponential suppression of F_long is compensated by the large cross-section. Note that our choice for (ℓ_outer/γβ)_eff leads to a somewhat conservative exclusion limit in this part of the parameter space (see Appendix B, left panel of Fig. B.6). Let us also point out that the calculation of cτ involves a certain degree of approximation. In particular, the decays are computed at leading order with MadGraph5_aMC@NLO, where the decay channels with first-generation quarks in the final state are turned off for Δm below the pion mass. This introduces the (artificial) gap seen at cτ ∼ 100 m, which would not appear if the proper phase space including the pion mass were considered.
In the right panel of Fig. 2, we show the HSCP exclusion curve in the Δm vs. m_H0 plane. It excludes a significant part of the parameter space with Δm ≲ 0.2 GeV. Interestingly, the HSCP searches are starting to exclude part of the region with R = 1, where the IDM can account for the entire observed relic density (m_H0 ≳ 500 GeV). Note that disappearing-track searches have the potential to further extend the reach towards larger Δm [16].

Gravitino dark matter scenario
The gravitino, the superpartner of the graviton, is an attractive dark matter candidate in supersymmetric theories. A gravitino (G̃) LSP is also interesting from the model-building point of view. Due to strong BBN constraints on gravitinos that decay into lighter sparticles, an unstable gravitino is either required to be extremely heavy or the reheating temperature has to be kept small [40]. While the former limits the viable options for supersymmetry breaking, the latter reduces the options for possible baryogenesis mechanisms. In the gravitino DM scenario these problems can be circumvented. Thermal gravitino production, however, still imposes strong constraints on the maximal reheating temperature to avoid overclosure of the Universe [41].
Once the gravitino is the LSP, the lightest sparticle of the MSSM (i.e. the NLSP) can be any sparticle. However, in order not to reintroduce severe constraints from BBN due to late decays of the NLSP [42], certain choices appear more promising than others. For instance, the stau is an attractive NLSP candidate, providing a large annihilation cross-section and thus a smaller freeze-out abundance. Moreover, τ̃ → τG̃ decays inject less hadronic (and electromagnetic) energy than the decays of various other NLSP candidates, e.g., χ̃1⁰ → Z/γ + G̃. As a consequence, the impact of late-time stau decays on BBN is reduced. At the same time, the smaller abundance also reduces the contribution to the gravitino abundance through NLSP decays. This allows for a larger thermal contribution to gravitino production without over-closing the Universe.
Since the thermal contribution is (approximately) proportional to the reheating temperature, Ω_G̃^th ∝ T_R [41,43,44], this allows for higher values of T_R, as preferred by classes of models for leptogenesis and inflation.6

6 We note that these arguments simply serve to motivate our choice of a stau NLSP, but do not exclude other NLSP candidates.
We here consider the gravitino DM scenario with a stau NLSP, revisiting the parameter scan performed in [9,45] and refining and updating the constraints from LLP searches at the LHC. The scan was performed within the framework of the so-called phenomenological MSSM (pMSSM) with the additional assumption m_q̃(1,2) ≡ m_Q̃(1,2) = m_ũ(1,2) = m_d̃(1,2), resulting in a 17-dimensional parameter space; the input parameters and scan ranges are specified in [9,45].7 The particle spectrum was computed with SuSpect 2.41 [46] and FeynHiggs 2.9.2 [47]. All points were required to have the lighter stau, τ̃1, as the NLSP. Moreover, m_h ∈ [123, 128] GeV [14,48] was imposed.
The decay widths and branching ratios were computed with SDecay [49,50] and (in the case of missing dominant decay channels) MadGraph5_aMC@NLO [39], while the freeze-out abundance of staus was computed with micrOMEGAs [29]. The analysis considered constraints on the MSSM Higgs sector from LEP, the Tevatron and the LHC, EW precision bounds, as well as theoretical constraints arising from charge- or colour-breaking vacua; see [45] for details and references. In addition, we here impose updated constraints [53].

Since the gravitino mass can be taken as an additional free parameter, for each point in the pMSSM parameter space satisfying the above requirements, 10 gravitino masses are randomly generated. They are required to lie within an interval where the total G̃ abundance can match the measured dark matter abundance:

Ω_G̃^th h² + Ω_G̃^non-th h² = Ω_CDM h²,   (9)

where Ω_G̃^th h² and Ω_G̃^non-th h² are the thermal and non-thermal gravitino relic abundances, respectively. The latter is generated once the frozen-out τ̃s decay to G̃s, thus enhancing the total gravitino abundance. The former is determined mostly by the gravitino mass, the gaugino masses and the reheating temperature, T_R [41,43,44]. Given a pMSSM point and gravitino mass, Ω_G̃^non-th h² is computed first, and from it the reheating temperature is determined by imposing eq. (9) with Ω_CDM h² = 0.1189, taking into account the contributions from the complete spectrum. For each of the resulting points in the (17+1)-dimensional parameter space (which by construction fulfils the relic density constraint), constraints from BBN [54,55] are imposed (see Ref. [9] for further details).

7 In this phenomenologically driven parameter scan, the spectrum parameters of the third-generation sfermions, m_τ̃1, m_t̃1, m_b̃1, θ_τ̃ and θ_t̃, were chosen as input parameters in order to obtain an equally good coverage of small- and large-mixing scenarios. Tree-level relations were used to translate these parameters into soft parameters. In the further analysis, only the values recalculated by the spectrum generator are used consistently.
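The determination of T_R sketched above can be illustrated in code, assuming a strictly linear relation Ω_G̃^th h² = c · T_R with a model-dependent coefficient c (hypothetical here; in practice it depends on the gravitino and gaugino masses [41,43,44]):

```python
OMEGA_CDM_H2 = 0.1189  # measured dark matter abundance used in the text

def omega_thermal_target(omega_nonth_h2):
    """Thermal gravitino abundance required to saturate eq. (9)."""
    return OMEGA_CDM_H2 - omega_nonth_h2

def reheating_temperature(omega_nonth_h2, c_model):
    """Invert the (approximately) linear relation Omega_th h^2 = c_model * T_R.
    c_model is a hypothetical, model-dependent coefficient [GeV^-1]."""
    target = omega_thermal_target(omega_nonth_h2)
    if target <= 0:
        # gravitinos from NLSP decays alone already overshoot Omega_CDM
        return None
    return target / c_model
```

This makes explicit why a smaller non-thermal contribution (e.g. from a stau NLSP with large annihilation cross-section) leaves more room for the thermal component and hence allows for a larger T_R.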
After selecting around 26k points satisfying the above constraints, we use SModelS v1.2 to decompose each point's LHC signal into all relevant simplified-model topologies, taking into account the production and cascade decays of all sparticles. The LO production cross-sections are computed with Pythia 8 [56,57]; NLL cross-sections for g̃g̃, g̃q̃ and q̃q̃ production are then obtained using NLL-Fast [58][59][60][61][62][63][64]. The results from the 8 and 13 TeV HSCP and R-hadron searches implemented in SModelS v1.2 are then applied in order to constrain the points. Note that, since all the cascade decays terminate (at collider scales) in the stau NLSP or in R-hadrons, MET constraints do not apply.
From all the tested points, ∼5k are excluded and ∼21k are allowed, as shown in Fig. 3. For a fixed gravitino mass (below ∼200 GeV), the largest values of T_R are excluded by the LHC constraints. This is due to the fact that the largest reheating temperatures are typically achieved for points with small gluino masses, which in turn have large production cross-sections at the LHC. As a result, these points can be probed by the HSCP and R-hadron searches. We see, however, that the largest values of T_R (∼ 10^9 GeV) obtained in the scan are still allowed by the LHC constraints obtained with SModelS.
In order to discuss which searches and topologies are relevant for testing the gravitino scenario, we show in Fig. 4 a histogram of the number of excluded points as a function of the gluino mass. In the left panel, the excluded points are grouped according to the most constraining type of signature. The stacked histogram shows that the bulk of the points is excluded by topologies containing HSCP signatures, as expected. Nonetheless, a significant fraction of points at low m_g̃ is excluded by R-hadron constraints for long-lived gluinos. These points typically have heavy squarks, resulting in suppressed 3-body or 4-body gluino decays. In a similar way, points with light squarks and heavy gauginos and higgsinos lead to long-lived squarks, which can also be constrained by the R-hadron searches, as shown by the orange histogram.
In order to illustrate the constraining power of combining results for multiple simplified-model topologies, we also display (dark blue histogram) the distribution of excluded points obtained using only the CMS limits for pair production of long-lived staus, gluinos and squarks. As can be seen, the number of excluded points in this case (∼ 200) is drastically reduced when compared to the one obtained with all the topologies included in SModelS. We point out, however, that the constraining power of SModelS is still limited by the number of simplified-model results contained in its database. The points from the pMSSM scan performed here display a large variety of topologies and many of them do not fall within the eight HSCP or the two R-hadron topologies included in the database.
SModelS can also conveniently be used to identify the most relevant missing topologies. In the right panel of Fig. 4, we show the non-excluded points with a total SUSY production cross-section (at 13 TeV) larger than 5 fb. Due to their sizeable cross-section, such points have the potential of being excluded by the HSCP or R-hadron searches. The stacked histogram shows the distribution of non-excluded points as a function of the gluino mass, grouped according to the missing topology with the largest weight (cross-section times branching ratio). Most of the points have m_g̃ < 1.7 TeV, since this ensures σ(g̃g̃) ≳ 5 fb. The almost flat distribution at large m_g̃ corresponds to points with light squarks in the spectrum, thus also resulting in large total cross-sections.
We see that the missing topology which occurs most often in Fig. 4 (light blue histogram) corresponds to pair production of BSM particles, which then undergo 4-body decays to the HSCP. This topology is mostly generated by points with very light gluinos, which decay directly to the τ̃ through 4-body decays. Furthermore, we see that topologies with 1-step decays to R-hadrons (green and dark red histograms) might also give potentially powerful constraints on this scenario. These topologies often appear in points with light squarks (gluinos) which decay to long-lived gluinos (squarks).

Conclusions
Long-lived particles have received increasing attention from both the experimental and theoretical communities, due to their novel collider signatures and possible connection to dark matter.
In this Letter we discussed how some of the LHC searches for LLPs, interpreted in the context of simplified models, can provide a powerful means for constraining interesting dark matter scenarios. For this purpose, we presented an implementation of (lepton-like) HSCP and (gluino- and squark-like) R-hadron signatures into SModelS. The SModelS database was extended by official CMS simplified-model results from the 8 and 13 TeV runs of the LHC, as well as additional efficiency maps obtained from a recast of the CMS analyses.
We then used the new version of SModelS, v1.2, to investigate how these searches impact two concrete physics scenarios: the inert doublet model and a gravitino dark matter model containing long-lived staus. While missing-energy searches are not able to constrain any significant part of the cosmologically interesting parameter space of the IDM, we found that, for small mass splittings within the inert doublet of Δm ≲ 0.2 GeV, HSCP searches are sensitive to dark matter masses of up to 580 GeV. The gravitino dark matter model, on the other hand, can be constrained by both HSCP and R-hadron searches, and we showed how SModelS can be used to derive constraints on the reheating temperature which are complementary to cosmological bounds.
The current version of SModelS is easily extendible to include additional new simplified-model results from searches for detector-stable LLPs from both CMS and ATLAS. The inclusion of other types of LLP signatures, for example displaced vertices, is more involved and left for future developments of the code.
Appendix A. Recasting of the HSCP analyses

In this appendix we detail the recasting of the 8 and 13 TeV HSCP searches used in SModelS v1.2. We first review the recasting of the 8 TeV CMS HSCP analysis presented in [10]. The authors of [10] provide signature efficiencies for the online and offline selection criteria, P_on(k) and P_off(k), respectively, as a function of the generator-level kinematics, k = (η, p_T, β), of isolated8 HSCP candidates. The signal efficiency for a given parameter point can be computed from the generated events:

(Aε) = (1/N) Σ_{i=1}^{N} [1 − (1 − P_on(k_i1))(1 − P_on(k_i2))] × [1 − (1 − P_off(k_i1))(1 − P_off(k_i2))],   (A.1)

where the sum runs over all N events and k_i1, k_i2 denote the generator-level kinematics of the (up to two) HSCP candidates in the i-th event.

8 Details on the imposed isolation criteria can be found in [8,10].
For one HSCP candidate in an event, the formula holds with P_on(k_i2) = P_off(k_i2) = 0. Using the efficiencies computed from (A.1) for the direct pair production of staus and the observed and expected numbers of background events (along with their errors) from Ref. [10], we obtain upper limits for the stau cross-section as a function of the stau mass. These can then be directly compared to the CMS values presented in Ref. [10]. The left panel of Fig. A.5 shows the CMS results and ours for the cross-section upper limits (upper frame) as well as their ratio (lower frame). As we can see, the difference is always below 5% and compatible with Monte Carlo errors. Hence we expect the recasting uncertainties for the efficiencies computed with the above method and included in the SModelS database to be of only a few percent.
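The event-level combination of per-candidate probabilities can be sketched in Python. The structure below (at least one candidate passing the online selection and at least one passing the offline selection, with the two steps treated as independent) is our reading of the recast procedure and a simplification of the actual criteria in [8,10]:

```python
def event_efficiency(cands):
    """Efficiency for one event, given a list of (P_on, P_off) pairs for
    its isolated HSCP candidates. Requires at least one candidate passing
    the online (trigger) selection and at least one passing the offline
    selection; the two steps are assumed to factorize (a simplification)."""
    p_on_none = 1.0   # probability that no candidate fires the trigger
    p_off_none = 1.0  # probability that no candidate passes offline cuts
    for p_on, p_off in cands:
        p_on_none *= (1.0 - p_on)
        p_off_none *= (1.0 - p_off)
    return (1.0 - p_on_none) * (1.0 - p_off_none)

def signal_efficiency(events):
    """Monte Carlo estimate: average event efficiency over all N events."""
    return sum(event_efficiency(e) for e in events) / len(events)
```

Note that an event with a second candidate whose probabilities are zero reduces to the single-candidate case, consistent with the prescription in the text.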
For the corresponding 13 TeV analysis [11], however, such a recast has not yet been provided by the collaboration. Nonetheless, since the trigger and selection criteria of the 8 and 13 TeV analyses are very similar, we expect that the signal efficiencies of the 13 TeV search do not differ drastically from the 8 TeV ones. The two analyses differ only in slightly stronger cuts on the ionization loss and time-of-flight, which effectively amount to a slightly stronger cut on the HSCP velocity in the latter analysis. Ref. [65] reported an attempt to model the 13 TeV signature efficiencies by multiplying the 8 TeV ones with a velocity-dependent correction function, fitted to resemble the signal efficiencies reported in [11]. On top of the slight reduction of the signature efficiencies at high velocities, this study revealed a better performance of the CMS detector in the region of low velocities, leading to larger signal efficiencies for large HSCP masses. This latter feature could, however, not be described by a universal velocity-dependent correction function for both direct pair production and inclusive production. For a proper understanding of the differences between Runs 1 and 2, further information (which is not publicly available) would be needed.
Therefore we choose to follow a conservative approach, taking into account only the reduction in efficiency due to the slightly stronger cuts on the velocity of the HSCP candidate. We model this by multiplying P_off(k) with a correction function which is assumed to depend only on β:

f_corr(β) = [1 + e^{a(β−b)}]^{−1}.

The effect of the slightly stronger cut on p_T [11] is found to be negligible for masses of a few hundred GeV. We determine the parameters a, b in a global fit to the signal efficiencies for the pair-production and inclusive-production models reported in Ref. [11]. To this end we define the χ² function:

χ² = Σ_m [ (Aε)_m(a,b) − (Aε)_m^CMS ]² / σ_A²,

where (Aε)_m(a,b) is the signal efficiency for a mass point m of the considered model, computed using the signature efficiencies with the correction function f_corr(a,b), and (Aε)_m^CMS is the respective signal efficiency reported by CMS in Ref. [11]. The characteristic size of the uncertainty, σ_A, is (arbitrarily) set to 0.02, which roughly reflects the precision of the recasting we aim at.
We minimize the χ² using Multinest [37,38] and obtain the best-fit parameters a ≈ 500 and b = 0.807. This fit uses all 12 benchmark points (6 for direct stau production and 6 for inclusive production) considered in Ref. [11] for which signal efficiencies were reported. We verified that very similar results are obtained when using only a subset of the benchmark points.
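As an illustration of the fit setup, the sketch below implements a sigmoid-type correction function together with the χ² described above. The functional form of f_corr is an assumption on our part (the text only states that it depends on β and has two parameters a, b), and eff_model/eff_cms are placeholders:

```python
import math

SIGMA_A = 0.02  # assumed characteristic uncertainty of the recast efficiencies

def f_corr(beta, a, b):
    """Hypothetical sigmoid ansatz: efficiencies of candidates faster than
    beta ~ b are suppressed, with the steepness controlled by a."""
    return 1.0 / (1.0 + math.exp(a * (beta - b)))

def chi2(a, b, eff_model, eff_cms):
    """Global chi^2 over the benchmark mass points.
    eff_model(m, a, b): corrected recast signal efficiency for mass point m;
    eff_cms: {m: signal efficiency reported by CMS}."""
    return sum(((eff_model(m, a, b) - e) / SIGMA_A) ** 2
               for m, e in eff_cms.items())
```

With the best-fit values a ≈ 500 and b = 0.807 quoted in the text, such a sigmoid acts essentially as a sharp velocity cut near β ≈ 0.8.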
Once again we compare the upper limits for the total direct stau production cross-section obtained using our recast procedure with the ones reported by CMS. The comparison is shown in the right panel of Fig. A.5, where we see that, despite a worse agreement than for the 8 TeV results, the 13 TeV upper limits agree within 20% over a large range of stau masses. Only for m_τ̃ ≳ 1.2 TeV does the recasting significantly diverge from the official values. We point out, however, that the results are still conservative due to the above-mentioned effects.

Appendix B. Finite lifetimes
Although the LLP searches considered here target detector-stable particles, they can also constrain models with intermediate decay lengths of the order of the detector size, where only a certain fraction of the particles decays after traversing the entire sensitive detector. In this case the fraction of long-lived particles, F_long, may be significantly smaller than one, and the resulting signal efficiency becomes sensitive to the value chosen for L_eff ≡ (ℓ_outer/γβ)_eff (see eq. (3)). Here we discuss in detail the expected values of L_eff and justify our choice, L_eff = 7 m, implemented in SModelS v1.2 and used in the results presented in Section 3.
The precise value of F long (and hence L eff ) depends on the input model and experimental analysis and requires a full Monte Carlo simulation for each model point in order to determine the boost distribution of the LLPs. However, since SModelS aims for a fast (although approximate) computation of LHC constraints for a large variety of BSM models, our goal is to determine an average value for L eff which can approximately reproduce the correct value of F long obtained from a full simulation. Before we can justify this approximation, we must first discuss how to obtain F long from the full simulation for a given input model.
We first define the probability for a (metastable) particle with momentum k to decay outside the detector in a given event:

F_long(k) = exp[ −ℓ_outer(|η|) / (γβ cτ) ].   (B.1)

Here γ = (1 − β²)^{−1/2}, and ℓ_outer(|η|) is the travel length through the CMS detector, which we approximate by considering a cylindrical volume with a radius of 7.4 m and a length of 10.8 m. Using the on- and offline efficiencies (P_on and P_off) discussed in Appendix A, we can extend the signal efficiency calculation of eq. (A.1) to the case of finite lifetimes by the replacement

P_off(k_ij) → P_off(k_ij) · F_long(k_ij),   (B.2)

where F_long(k_ij) is the decay probability from eq. (B.1) for the j-th particle in the i-th event.
Using eqs. (B.2) and (A.1) we can then compute the total signal efficiency for a given input model taking into account the correct finite lifetime suppression factor. This efficiency can then be compared to the one computed with SModelS using F long to extract the precise value for L eff given a specific input model.
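The η-dependent travel length through the cylindrical detector volume, and the resulting per-particle survival probability of eq. (B.1), can be sketched as follows (a simplified geometry with the particle originating at the centre of the cylinder):

```python
import math

R_DET = 7.4    # barrel radius of the approximating cylinder [m]
L_DET = 10.8   # full length of the cylinder [m]

def travel_length(eta):
    """Straight-line path from the interaction point to the edge of the
    cylindrical detector volume, as a function of pseudorapidity."""
    theta = 2.0 * math.atan(math.exp(-abs(eta)))  # polar angle from eta
    l_barrel = R_DET / math.sin(theta)            # exit through the barrel
    l_endcap = (L_DET / 2.0) / math.cos(theta)    # exit through the endcap
    return min(l_barrel, l_endcap)

def f_long_event(eta, gamma_beta, ctau):
    """Probability for a particle with boost gamma*beta and proper decay
    length ctau [m] to decay outside the detector, cf. eq. (B.1)."""
    return math.exp(-travel_length(eta) / (gamma_beta * ctau))
```

Central particles (η ≈ 0) must travel the full barrel radius of 7.4 m, while forward particles exit through the endcap after 5.4 m, which is why the effective L_eff extracted from a full simulation depends on the boost and angular distributions of the LLPs.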
In Fig. B.6 we consider the direct production of staus (left panel) and the direct production of squarks followed by a 1-step decay into the HSCP (right panel) at the 13 TeV LHC. In both cases we vary the HSCP mass and lifetime, with m_g̃ = m_q̃ = 2 TeV in the second case.
The colour of each point in the plane shows the correct value of L_eff = (ℓ_outer/γβ)_eff which should be used in SModelS for the signal efficiencies to exactly match the full-simulation values.
As we can see, L_eff does not vary significantly, spanning values of about 4-8 m. Therefore, in order to remain conservative while at the same time avoiding a (significant) underestimation of the signal efficiency, we choose L_eff = 7 m (or ℓ_outer ≈ 10 m and γβ ≈ 1.43). Fig. B.6 also compares the exclusion curves obtained with SModelS v1.2 using this approximation (dashed lines) to the ones obtained using the full simulation (solid lines). Although the SModelS curves are conservative, we see that they agree quite well with the full simulation in most of the parameter space. We have checked that this also holds for other masses and topologies, though for the sake of conciseness we do not show these results. We therefore conclude that using a fixed value L_eff = 7 m is a valid approximation.

Fig. B.6. The effective characteristic travel length, (ℓ_outer/γβ)_eff, in the parameter plane spanned by the HSCP mass, m_HSCP, and its proper decay length, cτ, for direct production (left panel) and for the 1-step decay topology, where we choose m_prod = 2 TeV for the mass of the produced mother particle (right panel). The solid and dashed curves denote the 95% CL exclusion for the event-based computation of ℓ_outer/γβ and for the approximation (ℓ_outer/γβ)_eff = 7 m, respectively. For direct production we take the cross-section of Drell-Yan stau pair production, while the cross-section for the 1-step decay corresponds to (degenerate) squark production with m_q̃ = m_g̃.