Constraining new resonant physics with top spin polarisation information

We provide a comprehensive analysis of the power of including top quark polarisation information in kinematically challenging top pair resonance searches, for which ATLAS and CMS are starting to lose sensitivity. Following the general modelling and analysis strategies pursued by the experiments, we analyse the semi-leptonic and di-leptonic channels and show that including polarisation information can lead to large improvements in the limit-setting procedure with large data sets. This allows stronger limits to be set for parameter choices where the sensitivity of the top pair invariant mass alone is not sufficient, and it highlights the importance of spin observables as part of a more comprehensive set of observables for BSM resonance searches.


Introduction
Given the lack of any conclusive hint of new physics beyond the standard model (BSM), it is important to enhance the sensitivity of collider searches that target new states and interactions still kinematically accessible at the Large Hadron Collider (LHC) after its first runs.
Observables which directly reflect the final state momentum transfer, such as invariant mass or transverse momentum distributions, are obvious choices in searches for new resonant states. However, if the new physics production cross section is small, these observables might not have enough discriminating power to isolate the signal from the competing backgrounds satisfactorily. In these circumstances, the LHC experiments typically favour multivariate techniques over rectangular cut flows. While this approach can increase the sensitivity dramatically, care needs to be taken during the training stage of the analysis. In particular, experimental constraints (such as the detector's granularity, response effects, etc.) need to be included and understood precisely in order to formulate a realistic sensitivity estimate. While this is clearly not an experimental limitation, the optimisation of these methods lies firmly within the remit of the expertise of the experimental community. Observables which enhance the sensitivity in a cut-flow-based analysis are likely to retain their power when used in such a context, so from this perspective it is still useful to investigate such observables so that they can be used by the experiments. Additionally, this allows us to gain a physical understanding of where the sensitivity comes from, which can be absent from high-dimensional multivariate analyses.
From a theoretical perspective, in the case of a low expected BSM cross section, there is therefore still motivation to ask whether observables complementary to invariant mass distributions provide sensitivity improvements, and to understand where sensitivity improvements found through multivariate techniques come from.
For instance, constraints on the production cross section of new resonant states derived from mass resonance searches are strongly dependent on the assumed width of the new state. Larger widths reduce the sensitivity gained from reaching the pole of the resonance in the energy range of a collider as the signal increasingly resembles a continuum excess rather than a localised peak, which reduces the shape information present in the distribution. We will show that spin polarisation observables are precisely observables which can improve the limit setting in such a case.
Assuming large statistics, multi-dimensional analyses in more than one observable become possible. This opens up the opportunity to study a variety of distributions and their correlations. In particular, a spin-assisted tt invariant mass search, which is the focus of this work, becomes possible. Models typically employed by the ATLAS and CMS collaborations to look for and constrain the presence of new resonances are extra-dimension scenarios, see e.g. [1,2]. In particular, the compactified Randall-Sundrum (RS) model of Ref. [3] introduces a series of isolated graviton resonances into the 4D effective theory. If SM fields propagate in the entire five-dimensional anti-de Sitter (AdS) background geometry, the 4D theory will also contain Kaluza-Klein copies of the low energy states that are identified with the SM.
The recent experimental study in [1] demonstrated that the constraint on the production cross section of e.g. a 3 TeV gluon g_KK decaying to tt weakens by almost an order of magnitude when going from Γ/m = 10% to Γ/m = 40%. Such large widths can be problematic from a modelling perspective but are not unexpected in the strongly coupled theories inherent to the dual formulation of RS-type theories. In fact, one of the coupling choices we will make in our analysis corresponds to a width of Γ/m = 37.5%, to be compared to Γ/m ≈ 15% for the default coupling choice made for the ATLAS benchmark point. This does not require the presence of additional strongly coupled states in the direct vicinity, as these are given by the higher Kaluza-Klein modes, which are still well separated in mass, although the convolution with parton densities could in practice produce a non-negligible contribution at lower masses as their widths also get large. From this AdS/CFT [4][5][6][7] perspective, the top quark, being the heaviest particle discovered so far, plays a special role, as its mass could be direct evidence of (at least partial) compositeness. A potential composite structure of extra resonances could therefore be reflected in the analysis of the associated top quark spin observables, while a tt bump search alone does not access this level of detail.
These BSM-induced effects can be contrasted with the fact that tt production in the SM at the LHC is dominated by parity-invariant QCD processes. We therefore expect to produce almost unpolarised tops. At the high invariant masses we consider there is a sizeable contribution from weak processes which makes the SM expectation slightly left-handed: for m(tt) > 3 TeV, P_t ≈ −0.15, where P_t = +1 (−1) corresponds to completely right-handed (left-handed) tops. This fact has inspired many studies of top polarisation as a probe of BSM physics, both in pair [8][9][10][11][12][13][14][15][16][17] and in single [18][19][20][21][22][23][24][25] production. As the decays of Kaluza-Klein gluons g_KK and gravitons G_KK are dominated by right-handed tops, these distributions are modified, as pointed out in, for example, [26,27].
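For reference, the polarisation P_t quoted here can be written in the standard way as an asymmetry between the numbers of right- and left-handed tops (a textbook definition, not spelled out explicitly in the text):

```latex
P_t = \frac{N(t_R) - N(t_L)}{N(t_R) + N(t_L)} ,
```

so that P_t ≈ −0.15 corresponds to a modest excess of left-handed tops in the SM at high m(tt).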
The crucial point for including spin information in the limit setting is that increasing the width of a parent particle has only a modest effect on the spin observables of its decay products. They therefore offer a great opportunity not only to give us more information generically, but also to reduce the impact of considering wider signal models. We will show that this allows us to enhance the sensitivity of analyses like [1]. Therefore, we consider pp → g_KK/G_KK → t_R t̄_R production in this paper and study both the semi-leptonic and the di-leptonic final states of the top decays in the region where the reported sensitivity is low. Our goal is to determine to what extent top polarisation and spin correlation measurements allow us to make stronger empirical statements for the models studied in e.g. [1]. 1 Our results can be considered a litmus test that motivates adding such observables to the aforementioned multivariate techniques pursued by the experiments.
The paper is organised as follows: in Sect. 2 we briefly introduce the model and discuss the parameters relevant for our analysis to make this paper self-contained. In Sect. 3.1 we discuss the semi-leptonic final state, while Sect. 3.2 focuses on the di-leptonic final state. In Sect. 4 we summarise our results, and we present our conclusions in Sect. 5.

The model
In RS1 models [3] the hierarchy problem is solved by introducing an extra compactified dimension r_UV < z < r_TeV with a warped anti-de Sitter geometry AdS_5. This explains the fine-tuning in M_Planck/M_Weak in terms of the localisation of the 4D graviton near the "Planck" brane at z = r_UV, with a fundamental scale of M_Planck, and of the Higgs sector near the "TeV" brane at z = r_TeV, with a fundamental scale of M_Weak. Thanks to the warped geometry we then expect M_Planck/M_Weak ∼ exp{π k(r_TeV − r_UV)}, where k is the AdS curvature scale and r_C = r_TeV − r_UV is the size of the extra dimension. This is satisfied for kr_C ∼ 11 for the observed values of the Planck and weak scales, which massively reduces the required fine-tuning. Methods to stabilise the geometry are known [28].
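As a rough numerical cross-check of the quoted solution (our own back-of-the-envelope estimate, not from the text):

```latex
\frac{M_{\mathrm{Planck}}}{M_{\mathrm{Weak}}} \sim e^{\pi k r_C} \approx e^{11\pi} \approx e^{34.6} \approx 10^{15},
```

so an O(10) value of the dimensionless product k r_C generates a hierarchy of roughly fifteen orders of magnitude between the weak and Planck scales.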
If the SM fermions propagate in all five dimensions, we can additionally explain the structure of the Yukawa sector through localisation [29]. The profile of each fermion's wave function is determined by a localisation parameter ν (see [27] for details), which peaks exponentially towards the Planck brane for ν < −1/2 and towards the TeV brane for ν > −1/2 (this can be understood as mixing with a CFT bound state in the dual picture; see [6,30] for details). To avoid constraints from Z → b_L b̄_L while reproducing the correct Yukawa structure we gauge right-handed isospin and set ν_tR > ν_Q3L > ν_other, following [31]. In general we keep ν_other < −1/2.
Setups with the right-handed top quark localised close to the TeV brane, a flat third-generation left-handed quark doublet profile, and the other fermions localised close to the Planck brane are phenomenologically viable [31]. With t_R living on the TeV brane and (t, b)_L being almost flat, the dominant decay mode of g_KK and G_KK is to t_R t̄_R.
These are the typical parameter choices that underpin the experimental analyses. For the graviton, branching fractions to hh and V_L V_L are also sizeable, as the Higgs, and therefore also the longitudinal modes of the weak bosons, are located on the TeV brane; nevertheless, strong constraints on the masses m(g_KK) and m(G_KK) are typically derived from top resonance searches [1,2].
Our model setup follows these strategies of ATLAS and CMS [1,2] but varies slightly between the gluon and graviton signals. In general the gluon will always be easier to detect due to its much larger cross section, as it can be produced efficiently through u ū and d d̄ annihilation, whereas graviton production is dominated by gluon fusion. It therefore does not make sense to compare identical parameter points, and we focus on choices which give a (relatively) narrow and a wide resonance for each signal model.
For our graviton samples we use the model file from [32] and consider the above extreme case where t_R is localised on the TeV brane (i.e. fully composite) and Q3_L is very close to flat; the decay widths of the lightest KK graviton resonance then scale with c = k/M_Planck. The factor of 3.83 appearing in these widths is the first root of the Bessel function J_1, which is encountered in RS models for the wave function along the compactified direction and stems from the boundary condition for gravitons; φ sums over Z_L, W_L, and h. Decays to right-handed tops are therefore dominant at ∼70% and offer good prospects for detection; however, both ZZ [33] and WW searches offer additional information (see [34,35]). We consider two values, c = 1 and c = 2. While c = 2 is at the upper end of the range where we can trust our assumption that higher-curvature terms can be neglected in our calculations [33], it is a useful point to consider in order to have a wide, fully polarised resonance as one of our benchmark points. Note that our model setup has m_G1 ≈ 1.5 m_g1, which would put our chosen mass points in tension with current constraints on m_g1. However, our intention is to show the value of adding polarisation information to searches, and G_1 is a useful example of a source of a fully polarised resonance: searches for g_1 will in general always be more sensitive due to the more efficient production mechanism.
For our gluon samples, generated with the model file introduced in [36], we soften the localisation requirement and set ν_Q3L ∼ −0.4, varying ν_tR ∼ {−0.3, 0}, which corresponds to effective couplings of g(g_1 b_L b̄_L) = g(g_1 t_L t̄_L) = g_S and g(g_1 t_R t̄_R) = {2, 6} g_S. These give widths of Γ_g1/m_g1 = {6.2%, 37.5%} and branching ratios to tt of {78.5%, 96.5%}. While the decays are always dominated by right-handed tops, the ratio of right-handed to left-handed tops also changes, which should be reflected in the polarisation observables.

Event generation and analysis
Our backgrounds are leading-order semi- and di-leptonic tt samples generated using MadGraph 5 [37,38] and reweighted to the NNLO cross section given in [39][40][41]. We focus on √s = 14 TeV collisions. Our signal samples are also generated with MadGraph, using the UFO model format [42] to import models implemented in the FeynRules [43] language. These parton-level samples are then showered in Herwig 7.0.3 [44,45] and analysed using the Rivet framework [46], which we also use for applying smearing and efficiencies to the physics objects according to typical ATLAS Run 2 resolutions (where available, with Run 1 resolutions used otherwise) [47][48][49] at the beginning of the analysis routine.

Analysis selections and reconstruction
The analysis of the semi-leptonic samples focuses on reducing non-tt backgrounds and reconstructing the individual tops, largely following the boosted approach detailed in [1]. We start by finding electrons with p T > 25 GeV for |η| < 2.47 and muons with p T > 25 GeV with |η| < 2.7. We then cluster narrow anti-k T [50] R = 0.4 jets with p T > 25 GeV inside |η| < 2.8 and fat Cambridge-Aachen [51,52] R = 1.2 jets with p T > 250 GeV inside |η| < 2, and we require at least one of each after removing narrow jets which overlap with the leading fat jet.
Since we are interested in highly boosted tops, we have to accept some overlap between the lepton and the b-jet on the leptonic side, so we do not require these to be isolated, and assume we can veto events with hard leptons from heavy flavour decays inside QCD-produced jets. Following [54], we top-tag the leading fat jet with HEPTopTagger [55,56], mostly using the default setup of [56]. Note that our choice of R = 1.2 is well motivated compared to the choice of R = 1.5 in the benchmark study in [56], since we consider much heavier resonances. Our only deviations from the default setup are that we require the candidate to have a mass between 140 and 210 GeV and p_T > 250 GeV: widening the mass window allows us to gain some statistics while still keeping non-tt backgrounds negligible, and our signal tops are so highly boosted that there is no loss in efficiency from a slightly higher p_T cut. This provides our hadronic top candidate, and we require at least one of the narrow jets to be b-tagged with an efficiency of 70% and a fake rate of 1%; see e.g. [57].
Our narrow jets tend to be quite hard since we are interested in the high-m(tt) region, but we have checked that the leading narrow jet p_T distribution peaks in the range from 50 to 300 GeV, where the MV1 algorithm used by ATLAS outperforms this naive estimate [58] for our signal samples. To reflect the degradation of performance at higher p_T, we use a fake rate for light quarks and gluons of 10% above 300 GeV. We have checked that combining the p_T-dependent b-tagging with contemporary top-tagging techniques renders the Wjj background negligible compared to SM tt production at our signal mass points. We expect other SM backgrounds to be negligible: we find lower signal acceptance × efficiencies than the 13 TeV ATLAS study in [59] thanks to our stricter top-tagging, which further suppresses all non-tt backgrounds. The final sensitivity of our study could potentially be improved by using a more permissive top-tagging algorithm and taking care to estimate non-tt background contributions.
In the next step, we require missing transverse momentum p̸_T with |p̸_T| > 20 GeV and |p̸_T| + m_T > 60 GeV, where m_T is the transverse mass of the lepton and the missing momentum, m_T = √(2 |p_T^l| |p̸_T| (1 − cos Δφ)). We reconstruct the leptonic W by assuming that its decay products are the leading lepton and a neutrino which accounts for all of the reconstructed missing transverse momentum. The longitudinal component of the neutrino momentum is found by assuming the W is produced on-shell, and we choose between the two resulting solutions by picking the one which minimises |m_blν − m_t| after combining with the leading b-tagged jet. This object is our leptonic top candidate.
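The on-shell W constraint reduces to a quadratic equation in the neutrino longitudinal momentum. A minimal sketch of this step (our own illustration: the function names, the mass values, and the real-part treatment of a negative discriminant are assumptions, not taken from the text):

```python
import math

MW, MT = 80.4, 172.5  # assumed W and top masses in GeV

def neutrino_pz_solutions(lep, met):
    """Solve (p_l + p_nu)^2 = m_W^2 for the neutrino longitudinal momentum.

    lep = (E, px, py, pz) of the (massless) lepton, met = (px, py) of the
    missing transverse momentum.  Returns the two solutions; if smearing
    pushes the discriminant negative, we keep only the real part."""
    El, plx, ply, plz = lep
    nux, nuy = met
    ptl2 = plx * plx + ply * ply
    mu = 0.5 * MW * MW + plx * nux + ply * nuy
    disc = mu * mu * plz * plz - ptl2 * (El * El * (nux * nux + nuy * nuy) - mu * mu)
    root = math.sqrt(max(disc, 0.0))
    return [(mu * plz + root) / ptl2, (mu * plz - root) / ptl2]

def invariant_mass(*vecs):
    E, px, py, pz = (sum(v[i] for v in vecs) for i in range(4))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def leptonic_top(lep, met, bjet):
    """Pick the p_z solution minimising |m(b l nu) - m_t|; return the neutrino."""
    candidates = []
    for pz in neutrino_pz_solutions(lep, met):
        nu = (math.sqrt(met[0] ** 2 + met[1] ** 2 + pz * pz), met[0], met[1], pz)
        candidates.append((abs(invariant_mass(lep, nu, bjet) - MT), nu))
    return min(candidates)[1]
```

Taking the real part of a negative discriminant is one common convention for smeared inputs; analyses sometimes instead rescale the missing momentum until a real solution exists.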
We extract m(tt) by adding the leptonic and hadronic top candidates, and define θ_l± by boosting to the leptonic top's rest frame and taking the angle between the lepton and the top's direction of travel. 3 The final m(tt) distribution is shown in Fig. 1a and the cos θ_l± distribution in Fig. 2a. (3 Note that there are studies [60] that aim to extract polarisation information from boosted hadronic tops, but we do not attempt to do so here. We can expect the sensitivity of such a measurement to be smaller than that of the leptonic-side measurement.)
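The helicity angle itself is a short computation: boost the lepton four-vector into the reconstructed top rest frame and take the angle with the top's lab-frame direction. A self-contained sketch (our own helper names; generic Lorentz-boost algebra, not the authors' code):

```python
import math

def boost_to_rest(p, frame):
    """Lorentz-boost four-vector p = (E, px, py, pz) into the rest frame of
    `frame`, i.e. boost with velocity beta = -p_frame / E_frame."""
    bx, by, bz = -frame[1] / frame[0], -frame[2] / frame[0], -frame[3] / frame[0]
    b2 = bx * bx + by * by + bz * bz
    if b2 == 0.0:
        return p
    gamma = 1.0 / math.sqrt(1.0 - b2)
    bp = bx * p[1] + by * p[2] + bz * p[3]
    k = (gamma - 1.0) * bp / b2 + gamma * p[0]
    return (gamma * (p[0] + bp), p[1] + k * bx, p[2] + k * by, p[3] + k * bz)

def cos_theta_l(lepton, top):
    """cos(theta) between the lepton, boosted to the top rest frame, and the
    top's lab-frame direction of travel."""
    lr = boost_to_rest(lepton, top)
    dot = lr[1] * top[1] + lr[2] * top[2] + lr[3] * top[3]
    norm = (math.sqrt(lr[1] ** 2 + lr[2] ** 2 + lr[3] ** 2)
            * math.sqrt(top[1] ** 2 + top[2] ** 2 + top[3] ** 2))
    return dot / norm
```

As a sanity check, boosting the top candidate into its own rest frame should return a vector at rest with energy equal to its invariant mass.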

Di-leptonic study
The semi-leptonic final state discussed in Sect. 3.1 is naively much more attractive due to a six times larger branching fraction (since we are only considering electrons and muons) and a less involved reconstruction of the individual top momenta. Nonetheless, it is worthwhile to also consider the di-leptonic final state as it offers two clean final state leptons which enable a comparably straightforward measurement of spin correlations with increasing statistics.
When considering di-leptonic tt decays, however, we run into a qualitatively new issue related to the reconstruction of the individual top momenta: with two neutrinos in the final state, we have to make an educated guess of how the single missing transverse momentum vector decomposes into the transverse momenta of the two neutrinos, p_T,ν/ν̄, before reconstructing the longitudinal momentum components. There are a number of approaches, which we outline in the following.
The first method is simply to solve the full system of kinematic equations, assuming all intermediate particles are produced on-shell and that the measured kinematic quantities are exact [61,62]. This in general provides up to eight sets of solutions, one of which is close to the true momenta if the assumptions are valid. Using smeared kinematic quantities results in a larger mean number of solutions, which causes large combinatorial uncertainties. CMS has made use of this approach together with a matrix element method [63] to reduce the number of solutions on the basis of the matrix element weight.
A second method is to use so-called "neutrino weighting" [64,65], which scans over a large number of proposed neutrino solutions and constructs and assigns individual weights for each guess based on how well the solution solves the kinematic equations. It is then possible to calculate observables for single events by either selecting the solution with the highest weight, or adding up the values for all solutions with correct weighting. This method is often used by ATLAS and has the advantage of only relying on kinematic information.
A third method, which is the one we adopt in this work, uses kinematic insights from the M_T2 [66] observable. The so-called M_T2-Assisted On-Shell (MAOS) method [67,68] uses the solution for the transverse components of the two neutrino momenta which provides M_T2. The bisection method for calculating M_T2 [69] and subsequent improvements of the algorithm [70][71][72] have made it possible to find this solution numerically. The solutions for the neutrino momenta k±_ν/ν̄ (where ± denotes the remaining twofold ambiguity in the longitudinal components) then allow the full event kinematics to be reconstructed, and the quality of the reconstruction can be improved if required by only using events with m(t) − M_T2 < C for some cut C.
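For illustration, M_T2 can also be computed by brute force as a minimisation over all splittings of the missing transverse momentum, although a realistic analysis would use the bisection algorithm of Ref. [69]. A hedged sketch (our own function names and grid granularity; it supports massive visible systems such as the b+lepton pairs used here):

```python
import math

def m_T(vis, nu_t):
    """Transverse mass of a visible system vis = (E, px, py, pz) and a
    massless invisible transverse-momentum guess nu_t = (px, py)."""
    m2 = max(vis[0] ** 2 - vis[1] ** 2 - vis[2] ** 2 - vis[3] ** 2, 0.0)
    et_vis = math.sqrt(m2 + vis[1] ** 2 + vis[2] ** 2)
    et_nu = math.hypot(nu_t[0], nu_t[1])
    mt2 = m2 + 2.0 * (et_vis * et_nu - vis[1] * nu_t[0] - vis[2] * nu_t[1])
    return math.sqrt(max(mt2, 0.0))

def mt2_scan(vis1, vis2, met, steps=200):
    """Brute-force M_T2: minimise max(m_T(vis1, nu1), m_T(vis2, nu2)) over a
    grid of splittings met = nu1 + nu2.  Illustration only; production code
    should use the bisection algorithm and its refinements."""
    scale = 2.0 * math.hypot(met[0], met[1]) + 100.0
    best = float("inf")
    for i in range(steps + 1):
        n1x = -scale + 2.0 * scale * i / steps
        for j in range(steps + 1):
            n1y = -scale + 2.0 * scale * j / steps
            trial = max(m_T(vis1, (n1x, n1y)),
                        m_T(vis2, (met[0] - n1x, met[1] - n1y)))
            best = min(best, trial)
    return best
```

For two identical back-to-back visible systems and vanishing missing momentum, M_T2 reduces to the visible mass, which provides a quick closure test of the scan.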

Analysis selections and reconstruction
We begin the analysis by finding electrons with p_T > 25 GeV inside |η| < 2.47 and muons with p_T > 25 GeV inside |η| < 2.7. We then find anti-k_T R = 0.4 jets with p_T > 25 GeV inside |η| < 2.8. Again we have to accept some overlap between the leptons and jets due to the large top boost, so we do not require these to be isolated, and again assume we can separate very hard prompt leptons from a nearby jet. We then b-tag the jets within |η| < 2.5 with 70% efficiency and a 1% fake rate (10% for p_T > 300 GeV, with the comments regarding this choice made in Sect. 3.1.1 also valid here), and require at least two b-tags. We also require missing transverse momentum p̸_T with |p̸_T| > 60 GeV. While the high boost of our tops means that we can usually pair b-jets to leptons correctly by taking the ones closest to each other in η-φ space, we make use of some standard approaches to further reduce the combinatorial uncertainty. Due to the large boost we consider, we do not gain much from cutting on M^tt_T(0), which is often considered in the literature [73][74][75][76], where M^tt_T(0) is defined as the transverse mass of the entire tt system when m_νν̄ = 0. We therefore select the pairing which minimises at least two out of three test variables, T_2, T_3, and T_4, defined in [75]. These correspond to how well the solution corresponding to each pairing reconstructs the W and top masses and the expected M_T2 distribution. If either of the pairings returns complex solutions for the neutrino momenta, we automatically select the other one.

[Fig. 3: Two-dimensional shape distributions of m(tt) and cos θ_l± for the expected SM background (a) and a narrow (g_tR = 2) g_1 (b) in the semi-leptonic analysis. This corresponds to the worst-case scenario among our signal models from the perspective of gaining additional information from the polarisation measurement.]
Note that we change the pairing algorithm defined in [75] slightly: we find that vetoing the entire event if neither pairing results in a viable-seeming solution suppresses the WWjj background with little loss of signal efficiency. We do not use m_bl for determining the correct pairing (referred to as the T_1 test variable in [75]), since this would make the total number of test variables even, and it correlates strongly with T_2.
Once we have selected a pairing, we reconstruct the individual neutrinos using the MAOS method as discussed above. We take the solution for the transverse momenta of the neutrinos which gives the correct M_T2, and solve the remaining kinematic constraints to give two solutions for the longitudinal component of each neutrino. This results in four final solutions for the complete kinematics of the event, with equal weights. This technique has been used, for example, in phenomenological studies of production-angle measurements in [67] and top polarisation measurements in [77]. Despite the fourfold combinatorial ambiguity, which introduces a large smearing of the final m(tt) distribution as shown in Fig. 1b, it reproduces truth-level angular observables well, as the ambiguity only affects the longitudinal neutrino momenta. The cos θ_l± distribution in Fig. 2 shows this in practice and confirms that the final distributions are closer to their true shapes than in the semi-leptonic analysis. Unlike in the semi-leptonic case in Sect. 3.1.1, we can extract the lepton angle from both tops by again boosting to the individual top rest frames.

Signal vs. background discrimination
We estimate the limits that can be set on the signal strength μ = σ/σ_expected for our model setups with the m(tt) and combined m(tt)-θ_l± distributions by using the modified frequentist confidence level CL_s as outlined in [78]: for each of the 2D-binned distributions (examples of which are shown in Figs. 3, 4) we calculate the likelihood ratio

X = ∏_i [ e^{−(s_i+b_i)} (s_i+b_i)^{d_i} ] / [ e^{−b_i} b_i^{d_i} ] , (4.1)

where s_i, b_i, and d_i are the expected numbers of signal and background events and the observed number of events in each bin, respectively. Using the likelihood ratio we can compute

CL_{s+b} = P_{s+b}(X < X_obs) , (4.2)
CL_b = P_b(X < X_obs) , (4.3)

and from these the ratio CL_s = CL_{s+b}/CL_b. To avoid spurious exclusions we do not use bins which have no background events; this has a negligible effect, as we have ensured that there are sufficient statistics in all bins which are expected to contribute to the exclusion limit for our signal models.
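The CL_s construction above can be sketched with toy pseudo-experiments (our own function names; the Poisson sampler and toy counts are illustrative choices, not the experiments' implementation):

```python
import math
import random

def log_q(s, b, d):
    """ln of the likelihood ratio of the binned Poisson model; the factorials
    cancel, leaving sum_i [ d_i ln(1 + s_i/b_i) - s_i ]."""
    return sum(di * math.log(1.0 + si / bi) - si
               for si, bi, di in zip(s, b, d) if bi > 0.0)

def _poisson(rng, lam):
    # Knuth's multiplicative Poisson sampler; adequate for modest means.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cls(s, b, d, ntoys=10000, seed=1):
    """CL_s = CL_{s+b} / CL_b estimated with toy pseudo-experiments."""
    rng = random.Random(seed)
    q_obs = log_q(s, b, d)

    def p_value(means):
        hits = sum(1 for _ in range(ntoys)
                   if log_q(s, b, [_poisson(rng, m) for m in means]) < q_obs)
        return hits / ntoys

    cl_sb = p_value([si + bi for si, bi in zip(s, b)])
    cl_b = p_value(list(b))
    return cl_sb / cl_b if cl_b > 0.0 else 1.0
```

Observing only the background expectation in the presence of a large expected signal yields a small CL_s (exclusion), while observing signal plus background does not.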
A value of CL_s < 0.05 is interpreted as excluding the corresponding value of μ at 95% confidence level [79]. While our statistical setup is meant to closely resemble those currently employed by the LHC experiments, there is also a recent study [80] which investigates the information gain from using multi-dimensional distributions, such as our m(tt)-θ_l± one, with Bayesian methods. That study combines information from p_T and angular observables in VBF production of a single Higgs decaying to two photons, and uses this to constrain Wilson coefficients in the SMEFT. Similar ideas have also recently been treated in a very elegant manner in [81], which investigates the information content of various combinations of kinematic observables, and the effect of restricting the available phase space through kinematic selections, in VBF production of a single Higgs decaying to pairs of taus and to four leptons, and in single Higgs + single top production.
When calculating limits we use a flat Gaussian systematic uncertainty of 5% on the total cross section of the background, and only statistical uncertainties for the signal. To propagate the systematic uncertainty to individual bins, we assume the fractional systematic error is the same in all bins and calculate the value which reproduces the stated uncertainty on the total cross section when the bins are added up assuming they are statistically independent. In general, introducing systematic uncertainties and propagating them consistently always requires an assumption about how this is to be done, which can have a large effect on the final limit on μ. To provide an estimate of the importance of the systematic uncertainty for our limits, we also present a comparison to limits calculated with no systematic uncertainties in Figs. 5 and 7.
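The propagation rule described here fixes a single per-bin fraction f from the requirement that the bin uncertainties f·b_i, added in quadrature over independent bins, reproduce the quoted fraction of the total yield. A one-function sketch (our own naming):

```python
import math

def per_bin_fraction(bins, total_frac=0.05):
    """Per-bin fractional systematic f such that the absolute uncertainties
    f*b_i, added in quadrature over statistically independent bins, equal
    total_frac times the summed yield:
        sqrt(sum_i (f b_i)^2) = total_frac * sum_i b_i."""
    total = sum(bins)
    quad = math.sqrt(sum(b * b for b in bins))
    return total_frac * total / quad
```

For a single bin f equals the quoted 5%; for n equal bins it grows as sqrt(n), reflecting the assumed bin-to-bin independence.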

Improvement from top polarisation observables
Before we comment on the relative improvement from including polarisation-sensitive observables, let us quickly investigate the expected phenomenology of the models we consider. As can be seen from Fig. 1, the reconstruction smears out the resonance, so the signal appears very wide for all signal models in both the semi-leptonic and the di-leptonic analyses. For relatively narrow resonances our reconstruction in the semi-leptonic channel performs better; however, at larger widths the m(tt) distribution quickly loses its peak-like features. In such a case, setting limits using m(tt) as a single discriminant effectively means constraining a continuum excess.
Directly inferred angular quantities like Δφ(l⁺l⁻) in the di-lepton final state do not offer great discriminating power, see Fig. 6. This is particularly true when we would like to discriminate between different signal hypotheses once an excess has been discovered. The reason for the highly correlated Δφ(l⁺l⁻) is the large mass of the tt resonance we consider, which leads to back-to-back tops and, as a consequence, back-to-back leptons.
It is exactly the boost to the top rest frame which lifts this degeneracy (modulo reconstruction inefficiencies). Since the signal produces highly polarised tops, we see a large modification of the lepton angle distributions, which provides additional discriminating power (Fig. 2) that we can use to tighten the estimated constraint on μ when combined with m(tt), Figs. 3 and 4 (we also show the distribution of the expected SM background, which exhibits no particular resonant features in the m(tt)-cos θ_l± plane). Note that the polarisation of the tops from g_1 decays differs between the two coupling choices, and this is visible in both channels.
Using the m(tt)-cos θ_l± correlation as the baseline of the limit setting outlined above, we obtain a large improvement, by a factor of up to ∼3 with increasing luminosity compared to m(tt) alone (Fig. 8b), for the ideal case of the di-leptonic analysis of a wide, highly polarised resonance, as the large statistics available with 100 fb⁻¹ provide an efficient sampling of the sensitivity unveiled in Fig. 4. This relative improvement is reduced for the smaller reconstructed widths that can be reached in the semi-leptonic channel, as discriminating power in m(tt) is gained; yet an improvement at large luminosity by a factor of ∼√2 is still possible for our benchmark less-polarised gluon, which is the least sensitive of our parameter points. It is exactly this improvement from including polarisation information which renders the analyses potentially sensitive, depending on systematics, to broad gluon-like resonances at L ∼ 100 fb⁻¹ for our benchmark setting. Discrimination based solely on m(tt) flattens out, and an analysis which focuses exclusively on resonance-like enhancements will have less sensitivity by factors of up to 3.

[Fig. 7: Limits on μ for a wide (g_tR = 6) g_1, assuming (a) no systematics and (b) 5% systematics on the total cross section (see text for details on how this is propagated to the individual bins), which can be set with different assumed total luminosities using m(tt) and cos θ_l±.]

[Fig. 9: Limits on μ for a wide (c = 2) G_1 using the semi-leptonic (a) and di-leptonic (b) analyses for a fixed luminosity of 100 fb⁻¹ with 5% systematics on the total cross section (propagated to bins as explained in the text), as a function of resonance mass, using m(tt) and cos θ_l± (black line) and only m(tt) (red line). The ±σ bands are for the combined result.]
The improvement is not too sensitive to the precise mass scale around our chosen benchmark, and becomes especially relevant at large widths, as alluded to at the beginning of this work (Figs. 7, 8 and 9).
As can be seen from our results for graviton-like resonances, depending on the size of the cross section, including spin polarisation alone is not enough to constrain the underlying model satisfactorily. Nonetheless, the relative improvement by a factor of 3 should provide an important handle for tackling such low-cross-section scenarios much better at large luminosity, possibly as part of a multivariate approach invoked by the experiments.

Conclusions
Resonance searches in LHC tt final states are a well-motivated strategy for discovering new physics beyond the SM [1,2]. While peaks in the mass spectrum are very powerful indicators of the presence of such new physics, we also often expect to see large modifications to other distributions, and combining this information through multi-dimensional distributions often offers a good way to improve sensitivity. Additionally, if the resonance becomes wide, invariant mass distributions necessarily lose sensitivity. We have performed a detailed investigation of the semi-leptonic and di-leptonic tt final states for √s = 14 TeV and provide quantitative estimates of the information gain from including top polarisation information in the limit setting. Our results demonstrate that this information helps to ameliorate the loss in sensitivity for wider signal models. To make our analysis comparable to the practice of the experiments we have focussed on the RS scenario as a particular candidate that provides a theoretically well-defined framework for such a phenomenological situation. For the fully polarised scenarios we study in this work we find improvements of factors of up to 3 (2) on the limit on the signal strength for the di-(semi-)leptonic analysis at large luminosity, with larger improvements for wider signal models, as expected. For our benchmark choice of 3 TeV resonances, including this information is crucial to exclude gluon-like resonances at the 95% confidence level. Interestingly, the larger improvement for the di-leptonic analysis allows this channel to become competitive with the semi-leptonic one for resonance searches in these types of models; however, we note that this statement depends heavily on the systematics modelling, and only a dedicated experimental analysis can fully assess the relative sensitivities.
While these improvements are specific to our parameter choices at face value, similar relative improvements can be expected for other, non-graviton or gluon resonances (not limited to RS models) that predict a net polarisation of the top pair. Polarisation information is therefore an important ingredient to a more comprehensive analysis strategy that builds upon the invariant top pair mass, providing additional information in multivariate approaches.