On the LHC sensitivity for non-thermalised hidden sectors

We show under rather general assumptions that hidden sectors that never reach thermal equilibrium in the early Universe are also inaccessible for the LHC. In other words, any particle that can be produced at the LHC must either have been in thermal equilibrium with the Standard Model at some point or must be produced via the decays of another hidden sector particle that has been in thermal equilibrium. To reach this conclusion, we parametrise the cross section connecting the Standard Model to the hidden sector in a very general way and use methods from linear programming to calculate the largest possible number of LHC events compatible with the requirement of non-thermalisation. We find that even the HL-LHC cannot possibly produce more than a few events with energy above 10 GeV involving states from a non-thermalised hidden sector.


Introduction
One of the central motivations for the LHC has been to probe models of dark matter (DM) in which the DM candidate is a weakly interacting massive particle (WIMP), meaning that it has mass and interaction strength comparable to other known particles (i.e. comparable to the electroweak scale). In such models the DM particle is predicted to be in thermal equilibrium with the bath of Standard Model (SM) particles, so that its relic abundance can be calculated via the freeze-out mechanism [1].
The non-observation of new physics at the LHC has however cast doubt on the idea of WIMP DM and has led to increased interest in alternative ideas. A simple modification of the WIMP framework is to assume that even though DM was in thermal equilibrium in the early Universe, the annihilation processes that are responsible for setting its relic abundance are secluded from the SM and therefore much more difficult to observe at the LHC [2]. Within this framework (recently coined "WIMP next door" [3]) it is easily possible to reconcile LHC constraints with the idea of thermal freeze-out [4]. Nevertheless, LHC results may also be taken to indicate that DM was not produced via thermal freeze-out and that in fact the DM particle was never in thermal equilibrium with the SM thermal bath. Among the many possible alternatives (collectively referred to as FIMPs for feebly-interacting massive particles [5]) are models in which DM is produced via out-of-equilibrium decays (so-called superWIMPs [6]) or via the freeze-in mechanism [7]. A question of general interest is therefore whether the LHC may also be able to shed light on models in which the dark sector is so weakly coupled to the SM that it was never in thermal equilibrium.

[Preprint number: TTK- . Email address: kahlhoefer@physik.rwth-aachen.de (Felix Kahlhoefer)]
In this letter we address this question by comparing interaction rates at the LHC and in the early Universe. This comparison is based on the following simple observations: 1. If the same physics can be used to describe the early Universe and the LHC, any process that can be induced at the LHC also took place in the early Universe (provided the process happens on short enough timescales). Given a temperature of the SM thermal bath, it is straightforward to calculate the rate at which this process must have occurred. 2. Assuming that SM states dominate the energy density of the early Universe, we can directly compare the rate of any such process (at a given temperature) to the corresponding Hubble rate. If it exceeds the Hubble rate, the process happened sufficiently frequently that the final-state particles were produced with large abundance. This typically implies that the inverse process must also have occurred frequently, leading to thermal equilibrium between initial and final states.
It is straightforward to perform such a comparison in the context of specific models. Here we demonstrate that it is in fact also possible to compare rates at the LHC and in the early Universe in a model-independent way. To do so, we evaluate the thermalisation condition between the SM and a hidden sector for a very general functional form of the cross section of the process linking the two sectors. Requiring that the two sectors do not thermalise then leads to constraints on this cross section, which can be translated into an upper bound on the rate of the same process at the LHC. This approach allows us to show that any process that does not reach thermal equilibrium in the early Universe can only induce a negligibly small number of events with sufficiently high energy at the LHC.
We conclude that, provided the Universe once was at a temperature comparable to LHC energies, any hidden sector that we could hope to observe at the LHC must have been at least partially in thermal equilibrium in the early Universe. Conversely, any hidden sector that was not in thermal equilibrium in the early Universe is unobservable for the LHC.
This letter is structured as follows. In section 2 we present the derivation of the upper bound on the number of observable LHC events, first for a rather specific assumption on the functional form of the cross section and then using a general parametrisation. A detailed discussion of the underlying assumptions is provided in section 3. Appendix A takes a closer look at the case of s-channel resonances.

Thermal equilibrium and LHC predictions
We consider a process that converts two SM particles into one or more hidden sector states. Rather than specifying the details of the underlying interactions, we simply assume that the total cross section σ of this process depends in some way on the centre-of-mass energy √s of the colliding particles. The fundamental requirement we impose is that the cross section is sufficiently small that the hidden sector states never reach thermal equilibrium (in short, we require non-thermalisation). To make this statement quantitative, we define the reaction rate Γ as the product of the thermally averaged cross section times relative velocity and the equilibrium number density of the SM particles in the initial state [8]:

Γ(T) = ⟨σv⟩ n_eq = N_c / (32 π² ζ(3) T²) ∫₀^∞ ds s^{3/2} K_1(√s/T) σ(s) ,   (1)

where N_c denotes the number of colour degrees of freedom in the initial state, K_1 is a modified Bessel function of the second kind, n_eq = ζ(3) N_c T³ / π², and we have assumed the masses of the SM particles to be negligible. The requirement that no thermal equilibrium is achieved can then be rephrased as

Γ(T) < H(T) = (4π³ g_*(T) / 45)^{1/2} T² / M_pl   (2)

for all temperatures T, where g_* counts the number of effective relativistic degrees of freedom at temperature T and M_pl ≈ 1.22 × 10¹⁹ GeV denotes the Planck mass. It will be convenient to rewrite this requirement by introducing the dimensionless reaction rate

γ(T) ≡ Γ(T) / H(T) .   (3)

The non-thermalisation constraint then simply becomes γ(T) < 1 for all T.

Let us now consider the same process at the LHC. Clearly, the process will only be relevant if both of the particles in the initial state can be supplied via proton-proton collisions. In this case, the leading-order production cross section at the LHC is given by

σ_LHC = ∫₀¹ dx₁ ∫₀¹ dx₂ f₁(x₁) f₂(x₂) σ(x₁ x₂ s_tot) ,   (4)

where √s_tot is the total centre-of-mass energy of the LHC, x_i denote the fractions of the proton momentum carried by the particles in the initial state and f_i denote the parton distribution functions (pdfs).
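The rate comparison underlying the non-thermalisation condition can be sketched numerically. The following is a minimal Python sketch, assuming massless initial-state particles, a user-supplied cross section σ(s), fixed g_* and N_c, and the normalisation conventions used in this letter (the numerical prefactors are part of this sketch's assumptions):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kn, zeta

M_PL = 1.22e19  # Planck mass in GeV

def gamma_ratio(T, sigma, Nc=8, gstar=100.0):
    """Dimensionless reaction rate γ(T) = Γ(T)/H(T).

    sigma(s) is the cross section in GeV^-2 as a function of s in GeV^2.
    Substituting u = √s/T, the rate becomes
    Γ(T) = N_c T³/(16 π² ζ(3)) ∫ du u⁴ K₁(u) σ(u² T²).
    """
    integral, _ = quad(lambda u: u**4 * kn(1, u) * sigma((u * T)**2),
                       0.0, 30.0, epsabs=0.0, epsrel=1e-6, limit=200)
    rate = Nc * T**3 / (16 * np.pi**2 * zeta(3)) * integral
    hubble = np.sqrt(4 * np.pi**3 * gstar / 45) * T**2 / M_PL
    return rate / hubble
```

For a constant cross section of order 10⁻⁴⁰ GeV⁻² and T ∼ 100 GeV this gives γ ≪ 1, i.e. such a coupling is many orders of magnitude below the thermalisation threshold.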
Here we use the MSTW 2008 NNLO pdfs [10], setting the factorisation scale to the partonic centre-of-mass energy √s. The total number of production processes of hidden sector states that will occur at the LHC is then given by N_LHC = σ_LHC L, where L denotes the integrated luminosity. Again, it will be convenient to rewrite this expression slightly:

N_LHC = L ∫ d√s (dL_12/d√s) σ(√s) ,   (5)

where the parton luminosity for the initial state under consideration is given by dL_12/d√s = (2√s/s_tot) ∫_{s/s_tot}^{1} (dx/x) f₁(x) f₂(s/(x s_tot)).
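The parton-luminosity rewriting can be illustrated with a toy gluon density. The functional form below is an assumption for illustration only; the analysis in the text uses the MSTW 2008 NNLO set:

```python
import numpy as np
from scipy.integrate import quad

S_TOT = 13000.0**2  # (13 TeV)² in GeV²

def g_toy(x):
    """Toy gluon density (assumption, NOT MSTW 2008): g(x) ∝ x^-1.8 (1-x)^5."""
    return 0.5 * x**-1.8 * (1.0 - x)**5

def dlum_dsqrts(sqrt_s):
    """Parton luminosity dL/d√s = (2√s/s_tot) ∫_τ^1 (dx/x) g(x) g(τ/x), τ = s/s_tot.

    The x-integral is evaluated in t = ln x for numerical stability.
    """
    tau = sqrt_s**2 / S_TOT
    integrand = lambda t: g_toy(np.exp(t)) * g_toy(tau * np.exp(-t))
    integral, _ = quad(integrand, np.log(tau), 0.0, limit=200)
    return 2.0 * sqrt_s / S_TOT * integral
```

The luminosity falls steeply with √s, which is why the bounds derived below are dominated by the lowest accessible partonic centre-of-mass energies.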

A first example
Our aim is to find the functional form σ(√s) that maximises N_LHC while satisfying the constraint γ(T) < 1. To gain some intuition on the nature of these requirements, let us first make a very simple ansatz and write

σ(√s) = σ₀ √s₀ δ(√s − √s₀) ,   (6)

where δ(x) is the Dirac δ-function. Such a cross section could arise for example from the exchange of an on-shell s-channel mediator connecting the SM initial state to the hidden sector (see Appendix A for further details). With this ansatz we find

γ(T) = N_c M_pl σ₀ s₀^{5/2} K_1(√s₀/T) / (16 π² ζ(3) T⁴) × (45 / (4π³ g_*))^{1/2}
     = N_c M_pl σ₀ √s₀ / (16 π² ζ(3)) × (45 / (4π³ g_*))^{1/2} x₀⁴ K_1(x₀) ,   (7)

where in the second line we have defined x₀ = √s₀/T. It is straightforward to see that if we vary T (or equivalently x₀), the expression is maximised for x₀ ≈ 3.4, where the function x₀⁴ K_1(x₀) attains its maximum (neglecting the mild temperature dependence of g_*). Requiring γ(T) < 1 at this maximum then yields, for each value of √s₀, an upper bound on σ₀:

σ₀ < 16 π² ζ(3) / (N_c M_pl √s₀) × (4π³ g_* / 45)^{1/2} [x₀⁴ K_1(x₀)]⁻¹ |_{x₀ ≈ 3.4} .   (8)

[Figure 1: For each value of √s₀ the coefficient σ₀ is fixed in such a way that the cross section saturates the non-thermalisation constraint. Shaded bands indicate pdf uncertainties at the 90% confidence level.]
Clearly, the non-thermalisation constraint requires the cross section connecting SM states to the hidden sector to be extremely small. We observe in particular that the constraint becomes more stringent with increasing √ s 0 . The reason is that in this case the non-thermalisation constraint becomes sensitive to higher energies, corresponding to earlier times and hence larger densities in the early Universe.
We can make use of the upper bound on σ₀ to substitute our ansatz into eq. (5) and calculate the maximum number of events that can be expected at the LHC for a cross section of this particular form. The result is shown in figure 1 for an integrated luminosity of L = 3000 fb⁻¹ for three different combinations of initial states: gg, uū and dd̄. We make the following observations: 1. The predicted number of events depends sensitively on √s₀. This is a direct consequence both of the non-thermalisation constraint becoming weaker for smaller √s₀ and of the pdfs preferring smaller partonic centre-of-mass energies. 2. We find significantly larger values of N_LHC for the case of a gg initial state than for a uū initial state, again a direct consequence of the larger gluon pdfs at small momentum fraction. 3. In any case, the upper bounds on N_LHC are extremely tight. Even for √s₀ = 10 GeV and a gg initial state, we find N_LHC < 0.49, making a discovery of this process impossible. 4. In principle, the bound on N_LHC could be further relaxed by going to even smaller values of √s₀.
However, at this point our description becomes increasingly questionable, because non-perturbative effects become important for the description of the initial state. Moreover, processes with such small partonic centre-of-mass energy are very challenging to observe at the LHC and would be more suitable for searches at low-energy colliders like Belle II. To avoid these complications, we will not consider values of √s₀ below 10 GeV. In fact, even for √s₀ = 10 GeV pdf uncertainties are already substantial. We estimate these uncertainties by varying the pdfs around the central set following the procedure described in ref. [10]. The shaded bands in figure 1 illustrate the resulting changes in N_LHC at the 90% confidence level. For √s₀ = 10 GeV the uncertainty amounts to approximately 20-30%. We conclude that the simple ansatz chosen above leads to a very stringent bound on N_LHC. Nevertheless, it is far from clear that this ansatz comes close to the optimal functional form of the cross section. After all, we found that γ(T) ≈ 1 only for a very small range of temperatures. We therefore expect that it should be possible to relax the bound on N_LHC by considering more general forms of σ(√s).
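For the δ-function ansatz, the dimensionless rate γ at fixed √s₀ depends on temperature only through a factor of the form x₀⁴ K₁(x₀), with x₀ = √s₀/T (this form follows from the normalisation conventions adopted in this sketch). The position of its maximum can be checked numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import kn

# Temperature dependence of γ for a δ-function cross section at fixed √s₀:
# γ ∝ x₀⁴ K₁(x₀) with x₀ = √s₀/T (conventions of this sketch).
f = lambda x: x**4 * kn(1, x)

# The function is unimodal on (0, ∞); locate the peak inside a generous bracket.
res = minimize_scalar(lambda x: -f(x), bounds=(0.5, 20.0), method="bounded")
x_peak = res.x  # value of x₀ at which γ is maximised
```

The maximum sits at x₀ ≈ 3.4 (the root of 3 K₁(x) = x K₀(x)), so the non-thermalisation constraint is dominated by temperatures a few times below √s₀.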

General cross sections
Since it is difficult to consider completely arbitrary variations of σ(√s), we will instead consider a discretised version of the problem (see ref. [11] for a similar approach in the context of DM direct detection experiments). For this purpose, we define a (potentially large) number of discrete values of √s:

√s_i = √s₀ ∆^i ,   (9)

where i = 0, …, m and ∆ > 1 defines the step size.
Since we are only interested in quantities involving the integral of σ(√s), we can then approximate the cross section by

σ(√s) = Σ_{i=0}^{m} σ_i (√s_{i+1} − √s_i) δ(√s − √s_i) ,   (10)

with σ_i = σ(√s_i). This approximation can be made for any cross section that does not vary too rapidly on each interval [√s_i, √s_{i+1}], and it becomes exact in the limit ∆ → 1. Rather than considering cross sections of arbitrary functional form, we can therefore simply study cross sections of the form of eq. (10) with arbitrary coefficients σ_i. As we will see below, writing σ(√s) as a sum of δ-functions (rather than e.g. as a piecewise constant function) has the advantage that it leads to a significant simplification of the relevant equations.
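The quality of the δ-comb approximation of eq. (10) can be checked for a smooth toy cross section. The functions below are illustrative assumptions; the weight W stands in for the smooth kernels appearing in Γ or N_LHC:

```python
import numpy as np
from scipy.integrate import quad

# Toy smooth cross section σ(√s) and smooth weight W(√s) (illustrative choices)
sigma = lambda u: np.exp(-u / 50.0)
W = lambda u: u**2 * np.exp(-u / 30.0)

def comb_integral(delta, m=400):
    """∫ W σ d√s approximated with the δ-comb of eq. (10): Σ_i σ_i Δ_i W(√s_i)."""
    u = 10.0 * delta**np.arange(m)
    widths = u * (delta - 1.0)  # Δ_i = √s_{i+1} − √s_i
    return np.sum(sigma(u) * widths * W(u))

exact, _ = quad(lambda u: sigma(u) * W(u), 10.0, np.inf)
rel_err = lambda d: abs(comb_integral(d) - exact) / exact
```

The relative error shrinks as ∆ → 1, in line with the statement that the approximation becomes exact in that limit.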
Making use of the approximation introduced in eq. (10), the number of predicted events at the LHC can be written as

N_LHC = Σ_{i=0}^{m} b_i σ_i ≡ b · x .   (11)

The second important simplification that we make is to assume that it is sufficient to check the non-thermalisation constraint only for discrete values of T. Specifically, we define

T_j = T₀ ∆^j   (12)

for j = 1, …, n and appropriate values T₀ < √s₀ and n > m, such that all relevant temperatures are covered. With this definition, we find that γ_j ≡ γ(T_j) can be written as

γ_j = Σ_{i=0}^{m} A_ji σ_i   (13)

with

b_i = L (√s_{i+1} − √s_i) (dL_12/d√s) |_{√s_i} ,   (14)

A_ji = N_c M_pl s_i² (√s_{i+1} − √s_i) K_1(√s_i/T_j) / (16 π² ζ(3) T_j⁴) × (45 / (4π³ g_*(T_j)))^{1/2} .   (15)

We have therefore reduced the problem to a well-known problem of linear programming: the maximisation of b · x with respect to a vector x = (σ_i) with positive entries for a given vector b, subject to a set of constraints given by A · x ≤ 1. This optimisation problem can be easily solved with standard numerical methods. For sufficiently small step size ∆ and sufficiently large m and n, the value of the maximum should become independent of the precise value of ∆ and of the choice of T₀. We find that this is indeed the case, and that choosing ∆ = 1.1 (corresponding to m, n ∼ 100) is fully sufficient.
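The linear programme described above can be sketched with toy inputs. All grids, the falling toy parton luminosity, and the normalisation of the constraints below are illustrative assumptions (physical constants are absorbed); scipy's linprog is used as the solver:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.special import kn

# Toy discretisation (illustrative grids; normalisations are arbitrary)
delta = 1.1
sqrt_s = 10.0 * delta**np.arange(51)  # grid √s_i in GeV
temps = delta**np.arange(1, 101)      # temperature grid T_j in GeV
widths = sqrt_s * (delta - 1.0)       # bin widths √s_{i+1} − √s_i

# Objective: N_LHC = b·x with b_i ∝ bin width × falling toy parton luminosity
b = widths * sqrt_s**-2.6

# Constraints: γ_j = Σ_i A_ji σ_i ≤ 1 with A_ji ∝ s_i² Δ_i K₁(√s_i/T_j) / T_j⁴
A = (sqrt_s[None, :]**4 * widths[None, :]
     * kn(1, sqrt_s[None, :] / temps[:, None]) / temps[:, None]**4)

# Maximise b·x subject to A·x ≤ 1, x ≥ 0 (linprog minimises, hence -b)
res = linprog(-b, A_ub=A, b_ub=np.ones(len(temps)), bounds=(0, None),
              method="highs")
x_opt = res.x
n_peaks = int(np.count_nonzero(x_opt > 1e-3 * x_opt.max()))
```

In this toy setup the optimum typically comes out as a sparse comb of well-separated δ-function contributions, mirroring the structure discussed in the results below.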

Results
For concreteness, let us consider the case of a gg initial state and set √s₀ = 10 GeV and T₀ = 1 GeV.
For √s_tot = 13 TeV and L = 3000 fb⁻¹ we find N_LHC < 0.62. This value should be compared with the value N_LHC < 0.49 that we found for the much simpler ansatz above. In fact, the similarity of the two results is no coincidence, as will become clear by inspecting the optimum solution.
The central observation is that the optimum solution has σ_i = 0 for almost all values of i. The few non-zero values of σ_i are well-separated and each (nearly) saturates the bound derived above for the case of a single δ-function. In this way, the individual contributions conspire to give γ(T) ≈ 1 over a wide range of temperatures. This behaviour is illustrated in the left panel of figure 2, which shows that in this particular case the optimum solution consists of a sum of six separate δ-functions.
We emphasise that the number of δ-functions contributing to the optimum solution is not an artefact of the chosen step size, which is in fact much smaller than the distance between two δ-functions. Instead, it is a generic feature of the optimum solution and is robust under a change of step size. In other words, the "distance" between the individual δ-functions is optimal: introducing any additional contribution would strengthen the non-thermalisation constraint and hence lead to a stronger bound on N_LHC. This observation makes it clear why the optimum solution is so close to the one obtained for a single δ-function: the dominant contribution to N_LHC stems from the δ-function at √s₀, while the additional δ-functions at higher energies only give sub-leading contributions. This is illustrated in the right panel of figure 2, which shows the predicted number of LHC events with partonic centre-of-mass energy E_cm above a minimum energy E_min as a function of that minimum energy. The dashed grey line shows for comparison the bound on N_LHC obtained for a single δ-function with √s₀ = E_min (see figure 1). For the specific case considered here, we find that the first three δ-functions contribute 0.45, 0.15 and 0.019 to the total of 0.62 predicted events.
To conclude this discussion, we emphasise that the numbers discussed above depend of course on the somewhat arbitrary choice of √s₀. To first approximation, reducing √s₀ by a factor r will relax the bound on N_LHC by the same factor. However, even if we allow √s₀ as low as 1 GeV (and consider √s_tot = 14 TeV), we cannot obtain more than O(10) events in the full data set of the HL-LHC, almost all of which would be at extremely low energies and therefore likely unobservable.

Discussion
We have shown in a very general way that any process connecting the SM to a hidden sector that was not in thermal equilibrium in the early Universe is unobservable at the LHC. Turning this argument around, we conclude that any hidden sector that can be produced and observed at the LHC must have been at least partially in thermal equilibrium with the bath of SM particles in the early Universe.
We emphasise that this does not necessarily mean that the LHC can only probe DM models in which the DM particle itself was in thermal equilibrium. For example, the DM particle may be produced from the decays of another metastable hidden sector particle, which itself was in thermal equilibrium (as in the case of gravitino DM produced in the decays of the next-to-lightest supersymmetric particle).
An immediate consequence of these observations is that it is impossible for the LHC to test most realisations of the freeze-in mechanism, in which no part of the hidden sector thermalises with the SM. More complex freeze-in models may be testable, for example if they contain a new state that couples only very weakly to the hidden sector and decays dominantly into SM particles. However, it will be very challenging for the LHC to establish a connection to the DM problem from such observations alone.
Three fundamental assumptions enter our analysis: 1. We have assumed that the early Universe reached temperatures comparable to the energies accessible for the LHC. Clearly, if the reheating temperature is significantly smaller than LHC energies, the LHC may probe processes that have never been in thermal equilibrium (see ref. [12]). 2. We have assumed that the physics relevant for the interactions of the hidden sector in the early Universe is the same as the physics relevant to the LHC. This assumption could potentially be violated by phase transitions in the early Universe or by thermal effects suppressing certain processes (see [13,14] for an example). 3. We have assumed a standard cosmological history up to energies of a few TeV. A mechanism that would significantly increase the expansion rate (e.g. by introducing a large number of additional relativistic degrees of freedom) would relax the non-thermalisation constraint and hence our bound on the number of observable LHC events [15].
While these assumptions clearly limit the generality of our results, they are essentially unavoidable when trying to connect LHC physics to early Universe cosmology in a predictive way. At the same time they provide useful guidance for the ways in which early-Universe cosmology must be modified in order to evade our conclusions. In other words, our assumptions can be seen as a list of loopholes that may be exploited to obtain observable LHC signatures from non-thermalised hidden sectors. In any case, we emphasise that no further assumptions have been made. In particular, by considering cross sections with completely arbitrary dependence on the centre-of-mass energy, we avoid the need to specify the way in which the hidden sector interacts with the SM.
With growing interest in models of FIMP DM with non-thermal production mechanisms it becomes more and more urgent to develop new experimental strategies to probe these theories. We have shown that the LHC is not sensitive to non-thermalised hidden sectors, since the interactions are either too weak or occur dominantly at too low energies to be observable. Further work is needed to establish whether low-energy colliders or beam-dump experiments are more promising [16], or whether we need to rely on non-collider experiments, such as low-threshold direct detection experiments [17,18], to make further progress. The model-independent method proposed in this work may provide a useful tool for future studies of these questions.
Appendix A. The case of s-channel resonances

We consider an s-channel mediator R that can be produced on resonance from SM initial states both at the LHC and in the early Universe. The mediator will be in equilibrium with the thermal bath if Γ_R > H(T ∼ M_R), where Γ_R and M_R denote the total width and mass of the resonance, respectively. If, however, the partial width for decays into the hidden sector is sufficiently small, Γ(R → hidden sector) < H(T ∼ M_R), decays of the mediator may populate the hidden sector without bringing it into equilibrium with the thermal bath. In this appendix we show that if Γ(R → yy) < H(T ∼ M_R) for a given hidden sector final state yy, it follows automatically that γ(T ∼ M_R) < 1 for any process xx → R → yy.
Let us consider a specific SM initial state xx. First of all, we make use of the narrow-width approximation to write

σ_xx→R→yy = σ_xx→R × BR(R → yy) .   (A.1)

The central observation is now that the cross section σ(xx → R) for the production process and the partial width Γ(R → xx) for the inverse process depend on the same matrix element and therefore differ only by phase space factors. This enables us to write

σ_xx→R(s) = c × 16π² / (N_c² M_R) Γ(R → xx) δ(s − M_R²) ,   (A.2)

where N_c counts the colour degrees of freedom of x and c is an order-unity numerical pre-factor. For a gg initial state and a scalar resonance, one finds for example c = 1/2 [19].
We can now substitute eq. (A.2) into eq. (1) to calculate an upper bound on the reaction rate for the 2 → 2 process. For T ∼ M_R, and making use of BR(R → xx) ≤ 1, we find

⟨σ_xx→R→yy v⟩ n_eq = c M_R² K_1(M_R/T) / (2 ζ(3) N_c T²) × BR(R → xx) Γ(R → yy) ≲ Γ(R → yy) < H(T ∼ M_R) .   (A.3)

In other words, for the case of an s-channel resonance the non-thermalisation constraint that we impose is necessary but not sufficient to ensure that the hidden sector does not thermalise with the SM. Imposing instead Γ(R → yy) < H(T ∼ M_R) would lead to even stronger bounds on N_LHC than what is shown in figure 1.
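The order-unity nature of the thermal prefactor relating the 2 → 2 reaction rate to the partial width Γ(R → yy) can be checked directly. The coefficient below follows the conventions of this appendix as reconstructed here (an assumption of this sketch), with c = 1/2 and N_c = 8 for a gg initial state and a scalar resonance:

```python
from scipy.special import kn, zeta

# Thermal prefactor c M_R² K₁(M_R/T) / (2 ζ(3) N_c T²) evaluated at T = M_R,
# where the M_R²/T² factor is unity.
c, Nc = 0.5, 8  # gg initial state, scalar resonance
prefactor = c * kn(1, 1.0) / (2 * zeta(3) * Nc)
```

The prefactor comes out well below unity, so the 2 → 2 reaction rate is indeed bounded by the partial width Γ(R → yy).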