One-loop Corrections to the Higgs Boson Invisible Decay in a Complex Singlet Extension of the SM

The search for dark matter (DM) at colliders is founded on the idea of looking for something invisible. There are searches based on production and decay processes in which DM may reveal itself as missing energy. If nothing is found, our best tool to constrain the parameter space of many extensions of the Standard Model (SM) with a DM candidate is the Higgs boson. As the measurements of the Higgs couplings become increasingly precise, higher-order corrections will start to play a major role. The tree-level contribution to the invisible decay width provides information about the portal coupling. Higher-order corrections also give us access to other parameters of the dark sector of the Higgs potential that are not present in the tree-level amplitude. In this work we focus on the complex singlet extension of the SM in the phase with a DM candidate. We calculate the one-loop electroweak corrections to the decay of the Higgs boson into two DM particles. We find that the corrections are stable and of the order of a few percent. The present measurement of the Higgs invisible branching ratio, BR$(H \to$ invisible $)<0.11$, already constrains the parameter space of the model at leading order. We expect that by the end of the LHC programme the experimental accuracy will require the inclusion of the electroweak corrections to the decay. Furthermore, the only competing process, direct detection, is shown to have a cross section below the neutrino floor.


Introduction
The search for dark matter (DM) has replaced the search for the Higgs boson as the main goal of particle physicists. In fact, since the Higgs boson was discovered at the Large Hadron Collider (LHC) by the ATLAS [1] and CMS [2] collaborations, and the Higgs couplings have been measured with great precision, attention has turned to the outstanding problems of the Standard Model (SM). The search for DM is certainly at the top of the list, especially because at this point we cannot even be sure that it comes in the form of an elementary particle. Therefore, even if collider physics is not the place to prove that a DM candidate exists, it can help us by hinting at particular directions, even if only by excluding the parameter space of particular models. The Higgs invisible decay measurements are probably among the best quantities to probe the dark sector of such models. The branching ratio of the Higgs into invisible particles is now bounded to below 11% by ATLAS [3]. This number will improve both in the next LHC run and in the high-luminosity stage. This increasing precision will take us further inside the dark sector of the models.
In this work we discuss the Higgs invisible decay in the Complex Singlet extension of the SM (CxSM), which amounts to the addition of a complex scalar singlet to the known SM fields while keeping the SM gauge symmetries. While the tree-level decay of the Higgs boson into DM involves only the portal coupling, the one-loop corrections to the decay give us access to the quartic coupling of the singlet field. Therefore, the one-loop result gives us a more complete understanding of the Higgs potential. A competing, and complementary, measurement is provided by the direct detection process. The DM-nucleon cross section is only relevant at one loop, due to a cancellation that renders the tree-level cross section proportional to the DM velocity and therefore negligible [4,5]. The one-loop corrections to the direct detection process were calculated in [6,7] and compared to the latest experimental results from XENON [8]. We will discuss the interplay between direct detection and the branching ratio of the invisible Higgs decay, including the electroweak corrections in both processes.
Our analysis will be performed taking into account the most relevant theoretical and experimental constraints on the model, both from colliders and from DM observations. We will then calculate the next-to-leading order (NLO) electroweak corrections to the invisible decay width of the SM-like Higgs boson using several renormalization schemes. Once the allowed parameter space is found, the NLO result will be compared with the leading order (LO) one. The final goal is to understand whether the NLO Higgs branching ratio into two DM particles can be larger than the experimentally measured value in some regions of the parameter space. Moreover, as new data become available, both in the next LHC run and in the high-luminosity stage, the Higgs coupling measurements will become more precise, and the theoretical calculations need to match this precision.
The outline of the paper is as follows. In section 2 we introduce the CxSM together with our notation. Section 3 is dedicated to the description of the different renormalization schemes used in this work. Section 4 discusses the experimental and theoretical constraints on the model. In section 5 the results are presented and discussed. Our conclusions are collected in section 6. Finally, there are two appendices: in the first, the results for the scalar pinched self-energies are presented; in the second, we discuss the minima of the CxSM potential.
The Model
In this section we introduce the version of the CxSM used in this work. The model is a simple extension of the SM by the addition of a complex singlet field with zero isospin and zero hypercharge. Being a singlet under the SM gauge group, the scalar field appears only in the Higgs potential. The SM Higgs couplings will, however, be modified by the rotation angle from the matrix that relates the scalar gauge eigenstates to their mass eigenstates.

The doublet field Φ and the singlet field S are defined as where H, S and A are real scalar fields and G + and G 0 are the Goldstone bosons of the W ± and Z bosons. Here v, v A and v S are the vacuum expectation values (VEVs) of the corresponding fields and can, in general, all be non-zero, in which case mixing between all three scalar fields arises. We will, however, focus on a model where a DM candidate is generated by forcing the potential to be invariant under a symmetry that is unbroken by the vacuum. We choose to impose invariance of the potential under two separate Z 2 symmetries acting on S and A, that is, S → −S and A → −A. The resulting renormalizable potential is where all constants are real. By choosing v A = 0, the A → −A symmetry remains unbroken and A is stable, becoming the DM candidate of the model. The other Z 2 symmetry is broken, since v S ≠ 0, which leads to mixing between S and H.

The mass eigenstates of the CP-even fields h i (i = 1, 2) relate to the gauge eigenstates H and S through where the rotation matrix is given by The mass matrix in the gauge basis (H, S) is given by where the tadpole parameters T 1 and T 2 are defined via the minimisation conditions; at tree level, the minimum conditions are T i = 0 (i = 1, 2). The mass of the DM candidate A is given by while the remaining mass eigenstates are obtained via Therefore, the scalar spectrum of the CxSM consists of two Higgs bosons, h 1 and h 2 , one of which is the SM-like Higgs boson with a mass of 125 GeV, and one DM scalar, which we call A.
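For concreteness, a standard parametrization of the fields and of the potential consistent with the two Z 2 symmetries is the following (a reconstruction in commonly used conventions; the normalization of the couplings λ, δ 2 , b 1 , b 2 , d 2 is a choice and should be checked against the equations referenced in the text):

```latex
\Phi = \begin{pmatrix} G^{+} \\ \tfrac{1}{\sqrt{2}}\,\big(v + H + i\,G^{0}\big) \end{pmatrix},
\qquad
\mathbb{S} = \tfrac{1}{\sqrt{2}}\,\big(v_S + S + i\,(v_A + A)\big),
\\[2ex]
V = \frac{m^{2}}{2}\,\Phi^{\dagger}\Phi + \frac{\lambda}{4}\,\big(\Phi^{\dagger}\Phi\big)^{2}
  + \frac{\delta_{2}}{2}\,\Phi^{\dagger}\Phi\,|\mathbb{S}|^{2}
  + \frac{b_{2}}{2}\,|\mathbb{S}|^{2} + \frac{d_{2}}{4}\,|\mathbb{S}|^{4}
  + \left(\frac{b_{1}}{4}\,\mathbb{S}^{2} + \mathrm{h.c.}\right),
```

with b 1 real, so that both S → −S and A → −A remain symmetries of the potential.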
Since the mixing between the two scalars is introduced only via the rotation angle, the couplings of the two Higgs bosons to the remaining SM particles are modified by the same factor k i defined as
where g H_SM SM SM denotes the coupling of the SM Higgs boson H_SM to a pair of SM particles. With these definitions, the parameters of the potential can now be written as functions of our chosen set of input parameters.
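The relation between the gauge and mass bases can be illustrated numerically. The sketch below (function name and conventions are ours, not the paper's code) diagonalizes a symmetric 2×2 mass matrix with the rotation R(α) = ((cos α, sin α), (−sin α, cos α)), returning the mixing angle and the squared mass eigenvalues:

```python
import math

def diagonalize_2x2(m11, m12, m22):
    """Mixing angle alpha and squared-mass eigenvalues of a symmetric
    2x2 mass matrix M, such that R(alpha) M R(alpha)^T = diag(m1sq, m2sq)
    with R = [[cos a, sin a], [-sin a, cos a]].

    Returns (alpha, m1sq, m2sq) with m1sq >= m2sq for this choice of alpha.
    """
    alpha = 0.5 * math.atan2(2.0 * m12, m11 - m22)
    avg = 0.5 * (m11 + m22)                      # mean of the diagonal
    rad = math.hypot(0.5 * (m11 - m22), m12)     # half the eigenvalue split
    return alpha, avg + rad, avg - rad
```

For a diagonal input matrix the angle vanishes and the diagonal entries are returned unchanged, as expected.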

Renormalization
Our goal is to calculate the decay width of the Higgs bosons into a pair of DM particles, h i → AA, at NLO. Since A couples only to the two Higgs bosons h i , we just need to renormalize the scalar sector. With the trilinear h i couplings to the DM particles given by and according to our choice of input parameters, we need to renormalize the masses of the two scalars h i , the mass of the DM particle, m A , the singlet VEV v S and the mixing angle α. Besides these parameters, we also need to renormalize the h i and A fields and the tadpoles in order to work with finite Green functions. We start by formally defining the relation between the bare and the renormalized quantities as where δβ is the counterterm of the physical quantity β and β 0 is the bare quantity. All bare fields φ 0 are related to their renormalized versions via where Z φ is the field strength renormalization constant.
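At one-loop order the field strength renormalization constant can be expanded, so that the two relations just described take the familiar form (standard conventions, written out here for reference):

```latex
\beta_0 = \beta + \delta\beta,
\qquad
\varphi_0 = \sqrt{Z_\varphi}\,\varphi
          = \left(1 + \tfrac{1}{2}\,\delta Z_\varphi\right)\varphi
            + \mathcal{O}(\text{two-loop}).
```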

On-Shell Renormalization of the Scalar Sector
We start by calculating the mass and field counterterms in the scalar sector using the on-shell scheme. The renormalization constants for the DM particle are defined as where Z A is the field strength renormalization constant, D 2 A,0 = m 2 A,0 and δD A is the mass counterterm for A.
The two scalars h 1 and h 2 again mix at one-loop order and therefore both the field renormalization constants and the mass counterterms are defined in matrix form by with D 2 hh,0 = diag(m 2 h1,0 , m 2 h2,0 ) and the matrices δZ hh and δD 2 hh defined as The on-shell renormalization conditions lead to the following expressions for the counterterms of the scalar fields h i where Σ h i h i denotes their self-energies. Similarly, the expressions for the DM field A read The diagonal terms of δD 2 hh and δD 2 A are related to the mass counterterms and to the corresponding tadpoles. The off-diagonal terms are related to the tadpoles, to be discussed in the next section.
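For reference, in the usual on-shell conventions (a sketch with the standard sign choice for the self-energies; the tadpole contributions are handled as described in the next section) the conditions take the form:

```latex
\delta Z_{h_i h_i} = -\,\mathrm{Re}\,
  \frac{\partial \Sigma_{h_i h_i}(p^2)}{\partial p^2}\bigg|_{p^2 = m_{h_i}^2},
\qquad
\delta Z_{h_i h_j} = \frac{2}{m_{h_i}^2 - m_{h_j}^2}\,
  \mathrm{Re}\,\Sigma_{h_i h_j}(m_{h_j}^2) \quad (i \neq j),
\\[2ex]
\delta m_{h_i}^2 = \mathrm{Re}\,\Sigma_{h_i h_i}(m_{h_i}^2),
\qquad
\delta Z_A = -\,\mathrm{Re}\,
  \frac{\partial \Sigma_{AA}(p^2)}{\partial p^2}\bigg|_{p^2 = m_A^2},
\qquad
\delta m_A^2 = \mathrm{Re}\,\Sigma_{AA}(m_A^2).
```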

Tadpole Renormalization
Tadpole renormalization is essentially the way we choose the VEVs at one-loop order such that the minimum conditions hold. Equivalently, the terms linear in the scalar fields have to vanish at one-loop order. The VEV chosen to fulfil this condition [9,10] is the true VEV of the theory. We will follow the scheme proposed by Fleischer and Jegerlehner [9] for the SM, with the goal of rendering all counterterms related to physical quantities gauge independent. The scheme was applied to various extensions of the SM (see e.g. [11,12]). For the CxSM, a brief description follows.

We start by defining the true VEVs by performing the shifts which lead to the following shifts in the tadpole parameters at NLO The minimum equations lead to the following relations between the shifts in the VEVs and the tadpole counterterms with the relation between the tadpole counterterms δT 1,2 in the gauge basis and those in the mass basis, δT h 1,2 , given by The shift introduced in the VEVs can be applied to the mass matrix from Eq. (5). The additional terms resulting from that shift read The last term in Eq. (24) vanishes because, after the shift, the tadpole conditions can be applied again. The mass matrix can now be rotated into the mass basis and all counterterm shifts can be applied, leading to Using Eqs. (22) and (23) as well as the relations in Eq. (11) between the potential parameters and the input parameters, we can express the shifts ∆D 2 h i h j (i, j = 1, 2) as with the trilinear Higgs couplings given by In terms of Feynman diagrams, this can be seen as the contribution of the tadpole diagram (times a factor i, at vanishing momentum transfer) to the propagators of h 1 and h 2 , which was not included previously in the definition of the self-energies.
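Schematically, in common notation (not necessarily the exact equations of this work), the defining condition of the Fleischer–Jegerlehner scheme is that the VEV shifts are chosen such that the renormalized one-loop tadpoles vanish:

```latex
\hat{T}_{h_i} \;\equiv\; T_{h_i}^{(1)} - \delta T_{h_i} \;=\; 0
\quad\Longrightarrow\quad
\delta T_{h_i} \;=\; T_{h_i}^{(1)} \qquad (i = 1, 2),
```

so that all tadpole contributions are shifted into the self-energies, and the counterterms of physical parameters remain gauge independent.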
We define and the renormalized self-energies take the form This shift of contributions from the mass counterterm matrix into the self-energy corresponds to the inclusion of the tadpole diagrams into the self-energy. With this change in the renormalized self-energy the following results for the counterterms hold Following a similar reasoning, the counterterms of the field A can be expressed as

Renormalization of the Mixing Angle α
There are two parameters left to be renormalized. We start with the rotation angle α. Previous works [11,13] led us to the conclusion that a scheme that is simultaneously stable (in the sense that the NLO corrections do not become unreasonably large) and gauge independent can be built by combining the one proposed in Refs. [14,15] with the gauge dependence handled via the pinch technique [16,17]. The scheme proposed in [14,15] introduces a shift in α, the angle of the rotation matrix R α , and, by relating it to the field renormalization matrix, leads to the following counterterm for α, The result is model independent; it assumes only that exactly two fields mix. This relation can now be expressed in terms of self-energies as This counterterm turns out to be gauge dependent. This in itself would not be a problem if the complete amplitude for the process were gauge independent, which is not the case. There is, however, a procedure to isolate this gauge dependence in a systematic and consistent way, known as the pinch technique [16][17][18][19]. After applying the pinch technique, the pinched self-energies are defined by adding the additional contributions from the pinch technique to the self-energies. This results in The loop integral B 0 and the factor O ij as well as Σ add h i h j (p 2 ) are defined in App. A. Note that the expression with ξ = 1 does not mean that a specific gauge has been chosen: the additional terms together with the tadpole self-energies yield a gauge-independent result which can simply be written in that form. We can now define a gauge-independent counterterm for α, for which two different scales will be chosen:
• Setting the external momenta to the respective OS masses, p 2 = m 2 h i , called the OS pinched scheme.
• Setting the external momenta to the average of the squared masses, p * 2 = (m 2 h 1 + m 2 h 2 )/2, called the p * pinched scheme.
In the p * pinched scheme the additional gauge-independent terms from the pinch technique vanish, so that the expression for the mixing angle counterterm becomes more compact. We can write the counterterm for α in the p * scheme and in the OS pinched scheme as With these definitions, δα is gauge independent by construction and the problem of gauge dependence is solved.
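In terms of the pinched self-energies Σ pinch, the two counterterm choices can be written as follows (a sketch in the conventions of Refs. [11,14,15]; the overall sign depends on the definition of R α):

```latex
\delta\alpha^{\,p^*} =
  \frac{\mathrm{Re}\,\Sigma^{\mathrm{pinch}}_{h_1 h_2}(p_*^2)}{m_{h_1}^2 - m_{h_2}^2},
\qquad
p_*^2 = \frac{m_{h_1}^2 + m_{h_2}^2}{2},
\\[2ex]
\delta\alpha^{\,\mathrm{OS}} =
  \frac{\mathrm{Re}\left[\Sigma^{\mathrm{pinch}}_{h_1 h_2}(m_{h_1}^2)
      + \Sigma^{\mathrm{pinch}}_{h_1 h_2}(m_{h_2}^2)\right]}
       {2\,\big(m_{h_1}^2 - m_{h_2}^2\big)}.
```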

Renormalization of v S
The last parameter to be renormalized is the VEV v S of the scalar singlet. We will be using a process-dependent scheme and also a variant thereof in which the conditions are imposed at the level of the amplitude rather than of the physical process, called the zero-external-momentum (ZEM) scheme [13]. The latter, although less stable, allows us to cover the entire parameter space because it is not subject to kinematic restrictions.

Process-dependent Scheme
The process to be used needs a coupling constant proportional to v S and, if we want to use a decay, the only possibilities in the CxSM are h 1 → AA and h 2 → AA. Therefore one of the processes will be used to extract the singlet VEV renormalization constant and, because we want to use the measurement of the SM-like Higgs invisible width, the decay of the second Higgs boson will be used for that purpose. Note, however, that either of the two Higgs bosons can be the SM-like one, while the other can be either lighter or heavier than 125 GeV. Hence, there are two scenarios to be analysed and we have to find δv S for both. In the process-dependent scheme the counterterm is calculated by imposing Γ NLO h i →AA = Γ LO h i →AA , that is, the LO and NLO decay widths are equal. This in turn leads to where A LO h i →AA is the amplitude of the process h i → AA at LO and A NLO h i →AA is the amplitude at NLO. Because the LO amplitude is just a coupling constant, the expression simplifies further to The NLO contribution A NLO h i →AA can be written in terms of the vertex corrections A VC h i →AA and the vertex counterterm, such that where i, j ∈ {1, 2} but i ≠ j. With the trilinear h i couplings to the DM particles λ h i AA given in Eq. (12), we have Finally, the expression for the counterterm δv S reads for the two processes. These counterterms are gauge independent and lead to UV-finite results. The renormalization scheme also leads to stable results. Therefore, the only drawback is the kinematic restriction, which forces us into a restricted region of the parameter space. We discuss a solution to avoid this restriction in the next section.

ZEM Scheme
The ZEM scheme was introduced in [13] to avoid kinematic restrictions on the parameter space, and we will now apply it to the CxSM. It is a simple variant of the process-dependent scheme in which the squares of all external momenta are set to zero at the level of the amplitude, thereby eliminating the kinematic constraint. Choosing the same physical processes, the condition now reads where p 2 = 0 means that all squared external momenta are set to zero. There is another difference relative to the process-dependent scheme: the NLO leg corrections are not canceled by the corresponding counterterms, because the leg counterterms are defined through the OS scheme. Therefore, Eq. (46) now takes the form Again, this equation can be solved for the two processes h 1 → AA and h 2 → AA to obtain the counterterms We now have to check whether the final result is finite and gauge independent. The question of gauge dependence in the alternative tadpole scheme is always related to the wave function renormalization constants. A thorough analysis leads to the conclusion that, although finite, the result is gauge dependent due to the term for the corresponding process h i → AA. The problem is solved by simply replacing the self-energies in the wave function renormalization constants in Eq. (48) by their pinched versions. In this way δv S becomes gauge independent. This change in the δZ h i h j is, however, only applied to the terms appearing in Eq. (48), where the ZEM counterterm of v S is defined, and not anywhere else; otherwise, a gauge dependence could be reintroduced in the overall amplitude of the renormalized process. Therefore, the resulting counterterms for v S in this modified ZEM scheme read The renormalization is now complete and, before moving to the presentation of the NLO results, we will discuss the constraints imposed on the model.

Constraints on the Model
The constraints imposed to find the allowed parameter space are implemented in ScannerS [20][21][22]. In this section we will just briefly review the most relevant theoretical and experimental constraints considered.

Theoretical Constraints
• Boundedness from Below
The conditions for the potential to be bounded from below are easily obtained by defining Φ † Φ ≡ x and |S| 2 ≡ y and writing the quartic terms of the potential in these variables. Forcing the potential to be bounded from below in all directions leads to the following conditions at tree level
• Perturbative Unitarity Constraints
Following [23], we force the eigenvalues of the scattering matrix M 2→2 of all possible two-to-two scalar scattering processes to obey leading to
• Stability of the Vacuum
In the CxSM, because of SU (2) invariance, the most general vacuum structure is obtained with the following expectation values for the fields Therefore, the value of the tree-level potential at each vacuum configuration is given by We have chosen to work in the configuration where the potential is V (v, v S , 0) in order to have one DM candidate. In App. B we show that choosing the vacuum configuration with non-zero v and v S (and v A = 0) to be a minimum automatically implies that this configuration is the absolute minimum at tree level.
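The boundedness conditions can be checked numerically. The sketch below assumes the quartic part of the potential takes the form V 4 = (λ/4)x² + (δ 2 /2)xy + (d 2 /4)y² with x = Φ † Φ ≥ 0 and y = |S| 2 ≥ 0 (the normalization is an assumption of this sketch); positivity of this quadratic form on the first quadrant gives the conditions coded here:

```python
import math

def bounded_from_below(lam, d2, delta2):
    """Tree-level boundedness-from-below check for a quartic potential
    V4 = lam/4 * x**2 + delta2/2 * x*y + d2/4 * y**2 with x, y >= 0.

    A quadratic form a*x^2 + b*x*y + c*y^2 is positive on the first
    quadrant iff a > 0, c > 0 and b > -2*sqrt(a*c); with a = lam/4,
    b = delta2/2, c = d2/4 this reduces to the conditions below.
    """
    return lam > 0 and d2 > 0 and delta2 > -math.sqrt(lam * d2)
```

In particular, a negative portal coupling δ 2 is allowed as long as its magnitude does not exceed √(λ d 2).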

Experimental Constraints
Before moving to the experimental constraints, we note that the electroweak ρ parameter, ρ = m 2 W /(c 2 w m 2 Z ), where m W,Z are the masses of the W and Z bosons, respectively, and c w denotes the cosine of the Weinberg angle, is equal to 1 at tree level, as in the SM. Also, no tree-level flavour-changing neutral currents are introduced, because the gauge singlet couples neither to fermions nor to gauge bosons in the gauge basis.
We will now briefly review the experimental constraints implemented in ScannerS and used for the generation of parameter points.

• S, T, U precision parameters
The additional scalar fields in the CxSM contribute to the gauge boson self-energies, which implies deviations from the SM predictions. These deviations have to be within experimental bounds, i.e. ScannerS compares the model predictions with the electroweak precision results from experiment. The program then applies a consistency check on the S, T, U parameters [24] at 95% confidence level to verify that the constraints are fulfilled.
• Compatibility with the LHC Higgs data and exclusion bounds There are two important constraints coming from colliders. The most relevant one comes from the LHC measurements of the discovered Higgs boson; the searches for additional scalars also play a role in restricting the parameter space of the model. ScannerS enforces these bounds through its interfaces with HiggsSignals [25,26] and HiggsBounds [27,28]. Agreement of the signal rates of the SM-like Higgs boson of the CxSM with the observations at the 2σ level is checked by HiggsSignals-2.6.1. Through HiggsBounds-5.9.0, the exclusion bounds from searches for extra scalars are taken into account.
• DM relic density The CxSM has a scalar DM candidate and therefore the predicted DM relic density of this model should not exceed the measured value. Smaller values are not excluded since they allow for additional contributions coming from other sources. ScannerS is interfaced with the program package MicrOMEGAs [29] to include this constraint from the relic density.
• DM direct detection As previously stated, the DM-nucleon cross section is only relevant at one-loop order due to a cancellation that renders the tree-level cross section proportional to the DM velocity and therefore negligible [4,5]. However, the one-loop corrections to the DM-nucleon spin-independent cross section have to be below the present experimental limit from XENON1T [8], as discussed in [6,7]. We will come back to this important constraint in the next section.

Higgs Decay into Dark Matter
The CxSM has two CP-even scalars h 1 or h 2 and any of them can play the role of the 125 GeV SM-like Higgs boson denoted h 125 in the following. The non SM-like Higgs can be either heavier or lighter than 125 GeV. In order to optimize the analysis we fixed h 1 to always be the lightest of the two and considered two distinct scenarios, • m h 1 = m h 125 (scenario I): the width is calculated from h 1 → AA and the process h 2 → AA is chosen for the renormalization of v S .
• m h 2 = m h 125 (scenario II): the width is calculated from h 2 → AA and the process h 1 → AA is chosen for the renormalization of v S .
The LO decay width is given by while the NLO expression can be written as with λ(x, y, z) = x 2 + y 2 + z 2 − 2xy − 2xz − 2yz and A LO and A NLO denoting the LO and NLO amplitudes, respectively. The LO amplitude is simply the coupling constant and therefore the decay width takes the form where both h 1 and h 2 can be the SM-like Higgs boson h 125 . For the NLO amplitude we need to compute the vertex corrections together with the counterterm contributions. The vertex corrections are the sum of all one-particle-irreducible contributions at one-loop order, while the vertex counterterm can be read off the Lagrangian, yielding where i, j ∈ {1, 2} but i ≠ j. We finally arrive at the overall NLO contributions for the processes which will be calculated numerically using Eq. (57). The value obtained for the width depends on the renormalization scheme used, which will be discussed in the next section. We have explicitly checked that for all scenarios the NLO width is UV-finite and gauge independent.
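The LO width can be evaluated directly. In the sketch below the LO amplitude is taken to be the trilinear coupling λ h i AA (assumed to carry mass dimension one, i.e. GeV), combined with the standard two-body phase space and a symmetry factor 1/2 for the identical scalars in the final state; for equal final-state masses, √λ(m 2 , m 2 A , m 2 A )/m 2 reduces to the velocity factor β = √(1 − 4m 2 A /m 2 ):

```python
import math

def gamma_lo(m_h, m_a, lam_haa):
    """LO decay width for h_i -> AA in GeV.

    Gamma = |A_LO|^2 * beta / (32 * pi * m_h), with A_LO = lam_haa and
    beta = sqrt(1 - 4 m_A^2 / m_h^2); the 1/2 symmetry factor for the
    two identical final-state scalars is included in the 1/(32 pi).
    """
    if m_h <= 2.0 * m_a:
        return 0.0  # decay kinematically forbidden
    beta = math.sqrt(1.0 - 4.0 * m_a**2 / m_h**2)
    return lam_haa**2 * beta / (32.0 * math.pi * m_h)
```

The width vanishes at the kinematic threshold m h = 2 m A, which is the origin of the restriction on the process-dependent scheme discussed above.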

Allowed Parameter Space
For our numerical investigation we performed a scan in the CxSM parameter space using ScannerS [20][21][22] and kept only those points that are compatible with the theoretical and experimental constraints described above. The scan ranges for the input parameters are summarized in Tab. 1. The DM mass has to be below 62.5 GeV for h 125 → AA to be kinematically allowed. The SM input parameters are taken from [44] and their values are given in Tab. We have also used the program BSMPT [45,46] to check for the possibility of a strong first-order EW phase transition (SFOEWPT); we found that in the parameter space probed there were no points with a SFOEWPT. Before starting the discussion of the allowed parameter space, we again remind the reader that there is a kinematical constraint that applies to the process-dependent scheme but not to the ZEM scheme for the counterterm δv S . As previously discussed, two of the six parameters are fixed, one by G F and the other by the 125 GeV Higgs boson mass. This leaves us with the four input parameters m s , m A , α, v S , where m s denotes the mass of the non-125 GeV Higgs boson. In Fig. 1 we show correlations between α, v S and m s . In the upper row a strong correlation can be seen between α and v S . This is to be expected, since all SM couplings of the h 125 Higgs boson carry an additional factor c α in scenario I or s α in scenario II. These couplings are very well measured and only small deviations are allowed. Thus, the additional factor has to be close to 1 and α has to be close to 0 or ± π 2 , respectively. Moreover, the parameters α and v S are connected through the decay width of the 125 GeV Higgs boson into DM particles. As can be seen in Eq. (59), the LO decay width in scenario I is proportional to a ratio involving α and v S . Thus, in order for the LO branching ratio of the 125 GeV Higgs boson into DM particles in the CxSM not to exceed the experimental limit [3], this ratio has to be small. Therefore, if v S is small, α has to be small.
This behavior can be seen in Fig. 1. In scenario II the LO decay width is proportional to Therefore, if v S is small, α has to be close to ± π 2 which can be seen in Fig. 1 as well. One should also mention that there is a hard bound on α coming from the Higgs coupling measurements.
The plots in the lower row of Fig. 1 show the relation between v S and m s . The two parameters m s and v S can be related via d 2 . Because in scenario I m s = m h 2 and α cannot deviate much from zero, we can write Using again the small-angle approximation in Eq. (11), λ and δ 2 can be expressed as With these simplified expressions, the fourth constraint in Eq. (54) results in where d 2 was taken to be positive. This relation explains the line in Fig. 1 (lower left) for scenario I, showing that m s and v S are linearly related, with the correctly predicted slope. The same calculation applies to scenario II. In this case, m s = m h 1 and the angle α is close to ± π 2 . The conclusion is again that m s and v S are linearly related. For example, setting m s to the highest possible value in this scenario, i.e. about 125 GeV, v S has to be at least 35 GeV. In this scenario only a small part of the parameter space is constrained, but in Fig. 1 (right) we see that the far left side of the plot indeed contains no parameter points in scenario II.

Fig. 2 shows the parameter space spanned by m s and m A . The blue points (scenario II) are the ones for which the kinematical constraint (due to the process-dependent scheme) appears. As expected, the constraint is not there for scenario I (red points). In scenario I the DM mass m A prefers values close to 125/2 GeV, whereas in scenario II (blue points) m A has values close to half of m s , or also close to half of m h 125 in the ZEM scheme, where the kinematic constraint 2m A < m s from the renormalization condition on v S ceases to apply. This behavior results from the DM constraints applied to the DM mass m A . To visualize the effect of the DM constraints, we show in green the points that passed all constraints except the dark matter ones. The reason for these constraints is the requirement that the relic density obtained in the CxSM must not exceed the observed value.
Therefore, the thermal annihilation of two DM particles A via one of the scalar particles h i must be efficient enough. This annihilation is enhanced close to threshold, so that the DM mass m A is preferably close to half of 125 GeV or half of m s . In Fig. 3 (left) we present a histogram showing the frequency of points as a function of the relic density for both scenarios. This plot clearly shows that there are points that saturate the relic density, but most of the points have a low Ω cdm h 2 and would need other DM candidates. The percentage of points within −5σ and +2σ of the experimental central value of Ω cdm h 2 is around 1%, and the preferred values of the parameters correspond to the two resonant regions already discussed. In the right panel we present the relic density as a function of the DM mass, with m s indicated by the colour bar, for the scenario where m h 2 = 125 GeV. There are points that saturate the relic density in the entire DM mass range probed. We clearly see that these points all have a DM mass that is half of m s or half of m h 2 . There are also some outliers that saturate the relic density in the region where m s is roughly between 30 and 50 GeV, for a DM mass above 30 GeV. For the other scenario, since only the case of half of 125 GeV is possible, all values of m h 2 can in principle saturate the relic density. In Fig. 4 we show a histogram of the frequency of the variable α, without and with the relic density constraint, for scenario I. Without the DM constraints there is a bound on α that forces it to be close to zero, related to the already discussed bounds from colliders. The evolution of the DM number density is governed by the Boltzmann equation, dn/dt + 3Hn = −⟨σv⟩(n 2 − n 2 eq ), where n is the DM number density, H is the Hubble parameter, ⟨σv⟩ is the velocity-averaged annihilation cross section and n eq is the equilibrium number density of DM particles in the photon bath.
The annihilation cross section σ(AA → SM SM), where SM denotes SM particles, is proportional to sin α cos α. Hence, if either sin α → 0 or cos α → 0, we get ⟨σv⟩ → 0, and either no freeze-out occurs or the relic density is extremely high at the end of freeze-out.
The interesting feature is that as we move closer to the limit where the couplings are all SM-like (α ≈ 0 in scenario I) we lose the DM candidate because of the DM constraints. This is not surprising: in this limit the portal coupling vanishes and freeze-out is no longer possible.
Let us now move to the last constraint coming from DM, the direct detection process. Since we allow the DM candidate not to saturate the relic density, we need to define a DM fraction, f AA = (Ωh 2 ) A /(Ωh 2 ) obs DM , where (Ωh 2 ) A is the calculated relic density for each point in the CxSM and (Ωh 2 ) obs DM is the central value of the experimental measurement. In the comparison with the data we are actually using an effective DM-nucleon cross section, defined as σ eff = f AA σ AN , where f AA and σ AN , the direct detection DM-nucleon cross section, are calculated with MicrOMEGAs. This is because the experimental limits assume that the DM candidate makes up all of the DM abundance. This constraint is particularly relevant because it directly probes the portal coupling, just like the invisible decay. Even if, as we have already discussed, the DM-nucleon cross section is only relevant at one-loop order, it could be that the experimental bound from XENON1T [8] would provide a stronger restriction than the one from the invisible Higgs decay. It turns out, however, that it does not. In Fig. 5 we present the effective spin-independent DM-nucleon cross section [6,7] as a function of the DM mass for scenario I (left) and scenario II (right). The neutrino floor [47] is also shown as a grey shaded region; for the range of masses considered it lies below a line of about 10⁻⁴⁸ cm². We can see that the points are not only below the XENON1T line but also below the neutrino floor, and therefore have extremely small chances of being detected directly. Therefore, in the near future, and perhaps also in the far future, information about the dark sector of the CxSM will come only from the LHC. This shows the importance of taking into account the radiative corrections to the invisible Higgs decay.
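The rescaling applied in the direct-detection comparison is a simple multiplication, sketched below (function and variable names are illustrative; the default (Ωh 2) obs ≈ 0.120 is the Planck central value, an assumption of this sketch):

```python
def effective_dd_xsec(omega_h2_A, sigma_AN, omega_h2_obs=0.120):
    """Effective spin-independent DM-nucleon cross section:
    sigma_eff = f_AA * sigma_AN, with f_AA = (Omega h^2)_A / (Omega h^2)_obs.

    Points that under-produce the relic density are penalized accordingly,
    since the experimental limit assumes the candidate makes up all of
    the local DM density.
    """
    f_AA = omega_h2_A / omega_h2_obs
    return f_AA * sigma_AN
```

A point producing only 10% of the observed abundance thus has its cross section suppressed by a factor of 10 before comparison with the XENON1T limit.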

Numerical Results and Analysis of the SM Higgs Decay into DM
In the following, we present and discuss the LO and NLO decay widths for all allowed points in the parameter space, for the two scenarios. There are a total of four schemes, corresponding to the combinations of the choices of the counterterms $\delta\alpha$ ($p_*$ pinched and OS pinched) and $\delta v_S$ (process-dependent and ZEM). We display results for the relative size of the NLO decay width with respect to the LO result, defined as
$$\Delta\Gamma = \frac{\Gamma^{\rm NLO} - \Gamma^{\rm LO}}{\Gamma^{\rm LO}}\,. \qquad (71)$$

Figure 6: $\Delta\Gamma$ plotted against the scalar mass $m_S$, where $h_{125} = h_1$ (red points) and $h_{125} = h_2$ (blue points). All combinations of the possible renormalization schemes are shown. Interesting sections (indicated by the red band) of the two plots in the second row are also shown in more detail.
In Fig. 6 we present $\Delta\Gamma$ as a function of $m_S$ for the two scenarios and for the four possible combinations of renormalization conditions. The relative NLO corrections in scenario II (blue points) are quite small in the process-dependent scheme (denoted by 'pd' in the plot), but become comparatively large in the ZEM scheme with respect to scenario I (red points). In both scenarios, $\Delta\Gamma$ is barely affected by the choice of the renormalization scheme of $\alpha$. Larger differences occur when changing the renormalization scheme of $v_S$ from the process-dependent to the ZEM scheme, but the results still remain relatively stable in scenario I. Note that the peaks in scenario I in the ZEM scheme, which induce larger $\Delta\Gamma$, are related to kinematic thresholds of the $B_0$ and $C_0$ functions of the loop integrals. They are better visualized in the zoomed-in insets of Fig. 6. In scenario II, the change from the process-dependent to the ZEM scheme has a large effect: here, $\Delta\Gamma$ can range from $-50\,\%$ to $10\,\%$, whereas in the process-dependent scheme $\Delta\Gamma$ varies between $-3\,\%$ and $3\,\%$. Thus, the ZEM scheme can result in relatively large corrections at NLO. These large corrections, however, only occur in a small number of points, namely those that would be rejected by the additional kinematic constraint that is effective in the process-dependent scheme in scenario II. They hence only occur in the ZEM scheme.
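The origin of such threshold peaks can be illustrated with the generic two-body phase-space factor that controls the imaginary part of a $B_0$ function (a sketch of the generic behaviour, not the actual CxSM loop functions):

```python
# Generic two-body phase-space factor illustrating how loop functions
# develop an imaginary part at threshold (a sketch, not the CxSM loops).
import math

def beta(s, m):
    """sqrt(1 - 4 m^2 / s) above the threshold s = 4 m^2, else 0."""
    x = 1.0 - 4.0 * m**2 / s
    return math.sqrt(x) if x > 0.0 else 0.0

m_w = 80.4  # e.g. a W boson in the loop: threshold at s = (2 m_W)^2
below = beta((2 * m_w - 1.0)**2, m_w)  # just below threshold
above = beta((2 * m_w + 1.0)**2, m_w)  # just above threshold
# The factor switches on with an infinite slope at s = 4 m^2, which is
# what produces the sharp peaks in the NLO corrections near thresholds.
print(below, above)
```

The square-root behaviour of $\beta$ at $s = 4m^2$ is what makes the corrections spike in narrow mass windows while remaining stable elsewhere.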
One further remark is in order here. One has to be careful when directly comparing the results for $\Delta\Gamma$ in the different renormalization schemes. A consistent comparison would require the proper conversion of the input parameters when going from one scheme to the other. This would require the implementation of the corresponding conversion formulae, which is beyond the scope of this paper. Our primary goal here is to show which sizes of relative corrections can be expected at all in the various schemes. Apart from the ZEM scheme they are all relatively small and numerically stable in the sense defined above. In Fig. 7 we present $\Delta\Gamma$ as a function of $m_S$ with all other input parameters fixed. The resulting scenarios do not necessarily fulfil all theoretical and experimental constraints any more, but are shown here for illustration. The peaks that can be seen in the figure originate from thresholds in the loop functions and depend on the chosen scheme, since the two schemes used for the derivation of $\delta\alpha$ evaluate the self-energies at different scales. For example, the peak in the OS pinched scheme seen in Fig. 7 at $m_S \equiv x_{\rm OS} = 250$ GeV appears in the $p_*$ pinched scheme at $m_S \equiv x_{p_*} = 330$ GeV, because in the $p_*$ pinched scheme the self-energies are evaluated at the mean of the scalar masses. The peaks only occur in scenario I, because most of the SM masses entering the calculation (e.g. the W and Z boson masses) are of the order of 100 GeV. The purpose of this analysis is to improve the precision of the calculation of the Higgs invisible decay width so that it can be used to constrain the parameters of the dark sector. The current observed limit on the branching ratio of the 125 GeV Higgs boson decaying into invisible particles is given by [3]
$$\mathrm{BR}(h_{125} \to \text{invisible}) < 0.11^{+0.04}_{-0.03}\,,$$
at 95\,\% confidence level.
In order to compare with this result the calculated branching ratio is needed, which in turn means that we need the total decay width of the 125 GeV Higgs boson in the CxSM including the NLO EW corrections. Since the corrections are not available for all decays in the model, we can only estimate the branching ratio using the total decay width of the 125 GeV Higgs boson in the SM without EW corrections 2, which is taken from [48,49]. In order to translate this decay width into the CxSM set-up it is multiplied by the appropriate squared angular factor $k_i^2$, where the index $i$ is chosen according to the mass scenario, and the NLO $h_{125} \to AA$ width is added to obtain the total decay width in the CxSM. Furthermore, in scenario II the 125 GeV Higgs boson is the heavier of the two scalar particles ($h_{125} \equiv h_2$). If $h_1$ is light enough, the decay $h_2 \to h_1 h_1$ is also allowed and is added to the total decay width. Thus, the LO and approximate NLO branching ratio of the decay $h_{125} \to AA$ is given by
$$\mathrm{BR}^{\rm NLO}(h_{125} \to AA) = \frac{(1+\delta)\,\Gamma^{\rm LO}(h_{125} \to AA)}{k_i^2\,\Gamma^{\rm SM}_{\rm tot} + (1+\delta)\,\Gamma^{\rm LO}(h_{125} \to AA) + \Gamma(h_{125} \to h_1 h_1)}\,,$$
where $\delta$ is defined as
$$\delta = \frac{\Gamma^{\rm NLO} - \Gamma^{\rm LO}}{\Gamma^{\rm LO}}\,,$$
and the LO branching ratio is obtained by setting $\delta = 0$. This expression is approximate in the sense that the NLO EW corrections are only included in the Higgs-to-invisible decay but not in the SM-like CxSM Higgs decays into SM particles. It is justified, however, if the EW corrections to these decay widths are small compared to the EW corrections to the $h_{125} \to AA$ decay 3. Moreover, for a better approximation the NLO corrections to the decay $h_{125} \to h_1 h_1$ would have to be included as well, unless their contribution to the total width is negligibly small. In Fig. 8 the calculated approximate NLO branching ratios for all generated parameter points are displayed versus the corresponding LO values. The experimental limit on the branching ratio is shown as well. However, the limit is only indicated for the NLO result, since the parameter points are generated with respect to the limit at LO. Almost all parameter points have an NLO branching ratio below the experimental limit.
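A minimal numerical sketch of this branching-ratio construction, assuming an SM total width of roughly 4.1 MeV for a 125 GeV Higgs boson; the width values, the assumed 5 % NLO correction and the helper name are hypothetical, not scan points:

```python
# Sketch of the approximate (N)LO branching ratio h125 -> AA described
# in the text. Width values and the helper name are hypothetical.

def br_invisible(gamma_AA, k_i_sq, gamma_sm_tot, gamma_h1h1=0.0):
    """BR(h125 -> AA): the SM total width is rescaled by the squared
    angular factor k_i^2; the invisible width (and, if kinematically
    open, the h2 -> h1 h1 width) is added to the total."""
    total = k_i_sq * gamma_sm_tot + gamma_AA + gamma_h1h1
    return gamma_AA / total

GAMMA_SM = 4.1e-3  # approximate SM total width of a 125 GeV Higgs, in GeV

# LO invisible width and an assumed +5 % NLO correction (delta = 0.05):
br_lo = br_invisible(gamma_AA=4.1e-4, k_i_sq=1.0, gamma_sm_tot=GAMMA_SM)
br_nlo = br_invisible(gamma_AA=1.05 * 4.1e-4, k_i_sq=1.0, gamma_sm_tot=GAMMA_SM)
rel_change = br_nlo / br_lo - 1.0  # relative change of the BR at NLO
```

Note that because the invisible width also appears in the denominator, a 5 % correction to the width translates into a smaller relative change of the branching ratio.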
Only about 0.2\,\% of the points lie above the experimental limit. The highest obtained branching ratio, however, is around 0.121 and therefore still lies well within the experimental uncertainty. The relative change of the branching ratio at NLO with respect to LO has been calculated and increases the LO value by at most 7–8\,\%. Thus, the NLO contributions to the branching ratio are too small to further constrain the model. Moreover, it is interesting to see that the points from scenario II result in smaller branching ratios, especially when using the ZEM scheme. This is to be expected, since many points in that scenario have negative relative NLO contributions to the decay width.

Conclusions
In this work we have calculated the NLO EW corrections to the Higgs decay into two dark matter particles in the CxSM. We have used four different renormalization schemes, but with all masses and fields renormalized on-shell. Except for very particular regions of the parameter space corresponding to thresholds in the Passarino-Veltman functions, the corrections were shown to be quite small, at the per-cent level, in all renormalization schemes. There is one exception, however, given by the ZEM scheme with $h_2$ being the SM-like Higgs boson. Here, points that could not be used in the process-dependent scheme for the renormalization of $v_S$ due to kinematic constraints lead to relatively large corrections of up to a few tens of per cent.
The current observed limit on the invisible Higgs branching ratio is 0.11. The inclusion of the NLO EW corrections to the decay width of the process $h_{125} \to AA$ does not lead to extra constraints on the parameter space, because the calculated approximate NLO branching ratios for all allowed parameter points are found to be within the experimental error. Calculating the EW corrections to all decays of the SM-like CxSM Higgs boson into SM particles (and, if kinematically allowed, into a pair of lighter scalars) will further improve the obtained result. More importantly, tighter experimental constraints will be obtained in the near future in the upcoming LHC run [51], and even more so at the high-luminosity stage.
We have also shown why it is crucial to have a precise measurement of the invisible width: it is the only direct probe of the portal coupling. In fact, the other possible way to probe the same coupling would be through the DM-nucleon cross section. However, we have shown that this cross section is not only below the present experimental bound from XENON1T [8] but also below the neutrino floor, which makes it virtually unusable. Therefore, in the near future, and perhaps also in the far future, information about the dark sector of the CxSM will come only from the LHC. This shows the importance of having the radiative corrections for the invisible Higgs decay.

A The Scalar Pinched Self-Energy in the CxSM
In this appendix we present the result for the scalar pinched self-energy in the CxSM. We define the quantities ($i, j = 1, 2$) that relate all couplings in the CxSM between the scalars and the SM particles $X$, $Y$ to the SM ones,
$$g_{XY h_i} = k_i\, g^{\rm SM}_{XYH}\,, \qquad g_{XY h_i h_j} = k_i k_j\, g^{\rm SM}_{XYHH}\,,$$
where $g^{\rm SM}_{XYH}$ and $g^{\rm SM}_{XYHH}$ are the corresponding couplings between the SM particles $X$ and $Y$ and one or two SM Higgs bosons, and $k_i$ is given in Eq. (9). With these definitions the additional pinched contributions $i\Sigma^{\rm add}_{h_i h_j}$ to the self-energies are given by the expressions below. Here $m_{W,Z}$ denote the masses of the W and Z bosons, $g = 2 m_W \sqrt{\sqrt{2}\, G_F}$ is the SU(2) gauge coupling, $c_w$ the cosine of the weak mixing angle, $\xi_V$ ($V = W, Z$) are the gauge-fixing parameters and $\lambda_V \equiv 1 - \xi_V$. The integrals are defined in Eqs. (79a)–(79d).

B Minima of the CxSM Higgs Potential
To analyze all possible vacuum configurations, the scalar potential of the CxSM,
$$V = \frac{m^2}{2}\, H^\dagger H + \frac{\lambda}{4} \left(H^\dagger H\right)^2 + \frac{\delta_2}{2}\, H^\dagger H\, |\mathbb{S}|^2 + \frac{b_2}{2}\, |\mathbb{S}|^2 + \frac{d_2}{4}\, |\mathbb{S}|^4 + \left(\frac{b_1}{4}\, \mathbb{S}^2 + \text{c.c.}\right),$$
has to be considered, with the fields defined as
$$H = \begin{pmatrix} G^+ \\ \frac{1}{\sqrt{2}}\left(H + i G^0\right) \end{pmatrix}, \qquad \mathbb{S} = \frac{1}{\sqrt{2}}\left(S + i A\right).$$
Due to the SU(2) invariance we can choose a configuration where only the fields $H$, $S$ and $A$ can acquire a non-zero VEV, in the following labeled $x_H$, $x_S$ and $x_A$. The stationary conditions of the potential read
$$\left.\frac{\partial V}{\partial \varphi}\right|_{\varphi = (x_H,\, x_S,\, x_A)^T} = 0\,, \qquad (82)$$
with the scalar fields collected in the vector $\varphi = (H, S, A)^T$. The three nontrivial equations in Eq. (82) can be written as
$$x_H \left[ \frac{m^2}{2} + \frac{\lambda}{4}\, x_H^2 + \frac{\delta_2}{4} \left(x_S^2 + x_A^2\right) \right] = 0\,, \qquad (84a)$$
$$x_S \left[ \frac{b_2 + b_1}{2} + \frac{d_2}{4} \left(x_S^2 + x_A^2\right) + \frac{\delta_2}{4}\, x_H^2 \right] = 0\,, \qquad (84b)$$
$$x_A \left[ \frac{b_2 - b_1}{2} + \frac{d_2}{4} \left(x_S^2 + x_A^2\right) + \frac{\delta_2}{4}\, x_H^2 \right] = 0\,, \qquad (84c)$$
from which we read off that each VEV can either be set to zero or solve the corresponding equation in brackets. Thus, in general, eight different cases have to be considered. Moreover, if $x_S$ and $x_A$ are simultaneously non-zero, the terms in brackets in Eqs. (84b) and (84c) both have to vanish. Since these two terms only differ in the sign in front of the parameter $b_1$, this can only be achieved if $b_1$ is set to zero. Here, however, $b_1$ is always chosen to be non-zero and thus these cases cannot result in a minimum of the potential. Furthermore, it has to be checked whether the stationary point is indeed a minimum of the potential, i.e. the Hessian matrix of the potential has to be positive definite. The general form of the Hessian matrix reads
$$\mathcal{H} = \begin{pmatrix} \mathcal{H}_{HH} & \frac{\delta_2}{2}\, x_H x_S & \frac{\delta_2}{2}\, x_H x_A \\ \frac{\delta_2}{2}\, x_H x_S & \mathcal{H}_{SS} & \frac{d_2}{2}\, x_S x_A \\ \frac{\delta_2}{2}\, x_H x_A & \frac{d_2}{2}\, x_S x_A & \mathcal{H}_{AA} \end{pmatrix}, \qquad (85)$$
where the diagonal elements are
$$\mathcal{H}_{HH} = \frac{m^2}{2} + \frac{3\lambda}{4}\, x_H^2 + \frac{\delta_2}{4}\left(x_S^2 + x_A^2\right), \quad \mathcal{H}_{SS} = \frac{b_2 + b_1}{2} + \frac{d_2}{4}\left(3 x_S^2 + x_A^2\right) + \frac{\delta_2}{4}\, x_H^2\,, \quad \mathcal{H}_{AA} = \frac{b_2 - b_1}{2} + \frac{d_2}{4}\left(x_S^2 + 3 x_A^2\right) + \frac{\delta_2}{4}\, x_H^2\,. \qquad (86)$$
To start with the remaining cases, first the desired minimum is considered, namely the configuration with the VEVs $x_H$ and $x_S$ non-zero and $x_A$ zero. Since the VEVs are chosen to be input parameters, they are in this case relabeled as $v$ and $v_S$, and Eqs. (84) can be solved for other parameters, resulting in
$$m^2 = -\frac{\lambda}{2}\, v^2 - \frac{\delta_2}{2}\, v_S^2\,, \qquad b_2 = -b_1 - \frac{d_2}{2}\, v_S^2 - \frac{\delta_2}{2}\, v^2\,. \qquad (87)$$
Next, the positive definiteness of the Hessian matrix has to be checked. For this, Eq. (87) is used to simplify the Hessian matrix in Eq. (85), leading to
$$\mathcal{H}(v, v_S, 0) = \begin{pmatrix} \frac{\lambda}{2}\, v^2 & \frac{\delta_2}{2}\, v\, v_S & 0 \\ \frac{\delta_2}{2}\, v\, v_S & \frac{d_2}{2}\, v_S^2 & 0 \\ 0 & 0 & -b_1 \end{pmatrix}. \qquad (88)$$
The matrix is positive definite if the determinants of all leading principal minors are positive, i.e. the relations
$$\lambda > 0\,, \qquad \lambda d_2 - \delta_2^2 > 0\,, \qquad b_1 < 0 \qquad (89)$$
have to be satisfied. If these inequalities hold, the potential is automatically bounded from below (compare with Eq. (52)).
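The positive-definiteness test of Eq. (89) can be checked numerically; the sketch below builds the Hessian at the $(v, v_S, 0)$ configuration from sample parameter values (not scan points) chosen to satisfy $\lambda > 0$, $\lambda d_2 > \delta_2^2$ and $b_1 < 0$:

```python
# Positive-definiteness check of the Hessian of the CxSM potential at the
# (v, v_S, 0) configuration, after using the stationary conditions.
# All parameter values are arbitrary samples, not scan points.

lam, d2, delta2, b1 = 0.8, 0.5, 0.3, -0.04
v, vS = 246.0, 300.0

h11 = lam * v**2 / 2
h22 = d2 * vS**2 / 2
h12 = delta2 * v * vS / 2
h33 = -b1  # equals m_A^2, so b1 < 0 is required

# Leading principal minors of the block-diagonal Hessian:
minor1 = h11
minor2 = h11 * h22 - h12**2  # = (lam*d2 - delta2**2) * v**2 * vS**2 / 4
minor3 = minor2 * h33

positive_definite = minor1 > 0 and minor2 > 0 and minor3 > 0
print(positive_definite)  # -> True for these sample values
```

The second minor makes it transparent why the mixed condition $\lambda d_2 > \delta_2^2$ appears: a too-large portal coupling $\delta_2$ destabilizes the $(H, S)$ block even when $\lambda$ and $d_2$ are both positive.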
Moreover, the Hessian matrix of the potential coincides with the mass matrix of the scalar fields, i.e. its eigenvalues are the squared masses of the corresponding particles, so that the eigenvalues have to be positive, which again means that the Hessian matrix has to be positive definite. Furthermore, the parameter $b_1$ is just given by $-m_A^2$. This means that if the VEVs $v$ and $v_S$ are given as input parameters, the VEV of the field $A$ is chosen to be zero, and the potential parameters fulfil the relations in Eq. (89), this configuration of VEVs is a minimum of the potential, as desired. The remaining question is whether this minimum is automatically the global minimum of the potential. Thus, the values of the potential at all minimum configurations have to be calculated and compared. For the desired configuration the value of the potential at the minimum reads
$$V(v, v_S, 0) = -\frac{1}{16}\left(\lambda v^4 + d_2 v_S^4 + 2 \delta_2 v^2 v_S^2\right). \qquad (90)$$
Now all other VEV configurations have to be checked for their potential values at the stationary point and whether or not they are indeed a minimum of the potential.
• case $x_H = x_S = x_A = 0$: This is the most trivial configuration, and the value of the potential at this point reads $V(0, 0, 0) = 0$.
Thus, the difference between the values of the potential at the two configurations results in
$$V(0, 0, 0) - V(v, v_S, 0) = \frac{1}{16}\left(\lambda v^4 + 2 \delta_2 v^2 v_S^2 + d_2 v_S^4\right) > 0\,.$$
The inequality is true because of the relation between $\delta_2$, $\lambda$ and $d_2$ from Eq. (89).
• case $x_S = x_A = 0$, $x_H \neq 0$: Here the nontrivial equation from Eqs. (84) can be solved for $x_H$, resulting in
$$x_H^2 = -\frac{2 m^2}{\lambda}\,,$$
so $m^2$ has to be negative. The value of the potential results in
$$V(x_H, 0, 0) = -\frac{m^4}{4\lambda} = -\frac{\left(\lambda v^2 + \delta_2 v_S^2\right)^2}{16\lambda}\,,$$
where in the second step the relations Eq. (87) were used. The difference between the values of the potential of the two configurations reads
$$V(x_H, 0, 0) - V(v, v_S, 0) = \frac{v_S^4}{16\lambda}\left(\lambda d_2 - \delta_2^2\right) > 0\,.$$
The inequality again holds because of the relations Eq. (89).
• case $x_H = x_A = 0$, $x_S \neq 0$: Here the nontrivial equation from Eqs. (84) can be solved for $x_S$, resulting in
$$x_S^2 = -\frac{2\left(b_1 + b_2\right)}{d_2}\,,$$
so $b_1 + b_2$ has to be negative. The value of the potential results in
$$V(0, x_S, 0) = -\frac{\left(b_1 + b_2\right)^2}{4 d_2} = -\frac{\left(d_2 v_S^2 + \delta_2 v^2\right)^2}{16 d_2}\,,$$
where in the second step the relations Eq. (87) were used. The difference between the values of the potential of the two configurations reads
$$V(0, x_S, 0) - V(v, v_S, 0) = \frac{v^4}{16 d_2}\left(\lambda d_2 - \delta_2^2\right) > 0\,.$$
The inequality again holds because of the relations Eq. (89).
• case $x_H = x_S = 0$, $x_A \neq 0$: Here the nontrivial equation from Eqs. (84) can be solved for $x_A$, resulting in
$$x_A^2 = -\frac{2\left(b_2 - b_1\right)}{d_2}\,,$$
so $b_2 - b_1$ has to be negative. The value of the potential results in
$$V(0, 0, x_A) = -\frac{\left(b_2 - b_1\right)^2}{4 d_2} = -\frac{\left(d_2 v_S^2 + \delta_2 v^2 + 4 b_1\right)^2}{16 d_2}\,,$$
where in the second step the relations Eq. (87) were used. Here the parameter $b_1$ does not cancel, and the difference between the values of the potential of this configuration and the desired minimum depends additionally on $b_1$, so an inequality similar to the other cases cannot be shown as straightforwardly. It is, however, sufficient to look at the Hessian matrix. It results in
$$\mathcal{H}(0, 0, x_A) = \begin{pmatrix} E & 0 & 0 \\ 0 & b_1 & 0 \\ 0 & 0 & \frac{d_2}{2}\, x_A^2 \end{pmatrix},$$
where $E$ is a combination of potential parameters. It can be seen that $b_1$ is a negative eigenvalue of the matrix. Thus, it cannot be positive definite and this VEV configuration cannot be a minimum.
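The comparison of the potential values at the different stationary configurations can likewise be verified numerically; the parameter values below are arbitrary samples (the potential normalization follows the CxSM conventions used here, and the samples satisfy $\lambda > 0$, $\lambda d_2 > \delta_2^2$, $b_1 < 0$):

```python
# Numerical comparison of the CxSM potential at its stationary
# configurations. Parameter values are arbitrary samples, not scan points.
import math

lam, d2, delta2, b1 = 0.8, 0.5, 0.3, -0.04
v, vS = 246.0, 300.0

# Parameters fixed by the stationary conditions at (v, v_S, 0):
m2 = -(lam * v**2 + delta2 * vS**2) / 2
b2 = -b1 - d2 * vS**2 / 2 - delta2 * v**2 / 2

def V(xH, xS, xA):
    """Potential evaluated on the VEV configuration (x_H, x_S, x_A)."""
    s2 = xS**2 + xA**2
    return (m2 / 4 * xH**2 + lam / 16 * xH**4 + delta2 / 8 * xH**2 * s2
            + b2 / 4 * s2 + d2 / 16 * s2**2 + b1 / 4 * (xS**2 - xA**2))

# Stationary points of the single-VEV cases (real for these samples):
xH_only = math.sqrt(-2 * m2 / lam)
xS_only = math.sqrt(-2 * (b1 + b2) / d2)

v_desired = V(v, vS, 0.0)
others = [V(0.0, 0.0, 0.0), V(xH_only, 0.0, 0.0), V(0.0, xS_only, 0.0)]
print(all(v_desired < val for val in others))  # the desired vacuum is deepest
```

For such sample points the $(v, v_S, 0)$ configuration indeed lies below the trivial and the single-VEV stationary points, in line with the analytic inequalities above.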
• case $x_S = 0$, $x_H \neq 0$, $x_A \neq 0$: The last case is a bit more complicated, since now two VEVs are non-zero. Here it is easiest to repeat the steps of the desired minimum configuration. First, the VEVs are relabeled as $w$ and $w_A$. Next, the stationary conditions from Eqs. (84) are solved for other parameters to obtain the relations
$$m^2 = -\frac{\lambda}{2}\, w^2 - \frac{\delta_2}{2}\, w_A^2\,, \qquad b_2 = b_1 - \frac{d_2}{2}\, w_A^2 - \frac{\delta_2}{2}\, w^2\,. \qquad (102)$$
Similar to the last case, the value of the potential of this configuration will again depend on $b_1$, so comparing values with the desired minimum configuration will not lead to a simple inequality. Thus, the Hessian matrix is again considered. With the help of Eqs. (102) it can be simplified to