Prospects for direct detection of dark matter in an effective theory approach

We perform the first comprehensive analysis of the prospects for direct detection of dark matter with future ton-scale detectors in the general 11-dimensional effective theory of isoscalar dark matter-nucleon interactions mediated by a heavy spin-1 or spin-0 particle. The theory includes 8 momentum and velocity dependent dark matter-nucleon interaction operators, besides the familiar spin-independent and spin-dependent operators. From a variegated sample of 27 benchmark points selected in the parameter space of the theory, we simulate independent sets of synthetic data for ton-scale Germanium and Xenon detectors. From the synthetic data, we then extract the marginal posterior probability density functions and the profile likelihoods of the model parameters. The associated Bayesian credible regions and frequentist confidence intervals allow us to assess the prospects for direct detection of dark matter at the 27 benchmark points. First, we analyze the data assuming knowledge of the correct dark matter-nucleon interaction type, as is commonly done for the familiar spin-independent and spin-dependent interactions. Then, we analyze the simulations extracting the dark matter-nucleon interaction type from the data directly, in contrast to standard analyses. This second approach requires an extensive exploration of the full 11-dimensional parameter space of the dark matter-nucleon effective theory. Interestingly, we identify 5 scenarios where the dark matter mass and the dark matter-nucleon interaction type can be reconstructed from the data simultaneously. We stress the importance of extracting the dark matter-nucleon interaction type from the data directly, and discuss the main challenges encountered in addressing this complex 11-dimensional problem.


Introduction
Detecting the particles forming the Milky Way dark matter halo is one of the priorities of astroparticle physics [1]. The direct detection technique is currently playing an important role in this context [2]. Its goal is to measure the energy deposited in an underground detector by dark matter particles of the galactic halo scattering off the detector nuclei [3]. Cryogenic bolometers and liquid noble gas scintillators are two important examples of detectors used in this field of research [4]. Current direct detection experiments have reached the sensitivity to probe a broad spectrum of possible dark matter-nucleon interactions, including those depending on the dark matter-nucleus relative velocity and the momentum transfer [5]. LUX, SuperCDMS and CDMSlite are currently setting the most stringent bounds on the velocity and momentum independent dark matter coupling to the Xenon and Germanium nuclear charge density operators [6][7][8].
The next generation of direct detection experiments will exploit ton-scale targets operating in a low background environment [9]. There are great expectations for the discovery potential of these new experimental devices, since their large exposure will make it possible to probe the vast majority of the particle models for weakly interacting dark matter [10][11][12][13]. The prospects for direct detection of dark matter with ton-scale detectors have been analyzed using complementary approaches. Interesting investigations performed in this context include applications of the extended likelihood approach [14], studies of the interplay of Bayesian and frequentist statistics [15,16], analyses of the complementarity of different detection strategies [17] and target materials [18][19][20], attempts to reconstruct the local dark matter velocity distribution [20][21][22][23], and an exploration of the Sun's gravitational focusing effect [24].
The prospects for direct detection of dark matter depend not only on the strength of the dark matter-nucleon interaction, but also on the momentum and velocity dependence of the scattering amplitude in the non-relativistic limit. The majority of the forecasts made regarding the prospects for direct detection of dark matter with ton-scale detectors assume that dark matter couples to the velocity and momentum independent nuclear charge density and spin current density operators only. The former interaction operator generates the so-called "spin-independent" interaction, the latter the familiar "spin-dependent" interaction. A broader set of momentum and velocity dependent interaction operators is however allowed by Galilean invariance, and energy and momentum conservation [25]. The prospects for detecting a dark matter signal produced by 4 momentum-dependent interaction operators and 5 linear combinations of non-relativistic operators have recently been studied in Refs. [20] and [26], respectively. An analysis extended to all momentum and velocity dependent interaction operators arising from the exchange of a heavy spin-0 or spin-1 particle, which explores the full multi-dimensional parameter space of the dark matter-nucleon interaction theory, is however still missing.
In this paper we perform the first comprehensive analysis of the prospects for direct detection of dark matter with future ton-scale detectors in the general 11-dimensional effective theory of isoscalar dark matter-nucleon interactions mediated by a heavy spin-one or spin-zero particle. Ref. [25] gives a systematic and complete formulation of this theory, extending the previous analysis of Ref. [27]. Within this theoretical framework, Refs. [26,28,29] analyze the data of current direct detection experiments separately. Ref. [5] compares the full 11-dimensional parameter space of the theory to current observations in a global statistical analysis of several direct detection experiments, including the recent LUX, SuperCDMS and CDMSlite results. Mathematica packages to evaluate the relevant nuclear form factors [30] and compute direct detection exclusion limits [31] are publicly available.
We draw our conclusions from a suite of synthetic data that we generate from 27 benchmark points selected in the 11-dimensional parameter space of the model. From our synthetic data, we extract the profile likelihoods and the marginal posterior probability density functions of the model parameters. Using state-of-the-art Bayesian and frequentist statistical methods, we identify the most promising scenarios and outline the main challenges emerging from this study.
The paper is organized as follows. In Sec. 2 we introduce the non-relativistic effective theory of the dark matter-nucleon interaction. We also define the benchmark points in the parameter space of the theory, from which in Sec. 3 we simulate independent samples of synthetic data. Sec. 4 is devoted to the statistical methods used in the analyses of the synthetic data. The prospects for detecting dark matter with ton-scale detectors in the effective theory of the dark matter-nucleon interaction are presented in Sec. 5. Details on the dark matter response functions used in the calculations are provided in the Appendix.

The effective theory of the dark matter-nucleon interaction
In this section we define the effective theory of the dark matter-nucleon interaction studied in the paper. A more complete introduction to the subject can be found in Refs. [25,27,28,30].

Definitions
The effective theory of the dark matter-nucleon interaction is a non-relativistic field theory where the interaction operators are restricted by Galilean invariance, energy and momentum conservation, and hermiticity [28]. In this framework, five non-relativistic Galilean invariant operators generate the algebra of χ-nucleon effective interaction operators, where χ denotes the dark matter particle. The five operators are: the identity 1_χ 1_N, the momentum transfer q, the χ-nucleon transverse relative velocity operator v⊥_χN, and the dark matter and nucleon spin operators S_χ 1_N and 1_χ S_N, respectively. Any dark matter-nucleon interaction operator can be expressed as a combination of the five generating operators. In this paper, we restrict ourselves to dark matter-nucleon interactions arising from the exchange of a heavy spin-0 or spin-1 particle, and hence to the 10 operators listed in Tab. 1.¹

[Table 1. List of the 10 non-relativistic operators defining the effective theory of the dark matter-nucleon interaction studied in this paper. The operators O_i are the same as in Ref. [30].]

The most general Lagrangian describing the dark matter-nucleon interaction is given by the linear combination

  L_int = Σ_i Σ_{N=p,n} c_i^N O_i χ⁺χ⁻ N⁺N⁻ ,   (2.1)

where χ⁺ (χ⁻) and N⁺ (N⁻) are the positive (negative) frequency parts of the non-relativistic dark matter and nucleon fields, respectively. In Eq. (2.1), c_i^p and c_i^n are the coupling constants for protons and neutrons. They are related to the isoscalar and isovector coupling constants c_i^τ (τ = 0, 1) by the relations c_i^p = (c_i^0 + c_i^1)/2 and c_i^n = (c_i^0 − c_i^1)/2. Following Ref. [30], we introduce c_i^τ constants with dimension (mass)⁻². We collectively denote them by c. In this paper we restrict our analysis to isoscalar interactions, i.e., we set c_i^1 = 0. The differential cross section for dark matter scattering on a target nucleus of mass m_T is given by

  dσ_T/dE_R = [m_T / (2π v²)] × [1/((2j_χ + 1)(2j_N + 1))] Σ_spins |M_NR|² ,   (2.2)

where E_R = q²/(2 m_T) is the nuclear recoil energy, j_χ and j_N are, respectively, the dark matter and nucleus spins, while M_NR represents the non-relativistic scattering amplitude. We denote by P_tot the average of |M_NR|² over initial spins, summed over final spins, i.e. the spin sum in Eq. (2.2) including the averaging factor.
P_tot is proportional to the total transition probability and it can be expressed as a combination of nuclear and dark matter response functions, namely

  P_tot(v², q²) = Σ_{τ=0,1} Σ_{τ'=0,1} Σ_k R_k^{ττ'}(v², q²) W_k^{ττ'}(y) ,   (2.3)

where the index k runs over the nuclear response functions of Ref. [30].

¹ The additional operators O_16 = −O_10 O_5, O_13 = O_10 O_8, O_15 = −O_11 O_3 and O_14 = O_11 O_7 are difficult to generate in explicit particle models, whereas the remaining operator, O_2 = (v⊥_χN)², cannot be a leading-order operator in effective theories.
where v and µ_T are the dark matter-nucleus relative velocity and reduced mass, respectively. The dark matter response functions R_k^{ττ'} are listed in the Appendix, whereas the nuclear response functions W_k^{ττ'} are those of Ref. [30]. We evaluate the nuclear response functions using our FORTRAN version of the Mathematica package introduced in Ref. [30]. In Eq. (2.3), y = (qb/2)², where b is the oscillator parameter in the independent-particle harmonic oscillator model [30].
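The isospin decomposition introduced above is straightforward to encode. The sketch below (our own illustrative helper functions, not part of the paper's FORTRAN code) maps isoscalar/isovector couplings to proton/neutron couplings and back; for the isoscalar case c_i^1 = 0 studied here, protons and neutrons couple identically.

```python
# Relations from the text: c_i^p = (c_i^0 + c_i^1)/2, c_i^n = (c_i^0 - c_i^1)/2.
# Helper names are ours, purely illustrative.

def proton_neutron_from_isospin(c0, c1):
    """Isoscalar/isovector couplings -> (proton, neutron) couplings."""
    return (c0 + c1) / 2.0, (c0 - c1) / 2.0

def isospin_from_proton_neutron(cp, cn):
    """Inverse map: (proton, neutron) -> (isoscalar, isovector)."""
    return cp + cn, cp - cn

# Isoscalar interactions (c1 = 0) couple equally to protons and neutrons:
cp, cn = proton_neutron_from_isospin(2.0e-4, 0.0)   # cp = cn = 1.0e-4
```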
The differential rate of scattering events per unit time and per unit detector mass is given by

  dR/dE_R = Σ_T ξ_T [ρ_χ / (m_χ m_T)] ⟨ v dσ_T/dE_R ⟩ ,   (2.5)

where m_χ is the dark matter mass, ρ_χ is the local dark matter density and ξ_T is the mass fraction of the nucleus T in the target material. In Eq. (2.5), the angle brackets denote the average

  ⟨ v dσ_T/dE_R ⟩ = ∫_{v > v_min(q)} d³v  v f(v + v_e(t)) dσ_T/dE_R ,   (2.6)

where f is the local dark matter velocity distribution in the galactic rest frame boosted to the detector frame. In Eq. (2.6), v_min(q) = q/(2µ_T) is the minimum velocity that a dark matter particle must have in order to transfer a momentum q to the target nucleus, and v_e(t) is the time-dependent Earth velocity in the galactic rest frame. In this paper we consider the anisotropic velocity distribution proposed in Ref. [32], with astrophysical parameters set at their mean values, i.e. the blue line in the left panel of Fig. 6 in Ref. [32]. See also Refs. [33,34] for an introduction to this galactic model.
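The kinematic threshold v_min(q) = q/(2µ_T) entering the velocity average is easy to evaluate numerically. The sketch below works in natural units (c = 1) with illustrative inputs, a 50 GeV candidate on a xenon-like nucleus; the function names are ours.

```python
import math

# Minimum DM speed needed to produce recoil energy E_R on a nucleus of mass
# m_T: v_min(q) = q / (2 mu_T), with q^2 = 2 m_T E_R. Masses in GeV.

def reduced_mass(m_chi, m_T):
    return m_chi * m_T / (m_chi + m_T)

def v_min(E_R_keV, m_chi_GeV, m_T_GeV):
    """Minimum speed (in units of c) for elastic scattering at recoil E_R."""
    E_R = E_R_keV * 1e-6                     # keV -> GeV
    q = math.sqrt(2.0 * m_T_GeV * E_R)       # momentum transfer
    return q / (2.0 * reduced_mass(m_chi_GeV, m_T_GeV))

# Example: 50 GeV DM on a xenon-like nucleus (m_T ~ 122 GeV), 10 keV recoil:
v = v_min(10.0, 50.0, 122.0)   # ~ 7e-4 c, i.e. roughly 210 km/s
```

Heavier candidates asymptote to v_min ≈ q/(2 m_T), which is the origin of the (c_k^0)²/m_χ degeneracy discussed in Sec. 5.1 for the 250 GeV benchmark points.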

Benchmark points P_1-P_27
The effective theory of isoscalar dark matter-nucleon interactions mediated by heavy spin-one or spin-zero particles depends on the 10 constants c_i^0, i = 1, 3, ..., 11, besides the dark matter mass. In order to assess the prospects for direct detection of dark matter in this theoretical framework, we have selected 27 benchmark points in the parameter space of the theory. Their properties are summarized in Tab. 2, and illustrated in the 10 planes m_χ-c_i^0, i = 1, 3, ..., 11, in Fig. 1. We have selected 8 light benchmark points, P_1-P_8 with m_χ = 10 GeV; 8 relatively heavy benchmark points, P_17-P_24 with m_χ = 250 GeV; and 8 benchmark points with m_χ = 50 GeV, namely P_9-P_16. As shown in Tab. 2, the benchmark points P_1-P_24 allow us to analyze the prospects for detecting a signal produced by a single interaction operator at a time. The operator O_3 has been included in the linear combination defining the benchmark points P_25, P_26 and P_27, since it interferes with the leading operator O_1.

[Table 2. Properties of the benchmark points studied in this paper. For each benchmark point, this table shows the dark matter particle mass and the coupling constants different from zero. We also report N_j^th and N_j, respectively the expected and the "observed" number of dark matter scattering events in the Germanium (j = 1) and Xenon (j = 2) detectors.]

Ton-scale direct detection experiments
We simulate our synthetic data from the benchmark points P 1 -P 27 . In the simulations, we assume the ton-scale Germanium and Xenon detectors described in the following. For each target material we include the most abundant stable isotopes present in Nature. For Germanium, the most abundant isotopes are 70 Ge, 72 Ge, 73 Ge, 74 Ge and 76 Ge, whereas for Xenon, they are 128 Xe, 129 Xe, 130 Xe, 131 Xe, 132 Xe, 134 Xe, and 136 Xe. We calculate the mass fractions ξ T of the 5 Germanium isotopes and of the 7 Xenon isotopes listed above from the relative abundances reported in Tab. 1 of Ref. [35]. In general, different isotopes have distinct nuclear response functions. For the Germanium and the Xenon isotopes considered in the analyses, we calculate the nuclear response functions using our FORTRAN version of the Mathematica package described in Ref. [30].
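The mass fraction of each isotope follows from its number abundance weighted by its mass number. A minimal sketch (the abundance values below are illustrative round numbers, not the precise ones of Ref. [35]):

```python
# Mass fractions xi_T of the stable isotopes of a natural target, from
# relative number abundances. Abundances here are illustrative placeholders.

def mass_fractions(abundances):
    """abundances: dict {mass number A: number abundance}; returns {A: xi_T}."""
    total = sum(A * n for A, n in abundances.items())
    return {A: A * n / total for A, n in abundances.items()}

# Germanium-like example (round numbers, NOT the values of Ref. [35]):
ge = mass_fractions({70: 0.21, 72: 0.28, 73: 0.08, 74: 0.36, 76: 0.07})
```

The resulting ξ_T sum to one by construction and weight each isotope's distinct nuclear response functions in Eq. (2.5).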

Germanium
For the ton-scale Germanium detector, we assume a Gaussian energy resolution with the energy dispersion σ(E_R) of Ref. [10], and a constant experimental efficiency E = 0.3. Accordingly, we calculate the differential rate of dark matter scattering events per unit time and unit Germanium detector mass as follows

  dR/dE_O = E ∫_0^∞ dE_R Gauss(E_O | E_R, σ(E_R)) dR/dE_R .   (3.2)

In Eq. (3.2), dR/dE_R is defined as in Eq. (2.5) and E_O is the observed energy. The latter coincides with the true nuclear recoil energy E_R in the limit of infinite energy resolution. In our analyses we consider the time average of Eq. (3.2). The total number of scattering events in the signal region (E_inf, E_sup) is then given by

  N^th = MT ∫_{E_inf}^{E_sup} dE_O dR/dE_O .   (3.3)

In our simulations we assume E_inf = 10 keV and E_sup = 100 keV, and a raw exposure for the Germanium detector of MT = 1000 × 365 kg-day.
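The resolution convolution and the signal-region integral can be sketched numerically. Both the exponential recoil spectrum and the dispersion formula below are illustrative stand-ins (the paper takes σ(E_R) from Ref. [10] and the full rate from Eq. (2.5)); only the structure, Gaussian smearing followed by integration over the signal region, is meant to be faithful.

```python
import math

# Gaussian smearing of a true recoil spectrum and trapezoidal integration
# over the signal region. dRdER and sigma_E are stand-ins, NOT the paper's.

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def dRdER(E_R):                       # stand-in spectrum [events/keV/kg/day]
    return 1e-4 * math.exp(-E_R / 20.0)

def sigma_E(E_R):                     # stand-in energy dispersion [keV]
    return math.sqrt(0.3 ** 2 + 0.06 ** 2 * E_R)

def dRdEO(E_O, eff=0.3, emax=200.0, n=2000):
    """Observed-energy spectrum: efficiency times Gaussian-smeared true rate."""
    h = emax / n
    s = 0.0
    for i in range(n + 1):
        E_R = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * gauss(E_O, E_R, sigma_E(E_R)) * dRdER(E_R)
    return eff * s * h

def n_expected(MT=365000.0, e_inf=10.0, e_sup=100.0, n=90):
    """Expected events for exposure MT [kg day] in (e_inf, e_sup) [keV]."""
    h = (e_sup - e_inf) / n
    return MT * h * sum((0.5 if i in (0, n) else 1.0) * dRdEO(e_inf + i * h)
                        for i in range(n + 1))
```

With these stand-ins the resolution is narrow compared to the spectral scale, so dR/dE_O ≈ E × dR/dE_R, and a 1000 × 365 kg-day exposure yields O(100) events, the regime studied in Sec. 5.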

Xenon
The ton-scale Xenon detector considered in this paper has the same energy resolution and efficiency as the LUX experimental apparatus, but a larger exposure, given by MT = 1000 × 365 kg-day. In the following, we briefly review how to calculate the expected number of dark matter scattering events in a detector with the same features as LUX.
The differential spectrum of the variable S_1, i.e. the observed number of dark matter induced photoelectrons (PE), is given by

  dR/dS_1 = Σ_{n=1}^∞ Gauss(S_1 | n, √n σ_PMT) ∫_0^∞ dE_R ε(E_R) P(n | ν(E_R)) dR/dE_R .   (3.4)

In Eq. (3.4), the Gaussian of mean n and standard deviation √n σ_PMT, with σ_PMT = 0.37, gives the probability of observing S_1 PE when the true number of dark matter induced PE is n. In the same equation, the Poisson distribution P of mean ν(E_R) gives the probability of producing n PE from a recoil energy E_R. In the case of LUX, the function ν(E_R) can be extracted from panel (b) of Fig. 3 in Ref. [6]. In our simulations we assume the same expression for ν(E_R); ε(E_R) is the experimental efficiency reported in Fig. 9 of Ref. [6], multiplied by an additional factor 1/2, corresponding to the 50% nuclear recoil acceptance quoted by the LUX collaboration. In our analyses, we consider the time average of Eq. (3.4). The differential rate dR/dE_R appearing in Eq. (3.4) is defined as in Eq. (2.5), and the total number of scattering events in the signal region (S_1^inf, S_1^sup) is given by

  N^th = MT ∫_{S_1^inf}^{S_1^sup} dS_1 dR/dS_1 .   (3.5)

In the simulations, we assume S_1^inf = 2 PE and S_1^sup = 30 PE.
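The structure of Eq. (3.4), a Poisson fluctuation of the photoelectron count followed by Gaussian PMT smearing, can be sketched as below. The light-yield curve ν(E_R), the recoil spectrum and the flat 50% acceptance are illustrative stand-ins; in the paper, ν(E_R) and the efficiency are taken from the LUX figures of Ref. [6].

```python
import math

# S1 spectrum sketch: sum over the true PE number n of
# Gauss(S1 | n, sqrt(n)*sigma_PMT) x Poisson(n | nu(E_R)) x dR/dE_R.
# nu and dRdER are stand-ins, NOT the LUX curves of Ref. [6].

SIGMA_PMT = 0.37

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def poisson(n, mu):
    return math.exp(-mu) * mu ** n / math.factorial(n)

def nu(E_R):                          # stand-in light yield [PE], ~4 PE at 10 keV
    return 0.4 * E_R

def dRdER(E_R):                       # stand-in recoil spectrum
    return 1e-4 * math.exp(-E_R / 20.0)

def dRdS1(S1, acc=0.5, emax=120.0, ne=600, nmax=80):
    """Differential rate in the observed photoelectron variable S1."""
    h = emax / ne
    total = 0.0
    for n in range(1, nmax + 1):
        g = gauss(S1, n, math.sqrt(n) * SIGMA_PMT)
        if g == 0.0:
            continue
        integ = sum((0.5 if i in (0, ne) else 1.0) * poisson(n, nu(i * h)) * dRdER(i * h)
                    for i in range(ne + 1))
        total += g * integ * h
    return acc * total
```

Starting the sum at n = 1 reflects the requirement of at least one detected photoelectron; with a falling recoil spectrum, the S1 spectrum falls accordingly.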

Background
Future ton-scale detectors will measure recoil events originating from local radioactivity, cosmic rays, and other experimental backgrounds, besides nuclear recoil events induced by dark matter scattering in the target material.
For the irreducible background events in the ton-scale Germanium and Xenon detectors, we assume the following energy spectrum

  dN_bg/dÊ_j = η [ 1/(b_j − a_j) + e^{−Ê_j/ε_j} / ( ε_j ( e^{−a_j/ε_j} − e^{−b_j/ε_j} ) ) ] ,   (3.6)

where a_j (b_j) is the lower (upper) limit of the signal region. We have introduced an index j to characterize the quantities depending on the detector type: j = 1 refers to the Germanium detector, whereas j = 2 identifies the Xenon detector. With this notation, Ê_1 = E_O and Ê_2 = S_1. In Eq. (3.6), a_1 = 10 keV, b_1 = 100 keV and ε_1 = 10 keV, whereas a_2 = 2 PE, b_2 = 30 PE and ε_2 = 2 PE. The energy spectrum in Eq. (3.6) includes a flat component and an exponentially decreasing component, as expected for dark matter direct detection experiments [10]. In Eq. (3.6) η = 0.5, which implies one background event in the signal region, both for the Germanium detector and for the Xenon detector. Accordingly, we define the quantity μ_tot^(j) as the total number of expected events, signal plus background, in the signal region. Again, j = 1 refers to the Germanium detector, and j = 2 corresponds to the Xenon detector.
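A compact way to check the normalization quoted in the text (η = 0.5 giving one expected background event) is to integrate a flat-plus-exponential spectrum in which each component is separately normalized to η events over the signal region. This specific parametrization is our reconstruction, so treat it as a sketch:

```python
import math

# Flat + exponential background; each bracketed component integrates to 1
# over (a, b), so the spectrum contributes 2*eta expected events in total.
# This parametrization is our reconstruction of the text, not a quoted formula.

def bg_spectrum(E, a, b, eps, eta=0.5):
    flat = 1.0 / (b - a)
    expo = math.exp(-E / eps) / (eps * (math.exp(-a / eps) - math.exp(-b / eps)))
    return eta * (flat + expo)

def bg_events(a, b, eps, eta=0.5, n=1000):
    """Trapezoidal integral of the background spectrum over the signal region."""
    h = (b - a) / n
    return h * sum((0.5 if i in (0, n) else 1.0) * bg_spectrum(a + i * h, a, b, eps, eta)
                   for i in range(n + 1))

total_ge = bg_events(10.0, 100.0, 10.0)   # Germanium-like signal region
total_xe = bg_events(2.0, 30.0, 2.0)      # Xenon-like signal region
```

Both signal regions then contain 2η = 1 expected background event, matching the text.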

Synthetic data
We now describe the procedure that we have followed to simulate synthetic data given a benchmark point. We illustrate the procedure for a generic detector of type j. A sample of synthetic direct detection data is a set of N_j recoil energies, or PE, {Ê_j^(i)}, i = 1, ..., N_j. The datapoints of this set are randomly generated from the recoil energy spectrum of a benchmark point (e.g. one of the points P_1-P_27). More specifically, the simulation of synthetic direct detection data consists of two parts: (1) generating the number N_j; (2) sampling the events {Ê_j^(i)}, i = 1, ..., N_j. We randomly sample the number N_j of observed events from a Poisson distribution of mean μ_tot(m̂_χ, ĉ), where the parameters (m̂_χ, ĉ) identify one of the benchmark points P_1-P_27. Subsequently, we randomly sample N_j recoil energies, or PE, from the normalized spectral function

  f^(j)(Ê_j) = (1/μ_tot^(j)) [ dN^(j)/dÊ_j + dN_bg^(j)/dÊ_j ] ,   (3.9)

i.e. the sum of the signal and background spectra in the signal region, divided by the total number of expected events μ_tot^(j). In our analyses, we have repeated this procedure twice for each benchmark point P_1-P_27: once assuming the Germanium detector (j = 1) and once assuming the Xenon detector (j = 2). Tab. 2 reports the number of simulated events for each benchmark point and for each target material.
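The two-step procedure, a Poisson draw for N_j followed by sampling N_j energies from the normalized spectrum, can be sketched with simple rejection sampling. The exponential spectral shape and all numbers below are illustrative; the paper samples from the full signal-plus-background spectrum of each benchmark point.

```python
import math
import random

# Step (1): N ~ Poisson(mu_tot); step (2): N energies from the spectral
# shape f via rejection sampling on the signal region (a, b).

def sample_poisson(mu, rng):
    """Knuth's multiplication algorithm; adequate for mu ~ O(100)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def sample_events(mu_tot, f, a, b, f_max, rng):
    """Draw a Poisson number of events, then energies by rejection sampling."""
    n = sample_poisson(mu_tot, rng)
    events = []
    while len(events) < n:
        E = a + (b - a) * rng.random()
        if rng.random() * f_max <= f(E):
            events.append(E)
    return events

rng = random.Random(1)
f = lambda E: math.exp(-E / 20.0)           # illustrative spectral shape
data = sample_events(100.0, f, 10.0, 100.0, f(10.0), rng)
```

Repeating the draw with a different seed gives an independent realization, which is how separate Germanium and Xenon samples are obtained for each benchmark point.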

Statistical framework
We now introduce the statistical methods applied to the analysis of our synthetic direct detection data. The aim of our analyses is to reconstruct m_χ and the coupling constants c_i^0 from about 100 scattering events randomly generated from the benchmark points P_1-P_27.
As explained in Sec. 3.4, for each benchmark point we generate two samples of synthetic data: one sample for the ton-scale Germanium detector (j = 1) and one sample for the ton-scale Xenon detector (j = 2). Each sample consists of a (N_j + 1)-dimensional array of data, d_j = (N_j, Ê_j^(1), ..., Ê_j^(N_j)). Accordingly, the likelihood function of a single detector takes the following form

  −ln L^(j)(d_j | m_χ, c) = μ_tot^(j)(m_χ, c) − N_j ln μ_tot^(j)(m_χ, c) − Σ_{i=1}^{N_j} ln f^(j)(Ê_j^(i)) + const,   (4.1)

and the total likelihood function is the product

  L(d_1, d_2 | m_χ, c) = L^(1)(d_1 | m_χ, c) L^(2)(d_2 | m_χ, c) .   (4.2)

In Eq. (4.1) the first two terms constrain μ_tot^(j)(m_χ, c) to match N_j, whereas the last sum contains the information on the spectrum of the observed events. The spectral information is encoded in the functions f^(j) defined in Eq. (3.9).
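In log form, the extended likelihood for one detector is a Poisson term in the total count plus a sum of log-spectral terms over the observed events. A minimal sketch (dropping the parameter-independent ln N_j! constant; function names are ours):

```python
import math

# -2 ln L for one detector: Poisson term in the total expected count mu_tot
# plus spectral terms; the constant ln N! is dropped since it does not
# depend on the model parameters.

def minus2_lnL(events, mu_tot, f):
    """events: observed energies; f: normalized spectral PDF."""
    val = 2.0 * mu_tot - 2.0 * len(events) * math.log(mu_tot)
    for E in events:
        val -= 2.0 * math.log(f(E))
    return val

# With a flat unit spectrum, the likelihood is maximized at mu_tot = N:
chi_at_3 = minus2_lnL([0.5, 0.5, 0.5], 3.0, lambda E: 1.0)
```

Minimizing this expression over (m_χ, c) for fixed data gives L_max; re-minimizing with some parameters frozen gives the profile likelihood used below.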
We use the likelihood function in Eq. (4.2) to construct the posterior probability density function (PDF) of the model parameters (a Bayesian approach) and their profile likelihood (a frequentist approach). The posterior PDF, P(m_χ, c | d_1, d_2), is related to the total likelihood function by Bayes' theorem, according to which

  P(m_χ, c | d_1, d_2) ∝ L(d_1, d_2 | m_χ, c) π(m_χ, c) ,   (4.3)

where π is the prior PDF of the model parameters. In the analysis of our synthetic data, we assume log-priors both for the dark matter mass and for the coupling constants. Log-priors allow us to sample the posterior PDF varying the model parameters within prior ranges spanning several orders of magnitude. Tab. 3 shows the prior ranges assumed in our analyses. We present our results in terms of 2D marginal posterior PDFs and 2D profile likelihoods. The 2D marginal posterior PDF of the model parameters m_χ-c_1^0, for instance, is defined as follows

  P_marg(m_χ, c_1^0 | d_1, d_2) = ∫ Π_{i≠1} dc_i^0  P(m_χ, c | d_1, d_2) ,   (4.4)

whereas the 2D profile likelihood of the same model parameters is given by

  L_prof(m_χ, c_1^0) = max_{c_i^0, i≠1} L(d_1, d_2 | m_χ, c) .   (4.5)

Within the Bayesian approach to data analysis, limits on the coupling constants c_i^τ and on the mass m_χ are expressed in terms of x% credible regions (CR). These regions of the parameter space contain x% of the total posterior probability, and are such that P_marg at any point inside the region is larger than at any point outside the region. Within the frequentist approach to data analysis, one can use the profile likelihood to construct approximate frequentist confidence intervals from an effective chi-square defined as Δχ²_eff ≡ −2 ln(L_prof/L_max), where L_max is the absolute maximum of the likelihood function. Under certain regularity conditions, the distribution of Δχ²_eff converges to a chi-square distribution with a number of degrees of freedom equal to the number of relevant parameters (Wilks' theorem [36]), e.g. 2 for a 2D profile likelihood.
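For 2 degrees of freedom, the chi-square distribution invoked by Wilks' theorem has the analytic CDF 1 − exp(−x/2), so the Δχ²_eff contour levels used for 2D confidence regions can be computed in closed form. A sketch (for other degrees of freedom one would invert the CDF numerically, e.g. with scipy.stats.chi2.ppf):

```python
import math

# Threshold on delta_chi2_eff = -2 ln(L_prof/L_max) for a confidence level
# `cl`, using the analytic 2-dof chi-square CDF: P(x) = 1 - exp(-x/2).

def delta_chi2_threshold(cl, dof=2):
    """Analytic inversion shown for dof = 2 only."""
    if dof != 2:
        raise NotImplementedError("analytic inversion valid for dof = 2 only")
    return -2.0 * math.log(1.0 - cl)

t95 = delta_chi2_threshold(0.95)   # ~ 5.99: the 95% CL contour level
```

Points in the (m_χ, c_k^0) plane with Δχ²_eff below this threshold form the approximate 95% confidence region drawn in Figs. 2-4.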
We use confidence intervals and credible regions to assess whether the mass m_χ and the couplings c_j^0 characterizing the benchmark points P_1-P_27 can be extracted from our synthetic direct detection data.
Finally, to sample the multidimensional likelihood function in Eq. (4.2) we use the MultiNest program [37][38][39]. We use our own routines to evaluate event rates and the likelihood function. Figures have been produced using the programs GetDist [40], Getplots [41] and Matlab.

Results
We now analyze the synthetic data generated from the benchmark points P_1-P_27, as explained in Sec. 3.4. We tackle this problem within the statistical framework of Sec. 4. First, we focus on the benchmark points P_1-P_24, where the dark matter-nucleon interaction is described by a single operator (Sec. 5.1). Then, we analyze the case in which the dark matter particle interacts with the detector nucleons through a linear combination of non-relativistic operators (Sec. 5.2). Our goal is to assess whether the benchmark values of m_χ and of the constants c_i^0 ≠ 0 at P_1-P_27 can be extracted from our synthetic data.

Prospects for benchmark points P_1-P_24
Tab. 2 summarizes the properties of the benchmark points P_1-P_24. To analyze the synthetic data generated from the points P_1-P_24, we adopt two strategies:

1. Fitting procedure A. For each benchmark point P_A, with 1 ≤ A ≤ 24, we fit the dark matter-nucleon effective theory of Sec. 2 to our synthetic data, assuming in the fit 2 free parameters only. The 2 free parameters are m_χ and c_k^0, where c_k^0 is the only coupling constant different from zero at P_A. When analyzing our synthetic data following the fitting procedure A, we fix to zero the coupling constants c_j^0 with j ≠ k. The fitting procedure A is the one commonly used in fitting the leading spin-independent and spin-dependent interactions, i.e. O_1 and O_4, to direct detection data. The blue lines in Figs. 2, 3 and 4 represent the 95% CL contours found applying the fitting procedure A to the benchmark points P_1-P_24.
2. Fitting procedure B. Alternatively, we fit the dark matter-nucleon effective theory of Sec. 2 to our synthetic data assuming in the fit 11 free parameters. The 11 free parameters are m_χ and the 10 coupling constants c_i^0, with i = 1, 3, ..., 11. In contrast to the fitting procedure A, the fitting procedure B extracts the correct dark matter-nucleon interaction type from the data directly. It requires a comprehensive exploration of the full 11-dimensional parameter space of the effective theory defined in Sec. 2. The black lines and the colored regions in Figs. 2, 3 and 4 represent, respectively, the 95% CR contours and the 2D profile likelihoods that we obtain applying the fitting procedure B to the benchmark points P_1-P_24.

Fig. 2 summarizes our findings for the benchmark points P_9-P_16. We present our results in the 8 planes m_χ-c_i^0, with i = 3, 5, ..., 11. Applying the fitting procedure A to the synthetic data generated from the points P_9-P_16, we produce the 95% CL contours (blue lines) shown in Fig. 2. These contours are relatively narrow, compared to the large volume of the parameter space explored in our analyses. In addition, in each panel of Fig. 2 the benchmark points (green circles) are fully contained in (or at the very edge of) the blue contours. We conclude that m_χ and the constant c_k^0 ≠ 0 can be extracted from the synthetic data, applying the fitting procedure A to the benchmark points P_9-P_16.
However, the fitting procedure A can only be used when the form of the dark matter-nucleon interaction is known a priori. Though this is a reasonable simplifying assumption in a preliminary exploration of the data, it is far from being supported by any empirical evidence. In general, we therefore have to tackle a problem of more complex nature: extracting the correct dark matter-nucleon interaction type from the data directly. The fitting procedure B is the correct way of approaching this challenging problem.
We therefore apply the fitting procedure B to the synthetic data generated from the points P_9-P_16. Our aim is to assess whether extracting the correct dark matter-nucleon interaction type from about 100 scattering events is indeed feasible. From an analysis of the benchmark points P_9-P_16 based on the fitting procedure B, we extract the 95% CR contours and the profile likelihoods shown in the 8 planes m_χ-c_i^0, i = 3, 5, ..., 11, of Fig. 2. Importantly, we find that for the benchmark points P_10, P_11 and P_14, corresponding to the interaction operators O_5, O_6 and O_9, the correct dark matter-nucleon interaction type and the dark matter mass can be extracted from the synthetic data directly. In fact, the best fit dark matter mass and coupling constants that we find applying the fitting procedure B to the benchmark points P_10, P_11 and P_14 match the corresponding benchmark values to very good accuracy (with the exception of the dark matter mass at P_11). For instance, for the benchmark point P_14, we find the best fit values log10(m_χ/GeV) = 1.71 and log10(m_v² c_9^0) = 0.89, with all the other coupling constants compatible with zero. Remarkably, the values of m_χ and c_9^0 at the benchmark point P_14 are log10(m_χ/GeV) = 1.70 and log10(m_v² c_9^0) = 0.90, with c_i^0 = 0 for i ≠ 9. When applied to the benchmark points P_9, P_12, P_13, P_15 and P_16, the fitting procedure B gives less accurate results. More specifically, from the synthetic data generated from these benchmark points, we are not able to extract the correct dark matter-nucleon interaction type. However, for the benchmark points P_9, P_15 and P_16, we can accurately reconstruct the dark matter mass from an analysis of the corresponding synthetic data (see Tab. 4 for a list of best fit values). From this study, the benchmark point P_12 appears as the most difficult to analyze.
Figs. 3 and 4 summarize our findings for the benchmark points P_1-P_8 and P_17-P_24, respectively. Applying the fitting procedure A to the benchmark points P_1-P_8, we are always able to extract m_χ and c_k^0 ≠ 0 from the synthetic data. For the benchmark points P_17-P_24, we find that the fitting procedure A can only constrain the ratio (c_k^0)²/m_χ, since v_min is approximately independent of m_χ when m_χ is significantly larger than the target nucleus mass. For this reason, the 95% CL contours extracted from the fitting procedure A applied to P_17-P_24 are broader than in Figs. 2 and 3.
Applying the fitting procedure B to the benchmark points P_1-P_8, we find that the correct dark matter mass can in all cases be extracted from our synthetic data. Remarkably, for the benchmark points P_1-P_8, the relative difference between the best fit value of log10(m_χ/GeV) and the value of log10(m_χ/GeV) at the benchmark point is of the order of a few percent (see Tab. 4). Extracting the correct dark matter-nucleon interaction operator from our synthetic data is however difficult in the case of the benchmark points P_1-P_8. The 2D profile likelihoods reported in Fig. 3 are indeed very flat, and different interaction operators can fit the data.

[Figure 2. Analysis of the synthetic data randomly generated from the benchmark points P_9-P_16. These benchmark points are characterized by m_χ = 50 GeV and by the coupling constants listed in Tab. 2. Different panels refer to distinct benchmark points. In all panels, results are presented in terms of 2D profile likelihoods (colored regions), 95% credible regions (black contours) and 95% confidence levels (blue contours). For each benchmark point, we have constructed the 2D profile likelihoods and credible regions by fitting the full 11-dimensional effective theory of Sec. 2 to our simulated data (fitting procedure B). In each panel, we have derived the 95% confidence levels by fitting m_χ and the coupling constant on the y-axis to our simulated data (fitting procedure A). Green circles, crosses and stars are the benchmark points, the best fit values and the means resulting from the fitting procedure B. We find that for the benchmark points P_10, P_11 and P_14, corresponding to the interaction operators O_5, O_6 and O_9, the leading dark matter-nucleon interaction and the dark matter mass can be extracted from the synthetic data directly, without any further assumption.]
From an analysis of the benchmark points P_17-P_24 based on the fitting procedure B, we find that in the case of the point P_22 we can extract the correct dark matter mass and the correct dark matter-nucleon interaction type from the synthetic data. The relative difference between log10(m_χ/GeV) at the best fit point and log10(m_χ/GeV) at P_22 is about 15%. Analogously to the benchmark point P_14, at P_22 the operator O_9 = −i S_χ · (S_N × q/m_N) is the leading dark matter-nucleon interaction operator. The properties of the remaining benchmark points with m_χ = 250 GeV are more difficult to extract from the synthetic data randomly generated in this study.

Figs. 2, 3 and 4 also show the 2D marginal posterior PDFs extracted from our synthetic data, and the associated 95% CR contours (black lines in the figures). The 95% CR contours follow the support of the 2D profile likelihoods, except for the benchmark points P_9, P_10, P_12, P_15 and P_16. In general, the Bayesian means in Figs. 2, 3 and 4 (green stars in the figures) deviate from the benchmark points considerably. Indeed, because of volume effects, Bayesian means tend to prefer values of m_χ corresponding to regions in parameter space where the posterior PDF is relatively small but the prior volume is large.

[Figure 5. Analysis of the synthetic data randomly generated from the benchmark point P_26. This benchmark point is characterized by m_χ = 50 GeV and by the coupling constants listed in Tab. 2. Different panels refer to distinct pairs of model parameters. In all panels, results are presented in terms of 2D profile likelihoods (colored regions) and 95% credible regions (black contours). We have constructed the 2D profile likelihoods and credible regions by fitting the full 11-dimensional effective theory of Sec. 2 to our simulated data. For this benchmark point, we find that the dark matter mass and coupling constants c_1^0, c_3^0 and c_4^0 can be simultaneously extracted from our synthetic data.]
Tab. 4 (at the end of the paper) provides a detailed summary of our statistical analyses based on the fitting procedure B. For each benchmark point, Tab. 4 shows the best fit values of the model parameters different from zero at the benchmark point, and the 95% CL upper limits on the remaining model parameters.

[Figure 6. Same as Fig. 5, but for the benchmark point P_25.]

Prospects for benchmark points P_25-P_27
In the last part of this section, we analyze the synthetic data randomly generated from the benchmark points P_25-P_27. At these benchmark points, the dark matter particle interacts with the detector nucleons through a linear combination of non-relativistic operators, with the coupling constants listed in Tab. 2. In Fig. 5 we show the 2D profile likelihoods obtained applying the fitting procedure B to the synthetic data generated from P_26; in the same figure we also report the 95% CR contours obtained from a Bayesian analysis of the same dataset (black lines). The green circle, cross and star in each panel represent the benchmark point, the best fit point and the Bayesian mean, respectively.
Comparing the position of the benchmark point P_26 projected in the six panels of Fig. 5 with that of the best fit point in each panel, we see that, in the case of P_26, the dark matter mass and coupling constants can be simultaneously reconstructed applying the fitting procedure B to our synthetic data. This conclusion is confirmed by an analysis of the best fit values listed in Tab. 4. For instance, at the best fit point we find log10(m_χ/GeV) = 1.86 and log10(m_v² c_1^0) = −3.39, whereas at the benchmark point P_26, log10(m_χ/GeV) = 1.70 and log10(m_v² c_1^0) = −3.60. This is a remarkable result, since the fitting procedure B does not assume any prior information on the form of the dark matter-nucleon interaction. The reconstruction of the correlated coupling constants c_1^0 and c_3^0 is particularly effective, as shown by the 2D profile likelihood in the bottom-central panel of Fig. 5. In addition, in each panel of Fig. 5 the Bayesian means do not differ from the best fit points dramatically.
Figs. 6 and 7 show our findings for the benchmark points $P_{25}$ and $P_{27}$, respectively. In the case of $P_{25}$ (i.e. a light dark matter candidate) the best fit $m_\chi$ is larger than the value of $m_\chi$ at the benchmark point, whereas the best fit coupling constants are systematically smaller than their values at $P_{25}$. The relative difference between $\log_{10}(m_\chi/\mathrm{GeV})$ at the best fit and at the benchmark point is about 10%. However, the value of $m_\chi$ at the benchmark point is not contained in the 1D interval at 95% CL extracted from the profile likelihood by the Getplots program (see Tab. 4). In the case of $P_{27}$ (i.e. a relatively heavy dark matter candidate) the reconstruction of the benchmark point appears to be more difficult, given the synthetic data generated in this study: the best fit dark matter mass and coupling constants are significantly larger than at the benchmark point $P_{27}$.

Conclusions
We have studied the prospects for direct detection of dark matter with future ton-scale detectors in the 11-dimensional effective theory of isoscalar dark matter-nucleon interactions mediated by a heavy spin-one or spin-zero particle.
From a variegated sample of 27 benchmark points selected in the parameter space of the theory, we have simulated independent sets of synthetic data for ton-scale Germanium and Xenon detectors. We have then analyzed the synthetic data, using state-of-the-art Bayesian and frequentist statistical methods and numerical tools. From our statistical analyses, we have extracted 2D marginal posterior PDFs and profile likelihoods in the planes spanned by the independent pairs of model parameters. We have also computed the best fit points (a frequentist approach) and the means (a Bayesian approach) of the model parameters.
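The 2D profile likelihoods and 2D marginal posterior PDFs mentioned above are both built from the same scan output, parameter samples with associated likelihood values, by binning each parameter-pair plane. A minimal sketch of this construction, with toy Gaussian samples standing in for a real 11-dimensional scan (all parameter names and numbers here are illustrative):

```python
import random

# Toy sketch: from one set of posterior samples with likelihood values,
# build a 2D marginal posterior (sample counts per bin) and a 2D profile
# likelihood (best log-likelihood per bin) in a parameter-pair plane.
random.seed(1)
samples = []
for _ in range(20000):
    x = random.gauss(1.7, 0.15)   # e.g. log10(m_chi/GeV)
    y = random.gauss(-3.6, 0.2)   # e.g. log10(m_v^2 c_1^0)
    loglike = -0.5 * (((x - 1.7) / 0.15) ** 2 + ((y + 3.6) / 0.2) ** 2)
    samples.append((x, y, loglike))

nb = 30
xlo, xhi, ylo, yhi = 1.0, 2.4, -4.5, -2.7
marg = [[0] * nb for _ in range(nb)]      # marginal: bin occupation
prof = [[None] * nb for _ in range(nb)]   # profile: best likelihood in bin
for x, y, ll in samples:
    i = int((x - xlo) / (xhi - xlo) * nb)
    j = int((y - ylo) / (yhi - ylo) * nb)
    if 0 <= i < nb and 0 <= j < nb:
        marg[i][j] += 1
        if prof[i][j] is None or ll > prof[i][j]:
            prof[i][j] = ll

# Locate the marginal mode; for this unimodal toy it sits near the input point.
_, mi, mj = max((marg[i][j], i, j) for i in range(nb) for j in range(nb))
x_mode = xlo + (mi + 0.5) * (xhi - xlo) / nb
print(round(x_mode, 2))
```

For this unimodal toy the two maps peak in the same bin; in the degenerate, multimodal cases encountered in the full 11-dimensional fits, the profile and marginal maps can instead disagree, which is precisely the volume effect discussed for Figs. 2, 3 and 4.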
Comparing the best fit points and the means of the model parameters to their benchmark values, we have identified the most promising scenarios and the main challenges emerging from our analysis of the 27 benchmark points. For all benchmark points, we find that the correct dark matter mass and coupling constants can be extracted from the synthetic data when the true dark matter-nucleon interaction type is assumed a priori in the fit. This fitting procedure corresponds to the usual way of fitting the familiar spin-independent and spin-dependent interactions to direct detection data.
The problem of extracting the correct dark matter-nucleon interaction type from the data directly is of a much more complex nature. It requires the exploration of the full 11-dimensional parameter space of the dark matter-nucleon effective theory. In our analyses, we have identified five scenarios where the dark matter mass, the dark matter-nucleon interaction type and the associated coupling constant can be extracted from the synthetic data simultaneously. The five scenarios correspond to the benchmark points $P_{10}$, $P_{11}$, $P_{14}$ and $P_{26}$, with $m_\chi = 50$ GeV, and to the benchmark point $P_{22}$, with $m_\chi = 250$ GeV. At the benchmark points $P_{10}$ and $P_{11}$ the leading dark matter-nucleon interaction operators are $\mathcal{O}_5 = -i \vec{S}_\chi \cdot (\vec{q}/m_N \times \vec{v}^{\perp}_{\chi N})$ and $\mathcal{O}_6 = (\vec{S}_\chi \cdot \vec{q}/m_N)(\vec{S}_N \cdot \vec{q}/m_N)$, respectively. The benchmark points $P_{14}$ and $P_{22}$ assume $\mathcal{O}_9 = -i \vec{S}_\chi \cdot (\vec{S}_N \times \vec{q}/m_N)$ as the leading interaction operator, whereas at the benchmark point $P_{26}$, dark matter interacts with the nucleons through a linear combination of interaction operators (those associated with the coupling constants $c_1^0$, $c_3^0$ and $c_4^0$; cf. Fig. 5). In the analyses of the benchmark points with $m_\chi = 10$ GeV, we have always been able to accurately reconstruct the dark matter mass from the synthetic data, though it has been difficult to extract the correct dark matter-nucleon interaction type from the same data.
In summary, we have presented the first comprehensive analysis of the prospects for direct detection of dark matter with future ton-scale detectors in the 11-dimensional effective theory of the dark matter-nucleon interaction. In our analyses, we have stressed the importance of extracting the correct dark matter-nucleon interaction type from the data directly, describing the challenges encountered in addressing this complex 11-dimensional problem. The results presented in this paper will be particularly useful in interpreting the data of future dark matter direct detection experiments.

A Dark matter response functions
In what follows we list the dark matter response functions appearing in Eq. (2.3). These response functions have been derived from those of Ref. [30], setting to zero the couplings $c_{12}^{\tau}, \ldots, c_{15}^{\tau}$, with $\tau = 0, 1$. We assume for definiteness that the dark matter particle has spin $j_\chi = 1/2$.

Table 4. Detailed summary of our statistical analyses based on the fitting procedure B. For each benchmark point (BP), this table shows the best fit (BF) values of the model parameters different from zero at the benchmark point, and the 95% confidence level upper limits (UP) on the remaining model parameters. For certain parameters, we also report the 1D intervals at 95% confidence level (CL) constructed from the profile likelihood with Getplots. To simplify the notation we have introduced the dimensionless quantities $G_i \equiv m_v^2 c_i^0$.