IceCube events and decaying dark matter: hints and constraints

In the light of the new IceCube data on the (yet unidentified) astrophysical neutrino flux in the PeV and sub-PeV range, we present an update on the status of the decaying dark matter interpretation of the events. In particular, we develop further the angular distribution analysis and discuss the prospects for diagnostics. By performing various statistical tests (maximum likelihood, Kolmogorov-Smirnov and Anderson-Darling tests) we conclude that currently the data show a mild preference (below the two sigma level) for the angular distribution expected from dark matter decay over the isotropic distribution foreseen for a conventional astrophysical flux of extragalactic origin. We also briefly develop some general considerations on heavy dark matter model building and on the compatibility of the expected energy spectrum of decay products with the IceCube data, as well as with existing bounds from gamma-rays. Alternatively, assuming that the IceCube data originate from conventional astrophysical sources, we derive bounds on both decaying and annihilating dark matter for various final states. The lower limits on the heavy dark matter lifetime improve by up to an order of magnitude with respect to existing constraints, definitively making these events---even if astrophysical in origin---an important tool for astroparticle physics studies.


Introduction
In the last few years, one of the most remarkable developments in astroparticle physics has been the discovery of a high-energy astrophysical neutrino flux by the IceCube experiment: the initial hint consisted of only a couple of PeV events [1]; later, additional shower and track events at lower energies were unveiled [2], and eventually the discovery was confirmed and strengthened in the three-year dataset [3]. While the origin of these events is still under discussion, they undoubtedly serve as a powerful diagnostic tool for a number of astrophysical as well as particle physics models (for some speculations see [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20] and for a review see [21]).
In this respect, it was for instance quickly realized that the highest energy events [22], or even the totality of the signal [23], might be attributed to relatively heavy (PeV-scale) dark matter (DM) decay (for follow-up works see [24][25][26][27][28]). For the self-consistency of the article, the key model-independent features of this putative DM signal are summarized in section 2 (for a review of decaying DM see [29]). In the light of the extended data set of IceCube [3], and since the source(s) of these events have not been identified yet, we address more extensively some topics of relevance for the DM interpretation of the data. First, under the assumption that DM decay is responsible for (the bulk of) the events, we study the angular distribution of the data and its compatibility with the expected angular distribution from DM decay. To this aim we make use of several statistical tests, such as maximum likelihood, Kolmogorov-Smirnov and Anderson-Darling tests, and we also discuss some limitations and possible improvements. In particular, we estimate the statistics needed to distinguish between DM-like and isotropic (extragalactic astrophysical origin) angular distributions of the events and compare it with the current diagnostic power. Our main result is that currently a DM-like angular distribution of the events is somewhat preferred over an isotropic one, albeit not significantly (with a p-value ∼ 10%). Details are reported in section 3, which is the main section of the article.
Another important issue concerns the "plausibility" of this scenario. While at first sight a PeV-scale DM might appear unusual, and although alternative frameworks such as that of Weakly Interacting Massive Particles (WIMPs) remain attractive, in our opinion it is worth being cautious and open-minded to avoid the street light effect bias. Given the absence of signals of new physics in any laboratory experiment, most of the current searches for DM are essentially driven by the "traditional" techniques available to probe them. At the very least, the observed high energy neutrino events provide one more street light in the challenging quest for DM identification, opening a window on the yet poorly explored PeV scale. Although we remain agnostic on the actual particle physics scenario behind such a DM candidate, and although most of our results are generic, we note that several mechanisms exist to achieve the correct abundance of PeV-scale DM (for some models addressing this issue see [30,31]). In section 4 we make some brief considerations on an effective field theory approach to a class of models involving heavy DM, concerning the operators leading to its decay and decay products, the associated neutrino spectrum, and some comments on possible production mechanisms.
A further issue of interest is if the proposed candidate conflicts with other existing bounds, notably gamma-ray bounds. We devote section 5 to address this question and conclude that at the moment the constraints are not sufficient to exclude the DM interpretation: neutrino data remain the main handle to probe the PeV-scale DM framework.
Finally, in section 6 we elaborate on what we can learn from IceCube events about heavy DM properties should the events eventually be attributed to astrophysical sources. Interestingly, even in this case the IceCube events can be used to set stringent limits on the lifetime of decaying heavy DM. We derive bounds in the mass-lifetime plane for various possible standard model decay products, which are the best bounds available in a number of cases and in all cases quite competitive with gamma-ray constraints. Section 7 summarizes our conclusions and discusses the results. Furthermore, although in this paper we focus mainly on decaying DM, in appendix A we also derive bounds on the annihilation cross section σv of annihilating DM with mass below ∼ 100 TeV. This is possible since the lower threshold of the IceCube data is ∼ 20 TeV, and hence the lower-energy part of the event sample can be used to constrain a WIMP-like DM annihilation cross section.

Neutrino flux from decaying dark matter
In this section we summarize the energy and angular distributions of the expected neutrino flux from decaying DM, closely following the notation and terminology used in [23]. The neutrino flux from DM decay receives both a Galactic and an extragalactic contribution. While in the case of annihilating DM the latter is often subleading and dependent on the poorly known small-scale clustering properties, for decaying DM the two fluxes are comparable and their relative normalization is robustly known. Some differences exist, however, in the energy and angular distributions of these components.
For the angular distribution analysis of IceCube events we use Galactic coordinates (b, l), with b and l denoting the Galactic latitude and longitude, respectively (varying in the ranges −π/2 ≤ b ≤ π/2 and −π ≤ l < π). The Galactic component of the neutrino flux originates from the decay of DM particles in the Milky Way halo, with the differential flux

dΦ_G/dE_ν (b, l) = (1/(4π m_DM τ_DM)) (dN_ν/dE_ν) ∫_0^∞ ρ_h[r(s, b, l)] ds ,   (2.1)

where m_DM and τ_DM are the DM mass and lifetime, respectively, and ρ_h(r) is the density profile of DM particles in our Galaxy as a function of the distance r from the Galactic center (GC). The factor dN_ν/dE_ν is the energy spectrum of neutrinos produced in the decay of a DM particle. The dependence of the Galactic neutrino flux on the Galactic coordinates originates from the off-center position of the Sun with respect to the center of the DM halo, which is located at the GC. The neutrino flux at Earth is given by the line-of-sight integration over the parameter s, which is related to r by

r(s, b, l) = √(s² + R_⊙² − 2 s R_⊙ cos b cos l) ,   (2.2)

where R_⊙ ≈ 8.5 kpc is the Sun-GC distance. The extragalactic component of the neutrino flux, which originates from DM decay at cosmological distances, is isotropic to leading order and thus independent of the Galactic coordinates. The differential flux with respect to the neutrino energy measured at Earth, E_ν, is given by

dΦ_EG/dE_ν = (Ω_DM ρ_c/(4π m_DM τ_DM)) ∫_0^∞ [dz/H(z)] (dN_ν/dE_ν)[(1+z)E_ν] ,   (2.3)

where H(z) = H_0 √(Ω_Λ + Ω_m (1+z)³) is the Hubble expansion rate as a function of redshift z and ρ_c = 5.5 × 10⁻⁶ GeV cm⁻³ is the critical density of the Universe. For the cosmological parameters we take the values derived from the Planck temperature map data [32]: Ω_Λ = 0.6825, Ω_m = 0.3175, Ω_DM = 0.2685 and h ≡ H_0/(100 km s⁻¹ Mpc⁻¹) = 0.6711. Apart from a relative normalization factor of order unity, the spectral shapes of the Galactic and extragalactic contributions are typically similar (see for instance Fig. 1 in [23]). To a first approximation, this means that the angular and energy dependences factorize.
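To make the Galactic ingredient concrete, the line-of-sight integral entering eq. (2.1) can be evaluated numerically. The sketch below (Python; the function names are ours, the 500 kpc truncation is an arbitrary choice, and the profile parameters anticipate the NFW values adopted for the numerical analysis below) compares the DM column density toward the GC with the anti-GC direction:

```python
import numpy as np
from scipy.integrate import quad

R_SUN = 8.5        # Sun-GC distance R_sun [kpc]
R_C = 20.0         # NFW critical radius r_c [kpc]
RHO_0 = 0.33       # NFW normalization [GeV cm^-3]
KPC_CM = 3.086e21  # kpc in cm

def rho_nfw(r):
    """NFW profile rho_0 / [(r/r_c)(1 + r/r_c)^2] (see section 2)."""
    x = r / R_C
    return RHO_0 / (x * (1.0 + x) ** 2)

def los_density(b, l):
    """Line-of-sight integral int_0^inf rho_h[r(s, b, l)] ds entering the
    Galactic flux of eq. (2.1); returns GeV cm^-2.  The integral is
    truncated at 500 kpc, beyond which the contribution is negligible."""
    cos_th = np.cos(b) * np.cos(l)

    def integrand(s):
        r = np.sqrt(s**2 + R_SUN**2 - 2.0 * s * R_SUN * cos_th)
        return rho_nfw(r)

    # 'points' flags the (integrable) 1/r spike toward the GC at s = R_SUN.
    val, _ = quad(integrand, 0.0, 500.0, points=[R_SUN], limit=200)
    return val * KPC_CM

# Column density toward the GC vs the anti-GC: the source of the
# anisotropy exploited in section 3.
print(los_density(0.0, 0.0) / los_density(0.0, np.pi))
```

The GC-to-anti-GC ratio of column densities is what ultimately drives the angular tests of section 3.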
In the following we shall analyze the two observables independently; i.e., by integrating over energy we obtain the angular distribution for the angular analysis of the data, and by integrating over the solid angle we derive the energy distribution for the energy analysis of the data. An ideal analysis would include both the energy and angular distributions simultaneously, but performing it requires detailed information on the detector (such as the declination dependence of the effective area) which is not publicly available at present.
The theoretical angular probability distribution function (PDF) from decaying DM, obtained by integrating the total flux over energy, is given by

p_DM(b, l) = κ [ η(b, l) + Ω_DM ρ_c β ] ,   (2.4)

where the extragalactic term contains

β = ∫_0^∞ dz / [(1+z) H(z)] ,   (2.5)

and, from the normalization condition ∫ p_DM(b, l) dΩ = 1, one has

κ⁻¹ = 4π ( ⟨η⟩ + Ω_DM ρ_c β ) ,   (2.6)

with ⟨η⟩ denoting the full-sky average of the Galactic column density

η(b, l) = ∫_0^∞ ρ_h[r(s, b, l)] ds .   (2.7)

For our numerical analysis we adopt a Navarro-Frenk-White density profile [33]

ρ_h(r) = ρ_0 / [ (r/r_c)(1 + r/r_c)² ] ,   (2.8)

where r_c ≈ 20 kpc is the critical radius and ρ_0 = 0.33 GeV cm⁻³, which yields a DM density at the Solar System ρ_⊙ = 0.39 GeV cm⁻³ [34]. Figure 1a shows the illustrative PDF of decaying DM in Galactic coordinates, where a stronger reddish color depicts a higher value of the PDF (the color code is in linear scale). It is worth emphasizing that in the case of decaying DM, the relative contribution of the Galactic (centered at the GC) and extragalactic (isotropic) components is fixed; the total PDF has a mild concentration around the GC and flattens out to quasi-isotropic when moving to larger latitudes and longitudes. For an isotropic distribution (the case for astrophysical neutrinos of extragalactic origin), the PDF is obviously

p_iso(b, l) = 1/(4π) .   (2.9)

In section 3 we will use the PDFs in eqs. (2.4) and (2.9) for the angular analysis of IceCube data. The expected energy spectrum of the neutrino flux from DM decay can be obtained by integrating eqs. (2.1) and (2.3) over the solid angle. The average Galactic flux over the full sky can be written as

dΦ̄_G/dE_ν = (⟨η⟩/(4π m_DM τ_DM)) dN_ν/dE_ν .   (2.10)

Similarly to eq. (2.10), for the extragalactic component one can write

dΦ̄_EG/dE_ν = (Ω_DM ρ_c/(4π m_DM τ_DM)) ∫_0^∞ [dz/H(z)] (dN_ν/dE_ν)[(1+z)E_ν] ,   (2.11)

which coincides with eq. (2.3), that flux being isotropic.


Angular distribution of IceCube data: dark matter vs. isotropy

In [3], it is merely mentioned that the high-energy data collected by IceCube have been checked to be compatible with an isotropic distribution. However, the relevant question in testing the DM scenario is whether the angular distribution of the data prefers a DM-like distribution or an isotropic one, and with what significance. In this section we elaborate on this question. Needless to say, a rigorous evaluation of the compatibility with (or preference for) a decaying DM distribution can only be performed within the IceCube collaboration.
We show in the following that a number of tests are possible with the currently available information, under different simplifying approximations. Our preliminary conclusions are the following: either a likelihood method (section 3.1) or a Kolmogorov-Smirnov test (section 3.2, including its Anderson-Darling variant in section 3.3) suggests that the data prefer a DM-like distribution over an isotropic one at confidence levels ranging from 89% to 98%, i.e., roughly at the "2σ" level. In section 3.4 we discuss some of the possible limitations of our treatments. We argue that, while an improved treatment of the angular resolution or of the background-signal separation might degrade these figures somewhat, some preference for a DM-like distribution (i.e., for some excess toward the inner Galaxy) persists. We also perform a KS forecast analysis of the number of events needed to reject the DM distribution, if the true distribution were isotropic. This forecast suggests that the current diagnostic power of a KS test is no better than the 1 to 2 σ level, and that a statistics of about 120 events is needed to reach a ∼ 99% C.L. This appears within reach of the lifetime of the IceCube detector, which suggests that more conclusive tests should be possible even with the current experiment. A better diagnostic power can definitely be attained by the planned next generation of neutrino detectors.

Likelihood analysis
A first test consists in checking how well the event distribution follows a DM-like distribution as opposed to an isotropic one, via a likelihood test. For each event we can define a probability distribution function p_i. We follow here the same "flat-sky" approximation common in neutrino telescope point-source analyses [35], namely

p_i(x) = (1/(2π σ_i²)) exp( −|x_i − x|² / (2σ_i²) ) ,   (3.1)

where |x_i − x| is the angular distance between the two points and σ_i is the error on the direction reconstruction reported by IceCube. In this paper we analyze the 35 events collected during three years by the IceCube detector [3]. Out of these 35 events, 7 are track events (with small uncertainty in the reconstruction of the incoming neutrino direction, σ_i ≈ 1°-2°) and the remainder are shower events. In Table I of the supplementary material of [3], all the event properties, including the coordinates x_i and uncertainties σ_i, are listed. In Galactic coordinates, the angular distance reads

|x_i − x| = arccos[ sin b_i sin b + cos b_i cos b cos(l_i − l) ] ,   (3.2)

where (b_i, l_i) are the Galactic coordinates of the i-th event. Figure 1b shows a schematic view of the sky-map observed by IceCube.
The following test statistic (TS) can be defined for the likelihood analysis:

TS_like = 2 ln( L_DM / L_iso ) ,  with  L_model = Π_{i=1}^{N} ∫ p_i(x) p_model(x) dΩ ,   (3.3)

where p_model is the angular PDF of eq. (2.4) or (2.9) and the product runs over the signal events. In the formulae above, N refers to the number of signal events. It would be by far too optimistic to assume N = 35, since according to IceCube a number of background events N_b = 15^{+10.1}_{−5.8} contaminates the sample. For illustrative purposes, let us assume N_b = 15, namely the central value reported by IceCube, which means N = 20 in eq. (3.3). However, we still do not know which events are background and which are signal. To simplify the combinatorics somewhat, we can nonetheless assume that none of the events with energy E_ν > 150 TeV is due to background (that is, atmospheric neutrinos or muons), since the IceCube collaboration estimates the number of background events in that energy range to be ≲ 1. This leaves C(26,15) = 7,726,160 ways of selecting which of the low-energy events are background. For each choice we calculate the TS_like value, whose distribution (with mean value TS_like = 2.1) is shown in Figure 2a. To estimate the p-value, we generated a similar distribution of TS_like values from ∼ 10⁵ sets of 20 isotropically distributed events. For each of the C(26,15) realizations, we calculate the p-value by comparing its TS_like with the TS_like values of the generated sets (simply, the p-value is the fraction of generated sets whose TS_like is at least as large as the one computed from the observed data). Figure 2b shows the distribution of the computed p-values. It is graphically clear that the data mildly prefer the DM angular distribution over the isotropic one. The average p-value is found to be ∼ 2%, which means a ∼ 98% C.L. preference for the DM distribution.
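A minimal numerical sketch of this procedure might look as follows (Python; the DM-like PDF here is a toy stand-in for eq. (2.4) with an arbitrary 0.5 excess weight toward the GC, and the grid resolution and event errors are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def ang_dist(b1, l1, b2, l2):
    """Great-circle distance between (b1, l1) and (b2, l2), cf. eq. (3.2)."""
    c = np.sin(b1) * np.sin(b2) + np.cos(b1) * np.cos(b2) * np.cos(l1 - l2)
    return np.arccos(np.clip(c, -1.0, 1.0))

# Coarse pixelization of the sky (illustration only).
b = np.linspace(-np.pi / 2, np.pi / 2, 60)
l = np.linspace(-np.pi, np.pi, 120, endpoint=False)
B, L = np.meshgrid(b, l, indexing="ij")
dOmega = np.cos(B) * (b[1] - b[0]) * (l[1] - l[0])

# Toy DM-like PDF: isotropic floor plus a broad excess toward the GC.
# (A real analysis would use eq. (2.4); the 0.5 weight is arbitrary.)
p_dm = 1.0 + 0.5 * np.exp(-ang_dist(B, L, 0.0, 0.0) ** 2)
p_dm /= np.sum(p_dm * dOmega)
p_iso = np.full_like(p_dm, 1.0 / (4.0 * np.pi))

def ts_like(events):
    """TS_like = 2 ln(L_DM / L_iso): each likelihood is the product over
    signal events of the flat-sky Gaussian event PDF of eq. (3.1) (up to a
    normalization that cancels in the ratio) convolved with the model PDF."""
    ts = 0.0
    for b_i, l_i, sig in events:
        w = np.exp(-ang_dist(b_i, l_i, B, L) ** 2 / (2.0 * sig**2)) * dOmega
        ts += 2.0 * (np.log(np.sum(w * p_dm)) - np.log(np.sum(w * p_iso)))
    return ts

def isotropic_events(n, sig=0.2):
    """One pseudo-experiment: n directions drawn isotropically."""
    bs = np.arcsin(rng.uniform(-1.0, 1.0, n))
    ls = rng.uniform(-np.pi, np.pi, n)
    return list(zip(bs, ls, [sig] * n))

# p-value: fraction of isotropic pseudo-experiments whose TS_like is
# at least as large as the observed one.
```

In the real analysis this is repeated for each of the C(26,15) background assignments, with the properly normalized PDFs of eqs. (2.4) and (2.9).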

Kolmogorov-Smirnov test
An alternative statistical test, the Kolmogorov-Smirnov (KS) test, can be performed to quantify the compatibility of the data with the DM vs. isotropic distributions. The KS test is a powerful test with several advantages, including its non-parametric nature, which makes it independent of the underlying (unknown) distribution function of the data. Although two-dimensional KS tests suffer in general from some ambiguities (such as the choice of the integration region), a one-dimensional KS test is possible if one takes into account the symmetry of the problem: the DM distribution depends only on the angle ϑ measuring the angular distance from the GC (i.e., the half-angle of the cone with axis passing through the GC), and not on the azimuthal angle ϕ around this axis. This allows one to use ϑ as the only variable, suitable for a one-parameter KS test. The DM and isotropic distribution functions, in terms of ϑ, are the following:

1. The isotropic distribution is constant,

p_iso(ϑ) = 1/2 .

2. The DM distribution p_DM(ϑ) can be obtained by changing variables from Galactic coordinates (b, l) to (ϑ, ϕ), such that cos ϑ = cos b cos l; it takes the form

p_DM(ϑ) = 2π κ [ η(ϑ) + Ω_DM ρ_c β ] ,

where β and η are given respectively in eqs. (2.5) and (2.7), with r, given in eq. (2.2), now taking the form r(s, ϑ) = √(s² + R_⊙² − 2 s R_⊙ cos ϑ).
Notice that both of the above PDFs obey the normalization ∫_0^π p(ϑ) sin ϑ dϑ = 1. The KS test compares the empirical distribution function (EDF) of the data with the cumulative distribution function (CDF) of the distribution being tested. The EDF of the data is given by

EDF(ϑ) = (1/N) Σ_{i=1}^{N} Θ(ϑ − ϑ_i) ,

where N is the number of signal events and Θ is the Heaviside step function. The CDFs of the DM and isotropic distributions can be calculated as

CDF_DM(ϑ) = ∫_0^ϑ p_DM(ϑ') sin ϑ' dϑ' ,  and  CDF_iso(ϑ) = ∫_0^ϑ p_iso(ϑ') sin ϑ' dϑ' = (1 − cos ϑ)/2 .

For illustration, Figure 3 shows the CDF for the DM (red solid) and isotropic (blue dashed) distributions, and the EDF for all the data, i.e., including the background events. Graphically, the data show a preference for the DM distribution; however, as discussed in section 3.1, the contribution of background events to the EDF should be taken into account. The statistical estimator used for the KS test is the maximal distance between the EDF and the theoretical CDF of the tested distribution. For instance, for the DM case the test statistic is defined as

TS_KS = max_ϑ | EDF(ϑ) − CDF_DM(ϑ) | .

An analogous definition holds for the isotropic case by replacing CDF_DM → CDF_iso. To account for the fraction of background events, we follow the same procedure as for the likelihood test: we assume that the signal consists of N = 20 events and that all the 9 events with energy > 150 TeV are signal events, so that 15 out of the 26 lower-energy events are background. We then calculate the estimator TS_KS for all the possible ways of choosing 15 background events out of 26. For the time being, we use only the best-fit angular positions of the data; we comment on this approximation in section 3.4. Figure 4a shows the distribution of TS_KS values for the DM and isotropic distributions, for all the possible choices of background events. As can be seen in Figure 4a, TS_KS takes statistically smaller values for the DM case than for the isotropic one, which means that, again, the data prefer the former distribution to some extent.
To calculate the p-value, we generate ∼ 10⁵ sets of events (each including 20 events) according to the isotropic distribution, and for each set we calculate TS_KS with respect to both the DM and isotropic distributions. As in the likelihood test, for each of the C(26,15) background choices the p-value is calculated by comparing the TS_KS of that realization with the TS_KS distribution of the generated sets, i.e., as the fraction of generated sets having smaller TS_KS. Figure 4b shows the distribution of p-values over all the C(26,15) realizations. As can be seen, the two distributions are separated, with the DM one having statistically smaller values (hence being closer to the data). On average, 10% of the generated isotropic samples have smaller TS_KS than the values obtained for data vs. the DM distribution; i.e., nine times out of ten, isotropically distributed events lead to a cumulative distribution more distant from the DM one than the data are. This can be interpreted as a mild (less than "2σ") preference for the DM distribution, qualitatively similar to the likelihood test results of section 3.1 (for comparison, 73% of the generated isotropic samples have smaller TS_KS than the values obtained for data vs. the isotropic distribution). The fact that the p-value is now higher than in the likelihood test is also reasonable, since the likelihood test uses more information and should thus be more powerful.
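The KS machinery above is simple to implement; a sketch follows (our own function names; the isotropic CDF is in closed form, while the DM CDF would be supplied as a numerically tabulated function built from eqs. (2.5) and (2.7)):

```python
import numpy as np

def cdf_iso(theta):
    """Isotropic CDF: the integral of p_iso = 1/2 against sin(t) dt,
    i.e. (1 - cos theta) / 2."""
    return 0.5 * (1.0 - np.cos(theta))

def ts_ks(thetas, cdf):
    """KS statistic: maximal distance between the empirical distribution
    function of the events and the model CDF.  Since the EDF is a step
    function, its values just before and just after each data point are
    both checked."""
    t = np.sort(np.asarray(thetas, dtype=float))
    n = len(t)
    c = cdf(t)
    edf_hi = np.arange(1, n + 1) / n  # EDF just after each point
    edf_lo = np.arange(0, n) / n      # EDF just before each point
    return max(np.max(np.abs(edf_hi - c)), np.max(np.abs(edf_lo - c)))
```

Here `ts_ks(thetas, cdf_iso)` tests against isotropy; passing the numerical DM CDF instead gives the "data vs. DM" estimator.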

Anderson-Darling test
One of the shortcomings of the KS estimator used in the previous section is that it is generally not very sensitive to deviations of the EDF from the CDF near the endpoints (that is, ϑ ≈ 0 and ϑ ≈ π), while the DM distribution differs from the isotropic one most notably at ϑ ≈ 0. A powerful test in such cases is the Anderson-Darling (AD) test, which in the basic form used here is also non-parametric. The AD test statistic is defined as

TS_AD = N ∫_0^π [EDF(ϑ) − CDF_DM(ϑ)]² / { CDF_DM(ϑ) [1 − CDF_DM(ϑ)] } dCDF_DM(ϑ) ,

and similarly for the isotropic distribution. Here again N = 20. As for the KS test, we calculate TS_AD for all the possible ways of dropping 15 background events from the 35 events. Figure 5a shows the distribution of TS_AD over all the possible realizations, for both the DM and isotropic distributions; the statistically larger TS_AD for the isotropic distribution indicates the preference of the data for the DM distribution. Again, as for the KS test, to calculate the p-value we generate ∼ 10⁵ sets of events (each including 20 events) according to the isotropic distribution. With the same method described in section 3.2 we calculate the distribution of p-values for the AD test, which is shown in Figure 5b.
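The AD integral admits a standard compact discrete form over the order statistics, which is what one would actually code; a sketch (the model CDF is passed in, as in the KS case):

```python
import numpy as np

def ts_ad(thetas, cdf):
    """Anderson-Darling statistic in its standard discrete form,
    A^2 = -n - (1/n) sum_i (2i - 1) [ln F(x_(i)) + ln(1 - F(x_(n+1-i)))],
    which weights the endpoints of the distribution more heavily than
    the KS maximal distance."""
    f = np.sort(cdf(np.asarray(thetas, dtype=float)))  # F at ordered data
    n = len(f)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(f) + np.log(1.0 - f[::-1])))
```

The endpoint weighting 1/[F(1 − F)] is why this estimator is better suited than KS to the excess at small ϑ discussed above.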

Other issues about the directional analysis
In the tests performed previously, some simplifying approximations have been used. In this section we comment on a couple of these approximations and their effect on our results. We also perform a forecast analysis addressing the question of how many signal events are required to distinguish between the DM and isotropic distributions at a given confidence level. This is a useful theoretical "benchmark" to keep in mind, independent of experimental details which might have been mis-modeled in our treatment.
• Angular Resolution: in both the KS and AD tests, we neglected the resolution of the detector in reconstructing the direction of incoming neutrinos and fixed the incoming directions of the events to the best-fit values reported by IceCube. However, for cascade events there is a large uncertainty in the incoming direction which cannot be neglected. In principle, one could generalize the previous method by further increasing the number of Monte Carlo realizations, replacing each data point by a sampling over its Gaussian error. Unfortunately, a quick estimate of the runs needed to correctly sample the distribution function is sufficient to convince oneself that this is numerically unfeasible.
On the other hand, the simpler question of whether the "preference" for a DM-like distribution can be washed away by accounting for the angular resolution can be addressed with a single, conservative academic case: we repeated the analysis by shifting all the incoming directions of cascade events 10° up (that is, 10° farther from the GC). We performed the KS and AD tests on this shifted data set, and the obtained level of discrimination is approximately similar to that reported in sections 3.2 and 3.3. In more detail, the average p-values of the KS and AD tests for "data vs. DM" are close to the previous ones, while for "data vs. isotropy" the average p-value decreases to ∼ 40%, which means the shifted data are more compatible with the isotropic distribution (as expected). However, the preference for the DM distribution persists.
• Background Rejection: in the previous analyses, we simplified the signal/background separation by simply assuming that all the events with E_ν > 150 TeV are signal, remaining agnostic about the rest. However, one can think of cleverer separation criteria, like the method developed (and used in the IceCube data analysis) in [36,37], which combines a veto criterion based predominantly on the declination of the events with an energy cut. With this method, one ends up inferring that 13 out of the 35 events are signal events, and the 15 background events should then be selected among the 22 remaining events. Performing the statistical tests with this event selection, one still finds p-values similar to those reported in sections 3.2 and 3.3. This is not surprising, since a priori we expect that the high-energy events, which are among the signal events in both background-rejection methods, play the crucial role in both analyses.
• Forecast: There are obviously further approximations not addressed in the above treatments. For instance, we allowed neither for a variable level of background nor for the instrumental dependence of the effective area on the angular direction. Actually, there are reasons to believe that accounting for the latter effect would not alter our previous conclusions much, since at least for events with declination lower than 20°-30°, the IceCube detector would see an incoming isotropic flux as almost isotropic (see for instance the solid and dashed gray lines in Fig. 3 of Ref. [3]). Only a handful of events are characterized by higher declinations, where the Earth acts as a shield for the higher-energy events.
A further sanity check consists in asking whether the results we obtained are reasonable, based only on statistical expectations rather than on detector-dependent considerations and the actual data. For instance: on purely theoretical grounds, how many events, if isotropically distributed, are needed to reject a DM-like anisotropy pattern at a given confidence level? This is also useful to gauge the diagnostic potential that can reasonably be expected from the statistics collected by IceCube over its whole lifetime of about 20 years.
For the specific case of the KS test, the critical value of TS_KS for a confidence level 1 − α (provided that n ≳ 35) is given analytically by √(ln(2/α)/(2n)). We generate n events according to the isotropic distribution and then calculate at which confidence level the DM distribution can be rejected, computing the p-value by comparison with ∼ 10⁵ generated data sets. Figure 6 shows the rejection confidence level as a function of the number of events. A current diagnostic potential around 80% C.L. appears reasonable, supporting our previous analyses. It also suggests that with a statistics of about 120 events (reasonably within reach of the lifetime of the IceCube detector) it should be possible to reject the DM distribution at about 99% C.L.
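The analytic critical value makes this forecast trivial to reproduce; a sketch (the 0.15 CDF separation in the usage note below is our illustrative assumption, not a number from the text):

```python
import math

def ks_critical(n, alpha):
    """Asymptotic KS critical distance for confidence level 1 - alpha
    (valid for n >~ 35): sqrt(ln(2/alpha) / (2n))."""
    return math.sqrt(math.log(2.0 / alpha) / (2.0 * n))

def events_needed(dist, alpha):
    """Events required before a fixed EDF-CDF separation 'dist' exceeds
    the critical distance, i.e. can be rejected at C.L. 1 - alpha."""
    return math.ceil(math.log(2.0 / alpha) / (2.0 * dist**2))
```

For instance, if the typical isotropy-DM separation in the cumulative distributions is of order 0.15, `events_needed(0.15, 0.01)` gives 118 events, in line with the ∼ 120 quoted above.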

Dark Matter decay, energy spectra and possible production mechanisms
Compared with the angular distribution of the events, most features of the neutrino spectrum from DM decay are significantly more model-dependent. As we argued in our previous publication [23], the most generic feature is that the spectrum presents a high-energy cutoff at some fraction of the DM mass, typically m_DM/2 if two-body decays including one neutrino are present. Differently from most astrophysical models, which predict relatively featureless spectra over several decades of energy, spectra associated with DM decay are expected to depart somewhat from a (scale-invariant) power law. The most common spectral feature is a "spectral dip" between the endpoint and the low-energy part of the spectrum, where the atmospheric (and possibly other astrophysical) backgrounds take over. This feature is typically expected in all (relatively common) circumstances where the spectrum is characterized by comparable levels of "soft" neutrino channels (as coming from quark cascades) and "hard" neutrino channels (notably associated with leptonic final states). This was shown parametrically for different final states in [23]. Here we take a more theoretically motivated approach, but shall limit ourselves to some benchmarks to illustrate the main points.
In a large class of models, named of the "portal type", a link between the dark sector and the standard model can be established via a Lagrangian of the form

L_portal = (1/Λ^{d−4}) O_SM O_DM ,

where O_SM and O_DM are gauge-invariant operators composed solely of standard model and dark sector fields, respectively, d is the dimension of the (in general, effective field theory) portal operator, and Λ is an energy scale. The so-called "neutrino portal" [38] consists of replacing O_SM with the singlet operator (H L) constructed from the lepton and Higgs SU(2) doublets. In the standard model this is the lowest-dimensional (dimension 5/2) singlet operator that can be used to build a model where the DM candidate decays into neutrinos.
Concerning O_DM, the simplest possibility is obviously to replace it with a spin-1/2 gauge-singlet neutrino N identified with the DM candidate. Basically, this corresponds to DM being a heavy "sterile neutrino", in which case L_portal has dimension d = 4 and the corresponding theory is renormalizable. This possibility has been explored in [30] in connection with the interpretation of the IceCube data. Since this is the simplest case, let us summarize the main results. The lifetime of N (which is the DM candidate) is roughly given by [30]

τ_DM ∼ 8 × 10²⁸ s (1 PeV / M_N) (10⁻²⁹ / y_N)² ,

where M_N is the mass of N and y_N is the dimensionless coefficient of the operator N (L H). At tree level, N decays into the ℓ± W∓, ν Z and ν h channels with relative branching ratios 2 : 1 : 1. In the specific model of Ref. [30], even the flavor structure of the decay channels (i.e., the branching ratios into each flavor ℓ = e, µ, τ) is determined in terms of the elements U_{ℓi} of the neutrino mixing matrix (we assume CP conservation in the neutrino sector). To be definite, we separately consider the two cases of normal mass hierarchy (NH) and inverted mass hierarchy (IH), with the corresponding branching ratios as reported in [30]. For the mass of the DM particle we have the freedom to choose a value within twice the energy width of the last (observed) bin of IceCube data; we choose m_DM ≡ m_N = 4 PeV. Figure 7 shows the energy spectrum of (ν_e + ν_µ + ν_τ)/3 (computed as in [23], relying on [39]) for this mass in both the NH and IH cases.
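A quick numerical check of this lifetime scale can be made from the total tree-level width; the sketch below uses our own rough estimate Γ ∼ y_N² M_N/(4π) summed over the three channels (O(1) coefficients dropped; the exact expression is in Ref. [30]):

```python
import math

HBAR_GEV_S = 6.582e-25  # hbar [GeV s]

def lifetime_seconds(y_n, m_pev):
    """Order-of-magnitude lifetime for N -> l W, nu Z, nu h through the
    d = 4 portal y_N N L H.  Total width approximated as
    y_N^2 M_N / (4 pi), a rough estimate with all O(1) channel
    coefficients dropped (not the full expression of Ref. [30])."""
    m_gev = m_pev * 1.0e6
    gamma = y_n**2 * m_gev / (4.0 * math.pi)  # total width [GeV]
    return HBAR_GEV_S / gamma

# y_N ~ 1e-29 and M_N = 1 PeV give a lifetime of order 10^29 s,
# consistent with the ~8 x 10^28 s scale quoted above.
```

The exercise illustrates how extraordinarily small the portal coupling must be: y_N of order 10⁻²⁹ is needed for a PeV-scale lifetime compatible with the IceCube rate.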
In this model, a variety of production mechanisms are possible, depending on the details of its completion. For example, by adding a mass term for N and an interaction term g Φ N N with a scalar Φ, one can consider the possibility that N is populated via the decay of Φ, in the case that Φ is heavier than N. This possibility has been studied in [30] by taking Φ to be the "Higgs" of a broken gauged B − L symmetry. In this scenario, Φ can also be invoked for both inflation and leptogenesis, and the mass of N is acquired via the vev of Φ. We note, however, that this is by no means the unique choice. For instance, for the case m_Φ ≫ m_N, the sterile neutrino can attain the right abundance via the so-called "freeze-in" scenario for a coupling g ≲ 10⁻⁶; this would represent a variation of the model described in [40].
Another possible choice for the dark-sector operator is to replace O_DM with a gauge-invariant combination (mimicking the SM operator (H L)) built out of a singlet scalar φ and a singlet fermion χ, i.e., O_DM → χ φ, which leads to the unique dimension-5 operator of this portal type. The spectra would be different in this case due to the presence of a "dark" daughter particle in the final state. This operator is typically adopted in "Asymmetric Dark Matter" models; see for instance the review [41], or [42] for a specific example. These scenarios imply a link between the abundance of DM and the generation of the baryon asymmetry.
Although it is certainly possible to utilize these models to accommodate a PeV-scale DM, we do not indulge in further details here, since phenomenologically one can reproduce signals similar to those mediated by the dimension-4 operator, apart from kinematical differences.
At dimension 6 and higher, a larger number of operators containing other standard model "portals" become possible. A list of those carrying B − L quantum numbers is reported for instance in [43]. The discussion soon becomes very technical and model-dependent. Qualitatively, however, the higher-dimensional operator models share the possibility of sizable branching ratios into hadronic final states, associated with softer neutrino spectra. Thus, while any model that fits the spectrum should be relatively similar in the hard channel (associated with "leptonic" operators), the lower-energy part of the spectrum is more model-dependent. In [23], we already argued that with both soft and hard channels available, a fit of the data can easily be achieved. It is worth showing, however, that an acceptable fit can be obtained already with the most constrained model, corresponding to the dimension-4 operator N (L H): Figure 8 shows the energy distribution of events for a 4 PeV DM mass, including both the DM signal and the residual atmospheric background (relevant only at low energies), for the previously mentioned NH model (left panel) and IH model (right panel). The width of the shaded regions corresponds to the variation of τ_DM within the 1σ range around the best-fit point obtained from the fit to IceCube data (τ_DM = 7.3 × 10²⁷ s for NH and τ_DM = 1.1 × 10²⁸ s for IH). The dashed green line shows the expected events for an E_ν⁻² astrophysical spectrum. Qualitatively, this DM model (the simplest one!) is in better agreement with the data than the astrophysical one above about 300 TeV, and in slightly worse agreement at lower energies.
However, in our opinion the model matches the data quite satisfactorily, given that it is reasonable to expect either some extra signal at low energy (coming from sub-leading soft channels of DM decay, mediated for instance by some dimension-6 operators) or extra background from prompt atmospheric neutrinos or from other astrophysical neutrinos with a relatively steep spectrum.

Gamma-ray bounds
Any high-energy neutrino signal is accompanied by gamma-ray signals, and DM decay is no exception; here we discuss some of these potential signatures. At energies E_γ ≳ TeV, the Universe is opaque to gamma-rays. However, electromagnetic cascades develop: gamma-rays produce e± pairs through interactions with the interstellar radiation field and, at PeV energies, with the CMB. In turn, these e± produce further gamma-rays via inverse Compton scattering on CMB photons. The process continues until the gamma-ray energies fall below a threshold of O(100) GeV. Hence, the extragalactic component of the DM electromagnetic signal (i.e., the gamma and e± fluxes) will certainly end up in cascades. The inverse Compton photons induced by the e± populate a gamma-ray flux with energies roughly between E_1 ∼ O(1) GeV and E_2 ∼ O(100) GeV, see for example Fig. 5 in [44]. This flux is constrained by the Fermi data [45], whose integrated energy density (a figure affected by a 20%-30% uncertainty) is reported in eq. (5.1). To check whether the expected gamma-ray signal from decaying DM with the properties discussed in section 4 violates the bound in eq. (5.1), we calculate the total electromagnetic energy budget, i.e., the total energy density deposited in prompt gammas and e± in the decay of DM. A quick computation for the NH case of the previous section yields the value in eq. (5.2); for the IH case, the energy density is smaller by a factor of two. This energy density is smaller than the one measured by Fermi-LAT by almost a factor of ∼ 20. Even including the residual isotropic contribution from the DM halo (i.e. the anti-GC flux), the energy density still falls below the Fermi bound by almost one order of magnitude, eq. (5.3). These figures imply that it is not yet possible to probe PeV-scale DM decay with the Fermi data on the isotropic residual flux.
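The order of magnitude of this energy-budget check can be reproduced with a short numerical estimate. The sketch below relies on the standard result that, for a lifetime much longer than the age of the Universe, the cascade energy density today is ω ≈ f_em (Ω_DM ρ_c / τ_DM) ∫ dz / [(1+z)^2 H(z)]; the cosmology and the electromagnetic fraction f_em are illustrative assumptions, not values fixed by the text (apart from the NH best-fit lifetime quoted above):

```python
import math

# Illustrative Planck-like cosmology (assumed values, not from the text).
H0 = 67.7 / 3.086e19          # Hubble rate in s^-1 (67.7 km/s/Mpc)
Omega_m, Omega_L = 0.31, 0.69
Omega_dm = 0.26
rho_crit = 5.4e-6             # critical energy density in GeV/cm^3
tau_dm = 7.3e27               # s, NH best-fit lifetime quoted in the text
f_em = 0.3                    # assumed electromagnetic fraction of the decay energy

def H(z):
    """Hubble rate H(z) for flat LCDM, in s^-1."""
    return H0 * math.sqrt(Omega_m * (1 + z)**3 + Omega_L)

def cascade_energy_density(f_em, tau_dm, zmax=30.0, n=30000):
    """Cascade energy density today, GeV/cm^3: the comoving energy injection
    rate f_em * Omega_dm * rho_crit / tau_dm integrated over cosmic history,
    with each photon's energy redshifted by 1/(1+z)."""
    dz = zmax / n
    integral = sum(dz / ((1 + (i + 0.5) * dz)**2 * H((i + 0.5) * dz))
                   for i in range(n))
    return f_em * Omega_dm * rho_crit / tau_dm * integral

omega = cascade_energy_density(f_em, tau_dm)
print(f"cascade energy density ~ {omega:.2e} GeV/cm^3")
```

With these illustrative inputs the result sits at the 10^-17 GeV/cm^3 level, to be compared with the Fermi energy density of eq. (5.1); the comfortable gap is consistent with the factor ∼ 20 quoted in the text.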
However, there is mounting evidence that at least a significant fraction of the diffuse gamma-ray flux at high energy may be due to unresolved BL Lac objects, see for instance the recent [46] and references therein. If this can be solidly established, a corresponding improvement of the constraining power can be expected. For the Galactic contribution that we considered in eq. (5.3), the situation is more complicated and, at the same time, more promising, since additional diagnostics are available. The absorption length of gamma-rays at PeV energies is comparable to typical Galactic distances: one can assume neither full absorption/cascade development nor complete transparency. Also, both the CMB and the Galactic interstellar radiation field are important (see for instance [47]). The computation of the non-absorbed gamma-ray flux, as well as of the IC signal from diffuse photons up-scattered by the e± from DM decay, constitutes in general a complicated 3D problem. For our purposes here, however, it is sufficient to show that even the direct bounds on the photon flux at ∼ PeV energies do not rule out the scenario considered in this paper (having shown above that the cascade bounds are not constraining either). To that purpose, we compare the photon flux due to the Galactic DM halo with the CASA-MIA bounds of Ref. [48]. Since the CASA-MIA bounds were derived under the isotropic approximation, we compare them with the anti-GC contribution of the halo signal, which represents the isotropic residual of the DM signal (the additional signal is indeed peaked towards the Galactic Center). On the other hand, to be conservative, we assume an unattenuated flux for the DM photons. The CASA-MIA bounds state that:

• The integrated flux of γ-rays above 330 TeV must be below 1.0 × 10^-13 cm^-2 s^-1 sr^-1.
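The size of the anti-GC halo flux entering this comparison can be estimated with a line-of-sight integral over the halo profile. The sketch below uses an NFW profile with common Milky Way benchmark parameters and a hypothetical one-photon-per-decay yield above threshold; none of these inputs are fixed by the text, so the output is only an order-of-magnitude check:

```python
import math

# Illustrative NFW halo parameters (common Milky Way benchmarks, assumed here):
RS = 20.0          # scale radius, kpc
RSUN = 8.5         # Sun-GC distance, kpc
RHO_SUN = 0.4      # local DM density, GeV/cm^3
KPC_CM = 3.086e21  # cm per kpc

def rho_nfw(r):
    """NFW density (GeV/cm^3), normalized to RHO_SUN at the Sun's position."""
    norm = RHO_SUN * (RSUN / RS) * (1 + RSUN / RS)**2
    x = r / RS
    return norm / (x * (1 + x)**2)

def d_factor_antigc(smax=200.0, n=20000):
    """Line-of-sight integral of rho (GeV/cm^2) towards the anti-GC, where the
    galactocentric radius along the line of sight is simply r = RSUN + s."""
    ds = smax / n
    return KPC_CM * sum(rho_nfw(RSUN + (i + 0.5) * ds) * ds for i in range(n))

# Integrated flux above threshold, dPhi = D/(4 pi tau m) * N_gamma, assuming
# (hypothetically) one photon per decay above 330 TeV:
tau_dm = 7.3e27    # s, NH best-fit lifetime from the text
m_dm = 4.0e6       # GeV (4 PeV)
n_gamma = 1.0      # photons per decay above threshold -- illustrative placeholder
phi = d_factor_antigc() / (4 * math.pi * tau_dm * m_dm) * n_gamma
print(f"anti-GC flux above threshold ~ {phi:.1e} cm^-2 s^-1 sr^-1")
```

With these assumptions the estimate lands below the CASA-MIA limit of 1.0 × 10^-13 cm^-2 s^-1 sr^-1 quoted above, in line with the conclusion that the scenario is not excluded by direct PeV photon bounds.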
For the time being, the above results show that the assumed DM decay scenario is safely within the γ-ray limits. However, a more constraining result would follow from a "customized" analysis exploiting the peculiar angular dependence of the DM signal, as done e.g. for the Galactic Plane profile template in [49].

DM lifetime constraints from astrophysical explanations of IceCube data
Up to this point we have discussed the possibility of explaining the IceCube data within the decaying DM scenario. However, let us now assume that the observed IceCube events originate from yet unspecified astrophysical sources. In this case, the IceCube observations can be used to constrain DM properties. In this section we derive the corresponding limits on the heavy DM decay lifetime. Additionally, the lower-energy part of the IceCube data can be used to probe the annihilation cross section of DM; we study the annihilating DM case in appendix A.
A lower limit on the DM lifetime τ_DM can be obtained with a method similar to the one used in [50], to which we refer the reader for further details. The major differences with respect to that paper are the more refined spectra (which account for W, Z-strahlung corrections) and the dataset that we use in this section. In our analysis we consider the 12 energy bins from 10^1.4 TeV to 10^3.8 TeV of the IceCube data (see Figure 2 of Ref. [3]). The number of observed events in the i-th bin, N^i_data, is given in the fourth column of Table 1. For the astrophysical neutrinos, we assume two unbroken power-law fluxes, E_ν^-2 and E_ν^-2.3, with the normalizations reported in the IceCube paper [3]. The expected astrophysical events (including the best-fit atmospheric background) in each bin are given in the third column of Table 1. For each bin i we can derive the limit N^i_limit such that the number of events from DM decay should be smaller than N^i_limit. At the q% confidence level, N^i_limit can be read from the following equation:

∫_0^{N^i_limit} L(N^i_data, N) dN / ∫_0^∞ L(N^i_data, N) dN = q/100 ,    (6.2)

where L(N^i_data, N) is the Poisson likelihood of observing N^i_data events for an expectation of N^i_astro + N.

Table 1. Summary of the binned information used in our analysis for deriving the limit on the DM lifetime: the bin width; the expected number of astrophysical events, N_astro, for the E_ν^-2 and E_ν^-2.3 fluxes; the number of observed events, N_data; and the upper limit on neutrino events at 90% C.L., N_limit, assuming the E_ν^-2 and E_ν^-2.3 fluxes. The last column gives the astro-model-independent values of N_limit.

The resulting N^i_limit are very similar for the two astrophysical fluxes, with differences typically at the 10% level or smaller. This means that the limits are data-driven and robust with respect to the details of the astrophysical explanation invoked. In particular, at higher masses a similar result would be obtained had we been even more agnostic about the astrophysical explanation and simply derived, in each bin, the maximum theoretical flux consistent with the observed data. This astro-model-independent limit corresponds to replacing L(N^i_data, N) in eq.
(6.2) by the simple Poisson likelihood with expectation N,

L(N^i_data, N) = N^{N^i_data} e^{-N} / N^i_data! .    (6.3)

The values of N^i_limit obtained in this case are shown in the last column of Table 1. As can be seen, for energies ≳ 400 TeV they are almost the same as in the case of an E_ν^-2 (or E_ν^-2.3) astrophysical flux. Deviations grow at lower energies, since there the exact amount of the signal accounted for by astrophysical sources becomes more important. Figure 9 shows the obtained lower limit on the DM lifetime for various DM decay channels: Figure 9a for leptonic final states and Figure 9b for hadronic, gauge boson and Higgs final states. The final state ν_α ν̄_α in Figure 9a means DM decaying to all neutrino flavors with equal branching ratios. The gray dashed curve in Figure 9a shows the lower limit on the DM lifetime obtained before the recent IceCube data (AMANDA+IC22+IC40), taken from [50], which should be compared with the red solid curve. As can be seen, by assuming an astrophysical origin for the IceCube data, the lower limit on τ_DM can be improved by about one order of magnitude. In both panels of Figure 9 we assumed an E_ν^-2 astrophysical flux. However, to illustrate the dependence of the limits on the assumed astrophysical flux, in Figure 10a we plot the limit on τ_DM for one channel, DM → μ+μ−, for both the E_ν^-2 and E_ν^-2.3 astrophysical fluxes, as well as the astro-model-independent limit described in eq. (6.3), shown as a green dotted line (i.e. using N_limit from the last column of Table 1); all curves are at 90% C.L. As can be seen, the limits are the same for
higher DM masses and only differ slightly at lower masses. Obviously, the model-independent limit is the most conservative one.
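The extraction of N^i_limit from eq. (6.2) amounts to a one-dimensional integration of the Poisson likelihood over the DM signal strength. A minimal numerical sketch (the bin contents below are placeholders, not the actual Table 1 entries):

```python
import math

def poisson_like(n_data, mean):
    """Poisson likelihood of observing n_data events for expectation `mean`."""
    return mean**n_data * math.exp(-mean) / math.factorial(n_data)

def n_limit(n_data, n_astro, q=0.90, n_max=200.0, dn=1e-3):
    """Solve eq. (6.2) numerically: find N_limit such that the likelihood
    integral from 0 to N_limit is a fraction q of the total integral.
    Setting n_astro = 0 reproduces the astro-model-independent limit,
    where the likelihood reduces to eq. (6.3)."""
    grid = [(i + 0.5) * dn for i in range(int(n_max / dn))]
    like = [poisson_like(n_data, n_astro + n) for n in grid]
    total = sum(like)
    acc = 0.0
    for n, l in zip(grid, like):
        acc += l
        if acc >= q * total:
            return n
    return n_max

# Illustrative bin (placeholder numbers, not the actual Table 1 entries):
print(n_limit(n_data=5, n_astro=4.0))  # limit on top of an assumed astro flux
print(n_limit(n_data=5, n_astro=0.0))  # astro-model-independent limit
```

As in the text, the model-independent limit (no astrophysical events subtracted) is the larger, i.e. the more conservative, of the two.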
Of course, in specific DM models the lower limit on τ_DM will depend on the combination of the different branching ratios. As an example, Figure 10b shows the lower limit on τ_DM for the model proposed in [30], with the red solid line for NH and the blue dashed line for IH. For any other model, the corresponding limit on the lifetime can be derived from the curves in Figure 9 by appropriately scaling the channels according to the branching ratios of the model.
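The scaling rule can be sketched explicitly: since the expected event number in a channel scales as BR_f/τ_DM, requiring the summed signal in a given bin to stay below N_limit gives τ_DM ≥ Σ_f BR_f τ_f^limit, where τ_f^limit are the single-channel limits of Figure 9. This is only approximate when the single-channel limits are driven by different energy bins. A sketch with hypothetical numbers (not read off the actual figure):

```python
# Hypothetical single-channel lifetime limits at a fixed DM mass (seconds);
# placeholder values, to be read off Figure 9 for a real application.
tau_limit = {"nu nu": 5e27, "mu+ mu-": 3e27, "b b": 8e26}

# Hypothetical branching ratios of a specific model:
br = {"nu nu": 0.5, "mu+ mu-": 0.3, "b b": 0.2}

# Each channel alone saturates a bin at tau_f_limit, so its event count at
# lifetime tau is BR_f * N_limit * tau_f_limit / tau; demanding the sum stay
# below N_limit yields tau >= sum_f BR_f * tau_f_limit (per-bin approximation).
tau_model = sum(br[f] * tau_limit[f] for f in br)
print(f"approximate combined lifetime limit: {tau_model:.2e} s")
```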
Note that independent constraints from diffuse gamma-rays yield bounds in the range 10^26-10^27 s [51,52], which makes the present bounds from neutrinos comparable, if not better. In particular, for ∼ PeV masses and leptonic final states, the neutrino bounds are stronger by a factor of a few, consistently with the previous conclusion that the diffuse gamma-ray data lie a factor of a few below the sensitivity needed to probe the decaying DM models fitting the IceCube data.

Discussion and Conclusion
The discovery by the IceCube collaboration of a flux of high-energy neutrinos originating from outside the solar system has ushered us into a new era for astroparticle physics. This observation opens a new window for exploring high-energy astrophysical sources, whether Galactic or extragalactic, but also for constraining physics beyond the standard model. Since a consensus on the interpretation of the detected events has yet to be reached, it is important to sharpen the diagnostic tools used to probe the different alternatives. Here we have revisited the viability of a Dark Matter decay interpretation as the source of these events.
We have performed an extensive analysis of the angular distribution of the events. With a variety of statistical tests (maximum likelihood, Kolmogorov-Smirnov and Anderson-Darling tests) we concluded that a mild preference (at or below the two sigma level) exists for a DM-like distribution with respect to the purely isotropic distribution foreseen for a conventional astrophysical flux of extragalactic origin. Qualitatively, the result depends neither on the type of statistical estimator used nor on some of the approximations performed, and reflects an enhanced (albeit statistically not very significant) flux from the inner Galaxy region. We have also estimated that in a decade or so IceCube should collect sufficient statistics to perform a more constraining test, such that a DM distribution could be falsified at (close to) the three sigma level. Already at the present level, some aspects of our analyses are limited by the lack of knowledge of relevant experimental parameters. We therefore encourage the collaboration to provide a more stringent test of the DM hypothesis, alongside the ongoing tests of the Galactic Plane component and of the compatibility with isotropy.
Concerning the energy spectrum, although we have not performed extended fits to the spectral shape, it is clear that DM models and a typical astrophysical model can provide comparably good fits and that the data have little discriminatory power at the moment. We substantiated this point by showing that even the simplest and most constrained effective field theory operator (dimension four) allowing for heavy DM decay in portal-type models is capable of providing a satisfactory fit to the data, notably at the highest energies (above a couple of hundred TeV). Needless to say, in more complicated models with more free parameters and extra decay channels the fit can be improved. As for the directional signature, qualitatively the DM and astrophysical models do show clear spectral differences. The two classes of models typically differ: i) at E_ν ≳ 2 PeV, where DM models present an abrupt cutoff, while astrophysical models are often associated with a very mild decrease (and actually a peak at the Glashow resonance energy of about 6.3 PeV); ii) in the intermediate energy range, say between 200 and 800 TeV, where a more or less prominent dip in the events can be accommodated in DM models but would be more difficult to explain in astrophysical models. It is worth mentioning, however, that this "dip" is not extreme: the difference in the event rate at these intermediate energies is not expected to exceed a factor of a few between the two cases. Probably, a reduction of the error bar size by a similar factor (so, naively, at least a factor of four higher statistics) is needed before this diagnostic tool becomes more meaningful. It is also worth noting that the spectral shape at lower energies (tens of TeV) is affected by the atmospheric background (which may be underestimated), so the agreement with the spectrum at the highest energies is certainly a more stringent test. Of course, it is possible and in fact likely that these neutrinos are not due to DM decay.
Even assuming that the IceCube data originate from conventional astrophysical sources, however, we could derive bounds on decaying DM for various final states. We have derived lower limits on the heavy dark matter lifetime which improve by up to an order of magnitude on the existing constraints. That is by itself an interesting spin-off of the IceCube data for astroparticle physics. Also, in appendix A we showed that some non-trivial bounds can be obtained on the annihilation cross section ⟨σv⟩ of WIMP-like DM with masses ≳ 100 TeV. For neutrino final states, these bounds are already stronger than the bounds from the unitarity requirement. By extending the threshold of the IceCube analysis to lower energies (it is encouraging that progress is being made in this direction [53]) these bounds could definitely be made stronger.
Finally, we have commented on the complementary constraints on decaying DM from gamma-rays, which are not yet competitive with the neutrino ones. We have shown that the available gamma-ray data are generally sensitive to PeV-scale DM lifetimes of ∼ 10^26-10^27 s, one order of magnitude smaller than the lifetime required to interpret the IceCube neutrino data. The main obstacle to sharpening this sensitivity with the Fermi data is the still unknown origin of the diffuse extragalactic background flux. However, this diagnostic can be improved by achieving a better understanding of this background and, to some extent, by resolving part of its sources. Another opportunity is the measurement of the photon fraction in the high-energy cosmic-ray flux, as mentioned in section 5. In particular, an ad hoc analysis (accounting for the peculiar anisotropy associated with the DM signal) of the photon fraction in cosmic-ray data in the (0.1-1) PeV range could boost the current sensitivity. The IceCube collaboration itself, by using the IceTop facility, can contribute to progress in this direction.
Definitely, whatever the nature of this newly discovered flux turns out to be, these observations have contributed to opening a new window on the universe and, at the same time, have offered a serendipitous new probe of astroparticle physics.

Table 2. The upper limit on ⟨σv⟩ (at 90% C.L.) for various DM annihilation channels and three DM masses, obtained from the residual isotropic Galactic halo flux in the anti-GC direction (see eq. (A.3)). The unit of ⟨σv⟩ is 10^-22 cm^3 s^-1.
• The limits in Table 2 are obtained by considering merely the Galactic halo contribution; in this sense, they are very conservative. A second contribution to the isotropic neutrino flux, however, is the cosmic flux from the annihilation of DM at all redshifts, given in eq. (A.5), where ζ(z) takes into account the DM clustering, which depends on the redshift. There are several calculations of ζ(z) in the literature (see for instance [54-59]). Although most of the estimates of ζ(z) agree at large z (where the contribution to the cosmic flux is negligible), at small z, where the significant part of the flux originates, large discrepancies exist. A recent overview and re-evaluation of the uncertainties in ζ(z) can be found in [54], which represents a follow-up and improvement of the method proposed in [60]. For our analysis, we take two extreme cases of ζ(z) from Figure 5 of [54], obtained from the results of the Millennium Simulation II [61]. Table 3 reports the limits on ⟨σv⟩ obtained by considering both the fluxes in eqs. (A.3) and (A.5), for various DM annihilation channels. The reported limits correspond to the minimum and maximum values used for ζ(z), as described above. As can be seen, for the minimum values of ζ(z) the limit on ⟨σv⟩ in Table 3 is close to the one obtained by considering just the Galactic halo contribution (see Table 2), since in this case the residual isotropic Galactic halo contribution is the dominant flux. On the other hand, for the maximum values of ζ(z) the cosmic flux dominates and the limits on ⟨σv⟩ become stronger by almost an order of magnitude. For the neutrino final state, these limits are more stringent than the ones imposed by unitarity constraints (see for instance Fig. 2 in [62]) and might thus be of some phenomenological interest.
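To see how the low-z behaviour of ζ(z) drives the cosmic flux, one can evaluate the redshift kernel ∫ dz ζ(z)(1+z)^3/H(z) that, in standard treatments, multiplies the annihilation spectrum in the cosmic-flux integral. The sketch below uses a toy power-law ζ(z), a hypothetical stand-in for the Millennium-II-based curves of [54] (all numbers illustrative):

```python
import math

# Illustrative flat-LCDM cosmology (assumed values, not taken from the text).
H0 = 2.2e-18                  # Hubble rate today, s^-1
Omega_m, Omega_L = 0.31, 0.69

def H(z):
    """Hubble rate at redshift z for flat LCDM, in s^-1."""
    return H0 * math.sqrt(Omega_m * (1 + z)**3 + Omega_L)

def zeta_toy(z, zeta0=1e5, alpha=3.0):
    """Toy clustering boost: a hypothetical parametrization whose only relevant
    feature is that it is dominated by small z, as the curves of [54] are."""
    return zeta0 / (1 + z)**alpha

def redshift_kernel(zmax=20.0, n=20000):
    """Numerically evaluate the kernel  integral dz zeta(z) (1+z)^3 / H(z)."""
    dz = zmax / n
    return sum(zeta_toy(z) * (1 + z)**3 / H(z) * dz
               for z in ((i + 0.5) * dz for i in range(n)))

kernel = redshift_kernel()
print(f"redshift kernel ~ {kernel:.2e} s")
```

Rescaling zeta0 (or changing alpha at low z) shifts the kernel, and hence the limit on ⟨σv⟩, proportionally; this is why the minimum and maximum ζ(z) choices in Table 3 produce limits differing by almost an order of magnitude.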