Analysis of decay chains of superheavy nuclei produced in the 249Bk+48Ca and 243Am+48Ca reactions

The analysis of decay chains starting at the superheavy nuclei 293Ts and 289Mc is presented. We consider the spectroscopic properties of the nuclei identified in experiments on the 249Bk+48Ca and 243Am+48Ca reactions studied at the gas-filled separators DGFRS, TASCA, and BGS. We present an analysis of the decay data using widely adopted statistical methods applied to the short decay chains of parent odd-Z nuclei. We find that the recently suggested method of analyzing decay chains by Forsberg et al may lead to questionable conclusions when applied to the analysis of radioactive decays. Our discussion demonstrates reasonable congruence of the α-particle energies and decay times of the nuclei assigned to the isotopes 289Mc, 285Nh, and 281Rg observed in both reactions.


Introduction
The experiments on the synthesis of the odd-Z superheavy elements in the 249Bk+48Ca and 243Am+48Ca reactions yield chains of sequential decays of the isotopes of elements Mc or Ts following the scheme (Ts)→Mc→Nh→Rg→…. Unfortunately, the energy spectra of α particles of nuclei with odd numbers of protons and/or neutrons can be quite broad and sometimes cannot be used for unambiguous identification of neighboring isotopes. Therefore, in such cases their decay times must also be analyzed.
Neighboring parent isotopes with commensurate lifetimes can be observed in one reaction studied at the same energy of the bombarding ions. Studying the same reaction at different beam energies usually changes the yields of these isotopes. Moreover, the decays of the same nuclei can pass through different energy levels (allowed or hindered transitions), which can also lead to the observation of somewhat different lifetimes. Thus, taking into account the possible comparability of the lifetimes of neighboring isotopes, the reasonably expected fluctuations of the decay times, and the small number of nuclei observed in such experiments, the data analysis should be performed using statistical methods that have proven their value and reliability. This helps to obtain reliable assignments and to avoid possible errors arising from newly proposed methods that are untested for low-statistics events.
In a recent paper, Forsberg et al [1] presented results of experiments aimed at the study of the fusion-evaporation reaction 243Am+48Ca with the TASISpec setup at the Transactinide separator and chemistry apparatus (TASCA) at GSI. Two recoil-α-fission and five recoil-α-α-fission events were observed, and the assignment of the currently available similar decay chains to the isotopes 288Mc and 289Mc was discussed [1]. Previously, using the same reaction, four similar recoil-α-α-fission decay chains were observed at the Dubna gas-filled recoil separator (DGFRS) and assigned to 289Mc [2]. Besides, two recoil-α-fission and one recoil-α-α-fission events were reported from an experiment at the Berkeley gas-filled separator (BGS) and assigned to 288Mc [3]. These decay chains differ significantly from the decay chains of the isotopes assigned to 287,288,290Mc. First, the total decay time of all nuclei in these chains is much smaller than those for 287,288,290Mc. Second, these chains are terminated by spontaneous fission (SF) of isotopes of Rg or perhaps of element Nh, while the chains of 287,288,290Mc end in fission of isotopes of lighter elements: direct SF of Db or of the products of its electron capture, isotopes of Rf (see, e.g., [4]).

Figure 1. FoM_j values for the individual decay times, the analytic expression for the probability density function f(t) (for 14 events, black dashed line), an 'exponential distribution' g(t) (according to [1], brown short-dashed line), and the 90% confidence interval for the arithmetic mean FoM_ar values (red dashed-dotted lines) (all data are from [1]). The probability density function exp(−t/τ)/τ for an exponential distribution is shown by the blue solid line. The FoM_j values and the function f(t) are also shown in the inset on a logarithmic time scale.
The decay properties of the nuclei in all fourteen decay chains were analyzed with a novel 'more robust' statistical treatment explained in appendix A of [1]. A figure of merit (FoM) was proposed as a probability density function for the decay times of nuclei, and the corresponding values were calculated for each correlation time t in the three decay steps of the elements with proton numbers Z=115 (Mc), 113 (Nh), and 111 (Rg). The analytic expression for the FoM values for a data set with N data points and average nuclear lifetime τ was obtained by weighting an 'exponential distribution' g(t)=(t/τ)exp(−t/τ) (as stated in [1]) with the normalized likelihood function for τ (formula A.2 in [1]). The resulting function f(t), plotted with respect to relative decay times (the individual decay times t_j were divided by the average lifetimes τ_1-3 for each decay step 1-3), as well as the 'exponential distribution' g(t), are shown in figure 1 (dashed curve, squares, and short-dashed curve, respectively). The lower and upper limits of the arithmetic mean FoM_ar values, corresponding to a 90% confidence level (CL), were obtained by Monte Carlo simulations [1]; these limits are shown by the red dashed-dotted lines in figure 1. In addition, the probability density for the true exponential distribution exp(−t/τ)/τ is shown by the blue solid line.

Discussion of newly proposed method
In figure 1, the functions f(t) and g(t) are calculated for the relative lifetimes (t=t_j/τ) and the number of events N=14. The individual FoM_j values are shown for all 38 decay times given in table 1 of [1]. As can be seen, the functions f(t) and g(t) cannot be considered probability density functions for a normal exponential distribution, which is exp(−t/τ)/τ. The latter function reaches its maximum at t=0 when radioactive decay times are considered.
In [5] the function g(t)=(t/τ)exp(−t/τ) was obtained for the distribution of the logarithms of decay times according to the method suggested by Schmidt et al [6,7]. Here the frequency distribution of decay times dN/dt=exp(−t/τ)/τ is transformed to a representation in which the time t is replaced by ln(t). In this case the derivative of the decay function N(t) with respect to y=ln(t) becomes dN/dy=(1/τ)exp(y)exp[−exp(y)/τ] (or the function g(t) if the parameter y is changed back to t). The distribution of ln(t) has its maximum at the lifetime of the nucleus, t=τ. The function f(t), which is similar to g(t), is shown together with the FoM values on a logarithmic time scale in the inset of figure 1.
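The peaking of the log-time distribution at t=τ is easy to verify numerically. The short Python sketch below (all values illustrative, stdlib only) histograms the logarithms of simulated exponential decay times and locates the mode near the mean lifetime.

```python
import math
import random

random.seed(1)
tau = 1.0  # mean lifetime (arbitrary units)

# Sample exponential decay times and take their logarithms.
times = [random.expovariate(1.0 / tau) for _ in range(200_000)]
logs = [math.log(t) for t in times]

# Build a coarse histogram over ln(t) and locate its mode.
nbins, lo, hi = 60, -6.0, 3.0
width = (hi - lo) / nbins
counts = [0] * nbins
for y in logs:
    if lo <= y < hi:
        counts[int((y - lo) / width)] += 1
peak_bin = max(range(nbins), key=counts.__getitem__)
t_peak = math.exp(lo + (peak_bin + 0.5) * width)

# The density dN/dy = (1/tau) e^y exp(-e^y/tau) peaks at y = ln(tau),
# i.e. at t = tau, so t_peak should come out close to 1.
print(f"histogram of ln(t) peaks near t = {t_peak:.2f} (tau = {tau})")
```

The recovered mode sits near t=τ, in contrast with the plain exponential density of t, which is maximal at t=0.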
The doubtfulness of the proposed method can be demonstrated by table 4 of [1]. There, none of the 14 FoM_1 values, only one of the 14 FoM_2 values, and one of the 10 FoM_3 values fall within the suggested 90% interval of 0.181-0.255 (namely, 5% of all individual decay times); the same is seen in figure 1, where only two of the 38 decay times are located between the horizontal lines denoting the 90% confidence interval. For the geometrical average FoM_geom values, only four of the 14 values lie within this interval (29%). Although these intervals were calculated for the arithmetic average FoM_ar of the 14 geometrical mean values FoM_geom, each calculated for two to three individual decay times, it is hard to accept the validity of a proposed method in which only two of the 38 individual FoM_j values fall within the 90% confidence interval. From the results of the proposed calculations one would have to conclude that the analyzed decay chains originate from different parent nuclei, despite the fact that all of these nuclei were produced in the same reaction.
Such a conclusion may result from the fact that in this method neither the decay times corresponding to the maximum of the exponential distribution (t=0), nor the times closest to the average lifetime of the nuclei (the maximum of the distribution of ln(t)), nor the large times fall within the proposed 90% interval (see figure 1). If all of the decay times of the nuclei in a chain are close to the maximum of the exponential distribution of t (t≈0), the FoMs will be lower than 0.181 (see figure 1). Vice versa, decay times close to the maximum of the distribution of ln(t) will result in large FoMs exceeding 0.255. Moreover, to match the interval of 0.181-0.255, a decay chain should begin with the decay of a parent nucleus with, e.g., a high FoM, but be followed by the decay of a (grand)daughter nucleus with a low FoM, or vice versa. Finally, if some decay chain has a large FoM value, then the next chain should have a small FoM value. However, the observation of decay chains with times close to one or the other maximum should be much more probable than the observation of chains with smaller or larger times. Because of this inconsistency, we consider the model incorrect and the FoM values given in tables 4, 5, and 7 of [1] not meaningful.
Based on their calculations, Forsberg et al [1] state that 'the short chains do not constitute a set of chains that have the same origin and follow the same decay sequence, and that they should not be grouped together'.
In addition to the 14 short decay chains observed in the 243Am+48Ca reaction [1-3], 16 similar short chains were observed after the α decay of the parent nuclei produced in the 249Bk+48Ca reaction studied at the DGFRS in order to synthesize the isotopes of Ts with mass numbers 293 and 294 [8-10].
Despite the availability of experimental data on the 14 short decay chains from the 243Am+48Ca experiments [1-3] and the 16 chains from the 249Bk+48Ca reaction [8-10], only four [2] and ten [8-10] chains, respectively, from the DGFRS experiments were recently considered by Forsberg et al in their paper 'A new assessment of the alleged link between element 115 and element 117 decay chains' [11]. The same method was applied again for the analysis of the compatibility of the decay times of the nuclei in these chains, which led Forsberg et al to conclude that 'the ten element 117 chains together with the four element 115 chains from Dubna do not, on a close to 100% confidence level, form a common ensemble' [11]. Again, the same FoM values are presented in table 1 of [11], which contains the same mistake.
In the following sections we present existing experimental data and results of their statistical analysis using widely adopted mathematical methods.

Experimental data
For the analysis of all of the available data on the nuclei that can be attributed to the parent isotope 293Ts and its descendants, we include data from the experiments aimed at the synthesis of element Ts. The α-particle energies Eα and the corresponding decay times for the given isotopes observed in the 249Bk+48Ca and 243Am+48Ca reactions in the experiments performed at the DGFRS [2,8-10], TASCA [1], and BGS [3] are shown in figure 2. Despite the different assignments in [1,3], in figure 2 all of the short decay chains from the TASCA and BGS experiments are assigned to the parent isotope 289Mc.
The time intervals for events following a missing α decay in the DGFRS experiments were measured from the preceding registered events and are shown by horizontal arrows; that is, upper limits of the decay times are given. In the TASCA and BGS experiments, four decay chains in total were observed as recoil-α-SF events, whereas the other six chains consist of two consecutive α decays and SF.
In two of the 16 decay chains of 293Ts [8-10], the α particles of 285Nh were presumably missed. In the BGS experiment, in the 43 decay chains assigned to 288Mc, which undergoes five consecutive α decays, α particles were missed in 23 cases, or the assignment of the registered signals to the given nuclei was 'unsecure' [3]. Thus, the expected number of missed decays in three chains, each of which would consist of two α decays, is 0.64. Despite the seemingly low value, the probabilities of missing one or two α particles in three chains do not look small: 34% and 11%, respectively. In the 23 decay chains assigned to 287,288Mc in the TASCA experiment, up to 17-20 α particles may be considered missing (results of the same experiment are published in [1,12]). In addition to the three events indicated in table 1 of [12] as 'missing', several other events in chains nos. 6, 13, 18, and 22 (marked with footnotes f, k, n, and q), as well as one of the ten beam-off escape-like events (footnote a), may be random. The main part of the presumably missed α particles was indicated as 'several options' in chains nos. 6, 8, 9, 14, 17, and 21. In all of these cases several signals were registered, but none of them could be definitely determined to be a non-random α particle. The observation of several signals does not mean that at least one of them is a real event. If so, the number of possibly missed α particles in the seven short decay chains would be about two. Thus, one cannot exclude that several α events were missed in the four short decay chains in the TASCA and BGS experiments and that these chains actually consist of more than one α decay.
However, in the following discussion we omit this possibility and use the decay times as they are given in the published papers. Besides, in two of the short decay chains observed in the 249Bk+48Ca reaction [10], instead of SF, the isotope 281Rg underwent α decay followed by the 5 ms SF of 277Mt; this may be explained by a 10% SF branch of 281Rg and does not change the conclusions below.
As can be obtained from a simple Monte Carlo simulation, the ratio between the maximal and minimal time values of 25 exponentially distributed random events is expected to lie within an interval of about 30-1800 with a CL of 90% (the same CL as in [7]). In addition, for the analysis of the decay times t of nuclei and for evaluating whether the sets of experimental data are compatible with the assumption that these data originate from the decays of a single radioactive species, the statistical test procedure suggested by Schmidt et al [7] was applied. This method is based on the calculation of the standard deviations σ(ln(t))_exp of the logarithmic decay-time distributions for given numbers of observed events. Larger fluctuations in the ln(t)_exp values could indicate that a continuous background contributes to the data or that several activities with different half-lives are observed. The σ(ln(t))_exp values for the different nuclei were compared with the lower and upper 5% limits expected for a single exponential decay of the given number of events N. The σ(ln(t))_exp values calculated for all of the decay chains of two of the five isotopes (293Ts and 281Rg) somewhat exceed the upper limits. However, the largest contribution to the σ(ln(t))_exp values is due to the shortest decay times for 293Ts (one event with t<0.1 ms) and 281Rg (one event with t<0.1 s), see figure 2. For an exponential distribution, the probability density is highest for the registration of short decay times, and the expected numbers of such events, taking into account the numbers of observed decay chains and the half-lives of the nuclei calculated at the 68% CL, are consistent with the observations. Note that the same conclusion follows from table 3 of [1], where the data for the short decay chains observed only in the 243Am+48Ca reaction at the DGFRS, TASCA, and BGS are analyzed by the method of [7].
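A minimal sketch of this Schmidt-type test, under the assumption of a single exponential activity: the standard deviation of ln(t) is computed for a sample and compared with Monte Carlo 5%/95% limits for the given number of events. The limit values below are simulated rather than taken from [7], and the helper names are ours.

```python
import math
import random

def sigma_ln_t(times):
    """Standard deviation of the logarithms of the decay times."""
    logs = [math.log(t) for t in times]
    mean = sum(logs) / len(logs)
    return math.sqrt(sum((y - mean) ** 2 for y in logs) / len(logs))

def schmidt_limits(n, trials=20_000, seed=7):
    """Monte Carlo 5% and 95% limits of sigma(ln t) for n decays of a
    single exponential activity (tau cancels in sigma, so rate 1 is used)."""
    rng = random.Random(seed)
    sigmas = sorted(
        sigma_ln_t([rng.expovariate(1.0) for _ in range(n)])
        for _ in range(trials)
    )
    return sigmas[int(0.05 * trials)], sigmas[int(0.95 * trials)]

# Example: 25 decay times drawn from one exponential activity should
# usually fall inside the single-activity limits.
rng = random.Random(3)
sample = [rng.expovariate(1.0 / 0.5) for _ in range(25)]
lo, hi = schmidt_limits(25)
print(f"sigma(ln t) = {sigma_ln_t(sample):.2f}, "
      f"90% single-activity interval [{lo:.2f}, {hi:.2f}]")
```

For a pure exponential the theoretical standard deviation of ln(t) is π/√6 ≈ 1.28, independent of τ; samples mixing several half-lives tend to exceed the upper limit.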
Therefore, one again comes to the conclusion that, for the most part, the decays attributed to 293Ts (viz., 14), 289Mc (25), 285Nh (25), 281Rg (23), and 277Mt (2) are consistent with the assumption that they originate from the decay of a single nuclide at each step of the decay chains.

Statistical analysis
In this section we present the results of a statistical analysis of the experimental data by conventional methods of mathematical statistics.
The sequences of experimental decay times of nuclei, from the parent to the last registered one, t_i→t_(i+1)→…→t_(i+n), are given. Thus, one can separate the decay times of each decay step in all of the experiments, group them into sets of random numbers t_ij: A_115,j, A_113,j, and A_111,j, where the index j denotes the type of the experiment (performed, e.g., at the DGFRS, TASCA, or BGS), and analyze these values separately from the other steps. The task of the mathematical analysis of the decay times consists in assessing the compatibility (congruence) of the sets of the observed decay times.
The first characteristic of interest is the distribution function of the decay times:

F(t, τ) = 1 − exp(−t/τ). (1)

In statistics, τ represents the mathematical expectation of the random quantity subject to the distribution (1), with variance τ². The density p(t, τ) of (1) is the function

p(t, τ) = (1/τ) exp(−t/τ). (2)

The expectation e and the variance v of (1) have different meanings (e=τ and v=τ²), and both are independent of the time t. Though the individual events are characterized by an exponential distribution, their sums are not exponentially distributed. The number N(0, t) (a random quantity) of the above rare events in the time interval [0, t] has the Poisson distribution

P{N(0, t) = m} = (a^m/m!) exp(−a), (3)

where a=t/τ. Here m is an arbitrary integer number, τ is a parameter, and a is the mathematical expectation of (3) and at the same time its variance, both depending on the time t. It is known that the best estimate of the parameter a on the basis of the experimental value N(t_1, t_2) is N(t_1, t_2) itself (the maximum likelihood estimate, MLE), or N(t_1, t_2)+1, the Bayes estimate. Thus, considering [t_i1, t_i2] as a time bin (or channel), we obtain a regression N(t_i), where t_i stands for the bin [t_i1, t_i2]. With all this information from the data, we could start the analysis of the regression by the most popular method of regression analysis, the least squares method. However, for a successful analysis a regression should be formed of thousands of events, while at our disposal there are only several events, or a couple of dozen in the best case. Therefore, we should use methods suitable for the analysis of a small number of events; the most popular among them is maximum likelihood estimation, according to which the value of τ in (1) is estimated by the arithmetic mean of the observed decay times, τ̂ = (1/n)Σt_i. This is an efficient estimate of this parameter (with variance decreasing as 1/n), and its square is a good estimate of the variance τ².
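The maximum likelihood estimate described above can be sketched as follows; the lifetime value and the sample are illustrative, not experimental data.

```python
import math
import random

def mle_tau(times):
    """Maximum-likelihood estimate of the mean lifetime tau for
    exponentially distributed decay times: the arithmetic mean."""
    n = len(times)
    tau_hat = sum(times) / n
    # The standard error of this efficient estimate is tau_hat / sqrt(n).
    sigma_tau = tau_hat / math.sqrt(n)
    return tau_hat, sigma_tau

rng = random.Random(11)
tau_true = 0.38  # seconds, illustrative value only
times = [rng.expovariate(1.0 / tau_true) for _ in range(25)]
tau_hat, sigma = mle_tau(times)
half_life = tau_hat * math.log(2)  # T_1/2 = tau * ln 2
print(f"tau = {tau_hat:.3f} +- {sigma:.3f} s, T1/2 = {half_life:.3f} s")
```

With only a few events the relative uncertainty 1/√n is large, which is why the hypothesis tests discussed below, rather than point estimates alone, are needed.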
The most general problems we discuss here are: 1. Estimation of the parameters of distribution functions; 2. Testing hypotheses.
To solve them, there exists a powerful apparatus of mathematical tools, elaborated and investigated for many centuries.
We have already mentioned the MLE as a solution to the problem of parameter estimation. Unfortunately, the MLE method does not answer the question of to what extent the decay model with the parameter τ is adequate for the available data. Thus, to select the best model for our data we have to form different hypotheses and test them, trying to find the most probable one.
The main idea of testing hypotheses in mathematics can be expressed in the following way. For the distribution of the tested data, an important characteristic is chosen, normally some function of the parameters of the tested data; let us call it S. Since S is a function of random quantities, it is random itself. It has its own distribution function and its main moments, the expectation and the variance (or σ, the square root of the variance). The behavior of each random quantity is completely unpredictable: it can take any value, completely chaotically, making jumps from one value to another, though with different probabilities. But the fact that these probabilities are different can be noticed only for an ensemble of such events, not for a single event.
Thus, for determining S, we should take such a function of the data that it reacts to the behavior of separate events with a noticeable inertia and fluctuates preferably within a certain interval. In that case, the estimates and decisions made on the basis of this function are more reliable than those based on a single event. A typical case of such a parameter S is the current mean of the data.
In the literature, one can sometimes find the incorrect statement that any S has the same distribution as the original data. However, e.g., the decay times have the exponential distribution, but if we take their mean as S, it will have the gamma distribution, asymptotically tending to the normal one.
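This can be checked with a short simulation: the mean of n exponential decay times is distributed as Gamma(n, τ/n), with mean τ and variance τ²/n (an illustrative stdlib-only sketch, with all values chosen for the example).

```python
import random

rng = random.Random(5)
tau, n, trials = 1.0, 10, 50_000

# Sample means of n exponential decay times, repeated many times.
means = [sum(rng.expovariate(1.0 / tau) for _ in range(n)) / n
         for _ in range(trials)]

m = sum(means) / trials
v = sum((x - m) ** 2 for x in means) / trials

# For Gamma(n, tau/n): mean = tau, variance = tau**2 / n.
print(f"mean {m:.3f} (expect {tau}), variance {v:.4f} (expect {tau**2 / n:.4f})")
```

The variance of the mean shrinks as 1/n, which is what makes S a usable, inertial test statistic.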
Thus, our first step is establishing the distribution function of S and building the interval of concentration of events with a preset probability (a confidence interval, CI). Using the expectation and the σ of S and choosing the CL α (probability), we build such a CI I_c=[t_1, t_2] into which our S falls with probability α, and then we make a decision whether to accept or reject the tested hypothesis. Let us consider two important cases.
• Case I: one-sample test for S. Let A={t_i}, i=1, 2, …, n, be a set of registered decay times, subject to the exponential distribution with a parameter τ. We take the current average of A as our S, and then we have the mean and σ estimates t̄=(1/n)Σt_i and σ(t̄)≈t̄/√n. Assuming that n is large enough so that the distribution of t̄ is approximately normal, for a selected α we build an α-confident CI. If the tested value of S falls into our CI, one can say that the tested hypothesis about this value does not contradict our data A at a CL approximately equal to α, so that we are inclined to accept this hypothesis rather than reject it [13].
• Case II: two-sample test for S. This is the most important case. Let A_1 and A_2 be two sets of decays of an isotope with an unknown parameter τ; we must decide whether A_1 and A_2 are congruent. For testing both distributions for congruence with respect to their mean values we can use the normal or Student's tests, and with respect to their variances, the χ² test. In Student's case we build a confidence interval I = [t̄ − t_(α,n−1) s/√n, t̄ + t_(α,n−1) s/√n], where t_(α,n−1) is the value of the Student's standard random quantity corresponding to the probability α and the number of degrees of freedom n−1, and s² = (1/(n−1))Σ(t_i − t̄)² is the sample variance. Then we can test the hypotheses: 1. H_1: the data sets A_1 and A_2 have the same distribution (meaning they are congruent with respect to their means); 2. H_2: the data sets A_1 and A_2 are subject to different distributions (are not congruent).
It is reasonable to accept H_1 if the intervals I_1 and I_2 overlap and the probability covering the interval of overlap is sufficiently large. One can calculate the so-called type I and type II errors (rejecting a true hypothesis and accepting a wrong one) and, analyzing them, make the corresponding decision. If the overlap is only partial, we also need to test the variances σ² of the distributions. In both tests the sample parameters are used as reference values.
Using the χ² test we build a confidence interval J = [(n−1)s²/χ²_((1+α)/2, n−1), (n−1)s²/χ²_((1−α)/2, n−1)], where χ²_((1±α)/2, n−1) are the values of the χ² standard random quantity corresponding to the probability α and the number of degrees of freedom n−1.
Similarly, if the CIs overlap, then both distributions are also congruent with respect to their variances. These two tests are classical tools for testing (normal or approximately normal) distributions for equivalence. Let us apply the described tests to the analysis of the data obtained in the DGFRS, TASCA, and BGS experiments.
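The CI-overlap criterion for the mean values can be sketched as follows. The decay-time sets are hypothetical stand-ins for two experiments, and a normal quantile is used in place of the Student's t quantile for simplicity (for small n the Student's interval would be slightly wider).

```python
import math
from statistics import NormalDist

def mean_ci(times, alpha=0.90):
    """Normal-approximation confidence interval for the mean lifetime."""
    n = len(times)
    mean = sum(times) / n
    s = math.sqrt(sum((t - mean) ** 2 for t in times) / (n - 1))
    z = NormalDist().inv_cdf(0.5 + alpha / 2)  # two-sided quantile
    half = z * s / math.sqrt(n)
    return mean - half, mean + half

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical decay-time samples (seconds) standing in for two experiments.
set_a = [0.31, 0.12, 0.58, 0.27, 0.44, 0.09, 0.71, 0.23, 0.35, 0.18]
set_b = [0.25, 0.47, 0.15, 0.62, 0.33, 0.21, 0.52, 0.29]

ci_a, ci_b = mean_ci(set_a), mean_ci(set_b)
verdict = "congruent" if intervals_overlap(ci_a, ci_b) else "not congruent"
print(f"A: [{ci_a[0]:.2f}, {ci_a[1]:.2f}]  "
      f"B: [{ci_b[0]:.2f}, {ci_b[1]:.2f}] -> {verdict}")
```

In an analysis of real data, the type I/II error probabilities of this decision rule would be quantified as described above before accepting H_1 or H_2.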
The results of the calculations are given in table 1 and figure 3. The first column of table 1 shows the atomic number of the element and the origin of the data, indicated by the target used in the experiments (given in parentheses). The next three columns show the corresponding mean values, variances, and standard deviations (square root of the variance) calculated according to the formulas given above. The confidence intervals for the mean values of the lifetimes, calculated for the normal (N) and Student's (t) distributions at ≈63% and 90% CLs, respectively, are given in the next two columns. The last column shows the χ²-confidence interval for the variance (90% CL). Note that the N-confidence intervals were calculated for one standard deviation. Because of the low statistics, and thus the incomplete compliance of the distribution of the mean values with the normal one, these correspond to 62.8%-64.1% CLs, as estimated from Monte Carlo simulations for the 10-14 decay times observed in the reactions with 249Bk and 243Am shown in figure 2. If two standard deviations are applied in these calculations, the CLs increase to 87.4%-89.3%, again lower than the 95% level expected for a truly normal distribution.
From table 1 it follows that all of the CIs for the Z=115 and 113 decays observed in the experiments with 249Bk and 243Am overlap; hence the data are more congruent than not. For the Z=111 decays, the two data sets have, according to the normal test, different means at a CL of about 63%, but similar Student's means and variances. Applying the ≈88% CL for the N test, we obtain reasonable congruence between the two results (see figure 3). Therefore, one can conclude that there is no indication of non-congruence of the results obtained in the different experiments.

Discussion and conclusions
Strictly speaking, the empirical test used by Forsberg et al is a surrogate for a criterion for testing hypotheses, lacking the standard statistical considerations accepted in mathematical statistics. The papers [1,5,11] claim to analyze a set of decay chains; still, the procedure deals with separate events, for which certain testing values have been elaborated by simulations. However, mathematics requires dealing not with separate events but with representative functions of events (statistics); this is a categorical dogma. In the approach of [1,5,11], neither a preset model nor its parameters are known.
In addition, an apparently correct probability density function f(t) for the distribution of the logarithms of times was applied for the data analysis. According to it, the decay times close to the lifetimes of the nuclei should be the most probable. However, subsequent unjustified treatments of the data were performed, namely, the calculation of the geometrical mean value of f(t) for each step of the decay chain and then the calculation of the arithmetic mean of these values for the set of chains. As a result, the most probable decay times according to f(t) became less probable than the other times. We consider this the major shortcoming of the approach. The analysis of the data by methods of widely accepted conventional mathematical statistics demonstrates reasonable congruence of the decay times of the descendant nuclei of 293Ts from the 249Bk+48Ca reaction and the nuclei from the 243Am+48Ca experiment performed at the DGFRS [2,8-10], as well as of the nuclei observed in the 243Am+48Ca reaction studied at TASCA [1] and BGS [3]. As seen in figures 2 and 3 and table 1, all of the data for the first two steps of the decay of the Z=115 nuclei do not reveal any difference from each other. For the terminal isotope, one of the tests, although with a low CL (≈63%), hints that the data from the studied reactions may be inconsistent. However, two more tests, as well as the same N test but at the ≈88% CL, do not indicate non-congruence of the results obtained in the different experiments. Note that if the accuracy of the results obtained by the conventional methods is not high, this is not a fault of the methods but is caused by the low statistics.
Thus, the absence of evident proof of the incompatibility of the decay times and, conversely, the similarity of the decay patterns of the nuclei (namely, the comparable α-particle energies and lifetimes of the Z=115 and 113 isotopes and the lifetimes of the spontaneously fissioning Rg) observed in the different experiments with 249Bk and 243Am support the attribution of all of the considered nuclei to 289Mc and its descendants. Note that such an assignment of these nuclei would be in full agreement with both the decay properties of the neighboring isotopes and their production cross sections. For example, the 2n-evaporation channel, which leads to the production of 289Mc in the 243Am+48Ca reaction, was observed in the 245Cm(48Ca, 2n)291Lv reaction with a cross section of about 1 pb, similar to the 3n-reaction cross section at about 33 MeV excitation energy (E*) of the compound nucleus (see figure 4 in [4]). Thus, the observation of the 2n and 3n channels with close cross sections at E*=34 MeV in the 243Am+48Ca reaction does not look improbable.
In two of the 16 decay chains of 293Ts the isotope 281Rg underwent α decay instead of SF, from which an α-decay branch of 12(+9/−7)% (68% CL) was estimated [4]. No such decays were observed in the 243Am+48Ca reaction, although approximately 0.7-2.9 could be expected. This deficit does not seem large enough to make a contradiction of the results clear. Also, one cannot exclude possible missed α decays of 281Rg in any of the three experiments performed at the DGFRS, TASCA, and BGS.
Therefore, the analysis of the data shows that there are no valid reasons that could confidently point to an inconsistency of the results obtained in the 249Bk+48Ca and 243Am+48Ca reactions. However, an even more detailed measurement of the excitation function of the 243Am+48Ca reaction and, in particular, a direct measurement of the mass number of the parent nucleus leading to the short decay chains could point more directly to the origin of these chains.