An alternative methodology for high counting-loss corrections in neutron time-of-flight measurements

In counting experiments associated with pulsed sources, a high data collection rate can lead to considerable counting losses. This is especially true at spallation time-of-flight facilities equipped with medium and short flight paths, where the research interest is focused on higher neutron energies: there, counting losses can be quite large due to the higher neutron flux, the time frame that is more compressed than at lower energies and, depending on the reaction, the higher cross-section. Examples of such measurements are the neutron-induced fission experiments at the new experimental area EAR-2 of the n_TOF facility at CERN. Although analytical expressions to account for this inefficiency exist in the literature, the corrections they introduce are not always sufficient to retrieve the true reaction rate, and a different approach is therefore mandatory. This work explores the possibility of quantifying the counting losses using detector emulation devices and exponential fits to waiting time distributions. The methodology is benchmarked in the test case of the standard 238 U(n,f) cross-section measured with reference to 235 U(n,f), for bandwidths up to 1.9 MHz and counting losses that exceed 60%.


Introduction and overview
Nuclear physics experiments require the counting of certain observables such as the reaction rate. At high reaction rates, regardless of the use of a continuous or a pulsed particle source, the probability to detect overlapped events increases. This overlapping, known in the literature as pile-up, eventually leads to counting losses and depends on the instantaneous pulse rate, the attributes of the formed signals (i.e. their width) as well as on the dead-time of the acquisition system; a special treatment is therefore required to account for this effect in the analysis of the measured observable.

Continuous sources
In the simplest case of continuous sources, pile-up and dead-time correction models have been studied and developed over several decades.
These models treat a detector as an extended system, a non-extended system or a mixture of both, as described in the review papers by Müller [1,2], and provide analytical expressions to calculate the correction factor f_DT or, equivalently, the ratio between the corrected (ρ_c) and the observed (ρ_o) interaction rate for a given fixed dead-time τ. The two most common models are the extended and the non-extended one, often referred to as paralyzable and non-paralyzable, respectively [3]. In the former model, an event that arrives during the detector's dead-time extends the dead-time period, while in the latter the same event is simply lost, since the dead-time period remains intact regardless of the arrival of new events. In both models a prior knowledge of the intrinsic dead-time is required; however, a methodology has recently been developed for in-beam γ-spectroscopy measurements [4] in which an artificial dead-time is imposed and the appropriate corrections are introduced based on well-defined counting rates.
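The two analytical models can be sketched in a few lines of Python (a minimal sketch; the function names are ours, τ is the fixed dead-time). The non-paralyzable correction is closed-form, while the paralyzable relation ρ_o = ρ_c·e^(−ρ_c·τ) has to be inverted numerically:

```python
import math

def nonparalyzable_true_rate(rho_obs, tau):
    """Non-extended (non-paralyzable) model: rho_c = rho_o / (1 - rho_o * tau)."""
    return rho_obs / (1.0 - rho_obs * tau)

def paralyzable_true_rate(rho_obs, tau, iterations=200):
    """Extended (paralyzable) model: rho_o = rho_c * exp(-rho_c * tau).
    Inverted for rho_c by fixed-point iteration (valid below saturation)."""
    rho_c = rho_obs
    for _ in range(iterations):
        rho_c = rho_obs * math.exp(rho_c * tau)
    return rho_c

# Example: 200 ns dead-time, 1 MHz observed rate
tau, rho_o = 200e-9, 1.0e6
print(nonparalyzable_true_rate(rho_o, tau))   # 1.25e6 Hz
print(paralyzable_true_rate(rho_o, tau))
```

Note that at these rates the two models already differ appreciably, which is why the choice of model matters well before saturation.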

Pulsed sources
In the case of pulsed particle sources a different approach is necessary since the detector is only periodically triggered as opposed to the case of a continuous particle source. A subset of research infrastructures that make use of a pulsed source are time-of-flight facilities where the recorded observable is sorted in timing channels. The number of counts recorded in a given channel depends on the number of counts in the neighbouring channels due to the dead-time [5] which is an additional constraint in modelling the pile-up effect.
Three basic models are currently used in timing applications to predict the counting losses occurring in the i-th channel due to pile-up, for a given number of bursts N_b and a dead-time expressed in channels by the difference i − i_0. In the approximation of small counting losses, Bollinger et al. [6] calculated the corrected number of counts in channel i, N_c(i), as shown in Eq. (2). Alternatively, in the approximation of small intensity variations of the incident particle beam, Coates [7] gives the corrected number of events as shown in Eq. (3). Moore [8] describes a more general case in which the beam intensity of the incident particles may vary appreciably with a relative variance s², as can be seen in Eq. (4). It has to be noted that for small variations in the beam intensity the denominator in Eq. (4) tends to 1 and the correction reduces to the one seen in Eq. (3).
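Eqs. (2)-(4) are not reproduced here, but the Coates-type correction is commonly written as N_c(i) = N_o(i) / (1 − (1/N_b)·Σ_{j=i−k}^{i−1} N_o(j)), with k the dead-time expressed in channels. The sketch below implements this form under that assumption (names are ours):

```python
def coates_correction(counts, n_bursts, k):
    """Coates-type dead-time correction for channel-sorted TOF data.

    counts[i]: observed counts in timing channel i
    n_bursts:  number of bursts recorded
    k:         dead-time expressed in channels
    Assumes small beam-intensity variations (the regime of Eq. (3)).
    """
    corrected = []
    for i, n_obs in enumerate(counts):
        # counts recorded in the k channels preceding channel i can block it
        blocked = sum(counts[max(0, i - k):i])
        corrected.append(n_obs / (1.0 - blocked / n_bursts))
    return corrected

# toy example: 1000 bursts, dead-time of 2 channels
print(coates_correction([100, 120, 110, 90], n_bursts=1000, k=2))
```

The correction of each channel depends on the counts in its neighbouring channels, which is exactly the coupling mentioned above.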
Finally, the innovative and detailed works of Guerrero et al. [9] and Mendoza et al. [10] explore the possibility of pile-up corrections in the case of rapidly varying counting rates using software techniques. It has to be noted that these methodologies have been successfully used and experimentally validated in the case of detector arrays for either constant or varying counting rates in (n,γ) reactions in the resolved resonance region.

Waiting time distributions
A general approach, applicable to both constant and pulsed sources, takes advantage of the fact that counting experiments are described by the Poisson distribution. Therefore, as calculated analytically in the Appendix, the time difference t between consecutive events, at a constant counting rate λ, follows an exponential decay distribution [13], as shown in Eq. (5), where dP∕dt describes the probability density to detect an event after the detection of another. At small time differences the recorded counts fall below the expected ones, which follow this exponential decay, due to the counting losses that occurred during the experiment. It has to be noted that this expression is valid only at a constant reaction rate, or in the approximation of reaction rates that do not vary appreciably; it is therefore not recommended for application in the resolved resonance region.
This characteristic of counting experiments can be used to quantify counting losses, as illustrated in Fig. 1. The experimental points are expected to follow the theoretical probability distribution of Eq. (5) down to a certain time difference. Below this difference they diverge due to counting losses (grey shaded area), which are attributed either to the finite resolution of the detector signals, the intrinsic dead-time of the detection system or inefficient pulse reconstruction, if applied. To quantify these losses and therefore calculate the correction factor f_DT, the experimental data have to be fitted using Eq. (5). The ratio between the integral of the fitted curve (extrapolated until it intercepts the y-axis) and the integral of the data points provides the correction factor f_DT, as expressed in Eq. (6).
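The procedure can be sketched numerically (a minimal sketch, with an assumed 200 ns resolving time standing in for the real detection chain): simulate a Poisson process, discard events that fall within the resolving time of the previously accepted one, fit the exponential of Eq. (5) to the surviving waiting times, and take the ratio of integrals of Eq. (6):

```python
import numpy as np

rng = np.random.default_rng(42)
rate, tau = 1.0e6, 200e-9       # true rate 1 MHz; assumed resolving time 200 ns
n = 500_000

# Poisson process: exponential inter-arrival times
arrivals = np.cumsum(rng.exponential(1.0 / rate, n))

# crude pile-up model: drop any event closer than tau to the last accepted one
accepted = [arrivals[0]]
for t in arrivals[1:]:
    if t - accepted[-1] >= tau:
        accepted.append(t)

dt = np.diff(accepted)
hist, edges = np.histogram(dt, bins=200, range=(0.0, 10e-6))
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

# fit ln(counts) = ln(amp) - lam*t in the clean region, well above the resolving time
mask = (centers > 2 * tau) & (centers < 6e-6) & (hist > 0)
slope, intercept = np.polyfit(centers[mask], np.log(hist[mask]), 1)
lam, amp = -slope, np.exp(intercept)

# Eq. (6): integral of the fitted exponential (extrapolated to t = 0)
# divided by the integral of the recorded distribution
f_dt = (amp / lam) / (hist.sum() * width)
print(lam, f_dt)                # lam should recover the true 1 MHz rate
```

The fitted slope recovers the true counting rate, while f_DT measures the fraction of events lost below the resolving time.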
A similar approach has been recently implemented in the backward extrapolation method [14] where reactor data are corrected using artificial signals and fourth degree polynomial fits.

Detector emulation
As discussed previously, the available counting-loss correction models are able to provide reliable corrections only within certain counting-rate ranges. Additionally, the fitting of waiting time distributions can be used in general, provided that the reaction rate is constant and the distributions themselves have enough statistics to perform the fits without introducing additional uncertainties.
The proposed alternative methodology deals with the prediction of the true counting rate (R_true) in an experiment, given a recorded experimental counting rate (R_exp), using detector emulation devices; it essentially provides a hardware estimation of the counting losses that occurred. The basic idea is to generate emulated signals that feature the same characteristics as the detector signals of the real measurement, at given frequencies that represent R_true, feed them into the acquisition system of the experimental campaign and analyse them with the same data analysis framework used in the real experiment to calculate R_exp. The ratio between R_true and R_exp represents the dead-time correction factor f_DT for each input frequency, as expressed in Eq. (7). In addition to individual correction factors, this methodology provides a function R_true = f(R_exp), derived from the emulated data, that can be used to estimate R_true at given counting rates R_exp recorded in an experiment.
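The logic of the emulation scan can be sketched in software (a stand-in for the hardware emulator, under the assumption of a simple 200 ns resolving-time filter): sweep a set of known input rates R_true, record the surviving rate R_exp for each, and parameterise R_true = f(R_exp):

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 200e-9                    # assumed resolving time of the detection chain
t_acq = 0.1                     # emulated acquisition time per frequency [s]

def observed_rate(r_true):
    """Software stand-in for the hardware emulator: Poisson arrivals at r_true,
    counting only those that survive a simple resolving-time filter."""
    arrivals = np.cumsum(rng.exponential(1.0 / r_true, int(r_true * t_acq)))
    kept, last = 0, -1.0
    for t in arrivals:
        if t - last >= tau:
            kept, last = kept + 1, t
    return kept / arrivals[-1]

r_true = np.linspace(20e3, 1.9e6, 15)   # scanned bandwidth, as in the text
r_exp = np.array([observed_rate(r) for r in r_true])
f_dt = r_true / r_exp                   # Eq. (7), one factor per input frequency

# parabolic parameterisation R_true = f(R_exp), as done in the text
coeffs = np.polyfit(r_exp, r_true, 2)
print(f_dt[0], f_dt[-1])
```

With this toy filter the losses grow with rate, so f_DT rises from ~1 at 20 kHz to well above 1 at 1.9 MHz, qualitatively reproducing the behaviour reported below.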

Experimental set-up
In order to extract the correction function R_true = f(R_exp), a dual-channel CAEN DT5800 desktop digital detector emulator [15] was used. This module can generate user-defined signals at user-specified frequencies and distributions, such as the Poisson distribution, and can emulate the intrinsic dead-time of a detector, which in the framework of the present work was negligible. The methodology was benchmarked with fission cross-section measurements at the n_TOF facility [16][17][18] at CERN, and the input signal shapes were derived from the Micromegas detector [19][20][21][22][23][24] used in the fission experiments.
The goal of this study was to quantify the counting losses due to the finite resolution of the detector signals and possible inefficient offline signal reconstruction. In order to avoid introducing too many parameters into this study, which could lead to incorrect corrections, a constant amplitude of 1 V was initially chosen as the output of the emulator instead of a more realistic amplitude distribution. This choice also helps to pinpoint cases of unrecoverable pile-up, as explained later in the text.
The signals were generated sequentially following a Poisson distribution, so as to emulate a counting experiment, at various frequencies ranging from 20 kHz up to 1.9 MHz. The output was then sent to the n_TOF Data Acquisition System [25] and recorded by 12-bit flash ADCs at a 900 MHz sampling rate. The acquisition was enabled at a 500 ms cycle with a 10 ms acquisition window. The total cycles (or bunches) recorded per frequency were of the order of ∼10 3 , which gives a total acquisition time of the order of ∼10 s and consequently a total of ∼10 6 counts, making the statistical uncertainty of the recorded counts negligible.
It has to be mentioned that during the tests for the validation of the methodology it was observed that the higher the counting rate, the more important it becomes for the emulated signals to match the characteristics of the experimental ones as closely as possible, since all the details of the signal shape play a more significant role in the performance of the pulse shape reconstruction.

Data analysis
The signals recorded by the data acquisition system were processed offline using the pulse shape analysis routine of the experimental data [26], based on the calculation of the first derivative of each recorded waveform. Among other quantities, the amplitude and arrival time of each signal were stored for further processing. Fig. 2 shows a typical signal reconstruction achieved at 1 MHz using negative signals with a constant amplitude of 1 V, which corresponds to around 100 ADC channels (dashed grey line).

Fig. 3. Typical amplitude spectrum reconstructed at 400 kHz input frequency. Apart from the main peak around 100 ADC channels, which corresponds to the constant input amplitude, double, triple and quadruple piled-up events are evident around 200, 300 and 400 ADC channels. The dashed spectrum illustrates the shape of the normal and the double pile-up peak and is reconstructed from a low input frequency run of 40 kHz. The spectra are normalised with respect to the acquisition time.

Signal reconstruction
When introducing a pulse shape analysis routine to analyse data, a one-to-one signal reconstruction is not guaranteed, especially in cases where data are recorded at high counting rates. An example of unrecovered pile-up is shown in Fig. 2 and is marked with a magenta triangle. In this case, the high counting rate resulted in the formation of a triple pile-up event right after the arrival of the unrecovered signal, which led the reconstruction routine to discard it. It has to be noted that, since the reconstruction routine uses a set of user-defined parameters, this unrecovered signal could possibly be recognised as a true event with different parameters. The method proposed in this work also takes such cases into account and can provide corrections for an inefficient use of the reconstruction routine, since this leads to counting losses as well.
Prominent examples of unrecoverable pile-up are the signals marked with grey triangles (Fig. 2). Although the routine is able to successfully reconstruct the waveform, it cannot distinguish signals that arrive very close to each other; this results in the formation of a bigger signal, similar in shape to the input one, and therefore in the misinterpretation of two or more signals as a single one. The initial choice of a constant amplitude for the input signals pinpoints these cases, since the reconstructed amplitude is higher than 100 channels by a factor of 2, 3, etc., which would be impossible to know in the case of an amplitude distribution.
The last two simple cases seen in Fig. 2 are examples where the reconstruction works well, either for normal, isolated signals (green rhombi) or for piled-up events that can be successfully recovered (blue squares). Fig. 3 shows a typical amplitude spectrum reconstructed for an input frequency of 400 kHz, where apart from the main peak at ∼100 ADC channels, double, triple and quadruple unrecoverable piled-up events are evident in the peaks around 200, 300 and 400 ADC channels, respectively.
A simple yet efficient way to validate that the signal reconstruction is properly done is to calculate the true counting rate R_true directly from the amplitude spectra. Since the input amplitude was chosen to be ∼100 channels, double, triple and quadruple piled-up events can be easily spotted in the peaks around 200, 300 and 400 ADC channels. The double (triple or quadruple) pile-up peak is formed by two (three or four) events that arrived almost at the same time and formed a signal twice (thrice or four times) the input amplitude. The integral of each peak gives the total number of entries in each category; to calculate the real number of events in each peak, the integral has to be multiplied by 2, 3 or 4, depending on which peak it was calculated for. The sum of all the peak integrals, weighted in this way and divided by the acquisition live time, provides the experimental counting rates R_exp, which were in good agreement, within 3%, with the input R_true. This small discrepancy is attributed to the fact that the pile-up peaks are not isolated but overlapping, rendering the integration range relatively uncertain.
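This bookkeeping (each k-fold pile-up peak contributes k true events) can be expressed as a short helper (a sketch; names and numbers are ours):

```python
def rate_from_peaks(peak_integrals, live_time):
    """Recover the event rate from an amplitude spectrum with pile-up peaks.

    peak_integrals[k-1]: integral of the k-fold pile-up peak (k = 1 is the
    main peak); each k-fold peak contains k true events.
    live_time: acquisition live time in seconds.
    """
    total_events = sum(k * integral
                       for k, integral in enumerate(peak_integrals, start=1))
    return total_events / live_time

# hypothetical peak integrals for the 1x, 2x, 3x and 4x amplitude peaks
print(rate_from_peaks([9.0e5, 4.0e4, 1.5e3, 50.0], live_time=2.5))
```

The residual disagreement with the input rate then reflects only the uncertainty of the integration ranges of the overlapping peaks.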
Finally, in order to obtain a more realistic emulation of the actual fission-fragment amplitude distribution, the pulse-height spectrum from a fission experiment at the n_TOF facility was considered (shown in Fig. 4).

Pile-up correction
To extract the pile-up correction factor from the data retrieved from the detector emulator, exponential fits were applied to the waiting time distributions over the bandwidth from 20 kHz up to 1.9 MHz. A typical fit can be seen in Fig. 5 for 1 MHz. As described in Section 1.3, below a certain time difference (in this case around 1 μs), which dictates the fitting range, the experimental points diverge from the exponential distribution. Below ∼500 ns a sudden drop of the counting rate occurs, corresponding to the time difference between two consecutive events below which they are no longer distinguished by the detection and data analysis system. This time is directly related to the width of the detector signal, is considered to be close to its FWHM and is known in the literature as the resolving time. According to Eq. (6), the correction factor f_DT is calculated as the ratio between the integral of the fitted curve (extrapolated until it intercepts the y-axis) and the integral of the experimental points. It has to be mentioned that, as seen from the 95% confidence band in Fig. 5, the experimental points exhibit such a clear exponential distribution that small changes in the fitting range do not appreciably affect the parameters of the fitted curve (below 0.1%); in this case the correction factor f_DT is therefore calculated with an estimated uncertainty of the order of 0.1%.
Additional calculations were also performed based on Eq. (7) for cross-checking. R_true was obtained from the detector emulator and is therefore known, while R_exp was calculated by integrating the amplitude spectrum for each frequency and dividing by the total acquisition time. The f_DT from this calculation was in agreement, within less than 0.1%, with the one calculated from the waiting time distributions. This very good agreement can also be seen and justified in Fig. 5. Since the total counts recorded were of the order of 10 6 , a 0.1% difference corresponds to a discrepancy of the order of 10 3 counts; as seen in Fig. 5, the area of the confidence band, which is proportional to the fit uncertainty, becomes larger around 7 μs, where the count values are indeed of the order of ∼10 3 .

Fig. 5. Apart from the experimental points and the fitting function, the confidence band at 95% confidence level is shown.

Fig. 6. Relation between the experimental (R_exp) and the true counting rate (R_true). The experimental data have been parameterised using a parabolic function. The inset shows the correction factor f_DT as a function of R_exp and the parabolic parametrisation used to describe their relation.
Finally, the functions R_true = f(R_exp) and f_DT = f(R_exp) were calculated by fitting the experimental data points with a parabolic function, as seen in Fig. 6. For the R_true = f(R_exp) relation, the second-order parameter was found to be 6.3(1)×10 −4 kHz −1 , the first-order one 0.96(1), while the value 1(3) kHz was obtained for the constant one. For the f_DT = f(R_exp) case, the second-order parameter was found to be 9.7(3)×10 −8 kHz −2 , the first-order one 4.95(3)×10 −4 kHz −1 , while the value 9.95(1)×10 −1 was obtained for the constant one. The correction factor f_DT was extracted using Eq. (7) with a total estimated uncertainty of around 2%, as calculated by uncertainty propagation; the uncertainties of R_exp and R_true were estimated by varying the parameters of the fitted curve within their uncertainties. It has to be mentioned that both the constant amplitude and the fission-fragment amplitude distribution provided consistent fitting parameters within their corresponding uncertainties.
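As a consistency check on the quoted parameters (rates in kHz), the two parabolas can be evaluated side by side; since f_DT = R_true/R_exp by Eq. (7), they should satisfy R_true ≈ f_DT·R_exp, and indeed they agree to within a few per cent over the central part of the measured range (the agreement degrades towards the edges of the fitted bandwidth):

```python
def r_true_from_r_exp(r_exp_khz):
    """R_true = f(R_exp) with the parabolic parameters quoted in the text
    (all rates in kHz)."""
    return 6.3e-4 * r_exp_khz**2 + 0.96 * r_exp_khz + 1.0

def f_dt_from_r_exp(r_exp_khz):
    """f_DT = f(R_exp) with the quoted parameters (rates in kHz)."""
    return 9.7e-8 * r_exp_khz**2 + 4.95e-4 * r_exp_khz + 0.995

# the two independent parameterisations should roughly satisfy
# R_true ~ f_DT * R_exp
for r in (100.0, 500.0, 1000.0):
    print(r, r_true_from_r_exp(r), f_dt_from_r_exp(r) * r)
```

At 1000 kHz, for example, both routes give R_true close to 1.59 MHz, i.e. a ∼60% counting loss at the highest recorded rates.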

Benchmarking
To benchmark the developed counting-loss correction methodology, time-of-flight data from the 238 U(n,f) experiment were used. This measurement, which was performed using two reference samples ( 238 U and 235 U), took place at the newly commissioned experimental area EAR-2 [27] of the n_TOF facility at CERN. Particular features of this experimental area, which lies ∼20 m above a spallation lead target, are the high instantaneous flux and the wide neutron beam energy range, spanning from the meV region up to tens of MeV [28]. In addition, in the region around 1.5 MeV, where typical fission cross-sections have values of the order of a few barns, the neutron flux exhibits its maximum value, as well as variations all the way up to 4 MeV, as seen in Fig. 7.
The time frame of interest in this region is compressed due to the fact that for a flight path in the order of ∼20 m the time-of-flight difference for incident neutron energies of 1 and 4 MeV is of the order of 700 ns, during which the neutron flux is reduced by a factor of ∼9. As a consequence, taking into account the short time of 700 ns and the signal's relatively large FWHM of 200 ns, as well as the high instantaneous flux and fission cross-section, the probability to have piled-up pulses in this energy range is very high. These reasons justify the high experimental counting rates that were recorded for the 235 U and 238 U reference samples in this energy range as shown in Fig. 8.

Application in the case of the 238 U(n,f) cross-section
The high instantaneous counting rate (hereafter simply counting rate) in this energy region led to significant counting losses that could not be recovered using the available correction models for pulsed sources. More specifically, the models described in Eqs. (2) and (3) were used to correct the recorded yield for both 238 U and 235 U in order to extract the 238 U(n,f) cross-section σ(8) with reference to the 235 U(n,f) one σ(5) using Eq. (8), where:
• n is the areal density of the sample expressed in nuclei/cm 2 .
• f DT is the dead-time correction factor.
• f misc is the product of all other correction factors introduced in the analysis (amplitude cut, self-absorption, impurities etc.), which are not of relevance in the context of this work. Details on these factors and how they were calculated can be found in [29,30].
It has to be noted that the superscripts (8) and (5) refer to the 238 U and 235 U samples, respectively.
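Since Eq. (8) is not reproduced in the text, the sketch below assumes the standard ratio form of such a measurement, with C the recorded counts of each sample; all function names and numerical inputs are purely illustrative:

```python
def ratio_cross_section(sigma_ref, counts, counts_ref,
                        f_dt, f_dt_ref, f_misc, f_misc_ref, n, n_ref):
    """Sketch of a ratio cross-section measurement in the spirit of Eq. (8):

        sigma(8) = sigma(5) * (C * f_DT * f_misc / n)^(8)
                            / (C * f_DT * f_misc / n)^(5)

    C: recorded counts, n: areal density [nuclei/cm^2]; the '_ref' arguments
    refer to the reference (235 U) sample. The exact form of Eq. (8) is not
    given in the text, so this is only the assumed standard ratio form.
    """
    yield_sample = counts * f_dt * f_misc / n
    yield_ref = counts_ref * f_dt_ref * f_misc_ref / n_ref
    return sigma_ref * yield_sample / yield_ref

# illustrative numbers only
print(ratio_cross_section(sigma_ref=1.2, counts=5.0e4, counts_ref=2.0e5,
                          f_dt=1.05, f_dt_ref=1.30, f_misc=1.02,
                          f_misc_ref=1.01, n=4.0e17, n_ref=3.0e17))
```

The ratio form makes clear why an underestimated f_DT for the reference sample directly inflates the extracted cross-section, as discussed below.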
The correction factors f_DT were calculated based on the different methods discussed in the introduction as well as using the new methodology proposed in this work. Firstly, the correction factors f_DT obtained from the Bollinger and Coates models, Eqs. (2) and (3) respectively, for a fixed dead-time of 200 ns, which corresponds to the signal FWHM, were in agreement within 0.2%. Secondly, the f_DT calculated using the same fixed dead-time and Moore's correction model, Eq. (4), was also in good agreement, within 0.1%, with the aforementioned models, since the beam intensity variation was less than about 5%, and as described in [8] variations of the order of 15% are considered small. It has to be noted that when applying these methodologies, the signals that were recorded within the fixed dead-time were properly rejected and not taken into account in the cross-section calculation.
In addition, fits to the waiting time distributions were used. In order to perform the fits, as described previously, the reaction rate has to be constant or must not vary appreciably. Moreover, the waiting time distributions must rely on good statistics so as to reduce the uncertainty of the fitting parameters. For these reasons, the recorded reaction rate was divided into the bins seen in Fig. 8. A typical fit performed on 235 U, for signals detected at times-of-flight corresponding to incident neutron energies between 2 and 2.2 MeV, can be seen in Fig. 9, where the waiting time is calculated as the time difference between the initial pulse arriving in the time interval that corresponds to the 2-2.2 MeV energy region and any pulse successfully reconstructed at times-of-flight all the way down to the end of the acquisition window. The end of the fitting range, however, was chosen to be significantly smaller than the time difference corresponding to the end of the acquisition window, because at larger times the waiting time distribution follows the behaviour of the reaction rate, so structures that deviate from the theoretical behaviour (i.e. resonances in the cross-sections, dips in the flux etc.) are expected, which would have an impact on the goodness of the fit. It has to be mentioned that the uncertainty of the correction factor f_DT is estimated to be of the order of 10% and is derived from the uncertainty of the parameters of the fitted curve for small variations of the fitting range.
Finally, Fig. 10 shows the evaluated 238 U(n,f) cross-section (ENDF/B-VII.1 [31]), which is considered a standard above 2 MeV, together with the cross-sections calculated using the models of Eqs. (2), (3) and (4), the fits to the waiting time distributions and the developed methodology to correct for the counting losses, in the energy region of 1.8-4 MeV.

Discussion and remarks
The results obtained using the different correction models indicate miscalculations in the case of high counting rates. More specifically, the models of Bollinger et al. [6], Coates [7] and Moore [8] underestimate the dead-time correction, mainly in the case of 235 U, resulting in a 238 U(n,f) cross-section that is overestimated by ∼20% with respect to the ENDF/B-VII.1 evaluation. The uncertainty bars seen in Fig. 10 (blue squares) correspond only to the statistical uncertainty of the recorded yield, since no uncertainty estimate is provided for these models in the literature.
The application of the fitting methodology of waiting time distributions to time-of-flight data proved more reliable, since the correction factors calculated with it provide a 238 U(n,f) cross-section that is in agreement with the evaluation within 3% on average. However, the uncertainty of this method depends on the quality of the fits to the waiting time distributions. Since these distributions were calculated by compromising between the statistics, the variation of the reaction yield and a binning adequate to reproduce the cross-section, an uncertainty of the order of 15% can be justified. The uncertainty bars seen in Fig. 10 (green triangles) were calculated as the quadratic sum of the statistical uncertainties and the uncertainties of the correction factors f_DT for both 238 U and 235 U in each case.
Finally, the developed methodology predicts correction factors f_DT that reproduce the 238 U(n,f) cross-section with discrepancies smaller than 2% and an uncertainty of the order of 3%, dictated by the accuracy of the fitting of f_DT.

Conclusion
In counting experiments associated with pulsed sources and the time-of-flight technique, counting losses can be very high, resulting in an underestimation of the reaction yield. Correction models do exist in the literature, but they cannot always provide adequate corrections in the case of high counting rates, which can reach up to 1.9 MHz with counting losses that exceed 60%. Such is the case of fission cross-section measurements with the Micromegas detectors at the second experimental area (EAR-2) of the n_TOF facility at CERN, which offers a high instantaneous neutron flux resulting in a recorded reaction rate of the MHz order in the case of the 238 U(n,f) and 235 U(n,f) reactions.
Since the models that were used did not prove adequate to account for the occurring counting losses, a new methodology has been developed based on detector emulation devices and fits to waiting time distributions, which can be used regardless of the particle detector. This methodology was benchmarked using 238 U(n,f) and 235 U(n,f) experimental data for a bandwidth of 20 kHz-1.9 MHz and predicted correction factors that reproduce the standard 238 U(n,f) cross-section, measured with reference to the 235 U(n,f) one, with maximum discrepancies of the order of 2% and an estimated uncertainty of the order of 3%.

Appendix. Waiting time distributions
Counting experiments are described by the Poisson distribution; therefore the probability of observing n events in a time interval t, given a counting rate λ = μ∕t, is expressed by the Poisson distribution function of Eq. (A.1), P(n) = μ^n e^(−μ)∕n!, where μ = λt is the average number of events recorded in the time window.

A.1. Useful probabilities
In order to make the mathematical formalism that follows less complex, the probabilities given below need to be calculated using Eq. (A.1). The probability of not observing any event in a time interval t is P(0) = e^(−λt) (Eq. (A.2)), while the probability of observing a single event in the approximation of a small time interval Δt is P(1) ≈ λΔt (Eq. (A.3)).
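These two limits can be checked numerically against the Poisson function of Eq. (A.1) (a minimal sketch; the rate and the intervals are arbitrary):

```python
import math

def poisson_p(n, mu):
    """Poisson probability of n events for mean mu = lam * t (Eq. (A.1))."""
    return mu**n * math.exp(-mu) / math.factorial(n)

lam = 1.0e6                      # counting rate [Hz]
t = 1.0e-6                       # time interval [s]
mu = lam * t

# Eq. (A.2): P(0) = exp(-lam * t)
print(poisson_p(0, mu), math.exp(-mu))

# Eq. (A.3): P(1) ~ lam * dt for a small interval dt
dt = 1.0e-9
print(poisson_p(1, lam * dt), lam * dt)
```

P(0) matches the exponential exactly, while P(1) approaches λΔt as Δt → 0, which is all the derivation below needs.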

A.2. Consecutive pulses
Assuming a varying counting rate r(t) and a pulse arriving at a time t0, as shown in Fig. A.11, where the vertical thick lines indicate the position of each pulse, the waiting time distribution of consecutive events can be calculated. The next pulse is chosen to arrive within the time window (t, t + dt), while the time frame between the two pulses, (t0, t), is divided into N equal intervals Δt, so that Δt = (t − t0)∕N → 0. This implies that N is relatively large, so that within each interval Δt the counting rate can be considered constant.
The fact that the next pulse arrives within the time window (t, t + dt) can equally be expressed as having no pulse in any of the N sub-intervals and exactly one pulse in the dt window. Considering the probabilities previously calculated in Eqs. (A.2) and (A.3), the total probability can be written as dP = [∏_{i=1}^{N} (1 − r(t_i)Δt)] · r(t) dt (Eq. (A.5)). Since N is chosen to be large, without loss of generality, the sum appearing in the exponent of the previous expression tends to an integral. In addition, only one pulse is expected to arrive in the time window (t, t + dt), which can mathematically be expressed as a combination of either a small counting rate r(t) or a time window dt so infinitesimal that only one pulse can fit in it. These assumptions lead to the following expression for the probability in Eq. (A.5): dP = r(t) exp(−∫_{t0}^{t} r(t′) dt′) dt (Eq. (A.6)). The probability density dP∕dt can then be deduced from Eq. (A.6): dP∕dt = r(t) exp(−∫_{t0}^{t} r(t′) dt′) (Eq. (A.7)). In the case of a constant counting rate, r(t) = λ, the probability density of Eq. (A.7) becomes an exponential decay: dP∕dt = λ e^(−λ(t−t0)) (Eq. (A.8)). The result in Eq. (A.8) can be interpreted as follows: the probability of a large separation between consecutive pulses, relative to the counting rate, converges to 0. More specifically, at a constant counting rate it is more probable to detect the next event close to the first one or, equivalently, less and less probable to detect it at much later times, as illustrated in Fig. A.12, where the shaded sections represent the arrival probabilities for the case of a constant counting rate λ = 1 Hz.

A.3. The general case
The general case regards the calculation of the waiting time distributions from the first event to the next, to the second next, to the third next etc. Such an example is presented in Fig. A.14, where the first pulse arrives at t = t0, the second at t = t1 and the third at an arbitrary time t. It is evident that these distributions form a family of distributions P_n(t), with P_1(t) being the distribution describing the waiting time between consecutive pulses in the time interval t, P_2(t) being the distribution describing the waiting time to have two adjacent pulses in the time window t after the first event etc. The distribution P_1(t) has been described and calculated in A.2. The P_2(t) distribution can be calculated following a similar reasoning: after the arrival of the first pulse at t = t0, there is no pulse between t = t0 and t = t1, there is one pulse at t = t1, there is no pulse in the (t1, t) time window and there is one pulse at the arbitrary time t. This scenario can be expressed using Eqs. (A.2) and (A.3). Without loss of generality, t0 = 0 can be assumed, and the resulting probability density function is dP_2∕dt = λ² t e^(−λt) (Eq. (A.10)). Similarly, for the P_3 distribution it can be proven that the probability density function is dP_3∕dt = λ³ t² e^(−λt)∕2 (Eq. (A.11)). Finally, a general expression for the probability density functions dP_n∕dt of the family of distributions P_n(t), which is known in the literature as the Erlang distribution, can be calculated to be dP_n∕dt = λ^n t^(n−1) e^(−λt)∕(n − 1)! (Eq. (A.12)). For a constant counting rate λ = 1 Hz, Fig. A.13 shows the probability density functions dP_n∕dt of the waiting time distributions P_n(t) for up to five consecutive pulses.

Fig. A.14. Snapshot of three consecutive pulses where the first pulse arrives at t = t0, the second at t = t1 and the third at an arbitrary time t.
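The Erlang form of Eq. (A.12) can be verified numerically: the waiting time to the n-th next pulse is the sum of n exponential inter-arrival times, and its empirical density should match λ^n t^(n−1) e^(−λt)∕(n−1)!. A sketch with λ = 1 Hz and n = 3:

```python
import math
import numpy as np

rng = np.random.default_rng(7)
lam, n = 1.0, 3                 # rate 1 Hz; waiting time to the 3rd next pulse

# n-th waiting time = sum of n exponential inter-arrival times
samples = rng.exponential(1.0 / lam, size=(200_000, n)).sum(axis=1)

def erlang_pdf(t, lam, n):
    """Erlang density dP_n/dt = lam^n * t^(n-1) * exp(-lam*t) / (n-1)!
    (Eq. (A.12))."""
    return lam**n * t**(n - 1) * np.exp(-lam * t) / math.factorial(n - 1)

# compare the empirical density with the analytic one at a few points
hist, edges = np.histogram(samples, bins=60, range=(0.0, 12.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for t in (1.0, 2.0, 4.0):
    i = int(np.argmin(np.abs(centers - t)))
    print(t, hist[i], erlang_pdf(t, lam, n))
```

The mean of the samples also reproduces the expected n∕λ = 3 s, consistent with the sum-of-exponentials construction.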