Article

Principal Component Analysis Applied to Digital Pulse Shape Analysis for Isotope Discrimination

by Katherine Guerrero-Morejón 1, José María Hinojo-Montero 1, Fernando Muñoz-Chavero 1, Juan Luis Flores-Garrido 2, Juan Antonio Gómez-Galán 3,* and Ramón González-Carvajal 1

1 Department of Electronic Engineering, University of Sevilla, 41092 Sevilla, Spain
2 Department of Electrical and Thermal Engineering, University of Huelva, 21007 Huelva, Spain
3 Department of Electronic Engineering, Computers, and Automation, University of Huelva, 21007 Huelva, Spain
* Author to whom correspondence should be addressed.
Sensors 2023, 23(23), 9418; https://doi.org/10.3390/s23239418
Submission received: 30 October 2023 / Revised: 22 November 2023 / Accepted: 24 November 2023 / Published: 26 November 2023
(This article belongs to the Special Issue Advances in Particle Detectors and Radiation Detectors)

Abstract: Digital pulse shape analysis (DPSA) techniques have become increasingly important for the study of nuclear reactions since the advent of fast digitizers. These techniques make it possible to obtain the (A, Z) values of the reaction products impinging on new-generation solid-state detectors. In this paper, we present a computationally efficient method to discriminate isotopes with similar energy levels, with the aim of enabling the edge-computing paradigm in future field-programmable gate-array-based acquisition systems. The discrimination of isotope pairs with analogous energy levels has been a topic of interest in the literature, leading to various solutions based on statistical features or convolutional neural networks. Leveraging a valuable dataset obtained from experiments conducted by researchers of the FAZIA Collaboration at the CIME cyclotron in the GANIL laboratories, a dataset already employed in several prior publications, we establish a comparative analysis in terms of selectivity and computational efficiency. Specifically, this work presents an approach to discriminate between pairs of isotopes with similar energies, namely 12,13C, 36,40Ar, and 80,84Kr, using principal component analysis (PCA) for data preprocessing. Linear and cubic support vector machine (SVM) classification models were then trained and tested, both achieving a high identification capability, especially the cubic model. These results offer improved computational efficiency compared to previously reported methodologies.

1. Introduction

Technological breakthroughs in particle detectors and the development of new radioactive ion beam facilities (RIBFs), along with advances in machine learning (ML) and artificial intelligence (AI), have made particle, isotope, and ion classification techniques increasingly relevant in nuclear physics research. These techniques are crucial to discard contaminant beams [1]. Additionally, they must be computationally efficient to be executed in real time, reducing the amount of data to be transmitted, stored, and processed.
On the one hand, the construction and operation of new and upgraded RIBFs, such as FAIR [2], EURISOL [3], SPES [4], EXOTIC [5], or SPIRAL [6], will enable the study of new exotic features of the nuclear structure thanks to the availability of high-intensity radioactive ion beams. On the other hand, the continuous improvement in the spatial and temporal resolution of silicon detectors, the main semiconductor detectors in use owing to their 1.12 eV gap between the valence and conduction bands, has led to improved accuracy in particle energy measurement [7]. This has made it possible to distinguish isotopes whose energies are very close.
In addition to the improvement of these two key elements for advancing our understanding of nuclear structure, there is constant growth in the computational capacity and speed provided by devices such as FPGAs. They can perform complex and highly demanding operations at the first level of processing, where the data acquisition system (DAQ) or the event selector is located [8], enabling the edge-computing paradigm in the nuclear physics field. Efficient isotope discrimination algorithms are therefore vital, and different techniques have been published in the literature for this purpose. Some authors have used time-of-flight (ToF) to identify and classify the nature of the isotope under study [9,10]. This technique is based on measuring the time difference between two given time stamps: either the start and stop time stamps, which requires a detector close to the source and a second detector located at a certain distance, or the stop time stamp and the start time stamp of the accelerator radiofrequency (RF), an option only valid in facilities that implement pulsed beams [9]. The main drawback of this technique is that it requires a complex clock distribution network to ensure that all detectors receive exactly the same clock signal, resulting in complex control and acquisition systems such as those implemented in FAZIA [11,12].
A second technique reported in the literature is the measurement of the ionization energy loss [9]. It relates the momentum of the particle to its velocity, which is estimated after measuring the ionization energy losses. Nevertheless, this method presents two major challenges that hinder particle detection and limit its application: two particles with different masses but the same momentum generate the same energy loss, and the detector itself can saturate.
Finally, a third method for isotope discrimination is pulse shape analysis (PSA), a powerful technique for characterizing and distinguishing particles or isotopes based on the unique waveforms they generate after impinging on a particle or scintillation detector [13,14,15,16,17,18,19,20,21]. A very promising variant of this technique is its digital implementation, known as digital PSA (DPSA) [22,23,24,25,26]. It leverages the benefits of digital signal processing to achieve reliability and to incorporate advanced algorithms such as artificial neural networks (ANNs) [27,28] or machine/deep learning algorithms [29,30], making it a preferred choice for many modern applications in nuclear physics, radiation detection, and related fields. The latter technique has demonstrated high performance in classifying isotopes, even when their energy and atomic weight are very similar. An example of this performance is given in [28], where the isotope pairs 12,13C, 36,40Ar, and 80,84Kr were identified with a hit rate close to 91%. Furthermore, owing to its high computational cost, this technique will benefit greatly from devices such as FPGAs, as they offer the possibility of running multiple operations in parallel at a low cost [27,31].
This research work proposes a new classifier for PSA based on SVM ML algorithms that improves the accuracy in the discrimination of fragments generated in nuclear reactions with similar energies while reducing the computational resources used. This task requires a large dataset covering different values of the mass number (A), atomic number (Z), and energy. Specifically, the dataset used in this work was acquired from experimental measurements using the CIME cyclotron at GANIL [22,27,28]. To discriminate the different isotopes, the dataset is preprocessed using principal component analysis (PCA) to reduce the amount of information that the SVM algorithm must process, yielding a solution that can be easily integrated into a physical device such as an FPGA without compromising the discrimination results. This last requirement is vital for applying the method to array detectors containing a large number of single silicon detectors working simultaneously, which produce a huge amount of data to process.
The rest of the document is organized as follows: Section 2 describes the acquisition of the dataset used to train and assess the proposed classifier. Section 3 details the proposed classifier based on the SVM algorithm, as well as the preprocessing of the dataset. Section 4 collects the results obtained by applying the proposed method. Section 5 compares these results with other techniques published in the literature and discusses them. Finally, in Section 6, some conclusions are drawn and future work is outlined.

2. Data Source and Acquisition Description

As described in the previous section, the dataset used in this work corresponds to the results obtained from measurements at GANIL using CIME cyclotron-accelerated ions. The experiment focuses on the electric current signal produced by the detected particle. The authors used a neutron transmutation doped (NTD) silicon detector of 300 μm thickness and 200 mm² active area, collimated to a 10 mm diameter. No target was required because the detector was placed above the beam line to collect the ions directly. The detector was mounted in an inverse configuration because, according to the authors, this configuration increases the plasma time differences for ions of a given energy [14]. A voltage of 190 V was applied during the experiment, while the depletion voltage was 140 V.
The detector was connected to a PACI low-gain pre-amplifier operating with a bandwidth of 300 MHz [26], placed very close to the detector (4 cm away, inside the vacuum chamber) to avoid significant signal degradation. The outputs provided by this amplifier are proportional to the charge and current produced by the detected particle and were sent to a commercial 8-bit ACQIRIS digitizer [32] operating at a sampling frequency of 2 GHz. This acquisition system stored all the signals from the different ions using the same amplitude scale for direct analysis and comparison. The energy was then measured with a peak-sensing ADC by connecting the charge output of the PACI to standard analog shaping electronics. The beam energy in the experiment ranged from 7.39 AMeV to 8.68 AMeV, and the accelerated ion species covered a fairly wide range, from 12C to 84Kr. In each run of the experiment, "mixed" beams with known isotopes were used, all with different mass and charge-to-mass ratios but with the same energy per nucleon. The identity of each pulse and the particle mass number were determined by measuring the total energy. A more detailed description of the experiment can be found in [22].
Within their results, the authors identified three pairs of isotopes with very similar total energy: 12C at 98.54 MeV vs. 13C at 96.75 MeV; 36Ar at 313.92 MeV vs. 40Ar at 312.88 MeV; and 80Kr at 688.43 MeV vs. 84Kr at 676.18 MeV. Figure 1 shows the current pulse shapes corresponding to these three isotope pairs. It can be observed that discriminating between isotopes of similar energy is difficult, the hardest case being 36,40Ar owing to the larger overlap of their pulses throughout the graph. In this work, new methods are developed to discriminate between these pairs of isotopes, aiming at an algorithm based on computationally simple techniques that minimize the number of mathematical operations, avoiding divisions and non-linear operations. This is explained in the next section, Section 3.

3. Methodological Approach and Procedures

This section describes the algorithm implemented to classify the detected isotopes. To demonstrate its performance, the dataset described in the previous section, Section 2, is used. Figure 2 summarizes the workflow followed to obtain the final isotope classifier.
The first stage of this workflow is to load the samples obtained for each isotope pair into memory, creating the necessary data structure. The information stored in this structure corresponds to the label, i.e., the name of the isotope with which the data are associated, and the number of protons (Z) and neutrons (N) that compose it. Once this step is completed, the second stage, denoted in Figure 2 as "Data preprocessing", is performed. During this stage, the PCA technique is applied to reduce the number of features to be processed. This process is described in Section 3.1. After the reduction of the sample space, an SVM-based classifier is trained (Section 3.2). Finally, after the training phase, the generated model is applied to a new dataset to validate its estimation. All the code was implemented in Matlab, using the Statistics and Machine Learning Toolbox [33,34].
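For illustration, the sketch below outlines this workflow in Matlab with the Statistics and Machine Learning Toolbox. It is not the authors' original code (which is not reproduced in the paper); the data matrix X, the labels y, and the choice of retained components are synthetic placeholders used only to make the example self-contained.

    % Minimal sketch of the workflow in Figure 2 (illustrative only).
    % X stands for an [observations x features] matrix of digitized current
    % samples and y for the isotope labels; both are synthetic placeholders.
    X = [randn(1100, 300) + 1; randn(1100, 300)];                 % fake pulse matrix
    y = categorical([repmat("80Kr", 1100, 1); repmat("84Kr", 1100, 1)]);

    [~, score] = pca(X);                        % stage 2: PCA preprocessing
    Z = score(:, 1:4);                          % keep the first principal components

    cv  = cvpartition(y, 'HoldOut', 0.2);       % 80% training / 20% test split
    mdl = fitcsvm(Z(training(cv), :), y(training(cv)), ...
                  'KernelFunction', 'linear');  % stage 3: train the SVM classifier

    yhat     = predict(mdl, Z(test(cv), :));    % stage 4: validate on unseen data
    accuracy = mean(yhat == y(test(cv)));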

3.1. Data Pre-Processing: Principal Component Analysis

Principal component analysis is a statistical-algebraic technique that allows the dimensionality of a dataset to be reduced while preserving the maximum amount of information. This is achieved by linearly converting the data into a new coordinate system in which a significant portion of the data’s variation can be explained using fewer dimensions than the original dataset. This enables a reduction in the memory footprint of a dataset and significantly simplifies the classification algorithm without compromising precision [35]. For this work, the available dataset consists of the following feature vectors:
  • 80Kr: 1100 observations with 300 features
  • 84Kr: 1100 observations with 300 features
  • 36Ar: 2000 observations with 200 features
  • 40Ar: 2000 observations with 200 features
  • 12C: 2000 observations with 100 features
  • 13C: 2000 observations with 100 features
Each observation corresponds to a sample vector with 300, 200, and 100 features for the 80,84Kr, 36,40Ar, and 12,13C isotopes, respectively. Each feature is a sample of the electric current acquired by the 8-bit ACQIRIS digitizer at a sampling period of 1 ns, so each observation spans 300 ns, 200 ns, and 100 ns for the Kr, Ar, and C isotopes, respectively. These vectors are arranged in a matrix for each isotope pair. The digitizer output for each isotope pair is plotted in Figure 1. Note that the X-axis represents the time instant of the acquired sample and can also be interpreted as the sample number, without loss of generality.
Naturally, the aim of applying PCA is to reduce the number of features to be processed by the classification technique without compromising its accuracy and, hence, to improve its computational efficiency. This reduction is beneficial since techniques such as PCA act as an enabling technology for the implementation of the edge-computing paradigm in nuclear physics research. This new approach opens up the potential for processing and analyzing data in proximity to the silicon detector, thereby decreasing the amount of valuable data that need to be transmitted during an experiment. Furthermore, in the specific case of PCA, the way in which the most relevant features of a dataset are generated allows the resulting data to be used as a visualization tool, thus improving the understanding of the data obtained. Figure 3 shows a typical visual example representing a dataset that could be the raw feature matrix corresponding to a certain isotope. Note that in this example the matrix is composed of n observations and each observation has p features; therefore, the sample space has p dimensions.
After applying PCA, the dimensionality of the dataset is reduced, generating a subset $Z = \{z_1, z_2, \ldots, z_j, \ldots, z_k\}$, where k is much smaller than the number of original features p ($k \ll p$). Figure 4 depicts how PCA condenses the information provided by multiple variables into just a few components.
Each principal component $z_j$ is obtained as a linear combination of the original variables; it can be understood as a new feature built by combining the original features in a certain way. The first principal component of a group of variables $(x_1, x_2, \ldots, x_p)$ is the normalized linear combination of these features that has the highest variance, as given by Equation (1):

$$z_1 = \phi_{1,1} \cdot x_1 + \phi_{2,1} \cdot x_2 + \cdots + \phi_{p,1} \cdot x_p \qquad (1)$$
The coefficients $\phi_{1,1}, \ldots, \phi_{p,1}$ are known as "loadings" and define each principal component $z_j$. The loadings can be understood as the weight that each feature has in each principal component, indicating what kind of information each component collects. These coefficients correspond to the components of the eigenvectors of the covariance matrix, and the variance captured by each principal component is given by the associated eigenvalue.
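As a quick numerical check of this relationship (an illustrative sketch on synthetic data, not part of the original analysis), the loadings returned by Matlab's pca can be compared with the eigenvectors of the sample covariance matrix; they coincide up to the sign of each eigenvector.

    % Illustrative check: PCA loadings are the eigenvectors of the covariance matrix.
    rng(0);
    X = randn(500, 6) * randn(6, 6);        % synthetic correlated data
    [coeff, ~, latent] = pca(X);            % loadings (columns) and their variances

    [V, D] = eig(cov(X));                   % eigen-decomposition of the covariance
    [d, order] = sort(diag(D), 'descend');  % sort by decreasing eigenvalue (variance)
    V = V(:, order);

    maxLoadingDiff  = max(abs(abs(coeff(:)) - abs(V(:))));  % ~0 (sign-invariant)
    maxVarianceDiff = max(abs(latent - d));                 % ~0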
For the dataset used in this work, a matrix of principal components of the same dimensions as the original dataset is obtained. However, because the purpose of PCA is to reduce the amount of data while retaining as much information as possible, the minimum number of principal components sufficient to preserve and explain the original features must be found. There is no standardized method to select the optimal number of principal components. Nevertheless, a widely accepted criterion is to evaluate the proportion of cumulative explained variance and select the minimum number of components beyond which the increase in variance is no longer significant. Figure 5 shows the cumulative explained variance of the principal components of the dataset for 80,84Kr. As can be observed, the greatest variation occurs within the first 20 principal components, where this parameter grows from 0.873959 for a single PC to 0.963423 for the first 20 PCs. Beyond this number of principal components, the increase in the cumulative explained variance is not significant compared to the amount of data that must be processed.
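The selection step based on cumulative explained variance can be sketched in Matlab as follows. This is an illustrative example only; XKr is a synthetic placeholder standing in for the 80,84Kr pulse matrix, and the 0.96 threshold is merely indicative of the criterion described above.

    % Choosing the number of principal components from the cumulative
    % explained variance (illustrative sketch with a synthetic placeholder).
    XKr = randn(2200, 300) * randn(300, 300);   % stand-in for the 80,84Kr matrix

    [~, ~, ~, ~, explained] = pca(XKr);         % per-component explained variance [%]
    cumVar = cumsum(explained) / 100;           % cumulative explained variance (0..1)

    kMin = find(cumVar >= 0.96, 1);             % first k reaching, e.g., 96%

    plot(cumVar(1:40), '-o');
    xlabel('Number of principal components');
    ylabel('Cumulative explained variance');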

3.2. Classifier: SVM

Support vector machines (SVMs) are popular classifiers based on supervised machine learning models [36,37]; that is, the sample dataset must be labeled to be used. Their effectiveness in classification, numerical prediction, and pattern recognition tasks is exploited in this work to train an efficient model capable of identifying isotopes of similar energies. The aim of an SVM is to find a line (or, in more than two dimensions, a hyperplane) between the different classes of data such that the distance from that boundary to the nearest data point on either side is maximized. For this research, linear and cubic kernels are used.
The inputs of the model to be trained are the features generated by the PCA algorithm. Specifically, a number of principal components varying between 1 and 10 is evaluated, and a different classifier model is trained for each of these values. It is important to note that preprocessing the data with PCA not only reduces the number of features of the observations but also helps to maximize their variance, increasing the separation between features. A simple way to interpret this effect is geometric: the new features composed of the principal components occupy different positions in space, with greater separation between them. This simplifies the line or hyperplane estimated by the SVM and, therefore, facilitates the classification.
The output of the classifier corresponds to the label associated with each of the observations. Thus, for each pair of isotopes, the classifier properly categorizes each one into its respective class and, once classified, assigns the corresponding number of neutrons and protons. These values are used in Section 4 to compare these algorithms with other classification techniques.
To train each model, the 80–20% rule was applied, i.e., 80% of the dataset was used for training while the remaining 20% was dedicated to evaluating the accuracy of the generated model. Note that accuracy is defined as the number of correct predictions over the total number of cases. Furthermore, to obtain an accurate and robust classifier, a k-fold cross-validation strategy was applied to the training dataset. The main purpose of this method is to divide the dataset into multiple subsets or "folds", allowing the model to be trained and tested multiple times. Specifically, the training dataset was subdivided into 5 folds so that, in each of the 5 training iterations, 4 folds were used to train the model and the remaining one was used to evaluate it. K-fold-based training was chosen because the amount of available data was limited; in such cases, this technique helps to reduce the risk of overfitting, offering a more reliable estimate of the model's performance. Finally, 40 iterations were run to obtain the final model. This procedure was applied for each of the chosen kernels: linear and cubic.
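A minimal Matlab sketch of this training procedure is given below. It reuses the placeholder matrix of retained principal components Z and the labels y from the sketch at the beginning of this section, so the variable names are assumptions rather than the authors' original script; the 40-iteration refinement loop is omitted for brevity.

    % Illustrative training sketch: 80-20 holdout plus 5-fold cross-validation,
    % with linear and cubic (third-order polynomial) kernels.
    cv  = cvpartition(y, 'HoldOut', 0.2);            % 80% train / 20% test
    Ztr = Z(training(cv), :);  ytr = y(training(cv));
    Zte = Z(test(cv), :);      yte = y(test(cv));

    linMdl = fitcsvm(Ztr, ytr, 'KernelFunction', 'linear');
    cubMdl = fitcsvm(Ztr, ytr, 'KernelFunction', 'polynomial', ...
                     'PolynomialOrder', 3);

    % 5-fold cross-validation on the training set
    cvLin = crossval(linMdl, 'KFold', 5);
    cvCub = crossval(cubMdl, 'KFold', 5);
    fprintf('CV accuracy: linear %.2f%%, cubic %.2f%%\n', ...
            100 * (1 - kfoldLoss(cvLin)), 100 * (1 - kfoldLoss(cvCub)));

    % Accuracy on the held-out 20% test split
    accLin = mean(predict(linMdl, Zte) == yte);
    accCub = mean(predict(cubMdl, Zte) == yte);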
To determine the optimal number of principal components, three metrics were considered: the accuracy, the merit factor, and the performance previously achieved by the neural network that we aim to surpass. For the first metric, the precision of the SVM classifier, the number of principal components to be used was estimated according to two criteria. First, the success rate obtained after evaluating the confusion matrix must be higher than 90%, a value similar to that of other scientific publications [27,28]. Second, the number of principal components is increased only while it yields a significant improvement in the classification; a threshold of 1% was established for this improvement. Below this threshold, the additional computational resources required to implement the PCA algorithm and the SVM classifier are not justified, as they would only increase the complexity and size of a possible hardware implementation. Figure 6 presents the accuracy achieved by the model as a function of the number of principal components used. The data represented in Figure 6 are summarized in Table 1 and Table 2.
The second metric is the merit factor. This metric describes the ability of the proposed SVM classifiers to discriminate between isotopes with similar energy, which are the most challenging classification cases in this type of study. In order to quantify the classification efficiency of our trained SVM models, measurements were conducted by estimating the degree of overlap between neighboring clusters. A widely used merit factor (FOM) M for gamma–neutron discrimination is presented in [22,38]. Equation (2) represents the generalized two-dimensional form of the merit factor M.
$$M = \frac{\mu_1 - \mu_2}{(\sigma_1 + \sigma_2) \cdot 2.35} \qquad (2)$$

where $\mu_{1,2}$ and $\sigma_{1,2}$ represent the corresponding two-dimensional centers and one-dimensional standard deviations of the classes, respectively [28].
The merit factor is interpreted as follows: values of M > 0.75 can be associated with a good rejection rate, and when M > 1 , almost all events are completely separated. To ensure acceptable discrimination between a pair of isotopes, the FOM must exceed 0.75; with the linear SVM, this value is adequately surpassed, except for the case of 36,40Ar. However, with the cubic SVM model, this threshold is exceeded for all three pairs of isotopes, ensuring proper discrimination and, therefore, an acceptable rejection rate. The merit factor data obtained for different numbers of principal components are displayed in Figure 7, and Table 3 and Table 4 represent these same results.
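As an illustration of how Equation (2) can be evaluated, the short sketch below computes M for two one-dimensional score distributions. The distributions s1 and s2 are synthetic placeholders; in practice they would be the classification scores of the two isotope classes produced by the trained SVM.

    % Illustrative merit-factor computation following Equation (2).
    rng(1);
    s1 = 1.0 + 0.15 * randn(2000, 1);    % placeholder scores, class 1
    s2 = 0.4 + 0.15 * randn(2000, 1);    % placeholder scores, class 2

    M = abs(mean(s1) - mean(s2)) / (2.35 * (std(s1) + std(s2)));

    goodRejection  = M > 0.75;           % acceptable discrimination
    fullSeparation = M > 1;              % almost all events separated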
Finally, the third criterion for principal component selection is predicated on surpassing the previously established outcomes of the neural network referenced in this study. In other words, it is imperative that the merit factor value not only exceeds the minimum required by definition but also outperforms the classification capacity level of the reference neural network.
Based on the results collected under the aforementioned three criteria, it can be observed that, in the case of the isotope pairs 12,13C and 80,84Kr, the requisite number of principal components is four to achieve the required accuracy of the SVM classifiers and surpass the neural network in the cubic case. However, for 36,40Ar, it is necessary to increase the number of principal components to six in order to attain an accuracy exceeding 92%. Hence, these numbers of principal components are employed to generate the final models, thereby conserving computational resources through classifier model simplification. Furthermore, these results were substantiated by the cumulative explained variance, as depicted in Figure 8 for each isotope pair.

4. Identification between Pairs of Isotopes with Similar Energy

Once the optimal number of principal components required for the SVM classifier is determined, as detailed in the preceding section, the results of the configured classifiers are presented in this section. Table 5 contains evaluation metrics such as the prediction precision for each isotope; this metric is useful and reliable when the dataset is symmetric between classes, and, after PCA, our dataset is completely symmetric. It provides information about how often the true positive predictions are correct. Table 5 also contains the value of the merit factor calculated with Equation (2).
To describe the performance of our classification model, a validation tool known as a confusion matrix was used. Figure 9 and Figure 10 represent the values of each confusion matrix for each isotope pair to better understand the number of successes with respect to the total. Note that both figures were plotted on a logarithmic scale to improve their readability.
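The evaluation itself can be sketched as follows, again reusing the placeholder variables cubMdl, Zte, and yte from the training sketch in Section 3.2; the per-class precision computed here corresponds to the evaluation metrics reported in Table 5, and confusionchart is one of several ways of plotting the confusion matrix.

    % Illustrative evaluation sketch: confusion matrix and per-class precision.
    yhat = predict(cubMdl, Zte);               % predictions on the 20% test split
    [C, classNames] = confusionmat(yte, yhat); % rows: true class, columns: predicted

    precision = diag(C) ./ sum(C, 1)';         % true positives / predicted positives
    accuracy  = sum(diag(C)) / sum(C(:));      % overall success rate

    confusionchart(yte, yhat);                 % graphical confusion matrix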
In order to validate the obtained results, a comparison is performed with the other methodologies previously documented in the scientific literature. Table 6 presents a comparison of the merit factors obtained through each method, including our own proposal.
From Table 6, it can be seen that the proposed classifiers achieve better classification results than the previously published methods. In addition, thanks to the reduction of the dataset achieved by applying PCA, the required computational resources are optimized, as described in the next section.

5. Computational Cost

The relative efficiency of the algorithms is determined by comparing their computational complexities. In this section, the computational cost is evaluated by counting the number of mathematical operations (additions and multiplications) that each of the proposed algorithms (PCA + linear SVM, PCA + cubic SVM) requires for its execution. Additionally, the theoretical resources occupied by a model based on neural networks [28] are also estimated in order to perform a proper comparison. Note that this methodology has been used previously in the literature [39].

5.1. Estimation

In order to determine the number of resources required for each method, the cost functions collected in Table 7 were used.
First, the estimation for the PCA algorithm was performed considering the function $f_1(x)$. This function represents the set of operations to be carried out to obtain the linear combination of each principal component, where $x_i$ represents the corresponding feature and $\phi_i$ denotes the loading factor associated with $x_i$. Thus, the computation of a single principal component composed of p features requires p multiplications and p − 1 additions. Second, the resources required by the SVM algorithms were estimated using the functions $f_2(z)$ and $f_3(z)$. In both functions, N reflects the total number of principal components used to generate the model and $w_i$ is the weight associated with each principal component. In the linear case, the term b is a constant that represents the model bias. In the cubic SVM classifier, the terms $\alpha_i$ and $y_i$ correspond to the duality parameters and the observed response values, respectively. Regarding class encoding, $y_i = +1$ denotes the positive class, while $y_i = -1$ corresponds to the negative one; in other words, $y_i \in \{-1, 1\}$. Figure 11a depicts the data flow and the operations to be performed for each of the developed SVM-based classifiers.
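For reference, the sketch below is a direct Matlab transcription of the decision functions $f_2$ and $f_3$ of Table 7. The weights, bias, dual coefficients, and labels are arbitrary placeholders rather than trained values; in a real deployment they would be extracted from the models fitted in Section 3.2.

    % Illustrative transcription of the cost functions f2 (linear SVM) and
    % f3 (cubic SVM) from Table 7, using placeholder parameter values.
    N     = 4;                      % number of principal components
    z     = randn(1, N);            % principal components of one observation
    w     = randn(1, N);            % weights
    b     = 0.1;                    % bias of the linear model
    alpha = rand(1, N);             % duality parameters (placeholders)
    y     = [1 -1 1 -1];            % class labels, y_i in {-1, +1}

    f2 = b + sum(w .* z);                       % linear decision value
    f3 = sum(alpha .* y .* (1 + w .* z).^3);    % cubic decision value

    labelLinear = sign(f2);                     % the sign selects the isotope class
    labelCubic  = sign(f3);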
Finally, to assess the computational cost of the neural network presented in [28], function $f_4(x)$ is used. In this function, N represents the total number of features, $w_j$ denotes the weights associated with the neurons, $x_j$ corresponds to the input of the neural network, and f refers to an activation function [27,28]. The term b again denotes the bias of the model. Note that the neural network architecture comprises n inputs corresponding to the isotope-specific features, two hidden layers of eight neurons each, and an output layer with two neurons. During the computation, each neuron computes a weighted sum, in which the input values are multiplied by their respective weights and a bias term is added; the result of this weighted sum is then passed through an activation function. Two activation functions were used, the hyperbolic tangent sigmoid function and the "purelin" function, which determine the activation or excitation of the neuron, as depicted in Figure 11b.
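A compact sketch of the forward pass just described is shown below. The weight matrices and bias vectors are random placeholders, not the trained parameters of the network in [28]; tanh is used because the hyperbolic tangent sigmoid (tansig) is mathematically identical to it.

    % Illustrative forward pass of the reference ANN: n inputs, two hidden
    % layers of 8 neurons (hyperbolic tangent sigmoid activation) and a
    % 2-neuron linear ('purelin') output layer. Weights/biases are placeholders.
    n  = 100;                     % e.g., number of samples of a 12,13C pulse
    x  = randn(n, 1);             % one input pulse
    W1 = randn(8, n);  b1 = randn(8, 1);
    W2 = randn(8, 8);  b2 = randn(8, 1);
    W3 = randn(2, 8);  b3 = randn(2, 1);

    a1  = tanh(W1 * x  + b1);     % hidden layer 1
    a2  = tanh(W2 * a1 + b2);     % hidden layer 2
    out = W3 * a2 + b3;           % linear output layer (two neurons)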
Following the application of function $f_4(x)$, Table 8 collects the number of operations carried out by the neural network for each pair of isotopes. This information offers a quantitative assessment of the computational cost associated with this process, enabling a deeper understanding of the computational workload imposed by the neural network and its impact on the overall efficiency of the isotope classification process.

5.2. Analysis of Results

This subsection presents the number of additions and multiplications performed by the PCA + linear SVM, PCA + cubic SVM, and neural network algorithms for each isotope pair. As can be seen from Table 9, a significant amount of computational resources is required regardless of the algorithm used.
For the 12,13C isotope pair, the neural network performs 3.02 times more additions and 4.85 times more multiplications than the PCA + cubic SVM, and 3.05 and 4.35 times more, respectively, than the PCA + linear SVM.
Concerning the 36,40Ar isotope pair, the neural network carries out 1.70 times more additions and 2.73 times more multiplications than the PCA + cubic SVM, and 1.71 and 2.78 times more, respectively, than the PCA + linear SVM.
Finally, for the 80,84Kr isotope pair, the neural network performs 5.46 times more additions and 2.07 times more multiplications than the PCA + cubic SVM, and 5.48 and 2.09 times more, respectively, than the PCA + linear SVM.

6. Conclusions

Data preprocessing through PCA has proven to be an effective strategy for reducing the information load without compromising the results. The selection of four principal components for the isotope pairs 12,13C and 80,84Kr and six principal components for 36,40Ar, based on cumulative explained variance analysis, has allowed the condensation of most of the features into these components, which is sufficient to achieve the required accuracy. Note that, in spite of the similarity between the energy levels of the 36,40Ar isotope pair, which corresponds to one of the most challenging isotopes to discriminate, our cubic SVM model has demonstrated significantly more efficient classification compared to the linear SVM. The proposed approach in this work for classifying isotopes with similar energy levels has proven to provide high precision in comparison to the other methodologies present in the literature, such as ANNs. Furthermore, the validation metrics and merit factor used have met expectations. Regarding the computational cost analysis, our results indicate that SVM algorithms require fewer computational resources in terms of ’operations’ than ANNs, highlighting the efficiency of this technique in isotope classification applications. The significant reduction in computational cost opens up the possibility of implementing these models for isotope classification in the context of edge computing.

Author Contributions

Conceptualization, F.M.-C., J.L.F.-G., J.A.G.-G. and R.G.-C.; methodology, K.G.-M., J.M.H.-M. and F.M.-C.; software, K.G.-M., J.M.H.-M. and F.M.-C.; validation, K.G.-M., J.M.H.-M., F.M.-C., J.L.F.-G., J.A.G.-G. and R.G.-C.; formal analysis, K.G.-M., J.M.H.-M. and F.M.-C.; investigation, K.G.-M., J.M.H.-M., F.M.-C., J.L.F.-G., J.A.G.-G. and R.G.-C.; resources, J.L.F.-G. and J.A.G.-G.; data curation, K.G.-M., J.M.H.-M., F.M.-C. and J.L.F.-G.; writing—original draft preparation, K.G.-M., J.M.H.-M., F.M.-C., J.L.F.-G., J.A.G.-G. and R.G.-C.; writing—review and editing, K.G.-M., J.M.H.-M., F.M.-C., J.L.F.-G., J.A.G.-G. and R.G.-C.; visualization, K.G.-M., J.M.H.-M., F.M.-C., J.L.F.-G., J.A.G.-G. and R.G.-C.; supervision, J.M.H.-M., F.M.-C., J.L.F.-G., J.A.G.-G. and R.G.-C.; project administration, J.M.H.-M., F.M.-C., J.L.F.-G., J.A.G.-G. and R.G.-C.; funding acquisition, F.M.-C., J.A.G.-G. and R.G.-C. All authors have read and agreed to the published version of the manuscript.

Funding

Grant TED2021-131075B-I00 funded by MCIN/AEI/10.13039/501100011033. Grant PID2021-127711NB-I00 funded by MCIN/AEI/10.13039/501100011033.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available on request.

Acknowledgments

This research was supported by grant TED2021-131075B-I00 funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by the "European Union NextGenerationEU/PRTR", and by grant PID2021-127711NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe".

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI      Artificial intelligence
ANN     Artificial neural network
DPSA    Digital pulse shape analysis
EURISOL European isotope separation on-line
FAIR    Facility for antiproton and ion research
FAZIA   Four-pi A and Z identification array
FPGA    Field-programmable gate array
ML      Machine learning
N       Neutrons
NTD     Neutron transmutation doped
PC      Principal component
PCA     Principal component analysis
PSA     Pulse shape analysis
RIBF    Radioactive ion beam facility
SPES    Selective production of exotic species
SVM     Support vector machine
ToF     Time-of-flight
Z       Protons

References

1. Mazzocco, M. Radioactive Ion Beams: Production and Experiments at INFN-LNL. EPJ Web Conf. 2023, 275, 01010.
2. FAIR Accelerator. Available online: https://www.gsi.de/en/researchaccelerators/fair (accessed on 20 May 2022).
3. Eurisol. Available online: http://www.eurisol.org (accessed on 20 May 2022).
4. National Institute for Nuclear Physics of Legnaro. Available online: http://www.lnl.infn.it/en/welcome-on-the-site-of-the-national-laboratories-of-legnaro (accessed on 20 May 2022).
5. Farinon, F.; Glodariu, T.; Mazzocco, M.; Battistella, A.; Bonetti, R.; Costa, L.; De Rosa, A.; Guglielmetti, A.; Inglima, G.; La Commara, M.; et al. Commissioning of the EXOTIC beam line. Nucl. Instrum. Methods Phys. Res. Sect. B Beam Interact. Mater. Atoms 2008, 266, 4097–4102.
6. Bonasera, A.; Bruno, M.; Dorso, C.; Mastinu, P. Critical phenomena in nuclear fragmentation. Riv. Del Nuovo C. 2000, 23, 1–101.
7. Dalla Betta, G.F.; Ye, J. Silicon Radiation Detector Technologies: From Planar to 3D. Chips 2023, 2, 83–101.
8. Armstrong, W.W.; Burris, W.; Gingrich, D.M.; Green, P.; Greeniaus, L.G.; Hewlett, J.C.; Holm, L.; Mcdonald, J.W.; Mullin, S.; Olsen, W.C.; et al. ATLAS: Technical Proposal for a General-Purpose pp Experiment at the Large Hadron Collider at CERN; CERN: Geneva, Switzerland, 1994.
9. Nappi, E. Advances in charged particle identification techniques. Nucl. Instrum. Meth. A 2011, 628, 1–8.
10. Valdré, S.; Barlini, S.; Bini, M.; Boiano, A.; Bonnet, E.; Borderie, B.; Bougault, R.; Bruno, M.; Buccola, B.; Camaiani, A.; et al. Charged particle identification using time of flight with FAZIA. Nuovo Cim. C 2020, 43, 10.
11. Bougault, R.; Barlini, M.; Bini, A.; Boiano, E.; Bonnet, B.; Borderie, R.; Bougault, M.; Bruno, A.; Buccola, A.; Camaiani, G.; et al. The FAZIA project in Europe: R&D phase. Eur. Phys. J. A 2014, 50, 47.
12. Valdré, S.; Casini, G.; Le Niendre, N.; Bini, M.; Boiano, A.; Borderie, B.; Edelbruck, P.; Poggi, G.; Salomon, F.; Tortone, G.; et al. The FAZIA setup: A review on the electronics and the mechanical mounting. Nucl. Instrum. Meth. A 2019, 930, 27–36.
13. Ammerlaan, C.; Rumphorst, R.; Koerts, L. Particle identification by pulse shape discrimination in the p-i-n type semiconductor detector. Nucl. Instrum. Methods 1963, 22, 189–200.
14. Pausch, G.; Moszyński, M.; Wolski, D.; Bohne, W.; Grawe, H.; Hilscher, D.; Schubart, R.; de Angelis, G.; de Poli, M. Application of the pulse-shape technique to proton-alpha discrimination in Si-detector arrays. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 1995, 365, 176–184.
15. Pausch, G.; Bohne, W.; Hilscher, D. Particle identification in solid-state detectors by means of pulse-shape analysis—Results of computer simulations. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 1994, 337, 573–587.
16. NA6-Collaboration Presented by A. Bamberger. Particle Identification Through Pulse Shape Analysis in Proportional Counters. Phys. Scr. 1981, 23, 759.
17. Marrone, S.; Cano-Ott, D.; Colonna, N.; Domingo, C.; Gramegna, F.; Gonzalez, E.; Gunsing, F.; Heil, M.; Käppeler, F.; Mastinu, P.; et al. Pulse shape analysis of liquid scintillators for neutron studies. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2002, 490, 299–307.
18. Mengoni, D.; Dueñas, J.; Assié, M.; Boiano, C.; John, P.; Aliaga, R.; Beaumel, D.; Capra, S.; Gadea, A.; Gonzáles, V.; et al. Digital pulse-shape analysis with a TRACE early silicon prototype. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2014, 764, 241–246.
19. Skulski, W.; Momayezi, M. Particle identification in CsI(Tl) using digital pulse shape analysis. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2001, 458, 759–771.
20. The GERDA Collaboration; Agostini, M.; Allardt, M.; Andreotti, E.; Bakalyarov, A.M.; Balata, M.; Barabanov, I.; Heider, M.B.; Barros, N.; Baudis, L.; et al. Measurement of the half-life of the two-neutrino double beta decay of 76Ge with the GERDA experiment. J. Phys. G Nucl. Part. Phys. 2013, 40, 035110.
21. Agostini, M.; Araujo, G.; Bakalyarov, A.M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Bettini, A.; et al. Pulse shape analysis in Gerda Phase II. Eur. Phys. J. C 2022, 82, 284.
22. Barlini, S.; Bougault, R.; Laborie, P.; Lopez, O.; Mercier, D.; Parlog, M.; Tamain, B.; Vient, E.; Chevallier, E.; Chbihi, A.; et al. New digital techniques applied to A and Z identification using pulse shape discrimination of silicon detector current signals. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2009, 600, 644–650.
23. Bardelli, L.; Bini, M.; Poggi, G.; Taccetti, N. Application of digital sampling techniques to particle identification in scintillation detectors. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2002, 491, 244–257.
24. Kupny, S.; Brzychczyk, J.; Lukasik, J.; Pawłowski, P. Charged-particle flow measured with the KRATTA detector in the ASY-EOS experiment. EPJ Web Conf. 2015, 88, 01010.
25. Bardelli, L.; Poggi, G.; Bini, M.; Pasquali, G.; Taccetti, N. Time measurements by means of digital sampling techniques: A study case of 100 ps FWHM time resolution with a 100 MSample/s, 12 bit digitizer. Nucl. Instrum. Methods Phys. Res. A 2004, 521, 480–492.
26. Hamrita, H.; Rauly, E.; Blumenfeld, Y.; Borderie, B.; Chabot, M.; Edelbruck, P.; Lavergne, L.; Le Bris, J.; Legou, T.; Le Neindre, N.; et al. Charge and current-sensitive preamplifiers for pulse shape discrimination techniques with silicon detectors. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2004, 531, 607–615.
27. Jiménez, R.; Sánchez-Raya, M.; Gómez-Galán, J.; Flores, J.; Dueñas, J.; Martel, I. Implementation of a neural network for digital pulse shape analysis on a FPGA for on-line identification of heavy ions. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2012, 674, 99–104.
28. Flores, J.; Martel, I.; Jiménez, R.; Galán, J.; Salmerón, P. Application of neural networks to digital pulse shape analysis for an array of silicon strip detectors. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2016, 830, 287–293.
29. Derkach, D.; Hushchyn, M.; Kazeev, N. Machine Learning based Global Particle Identification Algorithms at the LHCb Experiment. EPJ Web Conf. 2019, 214, 06011.
30. Graczykowski, L.K.; Jakubowska, M.; Deja, K.R.; Kabus, M.; on behalf of the ALICE collaboration. Using machine learning for particle identification in ALICE. J. Instrum. 2022, 17, C07016.
31. Eppler, W.; Fischer, T.; Gemmeke, H.; Koder, T.; Stotzka, R. Neural chip SAND/1 for real time pattern recognition. IEEE Trans. Nucl. Sci. 1998, 45, 1819–1823.
32. ACQIRIS: High-Speed ADC Cards with Dedicated FPGA Processing. Available online: https://acqiris.com/ (accessed on 20 May 2022).
33. MathWorks MATLAB. Available online: https://www.mathworks.com/products/matlab.html (accessed on 20 May 2022).
34. MathWorks Statistics and Machine Learning Toolbox. Available online: https://www.mathworks.com/products/statistics.html (accessed on 20 May 2022).
35. Goodfellow, I.J.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 20 May 2022).
36. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer: Berlin/Heidelberg, Germany, 2013; Volume 112.
37. Burges, C.J. A Tutorial on Support Vector Machines for Pattern Recognition. Data Min. Knowl. Discov. 1998, 2, 121–167.
38. Winyard, R.; Lutkin, J.; McBeth, G. Pulse shape discrimination in inorganic and organic scintillators. I. Nucl. Instrum. Methods 1971, 95, 141–153.
39. Gloria Martínez Vidal, M.P.F. Apuntes de Metodología y Tecnología de la Programación, IS04; Creative Commons Attribution-NonCommercial-ShareAlike License: Mountain View, CA, USA, 2006.
Figure 1. Dataset of the three pairs of isotopes: 12,13C, 36,40Ar, and 80,84Kr.
Figure 2. Workflow for obtaining the optimal SVM model.
Figure 3. Visual example of a raw feature matrix.
Figure 4. Visual example of a principal component matrix.
Figure 5. Cumulative explained variance of principal components.
Figure 6. Accuracy for linear and cubic SVM algorithms for (a) 12,13C, (b) 36,40Ar, and (c) 80,84Kr.
Figure 7. Factor of merit for linear and cubic SVM algorithms for (a) 12,13C, (b) 36,40Ar, and (c) 80,84Kr.
Figure 8. Cumulative explained variance for (a) 12,13C, (b) 36,40Ar, and (c) 80,84Kr.
Figure 9. Success–error comparison on a logarithmic scale (from the confusion matrix) for linear SVM: (a) 12,13C, (b) 36,40Ar, and (c) 80,84Kr.
Figure 10. Success–error comparison on a logarithmic scale (from the confusion matrix) for cubic SVM: (a) 12,13C, (b) 36,40Ar, and (c) 80,84Kr.
Figure 11. Schematic representation of the techniques: (a) SVM and (b) ANN.
Table 1. Accuracy values [%] obtained for linear SVM versus the number of principal components.

Isotope  |   1   |   2   |   3   |   4   |   5   |   6   |   7   |   8   |   9   |  10
12,13C   | 98.00 | 98.30 | 98.73 | 99.65 | 99.58 | 99.66 | 99.60 | 99.61 | 99.57 | 99.67
36,40Ar  | 54.07 | 89.24 | 91.21 | 92.18 | 91.86 | 91.93 | 91.75 | 91.69 | 91.32 | 91.48
80,84Kr  | 61.54 | 78.88 | 96.84 | 98.62 | 98.73 | 98.68 | 98.80 | 99.25 | 99.39 | 99.35
Table 2. Accuracy values [%] obtained for cubic SVM versus the number of principal components.

Isotope  |   1   |   2   |   3   |   4   |   5   |   6   |   7   |   8   |   9   |  10
12,13C   | 51.32 | 97.29 | 99.14 | 99.92 | 99.88 | 99.88 | 99.87 | 99.94 | 99.97 | 99.94
36,40Ar  | 49.71 | 56.93 | 72.66 | 88.43 | 95.39 | 96.29 | 97.33 | 97.28 | 97.42 | 97.35
80,84Kr  | 49.90 | 50.34 | 97.81 | 98.94 | 99.16 | 99.01 | 99.01 | 98.81 | 99.03 | 99.03
Table 3. Factor of merit obtained for linear SVM versus the number of principal components.

Isotope  |   1  |   2  |   3  |   4  |   5  |   6  |   7  |   8  |   9  |  10
12,13C   | 1.61 | 1.81 | 2.24 | 3.84 | 3.73 | 3.84 | 3.48 | 3.70 | 3.81 | 4.35
36,40Ar  | 0.04 | 0.54 | 0.62 | 0.66 | 0.65 | 0.66 | 0.65 | 0.64 | 0.62 | 0.63
80,84Kr  | 0.11 | 0.29 | 1.15 | 1.78 | 1.85 | 1.81 | 1.91 | 2.45 | 2.72 | 2.47
Table 4. Factor of merit obtained for cubic SVM versus the number of principal components.

Isotope  |   1  |   2  |   3  |   4  |   5  |   6  |   7  |   8  |   9   |  10
12,13C   | 0.03 | 1.24 | 2.51 | 7.50 | 6.14 | 6.47 | 5.90 | 8.96 | 12.09 | 9.09
36,40Ar  | 0.00 | 0.06 | 0.22 | 0.57 | 0.92 | 1.04 | 1.24 | 1.27 | 1.26  | 1.36
80,84Kr  | 0.00 | 0.00 | 1.40 | 2.04 | 2.30 | 2.11 | 1.93 | 1.98 | 2.13  | 2.13
Table 5. Evaluation metrics of the SVM classifier algorithms.

Isotope  | Linear SVM: Evaluation Metrics [%] | Linear SVM: M | Cubic SVM: Evaluation Metrics [%] | Cubic SVM: M
12,13C   | 99.88–99.65                        | 3.84          | 99.94–99.92                       | 7.50
36,40Ar  | 91.78–91.93                        | 0.66          | 96.57–96.29                       | 1.04
80,84Kr  | 98.73–98.62                        | 1.78          | 98.90–98.94                       | 2.04
Table 6. Factor of merit among different methods.

Method                 | Units   | Reference | M (12,13C) | M (36,40Ar) | M (80,84Kr)
Amplitude              | [mA]    | [22]      | 1.42       | 0.81        | 0.54
Risetime               | [ns]    | [22]      | 0.62       | 0.36        | 0.26
Decay time             | [ns]    | [22]      | 0.81       | 0.48        | 0.007
Slope                  | [mA/ns] | [22]      | 1.35       | 0.73        | 0.11
m2                     | [ns]    | [22]      | 0.91       | 0.64        | ≈0
f[i] current signal    | -       | [22]      | 1.15       | 0.84        | 0.50
data[i] current signal | -       | [22]      | 1.53       | 0.96        | 1.04
Standard ANN           | -       | [28]      | 1.71       | 0.76        | 0.98
Differential ANN       | -       | [28]      | 4.48       | 0.90        | 2.95
Linear SVM             | -       | This work | 3.84       | 0.66        | 1.78
Cubic SVM              | -       | This work | 7.50       | 1.04        | 2.04
Table 7. Algorithm cost functions.

Method      | Cost Function
PCA         | $f_1(x) = \sum_{i=1}^{p} \phi_i \, x_i$
Linear SVM  | $f_2(z) = b + \sum_{i=1}^{N} w_i \cdot z_i$
Cubic SVM   | $f_3(z) = \sum_{i=1}^{N} \alpha_i \, y_i \, (1 + w_i \cdot z_i)^3$
ANN ([28])  | $f_4(x) = f\left(\sum_{j=1}^{N} w_j \, x_j + b\right)$
Table 8. Computational cost of the neural network.

Computational Cost (ANN) | 12,13C Sums | 12,13C Products | 36,40Ar Sums | 36,40Ar Products | 80,84Kr Sums | 80,84Kr Products
Hidden layer 1           | 1600        | 824             | 3200         | 1624             | 4800         | 2424
Hidden layer 2           | 128         | 88              | 128          | 88               | 128          | 88
Output layer             | 30          | 16              | 30           | 16               | 30           | 16
Table 9. Computational cost of the neural network and proposed methods for comparison.

Computational Cost | Functions     | 12,13C (4-PCA) Sums | Products | 36,40Ar (6-PCA) Sums | Products | 80,84Kr (4-PCA) Sums | Products
PCA + linear SVM   | f1(x) + f2(x) | 304                 | 404      | 1006                 | 1206     | 904                  | 1204
PCA + cubic SVM    | f1(x) + f3(x) | 307                 | 420      | 1011                 | 1230     | 907                  | 1220
ANN                | f4(x)         | 928                 | 1758     | 1728                 | 3358     | 4958                 | 2528