Article

An Accurate Multiple Sclerosis Detection Model Based on Exemplar Multiple Parameters Local Phase Quantization: ExMPLPQ

1 Department of Radiology, Beyhekim Training and Research Hospital, Konya 42001, Turkey
2 Vocational School of Technical Sciences, Firat University, Elazig 23119, Turkey
3 Department of Neurology, School of Medicine, Malatya Turgut Ozal University, Malatya 44100, Turkey
4 Department of Engineering and Mathematics, Sheffield Hallam University, Sheffield S1 1WB, UK
5 School of Business (Information System), University of Southern Queensland, Toowoomba, QLD 4350, Australia
6 Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
7 Department of Digital Forensics Engineering, Technology Faculty, Firat University, Elazig 23119, Turkey
8 Department of Cardiology, National Heart Centre Singapore, Singapore 599489, Singapore
9 Duke-NUS Medical School, Singapore 169857, Singapore
10 Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
11 Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore 599491, Singapore
12 Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 413, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(10), 4920; https://doi.org/10.3390/app12104920
Submission received: 7 April 2022 / Revised: 6 May 2022 / Accepted: 10 May 2022 / Published: 12 May 2022
(This article belongs to the Special Issue Decision Support Systems for Disease Detection and Diagnosis)

Abstract
Multiple sclerosis (MS) is a chronic demyelinating condition characterized by plaques in the white matter of the central nervous system that can be detected using magnetic resonance imaging (MRI). Many deep learning models for automated MS detection based on MRI have been presented in the literature. We developed a computationally lightweight machine learning model for MS diagnosis using a novel handcrafted feature engineering approach. The study dataset comprised axial and sagittal brain MRI images that were prospectively acquired from 72 MS and 59 healthy subjects who attended the Ozal University Medical Faculty in 2021. The dataset was divided into three study subsets: axial images only (n = 1652), sagittal images only (n = 1775), and combined axial and sagittal images (n = 3427) of both MS and healthy classes. All images were resized to 224 × 224. Subsequently, the features were generated with a fixed-size patch-based (exemplar) feature extraction model based on local phase quantization (LPQ) with three-parameter settings. The resulting exemplar multiple parameters LPQ (ExMPLPQ) features were concatenated to form a large final feature vector. The top discriminative features were selected using iterative neighborhood component analysis (INCA). Finally, a k-nearest neighbor (kNN) algorithm, Fine kNN, was deployed to perform binary classification of the brain images into MS vs. healthy classes. The ExMPLPQ-based model attained 98.37%, 97.75%, and 98.22% binary classification accuracy rates for axial, sagittal, and hybrid datasets, respectively, using Fine kNN with 10-fold cross-validation. Furthermore, our model outperformed 19 established pre-trained deep learning models that were trained and tested with the same data. Unlike deep models, the ExMPLPQ-based model is computationally lightweight yet highly accurate. It has the potential to be implemented as an automated diagnostic tool to screen brain MRIs for white matter lesions in suspected MS patients.

1. Introduction

Multiple sclerosis (MS) is a chronic autoimmune inflammatory disease characterized by demyelination and axonal loss in central nervous system neurons of the brain, spine, and optic nerves [1] that affects approximately 2.8 million people worldwide [2]. Disease incidence and prevalence show secular increasing trends that are non-homogeneously distributed across populations owing to complex gene-gene and gene-environment interactions, with incidence rising particularly among females. Uncertainties regarding the ascertainment of MS diagnosis and censoring of survival data after diagnosis may confound accurate estimations of incidence and prevalence rates, respectively [3]. Notwithstanding this, it remains indisputable that MS has exacted high economic costs on the healthcare systems of both developed [4] and low- to middle-income countries [5]. The main drivers of healthcare resource utilization are the costs of disease-modifying therapies (DMTs) and the non-medical costs associated with the management of chronic disability in the early and advanced stages of the disease, respectively [4,5]. While patients and their families frequently bear the costs of non-medical interventions, these interventions are nevertheless associated with increased long-term medical and total societal costs [4]. This underscores the need for early diagnosis and intervention with DMTs, which can potentially control disease progression [1,6,7] and have been shown to be cost-effective in improving patients’ quality of life [8]. This provides ample justification and motivation for the ongoing quest for more sensitive and accurate methods of MS diagnosis.
There is no single pathognomonic clinical or laboratory finding that can secure a definitive diagnosis of MS. Rather, the diagnosis is made based on consensus clinical, imaging, and laboratory criteria [1,6]. The 2017 McDonald criteria [9] define typical clinical signs and symptoms as well as lesions on magnetic resonance imaging (MRI) [10] that manifest in time and space, which can be combined with auxiliary examination findings (cerebrospinal fluid examination, visual and somatosensory evoked potentials) to establish the diagnosis of MS. Of note, MRI plays an instrumental role in the identification and localization of characteristic demyelinating plaques in the white matter of the brain and spine that constitute the pathological basis of MS and underpin its neurological presentations [10,11,12]. In particular, T2-weighted fluid-attenuated inversion recovery (FLAIR) MRI [13] offers the optimal contrast-to-noise image signal properties for sensitive detection of plaque lesions and is routinely performed for anatomical MRI screening of the central nervous system in suspected cases of MS [12]. Interpretation of MRI images requires experts to manually scrutinize multiple contiguous image sections for the presence of white matter lesions, with care being taken to distinguish MS plaques from lesions associated with diseases that present with similar symptoms, e.g., ischemic gliosis and central nervous system vasculitis [14,15,16]. MS plaques are hyperintense on T2-weighted sequences, oval-to-round in shape, at least 3 mm in size, and asymmetrically distributed.
The typical locations of MS plaques are as follows: (1) periventricular: adjacent to the lateral ventricles; (2) juxtacortical and cortical: localized to the U fibers; (3) infratentorial: located unilaterally or bilaterally paramedian, adjacent to the brain stem, cerebellum, and fourth ventricle; (4) spinal cord: cervical and thoracic, shorter than two vertebral segments, wedge-shaped on axial and cigar-shaped on sagittal sections, localized in the peripherally located posterior and lateral columns [12]. The process of clinical scan reading is time-intensive, fatiguing, and susceptible to intra- and inter-observer variability. These limitations provide an opportunity for harnessing the power of artificial intelligence (AI) to screen large numbers of MRI images for MS, which can be posed as a problem of classifying images with and without white matter lesions [17]. AI mimics human intelligence to perform tasks and can progressively attain higher accuracy as more information is collected [18]. These desirable traits have spurred the adoption of AI methods in many healthcare applications, which can potentially ease the workload of medical and paramedical personnel. To this end, AI has been used for computer-aided MS diagnosis [19,20] and prognostication of disease progression [19,20]. Accurate AI-enabled MS detection promises earlier diagnosis and treatment initiation with DMTs, better disease surveillance, and more efficient utilization of healthcare resources. However, questions remain about the reliability and practicality of AI-enabled MS detection.
Representative MS detection studies from the literature are summarized below.
Storelli et al. [21] analyzed the MRIs of 373 MS patients using a CNN model and attained accuracy rates of 83.3%, 67.7%, and 85.7% for clinical, cognitive, and combined clinical plus cognitive diagnoses, respectively. The parameter values and optimization methods used in the CNN architecture negatively affected the classification results. Alijamaat et al. [22] proposed a method that incorporated a two-dimensional discrete Haar wavelet transform and a CNN to study the MRIs of 38 patients and 20 healthy individuals and attained sensitivity, specificity, precision, and accuracy of 99.14%, 98.89%, 99.43%, and 99.05%, respectively. Oliveira et al. [23] proposed a method for measuring plaque volume using MRIs from four different datasets; their method achieved 99.69% accuracy, 98.51% precision, 98.51% sensitivity, and 99.85% specificity. Narayana et al. [24] studied T1, T2, and FLAIR MRIs of 489 healthy and 519 MS patients. Using the Vgg16+FCN network structure, they attained 72% sensitivity, 70% specificity, and 70% accuracy. Barquero et al. [25] studied MRIs of 124 MS patients. Using the RimNet CNN architecture, they attained an F1-score of 62.3%, sensitivity of 75.8%, specificity of 95.1%, and accuracy of 93.8%. Ye et al. [26] used diffusion-based spectrum imaging techniques to study the MRIs of 38 MS patients. Using a deep neural network, they attained an F1-score of 97.3%, sensitivity of 99.1%, specificity of 97.3%, and accuracy of 93.4%. Vogelsanger and Federau [27] studied a large dataset of 1855 healthy MRIs, 2910 MRIs from 616 MS patients, and 639 MRIs from 625 leukoencephalopathy patients, attaining precision and recall rates of 92% and 89%, respectively. Shrwan and Gupta [28] studied the MRIs of 38 MS patients using a 2D-CNN network and attained 99.55% accuracy, 99.15% precision, and 99.15% recall. Afzal et al. [29] used 127 scans from the Medical Image Computing and Computer-Assisted Intervention 2016 and International Symposium on Biomedical Imaging 2015 datasets; in their segmentation study, they attained a Dice similarity coefficient of 67%, sensitivity of 48%, and precision of 90%. Afzal et al. [30] conducted two experiments on the MRIs of 21 MS patients using a CNN model and attained 83.3% and 100% accuracy rates in the first and second experiments, respectively.
The datasets in the studies presented above [22,28,29,30] are small. In Storelli et al. [21] and Narayana et al. [24], the datasets are large, but the reported accuracy rates are low. In [21,22,23,24,25,26,27,28,30], the computational complexity is high. With our work on an accurate MS detection model, we attempt to address some of these issues. Our solution takes the form of a reliable machine learning model for accurately classifying FLAIR images of the brain into MS and non-MS classes. To this end, we created a novel exemplar feature engineering algorithm based on local phase quantization (LPQ), which we named exemplar multiple parameters LPQ (ExMPLPQ). The model was created by fusing ExMPLPQ with a machine learning algorithm and training and testing it on a prospectively acquired brain MRI dataset. The model comprises four phases: (1) brain MRI image segmentation; (2) exemplar feature extraction using LPQ; (3) feature selection using iterative neighborhood component analysis (INCA); and (4) classification using a shallow k-nearest neighbor (kNN) classifier. The contributions of our work are as follows:
  • A prospective brain MRI dataset was collected to train and test the proposed ExMPLPQ model. The dataset has been made publicly available.
  • The handcrafted ExMPLPQ model attained over 97% classification accuracy on the study dataset. In addition, our results for MS detection were demonstrably superior to 19 state-of-the-art pretrained methods, which included transfer learning and deep learning models.

2. Materials and Methods

This section describes the study dataset and the proposed method, including the feature extraction and classification processes.

2.1. Materials

The study dataset comprised axial and sagittal FLAIR MRI images of the brain that were prospectively acquired from 72 MS and 59 non-diseased “healthy” male and female patients who attended the Ozal University Medical Faculty in 2021. The institutional ethics committee had approved the study. Medical experts read the FLAIR image sections. From the 72 MS patients, 1441 axial and sagittal brain image sections containing identifiable MS lesions were assigned to the MS class, and from the 59 non-diseased patients, 2016 axial and sagittal image sections with normal appearance, i.e., without white matter lesions, were assigned to the healthy class (Table 1). For binary classification into MS vs. healthy classes, three study data subsets were created, comprising axial images only (n = 1652), sagittal images only (n = 1775) (Figure 1), and combined axial and sagittal images (n = 3427) (Table 1). The dataset can be downloaded at: https://www.kaggle.com/datasets/buraktaci/multiple-sclerosis (accessed on 7 April 2022).

2.2. Transfer Learning-Based Feature Engineering Model

The ExMPLPQ model combines the desirable properties of exemplar-based and multiple-parameter feature extraction. LPQ [31], a popular textural feature extractor, was deployed to generate textural features. These were fed to an INCA selector to select the top discriminative features. A kNN classifier [32] was employed to classify the MS vs. non-MS classes (Figure 2).
The pseudocode of the proposed method is presented in Algorithm 1.
Algorithm 1. Detailed flow of the ExMPLPQ technique
Input: The MS dataset with 3427 MRIs.
Output: Classification results.
00: Load the MS dataset
01: for k = 1 to 3427 do
02:   I = MS(k); // read the kth image from the MS dataset
03:   res = resize(I, 224, 224); // resize the image to 224 × 224
04:   X(k, 1:768) = [lpq(res,3) lpq(res,5) lpq(res,7)]; // 50th (whole-image) feature vector
05:   counter = 1;
06:   for i = 1 to 224 step 32 do
07:     for j = 1 to 224 step 32 do
08:       exm = res(i:i+31, j:j+31); // 32 × 32 exemplar patch
09:       X(k, counter*768+1:(counter+1)*768) = [lpq(exm,3) lpq(exm,5) lpq(exm,7)];
10:       counter = counter + 1;
11:     end for j
12:   end for i
13: end for k
14: Apply INCA to the generated features.
15: Forward the selected features to the kNN classifier.
16: Obtain the predicted values.
The processing steps of the ExMPLPQ algorithm are described below.
Step 1: Read each image from the collected MRI study data subsets.
Step 2: Resize each MRI image to 224 pixels × 224 pixels.
Step 3: Divide each resized image into 49 (=7 × 7) 32 × 32 sized patches/exemplars.
Ex_t(ii, jj) = Im(ii + 32 × (i − 1), jj + 32 × (j − 1)), i ∈ {1, 2, …, 7}, j ∈ {1, 2, …, 7}, t ∈ {1, 2, …, 49}, ii ∈ {1, 2, …, 32}, jj ∈ {1, 2, …, 32}
where Ex_t represents the tth fixed-size patch of size 32 × 32, and Im is the resized image of size 224 × 224.
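To make Step 3 concrete, the patch division can be sketched in a few lines of Python (an illustrative example only, assuming a single-channel image already resized to 224 × 224; `split_exemplars` is a hypothetical helper, not part of the authors' MATLAB implementation):

```python
import numpy as np

def split_exemplars(im: np.ndarray, patch: int = 32) -> list:
    """Divide a resized image into non-overlapping patch x patch exemplars,
    scanned row by row (t = 1, ..., 49 for a 224 x 224 input)."""
    h, w = im.shape
    return [im[i:i + patch, j:j + patch]
            for i in range(0, h, patch)
            for j in range(0, w, patch)]

im = np.zeros((224, 224), dtype=np.uint8)  # stand-in for a resized MRI section
patches = split_exemplars(im)
assert len(patches) == 49 and patches[0].shape == (32, 32)
```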
Step 4: Generate features by applying the LPQ feature extractor function.
feat1 = LPQ(Ex_t, 3 × 3), t ∈ {1, 2, …, 49}
feat2 = LPQ(Ex_t, 5 × 5)
feat3 = LPQ(Ex_t, 7 × 7)
f_t = conc(feat1, feat2, feat3)
where feat represents the LPQ features generated by deploying the LPQ(·,·) function with windows of varying sizes, and conc(·,·,·) is the concatenation function. Each of the three feat vectors has 256 features, which are concatenated to form f_t with a length of 768.
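A simplified LPQ can be sketched in Python to make Step 4 concrete. This is an illustrative variant under stated assumptions: it omits the decorrelation/whitening step of the original LPQ formulation, uses NumPy only, and the function names `lpq_histogram` and `mplpq_features` are hypothetical, not the authors' implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lpq_histogram(img: np.ndarray, win: int = 3) -> np.ndarray:
    """Simplified LPQ: quantize the signs of the real and imaginary parts of
    4 local STFT coefficients into an 8-bit code per pixel, then return the
    normalized 256-bin histogram of codes."""
    img = img.astype(np.float64)
    a = 1.0 / win
    x = np.arange(win) - (win - 1) / 2
    e = np.exp(-2j * np.pi * a * x)            # 1-D complex basis at frequency a
    dc = np.ones(win)
    # 2-D STFT kernels at frequencies (a,0), (0,a), (a,a), (a,-a)
    kernels = [np.outer(dc, e), np.outer(e, dc),
               np.outer(e, e), np.outer(np.conj(e), e)]
    patches = sliding_window_view(img, (win, win))      # all local neighborhoods
    coeffs = [np.tensordot(patches, k, axes=([2, 3], [0, 1])) for k in kernels]
    parts = np.stack([c.real for c in coeffs] + [c.imag for c in coeffs], axis=-1)
    codes = (parts > 0).astype(int) @ (2 ** np.arange(8))   # 8 sign bits -> 0..255
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def mplpq_features(img: np.ndarray) -> np.ndarray:
    """Concatenate LPQ histograms for 3x3, 5x5 and 7x7 windows (length 768)."""
    return np.concatenate([lpq_histogram(img, w) for w in (3, 5, 7)])
```

Applying `mplpq_features` to a 32 × 32 exemplar or the whole resized image yields a length-768 vector, matching the three concatenated 256-bin histograms described above.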
Step 5: Extract the 50th feature vector by applying the same LPQ extractors to the whole resized image.
feat1 = LPQ(Im, 3 × 3)
feat2 = LPQ(Im, 5 × 5)
feat3 = LPQ(Im, 7 × 7)
f_50 = conc(feat1, feat2, feat3)
Step 6: Merge the generated feature vectors (f_i) to create the final feature vector.
fv(j + 768 × (i − 1)) = f_i(j), j ∈ {1, 2, …, 768}, i ∈ {1, 2, …, 50}
where fv represents the final feature vector with a length of 38,400 (= 768 × 50) generated from each MRI image.
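The merge step's indexing simply places feature j of vector i at position j + 768 × (i − 1); a minimal sketch, with random placeholder vectors standing in for the actual LPQ features:

```python
import numpy as np

rng = np.random.default_rng(0)
# f_1 ... f_49 from the 32 x 32 exemplars, f_50 from the whole resized image
f = [rng.random(768) for _ in range(50)]
fv = np.concatenate(f)              # fv[j + 768*(i-1)] == f_i(j)
assert fv.shape == (38400,)         # 768 x 50
assert np.array_equal(fv[768:1536], f[1])   # block i = 2 occupies positions 769..1536
```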
The next processing steps involve the INCA feature selection function, during which the algorithm selects 403, 716, and 944 features for the axial, sagittal, and hybrid MRI study data subsets, respectively.
Step 7: Calculate the sorted feature indexes by applying the neighborhood component analysis (NCA) [33] selector.
ind = ψ(fv, y)
where ψ represents the NCA feature selection function; ind, the indexes sorted according to distinctiveness; and y, the actual labels.
Step 8: Apply iterative feature selection using the calculated indexes (ind), and calculate the loss value of each selected feature vector by deploying the kNN classifier. In this work, 901 candidate feature vectors were generated (iterating over lengths from 100 to 1000), and kNN calculates each feature vector’s loss/misclassification rate to choose the best/optimal feature vector. This process is given below.
l_i = κ(fv(:, ind(t)), y, kf), t ∈ {1, 2, …, i + 99}, i ∈ {1, 2, …, 901}
where l_i represents the loss value of the ith selected feature vector, and κ the kNN classifier. κ incorporates three parameters: the feature vector, the actual output (y), and the validation scheme (kf). In this work, kf is 10-fold cross-validation.
Step 9: Select the best feature vector using the calculated loss values in Step 8.
[mini, idx] = min(l)
final = fv(:, ind(g)), g ∈ {1, 2, …, idx + 99}
where final represents the selected best feature vector; mini, the minimum loss value; and idx, its index. The index (idx) identifies the feature vector with the maximum classification accuracy: it is calculated in Equation (13), and the corresponding feature vector (final) is selected using Equation (14).
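The iterative selection loop of Steps 7–9 can be sketched with scikit-learn. This is an illustrative stand-in, not the authors' MATLAB code: mutual information replaces the NCA ranker for brevity, the sweep range is shrunk from the paper's 100–1000 features to suit a small synthetic example, and `inca_select` is a hypothetical name:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def inca_select(X, y, lo=10, hi=60, step=10):
    """Rank features, then sweep candidate subset sizes and keep the subset
    with the lowest 10-fold kNN misclassification rate (loss)."""
    order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]
    best_loss, best_n = 1.0, lo
    for n in range(lo, hi + 1, step):
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                              X[:, order[:n]], y, cv=10).mean()
        if 1.0 - acc < best_loss:
            best_loss, best_n = 1.0 - acc, n
    return order[:best_n], best_loss

# toy data: two informative features out of 80
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 80))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
idx, loss = inca_select(X, y)
```

The same structure scales to the paper's setting by widening the sweep to lengths 100 through 1000 (901 candidate subsets) over the 38,400 ExMPLPQ features.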
In the last step, classification is performed.
Step 10: Classify the chosen final feature vector (final) by deploying a shallow kNN algorithm, Fine kNN [34], with 10-fold cross-validation. Of note, Fine kNN was used both for loss/misclassification rate calculations during INCA feature selection and for the final binary classification. The hyperparameters were set as follows: k = 1; distance function, Spearman; voting, none.
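A scikit-learn equivalent of this Fine kNN configuration might look as follows (a sketch under stated assumptions, not the MATLAB implementation; the Spearman distance is supplied as a custom metric computed as 1 minus the rank correlation):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.neighbors import KNeighborsClassifier

def spearman_distance(u, v):
    """Distance between two feature vectors: 1 - Spearman rank correlation."""
    return 1.0 - spearmanr(u, v)[0]

# k = 1 neighbor with a custom Spearman metric (brute-force neighbor search)
clf = KNeighborsClassifier(n_neighbors=1, metric=spearman_distance,
                           algorithm="brute")

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))        # toy feature vectors
y = (X[:, 0] > 0).astype(int)        # toy binary labels
clf.fit(X[:30], y[:30])
pred = clf.predict(X[30:])
assert pred.shape == (10,)
```

Note that a callable metric forces pairwise distance evaluation in Python, which is slow for large feature sets; it is shown here only to mirror the Spearman setting of Step 10.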
As can be seen from these 10 steps above, the proposed local phase quantization-based MR image classification is a parametric method. The used parameters in this work are tabulated in Table 2.
As shown in Table 2, our proposed architecture mimics a deep model, but it uses handcrafted features. We selected a patch size of 32 × 32 to extract features from local areas and used an effective feature extractor that generates textural features from both the spatial and frequency domains. Moreover, the extractor is parametric; we used three window-size parameters to exploit their combined effectiveness.

3. Performance Analysis

MATLAB 2021b was used to implement the ExMPLPQ model algorithms. The implementation was structured as a set of modular functions (main, pre-processor (exemplar division function), Inception local phase quantization (ILPQ), INCA, and kNN). The model was trained and tested with three study data subsets comprising axial images only, sagittal images only, and combined axial and sagittal images. The model performance was compared against 19 pre-trained models, in which kNN was deployed as the classifier, and the classification was repeated 100 times for each pre-trained model. The ExMPLPQ algorithm possessed low computational complexity and was executed on a desktop personal computer with Windows 10 Pro OS, an 11th-generation Intel i9 processor, 32 GB of memory, and a 1.5 TB hard disk, without parallel processing or graphics processing. Standard performance metrics (precision, sensitivity, specificity [35,36], F1-score, accuracy, and Matthews correlation coefficient (MCC) [35,36]) were used to evaluate ExMPLPQ as well as the 19 pre-trained models.
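For reference, the listed metrics follow directly from binary confusion-matrix counts; a short sketch with illustrative counts (not values from the study):

```python
import numpy as np

def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)              # a.k.a. recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return dict(precision=precision, sensitivity=sensitivity,
                specificity=specificity, accuracy=accuracy, f1=f1, mcc=mcc)

m = binary_metrics(tp=95, fp=5, tn=90, fn=10)
assert round(m["accuracy"], 3) == 0.925
```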

4. Results

Our model attained excellent binary classification performance with >97% accuracy and >95% performance across all standard evaluation metrics (Table 3), as well as relatively low rates of misclassification in all three study data subsets (Figure 3).
The proposed model was run 100 times for axial, sagittal, and hybrid data using a Fine kNN classifier with 10-fold cross-validation. The mean ± standard deviations are tabulated in Table 4.

5. Discussion

Since the 2000s, deep learning AI techniques have become very popular for diverse applications due to their high performance [37,38,39]. They have been applied in the biomedical field with notable success [40,41,42,43]. Some of these algorithms can feasibly be implemented in the clinical environment as medical decision support systems to assist physicians and/or paramedical personnel. Unfortunately, deep learning models are computationally intensive and demand high time costs for parameter tuning. This study proposed a computationally lightweight machine learning model with handcrafted feature engineering for diagnosing MS on FLAIR brain images. The performance of the proposed model was compared with the results of 19 pre-trained models, including deep learning and transfer learning models, that were also trained and tested on a common prospectively acquired brain FLAIR MRI dataset comprising three subsets of images in different orientations. Fixed-size patches of 32 × 32 were used to extract deep features using transfer learning. After extracting features from individual images in the study data subsets using the ExMPLPQ algorithm and the 19 pre-trained models, a kNN classifier with 10-fold cross-validation was used to perform binary classification of the input images into MS vs. non-MS classes. The proposed ExMPLPQ attained excellent performance with >97% accuracy across all three study data subsets (Table 3). In contrast, the accuracy results after 100 classification runs of the 19 pre-trained models, which incorporated between 1.24 million and 144 million parameters, were all inferior to those of our model (Table 4). Among the 19 pre-trained models, Efficient b0 had the highest accuracy rates of 93.69%, 90.26%, and 93.22% for the axial, sagittal, and hybrid images, respectively. In contrast, GoogleNet attained the lowest corresponding accuracy rates of 84.49%, 82.02%, and 85.62%.
Table 5 shows comparative results (classification accuracies) for transfer learning methods.
We performed a non-systematic review of the literature on methods for the automated classification of MS; these methods are summarized in Table 6. Most of the methods in the literature relied on deep learning, especially convolutional neural network (CNN) models, to attain high classification results. In contrast, we presented a handcrafted feature-engineering machine learning model for detecting MS on brain MRI. Our ExMPLPQ model attained over 97% binary classification accuracy on all three study data subsets. Only Wang et al. [58] attained a higher classification performance than our model, but their dataset had fewer subjects. In addition, they applied data augmentation to their dataset and used a deep learning model to attain high classification performance [58], which increased the model’s computational complexity. In contrast, our ExMPLPQ algorithm achieved high classification performance with low computational complexity.
Table 6 shows that our model attained superior classification results compared with other previously presented state-of-the-art methods. The salient points of the proposed ExMPLPQ algorithm are discussed below.
Benefits:
  • A new brain MRI dataset comprising three study data subsets was prospectively acquired to train and test the model. This dataset has been made publicly available.
  • By design, the ExMPLPQ model exploited the advantages of both exemplar and multiple parameters for feature extraction.
  • The best features were automatically selected for each of the three binary classification problems using INCA.
  • ExMPLPQ attained over 97% accuracies for all study data subsets.
  • ExMPLPQ attained better classification performance compared with 19 pre-trained CNNs. Of note, more than a million parameters needed to be assigned/optimized in the deep learning models, which increased their time complexity considerably.
  • The ExMPLPQ algorithm has a low time complexity of approximately O(n log n).
  • The base architecture of ExMPLPQ is parametric and is amenable to modification and optimization to create new models using variable patch sizes and updating of feature extractors, feature selectors, and classifiers.
Limitations:
  • The dataset was new, which precluded direct comparison with extant methods in the literature. Nevertheless, the common dataset was used to test the ExMPLPQ model and 19 comparator deep learning techniques, which demonstrated superior results for our model.
  • Only MS patients admitted to one hospital during one year (2021) were included in the study.
  • Patients with less than 9 MS lesions on brain MRIs were excluded from the study.
  • Patients with poor MRI image quality and motion artifacts were excluded from the study.
  • Patients under 18 years of age were excluded from the study.

6. Conclusions

In this research, we showed that a handcrafted computer vision model is highly accurate for detecting MS from brain MRI images. The proposed model used LPQ with overlapping 3 × 3, 5 × 5, and 7 × 7 blocks to extract features from resized brain images and fixed-size patches. The proposed ExMPLPQ was able to detect MS plaques on brain MRI automatically and with high accuracy, and it is computationally lightweight. It has the potential to be implemented in clinics to support high-throughput screening of brain MRI images in suspected MS cases. In future work, we hope to acquire larger brain MRI datasets to train and test our model, including MRIs from patients with other diseases, e.g., migraine and vasculitis, that may mimic MS. In addition, more efficient deep learning and handcrafted approaches can be combined with the current model, resulting in more effective learning.

Author Contributions

Conceptualization, G.M., B.T., I.T., O.F., P.D.B., S.D., T.T., R.-S.T. and U.R.A.; methodology, G.M., B.T., I.T., O.F., P.D.B., S.D., T.T., R.-S.T. and U.R.A.; software, S.D. and T.T.; validation, G.M., B.T., I.T., O.F., P.D.B., S.D. and T.T.; formal analysis, G.M., B.T., I.T., O.F., P.D.B., S.D. and T.T.; investigation, G.M., B.T., I.T., O.F., P.D.B., S.D., T.T., R.-S.T. and U.R.A.; resources, G.M., B.T., I.T. and O.F.; data curation, G.M., B.T., I.T. and O.F.; writing—original draft preparation, G.M., B.T., I.T., O.F., P.D.B., S.D., T.T., R.-S.T. and U.R.A.; writing—review and editing, G.M., B.T., I.T., O.F., P.D.B., S.D., T.T., R.-S.T. and U.R.A.; visualization, G.M., B.T. and I.T.; supervision, U.R.A.; project administration, U.R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This research has been approved on ethical grounds by the Non-Interventional Research Ethics Board Decisions, Turgut Ozal University on 3 March 2022 (2022/03).

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Garg, N.; Smith, T.W. An update on immunopathogenesis, diagnosis, and treatment of multiple sclerosis. Brain Behav. 2015, 5, e00362. [Google Scholar] [CrossRef] [PubMed]
  2. Multiple sclerosis under the spotlight. Lancet Neurol. 2021, 20, 497. [Google Scholar]
  3. Koch-Henriksen, N.; Sørensen, P.S. The changing demographic pattern of multiple sclerosis epidemiology. Lancet Neurol. 2010, 9, 520–532. [Google Scholar] [CrossRef]
  4. Nicholas, R.S.; Heaven, M.L.; Middleton, R.M.; Chevli, M.; Pulikottil-Jacob, R.; Jones, K.H.; Ford, D.V. Personal and societal costs of multiple sclerosis in the UK: A population-based MS Registry study. Mult. Scler. J. Exp. Transl. Clin. 2020, 6, 2055217320901727. [Google Scholar] [CrossRef]
  5. Dahham, J.; Rizk, R.; Kremer, I.; Evers, S.M.; Hiligsmann, M. Economic burden of multiple sclerosis in low-and middle-income countries: A systematic review. Pharmacoeconomics 2021, 39, 789–807. [Google Scholar] [CrossRef]
  6. McGinley, M.P.; Goldschmidt, C.H.; Rae-Grant, A.D. Diagnosis and treatment of multiple sclerosis: A review. JAMA 2021, 325, 765–779. [Google Scholar] [CrossRef]
  7. Vargas, D.L.; Tyor, W.R. Update on disease-modifying therapies for multiple sclerosis. J. Investig. Med. 2017, 65, 883–891. [Google Scholar] [CrossRef]
  8. Noyes, K.; Bajorska, A.; Chappel, A.; Schwid, S.; Mehta, L.; Weinstock-Guttman, B.; Holloway, R.; Dick, A. Cost-effectiveness of disease-modifying therapy for multiple sclerosis: A population-based study. Neurology 2011, 77, 355–363. [Google Scholar] [CrossRef] [Green Version]
9. Thompson, A.J.; Banwell, B.L.; Barkhof, F.; Carroll, W.M.; Coetzee, T.; Comi, G.; Correale, J.; Fazekas, F.; Filippi, M.; Freedman, M.S. Diagnosis of multiple sclerosis: 2017 revisions of the McDonald criteria. Lancet Neurol. 2018, 17, 162–173.
10. Rovira, À.; León, A. MR in the diagnosis and monitoring of multiple sclerosis: An overview. Eur. J. Radiol. 2008, 67, 409–414.
11. De Stefano, N.; Matthews, P.M.; Antel, J.P.; Preul, M.; Francis, G.; Arnold, D.L. Chemical pathology of acute demyelinating lesions and its correlation with disability. Ann. Neurol. 1995, 38, 901–909.
12. Filippi, M.; Preziosa, P.; Banwell, B.L.; Barkhof, F.; Ciccarelli, O.; De Stefano, N.; Geurts, J.J.; Paul, F.; Reich, D.S.; Toosy, A.T. Assessment of lesions on magnetic resonance imaging in multiple sclerosis: Practical guidelines. Brain 2019, 142, 1858–1875.
13. Sati, P.; George, I.C.; Shea, C.D.; Gaitán, M.I.; Reich, D.S. FLAIR*: A combined MR contrast technique for visualizing white matter lesions and parenchymal veins. Radiology 2012, 265, 926–932.
14. Rolak, L.A.; Fleming, J.O. The differential diagnosis of multiple sclerosis. Neurologist 2007, 13, 57–72.
15. Mader, I.; Rauer, S.; Gall, P.; Klose, U. 1H MR spectroscopy of inflammation, infection and ischemia of the brain. Eur. J. Radiol. 2008, 67, 250–257.
16. Morgen, K.; McFarland, H.F.; Pillemer, S.R. Central nervous system disease in primary Sjögren’s syndrome: The role of magnetic resonance imaging. Semin. Arthritis Rheum. 2004, 34, 623–630.
17. Shoeibi, A.; Khodatars, M.; Jafari, M.; Moridian, P.; Rezaei, M.; Alizadehsani, R.; Khozeimeh, F.; Gorriz, J.M.; Heras, J.; Panahiazar, M.; et al. Applications of deep learning techniques for automated multiple sclerosis detection using magnetic resonance imaging: A review. Comput. Biol. Med. 2021, 136, 104697.
18. Ranschaert, E.R.; Morozov, S.; Algra, P.R. Artificial Intelligence in Medical Imaging: Opportunities, Applications and Risks; Springer: Berlin/Heidelberg, Germany, 2019.
19. Schwab, P.; Karlen, W. A deep learning approach to diagnosing multiple sclerosis from smartphone data. IEEE J. Biomed. Health Inform. 2020, 25, 1284–1291.
20. Tousignant, A.; Lemaître, P.; Precup, D.; Arnold, D.L.; Arbel, T. Prediction of disease progression in multiple sclerosis patients using deep learning analysis of MRI data. In Proceedings of the International Conference on Medical Imaging with Deep Learning, Lübeck, Germany, 7–9 July 2021; pp. 483–492.
21. Storelli, L.; Azzimonti, M.; Gueye, M.; Vizzino, C.; Preziosa, P.; Tedeschi, G.; De Stefano, N.; Pantano, P.; Filippi, M.; Rocca, M.A. A Deep Learning Approach to Predicting Disease Progression in Multiple Sclerosis Using Magnetic Resonance Imaging. Investig. Radiol. 2022.
22. Alijamaat, A.; NikravanShalmani, A.; Bayat, P. Multiple sclerosis identification in brain MRI images using wavelet convolutional neural networks. Int. J. Imaging Syst. Technol. 2021, 31, 778–785.
23. De Oliveira, M.; Piacenti-Silva, M.; Rocha, F.C.G.d.; Santos, J.M.; Cardoso, J.d.S.; Lisboa-Filho, P.N. Lesion Volume Quantification Using Two Convolutional Neural Networks in MRIs of Multiple Sclerosis Patients. Diagnostics 2022, 12, 230.
24. Narayana, P.A.; Coronado, I.; Sujit, S.J.; Wolinsky, J.S.; Lublin, F.D.; Gabr, R.E. Deep learning for predicting enhancing lesions in multiple sclerosis from noncontrast MRI. Radiology 2020, 294, 398–404.
25. Barquero, G.; La Rosa, F.; Kebiri, H.; Lu, P.-J.; Rahmanzadeh, R.; Weigel, M.; Fartaria, M.J.; Kober, T.; Théaudin, M.; Du Pasquier, R. RimNet: A deep 3D multimodal MRI architecture for paramagnetic rim lesion assessment in multiple sclerosis. NeuroImage Clin. 2020, 28, 102412.
26. Ye, Z.; George, A.; Wu, A.T.; Niu, X.; Lin, J.; Adusumilli, G.; Naismith, R.T.; Cross, A.H.; Sun, P.; Song, S.K. Deep learning with diffusion basis spectrum imaging for classification of multiple sclerosis lesions. Ann. Clin. Transl. Neurol. 2020, 7, 695–706.
27. Vogelsanger, C.; Federau, C. Latent space analysis of VAE and Intro-VAE applied to 3-dimensional MR brain volumes of multiple sclerosis, leukoencephalopathy, and healthy patients. arXiv 2021, arXiv:2101.06772.
28. Shrwan, R.; Gupta, A. Classification of Pituitary Tumor and Multiple Sclerosis Brain Lesions through Convolutional Neural Networks. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1049, 012014.
29. Afzal, H.R.; Luo, S.; Ramadan, S.; Lechner-Scott, J.; Amin, M.R.; Li, J.; Afzal, M.K. Automatic and robust segmentation of multiple sclerosis lesions with convolutional neural networks. Comput. Mater. Contin. 2021, 66, 977–991.
30. Afzal, H.R.; Luo, S.; Ramadan, S.; Lechner-Scott, J.; Li, J. Automatic prediction of the conversion of clinically isolated syndrome to multiple sclerosis using deep learning. In Proceedings of the 2018 the 2nd International Conference on Video and Image Processing, Hong Kong, China, 29–31 December 2018; pp. 231–235.
31. Rahtu, E.; Heikkilä, J.; Ojansivu, V.; Ahonen, T. Local phase quantization for blur-insensitive image analysis. Image Vis. Comput. 2012, 30, 501–512.
32. Peterson, L.E. K-nearest neighbor. Scholarpedia 2009, 4, 1883.
33. Goldberger, J.; Hinton, G.E.; Roweis, S.; Salakhutdinov, R.R. Neighbourhood components analysis. Adv. Neural Inf. Process. Syst. 2004, 17, 513–520.
34. Xu, Y.; Zhu, Q.; Fan, Z.; Qiu, M.; Chen, Y.; Liu, H. Coarse to fine K nearest neighbor classifier. Pattern Recognit. Lett. 2013, 34, 980–986.
35. Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061.
36. Warrens, M.J. On the equivalence of Cohen’s kappa and the Hubert-Arabie adjusted Rand index. J. Classif. 2008, 25, 177–183.
37. Bengio, Y.; Goodfellow, I.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2017; Volume 1.
38. Faust, O.; Hagiwara, Y.; Hong, T.J.; Lih, O.S.; Acharya, U.R. Deep learning for healthcare applications based on physiological signals: A review. Comput. Methods Programs Biomed. 2018, 161, 1–13.
39. Kora, P.; Ooi, C.P.; Faust, O.; Raghavendra, U.; Gudigar, A.; Chan, W.Y.; Meenakshi, K.; Swaraja, K.; Plawiak, P.; Acharya, U.R. Transfer learning techniques for medical image analysis: A review. Biocybern. Biomed. Eng. 2021, 42, 79–107.
40. Barbedo, J.G.A. Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Comput. Electron. Agric. 2018, 153, 46–53.
41. Cao, C.; Liu, F.; Tan, H.; Song, D.; Shu, W.; Li, W.; Zhou, Y.; Bo, X.; Xie, Z. Deep learning and its applications in biomedicine. Genom. Proteom. Bioinform. 2018, 16, 17–32.
42. Key, S.; Baygin, M.; Demir, S.; Dogan, S.; Tuncer, T. Meniscal Tear and ACL Injury Detection Model Based on AlexNet and Iterative ReliefF. J. Digit. Imaging 2022, 35, 200–212.
43. Demir, F.; Taşcı, B. An Effective and Robust Approach Based on R-CNN+LSTM Model and NCAR Feature Selection for Ophthalmological Disease Detection from Fundus Images. J. Pers. Med. 2021, 11, 1276.
44. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
45. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525.
46. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
47. Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8697–8710.
48. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
49. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
50. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–10 February 2017; pp. 4278–4284.
51. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 84–90.
52. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856.
53. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
54. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
55. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
56. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
57. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
58. Wang, S.-H.; Tang, C.; Sun, J.; Yang, J.; Huang, C.; Phillips, P.; Zhang, Y.-D. Multiple sclerosis identification by 14-layer convolutional neural network with batch normalization, dropout, and stochastic pooling. Front. Neurosci. 2018, 12, 818.
59. Plati, D.; Tripoliti, E.; Zelilidou, S.; Vlachos, K.; Konitsiotis, S.; Fotiadis, D.I. Multiple Sclerosis Severity Estimation and Progression Prediction Based on Machine Learning Techniques. In Proceedings of the 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, Glasgow, Scotland, UK, 11–15 July 2022.
60. Eitel, F.; Soehler, E.; Bellmann-Strobl, J.; Brandt, A.U.; Ruprecht, K.; Giess, R.M.; Kuchling, J.; Asseyer, S.; Weygandt, M.; Haynes, J.-D. Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation. NeuroImage Clin. 2019, 24, 102003.
61. Calimeri, F.; Marzullo, A.; Stamile, C.; Terracina, G. Graph based neural networks for automatic classification of multiple sclerosis clinical courses. In Proceedings of the ESANN, Bruges, Belgium, 25–27 April 2018.
62. Marzullo, A.; Kocevar, G.; Stamile, C.; Durand-Dubief, F.; Terracina, G.; Calimeri, F.; Sappey-Marinier, D. Classification of multiple sclerosis clinical profiles via graph convolutional neural networks. Front. Neurosci. 2019, 13, 594.
Figure 1. Example axial and sagittal FLAIR MRI sections in healthy and multiple sclerosis (MS) classes. Note the presence of hyperintense lesions in the brain’s white matter in the latter.
Figure 2. ExMPLPQ flow diagram.
Figure 3. Confusion matrices for binary classification using Fine kNN classifier.
Table 1. The attributes of the MRI dataset used.
Class | Male, n | Female, n | Total, n | Age, Years | Number of MRI Images, n
MS-Axial | 21 | 51 | 72 | 28.4 ± 5.66 | 650
MS-Sagittal | 21 | 51 | 72 | 28.4 ± 5.66 | 761
Healthy-Axial | 27 * | 30 * | 57 * | 29.5 ± 8.32 | 1002
Healthy-Sagittal | 29 * | 20 * | 49 * | 27.4 ± 6.48 | 1014
* There is an overlap of subjects in the healthy class, which comprises 29 males and 30 females.
Table 2. The used parameters.
Step | Parameter
Exemplar division | 32 × 32 patches are used to generate features from local areas.
LPQ | Extracts textural features with variable window parameters (3, 5, and 7 were used); the feature extractor is therefore named MPLPQ. Its prime purpose is to harness the effectiveness of LPQ across multiple window sizes.
Feature concatenation | The model extracts 768 features from each patch and from the raw image. In our architecture, 50 (= 49 + 1) feature vectors are created, each with 768 features, so the concatenated feature vector has 38,400 (= 768 × 50) features.
INCA | Loop range: from 100 to 1000; loss calculator: kNN
Classification using kNN | k: 1; distance: Spearman; voting: none
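The exemplar division, multi-window LPQ extraction, and concatenation steps summarized in Table 2 can be sketched as below. This is a minimal illustration, not the authors' implementation: the LPQ filters, border handling, and normalization are simplified, the function names (`lpq_histogram`, `exmplpq_features`) are our own, and a 224 × 224 input is assumed because it yields the paper's 49 patches plus the raw image.

```python
import numpy as np

def lpq_histogram(img, win=3):
    # Simplified LPQ sketch: STFT responses at four low frequencies over a
    # win x win neighborhood; the signs of the real/imaginary parts give an
    # 8-bit code per pixel, histogrammed into 256 bins.
    x = np.arange(win) - (win - 1) / 2
    a = 1.0 / win                                  # lowest non-zero frequency
    w0 = np.ones_like(x).astype(complex)           # DC basis
    w1 = np.exp(-2j * np.pi * a * x)               # frequency a
    # Separable filters for frequencies (0,a), (a,0), (a,a), (a,-a)
    filters = [(w0, w1), (w1, w0), (w1, w1), (w1, np.conj(w1))]
    img = img.astype(float)
    responses = []
    for wy, wx in filters:
        # separable 2-D convolution: rows, then columns ('valid' crop)
        tmp = np.apply_along_axis(lambda r: np.convolve(r, wx, 'valid'), 1, img)
        resp = np.apply_along_axis(lambda c: np.convolve(c, wy, 'valid'), 0, tmp)
        responses += [resp.real, resp.imag]
    code = np.zeros(responses[0].shape, dtype=int)
    for bit, r in enumerate(responses):            # 8 sign bits -> 0..255
        code += (r > 0).astype(int) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist

def exmplpq_features(image, patch=32, wins=(3, 5, 7)):
    # Exemplar division: the raw image plus its 32x32 patches; each block
    # yields 3 x 256 = 768 features (one 256-bin histogram per window size).
    blocks = [image]
    h, w = image.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            blocks.append(image[i:i + patch, j:j + patch])
    feats = [np.concatenate([lpq_histogram(b, s) for s in wins]) for b in blocks]
    return np.concatenate(feats)                   # 50 x 768 = 38,400 features
```

With a 224 × 224 slice, the loop produces 49 patches, so the concatenated vector has the 38,400 entries stated in Table 2, of which INCA then selects the most informative subset.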
Table 3. Calculated performance results (%) per classifier used.
Data Subset | Accuracy | Sensitivity | Specificity | Precision | F-Score | MCC
Axial | 98.37 | 96.46 | 99.60 | 99.37 | 97.89 | 96.59
Sagittal | 97.75 | 95.01 | 99.80 | 99.72 | 97.31 | 95.46
Hybrid | 98.22 | 96.39 | 99.50 | 99.27 | 97.81 | 96.34
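The Fine kNN decision rule used for the Table 3 results (k = 1, Spearman distance, no distance-weighted voting, per Table 2) can be sketched as below. The helper names are ours, and ties in the rank computation are broken by position rather than averaged as in SciPy's `spearmanr`.

```python
import numpy as np

def spearman_distance(a, b):
    # Spearman distance = 1 - Spearman rank correlation, i.e. Pearson
    # correlation computed on the ranks of each feature vector.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return 1.0 - (ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb))

def knn_predict(train_X, train_y, x, k=1):
    # k = 1 with plain majority voting ("voting: none" in Table 2)
    d = np.array([spearman_distance(t, x) for t in train_X])
    nearest = np.argsort(d)[:k]
    return np.bincount(train_y[nearest]).argmax()
```

Because the distance depends only on feature ranks, it is insensitive to monotonic intensity rescaling of the feature vectors, which is one plausible reason a rank-based metric pairs well with histogram features.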
Table 4. The general classification results (%) using a Fine kNN classifier with 10-fold cross-validation ± standard deviations.
Data Subset | Accuracy | Sensitivity | Specificity | Precision | F-Score | MCC
Axial | 98.35 ± 0.02 | 96.45 ± 0.03 | 99.59 ± 0.01 | 99.36 ± 0.02 | 97.88 ± 0.02 | 96.58 ± 0.04
Sagittal | 97.74 ± 0.019 | 95.00 ± 0.044 | 99.79 ± 0.054 | 99.71 ± 0.075 | 97.30 ± 0.024 | 95.45 ± 0.03
Hybrid | 98.20 ± 0.06 | 96.38 ± 0.06 | 99.49 ± 0.06 | 99.26 ± 0.04 | 97.80 ± 0.05 | 96.32 ± 0.012
MCC refers to the Matthews correlation coefficient.
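The metrics in Tables 3 and 4 follow the standard confusion-matrix definitions [35]. A sketch is given below; the axial counts TP = 627, FN = 23, TN = 998, FP = 4 are our inference from the class sizes in Table 1 and the axial row of Table 3, not values quoted from the paper.

```python
import math

def metrics(tp, fn, tn, fp):
    # Standard binary-classification metrics from confusion-matrix counts,
    # returned as percentages rounded to two decimals.
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)                      # sensitivity (recall)
    spe = tn / (tn + fp)                      # specificity
    pre = tp / (tp + fp)                      # precision
    f1 = 2 * pre * sen / (pre + sen)          # F-score
    mcc = (tp * tn - fp * fn) / math.sqrt(    # Matthews correlation coefficient
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    names = ("accuracy", "sensitivity", "specificity", "precision", "f_score", "mcc")
    return {k: round(100 * v, 2) for k, v in zip(names, (acc, sen, spe, pre, f1, mcc))}
```

With the assumed counts, `metrics(627, 23, 998, 4)` reproduces the axial row of Table 3 (accuracy 98.37, sensitivity 96.46, specificity 99.60, precision 99.37, F-score 97.89, MCC 96.59), which is consistent with 650 MS-axial and 1002 healthy-axial images.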
Table 5. Means ± standard deviations of accuracy results attained by the 19 comparator pre-trained models after 100 classification runs using a Fine kNN classifier with 10-fold cross-validation.
Number | Pre-Trained Model | Axial Accuracy | Sagittal Accuracy | Hybrid Accuracy
1 | GoogleNet [44] | 84.49 ± 0.53 | 82.02 ± 0.58 | 85.62 ± 0.35
2 | DarkNet53 [45] | 87.79 ± 0.47 | 86.49 ± 0.63 | 88.02 ± 0.32
3 | Inceptionv3 [46] | 89.06 ± 0.48 | 82.47 ± 0.62 | 88.19 ± 0.33
4 | NasnetLarge [47] | 86.20 ± 0.41 | 81.63 ± 0.52 | 88.22 ± 0.30
5 | NasnetMobile [47] | 87.48 ± 0.34 | 82.41 ± 0.56 | 88.56 ± 0.30
6 | VGG19 [48] | 87.80 ± 0.60 | 83.03 ± 0.56 | 88.58 ± 0.37
7 | VGG16 [48] | 88.54 ± 0.45 | 84.78 ± 0.54 | 88.79 ± 0.31
8 | Resnet101 [49] | 88.76 ± 0.47 | 85.86 ± 0.51 | 88.90 ± 0.32
9 | Inceptionresnetv2 [50] | 90.10 ± 0.38 | 84.18 ± 0.55 | 89.42 ± 0.27
10 | AlexNet [51] | 87.38 ± 0.57 | 84.57 ± 0.51 | 89.77 ± 0.34
11 | ShuffleNet [52] | 90.25 ± 0.54 | 86.25 ± 0.54 | 90.12 ± 0.35
12 | Resnet50 [49] | 90.81 ± 0.51 | 88.33 ± 0.46 | 90.15 ± 0.35
13 | Xception [53] | 91.35 ± 0.44 | 86.15 ± 0.51 | 90.24 ± 0.28
14 | Resnet18 [49] | 91.50 ± 0.42 | 85.77 ± 0.48 | 90.45 ± 0.32
15 | Darknet19 [45] | 89.90 ± 0.51 | 85.61 ± 0.54 | 90.57 ± 0.31
16 | MobileNetv2 [54] | 91.08 ± 0.46 | 85.70 ± 0.50 | 91.15 ± 0.31
17 | DenseNet201 [55] | 91.88 ± 0.53 | 87.75 ± 0.50 | 91.81 ± 0.30
18 | SqueezeNet [56] | 90.76 ± 0.53 | 86.42 ± 0.49 | 91.89 ± 0.32
19 | EfficientNet b0 [57] | 93.69 ± 0.45 | 90.26 ± 0.39 | 93.22 ± 0.28
Table 6. Comparison with other state-of-the-art MS brain MRI classification methods.
Study | Method | Dataset | Subjects | Results, %
Plati et al. [59] | Typographic error-based feature extraction, oversampling-based feature selection and classification | 78 records: 51 with EDSS 0–3.5, 18 with EDSS 4.0–5.0, 10 with EDSS 5.5–10.0 | 30 MS | Accuracy 94.87; TP rate * for low class 90.40, medium class 94.20, high class 100.00
Wang et al. [58] | CNN with 14 layers | eHealth Lab and clinic | 38 MS, 26 healthy | Accuracy 98.77, Sensitivity 98.77, Specificity 98.76
Eitel et al. [60] | CNN pretrained on the Alzheimer’s Disease Neuroimaging Initiative dataset | Clinic | 76 MS, 71 healthy | Accuracy 87.04, AUC 96.08
Calimeri et al. [61] | Graph neural network | Clinic | 90 MS | Specificity 82, F1-Score 80
Marzullo et al. [62] | Graph CNN | Clinic | 90 MS | Specificity 92, F1-Score 92
Our model | ExMPLPQ | Clinic | 72 MS, 59 healthy | Axial: Accuracy 98.37, Sensitivity 96.46, Specificity 99.60; Sagittal: Accuracy 97.75, Sensitivity 95.01, Specificity 99.80; Hybrid: Accuracy 98.22, Sensitivity 96.39, Specificity 99.50
* TP: true positive rate.
Macin, G.; Tasci, B.; Tasci, I.; Faust, O.; Barua, P.D.; Dogan, S.; Tuncer, T.; Tan, R.-S.; Acharya, U.R. An Accurate Multiple Sclerosis Detection Model Based on Exemplar Multiple Parameters Local Phase Quantization: ExMPLPQ. Appl. Sci. 2022, 12, 4920. https://doi.org/10.3390/app12104920
