The following are the overall steps of our experiments:
- A database of skin lesion or rash photos was built covering six distinct diseases (compared with three in [34, 35]): monkeypox, chickenpox, smallpox, cowpox, measles, and tomato flu, along with images of normal skin.
- The dataset contains more measles, pox, and healthy-skin images downloaded from the Internet (4925 photographs) than the comparable datasets used in [34, 35] (1234 monkeypox skin-lesion images and 1080 healthy-skin images, respectively).
- The efficacy of four state-of-the-art deep learning models was evaluated for disease classification from digital skin images (compared with one and three deep learning models in [34, 35]). The classification performance of VGG16 [29], ResNet50 [29], Inception-V3 [31], and an Ensemble [30] was examined.
- Although the datasets in [34, 35] were smaller than ours, the experiment used 10-fold cross-validation tests on the PSOMPX dataset to examine the results.
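The 10-fold cross-validation mentioned above partitions the dataset into ten roughly equal folds, training on nine and validating on the remaining one in turn. A minimal sketch of the fold construction (an illustration, not the authors' code; stratification by class is omitted for brevity):

```python
def kfold_indices(n, k=10):
    """Split n sample indices into k roughly equal folds for cross-validation.
    The first n % k folds receive one extra sample each."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# All 4925 images of the dataset split into 10 folds.
folds = kfold_indices(4925, k=10)
```

In each of the 10 rounds, one fold serves as the validation set and the other nine as the training set, so every image is validated exactly once.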
The collected database contains 4925 photos separated into 7 groups, covering both diseased and healthy skin on the human body. Table 1 lists the seven classes and the number of photos in each. The proposed model was trained on roughly 703 photos per class using a 65%-35% train/cross-validation split, in order to test the performance of the suggested strategy on a distinct dataset and to monitor the classifier. The testing dataset contained 69 photos for each of the seven classes, for a total of 483 images.
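The split sizes above follow from simple arithmetic (a sketch; it assumes the classes are roughly uniform in size, which the actual dataset only approximates):

```python
# Reproducing the 65%-35% split figures described above.
TOTAL_IMAGES = 4925   # whole database
NUM_CLASSES = 7       # six diseases + normal skin
TEST_PER_CLASS = 69   # held-out test images per class

per_class = TOTAL_IMAGES // NUM_CLASSES    # ~703 images per class
train_share = round(per_class * 0.65)      # ~457 per class for training
val_share = per_class - train_share        # remainder for cross-validation
test_total = TEST_PER_CLASS * NUM_CLASSES  # 483 test images overall
```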
Table 2
Different Classifiers cross-validation test result analysis
Classifier | Accuracy | Precision | Recall | F1-Score |
PSO Optimizer | 90.01 | 86.16 | 85.58 | 85.87 |
VGG16 | 82.94 | 80.18 | 79.17 | 79.68 |
ResNet50 | 84.87 | 82.81 | 81.60 | 82.20 |
Inception V3 | 84.53 | 82.51 | 82.30 | 82.40 |
Ensemble | 87.13 | 85.44 | 85.47 | 85.46 |
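The precision, recall, and F1-score reported in Table 2 follow the standard definitions, with F1 as the harmonic mean of precision and recall. A minimal sketch using hypothetical per-class counts (not the paper's actual confusion matrix):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard per-class metrics from true-positive, false-positive,
    and false-negative counts; F1 is the harmonic mean of P and R."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one class of 69 test images, for illustration only.
p, r, f1 = precision_recall_f1(tp=60, fp=9, fn=9)
```

Table 2's values are macro-averages of these per-class figures across the seven classes.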
After building the classifier for the seven classes, 100 features were obtained per image. The 24 best features were then selected and used to train our classifier with PSO, a GLCM-SVM classifier, and a Hidden Markov Model (HMM). Approximately 4925 photos were used to train the ResNet50 feature-extraction model for the seven classes, while cross-validation was performed on 1516 images; the training-to-cross-validation ratio was 65%-35%. Table 2 displays the accuracy and performance metrics obtained for the specified number of iterations. From the 100 features extracted per image, binary PSO was used to choose the best feature subset. The PSO parameter values are shown in Table 3.
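The binary PSO feature-selection step can be sketched as follows. This is a simplified illustration, not the authors' implementation: the sigmoid transfer function, the absence of an inertia term, and the toy fitness function (a stand-in for classifier accuracy on a candidate subset) are all assumptions.

```python
import math
import random

def binary_pso(num_features=100, pop=100, c1=1.0, c2=1.0,
               iters=101, fitness=None, seed=0):
    """Minimal binary PSO: each particle is a 0/1 mask over the features.
    Velocities are squashed through a sigmoid to give bit probabilities."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(num_features)] for _ in range(pop)]
    V = [[0.0] * num_features for _ in range(pop)]
    pbest = [x[:] for x in X]
    pbest_fit = [fitness(x) for x in X]
    g = max(range(pop), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(iters):
        for i in range(pop):
            for d in range(num_features):
                r1, r2 = rng.random(), rng.random()
                # Velocity update pulled toward personal and global bests.
                V[i][d] += c1 * r1 * (pbest[i][d] - X[i][d]) \
                         + c2 * r2 * (gbest[d] - X[i][d])
                prob = 1.0 / (1.0 + math.exp(-V[i][d]))  # sigmoid transfer
                X[i][d] = 1 if rng.random() < prob else 0
            f = fitness(X[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = X[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = X[i][:], f
    return gbest, gbest_fit

# Toy fitness: reward masks that select exactly 24 of the 100 features
# (a placeholder for cross-validated classifier accuracy).
toy = lambda mask: -abs(sum(mask) - 24)
best_mask, best_fit = binary_pso(pop=20, iters=30, fitness=toy)
```

In the paper's setting, the fitness of each particle would instead be the classification performance of the GLCM-SVM classifier trained on the candidate feature subset.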
Table 3
PSO parameter values
Population | 100 |
Error Criterion | 0.01 |
C1 | 1.00 |
C2 | 1.00 |
Table 4
Results of the PSO optimizer
gbest fitness | 0.978 |
gbest | 66212085465887782960042360860 |
Iteration | 101 |
The PSO findings are summarized in Table 4. The "gbest" fitness value gives the global best fitness, and the optimum global parameter identifies the best subset. The global best argument, when converted to binary, defines which columns must be chosen. The binary equivalent of 66212085465887782960042360860 is: 1101010111110001010110100111100101011000011101110101110101101001000111101011111000100000000011100. The 24 features with indices '1', '3', '5', '7', '8', '10', '17', '20', '21', '25', '27', '28', '29', '31', '34', '37', '41', '44', '48', '50', '55', '58', '67', '69' were selected. The GLCM-SVM classifier was trained using this optimal set of attributes. The cross-validation results for the well-known DL classifiers are shown in Fig. 4 in terms of accuracy, precision, recall, and F1-score. The PSO Optimizer, VGG16, ResNet50, Inception V3, and Ensemble classifiers are compared on the basis of their cross-validation F1-scores. The PSO Optimizer was chosen for classification because, with an F1-score of 85.87%, it outperformed the other classifiers; it was trained on the optimal subset of 24 features. Only 9 images were misclassified, with 69 images from each class included in the testing, yielding a total accuracy of 90.01% (F1-score: 85.87%). The full classification findings are presented in Table 2 and Fig. 4. The precision, recall, and F1-score were evaluated for each class, giving an overall F1-score of 85.87%. The precision-recall curve in Fig. 5 was created from the results in Table 2.
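The decoding of the gbest value into selected feature indices described above can be sketched as follows (illustrative only; a short hypothetical gbest value is used in the example rather than the paper's actual one):

```python
def mask_to_indices(gbest_value, num_features=100):
    """Convert the PSO global-best integer to a zero-padded binary mask
    and return the indices of the selected ('1') features."""
    bits = format(gbest_value, f"0{num_features}b")
    return [i for i, b in enumerate(bits) if b == "1"]

# Hypothetical small gbest value, for illustration (not the paper's value).
print(mask_to_indices(0b1101, num_features=8))  # → [4, 5, 7]
```

Applying the same decoding to the actual 100-feature gbest value yields the 24 selected indices listed above.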
The outcomes of this approach are similar to those of the one-versus-rest technique when applied directly. Practical constraints, such as the required precision, the time allotted for development, the processing time, and the nature of the classification problem, determine which strategy to use in a given situation. Figure 6 shows the training and test accuracy and loss curves for the PSO optimizer. Finally, the experimental findings demonstrated (in Fig. 7) an automated detection of monkeypox lesions using image processing methods and a machine learning application. The system comprises image capture and extraction, image preprocessing, feature extraction, feature selection, and classification. Such an autonomous detection system, built on cutting-edge information techniques like image processing, helps clinicians identify infections early and provides crucial data for virus management.
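The five pipeline stages listed above can be sketched as a simple function composition. All stage functions here are stand-in stubs for illustration, not the authors' implementation:

```python
# Stand-in stages; real versions would use preprocessing, ResNet50/GLCM
# feature extraction, the PSO-selected 24-feature subset, and the trained
# GLCM-SVM classifier.
def preprocess(image):      return [p / 255.0 for p in image]               # normalize pixels
def extract_features(img):  return img[:100]                                # 100 raw features
def select_features(feats): return [feats[i] for i in range(0, 100, 4)][:24]  # 24-feature subset
def classify(feats):        return "monkeypox" if sum(feats) > 12 else "healthy"  # toy rule

def detect_lesion(image):
    """capture -> preprocess -> feature extraction -> selection -> classification"""
    return classify(select_features(extract_features(preprocess(image))))

label = detect_lesion([200] * 120)  # dummy 120-pixel "image"
```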
Results:
Refer to Fig. 7.