Article

A Hybrid Stacked Restricted Boltzmann Machine with Sobel Directional Patterns for Melanoma Prediction in Colored Skin Images

by A. Sherly Alphonse 1, J. V. Bibal Benifa 2,*, Abdullah Y. Muaad 3, Channabasava Chola 2, Md Belal Bin Heyat 4, Belal Abdullah Hezam Murshed 3, Nagwan Abdel Samee 5, Maali Alabdulhafith 5,* and Mugahed A. Al-antari 6,*

1 School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
2 Department of Studies in Computer Science and Engineering, Indian Institute of Information Technology, Kottayam 686635, India
3 Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
4 IoT Research Center, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
5 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
6 Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
* Authors to whom correspondence should be addressed.
Diagnostics 2023, 13(6), 1104; https://doi.org/10.3390/diagnostics13061104
Submission received: 31 January 2023 / Revised: 7 March 2023 / Accepted: 9 March 2023 / Published: 14 March 2023
(This article belongs to the Section Medical Imaging and Theranostics)

Abstract: Melanoma, a highly dangerous form of skin cancer, is characterized by uncontrolled cell proliferation. Detecting melanoma is of the utmost clinical significance because of its atypical border structure and the numerous tissue types it can involve. Despite the many approaches proposed in the literature, identifying melanoma in color images remains challenging. In this research, we present a comprehensive system for the efficient and precise classification of skin lesions. The framework includes preprocessing, segmentation, feature extraction, and classification modules. Preprocessing with DullRazor eliminates hair artifacts from skin images. Next, semantic segmentation with a Fully Connected Neural Network (FCNN) extracts precise and distinct Regions of Interest (ROIs). We then extract relevant features from the ROIs using an enhanced Sobel Directional Pattern (SDP), which outperforms the ABCD rule for skin image analysis. Finally, a stacked Restricted Boltzmann Machine (RBM) classifies the skin ROIs with high accuracy. Experiments were conducted on five datasets: Pedro Hispano Hospital (PH2), International Skin Imaging Collaboration (ISIC 2016), ISIC 2017, Dermnet, and DermIS, achieving accuracies of 99.8%, 96.5%, 95.5%, 87.9%, and 97.6%, respectively. The results show that a stack of Restricted Boltzmann Machines combined with the proposed SDP is superior for categorizing skin cancer types.

1. Introduction

An extremely dangerous type of skin cancer, termed malignant melanoma, develops in skin cells known as melanocytes [1]. Melanocytes, located in the top layer of the skin, produce the pigment melanin, which gives skin its color. Melanin occurs in two forms in the human body: eumelanin and pheomelanin. When skin is damaged by ultraviolet (UV) radiation from the sun or tanning beds, the melanocytes are stimulated to produce more melanin; however, only the eumelanin pigment attempts to protect the skin by causing it to darken or tan. Melanoma arises when UV-induced DNA damage from sunburns or tanning beds causes mutations in the melanocytes, which ultimately leads to uncontrolled cellular proliferation [2]. This form of skin cancer is riskier than other common types, including basal cell carcinoma and squamous cell carcinoma. If it is not treated at an early stage, it can swiftly spread to other organs, making it more difficult to cure [3].
In the United States in 2022, an estimated 99,780 new cases of invasive melanoma and 97,920 new cases of in situ melanoma were anticipated, with 7650 deaths expected from the disease [4]. Incidence rates are higher in women than in men before the age of 50, whereas after that age they are increasingly higher in men. This pattern is attributed mainly to historical differences in occupational and recreational UV exposure, as well as the use of indoor tanning among many young women; variations in early-detection practices and health care utilization may also contribute. About 300,000 new cases of melanoma were reported worldwide in 2018, making it one of the most frequent malignancies in both men and women [2]. Over a million new cases of basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) were diagnosed in 2018, making them the two most common forms of skin cancer, ahead of melanoma [5]. More cases of skin cancer are diagnosed annually in the United States than of any other type of cancer. The good news is that early detection considerably increases the likelihood of successful treatment: a 99% 5-year survival rate is reported for melanoma patients whose disease has not spread [5]. Once it spreads to other parts of the body, the prognosis is much worse. However, the success of a dermatologist's diagnosis depends heavily on experience and training, as the earliest signs of skin cancer are usually not obvious. Non-melanoma skin cancer is quite prevalent; melanoma, ranked 19th among the most prevalent cancer types, accounted for about 300,000 new cases in 2018. In 2019 alone, approximately 2490 women and 4740 men lost their lives to melanoma. There are an estimated 2-3 million new skin cancer cases annually [4], with over 40% attributable to disorders other than melanoma, and it is especially common in people with very fair skin. Earlier cancer detection lowers the mortality rate and also allows better treatment for the patient.
Clinicians typically screen for skin cancer via visual inspection, which is not only time-consuming but also prone to error and subjectivity. Dermoscopy is a noninvasive imaging technology that removes the skin's surface reflection, allowing better illumination and magnification of skin abnormalities. Melanoma is generally assessed using the ABCDE rule, under which specialists evaluate a mole's Asymmetry, Border, Color, Diameter, and Evolution. Yet diagnoses based solely on visual inspection are often inaccurate. Different methods have been proposed in the literature; however, diagnosing the specific form of skin cancer remains challenging. Diligent early-detection efforts can reduce mortality from skin cancer, but professionals need time to make a correct early diagnosis, and dermatologists' accuracy in detecting melanoma from dermoscopy images has been lower than 80% in traditional clinical settings [6,7,8]. Using machine learning techniques [9,10,11], this analysis could be automated, leading to a medical framework that provides experiential relevance, improves clinical accuracy, helps physicians interact objectively, reduces errors caused by human stress, and lowers mortality rates [12,13,14,15]. One step in this direction is the development of a machine learning algorithm that can distinguish between malignant and benign lesions [16,17]. To identify cancerous skin lesions as early as possible, this work uses machine learning and deep learning algorithms to reliably categorize pigmented lesions in dermoscopic skin images.
Skin malignancies develop as a result of aberrant cell growth and can spread to other body parts [1]. The PH2 dataset divides skin lesions into three major types: atypical nevus, melanoma, and common nevus. A common nevus is a benign skin growth that occurs when melanocytes, or pigment cells, proliferate. It appears as a small dot in the epidermis, and determining whether it is benign or malignant is exceedingly challenging; routinely resorting to biopsy is also not acceptable. A completely automated melanoma identification system is therefore crucial for assisting dermatologists with diagnosis [2].
In general, classifying skin cancer is difficult because of artifacts, differences in image resolution, and the limited distinguishing characteristics across the many forms of cancer. An efficient framework that addresses these issues can strengthen classification accuracy. Within the scope of this study, an end-to-end framework for the effective and accurate classification of skin lesions is provided, composed of preprocessing, segmentation, feature extraction, and classification modules. During the preprocessing stage, the DullRazor algorithm removes hairs from the images, reducing the influence of hair artifacts in skin imaging. A semantic segmentation strategy based on Fully Connected Neural Networks (FCNNs) is then developed to extract precise and clear Regions of Interest (ROIs). Next, the key features within the ROIs are retrieved using an enhanced version of the Sobel Directional Pattern (SDP), a methodology we propose in this research to extract relevant features from skin images. For analyzing skin images, the SDP approach is preferable to the more conventional feature extraction strategy based on the ABCD rule [18]. As a last step, a stacked Restricted Boltzmann Machine (stacked RBM) is introduced for the classification of skin ROIs, owing to its outstanding performance in classifying skin melanoma [19].
Artificial intelligence (AI) and associated technologies are starting to be adopted by healthcare organizations as they become increasingly widespread in the industrial and medical sectors [20,21,22,23,24]. Studies [25,26,27,28,29,30,31,32,33] have shown that AI is as good as, or better than, human doctors at medical diagnosis. Recently, machine learning and deep learning algorithms [18] have been more accurate than radiologists in detecting malignant tumors, and they are aiding researchers in assembling study populations for costly clinical trials. Since modern graphics processing units (GPUs) can handle massive amounts of data, such models may also have hundreds of thousands of hidden features. One of the most common uses of deep learning in healthcare is the detection of potentially malignant tumors in medical images. The field of radiomics, which uses deep learning to reveal hidden clinically relevant patterns in imaging data, is gaining traction; radiomics and deep learning are typically used together in oncology-specific image analysis, and with both, CAD systems can make more accurate and precise diagnoses. Many researchers have recently applied artificial intelligence to automatically detect and diagnose skin disorders [34,35,36,37,38]. Gonzalez-Castro et al. [39] suggested categorizing skin lesions using color and texture descriptors based on mathematical morphology. Their method examines the hue and the mathematical morphology of the color texture and employs Kohonen Self-Organizing Maps (SOMs); it performs no segmentation at all. Mathematical morphology generates a descriptor for each pixel, clusters are produced in the SOM, the descriptors do not depend on location, and Color Adaptive Neighborhoods are represented through mathematical morphology. However, obtaining a perfect mapping with this method is never easy. Kasmi et al. [3] applied the ABCD rule of dermoscopy as a procedure to detect melanoma.
For categorization, shape, color, and Pyramid Histogram of Oriented Gradients (PHOG) properties are used in the literature. The method of Kasmi et al. [3] automatically identifies melanoma using the ABCD rule: Gabor filters identify hair in the preprocessing step, geodesic contours identify the lesion borders, and dedicated algorithms extract the ABCD attributes, combining conventional and more recent techniques. Its disadvantage is that the performance needed improvement. Zhang et al. [7] suggested convolutional neural networks (CNNs) for the categorization of skin cancer, with positive outcomes. Pereira et al. [4] utilized color characteristics and an instance-based learning method, achieving an accuracy of 61.7 percent. Harangi et al. [5] proposed ensembles of deep convolutional neural networks for classification, combining the results of various deep network topologies; however, this method required more training data and took a long time due to the multiple layers involved [8]. Garg et al. [16] put forth the idea of melanoma detection using image processing techniques, employing dermoscopy and light microscopy; their automatic diagnosis technique was less expensive, with preprocessing applied to reduce noise in the image, followed by segmentation. Mukherjee et al. [17] suggested a nature-inspired metaheuristic technique that finds ideal solutions quickly and effectively; a Multi-Layer Perceptron (MLP) classifies melanoma, achieving an accuracy of more than 91 percent and producing positive outcomes compared to other works in the literature, although the parameters of the optimization approach are examined in a two-dimensional space and the optimization process takes a long time. Hagerty et al. [40] suggested a strategy that blends traditional image processing and deep learning by combining features from the two methodologies: deep learning with ResNet-50 is utilized, and logistic regression is used for prediction. A drawback is that it identifies only color features when a lesion is present. Kaur et al. [41] accomplished dermoscopy image classification by combining texture and color data: texture is extracted using a local binary pattern (LBP), which abstracts a histogram and scale-adaptive patterns from each pixel, histograms are computed in the HSV color space, and the concatenated features are provided for categorization. Although this method produced good results, the LBP does not capture border information. For greater accuracy than current methods, the proposed Sobel Directional Pattern (SDP) for feature extraction integrates key characteristics of skin melanoma, namely texture, color, and boundary information, into a single feature vector. Compared to other approaches, this model uses stacked Restricted Boltzmann Machines (SRBMs), which are quicker and more precise; the stacked RBM also produces good results on unbalanced datasets. The currently used feature descriptors record either edge or texture information, whereas the suggested method successfully captures edge, texture, and color information.
Computer vision uses the pixels of biomedical images to categorize the different forms of skin cancer; in the literature, shape, borders/edges, texture, color, and other details are estimated from dermoscopy images individually. In current efforts, Support Vector Machines (SVMs) are primarily utilized [42], but these are inherently binary classifiers, and a one-vs.-one scheme for multiclass classification takes more time. Where fewer samples are available, Deep Neural Networks (DNNs) [43] are employed in some works, which leads to overfitting. Lower-resolution images and differences in dermoscopic acquisition cause a lack of generalization and poor classification accuracy. To emphasize the importance of this work, the contributions of the current study are as follows:
  • A hybrid AI-based framework based on stacked Restricted Boltzmann Machines and Sobel Directional Patterns is proposed for skin melanoma prediction;
  • The Sobel Directional Pattern (SDP) is a new feature extraction method that uses a Sobel filter to combine texture, color, and edge data into a single feature vector;
  • The presented image preprocessing phase removes noise, enhances image quality by stretching the histogram, and removes hairs from the images, eliminating both the random noise introduced during acquisition and the need to align images across datasets;
  • Automatic semantic segmentation using FCNNs extracts the ROIs from skin melanoma lesions, and stacked RBMs accurately classify the segmented cutaneous lesions.
The suggested method improves image quality by removing hairs through preprocessing, and the proposed SDP also eliminates noisy data, improving accuracy. The most discriminating data are captured in a feature vector using SDP, which encodes the color, edge, and opponent color and texture information. A stacked RBM is employed for categorization; the suggested model is faster and more accurate than those currently in use. The SDP operator is created as a joint color-texture operator for comparing grayscale and color texture features, and using both color and texture elements enhances accuracy. The term "opponent colors" refers to all pairs of color channels. Here, the directional pattern produced by the Kirsch masks operator is applied separately to each color channel, and only the maximum responses are used in further calculations.
This eliminates the erratic noise picked up during image capture. Patterns are also derived from each pair of opponent color channels: a neighborhood's center pixel and surrounding pixels are drawn from different channels. Preprocessing, feature extraction, and classification are the steps in the proposed system's framework for classifying skin cancer. With its greater processing power and quicker learning, the stacked RBM in the Deep Belief Network achieves accurate prediction in a shorter period, and it also performs well on unbalanced datasets.
The suggested methodology is described in Section 2, which also covers image acquisition, the preprocessing method, feature extraction with SDP, and classification with the stacked RBM. Section 3 discusses the experimental findings, where the proposed strategy is also contrasted with other state-of-the-art methods.

2. Materials and Methods

2.1. Skin Image Datasets

The data used in this study were gathered from five public datasets: PH2 [44], ISIC 2016 [45], ISIC 2017 [45], DermIS (https://www.dermis.net/doia/, accessed on 1 December 2022), and DermNet NZ (https://dermnetnz.org/image-library, accessed on 1 December 2022). The images in PH2 are divided into melanoma and non-melanoma categories. Both cancerous (melanoma) and noncancerous (benign) skin lesions are represented in the 2016 dataset collected by the International Skin Imaging Collaboration (ISIC); it contains a total of 1279 images, including 900 training images and 379 test images, and both the training and testing sets include ground truth indicating whether each lesion is cancerous. Of the 2600 images in the ISIC 2017 dataset, 2000 are used for training and 600 for testing; melanoma, seborrheic keratosis, and nevus are the represented classes, and ground truth and patient metadata are included in the training and testing sets. The Dermnet Skin Disease Atlas has tagged 23,000 images covering 23 classes; the tests use three disorders from Dermnet: Molluscum contagiosum, seborrheic keratosis, and metastatic melanoma. Melanoma, seborrheic keratosis, and lupus erythematosus are the three skin conditions from DermIS considered in the tests. Figure 1 depicts several sample images.

2.2. Proposed End-to-End CAD Framework for Skin Lesions

This research presents an end-to-end system for the diagnosis of skin melanomas using SDP and stacked RBM algorithms. The major elements of the introduced framework are shown in Figure 2. Preprocessing techniques are applied to captured images to improve their quality, align them, and remove distracting hairs. Then, a semantic segmentation technique based on the FCNN extracts informative Regions of Interest (ROIs). Once the noise has been removed, the image is fed into the SDP feature extraction algorithm, which retains only the maximum response produced by the customized Sobel filter-based masks. This technique can isolate subtle color differences, textures, and outlines of fine lines; alternative filters introduce a significant amount of extra noise around the edges. The resulting feature vectors are then used as input for classification by the stacked RBM, and the images are classified into several skin cancer classes according to the attributes extracted by SDP.

2.2.1. Image Preprocessing

The first preprocessing step is to find and remove any hairs visible on the skin, since hairs in skin imaging data can cause classification errors. The DullRazor hair-removal procedure is therefore utilized at this stage. It recognizes the locations of dark hair by performing a broad grayscale morphological closing, confirms that the candidate hair pixels form thin, elongated structures, and replaces the validated pixels via bilinear interpolation; an adaptive median filter is then applied to smooth the replaced hair pixels. As shown in Figure 3, images are preprocessed to eliminate hair using the DullRazor algorithm [46]. Because the images are not all the same size, each is rescaled to exactly 760 pixels wide by 570 pixels tall. Images from the Pedro Hispano Hospital (PH2) dataset, selected according to their average sizes, are shown after hair removal in Figure 3.
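To make this step concrete, below is a minimal sketch of DullRazor-style hair removal, assuming OpenCV is available. The structuring-element size, threshold, and file path are illustrative, and inpainting stands in for DullRazor's bilinear interpolation and adaptive median filtering.

```python
import cv2
import numpy as np

def remove_hair(bgr_image: np.ndarray) -> np.ndarray:
    """DullRazor-style hair removal: detect thin dark structures, then fill them."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Grayscale morphological black-hat (closing minus image) highlights dark hairs.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Threshold the black-hat response to obtain a binary hair mask.
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    # Replace hair pixels by interpolating from their surroundings
    # (inpainting substitutes for DullRazor's bilinear interpolation step).
    return cv2.inpaint(bgr_image, hair_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)

clean = remove_hair(cv2.imread("lesion.jpg"))      # path is illustrative
clean = cv2.resize(clean, (760, 570))              # common size used in this study
```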

2.2.2. Image Segmentation

The segmentation method divides the skin lesion image into its subcomponents [47]; Figure 4 presents the segmented images. Morphological operations can be performed on skin imaging data to help locate skin lesions, and the image's complementary black-and-white version can also be obtained, giving a clear distinction between the background and the skin lesion. The region-growing strategy has been shown to be particularly helpful for detecting skin and can also be utilized to extract the lesion from skin images. In this work, FCNNs are utilized for segmentation. These networks, a form of architecture used in semantic segmentation, employ only convolution, pooling, and upsampling as their locally connected layers, and they assign each pixel a classification to achieve semantic segmentation of the image. Figure 4 depicts an example of a segmented image.
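The following is a minimal sketch of a segmentation network of the kind described above, assuming PyTorch. The layer widths, depth, and input resolution are illustrative, not the architecture used in this paper; the point is the convolution-pool-upsample structure with per-pixel classification.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Per-pixel lesion/background classifier built only from
    convolution, pooling, and upsampling layers."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 1/4 resolution
        )
        self.classifier = nn.Conv2d(32, num_classes, 1)   # per-pixel scores
        self.upsample = nn.Upsample(scale_factor=4, mode="bilinear",
                                    align_corners=False)

    def forward(self, x):
        scores = self.classifier(self.encoder(x))
        return self.upsample(scores)                # back to input resolution

mask_logits = TinyFCN()(torch.randn(1, 3, 256, 256))
lesion_mask = mask_logits.argmax(dim=1)             # 0 = background, 1 = lesion
```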

2.2.3. Feature Extraction

Dermatologists define melanoma using the ABCD rule [3], which they utilize to evaluate skin lesions: images are examined for asymmetry, irregular borders, and uneven color distribution. DermIS, Dermnet, PH2, ISIC 2016, and ISIC 2017 are the datasets utilized. For categorizing the skin cancer images in this study, the novel SDP and stacked RBM are used; it is no longer necessary to use separate computational methods to extract texture, color, and edge information from skin cancer images.
Feature extraction is the process of mining essential data from the available raw images. The extracted features must be non-redundant and produce good outcomes when used as input to a machine learning algorithm. Using the feature extraction method, each image in the dataset is condensed into a compact feature vector. The suggested method merges edge, color, and texture data into a feature vector with a minimal number of computations. The proposed SDP approach provides measurable information about the skin images containing the lesion, and it can be utilized as an early detection tool for skin cancer, avoiding more painful diagnostic procedures; digital images are used in this manner. The feature extraction techniques in the literature are founded on the ABCD rule of dermoscopy, where ABCD stands for Asymmetry, Border, Color, and Diameter. Dermatologists utilize these characteristics to categorize melanomas, examining asymmetry, irregular boundaries, hue, uneven distribution, and a diameter larger than 6 mm:
Asymmetrical shape: melanoma lesions are uneven or asymmetrical in shape (denoted by the letter A); other moles are regular and benign;
Border: the borders of non-cancerous moles are even and smooth, whereas the borders of melanoma lesions are erratic;
Color: melanoma exhibits more hues, including blue, black, brown, and tan, and an uneven distribution of hue is an indication of possible melanoma; benign moles come in only one hue of brown;
Diameter: melanoma lesions have a diameter of more than 6 millimeters. Two of these cues are quantified in the sketch below.
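For illustration, the following sketch quantifies two of these ABCD cues (Asymmetry and Border irregularity) from a binary lesion mask. The mirror-overlap asymmetry score and the compactness measure are standard image-analysis choices, not formulas taken from this paper.

```python
import numpy as np

def asymmetry_score(mask: np.ndarray) -> float:
    """Fraction of the lesion that fails to overlap its horizontal mirror
    image (principal-axis alignment is omitted for brevity)."""
    mask = mask.astype(bool)
    overlap = np.logical_and(mask, mask[:, ::-1]).sum()
    return 1.0 - overlap / mask.sum()

def border_irregularity(mask: np.ndarray) -> float:
    """Compactness P^2 / (4*pi*A): 1.0 for a circle, larger for ragged borders."""
    mask = mask.astype(bool)
    padded = np.pad(mask, 1)
    # A lesion pixel is interior if all four 4-neighbors are lesion pixels.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    return perimeter ** 2 / (4.0 * np.pi * mask.sum())
```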
The suggested SDP extracts color, texture, and edge-based properties from the skin cancer images. In the suggested study, the characteristics can be taken from various color spaces, such as RGB, HSV, and YCbCr, and provided as input to the classifier for melanoma prediction [45]; all three spaces allow effective color discrimination, and the SDP operator can extract their color and texture information. In this proposed approach, the SDP operator is applied separately to each color channel of an HSV image as follows. Different pairs of color channels are employed to collect different color patterns, with the center and neighboring pixels chosen from different channels. In SDP_{H,S}, a 3 × 3 region is formed with the center pixel taken from H and the neighboring pixels from S. In the H-channel image, H_{c,d} is the center pixel, and H_{c+1,d}, H_{c−1,d}, H_{c−1,d+1}, H_{c,d+1}, H_{c+1,d+1}, H_{c−1,d−1}, H_{c,d−1}, and H_{c+1,d−1} are the eight adjacent pixels in a block; the S-channel and V-channel images are indexed analogously, with S_{c,d} and V_{c,d} as the respective center pixels. SDP_{H,H}, SDP_{S,S}, SDP_{V,V}, SDP_{H,S}, SDP_{H,V}, and SDP_{S,V} are the combined channel images. Here, each 3 × 3 block is formed with the following equations:
$$\mathrm{SDP}_{H,H}(p,q,\theta) = \langle \mathrm{center}(H_{c,d}),\ \mathrm{neighbors}(H_{c+i,\,d+j}) \rangle,$$
$$\mathrm{SDP}_{S,S}(p,q,\theta) = \langle \mathrm{center}(S_{c,d}),\ \mathrm{neighbors}(S_{c+i,\,d+j}) \rangle,$$
$$\mathrm{SDP}_{V,V}(p,q,\theta) = \langle \mathrm{center}(V_{c,d}),\ \mathrm{neighbors}(V_{c+i,\,d+j}) \rangle,$$
$$\mathrm{SDP}_{H,S}(p,q,\theta) = \langle \mathrm{center}(H_{c,d}),\ \mathrm{neighbors}(S_{c+i,\,d+j}) \rangle,$$
$$\mathrm{SDP}_{H,V}(p,q,\theta) = \langle \mathrm{center}(H_{c,d}),\ \mathrm{neighbors}(V_{c+i,\,d+j}) \rangle,$$
$$\mathrm{SDP}_{S,V}(p,q,\theta) = \langle \mathrm{center}(S_{c,d}),\ \mathrm{neighbors}(V_{c+i,\,d+j}) \rangle, \quad (1)$$
where, for a 3 × 3 block, $1 \le p \le 4$ and $1 \le q \le T$, with $T$ the total number of blocks in each image.
$$(i,j) = \begin{cases} (1,1) & \text{if } \theta = 45^\circ \\ (1,-1) & \text{if } \theta = 315^\circ \\ (-1,-1) & \text{if } \theta = 225^\circ \\ (-1,1) & \text{if } \theta = 135^\circ \\ (0,-1) & \text{if } \theta = 270^\circ \\ (0,1) & \text{if } \theta = 90^\circ \\ (1,0) & \text{if } \theta = 360^\circ \\ (-1,0) & \text{if } \theta = 180^\circ \end{cases} \quad (2)$$
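The following sketch shows how the cross-channel neighborhoods above can be assembled in Python, using the (i, j) offsets from the reconstructed equation; the sign convention is an assumption recovered from the eight compass directions.

```python
import numpy as np

# theta (degrees) -> neighbor offset (i, j), per Equation (2) above.
OFFSETS = {45: (1, 1), 315: (1, -1), 225: (-1, -1), 135: (-1, 1),
           270: (0, -1), 90: (0, 1), 360: (1, 0), 180: (-1, 0)}

def sdp_pair(center_ch: np.ndarray, nbr_ch: np.ndarray, c: int, d: int):
    """Center pixel from one channel, eight neighbors from another;
    e.g. SDP_{H,S}: sdp_pair(H, S, c, d) for HSV channel arrays H and S."""
    center = center_ch[c, d]
    neighbors = [nbr_ch[c + i, d + j] for i, j in OFFSETS.values()]
    return center, neighbors
```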
The edge magnitude is calculated from the greatest value obtained after convolving the masks with the image, and the mask producing the greatest magnitude determines the edge orientation. The different channel combinations of the skin images (combined channel images) SDP_{H,H}, SDP_{S,S}, SDP_{V,V}, SDP_{H,S}, SDP_{H,V}, and SDP_{S,V} are provided to SDP as input to create the feature vector. Three stages make up the feature extraction procedure for each channel image: compass-mask filtering of the skin images, code-image generation based on the maximum response, and feature vector construction. The same procedure is applied to the other color spaces, RGB and YCbCr [19,47,48]. The preprocessed images are then convolved with fractional-order Sobel masks [49] to analyze the pattern using the proposed SDP; the fractional-order masks are shown in Figure 5, Figure 6 and Figure 7.
The customized Sobel masks are obtained by altering the design of the Ga and Gb parts of the standard Sobel filter. The gradient-based components of the integer-order Sobel operator span both the x- and y-axes. The differential form of the components is denoted by:
$$G_a = \frac{\partial f_{a+1,\,b-1}}{\partial a} + 2\,\frac{\partial f_{a+1,\,b}}{\partial a} + \frac{\partial f_{a+1,\,b+1}}{\partial a}, \quad (3)$$
$$G_b = \frac{\partial f_{a-1,\,b+1}}{\partial b} + 2\,\frac{\partial f_{a,\,b+1}}{\partial b} + \frac{\partial f_{a+1,\,b+1}}{\partial b}. \quad (4)$$
In the actual Sobel filter, the Ga and Gb components are arranged as in Figure 5 and Figure 7. The differential form of the Sobel filter is determined from the gradient operator in the equations above and then translated to the fractional-order domain. Two novel masks, the left fractional Sobel mask and the right fractional Sobel mask, are produced using the Grunwald–Letnikov (GL) fractional-order differential operator. The left GL derivative is signified as:
$$\mathrm{Mag}(\nabla^{\alpha} f) = {}_{S}D_{t}^{\alpha} f(t) = \lim_{m \to 0^{+}} \frac{1}{m^{\alpha}} \sum_{j=0}^{\infty} (-1)^{j} \binom{\alpha}{j}\, f(t - jm) = G_a + G_b. \quad (5)$$
To attain a symmetric filter, the GL operator is applied to the updated Sobel filter. The right GL derivative is signified as:
$${}_{S}D_{t}^{\alpha} f(t) = \lim_{m \to 0^{+}} \frac{1}{m^{\alpha}} \sum_{j=0}^{\infty} (-1)^{j} \binom{\alpha}{j}\, f(t + jm). \quad (6)$$
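To make the GL operator concrete, the sketch below computes the coefficients $(-1)^j \binom{\alpha}{j}$ with the standard recurrence $c_0 = 1$, $c_j = c_{j-1}(j-1-\alpha)/j$ and applies the left-sided GL derivative to a 1-D signal; the step size and truncation are illustrative, and with $\alpha = 1$ it reduces to an ordinary first difference.

```python
import numpy as np

def gl_coefficients(alpha: float, n: int) -> np.ndarray:
    """Coefficients (-1)^j * binom(alpha, j) via the recurrence
    c_0 = 1, c_j = c_{j-1} * (j - 1 - alpha) / j."""
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (j - 1 - alpha) / j
    return c

def gl_left_derivative(f: np.ndarray, alpha: float, h: float = 1.0) -> np.ndarray:
    """Discrete left GL derivative: (1/h^alpha) * sum_j c_j * f(t - j*h)."""
    c = gl_coefficients(alpha, len(f))
    out = np.empty(len(f))
    for t in range(len(f)):
        out[t] = c[: t + 1] @ f[t::-1]   # f(t), f(t-1), ..., f(0)
    return out / h ** alpha

# Sanity check: alpha = 1 reduces to the ordinary first difference.
signal = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
print(gl_left_derivative(signal, alpha=1.0))   # [0., 1., 3., 5., 7.]
```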
When applied to the images, these modified fractional-order Sobel masks help extract thin edges, as opposed to the thick edges produced by integer-order Sobel masks. Convolving each pixel with the adapted fractional-order Sobel masks/filters creates a set of responses, and a code image is formed from the maximum intensity value among the responses obtained for each pixel. Difference-of-Gaussians (DOG) filtering is then used to remove noise from the code image. After that, the code image is divided into smaller grids, a histogram is measured for each grid, and the final feature vector is produced by computing, grouping, and concatenating the per-grid feature vectors. To achieve better performance than the standard feature descriptors, SDP is proposed in this work as a multi-scale feature descriptor with rotation invariance and low complexity.

2.2.4. Enhanced Sobel Masks Representation

The cropped images are convolved with the fractional-order Sobel masks, and the convolved output differs as the fractional order α of the mask varies. Compared to standard Sobel masks, the fractional-order filters are much more sensitive to edge variations. These fractional-order masks, shown in Figure 5, Figure 6 and Figure 7, help capture additional texture-related detail, resulting in high classification accuracy. The values of α range from 0.1 to 1, and λ takes five values (λ = {3.5, 7, 14, 28, 56}); α = 1 results in the conventional Sobel mask.
To determine the best value of α, we conducted experiments using a range of values between α = 0.1 and α = 1, classifying the dataset images with a K-Nearest Neighbor (KNN) classifier. The segmented images are convolved with the left and right fractional Sobel masks suggested in this paper. Since each pixel receives eight responses, $\{S_{\theta_0}, S_{\theta_1}, \ldots, S_{\theta_7}\}$, the highest response value is chosen. The Maximum Response (MR) is selected as follows:
$$\mathrm{MR}(x,y) = \max\left( S_{\theta_i}(a,b) \mid 0 \le i \le 7 \right). \quad (7)$$
Here, $S_{\theta_i}(a,b)$ denotes the response attained at pixel position $(a,b)$. Then, a Difference of Gaussians (DOG) filter is applied to the MR image as follows:
$$D = \mathrm{DOG}(a,b;\sigma_1,\sigma_2) = \frac{1}{2\pi\sigma_1^{2}}\, e^{-\frac{a^{2}+b^{2}}{2\sigma_1^{2}}} - \frac{1}{2\pi\sigma_2^{2}}\, e^{-\frac{a^{2}+b^{2}}{2\sigma_2^{2}}}, \quad (8)$$
where $\sigma_1$ and $\sigma_2$ are the standard deviations, with $\sigma_1 > \sigma_2$.
$$f(x,y) = M(x,y) * D. \quad (9)$$
Convolving the response image with the DOG filter improves classification accuracy by removing random noise and sharpening the edges. Figure 8 shows the code images created by SDP. The code images for the R, G, and B channels in Figure 8a–c capture both the textural and edge-based information gathered from the images and indicate the portions of the image that drive effective classification. Histograms are constructed from the grids laid over the code image, and the resulting feature vectors are fed to a stack of Restricted Boltzmann Machines for prediction.
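The code-image stage can be sketched as follows, assuming NumPy and SciPy. Eight rotated Kirsch compass masks (the Kirsch operator is mentioned in Section 1) stand in for the fractional-order Sobel masks, whose exact coefficients depend on α and appear only in the figures; the DOG standard deviations and grid size are likewise illustrative.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# Eight compass masks: the Kirsch north mask rotated in 45-degree steps.
MASKS = [np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]])]
for _ in range(7):
    k = MASKS[-1]
    MASKS.append(np.array([[k[0, 1], k[0, 2], k[1, 2]],
                           [k[0, 0], 0,       k[2, 2]],
                           [k[1, 0], k[2, 0], k[2, 1]]]))

def sdp_code_image(channel: np.ndarray) -> np.ndarray:
    """Maximum compass response per pixel, denoised with a DOG filter."""
    responses = np.stack([convolve(channel.astype(float), m) for m in MASKS])
    mr = responses.max(axis=0)                  # maximum response, Equation (7)
    # DOG with sigma1 > sigma2 as in Equation (8); sigma values are illustrative.
    return gaussian_filter(mr, 2.0) - gaussian_filter(mr, 1.0)

def grid_histograms(code: np.ndarray, grid=(4, 4), bins=16) -> np.ndarray:
    """Per-grid histograms concatenated into the final SDP feature vector."""
    feats = []
    for rows in np.array_split(np.arange(code.shape[0]), grid[0]):
        for cols in np.array_split(np.arange(code.shape[1]), grid[1]):
            hist, _ = np.histogram(code[np.ix_(rows, cols)], bins=bins)
            feats.append(hist)
    return np.concatenate(feats)
```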

2.2.5. Skin ROIs Classification

In this work, the SRBM is utilized for classification. Compared to a general Boltzmann Machine, the Restricted Boltzmann Machine is distinguished by the fact that it prohibits lateral connections within a layer, which simplifies analysis. The stacked Boltzmann approach combines an unsupervised three-layer network with symmetric weights and a supervised top layer for class recognition, resulting in a hybrid model. Stacked Boltzmann machines are applied in natural language understanding, document retrieval, image generation, and classification, and they can be trained through either unsupervised pre-training or supervised fine-tuning. In contrast to the top layer, which is symmetric and undirected, the RBM connecting layer is asymmetric and bidirectional; the restricted Boltzmann connection, which brings two separate networks together into a single entity, is composed of three layers with asymmetric weights. Stacked Boltzmann Machines and Restricted Boltzmann Machines have in common that their neural building blocks are stochastic binary Hopfield neurons. The Gibbs probability measure accounts for the energy of both models. Restricted Boltzmann training processes one layer at a time, makes a three-segment pass to approximate the equilibrium state, and does not use backpropagation. The Restricted Boltzmann method performs pre-training for classification and recognition using supervised and unsupervised training on the different RBMs.
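A minimal sketch of such a stacked-RBM classifier, assuming scikit-learn: two BernoulliRBM feature layers are greedily trained and a supervised logistic-regression top layer performs class recognition. The layer sizes, learning rates, and choice of top layer are illustrative, not the paper's exact configuration.

```python
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

stacked_rbm = Pipeline([
    ("scale", MinMaxScaler()),                      # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("top", LogisticRegression(max_iter=1000)),     # supervised top layer
])

# X: matrix of SDP feature vectors, y: lesion class labels
# stacked_rbm.fit(X_train, y_train); y_pred = stacked_rbm.predict(X_test)
```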

2.3. Evaluation Methods

For the experiments, ten-fold cross-validation is used, and the proposed strategy is compared to other state-of-the-art techniques using the following metrics: accuracy, Harmonic Mean (HM), positive predictive rate (PP), sensitivity, specificity, and F-score [32,50,51,52,53,54], defined by Equations (10)–(15). Sensitivity is the percentage of true positives accurately detected, i.e., the proportion of individuals correctly identified as having melanoma. Accuracy is the proportion of the complete population that is correctly classified. Specificity measures how well patients without any kind of skin melanoma are identified as such. The positive predictive rate is the percentage of positive tests that correctly identify a patient. The F1-score is the weighted harmonic mean of sensitivity and the positive predictive rate. These metrics are defined in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):
$$\mathrm{Accuracy\ (ACC)} = \frac{TP + TN}{TP + TN + FP + FN}, \quad (10)$$
$$\mathrm{Specificity\ (SP)} = \frac{TN}{TN + FP}, \quad (11)$$
$$\mathrm{Positive\ Predictive\ Rate\ (PP)} = \frac{TP}{TP + FP}, \quad (12)$$
$$\mathrm{Sensitivity\ (SE)} = \frac{TP}{TP + FN}, \quad (13)$$
$$\mathrm{F\text{-}Score} = \frac{2 \cdot \mathrm{SE} \cdot \mathrm{PP}}{\mathrm{SE} + \mathrm{PP}}, \quad (14)$$
$$\mathrm{Harmonic\ Mean\ (HM)} = \frac{2 \cdot \mathrm{SE} \cdot \mathrm{SP}}{\mathrm{SE} + \mathrm{SP}}. \quad (15)$$
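The sketch below is a direct transcription of Equations (10)–(15), with an illustrative confusion-matrix sanity check.

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    acc = (tp + tn) / (tp + tn + fp + fn)   # Equation (10)
    sp  = tn / (tn + fp)                    # specificity, Equation (11)
    pp  = tp / (tp + fp)                    # positive predictive rate, (12)
    se  = tp / (tp + fn)                    # sensitivity, Equation (13)
    f_score = 2 * se * pp / (se + pp)       # Equation (14)
    hm      = 2 * se * sp / (se + sp)       # harmonic mean of SE and SP, (15)
    return {"ACC": acc, "SP": sp, "PP": pp, "SE": se,
            "F": f_score, "HM": hm}

print(metrics(tp=95, tn=90, fp=10, fn=5))   # e.g. ACC 0.925, SE 0.95, SP 0.90
```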

2.4. Model Training and Hyperparameters

Cross-validation is an iterative method for preventing overfitting in predictive models. The dataset is partitioned into separate sections: to carry out a standard K-fold cross-validation, the data are first segmented into k folds, and the algorithm is then repeatedly retrained on k−1 folds while the remaining holdout fold serves as the test set. In this research, we used k-fold cross-validation with k = 10.
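A sketch of the ten-fold protocol, assuming scikit-learn; X and y below are stand-in arrays in place of the SDP feature vectors and lesion labels, and stacked_rbm is the pipeline sketched in Section 2.2.5.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score

X = np.random.rand(100, 64)          # stand-in for SDP feature vectors
y = np.random.randint(0, 2, 100)     # stand-in lesion labels

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(stacked_rbm, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```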
The hyper-parameter values used in the RBM are detailed in Table 1. Because it generalizes better than a conventional DNN, the suggested model uses a stacked RBM to classify the skin cancer images. The particle swarm optimization approach is utilized to fine-tune the RBM hyper-parameters, and persistent contrastive divergence is utilized to generate a rough estimate of the likelihood gradient. When the Markov chain mixes poorly, only the very first and very last iterations of the convergence process are effective. An RBM model is then constructed with each training sample: after each Gibbs iteration, the model is reconstructed, and this procedure is repeated for each epoch.
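As an illustration of the tuning loop, below is a minimal particle-swarm sketch that searches over two RBM hyper-parameters (learning rate and hidden-unit count). The swarm size, inertia and acceleration constants, search ranges, and the toy fitness function are all illustrative, since the paper's exact PSO configuration is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(fitness, lo, hi, n_particles=10, n_iters=20, w=0.7, c1=1.5, c2=1.5):
    """Maximize `fitness` over a box [lo, hi] with a basic PSO loop."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    g = pbest[pbest_val.argmax()]                          # global best
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmax()]
    return g

# In practice, fitness(p) would train an RBM with learning_rate=p[0] and
# n_components=int(p[1]) and return held-out accuracy; a toy quadratic
# stands in here so the sketch runs on its own.
best = pso(lambda p: -((p[0] - 0.05) ** 2 + (p[1] - 128) ** 2),
           lo=[0.001, 32], hi=[0.2, 512])
```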

3. Results

Our investigation relied on both a graphics processing unit (GPU) and an Intel central processing unit (CPU). The algorithms were implemented in MATLAB version 7.12. The different sets of images acquired from the databases do not overlap in any way. On five different datasets, we analyzed how well the newly developed framework, hybrid SDP with stacked RBM, classified skin lesions as either cancerous or benign, based on the values generated by the performance metrics. In addition, the classification system was evaluated against more traditional techniques, such as the support vector machine (SVM), Gradient Boosting (GB), and Random Forest (RF). Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11 present the findings of the experiments on the five datasets; for specificity and sensitivity, average values are reported. Compared with SVM and GB, the random forest classifier produces better results; however, the stacked RBM used in the suggested approach outperforms GB, RF, and SVM. The suggested method produces the best results when distinguishing melanoma from dysplastic nevi. The texture, edge, and color information in the proposed study helps all classifiers achieve better outcomes by producing greater SE and SP; there is a 0.2 percent increase in sensitivity for ISIC 2016 and ISIC 2017 when utilizing the stacked RBM. The proposed technique also performs well on the Dermnet and DermIS datasets, whose images were captured with a variety of tools under a variety of lighting conditions: SDP performs well relative to the other feature descriptors because it is immune to variations in illumination. In the tables, specificity is denoted SP, sensitivity SE, positive predictive rate PP, Harmonic Mean HM, and accuracy ACC.

Three channel images were created from the original images, and the six possible combinations of the three channel images were obtained for the SDP algorithm. The feature vectors were classified using the stacked RBM, and the outcomes were then obtained. Each image comprised three channels, from which the red, green, and blue channel images were produced; the channel combinations were then formed, and the SDP method extracted the final feature vector from the code image it generated, constructing the feature vectors from histograms of the code image. The confusion matrix shown in Table 2 was created when the images from the PH2 dataset were used in the tests: the true positive rate (TPR) for common nevi was 100%, atypical nevi had a TPR of 99%, and melanoma had a TPR of 100%. Table 3 compares the suggested SDP algorithm with the other feature extraction methods in the literature; the findings show that the proposed method achieves a high classification accuracy of 99.8%. The RBF kernel was utilized for categorization with a Least Squares Support Vector Machine (LS-SVM), an SVM, and an Extreme Learning Machine (ELM), and a multi-layer perceptron with 1 to 50 hidden layers using Levenberg–Marquardt optimization was also tested. The TPR for the benign group is 100%, whereas the TPR for melanoma is 94%.
The confusion matrix and the categorization outcomes for the images from the ISIC 2016 dataset are shown in Table 4 and Table 5. Compared to the Local Binary Pattern (LBP), the Color Local Directional Pattern (CLDP) [18,55,56,57,58] has the highest accuracy; its accuracy of 97.2 percent shows that it more accurately captures the texture of skin cancer images than LBP [18] and the GLCM [59], achieving higher sensitivity and specificity than LBP. Used in conjunction with the stacked RBM, the suggested SDP yields the best accuracy: 99.8% on the PH2 dataset, with a sensitivity of 98.8, a specificity of 99.6, an HM of 99.4, a PP of 99.6, and an F-score of 99.6. By choosing only the most relevant response information, the proposed SDP eliminates the noisy information and outperforms the other existing descriptors in terms of accuracy.
Differences in lesion size and location between the ISIC 2016 and PH2 datasets lead to some incorrect classifications of the lesions. For the ISIC 2016 dataset, the proposed technique achieves a specificity of 92.5 and a sensitivity of 95.7, as shown in Table 5, and its classification accuracy of 96.5 percent is high compared with the outcomes obtained by LBP and CLDP. The ISIC 2017 dataset's confusion matrix is shown in Table 6: the TPR for benign types is 96%, whereas the TPR for melanoma is 97%. The results obtained using the images from the ISIC 2017 dataset are shown in Table 7.
The classification outcomes for ISIC 2017 are shown in Table 7. For the ISIC 2017 dataset, the suggested method yields a specificity of 98.5 and a sensitivity of 99.9, and its classification accuracy of 95.5 percent is high compared with those obtained by LBP and GLCM [27]. The AUC is calculated as the average of the cross-validation results obtained for each dataset.
Table 8, Table 9, Table 10 and Table 11 show that a high level of classification accuracy is attained when the suggested work is used to classify the Dermnet and DermIS datasets. The stacked RBM in the Deep Belief Network also performs well on unbalanced datasets, and the SDP feature combined with the stacked RBM in the DBN attains the highest classification accuracy. Several descriptors from the literature are substituted for the proposed SDP descriptor, and the results are compared on all of those datasets, as shown in Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13. Compared with the existing feature extraction methods in the literature, the proposed SDP obtains good performance because of its capacity to capture edge information, histogram information from opponent colors, and texture information; SDP more successfully recovers the spatial structure of the texture, edges, and opponent color information while also removing noise. Other feature descriptors, such as LTP, Color SIFT, gradient information, CLDP, color Gabor wavelets, and multi-feature extraction, do not reduce the noise completely [8,18,55,56,58]. In contrast to LBP and the other descriptors used in existing methods for the diagnosis of skin lesions, SDP is likewise insensitive to changes in illumination.
The differential diagnosis of melanoma includes other pigmented lesions such as basal cell carcinoma, Bowen disease, actinic keratosis, and squamous cell carcinoma; these are analyzed on the ISIC 2019 dataset, achieving a TPR of 91% for basal cell carcinoma, 83% for Bowen disease, and 62% for squamous cell carcinoma, as in Supplementary Table S1. The proposed approach is compared with other state-of-the-art approaches in Supplementary Table S2, which measures the effectiveness of various cutting-edge techniques against the reported outcomes. Satheesha et al. [60] suggested 3D skin lesion reconstruction for predicting images using depth and 3-D shape on the datasets used in their research. Bi et al. [61] proposed a multiscale lesion-based representation and classification utilizing a joint reverse approach, using images preprocessed to adjust the contrast. Waheed et al. [62] used color and texture factors as well as contrast adjustments to categorize melanoma. Gutman et al. [45] analyzed skin lesions with a 91.6 percent accuracy rate. Lopez et al. [63] used a CNN [32] to analyze the spatial domain, whereas the suggested method makes use of color, structural, and textural data to produce accurate classification findings. Matsunaga et al. [64] classified data using a DNN; deep learning methods achieve good accuracy for skin cancer classification [65,66]. Khan et al. [67] presented a region-based convolutional neural network using the Newton–Raphson approach. Bajwa et al. [68] implemented a DNN on the DermIS dataset, and Rajinikanth et al. [69] employed the Bat algorithm on the Dermnet dataset. As shown in Table S2, the suggested method using the stacked RBM in the Deep Belief Network delivers higher prediction accuracy than CNN, DNN, and the other cutting-edge methods.

4. Limitations and Future Work

The feature values acquired using feature extraction algorithms are frequently dispersed when images have diverse zoom settings, variable lighting, and different resolutions; in the suggested method, a normalization strategy is used to overcome this issue, with the details of the normalization procedure given in [70]. The stacked RBM in the classification stage generates a reliable and near-optimal prediction. The classification fails when images contain numerous or ambiguous borders: in those circumstances, crucial details are lost, causing the accuracy to drop, as demonstrated by the trials. Principal Component Analysis can be used to counter the growth in the feature vector's dimensionality. The stacked RBM in the DBN produces good results even on unbalanced datasets, and even images of poor quality and low contrast produce excellent results. Few works in the literature discuss the categorization of melanoma versus dysplastic lesions; the proposed work produces superior outcomes in this setting, as seen in the reported SE and SP. SDP is thus well suited for real-time applications because of its high accuracy and low processing complexity. The suggested SDP enhances the image and eliminates potential noise using Difference of Gaussians filters, and the retrieved features are scale- and rotation-invariant, making this feature extraction method superior to those currently in use. However, the Sobel operator's primary shortcoming is its signal-to-noise ratio: as noise levels increase, the gradient magnitude of the edges becomes smaller, which can lead to incorrect results. Fractional-order derivatives combined with better edge detection methods will be explored in the future to address this limitation of SDP. Our proposed model could be implemented by doctors and dermatologists for clinical use. Finally, many challenges and ideas from other domains, such as NLP and image processing, can be investigated by applying hybrid models in the future. The proposed work will be improved by including other fractional-order derivatives to enhance performance, and the proposed algorithms can be adapted to identify the features of a seven-point checklist with pattern analysis for dermoscopically equivocal melanocytic lesions in future works.

5. Conclusions

SDP results from the interaction of color, edge, and texture elements, dynamically acquiring the structural qualities of the image and increasing data discrimination. The results show that, compared with other methods in the literature, the suggested methodology extracts more discriminative information from all of the images. Our findings illustrate that the suggested strategy is a very valuable computational model. The goal of future research is to evaluate this method on additional databases and to integrate new representations for application and investigation in dermoscopy images. As computer vision technology develops, machine learning is gaining popularity as a technique for automated medical image recognition, and machine learning-based skin cancer screening techniques have been presented in past research projects; the procedure proposed here produces high classification accuracy compared to other approaches. SDP and the stacked RBM are employed in this work to categorize skin cancer images, so separate computational techniques are no longer required to extract color, texture, and edge information. Clinicians will benefit from real-time skin cancer diagnosis if the suggested method can be implemented as a smartphone application.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics13061104/s1, Table S1: ISIC 2019 dataset confusion matrix; Table S2: Comparison with other cutting-edge methods.

Author Contributions

A.S.A., J.V.B.B., A.Y.M. and C.C.: conceptualization, methodology, writing—original draft, and visualization; M.B.B.H.: visualization, data curation, project administration, writing—review and editing, software, investigation, and supervision; B.A.H.M., N.A.S. and M.A.: writing—review and editing, formal analysis, and funding acquisition; M.A.A.-a.: validation, visualization, supervision, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R407), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The following datasets are publicly available: (i) BCN_20000 Dataset from the Department of Dermatology, Hospital Clínic de Barcelona; (ii) HAM10000 Dataset from the ViDIR Group, Department of Dermatology, Medical University of Vienna; and (iii) MSK Dataset from Anonymous (accessed in January 2023).

Acknowledgments

The authors would like to express their gratitude to Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R407), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2022-00166402).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ouhtit, A.; Ananthaswamy, H.N. A Model for UV-Induction of Skin Cancer. J. Biomed. Biotechnol. 2001, 2001, 5–6.
2. Akram, T.; Khan, M.A.; Sharif, M.; Yasmin, M. Skin Lesion Segmentation and Recognition Using Multichannel Saliency Estimation and M-SVM on Selected Serially Fused Features. J. Ambient Intell. Humaniz. Comput. 2018.
3. Kasmi, R.; Mokrani, K. Classification of Malignant Melanoma and Benign Skin Lesions: Implementation of Automatic ABCD Rule. IET Image Process. 2016, 10, 448–455.
4. Pereira, P.R.; Odashiro, A.N.; Lim, L.A.; Miyamoto, C.; Blanco, P.L.; Odashiro, M.; Maloney, S.; De Souza, D.F.; Burnier, M.N. Current and Emerging Treatment Options for Uveal Melanoma. Clin. Ophthalmol. 2013, 7, 1669–1682.
5. Harangi, B. Skin Lesion Classification with Ensembles of Deep Convolutional Neural Networks. J. Biomed. Inform. 2018, 86, 25–32.
6. Al-masni, M.A.; Al-antari, M.A.; Choi, M.T.; Han, S.M.; Kim, T.S. Skin Lesion Segmentation in Dermoscopy Images via Deep Full Resolution Convolutional Networks. Comput. Methods Programs Biomed. 2018, 162, 221–231.
7. Zhang, N.; Cai, Y.X.; Wang, Y.Y.; Tian, Y.T.; Wang, X.L.; Badami, B. Skin Cancer Diagnosis Based on Optimized Convolutional Neural Network. Artif. Intell. Med. 2020, 102, 101756.
8. Alphonse, A.S.; Starvin, M.S. A Novel Maximum and Minimum Response-Based Gabor (MMRG) Feature Extraction Method for Facial Expression Recognition. Multimed. Tools Appl. 2019, 78, 23369–23397.
9. Akhtar, F.; Heyat, M.B.B.; Li, J.P.; Patel, P.K.; Rishipal; Guragai, B. Role of Machine Learning in Human Stress: A Review. In Proceedings of the 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 18–21 December 2020; pp. 170–174.
10. Teelhawod, B.N.; Akhtar, F.; Heyat, M.B.B.; Tripathi, P.; Mehrotra, R.; Asfaw, A.B.; Al Shorman, O.; Masadeh, M. Machine Learning in E-Health: A Comprehensive Survey of Anxiety. In Proceedings of the 2021 International Conference on Data Analytics for Business and Industry, ICDABI 2021, Sakheer, Bahrain, 26–27 October 2020; IEEE: Piscataway, NJ, USA, 2021; pp. 167–172.
11. Guragai, B.; Alshorman, O.; Masadeh, M.; Heyat, M.B.B. A Survey on Deep Learning Classification Algorithms for Motor Imagery. In Proceedings of the International Conference on Microelectronics, ICM, Aqaba, Jordan, 14–17 December 2020.
12. Heyat, M.B.B.; Akhtar, F.; Sultana, A.; Tumrani, S.; Teelhawod, B.N.; Abbasi, R.; Amjad Kamal, M.; Muaad, A.Y.; Lai, D.; Wu, K. Role of Oxidative Stress and Inflammation in Insomnia Sleep Disorder and Cardiovascular Diseases: Herbal Antioxidants and Anti-Inflammatory Coupled with Insomnia Detection Using Machine Learning. Curr. Pharm. Des. 2022, 28, 3618–3636.
13. Sultana, A.; Rahman, K.; Heyat, M.B.B.; Sumbul; Akhtar, F.; Muaad, A.Y. Role of Inflammation, Oxidative Stress, and Mitochondrial Changes in Premenstrual Psychosomatic Behavioral Symptoms with Anti-Inflammatory, Antioxidant Herbs, and Nutritional Supplements. Oxid. Med. Cell. Longev. 2022, 2022, 3599246.
14. Sultana, A.; Heyat, M.B.B.; Rahman, K.; Kunnavil, R.; Fazmiya, M.J.A.; Akhtar, F.; Sumbul; Vidal Mazón, J.L.; Rodríguez, C.L.; De La Torre Díez, I. A Systematic Review and Meta-Analysis of Premenstrual Syndrome with Special Emphasis on Herbal Medicine and Nutritional Supplements. Pharmaceuticals 2022, 15, 1371.
15. Sultana, A.; Begum, W.; Saeedi, R.; Rahman, K.; Heyat, M.B.B.; Akhtar, F.; Son, N.T.; Ullah, H. Experimental and Computational Approaches for the Classification and Correlation of Temperament (Mizaj) and Uterine Dystemperament (Su’-I-Mizaj Al-Rahim) in Abnormal Vaginal Discharge (Sayalan Al-Rahim) Based on Clinical Analysis Using Support Vector Mach. Complexity 2022, 2022, 5718501.
16. Garg, N.; Sharma, V.; Kaur, P. Melanoma Skin Cancer Detection Using Image Processing. In Advances in Intelligent Systems and Computing; Springer: Singapore, 2018; Volume 651, pp. 111–119.
17. Mukherjee, S.; Adhikari, A.; Roy, M. Melanoma Identification Using MLP With Parameter Selected by Metaheuristic Algorithms. In Intelligent Innovations in Multimedia Data Engineering and Management; IGI Global: Hershey, PA, USA, 2018; pp. 241–268.
18. Ojala, T.; Valkealahti, K.; Oja, E.; Pietikäinen, M. Texture Discrimination with Multidimensional Distributions of Signed Gray-Level Differences. Pattern Recognit. 2001, 34, 727–739.
19. Krajsek, K.; Mester, R. Signal and Noise Adapted Filters for Differential Motion Estimation. In Proceedings of the Lecture Notes in Computer Science, Vienna, Austria, 31 August–2 September 2005; Volume 3663, pp. 476–484.
20. Ukwuoma, C.C.; Qin, Z.; Heyat, M.B.B.; Akhtar, F.; Bamisile, O.; Muaad, A.Y.; Addo, D.; Al-antari, M.A. A Hybrid Explainable Ensemble Transformer Encoder for Pneumonia Identification from Chest X-ray Images. J. Adv. Res. 2022; in press.
21. Pal, R.; Adhikari, D.; Heyat, M.B.B.; Guragai, B.; Lipari, V.; Brito Ballester, J.; De la Torre Díez, I.; Abbas, Z.; Lai, D. A Novel Smart Belt for Anxiety Detection, Classification, and Reduction Using IIoMT on Students’ Cardiac Signal and MSY. Bioengineering 2022, 9, 793.
22. Heyat, M.B.B.; Lai, D.; Khan, F.I.; Zhang, Y. Sleep Bruxism Detection Using Decision Tree Method by the Combination of C4-P4 and C4-A1 Channels of Scalp EEG. IEEE Access 2019, 7, 102542–102553.
23. Heyat, M.B.B.; Akhtar, F.; Khan, M.H.; Ullah, N.; Gul, I.; Khan, H.; Lai, D. Detection, Treatment Planning, and Genetic Predisposition of Bruxism: A Systematic Mapping Process and Network Visualization Technique. CNS Neurol. Disord.-Drug Targets 2020, 20, 755–775.
24. Chola, C.; Heyat, M.B.B.; Akhtar, F.; Al Shorman, O.; Bibal Benifa, J.V.; Muaad, A.Y.M.; Masadeh, M.; Alkahatni, F. IoT Based Intelligent Computer-Aided Diagnosis and Decision Making System for Health Care. In Proceedings of the 2021 International Conference on Information Technology (ICIT), Amman, Jordan, 14–15 July 2021; pp. 184–189.
25. AlShorman, O.; Masadeh, M.; Heyat, M.B.B.; Akhtar, F.; Almahasneh, H.; Ashraf, G.M.; Alexiou, A. Frontal Lobe Real-Time EEG Analysis Using Machine Learning Techniques for Mental Stress Detection. J. Integr. Neurosci. 2022, 21, 20.
26. Ullah, H.; Heyat, M.B.B.; Akhtar, F.; Muaad, A.Y.; Ukwuoma, C.C.; Bilal, M.; Miraz, M.H.; Bhuiyan, M.A.S.; Wu, K.; Damaševičius, R.; et al. An Automatic Premature Ventricular Contraction Recognition System Based on Imbalanced Dataset and Pre-Trained Residual Network Using Transfer Learning on ECG Signal. Diagnostics 2023, 13, 87.
27. Ali, L.; He, Z.; Cao, W.; Rauf, H.T.; Imrana, Y.; Heyat, M.B.B. MMDD-Ensemble: A Multimodal Data–Driven Ensemble Approach for Parkinson’s Disease Detection. Front. Neurosci. 2021, 15, 754058.
28. Heyat, M.B.B.; Akhtar, F.; Abbas, S.J.; Al-Sarem, M.; Alqarafi, A.; Stalin, A.; Abbasi, R.; Muaad, A.Y.; Lai, D.; Wu, K. Wearable Flexible Electronics Based Cardiac Electrode for Researcher Mental Stress Detection System Using Machine Learning Models on Single Lead Electrocardiogram Signal. Biosensors 2022, 12, 427.
29. Ullah, H.; Heyat, M.B.B.; Alsalman, H.; Khan, H.M.; Akhtar, F.; Gumaei, A.; Mehdi, A.; Muaad, A.Y.; Islam, M.S.; Ali, A.; et al. An Effective and Lightweight Deep Electrocardiography Arrhythmia Recognition Model Using Novel Special and Native Structural Regularization Techniques on Cardiac Signal. J. Healthc. Eng. 2022, 2022, 3408501.
30. Heyat, M.B.B.; Akhtar, F.; Ansari, M.A.; Khan, A.; Alkahtani, F.; Khan, H.; Lai, D. Progress in Detection of Insomnia Sleep Disorder: A Comprehensive Review. Curr. Drug Targets 2020, 22, 672–684.
31. Chola, C.; Mallikarjuna, P.; Muaad, A.Y.; Bibal Benifa, J.V.; Hanumanthappa, J.; Al-antari, M.A. A Hybrid Deep Learning Approach for COVID-19 Diagnosis via CT and X-ray Medical Images. Comput. Sci. Math. Forum 2022, 2, 13.
32. Mehrrotraa, R.; Ansari, M.A.; Agrawal, R.; Tripathi, P.; Heyat, M.B.B.; Al-Sarem, M.; Muaad, A.Y.M.; Nagmeldin, W.A.E.; Abdelmaboud, A.; Saeed, F. Ensembling of Efficient Deep Convolutional Networks and Machine Learning Algorithms for Resource Effective Detection of Tuberculosis Using Thoracic (Chest) Radiography. IEEE Access 2022, 10, 85442–85458.
33. Chola, C.; Benifa, J.V.B.; Guru, D.S.; Muaad, A.Y.; Hanumanthappa, J.; Al-antari, M.A.; Alsalman, H.; Gumaei, A.H. Gender Identification and Classification of Drosophila Melanogaster Flies Using Machine Learning Techniques. Comput. Math. Methods Med. 2022, 2022, 4593330.
34. Ukwuoma, C.C.; Zhiguang, Q.; Heyat, M.B.B.; Mohammed Khan, H.; Akhtar, F.; Masadeh, M.S.; Bamisile, O.; Alshorman, O.; Nneji, G.U. Detection of Oral Cavity Squamous Cell Carcinoma from Normal Epithelium of the Oral Cavity Using Microscopic Images. In Proceedings of the 2022 International Conference on Decision Aid Sciences and Applications, DASA, Chiangrai, Thailand, 23–25 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 29–36.
35. Ullah, H.; Heyat, M.B.B.; Akhtar, F.; Sumbul; Muaad, A.Y.; Islam, M.S.; Abbas, Z.; Pan, T.; Gao, M.; Lin, Y.; et al. An End-to-End Cardiac Arrhythmia Recognition Method with an Effective DenseNet Model on Imbalanced Datasets Using ECG Signal. Comput. Intell. Neurosci. 2022, 2022, 9475162.
36. Ukwuoma, C.C.; Qin, Z.; Heyat, M.B.B.; Akhtar, F.; Smahi, A.; Jackson, J.K.; Furqan Qadri, S.; Muaad, A.Y.; Monday, H.N.; Nneji, G.U. Automated Lung-Related Pneumonia and COVID-19 Detection Based on Novel Feature Extraction Framework and Vision Transformer Approaches Using Chest X-Ray Images. Bioengineering 2022, 9, 709.
37. Chola, C.; Muaad, A.Y.; Heyat, M.B.B.; Benifa, J.V.B.; Naji, W.R.; Hemachandran, K.; Mahmoud, N.F.; Samee, N.A.; Al-Antari, M.A.; Kadah, Y.M.; et al. BCNet: A Deep Learning Computer-Aided Diagnosis Framework for Human Peripheral Blood Cell Identification. Diagnostics 2022, 12, 2815.
38. Tripathi, P.; Ansari, M.A.; Akhtar, F.; Heyat, M.B.B.; Mehrotra, R.; Yatoo, A.H.; Teelhawod, B.N.; Asfaw, A.B.; Baig, A.A. Automatic Epileptic Seizure Detection Based on the Discrete Wavelet Transform Approach Using an Artificial Neural Network Classifier on the Scalp Electroencephalogram Signal. Comput. Intell. Healthc. Appl. 2022, 157–173.
39. Gonzalez-Castro, V.; Debayle, J.; Wazaefi, Y.; Rahim, M.; Gaudy-Marqueste, C.; Grob, J.-J.; Fertil, B. Automatic Classification of Skin Lesions Using Color Mathematical Morphology-Based Texture Descriptors. In Proceedings of the Twelfth International Conference on Quality Control by Artificial Vision, Le Creusot, France, 3–5 June 2015; Volume 9534, p. 953409.
40. Hagerty, J.R.; Stanley, R.J.; Almubarak, H.A.; Lama, N.; Kasmi, R.; Guo, P.; Drugge, R.J.; Rabinovitz, H.S.; Oliviero, M.; Stoecker, W.V. Deep Learning and Handcrafted Method Fusion: Higher Diagnostic Accuracy for Melanoma Dermoscopy Images. IEEE J. Biomed. Health Inform. 2019, 23, 1385–1391.
41. Kaur, R.K.G. Skin Cancer? Melanoma Detection in Skin Images Using Local Binary Pattern (LBP) and GLCM. Int. J. Sci. Res. 2015, 4, 134–139.
42. Iqbal, M.S.; Abbasi, R.; Heyat, M.B.B.; Akhtar, F.; Abdelgeliel, A.S.; Albogami, S.; Fayad, E.; Iqbal, M.A. Recognition of MRNA N4 Acetylcytidine (Ac4C) by Using Non-Deep vs. Deep Learning. Appl. Sci. 2022, 12, 1344.
43. Nawabi, A.K.; Jinfang, S.; Abbasi, R.; Iqbal, M.S.; Heyat, M.B.B.; Akhtar, F.; Wu, K.; Twumasi, B.A. Segmentation of Drug-Treated Cell Image and Mitochondrial-Oxidative Stress Using Deep Convolutional Neural Network. Oxid. Med. Cell. Longev. 2022, 2022, 5641727.
44. Mendonca, T.; Ferreira, P.M.; Marques, J.S.; Marcal, A.R.S.; Rozeira, J. PH2—A Dermoscopic Image Database for Research and Benchmarking. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Osaka, Japan, 3–7 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 5437–5440.
45. Codella, N.C.F.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin Lesion Analysis toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), Hosted by the International Skin Imaging Collaboration (ISIC). In Proceedings of the International Symposium on Biomedical Imaging, Washington, DC, USA, 4–7 April 2018; Volume 2018, pp. 168–172.
46. Lee, T.; Ng, V.; Gallagher, R.; Coldman, A.; McLean, D. Dullrazor®: A Software Approach to Hair Removal from Images. Comput. Biol. Med. 1997, 27, 533–543.
47. Kumar, S.; Negi, A.; Singh, J.N.; Verma, H. A Deep Learning for Brain Tumor Mri Images Semantic Segmentation Using FCN. In Proceedings of the 2018 4th International Conference on Computing Communication and Automation, ICCCA, Greater Noida, India, 14–15 December 2018.
48. Alphonse, A.S.; Dharma, D. Enhanced Gabor (E-Gabor), Hypersphere-Based Normalization and Pearson General Kernel-Based Discriminant Analysis for Dimension Reduction and Classification of Facial Emotions. Expert Syst. Appl. 2017, 90, 127–145.
49. Lavín-Delgado, J.E.; Solís-Pérez, J.E.; Gómez-Aguilar, J.F.; Escobar-Jiménez, R.F. A New Fractional-Order Mask for Image Edge Detection Based on Caputo–Fabrizio Fractional-Order Derivative Without Singular Kernel. Circuits Syst. Signal Process. 2020, 39, 1419–1448.
50. Said, R.R.; Heyat, M.B.B.; Song, K.; Tian, C.; Wu, Z. A Systematic Review of Virtual Reality and Robot Therapy as Recent Rehabilitation Technologies Using EEG-Brain–Computer Interface Based on Movement-Related Cortical Potentials. Biosensors 2022, 12, 1134.
51. Ukwuoma, C.C.; Heyat, M.B.B.; Masadeh, M.; Akhtar, F.; Zhiguang, Q.; Bondzie-Selby, E.; Alshorman, O.; Alkahtani, F. Image Inpainting and Classification Agent Training Based on Reinforcement Learning and Generative Models with Attention Mechanism. In Proceedings of the International Conference on Microelectronics, ICM, Cairo, Egypt, 19–22 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 96–101.
52. Ukwuoma, C.C.; Urama, G.C.; Qin, Z.; Heyat, M.B.B.; Mohammed Khan, H.; Akhtar, F.; Masadeh, M.S.; Ibegbulam, C.S.; Delali, F.L.; Alshorman, O. Boosting Breast Cancer Classification from Microscopic Images Using Attention Mechanism. In Proceedings of the 2022 International Conference on Decision Aid Sciences and Applications, DASA, Chiangrai, Thailand, 23–25 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 258–264.
53. Tripathi, P.; Ansari, M.A.; Gandhi, T.K.; Mehrotra, R.; Heyat, M.B.B.; Akhtar, F.; Ukwuoma, C.C.; Muaad, A.Y.M.; Kadah, Y.M.; Al-Antari, M.A.; et al. Ensemble Computational Intelligent for Insomnia Sleep Stage Detection via the Sleep ECG Signal. IEEE Access 2022, 10, 108710–108721.
54. Qayyum, S.; Sultana, A.; Heyat, M.B.B.; Rahman, K.; Akhtar, F.; ul Haq, A.; Alkhamis, B.A.; Alqahtani, M.A.; Gahtani, R.M. Therapeutic Efficacy of a Formulation Prepared with Linum usitatissimum L., Plantago Ovata Forssk., and Honey on Uncomplicated Pelvic Inflammatory Disease Analyzed with Machine Learning Techniques. Pharmaceutics 2023, 15, 643.
55. Warsi, F.; Khanam, R.; Kamya, S.; Suárez-Araujo, C.P. An Efficient 3D Color-Texture Feature and Neural Network Technique for Melanoma Detection. Inform. Med. Unlocked 2019, 17, 100176.
56. Saba, T. Computer Vision for Microscopic Skin Cancer Diagnosis Using Handcrafted and Non-Handcrafted Features. Microsc. Res. Tech. 2021, 84, 1272–1283.
57. Jain, S.; Jagtap, V.; Pise, N. Efficient Gabor Filter Using Vedic Mathematic for High Speed Convolution in Skin Cancer Detection. In Proceedings of the 1st International Conference on Computing, Communication, Control and Automation, ICCUBEA, Pune, India, 26–27 February 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 800–804.
58. Gautam, A.; Raman, B. Towards Accurate Classification of Skin Cancer from Dermatology Images. IET Image Process. 2021, 15, 1971–1986.
59. Novitasari, D.C.R.; Asyhar, A.H.; Thohir, M.; Arifin, A.Z.; Mu’jizah, H.; Foeady, A.Z. Cervical Cancer Identification Based Texture Analysis Using GLCM-KELM on Colposcopy Data. In Proceedings of the 2020 International Conference on Artificial Intelligence in Information and Communication, ICAIIC, Fukuoka, Japan, 19–21 February 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 409–414.
60. Satheesha, T.Y.; Satyanarayana, D.; Prasad, M.N.G.; Dhruve, K.D. Melanoma Is Skin Deep: A 3D Reconstruction Technique for Computerized Dermoscopic Skin Lesion Classification. IEEE J. Transl. Eng. Health Med. 2017, 5, 4300117.
61. Bi, L.; Kim, J.; Ahn, E.; Feng, D.; Fulham, M. Automatic Melanoma Detection via Multi-Scale Lesion-Biased Representation and Joint Reverse Classification. In Proceedings of the International Symposium on Biomedical Imaging, Prague, Czech Republic, 13–16 April 2016; IEEE: Piscataway, NJ, USA, 2016; Volume 2016, pp. 1055–1058.
62. Waheed, Z.; Waheed, A.; Zafar, M.; Riaz, F. An Efficient Machine Learning Approach for the Detection of Melanoma Using Dermoscopic Images. In Proceedings of the 2017 International Conference on Communication, Computing and Digital Systems, C-CODE, Islamabad, Pakistan, 8–9 March 2017; pp. 316–319.
63. Romero Lopez, A.; Giro-I-Nieto, X.; Burdick, J.; Marques, O. Skin Lesion Classification from Dermoscopic Images Using Deep Learning Techniques. In Proceedings of the 13th IASTED International Conference on Biomedical Engineering, BioMed 2017, Innsbruck, Austria, 20–21 February 2017; ACTAPRESS: Calgary, AB, Canada, 2017; pp. 49–54.
64. Matsunaga, K.; Hamada, A.; Minagawa, A.; Koga, H. Image Classification of Melanoma, Nevus and Seborrheic Keratosis by Deep Neural Network Ensemble. arXiv 2017, arXiv:1703.03108.
65. Jiang, S.; Li, H.; Jin, Z. A Visually Interpretable Deep Learning Framework for Histopathological Image-Based Skin Cancer Diagnosis. IEEE J. Biomed. Health Inform. 2021, 25, 1483–1494.
66. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods. IEEE Access 2019, 8, 4171–4181.
67. Khan, M.A.; Sharif, M.; Akram, T.; Bukhari, S.A.C.; Nayak, R.S. Developed Newton-Raphson Based Deep Features Selection Framework for Skin Lesion Recognition. Pattern Recognit. Lett. 2020, 129, 293–303.
68. Bajwa, M.N.; Muta, K.; Malik, M.I.; Siddiqui, S.A.; Braun, S.A.; Homey, B.; Dengel, A.; Ahmed, S. Computer-Aided Diagnosis of Skin Diseases Using Deep Neural Networks. Appl. Sci. 2020, 10, 2488.
69. Rajinikanth, V.; Satapathy, S.C.; Dey, N.; Fernandes, S.L.; Manic, K.S. Skin Melanoma Assessment Using Kapur’s Entropy and Level Set—A Study with Bat Algorithm. In Smart Innovation, Systems and Technologies; Springer: Singapore, 2019; Volume 104, pp. 193–202. ISBN 9789811319204.
70. Rebouças Filho, P.P.; Peixoto, S.A.; Medeiros da Nóbrega, R.V.; Hemanth, D.J.; Medeiros, A.G.; Sangaiah, A.K.; de Albuquerque, V.H.C. Automatic Histologically-Closer Classification of Skin Lesions. Comput. Med. Imaging Graph. 2018, 68, 40–54.
Figure 1. Samples of utilized images in different datasets.
Figure 2. Proposed framework of the current study.
Figure 3. Stages of hair removal after applying the DullRazor algorithm.
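As a reproducibility aid, the sketch below approximates the hair-removal stages of Figure 3 using standard OpenCV operations: black-hat filtering to highlight thin dark hairs, thresholding to build a binary hair mask, and inpainting to fill the masked pixels. This is a minimal approximation of the DullRazor idea [46], not the original implementation; the kernel size and threshold are assumed values.

```python
import cv2

def remove_hair(image_bgr, kernel_size=17, threshold=10):
    """Approximate DullRazor-style hair removal on a color skin image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Black-hat filtering highlights thin dark structures such as hairs.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Threshold the black-hat response to obtain a binary hair mask.
    _, hair_mask = cv2.threshold(blackhat, threshold, 255, cv2.THRESH_BINARY)
    # Replace masked pixels by interpolating from the surrounding skin.
    return cv2.inpaint(image_bgr, hair_mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

# Example usage (the file path is hypothetical):
# clean = remove_hair(cv2.imread("lesion.jpg"))
```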
Figure 4. An example of a segmented skin image.
Figure 5. The adapted Sobel representation.
Figure 6. Left fractional-order Sobel mask.
Figure 7. Right fractional-order Sobel mask.
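The exact adapted and fractional-order Sobel masks of Figures 5–7 are specified in the methods section. Purely as an illustration of the general mechanism behind directional patterns such as the proposed SDP, the sketch below quantizes standard 3 × 3 Sobel gradient orientations into per-pixel direction codes; the eight-bin quantization and kernel choice are assumptions, not the paper's exact construction.

```python
import cv2
import numpy as np

def sobel_direction_codes(gray, n_bins=8):
    """Assign each pixel a code by quantizing its Sobel gradient orientation."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    angle = np.mod(np.arctan2(gy, gx), 2 * np.pi)     # orientation in [0, 2*pi)
    return np.floor(angle / (2 * np.pi / n_bins)).astype(np.uint8)

def code_histogram(codes, n_bins=8):
    """Normalized histogram of direction codes, usable as a texture feature."""
    hist = np.bincount(codes.ravel(), minlength=n_bins).astype(np.float64)
    return hist / hist.sum()

# Example usage on a segmented ROI (the file path is hypothetical):
# roi = cv2.imread("roi.png", cv2.IMREAD_GRAYSCALE)
# feature = code_histogram(sobel_direction_codes(roi))
```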
Figure 8. Sobel code images from the PH2 dataset; the image resolution is 238.2 pixels/mm.
Figure 9. Comparison of performance using various feature descriptors in PH2.
Figure 10. Comparison of performance using various features in ISIC 2016.
Figure 11. Comparison of performance using various features in ISIC 2017.
Figure 12. Comparison of performance using various features in DermIS.
Figure 13. Comparison of performance using various features in Dermnet.
Table 1. The hyper-parameters in RBM.

| Hyper-parameter | Value |
|---|---|
| Learning rate | 0.1 |
| Momentum | 0.7 |
| Maximum number of training epochs | 300 |
| Batch size | 25 |
| Gap-stop delay | 3 |
| Momentum delay | 0.7 |
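For readers implementing the classifier, the following minimal NumPy sketch shows how a single RBM layer could be trained with one-step contrastive divergence (CD-1) under the Table 1 settings. It is an illustrative sketch rather than the authors' implementation: the layer sizes in the stacking comment are assumptions, the gap-stop early-stopping delay is omitted for brevity, and the inputs are assumed to be scaled to [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, momentum=0.7, epochs=300, batch_size=25):
    """Train one RBM layer with CD-1 using the Table 1 hyper-parameters."""
    n_visible = data.shape[1]
    w = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    dw = np.zeros_like(w)
    for _ in range(epochs):
        for i in range(0, len(data), batch_size):
            v0 = data[i:i + batch_size]
            h0 = sigmoid(v0 @ w + b_h)                        # positive phase
            h_sample = (h0 > rng.random(h0.shape)).astype(float)
            v1 = sigmoid(h_sample @ w.T + b_v)                # reconstruction
            h1 = sigmoid(v1 @ w + b_h)                        # negative phase
            grad = (v0.T @ h0 - v1.T @ h1) / len(v0)          # CD-1 gradient
            dw = momentum * dw + lr * grad                    # momentum update
            w += dw
            b_v += lr * (v0 - v1).mean(axis=0)
            b_h += lr * (h0 - h1).mean(axis=0)
    return w, b_v, b_h

# Stacking (layer sizes are assumptions): the hidden activations of one
# trained RBM become the visible input of the next layer.
# w1, _, bh1 = train_rbm(sdp_features, n_hidden=256)
# w2, _, bh2 = train_rbm(sigmoid(sdp_features @ w1 + bh1), n_hidden=128)
```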
Table 2. PH2 dataset confusion matrix.

| Type | Total Test Images | Atypical Nevus (%) | Common Nevus (%) | Melanoma (%) | True Positive Rate (TPR) (%) |
|---|---|---|---|---|---|
| Atypical Nevus | 40 | 99 | 0 | 1 | 99 |
| Common Nevus | 40 | 0 | 100 | 0 | 100 |
| Melanoma | 20 | 4 | 0 | 96 | 96 |
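The TPR column of Table 2 is simply the diagonal of the row-normalized confusion matrix; the short check below reproduces it and computes the test-count-weighted mean TPR over the 100 PH2 test images as a sanity check.

```python
import numpy as np

# Table 2 rows as percentages of each class's test images
# (order: atypical nevus, common nevus, melanoma).
conf_pct = np.array([[99.0,   0.0,  1.0],
                     [ 0.0, 100.0,  0.0],
                     [ 4.0,   0.0, 96.0]])
n_test = np.array([40, 40, 20])

# The TPR column is the diagonal of the row-normalized confusion matrix.
tpr = np.diag(conf_pct)                       # -> [99., 100., 96.]
# Test-count-weighted mean TPR over all 100 PH2 test images.
mean_tpr = np.average(tpr, weights=n_test)    # -> 98.8
print(tpr, mean_tpr)
```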
Table 3. Classification evaluation performance for the PH2 dataset.

| Descriptors | Classifier | SP | SE | PP | F-Score | HM | ACC ± SD |
|---|---|---|---|---|---|---|---|
| LBP | K-NN | 97.0 | 92.4 | 80.1 | 88.4 | 85.4 | 89.3 ± 1.23 |
| | Bayes | 88.2 | 85.2 | 83.4 | 82.1 | 88.2 | 84.8 ± 0.25 |
| | LS-SVM | 92.7 | 92.5 | 88.3 | 92.5 | 92.1 | 92.8 ± 1.26 |
| | ELM | 89.2 | 82.3 | 93.2 | 82.1 | 79.0 | 91.9 ± 3.45 |
| | MLP | 91.9 | 78.2 | 88.3 | 82.1 | 78.1 | 78.2 ± 4.53 |
| | Stacked RBM | 99.2 | 99.4 | 96.3 | 97.4 | 96.6 | 81.1 ± 4.27 |
| CLDP | K-NN | 98.2 | 96.4 | 98.1 | 98.3 | 98.4 | 98.33 ± 0.44 |
| | Bayes | 90.1 | 87.6 | 89.4 | 87.8 | 88.2 | 87.2 ± 2.34 |
| | LS-SVM (RBF) | 94.7 | 92.9 | 90.3 | 94.5 | 94.1 | 94.8 ± 0.77 |
| | ELM (RBF) | 91.2 | 82.3 | 93.2 | 82.1 | 79.0 | 91.9 ± 0.83 |
| | MLP | 90.9 | 78.2 | 88.3 | 82.1 | 78.1 | 78.2 ± 2.34 |
| | Stacked RBM | 98.5 | 97.8 | 98.6 | 98.6 | 98.4 | 98.8 ± 4.03 |
| Proposed SDP | K-NN | 97.2 | 95.4 | 97.1 | 97.3 | 98.4 | 98.3 ± 2.24 |
| | Bayes | 91.2 | 87.2 | 89.1 | 86.5 | 87.6 | 87.1 ± 4.23 |
| | LS-SVM (RBF) | 94.7 | 89.9 | 89.3 | 93.5 | 93.1 | 93.8 ± 0.23 |
| | ELM (RBF) | 91.2 | 81.3 | 93.2 | 82.1 | 79.0 | 91.9 ± 4.03 |
| | MLP | 90.9 | 77.2 | 89.3 | 82.1 | 78.1 | 78.2 ± 5.23 |
| | Stacked RBM | 99.6 | 98.8 | 99.6 | 99.6 | 99.4 | 99.8 ± 1.20 |
Table 4. ISIC 2016 dataset confusion matrix.

| Type | Total Test Images | Benign (%) | Melanoma (%) | TPR (%) |
|---|---|---|---|---|
| Benign | 250 | 100 | 0 | 100 |
| Melanoma | 50 | 6 | 94 | 94 |
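The metric columns in Tables 3, 5, 7, 9 and 11 (SP, SE, PP, F-score, HM) are not redefined in this back matter. The sketch below assumes the standard binary definitions, with HM read, as an assumption, as the harmonic mean of specificity and sensitivity; for the three-class datasets the values would be macro-averaged per class. The worked counts come from the melanoma row of Table 4.

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Standard binary classification metrics, reported in percent."""
    se = 100 * tp / (tp + fn)        # SE: sensitivity (recall, TPR)
    sp = 100 * tn / (tn + fp)        # SP: specificity
    pp = 100 * tp / (tp + fp)        # PP: precision (positive predictive value)
    f1 = 2 * pp * se / (pp + se)     # F-score: harmonic mean of PP and SE
    hm = 2 * sp * se / (sp + se)     # HM (assumed): harmonic mean of SP and SE
    acc = 100 * (tp + tn) / (tp + fp + tn + fn)
    return sp, se, pp, f1, hm, acc

# Table 4 counts: 50 melanoma images (94% TPR -> 47 hits, 3 misses) and
# 250 benign images all classified correctly (tn = 250, fp = 0).
print(binary_metrics(tp=47, fp=0, tn=250, fn=3))
# -> SE = 94.0 and SP = 100.0, matching the Table 4 TPR values.
```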
Table 5. Classification outcomes for the ISIC 2016 images.

| Descriptor | Classifier | SP | SE | PP | F1-Score | HM | ACC ± SD |
|---|---|---|---|---|---|---|---|
| LBP | K-NN | 83.0 | 82.4 | 81.1 | 76.3 | 71.3 | 71.3 ± 1.42 |
| | Bayes | 71.2 | 78.2 | 76.4 | 72.1 | 72.2 | 76.3 ± 0.44 |
| | LS-SVM (RBF) | 72.7 | 80.3 | 82.3 | 84.8 | 83.1 | 86.8 ± 1.45 |
| | ELM (RBF) | 71.2 | 82.1 | 84.2 | 82.5 | 68.0 | 68.5 ± 2.54 |
| | MLP | 71.9 | 79.2 | 89.3 | 82.1 | 87.1 | 79.2 ± 1.98 |
| | Stacked RBM | 90.2 | 90.4 | 86.3 | 86.4 | 84.6 | 81.1 ± 7.14 |
| CLDP | K-NN | 88.2 | 86.4 | 90.1 | 98.3 | 88.4 | 90.3 ± 1.45 |
| | Bayes | 80.0 | 80.2 | 89.0 | 87.9 | 88.3 | 87.0 ± 5.44 |
| | LS-SVM (RBF) | 74.7 | 82.5 | 90.3 | 90.5 | 91.1 | 91.8 ± 2.57 |
| | ELM (RBF) | 81.2 | 72.3 | 90.2 | 82.1 | 79.0 | 91.9 ± 4.54 |
| | MLP | 80.9 | 78.2 | 88.3 | 82.1 | 78.1 | 78.2 ± 12.32 |
| | Stacked RBM | 88.5 | 92.1 | 93.3 | 92.6 | 93.0 | 95.5 ± 2.54 |
| Proposed SDP | K-NN | 80.2 | 85.4 | 91.1 | 99.3 | 89.4 | 92.3 ± 2.67 |
| | Bayes | 81.0 | 81.2 | 88.0 | 88.9 | 87.3 | 88.0 ± 5.54 |
| | LS-SVM (RBF) | 72.7 | 83.5 | 91.3 | 92.5 | 90.1 | 92.8 ± 2.22 |
| | ELM (RBF) | 83.2 | 71.3 | 91.2 | 83.1 | 77.0 | 90.9 ± 0.52 |
| | MLP | 81.9 | 79.2 | 89.3 | 81.1 | 76.1 | 79.2 ± 0.54 |
| | Stacked RBM | 92.5 | 95.7 | 95.3 | 95.6 | 95.2 | 96.5 ± 2.54 |
Table 6. ISIC 2017 dataset confusion matrix.

| Type | Total Test Images | Benign (%) | Melanoma (%) | TPR (%) |
|---|---|---|---|---|
| Benign | 308 | 96 | 4 | 96 |
| Melanoma | 58 | 3 | 97 | 97 |
Table 7. Classification outcomes for the ISIC 2017 images.

| Descriptors | Classifier | SP | SE | PP | F-Score | HM | ACC ± SD |
|---|---|---|---|---|---|---|---|
| LBP | K-NN | 71.0 | 62.4 | 65.1 | 66.3 | 62.3 | 60.3 ± 6.29 |
| | Bayes | 61.2 | 68.2 | 66.4 | 62.1 | 63.2 | 65.3 ± 4.22 |
| | LS-SVM (RBF) | 61.7 | 63.3 | 61.3 | 64.8 | 64.1 | 64.8 ± 2.29 |
| | ELM (RBF) | 62.2 | 64.0 | 64.2 | 62.3 | 59.0 | 51.5 ± 4.27 |
| | MLP | 64.5 | 69.2 | 67.3 | 62.1 | 68.0 | 68.2 ± 0.29 |
| | Stacked RBM | 79.2 | 79.4 | 69.3 | 69.3 | 75.6 | 80.1 ± 2.34 |
| CLDP | K-NN | 65.2 | 66.4 | 70.1 | 88.3 | 78.4 | 70.3 ± 5.45 |
| | Bayes | 60.0 | 60.2 | 79.0 | 77.9 | 78.3 | 77.0 ± 2.29 |
| | LS-SVM (RBF) | 61.7 | 62.5 | 70.3 | 80.5 | 81.1 | 71.8 ± 0.88 |
| | ELM (RBF) | 61.2 | 72.3 | 60.2 | 72.1 | 69.0 | 71.9 ± 6.23 |
| | MLP | 70.9 | 72.2 | 68.3 | 72.1 | 68.1 | 68.2 ± 0.29 |
| | Stacked RBM | 96.5 | 98.1 | 80.3 | 82.7 | 83.2 | 94.5 ± 1.28 |
| Proposed SDP | K-NN | 66.2 | 77.4 | 72.1 | 88.3 | 78.4 | 70.3 ± 5.29 |
| | Bayes | 61.0 | 72.2 | 79.0 | 77.9 | 78.3 | 77.0 ± 4.29 |
| | LS-SVM (RBF) | 62.7 | 74.5 | 72.3 | 80.5 | 81.1 | 71.8 ± 6.23 |
| | ELM (RBF) | 63.2 | 73.3 | 62.2 | 72.1 | 69.0 | 71.9 ± 6.28 |
| | MLP | 72.9 | 72.2 | 68.3 | 72.1 | 68.1 | 68.2 ± 4.54 |
| | Stacked RBM | 98.5 | 99.9 | 82.3 | 84.7 | 85.2 | 95.5 ± 2.12 |
Table 8. DermIS dataset confusion matrix.

| Type | Total Test Images | Malignant Melanoma (%) | Seborrheic Keratosis (%) | Lupus Erythematosus (%) | TPR (%) |
|---|---|---|---|---|---|
| Malignant Melanoma | 18 | 97 | 2 | 1 | 97 |
| Seborrheic Keratosis | 48 | 0 | 99 | 1 | 99 |
| Lupus Erythematosus | 24 | 3 | 0 | 97 | 97 |
Table 9. Classification outcomes for the DermIS images.

| Descriptors | Classifier | SP | SE | PP | F-Score | HM | ACC ± SD |
|---|---|---|---|---|---|---|---|
| LBP | K-NN | 75.6 | 61.4 | 6.1 | 66.3 | 62.3 | 60.3 ± 0.22 |
| | Bayes | 62.4 | 67.2 | 65.4 | 62.1 | 63.2 | 65.3 ± 2.22 |
| | LS-SVM (RBF) | 65.8 | 62.3 | 60.3 | 63.8 | 63.1 | 65.8 ± 4.22 |
| | ELM (RBF) | 64.2 | 62.0 | 63.2 | 76.3 | 65.0 | 61.5 ± 2.67 |
| | MLP | 65.5 | 68.2 | 68.3 | 62.1 | 68.0 | 68.2 ± 8.75 |
| | Stacked RBM | 69.9 | 79.4 | 67.3 | 68.3 | 77.6 | 83.1 ± 9.52 |
| CLDP | K-NN | 68.2 | 66.4 | 70.1 | 88.3 | 88.4 | 80.3 ± 0.29 |
| | Bayes | 56.0 | 90.2 | 89.0 | 67.9 | 88.3 | 87.0 ± 9.29 |
| | LS-SVM (RBF) | 82.7 | 92.5 | 90.1 | 90.1 | 81.1 | 81.8 ± 2.29 |
| | ELM (RBF) | 89.2 | 72.3 | 90.2 | 79.1 | 89.0 | 71.9 ± 0.44 |
| | MLP | 90.9 | 92.2 | 88.3 | 92.1 | 88.1 | 68.2 ± 0.56 |
| | Stacked RBM | 95.9 | 99.9 | 90.3 | 92.7 | 83.2 | 96.6 ± 3.44 |
| Proposed SDP | K-NN | 68.2 | 66.4 | 70.1 | 88.3 | 88.4 | 80.3 ± 6.29 |
| | Bayes | 55.0 | 95.2 | 89.0 | 68.9 | 89.3 | 87.0 ± 5.44 |
| | LS-SVM (RBF) | 85.7 | 98.5 | 97.1 | 92.1 | 89.1 | 81.8 ± 5.67 |
| | ELM (RBF) | 89.2 | 72.3 | 92.2 | 79.1 | 89.0 | 72.9 ± 0.29 |
| | MLP | 91.9 | 93.2 | 89.3 | 93.1 | 89.1 | 68.2 ± 4.54 |
| | Stacked RBM | 96.9 | 99.9 | 92.3 | 94.7 | 84.2 | 97.6 ± 3.67 |
Table 10. Dermnet dataset confusion matrix.

| Type | Total Test Images | Melanoma and Melanocytic Nevi (%) | Seborrhoeic Keratosis and Other Benign Tumors (%) | Common Warts, Molluscum Contagiosum and Other (%) | TPR (%) |
|---|---|---|---|---|---|
| Melanoma and Melanocytic Nevi | 635 | 89 | 10 | 1 | 89 |
| Seborrhoeic Keratosis and Other Benign Tumors | 2397 | 10 | 87 | 3 | 87 |
| Common Warts, Molluscum Contagiosum and Other | 1549 | 9.5 | 1.5 | 89 | 89 |
Table 11. Classification outcomes of Dermnet.

| Descriptors | Classifier | SP | SE | PP | F-Score | HM | ACC ± SD |
|---|---|---|---|---|---|---|---|
| LBP | K-NN | 72.3 | 61.4 | 69.1 | 66.3 | 62.3 | 60.3 ± 5.34 |
| | Bayes | 63.9 | 60.2 | 64.4 | 62.1 | 62.2 | 65.3 ± 3.87 |
| | LS-SVM (RBF) | 64.8 | 62.3 | 63.3 | 64.5 | 64.1 | 62.8 ± 0.29 |
| | ELM (RBF) | 64.2 | 67.0 | 61.4 | 76.3 | 69.0 | 62.5 ± 0.32 |
| | MLP | 65.5 | 62.2 | 64.3 | 62.1 | 68.0 | 65.2 ± 0.67 |
| | Stacked RBM | 76.9 | 78.4 | 75.2 | 76.3 | 75.6 | 80.1 ± 4.53 |
| CLDP | K-NN | 67.2 | 66.4 | 70.1 | 78.3 | 78.4 | 80.3 ± 6.34 |
| | Bayes | 72.0 | 70.2 | 78.0 | 67.9 | 78.3 | 77.0 ± 3.45 |
| | LS-SVM (RBF) | 72.7 | 81.5 | 70.1 | 71.1 | 81.1 | 71.8 ± 6.78 |
| | ELM (RBF) | 73.2 | 62.3 | 70.2 | 69.1 | 79.0 | 71.9 ± 5.89 |
| | MLP | 70.9 | 72.2 | 78.3 | 72.1 | 78.1 | 68.2 ± 0.54 |
| | Stacked RBM | 98.9 | 87.8 | 86.1 | 88.5 | 85.2 | 87.0 ± 6.29 |
| Proposed SDP | K-NN | 68.2 | 66.4 | 70.1 | 78.3 | 78.4 | 80.3 ± 3.22 |
| | Bayes | 73.0 | 70.2 | 79.0 | 67.9 | 78.3 | 77.0 ± 2.23 |
| | LS-SVM (RBF) | 71.7 | 80.5 | 70.1 | 70.1 | 81.1 | 71.8 ± 5.67 |
| | ELM (RBF) | 71.2 | 62.3 | 70.2 | 69.1 | 79.0 | 71.9 ± 5.66 |
| | MLP | 71.9 | 72.2 | 78.3 | 72.1 | 78.1 | 68.2 ± 0.23 |
| | Stacked RBM | 99.9 | 88.3 | 86.3 | 88.7 | 85.2 | 87.9 ± 2.33 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
