Abstract

Breast cancer is the most prevalent disease among women and remains one of the leading causes of death among them, despite substantial efforts to prevent it through screening programs. An automatic detection system helps doctors identify the disease and deliver accurate results, thereby reducing the mortality rate. Computer-aided diagnosis (CAD) requires minimal human intervention and produces more accurate results than manual assessment, which is a long and demanding task that depends on the expertise of pathologists. Deep learning methods have been shown to outperform conventional machine learning and to extract the most informative features from images. The main objective of this paper is to propose a deep learning technique that combines a convolutional neural network (CNN) and long short-term memory (LSTM) with a random forest algorithm to diagnose breast cancer. Here, the CNN is used for feature extraction, and the LSTM is used for classification of the extracted features. The experimental results show that the proposed system achieves 100% accuracy, a sensitivity of 99%, a recall of 99%, and an F1-score of 98%, outperforming other traditional models. Because the system achieves such accurate results, it can help doctors investigate breast cancer more easily.

1. Introduction

Breast cancer is among the deadliest diseases affecting women worldwide and is a leading cause of cancer deaths. Women may develop tumors due to abnormal growth of breast cells, and these masses are divided into cancerous and noncancerous types depending on their region, size, and location. The term “benign” refers to a noncancerous tumor confined to its original area, while “malignant” refers to a cancerous tumor that spreads to secondary areas. Benign tumors do not put women's lives at risk because they are curable and their growth can be slowed with the right treatment. A malignant tumor can only be treated if the patient receives the necessary medical care, such as surgery or radiation. Diagnosis involves classification tasks such as tumor versus nontumor, recurrent versus nonrecurrent events, and benign versus malignant. Many earlier studies have applied machine learning methods such as K-nearest neighbors, decision trees, and support vector machines, which give good performance. More recently, an advanced approach called deep learning has been used to classify breast cancer and to surpass conventional machine learning. It is widely used in data science, with leading methods including convolutional neural networks (CNNs), recurrent neural networks (RNNs), classic neural networks (multilayer perceptrons), long short-term memory networks (LSTMs), and others. Lately, deep learning has attracted wide attention in this field, and clinical classification tasks increasingly rely on it. Deep learning applications have been applied extensively and have become a strong methodology for tackling complex problems. To classify breast cancer, some studies have used a pretrained network or created new deep neural networks [1].

According to the World Health Organization, breast cancer is a frequent disease among women worldwide, and one in every three women who are affected will die. Current diagnostic strategies for the disease include mammography, MRI, and pathology examinations. A pathologist must interpret what the anatomical pathology laboratory has produced before results can be conveyed, so a patient's diagnosis can be delayed by the time this manual computation takes. As a result, computer-assisted procedures are encouraged. MRI has been recognized as a superior imaging technique for detecting any kind of tumor [1].

Deep learning is a computer science approach that has received a great deal of interest in the field of classification in recent years. In clinical applications, the classification algorithm carries considerable weight, especially when it is combined with deep learning. Such applications have been widely used and have proven an effective way of dealing with complicated challenges. Recent deep learning research has focused on medical imaging for automated CAD systems, and deep learning is among the best methods for classifying and recognizing medical images. A deep learning-based CAD system requires three consecutive processes: preprocessing, parameter setup, and deep feature extraction and diagnosis. From the same original breast image, deep learning techniques can immediately build massive low-to-high-level deep hierarchical feature maps. This demonstrates that deep learning is an effective and reliable medical imaging method when using breast image data from the same source.

1.1. Motivation

Breast tumors can be treated successfully when they are identified early. Hence, the availability of appropriate screening strategies remains significant for recognizing the initial indications of the disease. Different imaging techniques are used for screening and recognizing the illness; the notable procedures are ultrasonography (sonography) and thermography. Among these, ultrasonography is a commonly used strategy for early identification of breast disease. However, ultrasonography is not effective for solid tumors.

A convolutional neural network (CNN) comprises multiple convolution layers that can extract features representing the different contexts of images without manual feature engineering. Thus, CNNs have become the most widely used technique for image interpretation tasks in many areas, such as the detection and diagnosis of breast cancer [2]. After the success of CNNs in object detection tasks [3], several studies have exploited the advantages of deep CNNs to overcome the drawbacks of traditional mass-detection models [4–13].

Deep learning is a subcategory of machine learning. Depending on the data selected, deep learning can use unsupervised learning to study unstructured or unlabeled data [14]. Considering the problems stated above, small masses can be missed, and radiography and thermography may be more effective than the ultrasound method in diagnosing smaller but dangerous malignancies.

In this paper, we bring deep learning techniques to breast cancer diagnosis by combining CNN and LSTM, where the CNN is used for feature extraction and the LSTM is used for classification of the extracted features.

1.2. Related Work

Breast cancer can be treated effectively when it is recognized early, so access to appropriate screening strategies is significant for distinguishing every initial indication of the disease. Different imaging methods are used for screening and recognizing this illness; the most common approaches are mammography, ultrasound, and thermography. Mammography is one of the principal strategies for early detection of breast cancer growth, but the commonly used mammographic techniques are not conclusive for some types of breast cancer. Given these complexities, small masses can be missed on radiographs, and thermography may be more suitable than ultrasound for diagnosing smaller malignant masses [15].

In recent years, important advances in computer science have occurred; for example, artificial intelligence, machine learning, and CNNs are among the fastest growing fields in the healthcare industry [16]. These technologies appear in research areas that build and improve automated systems to solve complex tasks while reducing the need for human insight. An investigation was performed by [17] to assess the diagnosis of breast disease from mammograms using CNNs. Performance assessment was carried out on two mammographic mass datasets, DDSM-400 and CBIS-DDSM, with variations in accuracy depending on the corresponding ground-truth segmentation maps. In [18], the authors used a five-layer deep CNN to detect breast cancer automatically from various classes of mammogram and ultrasound images.

In an earlier study, the authors of [19] used CNNs to investigate automated detection of IDC-type breast malignancies. Several researchers have used ML-based automatic detection procedures for the same task, aiming to obtain correct results and reduce the errors found in the diagnostic process. Several previous investigations have also proposed using AI, such as CNNs, for image recognition and healthcare monitoring [20]. However, for clinical-grade classification, the reported precision rates (about 60% for all-group recognition, 75% for categorized cases, and 100% computationally) remain insufficient [21]. In [22], the authors proposed a novel approach combining class-structure-based deep convolutional neural networks to provide reliable and accurate multiclassification of breast cancer.

1.2.1. CNN Using ML

The authors of [23] proposed a framework that uses different convolutional neural network (CNN) models to automatically detect breast malignancies, comparing the results with those from machine learning (ML) algorithms. All architectures were trained on a large dataset of around 275,000 RGB image patches of 50 × 50 pixels. The authors used machine learning classifiers such as logistic regression and support vector machines.

The implementation was conducted using the scikit-learn machine learning framework in Python with the pandas, NumPy, Matplotlib, and Seaborn libraries. Four CNN models (Models 1, 2, 3, and 4) were used to predict invasive cancer. To boost the number of features, the convolution layers were increased threefold [6]. CNN Model 3 is deeper than Models 1 and 2, with a five-layer CNN used to distinguish the disease [24].

1.2.2. Multilayer Perceptron

Multilayer perceptrons are a widely used strategy capable of handling complex problems. The author of [25] performed research using a multilayer perceptron, identified and validated with x-fold cross-validation. A model built on a dataset from the Medical Center University uses a multilayer perceptron with Weka 3.8 and 5-fold cross-validation assessments. This model can be used to help clinical teams determine recurrent and nonrecurrent types of malignant breast growth [26].

This was followed by Mujarad's study, which identified malignant growths using a multilayer perceptron with a predictive accuracy of 65.21%. Among the various methodologies, multilayer perceptrons are broadly used for detecting cancer and are also widely applied to complex prediction problems in cancer with high accuracy rates [27]. Similar work has been carried out by the authors of [28], who combined CNN–LSTM to detect COVID-19 from X-rays and automatically identify the disease before it spreads. Recent research focuses on using deep learning-based network models to build an automated system with LSTM that accepts a patient's health records to determine the probability of being affected by breast cancer [29].

1.2.3. Random Forest Classifier

The paper [30] presents the research foundations of a machine learning framework connected with the widely used random forest algorithm, advances its research objectives and content, proposes an improved adaptive random forest algorithm (ARF), and carries out construction based on the ARF method. In another paper [31], the authors used an active contour, one of the segmentation methods used to separate the pixels of interest from the image for analysis and further processing. In another paper [9], the authors showed that random forest classifiers with decision trees give better results during the segmentation process. Similarly, in [12], the authors showed that the random forest technique performs better than other machine learning techniques. Likewise, in [32], the authors created a random forest classifier with an LSTM to encode amino acids, called LEMP, which gives better performance.

In a recent work, the authors of [33] suggested a method that uses a modified contrast enhancement technique to improve the edges of source mammography images. To improve performance, a transferable texture convolutional neural network (TTCNN) was then proposed; the authors used an energy layer and merged classification methods to extract texture features from the convolution layer. In another work, the authors of [34] proposed a quantization-assisted U-Net approach for segmenting breast lesions in two steps, (1) U-Net and (2) quantization, where U-Net-based segmentation assists quantization in identifying the isolated regions of specific lesions in sonography images.

2. Proposed Methodology to Analyze Novel Deep-CNN LSTM

In this article, the breast cancer data are predicted using a random forest classifier. The data from the UCI Repository's Wisconsin Breast Cancer Database (WBCD) were used in this work. Malignant and benign cancer are the two major categories in the data, and this dataset is used by most researchers for detecting breast cancer. The Wisconsin Breast Cancer Database is analyzed using the random forest algorithm, and a diagnosis can be made based on its 32 attributes.
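
No code is given in the paper; as a minimal sketch, the WDBC data can be loaded from the copy bundled with scikit-learn (30 numeric features plus the diagnosis label; the raw UCI file adds an ID column, giving the 32 attributes mentioned above):

```python
# Minimal sketch: load the Wisconsin (Diagnostic) Breast Cancer data.
# Assumes the copy bundled with scikit-learn rather than the raw UCI file.
from sklearn.datasets import load_breast_cancer
import pandas as pd

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)   # 30 numeric features
y = pd.Series(data.target).map({0: "malignant", 1: "benign"})

print(X.shape)             # (569, 30)
print(y.value_counts())    # 357 benign, 212 malignant
```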

2.1. Data Processing

All of the features in the data have been scaled. Because we planned to use deep learning classification algorithms on the data, we designed the scaling to work with those approaches. Since the raw dataset is unordered and not yet labeled, the principal phase of preprocessing is to prepare it by organizing and labeling the data, covering the many kinds of processes that turn raw data into a form suitable for the later stages.
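
The paper does not name the scaler; a minimal sketch assuming standard (zero-mean, unit-variance) scaling fitted on the training split only:

```python
# Illustrative scaling step; the exact scaler used in the paper is not specified.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)   # learn mean/std on training data
X_test_scaled = scaler.transform(X_test)         # reuse the same statistics
```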

2.2. Methodology

The dataset was used as input to the random forest classifier, which was used to classify breast cancer. After feeding it the input, we trained the deep forest algorithm in the suggested architecture. For the convolution layers, we employed the Leaky ReLU nonlinearity as shown in the following equation: f(x) = x for x > 0 and f(x) = αx for x ≤ 0, where α is a small positive slope.

Figure 1 illustrates breast cancer detection with its different phases. During the collection phase, various image datasets are collected. In the preprocessing phase, images are resized and the data are split into training and test sets. In each epoch, accuracy, sensitivity, and F1-scores are determined from a confusion matrix using 5-fold cross-validation. To solve the grading problem on the image dataset, a network-based LSTM model is used along with the CNN. Figure 2 shows that feature extraction is performed with the CNN and classification with the LSTM.

2.3. Combined Network: CNN and LSTM

A CNN is a form of multilayer perceptron; unlike a deep learning architecture, a simple neural network cannot learn complicated features. CNNs have been shown to be extremely effective in a variety of applications. The guiding principle of a CNN is that it extracts local features in its lower layers and passes them on to deeper layers, which combine them into more complicated features. Convolutional, pooling, and fully connected (FC) layers make up a CNN. LSTM is an improved RNN (recurrent neural network) that uses memory blocks instead of traditional RNN units to solve the vanishing and exploding gradient problems. The key difference between LSTMs and plain RNNs is that the LSTM adds a cell state to store long-term information, so an LSTM network can recall information from the past and connect it with data from the present. The combined network has several convolutional and pooling layers, one fully connected layer, one LSTM layer, and finally one output layer with a softmax function.
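
The exact layer configuration, input size, and hyperparameters are not given in the paper; the following Keras sketch only illustrates the CNN–LSTM pattern described above, using the Leaky ReLU activation from Section 2.2 and assumed filter counts and LSTM units:

```python
# Illustrative CNN-LSTM classifier; all sizes below are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(input_shape=(64, 64, 1), num_classes=2):
    inputs = layers.Input(shape=input_shape)

    # Convolution + pooling blocks extract local features (Leaky ReLU activations)
    x = layers.Conv2D(32, (3, 3), padding="same")(inputs)
    x = layers.LeakyReLU()(x)
    x = layers.MaxPooling2D((2, 2))(x)

    x = layers.Conv2D(64, (3, 3), padding="same")(x)
    x = layers.LeakyReLU()(x)
    x = layers.MaxPooling2D((2, 2))(x)

    # Treat each row of the 16x16x64 feature map as one timestep for the LSTM
    x = layers.Reshape((16, 16 * 64))(x)
    x = layers.LSTM(64)(x)

    # Fully connected layer followed by the softmax output layer
    x = layers.Dense(64)(x)
    x = layers.LeakyReLU()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```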

3. Results and Discussion

To train and test the model, the random forest classification procedure is used. The attributes concave points mean, area mean, radius mean, perimeter mean, and concavity mean have been used for the prediction. If the result of the prediction is 1, the UI shows that the ‘person is at risk of being diagnosed with breast cancer in the future’; ‘person is not at risk of being diagnosed with breast cancer in the future’ is shown on the UI if the prediction obtained is 0.

Figure 3 shows the prediction values, stacking the data based on the selected features, where the features are separated into two groups according to the malignant and benign cancer types. From the selected features, mean values for radius, texture, perimeter, area, smoothness, compactness, concavity, and concave points are determined.

3.1. Data Structure

The datasets employed produce two groups, nonrecurrence events and recurrence events, as a result of the data structure. For medical imaging tasks, models pretrained on images are fine-tuned and combined with a random forest classifier, an approach known as “transfer learning.”

Because of the great computational expense, convergence difficulty, and limited amount of high-quality labelled samples, learning from clinical imaging data from scratch is generally not the most feasible approach.
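
A rough sketch of this feature-reuse idea, assuming an ImageNet-pretrained ResNet50 backbone (the backbone, image size, and forest settings are assumptions and are not specified in the paper):

```python
# Sketch: use a pretrained CNN as a fixed feature extractor and classify the
# extracted features with a random forest.
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with pixel values in 0-255."""
    images = tf.keras.applications.resnet50.preprocess_input(images)
    return backbone.predict(images, verbose=0)   # (n, 2048) deep feature vectors

# Hypothetical usage (train_images / train_labels are placeholders):
# deep_features = extract_features(train_images)
# clf = RandomForestClassifier(n_estimators=200, random_state=42)
# clf.fit(deep_features, train_labels)
```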

3.2. Quantitative Analysis by the Graph

To visualize the performance of a classifier, receiver operating characteristic (ROC) curves are used. They show, in diagrammatic form, the proposed neural network's diagnostic ability to distinguish between malignant and benign breast cancer samples.

Using the random forest classifier method, we were able to select the best feasible optimum forest model for breast cancer classification. The ROC curves likewise show how a large neural network-based model performs: the higher the values, the better the classifier's performance. The following graph depicts some of the features selected according to their importance, and the resulting ranking is shown in Figure 4.
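
As an illustration of how such an importance ranking and ROC curve could be produced with scikit-learn (the estimator settings below are assumptions, not the paper's):

```python
# Sketch: random forest feature importances (Figure 4) and an ROC curve.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, stratify=data.target, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

# Rank features by impurity-based importance
importances = pd.Series(clf.feature_importances_, index=data.feature_names)
print(importances.sort_values(ascending=False).head(10))

# ROC curve for separating the two classes (class 1 = benign in this encoding)
scores = clf.predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, scores)
print("AUC:", roc_auc_score(y_test, scores))
```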

3.3. Exploratory Data Analysis (EDA)

Data visualization is an extremely important skill in AI; it is responsible for giving a qualitative understanding of the data. Exploratory data analysis (EDA) is used to find patterns and relationships in the data. The visualizations used in this work are heat maps and a pair plot of all the parameters under consideration. Simple data analysis and visualization were used to gain a better grasp of the dataset and determine which features would be relevant to a deep learning model. Heat maps are especially helpful for understanding complex datasets: a heat map is a 2D representation of data in which values are encoded by different color schemes. Heat maps are primarily used to observe strong relationships between different parameters, as they are a good indicator of correlation [35].

This map highlights which features add new information to the problem and which features are similar to others in the collection. The correlation heat map matrix is shown with colored dimensions, mapping the data for visual representation alongside a scatterplot, as shown in Figure 5.
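
A minimal sketch of the correlation heat map and pair plot described here, using seaborn on the scikit-learn copy of the WDBC data (figure sizes and the chosen pair-plot features are illustrative):

```python
# Sketch: correlation heat map (Figure 5) and a small pair plot for EDA.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
df = pd.DataFrame(data.data, columns=data.feature_names)

# Correlation heat map over all 30 features
plt.figure(figsize=(12, 10))
sns.heatmap(df.corr(), cmap="coolwarm", center=0)
plt.tight_layout()
plt.show()

# Pair plot of a few mean-value features, coloured by diagnosis
df["diagnosis"] = pd.Series(data.target).map({0: "malignant", 1: "benign"})
sns.pairplot(df[["mean radius", "mean texture", "mean area", "diagnosis"]],
             hue="diagnosis")
plt.show()
```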

In our work, CNN-LSTM and multilayer perceptron models were implemented and compared with the proposed random forest classifier-based model, with the measures determined using 5-fold cross-validation.

3.4. Qualitative Analysis by Confusion Matrices

After the data have been cleaned and preprocessed, we feed them into the model to obtain output probabilities. The confusion matrix is, after all, a standard classification performance metric in deep learning. A confusion matrix, otherwise called an error matrix, is a specific table layout that allows the performance of an algorithm, typically a supervised learning one, to be visualized in a statistical classification problem. The confusion matrix without cross-validation is shown in Figure 6 and with cross-validation in Figure 7.
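
A minimal sketch, under the assumption of a scikit-learn random forest, of how the confusion matrices with and without cross-validation (Figures 6 and 7) and the classification report (Table 1) could be computed:

```python
# Sketch: confusion matrices without and with 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.metrics import confusion_matrix, classification_report

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=200, random_state=42)

# Without cross-validation: a single hold-out split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
clf.fit(X_train, y_train)
print(confusion_matrix(y_test, clf.predict(X_test)))

# With 5-fold cross-validation: each sample is predicted by a model
# that never saw it during training
y_cv = cross_val_predict(clf, X, y, cv=5)
print(confusion_matrix(y, y_cv))
print(classification_report(y, y_cv, target_names=["malignant", "benign"]))
```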

The system successfully carried out 20 evaluations using 5-fold cross-validation of the breast cancer data and the random forest algorithm. Table 1 summarizes the classification report, and the graphical representation in Figure 8 shows that the proposed model performs better than other conventional models.

3.5. Limitations

Due to the considerable heterogeneity among images of cancer subtypes, standard machine learning algorithms perform poorly on multiclass classification. Thus, to enhance computer-aided diagnosis, complex approaches such as a deep CNN with LSTM are employed. The algorithm is used with the random forest classifier to generalize over big datasets using the data properties, with excellent results; these findings are largely due to the design, which takes into consideration the computer vision tasks that use the two-convolution method. The proposed model has certain weaknesses in addition to the excellent accuracy it has attained. A lack of sufficiently labelled data and an imbalanced number of classes are two of the most common problems encountered when using deep learning models. The absence of labelled data causes the models to produce biased findings, referred to as the “overfitting problem.” Data augmentation approaches are therefore required to handle these issues and achieve efficient classification.
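
For image inputs, a typical augmentation setup might look like the following sketch; the transformations and parameters are illustrative assumptions rather than the paper's configuration:

```python
# Illustrative image augmentation to mitigate limited labelled data.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    zoom_range=0.1)

# Hypothetical usage (train_images / train_labels are placeholders):
# train_generator = augmenter.flow(train_images, train_labels, batch_size=32)
# model.fit(train_generator, epochs=20)
```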

4. Conclusion

Automating breast cancer screening in order to improve patient care is a difficult challenge. In this study, we have proposed a deep CNN–LSTM with a random forest approach that automates the prediction system for the identification of breast cancer. Convolutional, max-pooling, and fully connected layers were used in the pretraining phase, and a classification layer was then used to separate the benign samples from the malignant samples. We have also built an assessment model to determine recurrent and nonrecurrent malignant growth using a multilayer perceptron. The accuracy was obtained after executing 5-fold cross-validation tests, and performance measures were calculated. The dataset could be extended; here we have considered the estimation of mass circularity, and from statistical examination, we observed that this quantitative estimate leads to a straightforward decision, obtaining 100% accuracy, with other performance metrics also attained at high levels compared with other models. In the future, the proposed model can be compared with other available datasets containing a greater number of images [36, 37].

Data Availability

No data were used to support the findings of this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.