CNN-Based Segmentation and Detection of Brain Tumors in MRI Images: A Review

Abstract


A. Introduction
In the context of the brain or spinal cord, the term "brain tumor" refers to both benign and malignant growths that may develop inside these structures [1,2]. Brain tumors are a significant and challenging problem in medical treatment. They can arise from many different cell types, including glial cells and the meninges [3]. Timely and accurate diagnosis of brain tumors is therefore essential to ensure that patients receive effective treatment and that their prognosis improves. Magnetic resonance imaging (MRI) has become the diagnostic tool of choice for brain tumors because it captures detailed anatomical information, an advantage that has led to its widespread use [4,5]. Manual segmentation and detection of brain tumors in MRI images, however, is not only time-consuming but also leaves room for human error, since the procedure depends on human interaction [6,7].
Consequently, it is essential to study the feasibility of automated algorithms for interpreting MRI data for the presence of brain tumors. Deep learning architectures, and more specifically convolutional neural networks (CNNs), have been shown to be highly effective for this objective. CNNs are especially useful for automatically detecting and extracting important features from magnetic resonance imaging (MRI) images [8], enabling tasks such as tumor segmentation and detection.
The goal of this review article is to study the application of neural network approaches for the detection and segmentation of brain tumors in MRI images; its target audience is medical professionals [9]. Within this area of study, we provide a comprehensive evaluation that carefully summarizes the most recent research advances. The review brings to light some of the most significant approaches used in the field, including customized deep neural networks, region-based segmentation methods, transfer learning, and ensemble methodologies [10,11].
The review also includes a complete assessment of the performance of these methodologies across a wide range of datasets. To enable an objective and comprehensive comparison, we place particular emphasis on quantitative indicators such as accuracy, precision, recall, and the Dice similarity coefficient [12,13]. By giving a full analysis of the current landscape of brain tumor analysis using neural networks on MRI data, this study aims to provide major insights and pave the way for future research directions in this vital area of medical image analysis [14,15].
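As a concrete illustration, the Dice similarity coefficient mentioned above can be computed from a predicted and a ground-truth binary mask in a few lines. The sketch below uses NumPy; the toy 4×4 masks are hypothetical and stand in for real segmentation outputs:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: define Dice as 1 (perfect agreement)
        return 1.0
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / total

# Toy tumor masks (1 = tumor pixel, 0 = background)
truth = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 1],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 2*3 / (4+4) = 0.75
```

A Dice of 1.0 means the predicted and reference tumor regions coincide exactly; values reported in the literature reviewed below are typically in the 0.75–0.95 range.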
The remaining sections of this paper are structured as follows: Section 2 gives a comprehensive overview of the CNN model. Section 3 illustrates image segmentation techniques. Section 4 covers CNNs for image detection. Section 5 presents a literature review of CNN applications in healthcare, including MRI imaging, brain tumor detection, and image segmentation and its types. Section 6 discusses the summary table of the literature review. Section 7 concludes the paper.

B. Convolutional Neural Network (CNN) Model
Convolutional Neural Networks (CNNs) are advanced deep learning systems that are exceptionally good at handling image data, making them ideal for medical purposes such as analyzing MRI scans. A CNN processes images in layers that mimic the human visual system: convolutional layers filter the input to highlight features, pooling layers simplify these features by reducing their size, and fully connected layers use these features to make decisions, such as identifying whether a tumor is present. This multi-layered approach is particularly effective for finding and outlining brain tumors in MRI images, where it is crucial to accurately spot and characterize abnormal tissues.
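The convolution and pooling operations described above can be illustrated directly. The sketch below (NumPy only; the 6×6 "image" and the edge filter are hand-made for illustration) applies one 3×3 convolution and one 2×2 max-pooling step, mirroring what a CNN's first layers do to an MRI slice:

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """2-D 'valid' convolution (no padding), as in a CNN convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(feature: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling, which shrinks the feature map."""
    h, w = feature.shape
    return feature[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A 6x6 toy "image" with a bright 2x2 blob (loosely standing in for a lesion)
img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0

edge_filter = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])  # a hand-made vertical-edge detector

fmap = conv2d_valid(img, edge_filter)  # 4x4 feature map highlighting edges
pooled = max_pool2d(fmap)              # 2x2 summary after pooling
print(fmap.shape, pooled.shape)        # (4, 4) (2, 2)
```

In a trained CNN the filter values are not hand-made but learned from data, and many such filters are stacked in each layer; the shrinking of the feature map by pooling is what lets deeper layers see progressively larger context.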
When it comes to detecting and segmenting brain tumors from MRI images, CNNs use their multilayered structure to distinguish between normal brain tissues and tumors [16,17]. This is a challenging task because tumors can vary greatly in appearance, location, and size. CNNs are trained on large sets of MRI images that have been annotated to show where tumors are, learning to pick up on the small differences in texture, shape, and brightness that might suggest a tumor. Sophisticated CNN models such as U-Net or V-Net are especially designed for this, with features that help maintain the clarity and context of image details, which is vital for accurate segmentation [26]. The use of CNNs in brain tumor analysis on MRI scans has significant benefits for medical practice [27]. By automating tumor detection and segmentation, these networks lessen reliance on manual analysis, which can be slow and subject to human error. Furthermore, CNNs' ability to process and learn from huge datasets can lead to more consistent and precise diagnoses, supporting tailored treatment plans for patients [28]. Consequently, CNN-based techniques are revolutionizing the field of neuro-oncology, enhancing early detection and treatment precision, and ultimately aiming to improve outcomes for patients. Their integration into clinical diagnostic processes highlights their potential to boost both the accuracy and efficiency of medical imaging [29].

C. Utilizing Convolutional Neural Networks (CNNs) for Medical Image Segmentation
Convolutional Neural Networks (CNNs) have become a fundamental tool in medical image segmentation, especially effective due to their ability to independently analyze and interpret complex imaging data. The structure of CNNs includes convolutional layers that identify features automatically, pooling layers that simplify these features by reducing their size, and dense layers that classify the data, making them especially well suited to processing MRI scans [18]. In MRI brain imaging, CNNs are adept at differentiating between various brain tissues and spotting pathological changes like tumors. These networks are trained on extensive datasets that include a wide range of brain tumor appearances, allowing them to detect subtle differences that might be missed by the human eye [19,20].
The precision with which CNNs can segment brain tumors from MRI images is crucial for accurate clinical diagnosis and effective treatment planning. During segmentation, CNNs outline the exact borders of tumors within the scans, a task they perform exceptionally well by utilizing cutting-edge deep learning algorithms that improve image clarity and detail. Specialized models like U-Net, designed specifically for medical image segmentation, feature a design that integrates both detailed and broader contextual information to ensure thorough detection of tumors [21]. By automating this segmentation and detection, CNNs not only lighten the workload for medical staff but also minimize the chances of human error, leading to faster and more precise diagnoses [22]. This advancement is transforming the use of medical imaging in brain tumor management, increasing the accuracy of treatments and potentially enhancing patient outcomes [23].
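The final step of such a pipeline can be sketched in a few lines (NumPy only; the 5×5 probability map and the 0.5 threshold are hypothetical): a segmentation network outputs a per-pixel tumor probability map, thresholding yields the binary mask whose border delineates the tumor, and the mask in turn gives simple quantities such as tumor area and a bounding box:

```python
import numpy as np

def mask_from_probabilities(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a per-pixel tumor probability map into a segmentation mask."""
    return (prob_map >= threshold).astype(np.uint8)

def bounding_box(mask: np.ndarray):
    """Return (row_min, row_max, col_min, col_max) of the segmented region, or None."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()

# Hypothetical 5x5 network output: probabilities peak in the centre
prob = np.array([[0.05, 0.10, 0.10, 0.05, 0.02],
                 [0.10, 0.60, 0.80, 0.30, 0.05],
                 [0.10, 0.70, 0.95, 0.55, 0.05],
                 [0.05, 0.20, 0.40, 0.10, 0.02],
                 [0.02, 0.05, 0.05, 0.02, 0.01]])

mask = mask_from_probabilities(prob)
print(int(mask.sum()))     # 5 pixels classified as tumor
print(bounding_box(mask))  # (1, 2, 1, 3)
```

Real systems refine this with post-processing (e.g., keeping only the largest connected component), but the mask-then-measure pattern is the common core.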

D. Convolutional Neural Network (CNN) for Image Detection
Convolutional Neural Networks (CNNs) have emerged as a very efficient and effective method for image detection tasks, such as segmenting and identifying brain tumors in MRI images [24]. This kind of deep learning framework is exceptionally effective at discerning patterns inside images [25], owing to its convolutional layers. These layers automatically learn important parts of the input by gradually extracting higher-level characteristics from basic ones. CNNs excel in medical image processing because they effectively capture the spatial relationships among pixels in an MRI scan. This matters because ascertaining the presence of a brain tumor requires differentiating between malignant brain tissue and normal tissue. In this review, we study the utilization of CNNs in the segmentation and recognition of brain tumors in MRI images; this analysis will contribute to a deeper understanding of earlier research in this field [29].

E. Literature Review
Maqsood et al. [30] in (2022) proposed a method for brain tumor identification and classification consisting of five phases: determining the image's edges, segmenting the image using a custom deep neural network, extracting features with a modified MobileNetV2, selecting informative features based on entropy, and finally classifying the tumor using M-SVM. This approach achieved high accuracy and provided interpretable AI insights, outperforming previous techniques on the BraTS 2018 and Figshare datasets.
Kemi et al. [31] in (2018) presented a novel automated technique for precisely classifying brain cancers from 3D-MRI images. This method combined brain symmetry analysis with region-based and border-based segmentation approaches. The segmentation process consisted of three steps: image pre-processing to extract the brain and eliminate noise, automatic tumor recognition based on brain symmetry, and region growing in conjunction with a 3D deformable model to detect tumor boundaries. The proposed approach was tested on 285 individuals with various tumor forms and shapes, and the results were encouraging, achieving a near-perfect match with ground truth data.
Sharif et al. [32] in (2021) introduced a four-phase technique for brain tumor detection in MRI images. It addressed image noise using a homomorphic wavelet filter, then extracted features from an Inceptionv3 pre-trained model. These features were optimized using a genetic algorithm for classification. Tumor localization was achieved using YOLOv2-Inceptionv3, followed by segmentation using McCulloch's Kapur entropy method. Validation on three databases showed prediction scores exceeding 0.90 for localization, segmentation, and classification, with superior outcomes compared to existing methods.
Majib et al. [33] in (2021) covered the creation and examination of several hybrid and conventional machine learning models for automatically categorizing brain tumor images. In addition, the best transfer learning model for this task was selected from sixteen candidates. Ultimately, a stacked classifier that outperformed all other models was proposed and named VGG-SCNet (VGG Stacked Classifier Network). VGG-SCNet attained impressive F1 scores, precision, and recall of 99.2%, 99.1%, and 99.2%, respectively.
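For reference, the precision, recall, and F1 figures quoted throughout this review derive from confusion-matrix counts as follows (a minimal sketch; the counts below are made up for illustration, not taken from any cited study):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Compute precision, recall and F1 from true-positive, false-positive
    and false-negative counts of a binary classifier."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts: 90 tumors found, 10 false alarms, 10 tumors missed
p, r, f = precision_recall_f1(tp=90, fp=10, fn=10)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.9 0.9 0.9
```

Precision penalizes false alarms, recall penalizes missed tumors, and F1 is their harmonic mean, which is why papers in this field usually report all three.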
Dweik & Ferretti [34] in (2022) presented an efficient, automatic method for identifying and classifying brain cancers in MRI data. This method combined a deep learning model initialized with transfer learning and geometric active contour segmentation. Initially, images were pre-processed to enhance clarity. Subsequently, a previously trained neural network, refined using 388 T1-weighted MRIs with meningioma tumors, located the tumors and delineated them using bounding boxes. These boxes then guided the active contours to accurately segment the tumors. The innovation lay in the effective integration of these methods and the minimal training required. The technique was highly accurate, achieving 97.92% precision, 96.91% recall, 97.41% F-measure, and a Dice similarity coefficient greater than 0.95 when tested on 97 MRI scans.
Khan et al. [35] in (2022) presented two deep learning models that utilized MRI images to categorize brain cancers into multiclass schemes (meningioma, glioma, pituitary tumors) or binary schemes (normal vs. abnormal). To train and evaluate these models, two publicly accessible datasets were utilized: one comprising 3064 images and the other containing 152 images. To exploit the abundance of data available for training, the authors developed a model based on a customized 23-layer convolutional neural network (CNN) for the larger dataset. Conversely, they employed transfer learning to adapt to the smaller dataset, where data scarcity posed a risk of overfitting. To learn effectively from the limited data, this approach integrated features from their original 23-layer CNN model with the VGG16 architecture.
Amin et al. [36] in (2022) proposed a method for categorizing brain cancers. It utilized brain scan data and applied a deep learning model called Inceptionv3 to extract features. Subsequently, tumors were classified as pituitary tumors, gliomas, meningiomas, or no tumors using a quantum variational classifier (QVR). Finally, the severity of the tumors was analyzed using a segmentation network. The researchers tested their model using three datasets and achieved a tumor detection accuracy of over 90%.
Ali et al. [37] in (2020) enhanced tumor segmentation accuracy by merging two networks, a 3D CNN and a U-Net, into an ensemble strategy. They independently trained each on the BraTS-19 dataset, creating unique tumor region maps. By blending these maps, the ensemble improved accuracy, achieving impressive Dice scores of 0.750, 0.906, and 0.846 for different tumor segments, surpassing current methods and demonstrating the strategy's effectiveness.
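The core of such an ensemble step can be sketched in a few lines (NumPy only; the two toy probability maps below are hypothetical stand-ins for the outputs of two independently trained segmentation networks, not the authors' actual data): each model produces a per-pixel tumor probability map, the maps are averaged, and the mean is thresholded into the final mask:

```python
import numpy as np

def ensemble_segmentation(prob_maps, threshold: float = 0.5) -> np.ndarray:
    """Average the per-pixel probability maps of several models,
    then threshold the mean into a single binary segmentation mask."""
    mean_map = np.mean(np.stack(prob_maps), axis=0)
    return (mean_map >= threshold).astype(np.uint8)

# Hypothetical outputs of two independently trained models on a 3x3 patch
model_a = np.array([[0.9, 0.8, 0.1],
                    [0.7, 0.6, 0.2],
                    [0.1, 0.1, 0.1]])
model_b = np.array([[0.8, 0.4, 0.3],
                    [0.9, 0.2, 0.1],
                    [0.2, 0.1, 0.1]])

mask = ensemble_segmentation([model_a, model_b])
print(mask)
```

Averaging suppresses pixels on which the models disagree, which is one reason ensembles of architecturally different networks tend to produce cleaner masks than either member alone.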
Aboussaleh et al. [38] in (2024) proposed the 3DUV-NetR+ architecture for brain tumor segmentation, which enhanced 3D multimodal MR image segmentation by combining the advantages of V-Net and 3DU-Net with transformers. When tested against other models on the BraTS2020 dataset, the model performed quite well. Future work was planned to focus on refining the segmentation of specific tumor locations and creating a 3D Segmentation System Assistant for Radiologists (3D2SAR) to facilitate data annotation and improve medical image analysis.
Akter et al. [39] in (2024) tested a classification model across six datasets, comparing its effectiveness with and without a segmentation model. Using metrics like accuracy, recall, precision, and AUC, they found that integrating segmentation improved the model's performance, surpassing state-of-the-art pre-trained models. With segmentation, accuracy rose slightly from 98.7% to 98.8% on a combined dataset and reached up to 97.7% on individual datasets. These results indicate that their new framework for automatically identifying and segmenting brain tumors from MRI scans could be valuable for clinical applications.
Reyes & Sanchez [40] in (2024) investigated the effectiveness of convolutional neural networks (CNNs) for classifying brain tumors, examining architectures like VGG, ResNet, EfficientNet, and ConvNeXt. The study proposed a network structure incorporating convolutional layers, batch normalization, and max-pooling, and tested it on over 3000 MRI images of meningiomas, gliomas, pituitary tumors, and non-tumorous scans. Using training and validation sets, optimal hyperparameters were identified. Techniques such as training from scratch, data augmentation, transfer learning, and fine-tuning were employed using TensorFlow and Keras. The best model achieved a 98.7% accuracy rate, matching state-of-the-art results, with VGG as the most effective model.
Salama et al. [41] in (2023) provided a fully automated method based on deep convolutional neural networks for brain tumor detection. The Brain Tumor Image Segmentation (BraTS 2020) datasets, which included 1352 cases of brain tumors, were used to assess the technique. Using a cross-validation technique, the approach outperformed previously published studies in terms of segmentation accuracy.
Cinar & Yildirim [42] in (2020) investigated deep learning techniques employing convolutional neural networks (CNNs) to diagnose brain tumors from MRI scans. By incorporating custom layers into the final layers of a modified ResNet50 model, tumor recognition accuracy reached 97.2%. The proposed model surpassed all other pre-trained CNN models compared, including AlexNet, DenseNet201, InceptionV3, and GoogLeNet. These findings demonstrated the effectiveness of the developed deep learning method for brain tumor identification, establishing it as a viable option for computer-aided diagnostic systems.
Aggarwal et al. [43] in (2023) proposed an effective technique for brain tumor segmentation utilizing an Improved Residual Network (ResNet). By enhancing the existing ResNet architecture, the model achieved faster learning and improved precision through optimized projection shortcuts and maintained connection linkages. Experimental investigation using BRATS 2020 MRI data demonstrated competitive performance against conventional techniques such as CNN and Fully Convolutional Neural Network (FCN), with over 10% improvements in accuracy, recall, and f-measure.
Chawla et al. [44] in (2022) proposed a pipeline for brain tumor diagnosis, ranging from preprocessing MRI images by downsizing and removing non-brain features to wavelet-based transformation. Integrating the Bat Algorithm with a Convolutional Neural Network (CNN) exhibited outstanding classification accuracy compared to similar methods: 94% for the Bat Algorithm, 90.4% for the CNN algorithm, and 99.5% for the BA + CNN combination, utilizing data from the whole brain image and the BrainTumor dataset. The study suggested potential integration of additional artificial intelligence modalities to enhance accuracy and efficiency. Additionally, image downsizing was employed to optimize efficiency in terms of time and cost.
Ahuja et al. [45] in (2020) described a transfer-learning-based strategy for brain tumor detection and segmentation using a superpixel technique. The methodology comprised two stages: classification of MRI slices into three groups (normal, LGG, and HGG), followed by segmentation of tumors within the LGG and HGG images.
In the first phase, the suggested approach employed the VGG-19 architecture, achieving a high accuracy of 96.32% on validation data and 99.82% on training data. During testing, the AUC (area under the curve) was 0.99, specificity was 100%, sensitivity was 97.81%, and accuracy was 99.30%.
Rahman & Islam [46] in (2023) introduced a novel parallel deep convolutional neural network (PDCNN) to enhance brain tumor diagnosis. The PDCNN collected both global and local information and utilized batch normalization and dropout regularizers to mitigate overfitting. Prior to data augmentation, the model preprocessed images by scaling and converting them to grayscale. To ensure comprehensive feature extraction, it employed parallel processing with two CNNs using different window sizes. Tested on three MRI datasets, the PDCNN surpassed existing approaches by efficiently capturing precise data characteristics, achieving accuracy rates ranging from 97.33% to 98.12%.
Rajagopal et al. [47] in (2023) presented a new method for extracting features from images using the Gray Level Co-occurrence (GLC) matrix method. Specifically, a U-Net and a 3D CNN were utilized as convolutional neural networks (CNNs) to enhance brain tumor segmentation accuracy. By merging these networks, the research achieved improved and more accurate segmentation results. Two models were developed and evaluated in the study, revealing that each model generated segmentation maps with distinct delineations of tumor subregions. Combining the predictions from both models produced the final forecast. This method surpassed current state-of-the-art designs, achieving remarkable precision rates of 98.35%, 98.5%, and 99.4% for tumor core, enhanced tumor, and total tumor, respectively.
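To make the gray-level co-occurrence idea concrete, the sketch below builds a GLCM for one offset (one pixel to the right) and derives the classic contrast feature from it. This is a minimal NumPy implementation written for illustration, not the authors' code; the tiny 3-level image is hypothetical:

```python
import numpy as np

def glcm(image: np.ndarray, levels: int) -> np.ndarray:
    """Normalized gray-level co-occurrence matrix for the offset (0, +1):
    P[i, j] = probability that a pixel of level i has a right neighbour of level j."""
    counts = np.zeros((levels, levels))
    for i, j in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        counts[i, j] += 1
    return counts / counts.sum()

def glcm_contrast(p: np.ndarray) -> float:
    """Contrast texture feature: sum over (i, j) of P[i, j] * (i - j)^2."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    return float(np.sum(p * (i - j) ** 2))

# Tiny 3-level "image" (e.g., MRI intensities quantized to 3 gray levels)
img = np.array([[0, 0, 1],
                [0, 1, 1],
                [1, 1, 2]])

p = glcm(img, levels=3)
print(round(glcm_contrast(p), 3))  # 0.5
```

Features such as contrast, homogeneity, and energy computed from GLCMs summarize image texture, which is why they are sometimes combined with CNN-based pipelines as hand-crafted inputs.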
Patil & Kirange [48] in (2022) employed a deep ensemble model to address a medical imaging problem. Initially, MRI images were used to develop and evaluate a VGG16 network and a shallow convolutional neural network (SCNN). The fusion of features from these models enhanced the classification accuracy of three tumor types. The ensemble deep convolutional neural network model (EDCNN) achieved an accuracy of up to 97.77%, improving multiclass classification accuracy by mitigating overfitting on imbalanced datasets. The proposed framework compared favorably with previous state-of-the-art research.
Sangui et al. [49] in (2022) proposed a deep learning framework with a modified U-Net architecture for brain tumor detection and segmentation based on MRI scans. The model's performance was evaluated using real images from the BRATS 2020 dataset, achieving a test accuracy of 99.4%. A comparative analysis with previous research showed that the U-Net-based model surpassed other deep learning techniques for this task.
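One caveat when reading such near-perfect accuracy figures for segmentation: pixel-wise accuracy is dominated by the background class, so it can remain high even when the tumor overlap is mediocre, which is why the Dice coefficient is usually reported alongside it. A tiny numeric check (NumPy; both masks are hypothetical):

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels (tumor and background alike) labelled correctly."""
    return float((pred == truth).mean())

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap of the tumor regions only."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# 10x10 image with a 4-pixel tumor; the prediction finds only half of it
truth = np.zeros((10, 10), dtype=int)
truth[4:6, 4:6] = 1   # 4 tumor pixels
pred = np.zeros((10, 10), dtype=int)
pred[4:6, 4] = 1      # only 2 of them predicted

print(pixel_accuracy(pred, truth))  # 0.98 despite missing half the tumor
print(round(dice(pred, truth), 3))  # 0.667
```

This imbalance between the two metrics is one reason the studies surveyed here report Dice scores for segmentation even when classification accuracy is also given.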
Sajid et al. [50] in (2019) proposed a deep learning method for brain tumor segmentation using MRI scans. It involved combining different MRI modalities and utilizing a hybrid CNN architecture with a patch-based approach for segmentation. Techniques such as dropout regularization and batch normalization were employed to address overfitting, while a two-phase training procedure was implemented to handle data imbalance. Preprocessing steps included image normalization and bias field correction. Evaluation on the BRATS 2013 dataset demonstrated significant improvements over existing methods, with high Dice scores and enhanced sensitivity and specificity for tumor segmentation.
Zhang et al. [51] in (2024) introduced a new model, the Augmented Transformer U-Net (AugTransU-Net), for brain tumor segmentation, addressing limitations observed in previous transformer-based U-Net models. AugTransU-Net incorporated sophisticated transformer modules into a U-shaped architecture, strategically placing augmented modules to preserve feature diversity and enhance interaction. Paired attention modules were utilized to establish long-range interactions, allowing each layer to comprehend the overall tumor structure and extract semantic information. Experimental results demonstrated the effectiveness of AugTransU-Net, showing competitive segmentation performance on the BraTS2019-2020 validation datasets.
Shirin Kordnoori [52] in (2024) introduced a unique automatic model for the simultaneous identification of tumor type and border in brain magnetic resonance imaging. The model featured a multi-layer perceptron for classifying three common primary brain tumors, a decoder for segmentation, and a shared encoder for feature representation. When tested on a brain tumor dataset, the model demonstrated promising performance, achieving 97% accuracy for both classification and segmentation tasks in a multi-task learning context. In clinical settings, this model holds the potential to serve as a primary screening tool for the early detection of common primary brain tumors with high success rates.
Hanwat & J [53] in (2019) aimed to utilize the Convolutional Neural Network (CNN) algorithm on brain MRI data to classify the stages of brain tumors. CNN was compared to various machine learning classifiers, including K Nearest Neighbors and Random Forest. Based on the results, CNN emerged as one of the top classifiers, achieving an average accuracy of 98%, cross-entropy of 0.097, and validation accuracy of 71%. These findings highlight the effectiveness of CNN in classifying brain tumors at different stages.
Kesav & Jibukumar [54] in (2023) introduced a novel architecture based on the RCNN technique for brain tumor classification and tumor-type object recognition. Two publicly accessible datasets from Figshare and Kaggle were utilized for analysis. The objective was to employ a low-complexity framework to reduce execution time in brain tumor analysis. Initially, a Two-Channel CNN achieved 98.21% accuracy in distinguishing between healthy MRI samples and gliomas. This architecture was further extended to detect tumor locations in meningioma and pituitary tumors by utilizing it as a feature extractor for an RCNN. The approach attained an average confidence level of 98.83% and significantly reduced execution times compared to current architectures.
Rai & Chatterjee [55] in (2020) introduced LU-Net, a deep neural network customized for tumor identification with fewer layers and a simplified design. Utilizing 253 high-resolution brain MRI scans, it classified images into normal and pathological categories. Preprocessing techniques were employed to enhance model training. LU-Net's performance was compared to Le-Net and VGG-16 using five statistical measures. The CNN models were trained on augmented images and validated with 50 additional data points, resulting in impressive accuracy: 88% for Le-Net, 90% for VGG-16, and notably, 98% for LU-Net.
Nizamani et al. [56] in (2023) introduced new feature-enhanced hybrid UNet models (FE-HU-NET): FE1-HU-NET, FE2-HU-NET, and FE3-HU-NET. Various methods were employed to prioritize feature augmentation during image preparation. Segmentation results were enhanced using CNN-based postprocessing and customized UNet designs. The performance surpassed that of the state-of-the-art models at the time, achieving accuracy rates exceeding 99% on two datasets.
Sarkar et al. [57] in (2020) identified the type of brain tumor (pituitary, meningioma, or glioma) from MRI scans. A 2D Convolutional Neural Network (CNN) was employed to analyze the MRI data. The system demonstrated the capability to detect all tumor types with high recall rates (88% for gliomas, 81% for meningiomas, and 99% for pituitary tumors), achieving an overall accuracy of 91.3%.
Chattopadhyay & Maitra [58] in (2022) introduced an approach utilizing a convolutional neural network (CNN) with an SVM classifier and various activation strategies to segment brain tumors from 2D MRI data. The suggested method was efficiently implemented using TensorFlow and Keras in Python. According to the findings, the CNN surpassed previous results, achieving an accuracy of 99.74%. The researchers concluded that brain cancers could be reliably identified in MRI images using their CNN-based model, potentially expediting treatment processes.
Naseer et al. [59] in (2021) analyzed brain MRI scans using a custom convolutional neural network (CNN). The model demonstrated high accuracy (98.8%) and specificity (99%) across multiple datasets after training on a substantial dataset. Performance on unseen data was further enhanced through data augmentation techniques. The study concluded that this approach outperformed existing technologies for diagnosing brain tumors.

F. Discussion
This section provides a comprehensive review of the studies discussed in the previous sections, highlighting important discoveries, new directions, and wider implications for medical image classification and brain tumor detection and segmentation. Table 1 provides a summary of the datasets, methods, and results obtained.
High Accuracy across Diverse Models: The reported accuracies in the studies vary widely, with some models achieving near-perfect accuracy rates (e.g., Chattopadhyay & Maitra, 2022; Nizamani et al., 2023) and others showing more modest results (e.g., Hanwat & J, 2019). This variance can be attributed to several factors, including differences in model architecture, dataset complexity, and evaluation metrics. Notably, models like VGG-SCNet (Majib et al., 2021) and CNN variations incorporating advanced strategies, such as the 3D CNN and U-Net ensemble of Ali et al. (2020), demonstrate high efficacy in tumor detection and segmentation, reflecting advancements in neural network design tailored to MRI data characteristics.
Integration of Novel Methodologies: Some studies integrate novel methodologies with CNNs to improve performance. For instance, the use of the Bat algorithm alongside CNNs (Chawla et al., 2022) and the superpixel technique with VGG-19 (Ahuja et al., 2020) illustrate creative attempts to refine segmentation accuracy and model efficiency. These approaches highlight the ongoing exploration of ways to optimize neural network applications in medical imaging.
Generalizability Challenges: Despite the high accuracy rates reported, challenges remain in ensuring model generalizability across different MRI datasets and imaging conditions. For example, Khan et al. (2022) report near-perfect accuracy on one dataset but a slightly lower rate on another, pointing to the potential issue of overfitting and the need for models to generalize across varied data. The study by Aboussaleh et al. (2024) explores the use of CNNs with multiple imaging modalities, underscoring the potential of deep learning models to leverage complementary information from different imaging techniques, enhancing tumor detection and segmentation accuracy.
Future Directions and Opportunities for Improvement: Despite the progress, there is a clear need for more interpretable models that can provide insights into their decision-making processes, addressing the "black box" nature of deep neural networks. Furthermore, the development and public availability of large, annotated, and diverse datasets could significantly advance the field by enabling more robust and generalizable model training and evaluation.

G. Conclusion
In conclusion, this review provides a comprehensive analysis of the advancements in convolutional neural network (CNN)-based methodologies for the segmentation and detection of brain tumors in Magnetic Resonance Imaging (MRI) images. Given the critical importance of early and accurate diagnosis in enhancing patient outcomes, MRI remains a pivotal tool due to its ability to provide precise anatomical information. However, manual segmentation and identification of brain tumors from MRI images are time-consuming and susceptible to human error. The adoption of CNNs offers a promising avenue for automating these processes, although CNNs still work largely as a black box: they deliver good results, but it is hard to understand how they reach those results, which can be risky in medicine. Throughout this review, we have elucidated the significant challenges associated with brain tumor analysis using MRI and explored diverse CNN architectures employed for this purpose. By evaluating the performance metrics of these models, we have highlighted both their potential and limitations in facilitating efficient brain tumor analysis. This synthesis of current research underscores the evolving landscape of CNN-based approaches in MRI-based brain tumor diagnosis, shedding light on avenues for future research and emphasizing the importance of continued innovation in this domain. Ultimately, this review contributes to a deeper understanding of the state-of-the-art techniques in CNN-based brain tumor analysis, thereby paving the way for enhanced diagnostic accuracy and therapeutic efficacy in clinical practice.

Table 1.
Summary of Related Work