Fusion-Based Deep Learning with Nature-Inspired Algorithm for Intracerebral Haemorrhage Diagnosis

Natural computing refers to computational processes observed in nature and to human-designed computing inspired by nature. In recent times, data fusion in the healthcare sector has become a challenging issue that needs to be resolved. At the same time, intracerebral haemorrhage (ICH) is an injury of the blood vessels in the brain, which is mainly responsible for stroke. X-rays and computed tomography (CT) scans are widely applied for locating the position and size of a haemorrhage. Since manual planimetric segmentation of CT scans by radiologists is a time-consuming process, deep learning (DL) is used to attain effective ICH diagnosis performance. This paper presents an automated intracerebral haemorrhage diagnosis using a fusion-based deep learning with swarm intelligence (AICH-FDLSI) algorithm. The AICH-FDLSI model operates in four major stages, namely preprocessing, image segmentation, feature extraction, and classification. To begin with, the input image is preprocessed using the median filtering (MF) technique to remove the noise present in the image. Next, the seagull optimization algorithm (SOA) with Otsu multilevel thresholding is employed for image segmentation. In addition, a fusion-based feature extraction model using the Capsule Network (CapsNet) and EfficientNet is applied to extract a useful set of features. Moreover, the deer hunting optimization (DHO) algorithm is utilized for hyperparameter optimization of the CapsNet and EfficientNet models. Finally, a fuzzy support vector machine (FSVM) is applied as a classification technique to identify the different classes of ICH. A set of simulations is carried out to determine the diagnostic performance of the AICH-FDLSI model using a benchmark intracranial haemorrhage data set. The experimental outcomes state that the AICH-FDLSI model significantly outperforms the compared methods.


Introduction
In the last few years, traumatic brain injury (TBI) has been a primary cause of growing death and disability rates in the USA, accounting for nearly 30% of injury deaths [1].
Following TBI, extra-axial intracranial lesions such as intracranial haemorrhages (ICH) may take place. ICH is a leading cause of death worldwide across all ages. The disease is initiated in the brain by leakage from a blood vessel, which disrupts the pathways of interaction (through which the brain issues functions and instructions) and the internal organs, resulting in impaired body functions such as memory loss and loss of eyesight or speech [2][3][4]. The most important risk factors related to ICH are high blood pressure (BP), head trauma, leakage in veins, and infected blood vessel walls. To inspect this disorder, screening modalities such as single-photon emission computed tomography (SPECT), X-ray, positron emission tomography (PET), and computed tomography (CT) are accessed via brain haemorrhage imaging. In comparison to other methods, the CT scan is widely employed in haemorrhage diagnosis as it is widely available, quick, and inexpensive. Therefore, CT scans are highly preferred for ICH detection. The manifestation of ICH clots on CT scans depends on factors such as volume, density, location, and slice intensity. Early prediction of ICH is indispensable for adequate scheduling of scanning and for providing better treatment. Therefore, numerous researchers have used computer-aided detection (CAD) methods for ICH segmentation. Recently proposed CAD methods for ICH are based either on automatic segmentation, in which the haemorrhage is delineated without manual segmentation or professional contribution, or on semiautomatic segmentation, in which human experts have to offer suitable input for the segmentation. Recent developments in convolutional neural networks (CNN) and deep learning (DL) have delivered remarkable performance in automatic image segmentation and classification [5]. Thus, the DL technique is able to perform automated ICH segmentation and prediction.
In recent times, researchers have attempted to employ the DL technique for the diagnosis of ICH on CT scans [6].
This DL technique is a kind of machine learning (ML) that employs multiple processing layers to learn representations of data with many levels of abstraction. Earlier researchers utilizing this technique reported tremendous diagnostic performance in detecting ICH on single CT slices, comparable to that of expert radiologists. Additionally, fully 3D DL methods (operating on whole scans rather than single CT slices) for diagnosing ICH have been reported. A few researchers utilized the back-propagation (BP) model for the learning approach together with the CNN, which has pattern recognition and self-organization capacities without human programming. Consequently, this method is a problem-agnostic and generic technique, not a problem-specific and rule-based model [7]. However, it remains challenging to explain how this technique generates its outcomes from the input data.
This paper presents an automated intracerebral haemorrhage diagnosis using a fusion-based deep learning with swarm intelligence (AICH-FDLSI) algorithm.
The AICH-FDLSI model employs the seagull optimization algorithm (SOA) with Otsu multilevel thresholding for image segmentation. Besides, fusion-based feature extraction using the Capsule Network (CapsNet) and EfficientNet is applied to extract a useful set of features. At the same time, the deer hunting optimization (DHO) algorithm is utilized for hyperparameter optimization of the CapsNet and EfficientNet models. Lastly, a fuzzy support vector machine (FSVM) is employed as a classifier to determine the various classes of ICH. To showcase the improved classifier results of the proposed model, a wide range of experiments is performed using the benchmark intracranial haemorrhage data set. The rest of the study is organized as follows. Section 2 provides the related works; Section 3 offers the proposed model; Section 4 discusses the performance validation; and Section 5 concludes the study.

Literature Review
Mansour et al. [8] proposed an innovative DL-based ICH diagnosis and classification (DL-ICH) method based on optimal image segmentation using an inception network. The presented method includes preprocessing, segmentation, feature extraction, and classification. First, the input data undergoes format conversion, in which NIfTI files are transformed into JPEG form. Anupama et al. [9] presented DL-based ICH diagnosis with GrabCut-based segmentation using an SDL model, called the GC-SDL algorithm. GrabCut-based segmentation is utilized to identify the infected portion of an image efficiently. The SDL method is employed for feature extraction, and lastly, the SM layer is applied as the classifier.
Venugopal et al. [10] proposed a unique multimodal data fusion-based feature extraction method using a DL algorithm for ICH classification and detection, named FFEDL-ICH. The presented method consists of preprocessing, image segmentation, feature extraction, and classification. First, the input images are preprocessed by the GF method to remove noise. Next, the DFCM method is employed to segment the image. Moreover, fusion-based feature extraction is performed by combining deep features (residual network 152) and handcrafted features (local binary patterns) to extract appropriate features. Lastly, a DNN is applied as the classification method to distinguish different types of ICH. A new DL method for ANNs, entirely distinct from the BP algorithm, was proposed in earlier research [11]. The objective was to measure the feasibility of utilizing the model for ICH classification and the detection of its subclasses, without applying the CNN method.
Wang et al. [12] focused on evaluating the accuracy and performance of a DL-based automatic segmentation method in segmenting spontaneous ICH volume with or without IVH extension, and compared this automatic method with two manual segmentation methods. Ginat [7] examined the use of DL for worklist prioritization and detection of acute ICH on NCCT in different medical settings at an academic medical centre. The images were categorized based on the type and presence of haemorrhage, whether they were initial or follow-up images, and patient visit location, involving outpatient, emergency or trauma, and inpatient sections. Yu et al. [13] aimed to develop a robust DL segmentation technique for accurate and fast haemorrhage volume (HV) analysis via CT. Luong et al. [14] presented a CAD system that integrates a DL method and image processing techniques for identifying patients who suffer from ICH based on their CT scans; the DL method, based on the MobileNetV2 framework, was trained for this purpose.
Ngo et al. [15] developed a novel method for training a slice-level classifier on CT-based descriptors of the nearby slices along the axis, all of which are extracted by a CNN. This technique focuses on predicting the existence of ICH and categorizing it into five distinct subclasses. They examined a two-phase training system. Initially, CT images are processed simply as a group of two-dimensional images, and an advanced CNN classifier pretrained on ImageNet is trained. In the training phase, each slice is processed together with the three slices before and the three slices after it, which makes the batch size a multiple of 7. Next, the output descriptors of all blocks of seven successive slices obtained from phase 1 are stacked into images and fed into other CNNs for the final predictions on the middle slices. Hssayeni et al. [16] collected a data set of eighty-two CT scans of subjects with traumatic brain injury. Then, the ICH regions were manually delineated in every slice by a consensus decision of two radiologists. The data set is openly available at the PhysioNet repository for upcoming comparisons and analyses. Besides publishing the data set, which is the major objective of that manuscript, they implemented a deep fully convolutional network (FCN), called UNet, for segmenting the ICH regions from the CT scans in a fully automatic manner.

The Proposed Model
This paper has developed a novel AICH-FDLSI technique for ICH detection and classification. The proposed AICH-FDLSI technique encompasses MF-based preprocessing, SOA with Otsu multilevel thresholding-based segmentation, fusion-based feature extraction, DHO-based hyperparameter tuning, and FSVM-based classification. The detailed working of these processes is offered in the succeeding sections.

Image Preprocessing.
Primarily, the MF technique is applied as a preprocessing tool to eliminate the noise present in the image. The MF is a nonlinear statistical filter that replaces the current pixel value with the median value of the pixels in the adjacent area. A naive implementation first builds a cumulative histogram of the neighbourhood and afterwards finds the first index beyond half the number of pixels in the histogram. An essential issue with this approach on a GPU is that all threads are required to compute whole histograms. For 8-bit images, a histogram of 256 bins is generated. This is impractical on present GPUs, as there are not enough hardware registers available to all the threads, and using global memory for histogram calculation would be too slow. To resolve this issue, the presented model relies on a bisection search over the histogram range. This technique does not compute the actual histogram; instead, it iteratively refines the histogram range that contains the median value. In each round, the current valid range is split into two halves, and the half that contains the larger number of pixels is selected for the next iteration. This procedure is repeated until the range converges to a single bin.
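The bisection search described above can be illustrated with a minimal sketch (illustrative only; a real GPU implementation would run one such search per thread over its filter window):

```python
def median_by_bisection(pixels, bits=8):
    """Find the median of 8-bit pixel values by bisecting the histogram
    range: no full histogram is built; each round keeps the half-range
    that contains more pixels, until the range shrinks to one bin."""
    lo, hi = 0, (1 << bits) - 1
    half = (len(pixels) + 1) // 2   # rank of the median element
    below = 0                       # pixels known to be < lo
    while lo < hi:
        mid = (lo + hi) // 2
        # count pixels falling in the lower half-range [lo, mid]
        lower = sum(1 for p in pixels if lo <= p <= mid)
        if below + lower >= half:
            hi = mid                # median lies in the lower half
        else:
            below += lower
            lo = mid + 1            # median lies in the upper half
    return lo

# 3x3 neighbourhood example with two bright outliers
window = [12, 200, 13, 14, 15, 16, 250, 17, 18]
print(median_by_bisection(window))  # prints 16
```

Note that the outliers 200 and 250 do not pull the result upward, which is why the MF suppresses impulse noise so well.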

Image Segmentation.
During the image segmentation process, the SOA with Otsu multilevel thresholding is applied to determine the affected regions.
Otsu's method is also known as the maximal between-cluster difference method [17]. Taking the image histogram as the basis and the maximal difference between target and background as the selection criterion, this technique obtains an optimum threshold from several candidates. An image whose gray-scale range is {0, 1, ..., L − 1} is separated into target and background by a threshold t. The probability of gray level i is p_i. The probability of the target class is ω0(t) = Σ_{i=0}^{t} p_i, and the probability of the background class is ω1(t) = Σ_{i=t+1}^{L−1} p_i. The optimum threshold t* is the one that makes the between-class difference maximal. The process extends to multithreshold segmentation as follows: the optimum thresholds t*_1, t*_2, ..., t*_k are those that make the total between-class difference maximal. In this study, the optimal threshold values of the Otsu method are decided by the SOA. The SOA is based on the migration and attacking behavior of seagulls in nature [18]. The mathematical model of migrating towards and attacking the prey is described below. The migration (exploration) phase simulates how the group of seagulls moves from one place to another; in this stage, a seagull must satisfy three conditions. To prevent collision between neighbours (i.e., other seagulls), an additional parameter A is applied for the assessment of the new search position: C_s = A × P_s(x), where C_s signifies the position of the search agent that does not collide with other search agents, P_s implies the current position of the search agent, and x denotes the current iteration. A is computed as A = f_c − (x × (f_c / Max_iteration)), where x = 0, 1, 2, ..., Max_iteration, and f_c controls the frequency of A, which decreases gradually from f_c to 0. In this study, the value of f_c is set to 2. After avoiding collisions between neighbours, the search agent moves towards the direction of the best neighbour.
Let M_s be the movement of the search agent P_s towards the best-fit search agent P_bs (viz., the fittest seagull): M_s = B × (P_bs(x) − P_s(x)), where the behavior parameter B is randomly assigned and is accountable for an appropriate balance between exploitation and exploration. B is evaluated as B = 2 × A² × r_d, where r_d represents a random value within [0, 1]. Finally, the search agent updates its position with respect to the optimal search agent as D_s = |C_s + M_s|, where D_s is the distance between the search agent and the best-fit search agent (viz., the best seagull, whose fitness value is lowest). The exploitation phase focuses on exploiting the history and experience of the search process. Seagulls can change their attack angle and speed continuously during migration.
They retain their altitude using their wings and weight. While attacking the prey, a spiral movement behavior occurs in the air. This behavior in the x, y, and z planes is described as x′ = r × cos(k), y′ = r × sin(k), and z′ = r × k, with r = u × e^(kv), where r indicates the radius of each turn of the spiral, k represents a random angle within [0, 2π], u and v denote constants that determine the spiral shape, and e represents the base of the natural logarithm. The attack position is evaluated by P_s(x) = (D_s × x′ × y′ × z′) + P_bs(x), where P_s(x) saves the best solution and updates the positions of the other search agents. The presented SOA starts from a randomly generated population, and the search agents update their positions with respect to the best search agent over the iterations. B is in charge of a smooth transition between exploitation and exploration. Therefore, the SOA is regarded as a global optimizer as a result of its good exploitation and exploration capability.
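The Otsu criterion that the SOA maximizes can be sketched as follows. This is a toy illustration: an 8-level histogram is searched exhaustively in place of the seagull agents, which is feasible only because the example is tiny; on real 256-level images the SOA explores the threshold space instead of enumerating it.

```python
import itertools

def between_class_variance(hist, thresholds):
    """Otsu objective: total between-class variance for the classes
    defined by the sorted threshold list (class c covers the gray
    levels in (t_{c-1}, t_c])."""
    total = sum(hist)
    probs = [h / total for h in hist]
    mu_total = sum(i * p for i, p in enumerate(probs))
    bounds = [0] + [t + 1 for t in sorted(thresholds)] + [len(hist)]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(probs[lo:hi])               # class probability
        if w == 0:
            continue
        mu = sum(i * probs[i] for i in range(lo, hi)) / w
        var += w * (mu - mu_total) ** 2     # weighted deviation of class mean
    return var

# A multimodal toy histogram over 8 gray levels; two thresholds -> 3 classes.
hist = [10, 30, 5, 2, 4, 25, 20, 4]
best = max(itertools.combinations(range(len(hist) - 1), 2),
           key=lambda t: between_class_variance(hist, t))
print(best)
```

The pair `best` maximizes the between-class variance, which is exactly the fitness value a seagull (candidate threshold vector) would be scored with.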

Feature Extraction.
Once the images are segmented, the next stage is to derive a fused feature vector using the CapsNet and EfficientNet models, yielding the two vectors f_CapsNet and f_EfficientNet. The derived individual features are then combined into a single vector by concatenation, f = {f_CapsNet, f_EfficientNet}, where f represents the fused vector (1 × 1186). Entropy is applied to the feature vectors to choose the optimal features, based on their scores, which are passed to the classifier for differentiating the image classes.
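A minimal sketch of the fusion step, with hypothetical two-dimensional feature vectors standing in for the real CapsNet and EfficientNet outputs, and Shannon entropy used as the selection score (one plausible reading of the entropy-based selection described above):

```python
import math

def entropy_score(values, bins=8):
    """Shannon entropy of a feature's value distribution across the
    images; higher entropy = more variation = more informative."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return 0.0                       # constant feature carries no information
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# Hypothetical per-image features: concatenate, then score column-wise.
caps  = [[0.1, 0.9], [0.2, 0.8], [0.3, 0.7]]   # 3 images x 2 CapsNet features
effi  = [[5.0, 1.0], [5.0, 2.0], [5.0, 3.0]]   # 3 images x 2 EfficientNet features
fused = [c + e for c, e in zip(caps, effi)]     # 3 images x 4 fused features
scores = [entropy_score(col) for col in zip(*fused)]
keep = [j for j, s in enumerate(scores) if s > 0]
print(keep)  # prints [0, 1, 3]
```

The constant column (index 2) scores zero entropy and is dropped; in the real pipeline the same scoring prunes the 1 × 1186 fused vector before classification.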

CapsNet Model.
To address the limitations of CNNs, Hinton [19] presented a higher dimensional vector named a "capsule" for representing an entity (an object or a part of an object) by a group of neurons instead of an individual neuron. The activities of the neurons in an active capsule signify different properties of a particular entity that is present in the image. Every capsule learns an implicit description of a visual entity and outputs both the likelihood that the entity exists and a group of instantiation parameters that include the precise pose (orientation, position, and size), albedo, hue, texture, deformation, and so on. The framework of CapsNet is dissimilar to other DL methods: the inputs and outputs of CapsNet are vectors, whose norm and direction represent the existence probability and the various attributes of the entity, correspondingly. Capsules at the same level help predict the instantiation parameters of higher-level capsules through transformation matrices, and dynamic routing is then adopted to make the predictions consistent. When various predictions are consistent, a single high-level capsule becomes active.
A simple CapsNet framework is demonstrated in Figure 1, where the framework is shallow, with only one fully connected layer (EntityCaps) and two convolution layers (PrimaryCaps and Conv1). Specifically, Conv1 is a typical convolution layer that converts the image into primary features and passes its output to PrimaryCaps via a convolutional filter of size 13 × 13 × 256. Since the original images are not appropriate as input for the primary layer of the CapsNet, the primary features obtained after this convolution are used instead. The next convolution layer creates the respective vectors that serve as input to the capsule layer. In standard convolutions, every output is a scalar; however, the convolution of PrimaryCaps is different: it can be considered a two-dimensional convolution with eight distinct weight sets applied to the 15 × 15 × 256 input. PrimaryCaps generates thirty-two 11 × 11 capsule maps using stride-2 convolutions as output. The third layer (EntityCaps) is the output layer, which has nine conventional capsules corresponding to nine distinct categories.
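The capsule idea that the vector norm encodes an existence probability rests on the squashing non-linearity; a minimal sketch follows (the layer sizes above are the paper's, while this function is generic to any capsule dimension):

```python
import math

def squash(vector, eps=1e-9):
    """Capsule squashing non-linearity: shrinks short vectors toward
    zero and long vectors toward (but never past) unit norm, so the
    output norm can be read as an existence probability."""
    norm_sq = sum(x * x for x in vector)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq) / (norm + eps)
    return [scale * x for x in vector]

short = squash([0.1, 0.0])   # weak evidence  -> norm near 0
long_ = squash([10.0, 0.0])  # strong evidence -> norm near 1
print(round(math.hypot(*short), 3), round(math.hypot(*long_), 3))
```

Because only the norm changes, the vector's direction (the instantiation parameters such as pose and texture) is preserved exactly.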

EfficientNet Model.
The EfficientNet technique is utilized as a feature extraction component for generating a helpful group of feature vectors from the input image [20]. DL is the most well-known framework for this purpose, as DL approaches learn significant features from an input image at different convolutional levels, similar to the function of the human brain.
DL solves complex problems notably well and quickly, with high classification accuracy and a low error rate. A DL approach contains different modules (convolutional layers, pooling layers, fully connected (FC) layers, and activation functions). DL models have the capability of attaining optimal performance over machine learning models, at the cost of high computational complexity. Distinct from other existing DL approaches, the EfficientNet architecture uses a compound scaling method that employs compound coefficients to uniformly scale network width, depth, and resolution. EfficientNet has eight variants, from B0 to B7. EfficientNet employs the inverted bottleneck convolution, first made well known by the MobileNetV2 approach, which is a layer that first expands the network and then compresses the channels. This structure reduces computation by nearly a factor of f² compared with normal convolution, where f signifies the filter size. EfficientNetB0 is the simplest of all eight variants and employs the fewest parameters, so EfficientNetB0 can be directly employed to evaluate performance.
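The compound scaling rule can be made concrete with the base coefficients reported for EfficientNet (α = 1.2, β = 1.1, γ = 1.15, chosen so that α·β²·γ² ≈ 2, meaning each unit increase of the compound coefficient φ roughly doubles the FLOPs); the numbers below are a sketch of the rule, not the exact per-variant configurations:

```python
# EfficientNet compound scaling: depth, width, and resolution grow
# together as alpha**phi, beta**phi, and gamma**phi respectively.
alpha, beta, gamma = 1.2, 1.1, 1.15

def scale(phi, base_depth=1.0, base_width=1.0, base_res=224):
    depth = base_depth * alpha ** phi   # multiplier on number of layers
    width = base_width * beta ** phi    # multiplier on channel counts
    res = round(base_res * gamma ** phi)  # input resolution in pixels
    return depth, width, res

for phi in range(3):                    # B0 up to roughly B2
    d, w, r = scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, input {r}px")
```

This is why scaling only one dimension (say, depth alone) saturates quickly, while the joint scaling keeps accuracy and cost in balance.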

DHO-Based Hyperparameter Tuning.
In this work, a new metaheuristic DHO method has been applied for the hyperparameter tuning process, inspired by deer hunting as performed by a group of hunters [21]. In deer hunting, the hunters encircle the deer and get closer to it by using certain strategies. These strategies include the consideration of several parameters, such as the deer's position, the wind angle, and so on. Cooperation between the hunters is another relevant factor that makes the hunting very efficient. Lastly, they attain the target according to the positions of the successor and the leader.
The objective function of the presented model is to minimize the validation error of the deep models with respect to their hyperparameters, and the weight optimization with the DHO method is described as follows. Because of its unique capabilities, a deer can escape easily from the hunt. The process initiates with a randomly generated population vector named the hunters, where m denotes the size of the hunter population (candidate weights). Next, the key parameters, namely the position angle and the wind angle, are computed. The whole search space is considered as a circle; hence, the wind angle is defined along the circumference of the circle.
The wind angle is computed as θ_j = 2πa, where a denotes a random value within [0, 1] and j signifies the current iteration. Subsequently, position propagation based on the leader position (X_l) and the successor position (X_s) is presented. The successor position defines the position of the succeeding weights, while the leader position defines the first best position of the hunter.
Propagation via the Leader Position (X_l). After identifying the best position, all the weights in the population try to attain that position. The position update then starts by modeling the encircling behavior: let X_j be the position at the current iteration and X_{j+1} the position at the succeeding iteration. The coefficient vectors Z and K are involved in this process, and the random value introduced by considering the wind speed is denoted as p, taking values from 0 to 2. In the expressions for the Z and K coefficient vectors, the maximal iteration is denoted as j_max, the variable b ranges from −1 to 1, and the values of the other random variables lie within [0, 1]. The initial position of the hunter, (X, Y), is updated according to the position of the prey, and both the Z and K coefficient vectors are modified to reach the optimal position (X_b, Y_b). When p < 1, the position update takes place in such a way that the hunter can move in an arbitrary direction without considering the angle position. Propagation through the Angle Position. The angle position update is considered in order to enlarge the search space. To make the hunting more efficient, it is crucial to describe the angle position of the hunter; it is updated using the visual angle B of the prey, computed from the angle position φ_{j+1}, the best position X_b, and the random value p. The individual's position is set opposite to the angle position; hence, the prey has no awareness of the hunter. Propagation via the Successor Position. In the exploration phase, the vector K is used within the encircling behavior. At first, a random search is performed by considering K values less than 1. Finally, the position update is performed based on the successor position instead of the best position.
Next, the global search is carried out by replacing the leader position with the successor position in the position update. The position updating is repeated until the optimal position is identified (viz., the termination condition is met).
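A highly simplified, leader-guided population search in the spirit of DHO is sketched below for hyperparameter tuning. The loss surface, parameter ranges, and the shrinking coefficient are all illustrative stand-ins: in the real pipeline the loss would be the validation error of the trained CapsNet/EfficientNet models.

```python
import random

random.seed(7)

def loss(params):
    """Hypothetical validation-loss surface over two hyperparameters
    (say, log10 learning rate and dropout rate); minimum at (-3, 0.3)."""
    lr_exp, drop = params
    return (lr_exp + 3.0) ** 2 + (drop - 0.3) ** 2

def dho_like_search(m=20, iters=60):
    # random initial hunter population, as in the DHO initialisation
    pop = [[random.uniform(-6, 0), random.uniform(0, 1)] for _ in range(m)]
    leader = list(min(pop, key=loss))        # best position found so far
    for j in range(iters):
        z = 0.5 * (1 - j / iters)            # shrinking step coefficient
        for hunter in pop:
            p = random.uniform(0, 2)         # wind-speed-like factor in [0, 2]
            for d in range(2):
                # encircling move: slide the hunter toward the leader
                hunter[d] = leader[d] - z * p * (leader[d] - hunter[d])
        leader = list(min(pop + [leader], key=loss))  # leader only improves
    return leader

best = dho_like_search()
print("best hyperparameters:", [round(v, 3) for v in best])
```

Because the leader is replaced only when a hunter finds a strictly better position, the best-so-far loss is monotonically non-increasing, mirroring the DHO termination behavior described above.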

Image Classification.
At the final stage, the FSVM model is applied to determine the suitable class labels for the test images. In a conventional SVM, each data point is regarded as equally significant and allotted the same penalty parameter. However, in several real-world classification applications, a few sample points, such as noises or outliers, may not be accurately assigned to one of the two classes, and not all sample points contribute equally to the decision surface. The hyperplanes of the SVM model are shown in Figure 2. To resolve this issue, the FSVM concept was initially presented [22]. A fuzzy membership is assigned to every sample point so that different sample points can make distinct contributions to the construction of the decision surface. The training samples are considered as follows: let x_i ∈ R^n be an n-dimensional sample point, y_i ∈ {−1, +1} its class label, and s_i (i = 1, ..., N) a fuzzy membership that fulfills σ ≤ s_i ≤ 1 with a small constant σ > 0. The quadratic optimization problem for classification can be represented as min_{w,b,ξ} (1/2)‖w‖² + C Σ_{i=1}^{N} s_i ξ_i, subject to y_i(w · x_i + b) ≥ 1 − ξ_i and ξ_i ≥ 0 for i = 1, ..., N, where w indicates the normal vector of the separating hyperplane, b denotes a bias, and C represents a parameter that must be determined beforehand to control the trade-off between the classification margin and the cost of misclassification errors. As s_i represents the attitude of the corresponding point x_i towards one class and the slack variable ξ_i is a measure of error, the term s_i ξ_i can be considered a measure of error with different weights. The larger s_i is, the more significantly the corresponding point is treated; the smaller s_i is, the less significantly it is treated. Hence, FSVM can find a more robust hyperplane by maximizing the margin while allowing some misclassification of less significant points.
To solve the FSVM problem, the primal problem above is transformed into the following dual problem by introducing Lagrangian multipliers α_i: max_α Σ_{i=1}^{N} α_i − (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j K(x_i, x_j), subject to Σ_{i=1}^{N} y_i α_i = 0 and 0 ≤ α_i ≤ s_i C for i = 1, ..., N. Compared to the typical SVM, the above formulation has only a small difference, namely the upper bound on the values of α_i. By solving this dual problem for the optimal α_i, w and b can be recovered in the same way as in the typical SVM.
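A common way to assign the memberships s_i (a sketch of one standard choice, not necessarily the paper's exact rule) is by distance from the sample's own class centre, so probable outliers receive a small effective penalty s_i·C:

```python
import math

def fuzzy_memberships(points, labels, sigma=0.05):
    """Distance-to-class-centre membership: s_i falls linearly with the
    distance from the sample's own class centre and is floored at sigma,
    so the farthest point in each class (a likely outlier) is down-weighted."""
    centres = {}
    for cls in set(labels):
        pts = [p for p, y in zip(points, labels) if y == cls]
        centres[cls] = [sum(c) / len(pts) for c in zip(*pts)]
    radius = {cls: max(math.dist(p, centres[cls])
                       for p, y in zip(points, labels) if y == cls) + 1e-9
              for cls in centres}
    return [max(sigma, 1.0 - math.dist(p, centres[y]) / radius[y])
            for p, y in zip(points, labels)]

pts = [(0.0, 0.0), (0.2, 0.1), (3.0, 3.0),    # class +1; (3, 3) is an outlier
       (5.0, 5.0), (5.2, 4.9), (4.9, 5.1)]    # class -1
ys = [+1, +1, +1, -1, -1, -1]
s = fuzzy_memberships(pts, ys)
print([round(v, 2) for v in s])
```

The resulting s_i then scale the per-sample slack penalty in the primal problem, which is exactly how the outlier at (3, 3) is prevented from dragging the hyperplane.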

Performance Validation
The performance validation of the proposed model takes place using a benchmark CT ICH data set [23] including 341 images: 171 images under the epidural (EPI) class, 24 images under intraventricular (IVT), 72 images under intraparenchymal (IPC), 56 images under subdural (SBD), and 18 images under the subarachnoid (SAD) class. The size of each image is 512 × 512 pixels. Figure 3 shows sample test images. The data set includes ICH masks and CT scans, in JPG and NIfTI format, at the PhysioNet repository. NIfTI is a file format for neuroimaging, used very commonly in imaging informatics for neuroscience and even neuroradiology research. Figure 4 showcases the confusion matrix of the AICH-FDLSI technique on the test images under run-1. The figure reports that the AICH-FDLSI technique has classified 19 images under IVT, 64 images under IPC, 12 images under SAD, 170 images under EPI, and 54 images under SBD. Table 1 reports the ICH classification results analysis of the AICH-FDLSI technique under run-1.
The results demonstrate that the AICH-FDLSI technique has classified the IVT class with sens_y, spec_y, prec_n, and accu_y of 0.7917, 0.9811, 0.7600, and 0.9677, respectively. Likewise, the AICH-FDLSI technique has identified the IPC class with sens_y, spec_y, prec_n, and accu_y of 0.8889, 0.9888, 0.9552, and 0.9677, respectively. Moreover, the AICH-FDLSI technique has identified the instances under SBD with sens_y, spec_y, prec_n, and accu_y of 0.9643, 0.9895, 0.9474, and 0.9853, respectively. In a subsequent run, the experimental values state that the AICH-FDLSI technique has classified the IVT class with sens_y, spec_y, prec_n, and accu_y of 0.8333, 0.9811, 0.7692, and 0.9707, respectively. Moreover, the AICH-FDLSI technique has categorized the IPC class with sens_y, spec_y, prec_n, and accu_y of 0.8889, 0.9888, 0.9552, and 0.9677, respectively. Eventually, the AICH-FDLSI technique has determined the images under SBD with sens_y, spec_y, prec_n, and accu_y of 0.9643, 1.0000, 1.0000, and 0.9941, respectively. Figure 7 offers an overall result analysis of the AICH-FDLSI technique under three different runs.
The results show that the AICH-FDLSI technique has accomplished maximum classification performance under all three test runs. For instance, under run-1, the AICH-FDLSI technique has classified the ICH with sens_y, spec_y, prec_n, and accu_y of 0.8611, 0.9812, 0.8950, and 0.9742, respectively. Likewise, under run-2, the AICH-FDLSI technique has classified the ICH with sens_y, spec_y, prec_n, and accu_y of 0.8695, 0.9810, 0.9052, and 0.9754, respectively. Similarly, under run-3, the AICH-FDLSI technique has classified the ICH with sens_y, spec_y, prec_n, and accu_y of 0.8861, 0.9833, 0.9106, and 0.9777, respectively. Figure 8 investigates the accuracy graph of the AICH-FDLSI technique on the test data set. The figure demonstrates that the AICH-FDLSI technique has resulted in improved training and validation accuracies. The loss graph analysis of the AICH-FDLSI technique on the test data set is presented in Figure 9.
The results highlight that the loss values tend to decrease with increasing epoch count, and it is observable that the validation loss remains lower than the training loss. Table 5 provides a brief result analysis of the AICH-FDLSI against recent techniques. A brief sens_y analysis of the AICH-FDLSI technique with existing approaches [16, 24-27] is provided in Figure 10. The figure shows that the UNet, WANN, and SVM techniques have attained lower sens_y values of 63.10%, 60.18%, and 76.38%, respectively. Eventually, the WEM-DCNN and convolutional NN techniques have resulted in reasonable sens_y of 83.33% and 87.06%, respectively. But the AICH-FDLSI technique has surpassed the other ones with an increased sens_y of 88.61%.
A comparative prec_n analysis of the AICH-FDLSI technique with other techniques is shown in Figure 11. The figure reports that the WANN and SVM techniques have attained lower prec_n values than the other methods. Table 6 offers a comparative results analysis of the AICH-FDLSI with recent techniques in terms of spec_y and accu_y. Figure 12 depicts the comparative spec_y analysis of the AICH-FDLSI system with other techniques. From the figure, it is notable that the UNet, WANN, Res-NexT, convolutional NN, and SVM techniques have accomplished minimal classification performance with spec_y values of 88.60%, 70.13%, 90.42%, 88.18%, and 77.53%, respectively. Next to that, the DN-ELM and WEM-DCNN techniques have resulted in reasonable spec_y of 97.70% and 97.48%, respectively. However, the AICH-FDLSI technique has gained improved performance with a superior spec_y of 98.33%. Finally, the computation time (CT) analysis of the AICH-FDLSI methodology with recent approaches is shown in Figure 14. The results portray that the WANN, Res-NexT, and SVM models have obtained the worst outcomes with maximum computation times of 78 s, 80 s, and 89 s, respectively. Following that, the WEM-DCNN and convolutional NN techniques have attained moderately close computation times of 75 s and 74 s, respectively. Along with that, the DN-ELM and UNet models have obtained reasonable computation times of 29 s and 42 s, respectively. However, the AICH-FDLSI technique has accomplished improved performance with a computation time of 24 s. From the above-mentioned results, it is evident that the AICH-FDLSI process is an efficient tool for ICH detection and classification.

Conclusion
This paper has developed a novel AICH-FDLSI technique for ICH detection and classification.
The proposed AICH-FDLSI technique encompasses MF-based preprocessing, SOA with Otsu multilevel thresholding-based segmentation, fusion-based feature extraction, DHO-based hyperparameter tuning, and FSVM-based classification. The application of the SOA and DHO algorithms helps improve the overall ICH classification performance. To showcase the improved classifier results of the proposed model, a wide range of experiments is performed using the benchmark intracranial haemorrhage data set. The experimental outcomes state that the AICH-FDLSI model has reached a proficient performance. Therefore, the proposed AICH-FDLSI technique can be applied as a proficient tool for ICH diagnosis and classification. In the future, the ICH classification performance of the AICH-FDLSI technique can be improved by the use of hybrid DL models.
Data Availability

The data set used in this paper is publicly available at https://physionet.org/content/ct-ich/1.3.1/.

Ethical Approval

This article does not contain any studies with human participants performed by any of the authors.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Authors' Contributions
The manuscript was written through the contributions of all authors. All authors have given approval to the final version of the manuscript.