Article

Multi-Modal Rigid Image Registration and Segmentation Using Multi-Stage Forward Path Regenerative Genetic Algorithm

1
Computer Systems Engineering, University of Engineering & Technology, Peshawar 091, Pakistan
2
National Center for Big Data and Cloud Computing (NCBC), University of Engineering & Technology, Peshawar 091, Pakistan
3
Department of Quantitative Methods and Economic Informatics, Faculty of Operation and Economics of Transport and Communication, University of Zilina, 01026 Zilina, Slovakia
4
Department of Telecommunications, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, 70800 Ostrava, Czech Republic
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(8), 1506; https://doi.org/10.3390/sym14081506
Submission received: 7 July 2022 / Revised: 13 July 2022 / Accepted: 18 July 2022 / Published: 22 July 2022

Abstract
Medical image diagnosis and delineation of lesions in the human brain require information from different imaging sensors to be combined. Image registration is an essential pre-processing technique for aligning images of different modalities. The brain is a naturally bilaterally symmetrical organ, where the left half lobe resembles the right half lobe around the symmetry axis. The symmetry axis identified in one MRI image can instantly identify the symmetry axes in multi-modal registered MRI images. MRI sensors may induce different levels of noise and Intensity Non-Uniformity (INU) in images. These image degradations can make it difficult for an optimization technique to find the true transformation parameters. We investigate a new evolutionary variant of the genetic algorithm as an optimization technique that performs well even at high levels of noise and INU, compared to Nesterov, the Limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm (LBFGS), Simulated Annealing (SA), and the Single-Stage Genetic Algorithm (SSGA). The proposed multi-modal image registration technique, based on a genetic algorithm with increasing precision levels and decreasing search spaces in successive stages, is called the Multi-Stage Forward Path Regenerative Genetic Algorithm (MFRGA). Our proposed algorithm achieves lower overall registration error than the standard genetic algorithm. MFRGA yields a mean registration error of 0.492 when both reference and template images have the same levels of noise (1–9)% and INU (0–40)%, and 0.317 for a noise-free template and a reference with noise levels (1–9)% and INU (0–40)%. Accurate registration results in good segmentation, and we apply the registration transformations to segment normal brain structures in order to evaluate registration accuracy. Brain segmentation via registration with our proposed algorithm is better even at high levels of noise and INU compared to GA and LBFGS.
The mean dice similarity coefficient of brain structures CSF, GM, and WM is 0.701, 0.792, and 0.913, respectively.

1. Introduction

Medical images may be of the same or different types, such as Magnetic Resonance Imaging (MRI), Computerized Tomography (CT), and Positron Emission Tomography (PET). No single image modality provides complete information; images of different modalities can provide more comprehensive information than a single modality. However, target and source images of the same object can differ and may be misaligned because they were taken at different time instants, acquired by different sources such as MRI, CT, and PET, or captured at different angles to obtain a 2D or 3D perspective [1]. Moreover, it is almost impossible to place the affected person’s head inside the scanner exactly the same way as the first time, so the information in the acquired images will not be spatially aligned [2]. Alignment of images is important to obtain complementary anatomic and functional information from multiple modalities for precise diagnosis and treatment of the patient [3]. Image registration is a fundamental problem in medical imaging and is extensively used as a preliminary step to establish correspondence between two images [4]. This enables us to compare common features in different images. In registration problems, one image is considered the fixed image and the other the moving image. Aligning the moving image with the fixed image is the goal of registration, and the alignment is computed via a spatial transformation [5,6]. Registration can be divided into four main components: feature space, search space, search strategy, and similarity. Each component provides essential information for deciding which registration technique to use in a particular scenario. The area of interest used as the basis for registration, such as outlines, tumors, and edges, is referred to as the feature space. The set of transformations available to move an image into alignment with the source image is referred to as the search space.
The search strategy is the determination of the transformation to be chosen based on previous transformation results. The similarity is a metric of comparison between the source and target images being aligned. This forms the basis of how a registration problem can be framed [7]. Recent advances in image registration research have made this process less experimental and more reliable. Registration methods have traditionally focused on matching corresponding features in the images, but the research community has recently gained interest in similarity measures of a global correspondence nature. In this case, image registration performance depends directly on the effectiveness of the similarity measure for computing the similarity among images [8]. Multi-modality image registration based on comparing intensity values is not trivial because the same tissues have different intensity values in CT and MR images. Among the brain imaging modalities, MRI is able to scan tumor borders with a great level of detail and provides the best discrimination between soft tissues inside the brain [9]. An image can be geometrically represented and transformed in several ways, each with its own pros and cons. For accurate comparison and study, it is important that key biological landmarks are in the same place. No single approach works for all cases, owing to the variety of problems that occur while registering images [10]. Image registration is crucial in situations involving brain tumors, particularly well-defined glioblastoma multiforme, as the extraction of accurate morphological features depends on the correct alignment of the tumor zone. Image registration of multi-modal MRI scans of the brain is useful in the diagnosis of abnormalities. The accurate alignment of MRI sequences [11] results in accurate feature extraction and segmentation of the brain tumor.
Moreover, deformable image registration is used in atlas-based image registration and segmentation [12]. In this strategy, an atlas (a grayscale image with a labeled counterpart) is registered to the patient’s brain MRI using its grayscale image. The deformation found using the non-rigid image registration technique is then applied to the atlas’s labeled counterpart. The labels from the registered labeled data of the atlas are then transferred to the patient’s data, which directly segments brain structures along with anomalies. Image registration and segmentation can support each other and can be used one after the other: image registration aids image segmentation, while the results of image segmentation can be used to further improve image registration, and hence segmentation itself. Moreover, joint image registration and segmentation algorithms [13] also improve both registration and segmentation results. Furthermore, segmentation results can be used to assess registration accuracy, since ground truth values for image registration are more difficult to obtain than manual segmentations of brain structures by medical experts.
The main challenges in medical image registration are the presence of noise and INU. The performance of image registration may degrade as pixels are affected by noise and INU, which create false local contours. Optimization techniques used to find a global optimum solution may become stuck in a local optimum due to noise and INU.
The main contribution of our research is an evolutionary algorithm called MFRGA that performs well and finds a global optimum solution even at increased levels of noise and INU. Our proposed methodology not only succeeds in providing a global optimum solution, but its constituent multiple stages also improve the accuracy of the results.

2. Literature Review

Multi-modal image registration focuses on finding correspondence between images of different modalities and provides in-depth visual information through the fusion of multi-modal images. It is an important task in medical image analysis, and various studies have been conducted to fully utilize its potential. A complete software framework based on a multiresolution deformable transform, tackling elastic deformations that may occur during the surgical procedure, is presented in [14]; it enables the registration of MR and ultrasound (US) scans of the brain. Methods that involve image registration for segmentation use the location-based information of the anatomical features: the atlas is transformed to segment the target image, which preserves brain structure information along with the detection of brain abnormalities [15]. However, clustering methods [16,17] applied for the segmentation of both normal and abnormal tissues simultaneously are rare. The MRI sequence FLAIR is used for segmentation of the brain tumor using a sliding window and fuzzy c-means clustering to find the exact location of the tumor [18]. The segmentation of brain tumors known as gliomas is done by applying a unified algorithm to multi-modal images from the BRATS 2015 dataset; it comprises estimation of the region of interest using fuzzy c-means clustering and region growing, followed by refinement of the glioma border using region merging and an improved distance-regularized level set method [19]. The MRI sequences T1, T2, and FSPRG T1C are used for brain tumor segmentation using a computational algorithm [20]. The algorithm unifies enhancement of the region of interest by disconnecting adjacent brain structures with segmentation using detection by coordinates and the arithmetic mean. Moreover, morphological operators are used to further improve the results.
Moreover, Razlighi et al. [8] have shown that similarity measures with high robustness are more efficient in the registration of degraded images. In their analysis, five different brain image modalities (T1, T2, PD, EPI, PET) and four different forms of misregistration (translation, rotation, scaling, and B-spline) are used to compute the robustness of selected similarity measures (SMs). They showed that SMs with higher robustness are not only more tolerant to brain image degradation but also more effective for intermodal image registration. Uss et al. [21] demonstrated that a single complex combined SM based on five existing SMs, namely, the Modality Independent Neighborhood Descriptor (MIND), logarithmic-likelihood ratio (logLR), scale-invariant feature transform-octave (SIFT-OCT), phase correlation (PC), and histogram of orientated phase congruency (HOPC), can be applied to general cases as well as to a particular case under consideration for registration. In comparison to existing multi-modal SMs, the proposed complex combined SM increases the area under the curve by 1% to 21%. Haskins et al. [22] proposed a deep convolutional neural network to learn the similarity metric for MR–TRUS (trans-rectal ultrasound) registration. The similarity metric learned using a deep convolutional neural network outperformed the existing classic mutual information and feature-based methods [23,24]. A dual-supervised deep learning model called BIRNet was proposed for image registration by predicting the deformation of the image from its appearance; the model achieved state-of-the-art performance without the need for tuning parameters [25]. Mutual information as a similarity measure was first introduced in [26,27]. These studies were based on the assumption that a tissue region in one image would correspond to a similar region in the other image and would have similar grey values. Wood’s measure was adapted by [28].
They constructed a feature space by combining grey values for all corresponding points in the two images using a two-dimensional plot. However, they defined the regions in feature space differently from Wood’s method: the regions were based on clustering of registered images in feature space. Mutual information is a good choice for multi-modal image registration; however, local intensity variation in volumes and the consideration of only statistical information are its limitations. To overcome these limitations, Ref. [29] proposed adding spatial and geometrical information about the voxels via a 3D Harris operator, focusing on the registration of low- and high-resolution images. Their method achieved accurate registration and high performance with respect to other standard registration methods. Moreover, our previous work applied MFRGA to mono-modal image registration of brain MRI [30] with the Structural Similarity Index Measure (SSIM) and performed better even at higher noise and INU levels.

3. Materials and Methods

Multi-modal image registration requires aligning images from different modalities so that the similarity measure between them is maximized. The dataset used here is the BrainWeb dataset [31], an online simulator of the brain, from which the multi-modal images are taken. The BrainWeb online simulator provides the facility of selecting T1, T2, and PD MRI sequences, a variety of color maps, slice thicknesses, noise levels, and INU. T1 and T2 MRI images are selected as the template and reference images. In this research, multi-modal image registration is performed using an evolutionary algorithm. We propose an improved version of the genetic algorithm with better accuracy even at high levels of noise and INU. Mutual information is used as the similarity measure between the multi-modal images, with a higher value representing a greater degree of alignment and accuracy. The methodology of our research is described in this section.

3.1. Multi-Modal Image Registration

The registration of multi-modal images is a primary step in combining data from two or more images collected using different modalities. In addition to structural differences and intensity variations among images, partial or full information overlap among them adds an extra hurdle to the success of the registration process. Multi-modal image registration algorithms focus on finding the correspondence between images generated using various modalities and providing intensive visual information from the fusion of different medical imaging modalities. Registration, which aligns two images into one geometry, is always treated as an optimization problem. It is an iterative process, which is stopped by the optimizer. There are many optimizers, metrics, transformations, and interpolators available for implementing the registration process.

3.2. Mutual Information

Mutual information (MI) is an information-theoretic solution to image registration problems. In particular, the MI similarity metric is used for the registration of multi-modal medical images [32]. It compares two images and measures the statistical dependence between their pixel intensities. MI as a registration measure was first presented by A. Collignon et al. [33], and it became a leading method in medical image registration. MI can be defined as a measure of the information that one image contains about another. When the amount of information that the images contain about each other is maximal, the images are believed to be correctly registered. If the images are multi-modal, then the predictability of one modality from the other is what matters. Predictability is closely associated with the notion of entropy: a predictable random variable has low entropy, while an unpredictable random variable has high entropy.
MI can be presented in many ways. The most common formulae are based on Shannon entropy and on the Kullback–Leibler distance between two probability distributions. If H(A) and H(B) are the entropies of random variables A and B, respectively, while H(A, B) is their joint entropy, then mutual information can be presented as in (1) [34]:
MI(A, B) = H(A) + H(B) − H(A, B)   (1)
where H(A) and H(B) are the Shannon entropies of images A and B, calculated from the probability distributions of their pixel intensities.
The Kullback–Leibler form of MI, denoted MI(A, B), measures the degree of dependence of A and B as the distance between the joint distribution P_{A,B}(a, b) and the distribution corresponding to complete independence, P_A(a) · P_B(b), and is expressed in (2) [35]:
MI(A, B) = Σ_{a,b} P_{A,B}(a, b) log( P_{A,B}(a, b) / (P_A(a) · P_B(b)) )   (2)
For two statistically independent random variables A and B, P_{A,B}(a, b) = P_A(a) · P_B(b) and MI(A, B) = 0. For maximal statistical dependence, the variables are related by a one-to-one mapping T: P_A(a) = P_B(T(a)) = P_{A,B}(a, T(a)). The random variables A and B are taken to be the two images to be registered: one is the reference image, and the other is the template image. The intensity values of a pair of corresponding voxels in the images to be registered are referred to as a and b. The geometric transformation T relates the intensity values a and b in images A and B, respectively. The two images are best aligned by the geometric transformation T when MI(A, B) is maximal.
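For illustration, MI can be estimated from a joint intensity histogram. The sketch below is a minimal NumPy implementation (not the paper's code; the bin count is an arbitrary choice) of MI(A, B) = H(A) + H(B) − H(A, B):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Estimate MI(A, B) = H(A) + H(B) - H(A, B) from a joint
    histogram of the two images' pixel intensities."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()          # joint distribution P_AB(a, b)
    p_a = p_ab.sum(axis=1)              # marginal P_A(a)
    p_b = p_ab.sum(axis=0)              # marginal P_B(b)

    def entropy(p):
        p = p[p > 0]                    # skip empty bins to avoid log(0)
        return -np.sum(p * np.log2(p))

    return entropy(p_a) + entropy(p_b) - entropy(p_ab.ravel())
```

An image is maximally informative about itself, so `mutual_information(img, img)` equals the entropy of `img`, while a randomly shuffled copy yields a value near zero.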

3.3. Optimization Techniques

The following optimization techniques are used in this research.

3.3.1. Nesterov’s Accelerated Gradient

Nesterov’s Accelerated Gradient (NAG) [36] is an extension of gradient descent optimization. In classical momentum, the parameters are updated by computing the gradient and then stepping in the direction of the accumulated gradient. NAG, on the other hand, first takes a big step using the previously accumulated gradient and then calculates the gradient to make a correction [37]. It also avoids the problem of overshooting the minima. The update rule (3)–(5), as described in [38,39], is as follows.
θ̂ = θ_t − γ · update_{t−1}   (3)
update_t = γ · update_{t−1} + η · ∇_θ f(θ̂)   (4)
θ_{t+1} = θ_t − update_t   (5)
where f(θ) is the objective function with θ as the parameters of the model to update; η is the step size, γ is the momentum term, and update_t is the update vector at time step t.
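A minimal sketch of this update rule, assuming a generic differentiable objective supplied as a gradient callable (the function name and hyperparameter values are illustrative):

```python
import numpy as np

def nag_minimize(grad_f, theta0, eta=0.1, gamma=0.9, n_steps=200):
    """Nesterov's Accelerated Gradient: take the big momentum step first,
    evaluate the gradient at the look-ahead point, then correct."""
    theta = np.asarray(theta0, dtype=float)
    update = np.zeros_like(theta)
    for _ in range(n_steps):
        lookahead = theta - gamma * update                  # big step from history
        update = gamma * update + eta * grad_f(lookahead)   # corrected update
        theta = theta - update
    return theta
```

For example, minimizing f(θ) = ||θ − c||² (gradient 2(θ − c)) converges to c.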

3.3.2. Simulated Annealing

Simulated annealing (SA) is an optimization technique based on the thermodynamic theory of annealing ideal crystals [40]. It models the physical process of heating a material and then slowly lowering its temperature [41]. The controlled lowering of temperature increases the crystal size and hence reduces defects. SA is used to find the global optimum solution where there are many local optima. The mechanism of SA is briefly as follows.
Let the cost of the current state θ be represented as f(θ) and the cost of a neighboring state θ′ be represented as f(θ′). The difference D between f(θ′) and f(θ) is as described in [42].
D = f(θ′) − f(θ)
If D ≤ 0, the cost of the neighboring state is less than or equal to that of the current state, and θ′ is selected as the current state under the downhill acceptance criterion. If D > 0, then θ′ is accepted as the current state when e^{−D/T} > V_RND, where V_RND is a random value with 0 < V_RND < 1 and T is the temperature control parameter. Otherwise, when D > 0 and e^{−D/T} ≤ V_RND, the current state θ remains the candidate solution.
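The acceptance rule above can be sketched as follows; this is a generic SA loop with geometric cooling, not the exact schedule used in the experiments:

```python
import math
import random

def sa_accept(d, temperature):
    """Accept downhill moves (D <= 0) always; accept an uphill move
    (D > 0) only when exp(-D/T) exceeds a random draw V_RND in (0, 1)."""
    if d <= 0:
        return True
    return math.exp(-d / temperature) > random.random()

def simulated_annealing(cost, neighbour, theta0, t0=1.0, cooling=0.95, n_iter=500):
    """Minimise `cost` using the acceptance rule above while the
    temperature is slowly lowered by a geometric schedule."""
    theta, t, best = theta0, t0, theta0
    for _ in range(n_iter):
        cand = neighbour(theta)
        if sa_accept(cost(cand) - cost(theta), t):
            theta = cand
        if cost(theta) < cost(best):
            best = theta                  # track the best state ever visited
        t *= cooling                      # slowly lower the temperature
    return best
```

Early on, the high temperature lets the search escape local optima; as T decays, the loop becomes effectively greedy.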

3.3.3. LBFGS (Limited-Memory BFGS)

Limited-memory BFGS is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. Limited-memory BFGS works well with large datasets because it uses less memory than standard BFGS, and its reference implementation is distributed as software for large-scale unconstrained optimization.
The algorithm’s target problem is to minimize f(x) over unconstrained values of the real vector x, where f is a differentiable scalar function. The algorithm starts with an initial estimate X_0 of the optimal value and proceeds to refine it iteratively with a sequence of better estimates X_1, X_2, …. As the key driver of the algorithm, the derivatives of the function, g_k := ∇f(X_k), are used to define the steepest-descent direction and also to estimate the Hessian matrix (second derivative) of f(x).
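For illustration, L-BFGS can be invoked through SciPy's `L-BFGS-B` implementation (assuming SciPy is available; the Rosenbrock test function is a standard example here, not part of the registration pipeline):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Standard unconstrained test function with minimum at (1, 1)."""
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

# L-BFGS-B keeps only a few recent curvature pairs (s_k, y_k) to
# approximate the inverse Hessian, instead of storing the full matrix.
result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="L-BFGS-B")
```

`result.x` then holds the estimated minimizer and `result.success` indicates convergence.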

3.3.4. Single-Stage Genetic Algorithm

Single-Stage Genetic Algorithm (SSGA) is similar to a standard genetic algorithm. Here, individuals are rounded to an increasing number of decimal places in successive stages, and fitness scores are then calculated. Search space constraints for SSGA are also user-defined and vary when it is part of MFRGA. The description of SSGA for multi-modal rigid image registration is given in Algorithm 1. A population size of 50 is selected, with 50 stall generations and a function tolerance of 10^−6. The algorithm terminates when it reaches the maximum of 300 generations or 50 stall generations, where 50 stall generations means that the average change in the fitness function value is less than 10^−6 for 50 generations.

3.4. Proposed Optimization Technique for Multi-Modal Image Registration

3.4.1. Multi-Stage Forward Path Regenerative Genetic Algorithm

Multi-Stage Forward Path Regenerative Genetic Algorithm (MFRGA) is our proposed strategy, which contains iterative stages of SSGA with increasing precision levels of individuals and decreasing search space constraints to obtain accurate results. MFRGA differs from the standard genetic algorithm in its use of multiple stages, restricted search spaces, and precision levels of individuals. In each successive stage, the search space decreases and the precision level of individuals increases. The image registration result of the previous stage is used by the next stage. The use of multiple SSGA stages produces more accurate results by improving the image registration result at each successive stage. The description of MFRGA for multi-modal rigid image registration is given in Algorithm 2.

3.4.2. Image Registration Error and Segmentation Accuracy

The performance of our proposed algorithm is computed using image registration error and segmentation accuracy. Lower image registration error means higher segmentation accuracy. The following metrics are considered for registration and segmentation accuracy.

Absolute Percentage Registration Error in Each Dimension

Percentage registration error is computed in each dimension, i.e., x, y, and θ. The percentage registration error for each direction is as follows:
Δ_i = |(C_i − G_i) / G_i| × 100,   for i = x, y, θ   (G_i ≠ 0)
where Δ_x, Δ_y, and Δ_θ are the absolute percentage errors in x translation, y translation, and θ rotation, respectively. C_x, C_y, and C_θ are the values computed by the applied image registration, and G_x, G_y, and G_θ are the ground truth transformations for the x, y translations and θ rotation, respectively.
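For example, the per-dimension percentage error can be computed as follows (the dictionary-based interface is an illustrative choice):

```python
def pct_registration_error(computed, ground_truth):
    """Absolute percentage registration error per dimension:
    |C_i - G_i| / |G_i| * 100, defined only where G_i != 0."""
    return {i: abs(computed[i] - ground_truth[i]) / abs(ground_truth[i]) * 100.0
            for i in ("x", "y", "theta") if ground_truth[i] != 0}
```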
Algorithm 1: SSGA (Single-Stage Genetic Algorithm) for Multi-Modal Rigid Image Registration
Result: Rigid image transformations (x, y and θ)
I_R ← Reference image of modality M1
I_T ← Template image of modality M2
C_min ← Minimum constraint of search space
C_max ← Maximum constraint of search space
S_pop ← Population size
D ← Number of decimal places
Function SSGA(I_R, I_T, C_min, C_max, D):
An initial random population of S_pop individuals is generated, each containing the three parameters x, y translation and θ rotation. Individuals are rounded to D decimal places.
while No. of generations < Maximum no. of generations do
  if (x, y and θ) ∈ [C_min, C_max] then
    1. Each individual is evaluated in the fitness function using MI as the similarity measure; the best fitness value is the smallest value in the population.
    2. Fitness scores are used for the selection of parents within the population.
    3. Crossover is used to generate offspring.
    4. Mutation is applied to each offspring.
    5. The fitness of the intermediate population is evaluated.
    6. The fittest individuals are promoted to the next generation.
  else
    return fittest_individuals
  end
end
End Function
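A minimal sketch of the SSGA loop, with the MI-based fitness abstracted behind a callable to be minimised (e.g. negative mutual information). The selection, crossover, and mutation operators shown are generic choices; the paper does not specify its exact operators:

```python
import random

def ssga(fitness, c_min, c_max, decimals, pop_size=50, n_gen=300,
         mut_sigma=0.5, seed=None):
    """Single-stage GA over rigid parameters (x, y, theta): truncation
    selection, arithmetic crossover, Gaussian mutation. `fitness` returns
    a value to MINIMISE. Genes are clipped to [c_min, c_max] and rounded
    to `decimals` decimal places, as in Algorithm 1."""
    rng = random.Random(seed)

    def clip_round(ind):
        return tuple(round(min(max(g, c_min), c_max), decimals) for g in ind)

    pop = [clip_round(tuple(rng.uniform(c_min, c_max) for _ in range(3)))
           for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness)
        parents = scored[:pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            a = rng.random()                              # arithmetic crossover
            child = tuple(a * g1 + (1 - a) * g2 for g1, g2 in zip(p1, p2))
            child = tuple(g + rng.gauss(0, mut_sigma) for g in child)  # mutation
            children.append(clip_round(child))
        pop = parents + children                          # elitist survival
    return min(pop, key=fitness)
```

In the actual pipeline, `fitness` would transform the template by the candidate (x, y, θ) and return the negated MI against the reference image.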

Overall Registration Error

Overall registration error includes all dimensional transformation errors in one metric. It is defined as follows:
RE = Σ_i |C_i − G_i|,   for i = x, y, θ
where RE is the overall registration error: the sum of the absolute differences between the transformations computed by the image registration algorithm, C_i, and the ground truth transformations, G_i, where i indexes the dimension.
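A one-line sketch of this metric over (x, y, θ) tuples:

```python
def overall_registration_error(computed, ground_truth):
    """Overall registration error RE = sum_i |C_i - G_i| over (x, y, theta)."""
    return sum(abs(c - g) for c, g in zip(computed, ground_truth))
```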

Dice Similarity

The template image, with both a grayscale and a labeled version, can be used not only for registration but also for segmentation. Here, segmentation accuracy is computed by registering the grayscale images and then applying the same transformations to the labeled image to segment the reference image. Segmentation accuracy can be found using the Dice Similarity Index (DSI). We compute the DSI of individual normal brain structures as well as of the overall brain, with all structures together considered as brain pixels. DSI is defined as follows:
DSC = 2 |SI ∩ GT| / (|SI| + |GT|)
where DSC is the Dice Similarity Coefficient, SI denotes the segmented image, and GT denotes the ground truth of the image.
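A minimal sketch of the coefficient for binary masks (assuming NumPy arrays; the convention that two empty masks score 1.0 is our own choice):

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice Similarity Coefficient DSC = 2|SI ∩ GT| / (|SI| + |GT|)
    for binary masks `seg` (segmentation) and `gt` (ground truth)."""
    seg = np.asarray(seg).astype(bool)
    gt = np.asarray(gt).astype(bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0                      # two empty masks agree perfectly
    return 2.0 * np.logical_and(seg, gt).sum() / denom
```

For a multi-label atlas, the coefficient would be computed per structure (CSF, GM, WM) by thresholding each label into its own binary mask.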
Algorithm 2: Multi-Stage Forward Path Regenerative Genetic Algorithm (MFRGA) for Multi-Modal Rigid Image Registration
Result: Registered image I_Tk with rigid image transformations (x, y and θ)
I_R ← Reference image of modality M1
I_T ← Template image of modality M2
C_min ← Minimum constraint of search space
C_max ← Maximum constraint of search space
S_pop ← Population size
D ← Number of decimal places
x1, x2, x3 ← First-stage parameters
x1^i, x2^i, x3^i ← Parameters of the other stages, where i ∈ {′, ″, ‴}
Stage_No ← 1
while Stage_No ≤ 4 do
  if Stage_No == 1 then
    1. Call SSGA(I_R, I_T, −10, 10, 0)
    2. Store the output of the SSGA function
    3. return x1, x2, x3
    4. Apply image transformations x1, x2, x3 to obtain registered template image I_Tk1
  else if Stage_No == 2 then
    1. Call SSGA(I_R, I_Tk1, −0.5, 0.5, 1)
    2. Store the output of the SSGA function
    3. return x1′, x2′, x3′
    4. Apply image transformations x1′, x2′, x3′ to obtain registered template image I_Tk2
  else if Stage_No == 3 then
    1. Call SSGA(I_R, I_Tk2, −0.25, 0.25, 2)
    2. Store the output of the SSGA function
    3. return x1″, x2″, x3″
    4. Apply image transformations x1″, x2″, x3″ to obtain registered template image I_Tk3
  else if Stage_No == 4 then
    1. Call SSGA(I_R, I_Tk3, −0.125, 0.125, 3)
    2. Store the output of the SSGA function
    3. return x1‴, x2‴, x3‴
    4. Apply image transformations x1‴, x2‴, x3‴ to obtain registered template image I_Tk4
  else
    return fittest_individuals
  end
  Stage_No = Stage_No + 1
end
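The multi-stage schedule of Algorithm 2 can be sketched as a driver that chains SSGA stages, each searching a smaller space at a finer precision for a residual transform on top of the transform accumulated so far. The `make_fitness` and `ssga` interfaces here are hypothetical abstractions of the stage fitness (MI of the transformed template against the reference) and the stage optimizer:

```python
# Stage schedule from Algorithm 2: (C_min, C_max, decimal places D).
STAGES = [(-10.0, 10.0, 0), (-0.5, 0.5, 1), (-0.25, 0.25, 2), (-0.125, 0.125, 3)]

def mfrga(make_fitness, ssga):
    """Chain four SSGA stages. `make_fitness(total)` is assumed to return
    the fitness of a residual (x, y, theta) applied on top of the transform
    accumulated so far; each stage's residual is added to the total."""
    total = (0.0, 0.0, 0.0)                      # cumulative (x, y, theta)
    for c_min, c_max, d in STAGES:
        residual = ssga(make_fitness(total), c_min, c_max, d)
        total = tuple(t + r for t, r in zip(total, residual))
    return total
```

Any stage optimizer matching the `ssga(fitness, c_min, c_max, d)` signature can be plugged in, which also makes the schedule easy to test in isolation.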

4. Results

In order to evaluate the performance of our algorithm in rigid image registration, we need a reference image translated in the x and y directions and also rotated. We translated the reference image T2 by five pixels in the x-direction and seven pixels in the y-direction and rotated it by 3°. These are considered the ground truth transformations, and the goal is to recover these unknown transformations with which the template image best aligns with the reference image. The total number of brain MRI sequence combinations of template and reference images is 31. We divide our MRI sequence combinations of template and reference into two sets. The first set contains MRI sequences where the template and reference images have the same levels of noise and INU. The second set contains MRI sequences with template images free of noise and INU, while the reference images have different levels of noise and INU. We have evaluated our results in three different aspects, which are as follows.

4.1. Template and Reference Images with Same Levels of Noise and INU

In the first case, shown in Figure 1a, both the template T1 and reference T2 have the same levels of noise and INU. We can add noise levels of (0, 1, 3, 5, and 7)% and INU of (0, 20, and 40)%. We generated 16 cases of template and reference images with varying noise levels and INU. The noise field added to the real and imaginary components of the BrainWeb MRI sequences is Gaussian. Experiments are performed using both the Single-Stage Genetic Algorithm and MFRGA, and accuracy is computed where the template and reference images under experimentation are at the same levels of noise and INU.
Table 1 shows the performance of MFRGA vs. the Single-Stage Genetic Algorithm; MFRGA is clearly better in terms of lower registration error. All the available combinations of noise levels and INU have been selected. In the first stage, the search space constraints are [−10, 10]; the result of a single stage is near the final or true transformation, but the results can be further improved with our proposed MFRGA, which applies successive GA stages with shrinking search space constraints.
The result from the first stage is within error limits of [−0.5, +0.5]. With this in mind, the second, third, and fourth stages of GA contain shrinking search spaces of [−0.5, 0.5], [−0.25, 0.25], and [−0.125, 0.125], improving the accuracy even as noise levels and INU increase. For most of the cases in Table 2, Δ_x, Δ_y, and Δ_θ for MFRGA are less than for the single-stage genetic algorithm. The results in Table 3 are obtained for the case of a reference image with 0% INU and noise and a template image with all the cases of increasing INU and noise levels. It is clear that the results of MFRGA are closer to the ground truth transformations than those of SSGA. Table 4 shows that the overall registration error of MFRGA is less than that of the single-stage genetic algorithm for all cases, as also shown in Figure 2a–c with increasing INU of 0, 20, and 40%, respectively.

4.2. Segmentation Accuracy via Image Registration

In the third case, shown in Figure 1c, we evaluated the robustness of our proposed genetic algorithm variant in comparison to Nesterov, SA, LBFGS, and GA. Here, we have taken the case of a template free of noise and INU and a reference image with all possible combinations of noise and INU. The template image has both a grayscale MRI image and a labeled image. The more accurate the image registration is, the more accurate the segmentation will be. Nesterov, SA, LBFGS, GA, and MFRGA are all applied for image registration, and the same transformations are applied to the labeled image. In this way, images are not only registered but also segmented. Segmentation results are computed using the Dice Similarity (DS) index. A higher value of DS indicates higher segmentation accuracy, which is the result of accurate image registration. Table 5 shows the segmentation results of Nesterov, SA, LBFGS, GA, and MFRGA. Here, the labeled image of the template contains labels of three normal brain structures, i.e., GM (Gray Matter), WM (White Matter), and CSF (Cerebrospinal Fluid). Clearly, MFRGA has better results than Nesterov and LBFGS and comparable results to GA and SA. Figure 3 shows that MFRGA performs well even with increasing noise levels and INU. We have already shown in Table 3 that MFRGA performs well in comparison to GA.

5. Discussion

Multi-modal image registration is an essential task in medical image analysis: information from different modalities is combined to form a more accurate analysis. The sensors of different imaging modalities may undergo translations and rotations, so it is necessary to align both images before useful information can be extracted. If one of the images has a corresponding segmentation, the labeled image can be transformed with the same transformation applied to the grayscale image so that it, too, aligns with the reference image. Our proposed algorithm consists of successive, iterative stages of the genetic algorithm with a reduced search space and increased chromosome precision at each stage. The geometric transformations obtained from MFRGA are closer to the true transformations than those of SSGA, as shown in Table 1 and Table 4: the x-translation, y-translation, and rotation estimates from MFRGA lie closer to the true values of 5 pixels, 7 pixels, and 3 degrees, respectively. As a result of these geometric transformations, the overall registration error of MFRGA is reduced as compared to SSGA, as is clear from Table 2 and Table 3. A lower overall registration error means the results are better and closer to the true transformations.
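Reusing one set of rigid parameters for both the grayscale image and its label map can be sketched as below. This is pure NumPy with nearest-neighbor inverse mapping, chosen so that integer labels are never blended; the paper's actual interpolation and resampling details may differ.

```python
import numpy as np

def rigid_warp(img, tx, ty, theta_deg):
    """Apply a 2-D rigid transform (rotation about the image center,
    then an (x, y) translation) via inverse mapping with nearest-neighbor
    sampling, so the same call is valid for grayscale images and
    integer label maps alike."""
    h, w = img.shape
    th = np.deg2rad(theta_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse transform: undo the translation, then rotate back about the center.
    x0, y0 = xs - tx - cx, ys - ty - cy
    xi = np.rint(np.cos(th) * x0 + np.sin(th) * y0 + cx).astype(int)
    yi = np.rint(-np.sin(th) * x0 + np.cos(th) * y0 + cy).astype(int)
    out = np.zeros_like(img)
    ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out[ok] = img[yi[ok], xi[ok]]
    return out

# The same (tx, ty, theta) estimated on the grayscale pair also aligns the labels.
gray = np.arange(64, dtype=float).reshape(8, 8)
labels = (gray > 32).astype(int)
gray_aligned = rigid_warp(gray, 2, 1, 0)
labels_aligned = rigid_warp(labels, 2, 1, 0)
```

Nearest-neighbor sampling is the standard choice for label maps because any interpolation that averages neighboring pixels would invent label values that belong to no tissue class.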
The statistical analysis of the results in Table 6 shows that MFRGA outperforms GA, with a mean registration error of 0.492 for the same level of noise and INU in both the reference and template images, and 0.317 for a template with no noise and INU and varying noise and INU levels in the reference image. Here, the minimum registration error of the proposed methodology is 0.035 and the maximum is 2.02. The minimum registration error of MFRGA is almost ten times smaller than that of GA. The maximum registration error of GA is 3.847, higher than that of MFRGA. Better registration accuracy provides better segmentation results. Dice similarity is used to evaluate the segmentation accuracy of the brain structures CSF, GM, and WM, with comparison against the Nesterov, SA, GA, and LBFGS optimization techniques; a higher Dice coefficient indicates better segmentation. The mean Dice values using MFRGA for CSF, GM, and WM are 0.701, 0.792, and 0.913, respectively. The minimum and maximum Dice values of MFRGA are higher than those of Nesterov, GA, and LBFGS and comparable with SA. These high minimum and maximum Dice values show that MFRGA is robust and yields reliable results even as noise and INU increase. In almost all of the experiments, MFRGA provides good segmentation accuracy.
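The per-parameter percentage errors reported in Tables 2 and 3 are consistent with deviations relative to the ground-truth transform (5, 7, 3); for example, the stage-1 estimate (5.382, 7.376, 3.151) from Table 1 reproduces the first row of Table 2. A sketch follows; the mean-absolute-deviation aggregate is an assumption, since this section does not restate the paper's exact overall-error formula.

```python
def registration_errors(estimated, true):
    """Per-parameter percentage error relative to the ground-truth transform,
    plus a simple mean-absolute-deviation aggregate (the aggregation choice
    here is an assumption, not the paper's stated formula)."""
    pct = [abs(e - t) / abs(t) * 100.0 for e, t in zip(estimated, true)]
    overall = sum(abs(e - t) for e, t in zip(estimated, true)) / len(true)
    return pct, overall

# Stage-1 (single-stage GA) estimate from Table 1 at 0% INU and 0% noise:
pct, overall = registration_errors([5.382, 7.376, 3.151], [5.0, 7.0, 3.0])
# pct is approximately [7.64, 5.37, 5.03], matching the first row of Table 2.
```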

6. Conclusions

Evolutionary algorithms are among the most popular optimization techniques. This research assessed the robustness of the evolutionary approach along with the effectiveness of our proposed MFRGA. The multiple stages of MFRGA provide more accurate results than a single-stage genetic algorithm: the decreasing search space and increasing precision levels drive the converged results toward lower registration errors. Its performance remains strong in our experiments with noise levels up to 9% and INU up to 40%. The overall registration error, covering x- and y-translations and rotation, is much lower for MFRGA than for the single-stage genetic algorithm in all cases of increasing noise and INU levels. The mean registration error of MFRGA is below 0.5, and even the maximum registration error of MFRGA across all experimental cases is lower than that of GA, which shows good performance even in the worst case. Moreover, good segmentation accuracy of normal brain structures was recorded via registration using MFRGA in comparison to GA and LBFGS. The mean Dice similarity coefficients obtained for CSF, GM, and WM are 0.701, 0.792, and 0.913, respectively, with standard deviations of about 0.01. The difference between the minimum and maximum Dice coefficient values is below 0.05, with both minimum and maximum Dice similarity values higher than those of GA and LBFGS.
The statistical analysis shows the effectiveness of MFRGA in all cases of changing noise and INU levels. MFRGA can further be extended to the most challenging, pixel-level deformations, i.e., non-rigid image registration, where increased accuracy could be achieved even at higher noise and INU levels. Moreover, rigid image registration is crucial for extracting useful information from multiple modalities: it removes global transformation differences among multi-modal medical images, and the aligned images aid physicians and surgeons in analyzing medical diagnostics. In the future, atlas-based image registration combined with improved clustering methods [16,17] may yield more accurate simultaneous segmentation of both normal and abnormal brain structures.

Author Contributions

Formal analysis and investigation, M.A. and N.M.; conceptualization, M.A. and N.M.; visualization, M.A., J.F. and L.B.; writing—original draft preparation, M.A. and N.M.; writing—review and editing, J.F. and L.B.; validation, M.A., J.F. and L.B.; funding acquisition, J.F. and L.B.; resources, M.A.; supervision: N.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Education, Youth and Sports of the Czech Republic under grant SP2022/5, conducted at VSB—Technical University of Ostrava, Czechia.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset analyzed during the current study is available online at https://brainweb.bic.mni.mcgill.ca/ (accessed on 22 October 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Haskins, G.; Kruger, U.; Yan, P. Deep learning in medical image registration: A survey. Mach. Vis. Appl. 2020, 31, 8.
2. Ganzetti, M.; Liu, Q.; Mantini, D. A spatial registration toolbox for structural MR imaging of the aging brain. Neuroinformatics 2018, 16, 167–179.
3. Pushpa, B.R.; Amal, P.S.; Kamal, N.P. Detection and stagewise classification of Alzheimer disease using deep learning methods. Int. J. Recent Technol. Eng. (IJRTE) 2019, 7, 206–212.
4. Shen, M.; Deng, Y.; Zhu, L.; Du, X.; Guizani, N. Privacy-preserving image retrieval for medical IoT systems: A blockchain-based approach. IEEE Netw. 2019, 33, 27–33.
5. Mohammed, H.A.; Hassan, M.A. The image registration techniques for medical imaging (MRI-CT). Am. J. Biomed. Eng. 2016, 6, 53–58.
6. Sotiras, A.; Davatzikos, C.; Paragios, N. Deformable medical image registration: A survey. IEEE Trans. Med. Imaging 2013, 32, 1153–1190.
7. Brown, L.G. A survey of image registration techniques. ACM Comput. Surv. (CSUR) 1992, 24, 325–376.
8. Razlighi, Q.R.; Kehtarnavaz, N.; Yousefi, S. Evaluating similarity measures for brain image registration. J. Vis. Commun. Image Represent. 2013, 24, 977–987.
9. Petkovska, S.; Kraleva, S. Necessity of medical imaging registration for brain tumor radiotherapy. In Proceedings of the Third Conference on Medical Physics and Biomedical Engineering, Skopje, North Macedonia, 18–19 October 2013.
10. Maintz, J.A.; Viergever, M.A. A survey of medical image registration. Med. Image Anal. 1998, 2, 1–36.
11. Begum, N.; Badshah, N.; Ibrahim, M.; Ashfaq, M.; Minallah, N.; Atta, H. On two algorithms for multi-modality image registration based on Gaussian curvature and application to medical images. IEEE Access 2021, 9, 10586–10603.
12. Zhu, W.; Myronenko, A.; Xu, Z.; Li, W.; Roth, H.; Huang, Y.; Milletari, F.; Xu, D. NeurReg: Neural registration and its application to image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass, CO, USA, 1–5 March 2020; pp. 3617–3626.
13. Qiu, L.; Ren, H. U-RSNet: An unsupervised probabilistic model for joint registration and segmentation. Neurocomputing 2021, 450, 264–274.
14. Ponzio, F.; Macii, E.; Ficarra, E.; Di Cataldo, S. A multi-modal brain image registration framework for US-guided neuronavigation systems: Integrating MR and US for minimally invasive neuroimaging. In Proceedings of the 10th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2017), Porto, Portugal, 21–23 February 2017; SciTePress: Vienna, Austria; Volume 2, pp. 114–121.
15. Moualhi, W.; Ezzeddine, Z. Tumor growth model for atlas based registration of pathological brain MR images. In Proceedings of the Seventh International Conference on Machine Vision (ICMV 2014), Milan, Italy, 14 February 2015; SPIE: Washington, DC, USA; Volume 9445, pp. 317–322.
16. Lei, T.; Jia, X.; Zhang, Y.; Liu, S.; Meng, H.; Nandi, A.K. Superpixel-based fast fuzzy C-means clustering for color image segmentation. IEEE Trans. Fuzzy Syst. 2018, 27, 1753–1766.
17. Tang, Y.; Ren, F.; Pedrycz, W. Fuzzy C-means clustering through SSIM and patch for image segmentation. Appl. Soft Comput. 2020, 87, 105928.
18. Saxena, S.; Kumari, N.; Pattnaik, S. Brain tumour segmentation in FLAIR MRI using sliding window texture feature extraction followed by fuzzy C-means clustering. Int. J. Healthc. Inf. Syst. Inform. (IJHISI) 2021, 16, 1–20.
19. Li, Q.; Gao, Z.; Wang, Q.; Xia, J.; Zhang, H.; Zhang, H.; Liu, H.; Li, S. Glioma segmentation with a unified algorithm in multimodal MRI images. IEEE Access 2018, 6, 9543–9553.
20. Mascarenhas, L.R.; Ribeiro, A.D.; Ramos, R.P. Automatic segmentation of brain tumors in magnetic resonance imaging. Einstein (São Paulo) 2020, 18.
21. Uss, M.L.; Vozel, B.; Abramov, S.K.; Chehdi, K. Selection of a similarity measure combination for a wide range of multimodal image registration cases. IEEE Trans. Geosci. Remote Sens. 2020, 59, 60–75.
22. Haskins, G.; Kruecker, J.; Kruger, U.; Xu, S.; Pinto, P.A.; Wood, B.J.; Yan, P. Learning deep similarity metric for 3D MR–TRUS image registration. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 417–425.
23. Holia, M.S.; Thakar, V.K. Mutual information based image registration for MRI and CT scan brain images. In Proceedings of the 2012 International Conference on Audio, Language and Image Processing, Shanghai, China, 16 July 2012; IEEE: Piscataway, NJ, USA; pp. 78–83.
24. Wu, G.; Kim, M.; Wang, Q.; Shen, D. Hierarchical attribute-guided symmetric diffeomorphic registration for MR brain images. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2012; pp. 90–97.
25. Fan, J.; Cao, X.; Yap, P.T.; Shen, D. BIRNet: Brain image registration using dual-supervised fully convolutional networks. Med. Image Anal. 2019, 54, 193–206.
26. Woods, R.P.; Cherry, S.R.; Mazziotta, J.C. Rapid automated algorithm for aligning and reslicing PET images. J. Comput. Assist. Tomogr. 1992, 16, 620–633.
27. Woods, R.P.; Mazziotta, J.C.; Cherry, S.R. MRI-PET registration with automated algorithm. J. Comput. Assist. Tomogr. 1993, 17, 536–546.
28. Hill, D.L.; Hawkes, D.J.; Harrison, N.A.; Ruff, C.F. A strategy for automated multimodality image registration incorporating anatomical knowledge and imager characteristics. In Biennial International Conference on Information Processing in Medical Imaging; Springer: Berlin/Heidelberg, Germany, 1993; pp. 182–196.
29. Woo, J.; Stone, M.; Prince, J.L. Multimodal registration via mutual information incorporating geometric and spatial context. IEEE Trans. Image Process. 2014, 24, 757–769.
30. Ashfaq, M.; Minallah, N.; Rehman, A.U.; Belhaouari, S.B. Multistage forward path regenerative genetic algorithm for brain magnetic resonant imaging registration. Big Data 2021, 10, 65–80.
31. BrainWeb: Simulated Brain Database. Available online: http://www.bic.mni.mcgill.ca/brainweb/ (accessed on 22 October 2021).
32. Egnal, G.; Daniilidis, K. Image Registration Using Mutual Information; Technical Reports; University of Pennsylvania, Department of Computer and Information Science (CIS): Philadelphia, PA, USA, 2000; Volume 117.
33. Viola, P.; Wells, W.M., III. Alignment by maximization of mutual information. Int. J. Comput. Vis. 1997, 24, 137–154.
34. Kosiński, W.; Michalak, P.; Gut, P. Robust image registration based on mutual information measure. Sci. Res. 2012, 3, 19558.
35. Maes, F.; Collignon, A.; Vandermeulen, D.; Marchal, G.; Suetens, P. Multimodality image registration by maximization of mutual information. IEEE Trans. Med. Imaging 1997, 16, 187–198.
36. Nesterov, Y. A method of solving a convex programming problem with convergence rate O(1/k²). Sov. Math. Dokl. 1983, 27, 2.
37. Nesterov Accelerated Gradient. Available online: https://paperswithcode.com/method/nesterov-accelerated-gradient (accessed on 1 September 2021).
38. Ruder, S. An Overview of Gradient Descent Optimization Algorithms. Available online: https://ruder.io/optimizing-gradient-descent/ (accessed on 20 March 2020).
39. Chandra, A.L. Learning Parameters, Part 2: Momentum-Based & Nesterov Accelerated Gradient Descent. Available online: https://towardsdatascience.com/learning-parameters-part-2-a190bef2d12 (accessed on 7 September 2021).
40. Sieniutycz, S.; Jeżowski, J. Brief review of static optimization methods. In Energy Optimization in Process Systems and Fuel Cells; Elsevier: Amsterdam, The Netherlands, 2013; pp. 1–43.
41. What Is Simulated Annealing? Available online: https://www.mathworks.com/help/gads/what-is-simulated-annealing.html (accessed on 7 September 2021).
42. Almarashi, M.; Deabes, W.; Amin, H.H.; Hedar, A.R. Simulated annealing with exploratory sensing for global optimization. Algorithms 2020, 13, 230.
Figure 1. Multi-modal image registration using MFRGA for (a) same noise and INU levels of template and reference image and (b) different noise and INU levels of reference image and template (INU and Noise 0%). (c) Segmentation through MFRGA based image registration.
Figure 2. Overall Registration Error: (ac) template and reference image at same level of noise (1%, 3%, 5%, 7%, and 9%) with INU (0%, 20%, and 40%, respectively) and (df) template (noise- and INU-free) and reference image at different levels of noise (1%, 3%, 5%, 7%, and 9%) with INU (0%, 20%, and 40%, respectively).
Figure 3. Segmentation Results: ((ac) DSC of CSF, (df) DSC of GM, (gi) DSC of WM) using template (noise- and INU-free) and reference image at different levels of noise (1%, 3%, 5%, 7%, and 9%) with INU (0%, 20%, and 40%) for first, second, and third column, respectively.
Table 1. Performance of MFRGA vs. GA with same level of noise and INU of template and reference images.
Columns: Stage 1 (X1, X2, X3); Stage 2 (X1′, X2′, X3′); Stage 3 (X1″, X2″, X3″); Stage 4 (X1‴, X2‴, X3‴); Final (X, Y, θ).

| INU | Noise | X1 | X2 | X3 | X1′ | X2′ | X3′ | X1″ | X2″ | X3″ | X1‴ | X2‴ | X3‴ | X | Y | θ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0% | 0% | 5.382 | 7.376 | 3.151 | −0.363 | −0.390 | 0.242 | −0.003 | 0.003 | −0.240 | −0.001 | 0.001 | −0.051 | 5.015 | 6.989 | 3.102 |
| 0% | 1% | 4.964 | 6.799 | 2.502 | 0.147 | 0.191 | −0.001 | −0.003 | 0.034 | 0.225 | −0.117 | −0.035 | −0.001 | 4.991 | 6.989 | 2.724 |
| 20% | 1% | 4.696 | 7.171 | 2.251 | 0.269 | −0.117 | 0.490 | −0.002 | 0.061 | 0.003 | 0.000 | −0.048 | 0.098 | 4.963 | 7.067 | 2.842 |
| 40% | 1% | 4.854 | 6.787 | 3.414 | 0.226 | 0.212 | 0.198 | 0.000 | 0.014 | 0.087 | 0.003 | −0.008 | −0.045 | 5.083 | 7.005 | 3.655 |
| 0% | 3% | 5.331 | 6.954 | 2.937 | −0.322 | 0.066 | 0.179 | −0.001 | −0.026 | −0.169 | 0.000 | −0.010 | −0.004 | 5.009 | 6.985 | 2.942 |
| 20% | 3% | 5.284 | 6.541 | 3.351 | −0.279 | 0.477 | 0.018 | 0.001 | 0.002 | 0.211 | 0.000 | −0.001 | 0.016 | 5.006 | 7.018 | 3.596 |
| 40% | 3% | 4.831 | 6.688 | 3.234 | 0.214 | 0.272 | −0.094 | −0.001 | 0.023 | 0.041 | −0.012 | 0.004 | −0.058 | 5.031 | 6.987 | 3.123 |
| 0% | 5% | 4.137 | 7.813 | 5.170 | 0.462 | −0.474 | −0.457 | 0.249 | −0.249 | 0.212 | 0.125 | −0.125 | 0.032 | 4.973 | 6.965 | 4.958 |
| 20% | 5% | 4.580 | 7.262 | 3.101 | 0.387 | −0.156 | −0.200 | 0.246 | −0.251 | −0.033 | 0.120 | −0.124 | 0.084 | 5.333 | 6.731 | 2.953 |
| 40% | 5% | 4.753 | 7.409 | 3.090 | 0.217 | −0.365 | −0.034 | 0.000 | −0.001 | −0.255 | 0.002 | −0.002 | 0.033 | 4.972 | 7.041 | 2.835 |
| 0% | 7% | 5.242 | 6.875 | 1.650 | −0.258 | 0.299 | 0.491 | −0.072 | −0.127 | −0.010 | 0.034 | 0.063 | 0.018 | 4.946 | 7.110 | 2.149 |
| 20% | 7% | 5.442 | 7.047 | 3.030 | −0.358 | −0.081 | −0.008 | −0.033 | −0.012 | 0.229 | −0.005 | −0.003 | 0.111 | 5.046 | 6.951 | 3.362 |
| 40% | 7% | 5.244 | 6.822 | 3.113 | −0.240 | 0.244 | 0.155 | −0.004 | −0.033 | −0.080 | −0.001 | −0.004 | 0.067 | 4.998 | 7.028 | 3.255 |
| 0% | 9% | 5.116 | 7.006 | 2.519 | −0.028 | −0.098 | 0.089 | −0.103 | 0.129 | 0.219 | 0.040 | −0.069 | 0.059 | 5.024 | 6.968 | 2.887 |
| 20% | 9% | 5.315 | 7.288 | 3.025 | −0.252 | −0.283 | 0.016 | −0.014 | 0.000 | −0.226 | 0.000 | −0.001 | 0.089 | 5.049 | 7.004 | 2.903 |
| 40% | 9% | 5.436 | 6.740 | 2.834 | −0.370 | 0.193 | −0.207 | −0.013 | 0.024 | −0.203 | −0.005 | 0.003 | 0.062 | 5.048 | 6.961 | 2.486 |
Table 2. Image Registration Error of MFRGA vs. GA with same level of noise and INU of the template and reference images.
Registration error (%) using single-stage GA:

| INU | Noise | X (%) | Y (%) | θ (%) |
|---|---|---|---|---|
| 0% | 0% | 7.640 | 5.371 | 5.033 |
| 0% | 1% | 0.713 | 2.875 | 16.617 |
| 20% | 1% | 6.090 | 2.436 | 24.963 |
| 40% | 1% | 2.922 | 3.042 | 13.786 |
| 0% | 3% | 6.620 | 0.651 | 2.104 |
| 20% | 3% | 5.689 | 6.553 | 11.712 |
| 40% | 3% | 3.386 | 4.460 | 7.793 |
| 0% | 5% | 17.268 | 11.616 | 72.341 |
| 20% | 5% | 8.390 | 3.739 | 3.380 |
| 40% | 5% | 4.945 | 5.840 | 2.997 |
| 0% | 7% | 4.844 | 1.781 | 44.988 |
| 20% | 7% | 8.838 | 0.664 | 0.997 |
| 40% | 7% | 4.883 | 2.548 | 3.765 |
| 0% | 9% | 2.317 | 0.093 | 16.035 |
| 20% | 9% | 6.295 | 4.107 | 0.824 |
| 40% | 9% | 8.715 | 3.712 | 5.531 |
Table 3. Image Registration Error of MFRGA vs. GA with template T1 (INU and Noise 0%) and reference T2 with different level of noise and INU.
Registration error (%) using single-stage GA:

| INU | Noise | X (%) | Y (%) | θ (%) |
|---|---|---|---|---|
| 0% | 1% | 9.839 | 3.441 | 12.793 |
| 20% | 1% | 4.128 | 2.261 | 8.854 |
| 40% | 1% | 27.382 | 2.373 | 7.534 |
| 0% | 3% | 6.738 | 0.394 | 8.362 |
| 20% | 3% | 3.109 | 1.943 | 1.200 |
| 40% | 3% | 6.090 | 2.603 | 13.394 |
| 0% | 5% | 1.974 | 19.216 | 1.103 |
| 20% | 5% | 6.325 | 6.807 | 7.099 |
| 40% | 5% | 0.177 | 6.422 | 15.544 |
| 0% | 7% | 6.563 | 1.592 | 3.771 |
| 20% | 7% | 6.020 | 6.207 | 10.970 |
| 40% | 7% | 5.289 | 6.224 | 9.783 |
| 0% | 9% | 5.618 | 0.883 | 11.319 |
| 20% | 9% | 1.193 | 3.101 | 11.380 |
| 40% | 9% | 4.195 | 6.234 | 5.428 |
Table 4. Performance of MFRGA vs. GA with template T1 (INU and Noise 0%) and reference T2 with different levels of noise and INU.
Columns: Stage 1 (X1, X2, X3); Stage 2 (X1′, X2′, X3′); Stage 3 (X1″, X2″, X3″); Stage 4 (X1‴, X2‴, X3‴); Final (X, Y, θ).

| INU | Noise | X1 | X2 | X3 | X1′ | X2′ | X3′ | X1″ | X2″ | X3″ | X1‴ | X2‴ | X3‴ | X | Y | θ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0% | 1% | 5.492 | 6.759 | 3.384 | −0.456 | 0.330 | −0.082 | −0.004 | −0.056 | 0.124 | −0.003 | −0.005 | −0.001 | 5.029 | 7.027 | 3.424 |
| 20% | 1% | 5.206 | 7.158 | 3.266 | −0.155 | −0.220 | 0.032 | −0.023 | 0.077 | −0.124 | −0.004 | 0.000 | −0.011 | 5.025 | 7.015 | 3.162 |
| 40% | 1% | 6.369 | 7.166 | 2.774 | −0.491 | −0.188 | −0.021 | −0.247 | 0.023 | 0.196 | −0.124 | 0.001 | 0.089 | 5.507 | 7.002 | 3.039 |
| 0% | 3% | 5.337 | 6.972 | 3.251 | −0.268 | 0.061 | 0.246 | −0.045 | −0.080 | −0.249 | 0.001 | 0.105 | −0.018 | 5.025 | 7.058 | 3.230 |
| 20% | 3% | 5.155 | 6.864 | 2.964 | −0.079 | 0.111 | 0.021 | −0.055 | 0.000 | 0.234 | −0.001 | −0.001 | 0.015 | 5.020 | 6.974 | 3.234 |
| 40% | 3% | 4.695 | 6.818 | 3.402 | 0.267 | 0.233 | −0.056 | 0.040 | −0.047 | 0.130 | 0.003 | 0.001 | −0.065 | 5.005 | 7.004 | 3.411 |
| 0% | 5% | 5.099 | 5.655 | 2.967 | −0.194 | 0.497 | −0.254 | 0.081 | 0.247 | 0.243 | −0.033 | 0.123 | −0.034 | 4.952 | 6.522 | 2.922 |
| 20% | 5% | 4.684 | 7.477 | 2.787 | 0.299 | −0.537 | 0.333 | 0.002 | 0.009 | −0.023 | 0.000 | 0.004 | −0.096 | 4.985 | 6.953 | 3.001 |
| 40% | 5% | 4.991 | 6.550 | 2.534 | 0.090 | 0.408 | 0.254 | −0.100 | 0.033 | −0.138 | 0.045 | 0.000 | −0.111 | 5.027 | 6.991 | 2.539 |
| 0% | 7% | 4.672 | 7.111 | 2.887 | 0.266 | −0.129 | 0.128 | 0.047 | 0.000 | 0.002 | 0.002 | −0.002 | −0.038 | 4.987 | 6.981 | 2.980 |
| 20% | 7% | 5.301 | 7.434 | 3.329 | −0.345 | −0.389 | −0.002 | 0.045 | 0.004 | 0.050 | 0.013 | −0.003 | 0.065 | 5.014 | 7.046 | 3.443 |
| 40% | 7% | 4.736 | 7.436 | 3.293 | 0.263 | −0.405 | 0.040 | 0.004 | 0.007 | −0.207 | 0.000 | −0.001 | −0.079 | 5.003 | 7.037 | 3.047 |
| 0% | 9% | 4.719 | 7.062 | 2.660 | 0.299 | −0.143 | 0.350 | 0.001 | 0.091 | 0.038 | 0.002 | −0.019 | −0.052 | 5.022 | 6.991 | 2.996 |
| 20% | 9% | 5.060 | 6.783 | 3.341 | 0.050 | 0.227 | 0.034 | 0.004 | −0.012 | 0.019 | 0.001 | 0.008 | −0.110 | 5.114 | 7.005 | 3.284 |
| 40% | 9% | 5.210 | 6.564 | 2.837 | −0.236 | 0.359 | −0.194 | −0.002 | 0.021 | 0.083 | 0.008 | 0.000 | 0.083 | 4.980 | 6.942 | 2.809 |
Table 5. Normal brain structure segmentation via registration techniques with template (INU and Noise 0%) and reference with noise (1%, 3%, 5%, 7%, and 9%) and INU (0%, 20%, and 40%).
Table 5. Normal brain structure segmentation via registration techniques with template (INU and Noise 0%) and reference with noise (1%, 3%, 5%, 7%, and 9%) and INU (0%, 20%, and 40%).
DSCWMDSCGMDSCCSFDSCWMDSCGMDSCCSFDSCWMDSCGMDSCCSFDSCWMDSCGMDSCCSFDSCWMDSCGMDSCCSFNoiseINU
0.910.770.690.910.780.690.880.80.710.9170.8040.7090.680.4910.3971%0%
0.920.80.70.910.790.70.880.80.710.9130.7920.6980.6930.5090.3951%20%
0.920.810.710.870.670.570.880.80.710.8840.7140.6330.6940.510.3921%40%
0.910.790.70.910.790.70.640.410.250.9190.8070.7130.6920.5090.3943%0%
0.910.790.70.920.810.710.880.80.710.9070.7730.6820.6930.5090.3943%20%
0.910.780.690.910.770.680.880.80.710.9170.8020.7090.6940.5110.3933%40%
0.920.80.710.870.690.610.640.410.250.9190.8070.7130.6870.5040.3915%0%
0.920.810.710.910.790.70.880.810.710.9090.7850.6920.6870.5040.395%20%
0.910.770.670.90.770.690.640.410.250.9190.8060.710.6880.5080.3975%40%
0.920.810.710.920.80.70.640.410.250.9190.8070.7120.6860.5010.397%0%
0.910.770.690.910.780.690.640.410.250.9140.7920.6970.6870.5030.3937%20%
0.920.80.710.910.780.690.640.410.250.9140.7970.7070.6870.5040.3927%40%
0.920.810.710.910.780.690.640.410.250.9150.7980.7060.6810.4910.3939%0%
0.910.790.70.910.780.70.640.410.250.8950.7570.6840.680.4910.3949%20%
0.910.80.70.920.80.70.880.80.710.9190.8070.7110.6810.4910.3919%40%
Table 6. Statistical Analysis of Results.
| Result | Model | Mean ± Standard Deviation | Min | Max |
|---|---|---|---|---|
| Overall Registration Error | GA * | 1.008 ± 0.820 | 0.440 | 3.847 |
|  | Proposed (MFRGA) * | 0.492 ± 0.487 | 0.083 | 2.020 |
|  | GA ** | 0.898 ± 0.369 | 0.327 | 1.761 |
|  | Proposed (MFRGA) ** | 0.317 ± 0.195 | 0.035 | 0.604 |
| Segmentation Accuracy (Dice CSF) | Nesterov ** | 0.393 ± 0.002 | 0.390 | 0.397 |
|  | SA ** | 0.698 ± 0.021 | 0.633 | 0.713 |
|  | LBFGS ** | 0.461 ± 0.239 | 0.245 | 0.710 |
|  | GA ** | 0.681 ± 0.038 | 0.572 | 0.712 |
|  | MFRGA ** | 0.701 ± 0.012 | 0.674 | 0.713 |
| Segmentation Accuracy (Dice GM) | Nesterov ** | 0.502 ± 0.008 | 0.491 | 0.511 |
|  | SA ** | 0.790 ± 0.025 | 0.714 | 0.807 |
|  | LBFGS ** | 0.592 ± 0.205 | 0.406 | 0.805 |
|  | GA ** | 0.772 ± 0.039 | 0.672 | 0.805 |
|  | MFRGA ** | 0.792 ± 0.013 | 0.773 | 0.807 |
| Segmentation Accuracy (Dice WM) | Nesterov ** | 0.687 ± 0.005 | 0.680 | 0.694 |
|  | SA ** | 0.912 ± 0.010 | 0.884 | 0.919 |
|  | LBFGS ** | 0.753 ± 0.126 | 0.639 | 0.884 |
|  | GA ** | 0.905 ± 0.016 | 0.868 | 0.918 |
|  | MFRGA ** | 0.913 ± 0.005 | 0.905 | 0.919 |
* Image registration with template and reference image at the same level of INU (0–40%) and noise (0–9%). ** Image registration with template (0% noise and 0% INU) and reference (1–9% noise and 0–40% INU).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ashfaq, M.; Minallah, N.; Frnda, J.; Behan, L. Multi-Modal Rigid Image Registration and Segmentation Using Multi-Stage Forward Path Regenerative Genetic Algorithm. Symmetry 2022, 14, 1506. https://doi.org/10.3390/sym14081506
