Multimodal Brain Image Fusion using Graph Intelligence Method

Image fusion plays a vital role in enhancing image quality in medical applications. CT images of the brain show the details of the bone structure, while MRI images of the brain show the details of the soft tissue. The objective of this research is to fuse CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) images of normal and tumour-affected brains and to evaluate the Structural Similarity (SSIM) of the fused image. Axial slices of normal and tumour-affected brains are taken for image fusion. In total, 24 brain images have been taken, of which 6 pairs are normal brain images and the other 6 pairs are tumour-affected brain images. The techniques used are the graph-cut method for segmentation, the maximum method for fusion and a swarm intelligence method for optimisation. The proposed fusion method increases SSIM compared with conventional fusion methods. The tumour size is also extracted from the fused image, which helps doctors analyse whether any tumour residue remains in a patient after radiotherapy or surgery. The method also minimises the number of pixels while increasing the information content of a single fused image, aiding the physician in analysing complementary details in one image.


Image fusion
Image fusion is the process of collecting the necessary data from several images and combining it into a single fused image. The fused image is more informative than any of the input images and contains all the essential data. The aim of image fusion is not only to reduce the amount of data but also to construct images that are more applicable and suitable for machine and human perception.
Basically, image fusion is defined as the process of combining particular information from two or more images into a single fused image. In remote sensing applications, increasing data availability has motivated numerous image fusion algorithms. Many tasks require more spatial and spectral information than any single instrument can convincingly provide. Image fusion permits the blending of assorted data sources, so that the resultant image carries the corresponding spectral and spatial characteristics. However, standard image fusion techniques can distort the spectral data while combining.

Medical Image Fusion
Image fusion is a widely used tool for enhancing the quality of images in medical applications, and medical image fusion has become a common term in the field of medical diagnostics and treatment. It is used where several registered images of a patient each make the disease difficult to diagnose on their own; fusing the registered images yields a single, more informative image. Fusion may combine images of the same modality or images from different modalities such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT). In the field of oncology in particular, these fused images serve different purposes. For instance, CT images are often used to assess differences in tissue density, whereas MRI images are used in the diagnosis of brain tumours.
Radiologists must collect a great deal of data from various image formats for the proper diagnosis of patients. Fused output images have become especially beneficial in cancer treatment, where radiation oncologists take full advantage of Intensity Modulated Radiation Therapy (IMRT). Converting diagnostic images into fused form yields high-accuracy IMRT target volumes. For medical image fusion, reference metrics are more suitable than non-reference metrics for assessing the fused output image.

Need For Image Fusion
The need for image fusion is to obtain a single fused image that is anatomically enhanced and spectrally richer than any single raw scan. Fusion can combine different medical images, and in the case of disease the fused image helps in extracting the size of the tumour. Both the anatomical structures and their changes are preserved with effectively reduced distortion.
Fusion decreases the volume of data, retains significant features, removes artifacts and provides an output image that is more suitable for interpretation. It can improve the reliability and capability of the images by complementing the information in the inputs, and it also reduces the data storage and transmission time required. In short, image fusion extracts the useful information from the source images and combines it into a single fused image.

Types of Image fusion
Image fusion can be broadly categorised as follows:

Multi-view Image Fusion
This fusion is performed when the images to be fused are from the same modality and are taken at the same time under different conditions. The main aim is to gather, in the output fused image, all the complementary information captured under those different conditions.

Multi-temporal Image Fusion
This fusion is performed when the images to be fused are from the same modality but are taken at different times. The process works by subtracting the input images to detect the changes that occurred between acquisitions.
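The subtraction step above can be sketched with NumPy; the specific threshold value and array sizes here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def detect_changes(img_t1, img_t2, threshold=30):
    """Multi-temporal fusion sketch: highlight the pixels that changed
    between two co-registered acquisitions of the same modality."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(img_t1.astype(np.int32) - img_t2.astype(np.int32))
    return (diff > threshold).astype(np.uint8) * 255  # binary change map

a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1, 1] = 200           # simulated change between the two time points
mask = detect_changes(a, b)
```

Pixels whose intensity difference exceeds the threshold are marked 255 in the change map, all others 0.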

Multi-focus Image Fusion
The images are obtained from different modalities (such as CT and MRI), different resolutions, infrared and visible bands, and distinctive sizes, so pre-processing is the initial phase of the fusion pipeline. A registration process is used to align the images spatially. Fusion then starts by decomposing each image volume into frequency bands; the images are separated into frequency sub-bands using the dual-tree discrete wavelet transform. Each sub-band is analysed by the fusion rules to determine which coefficients should be combined and which should be discarded. The inverse transform is then applied to recover the fused image, which is far more informative than either of the single images.
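The decompose-fuse-invert pipeline above can be sketched with a plain one-level Haar transform in place of the dual-tree DWT the text names (a simplifying assumption to keep the example self-contained); detail coefficients are fused by maximum magnitude and the approximation by averaging:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (image sides must be even):
    returns approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2].astype(float); b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float); d = img[1::2, 1::2].astype(float)
    return ((a + b + c + d) / 4, (a + b - c - d) / 4,
            (a - b + c - d) / 4, (a - b - c + d) / 4)

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d: exact reconstruction of the original image."""
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def fuse_multifocus(x, y):
    """Average the approximation bands; keep the larger-magnitude
    detail coefficient from either image (the in-focus one)."""
    bx, by = haar2d(x), haar2d(y)
    fused = [(bx[0] + by[0]) / 2]
    for dx, dy in zip(bx[1:], by[1:]):
        fused.append(np.where(np.abs(dx) >= np.abs(dy), dx, dy))
    return ihaar2d(*fused)
```

Fusing an image with itself reconstructs it exactly, which is a quick sanity check that the transform pair is consistent.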

Multi-modal Image Fusion
Various modalities, such as CT and MRI images, can be fused. The CT image shows dense structures such as bones and hard tissue with little distortion but cannot capture physiological changes, while the MRI image gives information on normal and pathological soft tissue but not on bone. The fused image can also be used for classification or detection.
Table 1 (fragment): at pixel level, the final fused image can have a lower spatial resolution; decision-level fusion combines the results of multiple algorithms to yield a final fused decision, at the cost of increased method complexity.

Pixel Level fusion
Pixel-level fusion combines input data from multiple sources into a single-resolution image that is more informative than any of the source images. It is useful in remote sensing, medical applications and night-vision applications (Figure 7). For example, images from different modalities can be fused to produce a more accurate result in medical imaging. Some prerequisites imposed on the output fused image are: 1. the fusion process should preserve all the salient data in the input images; 2. the process should not introduce any inconsistencies or artifacts; 3. the process should be shift-invariant.
Image fusion using the maximum method, the minimum method and the averaging method all come under pixel-level fusion. In the maximum method (Figure 2), the corresponding pixels of the two input images are compared and the higher pixel value is written to the output.

Figure 3: (a) Input MRI (b) Input CT image
Image Fusion Using Minimum Method
This method works like the maximum method, but it keeps the lowest pixel value at each position and ignores the others (Figure 3 and Figure 4). This type of fusion suits images in which the darkest regions carry the information.

Image Fusion Using Averaging Method
The images can also be fused using the averaging method. Here the resultant image holds the average pixel values of both input images (Figure 5): the corresponding pixels of each image are added and the sum is divided by the number of images used (Figure 6). Repeating this for every pixel yields the output fused image.
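The three pixel-level rules described above (maximum, minimum, averaging) can be sketched in a few lines of NumPy; the sample arrays are illustrative, not from the paper's dataset:

```python
import numpy as np

def fuse_pixels(img1, img2, rule="max"):
    """Pixel-level fusion of two co-registered, same-size images
    using the maximum, minimum or averaging rule."""
    a = img1.astype(float)
    b = img2.astype(float)
    if rule == "max":       # keep the brighter pixel from either image
        return np.maximum(a, b)
    if rule == "min":       # keep the darker pixel
        return np.minimum(a, b)
    if rule == "average":   # mean of corresponding pixels
        return (a + b) / 2.0
    raise ValueError(f"unknown rule: {rule}")

mri = np.array([[10, 200], [50, 120]])
ct  = np.array([[90,  40], [50, 180]])
```

For example, `fuse_pixels(mri, ct, "max")` keeps 90 at position (0, 0) because the CT pixel is brighter there.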

Feature Level Fusion
Feature-level fusion extracts multiple features from an image, such as corners, edges, textures and other parameters. It collects the different features from multiple images and combines them into a single image carrying one or more features. This is helpful as input data in image processing, where the image can be segmented and colours can be detected.

This level of fusion comprises four steps:
1. Transform the input images into multiresolution images.
2. Extract the regions of the different coefficients.
3. Compute simple features in all the regions.
4. Combine the regions sharing the same features and apply the inverse transformation.
The comparison of the three types of fusion at different levels is shown in Table 1.

Decision level Fusion
This type of fusion combines the results from multiple techniques to give a final fused decision. If the results are expressed as confidences rather than decisions, the process is called soft fusion; otherwise it is called hard fusion. This level of fusion includes statistical methods, fuzzy-logic-based methods and voting methods.
The advantages and disadvantages of the existing image fusion techniques are shown in Table 2 below

Literature Survey
In (Abdulkareem, 2018), a multimodality image fusion method based on the Discrete Wavelet Transform is developed; the output fused image is anatomically detailed and rich in spectral information, and such images can be used in the clinical diagnosis of patients. In (Li et al., 2017), an advanced data fusion of MRI and MRSI for brain tumour patients is presented that improves the tumour differentiation accuracy over MRSI alone.
In (Bosco et al., 2017), it is shown that fusing the images yields valuable reciprocal data: CT gives better information on denser tissue with less distortion, while MRI offers better information on soft tissue but with more distortion. In (Suthakar et al., 2014), the different types of image fusion and the techniques relevant to the concept are surveyed. In (Miles et al., 2013), CT and MRI spine image fusion based on graph cuts is explored; the algorithm enables doctors to evaluate soft-tissue and bone details in a single picture, removing the mental alignment and correlation otherwise required when both CT and MR images are needed for diagnosis. In (Altaf, 2015), the utility of registering CT and MRI images is examined by analysing the gross tumour volume (GTV) delineated in each data set of the fused image independently; delineating the GTV by combining the two imaging modalities can yield a critical difference. In (Haddadpour et al., 2017), PET and MRI images are fused using the 2-D Hilbert transform, with the main objective of preserving the spectral and spatial features of the input source images. In (Xia et al., 2017), image fusion technology is investigated for showing the position of the electrodes compared with postoperative MRI; the electrode positions were highly correlated between the fused and postoperative MRI, so CT-MRI fusion images could be used to avoid the potential risks of MRI after DBS in patients with Parkinson's disease. In (Eapen et al., 2015), a swarm-intelligence-motivated edge-adaptive weight function is proposed for the energy minimisation of the conventional graph-cut model.
The model is validated both subjectively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets, and the experimental results demonstrate the efficiency and effectiveness of the proposed strategy. In (Bhavana and Krishnappa, 2016), a detailed literature review on image fusion is presented, along with the concepts and materials needed for a clear understanding of the various fusion techniques. Fusing various imaging modalities in the medical field into a single image with more detailed anatomical information and high spectral information is highly desired in clinical diagnosis.

Slices of the Brain
There are three planes along which slices of the brain are taken:
1. The coronal plane, also called the frontal plane, cuts slices of the brain much like slices cut from a loaf of bread.
2. The horizontal plane cuts like slicing a bagel or a hamburger bun; it is also known as the axial plane.
3. The sagittal plane divides the brain into left and right halves, cutting the brain like a potato split down the middle.

Axial Slice of Brain
In the axial plane, there are 24 slices of the Brain. The axial slice has been chosen because it covers all the parts of the brain and shows many tissues in a single slice.

CT Scan
A Computed Tomography (CT) scan, earlier known as a Computerised Axial Tomography (CAT) scan, uses computer-processed combinations of X-ray measurements taken from different angles to produce cross-sectional (tomographic) images. A CT scan shows more detail than a plain X-ray and captures the structural details of the bones, pelvis, blood vessels, brain, lungs and heart.
Three-dimensional volume images are generated from multiple two-dimensional radiographic images using digital geometry processing; medical imaging is the most widespread application of X-ray CT.
A CT scanner sweeps a narrow X-ray beam through 360 degrees around the part of the body being imaged, collecting a number of projections from different angles. CT can detect bone and joint problems; in the case of cancer or heart disease, it can localise the affected region, and it can detect the presence of a tumour, its size and how far it has affected the nearby tissues.
The advantages of CT scanning are short scan time, high resolution, a wide scanning area and high penetration depth. CT is used in tumour simulation, brain diagnostics and treatment, tumour detection, deep brain stimulation and brain tumour surgery.

MRI Scan
Magnetic Resonance Imaging (MRI) is a widely used technique in radiology which gives the details of soft tissues. MRI scanners use strong magnetic fields and radio frequencies rather than the ionising radiation used in X-ray and CT scans.
Because of the hazards of X-rays, MRI has proved to be a better choice than CT in medical imaging. It is used in hospitals and clinics for medical diagnosis and disease staging without exposing the body to radiation risk. MRI scans take longer and are much louder than CT scans, and people with certain medical implants or other non-removable metal inside the body may be unable to undergo MRI safely.
The magnetic field strength of an MRI machine is measured in tesla (T). The majority of clinical MRI is performed at 1.5-3 T, a field up to 50,000 times stronger than the Earth's magnetic field. An MRI system consists of the primary magnet, gradient magnets, radio frequency (RF) coils and the computer system. In clinical and research MRI, hydrogen atoms are most often used to generate a detectable radio-frequency signal that is received by antennas.
Hydrogen atoms are naturally abundant in humans and other biological organisms, particularly in water and fat, so most MRI scans essentially map the location of water and fat in the body. Pulses of radio waves excite the nuclear spin energy transition, and magnetic field gradients localise the signal in space. By varying the parameters of the pulse sequence, different contrasts can be generated between tissues based on the relaxation properties of their hydrogen atoms.

Proposed image fusion technique
The proposed graph intelligence method consists of the following steps (Figure 8): pre-processing, segmentation using graph-cut, fusion using the averaging method, and optimisation using swarm intelligence.

Pre-processing
Median filtering is a nonlinear method used to remove noise from images. It is widely used because it removes noise very effectively while preserving edges, and it is particularly effective against 'salt-and-pepper' noise. The median filter moves through the image pixel by pixel, replacing each value with the median of the neighbouring pixel values. The pattern of neighbours is called the "window", which slides pixel by pixel over the entire image. The median is calculated by first sorting all the pixel values in the window into numerical order and then replacing the pixel under consideration with the middle (median) value.
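The sliding-window procedure just described can be sketched directly in NumPy (a naive loop for clarity; the window size and edge-replication padding are illustrative choices, and library implementations are much faster):

```python
import numpy as np

def median_filter(img, k=3):
    """Slide a k x k window over the image (edges replicated by
    padding) and replace each pixel with its neighbourhood median."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # Median of the k*k sorted window values centred at (i, j).
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # isolated salt-noise spike
                  [10, 10, 10]], dtype=float)
clean = median_filter(noisy)
```

The isolated spike is removed entirely because a single outlier can never be the median of a 9-value window.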

Segmentation using graph-cut
Segmentation is an important part of image analysis. It refers to the process of partitioning an image into multiple segments by assigning a label to every pixel such that pixels with the same label share certain visual characteristics. The goal of segmentation is to simplify and change the representation of an image into something that is more meaningful and easier to analyse. Here, the image is segmented by constructing a graph such that the minimal cut of this graph cuts all the edges connecting the pixels of different objects.

Figure 11: (a) Input MRI image (b) Input CT image

Fusion using averaging method
The segmented images are fused using the averaging method: the corresponding pixel values of the two images are added and divided by the number of images used, and repeating this for every pixel yields the output fused image.

Optimisation using swarm intelligence
The fused output is optimised using a swarm intelligence method. Among the swarm intelligence techniques, Particle Swarm Optimisation (PSO) is the one used here to optimise the output fused image.

Graph Cut Algorithm
The study of graphs is called graph theory (Figure 9). A graph is an abstract representation of a set of objects in which pairs of objects are connected by links; it is a mathematical structure used to model pairwise relations between objects from a certain collection.
Some definitions give a more mathematical representation of graphs. A graph G = (V, E) consists of a set of vertices V and a set of edges E. A directed graph associates a weight with each edge and consists of a set of vertices V and a set of ordered pairs of vertices as edges. A weighted directed graph with two identified nodes is called an s-t graph; the identified nodes are the source s and the sink t. An s-t cut, c(S, T), in a graph G is a set of edges E_cut such that there is no path from the source to the sink when E_cut is removed from G. The cost of the cut is the sum of the edge weights in E_cut. A flow is a function f : E → R+, (u, v) → f(u, v), which fulfils flow conservation and the capacity constraint. The value of the flow is defined by |f| = Σ_{v∈V} f(s, v), where s is the source of the graph; it denotes the amount of flow passing from the source to the sink.
The maximum flow problem is to maximise |f|, that is, to route as much flow as possible from the source to the sink. The minimum cut problem is to minimise c(S, T), that is, to find an s-t cut with minimal cost. The max-flow min-cut theorem states that the maximum value of an s-t flow is equal to the minimum weight of an s-t cut. The objective here is to segment an image by building a graph such that the minimal cut of this graph cuts all the edges connecting the pixels of different objects.
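The max-flow min-cut equivalence above can be illustrated with a minimal Edmonds-Karp implementation (shortest augmenting paths via BFS); the toy graph at the end is an invented example, not one from the paper:

```python
from collections import deque, defaultdict

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly push flow along the shortest augmenting
    path. By the max-flow min-cut theorem, the returned value equals
    the weight of a minimum s-t cut."""
    flow = 0
    residual = defaultdict(lambda: defaultdict(int))
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] += c
    while True:
        # BFS for the shortest path with remaining residual capacity.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:          # no augmenting path left: done
            return flow
        # Recover the path and its bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= aug    # use forward capacity
            residual[v][u] += aug    # add reverse (undo) capacity
        flow += aug

graph = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
```

Here the bottleneck edges a→t (2) and s→b (2) limit the flow to 4, which is also the weight of the minimum cut.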
A group of pixels clustered together on the basis of similarity is known as a LABEL. The graph-cut algorithm takes a minimum cut, that is, the minimum number of edges needed to represent the label, thereby reducing the number of edges required to represent it.

Figure 12: Output Fused Image using (a) Maximum method (b) Minimum method (c) Average method (d) Graph-cut & Swarm Intelligence

Algorithm for graph cut steps
Step 1: Input the image.
Step 2: Separate the foreground and background.
Step 3: Assign weights between adjacent pixels in the foreground based upon their connectivity to the foreground.
Step 4: Weight values are high for pixels in the region of interest.
Step 5: Weight values are low for pixels outside the region of interest.
Step 6: Segment the pixels based on the minimum cut and maximum flow (maximum information provided in the fused image) (Figure 10).
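The steps above can be sketched on a toy 1-D "image" using networkx (assumed available) for the min-cut: bright pixels get heavy edges toward the object terminal, dark pixels toward the background terminal, and neighbour edges encode the smoothness weight; all the numeric weights here are illustrative assumptions:

```python
import networkx as nx

# Toy 1-D "image": two bright object pixels between dark background pixels.
pixels = [10, 200, 210, 20]

G = nx.DiGraph()
for i, v in enumerate(pixels):
    # Terminal weights: bright pixels connect strongly to the object
    # source "s", dark pixels strongly to the background sink "t".
    G.add_edge("s", i, capacity=v)
    G.add_edge(i, "t", capacity=255 - v)
for i in range(len(pixels) - 1):
    # Smoothness term: neighbouring pixels prefer the same label.
    G.add_edge(i, i + 1, capacity=50)
    G.add_edge(i + 1, i, capacity=50)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
object_pixels = sorted(p for p in source_side if p != "s")
```

The minimum cut leaves the two bright pixels on the source (object) side, which is exactly the foreground/background separation the steps describe.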

Swarm intelligence
Swarm intelligence is used to optimise the image segmented by any algorithm; in image processing it focuses on the extraction of features.

Two of the most important algorithms are
1. Ant Colony Optimization (ACO)
2. Particle Swarm Optimization (PSO)

Ant colony optimization
Ant Colony Optimization (ACO) is a class of optimisation algorithms modelled on the actions of an ant colony. Artificial "ants" (simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment; the simulated ants similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee.

Particle swarm optimization
Particle Swarm Optimization (PSO) is a global optimisation algorithm for problems in which the best solution can be represented as a point or surface in an n-dimensional space.
Hypotheses are plotted in this space and seeded with an initial velocity, together with a communication channel between the particles. The particles then move through the solution space and are evaluated according to some fitness criterion after each time step. Over time, particles accelerate towards the particles within their communication grouping that have better fitness values. The main advantage of this approach over other global minimisation strategies such as simulated annealing is that the large number of members making up the particle swarm makes the technique impressively resilient to the problem of local minima.
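The velocity-update dynamics described above can be sketched as a minimal PSO minimising a simple test function; the inertia and acceleration coefficients are common textbook choices, not values from the paper:

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimiser: each particle is pulled toward
    its own best position and the swarm's global best position."""
    random.seed(0)                       # deterministic for this sketch
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:       # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:      # and possibly the global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)  # minimum 0 at the origin
best, best_val = pso(sphere, dim=2)
```

On this convex test function the swarm converges close to the origin within the given iteration budget.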

Input Images
Source: brain tumour CT and MRI images collected from The Whole Brain Atlas, www.med.harvard.edu/aanlib/

Structural Similarity Index
The Structural Similarity (SSIM) index is a quality assessment index based on the computation of three terms: the luminance term, the contrast term and the structural term. The overall index is a multiplicative combination of the three.
If α = β = γ = 1 (the default exponents) and C_3 = C_2/2 (the default choice of C_3), the index simplifies to:

SSIM(x, y) = (2µ_x µ_y + C_1)(2σ_xy + C_2) / ((µ_x² + µ_y² + C_1)(σ_x² + σ_y² + C_2))

Discussion

The 12 sets have been taken for fusion: 6 sets are brain tissue images and 6 sets are brain tumour images (Figure 11). The images were taken from the axial slice of the brain because this plane covers all parts of the brain and shows many tissues in a single slice. The fusion is done in order to obtain complementary information from the fused images (Figure 12), and the pixel size of the tumour can be extracted from the output fused image (Figure 13). Image segmentation is performed using the graph-cut algorithm, and the output is then optimised using swarm intelligence; Particle Swarm Optimization (PSO) is the technique used, and the Structural Similarity Index (SSIM) of the fused brain images is calculated (Tables 3 and 4). The graph-cut results show better performance than the averaging method and the other two pixel-level fusion methods (Figure 14 and Figure 15); the graph-cut method is efficient since it has a high SSIM.
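The simplified SSIM formula can be sketched with global image statistics; note that full implementations (e.g. the original Wang et al. formulation) apply it in a sliding Gaussian window and average the local values, so this single-window version is a simplifying assumption:

```python
import numpy as np

def ssim_global(x, y, L=255):
    """Global SSIM per the simplified formula: luminance, contrast and
    structure terms computed once over the whole image."""
    x = x.astype(float)
    y = y.astype(float)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # common stabilising constants
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

a = np.arange(64, dtype=float).reshape(8, 8)
```

An image compared with itself scores exactly 1, and any distortion (here a uniform brightness shift) pushes the score below 1.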

CONCLUSIONS
Twelve sets of brain images have been taken for fusion: 6 sets of brain tissue images and 6 sets of brain tumour images. From the fused images, complementary details such as bone information and tissue information can be identified, which aids physicians in analysing the tumour details approximately. The proposed graph-cut and swarm intelligence method minimises the pixel count in fusion and provides more information with fewer pixels. Compared with the pixel-level methods, the SSIM and SNR of this method are found to be high, meaning that more information similar to the original image is retained in the fused image. The method can be extended by combining the graph-cut algorithm with more advanced swarm intelligence optimisation techniques.