Multi-Atlas Brain MRI Segmentation with Multiway Cut

Abstract. Characterization of the anatomical structure of the brain and efficient algorithms for automatically analyzing brain MRI have gained increasing interest in recent years. In this paper, we propose an algorithm that automatically segments the anatomical structures of magnetic resonance human brain images. Our method uses the prior knowledge of labels given by experts to statistically investigate the spatial correspondences of brain structures in subject images. We create a multi-atlas by registering the training images to the subject image and then propagating the corresponding labels to the graph of the image. Label fusion then combines these multiple atlas labels into one label at each voxel with intensity-similarity-based weighted voting. Finally, we cluster the graph using a multiway cut to achieve the final 3D segmentation of the subject image. The promising evaluation results of our segmentation method on the MRBrainS13 Test Dataset show the efficiency and robustness of our algorithm.


Introduction
We propose an algorithm that automatically segments the anatomical structures of magnetic resonance human brain images. There is a strong need for a computer-aided system that automatically and accurately delineates the anatomical structures of the brain: automatic segmentation would be a significant improvement, as it would decrease the need for manual interpretation, the subjectivity of such interpretation, and the dependency on experienced physicians at the diagnosis, pre-surgery, and post-surgery evaluation stages.
Existing brain MRI segmentation algorithms range from methods based on low-level intensity information and appearance models to methods that exploit higher-order morphological information about the structures of the brain. Various learning algorithms using SVMs and artificial neural networks [1,2], appearance- and shape-based algorithms [3], and labeling with registration-based approaches and deformation maps [4] have been proposed.
Characterization of the anatomical structure of the brain via multi-atlas segmentation has gained interest in recent years and has proved efficient for the medical imaging community [5][6][7][8][9]. Multi-atlas segmentation and label fusion methods allow us to make use of expert knowledge while evaluating a new subject. By propagating this prior knowledge, we are able to statistically investigate the spatial and morphometric correspondences of brain structures. We first align the training images and the labeled templates to the target image with registration to create atlases. After the registration step, for each atlas image, a fully connected graph on the subject image and its correspondence with the atlas image is constructed. We construct the graph on the voxels of the subject image and define the correspondences with the atlas by adding edges between the nodes and four terminal nodes, each representing a label. We define the edge weights based on the intensity similarity between the subject voxels and the corresponding atlas voxels. Label fusion then propagates the labels of multiple atlases and combines them into one label at each voxel. The common approaches for multi-atlas label fusion mostly rely on majority voting; we instead use intensity-similarity-based weighted voting, which lets us exploit the intensity similarity of corresponding voxels as well as the spatial correspondence. Finally, we cluster the graph using a multiway cut to achieve the final 3D segmentation of the subject image. Classical graph cut algorithms are commonly used in the segmentation of medical images; in our work we show that a multiway cut with a greedy splitting approximation algorithm can also be used to successfully segment brain MRI into multiple labels.

Preprocessing
In order to increase the performance of our method, we first denoise the images using SUSAN, the 3D noise reduction method of the publicly available MRI analysis software FSL [10,11]. SUSAN uses nonlinear filtering to reduce noise by averaging only voxels with similar intensities.
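The idea behind this kind of nonlinear filtering can be illustrated with a small sketch: each voxel is replaced by a weighted average of its neighbors, where the weight decays with intensity difference so that dissimilar voxels contribute little. This is only a toy illustration of the principle, not FSL's SUSAN implementation; the neighborhood radius and brightness threshold `t` are assumed parameters.

```python
import numpy as np

def susan_like_smooth(volume, radius=1, t=10.0):
    """Toy SUSAN-style denoising: average each voxel with its neighbours,
    weighting each neighbour by its intensity similarity to the center.
    Uses periodic boundaries (np.roll) for simplicity."""
    vol = volume.astype(float)
    acc = np.zeros_like(vol)
    weights = np.zeros_like(vol)
    offsets = range(-radius, radius + 1)
    for dz in offsets:
        for dy in offsets:
            for dx in offsets:
                if dz == dy == dx == 0:
                    continue
                shifted = np.roll(vol, (dz, dy, dx), axis=(0, 1, 2))
                w = np.exp(-((shifted - vol) / t) ** 2)  # similarity weight
                acc += w * shifted
                weights += w
    return acc / np.maximum(weights, 1e-12)
```

A constant region is left unchanged by this filter, while isolated noisy voxels are pulled toward the intensities of their similar neighbors, which is the edge-preserving behavior the preprocessing step relies on.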

Image Registration
Automatically performing segmentation of brain structures in MRI scans calls for a common reference for anatomical regions [12]. In order to infer spatial relationships in the data and to classify the data into meaningful structures, we first need to establish spatial correspondences across brain scans. These anatomical correspondences are determined automatically by registering brain scan images to one another or to a template [12].
To find the spatial correspondence between the images, we perform multiple pair-wise non-rigid image registrations from the training image atlases to the subject image.With this approach, the training images and the label atlases are aligned to the subject image using geometric transformations.For this step we use NiftyReg software that is publicly available [13].
NiftyReg provides an efficient registration library. Each training MRI scan is first registered with an affine transformation and then non-rigidly aligned to the subject image. NiftyReg uses the algorithm by Ourselin et al. [14,15] for the affine registration and the work of Modat et al. [16] for the non-rigid registration. The affine registration algorithm by Ourselin et al. searches for correspondences between the training images and the subject image using a block matching approach. A rigid or affine transformation is then computed by minimizing the distance between matched points using Least Trimmed Squares (LTS). This process is repeated iteratively until convergence to reach the best transformation.
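The LTS idea can be sketched in a few lines: fit a transform by least squares, keep only the pairs with the smallest residuals, and refit, so that mismatched blocks (outliers) are trimmed away. The sketch below, with assumed parameters `trim` and `n_iter`, illustrates this loop for a 3D affine fit from matched point pairs; it is not NiftyReg's implementation.

```python
import numpy as np

def fit_affine_lts(src, dst, trim=0.5, n_iter=20):
    """Estimate a 3D affine transform dst ~= A @ src + b from matched point
    pairs with a simple Least Trimmed Squares loop: fit on the kept pairs,
    keep the `trim` fraction with the smallest residuals, and refit."""
    n = len(src)
    keep = np.arange(n)
    X = np.hstack([src, np.ones((n, 1))])          # homogeneous coordinates
    for _ in range(n_iter):
        # least-squares affine fit on the currently kept pairs
        M, *_ = np.linalg.lstsq(X[keep], dst[keep], rcond=None)
        res = np.linalg.norm(X @ M - dst, axis=1)  # residual per pair
        new_keep = np.argsort(res)[: max(4, int(trim * n))]
        if set(new_keep) == set(keep):
            break
        keep = new_keep
    return M[:3].T, M[3]                           # A (3x3) and translation b
```

With clean correspondences this reduces to an ordinary least-squares fit; when some block matches are wrong, the trimming step discards them before the final fit, which is what makes the block-matching registration robust.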
Rueckert et al. [17] show that non-rigid registration performs better than rigid or affine registration alone, as it is more capable of recovering deformations. In order to use the expert-given labels of the brain MRI scans to define the anatomical structures, it is important that the images not only are affinely transformed but also correspond in position and size. We therefore follow the affine registration with NiftyReg's non-rigid registration, which is based on the Free-Form Deformation presented by Rueckert et al. [17].

Multi-Atlas Labeling and Label Fusion
Label fusion methods allow us to make use of the expert knowledge while evaluating a new subject.After the registration step, we need to fuse the labels propagated by different atlases for the same target image.We construct graphs representing each correspondence between a training image and the subject image.
Before constructing the graphs, we further process the registered images by running a brain extraction algorithm. We use FSL's BET (Brain Extraction Tool) [18,19] to remove non-brain tissue from the images. Using only the relevant brain tissue lets us work on smaller graphs, leading to better efficiency. In each graph, the voxels of the subject image are represented as nodes, and the edge weights are based on intensity similarity. We match the voxels of each training image with the spatially corresponding voxels of the subject image and propagate the labels of the training voxels to the corresponding subject voxels. We do this by adding four terminal nodes to the graph, each representing a label, and linking these terminal nodes to the subject image nodes with edges. For each training image we construct a graph similar to Figure 2.
We construct the graph of subject image I with the labels from the atlas I_k, which is the training image registered to the subject image.
Let p and q be two voxels in I. p_k is the voxel corresponding to p in the registered image of I_k, and f_{p_k} is the label of p_k.
After adding the four terminal nodes to the graph, each representing a label, we set the label of p_k to t_i, where i ∈ {1, 2, 3, 4}. We then define the distances based on intensity similarity: first, the distance between the subject image nodes and the terminal nodes, under the assumption that the registration has at least 0.5 precision for each structure; second, the distance between two nodes p and q of the subject image. From these distances we define the edge cost between any two nodes in the graph. Finally, we fuse the labels and combine them into one label at each voxel with intensity-similarity-based weighted voting (Figure 3).
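The fusion step can be sketched as follows: each registered atlas casts a vote for its propagated label at every voxel, weighted by how similar the atlas intensity is to the subject intensity there, and the label with the highest total weighted vote wins. The Gaussian weighting and the `sigma` parameter below are illustrative assumptions, not the paper's exact cost definition.

```python
import numpy as np

def fuse_labels(subject, atlas_imgs, atlas_labels, sigma=10.0, n_labels=4):
    """Intensity-similarity weighted voting over registered atlases.
    Each atlas votes for its propagated label at every voxel, with a
    weight that decays as the atlas intensity differs from the subject's."""
    votes = np.zeros((n_labels,) + subject.shape)
    for img, lab in zip(atlas_imgs, atlas_labels):
        w = np.exp(-((img - subject) ** 2) / (2 * sigma ** 2))  # per-voxel weight
        for label in range(n_labels):
            votes[label] += w * (lab == label)
    return votes.argmax(axis=0)  # fused label per voxel
```

With equal weights this reduces to plain majority voting; the intensity term is what lets a well-matched atlas outvote several poorly-matched ones at a given voxel.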

Segmentation with Multiway Cut
The concept of graph cut algorithms proposed by Boykov and Jolly [20] has been widely used to solve computer vision problems. A graph cut is the process of partitioning a graph into disjoint sets. To apply graph cut algorithms in computer vision, images are modeled as graphs whose nodes represent the pixels or units and whose edges are assigned weights. A cut C in a graph G = (V, E) is a set of edges whose removal disconnects G so that no path exists between the resulting disjoint sets. An energy, or cost, is associated with each cut: the total weight of all edges removed to form it. In a graph whose edges represent the similarity between nodes, we wish to find cuts of minimum energy; this ensures that the graph is partitioned into disjoint sets separated along the edges where the similarity is lowest. The work in [21] is one of those that adapted the graph cut algorithm to a medical imaging problem.
The multiway cut [24] is a generalization of the graph cut approach that partitions the graph into multiple disjoint sets. Given a set S = {s_1, s_2, s_3, ..., s_k} of k terminal vertices, we wish to find a cut of minimum cost that separates each terminal s_i from the others in the graph. Multiway cuts are used in segmentation problems both in computer vision and in medical imaging [23,22]. In the binary case with only two terminals (k = 2), the problem can be solved in polynomial time with the Ford-Fulkerson method, which computes the maximum flow, and hence the minimum cut, in a graph [25]. However, the multiway cut problem for k ≥ 3 is known to be NP-hard [26].
Since we wish to segment the gray matter, white matter, cerebrospinal fluid, and background of the brain MRI scans of the MRBrainS13 dataset, we have four labels (k = 4), and thus four disjoint sets to be segmented. As the multiway cut problem is NP-hard, we use the algorithm proposed in [27], which has an approximation ratio of 2 − 2/k. With this algorithm we first compute an isolating cut for each vertex in the set S, that is, for each terminal node representing a label, and then perform a greedy search for the isolating cut C_i of minimum cost. We remove the edges of this cut from the graph together with the vertex s_i. On the newly constructed graph, we repeat the same procedure to recompute the isolating cuts for the remaining vertices in S.
Given the set S = {s_1, s_2, s_3, ..., s_k} of k terminal vertices and the graph G, the steps of our greedy algorithm are as follows:
1. For each vertex s_i in S, compute the isolating cut that separates s_i from all other vertices in S.
2. Find the minimum of the isolating cuts computed in Step 1 and call it C_j.
3. Remove the edges of C_j from the graph to obtain a new graph.
4. Add the edges of C_j to C to form the final cut.
5. Remove the corresponding vertex s_j from S, the set of terminal vertices.
6. Repeat the steps above k − 1 times on the newly constructed graph with the remaining vertices.
With our algorithm we partition the graph at each step and repeat the same procedure on the newly constructed graph with the remaining vertices. This greedy approach might be time-consuming for many labels, as at each step we compute the isolating cuts for every remaining vertex; however, for the four labels of gray matter, white matter, cerebrospinal fluid, and background, the algorithm performs efficiently. After repeating this process k − 1 times, we reach the final partition of the graph into k disjoint sets, as each vertex in S is disconnected from all the others. The resulting set C is the cut we are looking for.
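The greedy steps above can be sketched on a small weighted graph. Each isolating cut is computed as a standard s-t minimum cut (here via Edmonds-Karp max-flow) after merging all other terminals into a single super-sink; the cheapest isolating cut is removed and the procedure repeats. This is an illustrative sketch of the described heuristic, not the paper's implementation; the `"_SINK_"` sentinel is an assumed node name that must not collide with real node ids.

```python
from collections import defaultdict, deque

def min_cut(cap, source, sink):
    """Edmonds-Karp max-flow on capacities cap[u][v]; returns the cut value
    and the set of nodes on the source side of the minimum cut."""
    flow = defaultdict(int)

    def bfs_parents():
        parent, q = {source: None}, deque([source])
        while q:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == sink:
                        return parent
                    q.append(v)
        return None

    while (parent := bfs_parents()) is not None:
        path, v = [], sink
        while parent[v] is not None:          # reconstruct augmenting path
            path.append((parent[v], v)); v = parent[v]
        aug = min(cap[u][w] - flow[(u, w)] for u, w in path)
        for u, w in path:
            flow[(u, w)] += aug; flow[(w, u)] -= aug
    side, q = {source}, deque([source])       # residual reachability = source side
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in side and cap[u][v] - flow[(u, v)] > 0:
                side.add(v); q.append(v)
    value = sum(cap[u][v] for u in side for v in cap[u] if v not in side)
    return value, side

def greedy_multiway_cut(edges, terminals):
    """Greedy multiway cut: k-1 times, find the terminal with the cheapest
    isolating cut (others merged into a super-sink) and remove its edges."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, w in edges:
        cap[u][v] += w; cap[v][u] += w
    terms, cut_edges = list(terminals), []
    for _ in range(len(terms) - 1):
        best = None
        for t in terms:
            others, sink = set(terms) - {t}, "_SINK_"
            merged = defaultdict(lambda: defaultdict(int))
            for u in list(cap):
                for v, w in cap[u].items():
                    mu = sink if u in others else u
                    mv = sink if v in others else v
                    if w > 0 and mu != mv:
                        merged[mu][mv] += w
            val, side = min_cut(merged, t, sink)
            if best is None or val < best[0]:
                best = (val, t, side)
        _, t, side = best
        # remove (and record) the original edges crossing the isolating cut
        for u in list(side):
            for v in list(cap[u]):
                if v not in side and cap[u][v] > 0:
                    cut_edges.append((u, v, cap[u][v]))
                    cap[u][v] = cap[v][u] = 0
        terms.remove(t)
    return cut_edges
```

On a star graph with terminals a, b, c attached to a center x with weights 3, 2, 1, the greedy order isolates c first (cost 1), then b (cost 2), giving a total cut weight of 3, which matches the optimum for that graph.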

Experiments and Results
The evaluation results of our automatic segmentation method on the MRBrainS13 Test Dataset are given in Table 1, which reports the mean and standard deviation of all metrics (Dice, Hausdorff Distance, and percentage Absolute Volume Difference [28]) over all patients. Three individual patient results are shown in Figures 4, 5, and 6. The results show that our algorithm is able to segment the structures of the brain efficiently. Especially for the white and gray matter structures, our algorithm shows robust performance. For the cerebrospinal fluid, however, our results leave room for improvement in handling the large standard deviation.

Evaluation of our Algorithm
Our algorithm uses only the thick-slice T1-weighted, IR, and FLAIR scans; however, our system could be improved by adding a multi-resolution mechanism to use the thin-slice scans as well. We use the MRBrainS13 training dataset, which provides scans of healthy brain structures. The registration step of our algorithm, used while creating the multi-atlas labels, plays a key role in the accuracy. Our method assumes that the registration has at least 0.5 precision for each structure. The performance of our algorithm may therefore decrease if the dataset images contain major deformations that the registration cannot recover. However, since we have not fine-tuned our training set, our algorithm could be used with various datasets.
For each subject image our algorithm takes about 30 minutes: roughly 5 minutes for the registration of each training image and another 5 minutes to fuse the labels and segment the anatomical structures of the subject brain image, on a system with an Intel Core i7-3770K 3.50 GHz processor and 16 GB of memory. In our work, we focus on accuracy rather than speed. It is possible to modify our algorithm to align and transform all the training images by registering them to one another and then creating a combined multi-atlas with label fusion; each subject image could then be registered to this combined image. This would dramatically decrease the required time, as it would involve only one registration per subject image, and a similar approach could be followed for the remaining steps of our algorithm. We recommend this approach in particular for bigger training datasets.

Conclusion
In this work, we have proposed an algorithm that automatically segments the anatomical structures of magnetic resonance human brain images. We show that it is possible to propagate the prior knowledge of labels given by experts to the subject images to statistically investigate the spatial correspondences of brain structures. In addition to spatial correspondence, we use intensity-similarity-based weighted voting for label fusion. We also show how a multiway cut can be used for 3D segmentation of brain MRI. The evaluation results of our segmentation method on the MRBrainS13 Test Dataset show the efficiency and robustness of our algorithm.

Fig. 1 .
Fig. 1. A sample registration. The first image on the left is the training image and the last image in the row is the subject image. When we register the training image to the subject image, the image in the middle is the output of this process. Notice how the training image is aligned and transformed to look similar to the subject image.

Fig. 2 .
Fig. 2. We construct the graph with voxels as the nodes and edge weights based on intensity similarity; we then propagate the labels of the training voxels to the corresponding subject voxels via four terminal nodes, each representing a label class.

Fig. 3 .
Fig. 3. Label fusion is done by propagating the labels and combining these multiple labels of atlases into one label at each voxel with weighted voting.

Fig. 5 .
Fig. 5. Another sample test image of a patient, and the segmentation result of our algorithm on the image. The Dice metric for the patient is also given in the table.

Fig. 6 .
Fig. 6. A sample showing the limitations of our algorithm. The test image of a patient and the segmentation result of our algorithm on the image are shown. The Dice metric for the patient is also given in the table. This is an image on which our algorithm performed poorly in segmenting the cerebrospinal fluid.

Table 1 .
Evaluation results of our method with the mean and standard deviation of Dice, Hausdorff Distance and percentage Absolute Volume Difference metrics over all patients are shown in the table.
Fig. 4. A sample test image of a patient and the segmentation result of our algorithm on the image. The Dice metric for the patient is also given in the table.