Spinal cord gray matter segmentation using deep dilated convolutions

Gray matter (GM) tissue changes have been associated with a wide range of neurological disorders and were recently found relevant as a biomarker for disability in amyotrophic lateral sclerosis. The ability to automatically segment the GM is, therefore, an important task for modern studies of the spinal cord. In this work, we devise a modern, simple and end-to-end fully-automated human spinal cord gray matter segmentation method using Deep Learning that works on both in vivo and ex vivo MRI acquisitions. We evaluate our method against six independently developed methods on a GM segmentation challenge. We report state-of-the-art results in 8 out of 10 evaluation metrics, as well as a major reduction in network parameters compared to traditional medical imaging architectures such as the U-Net.


Introduction
Gray matter (GM) and white matter (WM) tissue changes in the spinal cord (SC) have been linked to a large spectrum of neurological disorders [2]. For example, using magnetic resonance imaging (MRI), the involvement of the spinal cord gray matter (SCGM) area in multiple sclerosis (MS) was found to be the strongest correlate of disability in multivariate models including brain GM and WM volumes, FLAIR lesion load, T1-lesion load, SCWM area, number of spinal cord T2 lesions, age, sex and disease duration [34]. Another study showed that SCGM atrophy is a relevant biomarker for predicting disability in amyotrophic lateral sclerosis [26].
The ability to automatically assess and characterize these changes is, therefore, an important step [11] in the modern pipeline for studying both the in vivo and ex vivo SC. The segmentation outcome can also be used for co-registration and spatial normalization to a common space. Moreover, fully-automated segmentation is very useful for longitudinal studies, where the manual delineation of gray matter is very time-consuming [11].
While recent cervical cord cross-sectional area (CSA) segmentation methods have achieved near-human performance [9], the accurate segmentation of the GM remains a challenge [30]. The main properties that make the GM area difficult to segment are: inconsistent surrounding tissue intensities, image artifacts and pathology-induced changes in the image contrast [11].
Other factors also contribute to the complexity of the GM segmentation task, such as the lack of standardized data sets, differences in MRI acquisition protocols, different pixel sizes, different methods to acquire gold standard segmentations and different performance metrics to assess segmentation results [30]. In Figure 1, we show some MRI samples (axial slices) acquired at different centers, where the variability across acquisitions is readily apparent.
However, despite these difficulties, the scientific community recently organized a joint collaboration effort called the "Spinal Cord Gray Matter Segmentation Challenge" (SCGM Challenge) [30] to characterize the state of the art and compare six independently developed methods [29][5][8][13][4][28] on a publicly available standard data set created through the collaboration of four internationally recognized spinal cord imaging research groups (University College London, Polytechnique Montreal, University of Zurich and Vanderbilt University), providing a common basis for method comparison that was previously unfeasible.
In the past few years, we witnessed the fast and unprecedented development of Deep Learning [22] methods, which not only achieved human-level performance but also surpassed it [17], even in health domain applications [31]. After the groundbreaking results presented in the seminal AlexNet paper [21], the community embraced the Deep Learning approach to machine learning and consequently developed many methods that are nowadays state-of-the-art and pervasive in many different fields, such as image classification [16], image segmentation [6], speech recognition [1] and natural language processing (NLP), among others.
Deep Learning is characterized by a major shift from the past traditional handcraft feature extraction to a hierarchical representation learning approach where multiple levels of automatically discovered representations are learned from raw data [22].
In a recent survey [24] that reviewed over 300 papers using Deep Learning techniques for medical image analysis, the authors found that Deep Learning techniques have spread throughout the entire field, with a rapid increase in the number of published studies between 2015 and 2016. The survey also found that Convolutional Neural Networks (CNNs) were the most prevalent architecture in medical image analysis, with Recurrent Neural Networks (RNNs) gaining popularity.
Although the enormous success of Deep Learning has attracted a lot of attention from the research community, some challenges in the medical imaging domain remain open:

• Data acquisition is usually very expensive and requires time-consuming specialist annotation to create gold standards;
• Standardized data sets are still a major problem due to variability in equipment from different vendors and in acquisition protocols/parameters/contrasts, especially in the MRI domain;
• Data availability is also limited due to privacy/ethics concerns or regulations [24].

In this work, we propose a new, simple pipeline with an end-to-end learning approach for fully automated spinal cord gray matter segmentation, using a novel Deep Learning architecture based on Atrous Spatial Pyramid Pooling (ASPP) [7][6], with which we achieve state-of-the-art results on many metrics in an independent in vivo data set evaluation. We also show excellent generalization on an ex vivo high-resolution acquisition data set, where only a few annotated axial slices were needed to accurately segment an MRI volume with more than 4000 axial slices.
We also provide an evaluation comparing our method with the traditionally used U-Net [33] architecture and with the six other independently developed methods.

Related Work
Many methods for spinal cord segmentation have been proposed in the past. Regarding the presence or absence of manual intervention, segmentation methods can be separated into two main categories: semi-automated and fully-automated; in [11], the authors further classify spinal cord segmentation methods into finer-grained categories.

In [4], the authors propose a probabilistic segmentation method called "Semi-supervised VBEM", where the MRI signal is assumed to be observed data generated by warping of an average-shaped reference anatomy [30]. The observed image intensities are modeled as random variables drawn from a Gaussian mixture distribution, where the parameters are estimated using a variational version of the Expectation-Maximization (EM) algorithm [4]. The method can be used in a fully unsupervised fashion or by incorporating training data with manual labels, hence the semi-supervised scheme [30].
The SCT (Spinal Cord Toolbox) segmentation method [13] uses an atlas-based approach and was built on a previous work [3], with additional improvements such as the use of vertebral level information and linear intensity normalization to accommodate multi-site data [13]. The SCT approach first builds a dictionary of images using manual WM/GM segmentations after a pre-processing step; the target image is then also pre-processed and normalized, and projected into the PCA (Principal Component Analysis) space of the dictionary images, where the most similar dictionary slices are selected using an arbitrary threshold; finally, the segmentation is obtained by label fusion of the manual segmentations from the selected dictionary images [30]. The SCT method is freely available as an open-source software package [10].
In [29], the authors propose a method called "Joint collaboration for spinal cord gray matter segmentation" (JCSCS), which combines two existing label fusion segmentation methods. The method is based on multi-atlas segmentation propagation using registration and segmentation in 2D slice-wise space. In JCSCS, the "Optimized PatchMatch Label Fusion" (OPAL) [14] is used to detect the spinal cord, where cord localization is achieved by providing an external data set of spinal cord volumes and their associated manual segmentations [29]. After that, the "Similarity and Truth Estimation for Propagated Segmentations" (STEPS) [19] is used to segment the GM in two steps: first the segmentation propagation, and then a consensus segmentation created by fusing the best-deformed templates (based on locally normalized cross-correlation) [29].
In [8], the Morphological Geodesic Active Contour (MGAC) algorithm uses an external spinal cord segmentation tool (Jim, from Xinapse Systems) to estimate the spinal cord boundary, as well as a morphological geodesic active contour model to segment the gray matter. First, the spinal cord is segmented from the original image with the Jim software; then a template is registered to the subject cord; after that, the same transformation is applied to the GM template. The transformed gray matter template is then used as an initial guess for the active contour algorithm [8].
The "Gray matter Segmentation Based on Maximum Entropy" (GSBME) algorithm [30] is a semi-automatic, supervised segmentation method for the GM. The GSBME comprises three main stages. First, the image is pre-processed: the GSBME uses the SCT [10] to segment the spinal cord using Propseg [9] with manual initialization, after which the intensities are normalized and denoised. In the second stage, the images are thresholded slice-wise using a sliding window, where the optimal threshold is found by maximizing the sum of the GM and WM intensity entropies. In the third and last stage, an outlier detector discards segmented intensities using morphological features such as perimeter, eccentricity and Hu moments, among others [30].
In the Deepseg approach [28], built on top of [5], the authors use a Deep Learning architecture similar to the U-Net [33], where a CNN has a contracting and an expanding path. The contracting path aggregates information while the expanding path upsamples the feature maps in order to achieve a dense prediction output. To recover the spatial information loss, shortcuts are added between the contracting/expanding paths of the network. In Deepseg, instead of using upsampling layers as in the U-Net, an unpooling and "deconvolution" approach is used, as in [40]. The network architecture has 11 layers and is pre-trained using 3 convolutional restricted Boltzmann machines [23]. Deepseg also uses a loss function with a weighted sum of two terms, the mean square differences of the GM and non-GM voxels, balancing sensitivity and specificity [30]. Two models were trained independently, one for the full spinal cord segmentation and another for the GM segmentation.
We compare our method with all the aforementioned methods on the SCGM Challenge [30] data set.

Methods and Materials
As discussed in the Related Work section, most previously developed GM segmentation methods rely on registered templates/atlases, arbitrary distance and similarity metrics, or complex pipelines that are neither optimized in an end-to-end fashion nor efficient at inference time.
In this work, we focus on the development of a simple Deep Learning method that can be trained in an end-to-end fashion and that generalizes well even with a small number of labeled 2D axial slices of a 3D MRI volume.

Note on U-Nets
Many modern Deep Learning CNN classification architectures use alternating layers of convolutions and subsampling operations to aggregate semantic information and discard spatial information across the network, leading to certain levels of translation and rotation invariance that are important for classification. However, segmentation tasks require a dense, full-resolution output. In medical imaging, the most traditional architecture for segmentation is the well-known U-Net [33], where two distinct paths (encoder-decoder/contracting-expanding) are used to aggregate semantic information and recover the spatial information with the help of shortcut connections between the paths.
The U-Net architecture, however, causes a major expansion of the parameter space due to the two distinct paths that form the U-shape. We also found, as in [41], that the gradient flow in the high-level layers of the U-Net (the bottom of the U-shape) is problematic. Since the final low-level layers have access to the earlier low-level features, the network optimization will find the shortest path to minimize the loss, thus reducing the gradient flow at the bottom of the network.
By visualizing feature maps from the U-Net using the techniques described in [38], we found that the features extracted at the bottom of the network were very noisy, while the features extracted in the low-level layers were the only ones showing meaningful patterns. By removing the bottom layers of the network, we found that the network performed the same as, or sometimes better than, the deeper network.

Proposed method
Our method is based on the state-of-the-art segmentation architecture called "Atrous Spatial Pyramid Pooling" (ASPP) [6], which uses "atrous convolutions", also called "dilated convolutions" [39]. We modified it to improve segmentation performance on medical imaging by handling imbalanced data with a different loss function and by extensively removing decimation operations (such as pooling) from the network, trading depth (due to memory constraints) for improved translational equivariance and a reduced parameter count.
Dilated convolutions allow us to exponentially grow the receptive field with a linearly increasing number of parameters, providing a significant parameter reduction while increasing the effective receptive field. Dilated convolutions work by introducing "holes" [7] in the kernel, as illustrated in Figure 2. For a 1D signal x[i], the output y[i] of a dilated convolution with dilation rate r and a filter w[s] of size S is formulated as:

y[i] = Σ_s x[i + r·s] w[s]

where s indexes the S filter weights. The dilation rate r can also be seen as the stride with which the input signal is sampled [7]. Dilated convolutions, like standard convolutions, also have the advantage of being translationally equivariant, which means that translating the input results in a translated version of the original output:

f(g(x)) = g(f(x))

where g(·) is a translation operation and f(·) a convolution operation. Since we do not need to introduce pooling to capture multi-scale features when using dilated convolutions, we can keep the translational equivariance property in the network, which is very important for spatially dense prediction tasks.

The overall proposed architecture can be seen in Figure 3. Our architecture works with 2D slice-wise axial images and is composed of (a) two initial layers of standard 3x3 convolutions, followed by (b) two layers of dilated convolutions with rate r = 2, followed by (c) six parallel branches with two layers each: a 1x1 standard convolution, four dilated convolutions with different rates (6/12/18/24), and a global average pooling that is repeated at every spatial position of the feature map. The feature maps from the six parallel branches are then concatenated and forwarded to (d) a block of 2 layers with 1x1 convolutions to produce the final dense prediction probability map. Each layer is followed by Batch Normalization [18] and Dropout [36].
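A minimal numpy sketch may make the sampling pattern of the dilated convolution above concrete (valid padding, cross-correlation convention as in deep learning frameworks; the example signal and kernel are ours):

```python
import numpy as np

def dilated_conv1d(x, w, r):
    """1D dilated ("atrous") convolution: y[i] = sum_s x[i + r*s] * w[s].

    Valid padding only; s indexes the S filter taps from 0. With r = 1 this
    reduces to a standard convolution (cross-correlation convention).
    """
    S = len(w)
    span = r * (S - 1)                 # distance covered by the dilated kernel
    n_out = len(x) - span
    y = np.empty(n_out)
    for i in range(n_out):
        y[i] = sum(x[i + r * s] * w[s] for s in range(S))
    return y

x = np.arange(8, dtype=float)          # [0, 1, ..., 7]
w = np.array([1.0, 1.0, 1.0])          # the same 3 weights in both cases

standard = dilated_conv1d(x, w, r=1)   # taps at offsets 0, 1, 2
dilated = dilated_conv1d(x, w, r=2)    # same 3 weights, taps at offsets 0, 2, 4
```

With r = 2, the same three weights span five input samples, illustrating how the receptive field grows without adding parameters.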
Figure 4 illustrates the pipeline of our training/inference process.An initial resampling step downsamples/upsamples the input axial slice images to a common pixel size space, then a simple intensity normalization is applied to the image, followed by the network inference stage.
Contrary to natural image segmentation, GM segmentation in medical imaging is usually very unbalanced: in our case, only a small portion of the entire axial slice encompasses the GM (the rest being comprised of other structures such as the white matter, cerebrospinal fluid, bones, muscles, etc.). Due to this imbalance, we employed a surrogate loss for the DSC (Dice Similarity Coefficient) called the Dice Loss, which is insensitive to class imbalance and has been employed by many works in medical imaging [25][12]. The Dice Loss can be formulated as:

DiceLoss(p, r) = 1 − (2 Σ_i p_i r_i) / (Σ_i p_i + Σ_i r_i + ε)

where p and r are the predictions and the gold standard, respectively. The ε term ensures the stability of the loss by avoiding numerical issues. We experimentally found that the Dice Loss yielded better results than the weighted cross-entropy (WCE) used by [33], which is more difficult to optimize due to the added weighting hyper-parameter.
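A minimal numpy sketch of this loss follows; the exact value of the stabilizing constant ε is an assumption here, since the text only states that such a term is added:

```python
import numpy as np

def dice_loss(pred, target, eps=1.0):
    """Soft Dice loss: 1 - 2*sum(p*r) / (sum(p) + sum(r) + eps).

    `pred` holds per-pixel probabilities, `target` the binary gold standard;
    eps avoids division by zero on empty masks (its value is an assumption).
    """
    pred, target = pred.ravel(), target.ravel()
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection) / (pred.sum() + target.sum() + eps)
```

Because the loss is a ratio of sums over the whole mask, a tiny foreground region weighs as much as the large background, which is what makes it insensitive to class imbalance.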
Medical image data sets are usually smaller than natural image data sets by many orders of magnitude, therefore regularization and data augmentation is an important step.In this work, the following data augmentation strategies were applied: rotation, shifting, scaling, flipping, noise and elastic deformation.
The main differences between our architecture and [6] are the following:

• Initial pooling/decimation: our network does not use initial pooling layers, as we found them detrimental to the segmentation of medical images;
• Padding: we extensively employ padding across the entire network to keep feature map sizes fixed, trading depth to reduce the memory usage of the network;
• Dilation rates: since we do not use initial pooling, we kept the parallel dilated convolution branch with rate r = 24, as we found improvements by doing so; the large feature map size avoids the filter degeneration seen in [6];
• Loss: contrary to natural images, our GM segmentation task is very unbalanced, so instead of the traditional cross-entropy we used the Dice Loss;
• Data augmentation: we applied not only the scaling and flipping seen in [6] but also rotation, shifting, added channel noise and elastic deformations [35].
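The receptive-field arithmetic behind these choices can be sketched in a few lines (assuming stride-1 convolutions and no pooling; the layer list mirrors the front of the architecture in Figure 3):

```python
def receptive_field(layers):
    """Effective receptive field of stacked (kernel_size, dilation) layers.

    Each stride-1 conv with kernel k and dilation r adds (k - 1) * r to the
    receptive field, so the field grows with the dilation rate while the
    parameter count stays at k weights per filter per channel.
    """
    rf = 1
    for k, r in layers:
        rf += (k - 1) * r
    return rf

# Front of the proposed network: two standard 3x3 convolutions, then two
# 3x3 convolutions with dilation rate 2.
front = [(3, 1), (3, 1), (3, 2), (3, 2)]
```

A single 3x3 layer with rate r = 24 already spans 49 pixels, which is why the large-rate branch stays useful only when the feature maps are kept large, as discussed above.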
Table 1 compares the setup parameters of our approach as well as the methods that participated in the SCGM Segmentation Challenge [30].

U-Net architecture
For the U-Net [33] architecture model used for comparison, we employed a 14-layer network using standard 3x3 2D convolution filters with ReLU nonlinearity activations. For a fair comparison, we used the same training protocol and loss function. For the data augmentation strategy, we employed a more aggressive augmentation due to overfitting issues with the U-Net that we discuss later. We also performed an extensive architecture exploration and used the best-performing U-Net model architecture.

Data sets
In this subsection, we present the data sets used for evaluation in this work.

Spinal Cord Gray Matter Challenge
The Spinal Cord Gray Matter Challenge [30] (SCGM Challenge) data set is comprised of 80 healthy subjects (20 subjects from each center), with mean subject age ranging from 28.3 to 44.3 years across centers. Three different MRI systems were used (Philips Achieva, Siemens Trio, Siemens Skyra) with different acquisition parameters. The voxel size resolution of the data set ranges from 0.25x0.25x2.5 mm up to 0.5x0.5x5.0 mm. The data set is split between training (40 subjects) and test (40 subjects), with the test set hidden. For each labeled slice in the data set, 4 gold standard segmentation masks were produced by 4 independent expert raters (one per site). Examples from each center are shown in Figure 1.

Gray Matter Segmentation
During the development of this work, we found some misclassified voxels in the training set. These issues were reported; however, for the sake of a fair comparison, all evaluations in this work used the original, unmodified training data set.

Ex vivo high-resolution spinal cord
To evaluate our method on an ex vivo data set, we used an MRI acquisition that was performed on an entire human spinal cord, from the pyramidal decussation to the cauda equina using a 7T horizontal-bore small animal MRI system.
Although the acquisition was obtained from a deceased adult male with no known history of neurologic disease, review of the images revealed a clinically occult SC lesion close to the 6th thoracic nerve root level, with imaging features suggestive of a chronic compressive myelopathy or a possible sequela of a previous viral infection such as herpes zoster.
The volume comprises a total of 4676 axial slices at 100 µm isotropic resolution, and the acquisition took approximately 120 hours.

Spinal Cord Gray Matter Challenge
In this subsection we show the training protocol for the SCGM Challenge [30] data set experiments.
• Resampling and cropping: all volumes were resampled to a voxel size of 0.25x0.25 mm, the highest resolution found among the acquisitions, and all axial slices were center-cropped to 200x200 pixels;
• Normalization: we performed only mean centering and standard deviation normalization of the volume intensities;
• Train/validation split: we used 8 subjects (2 from each site) for validation and the rest for training. The test set was defined by the challenge. We did not employ any external data or use the vertebral level information from the provided data set; only the provided GM masks were used for training/validation;
• Batch size: we used a small batch size of only 11 samples;
• Optimization: we used the Adam [20] optimizer with a small learning rate η = 0.001;
• Batch Normalization: we used a momentum φ = 0.1 for BatchNorm due to the small batch size;
• Dropout: we used a dropout rate of 0.4;
• Learning rate scheduling: similar to [6], we used the "poly" learning rate policy, where the learning rate at epoch n is defined by η_n = η_0 (1 − n/N)^p, where η_0 is the initial learning rate, N is the total number of epochs, n the current epoch and p the power, with p = 0.9;
• Iterations: we trained the model for 1000 epochs (with 32 batches per epoch);
• Data augmentation: we applied rotation, shift, scaling, channel shift, flipping and elastic deformation [35]. The data augmentation parameters were chosen using random search.

Contrary to the very smooth decision boundaries produced by models trained with the traditional cross-entropy, the Dice Loss has the property of creating very sharp decision boundaries and models with a high recall rate. We found experimentally that thresholding the dense predictions with a threshold τ = 0.999 provided a good compromise between precision and recall; however, no optimization was employed to choose the threshold τ for the output predictions.
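As a sketch, the "poly" learning rate policy with the hyper-parameters listed above (η_0 = 0.001, N = 1000, p = 0.9) can be written as:

```python
def poly_lr(initial_lr, epoch, total_epochs, power=0.9):
    """"Poly" learning rate policy: eta_n = eta_0 * (1 - n / N) ** p."""
    return initial_lr * (1.0 - epoch / float(total_epochs)) ** power

# Schedule used for the SCGM experiments: eta_0 = 0.001, N = 1000, p = 0.9.
schedule = [poly_lr(0.001, n, 1000) for n in range(1000)]
```

With p < 1 the schedule decays slowly at first and faster toward the end of training, reaching zero at the final epoch.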
Since the test data set is hidden from the challenge participants, we evaluated our model by submitting our test predictions to the challenge website (http://cmictig.cs.ucl.ac.uk/niftyweb). Results are presented in Table 2 in the column "Proposed Method", along with the six other previously developed methods and 10 different metrics.
Training on a single NVIDIA P100 GPU took approximately 19 hours (using single-precision floating-point (fp32), the TensorFlow 1.3.0 framework and cuDNN 6), while inference took less than 1 second per subject.

Inter-rater variability as label smoothing regularization
The training data set provided by the SCGM Challenge is comprised of 4 different masks that were manually and independently created by raters for each axial slice.
As in [5], we used all the different masks as our gold standard. We also found that this approach shares the same principle as label smoothing [37]. Label smoothing is a mechanism that makes the model less confident by preventing the network from assigning the full probability to a single class, which is usually evidence of overfitting. In [27], the authors also found a link between label smoothing and the confidence penalty through the direction of the Kullback-Leibler divergence.
Since the different gold standard masks for the same axial slice usually diverge only at the border of the GM, it is easy to see that this has a label smoothing effect on the contour of the GM, encouraging the model to be less confident in its contour prediction, a kind of "contour smoothing".
This interpretation suggests that one could also incorporate this contour smoothing by artificially adding label smoothing on the contours of the target anatomy, where raters usually disagree on the manual segmentation, leading to potentially better model generalization on many different medical segmentation tasks where the contours are usually the region of raters disagreement.
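One hypothetical way to realize this artificial contour smoothing on a binary gold standard mask is sketched below; the softened value and the 4-neighborhood contour definition are our assumptions, not part of the method evaluated in this work:

```python
import numpy as np

def smooth_contour_labels(mask, soft=0.8):
    """Soften a binary mask only at its contour.

    Interior and exterior pixels keep hard labels (1/0); pixels whose
    4-neighborhood mixes foreground and background get the softened label
    `soft` (foreground side) or 1 - soft (background side).
    """
    mask = mask.astype(float)
    padded = np.pad(mask, 1, mode="edge")
    # A pixel is on the contour if any of its 4 neighbors differs from it.
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    contour = (up != mask) | (down != mask) | (left != mask) | (right != mask)
    out = mask.copy()
    out[contour & (mask == 1)] = soft
    out[contour & (mask == 0)] = 1.0 - soft
    return out

m = np.zeros((5, 5))
m[1:4, 1:4] = 1.0                      # 3x3 foreground square
soft_mask = smooth_contour_labels(m)   # hard interior, softened border ring
```

Training against such soft targets would penalize over-confident predictions only where raters tend to disagree, leaving the unambiguous interior untouched.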
We leave the exploration of this contour smoothing to future work.

Ex vivo high-resolution spinal cord
In this subsection we show the training protocol for the ex vivo high-resolution spinal cord data set.
• Cropping: all slices were center-cropped to 200x200 pixels;
• Normalization: we performed only mean centering and standard deviation normalization of the volume intensities;
• Train/validation split: for the training set, we selected only 15 evenly spaced axial slices out of the 4676 total slices of the volume. For the validation set, we selected 7 evenly spaced axial slices, and our test set was comprised of 8 axial slices (also evenly distributed across the entire volume);
• Batch size: we used a small batch size of only 11 samples;
• Optimization: we used the Adam [20] optimizer with a small learning rate η = 0.001;
• Dropout: we used a dropout rate of 0.4;
• Learning rate scheduling: similar to [6], we used the "poly" learning rate policy, where the learning rate at epoch n is defined by η_n = η_0 (1 − n/N)^p, where η_0 is the initial learning rate, N is the total number of epochs, n the current epoch and p the power, with p = 0.9;
• Iterations: we trained the model for 600 epochs (with 32 batches per epoch);
• Data augmentation: we used the aforementioned rotation, shift, scaling, channel shift, flipping and elastic deformation [35] augmentations. We did not employ random search here, to avoid overfitting given the data set size.

As in the SCGM segmentation task, we used a threshold τ = 0.999 to binarize the prediction mask.
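The slice selection above can be sketched as follows; the study does not specify the exact indices, so `evenly_spaced_slices` is one plausible reading of "evenly spaced":

```python
import numpy as np

def evenly_spaced_slices(n_total, n_select):
    """Indices of n_select axial slices spread evenly over n_total slices."""
    return np.linspace(0, n_total - 1, n_select).round().astype(int)

train_idx = evenly_spaced_slices(4676, 15)   # 15 training slices out of 4676
```

The same helper would cover the 7 validation and 8 test slices, offset so the three sets do not overlap.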
The training time on a single NVIDIA P100 GPU took approximately 4

Results
In this section, we discuss the experimental evaluation of the method in the presented data sets.

Spinal Cord Gray Matter Challenge
In this subsection we show the evaluation on the SCGM Challenge [30] data set.

Qualitative Evaluation
In Figure 5, we show the segmentation output of our model for four different subjects from acquisitions of the four different centers in the test set of the SCGM Segmentation Challenge. The majority-voting segmentation was taken from [30]. As we can see in Figure 5, our approach was able to capture many properties of the GM anatomy, providing good segmentations even in the presence of blur, as seen in the samples from Site 1 and Site 3.
Compared with the segmentation results from Deepseg [28], which uses a U-Net-like structure with pre-training and 3D-wise training, our method does not fail to segment the gray commissure of the GM structure, as seen in Figure 4 of [30].

Quantitative Evaluation
As we can see in Table 2, our approach achieved state-of-the-art results on 8 out of 10 different metrics and surpassed 4 out of 6 previously developed methods on all metrics.
We can also see that the Dice Loss is not only an excellent surrogate for the Dice Similarity Coefficient (DSC) but also a surrogate for distance metrics: our model achieved state-of-the-art results not only on overlap metrics (i.e., DSC) but also on distance and statistical metrics.
The True Negative Rate (TNR) and Positive Predictive Value (PPV), or precision, were the metrics where the model did not achieve the best results; however, we note that the TNR was very close to the results of the other methods. We also hypothesize that the suboptimal precision (PPV) is an effect of the sharp decision boundary produced by our model due to the Dice Loss. We believe that prediction threshold optimization could yield better results; however, such optimization would require further investigation.
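The precision/recall effect of the prediction threshold can be illustrated with a small sweep; `precision_recall_at` and the synthetic probabilities below are ours, not the challenge's evaluation code:

```python
import numpy as np

def precision_recall_at(probs, target, threshold):
    """Precision (PPV) and recall (TPR) of a thresholded probability map."""
    pred = probs >= threshold
    tp = np.sum(pred & (target == 1))
    fp = np.sum(pred & (target == 0))
    fn = np.sum(~pred & (target == 1))
    precision = tp / float(tp + fp) if tp + fp else 0.0
    recall = tp / float(tp + fn) if tp + fn else 0.0
    return precision, recall

# Synthetic example: a sharp model puts near-1 probabilities on the true GM
# pixel but also on a false positive, so only a very strict threshold
# improves precision without hurting recall.
probs = np.array([0.10, 0.60, 0.9995, 0.99999])
target = np.array([0, 0, 0, 1])

p_lo, r_lo = precision_recall_at(probs, target, 0.5)     # permissive threshold
p_hi, r_hi = precision_recall_at(probs, target, 0.999)   # strict threshold
```

In this toy case, raising the threshold from 0.5 to 0.999 improves precision while recall stays perfect, mirroring the τ = 0.999 choice discussed above.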
Compared to the Deepseg [28] method, the only Deep Learning method in the challenge, where a U-Net-based architecture was employed, our proposed approach performed better on 8 out of 10 metrics, even though our method did not employ 3D convolutions, pre-training or threshold optimization as in Deepseg [28].

Ex vivo high-resolution spinal cord
In this subsection we show the evaluation on the ex vivo high-resolution spinal cord data set.

Qualitative Evaluation
In the Figure 8, we show a qualitative evaluation of the segmentations produced by our method and the U-Net model, contrasting the segmentations against the original and gold standard images.
As can be seen in the test sample depicted in the first column of Figure 8, the U-Net prediction "leaked" the gray matter segmentation into the cerebrospinal fluid (CSF) close to the dorsal horn (see the green rectangle in the first column), while our proposed segmentation remained well confined to the gray matter region.
Also, in the third column of Figure 8, the U-Net significantly oversegmented a large portion of the gray matter region, extending the segmentation into the white matter close to the right lateral horn of the gray matter anatomy (see the green rectangle), while our proposed method performed well.
We also provide in Figure 7 a 3D rendered representation of the segmented gray matter using our method.

Quantitative Evaluation
As we can see in Table 3, where we show the quantitative results of our approach, our method achieved better results on 6 out of 8 different metrics. One of the main advantages apparent from these results is that our method uses six times fewer parameters than the U-Net architecture, leading to a lower chance of overfitting and potentially better generalization.
During the training of the two architectures (the U-Net and our method), we noticed that even with a high dropout rate of 0.4, the U-Net was still overfitting, forcing us to use a more aggressive data augmentation strategy to achieve better results, especially for the shifting parameters of the data augmentation. We hypothesize that this is an effect of the decimation in the contracting path of the U-Net, which disturbs the translational equivariance of the network and leads to poorer performance on segmentation tasks.

Discussion
In this work, we devise a simple but efficient end-to-end method that achieves state-of-the-art results on many metrics when compared to six independently developed methods, as detailed in Table 2. To the best of our knowledge, our approach is the first to achieve better results on 8 out of 10 metrics evaluated in the SCGM Segmentation Challenge [30].
One of the main differences with the other methods from the challenge is that our method employs an end-to-end learning approach, where the entire prediction pipeline is optimized using backpropagation and gradient descent, contrasting with the other methods, which usually employ separate registration, external atlas/template data and label fusion stages.
As we can also see in Table 3, when we compare our method to the most traditionally used architecture for medical image segmentation (the U-Net), our method provides not only better results on many metrics but also a major parameter reduction (more than six times).
Through the lens of Minimum Description Length (MDL) theory [32], which describes models as languages for describing properties of the data and sees inductive inference as finding regularity in the data [15], when two competing explanations both explain the data well, MDL prefers the one that provides the shorter description. Our approach using dilated filters uses more than six times fewer parameters than the U-Net, yet outperforms other methods on many metrics, evidence that the model is parameter-efficient and captures a more compact description of the data regularities compared with more complex models such as U-Nets.
Our approach is limited to 2D slices; however, the model does not preclude the use of 3D dilated convolutions, and we believe that incorporating 3D context information would improve the segmentation results, at the expense of increased memory consumption.
We also believe that our method can be extended to leverage semi-supervised learning approaches, owing to the strong smoothness assumption that holds for axial slices in most volumes, especially in ex vivo high-resolution spinal cord MRI.

Acknowledgments
We acknowledge NVIDIA Corporation for the donation of a Titan X GPU board, Compute Canada for the GPU cluster, Zhuoqiong Ren for the help with the gray matter gold standard, and the organizers of the SCGM Segmentation Challenge and the participating teams that invested so much effort in this challenge. We also acknowledge United States National Institutes of Health awards P41 EB015897 and 1S10OD010683-01 for funding the ex vivo study.

Author Contributions
C.S.P. conceived the method, conducted the experiments, performed the manual segmentations, and wrote the paper. J.C.A. provided expert guidance and wrote the paper. E.C. provided the volume and information for the high-resolution ex vivo dataset. All authors reviewed the paper.

Additional Information
Competing financial interests: C.S.P., E.C. and J.C.A. declare no competing financial interests.

Figure 1 :
Figure 1: In vivo axial-slice samples from the four centers (UCL, Montreal, Zurich, Vanderbilt) that contributed to the SCGM Segmentation Challenge [30]. Top row: original MRI images. Bottom row: crop of the spinal cord (green rectangle).

Figure 2 :
Figure 2: Dilated convolution. On the left, we show a dilated convolution with dilation rate r = 1, which is equivalent to the standard convolution. In the middle, we show a dilation rate of r = 2, and on the right, a dilation rate of r = 3. All dilated convolutions have a 3x3 kernel size and the same number of parameters.
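This property can be sketched in code: a 3x3 kernel with dilation rate r samples the input on a grid with gaps of r-1 pixels, giving an effective receptive field of 3 + 2(r-1) pixels while the parameter count stays fixed at nine weights. A minimal NumPy implementation (an illustration, not the paper's code):

```python
import numpy as np

def dilated_conv2d(x, w, rate=1):
    """'Valid' 2D dilated convolution (cross-correlation) of a
    single-channel image x with a kxk kernel w. The kernel taps are
    spaced `rate` pixels apart, so the effective field grows with the
    rate while the number of weights stays the same."""
    k = w.shape[0]
    eff = k + (k - 1) * (rate - 1)          # effective kernel size
    h = x.shape[0] - eff + 1
    wd = x.shape[1] - eff + 1
    out = np.zeros((h, wd))
    for i in range(h):
        for j in range(wd):
            patch = x[i:i + eff:rate, j:j + eff:rate]  # dilated taps
            out[i, j] = np.sum(patch * w)
    return out

x = np.ones((7, 7))
w = np.ones((3, 3))                          # 9 weights at every rate
print(dilated_conv2d(x, w, rate=1).shape)    # (5, 5): 3x3 field
print(dilated_conv2d(x, w, rate=2).shape)    # (3, 3): 5x5 field
print(dilated_conv2d(x, w, rate=3).shape)    # (1, 1): 7x7 field
```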

Figure 3 :Figure 4 :
Figure 3: Architecture overview of the proposed method. The MRI axial slice is fed into a first block of 3x3 convolutions and then into a block of dilated convolutions (rate 2). Then, six parallel modules with different dilation rates (6/12/18/24), a 1x1 convolution, and a global average pooling are used. After the parallel modules, all feature maps are concatenated and fed into a final block of 1x1 convolutions to produce the final dense predictions.
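For the parallel branches to be concatenated channel-wise, each one must preserve the spatial size of its input, which holds when a dilated convolution uses "same" zero-padding sized to its effective kernel. A small shape-bookkeeping sketch (the function names and the input size of 100 are illustrative assumptions, not the paper's code):

```python
def effective_kernel(k, rate):
    """Effective size of a kxk kernel dilated by `rate`."""
    return k + (k - 1) * (rate - 1)

def same_conv_out(size, k, rate):
    """Spatial output size of a dilated convolution with 'same'
    zero-padding: pad (eff - 1) // 2 on each side (exact for odd
    effective sizes, which a 3x3 kernel always yields)."""
    eff = effective_kernel(k, rate)
    pad = (eff - 1) // 2
    return size + 2 * pad - eff + 1

# Every parallel branch keeps the input's spatial size, so the
# resulting feature maps can be concatenated along the channel axis.
sizes = [same_conv_out(100, 3, r) for r in (6, 12, 18, 24)]
print(sizes)  # [100, 100, 100, 100]
```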

Figure 5 :Figure 6 :
Figure 5: Qualitative evaluation of our proposed approach on the same axial slice for subject 11 of each site. From top to bottom row: input image, majority-voting segmentation gold standard, and the result of our segmentation method. Adapted from [30].

Figure 7 :
Figure 7: Lumbosacral region 3D rendered view of the ex vivo high-resolution spinal cord data set segmented using the proposed method. The gray matter is depicted in orange, while the white matter and other tissues are rendered in transparent gray.

Figure 8 :
Figure 8: Qualitative evaluation of the U-Net and our proposed method on the ex vivo high-resolution spinal cord data set. Each column represents a random sample of the test set (regions from left to right: sacral, thoracic, cervical). Green rectangles show oversegmentation made by the U-Net model.

Table 1 :
Parameters of each compared method. Values replicated from [30]. Time-per-slice values are estimates, since different hardware was employed by the different techniques.
Training took approximately 2 hours, while inference took approximately 25 seconds to segment 4676 axial slices.

Table 2 :
Comparison of the different segmentation methods that participated in the SCGM Segmentation Challenge [30] against each of the four manual segmentation masks of the test set, reported here in the format: mean (std). For fair comparison, the metrics are the same as used in [30], and the results from the other methods are replicated here: Dice similarity coefficient (DSC), mean surface distance (MSD), Hausdorff surface distance (HSD), skeletonized Hausdorff distance (SHD), skeletonized median distance (SMD), true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), Jaccard index (JI), and conformity coefficient (CC). The best result for each metric is shown in bold. MSD, HSD, SHD, and SMD are in millimeters, and lower values mean better results.

Table 3 :
Quantitative metric results comparing a U-Net architecture and our proposed approach on the ex vivo high-resolution spinal cord data set.