Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology

Abstract: Accurate quantification of retinal layer thicknesses in mice as seen on optical coherence tomography (OCT) is crucial for the study of numerous ocular and neurological diseases. However, manual segmentation is time-consuming and subjective. Previous attempts to automate this process were limited to high-quality scans from mice with no missing layers or visible pathology. This paper presents an automatic approach for segmenting retinal layers in spectral domain OCT images using sparsity-based denoising, support vector machines, graph theory, and dynamic programming (S-GTDP). Results show that this method accurately segments all present retinal layer boundaries, which can range from seven to ten, in wild-type and rhodopsin knockout mice as compared to manual segmentation, and that it performs more accurately than the commercial automated Diver segmentation software.


Introduction
Accurate quantification of retinal layer thicknesses in spectral domain optical coherence tomography (SD-OCT) images of mouse eyes is crucial for the study and initial treatment evaluation of many ophthalmic and neurologic diseases in humans [1,2]. However, segmenting these layers manually [1,3] is time-consuming, limiting its practicality for use in large-scale studies. Furthermore, layer thicknesses calculated from manual segmentations are inherently subjective due to variability between graders.
While many automated algorithms for segmenting retinal layers in human eyes have been developed [4][5][6][7][8][9][10][11][12][13], few have addressed the segmentation of murine eyes. These include a 3D segmentation algorithm by Ruggeri and colleagues that segments two retinal layer boundaries [14], and a two-algorithm method by Molnár and colleagues that segments three retinal layer boundaries by first calculating borders using row projections in a sliding window and then refining these borders iteratively [15]. The method by Yazdanpanah and colleagues utilizes active contours to segment retinal layers in SD-OCT images of rat eyes [16]. However, that study was limited in application, as the test images were preselected based on three criteria: 1) the test images were chosen from wild-type (WT) or diseased eyes in which no retinal layer was completely missing; 2) the test images were limited to the central slices of the volumes where the retinal layers were clearly visible, eliminating images from the periphery of the retinal volumes, where the retinal layers appear with lower quality, and images of the optic nerve, where several layers disappear; and 3) the algorithm only segmented six retinal layer boundaries, ignoring the NFL-GCL, IIS-OS, OIS-OS, and RPE-Choroid boundaries. Finally, while this paper was under review, a new work by Antony and colleagues was published which presented a graph-based method for the automated segmentation of ten retinal layer boundaries in normal mice, excluding the optic nerve head (ONH) region [17].
Here, we present a novel segmentation methodology that accurately segments the retinal layers in images from all sections of the retina, including the periphery and the ONH of WT and rhodopsin knockout (Rho(−/−)) mice with missing layers and significant pathology, captured with a commercial Bioptigen Inc. (Research Triangle Park, NC) SD-OCT system. We previously developed a framework for segmenting retinal layers in human eyes based on graph theory and dynamic programming (GTDP) [4]. Section 2 briefly reviews the layers of the murine retina and the GTDP and support vector machine (SVM) techniques in the context of the present problem. Section 3 introduces our layer segmentation technique for the mouse retina. The new algorithm, which we name S-GTDP, combines the GTDP framework with an SVM algorithm to detect pathological eyes. Section 4 demonstrates the accuracy of our algorithm by quantitatively comparing our automated results against manual segmentation and the commercially available Diver software (Bioptigen Inc.), and Section 5 outlines conclusions and future directions.

Review
In this section, we briefly review the layers of the murine retina, the GTDP framework originally developed for human retinal [4,5,18] and corneal [19] layer segmentation, as well as the basics of SVM classification [20]. While the general GTDP framework is similar for different applications, in the following we have modified and extended the core formula in the context of murine retinal layer boundary segmentation. Figure 1 shows example SD-OCT images of WT and Rho(−/−) mice retinas.

GTDP layer segmentation
The GTDP framework represents an SD-OCT B-scan as a graph consisting of nodes (i.e. image pixels) and edges that connect adjacent pixels. Weights are assigned to each of the edges based on a priori information about the layer boundaries. A cut is defined as a path that traverses edges from the leftmost column of the image to the rightmost column of the image. The desired cut is the path with the minimum summed weight of traversed edges.
In this paper, we modify and extend the weighting scheme in [4] as follows:

$$ w_{ab} = 2 - (g_a + g_b) + \lambda_s \lvert i_a - i_b \rvert + w_v + w_{\min}, $$

where $w_{ab}$ is the weight of the edge connecting nodes $a$ and $b$; $g_j$ is the normalized vertical gradient of the image at node $j \in \{a,b\}$; $\lambda_s$ is the "similarity factor" weight; $i_j$ is the normalized intensity of node $j \in \{a,b\}$; $w_v$ is the "vertical penalty" term that adds extra weight to edges going up, down, or diagonally; and $w_{\min}$ is a minimum weight term ($1 \times 10^{-5}$) added for numerical stability. Due to the first "gradient" term on the right side of the equation, layer boundaries prefer to pass through pixels with large vertical gradients. The second "similarity" term favors boundaries that pass through pixels of similar or smoothly changing intensity. Here, normalization refers to linearly scaling pixel values to the range between zero and one. The third "vertical penalty" term is included to prevent the segmentation from "hopping" between boundaries. For efficient and simplified computation, we only create edges from each node (pixel) to its upper-right neighbor, its right neighbor, and its lower-right neighbor. The only exceptions are the ONH and vessel regions (with steep boundaries), where we allow vertical edges as discussed in Section 3.7.
We use an iterative method to segment all retinal layers in each SD-OCT B-scan using GTDP. As detailed in our previous publication [4], once a new layer boundary is segmented, it is used to limit the search space for the subsequent layer boundaries. We use Dijkstra's algorithm, initialized by the zero-weight endpoint selection method of [4], to find the lowest summed weight path across the image.
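To make this concrete, the following MATLAB sketch builds the graph for one B-scan and extracts a single cut. The inputs `img` (denoised B-scan) and `g` (vertical gradient image) are assumed normalized to [0,1], and the values of `lambdaS` and `wV` are illustrative placeholders rather than the tuned per-boundary parameters of Tables 1 and 2.

```matlab
% Minimal sketch of one GTDP cut. Assumes `img` (denoised B-scan) and `g`
% (vertical gradient image) are normalized to [0,1]; lambdaS and wV are
% illustrative placeholders, not the tuned values of Tables 1 and 2.
[rows, cols] = size(img);
lambdaS = 0.1;  wV = 0.1;  wMin = 1e-5;
idx = @(r, c) sub2ind([rows, cols], r, c);   % node numbering

src = []; dst = []; wts = [];
for c = 1:cols-1
    for r = 1:rows
        for dr = -1:1                        % upper-right, right, lower-right
            r2 = r + dr;
            if r2 < 1 || r2 > rows, continue; end
            w = 2 - (g(r, c) + g(r2, c+1)) ...               % gradient term
                + lambdaS * abs(img(r, c) - img(r2, c+1)) ...% similarity term
                + wV * (dr ~= 0) ...                         % penalty on diagonal edges
                + wMin;                                      % stability term
            src(end+1) = idx(r, c);    %#ok<SAGROW>
            dst(end+1) = idx(r2, c+1); %#ok<SAGROW>
            wts(end+1) = w;            %#ok<SAGROW>
        end
    end
end

% Zero-weight endpoint columns [4]: virtual start/end nodes connected to
% every pixel of the first/last columns let the cut begin and end anywhere.
n = rows * cols;
firstCol = idx(1:rows, ones(1, rows));
lastCol  = idx(1:rows, cols * ones(1, rows));
src = [src, (n+1) * ones(1, rows), lastCol];
dst = [dst, firstCol, (n+2) * ones(1, rows)];
wts = [wts, wMin * ones(1, 2 * rows)];

G = digraph(src, dst, wts);
p = shortestpath(G, n+1, n+2);               % Dijkstra's algorithm
[boundaryRow, boundaryCol] = ind2sub([rows, cols], p(2:end-1));
```

Because all weights are nonnegative, `shortestpath` resolves to Dijkstra's algorithm, and since every edge advances exactly one column, the recovered path contains one boundary row per image column.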

SVM
In brief, an SVM is a supervised classification algorithm that takes as input a set of training examples, each consisting of a feature vector and a binary label. The algorithm maps each training example to the space of the provided features and finds the maximum-margin hyperplane that separates the positive (labeled as one) and negative (labeled as zero) training examples. While SVMs in their basic form are binary linear classifiers, a simple kernel trick [20] turns them into nonlinear classifiers and thus extends their application.
SVMs and other machine learning classifiers have previously been used for various classification problems, including automatic and semiautomatic retinal layer segmentation by classifying pixels as belonging to different layers [21,22], glaucoma detection [23][24][25][26][27], and segmentation of the ONH [28]. In this work, we exploit a novel utilization of SVMs to detect the images of diseased eyes in which some retinal layers may be missing.

Methods
This section introduces our method for segmenting up to ten layer boundaries in SD-OCT images of the mouse retina. The algorithm is an extension of our previously presented technique for segmenting layered structures via GTDP in normal human eyes [4]. The core steps are outlined in Fig. 2 and described in detail in the following subsections.

Volume denoising
SD-OCT images are corrupted by speckle noise, so it is beneficial to denoise them before segmentation. Using B-scan averaging or other special scanning patterns [29,30] reduces noise but decreases the image acquisition speed. Thus, to improve the quality of our captured images, we denoise the individual B-scans in each SD-OCT volume using two different sparsity-based denoising methods, both freely available online. That is, we create two sets of denoised images for each mouse and use them to calculate appropriate graph weights for each layer boundary. The first of these denoising techniques is called sparsity based simultaneous denoising and interpolation (SBSDI) [31]. Based on our empirical experiments on a training data set, we have found that SBSDI provides the most accurate segmentation results for the Vitreous-NFL, IPL-INL, INL-OPL, OPL-ONL, OIS-OS, and OS-RPE layer boundaries. This training data set was composed of 400 B-scans from four WT mice and 400 B-scans from four Rho(−/−) mice with advanced retinal degeneration. We also denoise each B-scan with the block-matching and 3D filtering (BM3D) algorithm [32], which we have found to be most appropriate for the segmentation of the remaining layer boundaries. Note that it is possible to use either of these methods alone in our segmentation framework, which reduces the overall computation time; however, the resulting segmentation will be less accurate than with the proposed method, which utilizes both denoising algorithms. After denoising, the gray-level values of all images are normalized to values between zero and one.
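In outline, the denoising stage looks as follows; `sbsdiDenoise` and `bm3dDenoise` are hypothetical wrappers around the publicly available SBSDI [31] and BM3D [32] implementations, whose actual interfaces vary by release.

```matlab
% Create two denoised copies of each B-scan (Section 3.1); sbsdiDenoise
% and bm3dDenoise are hypothetical wrappers around the public SBSDI [31]
% and BM3D [32] codes. `vol` is the raw SD-OCT volume.
norm01 = @(A) (A - min(A(:))) / (max(A(:)) - min(A(:)));
volSBSDI = zeros(size(vol));  volBM3D = zeros(size(vol));
for k = 1:size(vol, 3)
    volSBSDI(:, :, k) = norm01(sbsdiDenoise(vol(:, :, k)));  % hypothetical wrapper
    volBM3D(:, :, k)  = norm01(bm3dDenoise(vol(:, :, k)));   % hypothetical wrapper
end
```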

SVM based volume classification
We trained an SVM to classify each B-scan in a volume as belonging either to the group with all ten retinal layer boundaries (Group A) or the group with eight or fewer boundaries (Group B). In our experiments, the latter group consisted solely of Rho(−/−) mice while the former group consisted of WT mice without advanced retinal degeneration. To train an SVM for classifying B-scans as belonging to either Group A or B, we used the training data set described in Section 3.1.
To calculate each feature vector, we first need to find the top left corner of the visible retina within each B-scan (Fig. 3). We detect the left boundary of the retina within the B-scan by thresholding the image at the intensity value of 0.6, removing all connected components with fewer than 50 pixels, and then finding the first column with non-zero values. Next, we compute a pilot estimate of the Vitreous-NFL boundary, as detailed below in Section 3.4. Starting from 120 µm to the right of the leftmost column of the visible retina, we extract a 320 µm deep and 480 µm wide rectangle from the image (corresponding to 200 pixels and 400 pixels, respectively, in our experiments). Then, we average each row of the rectangle, resulting in a [200 × 1] vector that we use as the feature vector for each training example. The feature vector values were linearly projected to values between zero and one. Figure 3 shows an example rectangle extracted from an SD-OCT B-scan from a WT mouse and the corresponding feature vector. Each training example was assigned a label: zero if it came from a WT mouse and one if it had missing layers. Our proposed SVM utilized a Gaussian kernel function with a sigma value of 10 to enable non-linear decision capability. We used the MATLAB (The MathWorks Inc., Natick, MA) functions svmtrain and svmclassify to implement the proposed SVM. We use the trained SVM to classify all B-scans in each volume. Finally, we use the mode of all B-scan classifications in a volume to decide whether the mouse belongs to Group A or B.
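A hedged MATLAB sketch of this feature extraction and training step follows. Variable names are illustrative, the rectangle offsets assume this study's sampling of 1.6 µm/pixel axially and 1.2 µm/pixel laterally, and fitcsvm/predict stand in for the older svmtrain/svmclassify functions named above.

```matlab
% Hedged sketch of the SVM feature extraction and training of Section 3.2.
% `bscans` is a cell array of denoised B-scans in [0,1], `labels` their
% 0/1 labels (0 = WT, 1 = missing layers), and `nfl{k}` a pilot
% Vitreous-NFL row estimate per column (Section 3.4). Offsets assume
% 1.6 um/px axial and 1.2 um/px lateral sampling; names are illustrative.
features = zeros(numel(bscans), 200);
for k = 1:numel(bscans)
    img  = bscans{k};
    bw   = bwareaopen(img > 0.6, 50);           % threshold, drop blobs < 50 px
    c0   = find(any(bw, 1), 1, 'first') + 100;  % 120 um right of retina edge
    r0   = round(nfl{k}(c0));                   % top of rectangle at the NFL
    rect = img(r0 : r0+199, c0 : c0+399);       % 320 um deep x 480 um wide
    f    = mean(rect, 2);                       % row averages -> [200 x 1]
    features(k, :) = (f - min(f)).' / (max(f) - min(f));  % scale to [0,1]
end

% Gaussian (RBF) kernel with sigma = 10; fitcsvm/predict are the modern
% equivalents of the svmtrain/svmclassify functions used in the paper.
model = fitcsvm(features, labels, 'KernelFunction', 'rbf', 'KernelScale', 10);

% For a new volume, extract features the same way and take the mode:
% volumeLabel = mode(predict(model, newFeatures));
```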

Gradient image creation
As discussed in detail in our previous publication, it is beneficial to construct two sets of vertical gradients, also known as dark-to-light and light-to-dark, to better separate neighboring retinal layers (Fig. 4) [4]. To calculate these gradients, we convolve each denoised image with either [1;-1] (MATLAB notation) for the dark-to-light gradient or [-1;1] for the light-to-dark gradient, set all negative values to zero, and normalize the image to values between zero and one.
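In MATLAB, this step is a direct transcription of the description above (`img` is a denoised B-scan):

```matlab
% Dark-to-light and light-to-dark vertical gradient images (Fig. 4);
% `img` is a denoised B-scan with values in [0,1].
norm01 = @(A) (A - min(A(:))) / (max(A(:)) - min(A(:)));
darkToLight = norm01(max(conv2(img, [1; -1], 'same'), 0));  % intensity rises with depth
lightToDark = norm01(max(conv2(img, [-1; 1], 'same'), 0));  % intensity falls with depth
```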

Pilot layer segmentation using GTDP
Mouse retinal OCT scans are commonly centered at the ONH. Considering the significant deformation of retinal layers near the ONH, our method utilizes the location of the ONH to improve the accuracy of retinal layer segmentation.
After determining the number of layers to segment in each volume (Section 3.2), we preliminarily segment all present layers in each scan of the volume so that we can use this pilot segmentation to determine the location of vessels and the ONH.
When segmenting each SD-OCT B-scan, we start by selectively segmenting the Vitreous-NFL boundary. Starting with the dark-to-light gradient image (Fig. 4(a)), we iterate through each column and set every pixel in the column below the innermost intensity peak to zero, based on the assumption that the innermost dark-to-light boundary corresponds to the Vitreous-NFL. The edge weights are then calculated and the Vitreous-NFL boundary is segmented (see Tables 1 and 2 for implementation details and parameters used for all layer boundary segmentations outlined in this section).
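A minimal sketch of this column-wise preprocessing follows; `dtl` is the dark-to-light gradient image, and using findpeaks to locate the innermost peak is an illustrative choice, not necessarily the exact peak detector used in our implementation.

```matlab
% Zero everything below the innermost (topmost) peak of each column of the
% dark-to-light gradient image `dtl` before the Vitreous-NFL cut.
dtlNFL = dtl;
for c = 1:size(dtl, 2)
    [~, locs] = findpeaks(dtl(:, c));      % local maxima, innermost first
    if ~isempty(locs)
        dtlNFL(locs(1)+1 : end, c) = 0;    % suppress deeper boundaries
    end
end
% Edge weights are then computed from dtlNFL as in Section 2.1.
```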
Next, we selectively segment the most prominent dark-to-light boundary under the NFL. In most cases of Group A, this is the IIS-OS boundary; in Group B, this is the OS-RPE. In some cases, this assumption does not hold throughout a B-scan, and the ELM gradient may appear more prominently than the IIS-OS or OS-RPE gradients. To address this issue, we note that the ELM is a very thin dark-to-light layer boundary above our target boundary and below a thick low intensity region (ONL). Thus, we threshold the dark-to-light gradient image at the value of 0.1 and zero out any nonzero pixel that lies above the second intensity peak (corresponding to an estimate of the IIS-OS or OS-RPE boundary) and is bordered above and below by zero-intensity pixels. The edge weights are then calculated and the IIS-OS or OS-RPE boundary is segmented.
We then segment the INL-OPL boundary in a straightforward fashion. To achieve an accurate INL-OPL boundary segmentation, we utilize the dark-to-light gradient based weighting, and limit our search region between our estimated Vitreous-NFL and IIS-OS or OS-RPE boundaries (for Groups A and B, respectively).
Next, we segment the ELM boundary. For Group A, where the ELM is assumed to be always present, we limit the search region in the dark-to-light gradient image, calculate the edge weights, and segment the ELM. However, this process is different for Group B because the ELM is only intermittently present. To address this, we test regions of the pilot ELM segmentation to determine where the segmentation is valid and where the ELM does not exist. First, we record the maximum and minimum dark-to-light gradient values within 6.4 µm below and above the pilot ELM segmentation, respectively. Then, we divide the pilot ELM segmentation into 24-µm wide segments and calculate the mean of the noted maximum and minimum dark-to-light gradient values within each segment. If the mean of the maximum gradient values is at least 0.09 and the mean of the minimum gradient values is at most 0.02, that section of the ELM segmentation is declared valid. Otherwise, we assume that the ELM does not exist in that 24-µm wide segment.
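A hedged sketch of this validity test is shown below; `elm` holds the pilot ELM row per column, `dtl` is the dark-to-light gradient image, and the pixel conversions assume this study's sampling of 1.6 µm/pixel axially and 1.2 µm/pixel laterally.

```matlab
% ELM validity test for Group B (Section 3.4): 6.4 um = 4 px axially,
% 24 um = 20 A-scans laterally at this scanner's sampling.
segW = 20;  off = 4;
valid = false(1, size(dtl, 2));
for s = 1:segW:size(dtl, 2)
    colsSeg = s : min(s+segW-1, size(dtl, 2));
    mx = zeros(size(colsSeg));  mn = zeros(size(colsSeg));
    for k = 1:numel(colsSeg)
        c = colsSeg(k);  r = round(elm(c));
        mx(k) = max(dtl(r : min(r+off, end), c));   % max gradient below ELM
        mn(k) = min(dtl(max(r-off, 1) : r, c));     % min gradient above ELM
    end
    valid(colsSeg) = mean(mx) >= 0.09 & mean(mn) <= 0.02;
end
```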
In Group A, after segmenting the ELM, we segment the OS-RPE. To do this, we utilize the dark-to-light gradient and the parameters of Tables 1 and 2 in a straightforward fashion.
Next, we segment the boundaries that are most prominent in the light-to-dark gradient image (Fig. 4(b)). These include the NFL-GCL, IPL-INL, OPL-ONL, and OIS-OS, which can be segmented in a straightforward fashion by isolating the search region and using the parameters in Tables 1 and 2. Note that we segment the OIS-OS boundary only in Group A (the IS-OS is absent in Group B).
Finally, we segment the RPE-Choroid boundary in a similar straightforward fashion. For this boundary, we use different gradients for Group A and Group B because utilizing the light-to-dark boundary for Group A results in more accurate segmentation, while the opposite is true for Group B.

ONH segmentation
Since internal retinal layers do not exist within the ONH, it is important to know where the nerve head is located in our B-scans so that we do not segment internal layers in those regions. To segment the ONH, we make use of our pilot layer segmentation. The pilot Vitreous-NFL is designed to cut through the hyper-reflective peak of the ONH center due to the lack of vertical edges and the heavy penalty on diagonal edges. Thus, we obtain the ONH center as the location of the maximum value in the summed voxel projection (SVP) of the SD-OCT volume, created by averaging intensities above the pilot Vitreous-NFL boundary (Fig. 5(a)).
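A minimal sketch of this SVP-based localization follows, where `vol` is the SD-OCT volume (rows × columns × B-scans) and `nflRow(c,b)` is the pilot Vitreous-NFL row for A-scan c of B-scan b; variable names are illustrative.

```matlab
% ONH center localization (Section 3.5): build the SVP by averaging the
% intensities above the pilot Vitreous-NFL, then take its maximum.
svp = zeros(size(vol, 3), size(vol, 2));         % one pixel per A-scan
for b = 1:size(vol, 3)
    for c = 1:size(vol, 2)
        svp(b, c) = mean(vol(1 : round(nflRow(c, b)), c, b));
    end
end
[~, k] = max(svp(:));
[onhScan, onhAscan] = ind2sub(size(svp), k);     % B-scan and A-scan of ONH center
```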
After estimating the ONH center, we search for the boundaries of the entire ONH area. We create another SVP by averaging the intensities between the pilot OPL-ONL and RPE-Choroid boundaries in Group A, or between the pilot INL-OPL and RPE-Choroid boundaries in Group B (Fig. 5(b)). The quality of the SVP is enhanced using the BM3D algorithm. We segment this area by fitting an ellipse, centered at the previously calculated ONH center and oriented along the x and y axes, to this SVP (Fig. 5(c)).

Vessel segmentation
The presence of large blood vessels appearing as hyper-reflective bulges in the NFL makes accurate segmentation difficult. As shown in Fig. 6, if the appearance of vessels is not carefully considered in algorithm design, inconsistent segmentation of the NFL-GCL boundary will occur. This is important since small changes in NFL thickness due to inconsistent segmentation may be erroneously attributed to glaucoma and other neurological diseases. To account for the vessels in our segmentation, we must first determine the locations of these retinal vessels, and we do this by using our pilot layer segmentation. We first create two SVPs of our SD-OCT volume. The first SVP is created by averaging the intensities between the pilot OPL-ONL and RPE-Choroid boundaries for volumes in Group A, or between the pilot INL-OPL and RPE-Choroid boundaries in Group B. The second SVP is created by averaging the intensities between the pilot Vitreous-NFL and IPL-INL boundaries. In the first SVP the vessels cast dark shadows, while in the second SVP they appear bright (Figs. 7(a) and 7(b)).
Next, to enhance the contrast between vessel and non-vessel pixels in both SVPs, we employ a multi-scale approach using Laplacian-of-Gaussian (LoG) filters and Gabor wavelets, as detailed in [33,34]. Briefly, this method first convolves the SVP with a bank of LoG filters of various standard deviations, and keeps the maximum response at each pixel. Next, the LoG filtered image is convolved with a bank of Gabor wavelets of varying wavelengths, scales, and orientations, and the maximum response at each pixel is kept. We sum the two SVPs to combine their information, enhance the quality of the combined SVP (Fig. 7(c)) with the BM3D algorithm, and convert the combined SVP into a mask of vessel locations by thresholding each pixel at the intensity of 0.6. Since the vessel bulge is larger than its observed shadow, we horizontally dilate our segmented vessels by convolving the vessel mask with a [1 × 7] averaging filter with uniform weights and setting all non-zero values to one.
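The sketch below illustrates this vessel-masking pipeline. The LoG sigmas, Gabor wavelengths, and orientations are illustrative placeholders rather than the tuned values of [33,34], and the BM3D enhancement step is omitted for brevity; `svp1` and `svp2` are the two SVPs described above.

```matlab
% Multi-scale vessel enhancement and masking (Section 3.6), hedged sketch.
% svp1 (vessels dark) is inverted so vessels are bright in both SVPs.
svps = {1 - svp1, svp2};
enhanced = cell(1, 2);
for n = 1:2
    resp = -Inf(size(svps{n}));
    for sigma = 1:0.5:3                            % LoG scales (assumed)
        h = fspecial('log', 2*ceil(3*sigma)+1, sigma);
        resp = max(resp, imfilter(svps{n}, -h, 'replicate'));  % max response
    end
    gb = gabor([4 8 12], 0:30:150);                % wavelengths x orientations
    enhanced{n} = max(imgaborfilt(resp, gb), [], 3);  % keep max Gabor magnitude
end
combined = enhanced{1} + enhanced{2};              % sum the two enhanced SVPs
combined = (combined - min(combined(:))) / (max(combined(:)) - min(combined(:)));
mask = combined > 0.6;                             % vessel mask (BM3D step omitted)
mask = conv2(double(mask), ones(1, 7) / 7, 'same') > 0;  % horizontal dilation
```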

Second pass segmentation
We segment each B-scan in the SD-OCT volume a second time to incorporate the new information from the segmented ONH and vessels. To deal with the missing layers in the ONH, each B-scan that crosses the ONH area is separated into three sections: the image to the left of the ONH, the ONH, and the image to the right of the ONH (Fig. 8). The left and right images are treated as independent images, and all internal retinal layer boundaries are segmented in both. Since we segment the Vitreous-NFL and RPE-Choroid boundaries in the ONH, the entire B-scan is used for segmenting these two boundaries. The Vitreous-NFL boundary is preliminarily segmented using the method detailed in Section 3.4 with a slight variation in the graph connectivity scheme. When segmenting the Vitreous-NFL boundary in the presence of a hyper-reflective peak above the ONH (consisting of the nerve fibers and the hyaloid artery without a clear border), we also create edges to vertically connected neighbors in our graph representation of the image so that the path can make the vertical leap necessary to segment the hyper-reflective peak.
Next, we remove hyper-reflective flecks (Fig. 8) within the NFL from the gradient image by zeroing pixels that have high-intensity pixels both above and below them in the denoised image; specifically, we zero pixels for which the ratio of the mean intensity within 16 µm below the target pixel to the mean intensity within 16 µm above the target pixel is less than 1.45. Next, we test whether our preliminary Vitreous-NFL segmentation cuts through a hyper-reflective peak in the NFL by measuring the mean pixel intensity in each column above the preliminary Vitreous-NFL segmentation. For every column in which this value is greater than 0.043, we set all pixels below the innermost intensity peak to zero before segmenting the Vitreous-NFL boundary again.
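As an illustration, the fleck test can be transcribed as follows; `img` is the denoised B-scan, `grad` the gradient image being cleaned, and in practice the test would be restricted to the NFL search region rather than the whole image.

```matlab
% Hyper-reflective fleck test (Section 3.7): zero gradient pixels whose
% surroundings are bright both above and below, i.e., whose below/above
% mean-intensity ratio stays under 1.45. 16 um = 10 px at 1.6 um/px.
win = 10;
for c = 1:size(img, 2)
    for r = win+1 : size(img, 1)-win
        below = mean(img(r+1 : r+win, c));
        above = mean(img(r-win : r-1, c));
        if below / above < 1.45
            grad(r, c) = 0;            % likely inside a fleck, not a boundary
        end
    end
end
```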
To segment the RPE-Choroid boundary in scans within the ONH region, we use the same method as detailed above in Section 3.4, except we segment the light-to-dark gradient image instead of the dark-to-light gradient image because the dark-to-light RPE-Choroid boundary disappears in the center of the ONH region.
In our second pass segmentation, we also take the segmented vessel locations into consideration. Our algorithm segments the NFL-GCL boundary under the vessels' hyper-reflective bulges by adjusting the search region in columns where vessels are detected. In these columns, the search region for the NFL-GCL boundary is decreased from the inner side by 16 µm, so that we detect the light-to-dark gradient corresponding to the bottom of the hyper-reflective bulge instead of any gradient within the bulge. However, if there is no prominent light-to-dark gradient within the top 35.2 µm of this adjusted search region column, we assume that we have missed the bottom of the bulge, so we revert to the original NFL-GCL search region. In vessel regions, we also create edges to vertically connected neighbors in our graph representation of the image so that the path can make the vertical leaps necessary to segment the hyper-reflective bulges. The vessel locations are not considered in B-scans of the ONH area because the vessels extend radially from the optic nerve.

Parameters
This section lists all algorithmic parameters needed for reproducing the results reported in this paper. Note that all parameters in this paper were determined empirically based on our training data set (described in Section 3.1), and that we used the same parameters for all experiments. As explained in Section 3.2, our SVM based algorithm automatically determines if a particular image benefits most from using the parameters for Group A (Table 1) or Group B (Table 2).

Animal models and SD-OCT imaging hardware, software, and protocol
All imaging was performed in vivo utilizing a Bioptigen Inc. Envisu R2200 Ultra-high-resolution SD-Ophthalmic Imaging System (SD-OIS), which is commercially available for small animal imaging (840 nm SD-OCT with customized 180 nm Superlum Broadlighter source, providing 1.6 µm digital axial resolution and 2 µm optical axial resolution). For comparison with our segmentation results, we employed the commercially available automated mouse retina layer segmentation software (Diver 2.0) also developed by Bioptigen Inc.
The retina of each animal was imaged using 100 unaveraged, equally spaced B-scans, each with 1000 A-scans, spanning a 1.2 mm × 1.2 mm region centered at the optic nerve. In our first set of experiments, which was used for validating the accuracy of our algorithm, Groups A and B consisted of the eyes of 10 WT (6 to 9 weeks old) and 10 Rho(−/−) [35] mice (4-8 weeks old), respectively. In WT mice, no significant change in the retinal layer thicknesses takes place after the fourth week (e.g., [36]); thus, mice from the age range we used can be considered appropriate controls in the context of this study. Rho(−/−) mice fail to develop rod photoreceptor outer segments, which causes progressive photoreceptor degeneration. By 4-5 weeks of age, the number of rod nuclei decreases by 10-15 percent, and by 12 weeks the retinas lose over 90% of the rod photoreceptors [35]. There was no overlap between these 20 mice and those used for training the SVM and segmentation algorithms. Finally, as an anecdotal demonstration of the algorithm's applicability to other types of pathologies, we imaged one 25-day-old mouse with the retinal degeneration 10 (Rd10) pathology [37] and one mouse displaying abnormal morphology in the OPL of unknown origin (Fig. 9). We used the exact same algorithmic parameters for all experiments in this paper.
All images used for validation were also manually segmented by two graders using custom software (DOCTRAP version 19.6) previously developed at Duke University [5]. Both manual graders started from scratch and were blinded to the automatic segmentation and to the other grader's markings. We chose the more experienced grader as the gold standard to which all other methods (S-GTDP, Bioptigen's Diver software, and the second grader) were compared.
Our automated algorithm was implemented in MATLAB and was incorporated as an extension of the DOCTRAP software package. For all experiments, our software was executed fully automatically.

Results
This section shows the retinal layer segmentation results obtained using the procedures described in Section 3. Section 4.1 examines the results of our SVM classification method, and Section 4.2 compares our S-GTDP fully automatic segmentation results to the gold standard manual segmentation.

SVM classification performance
To validate our SVM classification algorithm, we used the full data set consisting of 2000 unaveraged B-scans from 10 WT mice and 10 Rho(−/−) mice. We computed the fraction of images that were correctly classified in each of Groups A and B, as summarized in Table 3. By using the mode of the B-scan classifications as the classification of the entire retina, we distinguished WT retinas from Rho(−/−) retinas with 100% accuracy.

Segmentation validation
For quantitative comparison, a subset of 200 images, 10 linearly spaced images per volume, was selected from the 2000 unaveraged B-scans and segmented manually by two independent graders and automatically by S-GTDP and Bioptigen's Diver software. While performing these experiments, we noted that the Diver software does not segment any layers in certain B-scans. Even in B-scans that the Diver software does segment, selected sections of layer boundaries are reported as zero whenever the segmentation algorithm fails. Out of the 200 B-scans we used for comparison, 192 (99 for Group A and 93 for Group B) were segmented by the Diver software, and all of these had sections where layer boundaries were reported as zero, signifying invalid segmentation results. Furthermore, the Diver software only segments 8 layer boundaries, skipping the ELM and OIS-OS boundaries.
To provide a fair comparison between our algorithm and the Diver software, we set up two separate experiments. The first compared segmentation results in only the A-scans where the Diver software both segmented the B-scan and did not set segmentation values to zero; that is, we chose the very best results that could be obtained using Bioptigen's commercial software. In this subset, we compared both S-GTDP and Bioptigen to the same manual grader, and also compared the two manual graders to estimate inter-grader variability. These comparison results can be seen in Table 4 for Group A and Table 5 for Group B. Next, we demonstrated the accuracy of S-GTDP in all cases, including those where Bioptigen's algorithm failed, by comparing S-GTDP to a manual grader and also comparing the two manual graders to estimate inter-grader variability over the full set of 200 images. These comparison results can be seen in Table 6 for Group A and Table 7 for Group B.
For each B-scan, we calculated the absolute value of the average pixel difference between the locations of the automatic and manual segmentations for every retinal layer boundary. Then, we computed the average, standard deviation, and median of these absolute pixel differences over the entire data sets of Groups A and B. These pixel values were converted to µm by multiplying them by our system's axial resolution of 1.6 µm per pixel.
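For clarity, the metric reads as follows in MATLAB, with `auto` and `manual` holding per-B-scan boundary rows for one layer boundary (variable names are illustrative):

```matlab
% Error metric of Section 4.2: per B-scan, the absolute value of the
% average row difference between automatic and manual boundary positions,
% converted to micrometers at the system's 1.6 um/px axial resolution.
errUm = zeros(1, numel(auto));
for k = 1:numel(auto)
    errUm(k) = abs(mean(auto{k} - manual{k})) * 1.6;
end
summaryStats = [mean(errUm), std(errUm), median(errUm)];  % as in Tables 4-7
```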
To account for the fact that the retina is not present throughout the width of each B-scan, we automatically calculated the left and right sides of the retina within each B-scan and only compared the segmentation results within those boundaries. Also, to be fair to Bioptigen's Diver software, we did not compare segmentation results within the ONH, even though our algorithm is designed to segment the hyper-reflective peak. This is because the definition of the NFL boundary in the presence of the hyaloid artery is not globally accepted.
Additionally, each manual grader exhibited a bias when tracing layer boundaries by consistently tracing above or below the boundary by a constant distance. To determine this bias, we used a training set of 20 images, 10 from each of Groups A and B, and minimized the sum of the absolute pixel difference values in each comparison scenario. To account for the bias, we shifted each layer boundary in the automatic segmentation by S-GTDP down by bias values of 0.9, 0.8, 0, 0.2, −0.1, 0.7, 0.7, 0.6, 0.4, and 0.5 pixels, respectively. We also shifted each layer boundary in the automatic segmentation by Bioptigen down by bias values of 0.5, 5.4, 11, 1.8, 8.1, 5.4, 0.5, and −1.6 pixels, respectively. Note that there are only 8 bias values for Bioptigen because it does not segment the ELM and OIS-OS boundaries. We did not correct for the bias between the two manual graders because their difference reflects the inherent subjectivity of manual markings.
The results in Tables 4 through 7 show that our automatic S-GTDP algorithm segmented retinal layer boundaries, on average, more closely to the more experienced manual grader than did the second manual grader. In the limited results for Group A (Table 4), the two manual graders differed in their layer boundary segmentation by an average of 2.19 µm, while our fully automatic algorithm differed from the more experienced manual grader by 2.15 µm. Similarly, in the limited results for Group B (Table 5), the two manual graders differed by an average of 2.60 µm, while our automatic algorithm differed from the more experienced manual grader by only 1.90 µm. In the full results for Group A (Table 6), the two manual graders differed by an average of 2.19 µm, while our automatic algorithm differed from the more experienced manual grader by 2.17 µm. Finally, in the full results for Group B (Table 7), the two manual graders differed by an average of 2.66 µm, while the automatic algorithm differed from the more experienced manual grader by only 1.96 µm.

Conclusion
The qualitative results in Fig. 9 demonstrate the robustness of our S-GTDP framework in accurately segmenting retinal layers of murine eyes in SD-OCT images, even in the presence of various pathological features. The quantitative results in Tables 4 through 7 show that our automatic segmentation closely matches the gold standard of manual segmentation and outperforms Bioptigen's commercially available Diver software. These results further demonstrate that our algorithm has an overall consistent performance, as its accuracy was not diminished in the images that the Diver software failed to segment meaningfully. We also show that the results obtained by our algorithm match those of the more experienced manual grader, on average, more closely than the results from the second manual grader (Tables 4 through 7), which attests to the ability of our algorithm to reduce the subjectivity inherent in manual segmentation. This is highly encouraging for reducing the time and manpower required to segment such features in preclinical studies.
The SVM algorithm in Section 3.2 utilized a linear horizontal projection to yield the desired feature vectors. This is because the mouse retina appeared relatively flat in the Bioptigen system. If a non-flat retina is encountered (e.g., when utilizing other imaging systems or other animal species), we can easily modify the algorithm to remove its reliance on a flat retina. As described and implemented in our previous publications [4,5], we can attain a rough estimate of the retinal curvature by fitting a convex hull to a pilot estimate of the OS-RPE. The image's A-scans are then re-arranged according to this pilot estimate to flatten the retina, after which a horizontal projection is valid; a sketch of this flattening step is given below.
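In the following sketch, a quadratic polynomial fit stands in for the convex-hull curvature estimate of [4,5] for brevity; `img` is a B-scan and `rpe` a pilot OS-RPE row estimate per column, both assumed inputs.

```matlab
% Flatten a B-scan by shifting each A-scan according to a smooth estimate
% of the retinal curvature (quadratic fit used here in place of the
% convex hull of [4,5]).
cols = 1:size(img, 2);
p = polyfit(cols, rpe, 2);                 % smooth curvature estimate
baseline = round(polyval(p, cols));
shift = baseline - min(baseline);          % per-column shift to flatten
flat = img;
for c = cols
    flat(:, c) = circshift(img(:, c), -shift(c));
end
```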
The accuracy of our proposed layer segmentation framework has yet to be quantified for other scenarios, including the segmentation of mouse SD-OCT images with other pathologies. Extending the framework to such cases can be achieved using approaches such as one-against-all classification, in which a separate SVM would be trained for each pathology type to classify a B-scan as belonging to that pathology class or to the class of "everything else." The design of a fully automated layer segmentation algorithm capable of dealing with different manifestations of retinal pathologies from different diseases is a challenging problem. While many algorithms developed in the past few years deal with a specific type of pathology, the users of these algorithms often must know the specific pathology in their data set and select the appropriate algorithm. Our proposed two-step approach is the simplest case of a general framework for the automated segmentation of retinal boundaries from eyes with different anatomic and pathologic features. In this framework, the first, SVM-based step detects the specific pathology and selects the appropriate algorithm for the data set at hand. The second step can utilize any of the layer segmentation algorithms developed in the past few years. Our future work will address the more challenging case of segmenting retinal layers in human eyes with multiple types of pathology from different diseases, such as diabetic retinopathy, macular hole, and age-related macular degeneration.