Multimodal registration of SD-OCT volumes and fundus photographs using histograms of oriented gradients.

With the availability of different retinal imaging modalities such as fundus photography and spectral-domain optical coherence tomography (SD-OCT), a robust and accurate registration scheme that enables utilization of this complementary information is beneficial. The few existing fundus-OCT registration approaches contain a vessel segmentation step, as the retinal blood vessels are the most dominant structures that the pair of images have in common. However, errors in the vessel segmentation from either modality may cause corresponding errors in the registration. In this paper, we propose a feature-based registration method for registering fundus photographs and SD-OCT projection images that benefits from vasculature structural information without requiring blood vessel segmentation. In particular, after a preprocessing step, a set of control points (CPs) is identified by looking for corners in the images. Next, each CP is represented by a feature vector that encodes the local structural information by computing histograms of oriented gradients (HOG) from the neighborhood of each CP. The best matching CPs are identified by calculating the distance between their corresponding feature vectors. After removing the incorrect matches, the best affine transform that registers fundus photographs to SD-OCT projection images is computed using the random sample consensus (RANSAC) method. The proposed method was tested on 44 pairs of fundus and SD-OCT projection images of glaucoma patients; the results showed that the method successfully registered the multimodal images, producing a registration error of 25.34 ± 12.34 μm (0.84 ± 0.41 pixels).


Introduction
Fundus imaging and spectral-domain optical coherence tomography (SD-OCT) are two common types of imaging modalities that provide different information about the human retina. Fundus imaging refers to the process of acquiring a 2D representation of the 3D retina by means of reflected light. With this definition, the broad category of fundus imaging includes modalities/techniques such as red-free fundus photography, color fundus photography, stereo fundus photography, scanning laser ophthalmoscopy (SLO), and fluorescein angiography [1]. On the other hand, spectral-domain OCT, despite its recent appearance (the first SD-OCT device became commercially available less than 10 years ago [2]), has become the clinical standard of care for several eye diseases [1]. This is due to providing 3D information of retinal structures, such as the intraretinal layers and the optic nerve head, that is not available via fundus imaging. Both fundus and OCT imaging techniques are vastly utilized in the diagnosis and management of eye diseases such as diabetic retinopathy, glaucoma, and age-related macular degeneration (AMD). From the clinical perspective, a better automated alignment of OCT-fundus images can directly provide clinicians with an insight into structures they need to follow/monitor for diagnosis or management. For example, in glaucoma it is important to monitor changes to the optic disc for both diagnosis and management of disease [3,4]. Moreover, studies have shown that combining complementary information from both sources is beneficial for automated segmentation of retinal structures such as blood vessels [5], optic disc and cup boundaries [6,7], and intraretinal surfaces (e.g. the internal limiting membrane (ILM)) [8,9]. However, the performance of these multimodal segmentation approaches is dependent on the quality of the registration. For instance, in [5], a few scans were excluded from the test set due to relatively large registration errors.
Therefore, combining complementary information from different imaging modalities not only could benefit physicians in diagnosing and monitoring ophthalmic diseases, but is also advantageous for the automated techniques that are utilized for processing imaging data.
As mentioned above, there are various techniques for retinal imaging, each of which produces a different type of image (i.e. with different size, resolution, and intensity profile) of the retina. A great deal of effort, spanning a variety of techniques, has gone into registering retinal images generated by different modalities [10][11][12][13][14][15][16][17][18][19]. Some of the previous works focused on stitching (mosaicing) images of the same modality with the aim of obtaining a broader field of view [12,17,20]. In addition, there are works that attempted to register multimodal retinal images, including fluorescein angiogram and red-free fundus pairs [14,15,21], SLO and color fundus photographs [22,23], and SD-OCT and color fundus photographs [7,22,[24][25][26]. The focus of the current work is on multimodal registration of fundus and SD-OCT modalities.
Generally, the pixel intensities between multimodal retinal image pairs might differ; however, compared to other types of multimodal retinal imaging, the intensity profiles of color fundus photographs and SD-OCT images are substantially different. Hence, in order to benefit from the most dominant structural information that both modalities share (i.e. the retinal blood vessels), the current color fundus and OCT registration methods [7,[24][25][26] include a vessel segmentation step as part of their algorithms. The retinal vasculature is the best candidate for identifying the corresponding points (e.g. blood vessel bifurcations and crossing points or blood vessel ridges) between the two modalities.
However, the vessel segmentation errors, in either modality, could potentially introduce some errors to the registration process as the corresponding points between image pairs are identified from the blood vessel maps. For instance, the method proposed in [27] for blood vessel segmentation from SD-OCT modality could produce false positives due to the presence of the optic nerve head region [5]. Additionally, segmenting the blood vessels from both modalities is a time-consuming task which necessitates additional considerations (e.g. parameter tuning) when the dataset contains different fundus photographs (stereo and non-stereo fundus photographs) with different scales and sizes such as the one used in this work. Therefore, we propose a robust feature-based registration framework that is capable of aligning fundus (stereo and non-stereo) and SD-OCT modalities without requiring blood vessel segmentation with the aim of speeding up the registration process. Feature-based registration methods have been used for registering a variety of images [18] and they have also demonstrated successful results in registering unimodal retinal images [17,19,20]. Control points detection is a very important step in feature-based registration algorithms as the final landmarks that are used for computing the registration transformation are selected from the CPs. In our feature-based framework, we propose to identify the CPs from the actual images (not their vessel maps) by detecting the corners (i.e. points for which there are two different dominant edge directions in a local neighborhood of the point) in the images using the features from accelerated segment test (FAST) corner detection approach [28,29] which, as its name suggests, is very fast and computationally efficient. 
Additionally, in order to find the correspondence between two sets of points, we extract the local structural information in the neighborhood of each CP by computing the histogram of oriented gradients (HOG) [30] feature and identify the best matching feature descriptors.
In particular, the proposed method starts with a few preprocessing steps including: 1) creating a 2D projection image from the 3D SD-OCT volume, 2) enhancing the contrast of the images, and 3) rescaling the fundus photographs. Next, the control points, which are identified by FAST corner detection, are represented by a descriptor. More specifically, as previously mentioned, in order to avoid the use of intensity information and benefit from the structural features (i.e. retinal vasculature) without attempting to segment the blood vessels, histograms of oriented gradients is employed as the CP's descriptors. The approximate nearest neighbor method [31] is utilized in a forward-backward fashion to identify the matching descriptors. Finally, after removing the incorrect matches, the registration transformation is calculated using the random sample consensus (RANSAC) method [32].

Methods
The overall flowchart of the proposed method is depicted in Fig. 1 and can be summarized in five major steps as follows: 1) a preprocessing step including SD-OCT projection image computation, contrast enhancement, and fundus image rescaling (Section 2.1), 2) identifying the control points (Section 2.2), 3) computing gradient-based features (Section 2.3), 4) feature matching (Section 2.4), and 5) calculating the transformation (Section 2.5).

Preprocessing
In order to be able to register the 2D fundus photographs to the 3D SD-OCT volumes, a 2D projection image of the volume is required. The OCT projection image is obtained using the method proposed in [34], where a multi-resolution graph-theoretic approach is employed to segment the intraretinal surfaces within the 3D SD-OCT volumes [33,34]. In order to obtain the projection image, two intraretinal surfaces are segmented: the junction of the inner and outer segments of the photoreceptors (IS/OS) and the outer boundary of the retinal pigment epithelium (RPE), also called the Bruch's membrane (BM) surface. A thin-plate spline is fitted to the BM surface, from which the OCT volume is flattened to obtain a consistent optic nerve head shape across patients [33]. The SD-OCT projection image is computed by averaging the voxel intensities in the z-direction between the IS/OS junction and BM surfaces (Fig. 2). Two types of color fundus photographs exist in the dataset used in this work: 1) stereo fundus images (Fig. 3a, covering almost a 20-degree field of view), where the optic nerve head region of the retina is imaged from two different angles and placed side by side, and 2) ONH-centered non-stereo fundus photographs (Fig. 3b), which cover a broader field of the retina (35 degrees). For the stereo fundus photograph pairs, the image that has higher quality and fewer imaging artifacts is selected for the registration. In addition, there is extra information included on the fundus photographs (e.g. dates, text, and color bars) which produces strong corners that could distract the registration process; this information was automatically removed from the images. Hence, a binary mask indicating the region of interest of each image was produced by thresholding the images followed by a morphological opening operator (Fig. 3).
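Per A-scan, the projection-image computation reduces to averaging the voxel intensities between the two segmented surfaces. A minimal NumPy sketch is shown below; the function and argument names are ours (not from the authors' implementation), and the surface arrays are assumed to hold z-indices of the IS/OS and BM surfaces already produced by the graph-theoretic segmentation:

```python
import numpy as np

def oct_projection(volume, isos, bm):
    """Average voxel intensities along z between the IS/OS and BM surfaces.

    volume : (X, Y, Z) SD-OCT intensity array
    isos, bm : (X, Y) integer z-indices of the two surfaces, isos <= bm
    Returns an (X, Y) projection image.
    """
    X, Y, _ = volume.shape
    proj = np.zeros((X, Y), dtype=float)
    for x in range(X):
        for y in range(Y):
            z0, z1 = int(isos[x, y]), int(bm[x, y])
            # inclusive average over the sub-retinal-pigment-epithelium band
            proj[x, y] = volume[x, y, z0:z1 + 1].mean()
    return proj
```

In practice the same computation can be fully vectorized with a z-index mask, which matters for 200×200×1024 volumes.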
Furthermore, the blood vessels have the highest contrast in the green channel of the fundus photographs; hence, only the information of the green channel was used in our method. The control points in the images were detected by looking for corners, which are sensitive to the pixel intensities; therefore, in order to increase the chance of finding the best matching points, the number of CPs needs to be maximized. Consequently, the contrast of both the fundus photographs (green channel) and the OCT projection images was enhanced and normalized using the contrast limited adaptive histogram equalization (CLAHE) method [35].
Since the images are from different modalities, they differ in size and resolution. Moreover, the size and resolution of the two types of fundus photographs in the dataset are completely different from each other. In order to bring all the images to a similar scale and resolution, the fundus photographs are scaled such that their optic disc has a similar size to the optic disc in their corresponding OCT projection image. Since the optic disc has a roughly circular shape, the location and size of the optic disc in both modalities are approximated using a circular Hough transform. First, a grayscale morphological closing operator with a ball-shaped structuring element is applied to both enhanced images in order to remove the blood vessels (attenuate the dark features in the images). Subsequently, the gradient of the closed image is computed and the circular Hough transform is applied to the gradient magnitude image. The center and radius of the most dominant circle in the fundus (c_f, r_f) and OCT (c_o, r_o) images estimate the optic disc location and size in the two modalities, respectively (Fig. 4). Since the resolution of the OCT projection images is consistent across the entire dataset, both stereo and non-stereo fundus photographs are scaled (according to their corresponding OCT projection images) such that r_f = r_o.
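The circular Hough voting used to localize the optic disc can be sketched as follows. This is a simplified single-radius accumulator over a thresholded gradient-magnitude image (the fixed radius and all names are our simplifications; a full implementation would sweep a range of radii); the fundus image would then be resized by the factor r_o / r_f:

```python
import numpy as np

def hough_circle_center(edge_mag, radius, thresh=0.5):
    """Vote for circle centers of a fixed radius in a gradient-magnitude image.

    Each strong edge pixel casts votes at all candidate centers lying at
    `radius` from it; the accumulator maximum estimates the disc center.
    """
    H, W = edge_mag.shape
    acc = np.zeros((H, W))
    ys, xs = np.nonzero(edge_mag > thresh)
    angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes
    return np.unravel_index(acc.argmax(), acc.shape)
```

With the disc radii r_f and r_o estimated in both modalities, the rescaling step is a single resize of the fundus photograph by r_o / r_f.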

Control point detection
A control point (also known as an interest point) is a pixel which has a well-defined position and can be robustly detected. Two properties of interest points are having high local information content and repeatability between different images. Identifying a sufficient number of CPs in the images is key in feature-based registration methods, as a lack of a sufficient number of CPs increases the risk of unsuccessful registration or decreases the robustness of the method. Bifurcations are reasonable candidates to be utilized as CPs due to the fact that the blood vessel structure remains unchanged between modalities. However, obtaining bifurcations requires segmenting the blood vessels from both modalities, which could be challenging in poor-quality images. Hence, instead of looking for bifurcations, we propose to utilize corners in the images as the CPs. Features from accelerated segment test (FAST) [28,29] was employed to detect corners in the image, as this method has high accuracy and robustness and is able to find the corners very quickly. Consequently, there is no need for vessel segmentation, and by detecting corners, most of the bifurcations are also detected as they resemble corners in the images. The FAST corner detection algorithm determines whether a pixel is a corner utilizing its neighboring pixel intensities. More specifically, consider an image I and a query pixel p, which is to be identified as a corner or not, with the intensity I_p, and also consider a Bresenham circle of radius 3 containing 16 pixels surrounding the pixel p [29]. The pixel p is identified as a corner if the intensities of N contiguous pixels out of the 16 are either above (I_N > I_p + T) or below (I_N < I_p − T) the intensity of the query pixel, I_p. T is a predefined threshold value and I_N is the intensity of each of the N contiguous pixels, where N ∈ {9, 10, 11, 12}.
The algorithm quickly rejects the pixels that are not corners by comparing the intensities of pixels 1, 5, 9 and 13 of the circle with I_p (Fig. 5). If the intensities of at least three of these four locations are not above I_p + T or below I_p − T, then p is not a corner; otherwise, the algorithm checks all 16 points. This procedure is repeated for all pixels in the image. In order to avoid the distraction caused by the magnified background noise (produced in the contrast enhancement step) and to avoid obtaining too many corners in the background (especially for the OCT projection image), a smoothing Gaussian filter is applied to the images before corner detection (Fig. 6).
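The full segment test can be sketched directly from its definition: a pixel is a corner if N contiguous pixels on the 16-pixel Bresenham circle are all brighter than I_p + T or all darker than I_p − T. The following is an illustrative NumPy version, not the optimized FAST implementation of [28,29] (which adds the quick four-pixel rejection and a machine-learned decision tree):

```python
import numpy as np

# Bresenham circle of radius 3: 16 (dy, dx) offsets around the query pixel
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, y, x, T=20, N=12):
    """Full segment test: N contiguous circle pixels all brighter than
    I_p + T or all darker than I_p - T."""
    Ip = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dy, dx in CIRCLE])
    bright = ring > Ip + T
    dark = ring < Ip - T
    for flags in (bright, dark):
        # duplicate the ring so wrap-around runs are counted as contiguous
        doubled = np.concatenate([flags, flags])
        run = best = 0
        for f in doubled:
            run = run + 1 if f else 0
            best = max(best, min(run, 16))
        if best >= N:
            return True
    return False
```

The choice of N trades sensitivity against the number of detected corners; the comparisons on the integer-cast intensities avoid uint8 overflow when adding T.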

Gradient-based feature computation
The method proposed in this work for extracting features has similarities with SIFT descriptors and is inspired by the descriptor proposed in [30] for human detection. The basic idea behind the feature computation method is characterizing the local appearance of each CP's neighborhood by the distribution of local intensity gradients or edge directions. More specifically, the neighborhood of size M_N × M_N around each control point is defined using a small spatial block, which is divided into N × N cells; an 8-bin histogram of gradient directions over 0°–180° is computed from the pixels of each cell (Fig. 7). Constraining the directions to 180° instead of 360° causes the histogram to be less distinctive but, at the same time, more robust to the intensity changes that are quite possible between multimodal images. The histograms from all cells in the block are concatenated to form a 1-D vector of size 8 × N × N. In order to further the invariance to affine changes in illumination and contrast, all histograms in a block are normalized such that the concatenated feature vector has unit length. The normalized concatenated vector, which includes the components of all normalized cell histograms in a block, is called the histograms of oriented gradients (HOG) descriptor and represents the local shape characteristics (e.g. gradient structure) of each CP's neighborhood.
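As a concrete sketch of the descriptor, the following computes, for a square patch around a CP, per-cell 8-bin histograms of unsigned gradient orientations (0–180°) weighted by gradient magnitude, concatenates them, and L2-normalizes the result. The cell layout and magnitude weighting here are our assumptions following [30]; the authors' exact block geometry may differ:

```python
import numpy as np

def hog_descriptor(patch, n_cells=2, n_bins=8):
    """HOG descriptor of a square patch centered on a control point.

    The patch is split into n_cells x n_cells cells; each cell contributes an
    n_bins-bin histogram of unsigned gradient orientations (0-180 degrees),
    weighted by gradient magnitude.  The concatenated, L2-normalized vector
    has length n_bins * n_cells**2.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0        # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    s = patch.shape[0] // n_cells
    feat = []
    for i in range(n_cells):
        for j in range(n_cells):
            cb = bins[i*s:(i+1)*s, j*s:(j+1)*s].ravel()
            cm = mag[i*s:(i+1)*s, j*s:(j+1)*s].ravel()
            feat.append(np.bincount(cb, weights=cm, minlength=n_bins))
    v = np.concatenate(feat)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v   # unit length for illumination invariance
```

The unit-length normalization is what makes the fixed match threshold of 0.2 in the matching step meaningful across images with different contrast.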

Feature matching
In order to find the best matching CPs between a pair of multimodal images, the method in [31] for identifying approximate nearest neighbors in high dimensions was employed. In addition to using a match threshold, the method eliminates ambiguous matches. A match is considered ambiguous when it is not remarkably better than the second-best match. Assume H_f and H_o are the sets of HOG feature vectors computed from the fundus and OCT images, respectively, and that the distance between two feature vectors is measured in the feature space as in Eq. (1). For each feature vector in H_f, the nearest neighbors in H_o are examined as follows:
• The feature vectors having a distance larger than a match threshold (here 0.2) are eliminated from further investigation.
• The ambiguous match ratio is calculated by dividing the distance of the first nearest neighbor feature vector by the distance of the second nearest neighbor feature vector.
• If the match ratio between the two distances is greater than a predefined ratio threshold, the match is considered ambiguous and eliminated.
The method iterates over H f until all feature vectors are examined. Even though this approximate nearest neighbor method produces more reliable matches, if the images contain repeating patterns (which is not the case for retinal images), the corresponding matches are likely to be eliminated as ambiguous. In order to be more conservative, the ratio threshold was set to 0.8 in this study.
Identifying the match pairs utilizing the method described above could result in assigning a feature vector from H_o to multiple feature vectors from H_f, due to the fact that the iterations are performed independently of each other. In order to solve this issue and identify unique matches, a forward-backward search is performed: in the backward mode, the same procedure is applied to H_o and the best matching feature vectors from H_f are identified. The set of final matching pairs, S, includes only the unique matches which are common between the forward and backward modes (Fig. 8).
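The forward-backward matching with the distance threshold (0.2) and ratio test (0.8) can be sketched as follows; this is a brute-force stand-in for the approximate nearest neighbor search of [31], with function names of our choosing:

```python
import numpy as np

def ratio_matches(A, B, max_dist=0.2, ratio=0.8):
    """One-directional matches A -> B with a distance threshold and a
    first-to-second nearest-neighbor ratio test (B must have >= 2 vectors)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    out = {}
    for i in range(len(A)):
        j1, j2 = np.argsort(D[i])[:2]
        # reject weak matches and ambiguous ones (too close to the runner-up)
        if D[i, j1] <= max_dist and D[i, j1] / D[i, j2] <= ratio:
            out[i] = j1
    return out

def mutual_matches(A, B, **kw):
    """Keep only pairs found in both the forward and backward passes."""
    fwd = ratio_matches(A, B, **kw)
    bwd = ratio_matches(B, A, **kw)
    return [(i, j) for i, j in fwd.items() if bwd.get(j) == i]
```

The mutual-consistency check is what guarantees each OCT descriptor is assigned to at most one fundus descriptor.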

Transformation computation
The set of matching CP pairs detected in Sec. 2.4 is utilized to compute the transformation matrix. However, the algorithm does not guarantee 100% accuracy in matching CPs, which results in possible incorrect matches. Therefore, the incorrect matching pairs are identified using the geometrical distribution of all matching pairs. Assume there are L matching pairs; the Euclidean distances between all CP pairs in the image domain, dist_{1×L}, as well as the mean, dist_m, and standard deviation, dist_sd, of the distance vector are computed. Consider a pair, (p_f(x_i, y_i), p_o(x_i, y_i)), and its corresponding distance, dist(i). If the points are too close to each other (dist(i) < dist_m − dist_sd) or too far from each other (dist_m + dist_sd < dist(i)), the pair is marked as an incorrect matching pair (Fig. 9). Here, in order to be conservative and keep the high-quality matches, the points with a distance of more than one standard deviation from the average distance, dist_m, were marked as outliers. Identifying the incorrect matching pairs using this procedure is achievable under the assumptions that the number of correct pairs is larger than the number of incorrect pairs, the images are at similar scales, there is no reflection involved, and the rotation needed to align the multimodal retinal images is minimal.
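The one-standard-deviation rule for flagging incorrect matches is straightforward to express; this sketch uses our naming and assumes, as stated above, that the images are already at similar scales:

```python
import numpy as np

def distance_outliers(pf, po):
    """Flag matched CP pairs whose image-domain displacement deviates by more
    than one standard deviation from the mean displacement.

    pf, po : (L, 2) arrays of matched fundus and OCT control points
    Returns a boolean mask; True marks an incorrect matching pair.
    """
    d = np.linalg.norm(pf - po, axis=1)      # dist(i) for each pair
    m, s = d.mean(), d.std()
    return (d < m - s) | (d > m + s)
```

Note that the rule only works when correct matches dominate; a single gross outlier inflates dist_sd but is still flagged, as the test below illustrates.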
In addition to removing the incorrect matches, a refinement step is applied to the CP pairs which allows for small adjustments of CP locations within a small neighborhood of each CP (5 × 5 window). The refinement step exists for two reasons: 1) to account for possible errors in corner (CP) detection due to presence of noise, imaging artifacts and low contrast and 2) since the images come from two different modalities with significantly different intensity profiles, it is possible that pixels in the neighborhood of the CP are actually better matching candidates than the CP itself. Thus, the HOG feature vector is computed for all 25 pixels inside each CP's neighborhood (in both modalities) and the two feature vectors with minimum distance in the feature space are identified and their corresponding pixels are considered as the new CPs. Note that, both, one, or none of the CPs could be updated through the refinement step.
In order to estimate the affine transformation, the random sample consensus (RANSAC) method is utilized [32]. Despite removing the incorrect matches from the matching set, the chance of the presence of incorrect matching CPs is not zero. Therefore, the objective is to robustly calculate the transformation from S, which may contain outliers (i.e., low-quality or incorrect matches). The algorithm performs as follows:
1. Randomly select a subset of three pairs, s, from S and instantiate the affine transformation from this subset. Here, the sampling is with replacement.
2. Apply the transformation to the rest of the pairs in the set and determine the set of pairs S_i for which the distance of the transformed control point in the fundus image, ĉp_f, from its corresponding control point in the OCT image, cp_o, is less than a predefined threshold. The set S_i is the consensus set of the sample and defines the inlier pairs of S.
3. Repeat the previous two steps a large number of times and select the largest consensus set S i . The affine transformation is re-estimated utilizing all the CP pairs in the subset S i [32].
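The three-step loop above can be sketched as follows, using a least-squares affine fit both to instantiate the model from three pairs and to re-estimate it on the largest consensus set. The iteration count, threshold, and names are illustrative, not the authors' settings:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst (both (N, 2)).

    Returns a (3, 2) matrix M such that [x, y, 1] @ M = [x', y'].
    """
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def ransac_affine(src, dst, n_iter=500, thresh=2.0, seed=0):
    """RANSAC: repeatedly fit an affine model to 3 random pairs, keep the
    largest consensus set, then re-fit the model on that set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)   # minimal sample
        M = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, ones]) @ M
        inl = np.linalg.norm(pred - dst, axis=1) < thresh
        if inl.sum() > best.sum():
            best = inl
    return fit_affine(src[best], dst[best]), best      # re-fit on consensus set
```

Because any model that fits the bulk of correct pairs within the threshold must be near the true transform, a handful of gross outliers cannot end up in the winning consensus set.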

Data
The performance of the proposed method was evaluated on a multimodal dataset including color fundus photographs and SD-OCT volumes of 44 open-angle glaucoma or glaucoma-suspect patients. The optic nerve head (ONH)-centered SD-OCT volumes were acquired using a Cirrus HD-OCT device (Carl Zeiss Meditec, Inc., Dublin, CA) in one eye (per patient) at the University of Iowa. Each scan has a size of 200×200×1024 voxels (in the x-y-z direction, respectively), which corresponds to a volume of size 6×6×2 mm³ in the physical domain (20°), and the voxel depth was 8 bits in grayscale. Additionally, the optic disc region of each patient's retina was also imaged using a fundus camera. Almost half of the patients (twenty-four) were imaged using a Nidek 3-Dx stereo retinal camera (3072×2048 pixels, corresponding to nearly 20°). The remaining twenty patients had a regular color fundus photograph acquired using a Topcon 50-DX camera (2392×2048 pixels, corresponding to 35°). The pixel depth was 8 bits in each of the red, green, and blue channels. Some of the image pairs were taken on the same day, while others were taken months or even more than a year apart.

Experiments
Since we are registering multimodal images with completely different intensity profiles, in order to quantitatively evaluate the proposed method, intensity-based metrics are avoided and the evaluation is performed using point-based metrics. The reference standard needed for the point-based evaluation is obtained by manually identifying a set of landmark pairs from the original images. In order to assure the collection of appropriate landmarks capable of supporting a fair evaluation, we marked five pairs of points that were not too close to each other and were as fairly distributed as possible. The manual landmarks are mostly selected from vasculature regions that create unique and recognizable points in both images, such as corners and bifurcations. The manual registration was performed by computing the affine transformation using three randomly selected pairs from the set of landmarks identified for evaluation purposes. In order to present comparative results, in addition to the manual registration, the performance of the proposed method was also compared to our previous iterative closest point (ICP) registration approach reported in [7,36]. The ICP-based method does not use intensity information; however, as part of the algorithm, the blood vessels need to be extracted and the registration transformation is actually computed using the vessel maps. The registration accuracy was evaluated using the root mean square (RMS) error, which measures the amount of misalignment between the manual landmarks of the OCT images and their corresponding transferred landmarks of the fundus photographs:

RMS = sqrt( (1/n) Σ_{i=1}^{n} ‖p_{o,i} − p̂_{f,i}‖² ),

where p_{o,i} and p̂_{f,i} are the i-th manual point in the OCT image and its corresponding transferred manual point in the fundus photograph, respectively. The mean, standard deviation, and maximum of the RMS errors of the manual and automated approaches were compared. Furthermore, the running time and the success rate of the registration methods were compared to each other.
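The RMS metric itself is a one-liner over the n landmark pairs (our naming):

```python
import numpy as np

def rms_error(p_oct, p_fundus_transformed):
    """Root-mean-square distance between the OCT landmarks and the
    corresponding transformed fundus landmarks (both (n, 2) arrays)."""
    d2 = np.sum((p_oct - p_fundus_transformed) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))
```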
The registration was considered successful if the RMS error was less than or equal to the maximum error obtained using the manual registration approach (and so, by definition, the manual registration approach would have a 100% success rate). The running time of the manual registration includes the time required for manual landmark identification and transformation computation. All experiments were performed using a PC with Windows 7 64-bit OS, 64 GB RAM, and an Intel(R) Xeon(R) 3.70 GHz CPU. Figure 10 shows the comparative results of registering two pairs of fundus (stereo and non-stereo) and OCT images using the ICP, manual, and proposed methods. The checkerboard images are provided for qualitative comparison of the registration results. Quantitatively, the mean, standard deviation, and maximum RMS errors calculated using the entire dataset are reported in Table 1.

Results
Based on the RMS values, the manual registration and the proposed method had significantly smaller errors than the ICP registration method (p < 0.05). However, the RMS errors of the manual registration were not significantly different from those of the proposed method (p > 0.05). The success rate and the running time of the registration methods are reported in Table 2. By the definition of a successful registration, the manual registration has a 100% success rate. The proposed method achieved a 98% success rate, as it failed to register one pair, while the ICP registration method failed to successfully register eight cases. The running time of the proposed method was significantly lower than those of the manual and ICP registration methods (p < 0.05). Similarly, the running time of the ICP registration method was significantly smaller than that of the manual registration (p < 0.05). Additionally, Fig. 11 shows ICP registration failures (i.e. the RMS error was greater than 0.127 mm = 4.23 pixels) due to low imaging quality and the presence of motion artifact in the OCT projection image. The manual and proposed methods did not fail on these cases; however, they produced slightly larger registration errors.

Discussion and conclusion
In this paper, we proposed a feature-based registration method for aligning optic nerve head-centered SD-OCT volumes and fundus photographs. Since the intensities of the images are substantially different, the registration needs to rely only on the structural features that the image pairs have in common. Whereas existing fundus and SD-OCT registration approaches include a vessel segmentation step as part of their algorithms, where errors in vessel segmentation could potentially propagate into the registration process, in this work we employed histograms of oriented gradients features to capture the structural information in the images so as not to require the segmentation of blood vessels. Eliminating the vessel segmentation step is beneficial as it prevents propagating possible segmentation errors (e.g. false positives in the vessel maps near the optic disc [5]) to the registration process. Additionally, removing the vessel segmentation step reduces the time required for registering the fundus and OCT image pairs. In addition to the significant intensity change between image pairs, which is one item that differentiates fundus/SD-OCT registration from other types of retinal image registration, very low-contrast fundus photographs, the presence of extra text information on the stereo fundus photographs when the second image of the pair is not available (Fig. 10(A)), and the presence of imaging artifacts in SD-OCT projection images make the registration more challenging. Since acquiring SD-OCT volumes takes a few seconds, the OCT projection images could suffer from motion artifact (Fig. 11(B) and 11(C)). Volume truncation is another type of SD-OCT imaging artifact which appears as a black region in the projection image (Fig. 11(B)) and makes the registration difficult.
However, since the transformation matrix is computed using the RANSAC algorithm with a sufficient number of matching CPs between the two modalities, our proposed method was able to successfully handle these imaging artifacts.
Additionally, our proposed method needs on average less than 3 seconds to perform the registration which is considerably fast. The most time consuming part of typical feature-based registration algorithms is identifying the control points for which all pixels in both images need to be examined. However, utilizing FAST corner detection for identifying the control points in our proposed method has the advantage of quickly rejecting the pixels that are not corners using a computationally efficient test on the neighboring pixels of the query pixel.
Furthermore, the proposed method is capable of registering macula-centered OCT volumes and fundus photographs, which do not contain the optic nerve head region. Since the optic disc appears differently in OCT projection images and fundus photographs, the absence of the optic disc makes registering the macula-centered retinal images less challenging. Moreover, the applications of the proposed method could potentially be extended to retinal mosaicing and to registering other multimodal retinal images such as fluorescein angiography, SLO, and red-free fundus photographs. For instance, we have successfully registered en-face optic-nerve-head-centered and macula-centered OCT images (where the overlapping region between the two images is only around 20%) using our proposed registration method (Fig. 12). Moreover, our proposed method could also be extended to the registration of other image pairs, such as corneal nerve images.
Even though the histograms of oriented gradients features are not rotationally invariant, they were suitable for registering the multimodal retinal images in our dataset, due to the fact that both modalities were acquired in the optic nerve head-centered mode and, therefore, significant rotations were not required for aligning the image pairs. However, employing the proposed method in other applications, where rotation is necessary to register two images, would simply require replacing HOG with relative HOG (RHOG) features, which are rotationally invariant as they are computed with respect to the main orientation of the control points. The main direction of each CP is obtained by computing the resultant of the gradient directions of all pixels inside the neighborhood of each CP using a 2D Gaussian kernel.
In summary, our proposed feature-based registration method was capable of registering stereo and color fundus photographs to their corresponding SD-OCT projection images. In particular, after creating the 2D projection image from the SD-OCT volume, the contrast of both modalities was enhanced and the fundus photographs were scaled such that the sizes of the optic discs, which were approximated using a circular Hough transform, became similar in both images. Next, FAST corner detection was utilized to identify the control points in both images. The histograms of oriented gradients were capable of capturing the structural profile of each CP's neighborhood without segmenting the blood vessels. In order to identify the best matching CPs, an approximate nearest neighbor method was utilized in the forward-backward mode, which determines the best matching CPs by calculating the distances of descriptors in the feature space. After removing the incorrect matches and refining the CP locations, the best affine transform that registered the image pairs was calculated using the RANSAC algorithm. Our feature-based registration method is very fast and outperformed our previous ICP registration method [7].

Fig. 12. Examples of ONH-centered and macula-centered OCT stitching with the aim of obtaining a larger field of view using the proposed feature-based registration method. Note that the common area between each pair is around 20% of the size of each image.

Funding
This work was supported, in part, by the Department of Veterans Affairs Rehabilitation Research and Development Division (IK2RX000728 and I01 RX001786); the National Institutes of Health (R01 EY018853 and R01 EY023279); and the Marlene S. and Leonard A. Hadley Glaucoma Research Fund.