Applying SLODAR to measure aberrations in the eye.

As a proof of concept we apply a technique called SLODAR, as implemented in astronomy, to the human eye. The technique uses single exposures of angularly separated "stars" on a Hartmann-Shack sensor to determine a profile of aberration strength, localised in altitude in astronomy or in path length into the eye in our application. We report on the success of this process with both model and real human eyes. There are similarities, and significant differences, between the astronomy and vision applications.


Introduction
In astronomy, adaptive optics is a well-accepted technology, with millions of dollars spent removing the changing effects of the turbulent atmosphere to achieve diffraction-limited imagery. The use of adaptive optics in vision science, for the study of the human eye's function and for retinal imaging, is also becoming well accepted in research, but its application is still in its infancy compared with what has been achieved in astronomy. Its significance for disease diagnosis, optical correction, and physiological understanding is marked. The field of adaptive optics includes both wavefront sensing and wavefront correction, and so far most of its application to vision science and ophthalmology has involved the former. This paper proposes the use of Slope Detection and Ranging (SLODAR), a wavefront sensing technique developed in astronomy but not yet used in vision science. SLODAR allows the origin of aberrations in the eye to be localised in depth, and hence reveals more about ocular optical structure.
Of the components of the eye, the cornea has the greatest refractive power and as a result produces some of the largest aberrations. Other optical elements within the eye produce further aberrations that either add to or compensate for those generated at the cornea. He et al. 1, Artal et al. 2, and Wang et al. 3 measured the aberrations produced by the cornea and by the complete eye using corneal topography and a wavefront sensor; internal aberrations were calculated by subtracting the two measurements. Artal et al. 4 and more recently Dubinin et al. 5 measured the internal aberrations with a Hartmann-Shack sensor by submerging the eye in water to eliminate the effects of the cornea. The aberrations generated by the posterior cornea and crystalline lens usually compensate partly for those produced by the anterior cornea. Recent work has attempted to model the structure and performance of the crystalline lens [6][7][8]. Some information is obtained through alternative methods such as MRI 9, but there remains a need for fast, in-situ measurement of the contributions within the structure of the eye to the total aberration. These attempts to quantify the internal structure and operation of the eye are at the forefront of activity in vision science; the need for and benefits of this knowledge have been well documented by others. 10

Several methods exist in astronomy to measure the severity and location of turbulence and the consequent wave aberrations. Modal tomography, as suggested by Ragazzoni et al. 11, is one technique available in astronomy to calculate the aberrations produced at certain altitudes above the telescope; it uses several laser guide stars to retrieve the three-dimensional distribution of the perturbing layers tomographically. Goncharov et al. 12 implemented a modified version of Ragazzoni et al.'s technique that takes into account the refractive power of the layers within the eye to determine the aberrations at fixed planes. Two other methods that may be applied without changing existing optical systems are Scintillation Detection and Ranging (SCIDAR) and Slope Detection and Ranging (SLODAR). They employ imaging or manipulation of the pupil field, rather than of the image formed by the telescope, in conjunction with binary or multiple star scenes, to triangulate the altitudes of turbulent layers and determine their motion and severity. In astronomy both are restricted by the brightness of the stars in the area of interest. SCIDAR has been adopted at many observatories around the world 13,14. It relies on the scintillation that intervening phase screens impose on two angularly separated sources to extract correlation profiles along the paths that share similar aberrations somewhere on the way to the telescope's pupil. By changing the plane to which the instrument is conjugated to one below the pupil, the scintillation pattern is strengthened and the instrument becomes sensitive to variations close to the pupil; this refinement is called Generalised SCIDAR 14.
SLODAR has been successfully employed by Wilson 15, Johnston et al. 13, and Goodwin, Jenkins and Lambert 16,17. The pupil is subdivided into an array of lenslets, with a slightly different image of the binary sources formed by each lenslet. With reference to Fig. 1, if star A and star B are considered, replicated in lenslets i and j, then movement common to both stars is the result of tip-tilt in the locality of the lenslet. Each lenslet can have a different tip-tilt as an approximation to the wavefront across the pupil; this is the operation of the traditional Hartmann-Shack wavefront sensor. The existence of dual star images, and of slightly different wavefronts at the sensor from each, allows monitoring of the movement between stars A and B in lenslet i, of the relative movement between star A in lenslet i and star B in lenslet j, and so on. By correlating these movements, the lenslet combinations that share common motion can be determined, and hence triangulated to the region in space where light from star A and star B suffers a common aberration (circled in Fig. 1). This is the extent of the analysis in astronomy, whereby the changing layers of turbulence are described in terms of altitude above the pupil and density of aberration structure, and the speed of motion is obtained by temporal correlation, for both analysis and correction.
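The triangulation geometry above can be expressed numerically. The following Python sketch is illustrative only; the function name and the example subaperture size and binary separation are our own choices, used simply to show the scaling h_k = k·d/θ for integer lenslet offsets k.

```python
import numpy as np

def slodar_layer_altitudes(n_lenslets, pitch_m, separation_rad):
    """Altitude triangulated for each integer lenslet offset k.

    A motion common to star A in lenslet i and star B in lenslet
    i + k originates where the two ray bundles overlap, at
    h_k = k * pitch / separation; k = 0 is the conjugate plane.
    """
    k = np.arange(n_lenslets)
    return k * pitch_m / separation_rad

# Example with astronomy-like numbers: 0.5 m subapertures and a
# 10 arcsecond binary give ~10.3 km of altitude per offset step.
arcsec = np.pi / 180 / 3600
h = slodar_layer_altitudes(8, 0.5, 10 * arcsec)
```

The maximum sensed altitude is set by the largest available offset, (N−1)·d/θ, which is why wider binaries probe finer (lower) layer spacings.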
It should be noted that this is not the only adoption of terminology from astronomy into vision. Observations away from the line of sight exhibit a different aggregate aberration from that found on the line of sight 18, and so Dubinin et al. 5 examined the concept of an isoplanatic patch, used regularly in astronomy to assess the reduction in imaging resolution caused by turbulence, to characterise the best average correction over a range of angles in vision. The eye exhibits aberration at different depths due to the structure of the refracting elements, so the wavefront observed from an angle is the projected combination of these aberrations. The variation of the aberration with angle is anisoplanatism. SLODAR is a proven technique for assessing the location and strength of aberration with altitude; we employ it here with respect to depth in the eye.

Experimental overview
When using SLODAR with the eye there is no predetermined star spacing, since the stars are created by illuminating the retina and are not confined to those that happen to lie in a suitable region of the sky during the observation period. This is a significant advantage from an algorithmic perspective: many arrangements of angular separation of the point sources on the retina can be used, allowing the aberrations at specific layers within the eye to be calculated. Fig. 1 shows the application of SLODAR to the human eye and its refracting surfaces, such as the tear film/anterior cornea, posterior cornea, anterior lens, and posterior lens, plus variations due to the gradient index profile of the lens. A probe beam pattern is imaged into the eye (not shown) and is scattered from the retina to provide the necessary point source stars. The linear spacing of these on the retina is on the vertical axis, and at this separation the resolvable surfaces set by the intersections of the rays number three within the domain of the crystalline lens and one at the cornea. As the layer spacing is a function of the angle between the "stars", the equally resolvable altitudes or, more aptly, depths occur at uneven spacings, as illustrated by the vertical lines. These will in fact be surfaces that are not necessarily planar but are described by the intersections of the rays at increments in optical path length; the optical path length accounts for the refractive index of the media through which the rays travel on their way to each lenslet. Moving the surface to which the Hartmann-Shack sensor is conjugated (where the principal rays cross) is the recently proposed technique of Generalised SLODAR 16.
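As a first-order illustration of the depth sampling, the nominal layer spacing can be estimated from the projected lenslet pitch and the star separation. This Python sketch ignores the ray convergence and surface curvature discussed above, and the uniform index of 1.336 is an assumed value for illustration; the pupil and lenslet numbers are those of the experiment described later.

```python
import numpy as np

def layer_spacing(pupil_diameter_m, n_lenslets, separation_deg, n_index=1.336):
    """First-order SLODAR depth resolution inside the eye.

    Returns the layer spacing in optical path length (pitch / theta)
    and the corresponding physical depth for an assumed uniform
    refractive index. The true resolvable layers are curved surfaces
    set by the ray intersections, so this is indicative only.
    """
    pitch = pupil_diameter_m / n_lenslets      # projected lenslet pitch
    theta = np.deg2rad(separation_deg)         # angular star separation
    opl = pitch / theta
    return opl, opl / n_index

opl, depth = layer_spacing(5.6e-3, 14, 5.8)
# roughly 4 mm of optical path, i.e. ~3 mm of physical depth, per layer
```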
Another advantage of applying SLODAR to vision is that the necessary information may be collected in a single exposure, effectively capturing a frozen eye structure. This structure will change as the subject performs a visual task. Still, within this "frozen" time, currently accepted to be of the order of 30 ms 19, multiple snapshots of the eye may be taken with differently spaced "stars", or with different conjugate frames, to obtain finer resolution in depth across each surface. This mix of angular separations of the "stars" and conjugated depths, while the eye does not have time to change, is possible with active optics in both the illumination system that creates the "stars" and the relay to the Hartmann-Shack sensor. Throughout this paper we refer to the point sources scattering at the subject's retina as "stars", following the terminology of the astronomical application.

Analysis of the data
As with any Hartmann-Shack system, the first processing step is the determination of the image centroids corresponding to each lenslet, because these are related to the slope of the wavefront in the local region of the plane to which the lenslet array is conjugated. Usually this plane is the pupil of the subject. For each star the approximation to the wavefront originating at that star may be determined from the centroid positions, and could be represented by a modal decomposition into Zernike polynomial coefficients. The SLODAR algorithm yields an equivalent of turbulent layer strength as a function of triangulated altitude or, more aptly, optical path length into the eye. The elements of the eye, while changeable, are nowhere near as volatile as eddy formation and dissipation in atmospheric turbulent layers, so there should be more information to be gleaned from the algorithm in this case than the equivalent turbulence strength Cn², which is derived from the structure function of the bulk turbulence refractive index variation, Dn(r1, r2) = <[n(r1) − n(r2)]²>. In astronomy, although the medium is inhomogeneous, the structure function is assumed to be an isotropic function of the separation between any two points in the turbulent volume, and hence Dn(r) = <[n(r1) − n(r1 + r)]²>. The covariance of the centroids can then be related to the power spectral density which, in turn, is proportional to the cumulative Cn² through fluid dynamics models, allowing the Cn² parameter to be resolved into layers of different strength at different heights. We apply the method of Wilson 15, as in astronomy, to identify the depth location of the aberrating elements. The centroids are transformed into the coordinate space parallel and perpendicular to the interstellar axis joining the "stars". The spatial arrangements of centroids from different stars are cross-correlated, and those of the same stars are auto-correlated.
The spatial Fourier transform of the cross-correlation yields the cross-power spectrum, and that of the auto-correlation the power spectrum. Dividing the spatial cross-power spectrum by the spatial power spectrum, with a noise-compensation factor in the denominator, and then inverse Fourier transforming yields a deconvolution result whose radial deviation from the origin is related to the strength of the aberration at each depth layer. A cut along the direction of the interstellar axis reveals the required Cn² profile.
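The reduction described above can be sketched as follows in Python/NumPy. The arrays and the constant `eps` are illustrative: `slopes_a` and `slopes_b` stand for one slope component per lenslet for each star (already resolved along the interstellar axis), and `eps` plays the role of the noise-compensation factor.

```python
import numpy as np

def slodar_profile(slopes_a, slopes_b, eps=1e-3):
    """Wiener-style deconvolution of cross- by auto-correlation.

    slopes_a, slopes_b : 2-D arrays of centroid slopes (one component)
    for stars A and B on the N x N lenslet grid. Returns the profile
    cut along the interstellar axis, with the zero offset at centre.
    """
    A = np.fft.fft2(slopes_a)
    B = np.fft.fft2(slopes_b)
    cross_power = A * np.conj(B)                 # FT of the cross-correlation
    auto_power = A * np.conj(A)                  # FT of the auto-correlation
    decon = np.fft.ifft2(cross_power / (auto_power + eps)).real
    decon = np.fft.fftshift(decon)               # put zero offset at centre
    centre = decon.shape[0] // 2
    return decon[centre, :]                      # cut along the interstellar axis
```

As a sanity check, if the two slope fields differ only by a shift of k lenslets, the profile peaks k layers from the centre, mimicking a single dominant layer.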
This is performed on one frame, or on each of several frames. Unlike the case in astronomy, where the SLODAR process runs over many thousands of frames to statistically average the phase structure, we have a limited number of frames to work with for any observation, but we do not expect large variation in the phase structure. Nevertheless there are other noise sources in the data acquisition, including centroiding noise, so a measure of averaging is helpful. The limited number of observations has implications when attempting to normalise the cross-correlation by the number of baselines (lenslet separations) possible in the data, O(i,j) 15. There are fewer baselines contributing to the deeper layers than there are combinations of lenslet spacings reflecting information for the closer layers, so the deeper layers are very susceptible to noise effects and can ill-condition the process. For the results presented here we chose not to normalise by the number of baselines, so the result is biased towards those layers closest to the conjugated plane. To test the implications of this bias we induced an artificial layer in the data at the depth of the seventh resolvable layer using a 14 x 14 lenslet grid; the resultant Cn² peak is reduced to two-thirds of that obtained if the layer is instead inserted at the pupil.
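The fall-off in the number of baselines with layer depth is easy to quantify for a fully illuminated square grid. This hypothetical helper (not the authors' exact normalisation) shows the 14-fold reduction in pair count at the largest offset of a 14 x 14 array.

```python
import numpy as np

def baseline_counts(n):
    """Number of lenslet pairs at each integer offset (di, dj) on a
    fully illuminated n x n grid: (n - |di|) * (n - |dj|)."""
    d = np.arange(-(n - 1), n)                 # offsets -(n-1) .. n-1
    counts = np.outer(n - np.abs(d), n - np.abs(d))
    return d, counts

d, O = baseline_counts(14)
# 196 pairs at zero offset (O[13, 13]) but only 14 at the largest
# offset along the interstellar axis (O[13, 26]): a 14-fold penalty
# that makes the deepest layers far more susceptible to noise.
```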
To compensate for the small number of frames in each dataset, we suggest an optional further step in the analysis involving an iterative method resembling the Ayers-Dainty 20 algorithm to improve the deconvolved depth profile. The 2-D FFT of the deconvolution result is confined to the known lenslets in one domain, and a weak positivity constraint is enforced in the result domain when iterating between the 2-D FFT and profile domains, since the refractive strength cannot physically take negative values. Ten to twenty iterations are sufficient for convergence on this dataset.
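A minimal sketch of this iterative refinement follows, assuming a boolean frequency-domain support mask derived from the known lenslet geometry and a damping factor for the weak positivity constraint; the names and values are illustrative, not those of the authors' implementation.

```python
import numpy as np

def refine_profile(decon, support, n_iter=20, relax=0.9):
    """Ayers-Dainty-style refinement of a deconvolved SLODAR map.

    decon   : 2-D deconvolution result (real-valued).
    support : boolean mask of spatial frequencies corresponding to
              the known lenslet geometry (True where data exist).
    Alternates a Fourier-domain support constraint with a weak
    positivity constraint (negative values damped by `relax`),
    since refractive strength cannot physically be negative.
    """
    x = decon.copy()
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X[~support] = 0.0                  # confine FFT to known lenslets
        x = np.fft.ifft2(X).real
        neg = x < 0
        x[neg] *= (1.0 - relax)            # weak positivity constraint
    return x
```

With a full support mask the Fourier step is the identity and the loop simply damps negative excursions towards zero over the iterations.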
In astronomy the cumulative Cn² from this profile is normalised using other information, such as an estimate of the Fried parameter (ro) or the variance of the phase. The Fried parameter ro and the isoplanatic angle θo are both dependent on the cumulative Cn² profile over the height/depth range, so if these are estimated by another method, the aberration strength at each layer can be quantified. In astronomy these quantities might be measured by the likes of a Differential Image Motion Monitor (DIMM), which can be implemented on the Hartmann-Shack centroids directly 17. However, the Cn²(h) profile has a different meaning in our application, as there is no power-law arrangement based on turbulent eddy development and flow; rather, a more deterministic aberration exists at each layer. The spatial phase change in each layer is deterministic, so it will depend on the coordinates in the layer being described, r1 and r2, rather than simply on the separation r; unlike the turbulence, the phase change may not be considered isotropic, and hence ro will now be a function of the mean angle of the stars from the optic axis. The Fried parameter ro is the aperture width over which the mean square phase variance is 1 rad². This is perhaps the best way to estimate an ro equivalent, using the difference in the wavefront reconstructions from each star, and is indeed the focus of recent work by Dubinin et al. 5 We make no use of this other than to allow the reader to compare the severity of the aberrations with that experienced in astronomy.

Experimental arrangement
To demonstrate the application of SLODAR to the eye, several experiments on a single-lens model eye were conducted using a 543 nm HeNe laser illuminating through a HoloEye liquid crystal spatial light modulator (SLM) and a relay lens system to generate the stars on the retina. The optical system is illustrated in Fig. 2. The SLM uses phase-modulated diffractive elements as variable focal-length lenses to control the positioning of the point sources on the retina. Assessment of the confinement of the "star", and hence whether it functions as a point source, is made by observing the images formed by the lenslets of the Hartmann-Shack sensor on return of the scattered light from the retina. A power of 0.31 µW was measured at the cornea of the eye with the phase-only diffractive function displayed on the SLM. This 0.78 W m-2 exposure is well within the IEC 60825-1:2001 thermal Maximum Permissible Exposure (MPE) of 10 W m-2 and is within the photochemical restrictions for prolonged (of the order of 100 s) continuous use at the same star location should it be used in vivo, indicating that there is latitude for increased signal should it be required. The Hartmann-Shack sensor samples a 5.6 mm diameter pupil with 14 x 14 lenslet coverage and an exposure time of 1/60 s. The centroids of the images are determined in Matlab for each star set. When the angular separation of the stars is small the two sets of star images are intertwined, but as the star separation increases the images occupy different regions of the image plane. It is possible to illuminate one star after the other and make use of the full capability of the centroiding process, provided the eye is comparatively static between exposures. It is also possible to move the plane to which the Hartmann-Shack lenslet array is conjugated, and so perform generalised SLODAR for finer depth resolution.
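The centroiding step can be illustrated by a minimal centre-of-mass estimator, assuming the raw frame tiles into equal windows, one per lenslet. Our Matlab implementation is not reproduced; this is an illustrative Python sketch with hypothetical names.

```python
import numpy as np

def centroids(frame, n_lenslets):
    """Centre-of-mass centroid of each lenslet's spot.

    `frame` is the raw Hartmann-Shack image, tiled into an
    n_lenslets x n_lenslets grid of equal windows. Returns the (y, x)
    centroid of each window relative to the window centre, in pixels;
    this offset is proportional to the local wavefront slope.
    """
    h = frame.shape[0] // n_lenslets
    w = frame.shape[1] // n_lenslets
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros((n_lenslets, n_lenslets, 2))
    for i in range(n_lenslets):
        for j in range(n_lenslets):
            win = frame[i*h:(i+1)*h, j*w:(j+1)*w].astype(float)
            total = win.sum()
            if total > 0:                          # skip empty windows
                out[i, j, 0] = (ys * win).sum() / total - (h - 1) / 2
                out[i, j, 1] = (xs * win).sum() / total - (w - 1) / 2
    return out
```

In practice thresholding and windowing around the brightest pixel would precede the centre-of-mass sum to suppress background noise.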
We chose three such datasets using a model eye with the array conjugated to the pupil and at 5 mm optical path length differences either side of this plane. The resulting profiles from these generalised SLODAR experiments on a model eye are given in Fig. 3. The model eye has a plano-convex polymethacrylate lens with an anterior radius of curvature of 7.58 mm, an anterior asphericity of Q = -0.229, and a thickness of 4.5 mm, and we assumed a refractive index of 1.49. The stars are given a 0.002 radian separation prior to the lens system shown in Fig. 2, which magnifies the angle by 22 times to give a 5.8 degree separation. The lenslet array is centred on the principal ray for each star, but we have shifted the plane to which the Hartmann-Shack sensor is conjugated by 5 mm in optical path length between the two profiles shown in Fig. 3. It is evident that the shift moves the profiles across resolvable layers, and the locations of the refractive surfaces can be determined from this shift. The shift has moved the contribution of the anterior surface of the lens in the model eye out from under the null correlation peak at depth 0, so it now begins to appear at the 1st layer (blue line); the contribution of the posterior surface of the lens, seen around the -5th to -6th layers (red line), has similarly moved forward to the -4th to -5th layers (blue line). Since the single optic of the model eye is homogeneous between its front (convex) and back (planar) surfaces, we neither expect nor observe any significant aberration strength in the intervening layers. The horizontal depth axis is scaled by the layer spacing, which is a function of the star separation and the intervening refractive index changes in the elements of the eye; the vertical strength axis is normalised, but in astronomy it would be scaled to Cn² using an estimate of ro. Such scaling does not make sense in our application - the relative structure is the desired observation.
This arrangement would not normally resolve many layers, but there is evidence for the refractive effect of the curved front face of the plano-convex lens around layer -5 of the red curve. The contribution between layers -5 and -6 moves to layers -4 to -5 when the sensor is reconjugated by 5 mm (blue curve). Notice that there is a dominant but erroneous peak in the SLODAR profile at a depth of zero. A contribution that was masked by this erroneous peak now appears, unmasked, at layer +1 of the blue curve. This illustrates the versatility of generalised SLODAR, where fine depth information can still be achieved for small angular separations between the stars. The error bars indicate the standard deviation of the deconvolution result over ten observations.
It is our intention to change to a non-visible wavelength for testing this apparatus on a human eye, but to confirm the effectiveness of SLODAR we have access to datasets of a single subject taken with a commercial COAS-HD aberrometer (Wavefront Sciences, Albuquerque). These allowed greater separation of the sources, and as such the potential for finer resolution of the refraction within the depth of the eye. The exposures were taken at dissimilar times, one star (angle) at a time; therefore they do not carry simultaneous information on the state of the eye, and may introduce slightly different conjugation depths. The instrument was conjugated to the anterior cornea, but the principal rays from each star crossed at the pupil, so in essence the apparatus is as ray-traced in Fig. 4. The Hartmann-Shack sensor had a slightly defocused PSF to work with, but at the f-number of the lenslets this is barely detectable, so we treat the experiment as if the whole apparatus were conjugated to the pupil. This assumption is in keeping with our definition that the conjugation plane is where the principal rays intersect. The data for each star were referenced by angle to the optic axis at the detector of (0, 3.84), (7.12, 3.84), (20.55, 3.84), and (-20.55, 3.84) degrees in (horizontal, vertical) to the line of sight. In the analysis this places each combination of stars so that the interstellar axis is horizontal with respect to the lenslets. A single dataset at (0, 0) is acquired perpendicular to the axis of the other datasets, on the line of sight. A graphic illustrating the two-dimensional angular arrangement of this data is shown in Fig. 4. The subset of lenslets used in this experiment spatially samples the phase surface with a 44 x 44 grid centred on and perpendicular to the principal ray. We have chosen not to correct for the minor change in angle of the detector at each observation.
For each dataset a single exposure is acquired. Observe that there are two intersection surfaces (shown by red dotted patterns) at the cornea and almost six within the structure of the crystalline lens. In actuality there will be three times as many surfaces as shown for the 44 x 44 lenslet measurements, as we have limited the number of rays to 15 for clarity. A region of no aberration, spanning two layers, is expected in the aqueous humour. Notice that the "layers" are in fact curved surfaces fitting the intersections of the rays, because of the variable optical path length along each ray's path to a lenslet. Also observe that the silhouette of the pupil defines a cone either side of the pupil over which SLODAR may collect information, and this arrives with only limited overlap at the cornea; the implications of this are discussed in the second-to-last paragraph of section 4. Overlaid is a schematic of the positioning of the stars relative to the line of sight for the COAS-HD data. Figure 4 ray-traces the scenario for the (0, 3.84) and (-20.55, 3.84) angle datasets and shows where the intersection planes are expected in the SLODAR depth profile. To quantify the usefulness of SLODAR in this application we need to sample the refractive elements in finer detail. Fig. 4 illustrates the ray-trace obtainable from the twenty-degree separated stars, but the reader should bear in mind that the resolvable surfaces, illustrated by the red dotted surfaces or layers shown at the intersections of the rays, are for a 15 x 15 lenslet array, the number of rays having been restricted for clarity. For the COAS-HD 44 x 44 array there will be two intervening surfaces between each of those shown. The available signal will therefore be spread over many more layers, with the crystalline lens, for example, expected to be examined by 18 to 21 layers, the non-refracting aqueous humour by the six layers closest to the pupil, and the cornea by 5 to 6 depth layers on the opposite side of the conjugation layer.
Figures 5-7 show the processed profiles derived from the radius of motion of the centroids, without compensation for longer baselines, and using a Wiener deconvolution correction factor of 0.001 inside a 20-iteration modified Ayers-Dainty algorithm. All profiles are cut from the deconvolution result (see Fig. 8(a), which we discuss later) along the horizontal interstellar axis, except for Fig. 6(a), which is a vertical cut because the stars are separated vertically in this case. In Fig. 6 the progression from (a) to (b) to (c) is a doubling of star separation at each stage, with the aberration strength being sampled by twice as many layers as in the previous profile. The implication of spreading the limited signal over more layers is that the layers are easily swamped by noise in the correlation process. However, there is reasonable evidence in the profiles for the presence and relative strength of the aberration at each layer. We believe these results give sufficient impetus for further study and refinement of SLODAR as a valuable assessment tool for vision.

Fig. 5. The star 20.55 degrees to the right defines the direction of depth in the profile. In (a) the corneal contribution is shown around layers -15 to -9, with the lens occupying layers -3 to 18; in (b) the corneal contribution is shown around layers 12 to 18, with the lens contribution spread over 3 to -17. There is reasonable symmetry between the profiles, but one would also expect significant differences, as these examine different volumes of the eye. We have also annotated the locations of the cornea and lens using the ray-tracing to predict the location and number of layers to which these would be approximately confined.
The order of the stars used in the cross-correlation defines whether the depth on the x-axis of each profile is positive inside or outside the eye, and hence whether the corneal contribution appears to the left or right of the zero-depth "layer", or conjugation depth. We choose the convention of the leftmost star as the first argument of the cross-correlation. This allows us to compare the two profiles in Fig. 5(a) and (b), which arise from stars separated from the optic axis by the same angle but on different sides of the eye, i.e. (-20.55, 3.84) and (0, 3.84) in (a), and (0, 3.84) and (20.55, 3.84) in (b). It would be expected that these were coarsely symmetrical, but the profiles show subtle differences due to the different structure examined on either side of the eye. Further improvement in depth resolution is seen in Fig. 7(a) for the 27 degree separation between the stars and in Fig. 7(b) for the 41 degree separation. The finest resolution in depth arises from the (-20.55, 3.84) and (20.55, 3.84) dataset shown in Fig. 7(b). At this separation of 41 degrees there are very few baselines that examine the corneal contribution or the rear surface of the lens. The normalisation O(i,j) would compensate for this if it could be applied. Instead the prevalence of the cornea is diminished, and we get the chance to examine the crystalline lens in greater detail. For comparison with astronomy, the Fried parameter, at least at the largest separation, yields a D/ro of 2.8, using the unit phase variance of the first three orders of the Zernike polynomial expansion of the wavefronts from each star.

Fig. 6. The profile of aberration strength with depth into the eye determined by SLODAR from the COAS-HD data. As the angular separation of the stars doubles, so does the spread of the refractive strength into more layers. (a) 3.84 degree separation shows the cornea at layer -2 and the posterior lens from layers 0 to 3; (b) 7.12 degree separation, with the cornea around layer -5 and the lens from layers -1 to 3; and (c) ~14 degree separation, with the cornea around layer -10 and the lens spread to the right across layers -3 to 12. With greater separation the information is spread across a larger depth range and hence is harder to extract.
The refractive power of the cornea is about twice that of the lens 21, and as was identified by Goncharov 12, this refractive power is a dominant effect in any reconstruction. We expect a significant part of the power of the crystalline lens to be associated with its posterior and anterior surfaces. The remaining power within the crystalline lens is of great interest but is significantly lower than these other components. Therefore wider star separations or changes in the conjugation depth are required to improve the signal corresponding to the internal layers sampling the lens. So, unlike astronomy, dominant features are present in the correlations, and even in the autocorrelations. This is seen easily in Fig. 8(b), where the elliptical "plateau" arises from correlations across the rays that sample the cornea. Refer again to Fig. 4, where it is evident that the outer rays arriving at the Hartmann-Shack lenslets do not participate in correlations involving the cornea because they do not intersect any other rays within that region. The corneal power is hence confined to those rays that do cross within the region of the cornea, and is seen as a plateau-like feature whose size reduces as the angular separation of the stars increases, and whose shape is the spatial autocorrelation of the region where the rays intersect at the cornea. (For large separations of the stars there are very few rays sampling the cornea, as is borne out by the reduced strength of the cornea in Fig. 7(b), for example.) Figures 8(c) and 8(d) show the similar effect in the cross-correlation for the model eye experiment for the 0 mm and 5 mm changes in conjugation plane; this feature takes a different location in the correlation result for each reconjugation. Figure 8(a) shows how the reduction by the SLODAR algorithm renders the depth profile.

Fig. 7. The profile of aberration strength with depth into the eye determined by SLODAR from the COAS-HD data. The results for widely separated stars show a corneal contribution to the right and a distributed lens contribution left and centre. (a) ~27 degree separation, with the cornea within layers 17 to 26 and the lens occupying layers -13 to 11; and (b) ~41 degree separation, with the cornea to the right of layer 27 and a larger spread of information about the lens. The corneal contribution is reduced because it is collected from very few lenslet combinations. Note that the layer spacing is a very small depth at these star separations, and while we are nominally conjugated to the pupil of the eye, and would therefore expect the contribution of the crystalline lens to lie to the left of the zero layer, there is detectable misalignment in the instrument setup, and consequently a spread to the right also.
More combinations of star separations are possible from the COAS-HD datasets. For example, combining the datasets at (7.12, 3.84) and (0, 0) specifies an interstellar axis that is neither horizontal nor vertical but at ~29 degrees to the horizontal lenslet arrangement. With reference to a result such as that in Fig. 8(a), where the profile would be cut horizontally through the centre (and displayed in Fig. 6(b)), this profile would instead be cut at 29 degrees through the centre. Consequently each depth "layer" is 1.14 times the optical path length spacing of Fig. 6(b), and so acquires one more sampling layer within the lens. Interpolation artifacts would result when determining the profile at this angle, so such combinations are not shown here, but they are valid data with which to refine the depth estimate.
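The quoted orientation and layer-spacing factor follow directly from the star coordinates; a brief check (the function name is ours):

```python
import numpy as np

def axis_angle_and_scale(star1_deg, star2_deg):
    """Orientation of the interstellar axis relative to the horizontal
    lenslet rows, and the factor by which the per-layer optical path
    spacing grows when the profile is cut at that angle."""
    dx = star2_deg[0] - star1_deg[0]
    dy = star2_deg[1] - star1_deg[1]
    angle = np.degrees(np.arctan2(dy, dx))
    scale = 1.0 / np.cos(np.radians(angle))
    return angle, scale

angle, scale = axis_angle_and_scale((0.0, 0.0), (7.12, 3.84))
# approximately the ~29 degree orientation and 1.14 spacing factor
# quoted in the text for the (7.12, 3.84) and (0, 0) pair
```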

Discussion
There are a number of considerations related to the process and the analysis of results. The first of these refers to Fig. 4, where the geometry is ray-traced through the human eye, in contrast to the traditional scenario in astronomy where the stars are effectively at an infinite distance from the turbulence volume: there, the rays arriving at each lenslet can be considered parallel throughout the turbulence volume, resulting in resolvable altitudes that are evenly spaced. In the case of the eye the rays are still fan-shaped when they arrive at the anterior lens, hence the points of intersection of the rays are not evenly spaced within a layer, and the layers are not planar but curved to fit these intersections. Surfaces formed by the intersection of these rays will not coincide with the actual surfaces of the refractive elements, nor with the lenticular structure of the crystalline lens, and so contributions from these will be spread over a few layers, as collected in Figs. 5-7. Indeed, the fine structure within the depths of the lens will not be immediately resolvable, so effects local to each layer will be integrated. The ability to ray-trace a model eye to determine the optical path lengths and, accordingly, the surfaces where the rays intersect means that a "layer" spread function might be determined for each surface 22. With these as a model, a least-squares fit to each layer would allow the inclusion of data from many observations, and hence a more resolved profile than is possible from a single observation. The model would allow us to rule out attribution to those depths that are expected to have no refractive elements, and to expand the information about the aberration at or around each surface by calculation of the "voxel" spread function as a function of depth and spatial extent. It is envisaged that the spread functions could be scaled to fit a particular subject's eye after a first pass of the data, as performed here, using the easily identified locations of the refractive surfaces.
The fit to these spread functions can incorporate diffractive processes that have been shown to be important in the astronomy algorithm 16 . There is much research to undertake in this area.
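The form of such a fit can be sketched as follows. This is a minimal illustration only: the depth axis, the surface locations, and the Gaussian shape of the layer spread functions are all assumptions chosen for demonstration, not values derived from a ray-traced model eye.

```python
import numpy as np

# Hypothetical depth axis (path length into the eye, mm) and assumed
# surface locations; Gaussian "layer" spread functions stand in for the
# ones that would be derived by ray-tracing a model eye.
depth = np.linspace(0.0, 8.0, 81)
surfaces = [0.5, 3.6, 7.2]   # cornea, anterior lens, posterior lens (assumed)
width = 0.6                  # assumed spread of each layer function, mm

# Design matrix: one column per surface spread function.
A = np.stack([np.exp(-0.5 * ((depth - s) / width) ** 2) for s in surfaces],
             axis=1)

# Synthetic observed profile: known per-surface strengths plus noise.
true_strength = np.array([1.0, 0.4, 0.25])
rng = np.random.default_rng(0)
profile = A @ true_strength + 0.01 * rng.standard_normal(depth.size)

# Least-squares fit of the layer strengths to the observed profile;
# with multiple observations, rows would simply be stacked into A.
strength, *_ = np.linalg.lstsq(A, profile, rcond=None)
print(np.round(strength, 2))   # close to the assumed strengths
```

With real data the design matrix would hold one spread function per refractive surface, scaled to the subject's eye, and rows from many exposures would be stacked before solving.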
The mapping to "voxel" spread functions is the basis for a tomographic reconstruction. Such a reconstruction would account well for the feature energy that is not confined to the interstellar axis in Fig. 8. The results presented here have not accounted for the absence of this energy from the profile, yet the features are substantial, and their absence from the cut profile is a principal reason for the variations of the profile. At a minimum, when the orientation of the interstellar axis is not exactly known, it would be sensible to perform a Radon transform of the data around this axis and sum over a narrow angular range, for example ±3 degrees, to consolidate the impact; the substantial off-axis features present in the result, however, would still only be captured by tomography. Off-axis structure is not observed in astronomy data because of the isotropic phase structure and the statistical averaging over large numbers of observations.
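A minimal sketch of this angular consolidation, using image rotation in place of a full Radon transform, is given below. The covariance map is synthetic, with an on-axis feature placed at an assumed location; the ±3 degree range follows the text.

```python
import numpy as np
from scipy import ndimage

# Synthetic 2-D covariance map with the interstellar axis nominally along
# the column direction; the on-axis feature location is an assumption.
rng = np.random.default_rng(1)
cmap = 0.05 * rng.standard_normal((65, 65))
cmap[32, :] += np.exp(-0.5 * ((np.arange(65) - 40) / 3.0) ** 2)

def axis_profile(cmap, half_angle_deg=3.0, step_deg=1.0):
    """Average the central cut of the map over rotations within
    +/- half_angle_deg of the assumed interstellar axis."""
    angles = np.arange(-half_angle_deg, half_angle_deg + step_deg, step_deg)
    rows = [ndimage.rotate(cmap, a, reshape=False, order=1)[cmap.shape[0] // 2]
            for a in angles]
    return np.mean(rows, axis=0)

profile = axis_profile(cmap)
print(profile.argmax())   # peak near column 40, where the feature was placed
```

Summing over the small angular wedge makes the cut robust to uncertainty in the axis orientation, but, as noted above, genuinely off-axis features would still require tomography.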
The anterior elements or surfaces alter both the exit and entry paths of rays, causing a discrepancy between the assumed and actual positions of the beacons or "stars" on the retina, and hence in the actual depth of each layer. This is the so-called tilt anisoplanatism found when using laser guide stars in astronomy, and as in astronomy the only true assessment of it is through use of a natural reference star. In vision we suggest this reference could be a feature on the retina such as a blood vessel. Having the illumination enter over the entirety of the pupil, as we employ in the system of Fig. 2, will minimise the effects of tilt anisoplanatism.
The illumination system can be made adaptive to achieve the smallest stars observed back at the wavefront sensor. In the COAS-HD data the illumination is a narrow beam through the centre of the pupil. In either case, recall that the lenslets of the Hartmann-Shack sensor have a very large f-number and correspondingly a large depth of field, which makes the sensor blind both to the effects of higher-order aberrations on the path of the illumination to the retina and to the depth on the retinal surface from which the reflection originates.
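The scale of this effect can be estimated from the Rayleigh quarter-wave criterion, which gives a diffraction-limited depth of focus of approximately ±2λN² for a lens of f-number N. The lenslet parameters below are illustrative assumptions, not those of either instrument used here.

```python
# Diffraction-limited depth of focus of a Hartmann-Shack lenslet,
# delta ~ +/- 2 * lambda * N^2 (Rayleigh quarter-wave criterion).
# Pitch and focal length are assumed values for illustration.
wavelength = 840e-9      # m, typical near-IR beacon wavelength (assumed)
pitch = 0.2e-3           # m, assumed lenslet pitch
focal_length = 6e-3      # m, assumed lenslet focal length

f_number = focal_length / pitch
depth_of_focus = 2 * wavelength * f_number ** 2
print(f"f/{f_number:.0f}, depth of focus ~ +/-{depth_of_focus * 1e3:.1f} mm")
```

For these assumed values the lenslet is around f/30 with a depth of focus of order a millimetre, which is why the sensor cannot distinguish the depth within the retina at which the reflection originates.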
On exit of the scattered light we are of course attempting to detect the deviations caused by the intervening media. Aberrations between the wavefront sensor and the supposed conjugate plane can be localised and corrected for using multiple observation angles, under the assumption that the angular average gives the true structure. This is relied upon in all the "inverse" ray-tracing 23 or propagation 24 and maximum-likelihood algorithms 10 . We have not combined such observations in this proof of concept because the data were taken at disparate times. The inclusion of many SLODAR observations, each taken in rapid succession, is the focus of alternative algorithms 16 including tomography.
A complementary improvement could be achieved using data from instruments that assess the anterior corneal surface. Knowledge of this surface may be incorporated to resolve, or to remove, information at this depth. Known contributions may be removed in the data reduction, or adjusted optically on exit from the eye before reaching the Hartmann-Shack sensor by use of adaptive optics in the sensing path. This would improve the analysis of the remaining components of interest. Iterative correction may be undertaken in this fashion, using adaptive optics driven by the evolving estimate of the surface under correction.
If one measures the eye while it is engaged in a visual task, many other useful quantities can be established. Temporal cross-correlation as suggested by Wilson 13 can be used to determine the movement of the refractive elements, and hence the analogues of the Greenwood frequency and coherence time for the eye's structure in AO modeling and correction. Indeed, temporal changes in the eye while it undertakes accommodation tasks will yield further data sets to aid the assessment. It is also expected that temporal spectral analysis of the centroid motion will yield trends that can be attributed to sinus rhythm, tear-film thinning, and the like, and removing these from the sequences will give better estimation of the other elements. This is akin to removing the so-called dome turbulence in astronomy by temporal filtering 16 .
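A minimal sketch of such temporal filtering is given below. The centroid sequence is synthetic, and the ~1.2 Hz sinus-rhythm frequency and 50 Hz sampling rate are assumptions for illustration.

```python
import numpy as np

# Synthetic centroid time series: slow accommodation drift plus an
# assumed ~1.2 Hz sinus-rhythm component and measurement noise.
fs = 50.0                                  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
drift = 0.3 * np.sin(2 * np.pi * 0.1 * t)
cardiac = 0.2 * np.sin(2 * np.pi * 1.2 * t)
rng = np.random.default_rng(2)
x = drift + cardiac + 0.02 * rng.standard_normal(t.size)

# Notch out the assumed cardiac band in the Fourier domain before
# further analysis of the remaining (e.g. accommodative) components.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
X[(freqs > 1.0) & (freqs < 1.4)] = 0.0
x_clean = np.fft.irfft(X, n=t.size)

# RMS of what remains relative to the underlying drift.
residual = np.sqrt(np.mean((x_clean - drift) ** 2))
print(residual)   # near the noise floor once the cardiac band is removed
```

In practice the band to remove would be identified from the power spectrum of the measured centroid sequences rather than assumed in advance.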

Conclusion
The application to the human eye of the SLODAR turbulence-profiling technique determines the location and severity of aberrations as a function of depth. The technique, with little modification, has been applied to a model eye and to a real subject, and hence confirmed using both a dedicated setup and commercial wavefront-sensor data. The profiles arising allow immediate comparison between the astronomy and vision cases, but many further developments are possible in the latter that are not possible or practical in astronomy, because we can control a number of factors such as the light levels, the wavelength, and the angular spacing of the beacons. With further algorithmic development the deterministic aberrations at each layer, rather than just their statistical strength, may be determined. The SLODAR method provides this profile information in a single or limited set of exposures that may be taken while eye movement is effectively "frozen"; repeated such measurements allow evaluation of changes within the aberrations and eye structure while the subject undertakes visual tasks, and as such the method has the potential to be a useful technique in vision science.