High-resolution topography from planetary images and laser altimetry

Mapping landforms on the Moon is of great interest and importance for future human settlements and resource exploration. One of the first steps is to map the topography in great detail and resolution. However, data from the Lunar Orbiter Laser Altimeter (LOLA) provide low-resolution elevation maps in comparison to the size of detailed geological features. To improve resolution, we developed a new method to upscale topographic maps to a higher resolution using images from the Lunar Reconnaissance Orbiter Camera (LROC). Our method exploits the relation between topographic gradients and degrees of shading of incoming sunlight. In contrast to earlier published methods, our approach is based on probabilistic, linear inverse theory, and its computational efficiency is very high due to its formulation through the Sylvester Equation. The method operates on multiple images and incorporates albedo variations. A further advantage of the method is that we avoid/reduce the use of arbitrary tuning parameters through a probabilistic formulation where all weighting of data and model parameters is based on prior information about data uncertainties and reasonable bounds on the model. Our results increase the resolution of the topography from ~60 m per pixel to 0.9 m per pixel, bringing it to the same pixel resolution as the optical images from LROC, allowing in some cases detection of craters as small as ~3 m in diameter. We estimate uncertainties of the topographic model due to noise in the images, and in the low-resolution (LOLA) model.


Introduction
Surveying planetary landforms is often challenging, presenting several technical obstacles in terms of engineering, data and image processing, computational power, and mathematical complexity. Interesting features are observed by the Lunar Reconnaissance Orbiter Camera (LROC) in optical images of Regions of Interest (ROIs), such as caves, lava tube pits, lava channels, and craters in shaded areas possibly containing water ice (Smith et al., 1997; Smith et al., 2010a; Smith et al., 2010b).
Mapping these areas is fundamental for any further investigation supporting decision making in connection with future human settlements and resource exploration. For this, it is necessary to identify the surface geology, the rock composition, and its spatial distribution and similarities with Earth or other planetary bodies, to infer the possibility of In-Situ Resource Utilization (ISRU) (Johnson et al., 2010).
One of the initial stages is to map the topography and probe the geological features of the landforms. All this would only be possible if the imagery data contains enough detail and resolution; poor resolution means missing important and small elements, for instance, cave entrances and lava tubes.
The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA spacecraft LRO provides images at a scale of approximately 0.5 to 1 m per pixel. However, the original topographic data acquired by the Lunar Orbiter Laser Altimeter (LOLA) onboard LRO provided elevation maps of approximately 60 m per pixel near the equator. This poorer resolution, in comparison with the images, misses smaller surface elements, thereby increasing the uncertainty of geological interpretations.
The challenge of retrieving important small-scale features on planetary bodies has been the subject of several studies over decades. A NASA report by William Whittaker (2012) discusses methodologies for high-resolution modelling of lunar surface and subsurface mapping from sparse data, enabling robotic developments. As part of this research, a simulated area was created to imitate the view of landers or rovers, i.e., at higher resolution. A synthetic case is shown in which a low-resolution terrain was used and new small features were added in accordance with statistical models of Surveyor data (NASA Surveyor Project Final Report, 1968). An image was then created from a simulated LiDAR map. Due to the complexity of such studies, it was necessary to develop elaborate robotic missions. The same report describes how rovers and landers equipped with LiDAR cameras and small instruments would be taken to lava pit entrances to investigate their interior. In this way, a series of robotic missions is made necessary to compensate for the lack of information available from orbit.
A different approach, by Kaku et al. (2017), addresses the mapping of the lunar cave at Marius Hills, whose diameter is approximately 65 m (Haruyama et al., 2009). Using the SELENE Lunar Radar Sounder (LRS), they developed a technique combining LRS patterns with gravity data, showing regions with lower gravity that suggest the presence of voids, for instance lava tubes or caves. Also, as seen in Haruyama et al. (2012) and Haruyama et al. (2014), SELENE data have been used to create lunar Digital Terrain Models from terrain stereo camera observations. In another attempt to improve the resolution of small geological features, a study by Barker et al. (2015) offered improved lunar digital elevation models (DEMs) accounting for geolocation errors. This method used SELENE Terrain Camera (TC) data as a constraint to remove errors in orbital pointing and positioning from LOLA topographic data, filling gaps in the topographic data with the TC data without a need for interpolation. Although this method is robust and presents an excellent alternative for improving DEMs, the resolution of 512 pixels per degree from the LOLA data (~60 m per pixel) is maintained, and the resolution of the Terrain Camera images is downscaled to match the topographic resolution.
In recent, important work by Barker et al. (2021), a DEM at higher resolution than the original LOLA-based model is presented, where only laser altimetry is used to consistently reduce orbital errors. This method provides 5 m per pixel resolution.
Over the last few decades, significant improvements of topographic resolution have been provided by shape-from-shading methods that exploit variations in sunlight illumination to estimate terrain gradients. In these methods, high-resolution images are the main source of information, in combination with low-resolution topographic data. Zhang et al. (1999) provide an excellent review of the development of these methods before the turn of the millennium, and they classify the existing methods into several categories, of which the most flexible is perhaps the class of so-called minimization methods (Ikeuchi and Horn, 1981; Horn, 1986, 1990; Lee and Kuo, 1993; Lohse et al., 2006; Gaskell et al., 2008; Archinal et al., 2011; Lux, 2011; C. Wu et al., 2011; B. Wu et al., 2016; Alexandrov and Beyer, 2018). These methods iteratively minimize a cost function which is a weighted sum of brightness data misfit, smoothness penalties (Ikeuchi and Horn, 1981; Horn, 1986, 1990; Lee and Kuo, 1993), and deviation from an initial guess about the terrain geometry. Recent methods (see, e.g., Alexandrov and Beyer, 2018) approach the minimization problem through sophisticated optimization techniques, such as the multiresolution approach (see, e.g., Crouzil et al., 2003). Several methods in the literature operate on multiple images/illumination angles (Lohse et al., 2006; Gaskell et al., 2008; Archinal et al., 2011; Wu et al., 2011; Alexandrov and Beyer, 2018), compute albedo (Lohse et al., 2006; B. Wu et al., 2016; Alexandrov and Beyer, 2018), and allow non-linear (non-Lambertian) surface reflectance models (Lohse et al., 2006; B. Wu et al., 2016; Alexandrov and Beyer, 2018).
Our approach uses multiple images, incorporates albedo variations, and it is currently based on the Lambertian reflectance model. However, it differs from earlier studies in three ways: (1) We adopt a fully probabilistic approach that minimizes/avoids the use of arbitrary tuning parameters associated with penalties, constraints and data fitting.
(2) The probabilistic scheme allows for uncertainty estimates, and (3) our method is linear and based on the solution of the Sylvester Equation (De Terán et al., 2018), which ensures high computational speed (Harker and O'Leary, 2015). The latter means that large areas can be dealt with without first splitting them into smaller pieces and merging the solutions afterwards.
Our results show a significant improvement of the elevation resolution from the 60 m of the low-resolution digital elevation model (LOLA) to less than 1 m (same as the images) per pixel. In the theory section below we derive the equations and explain in detail the theoretical approach that was developed. In the numerical experiments section, we present a calculation on a synthetically created area to test our mathematical formulation as well as the algorithm. Finally, in the results section, we apply it to real data from the Moon.

Theoretical formulation
We use a probabilistic inverse-problem approach (Tarantola and Valette, 1982; Tarantola, 2005) to find the maximum a posteriori model of the high-resolution topography, and the associated uncertainties. Let us define the desired high-resolution topographic map (matrix) M, and the gradient maps X and Y, providing information about the slopes in the North-South and East-West directions, respectively, derived from the illumination. The matrix G is the topography gradient operator (NS direction) and G' its transpose (the gradient operator in the EW direction). The gradient operators are here represented in their matrix form (see Harker and O'Leary, 2015 on the subject). We wish to solve the linear inverse problem for the unknown high-resolution topographic model M, using the gradients derived from the image data as constraints:

  X = G M,   (1)

  Y = M G'.   (2)

Our solution consists of two separate, linear steps: (1) determination of surface gradients from shades observed on one or multiple high-resolution images, and (2) estimation of a high-resolution topographic map from the gradient maps.
Estimating the surface slopes from observed brightness variations.
In this section, we suggest a method for estimating the normal vector n of the terrain at a point (x, y) from the observed vector of brightnesses δ at (x, y), corresponding to K different illumination angles. The normal vector will allow us to compute the terrain gradient at (x, y). We aim at a fully probabilistic formulation of the problem, so we assume that the brightness is related to the normal vector through the equation

  δ = g(n) + e_δ,   (3)

where g is the radiance model, and e_δ is noise with zero mean and covariance matrix C_δ. If the surface at (x, y) is hit by light from K different directions with unit direction vectors s_k, and we assume a Lambertian radiance model, the brightness δ_k for the k'th direction is the magnitude of the projection of s_k on n, scaled by the effective albedo a (the product of the surface albedo and other factors in the camera that scale the observed light):

  δ_k = a (s_k · n),   (4)

giving a when n and s_k are parallel, and 0 when n and s_k are perpendicular. Hence, for illumination from K different directions, equation (3) becomes

  δ = a S n + e_δ,   (5)

where δ is the vector of brightnesses, and S is the matrix whose k'th row is s_k. In addition to equation (5), we have the (exact) constraint that n is a unit vector:

  |n| = 1.   (6)

In principle, if we know the illumination angle matrix S, and we have observed brightnesses δ from the image data, we can, if the matrix S is well-conditioned, estimate the normal vector n, and hence the gradient (the partial derivatives) at (x, y), from equation (5). However, if there are fewer than 3 illumination angles, or if the illumination directions are close to parallel, S will be ill-conditioned. This means that, without additional information, there will be either infinitely many solutions to equation (5), or the solution will be very sensitive to noise in δ. The additional equation (6), however, adds one more constraint on n and reduces the non-uniqueness/ill-posedness of the problem.
This makes a solution possible for even two illumination angles only (giving 3 equations with 3 unknowns), but there is still a risk that significant noise in the brightness data may seriously degrade the solution.
To avoid this problem, we introduce prior information about the terrain normal vector n by augmenting equation (5) with 3 linear equations expressing that the estimated normal vector should not be too far away from our a-priori best guess, namely the normal vector n_P at (x, y) computed from the low-resolution topographic data. This is expressed through the equation:

  n_P − n = e_n,   (7)

(or, equivalently, n_P = n + e_n), where e_n is an uncertainty with zero mean and covariance matrix C_n. In practice, the variances of C_n can be determined from the expected maximum slopes of the terrain.
Equations (5) and (7) together give the overdetermined linear system

  [ a S ; I ] n = [ δ ; n_P ],   (8)

where the upper block represents the brightness data and the lower block the prior information. It can be shown (Tarantola, 2005) that the least-squares solution to this (overdetermined) equation is

  ñ = (a² S^T C_δ^{-1} S + C_n^{-1})^{-1} (a S^T C_δ^{-1} δ + C_n^{-1} n_P),   (9)

Fig. 1. A) Topography of the area used for the numerical example, from LOLA (NASA/PDS/MIT). The terrain is generated from a LOLA-based model, which in this test plays the role of the "unknown" high-resolution map. B) Synthetic low-resolution topography, downscaled 40 times. C) Smoothed low-resolution topography with a high number of pixels. This surface was used as an a-priori 'best guess' of the topography in the numerical test of our algorithm.
where C_δ is the covariance matrix of the noise on the brightness vector, and the posterior covariance matrix for ñ (measuring the uncertainty of, and correlation between, the estimated normal vector components after considering both the brightness data and the a-priori information) is given by (Tarantola, 2005):

  C̃_n = (a² S^T C_δ^{-1} S + C_n^{-1})^{-1}.   (10)

When the weights on the low-resolution topographic data are small, that is, when our prior tolerance σ_M is large, the components of C_n^{-1} are small, and we see from (9) that

  ñ ≈ (a² S^T C_δ^{-1} S)^{-1} a S^T C_δ^{-1} δ.   (11)

When the constraint (7) is added, both n and the albedo a can be determined. Note, however, that when only one illumination angle is used in the calculation, the argument above does not hold, and the estimated normal vector and albedo will be biased by the low-resolution topographic data. This is expected, since in this case S is singular, and solutions need to be strongly guided by the low-resolution topography.
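As a numerical sketch of equations (8)-(10): the snippet below estimates the albedo-scaled normal v = a·n at a single pixel by standard Gaussian least squares, then recovers n and a via the unit-length constraint (6). The function name and the diagonal-covariance shortcut are illustrative choices, not code from this study:

```python
import numpy as np

def estimate_scaled_normal(S, d, v_prior, sigma_d, sigma_n):
    """Estimate the albedo-scaled normal v = a*n at one pixel.

    S       : (K, 3) matrix of unit illumination direction vectors s_k
    d       : (K,)  observed brightnesses (the vector delta)
    v_prior : (3,)  prior scaled normal from the low-resolution DEM
    sigma_d, sigma_n : std. deviations of brightness noise and prior
    """
    Cd_inv = np.eye(len(d)) / sigma_d**2        # C_delta^{-1} (diagonal)
    Cn_inv = np.eye(3) / sigma_n**2             # C_n^{-1} (diagonal)
    A = S.T @ Cd_inv @ S + Cn_inv               # posterior precision matrix
    b = S.T @ Cd_inv @ d + Cn_inv @ v_prior     # data + prior right-hand side
    v = np.linalg.solve(A, b)                   # least-squares (MAP) estimate
    C_post = np.linalg.inv(A)                   # posterior covariance, cf. (10)
    a = np.linalg.norm(v)                       # effective albedo, from |n| = 1
    return v / a, a, C_post                     # unit normal, albedo, covariance
```

Even with only two illumination directions (S of rank 2), the prior term keeps the system well-posed, illustrating the regularizing role of equation (7).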
Having found the estimated normal vector n̂ from (6) and (8) at each surface point, we can now estimate the matrices X and Y, whose components are the local partial derivatives of the topography f(x, y) in the NS and EW directions, respectively. We will do this for each point (x, y) independently, assuming for simplicity that uncertainties of partial derivatives at different points are uncorrelated. The non-unit normal vector

  N(x, y) = (∂f/∂x, ∂f/∂y, 1)^T   (12)

is related to our unit normal vector n̂ through normalization:

  n̂(x, y) = N(x, y) / |N(x, y)|,   (13)

or, conversely,

  N(x, y) = n̂(x, y) / n̂_3(x, y),   (14)

allowing us to easily obtain the partial derivatives (the two first components of N(x, y)) from n̂(x, y).
Applying the constraint (6) to n, and the subsequent transformation (14) from n to N, is essentially a scaling with, first, a factor 1/a, where a is the effective albedo, and then another scaling to make N_3 = 1. We assume that these two operations together are approximately linear. This means that the estimated gradient of the terrain f,

  r̂(x, y) = (N_1(x, y), N_2(x, y))^T,   (15)

is approximately Gaussian with covariance matrix

  C_r ≈ C̃_n^{(2×2)} / ñ_3²,   (16)

where ñ_3 is the 3rd component of ñ, and C̃_n^{(2×2)} is the upper-left 2 × 2 submatrix of C̃_n. For simplicity, we shall assume in the following that C̃_n is approximately diagonal (that is, that the off-diagonal components are much smaller than the diagonal components).
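The conversion from an estimated unit normal to the terrain gradient, with the linearized covariance propagation of equation (16), can be sketched as follows (the function name and the diagonal-covariance assumption are illustrative):

```python
import numpy as np

def gradient_and_covariance(n_hat, C_n_post):
    """Convert an estimated unit normal to the terrain gradient (X, Y)
    and propagate its covariance via the linearized scaling N = n/n_3,
    cf. equations (14)-(16)."""
    p = n_hat[0] / n_hat[2]                 # partial derivative, NS direction
    q = n_hat[1] / n_hat[2]                 # partial derivative, EW direction
    C_r = C_n_post[:2, :2] / n_hat[2]**2    # upper-left 2x2 block, rescaled
    return (p, q), C_r
```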
Estimating high-resolution topography from surface slopes, constrained on low-resolution altimetry data

Using the above-described method to obtain the gradient maps X and Y of the terrain, and their uncertainties, we now proceed to estimate the topography model M itself. To avoid unnecessary biases from our a-priori terrain model (an upsampled version of the low-resolution model), we subtract the effects of the low-resolution topography M_0 and define the model update ΔM = M − M_0, which contains the fine details we wish to add to M_0. Having estimated the gradient maps X and Y of the terrain, and the low-frequency variations X_0 and Y_0 of the gradients from M_0, we have

  X_0 = G M_0,  Y_0 = M_0 G',   (17)

and we can now compute the gradient residuals ΔX = X − X_0 and ΔY = Y − Y_0. Combining (17) with equations (1) and (2), we obtain the following equations for the updates:

  G ΔM = ΔX,  ΔM G' = ΔY.   (18)

Fig. 5. Computed Maximum a Posteriori (MAP) high-resolution topography from the synthetic imagery and the prior, low-resolution terrain model. This model has the best possible consistency with the brightness data and the a-priori low-resolution model.
These are the equations we want to solve for ΔM. Once the updates are found, we can compute the high-resolution topography estimate as

  M̂ = M_0 + ΔM.   (19)

To finally solve (18) for ΔM, we assume that the a-priori uncertainties of the components of ΔM, measuring our tolerance on the expected variation of the terrain, are Gaussian and mutually independent, all with variance σ_M²:

  C_ΔM = σ_M² I.   (20)

As far as the estimated partial derivative fields ΔX and ΔY are concerned, we perform a transformation of the gradient at each location (x, y):

  r̃ = W^{-T} r̂,   (21)

where W^T W = C_r. This means that the transformed gradient r̃ has uncorrelated, Gaussian components with unit standard deviation. To find the approximate maximum a posteriori (MAP) solution (the 'most probable' solution satisfying both data and a-priori assumptions), we assume that the correlations between the components of the gradients (the off-diagonal components of C_r) are small. In this case, we can find the MAP solution by minimizing the difference between the left-hand and the right-hand sides of equations (18), normalized with the noise standard deviation, and simultaneously minimizing the norm of ΔM, normalized with the a-priori standard deviation. This is done by minimizing the following expression with respect to ΔM:

  ε(ΔM) = (1/σ_D²) ||G ΔM − ΔX||² + (1/σ_D²) ||ΔM G' − ΔY||² + (1/σ_M²) ||ΔM||²,   (22)

where σ_D² is the variance of the noise on the data, and σ_M² is the a priori variance of the unknown model M.
It can be shown (see Appendix) that the ΔM minimizing (22) satisfies the normal equation

  (G^T G + (σ_D²/σ_M²) I) ΔM + ΔM G' G'^T = G^T ΔX + ΔY G'^T,   (23)

which is of the form A ΔM + ΔM B = C for matrices A, B and C with appropriate dimensions. This is a so-called Sylvester Equation, for which no simple closed-form solution exists, but for which efficient numerical algorithms are available (De Terán et al., 2018). The advantage of this formulation is a considerable gain in computational speed (see Harker and O'Leary, 2015).
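A minimal sketch of this Sylvester-equation solve, using SciPy's solve_sylvester and assuming square images with G' = G^T; the exact weighting in the paper's Appendix may differ in detail:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def solve_update(G, dX, dY, sigma_D, sigma_M):
    """Solve the normal equation (of the form A*dM + dM*B = C) for the
    topographic update dM, assuming square images and G' = G^T.

    G : gradient operator (NS direction); dX, dY : gradient residuals."""
    lam = (sigma_D / sigma_M)**2               # relative weight of the prior
    A = G.T @ G + lam * np.eye(G.shape[1])     # NS data term + prior term
    B = G.T @ G                                # EW data term (with G' = G^T)
    C = G.T @ dX + dY @ G                      # right-hand side from the data
    return solve_sylvester(A, B, C)            # solves A*dM + dM*B = C
```

Because the Sylvester solver works on the full matrices at once, a large area can indeed be inverted in a single call, consistent with the computational-speed argument above.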
Finally, based on the computed MAP model, the noise variance of the gradient data, and the prior variance of ΔM, we proceed with evaluation of the posterior uncertainties for ΔM, which is the final estimate of the reliability of the result. Here we exploit the fact that our algorithm is computationally very efficient, and use a Monte Carlo algorithm. The algorithm is iterative, and in each iteration it adds random Gaussian noise with variance σ 2 D to the data, and random noise with variance σ 2 M to the a-priori low-resolution terrain model, and carries out the inversion to produce a realization of ΔM. From these realizations a final uncertainty variance can be estimated.
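The perturb-and-invert Monte Carlo scheme can be sketched generically as below; `invert` stands for any inversion routine (a hypothetical callable here, not the paper's actual code):

```python
import numpy as np

def monte_carlo_uncertainty(invert, dX, dY, M0, sigma_D, sigma_M,
                            n_samples=100, seed=0):
    """Posterior uncertainty by perturb-and-invert Monte Carlo:
    perturb the gradient data (variance sigma_D**2) and the prior
    terrain (variance sigma_M**2), re-run the inversion, and return
    the pixel-wise standard deviation over the realizations."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_samples):
        dXp = dX + rng.normal(0.0, sigma_D, dX.shape)   # noisy NS gradients
        dYp = dY + rng.normal(0.0, sigma_D, dY.shape)   # noisy EW gradients
        M0p = M0 + rng.normal(0.0, sigma_M, M0.shape)   # perturbed prior DEM
        runs.append(invert(dXp, dYp, M0p))              # one realization
    return np.std(runs, axis=0)
```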

Numerical Experiments
To demonstrate and test our method, we created a synthetic test example to investigate whether our method and algorithm are able to reconstruct a known, controlled area. A set of topographic data from the LOLA mission plays the role of the "unknown high-resolution map", from which we computationally created an image and computed, by simple downscaling, a low-resolution version of the topographic map. The synthetic image was computed by subjecting the artificial high-resolution topography to illumination from a chosen illumination angle. It is important to note that, although this numerical experiment was originally inspired by a real area on the Moon, the scales in metres and the elevation angles were chosen to validate our method and may not match realistic values.
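A Lambertian rendering of a synthetic image from a topography grid, of the kind used to create the test data, can be sketched as follows (the sign conventions and the unit pixel spacing are illustrative assumptions):

```python
import numpy as np

def render_lambertian(topo, sun_dir, albedo=1.0):
    """Render a synthetic image of a topography grid under the
    Lambertian model: brightness = albedo * max(0, s . n) per pixel.
    sun_dir is a unit vector; pixel spacing is taken as 1."""
    gy, gx = np.gradient(topo)                          # NS and EW slopes
    N = np.dstack([-gx, -gy, np.ones_like(topo)])       # non-unit normals
    n = N / np.linalg.norm(N, axis=2, keepdims=True)    # unit normals
    return albedo * np.clip(n @ np.asarray(sun_dir), 0.0, None)
```

For a flat surface with the sun at zenith, every pixel reaches the full albedo value, as expected from equation (4).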
In Fig. 1A, we see a horizontally downscaled version of the real LOLA-based topographic data from an area on the Moon located around 25.41°N and 2.83°E, from SLDEM2015 at the Planetary Data System LOLA data node (http://imbrium.mit.edu/EXTRAS/SLDEM2015). The projection is simple cylindrical, as for all subsequent data used in this paper.
From this image, we used a down-sampling operator to downscale it by a factor of 40 and created a low-resolution topography (Fig. 1B).
To be able to perform inversion, we created a prior 'best guess' of the topography by transforming the low-resolution topographic data into a smooth, upscaled map with the same pixel size as the image (see Fig. 1C).
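The construction of such a smoothed, upscaled prior can be sketched as below; the bilinear interpolation and Gaussian smoothing width are illustrative choices, not necessarily those used for Fig. 1C:

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def make_prior(low_res_dem, factor, smooth_sigma=2.0):
    """Build an a-priori 'best guess' topography: upsample the
    low-resolution DEM to the image pixel size, then smooth it."""
    up = zoom(low_res_dem, factor, order=1)    # bilinear upsampling
    return gaussian_filter(up, smooth_sigma)   # remove blocky artefacts
```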
We now created two artificial images (Fig. 2) from Fig. 1, using two different illumination angles. Then the gradients in the North-South and East-West directions were estimated (Fig. 3) from the illumination of the artificial images. The direction of the light (see Fig. 4) determines which of the two components of the gradient, N-S or E-W, will have most weight in the calculations. Gradients in the image close to parallel with the solar illumination azimuth will be better determined than those with a near-90° illumination.

Fig. 10. Image data with incidence angle 52.00° and azimuth angle −60.40° (left (1)), and incidence angle 74.52° and azimuth angle 93.56° (right (2)), from LROC (NASA/GSFC/Arizona State University).
In the inversion we computed the high-resolution topography from the synthetic images (Fig. 2), using the down-sampled low-resolution topography (Fig. 1C) as a constraint. The result can be seen in Fig. 5.
To confirm the validity of this reconstruction, we computed synthetic images from the reconstructed topography. Fig. 8 shows the images for the two illumination angles. In the following section, we will look deeper into the uncertainties related to the reconstruction.

Uncertainty analysis
To investigate the uncertainty of the computed model in the numerical experiment, we carried out some tests (see Fig. 6). First of all, we have the posterior uncertainty, as shown in Fig. 7. This plot shows the total uncertainty at each surface point resulting from the combined effect of a 10% brightness error on the image, and a 10 m uncertainty on the a-priori, low-resolution model. Fig. 8 shows the elevation residuals (the difference between the MAP model (Fig. 5B) and the true model). The plot reveals important information about the dependence between illumination angles and the accuracy of the reconstruction.
The first observation is that the maximum total error is around 1 m, if we disregard edge effects, but the smoothness of the map means that relative errors between nearby points (e.g., 10-20 m spacing) are very small. The residual is largest where the topographic variations are most significant (at the rim of the big crater in the middle). The interplay between the illumination angles in creating these effects is not easily discerned. More information can be extracted from Fig. 11, showing the slope residuals in the NS and EW directions. Generally, the NS slope residuals are larger than those in the EW direction, and we see that the largest NS slope residuals are observed in the directions that lie between the two illumination azimuths. This is in accordance with our expectation that the accuracy of estimated slopes is best when gradients are near-parallel to the illumination vector.

Real data results
We now use real imagery data of an area centred at approximately 34°S, 166°E, in the Region of Interest (ROI) Mare Ingenii. The area spans approximately 1200 m × 1200 m.
The image data covering this area were provided by LROC, observations M115225180L and M121124338L (Fig. 10), with a resolution of 0.53 m/pixel horizontally and 0.57 m/pixel vertically. In Fig. 10, craters and other surface features are shown illuminated by sunlight with incidence angles of 52.00° and 74.52°, and azimuth angles of −60.40° and 93.56°, respectively. The LOLA elevation model of the area from 2015 can be seen in Fig. 11. The resolution is 512 pixels per degree, corresponding to 59.225 m per pixel.
To be able to perform the inversion, a smoothed version of the low-resolution topography (as in Fig. 1C), with the same pixel resolution as the images, was computed. After this, calculation of terrain gradients, followed by estimation of the high-resolution topography, was carried out. The result is shown in Figs. 12 and 13. Fig. 12 shows the entire area (around 1200 m × 1200 m), and Fig. 13 shows a smaller sub-area of 60 m × 60 m, selected to better show detailed surface features and landforms. Here one can see the smallest surface features that the high-resolution elevation map is capable of detecting. The topography of small craters with diameters ranging from ~25 m down to ~3 m can easily be observed (see Fig. 14).
To verify our real-data results, we calculated the posterior elevation uncertainties, as shown in Fig. 15a and b. Input to the calculations is the prior uncertainty of the high-resolution elevation DEM, and an estimate of the uncertainty of the image data. The prior uncertainty of the elevation DEM is not the 10 cm vertical precision of the LOLA data reported by Mazarico et al. (2014) and Barker et al. (2016), but rather a measure of the expected difference between the smoothed, upscaled LOLA model and the desired high-resolution model. These differences can be significant, and in this study we assume a standard deviation of 300 m. This is on the conservative side, and the high value is chosen to avoid an overly strong influence from the prior. Fig. 15a shows an absolute posterior uncertainty of a few tens of centimetres. However, the uncertainty of relative elevations between nearby points is much smaller: Fig. 15b shows a histogram of 100 Monte Carlo samples of the posterior distribution of the elevation difference between two points 10 m apart (both points at North/South coordinate y = 550 m, and at East/West coordinates x = 500 m and x = 510 m, respectively). We see that this relative elevation difference has a standard deviation of ~5 cm, showing that surface features with horizontal dimensions of metres or tens of metres are reconstructed with high fidelity.
To further demonstrate the validity of our results, we computed the posterior data uncertainty. The result is shown in Fig. 16, where it is seen that the posterior data uncertainty is everywhere consistent with the assumed noise level of the image data. As we consider only two illumination angles in this study, some anisotropy will in some cases distort the crater rims, amplifying their height around the direction of illumination. We have already observed this effect in the slope residuals in Fig. 9. In our real-data examples, anisotropy appears to be negligible, to which the relatively large angle between the two directions of illumination contributes.
Since the original production of the LOLA-based digital elevation models, there have been significant improvements, as presented by Barker et al. (2021). These results can potentially be used as input to our method to obtain high-resolution topography maps of very high quality.
Future improvement of our methodology should include more accurate reflectance models. This study was based on the linear Lambertian model, but possible linearized versions of other models should be considered.

Conclusions
In this study, we developed a computational method that uses high-resolution imagery of planetary surfaces, together with low-resolution altimeter data, to create high-resolution Digital Elevation Models (DEMs) of planetary landforms. We applied the method to observations from the LRO Camera (LROC), constrained by topographic maps derived from the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar Reconnaissance Orbiter (LRO), to create a high-resolution DEM of Lunar landforms.
Our method is based on probabilistic, linear inverse theory, based on the Sylvester Equation, and it shows high computational efficiency. The method works on multiple images and it incorporates albedo variations. We avoid/reduce the use of arbitrary tuning parameters through our probabilistic formulation where all weighting of data and model parameters is based on prior information about image noise and expected deviations of the computed model from low-resolution altimeter data.
In our analysis of Lunar data, we used LOLA-based altimeter data with a resolution of 512 pixels per degree, corresponding to ~60 m per pixel. These data were used as a constraint in our inverse problem formulation to compute a final topographic map from images with a resolution of 0.53 m per pixel horizontally and 0.57 m per pixel vertically. The resulting topographic map was upscaled to the same high resolution.
Our formulation provides not only high-resolution maps but also more reliable evaluation of the uncertainties of the results. The method offers a way of studying finer details of regions of interest without the need for expensive engineering campaigns.
Future perspectives of this work are to improve the capabilities of this method to map formations of more sophisticated and challenging shapes and smaller sizes, as well as using the colour information of images to study rock-physical properties of planets. LOLA-based DEMs with higher resolution are obvious sources of information to be integrated into our method to generate terrain maps of hitherto unseen detail.

Author statement
Both authors have contributed to the conceptualization, methodology, software, validation, formal analysis, data curation, visualisation, writing, reviewing & editing.