Shape reconstruction from images: Pixel fields and Fourier transform

We discuss shape reconstruction methods for data presented in various image spaces. We demonstrate the usefulness of the Fourier transform in transferring image data and shape model projections to a domain more suitable for shape inversion. Using boundary contours in images to represent minimal information, we present uniqueness results for shapes recoverable from interferometric and range-Doppler data. We present applications of our methods to adaptive optics, interferometry, and range-Doppler images.


1. Introduction. It is an early lesson in mathematics that many problems are considerably easier to handle by transferring the variables and functions to another domain via, e.g., the Fourier transform (FT). Here we show that this principle can be used in reconstructing the shape (and dynamics) of a body from data presented in various image spaces. Shape reconstruction can actually be seen as a form of model-based image processing, or de-noising with strong prior constraints: from noisy images, we determine a model of the object seen in them, and this model can be used to reconstruct the original images (compare Figs. 1 and 2). Such an interpretation does not even necessitate a unique object model (for viewing geometries other than those of the original images).
Our approach is applicable to any shape modelling from images; here we use images obtained with astronomical instruments as our application examples. Image-resolved astronomical data are usually presented as pixel image fields defined by generalized projection operators [7]. Such images are obtained by, e.g., adaptive optics or radar.
The viability of shape reconstruction from the boundary curves of object projections in optical images was investigated in [8,9]. For completeness, we show below that the shape can also be uniquely defined from the boundary contours of its range-Doppler images under some mild conditions, and that the uniqueness results of optical images can be extended to interferometric images. The boundary extraction methods have the advantage that they are independent of both the scattering model of the target and the often erroneous interior pixel intensity values. On the other hand, they can be problematic as the image boundaries are not always clearly defined due to blurring and other imaging errors.
An efficient way to use pixel values is to make a continuous (complex-valued) image function of a pixel field or a model projection via the Fourier transform. As we show below, this explicitly retains most of the weight on the image boundary curves as well. We present analytical Fourier transforms of projected images in Sect. 2. We describe the FT reconstruction method in Sect. 3, and discuss its advantages. Applications in adaptive optics, interferometry, and range-Doppler radar (with uniqueness results) are discussed in Sect. 4, and we sum up in Sect. 5.
2. Fourier transforms of images of polyhedral models. Let us first consider the usual approach of fitting the pixel values directly, without the FT. The model representation is a polyhedron that is projected onto some projection plane (ξ, η) ∈ R² via a linear transformation [7,8]. Let P : R³ → R² be the operator for this linear mapping. Then the projection PT_i of each facet T_i is assigned a brightness factor B_i depending on the visibility, attitude, and surface scattering or radiation flux model of the facet. The scattering model S(µ, µ_0, α) is typically dependent on the cosines µ and µ_0 between the surface normal and, respectively, the viewing and illumination directions, and on the angle α between the latter two [7]. Since the projection cosine µ is traditionally taken as a fixed factor in S, we write S(µ, µ_0, α) = µ S̃(µ_0, α). As far as their role in B_i is concerned, models of thermal surface radiation flux are essentially similar to the scattering models, except that they have a time-lag component as discussed below. For each point on the projection, we assign a factor I(ξ, η) ∈ {0, 1}, a piecewise constant integer function encoding the visibility and illumination (VI) condition of the point (ξ, η). This constitutes the generalized projection or image mapping. In principle, one could determine the exact projection polygons inside which I(ξ, η) = 1 by considering all intersections due to occluding facets [8]. In practice, this is unnecessarily laborious, so we use the separate projections of the original polyhedron facets. Each facet is checked for VI as a whole by ray-tracing (using the centroid of the facet), and the level of resolution can be controlled at will by dividing a facet into subfacets; for example, the radar imaging process usually requires greater accuracy than the thermal infrared.
Thus we approximate, for each facet T_i, the VI part of its projection by the whole projected facet, weighted by I_i. Two polygons in a plane (a facet projection of a polyhedral shape model and a pixel frame) overlap if any of their boundary lines intersect, or if any vertex of either is inside the other. For such cases, one finds all the intersection points, if any, and thus determines the boundary lines of the overlap polygon. The sum of the areas of all the overlap polygons inside a pixel frame, each multiplied by the factors B_j I_j of the corresponding facet, then determines the model intensity P_i^mod of the pixel. The derivatives w.r.t. the vertex coordinates of the polyhedron, and hence the parameter gradients for efficient optimization, follow directly from this. We find the solution that minimizes ‖P_obs − P_mod‖, where P_obs/mod are the vectors that contain the observed and modelled pixel intensities. Since each feasible facet-pixel overlap pair must be checked, this is essentially an N² process, although the number of function-value computations is smaller than in the N² process of the FT sampling below. We remark that it is possible just to sample the B_i at various points on the surface (or the image plane), and then use these samples to produce the model pixel intensities. However, though fast in computing the forward problem, this is not efficient in shape reconstruction. This is because there are then no analytical partial derivatives with respect to parameters related only to the position in the image plane (such as model size and offset factors, unless there is a significant point-spread function), and other shape parameters have derivatives only through the orientation of the local model surface patches.
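The facet-pixel overlap area above can be sketched with a standard Sutherland-Hodgman clipping step; this is an illustrative implementation under our own naming, not the authors' code:

```python
def clip_half_plane(poly, p, q):
    # Keep the part of polygon `poly` (list of (x, y) vertices, CCW)
    # lying on the left of the directed line p -> q (one clipping step).
    def side(v):
        return (q[0] - p[0]) * (v[1] - p[1]) - (q[1] - p[1]) * (v[0] - p[0])
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        sa, sb = side(a), side(b)
        if sa >= 0:
            out.append(a)
        if (sa > 0 and sb < 0) or (sa < 0 and sb > 0):
            t = sa / (sa - sb)  # intersection parameter on segment a-b
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

def overlap_area(facet, pixel):
    # Clip the projected facet against each edge of the (convex, CCW)
    # pixel frame, then take the shoelace area of what remains.
    poly = list(facet)
    for i in range(len(pixel)):
        poly = clip_half_plane(poly, pixel[i], pixel[(i + 1) % len(pixel)])
        if not poly:
            return 0.0
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                         - poly[(i + 1) % len(poly)][0] * poly[i][1]
                         for i in range(len(poly))))
```

Summing `overlap_area` over all feasible facet-pixel pairs, weighted by B_j I_j, gives the model pixel intensities described above.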
Consider now doing the computations in the Fourier-transformed domain, the plane (u, v) ∈ R². For practical purposes, we define our (two-dimensional) Fourier transform of a function f(ξ, η) somewhat differently from the standard form:

F f(u, v) = ∫_{R²} f(ξ, η) e^{i(uξ+vη)} dξ dη.

The point here is that any integral transform that has the same basic properties as the FT is suitable for our purposes, so the constants and normalizations used in the definition are irrelevant. Indeed, some completely different transforms may be just as good, but we have found the FT approach to converge very well. Also, interferometric data are typically samples of a Fourier transform, given in the frequency (u, v)-plane, so the FT approach is ideal for such cases.
Letting T be the set of facets forming the polyhedron, the transform integral can be written, by Green's theorem, as a sum of boundary line integrals,

F(u, v) = Σ_{T_i ∈ T} B_i Σ_j I_ij(u, v),

where each I_ij(u, v) is a line integral along the segment Γ_ij, and the Γ_ij are the boundary line segments defining the VI part of the projected facet PT_i, oriented counterclockwise. In practice, these are the edges of entire projected facets (or subfacets), included in the sum depending on their I_i. The facet factor B_i can also include the intrinsic lightness (albedo) of the local surface, and this can be left as a free parameter (or a function over the surface). For (u, v) = (0, 0), the Fourier transform is the total brightness of the image:

F(0, 0) = Σ_{T_i ∈ T} B_i Σ_j I_ij(0, 0);

i.e., the last sum is the area of the VI part of PT_i. For a line segment Γ_ij with end points (a, b) and (c, d), I_ij(u, v) can be written in closed form by substituting the line equations ξ(t) = a + t(c − a), η(t) = b + t(d − b), t ∈ [0, 1], into the line integral and integrating over t. The summation over the I_ij(u, v) can be reordered and sped up by noting that each polygon edge in the interior is shared by two polygons, so a new factor B̃ can be taken to be the difference between the two B_i, and the edge term is computed only once. Note that this explicitly shows why most of the information in the image is indeed from the limb and shadow boundary curves discussed in [8,9]. The values of B̃ for interior edges are usually close to zero (indeed, they vanish for the geometric scattering B_i = const.), so most of the weight is on the boundary edges. From the above expression, it is obvious that the integral has continuous partial derivatives with respect to the projected vertices, which are linear combinations of the original vertices of the facet. Thus the Fourier transform has continuous partial derivatives with respect to the facet vertices.
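As an illustration, the Green's-theorem edge sum for the indicator function of a single CCW polygon can be sketched as follows, using the convention e^{i(uξ+vη)} and the segment parametrization above (an independent sketch; the function name and the handling of the u = 0 case are ours):

```python
import numpy as np

def polygon_ft(verts, u, v):
    # FT of the indicator function of a simple CCW polygon,
    #   F(u, v) = ∫∫ exp(i(u ξ + v η)) dξ dη,
    # reduced by Green's theorem to a sum over the boundary edges.
    verts = np.asarray(verts, dtype=float)
    starts, ends = verts, np.roll(verts, -1, axis=0)
    if u == 0.0 and v == 0.0:
        # F(0, 0) is just the polygon area (shoelace formula)
        return 0.5 * np.sum(starts[:, 0] * ends[:, 1] - ends[:, 0] * starts[:, 1])
    sum_dx, sum_dy = 0.0j, 0.0j
    for (x0, y0), (x1, y1) in zip(starts, ends):
        dx, dy = x1 - x0, y1 - y0
        K = u * dx + v * dy                     # phase change along the edge
        phase = np.exp(1j * (u * x0 + v * y0))  # phase at the edge start
        seg = phase if abs(K) < 1e-12 else phase * (np.exp(1j * K) - 1.0) / (1j * K)
        sum_dx += dx * seg
        sum_dy += dy * seg
    # choose the Green's-theorem potential along whichever axis is usable
    return sum_dy / (1j * u) if u != 0.0 else -sum_dx / (1j * v)
```

For the unit square this reproduces the separable closed form ((e^{iu} − 1)/(iu))((e^{iv} − 1)/(iv)), and F(0, 0) equals the area.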
3. Fourier transform method for model reconstruction. Our approach is largely independent of shape representations. For their effectiveness and simplicity, we prefer octantoids [9]. An octantoid is a surface p ∈ R³ that can be parametrized in the form

x(θ, ϕ) = e^{a(θ,ϕ)} sin θ cos ϕ,
y(θ, ϕ) = e^{a(θ,ϕ)+b(θ,ϕ)} sin θ sin ϕ,
z(θ, ϕ) = e^{a(θ,ϕ)+c(θ,ϕ)} cos θ,

where a, b and c are conveniently expressed as linear combinations of the (real) spherical harmonic functions Y_l^m(θ, ϕ), with coefficients a_lm, b_lm and c_lm, respectively. Note that (θ, ϕ), 0 ≤ θ ≤ π, 0 ≤ ϕ < 2π, are coordinates on the unit sphere S² parametrizing the surface; they do not describe physical directions such as polar coordinates. As usual, the Laplace series for a, b, c are useful for keeping the number of unknowns small and the surface smooth; separate vertex parameters can be used as well, but this usually necessitates heavy regularization. The drawback of this representation is its globality: one might want less smoothness regularization in some regions than in others. When more local control is desired (e.g., for a feature clearly visible in fly-by images or in radar), the representation (5) may be expanded with spherical splines or spherical wavelets to provide local detail without affecting the global shape.
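A minimal sketch of the octantoid mapping (5), with the functions a, b, c passed in as callables (names ours; in a real model they would be truncated Laplace series):

```python
import numpy as np

def octantoid_point(theta, phi, a, b, c):
    # Map parameters (theta, phi) on the unit sphere to a surface point,
    # following Eq. (5); a, b, c are callables giving the log-shape
    # functions (e.g. truncated spherical harmonic series).
    return np.array([
        np.exp(a(theta, phi)) * np.sin(theta) * np.cos(phi),
        np.exp(a(theta, phi) + b(theta, phi)) * np.sin(theta) * np.sin(phi),
        np.exp(a(theta, phi) + c(theta, phi)) * np.cos(theta),
    ])

# With a = b = c = 0 the octantoid reduces to the unit sphere.
zero = lambda t, p: 0.0
```

Evaluating the map on a (θ, ϕ) grid and triangulating the grid yields the polyhedral mesh used in the projections of Sect. 2.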
The representation (5) is convenient for asteroid shapes, as asteroids are often geometrically starlike or close to it. This suggests that we can use the deviation from starlikeness as a regularization measure. To this end, we define

γ² = Σ_{l,m} (b_lm² + c_lm²),

which vanishes exactly when the octantoid is given in starlike form. In many cases, we can explicitly set b = c = 0 for starlike shapes, but it is often useful to employ γ instead, as this gives more room for shape adjustment. The parametric representation (5) using a finite number of spherical harmonics is global in the sense that a change in each parameter affects the shape globally. This has a strong regularizing effect, which is usually beneficial as the available data are often incomplete and noisy.
We can now write the FT reconstruction procedure as follows:
1. For each data image D_i and observation geometry E_i, the two-dimensional Fourier transform FD_i(u, v) is computed at sample points (u_ij, v_ij), j = 1 . . . N_i, on the spatial frequency plane. For pixel images, the transform can be computed by Eq. (2) when considering each pixel as a polygon, or by using fast Fourier transform functions for chosen grid points (but the time spent for FD_i(u, v) is irrelevant as most of the computations are for the trial models).
2. The shape parameters a_lm, b_lm, and c_lm are initialized such that Eq. (5) represents a sphere approximately equal in size to the target.
3. For each observation geometry E_i, the Fourier transform FM_i(u, v) of the corresponding projection image M_i of the model is calculated as described in the previous section, together with the partial derivatives of FM_i(u, v) with respect to the shape parameters.
4. An objective function χ² is formed with the square norm of the complex-valued FT fit error:

χ² = Σ_i Σ_j | FD_i(u_ij, v_ij) − e^{i(u_ij o_i^x + v_ij o_i^y)} T_i(u_ij, v_ij) FM_i(u_ij, v_ij) |² + λ γ,

where (o_i^x, o_i^y) is the offset between the data image D_i and the model image M_i, λ is a regularization weight, and, by the convolution theorem, T_i is the Fourier transform of the point-spread function of the imaging system. The regularization term γ corresponds to Eq. (6). Additional regularization measures are also possible, e.g., local convexity, gravitational slope, or the inertia tensor [8,9]. Usually γ is the best choice, as the physical regularization methods tend to restrict the shape too severely during the initial convergence.
In addition, the intensity level of each data and model image must be normalized. Often it is enough to divide both model M_i and data image D_i by their respective mean intensities. Equivalently, writing F̂D_i = FD_i/⟨|FD_i|⟩ and F̂M_i = FM_i/⟨|FM_i|⟩, we have (cf. [6]) a relative fit measure χ²_rel defined with these normalized transforms, where the mean ⟨·⟩ is taken over {(u_ij, v_ij)}, j = 1 . . . N_i. However, sometimes it is better to allow the intensity level of each M_i to be a free parameter and use χ²; this is useful in the case where the mean intensity of D_i is corrupted by excessive noise in the image background (typical for range-Doppler images). This causes the χ²_rel-based solution to have a slightly wrong size to compensate for the "diluted" normalized intensity level inside the actual object region of D_i.
5. The shape parameters a_lm, b_lm and c_lm, the spin vector direction, and the offsets (o_i^x, o_i^y), as well as the possible intensity level factors C_i, minimizing χ² are determined with a suitable method such as the Levenberg-Marquardt algorithm.
The crux of the idea is that the Fourier transform of the plane-projected mesh of a model polytope is simple to compute analytically, and the partial derivatives with respect to vertex coordinates exist and can be given straightforwardly in closed form. Since F⁻¹F = I, the FT retains all the information in the original image. We can list some particular advantages of the FT approach:
• The FT method algorithm is simpler than direct pixel fitting, and it converges robustly.
• Information at any point in the frequency plane comes from all points in the image plane, which increases robustness.
• FT sampling can be used to filter the image information at different frequency (i.e., resolution) scales.
• Point-spread functions can easily be taken into account.
The downside of the FT method is the increased computation time and complexity as we move from the sparse boundary curve to the two-dimensional Fourier transform of the projection image.
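As a toy version of the offset determination in step 5: for noiseless data the offset only multiplies the model transform by a phase factor, so (for offsets small enough that the phases do not wrap) it can be estimated by linear least squares on the phase differences. This is a simplified stand-in for the full Levenberg-Marquardt fit, with names of our own choosing:

```python
import numpy as np

def estimate_offset(FD, FM, u, v):
    # Least-squares estimate of the image-plane offset (ox, oy) from the
    # phase difference of data and model transforms, assuming the
    # convention FD ≈ exp(i(u*ox + v*oy)) * FM and unwrapped phases.
    dphi = np.angle(FD / FM)           # per-sample phase difference
    A = np.column_stack([u, v])        # linear model: dphi = u*ox + v*oy
    sol, *_ = np.linalg.lstsq(A, dphi, rcond=None)
    return sol                         # (ox, oy)
```

In the full procedure the offsets enter the χ² residuals directly and are optimized jointly with the shape parameters; the phase-only estimate above is nevertheless a useful initialization.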
For image fields with N relevant pixels, the boundary curve approach pertains to essentially √N boundary elements and is basically an N process, whereas the FT method is N². In all methods, much of the computing time is, of course, spent on the same ray-tracing computations. In any case, the polyhedral and Fourier computations are trivially parallel: each facet and each point on the (u, v)-plane can be considered separately, so the computations can be implemented very effectively on a graphics processing unit.

4. Applications.
4.1. Adaptive optics images. The resolution of even the best telescopes is limited not by their optics but by the Earth's atmosphere. The incoming wavefront is distorted by atmospheric turbulence, causing speckle patterns in the image. In effect, the angular resolution of a single telescope is limited to about 0.5 arcseconds, making disk-resolved imaging of asteroids nearly impossible. Adaptive optics tries to correct the atmospheric distortion with the help of a computer-controlled deformable mirror. The effects of the atmosphere are mitigated by means of a reference star: the degradation of the wavefront from a known star is analyzed, and the mirror is adjusted to counter the effect. Further, the raw image obtained this way can be post-processed with various image-processing algorithms. With adaptive optics, disk-resolved imaging with angular resolution approaching the diffraction limit becomes possible. However, the improved precision comes at a cost. The most reliable information in an adaptive optics image is the boundary curve, as the interior contains artefacts caused by the imaging process. In addition, boundary extraction methods often introduce artefacts of their own, especially if the boundary pixels are fuzzy. Thus, by using the image field directly, the boundary extraction may be bypassed altogether, and even the raw image can be used without separate image processing.
As an example, we consider the large main-belt asteroid 41 Daphne. We used 14 adaptive optics images obtained with the Very Large Telescope array at ESO, with a pixel size of approximately 0.01 arcseconds. This corresponds to 7–12 kilometers per pixel, as the geocentric distance varies between images. Each image is transformed to the frequency plane and sampled on a rectangular grid consisting of 8064 points. In addition, we included several lightcurves, which mostly refined the spin state solution. Interestingly enough, they had no discernible effect on the actual shape solution, as the available AO images seem to constrain the shape adequately. The solution is similar to the one in [8] obtained with image boundaries. The adequacy of the AO data can also be seen in the curve in [8] depicting the lightcurve fit as the weight of the AO data is increased: the lightcurve fit does not deteriorate much along the curve.
Our model consists of a triangular mesh with 1568 facets, with vertex locations defined by 243 shape parameters. The highest degree of spherical harmonics in the reconstruction is nine. In addition to the shape parameters and the direction of the rotation axis, we also determined optimal offset parameters for each image, since the object's location on the image plane is unknown. The light-scattering model used in the reconstruction is not important, since most of the information is in the boundaries. We chose the standard combination of the Lommel-Seeliger and Lambert laws [6],

S(µ, µ_0) = C µµ_0 (1/(µ + µ_0) + c),

where c weights the Lambert part and C is a free constant for each image for adjusting the intensity level of the model to match that of the data. More complicated scattering models such as Hapke's can be used, but this has no effect on the result, as the interiors of the object are in any case not reliable in the images. In Fig. 1 we show some of the adaptive optics images. Projections of the reconstructed model are presented in Fig. 2. It is interesting to note that the nonconvex details visible in the AO images (Fig. 1) are also apparent in the reconstructed model, so the result could indeed be seen as a form of image processing as well.
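A sketch of the combined law (the form of the combination is as reconstructed here; the Lambert weight c is a free choice, and the per-image constant C is omitted):

```python
import numpy as np

def ls_lambert(mu, mu0, c=0.1):
    # Combined Lommel-Seeliger + Lambert brightness per projected area:
    # mu, mu0 are the cosines of the viewing and illumination angles,
    # c the (assumed) relative weight of the Lambert term.
    mu, mu0 = np.asarray(mu, float), np.asarray(mu0, float)
    return mu * mu0 / (mu + mu0) + c * mu * mu0
```

Note the limb behavior: the brightness vanishes as µ → 0, so the model contributes nothing outside the projected outline, consistent with most of the information residing in the boundary.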

4.2. Interferometry in the thermal infrared: Heat diffusion equation and its inversion model. Advances in ground-based thermal infrared interferometry are now making it possible to obtain angular resolution approaching the milliarcsecond range, corresponding to tens of pixels across typical 100-km size class targets in the main belt. Each antenna pair in the array making up the interferometer samples the two-dimensional Fourier transform of the plane-of-sky thermal flux density [13]. Since the observable is already in the form of a Fourier transform, this data type is especially suitable for the FT approach to the inverse problem.
After enough Fourier transform samples are obtained, the transform may in principle be inverted to form a "dirty" image, from which the actual final thermal image is reconstructed as a pixel field with the aid of various iterative algorithms and prior assumptions [13]. However, in the case of asteroids, we can use the mathematical model of the target as a very strong prior constraint, so the image-forming step can be discarded altogether and the raw Fourier data used directly for reconstructing the three-dimensional model. This is beneficial especially in cases where the Fourier plane is too sparsely sampled to form the image. In this sense, asteroids are particularly suitable for interferometric observations. We will discuss the practical aspects of the interferometric inversion procedure elsewhere, and present the main theoretical points here.
Thermal-range interferometry differs from optical wavelengths in that the radiation of the surface cannot be described as simply as with light-scattering models. However, we note here a uniqueness result concerning the optical-equivalent region, meaning optical wavelengths or other domains where the boundary curves of the image of the radiating target are the same as those in the optical region. This is the case with, e.g., zero thermal inertia, when the surface releases the received radiation energy immediately: a shape uniquely determined by the boundary curves of its images in the optical-equivalent region [8,9] is also uniquely determined by interferometric data covering the (u, v)-plane at the corresponding viewing geometries.
Proof. This is an immediate corollary of F⁻¹F = I: due to F⁻¹, full coverage of the (u, v)-plane data at different viewing geometries uniquely produces a full set of (ξ, η) images for which the uniqueness results are derived. Note that one does not need to produce images (and extract the boundaries) from the FT data by F⁻¹; the result applies to the (u, v)-data directly, since no other model can match the boundaries if the images are constructed (we assume that the light-scattering model is correct). This applies to image field data directly, of course.
The resolution provided by thermal-range interferometers makes it possible to build detailed 3D models of asteroids if the thermal radiation and conduction process on the surface and in the subsurface layer can be modelled with sufficient accuracy. Several models of varying complexity exist for asteroid heat radiation [4].
As the facets in the polyhedral mesh of the model are usually much larger than the diurnal thermal skin depth, it is reasonable to assume that heat is conducted only in the direction of the facet normal. Hence it is enough to solve the onedimensional heat diffusion equation with a radiation boundary condition for each facet, with shadowing and mutual heating between facets taken into account.
Calculating the heat flux of a polyhedral model consists of solving the heat diffusion equation numerically for each facet and each time step. Even for moderately sized models, with the shadowing and the diurnal solar radiation variation taken into account, this can be prohibitively expensive computationally.
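To illustrate the finite-difference route for a single facet, here is a toy explicit scheme for the 1-D diffusion equation with an energy-balance surface node and an insulated bottom; all material constants are illustrative assumptions, not values from the text:

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def surface_temperature(flux, absorbed=0.9, emissivity=0.9,
                        K=0.01, rho=1500.0, cp=600.0,
                        nz=10, dz=0.01, dt=500.0, steps=20000):
    # Explicit finite differences for rho*cp*dT/dt = K*d2T/dz2 with a
    # radiative energy balance at the surface node and an insulated
    # bottom; the regolith-like constants are illustrative only.
    T = np.full(nz, 200.0)
    alpha = K / (rho * cp)
    assert alpha * dt / dz**2 < 0.5          # explicit-scheme stability
    for _ in range(steps):
        Tn = T.copy()
        # surface: absorbed insolation - thermal emission + conduction
        Tn[0] += dt / (rho * cp * dz) * (
            absorbed * flux - emissivity * SIGMA * T[0]**4
            + K * (T[1] - T[0]) / dz)
        Tn[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        Tn[-1] += alpha * dt / dz**2 * (T[-2] - T[-1])  # insulated bottom
        T = Tn
    return T
```

With constant insolation the profile relaxes to the uniform equilibrium temperature (absorbed·flux/(emissivity·σ))^{1/4}; with time-varying insolation, shadowing, and thousands of facets, the cost of such time stepping motivates the Fourier-series alternative below.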
An elegant alternative to the finite-difference method is the Fourier-series method [1,11], briefly described below in view of our application. As the fast Fourier transform is computationally cheap, the Fourier-series approximation is much faster than the finite-difference method for solving the diffusion equation. The downside is that the approximation originally pertains to moderate heat variations, and the mutual heating of facets must be ignored. However, even in this case the Fourier-series approximation will still provide good initial values for the finite-difference method, decreasing the computation time.
From the point of view of the inverse problem, the Fourier-series method, though an approximation, is nevertheless quite adequate for describing the observed radiative flux (thermal brightness) on the object's surface. This is because, again, the bulk of the information comes from the boundary of the target (a heated surface patch vs. cold background). Just as in the adaptive optics case, the accuracy of the model of the interior intensity distribution is not crucial.

4.2.1. Diurnal cycle and Fourier series.
We present here the main points of the Fourier-series thermal approach. We define the insolation factor ins(µ_0, p) for a point p on the surface [10] as µ_0 when the point is illuminated and zero otherwise. This is obviously a cyclic function of φ, the rotation angle of the asteroid around its axis (when the asteroid is effectively stationary during one rotation; i.e., its rotation period is much shorter than the orbital one), so it can be expanded as a Fourier series:

ins(µ_0, p) = Σ_n d_n(p) e^{inφ}.

For a convex body, the function ins(µ_0, p) is continuous. If the minimum and maximum limits of φ for µ_0 ≥ 0 are, respectively, φ_rise and φ_set for a point p, we have

d_n(p) = (1/2π) ∫_{φ_rise}^{φ_set} µ_0(φ) e^{−inφ} dφ.

The coefficients d_n(p) can be readily computed analytically to any order n. For a nonconvex body, we check the interval [φ_rise, φ_set] for facets rising above the local horizon and blocking the Sun at some φ (dividing the interval into N epochs). Then we write the Fourier integral as above to obtain d_n(p), with the integrand set to zero between shadow epochs φ_in and φ_out (this creates derivatives w.r.t. shape parameters). Since the integration limits φ_in and φ_out are approximate, one can just as well compute the integral by taking the FFT of the values of ins(µ_0, p) at 2^M equidistant epochs distributed inside [0, 2π]; this gives the Fourier coefficients up to order 2^{M−1}. The derivatives of d_n w.r.t. parameters can also be computed simply by taking the FFT of the derivatives of ins(µ_0, p). In fact, one can use the FFT for convex bodies as well as an approximation. In the nonconvex case, ins(µ_0, p) is discontinuous for some p, so the Fourier representation is more approximative than in the convex case.
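The FFT route can be checked on a convex-case toy point with ins(φ) = max(cos φ, 0), whose exact low-order coefficients are d_0 = 1/π and d_1 = 1/4 (a sketch; the test function is our choice):

```python
import numpy as np

M = 8
N = 2**M
phi = 2 * np.pi * np.arange(N) / N
ins = np.maximum(np.cos(phi), 0.0)   # insolation of a test point

# FFT sampling approximates d_n = (1/2pi) ∮ ins(phi) e^{-in phi} dphi;
# with numpy's e^{-2 pi i mn/N} convention, d[n] holds the coefficient
# of order n (for 0 <= n < N/2).
d = np.fft.fft(ins) / N
```

For this kinked but continuous function the sampled coefficients converge to the exact ones at roughly 1/N² rate, which is why a moderate 2^M suffices in practice.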

4.2.2. Heat diffusion equation and IR flux. The heat diffusion equation (with density ρ, heat conductivity K, specific heat capacity c_p, and temperature T) is given by

ρ c_p ∂T/∂t = K ∂²T/∂ξ²,

where the vertical direction ξ in the surface material is aligned with the direction of the surface normal. This can be solved analytically with a suitable boundary condition, a periodic ansatz, and by assuming that the temperature variation ∆T is small compared to the mean temperature T_0. Then we can linearize the equations and solve for ∆T as a Fourier series [1,11]. We define damping factors Ψ_n and phase lags ∆φ_n by

(10) Ψ_n = (1 + 2Θ_n + 2Θ_n²)^{−1/2}, ∆φ_n = sgn(n) arctan(Θ_n/(1 + Θ_n)),

where Θ_n ∝ √|n| is the dimensionless thermal parameter of order n, ω is the rotation rate of the asteroid, ε the material emissivity at thermal wavelengths, A the albedo (lightness, 0 ≤ A ≤ 1, of the surface material), σ the Stefan-Boltzmann constant, and F_⊙ the solar flux at the asteroid. Then the radiated IR flux at a point p_i (the centroid of facet i) per surface area is εσT(p_i)⁴, where the temperature is obtained from the insolation coefficients d_n(p_i) damped by Ψ_n and lagged by ∆φ_n. The assumption ∆T/T ≪ 1 does not always hold. This depends on the thermal inertia Γ, which is defined as

Γ = √(ρ K c_p).

Thermal inertia measures how resistant the surface is to diurnal temperature changes; an object with high thermal inertia is cooler with smaller diurnal temperature variations than an object with relatively small thermal inertia. It follows that for objects of low thermal inertia ∆T/T ≪ 1 no longer necessarily holds, and the FFT solution deviates from the numerical finite-difference solution of the diffusion equation [4]. This is illustrated in Fig. 3. However, in our simulations this discrepancy between the solutions has an indiscernible effect on the final shape solution. The shapes of the thermal curves are essentially the same for the two methods, and the largest temperature differences do not matter in the inversion.
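The qualitative behavior of the damping factors and phase lags, in the form of Eq. (10) as reconstructed here (so the exact expressions are an assumption of this sketch), is easy to illustrate:

```python
import numpy as np

def damping(theta_n):
    # Damping factor Psi_n and phase lag dphi_n for one Fourier order,
    # with theta_n the (non-negative) thermal parameter of that order;
    # the formulas follow the reconstructed form of Eq. (10).
    psi = (1.0 + 2.0 * theta_n + 2.0 * theta_n**2) ** -0.5
    dphi = np.arctan(theta_n / (1.0 + theta_n))
    return psi, dphi
```

Large Θ_n (high thermal inertia, fast rotation, or high order n) strongly damps the temperature harmonic and pushes its phase lag toward arctan(1) = π/4, which is the familiar "afternoon shift" of the thermal maximum.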
The Atacama Large Millimeter/submillimeter Array (ALMA) is an interferometer array in the Chilean desert. The maximum spacing between the antennas is 16 kilometers, making a resolution of about 5 milliarcseconds possible at thermal wavelengths. Even if the detail present in many radar images is unattainable by thermal infrared observations, a thermal map of an asteroid surface is enough for shape reconstruction, as we demonstrate here.
To generate the simulated data, we considered a hypothetical asteroid at the heliocentric distance 1.5 AU, with a thermal inertia of 100 J m^(-2) K^(-1) s^(-1/2) and an albedo of 0.1. The pixel size of the instrument is assumed to be 10 milliarcseconds which, taking into account the distance of the target from the Earth, corresponds to approximately 10 kilometers per pixel. The plane-of-sky projections of the observed thermal flux of the model asteroid are illustrated in Fig. 4. The actual data are samples of the Fourier transforms of these projections. To avoid an "inverse crime", the data were generated with the finite-difference method, while the FFT method was used in the inversion. The original model and the reconstructed shape, parametrized by 108 parameters, are shown in Fig. 5.
4.3. Range-Doppler radar. With planetary radar observations, spatial resolution at the scale of ten meters is possible (Arecibo and Goldstone radars) [12]. Unfortunately, the power received by the radar is inversely proportional to the fourth power of the distance, severely restricting the range of possible targets. In range-Doppler imaging the object is resolved in range and in frequency. A nonzero radial velocity of a point on the surface of the object causes a frequency shift in the reflected signal proportional to the velocity. Thus the frequency resolution of a radar image depends on the apparent spin vector of the asteroid. Now the projection mapping (x, y, z) → (r, D) is [7]

r = (x cos ϕ + y sin ϕ) sin θ + z cos θ,  D = ω (x sin ϕ − y cos ϕ) sin θ,

where the radar direction in a coordinate system fixed to the asteroid (the z-axis aligned with the rotation axis) is (θ, ϕ), and the rotation rate of the asteroid is ω. The image mapping of this, unlike the previous ones, is many-to-one. The many-to-one mapping property and the depth vs. width image plane make visual image interpretation tricky; ridges and craters visible in the radar image are not necessarily physical features, but could also be artefacts due to the peculiar way the image is formed. For the scattering law we use a simple cosine law [12]:

B = C µⁿ,

where the specularity of the surface is measured by the exponent n.
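The projection mapping can be sketched directly; the sign convention of the Doppler term below is an assumption of this sketch:

```python
import numpy as np

def range_doppler(points, theta, phi, omega=1.0):
    # Range-Doppler projection of body-fixed points (z = rotation axis)
    # for radar direction (theta, phi) and rotation rate omega; the sign
    # convention of the Doppler term is our assumption.
    x, y, z = np.asarray(points, dtype=float).T
    r = (x * np.cos(phi) + y * np.sin(phi)) * np.sin(theta) + z * np.cos(theta)
    D = omega * (x * np.sin(phi) - y * np.cos(phi)) * np.sin(theta)
    return r, D
```

The many-to-one character is explicit here: all surface points sharing the same range and line-of-sight velocity land in the same (r, D) cell, and points on the rotation axis always have D = 0.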
We next state a uniqueness theorem on radar-based shape determination, based on boundary contours. In practice, the boundary curve method for radar images is not as simple to implement as for adaptive optics images. The uniqueness result here is not so much a prescription for reconstruction as a statement on the minimal information content of radar images (indeed, noisy radar images have most of their information on a boundary curve). It also shows formally what is apparent numerically: that the many-to-one mapping of the radar image is no hindrance to obtaining shape details.
For other uniqueness results on radar, see [7]. For details of the terminology and definitions of silhouettes and shape reconstruction see [8]. As in the other uniqueness results on shape reconstruction, we assume the geometry of the system to be known; i.e., the rotation speed and the direction of the rotation axis are given. By a free tangent we mean a tangent of a surface point that does not intersect any other parts of the body (except as a possible tangent of some other point), and a free direction on S 2 means that a half-line in that direction from a surface point does not intersect the body either.
The nearest radar image boundary ρ is the boundary of the radar image closest to the radar; i.e., the curve ρ(D) = min{r : (r, D) is in the image}, where r is the range of an image pixel from the radar and D is the Doppler width, D_min ≤ D ≤ D_max. Note that ρ(D) can be discontinuous. The nearest boundary is usually the clearest feature of a radar image, and if we use just ρ(D), we can disregard both pixel brightnesses and scattering models. The data and inverse problem model are thus robust. We also normalize, as in [7], D̃ := D/sin θ, and assume that the radar image plane (r, D̃) is defined for all directions ω = (θ, φ) ∈ S²; i.e., that the limits at θ → 0 and θ → π exist, even though in these directions D = 0 for all image points and the image contracts to a line in r.
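Extracting ρ(D) from a pixel image is a one-line operation per Doppler row; a minimal sketch (the thresholding scheme is our choice):

```python
import numpy as np

def nearest_boundary(image, threshold):
    # Nearest radar boundary rho(D): for each Doppler row, the smallest
    # range bin whose intensity exceeds the threshold (NaN where the row
    # is empty, since rho(D) may be discontinuous).
    above = image > threshold
    rho = np.argmax(above, axis=1).astype(float)  # first True per row
    rho[~above.any(axis=1)] = np.nan              # rows with no echo
    return rho
```

This uses only the presence of echo above the noise floor, so, as noted above, it is independent of pixel brightnesses and of the scattering model.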
The mapping S : S² → S², taking the radar direction ω to the normal Sω of the corresponding silhouette plane (see the proof below), is one-to-one and symmetric: S⁻¹ = S.
Theorem 4.3. The shape of a body for which each surface point has at least one direction pair consisting of a free tangent (either of its two directions) and a free direction is uniquely reconstructable from the nearest radar image boundaries obtained at radar directions covering the whole of S 2 .
Proof. The nearest boundary ρ(D) of a radar image in the direction ω defines part of a silhouette in the plane whose normal is Sω (hence the many-to-one mapping causes no ambiguity in this construction). The part occluded by the body is obtained at the opposite radar direction −ω (the silhouette planes Sω and S(−ω) coincide). Since the images are obtained at all ω ∈ S 2 , and each point on the surface is thus mapped onto at least one silhouette plane by the assumption of the theorem, the body can be reconstructed as in [8] by taking the intersection of the cylinder continuations of the silhouettes in R 3 (this also takes care of possible discontinuous ρ(D) since each surface point is covered by at least one image boundary).
Figure 6. Simulated radar images. The range r increases from left to right, and the Doppler frequency D is measured vertically. The nearest boundary ρ(D) is the leftmost boundary of each image.

Remark 1. The set R of bodies reconstructable from radar boundaries is a subset of the set T of bodies reconstructable from their silhouettes (these are tangent-covered bodies; i.e., each surface point has at least one free tangent [8]). The set C of convex bodies, reconstructable from integrated brightnesses on S², is obviously a subset of R. Thus we can arrange the sets of bodies reconstructable from various image boundary data in the sequence

C ⊂ R ⊂ T ⊂ G,
where G is the set of bodies reconstructable from their edge and shadow boundaries [8].
Among natural bodies such as asteroids, the set R is not much smaller than T or G (for example, a curved banana is in R). In other words, the radar boundary data define a typical shape almost as well as adaptive optics boundary data. The pixel brightnesses can be used to determine concave details (valleys and craters not visible in silhouettes). As Fig. 6 shows, there is much more information in full radar images than just the boundary. Indeed, the body that created those images is clearly not in T . However, general uniqueness results on such data (as well as full optical image fields) are difficult, and perhaps impossible, to prove.
As real observed data sets are currently unavailable to us, we have to rely on simulations to test our reconstruction algorithm. In Fig. 6 we show simulated radar images, obtained from the model in Fig. 7 (left) with added Gaussian blurring and noise. Our reconstructed model uses 108 shape parameters, and for each radar image, the optimal offset on the image plane and the scaling parameter are also determined during the optimization process. The goal here was to produce an intermediate-resolution model rather than a high-resolution one. In principle, the latter could be achieved by adjusting the individual vertices of the model after determining the function series, but usually the noise level sets limits for this; we will discuss this aspect elsewhere.

5. Discussion. We have completed the series of papers describing the theory and solution procedures of inverse problems of generalized projections. In [5,6], the solution of shape and spin reconstruction from integrated brightnesses was presented with uniqueness results, and the general concept of generalized projections was introduced in [7] with uniqueness results on some radar observation types. Boundary-curve solutions for images and the corresponding uniqueness theorems, special types of interferometric data, and methods for combining data from different source types were discussed in [8,9]. Here we have introduced an efficient way of solving the inverse problems of various image types with the Fourier transform approach, and presented uniqueness results on general interferometry and radar data. We now have practical and efficient theoretical and computational methods at our disposal for all the data types associated with generalized projections in astronomy: photometry (integrated brightnesses), adaptive optics or stellar occultations (images and their boundaries), radar, and interferometry. We plan to combine these into a general software package for data analysis.