Vision-Based Estimation of Small Body Rotational State during the Approach Phase

The heterogeneity of the small body population complicates the prediction of small body properties before the spacecraft's arrival. In the context of autonomous small body exploration, it is crucial to develop algorithms that estimate the small body characteristics before orbit insertion and close proximity operations. This paper develops a vision-based algorithm to estimate the small-body rotational state (i.e., the center of rotation and rotation axis direction) during the approach phase. In this mission phase, the spacecraft observes the rotating celestial body and tracks features in images. As feature tracks are the projections of the landmarks' circular movements, the set of possible rotation axes is computed. Then, the rotation axis solution is chosen among the possible candidates by exploiting feature motion and a heuristic approach. Finally, the center of rotation is estimated from the center of brightness. The algorithm is tested on more than 800 test cases with two different asteroids (i.e., Bennu and Itokawa), three different lighting conditions, and more than 100 different rotation axis orientations. Each test case is composed of about 250 synthetic images of the asteroid which are used to track features and determine the rotational state. Results show that the error between the true rotation axis and its estimation is below $10^{\circ}$ for $80\%$ of the considered test cases, implying that the proposed algorithm is a suitable method for autonomous small body characterization.


I. Introduction
The heterogeneity of small body characteristics is a crucial factor to consider during mission preparation and in-orbit operations. In particular, the limited knowledge of small bodies' properties imposes numerous challenges on mission design and operations. Shape, rotational state, and geophysical characteristics can change strongly depending on the small body under study, and it is hard to foresee the exact small body properties before arrival. When a small body mission is planned, an observation campaign is performed to bound the characteristics of the small body under study and to estimate the range of uncertainties of these quantities [1]. In particular, a preliminary shape, pole orientation, and rotation period can be deduced from light-curve inversion [2,3] and, if feasible, radar campaigns [4].
The rotational period estimated through light-curve inversion has shown high agreement with in-situ determinations, and it can also be estimated from onboard light-curve analysis exploiting the Fourier transform during far approach [5,6]. Rotation pole, shape, and gravity field estimation is a more complex problem, and ground-based estimates often disagree with the properties observed at arrival.
On the one hand, the rotation pole orientation is directly coupled, through the rotational equations, to the inertia tensor, which is deduced from the shape. Even though small bodies, in particular asteroids, are mainly principal axis rotators [7], fine shape estimation remains deeply coupled with the inertial orientation of the rotation pole. Current techniques to solve this problem rely on stereophotoclinometry or its variations [8][9][10]. On the other hand, the gravity field can be derived from the shape under the assumption of constant density [11][12][13][14] or gathered by solving the orbit determination problem by processing long trajectory arcs [15,16]. Unfortunately, these techniques strongly rely on human intervention to control the solution convergence and validate the output shape. Moreover, the required computational time makes them unsuitable for onboard applications. It is worth noting that all these techniques rely on communications with the ground, implying radio signal delays and high costs.
To overcome these limitations, vision-based systems have been gaining attention as a cost-effective and accurate solution because they can provide real-time information to the spacecraft with limited impact on system budgets. In this context, it is crucial to design autonomous vision-based algorithms capable of supporting small-body parameter estimation without ground communications. The list of parameters to be determined at a given mission phase depends on which information is needed for the following mission phases. Certainly, the rotation pole, the shape, and the gravity field are of primary importance to allow the GNC architecture to perform critical mission phases like the insertion maneuver or close proximity characterization [17]. These quantities are strongly correlated, and it is difficult to estimate them onboard independently. The gravity field depends on the small body shape and its density distribution. Therefore, it can be derived from the shape under the assumption of constant density [11][12][13][14] or estimated during the localization procedure [18]. The shape can be deduced by shape-from-motion algorithms [6,19] or by methods solving the SLAM (Simultaneous Localization And Mapping) problem [20][21][22]. An initial estimation of the rotational state is required for two main reasons. First, it is necessary to define a relative reference frame, which implies the estimation of a point and three vectors. Second, the knowledge, even if coarse, of the small body rotational dynamics can help the convergence of the relative localization as proposed in Panicucci [20]. The rotational state can be determined implicitly by solving the SLAM problem, as the spacecraft poses, i.e., positions and orientations, are estimated in the small-body-fixed reference frame. Alternatively, it can be determined during close approach when the small body is resolved and image processing algorithms can extract information about the small body rotation in time.
Bandyopadhyay et al. [6] propose a method to determine the rotation state of the small body by simultaneously optimizing the shape and the rotation state. The proposed method accurately estimates the pole orientation and the small body shape, but it requires a Monte-Carlo optimization which increases the computational burden. Bissonnette et al. [23] study the estimation of the pole orientation from matched features by fitting the movement of landmarks on the small body surface. This work studies the possibility of detecting ellipses in the image and reconstructing the landmarks' circular movements in 3D.
Starting from this latter work, this paper develops an algorithm to autonomously determine the small body rotational state during close approach without performing the localization task. The proposed approach also estimates the origin of the small-body-fixed reference frame, leading to the definition of the small-body-fixed reference frame. It is crucial to perform this task before localization because it enables changing the navigation from the inertial reference frame to the small-body-fixed reference frame by providing a first estimate of the small-body rotational dynamics. By tracking points extracted from small body images taken during the approach, it is possible to retrieve the circular movement of the landmarks in the inertial frame. Indeed, the trajectory of each landmark in 3D space is a circle whose center is a point on the rotation axis and which lies on a plane normal to the same axis. By assuming the rotational period is known, the feature circular movements provide an estimate of all the possible rotation axes. Multiple solutions are present because the camera cannot retrieve the movement of the feature along its boresight. The correct one is determined by using a heuristic approach. The algorithm is tested over numerical simulations with synthetic images in the loop for a wide range of observational geometries and two different asteroids, i.e., Bennu and Itokawa. Finally, numerical results are presented and discussed to understand the algorithm's performance and limitations.

II. Notation
In this paper the following notation is used:
• 3D vectors are in lower case bold text, such as $\mathbf{r}$, and 2D vectors are in upper case bold, such as $\mathbf{R}$.
• Matrices are in plain text in brackets, such as $[A]$.
• Vector initialization is performed with parentheses, such as $\mathbf{r} = (x, y, z)$.
• $\mathcal{S} = \{O, \mathbf{s}_1, \mathbf{s}_2, \mathbf{s}_3\}$ is a 3D reference frame centered in $O$ with axes $\mathbf{s}_1$, $\mathbf{s}_2$, and $\mathbf{s}_3$. All the reference frames are right-handed.
• $\mathcal{A} = \{O, \mathbf{a}_1, \mathbf{a}_2\}$ is a 2D reference frame. This reference frame is centered in $O$ with axes $\mathbf{a}_1$ and $\mathbf{a}_2$ which are orthonormal.
• The vector $\mathbf{r}$ expressed in the $\mathcal{S}$ reference frame is denoted $^{S}\mathbf{r}$.
• The homography matrix from $\mathcal{S}$ to $\mathcal{C}$ is $[CS]$.
• The angular velocity of reference frame $\mathcal{B}$ with respect to reference frame $\mathcal{N}$ is labeled $\boldsymbol{\omega}_{B/N}$.

III. Problem Statement and Algorithm Overview
During the close approach to a small body, the spacecraft moves at low velocity with respect to its target while acquiring images of the small body to perform characterization. Note that the spacecraft is not under the influence of the small body gravity field; thus, the spacecraft motion is fully determined by the Sun's gravity and deep-space perturbations.
The spacecraft observes with an onboard camera the small body, which is resolved in the image and rotates around its rotation axis, assumed fixed in the inertial reference frame. Indeed, the movement of the spacecraft in the small-body-fixed reference frame can be decomposed into the motion of the spacecraft in the inertial reference frame and the motion due to the small-body rotational dynamics. The observational geometry is depicted in Fig. 1, where the main geometrical entities are defined. In particular: • The approach angle is the angle between the small-body rotation axis and the approach direction, i.e., the camera-small-body direction.
• The illumination angle or phase angle is defined as the angle between the approach direction and the Sun-small-body direction.
• The obliquity, also known as the axial tilt, is the angle between the small-body rotation axis and the small-body orbital plane.

Fig. 1 The observational geometry during small body approach.
Note that the approach angle defines which hemisphere of the small body is observed during the approach. Moreover, the illumination angle is an indicator of possible shadows due to the terminator line and self-shadowing. A more detailed analysis of shadow generation can be found in Panicucci et al. [19]. Finally, the obliquity defines which hemisphere is illuminated at approach time, and thus which part of the small body is observable with a vision-based sensor. The combination of these three angles defines the appearance of the small body in the image, and thus the performance of the image processing and vision-based system under study.
As the spacecraft inertial trajectory is usually dynamically slower than the rotational dynamics of the small body, the small body motion in the images is due to the small body rotation. In case this assumption is not verified, an estimation of the spacecraft inertial poses can be exploited to reconstruct the inertial epipolar geometry between the different poses and correct the images for the inertial spacecraft rototranslational motion. From images of the rotating small body, it is possible to extract and follow 2D features which represent the movement of 3D surface landmarks. In the inertial reference frame, landmarks, which are points anchored to the small body, rotate according to the small-body rotational dynamics. When considering a simple rotator, the landmark trajectory in inertial space is an arc of a circle. This circle is projected as a conic or as a single line in the camera frame (see Sec. VI.A for a more rigorous analysis). By fitting the conic from the observed small body features, it is possible to reconstruct the circle where the landmark lies.
This procedure provides a direct estimation of all the possible solutions for the small body rotation axis orientation. As outlined in detail in Sec. VI, some solutions can be pruned by exploiting the information about feature movement in the images. Others cannot be removed for lack of information about the landmark movement in the boresight direction; thus, a heuristic approach is exploited to determine the correct solution.
The algorithm is composed as follows: 1) Determination of the origin from the first image to define the small-body-fixed reference frame origin as outlined in Sec. V.
2) Images are processed sequentially to identify in the image the projection of the circle associated with the landmark movement. Several features are extracted which implies that several landmarks are observed.
3) The algorithm decides whether to consider the conics as degenerate or not. This is a crucial point of the algorithm, as the number of possible solutions for the rotation axis differs between the two cases (see Sec. VI for more details).
4) The conics are fitted and the possible solutions for the rotation axis orientation are identified. This process is outlined for the non-degenerate conics case in Sec. VI.C and for the degenerate conics case in Sec. VI.D.
5) The solution is identified by discarding infeasible and unexpected solutions. This process strongly depends on the degeneracy of the observed conics, as outlined in Sec. VI.C and Sec. VI.D.
The main assumptions are: 1) The rotational period is known from light curves. This is a standard estimation from ground-based campaigns [16] or far-approach photometric studies [5,6].
2) The distance between the small body and the probe is known for the first camera view. This is required to break the scale ambiguity and to place the small body reference frame at the time of the first camera view. This could be considered a strong assumption, but a wrong estimation of the distance at the first camera view, or equivalently of the scale factor, would simply induce an overall scaling of the solution. The small-body-spacecraft distance can be determined by a preliminary scale estimation through ΔV ranging with circle or ellipsoid fitting [24,25].
3) The small body is observed for one rotation to enable the spacecraft to observe long feature tracks needed to bound the rotation axis estimation.

IV. Determination of the Small-Body-Fixed Reference Frame Origin
In this section, the estimation of an approximation of the rotation center is presented. From rigid body dynamics, the rotation axis passes through the small body barycenter. As the probe is not orbiting the small body, the mass distribution, and thus the barycenter, cannot be determined. To perform the estimation, the center of brightness is considered to be a good approximation of the barycenter [6].
To compute the center of brightness, the first image is binarized using Otsu's method [26] to identify the bright pixels. By labeling $\mathcal{M}_1$ the bright pixel map in the first image, the center of brightness $^{C_1}\mathbf{CB}$ expressed in the first image reference frame $C_1$ is computed as
$$^{C_1}\mathbf{CB} = \frac{1}{|\mathcal{M}_1|} \sum_{p \in \mathcal{M}_1} \left(p_u, p_v\right)$$
where $p$ is an arbitrary pixel in $\mathcal{M}_1$, the subscripts $u$ and $v$ label the $u$ and $v$ coordinates in the image, and $|\mathcal{M}_1|$ is the area of the bright pixel map. The center of brightness is computed only for the first image, as the distance to the small body is considered known only at that instant. Bandyopadhyay et al. [6] note that images with opposite phase are almost mirror images with respect to the rotation axis projection; thus, the center of brightness can be estimated from mirror silhouettes. Unfortunately, for high phase angles, silhouettes are not mirrored, as most of the body is not illuminated. To limit this effect, the center of brightness backprojection is considered a good approximation of the rotation center without any correction. Note this is usually true only for low illumination angles, as the center of brightness can shift considerably with respect to the rotation center projection when the illumination angle increases. Algorithms are present in the literature [27,28] to compensate for this effect, but their use is considered future work.
By knowing the distance from the first camera pose to the small body, the rotation center $^N\mathbf{r}_{RC}$ is computed by backprojecting the center of brightness in 3D space:
$$^N\mathbf{r}_{RC} = {}^N\mathbf{r}_{SC_1} + d_{SC_1/SB}\,[NC_1]\,\frac{[K]^{-1}\,{}^{C_1}\mathbf{CB}_h}{\left\|[K]^{-1}\,{}^{C_1}\mathbf{CB}_h\right\|}$$
where $^N\mathbf{r}_{SC_1}$ is the spacecraft position at the first image time, $d_{SC_1/SB}$ is the small-body-spacecraft distance at the first image time, $^{C_1}\mathbf{CB}_h$ is the center of brightness in homogeneous coordinates, $[K]$ is the camera calibration matrix, and $[NC_1]$ is the rotation matrix from the camera frame $C_1$ to the inertial frame $N$ at the first image time.
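As an illustrative sketch (not the authors' implementation), the center-of-brightness computation and its backprojection can be written in NumPy; the Otsu implementation, function names, and frame conventions below are assumptions for the example:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability
    mu = np.cumsum(p * np.arange(256))       # cumulative intensity mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return np.nanargmax(sigma_b)

def center_of_brightness(img):
    """Mean (u, v) pixel coordinates of the bright-pixel map."""
    mask = img > otsu_threshold(img)
    v, u = np.nonzero(mask)                  # rows are v, columns are u
    return np.array([u.mean(), v.mean()])

def backproject_rotation_center(cb_uv, K, R_nc1, r_sc1, d_sb):
    """Place the rotation center d_sb along the center-of-brightness ray.
    R_nc1 is assumed to map camera-frame vectors into the inertial frame."""
    ray = np.linalg.inv(K) @ np.array([cb_uv[0], cb_uv[1], 1.0])
    ray /= np.linalg.norm(ray)
    return r_sc1 + d_sb * (R_nc1 @ ray)
```

With a synthetic bright disk, `center_of_brightness` recovers the disk center, and the backprojected point lies at the requested range along the line of sight.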
The computation of $^N\mathbf{r}_{RC}$ defines the origin of the small-body-fixed reference frame B. In order to fully define B, the angular velocity $\boldsymbol{\omega}_{B/N}$ of the small-body-fixed reference frame B with respect to the inertial reference frame N must be characterized. As the rotational period is known, only the unit vector of the rotation axis $\hat{\boldsymbol{\omega}}_{B/N}$ must be estimated.

A. Feature-Based Image Processing
The first step of the algorithm is to process the images to detect the movement of the landmark projections.
From subsequent images, information about the relative motion of the scene can be recovered. The apparent motion in the image of the scene and objects is called optical flow which is caused by both the observer's and the object's motion.
The optical flow estimation is a crucial step to be performed as it provides information about the relative movement between the observer and the scene.
A standard procedure to perform this task is to use a feature-based image processing algorithm. Features are 2D entities characterized by a location and a compact description of the local image information. As a consequence, two tasks are performed to identify a feature [29]: feature detection and feature description. Feature detection deals with the problem of determining the feature location to efficiently perform feature association in the following steps. Many algorithms (e.g., SURF [30], BRISK [31], ORB [32], SIFT [33]) are available to perform this task. The main differences are the types of structure (e.g., corners, edges, or blobs) they try to detect and their computational effort. Instead, feature description is designed to compact the information contained in the neighborhood of the feature location to build a compact and informative descriptor, designed to be non-redundant and optimized for feature association. Feature detection algorithms often have a dedicated description algorithm (e.g., SURF [30], BRISK [31], and KAZE [34]), but eventually any combination of feature detection and feature description is possible.
Once features are detected and described in different images, it is necessary to associate their paths in the images to gather the optical flow. Feature association is a complex procedure, as points, when projected, can change their intensity due to different phase angles. Moreover, features can be shadowed during their motion, which implies the loss of some tracked features. This task can be fulfilled with two different approaches [29]: feature matching and feature tracking. On the one hand, feature matching exploits local information around the considered feature to perform feature association between different images. A naive and widely-used method to perform this task is brute force matching, where features in one image are associated according to the closest descriptor in the other image. As the association uses only information which depends on the feature descriptor, feature matching is usually exploited when images are generated from different observational geometries and when the motion is rapidly changing in time. On the other hand, feature tracking determines feature association by searching in the neighborhood of the feature location. A classical approach to perform feature tracking is the Kanade-Lucas-Tomasi (KLT) algorithm [35,36]. The KLT tracker exploits the image intensity spatial gradient to find the feature displacement in the next image by a Newton-Raphson descent algorithm.
In this work, to determine the landmark projection movement in the image, SURF features are used to identify salient points in small body images. SURF features are chosen as they are often extracted faster than other features and are robust to camera rotation and translation. Feature association is then performed with the KLT algorithm as it does not require the use of time-consuming descriptor algorithms.

B. Kanade-Lucas-Tomasi Optical Flow Tracker
In this section, an overview of the KLT algorithm is provided, as presented in Lucas and Kanade [35] and Tomasi and Kanade [36]. Let $I$ and $J$ be two different grayscale images. The aim is to find the displacement $\mathbf{d}$ of the feature from $I$ to $J$ by minimizing the error between the two images. The error function between the two images is defined by computing the squared difference in intensity over a region of interest $W$, which is normally considered a user-defined rectangle or square. As a consequence:
$$\epsilon(\mathbf{d}) = \sum_{\mathbf{x} \in W} \left[J(\mathbf{x} + \mathbf{d}) - I(\mathbf{x})\right]^2$$
Image $J$ can be linearly approximated:
$$J(\mathbf{x} + \mathbf{d}) \approx J(\mathbf{x}) + \mathbf{g}(\mathbf{x})^T \mathbf{d}$$
where the spatial partial derivatives in $\mathbf{g} = \nabla J$ can be computed by finite differences. The error can thus be minimized by setting its derivative with respect to $\mathbf{d}$ to zero. The displacement is analytically computed:
$$\mathbf{d} = [G]^{-1} \mathbf{e}, \qquad [G] = \sum_{\mathbf{x} \in W} \mathbf{g}\,\mathbf{g}^T, \qquad \mathbf{e} = \sum_{\mathbf{x} \in W} \left[I(\mathbf{x}) - J(\mathbf{x})\right] \mathbf{g}$$
In the KLT algorithm, not all features are tracked, as only a subset is informative for the tracker. Good features are defined internally by the tracking algorithm as the ones having the highest eigenvalues of the matrix $[G]$ [37].
This ensures that feature tracking is a well-conditioned process and that the algorithm is numerically stable. Moreover, to increase algorithm performances and to exploit different detail levels in the image a pyramidal scheme is often used [38].
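A single, non-pyramidal Lucas-Kanade step can be sketched in NumPy as follows (an illustration of the closed-form solve described above; the window selection and gradient scheme are simplifications, not Themis code):

```python
import numpy as np

def klt_step(I, J, rows, cols):
    """One Lucas-Kanade step: d = [G]^-1 e over a window of pixels.

    I, J       : grayscale images as 2D float arrays
    rows, cols : index arrays selecting the region of interest W
    Returns the displacement (du, dv) minimizing the linearized error.
    """
    gv, gu = np.gradient(J.astype(float))    # row (v) and column (u) gradients
    gu, gv = gu[rows, cols], gv[rows, cols]
    err = I[rows, cols] - J[rows, cols]      # I(x) - J(x)
    G = np.array([[np.sum(gu * gu), np.sum(gu * gv)],
                  [np.sum(gu * gv), np.sum(gv * gv)]])
    e = np.array([np.sum(err * gu), np.sum(err * gv)])
    return np.linalg.solve(G, e)
```

On a smooth synthetic blob shifted by a fraction of a pixel, a single step recovers the displacement; practical trackers iterate this step and add the pyramidal scheme discussed above.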
In the present work, feature extraction and tracking are performed with the ADS software Themis [39]. Themis is an enhanced space-certified KLT tracker which compensates for feature rotation, scaling, and translation. Moreover, it implements a pyramidal scheme and homography prediction filtering to improve the algorithm's overall performance.
The use of Themis is justified by the fact that it has been optimized for space applications, and it has shown its embeddability in space-certified processors such as the LEON4 [39].
The output of Themis is a series of tracked features of different lengths in the images obtained during the approach to a small body. These tracks are exploited to fit the conics obtained from the circle projections in the image and to determine the rotation axis orientation.

A. Projection of the Circle
Before entering into the details of the fitting of the conics generated by the projection of the tracked landmarks, it is worth analyzing how a 3D circle projects into a camera.
Let $\mathcal{P}$ be the 3D reference frame centered in the circle origin and whose third axis is oriented as the circle plane normal $\mathbf{n}_{cl}$. The circle plane is the plane where the circle lies and where the first two axes of $\mathcal{P}$ lie. Moreover, let $\mathcal{P}'$ be the 2D reference frame centered in the circle center projected on the circle plane and whose axes are oriented as the projections on the circle plane of the first two axes of $\mathcal{P}$. In $\mathcal{P}'$, the equation of the circle can be expressed as follows:
$$^{P'}\mathbf{p}_h^T \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -r^2 \end{bmatrix} {}^{P'}\mathbf{p}_h = 0$$
where $r$ is the circle radius and $^{P'}\mathbf{p}_h$ is a 2D homogeneous point expressed in $\mathcal{P}'$ and belonging to the circle plane defined by $\mathbf{n}_{cl}$. More details about homogeneous coordinates can be found in Hartley and Zisserman [40]. By denoting $\mathcal{C}$ the image plane reference frame, the mapping from the circle plane to the image plane is defined by the following homography [40]:
$$[H] = [K] \left[\, ^{C}\mathbf{b}_1 \quad ^{C}\mathbf{b}_2 \quad ^{C}\mathbf{r}_{cl} \,\right]$$
where $[K]$ is the camera calibration matrix, $^{C}\mathbf{b}_i$ is the $i$th axis of $\mathcal{P}$ expressed in $\mathcal{C}$, and $^{C}\mathbf{r}_{cl}$ is the vector from the camera to the circle origin in $\mathcal{C}$. Note that $^{C}\mathbf{b}_i$ is also the $i$th column of the rotation matrix $[CP]$ from $\mathcal{P}$ to the camera frame $\mathcal{C}$.
The circle as projected in the camera can be computed by noticing that:
$$\mathbf{x}_h = [H]\, ^{P'}\mathbf{p}_h$$
Thus:
$$[C] = [H]^{-T} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -r^2 \end{bmatrix} [H]^{-1}$$
where $[C]$ is the conic representing the projection of the circle in the image. If $[C]$ is full rank, the conic represents an ellipse, a hyperbola, or a parabola. Otherwise, the conic degenerates to a line. As the circle is not a degenerate conic and its matrix has full rank, it is worth noticing that the conic degenerates to a line only if the homography matrix is not full rank. This happens when the camera boresight is perpendicular to the circle plane normal, i.e., $\mathbf{e}_3^T\, ^{C}\mathbf{n}_{cl} = 0$.
To prove that the projection is a single line, it is worth demonstrating that the projections of $\mathbf{b}_1$ and $\mathbf{b}_2$ in the image are parallel. For the sake of simplicity, let $\hat{\mathbf{n}}_{cl}$ be the unit vector pointing as $\mathbf{n}_{cl}$, and let $\mathbf{L}_1$ and $\mathbf{L}_2$ be the projections of $\mathbf{b}_1$ and $\mathbf{b}_2$ on the image plane. Note that these vectors belong to both the circle plane and the image plane: they are orthogonal to $\hat{\mathbf{n}}_{cl}$ and to the camera boresight $\mathbf{e}_3$. To prove that these vectors are parallel, it must be proven that $\mathbf{L}_1 \times \mathbf{L}_2 = \mathbf{0}$. When $\mathbf{e}_3^T\,^{C}\hat{\mathbf{n}}_{cl} = 0$, the vectors $\hat{\mathbf{n}}_{cl}$ and $\mathbf{e}_3$ are orthogonal and nonzero; hence, the set of vectors orthogonal to both is a one-dimensional subspace, and $\mathbf{L}_1$ and $\mathbf{L}_2$ must be parallel, i.e., $\mathbf{L}_1 \times \mathbf{L}_2 = \mathbf{0}$. This implies that the degenerate conic is composed of a single line. Therefore, either the circle is projected as a full rank conic or as a rank-1 degenerate conic; no other situations are possible. From an operative perspective, this implies that the algorithm to be designed must cope with full-rank conic estimation or with single line fitting.
When the conic is full rank, it is represented by a full rank matrix which is usually decomposed as follows:
$$[C] = \begin{bmatrix} A & B/2 & D/2 \\ B/2 & C & E/2 \\ D/2 & E/2 & F \end{bmatrix}$$
Otherwise, when the conic degenerates, it is represented by the following conic [40]:
$$[C] = \mathbf{l}\,\mathbf{m}^T + \mathbf{m}\,\mathbf{l}^T$$
where $\mathbf{l}$ and $\mathbf{m}$ are the two lines composing the degenerate conic. As the two lines coincide for the circle projection, $\mathbf{l} = \mathbf{m}$. Thus:
$$[C] = 2\,\mathbf{l}\,\mathbf{l}^T$$
which implies that Eq. 10 can be rewritten as the classical line equation $\mathbf{l}^T\mathbf{x}_h = 0$. As multiple features are tracked by Themis, several 3D circles are projected in the image. This implies that a series of ellipses or lines is present in the image. Note that the 3D landmark movement is caused by the rotational dynamics of the small body. This implies that the landmarks move on 3D circles whose circle plane normal is the small body rotation axis. Thus, the 3D circles all have the same circle plane normal and have different origins lying on the rotation axis.
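The projection relation above can be checked numerically. The following sketch (an illustration under an assumed calibration matrix, not the paper's code) builds the plane-to-image homography of a known 3D circle, forms the conic $[C]$, and verifies that the projected circle points annihilate it:

```python
import numpy as np

def circle_conic(K, n, center, r):
    """Conic [C] = H^-T Q H^-1 of a 3D circle seen by a camera at the origin.

    n      : circle plane normal, center : circle center in camera frame
    r      : circle radius, K : camera calibration matrix
    Also returns the in-plane axes b1, b2 used to build the homography.
    """
    n = n / np.linalg.norm(n)
    seed = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    b1 = np.cross(n, seed); b1 /= np.linalg.norm(b1)
    b2 = np.cross(n, b1)                       # b1 x b2 = n (right-handed)
    H = K @ np.column_stack([b1, b2, center])  # plane-to-image homography
    Q = np.diag([1.0, 1.0, -r ** 2])           # circle in plane coordinates
    Hi = np.linalg.inv(H)
    return Hi.T @ Q @ Hi, b1, b2

# Assumed pinhole camera and circle geometry for the check
K = np.array([[800.0, 0.0, 512.0], [0.0, 800.0, 512.0], [0.0, 0.0, 1.0]])
n = np.array([0.3, -0.2, 1.0])
center = np.array([0.05, -0.1, 3.0])
C, b1, b2 = circle_conic(K, n, center, 0.5)
residuals = []
for t in np.linspace(0.0, 2.0 * np.pi, 12):
    X = center + 0.5 * (np.cos(t) * b1 + np.sin(t) * b2)  # 3D circle point
    x = K @ X                                             # homogeneous image point
    residuals.append(abs(x @ C @ x))                      # should vanish
```

If the camera center were instead placed in the circle's plane, $[H]$ would lose rank and the conic would collapse to the single-line case discussed above.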

B. Detection of Degenerate Solutions
As shown in Sec. VI.A, the projection of the 3D circles generated by the landmarks is a series of ellipses when the camera boresight is not perpendicular to the circle plane normal. As the circle plane normal is the rotation axis direction for all the 3D circles, this situation happens when the rotation axis is tilted with respect to the camera boresight.
Otherwise, the projection of the 3D circles degenerates into a sheaf of parallel lines. It is thus necessary to develop a method to detect, from the tracked features, whether the rotation axis is perpendicular to the camera boresight. To this end, a detection algorithm to understand if the conic projection is degenerate is proposed hereafter.
Recall that a point lying on an ellipse can be represented as follows:
$$A\,x^2 + B\,xy + C\,y^2 + D\,x + E\,y + F = 0$$
where $(x, y)$ are Cartesian coordinates obtained from the image coordinates through normalization with respect to the image size. The normalization is required to let the following steps of the algorithm be independent of the image size.
All the reprojected points of the same track must verify Eq. 17. Thus:
$$\begin{bmatrix} [D_{2,i}] & [D_{1,i}] \end{bmatrix} \left(A_i, B_i, C_i, D_i, E_i, F_i\right)^T = [0_{n_i \times 1}]$$
where $n_i$ is the number of frames during which the 3D point is tracked for the $i$th conic, $[0_{n_i \times 1}]$ is the zero $n_i \times 1$ matrix, the subscript $i$ defines the parameters associated with the $i$th conic, $[D_{2,i}] \in \mathbb{R}^{n_i \times 3}$ denotes the matrix associated with the second-order conic coefficients, i.e., with rows $(x_j^2, x_j y_j, y_j^2)$, and $[D_{1,i}] \in \mathbb{R}^{n_i \times 3}$ the matrix associated with the remaining-order conic coefficients, i.e., with rows $(x_j, y_j, 1)$.
Recall that a point lying on a line can be represented as follows:
$$D\,x + E\,y + F = 0$$
Moreover, the points of the $i$th line verify:
$$[D_{1,i}] \left(D_i, E_i, F_i\right)^T = [0_{n_i \times 1}]$$
As the rows of the matrix in Eq. 20 are linearly dependent, the matrix $[D_{1,i}]$ is rank-deficient. Due to tracking errors, the rank deficiency is not exact; nevertheless, being $[D_{1,i}]$ nearly singular, its condition number is high. Note that this is only true when the conics are degenerate. As $[D_{1,i}]$ has $n_i$ rows, it is more convenient to work with $[D_{3,i}] = [D_{1,i}]^T [D_{1,i}]$. To understand if the family of conics degenerates, the following steps are applied: 1) Per each tracked feature, the track length is computed to store only the $N_{curv}$ longest tracks. This step is necessary to avoid processing a high number of tracks, which increases the numerical burden. Moreover, it is worth noting that long-tracked features are the ones that provide more information about the rotational state. $N_{curv}$ is set to 50 in this work.
2) Per each tracked feature, the condition number of $[D_{3,i}]$ is computed. If the condition number is greater than a threshold $\tau_{CN}$ for more than a fraction $\tau_{sheaf}$ of the tracks, the conics are degenerate and processed as a sheaf of parallel lines. Otherwise, they are considered a family of ellipses. $\tau_{CN}$ and $\tau_{sheaf}$ are set to $10^5$ and 0.8, respectively, in this work.
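The degeneracy test above can be sketched as follows (an illustrative implementation; the default thresholds mirror the values quoted in the text):

```python
import numpy as np

def track_is_degenerate(track, tau_cn=1e5):
    """True if a feature track looks like a line rather than an ellipse.

    track : (m, 2) array of normalized feature locations over m frames.
    Builds [D1] with rows (x, y, 1) and tests the condition number of
    [D3] = [D1]^T [D1]: collinear points make [D3] nearly singular.
    """
    x, y = track[:, 0], track[:, 1]
    D1 = np.column_stack([x, y, np.ones_like(x)])
    D3 = D1.T @ D1
    return np.linalg.cond(D3) > tau_cn

def conics_are_degenerate(tracks, tau_cn=1e5, tau_sheaf=0.8):
    """Declare the whole family degenerate if enough tracks look like lines."""
    flags = [track_is_degenerate(t, tau_cn) for t in tracks]
    return float(np.mean(flags)) > tau_sheaf
```

A track sampled from an ellipse yields a well-conditioned $[D_{3,i}]$, while a collinear track drives the condition number far above the threshold.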
This procedure detects autonomously whether the rotation axis is perpendicular to the camera boresight. As the 3D circle projection generates different conics, the estimation procedure is not identical between the two cases and two algorithms must be developed. First, in Sec. VI.C the general case of a tilted axis with respect to the camera boresight is presented. Then, in Sec. VI.D the algorithm when the two axes are perpendicular is expounded.

Ellipses Fitting
If the rotation axis is tilted with respect to the camera boresight, the projection of the concentric circles is a family of ellipses. As the 3D circle origins belong to the same axis, the ellipse semi-major axes are all oriented in the same direction [23]. The optimization to gather the ellipses family exploits this property to compute a series of ellipses with the same orientation. This optimization technique is outlined in Bissonnette et al. [23] and reviewed hereafter.
Let $n$ be the number of ellipses to be found. The orientation of the $i$th ellipse according to its Cartesian representation (see Eq. 17) is:
$$\theta_i = \frac{1}{2}\arctan\left(\frac{B_i}{A_i - C_i}\right)$$
Eq. 21 shows that, to obtain a family of ellipses with the same orientation, the ratio of these Cartesian parameters must be the same for all the ellipses. By introducing $\Delta_i = C_i - A_i$, it is straightforward to impose that $B_i = B$ and $\Delta_i = \Delta$ $\forall i \in [1, n]$ to implicitly verify the constraint in Eq. 21. Thus, the equation of the $i$th ellipse becomes [23]:
$$A_i\left(x^2 + y^2\right) + B\,xy + \Delta\,y^2 + D_i\,x + E_i\,y + F_i = 0$$
By defining the vectors $\boldsymbol{\beta}_1 = (B, \Delta, A_1, \cdots, A_n)$ and $\boldsymbol{\beta}_2 = (D_1, E_1, F_1, \cdots, D_n, E_n, F_n)$ and the corresponding design matrices evaluated at the $j$th point $(x_{i,j}, y_{i,j})$ of the $i$th ellipse in normalized coordinates, the problem can be rewritten as a homogeneous least-squares problem in all the conic coefficients at once. This provides a family of conics all with the same orientation. Note that the conics are not necessarily ellipses, as no constraint has been imposed on the Cartesian coefficients to guarantee it. Thus, for each conic, the eccentricity is computed and, if greater than or equal to 1, the conic is discarded. In Fig. 2a and 2b the optimization outputs for the main steps are shown.
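A minimal sketch of the common-orientation family fit (an illustration of the constrained homogeneous least-squares idea, not Bissonnette et al.'s code; the parameterization shares the $xy$ and $y^2$ coefficients across conics):

```python
import numpy as np

def fit_common_orientation_conics(tracks):
    """Fit a family of conics constrained to share the same orientation.

    tracks : list of (m_i, 2) arrays of normalized points, one per track.
    Conic i is modeled as A_i (x^2 + y^2) + B xy + D y^2
                          + d_i x + e_i y + f_i = 0,
    with B and D shared so that tan(2*theta) = -B / D is the same for all.
    Returns one Cartesian coefficient tuple (A, B, C, D, E, F) per conic.
    """
    n = len(tracks)
    blocks = []
    for i, pts in enumerate(tracks):
        x, y = pts[:, 0], pts[:, 1]
        blk = np.zeros((len(x), 2 + 4 * n))
        blk[:, 0] = x * y                    # shared B
        blk[:, 1] = y ** 2                   # shared D = C_i - A_i
        blk[:, 2 + 4 * i] = x ** 2 + y ** 2  # A_i
        blk[:, 3 + 4 * i] = x                # d_i
        blk[:, 4 + 4 * i] = y                # e_i
        blk[:, 5 + 4 * i] = 1.0              # f_i
        blocks.append(blk)
    _, _, Vt = np.linalg.svd(np.vstack(blocks))
    beta = Vt[-1]                            # homogeneous least-squares solution
    B, D = beta[0], beta[1]
    return [(beta[2 + 4 * i], B, beta[2 + 4 * i] + D,
             beta[3 + 4 * i], beta[4 + 4 * i], beta[5 + 4 * i])
            for i in range(n)]

def ellipse_points(center, axes, theta, m=60):
    """Sample m points on an ellipse with given center, semi-axes, tilt."""
    t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    p = np.stack([axes[0] * np.cos(t), axes[1] * np.sin(t)])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return (R @ p).T + center
```

The smallest right singular vector solves the homogeneous system; conics whose eccentricity turns out greater than or equal to 1 would then be discarded as described in the text.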

Computation of the Rotation Axis Candidates
Once the ellipse family is found, the rotation axis can be reconstructed by knowing that a 2D ellipse in the image is the projection of a 3D circle [43]. From a single ellipse, an elliptic cone, i.e., a cone with an elliptical cross-section, is generated by backprojection. If a sheaf of planes is intersected with the cone, only two planes generate circles when intersected [43]. This geometrical construction is shown in Fig. 3, where the red ellipse is backprojected into the elliptic cone. From these 2 planes, 4 unit normal vectors can be defined. It is important to note that both the positive and the negative normal vectors define the same plane. But, if the goal is to gather the rotation axis, the two normal vectors are associated with opposite rotation directions on the same plane. Therefore, the correct normal vector must be identified.
The first step is to compute the four unit normal vectors. By knowing that the matrix $[C]$ (see Eq. 14) represents an ellipse that is the projection of a 3D circle, all the unit normal vectors generating the conic in the image can be extracted [43]. Note that the matrix $[C]$ is composed of the ellipse Cartesian parameters in pixel units. Thus, the parameters found in the previous step must be rescaled according to the normalization procedure for consistency.

Let $\lambda_i$ and $\mathbf{u}_i$, $\forall i \in [1, 3]$, be the eigenvalues and eigenvectors of $[C]$.
If $\det [C] < 0$, the eigenvalues are ordered in such a way that $\lambda_3 < 0 < \lambda_1 \leq \lambda_2$. The four solutions for the circle plane normal are computed as [43]:
$$\hat{\mathbf{n}}_{cl} = \pm\left(\sqrt{\frac{\lambda_2 - \lambda_1}{\lambda_2 - \lambda_3}}\,\mathbf{u}_2 \pm \sqrt{\frac{\lambda_1 - \lambda_3}{\lambda_2 - \lambda_3}}\,\mathbf{u}_3\right)$$
To estimate the four solutions, the two vectors are computed per each ellipse and their mean is calculated to minimize numerical errors. It is worth noting that the proposed method is not robust with respect to outliers. RANSAC could be exploited to identify the correct orientation among a given orientation set as in Andreis et al. [44,45]. In Fig. 4 the geometry of the normal vector reconstruction is shown. The four solutions are depicted in two different colors according to the generating circle plane. Moreover, their projections on the $C_1$ reference frame are shown in brown. Note that they are paired in two different ways. First, each pair has the same projection on the camera boresight direction. Second, each pair has the same magnitude in the camera plane, but opposite direction. Recall that the solutions with opposite directions but associated with the same plane denote the same circle, but swept in opposite directions.
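This eigen-decomposition construction can be illustrated numerically in normalized coordinates, assuming $[K] = [I]$; the synthetic conic below is built directly from a known circle, so the true normal should appear among the four candidates:

```python
import numpy as np

def plane_normal_candidates(C):
    """Four circle-plane normal candidates from an ellipse conic matrix [C].

    The sign of [C] is flipped if needed so that det [C] < 0, giving the
    eigenvalue ordering lambda3 < 0 < lambda1 <= lambda2 used in the text.
    """
    if np.linalg.det(C) > 0:
        C = -C
    w, U = np.linalg.eigh(C)          # ascending eigenvalues
    l3, l1, l2 = w[0], w[1], w[2]
    u2, u3 = U[:, 2], U[:, 0]         # eigenvectors of lambda2 and lambda3
    c1 = np.sqrt((l2 - l1) / (l2 - l3))
    c2 = np.sqrt((l1 - l3) / (l2 - l3))
    n = c1 * u2 + c2 * u3
    m = c1 * u2 - c2 * u3
    return [n, -n, m, -m]

def synthetic_conic(n, center, r):
    """Conic of a circle (normal n, center, radius r) for [K] = [I]."""
    n = n / np.linalg.norm(n)
    seed = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    b1 = np.cross(n, seed); b1 /= np.linalg.norm(b1)
    b2 = np.cross(n, b1)
    Hi = np.linalg.inv(np.column_stack([b1, b2, center]))
    return Hi.T @ np.diag([1.0, 1.0, -r ** 2]) @ Hi
```

The four returned unit vectors come in two antipodal pairs, one per circular-section plane, consistent with the pairing described above.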

Rotation Axis Pruning and Selection
As the correct rotation axis must be found among the four solutions, a procedure is outlined hereafter to identify the correct normal vector. The main idea is to remove two spurious solutions by exploiting how features move in time. The estimation of the correct rotation axis between the two remaining solutions is a more complex task, as both are acceptable with respect to the camera projection and both respect the features' motion. The correct solution is therefore selected with a heuristic, using statistical facts or a priori information. Two heuristics are investigated:
1) A coarse guess of the rotation axis direction is available on board from ground-based observations. The correct solution is identified as the closest one to the onboard guess.
2) It is remarked that, even though the observed small body can have concavities, the majority of the 3D points initialized during tracking lie between the camera and the estimated rotation center. To have statistical relevance, this reasoning must be applied to a high number of 3D points. As a consequence, only the 3D points that are tracked for more than a given fraction heur of the frames over one rotational period are considered. In this work, heur is set to 0.2. The solution that initializes the majority of the points closer to the camera is selected as the correct one [23].
The first step is to exploit the feature rotation as seen in the image to remove two spurious solutions. Except for tracker errors, matched points statistically all rotate in the same direction. This rotation uniquely identifies the projection of the rotation axis on the camera boresight. Thus, by computing the feature rotation, it is possible to identify the two correct rotation axis candidates.
To avoid selecting the wrong solutions due to tracking errors, the rotation direction of each track is computed and the rotation axis candidates not consistent with the average feature rotation are discarded. The rotation direction $\mathbf{d}_{i,k}$ between two successive frames is computed as
$$\mathbf{d}_{i,k} = \left(\mathbf{h}_{i,k} - \mathbf{h}_{c_j}\right) \times \left(\mathbf{h}_{i,k+1} - \mathbf{h}_{c_j}\right)$$
where $\mathbf{h}_{i,k} = \left(x_{i,k},\, y_{i,k},\, 1\right)^\top$ is the $i$th tracked feature location in the $k$th image in homogeneous normalized coordinates and $\mathbf{h}_{c_j} = \left(x_{c_j},\, y_{c_j},\, 1\right)^\top$ is the $j$th ellipse center in homogeneous normalized coordinates. The vector $\mathbf{d}_{i,k}$ provides information about the rotation direction the $i$th feature has performed between the $k$th image and the $(k+1)$th image. By counting the features moving clockwise and the ones moving counterclockwise, it is possible to determine the average feature motion. It is thus useful to define the clockwise index $I_{\mathrm{cw}}$:
$$I_{\mathrm{cw}} = \sum_{i}\sum_{k} \operatorname{sign}\left(\mathbf{d}_{i,k} \cdot \mathbf{e}_3\right)$$
where $\mathbf{e}_3$ is the camera boresight unit vector. According to this definition, features move clockwise if $I_{\mathrm{cw}} > 0$ and counterclockwise otherwise. The two correct solutions are selected as the ones providing the same rotation direction when projected on the camera boresight.
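The sign-counting step can be sketched as follows. This is a minimal illustration, assuming tracks given as arrays of normalized image coordinates and a common center per track; the sign convention (clockwise vs. counterclockwise) depends on the orientation of the image axes, as noted in the comments.

```python
import numpy as np

def clockwise_index(tracks, center):
    """Sum of per-step rotation-direction signs over all features.
    tracks: list of (K, 2) arrays of feature locations over time.
    center: (2,) ellipse/rotation center in the same coordinates.
    The sign of the result identifies the average rotation direction;
    its interpretation (cw vs. ccw) depends on the image axis convention."""
    e3 = np.array([0.0, 0.0, 1.0])                 # camera boresight
    idx = 0.0
    for t in tracks:
        p = np.hstack([t - center, np.zeros((len(t), 1))])  # lift to 3D
        d = np.cross(p[:-1], p[1:])                # d_{i,k} per frame pair
        idx += np.sum(np.sign(d @ e3))
    return idx
```

A reversed track flips the sign of every term, so the index directly separates the two candidate boresight projections of the axis.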
At this stage, two solutions are still feasible and the correct one must be determined by one of the two heuristic approaches. If a coarse first guess is available on board, the correct solution is selected as the one with the smallest angular error with respect to the guess. Otherwise, the second heuristic is exploited: the correct axis is the one for which the majority of the points are initialized between the camera and the rotation center estimated in Sec. IV. It is worth noting that the circle plane has not been estimated yet, as the direction of the circle plane normal defines a sheaf of parallel planes where the circle could lie. It is thus necessary to select a point to uniquely define the circle plane. By knowing the axis direction and a point on the axis, i.e., the previously-calculated rotation center $\mathbf{x}_{RC}$, the circle can be fully reconstructed [46]. The $j$th 3D circle is written in parametric form as
$$\mathbf{C}_j(\theta) = \mathbf{x}_{RC} + d_j\,\hat{\mathbf{n}} + \rho_j\left(\cos\theta\,\hat{\mathbf{u}} + \sin\theta\,\hat{\mathbf{v}}\right)$$
where $d_j$ is the distance between the rotation center $\mathbf{x}_{RC}$ and the $j$th circle center in the direction of the rotation axis $\hat{\mathbf{n}}$, $\rho_j$ is the $j$th circle radius, and the in-plane unit vectors $\hat{\mathbf{u}}$ and $\hat{\mathbf{v}}$ are computed as in Appendix A. The parameter $d_j$ uniquely defines the plane in the sheaf, whereas $\rho_j$ uniquely identifies the circle size on that plane. Note that the pixel coordinates $(u, v)$ are used for this optimization step.
This geometrical situation is depicted in Fig. 5. To determine the parameters $d_j$ and $\rho_j$, a reprojection cost function is minimized over the tracked features belonging to the $j$th ellipse. Once this procedure is performed for all tracks, an estimate of the parameters $d_j$ and $\rho_j$, $\forall j$, is determined. This enables the determination of the plane where the 3D circle lies and of the 3D circle radius. It is now necessary to determine where the landmark was located at feature initialization. This is performed by computing the initialization angular position $\theta_0$ of the 3D landmark on the $j$th circle at the beginning of the observational period.
Let $\hat{F}$ be the reference frame centered at the rotation center $\mathbf{x}_{RC}$ with its vertical axis directed along $\hat{\mathbf{n}}$, i.e., one of the remaining solutions for the rotation axis; the other two unit vectors are arbitrary. The position of the $i$th 3D landmark in the $\hat{F}$ frame at time $t_k$ is
$$\mathbf{x}^{\hat{F}}_{LM_i}(t_k) = d_j\,\hat{\mathbf{n}} + \rho_j\left[\cos\!\left(\theta_0 + \frac{2\pi\,\Delta t_k}{T_{rot}}\right)\hat{\mathbf{u}} + \sin\!\left(\theta_0 + \frac{2\pi\,\Delta t_k}{T_{rot}}\right)\hat{\mathbf{v}}\right]$$
where $\Delta t_k = t_k - t_{init}$ is the interval between $t_k$ and the initialization time $t_{init}$, and $T_{rot}$ is the small body rotational period.
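The landmark motion model above can be sketched numerically as below. This is a minimal illustration in the axis-aligned frame, with the symbols ($d_j$, $\rho_j$, $\theta_0$, $T_{rot}$) assumed from the surrounding text; the function name is hypothetical.

```python
import numpy as np

def landmark_position(d_j, rho_j, theta0, dt, T_rot):
    """Landmark position in the axis-aligned frame F-hat: the z-axis is
    the candidate rotation axis, x/y are the arbitrary in-plane directions.
    theta0 is the angular position at track initialization, dt the elapsed
    time since initialization, T_rot the rotational period."""
    theta = theta0 + 2.0 * np.pi * dt / T_rot
    return np.array([rho_j * np.cos(theta),   # in-plane motion on the circle
                     rho_j * np.sin(theta),
                     d_j])                    # constant offset along the axis
```

Projecting this position into the image for all tracking times and minimizing the reprojection error over $\theta_0$ yields the initialization angle, as described next.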
By starting from $\theta_0 = 0$, the $i$th 3D landmark is projected into the image for all the available tracking times and the reprojection error is minimized to find $\theta_0$. Note that this optimization step could be avoided by simple backprojection of the initial feature location and intersection of the feature ray with the 3D circle plane, as proposed by Bissonnette et al. [23]. This solution has not been investigated in the present paper, but it could be a valuable option to reduce computational costs.
These last two steps, i.e., the determination of the 3D circle and of the angular position at the beginning of the tracking, are performed for all ellipses and for the two remaining feasible rotation axis solutions. Then, by counting how many landmarks are initialized between the camera and the rotation center, the rotation axis solution is selected as the one providing the higher number of points initialized closer to the camera.

Lines Fitting
If the rotation axis is detected to be perpendicular to the camera boresight, the 3D circle projections degenerate into a sheaf of parallel lines as outlined in Sec. VI.A. It is worth noting that the tracked features do not form perfect lines because of tracking errors. Moreover, since the projection of the rotation axis on the camera boresight is null, only two rotation axis candidates are possible.
In this section, the same notation of Sec. VI.C is used for the sake of simplicity. Let $L$ be the number of lines to be estimated.
A sheaf of parallel lines is characterized by all lines sharing the same orientation. The equation of the $j$th line is given by Eq. 19; note that the direction parameters $a$ and $b$ are, by construction, constant for all the lines in the sheaf, while the offset $c_j$ is line-dependent.
Thus, $(u_{i,k}, v_{i,k})$, i.e., the $i$th feature location in the $k$th image, must verify the line equation in Eq. 19. By defining the parameter vector $\boldsymbol{\theta} = (a, b, c_1, \cdots, c_L)^\top$ and the stacked measurement matrix $[A_{sheaf}]$, the sheaf fitting problem is rewritten as the homogeneous least-squares problem $[A_{sheaf}]\,\boldsymbol{\theta} = \mathbf{0}$, which is solved through Singular Value Decomposition [41]. An example of this procedure is reported in Figs. 6a and 6b.
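The homogeneous fit can be sketched as below. The line model $a\,u + b\,v + c_j = 0$ with shared $(a, b)$ is this sketch's reading of Eq. 19 (the exact parameterization in the paper may differ); the solution is the right singular vector associated with the smallest singular value.

```python
import numpy as np

def fit_parallel_lines(points_per_line):
    """Fit L parallel lines a*u + b*v + c_j = 0 sharing the direction (a, b).
    points_per_line: list of (N_j, 2) arrays of feature locations.
    Returns (a, b) and the per-line offsets c_j, all up to a common scale."""
    L = len(points_per_line)
    rows = []
    for j, pts in enumerate(points_per_line):
        sel = np.zeros((len(pts), L))
        sel[:, j] = 1.0                        # selector for offset c_j
        rows.append(np.hstack([pts, sel]))     # row: [u, v, e_j]
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    theta = Vt[-1]                             # min-singular-value direction
    return theta[0], theta[1], theta[2:]
```

Because the solution is homogeneous, only ratios such as the slope $-a/b$ and the intercepts $-c_j/b$ are meaningful.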

Computation of the Rotation Axis Candidates
As lines are the degenerate projection of the planes where the 3D circles lie, the rotation axis projection on the image is given by the line perpendicular to the sheaf of parallel lines and passing through the rotation center. This geometrical configuration is shown in Fig. 6b. Therefore, the 2D vector perpendicular to the sheaf must be backprojected into 3D space, as shown in Fig. 7. Let the vectors $\mathbf{v}_\perp$ and $\mathbf{v}_\parallel$ be, respectively, the vectors normal and parallel to the sheaf. These vectors can be computed from the line equation after proper rescaling to pixel coordinates. Since the axis has no component along the camera boresight, the two solutions for the rotation axis are $\pm\left(\mathbf{v}_\perp, 0\right)^\top$ in the camera frame, after normalization.

Rotation Axis Pruning and Selection
To exclude the spurious solution, a heuristic similar to the one in the previous section is introduced. If a coarse guess exists, the solution closest to the initial guess is chosen. Otherwise, the correct solution is determined from how features move in the image on average.
Let $L = \{o, \mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}$ be a reference frame centered at the image center, with unit vectors $\mathbf{a}_1 = \left(\mathbf{v}_\perp, 0\right)^\top$, $\mathbf{a}_2 = \left(\mathbf{v}_\parallel, 0\right)^\top$, and $\mathbf{a}_3$ perpendicular to both so as to define a right-handed reference frame. Let "in front" and "behind" be the subsets of the image defined by an observer looking in the direction of $\mathbf{a}_2$ while standing on the image. If the rotation axis is projected as $\mathbf{v}_\perp$,

Fig. 7 The estimation of the rotation axis for the perpendicular-axis case
the majority of the 3D points move from "behind" to "in front"; they move from "in front" to "behind" if the axis is projected as $-\mathbf{v}_\perp$.
Indeed, as the majority of the 3D landmarks are initialized between the camera and the rotation center, the mean feature motion respects this pattern. As in the non-degenerate conic case, only sufficiently long tracks are considered, with the parameter heur = 0.2, to increase the statistical relevance of the heuristic.
The movement $m_{i,k}$ in the direction of $\mathbf{a}_2$ of the $i$th feature in the $k$th image is computed as
$$m_{i,k} = \left(\mathbf{h}_{i,k+1} - \mathbf{h}_{i,k}\right) \cdot \mathbf{a}_2$$
Finally, the mean movement direction index $I_{dir}$ is computed as
$$I_{dir} = \sum_i \sum_k \operatorname{sign}\left(m_{i,k}\right)$$
The value of $I_{dir}$ indicates whether the landmarks move from "in front" to "behind" or the other way around. If $I_{dir} > 0$, the rotation axis is projected in the image in the same direction as $\mathbf{v}_\perp$; the opposite otherwise. Thus, the sign of $I_{dir}$ uniquely defines the rotation axis orientation in the inertial reference frame.
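The direction index can be sketched as follows, a minimal illustration assuming tracks as coordinate arrays and the sheaf-parallel image direction as a 2D vector; the function name is hypothetical.

```python
import numpy as np

def direction_index(tracks, v_parallel):
    """Sum of the signs of frame-to-frame displacements projected on the
    in-image direction parallel to the line sheaf. A positive value means
    the dominant motion is along +v_parallel (axis along +v_perp in the
    text's convention)."""
    a2 = np.asarray(v_parallel, dtype=float)
    a2 = a2 / np.linalg.norm(a2)
    idx = 0.0
    for t in tracks:
        m = np.diff(t, axis=0) @ a2      # m_{i,k}: displacement projections
        idx += np.sum(np.sign(m))
    return idx
```

As with the clockwise index, reversing the motion flips every term, so the sign of the result discriminates the two remaining axis candidates.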

A. Test Cases Description
To test the proposed algorithm, numerical simulations have been performed in which the spacecraft is on close approach to the small body. As the probe is not under the influence of the small body gravity field, the trajectory has been simulated with dynamics dominated by Solar Radiation Pressure (SRP) in the Sun-small-body rotating frame.
Note that the spacecraft is orbiting the Sun but the swept arc of the conic is so short that it can be approximated by a straight line, so as to ignore the Sun gravity in the equations of motion. The small body rotates around an inertially-fixed axis with constant angular velocity. The spacecraft orientation in the inertial frame N is considered known and the camera points to the small body center of mass.
The probe is placed at an approach distance from the small body and slowly drifts toward it. The mapping camera, like Hayabusa's AMICA [48] or OSIRIS-REx's MapCam [49] (see Tab. 1), constantly observes the small body. Images are simulated with the SurRender software [50]. The small-body shapes, Bennu and Itokawa in this study, are taken from the Planetary Data System - Small Body Node by downloading the highest-resolution polyhedral shape models.
The shape model is given to SurRender together with the small-body bulk optical properties, like albedo, reflectivity, and diffusivity, assuming the Hapke BRDF (Bidirectional Reflectance Distribution Function) [51] to obtain high-fidelity simulated images. A texture is added to the small body to take into account the small-body surface albedo. The camera optics are represented with a Gaussian Point Spread Function (PSF) and a pin-hole projection model. The KLT tracker Themis processes images every 60 seconds for Bennu and every 180 seconds for Itokawa to account for realistic image processing computational performances. Numerical values, including 3D meshes, are taken from operational mission scenarios [48,49,52-54]. Note that in this study no real images are considered because of the unavailability of frequent, successive real images to feed the KLT algorithm.
Two main test cases are studied hereafter: Bennu and Itokawa. To study the performance of the proposed algorithm, the approach direction is kept constant and 3 different illumination angles are considered: $10^{\circ}$, $45^{\circ}$, and $75^{\circ}$. After having defined the Sun-small-body and the approach directions, the rotation axis must be defined to fully determine the observational geometry during the approach (see Fig. 1). To characterize the algorithm performance, the rotation axis is varied on a grid in the right-ascension-declination plane. By considering that only one case is simulated for $\delta = \pm 90^{\circ}$, as the right ascension is not defined there, and that $\alpha = \pm 180^{\circ}$ denote the same orientation for all declinations, 134 simulations are performed per illumination angle and asteroid, leading to 804 simulations and more than 200,000 images. This rendering and simulation effort has been performed to push towards the validation and the performance assessment of the algorithm when the illumination changes, the shape varies, or the rotation axis orientation is unforeseeable. Examples of synthetic images of both asteroids with different illumination angles are shown in Figs. 8a and 8b for the sake of completeness.

B. Origin Estimation
The first analyzed results deal with the estimation of the small-body-fixed reference frame. To assess the performance in estimating the origin, the origin error is defined as the norm of the difference between the estimated origin and the true one.
Note that the origin error is mainly influenced by the center of brightness computation, as the distance to the small body is considered known (see Sec. III). The probability density functions (PDFs) of the origin error are reported in Figs. 9a and 9b for different values of the illumination angle. Recall that Bennu and Itokawa have best-fitting-ellipsoid semiaxes of 252 m × 246 m × 228.3 m and 267.5 m × 147 m × 104.5 m, respectively. As known from the literature [25,27,28], the center of brightness deviates from its true value when the illumination angle increases; thus, its backprojection deviates from the true origin accordingly. For very high illumination angles the performance degrades and the origin is estimated to be far from the true value. This effect is even more important for a highly concave and oblate body such as Itokawa.
It is worth noting that this error can influence the rotation axis estimation, as the origin determination is necessary for the reconstruction of the 3D circle parameters. Despite this, the rotation axis estimation shows good performance for the majority of the simulated scenarios, implying that the origin estimation plays a minor role. As stated in Sec. IV, algorithms were designed in the past to compensate for this effect, implying a possible reduction of the origin error and of its influence on the rotation axis estimation.

C. Detection of Degenerate Solutions
In this section, the results of the degenerate solution detection are presented. Two classes are present: "Tilted Rotation Axis" and "Perpendicular Rotation Axis". For each class it is possible to define the precision $P$, the recall $R$, and the F-score $F_1$:
$$P = \frac{TP}{TP + FP} \qquad R = \frac{TP}{TP + FN} \qquad F_1 = \frac{2\,P\,R}{P + R}$$
where $TP$ is the true positive number, $FP$ is the false positive number, and $FN$ is the false negative number. These metrics are selected because precision indicates how many detections within a class are correct, recall states how many samples of the considered class are correctly detected, and the F-score $F_1$ is the harmonic mean of precision and recall, which is high only when both are high. For the two considered classes, the metrics are reported in Tab. 2. The precision of the "Tilted Rotation Axis" class is 1, which implies that all the solutions labeled within the class belong to it. Its recall is always above 0.85, which implies that more than $85\%$ of the tilted axes are identified correctly. By looking at Figs. 10a-10f, which show the detection results for varying right ascension and declination, it can be seen that no false detection of the tilted rotation axis is present. Finally, the F-score for the "Tilted Rotation Axis" class is high, implying that a good balance between precision and recall is achieved.
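For reference, the standard definitions of these metrics can be written as a small helper; the function name is this sketch's own.

```python
def classification_metrics(tp, fp, fn):
    """Precision, recall, and F-score from true positive, false positive,
    and false negative counts (standard definitions)."""
    p = tp / (tp + fp)             # fraction of detections that are correct
    r = tp / (tp + fn)             # fraction of class samples detected
    f1 = 2.0 * p * r / (p + r)     # harmonic mean of p and r
    return p, r, f1
```

For instance, a class with no false positives and a recall of 0.85 (as reported for the tilted-axis class) yields $P = 1$ and $F_1 \approx 0.92$.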
Regarding the "Perpendicular Rotation Axis" class, its recall is 1, which means that all the perpendicular rotation axes are correctly identified. The precision is lower, implying that some of the found solutions do not belong to the class. By looking at Figs. 10a-10f, all the false detections are close to the perpendicular case in the right-ascension-declination plane. This is because the tracks are almost linear and the algorithm classifies them as lines. This problem could be mitigated by increasing the condition number parameter CN, but this could in turn cause missed detections of the perpendicular rotation axis, as tracks could be detected as ellipses due to tracking errors. The F-score is high also for this class, which implies a good balance between precision and recall.
Note that the results are similar for both asteroids, meaning that the performance is comparable despite the different onboard cameras. It is worth noting that the Bennu test case has more false detections of the perpendicular axis, which is probably due to the low concavity of the asteroid (see Fig. 10a).

D. Rotation Axis Estimation
The last results to be analyzed are the performances of the rotation axis estimation. As explained in Sec. VII.A, the rotation axis is varied on an evenly-spaced grid in the right-ascension-declination plane to cover all the geometrical configurations that the probe could encounter during the approach. To assess the algorithm performance, the angular error between the true rotation axis and its estimate is used as the performance index. Moreover, two different simulations are reported hereafter, according to the two heuristics proposed in Sec. VI: the final selection based on the onboard guess and the one based on the point initialization.
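The angular-error index between two axis directions reduces to the angle between unit vectors, which can be computed as below (a small helper of this sketch, with a clip to guard against round-off outside the arccos domain).

```python
import numpy as np

def angular_error_deg(n_true, n_est):
    """Angle in degrees between the true and estimated rotation axes."""
    c = np.dot(n_true, n_est) / (np.linalg.norm(n_true) * np.linalg.norm(n_est))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```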
First, the results for the rotation axis estimation based on the available onboard guess are analyzed. In this case the final rotation axis is selected as the closest one to a coarse estimate of the rotation axis available on board. This coarse estimate is simulated in the proposed framework by randomly rotating the true rotation axis by $20^{\circ}$. A compact visualization of the results is given by the Cumulative Distribution Functions (CDFs) in Figs. 11a and 11b for the various illumination angles and the two studied asteroids. Note that the CDFs are cropped at $30^{\circ}$ to have a detailed look at the low-error solutions. In both cases, the angular error is below $10^{\circ}$ for $80\%$ of the cases and below $20^{\circ}$ for $90\%$ of the cases. This implies that the proposed method correctly estimates the 4 solutions and identifies the correct one. Note that the illumination does not seem to play a major role for the Itokawa test case (see Fig. 11b), while the performance degrades for the Bennu test case when the illumination angle increases (see Fig. 11a). As stated in Sec. VII.C, this is due to the different shadows present in the two test cases. A more detailed view of the results is reported in Figs. 12a-12f, where the angular error is shown in the right-ascension-declination plane. In the figures, the sectors with white contours are associated with rotation axes perpendicular to the camera boresight, whereas the white sectors label the solutions where the algorithm has not converged. This is mainly due to the impossibility of detecting a series of ellipses with the same orientation, leading to the detection of a conic family with an eccentricity greater than 1. Finally, the yellow sectors are associated with very high errors, which are due to a wrong selection among the four solutions. It is worth noting that the perpendicular cases generally have low errors, and higher errors are present when the rotation axis is parallel to the approach direction.
Note that the wrong classifications outlined in Sec. VII.C do not show high errors because they occur when the rotation axis is almost perpendicular to the camera boresight.
Second, the results for the rotation axis estimation based on the landmark initialization are studied. The results are similar to the previous ones, with a small performance degradation mainly because the correct axis is selected less often. The performance reduction is nonetheless low, as shown in Figs. 13a and 13b. Indeed, the angular error is below $10^{\circ}$ for $75\%$ of the samples for all asteroids and illumination angles and below $20^{\circ}$ for $85\%$ of the cases. Note that the performance reduction is not due to a change in the angular error, which is the same because the tracked features are the same, but to the higher number of samples in the distribution tails. This is visible in Figs. 14a-14f, where the angular error is shown in the right-ascension-declination plane. The number of yellow sectors increases because the algorithm fails in detecting the correct solution among the remaining two more frequently than with the onboard-guess heuristic. It is worth noting that this heuristic does not rely on a previously-computed guess, and the rotation axis estimation is thus performed autonomously from images.

VIII. Conclusion
This paper outlines an algorithm to autonomously determine the rotational state of a small body during the approach phase. Under the assumption of a known rotational period, the algorithm determines the rotation axis orientation of a small body by exploiting images acquired during the approach phase. The spacecraft camera acquires images of an unknown small body and features are extracted from these images. The long-tracked features are the projections of 3D landmarks whose movement is due to the small body rotation. By exploiting the 2D feature tracks, the algorithm fits a family of conics which are the projections of the landmarks' 3D circles. By backprojecting the found conics in 3D, the algorithm finds all the possible solutions for the small body rotation axis. The correct one is selected by ensuring coherence with the movement of the features in the image and by exploiting a heuristic approach. Moreover, the center of rotation is determined from the first image by backprojecting the small body center of brightness.
Numerical simulations are performed with a concave and a convex asteroid over a wide range of illumination angles and pole orientations. The simulations underline that the algorithm can efficiently detect the case in which the rotation axis is perpendicular to the spacecraft approach direction. Moreover, the rotation axis estimation is performed with limited error in most cases: indeed, the error between the true rotation axis and its estimate is below $10^{\circ}$ for $80\%$ of the considered test cases. The computed rotation axis is a valuable first guess to enable spacecraft localization around an unknown small body. The proposed rotation axis estimation and origin determination are valuable tools to define the small-body-fixed reference frame during the approach phase. This is an important preliminary task to be fulfilled before performing insertion maneuvers, shape determination, or small-body-fixed localization.