A Generic Pushbroom Sensor Model for Planetary Photogrammetry

Various imaging instruments have been designed for planetary exploration missions, each requiring dedicated photogrammetric software modules to support the geometric processing of planetary remote sensing images. To reduce the cost of software development and maintenance, this paper presents a generic pushbroom sensor model for planetary photogrammetry, with which various coordinate transformations can be conducted in a unified and efficient way, without considering the specific camera and its optical distortion equations. In terms of implementation, the camera file and orientation data file designed for the airborne linear array camera ADS40 are used to manage the interior orientation and exterior orientation parameters of planetary images. The generic pushbroom sensor model supports the summing mode, varying exposure times within an image and image distortions, which are typical problems in need of a solution in planetary mapping. Furthermore, an alternative photogrammetric process based on extracting tie points on approximate orthophotos is developed. The geometric accuracy and computational efficiency of the generic pushbroom sensor model were compared with those of the popular planetary cartographic software, the Integrated Software for Imagers and Spectrometers (ISIS). The experimental results demonstrate that the proposed generic pushbroom sensor model can (1) deliver the same geometric accuracy as the ISIS pushbroom sensor model, (2) greatly improve the computational efficiency of orthophoto generation and tie-point extraction, and (3) support various types of planetary remote sensing images. Moreover, the proposed generic pushbroom sensor model reduces the cost of software development because different types of planetary images share the same code base.


Introduction
Linear array cameras are widely used in planetary exploration missions to derive cartographic products (Albertz et al., 2005; Di et al., 2014; Kirk et al., 2008; Speyerer et al., 2016). However, the photogrammetric processing of linear pushbroom images is more complicated than that of frame images. Moreover, various types of linear array cameras are designed with different requirements to achieve their respective scientific objectives, which increases the burden of photogrammetric software development. Obviously, a highly universal pushbroom sensor model can facilitate both the software development and the geometric processing of planetary remote sensing images. Researchers have pointed out that making geometric camera models available in various software packages is still a technically challenging task (Kirk et al., 2012). Furthermore, photogrammetric processing algorithms and software tools always lag behind imaging instrument development. For example, early planetary photogrammetric software failed to support varying exposure times within an image, such that the raw Mars Express (MEX) High Resolution Stereo Camera (HRSC) images needed to be segmented into multiple small pieces for processing (Kirk et al., 2017). With a generic sensor model available, such problems can be avoided or at least minimized. Therefore, developing a generic pushbroom sensor model for planetary images is highly worthwhile.
The most widely used planetary cartographic software is the Integrated Software for Imagers and Spectrometers (ISIS), developed by the United States Geological Survey (USGS) Astrogeology team (Edmundson et al., 2012). ISIS has the merits of open-source availability, thorough documentation and continuous maintenance. At present, ISIS supports nearly 30 planetary exploration missions, from the early Apollo program to the recent Origins Spectral Interpretation Resource Identification Security Regolith Explorer (OSIRIS-REx) mission. In addition, the open-source automated stereogrammetry software Ames Stereo Pipeline (ASP), developed by the National Aeronautics and Space Administration (NASA) Ames Research Center, has been gradually recognized in the planetary mapping community (Beyer et al., 2018; Shean et al., 2016; Tao et al., 2018). ASP has advantages in image matching and digital terrain model (DTM) generation. The ISIS developers created the corresponding pushbroom sensor model for planetary images, with the sensor model parameters stored in the ISIS cube images; ASP also uses the ISIS pushbroom sensor model to process planetary images. Additionally, to apply more powerful photogrammetric features in practical mapping projects, commercial photogrammetric workstations, such as BAE Systems' SOCET SET, have also been adopted by USGS to produce planetary cartographic products (Kirk et al., 2008). To this end, USGS exports the ISIS pushbroom sensor model to SOCET SET's generic pushbroom sensor model to conduct the photogrammetric operations. Although some open-source and commercial photogrammetric software offerings are currently available for planetary mapping, they show deficiencies in practice. In particular, the computational efficiency of the ISIS pushbroom sensor model is unsatisfactory, resulting in slow photogrammetric processing procedures.
Commercial photogrammetric software offerings are efficient and robust, but they are mainly developed for Earth observation remote sensing images. Therefore, many practical problems will be encountered when they are used for planetary mapping, such as lacking planetary datum and inability to support some unique features of the imaging instrument (e.g., multiline detectors of MEX HRSC).
Currently, planetary mapping products are unable to meet the needs of planetary scientific research. Due to insufficient financial support, limited numbers of professional engineers and very few planetary photogrammetric software packages, most raw planetary images are never geometrically processed. For example, the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Cameras (NACs) have acquired more than one million images as of this writing (see https://pilot.wr.usgs.gov/ for details) and have covered almost the whole lunar surface with a spatial resolution of 0.5 to 2 m/pixel. However, only a small number of local or regional mapping products have been derived from these valuable LRO NAC images (Di et al., 2019; Haase et al., 2012; Hu & Wu, 2018; Wu & Liu, 2017). Similarly, although the returned MEX HRSC images cover approximately the entire surface of Mars, the Level 4 products, namely, the digital orthophoto map (DOM) and the DTM, cover only approximately 40% of the Martian surface. Moreover, the currently available MEX HRSC Level 4 products are derived from single-orbit adjustment. In terms of multi-orbit DOM and DTM products derived from the bundle adjustment of multiple MEX HRSC observations, only the MC11 quadrangle has been completed (Gwinner et al., 2016), and it is not yet available for public download. Obviously, if more open-source and commercial photogrammetric software packages supported the pushbroom sensor model of planetary images, more mapping products could be derived. In addition, the fusion processing of planetary images acquired by different space agencies is of great significance: it can take advantage of the merits of different data sources, resulting in higher-resolution and higher-accuracy planetary mapping products. However, the fusion processing of planetary images is an extremely complex task.
A generic pushbroom sensor model with high efficiency and robustness will play a significant role in such fusion processing. This paper aims at developing a generic pushbroom sensor model that can process various types of planetary pushbroom images in a unified and efficient way. The universality of the proposed pushbroom sensor model is achieved by fully considering the characteristics of the raw planetary pushbroom images (e.g., different camera designs). Moreover, in order to make the generic pushbroom sensor model more robust, we have also solved some typical problems for the photogrammetric processing of planetary pushbroom images, such as the summing mode, varying exposure times of the scan line and image distortions. In addition, an alternative photogrammetric processing flow based on approximate orthophotos is specially developed for planetary images, which utilizes the generic pushbroom sensor model to conduct orthorectification and extract tie points on approximate orthophotos. The proposed generic pushbroom sensor model was fully tested with LRO NAC and MEX HRSC images.

Basic Knowledge of Sensor Model for Planetary Photogrammetry
The sensor model establishes the functional relationship between the two-dimensional (2D) image space and the three-dimensional (3D) ground space. Generally, there are two kinds of widely used sensor models in photogrammetry, namely, the rigorous sensor model and the replacement sensor model (Tao & Hu, 2001). The rigorous sensor model represents the physical imaging process based on the interior orientation (IO) parameters, the exterior orientation (EO) parameters and some sensor-related geometric parameters; it is also referred to as the physical sensor model in some literature. The rational function model (RFM) is one of the most representative replacement sensor models, which uses ratios of polynomial functions to relate image coordinates and ground coordinates (Dial et al., 2003). RFM has the merits of high efficiency and confidentiality, and it is mainly applied to Earth remote sensing images. However, owing to the practical problems of planetary photogrammetry (e.g., the limited instrument position and pointing accuracy and the lack of ground control data), photogrammetrists prefer to adopt the rigorous sensor model for planetary images (Di et al., 2014; Kirk et al., 2008; Wu et al., 2014). Because the spacecraft ephemeris and attitude data can be introduced into the satellite triangulation procedure as weighted observations, they help derive a robust bundle adjustment solution and higher model accuracy. Therefore, this paper focuses on the rigorous sensor model for planetary photogrammetry.
In the case of linear pushbroom images, each image line is exposed at a different instant of time. The main characteristics of linear pushbroom images are time dependence and a narrow angular field of view, which lead to very weak image geometry. This results in the complexity of the pushbroom sensor model as well as of the photogrammetric processing. Before introducing the proposed generic pushbroom sensor model, we first discuss the basic knowledge of sensor models for planetary photogrammetry, including the ISIS pushbroom sensor model, the Spacecraft, Planet, Instrument, Camera-matrix and Events (SPICE) kernels, and the acquisition of the IO and EO parameters. These elements are the basis for constructing the rigorous sensor model of planetary images.

ISIS Pushbroom Sensor Model
The photogrammetric processing methodology designed by the ISIS development team is already a de facto standard in the planetary mapping community. ISIS has developed its own image format, namely, the ISIS cube image, which stores the image data as well as the corresponding auxiliary metadata in a single file. Specifically, the instrument position and pointing information, as well as other auxiliary data (e.g., sun position and body rotation), are stored as table items in the ISIS cube images. These auxiliary data can be used to calculate the IO and EO parameters of the planetary images. The ISIS pushbroom sensor model has both advantages and disadvantages. The advantage is that ISIS uses the authoritative camera geometric parameters and optical distortion models, ensuring accurate coordinate transformation. The disadvantage is that the ISIS pushbroom sensor model delivers low computational efficiency. For example, ISIS adopts an inefficient ground-to-image transformation algorithm for planetary pushbroom images, resulting in a slow processing speed in cartographic product generation procedures.

SPICE Kernels
The spacecraft position, camera pointing, instrument parameters, exposure time of scan lines and other auxiliary data are archived as SPICE kernels by NASA's Navigation Ancillary Information Facility (NAIF) (Acton et al., 2016). In the construction stage of the ISIS pushbroom sensor model, the SPICE kernels are required to conduct various complicated frame transformations. Then, the IO and EO parameters of the planetary image are computed. The ISIS spiceinit program relates the planetary image to the corresponding SPICE kernels. As shown in Table 1, when spiceinit runs successfully, the relative file paths of these SPICE kernels are written into the ISIS cube images.

Earth and Space Science
Considering MEX HRSC as an example, Figure 1 illustrates the sensor model construction for planetary images based on SPICE kernels. Specifically, the MEX HRSC instrument position and pointing information are stored in spacecraft position kernel (SPK) and orientation kernel (CK) files respectively. The instrument kernels (IK) contain the camera geometric parameters, and the planetary constant kernels (PCK) contain the planetary datum and constant parameters. The sensor model provides the coordinate transformation between 3D ground coordinates and 2D image coordinates. Suppose that the attitude data are acquired by the star tracker; then, the rigorous sensor model for MEX HRSC images can be expressed as

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix} + \lambda \, R^{Mars}_{J2000} \, R^{J2000}_{star} \, R^{star}_{body} \, R^{body}_{HRSC} \begin{bmatrix} x \\ y \\ -f \end{bmatrix} \quad (1)$$

where (X, Y, Z) are the 3D ground coordinates, (X_S, Y_S, Z_S) are the perspective center coordinates, λ is the scale factor, R^{body}_{HRSC} is the rotation matrix from the MEX HRSC instrument frame to the MEX spacecraft body frame, R^{star}_{body} is the rotation matrix from the MEX spacecraft body frame to the star tracker frame, R^{J2000}_{star} is the rotation matrix from the star tracker frame to the J2000 coordinate frame, R^{Mars}_{J2000} is the rotation matrix from the J2000 coordinate frame to the Mars body-fixed coordinate frame, (x, y) are the 2D image coordinates, and f is the focal length. R^{body}_{HRSC} and R^{star}_{body} are constant, R^{Mars}_{J2000} can be calculated with the planetary constant kernels, and R^{J2000}_{star} is derived from the observation data of the star tracker. Indeed, the product of the four rotation matrices in Equation 1 forms a final rotation matrix R^{Mars}_{HRSC}, namely, the rotation matrix from the MEX HRSC instrument frame to the Mars body-fixed coordinate frame. Therefore, Equation 1 can be rewritten as

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix} + \lambda \, R^{Mars}_{HRSC} \begin{bmatrix} x \\ y \\ -f \end{bmatrix} \quad (2)$$

Equation 2 describes the central perspective projection, which connects the image point, the ground point and the perspective center.
It is the basic mathematical model used in the photogrammetric processing of planetary images. It should be noted that for linear pushbroom images, the central perspective projection holds true only in the across-track direction of each image line. In addition, note that image distortions are not modeled in Equations 1 and 2; the 2D image coordinates x and y are treated as undistorted image coordinates.
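The ground-to-image direction of Equation 2 can be sketched as follows. This is a minimal illustration assuming the rotation matrix, perspective center and focal length are already known for the scan line in question; all function and variable names are ours, not taken from any particular library.

```python
import numpy as np

def ground_to_image(ground, center, R, f):
    """Invert the central perspective relation of Equation 2,
        [X, Y, Z]^T = [Xs, Ys, Zs]^T + lambda * R * [x, y, -f]^T,
    to obtain image coordinates (mm) from a body-fixed ground point.
    R rotates the camera frame into the body-fixed frame."""
    # Rotate the ray from the perspective center into the camera frame
    u = R.T @ (np.asarray(ground, float) - np.asarray(center, float))
    # lambda * (-f) = u[2], so the scale factor cancels in the ratios
    x = -f * u[0] / u[2]
    y = -f * u[1] / u[2]
    return x, y
```

For a pushbroom image, this relation holds per scan line; a full ground-to-image algorithm additionally searches for the scan line whose EO parameters place the ground point on the detector (x close to zero for a single-line sensor).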

Acquisition of Interior Orientation Parameters
The IO parameters are used to conduct the coordinate transformation between pixel coordinates p(s,l) and image coordinates p(x,y). The camera geometric parameters such as focal length and pixel size are provided in the instrument kernel and can be extracted by searching for the corresponding NAIF keywords. Since the camera calibration results of MEX HRSC are not included in the SPICE kernels, here we use the LRO NACs to discuss the IO parameters. Table 2 lists the camera geometric parameters and the corresponding NAIF keywords for the LRO NACs. As an example, the following equations can be used to compute the image coordinates from the pixel coordinates:

x_d = 0
y_d = (s − s_0) · p

where (x_d, y_d) indicate the distorted image coordinates, s_0 indicates the boresight sample location on the linear array, and p indicates the pixel size. Please note that the boresight adjustment as well as other IO parameters (e.g., focal plane origin and principal point offset) are defined by the instrument team. In addition, in order to facilitate the coordinate transformation between pixel coordinates and image coordinates, some instrument teams provide two sets of affine transformation parameters in the instrument kernel. Specifically, the affine transformation parameters consisting of the values of the keywords 'TRANSX' and 'TRANSY' can be used to compute the image coordinates from the pixel coordinates, and the affine transformation parameters consisting of the values of the keywords 'ITRANSS' and 'ITRANSL' can be used to compute the pixel coordinates from the image coordinates.
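The affine option based on the two keyword sets might be applied as sketched below. The coefficient ordering (constant term, per-sample term, per-line term) is an assumption for illustration and should be checked against the instrument kernel documentation for a given camera.

```python
def pixel_to_image(sample, line, transx, transy):
    """Affine pixel-to-image transformation, analogous to the
    'TRANSX'/'TRANSY' coefficient sets of the instrument kernel.
    Each coefficient set is (constant, per-sample, per-line)."""
    x = transx[0] + transx[1] * sample + transx[2] * line
    y = transy[0] + transy[1] * sample + transy[2] * line
    return x, y

def image_to_pixel(x, y, itranss, itransl):
    """Inverse affine transformation, analogous to 'ITRANSS'/'ITRANSL'."""
    s = itranss[0] + itranss[1] * x + itranss[2] * y
    l = itransl[0] + itransl[1] * x + itransl[2] * y
    return s, l
```

The two sets are stored as mutual inverses so that neither direction requires a matrix inversion at run time.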

Acquisition of the Exterior Orientation Parameters
The initial EO parameters of each image line can be computed using the SPICE software toolkit. However, users need to specify some parameters (e.g., light time and stellar aberration corrections) when using the functions provided in the SPICE software toolkit. Therefore, even with the same planetary image and the corresponding SPICE kernels, different instrument position and pointing values may be derived due to differences in parameter settings. An alternative method is to extract the initial EO parameters from the ISIS pushbroom sensor model, which keeps the established sensor model consistent with ISIS. Specifically, the ISIS spiceinit program first relates the planetary image to the corresponding SPICE kernels, then computes the instrument position and pointing information, and subsequently writes them into the tables of the ISIS cube image. Thus, the Euler angle form of the instrument pointing data can be easily obtained from the corresponding rotation matrix R^{Mars}_{HRSC}. In the case of the instrument position data, the vector form of the position coordinates can be converted from the J2000 coordinate frame to the Mars body-fixed coordinate frame by the following equation:

P_t = R^{t}_{J2000} P_j
where P j refers to the position vector in the J2000 coordinate frame, P t refers to the position vector in the Mars body fixed coordinate frame, and the subscript t refers to the target being observed (i.e., Mars in the example). Consequently, the initial EO parameters in the Mars body fixed coordinate frame can be derived using the instrument position and pointing data.
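This frame conversion amounts to a single matrix-vector product. In the sketch below the rotation matrix is passed in directly; in practice it would be obtained from the SPICE toolkit (e.g., a 'J2000' to 'IAU_MARS' transformation evaluated at the exposure epoch), which we do not call here to keep the example self-contained.

```python
import numpy as np

def j2000_to_body_fixed(p_j2000, R):
    """Apply P_t = R * P_j, where R rotates the J2000 frame into the
    target body-fixed frame at the exposure epoch."""
    return R @ np.asarray(p_j2000, float)
```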
In addition, to make the bundle adjustment solution derived from the ISIS jigsaw program available to more photogrammetric software packages, we can also extract the refined EO parameters from the ISIS pushbroom sensor model. Different from the storage of the initial instrument position and pointing data, ISIS uses polynomial functions to represent the refined instrument position and pointing data. Similarly, the polynomial coefficients can be output to readable text files and used to derive the refined EO parameters at any exposure time. The second-order polynomial functions used to compute the instrument position and pointing values are as follows:

X_t = X_0 + X_1 t_n + X_2 t_n^2
Y_t = Y_0 + Y_1 t_n + Y_2 t_n^2
Z_t = Z_0 + Z_1 t_n + Z_2 t_n^2
O_t = O_0 + O_1 t_n + O_2 t_n^2
P_t = P_0 + P_1 t_n + P_2 t_n^2
K_t = K_0 + K_1 t_n + K_2 t_n^2

where (X_t, Y_t, Z_t) and (O_t, P_t, K_t) refer to the instrument position and attitude values at exposure time t, respectively; (X_0, Y_0, Z_0) and (O_0, P_0, K_0) refer to the coefficients of the constant terms; (X_1, Y_1, Z_1) and (O_1, P_1, K_1) refer to the coefficients of the first-order terms; (X_2, Y_2, Z_2) and (O_2, P_2, K_2) refer to the coefficients of the second-order terms; and t_n refers to the normalized exposure time. Let t_b and t_s denote the base exposure time and the time scale respectively; then, the normalized exposure time t_n can be calculated as

t_n = (t − t_b) / t_s

Design of the Generic Pushbroom Sensor Model for Planetary Photogrammetry

Main Characteristics of the Generic Pushbroom Sensor Model

As shown in Figure 2, the imaging instruments used in planetary exploration missions are always designed according to their own scientific objectives. Specifically, the LRO NACs comprise two identical cameras (i.e., NAC-LEFT and NAC-RIGHT), and the linear array of each NAC contains 5064 charge-coupled device (CCD) detectors. The LRO NACs can return lunar orbital photographs with a spatial resolution of 0.5 m/pixel at the nominal 50-km orbit altitude. The LRO NAC images can be used to derive very detailed lunar cartographic products (Speyerer et al., 2016).
The two LRO NACs are mounted side by side with an overlap of approximately 130 pixels, providing a total ground track swath width of 5 km. Unlike the LRO NACs, MEX HRSC contains five panchromatic and four multispectral channels, which are mounted in parallel on the focal plane (Albertz et al., 2005). Each linear array contains 5184 CCD detectors. MEX HRSC is specially designed for global Mars mapping and can acquire stereo images during a single orbital pass. It provides a maximum ground sample distance (GSD) of 10 m/pixel at a 250-km orbit height. Both the LRO NAC team and the MEX HRSC team are constantly optimizing their photogrammetric software to obtain the maximum scientific return from the planetary remote sensing images (Haase et al., 2019; Heipke et al., 2007; Putri et al., 2019). However, due to the differences in camera design and the resulting sensor models, the photogrammetric software modules, namely, the whole computational package used to complete the entire photogrammetric processing flow, developed for LRO NAC images cannot be directly used to process MEX HRSC images, and vice versa. In summary, the universality of the pushbroom sensor model remains a prominent problem in planetary mapping.
According to the practical processing requirements of planetary photogrammetry, the proposed generic pushbroom sensor model should have the following characteristics: (1) support multiple planetary bodies such as Moon and Mars; (2) support various types of linear array cameras, including multiline and single line scanners; (3) provide an efficient ground-to-image transformation algorithm, which can improve the computational efficiency for multiple geometric processing procedures (e.g., orthorectification by the indirect method); (4) allow solving some practical problems such as the summing mode, varying exposure times of the scan line and image distortions; (5) support the error propagation and refinement of the EO parameters.

Rigorous Sensor Model Based on Airborne Linear Array Camera ADS40
Figure 2. Illustration of the different camera designs in planetary mapping. The coordinate system shown in the figures is consistent with that used in the field of Earth observation satellite photogrammetry, and it is different from the default coordinate system in SPICE kernels and ISIS. In the focal plane of MEX HRSC, ND refers to the downward-looking panchromatic channel; S1, S2, P1 and P2 refer to the other four panchromatic channels; and RE, GR, BL and IR refer to the red, green, blue and infrared channels respectively.

To the best of our knowledge, both the commercial airborne linear scanner ADS40 and MEX HRSC were designed by the German Aerospace Center (DLR). These two cameras have much in common in their camera design, such as placing multiple linear arrays on the same focal plane and synchronously acquiring forward-looking, backward-looking and downward-looking images in one observation. ADS40 has been very successful in the field of aerial remote sensing (Gonzalez et al., 2013; Tempelmann et al., 2000). The photogrammetric processing techniques developed for ADS40 can be used to process MEX HRSC and other planetary images as well. Indeed, the ADS40 sensor model provides a very good foundation for developing the generic pushbroom sensor model for planetary photogrammetry. The design principle of the ADS40 sensor model enables it to meet most of the characteristics listed above. Specifically, the ADS40 sensor model uses a separate camera file to store each detector's calibrated image coordinates. This provides a generalized approach to performing the coordinate transformation between pixel coordinates and image coordinates, and avoids considering the detailed optical distortion equations of specific cameras. The ADS40 sensor model also uses a separate orientation data file to store each scan line's EO parameters, making it very efficient to compute the EO parameters of any scan line through interpolation. Moreover, this also helps to solve the problem of varying exposure times within an image. Furthermore, the ADS40 sensor model can easily support different versions of the EO parameters, namely, the initial EO parameters derived from SPICE kernels or the refined EO parameters derived from the bundle adjustment. This facilitates the photogrammetric processing in practical mapping projects. For error propagation, the orientation data file contains the uncertainties of the EO parameters, which can be used to determine the weight values in the bundle adjustment. Last but not least, the ADS40 sensor model is well supported by mainstream commercial photogrammetric workstations such as Hexagon Geospatial's IMAGINE Photogrammetry (formerly the Leica Photogrammetry Suite, LPS). Therefore, we can make use of more commercial photogrammetric software to process planetary images. However, to make the ADS40 sensor model suitable for processing planetary images, a substantial amount of work remains.
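The per-scan-line EO storage makes interpolation straightforward. Below is a minimal sketch, assuming linear interpolation between two stored records is adequate over one record interval; in a real implementation the attitude angles would need care near angle wrap-around, and the record layout is our own illustrative choice.

```python
import numpy as np

def interpolate_eo(times, eo_records, t):
    """Interpolate the six EO parameters (Xs, Ys, Zs, omega, phi, kappa)
    at exposure time t from per-scan-line records, as stored in an
    ADS40-style orientation data file. 'times' must be sorted ascending."""
    times = np.asarray(times, float)
    eo = np.asarray(eo_records, float)          # shape (n, 6)
    i = int(np.clip(np.searchsorted(times, t), 1, len(times) - 1))
    w = (t - times[i - 1]) / (times[i] - times[i - 1])
    return (1.0 - w) * eo[i - 1] + w * eo[i]
```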

Solving the Typical Problems of Planetary Pushbroom Images

Summing Mode
Summing mode images are generated by averaging blocks of pixels into "macropixels"; typical summing factors are 2, 4, 8, etc. Most pushbroom sensor models in existing commercial photogrammetric software fail to support the summing mode. Under this circumstance, one alternative is to enlarge the summing mode images to the full image size before processing. In the proposed generic pushbroom sensor model, the summing mode has been fully taken into account. Specifically, the summing mode information is written into the orientation data file. For example, if the summing factor is 2, we output a key-value pair "summingmode = 2" in the comment fields of the orientation data file. Thus, when the orientation data file is imported, the summing mode information is interpreted by the proposed generic pushbroom sensor model. The pixel coordinate transformation from the summing mode image to the raw detectors of the linear array can be written as

s_r = summing_y · (s_s − start_s_s) + start_s_r
l_r = summing_x · (l_s − start_l_s) + start_l_r

where (s_r, l_r) indicate the pixel coordinates on the raw detectors of the linear array, (s_s, l_s) indicate the pixel coordinates on the summing mode image, summing_x and summing_y indicate the summing factors in the along-track and across-track directions respectively, (start_s_s, start_l_s) indicate the starting sample and starting line of the summing mode image respectively, and (start_s_r, start_l_r) indicate the starting sample and starting line of the raw detectors of the linear array respectively. The inverse transformation, namely, the transformation from the raw detectors of the linear array to the summing mode image, can be written as

s_s = (s_r − start_s_r) / summing_y + start_s_s
l_s = (l_r − start_l_r) / summing_x + start_l_s
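The macropixel coordinate transformations can be sketched as below, assuming a purely linear mapping with configurable starting offsets; some real cameras additionally apply a half-pixel centering term, which is omitted here for clarity.

```python
def summed_to_raw(s_s, l_s, summing_s, summing_l,
                  start_summed=(0, 0), start_raw=(0, 0)):
    """Map (sample, line) on a summing-mode image to raw detector
    coordinates; summing_s/summing_l are the summing factors in the
    sample and line directions."""
    s_r = summing_s * (s_s - start_summed[0]) + start_raw[0]
    l_r = summing_l * (l_s - start_summed[1]) + start_raw[1]
    return s_r, l_r

def raw_to_summed(s_r, l_r, summing_s, summing_l,
                  start_summed=(0, 0), start_raw=(0, 0)):
    """Inverse mapping: raw detector coordinates -> summing-mode image."""
    s_s = (s_r - start_raw[0]) / summing_s + start_summed[0]
    l_s = (l_r - start_raw[1]) / summing_l + start_summed[1]
    return s_s, l_s
```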

Varying Exposure Times
Varying exposure times of the scan lines are not as common as the summing mode in planetary images, but they are typical of MEX HRSC images. To accommodate changes in altitude and velocity in the highly elliptical orbit of the MEX spacecraft, HRSC changes the exposure time every few hundred scan lines within an image. It is noted that the latest versions of several open-source (e.g., USGS ISIS) and commercial photogrammetric software packages (e.g., BAE Systems SOCET GXP) are able to support varying exposure times (Hare et al., 2019). However, many photogrammetric software packages only support pushbroom images acquired with a constant exposure time (Kirk et al., 2017). To solve this problem, the exposure time of each scan line must be determined accurately prior to the photogrammetric processing. First, we divide the scan lines of an HRSC image into multiple small segments, where each segment has a constant line exposure duration (LED); note that the LED varies between segments. Then, given a scan line l, the exact segment S that contains scan line l is determined by traversal. Next, the exposure time of scan line l can be calculated by

ET = startLineET + (l − startLine) × LED

where ET indicates the exposure time, startLine indicates the start line of segment S, and startLineET indicates the exposure time of the startLine. With the exact exposure time of scan line l, the corresponding EO parameters can be calculated using the SPICE kernels. Once the generic pushbroom sensor model is constructed, varying exposure times are inconsequential in photogrammetric operations, since we can directly use the EO parameters of neighboring scan lines to interpolate the EO parameters of any scan line with subpixel accuracy.
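The segment-wise exposure time computation can be sketched as follows, with each segment represented as a (start line, start-line exposure time, LED) tuple; this data layout is illustrative, not the actual HRSC metadata format.

```python
def scanline_exposure_time(line, segments):
    """Compute the exposure time of a scan line for an image whose line
    exposure duration (LED) changes between segments.
    'segments' is a list of (start_line, start_line_et, led) tuples
    sorted by start_line."""
    # Walk the segments from last to first to find the one containing 'line'
    for start_line, start_et, led in reversed(segments):
        if line >= start_line:
            return start_et + (line - start_line) * led
    raise ValueError("scan line precedes the first segment")
```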

Image Distortions
The coordinate transformation in image space involves pixel coordinates, distorted image coordinates and undistorted image coordinates, as shown in Figure 3(a). It is well known that photogrammetric operations (e.g., bundle adjustment) must use the undistorted image coordinates to achieve the best geopositioning accuracy. For ideal distortion-free images, the coordinate transformation between pixel coordinates and undistorted image coordinates can be performed directly with an affine transformation. In reality, all cameras exhibit varying degrees of image distortion. Consequently, the coordinate transformation between pixel coordinates and undistorted image coordinates becomes relatively complicated and often requires the intermediate distorted image coordinates.
For the ISIS pushbroom sensor model, the transformation between the pixel coordinates and the distorted image coordinates can be performed with a simple affine transformation, and then the optical distortion equations are used to compute the undistorted image coordinates from the distorted image coordinates. Figure 3(b) describes the image space coordinate transformation procedures for the ISIS pushbroom sensor model. Here, we use the LRO NAC images to introduce the detailed coordinate transformation steps. As listed in Table 2, the LRO NACs have only a single radial distortion coefficient K_1, and the optical distortion equations can be expressed as (Speyerer et al., 2016)

r = y_d
y_c = y_d (1 + K_1 r^2)   (11)

where r indicates the radial distance from the principal point, and y_c indicates the y-component of the undistorted image coordinates. Note that the x-component of the distorted and undistorted image coordinates is always zero for the LRO NACs (but may not be zero for other cameras); thus, the optical distortion equations with respect to the x-component are not presented in Equation 11. It should be noted that the coordinate transformation from undistorted image coordinates to distorted image coordinates requires iterative calculation. Fortunately, such a coordinate transformation is well solved in photogrammetry; one can refer to the ISIS source code for specific implementation details (Edmundson et al., 2012). However, different imaging instrument teams adopt different camera calibration methods and derive different optical distortion equations. Consequently, the ISIS pushbroom sensor model needs to interpret the optical distortion equations and implement the corresponding image space coordinate transformation algorithms (e.g., pixel-to-image and image-to-pixel) for each camera. Obviously, this lacks generality.
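A one-term radial model of this kind can be sketched as below: removing the distortion is closed-form, while the reverse direction is solved by fixed-point iteration. The exact functional form and sign conventions for a given camera should be taken from its instrument kernel documentation; this is an illustration, not the authoritative LRO NAC model.

```python
def undistort_y(y_d, k1):
    """Closed-form distorted -> undistorted transformation for a
    one-term radial model with r = y_d."""
    r = y_d
    return y_d * (1.0 + k1 * r * r)

def distort_y(y_c, k1, tol=1e-12, max_iter=50):
    """Undistorted -> distorted transformation; no closed form exists,
    so solve y_d = y_c / (1 + k1 * y_d^2) by fixed-point iteration."""
    y_d = y_c
    for _ in range(max_iter):
        y_new = y_c / (1.0 + k1 * y_d * y_d)
        if abs(y_new - y_d) < tol:
            break
        y_d = y_new
    return y_d
```

For small distortion coefficients the iteration converges in a handful of steps, which is why this direction is tolerable even inside tight ground-to-image loops.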
Unlike the ISIS pushbroom sensor model, the proposed generic pushbroom sensor model directly performs the transformation between pixel coordinates and the calibrated image coordinates (i.e., the undistorted image coordinates), as shown in Figure 3(c). Specifically, the pixel-to-image transformation is performed by interpolation using the calibrated image coordinates of the two neighboring CCD detectors, and the image-to-pixel transformation is performed by binary search using an ordered table.

File Formats Used in the Generic Pushbroom Sensor Model
We use a camera file and an orientation data file to manage the IO and EO parameters of planetary images, respectively. For convenience, the file formats designed for the airborne linear array camera ADS40 are used to derive the camera file and the orientation data file for planetary images. Each planetary image has a corresponding orientation data file. In contrast, the camera file for a specific imaging instrument needs to be generated only once and can then be reused for other images. Prior to conducting the photogrammetric processing procedures, the proposed generic pushbroom sensor model should first correctly interpret the camera file and the orientation data file. Various coordinate transformation steps are then based on these two files. In this way, the proposed generic pushbroom sensor model provides a general method for the photogrammetric processing of planetary pushbroom images and can be applied to different linear array cameras and planetary bodies.

Camera File
As listed in Table 3, the camera file contains the focal length, pixel size and number of pixels, as well as a look-up table with the calibrated image coordinates of each pixel on the CCD linear array. We also store the basic information of the linear array detectors (e.g., the camera name and the sensor line name) in the camera file. This helps to identify the observed target (e.g., Moon or Mars). In order to generate the camera file, it is necessary to strictly follow the image coordinates definition provided in the SPICE IK kernel file. Note that a camera file relates to an individual linear array detector rather than to the camera as a whole. Specifically, in the case of MEX HRSC, there are nine camera files in total for the nine linear array detectors. Similarly, in the case of LRO NACs, there are two camera files, for NAC-LEFT and NAC-RIGHT, respectively.
The camera file can be used to perform image space coordinate transformation. More specifically, the undistorted image coordinates p(x_c, y_c) with respect to the pixel coordinates p(s,l) are interpolated using the known calibrated image coordinates of two neighboring CCD detectors. Given the undistorted image coordinates p(x_c, y_c), the corresponding pixel coordinates p(s,l) are determined by binary search using an ordered table, as shown in Figure 3 (c). This provides a generalized image space coordinate transformation approach, without considering the detailed optical distortion equations of each camera. It should be noted that the default instrument frame definition in SPICE and ISIS is different from the conventional photogrammetric frame definition in the field of Earth observation. In ISIS and SPICE kernels, the Z-axis of the instrument frame always points toward the planetary surface, which is consistent with the coordinate frame definition in computer vision, whereas in the conventional photogrammetric frame, the Z-axis points upward. Thus, we apply a simple rotation transformation to make the instrument frame consistent with the conventional photogrammetric frame definition. This enables the proposed generic pushbroom sensor model to be interpreted by more open-source and commercial photogrammetric software in the field of Earth observation. Obviously, the instrument frame conversion also affects the values of image coordinates. Consequently, the image coordinates listed in Table 3 may be different from the values measured in the ISIS qview program (e.g., the exchange of x and y coordinates).

Note (Table 3): The units of the focal length, pixel size and image coordinates are millimeters. In image coordinates, the first term is the x-coordinate and the second term is the y-coordinate.

10.1029/2019EA001014, Earth and Space Science
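The rotation between the Z-down instrument frame and the Z-up photogrammetric frame can be sketched as a fixed 3 × 3 rotation applied to image-space vectors. The 180° rotation about the x-axis below is one simple choice that flips the Z-axis; the actual rotation (including any exchange of the x and y axes) depends on the instrument frame definition in the IK kernel, so this is illustrative only.

```python
# A 180-degree rotation about the x-axis: flips the Y- and Z-axes, turning a
# Z-down (computer-vision style) frame into a Z-up photogrammetric frame.
ROT_X_180 = (
    (1.0, 0.0, 0.0),
    (0.0, -1.0, 0.0),
    (0.0, 0.0, -1.0),
)

def apply_rotation(rot, v):
    """Multiply a 3x3 rotation (tuple of rows) by a 3-vector."""
    return tuple(sum(rot[i][j] * v[j] for j in range(3)) for i in range(3))
```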

Orientation Data File
For the ISIS pushbroom sensor model, the EO parameters are stored together with the planetary image data, such that the EO parameters refined by bundle adjustment directly overwrite the initial EO parameters. This causes some difficulties in practical photogrammetric processing procedures, because the bundle adjustment procedure always requires several iterations. Thus, it is better to keep both the initial and the refined EO parameters. Taking this into account, we store the EO parameters in an orientation data file that is separate from the planetary image data. Additionally, the proposed generic pushbroom sensor model stores the EO parameters of each scan line. Consequently, the EO parameters at any exposure time can be computed efficiently by linear interpolation.
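Because the orientation data file holds per-scan-line EO parameters, evaluating the EO parameters at an arbitrary exposure time reduces to locating the enclosing segment and blending linearly. A minimal sketch (the function name and the 6-tuple parameter layout are our own, hypothetical choices):

```python
from bisect import bisect_right

def interpolate_eo(times, eo_params, t):
    """Linearly interpolate per-scan-line EO parameters at exposure time t.

    times     : sorted exposure times, one per scan line
    eo_params : one 6-tuple (X, Y, Z, omega, phi, kappa) per scan line
    """
    i = bisect_right(times, t) - 1
    i = max(0, min(i, len(times) - 2))   # clamp to a valid segment
    w = (t - times[i]) / (times[i + 1] - times[i])
    a, b = eo_params[i], eo_params[i + 1]
    return tuple(aj + w * (bj - aj) for aj, bj in zip(a, b))
```

Linearly interpolating the three rotation angles assumes they vary slowly between adjacent scan lines, which holds for the short line intervals of pushbroom imaging.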
The orientation data file is developed based on the file format designed for airborne ADS40 images. Different from the camera file in text format, the orientation data file adopts the binary file format to reduce the file size. Moreover, the ADS40 orientation data file uses a data compression strategy that expresses floating-point numbers as integers (Tempelmann et al., 2000), which further saves storage space. This data compression strategy is also adopted in the proposed generic pushbroom sensor model. Consequently, the data volume of the orientation data file for a typical LRO NAC image (containing 52224 scan lines) is only approximately 2 MB. It should be noted that this data compression strategy introduces a very slight loss of accuracy (usually at the millimeter level or less), which can be ignored in practical applications. In addition, the ADS40 orientation data file supports storing the uncertainties of each scan line. It is noteworthy that ISIS has implemented the conversion from the ISIS pushbroom sensor model to the SOCET SET pushbroom sensor model, which facilitates our software development (Edmundson et al., 2012). Compared with storing the EO parameters inside the image data as ISIS does, the ADS40 orientation data file thus offers advantages in file size, in keeping the initial and refined EO parameters separate, and in efficient per-line interpolation.
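The float-to-integer compression idea can be illustrated with a simple scaled-integer encoding. This is only a sketch of the principle; the actual ADS40 encoding (Tempelmann et al., 2000) is more elaborate, and the resolution chosen here is purely illustrative.

```python
SCALE = 1000.0   # illustrative: millimeter resolution for values in meters

def compress(value):
    """Encode a float as an integer at 1/SCALE resolution."""
    return round(value * SCALE)

def decompress(stored):
    """Decode the integer back to a float; the error is at most 0.5 / SCALE."""
    return stored / SCALE
```

At millimeter resolution the quantization error stays below the accuracy of the EO parameters themselves, which is why the loss is negligible in practice.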
The generation of the orientation data file for planetary pushbroom images requires some auxiliary files, as shown in Table 4. These auxiliary files can be acquired using the ISIS tabledump program. Specifically, the spacecraft trajectory and camera attitude data provided in the instrument position and pointing files are in the J2000 coordinate frame. Thus, the body rotation and sun position files are required to conduct the transformation from the J2000 coordinate frame to the planetary body fixed coordinate frame. The file of line scan times is especially useful for planetary images having varying exposure times (e.g., MEX HRSC). It provides the accurate exposure time information of scan lines, which is needed for calculating the EO parameters.

Various Coordinate Transformations Based on the Generic Pushbroom Sensor Model
As previously discussed, the sensor model is mainly used to conduct coordinate transformation between 3D ground coordinates and 2D image coordinates, which is the true heart of photogrammetry. Figure 4 illustrates the various coordinate transformation steps in the photogrammetric processing procedures.
Indeed, the pixel-to-ground and ground-to-pixel transformation steps can be accomplished based on other coordinate transformation steps. Specifically, to conduct the pixel-to-ground transformation, we can first perform the pixel-to-image transformation, and then perform the image-to-ground transformation. Similarly, the ground-to-pixel transformation can be implemented by using the ground-to-image transformation and image-to-pixel transformation. It should be noted that in Figure 4, we use 3D geodetic coordinates P(φ, λ, H) instead of space rectangular coordinates P(X,Y,Z) to express the ground coordinates in the coordinate transformation steps involved. This is mainly for ease of understanding. It is obvious that the photogrammetric operations need to be based on space rectangular coordinates (i.e., 3D Cartesian coordinates). As shown in Figure 4 (e), the 3D geodetic coordinates P(φ, λ, H) and space rectangular coordinates P(X,Y,Z) indicate the same ground point, and the coordinate transformation between these two object coordinates has been well solved in photogrammetry.
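The transformation between 3D geodetic coordinates P(φ, λ, H) and space rectangular coordinates P(X,Y,Z) has a closed form; for the spherical reference bodies used in this paper it can be sketched as follows (function names are ours; a biaxial ellipsoid would instead require the standard geodetic formulas):

```python
import math

def geodetic_to_cartesian(lat_deg, lon_deg, h, radius):
    """(lat, lon, height) on a spherical body -> body-fixed (X, Y, Z)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = radius + h
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def cartesian_to_geodetic(x, y, z, radius):
    """Body-fixed (X, Y, Z) -> (lat, lon, height) on a spherical body."""
    r = math.sqrt(x * x + y * y + z * z)
    return (math.degrees(math.asin(z / r)),
            math.degrees(math.atan2(y, x)),
            r - radius)
```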

Pixel-to-Image Transformation
The pixel coordinates p(s,l) of a certain point on the raw image can be measured by its scan line l and sample s (Granshaw, 2016). As presented in Figure 4 (d), in the case of the image coordinates system, the x-axis is usually defined parallel to the flight direction, and the y-axis is perpendicular to the x-axis; together they constitute a right-handed system. It should be emphasized that there are different ways to define the image coordinates system according to the camera settings. The pixel-to-image transformation computes the calibrated image coordinates p(x_c, y_c) of an image point from the corresponding pixel coordinates p(s,l). It is often used in bundle adjustment or stereo reconstruction. For example, the matched tie points are in the pixel coordinates system by default, whereas the bundle adjustment based on the collinearity equation needs to use the image coordinates as observation values, such that the pixel-to-image transformation is required.
As mentioned previously, the camera file stores the calibrated image coordinates of each pixel as a look-up table. Therefore, given a pixel p(s,l), the corresponding calibrated image coordinates p(x_c, y_c) can be computed through linear interpolation (Kocaman & Gruen, 2008). Let k and k+1 denote the two neighboring pixels relative to the pixel p(s,l). The calibrated image coordinates of these two neighboring pixels are (x_k, y_k) and (x_{k+1}, y_{k+1}), respectively. Then, the pixel-to-image transformation can be performed by the following equations

x_c = x_k + d (x_{k+1} − x_k),  y_c = y_k + d (y_{k+1} − y_k)

where k and d indicate the integer and decimal parts of the pixel coordinate s, respectively, and d = s − k. In the above pixel-to-image transformation formulas, the pixel coordinate l is not involved. This is a characteristic of linear pushbroom images because the image coordinates system should be defined in each scan line instead of the entire image space.
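The interpolation above can be sketched directly from the camera file look-up table (the list layout and function name are hypothetical):

```python
def pixel_to_image(sample, cal_x, cal_y):
    """Interpolate the calibrated image coordinates (x_c, y_c) of a pixel.

    cal_x, cal_y : per-detector calibrated image coordinates from the camera
                   file look-up table, indexed by integer pixel position.
    """
    k = int(sample)      # integer part of s
    d = sample - k       # decimal part of s, d = s - k
    x_c = cal_x[k] + d * (cal_x[k + 1] - cal_x[k])
    y_c = cal_y[k] + d * (cal_y[k + 1] - cal_y[k])
    return x_c, y_c
```

The scan line l does not appear, matching the fact that the image coordinates system is defined per scan line.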

Image-to-Pixel Transformation
As mentioned previously, we directly perform the coordinate transformation between pixel coordinates and calibrated image coordinates without using the intermediate distorted image coordinates. This ensures the generality of the proposed generic pushbroom sensor model. As shown in Figure 3 (a), the calibrated image coordinates of all the CCD detectors always show a curved line shape owing to optical distortions. Consequently, given the calibrated image coordinates p(x_c, y_c), we cannot directly compute the corresponding pixel coordinates p(s,l). Indeed, the image-to-pixel transformation requires iterative calculations. However, such an image-to-pixel transformation algorithm is not new; it has been used for airborne ADS40 images for many years. One can also refer to the open-source photogrammetric software Dirk's General Analytical Plotter (DGAP) for more implementation details (DGAP, 2019). For completeness, we present the basic calculation steps here. The image-to-pixel transformation can be divided into two steps: (1) the localization of the pixel coordinate s with integer accuracy; and (2) the subpixel refinement of the pixel coordinate s using linear interpolation. Note that even though the CCD linear array detectors contain optical distortions and show a curved line shape, the curve is generally very close to a straight line. Assume that we use the 2D coordinates system defined in Figure 3. As the pixel coordinate s increases from 1 to N, the image coordinate y decreases accordingly. Therefore, the calibrated image coordinates of all the CCD detectors form an ordered table. Thus, the localization of the pixel coordinate s with integer accuracy becomes a classical numerical analysis problem, and there are mature algorithms to solve it (Press et al., 1993).
Then, the subpixel refinement can be performed using the following linear interpolation equation

s = k + (y_k − y_c) / (y_k − y_{k+1})

where k and k+1 indicate the two neighboring pixels relative to the pixel coordinate s, k indicates the integer part of the pixel coordinate s, and y_k and y_{k+1} indicate the y-coordinates of the two neighboring pixels.
It should be noted that although the image-to-pixel transformation requires several iterations, it always shows very high computational efficiency because all calculations in the iterative process are pure numerical computations. Similar to the pixel-to-image transformation, the pixel coordinate l is also not involved in the image-to-pixel transformation. Usually, the image-to-pixel transformation follows the ground-to-image transformation, which can be used to perform orthorectification with the indirect method. Thus, the pixel coordinate l is determined in the ground-to-image transformation step, such that it is unchanged in the subsequent image-to-pixel transformation step.
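The two steps (integer localization by binary search on the ordered table, then subpixel refinement by linear interpolation) can be sketched as follows, assuming the y-coordinates decrease monotonically with the sample index as described above (the function name is ours):

```python
def image_to_pixel(y_c, cal_y):
    """Find the fractional sample s whose calibrated y-coordinate equals y_c.

    cal_y : per-detector calibrated y-coordinates, assumed to decrease
            monotonically with the sample index (an ordered table).
    """
    # Step 1: binary search for k such that cal_y[k] >= y_c >= cal_y[k + 1].
    lo, hi = 0, len(cal_y) - 2
    while lo < hi:
        mid = (lo + hi) // 2
        if cal_y[mid + 1] > y_c:
            lo = mid + 1
        else:
            hi = mid
    k = lo
    # Step 2: subpixel refinement between detectors k and k + 1.
    return k + (cal_y[k] - y_c) / (cal_y[k] - cal_y[k + 1])
```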

Image-to-Ground Transformation
The image-to-ground transformation computes the ground coordinates P(X,Y,Z) from the image coordinates p(x,y). This transformation can be used for orthorectification by the direct method. Given an image point p(x,y), the corresponding ground coordinates P(X,Y,Z) can be computed by the intersection of the light ray and an extended ellipsoid, as shown in Figure 5 (Pleiades, 2019). Assume that the ellipsoid height of the ground point to be determined is h; an extended ellipsoid can thus be formed. As seen, the light ray from the perspective center S intersects the extended ellipsoid at two intersection points. This means that the ground coordinates P(X,Y,Z) of the intersection point satisfy the following ellipsoid equation

X² / (A + h)² + Y² / (A + h)² + Z² / (B + h)² = 1

where A and B indicate the equatorial and polar radii of the reference ellipsoid, respectively.
Recall Equation 2: the unknowns X, Y, Z can be expressed as functions of the scale factor λ. Thus, by replacing the unknowns X, Y, Z with these functions of λ, a quadratic equation with respect to λ is formed, and its two solutions determine two intersection points, namely, P and P′ (see Figure 5). Obviously, the intersection point P is closer to the perspective center S, and it is the ground point to be determined. In practice, we do not know the exact value of the ellipsoid height h of the ground point beforehand, so an iterative method based on a DTM is needed.
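The quadratic-in-λ solution can be sketched as follows, writing the light ray as P = S + λu with direction u, and denoting by a and b the extended semi-axes (i.e., A + h and B + h). The smaller root selects the intersection point P nearer the perspective center (the symbols and function name are ours):

```python
import math

def intersect_ray_ellipsoid(S, u, a, b):
    """Intersect the ray P = S + lam * u with the extended ellipsoid
    x^2/a^2 + y^2/a^2 + z^2/b^2 = 1; return the nearer intersection point."""
    # Substituting the ray into the ellipsoid gives A2*lam^2 + B2*lam + C2 = 0.
    A2 = (u[0] ** 2 + u[1] ** 2) / a ** 2 + u[2] ** 2 / b ** 2
    B2 = 2.0 * ((S[0] * u[0] + S[1] * u[1]) / a ** 2 + S[2] * u[2] / b ** 2)
    C2 = (S[0] ** 2 + S[1] ** 2) / a ** 2 + S[2] ** 2 / b ** 2 - 1.0
    disc = B2 * B2 - 4.0 * A2 * C2
    if disc < 0.0:
        return None                               # the ray misses the ellipsoid
    lam = (-B2 - math.sqrt(disc)) / (2.0 * A2)    # smaller root -> nearer point P
    return tuple(S[i] + lam * u[i] for i in range(3))
```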

Ground-to-Image Transformation
Considering all the coordinate transformation steps, the ground-to-image transformation is the most complex one. As shown in Figure 6, the EO parameters of each image line vary with time. Thus, to perform the ground-to-image transformation, we should first determine the best scan line with respect to the ground point. This is indeed an iterative procedure, because we do not know which scan line is the best scan line when a ground point is given. Compared with ISIS, the proposed generic pushbroom sensor model adopts more efficient ground-to-image transformation algorithms.
Here, we distinguish between distorted and undistorted planetary pushbroom images. For planetary pushbroom images without distortions, the ground-to-image transformation algorithm is implemented based on the geometric constraints of the central perspective plane (CPP) of the scan line (Wang et al., 2009). For linear pushbroom images having image distortions, we adopt the methodology of generalized distance prediction (GDP) in image space. The CPP-based ground-to-image transformation algorithm uses the distance between the ground point and the projection plane to calculate the iterative increment of the scan line, which avoids or decreases the complex calculations of the collinearity equation. The CPP-based method is very suitable for linear pushbroom images without image distortions because the image coordinates of the CCD detectors form a straight line in the 2D focal plane, and the corresponding projection plane is a 3D plane. However, when the linear pushbroom images contain distortions, the projection plane becomes a 3D curved surface instead of a 3D plane, and it is relatively difficult to compute the distance between a ground point and a 3D curved surface. Under this circumstance, extra work (e.g., segmenting the linear array into multiple short line segments) is needed to use the CPP-based method. In contrast, the GDP-based method predicts the iterative increment of the scan line by using the generalized distance in image space, which accounts for the influence of image distortions in the algorithm design. Both CPP- and GDP-based ground-to-image transformation algorithms show higher computational efficiency than the conventional binary search method used in the ISIS pushbroom sensor model.
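The iterative structure shared by such ground-to-image algorithms can be sketched with a generic secant update on the scan line. Here project_x is a placeholder for whatever residual the chosen method computes (e.g., the CPP distance or the GDP generalized distance, both of which cross zero at the best scan line); the real CPP and GDP methods differ precisely in how they predict the increment, so this is only a structural sketch with hypothetical names.

```python
def find_best_line(project_x, l0, n_lines, tol=1e-6, max_iter=30):
    """Locate the scan line imaging a ground point by a secant iteration.

    project_x(l) : residual of the ground point when projected with the EO
                   parameters of scan line l; zero at the best scan line.
    l0           : initial guess of the scan line
    """
    l_prev, l_cur = l0, l0 + 1.0
    x_prev, x_cur = project_x(l_prev), project_x(l_cur)
    for _ in range(max_iter):
        if abs(x_cur - x_prev) < 1e-15:
            break                                       # residual has flattened out
        l_next = l_cur - x_cur * (l_cur - l_prev) / (x_cur - x_prev)
        l_next = min(max(l_next, 0.0), n_lines - 1.0)   # stay inside the image
        if abs(l_next - l_cur) < tol:
            return l_next
        l_prev, x_prev = l_cur, x_cur
        l_cur, x_cur = l_next, project_x(l_next)
    return l_cur
```

When the residual is nearly linear in the scan line (the usual case for smooth orbital motion), the iteration converges in two or three steps, which is the source of the efficiency gain over a plain binary search across all scan lines.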

Photogrammetric Processing Procedures Based on the Generic Pushbroom Sensor Model

Basic Processing Flow
Figure 7 presents the construction of the generic pushbroom sensor model as well as the overall photogrammetric processing flow. The special feature of this flow is that the tie points are automatically extracted based on approximate orthophotos, which depends on the high accuracy and efficiency of the proposed generic pushbroom sensor model.
The detailed calculation steps are as follows.
(1) The Planetary Data System (PDS) format raw images are input into ISIS using the import program designed for each imaging instrument, such as lronac2isis for LRO NAC images and hrsc2isis for MEX HRSC images, and then converted to ISIS cube images.
(2) The ISIS cube images are initialized using the ISIS spiceinit program, then the corresponding SPICE kernels are determined, and subsequently, the instrument position and pointing information are calculated. At this stage, the ISIS pushbroom sensor model is constructed.
(3) Each scan line's EO parameters and each CCD detector's calibrated image coordinates are obtained from the ISIS pushbroom sensor model. This step requires some auxiliary data files (e.g., exposure time of each scan line, instrument position and pointing, body rotation and sun position), which can be exported from ISIS cube images using the ISIS tabledump program.
(4) The orientation data file and camera file are generated based on the corresponding file formats designed for the airborne ADS40 sensor model.
(5) With the constructed generic pushbroom sensor model, approximate orthophotos are generated first using the initial orientation data files. Next, tie points are extracted on the approximate orthophotos, and then converted to the raw images for bundle adjustment using the ground-to-image transformation.
(6) The successful bundle adjustment solution will derive refined orientation data files, which are used to generate high-accuracy mapping products (e.g., orthophotos and DTM).

Automatic Tie Points' Extraction Based on Approximate Orthophotos
Reliable tie points are essential to successful bundle adjustment. The automatic extraction of tie points for pushbroom planetary images, especially for large-scale mapping work, is still a challenging task. Recently, researchers have tended to conduct image matching of planetary images on orthophotos or approximate orthophotos instead of the raw images (Beyer et al., 2018; Heipke et al., 2007; Speyerer et al., 2016; Tao et al., 2018). Image matching on orthophotos has advantages in removing image distortions and decreasing the search range for conjugate points, because the map-projected stereo images have the same GSD and already provide rough geopositioning accuracy. Thus, for two stereo orthophotos, the approximate position of the conjugate points can be directly predicted using the geographic coordinates of the orthophotos. This approach is particularly effective for planetary remote sensing images returned from recent lunar and Mars exploration missions (e.g., LRO and MEX), since the initial EO parameters of these images show good quality. This can greatly improve the accuracy and efficiency of image matching. The matched conjugate points on orthophotos can then be converted to conjugate points on raw images and used as tie points for bundle adjustment. Such a coordinate transformation can use the proposed generic pushbroom sensor model.
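The prediction step can be sketched from the georeferencing alone. Assuming each orthophoto is described by its upper-left map coordinates and GSD (a simplified, hypothetical layout), a point on the left orthophoto maps to a predicted position on the right one:

```python
def predict_conjugate(px, py, geo_left, geo_right):
    """Predict the conjugate position on the right orthophoto of a point
    (px, py) on the left one, using only the georeferencing.

    geo_* = (x_origin, y_origin, gsd): upper-left map coordinates and ground
    sample distance of each orthophoto (simplified, hypothetical layout).
    """
    x0l, y0l, gsd_l = geo_left
    x0r, y0r, gsd_r = geo_right
    # pixel -> map coordinates on the left orthophoto
    map_x = x0l + px * gsd_l
    map_y = y0l - py * gsd_l          # row index grows toward the south
    # map -> pixel coordinates on the right orthophoto
    return (map_x - x0r) / gsd_r, (y0r - map_y) / gsd_r
```

The residual offset after this prediction is bounded by the initial geopositioning error, which is why a small search window suffices.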

Test Datasets
The proposed generic pushbroom sensor model was implemented based on the open-source photogrammetric software DGAP. The software modules for orthophotos' generation and tie points' extraction were developed using Qt 5.2.1 on the Ubuntu 14.04 operating system installed in a virtual machine. ISIS 3.5.2 was used to perform preprocessing of the planetary images and for algorithm comparison. The hardware configuration was an Intel Core i7-7500U CPU at 2.70 GHz with 8 GB RAM. Although the virtual machine decreased the computational performance to some extent, we can still conduct a fair comparison because both ISIS and the developed software run in the same hardware environment. The radius of the reference sphere is 1737.4 km for lunar images and 3396.19 km for Martian images. The test images include LRO NAC and MEX HRSC images, as listed in Table 5. We first examine the geometric accuracy of the generic pushbroom sensor model, followed by the evaluation of the bundle adjustment results. It should be emphasized that the bundle adjustment test cannot fully demonstrate the accuracy of the sensor model, but it helps to evaluate the applications of the proposed generic pushbroom sensor model in the photogrammetric processing flow (e.g., tie points' extraction on orthophotos).

Geometric Accuracy of the Generic Pushbroom Sensor Model
We used five image points on one LRO NAC image (i.e., M109215691RE) to evaluate the coordinate transformations among the pixel coordinates p(s,l), image coordinates p(x,y) and geodetic coordinates P(φ, λ, H). For each image point, the ISIS qview program presents the corresponding image coordinates and geodetic coordinates using the measured pixel coordinates. These coordinates are listed in the 'ISIS' row of Table 6 and can be considered true values. Meanwhile, with the measured pixel coordinates, the corresponding image coordinates and geodetic coordinates can be computed using the proposed generic pushbroom sensor model (i.e., pixel-to-image transformation and image-to-ground transformation). In addition, using the calculated geodetic coordinates, the back-projected pixel coordinates can be computed by first applying the ground-to-image transformation and then the image-to-pixel transformation. These coordinates are listed in the 'Ours' row of Table 6. In the case of the image-to-ground transformation, the ground coordinates are calculated by projecting the measured image coordinates onto a reference DTM (see section 2.5.3 for details); we used the same reference DTM as ISIS. The differences between the calculated coordinates and the known ones are shown in the 'Residuals' row of Table 6.

Bundle Adjustment Results
The bundle adjustment tests were conducted using the LRO NAC and MEX HRSC images listed in Table 5. For comparison, we conducted bundle adjustment tests using tie points generated from the ISIS pointreg program and using the methodology of extracting tie points on orthophotos. Unfortunately, we failed to extract enough tie points using the ISIS pointreg program for MEX HRSC images. This does not mean that ISIS is unable to support MEX HRSC images. The failure is likely caused by the large image differences due to the different imaging angles of the forward-looking and backward-looking channels of HRSC, such that it is very difficult for ISIS to match enough tie points to conduct a successful bundle adjustment. However, through orthorectification, the image distortions in the raw images can be greatly removed, and it is easier to conduct the image matching in orthophoto space. The tie points matched with the methodology of extracting tie points on orthophotos were converted to a parameter value language (PVL) format control network file that can be interpreted by ISIS. The bundle adjustment was conducted with the ISIS jigsaw program. Additionally, in the ISIS software package the term "control measure" is always used to express a tie point, such that we will use both terms in this paper.

Note (Table 5): GSD refers to ground sample distance. For MEX HRSC images, nd2 refers to the Level 2 products of the downward-looking panchromatic channel; s12 and s22 refer to the Level 2 products of the two side-looking panchromatic channels. The side-looking images (i.e., s12 and s22) of the hd788 and hd795 observations were acquired with summing mode, such that the corresponding image width is 2584 pixels.
We used the ISIS autoseed program to generate initial candidate control measures. Since not all candidate control measures can be matched, a dense grid of tie points is used for autoseed. Specifically, the spacing in the X and Y directions is 250 m for the LRO NAC test and 5000 m for the MEX HRSC test. Consequently, the numbers of candidate control measures for the LRO NAC and MEX HRSC tests are 3587 and 9096, respectively. In the case of our method, the NCC and pyramid matching algorithms were used. The matching window size for both our method and ISIS was 15 × 15. The search window size was 31 × 31 for our method and 301 × 301 for ISIS. The use of geopositioning information allows our method to use a smaller search range to determine the conjugate points. It is well known that the matched conjugate points inevitably contain some outliers, which should be removed beforehand or during the bundle adjustment procedure; we removed the control measures with image coordinates' residuals larger than one pixel. In the bundle adjustment conducted by the ISIS jigsaw program, we only corrected the constant term errors in the camera angles and spacecraft positions. Figures 8 and 9 present the geopositioning accuracy of tie points derived from the bundle adjustment results. For visual effects, the valid control measures for the MEX HRSC and LRO NAC tests are presented in Figure 10 and Figure 11, respectively. These figures were drawn using the ISIS qnet program and are shown in their original appearance. Table 7 lists the number of valid control measures on each image and the corresponding image coordinates' residuals derived from the bundle adjustment results. Table 8 lists the total number of candidate control measures and the corresponding processing time in the tie points' extraction procedure.
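The NCC-based matching over a small search window can be sketched in pure Python. The window sizes, image layout and function names here are illustrative (the actual implementation also uses pyramid matching):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches (lists of rows)."""
    va = [v for row in a for v in row]
    vb = [v for row in b for v in row]
    ma, mb = sum(va) / len(va), sum(vb) / len(vb)
    da = [v - ma for v in va]
    db = [v - mb for v in vb]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def patch(img, r, c, half):
    """Extract a (2*half+1)-square patch centered at (r, c)."""
    return [row[c - half:c + half + 1] for row in img[r - half:r + half + 1]]

def match(tpl_img, tr, tc, search_img, pr, pc, half, radius):
    """Find the position within +-radius of the predicted point (pr, pc) in
    search_img that maximizes NCC with the template patch around (tr, tc)."""
    tpl = patch(tpl_img, tr, tc, half)
    best = (-2.0, pr, pc)
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            score = ncc(tpl, patch(search_img, pr + dr, pc + dc, half))
            best = max(best, (score, pr + dr, pc + dc))
    return best
```

A good prediction keeps the radius small, which both speeds up the search and reduces the chance of locking onto a false peak.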

Discussion
As shown in Table 6, the calculated image coordinates derived from the proposed generic pushbroom sensor model were exactly the same as the true values. For the calculated geodetic coordinates, there were systematic errors of approximately 8.5 × 10⁻⁶ degrees in the latitude direction, which is about one pixel on the ground scale. Experience suggests that this systematic error was mainly introduced by the different interpretations of the first scan line and the different interpolation methods used to calculate the EO parameters; it was not caused by the proposed generic pushbroom sensor model. The residuals in the longitude direction were only a few centimeters (about one tenth of a pixel on the ground scale), which can be ignored compared with the image GSD. The radius direction shows residuals of approximately tens of centimeters, mainly caused by the different interpolation methods used at the height value calculation step. Note that the reference DTM for LRO NAC images is the default global lunar DTM in ISIS, which has a low resolution of 236 m/pixel; thus, different interpolation methods will yield different height values. From the "back-projection" columns in Table 6, it can be observed that the back-projected pixel coordinates were almost the same as the measured pixel coordinates: the residuals were less than 9.4 × 10⁻⁶ pixels in the line direction and less than 4.6 × 10⁻⁸ pixels in the sample direction. This demonstrates that, although there are small systematic errors in the latitude direction, the proposed generic pushbroom sensor model shows very high internal coincidence accuracy. Therefore, the correctness of the proposed generic pushbroom sensor model is well verified.
In the case of LRO NAC images, as shown in Table 7 and Table 8, the image coordinates' residuals and Sigma0 value of our method were almost the same as those of ISIS. However, our method was clearly more efficient in the tie points' extraction procedure. In the case of MEX HRSC images, the methodology of extracting tie points on orthophotos can obtain enough high-accuracy control measures within several minutes (see Figure 10). In contrast, the ISIS pointreg program did not match enough control measures to conduct a successful bundle adjustment for MEX HRSC images. Overall, pointreg is well designed; its main drawback is that it conducts image matching in the original image domain, without making use of the geometric information of stereo pairs to predict the starting position of the conjugate points. Therefore, the search space for pointreg has to be relatively large compared with our method, and an excessively large search space may lead to false conjugate points or failure of image matching. In contrast, the methodology of extracting tie points on orthophotos conducts image matching in the orthophoto domain, which makes full use of the available geometric information of the images. Thus, the effectiveness of extracting tie points on approximate orthophotos has been preliminarily verified. Furthermore, the methodology of extracting tie points on orthophotos also depends on the proposed generic pushbroom sensor model to accurately convert pixel coordinates from orthophotos to raw images.
For both the LRO NAC and MEX HRSC tests, most of the image coordinates' residuals derived from the bundle adjustment results were less than half a pixel in both sample and line directions (see Table 7). It is also observed that the sample residuals of the MEX HRSC test were slightly larger than those of the LRO NAC test. This can be explained by the lower quality of the SPICE kernels corresponding to the MEX HRSC images. More specifically, the instrument pointing kernels of MEX HRSC images only have a predicted level of accuracy, whereas the instrument pointing kernels of LRO NAC images always show a higher, reconstructed level of accuracy. As listed in Table 8, the Sigma0 values of both the LRO NAC and MEX HRSC tests were less than 0.5, indicating satisfactory bundle adjustment results. As shown in Figures 8 and 9, the geopositioning accuracy of the tie points also indicates successful bundle adjustment solutions. Here, we use 0.5 m as the average GSD of LRO NAC images. Thus, in the case of the LRO NAC test, the geopositioning accuracy in the latitude and longitude directions was approximately one pixel for our method and slightly larger than one pixel for ISIS (see Figure 8), whereas in the radius direction, our method was slightly better than ISIS's method. The radius accuracy of the tie points was approximately 1.5 m (~3 pixels) for our method and 2.0 m (~4 pixels) for ISIS. For the MEX HRSC test, the geopositioning accuracy result was only derived from our method. As shown in Table 5, the GSD of MEX HRSC images varies between 21.4 and 38.3 m; we use 30 m as the average GSD. Figure 9 shows that most of the tie points' geopositioning accuracy was better than 30 m in the latitude and longitude directions, which is approximately one pixel relative to the average GSD.

Figure 11. Valid control measures used for bundle adjustment of LRO NAC images. The areas not covered by the control measures are mainly due to no overlap or very poor texture.
The geopositioning accuracy in the radius direction was slightly lower than that in the latitude and longitude directions for the MEX HRSC test. Overall, both LRO NAC and MEX HRSC bundle adjustment tests delivered satisfying geopositioning accuracy.
It should be noted that low residuals and Sigma0 indicate a good bundle adjustment result that distributes the remaining errors well, but they do not prove the high accuracy of a sensor model. Indeed, the bundle adjustment test is mainly used to verify the feasibility of extracting tie points on orthophotos, which involves various coordinate transformation operations conducted with the proposed generic pushbroom sensor model. Specifically, we first use the indirect method to generate orthophotos, which requires a large number of ground-to-image transformation operations. Then, the tie points matched on orthophotos are converted to the raw images, and this coordinate transformation process is a ground-to-image transformation operation as well. Therefore, the proposed generic pushbroom sensor model plays a significant role in the methodology of extracting tie points on orthophotos. In terms of efficiency, the proposed generic pushbroom sensor model shows high computational efficiency owing to its fast ground-to-image transformation, which gives the alternative photogrammetric processing flow (i.e., extracting tie points on orthophotos) practical application value. Finally, it should be pointed out that the number of test images is small, and we did not test images with high incidence or emission angles. The generic pushbroom sensor model will take more computation time to conduct the ground-to-image transformation for planetary images acquired with poor imaging geometry.

Conclusions
Raw planetary images require rigorous photogrammetric processing to produce valuable cartographic products. The sensor model is thus the basis of planetary photogrammetry and is used in almost every step of the photogrammetric processing procedures (e.g., bundle adjustment, stereo reconstruction, orthorectification and DTM generation). The main shortcomings of existing pushbroom sensor models used in the planetary mapping community are low computational efficiency and a lack of generality, which hinders the fast generation of planetary cartographic products as well as the fusion processing of multi-source planetary images. Therefore, this paper presents a generic pushbroom sensor model for planetary photogrammetry. On the basis of fully considering the characteristics of raw planetary pushbroom images, the proposed generic pushbroom sensor model provides an abstraction mechanism for the various coordinate transformation steps. Thus, there is no need to consider the specific imaging instrument and the corresponding optical distortion models in the photogrammetric operations. In terms of managing the IO and EO data, the camera file and orientation data file designed for the airborne ADS40 sensor model are adopted and extended to support planetary images. By applying efficient ground-to-image transformation algorithms, the photogrammetric processing of planetary pushbroom images can be conducted more efficiently. In summary, the proposed generic pushbroom sensor model can process various types of planetary pushbroom images in a unified and efficient way, which decreases the software development and maintenance burden. In addition, the methodology of extracting tie points on orthophotos can greatly improve the computational efficiency and robustness of image matching, which is especially suitable for planetary images without man-made objects.
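The abstraction mechanism described above can be illustrated with a small interface sketch. The class and method names below are assumptions chosen for illustration, not the paper's actual API; the point is only that photogrammetric operations call a common interface, so instrument-specific distortion handling stays hidden behind concrete subclasses.

```python
from abc import ABC, abstractmethod

class PushbroomSensorModel(ABC):
    """Illustrative generic interface (assumed names): bundle adjustment,
    orthorectification, etc. call only these two methods, so they never
    need to know which camera or distortion model is behind them."""

    @abstractmethod
    def ground_to_image(self, x, y, z):
        """Map a ground point to raw-image (line, sample) coordinates."""

    @abstractmethod
    def image_to_ground(self, line, sample, height):
        """Intersect a pixel's look ray with a surface at the given height."""

class NoDistortionModel(PushbroomSensorModel):
    """Trivial concrete example: a nadir pushbroom with no optical
    distortion, used here only to show how a subclass plugs in."""

    def __init__(self, line_gsd, sample_gsd):
        self.line_gsd = line_gsd      # metres per scan line (along track)
        self.sample_gsd = sample_gsd  # metres per pixel (across track)

    def ground_to_image(self, x, y, z):
        return x / self.line_gsd, y / self.sample_gsd

    def image_to_ground(self, line, sample, height):
        return line * self.line_gsd, sample * self.sample_gsd, height
```

A real instrument (e.g., with a polynomial lens-distortion model) would be another subclass implementing the same two methods, and all downstream code would remain unchanged.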
It should be noted that the proposed generic pushbroom sensor model adopts efficient and robust ground-to-image transformation algorithms, enabling the fast generation of orthophotos from planetary pushbroom images and accurate coordinate transformation between orthophotos and raw images. Therefore, the alternative photogrammetric process based on approximate orthophotos offers substantial practical value for planetary pushbroom images.
Although the experimental results show that our method outperforms ISIS in some operations (e.g., tie point extraction), the developed software modules are still at the demonstration stage; ISIS has obvious advantages in stability and maintainability. Currently, we support only a limited set of imaging instruments. Additionally, the file formats used to store the IO and EO parameters of planetary images are inherited from airborne ADS40 images. Therefore, in theory, existing photogrammetric software that supports ADS40 images can use the proposed methodology to process planetary images after the necessary preprocessing. However, there are still many problems in practical use. For example, the summing mode information stored in the comment fields of the orientation data file is self-defined and cannot be interpreted by existing open-source (e.g., DGAP) or commercial software (e.g., LPS). Furthermore, it will be difficult to use the processing strategy presented in this paper to solve the problem of IO parameters that vary per observation. In addition, USGS has ported the optimized ISIS pushbroom sensor model to the Community Sensor Model (CSM) and released the CSM source code under an open-source license (Hare et al., 2019; USGS, 2019). Predictably, the CSM implementations for planetary images developed by USGS will greatly facilitate software development for planetary imaging instruments. Nevertheless, our work remains meaningful: ultimately, more options will help to process massive volumes of planetary images. It should be noted that the practical application of a sensor model is affected by many factors; most importantly, standards need to be established and adhered to.
It is hoped that the proposed generic pushbroom sensor model and the methodology of tie point extraction on orthophotos can be applied to large-scale mapping projects involving thousands or more planetary images; obviously, this requires more comprehensive experimental verification. Furthermore, the proposed generic pushbroom sensor model can be used to conduct fusion processing of multi-source planetary images, which helps to generate global lunar and Mars cartographic products using high-resolution planetary images acquired by multiple space agencies. An obvious advantage is that the photogrammetric processing software can remain unchanged when dealing with different sensor data, because the generic pushbroom sensor model is a sensor-independent approach.