The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-1-2024
https://doi.org/10.5194/isprs-archives-XLVIII-1-2024-141-2024
10 May 2024

AI-Driven Dim-Light Adaptive Camera (DimCam) for Lunar Robots

Ran Duan, Bo Wu, Long Chen, Hao Zhou, and Qichen Fan

Keywords: DimCam, Deep Learning, Dim Light, Lunar Robots, Lunar South Pole

Abstract. The past decade has seen a boom in lunar exploration. China, India, Japan, and other countries have successfully landed landers or rovers on the lunar surface (Wu et al., 2014, 2018, 2020; Prasad et al., 2023). Future missions to explore the Moon are focusing on the lunar south pole (Peña-Asensio et al., 2024). The solar altitude angle at the lunar south pole is extremely low, resulting in low solar irradiance and large areas that are often in dim light or shadow. The permanently shadowed regions (PSRs) at the lunar south pole are also likely to contain substantial amounts of water ice (Li et al., 2018). Future lunar robots exploring the lunar south pole will need to operate in low-light or shadowed regions, making camera sensors that are sensitive to dim-light environments necessary for these robots. Common night-vision sensors usually use near-infrared cameras. However, the image resolution of sensors based on passive infrared technology is limited by several factors, including the intensity of infrared radiation emitted by the object, the sensitivity of the camera, and the performance of the optical system. For instance, thermal imagers typically have a resolution of only 388 × 284 pixels.

We have developed an AI-driven dim-light adaptive camera (DimCam) that is ultra-sensitive to varying illumination conditions and achieves high-definition imaging of 1080p or above, for future lunar robots operating in shadowed or dim-light regions. The DimCam integrates two starlight-level ultra-sensitive imaging sensors connected by a rigid base to provide stereo vision in low-illumination environments. An AI edge-computing unit embedded inside the DimCam adaptively denoises and enhances image quality. The AI module uses an end-to-end image-denoising network that identifies and removes noise in the images more accurately by utilizing depth information from the stereo sensors. Compared with traditional monocular denoising algorithms, the stereo-vision-based denoising network can significantly improve denoising effectiveness and efficiency by enhancing the signal-to-noise ratio of the data input at the front end. The superposition of the overlapping scenes can be regarded as a delayed exposure, and the residual analysis of the aligned images aids noise identification. In addition, for pixels obscured by noise, more accurate pixel values can be restored through interpolation or replacement using the depth information obtained from the stereo sensors. Subsequently, a pre-trained lightweight deep network modified from Zero-DCE (Guo et al., 2020) enhances image quality in terms of brightness and contrast, providing high-quality images even in low-light environments for subsequent applications such as positioning and navigation of robots, 3D mapping of the surrounding environment, and autonomous driving. We have tested the DimCam in a simulated environment in the laboratory, and the results show that it has promising performance and great potential for various applications.
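To make the front-end processing concrete, the sketch below (in Python with OpenCV and NumPy) illustrates the two stages the abstract describes: (i) aligning the stereo views so that their superposition acts like a delayed exposure and their residual flags noise, with depth-guided repair of corrupted pixels, and (ii) the iterative light-enhancement curve of Zero-DCE. This is a minimal classical analogue, not the DimCam implementation: all function names are illustrative, block matching stands in for the learned stereo front end, and the per-pixel curve parameters that Zero-DCE's lightweight network would predict are supplied as inputs.

```python
import cv2
import numpy as np

def align_right_to_left(left, right):
    """Estimate disparity (block-matching stand-in for the learned stereo
    front end) and warp the right view into the left camera's frame."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point
    h, w = left.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Pixel (x, y) in the left image corresponds to (x - d, y) in the right image.
    warped = cv2.remap(right, xs - disp, ys, interpolation=cv2.INTER_LINEAR)
    return warped, disp

def stereo_denoise(left, right, k=3.0):
    """Superpose the aligned views (a "delayed exposure"), flag noisy pixels
    via the inter-view residual, and repair them by interpolation."""
    warped, disp = align_right_to_left(left, right)
    valid = disp > 0                      # pixels with a usable depth estimate
    lf, wf = left.astype(np.float32), warped.astype(np.float32)
    fused = np.where(valid, (lf + wf) / 2.0, lf)  # averaging raises the SNR
    residual = np.abs(lf - wf)            # large residual => likely noise
    noisy = (valid & (residual > k * residual[valid].std())).astype(np.uint8)
    return cv2.inpaint(fused.astype(np.uint8), noisy, 3, cv2.INPAINT_TELEA)

def zero_dce_curve(x, alpha_maps):
    """Zero-DCE light-enhancement curve LE(x) = x + a * x * (1 - x), applied
    iteratively; `alpha_maps` are the per-pixel parameters that the trained
    lightweight network would predict (passed in here, not learned)."""
    x = x.astype(np.float32) / 255.0
    for a in alpha_maps:
        x = x + a * x * (1.0 - x)
    return np.clip(x * 255.0, 0.0, 255.0).astype(np.uint8)
```

In the actual camera the denoising is learned end to end, so alignment, fusion, and repair are implicit in the network weights; the sketch only shows why feeding both views into the front end raises the signal-to-noise ratio before the enhancement stage.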