
SR-net: satellite relative pose estimation network for a noncooperative target via RGB images

Published in Multimedia Tools and Applications

Abstract

Space exploration has drawn increasing attention to space control technology. Accurate pose estimation of a noncooperative target is critical for debris-removal missions and on-orbit servicing. This article introduces the satellite relative pose estimation network (SR-Net), a two-stage method that estimates the pose of a noncooperative target from RGB images. In the first stage, the detection and translation-regression modules are combined into a single model that regresses the 3D translation. In the second stage, SR-Net decouples rotation from translation by treating rotation estimation as classification rather than regression: it takes the detected image crop as input and fits the rotation by minimizing a weighted least-squares objective. Furthermore, a large-scale dataset for 6-DoF pose estimation is introduced, which can serve as a benchmark for state-of-the-art monocular vision-based 6-DoF pose estimation methods. Ablation studies verify the effectiveness and scalability of each module. SR-Net can also be attached to a baseline model as a separate module to improve 6-DoF pose estimation accuracy for noncooperative targets. The results are highly encouraging, showing that the 6-DoF pose of a noncooperative target can be estimated accurately from vision data alone.
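
To make the second-stage idea concrete, the sketch below shows one way the rotation-fitting step could be implemented. It is a minimal illustration only, not the authors' code: it assumes the rotation classifier outputs a set of candidate orientations (unit quaternions, e.g. orientation-bin centers) together with confidence weights, and recovers a single rotation as the weighted least-squares average quaternion, i.e. the principal eigenvector of the weighted outer-product matrix (Markley et al., "Averaging Quaternions", J. Guid. Control Dyn., 2007):

    # Hypothetical sketch only; SR-Net's exact rotation head is not specified in the abstract.
    # Assumption: the stage-two classifier yields candidate orientations (unit quaternions)
    # with confidence weights. The weighted least-squares rotation is the principal
    # eigenvector of M = sum_i w_i * q_i q_i^T (Markley et al., 2007).
    import numpy as np

    def fit_rotation(quats, weights):
        """quats: (N, 4) quaternions; weights: (N,) nonnegative confidences."""
        q = quats / np.linalg.norm(quats, axis=1, keepdims=True)   # normalize to unit length
        M = (weights[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)  # 4x4 symmetric
        eigvals, eigvecs = np.linalg.eigh(M)                        # eigenvalues in ascending order
        return eigvecs[:, -1]                                       # eigenvector of largest eigenvalue

    # Example: three nearby orientation hypotheses weighted by classifier confidence.
    candidates = np.array([[1.00, 0.00, 0.00, 0.00],
                           [0.99, 0.10, 0.00, 0.00],
                           [0.98, 0.00, 0.15, 0.00]])
    confidences = np.array([0.5, 0.3, 0.2])
    print(fit_rotation(candidates, confidences))                    # fitted unit quaternion

The fitted quaternion, combined with the translation regressed in stage one, yields the full 6-DoF pose estimate.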


Data availability

The datasets generated and analyzed during the current study are available in the GitHub repository, https://github.com/walalala233/SR-Net.


Author information


Corresponding author

Correspondence to Cheng Zhang.

Ethics declarations

Conflict of interest

Not applicable.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Su, D., Zhang, C., Chen, Z. et al. SR-net: satellite relative pose estimation network for a noncooperative target via RGB images. Multimed Tools Appl 82, 31557–31573 (2023). https://doi.org/10.1007/s11042-023-14791-6


  • DOI: https://doi.org/10.1007/s11042-023-14791-6
