A fast coarse-to-fine point cloud registration based on optical flow for autonomous vehicles

Abstract

Point cloud registration is a vital prerequisite for many autonomous vehicle tasks. However, balancing accuracy and computational complexity remains challenging for existing point cloud registration algorithms. This paper proposes a fast coarse-to-fine point cloud registration approach for autonomous vehicles. Our method initializes the coarse registration with nearest-neighbor sample consensus optical flow pairwise matching computed on a 2D bird’s-eye view. This coarse stage provides an initial 2D guess matrix for the fine registration and effectively reduces computational complexity. In both stages of the registration, our approach eliminates outliers with a self-correction module, which improves robustness without relying on global positioning system (GPS) information. Point cloud registration experiments show that our approach is the only one that runs in real time (71 ms on average) while achieving state-of-the-art accuracy on the KITTI Odometry dataset, with a mean relative rotation error of 0.125 and a mean relative translation error of 0.038 m. In addition, real-road vehicle-to-vehicle registration experiments verify that the proposed algorithm can effectively align two vehicles’ point clouds when GPS is not synchronized. A demonstration video is available at https://youtu.be/BJTSDChQchw.
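
To make the coarse-to-fine idea concrete, the minimal Python sketch below illustrates one way such a scheme can be realized: both clouds are rasterized into bird’s-eye-view occupancy images, Lucas-Kanade optical flow tracks features between them, a RANSAC-fitted 2D rigid transform (standing in here for the paper’s nearest-neighbor sample consensus matching) serves as the coarse guess, and point-to-point ICP refines it. The grid range and resolution, the Shi-Tomasi corner detector, OpenCV’s RANSAC, and Open3D’s ICP are illustrative assumptions, not the authors’ implementation.

```python
"""Minimal sketch of a BEV-optical-flow coarse step followed by ICP refinement.

Not the authors' implementation: grid range/resolution, Shi-Tomasi + pyramidal
Lucas-Kanade tracking, and OpenCV's RANSAC (in place of the paper's
nearest-neighbor sample consensus) are assumptions made for illustration.
"""
import numpy as np
import cv2
import open3d as o3d

GRID_RANGE = 50.0   # assumed: keep points with |x|, |y| < 50 m
GRID_RES = 0.2      # assumed: 0.2 m per BEV pixel


def bev_image(points: np.ndarray) -> np.ndarray:
    """Rasterize an N x 3 point cloud into a 2D bird's-eye-view occupancy image."""
    mask = (np.abs(points[:, 0]) < GRID_RANGE) & (np.abs(points[:, 1]) < GRID_RANGE)
    cols = ((points[mask, 0] + GRID_RANGE) / GRID_RES).astype(int)   # x -> image column
    rows = ((points[mask, 1] + GRID_RANGE) / GRID_RES).astype(int)   # y -> image row
    size = int(2 * GRID_RANGE / GRID_RES)
    img = np.zeros((size, size), dtype=np.uint8)
    img[rows, cols] = 255
    return img


def coarse_2d_guess(src_img: np.ndarray, tgt_img: np.ndarray) -> np.ndarray:
    """Track BEV corners with Lucas-Kanade optical flow and fit a 2D rigid transform."""
    pts = cv2.goodFeaturesToTrack(src_img, maxCorners=500, qualityLevel=0.01, minDistance=5)
    tracked, status, _ = cv2.calcOpticalFlowPyrLK(src_img, tgt_img, pts, None)
    good = status.ravel() == 1
    src_px, tgt_px = pts[good].reshape(-1, 2), tracked[good].reshape(-1, 2)
    # RANSAC plays the role of the paper's sample-consensus outlier rejection.
    M, _ = cv2.estimateAffinePartial2D(src_px, tgt_px, method=cv2.RANSAC,
                                       ransacReprojThreshold=3.0)
    theta = np.arctan2(M[1, 0], M[0, 0])               # drop any residual scale
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    origin = np.array([-GRID_RANGE, -GRID_RANGE])      # metric position of pixel (0, 0)
    t = GRID_RES * M[:, 2] + (np.eye(2) - R) @ origin  # pixel translation -> metres
    T = np.eye(4)                                      # lift the 2D guess to SE(3)
    T[:2, :2], T[:2, 3] = R, t
    return T


def register(src_pts: np.ndarray, tgt_pts: np.ndarray) -> np.ndarray:
    """Coarse BEV guess, then point-to-point ICP refinement with Open3D."""
    init = coarse_2d_guess(bev_image(src_pts), bev_image(tgt_pts))
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=1.0, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 transform aligning src to tgt
```

As a quick trial, two consecutive LiDAR scans loaded as N x 3 NumPy arrays can be passed to register(src_pts, tgt_pts); the returned 4 x 4 matrix maps the source scan into the target frame. The paper’s own pipeline additionally applies its self-correction module in both stages, which this sketch omits.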


Data Availability

The authors declare that all data and materials support the claims made in the manuscript and comply with field standards. The original data used in this research are available.


Acknowledgements

This work was supported by the National Key Research and Development Program of China (Grant No. 2020AAA0108103) and the Independent Project of Robotics and Intelligent Manufacturing Innovation Institute, Chinese Academy of Sciences (Grant No. C2021002).

Author information

Corresponding author

Correspondence to Huawei Liang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wang, H., Liang, H., Li, Z. et al. A fast coarse-to-fine point cloud registration based on optical flow for autonomous vehicles. Appl Intell 53, 19143–19160 (2023). https://doi.org/10.1007/s10489-022-04308-3
