
A new method for two-stage partial-to-partial 3D point cloud registration: multi-level interaction perception

  • Original Article
International Journal of Machine Learning and Cybernetics

Abstract

3D point cloud registration under rigid transformation is a fundamental yet crucial task in computer vision and graphics. For rigid registration, a correct local alignment of two point clouds is equivalent to a global alignment. Since sufficient information exchange is an effective way to enhance mutual understanding, it is necessary to design a reasonable and sufficient feature interaction across the two point clouds to obtain discriminative features and explore overlapping points. Although a series of learning-based registration methods have been explored recently, most existing methods pay little attention to multi-level feature interactions. In addition, few works explicitly propose a two-stage registration method, even though intermediate constraints can be imposed in two-stage registration to supervise the coarse registration and better refine the fine registration. To this end, this paper proposes a multi-level interaction perception method for two-stage partial-to-partial point cloud registration. It hierarchically captures discriminative structural features through the interaction of local details and global features across different dimensions, and improves the perception of locality during early information exchange. In addition, a spatial overlap-aware transformer is constructed to highlight the common regions while perceiving the global information of the point cloud, so that overlap constraints with high confidence between the source and target point clouds can be obtained. Registration is evaluated on numerous partial 3D point clouds with Gaussian noise, and the results show that our method achieves superior performance.
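Once a registration network has produced correspondences and overlap confidences, the rigid transformation itself is typically recovered in closed form. The sketch below is not the paper's network but a standard building block such two-stage pipelines rely on: a weighted Kabsch/SVD solver. The function name `weighted_rigid_transform` and the use of predicted overlap confidences as weights are illustrative assumptions.

```python
import numpy as np

def weighted_rigid_transform(src, tgt, w):
    """Closed-form weighted rigid alignment (Kabsch/SVD).

    src, tgt: (N, 3) corresponding points; w: (N,) non-negative weights,
    e.g. predicted overlap confidences. Returns R (3x3), t (3,) such that
    R @ src_i + t approximates tgt_i in the weighted least-squares sense.
    """
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)          # weighted centroids
    mu_t = (w[:, None] * tgt).sum(0)
    src_c, tgt_c = src - mu_s, tgt - mu_t
    H = (w[:, None] * src_c).T @ tgt_c        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Sanity check: recover a known rotation/translation from noiseless points.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
tgt = src @ R_true.T + t_true
R, t = weighted_rigid_transform(src, tgt, np.ones(50))
```

In a two-stage scheme, the same solver can be run once on coarse correspondences to supervise the first stage, and again on refined, overlap-weighted correspondences in the second stage.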


Notes

  1. http://graphics.stanford.edu/data/3Dscanrep/


Acknowledgements

This work was supported by the Natural Science Foundation of Fujian Province of China (Grants No. 2021J01540 and No. 2021J05106) and the National Natural Science Foundation of China (Grants No. 62032022, No. 62176244, and No. 62006215).

Author information

Corresponding author

Correspondence to Feilong Cao.

Ethics declarations

Conflict of interest

The authors have no competing interests relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Meng, X., Zhu, L., Ye, H. et al. A new method for two-stage partial-to-partial 3D point cloud registration: multi-level interaction perception. Int. J. Mach. Learn. & Cyber. 14, 3765–3781 (2023). https://doi.org/10.1007/s13042-023-01863-0

