
Rotation-Invariant Completion Network

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2023)

Abstract

Real-world point clouds are usually incomplete and appear in arbitrary poses. While current point cloud completion methods excel at reproducing complete point clouds whose poses match those seen in the training set, their performance degrades when handling point clouds with diverse poses. We propose the Rotation-Invariant Completion Network (RICNet), which consists of two parts: a Dual Pipeline Completion Network (DPCNet) and an enhancing module. First, DPCNet generates a coarse complete point cloud; its feature extraction module produces consistent features regardless of whether the input point cloud has undergone rotation or translation. The enhancing module then refines the fine-grained details of the final generated point cloud. RICNet achieves better rotation invariance in feature extraction and incorporates the structural relationships found in man-made objects. To assess the performance of RICNet and existing methods on point clouds with various poses, we applied random transformations to the point clouds in the MVP dataset and conducted experiments on them. Our experiments demonstrate that RICNet achieves superior completion performance compared to existing methods.
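The abstract rests on one geometric fact: features built from relative quantities, such as distances between points, do not change when a point cloud is rigidly rotated or translated. The sketch below is not RICNet's actual DPCNet feature extractor, which is not reproduced on this page; it is a minimal NumPy illustration of that invariance property under stated assumptions. The helpers `random_rotation` and `knn_distance_features` are hypothetical, using sorted k-nearest-neighbour distances as a stand-in descriptor, and the random pose perturbation mimics the kind of transformation the authors apply to the MVP point clouds.

```python
import numpy as np

def random_rotation(rng):
    # Sample a random 3D rotation via QR decomposition of a Gaussian matrix.
    A = rng.normal(size=(3, 3))
    Q, R = np.linalg.qr(A)
    Q *= np.sign(np.diag(R))      # fix column signs for a well-defined basis
    if np.linalg.det(Q) < 0:      # ensure a proper rotation (det = +1)
        Q[:, 0] = -Q[:, 0]
    return Q

def knn_distance_features(points, k=8):
    # Describe each point by the sorted distances to its k nearest neighbours.
    # Distances depend only on relative geometry, so the descriptor is
    # unchanged by any rigid rotation or translation of the whole cloud.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    dist.sort(axis=1)
    return dist[:, 1:k + 1]       # drop the zero self-distance

rng = np.random.default_rng(0)
cloud = rng.normal(size=(256, 3))        # stand-in for a partial point cloud
R = random_rotation(rng)
t = rng.normal(size=3)
transformed = cloud @ R.T + t            # the "different pose" case

f_original = knn_distance_features(cloud)
f_transformed = knn_distance_features(transformed)
print("max feature difference:", np.abs(f_original - f_transformed).max())  # ~1e-13
```

An encoder that consumes only such relative quantities would, in principle, produce the same latent code for the original and the transformed cloud, which is the property the abstract attributes to DPCNet's feature extraction module.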



Author information


Corresponding author

Correspondence to Yu Chen.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Chen, Y., Shi, P. (2024). Rotation-Invariant Completion Network. In: Liu, Q., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14426. Springer, Singapore. https://doi.org/10.1007/978-981-99-8432-9_10


  • DOI: https://doi.org/10.1007/978-981-99-8432-9_10

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8431-2

  • Online ISBN: 978-981-99-8432-9

  • eBook Packages: Computer Science (R0)
