
SPANet: Spatial and Part-Aware Aggregation Network for 3D Object Detection

  • Conference paper, in PRICAI 2021: Trends in Artificial Intelligence (PRICAI 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13033)


Abstract

3D object detection is a fundamental technique in autonomous driving. However, current LiDAR-based single-stage 3D object detection algorithms pay insufficient attention to encoding the inhomogeneity of LiDAR point clouds and the shape of each object. This paper introduces a novel 3D object detection network, the spatial and part-aware aggregation network (SPANet), which uses a spatial aggregation network to remedy the inhomogeneity of LiDAR point clouds and a part-aware aggregation network that learns statistical shape priors of objects. SPANet deeply integrates 3D voxel-based features with point-based spatial features to learn more discriminative point cloud representations. Specifically, the spatial aggregation network combines efficient learning and high-quality proposals with the flexible receptive fields provided by PointNet-based networks. The part-aware aggregation network includes a part-aware attention mechanism that learns statistical shape priors of objects to enhance the semantic embeddings. Experimental results show that the proposed single-stage method outperforms state-of-the-art single-stage methods on the KITTI 3D object detection benchmark, achieving a bird's eye view (BEV) average precision (AP) of 91.59%, a 3D AP of 80.34%, and a heading AP of 95.03% for car detection.
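The abstract's part-aware attention idea — using predicted intra-object part locations to reweight semantic embeddings — might be sketched as below. This is a minimal illustrative sketch, not the paper's actual design: the sigmoid gating form, the tensor shapes, and the function name `part_aware_attention` are all assumptions introduced for illustration.

```python
import numpy as np

def part_aware_attention(semantic_feats, part_locations, w_att, b_att):
    """Hypothetical sketch: gate semantic embeddings with attention
    weights derived from predicted intra-object part locations.

    semantic_feats : (N, C) per-point semantic features
    part_locations : (N, 3) predicted relative position of each point
                     within its object's bounding box (shape prior cue)
    """
    logits = part_locations @ w_att + b_att   # (N, C) attention logits
    att = 1.0 / (1.0 + np.exp(-logits))       # sigmoid gate in (0, 1)
    return semantic_feats * att               # element-wise reweighting

# Toy usage with random data (sizes are arbitrary for illustration).
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))          # 8 points, 16 channels
parts = rng.uniform(0.0, 1.0, (8, 3))         # relative part coordinates
w = rng.standard_normal((3, 16))
b = np.zeros(16)
out = part_aware_attention(feats, parts, w, b)
print(out.shape)  # (8, 16)
```

The gate attenuates each channel per point, so points whose predicted part location carries low evidence contribute weaker semantic embeddings; in the paper this reweighting is learned end-to-end rather than using fixed random weights as here.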



Author information

Correspondence to Yangyang Ye.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Ye, Y. (2021). SPANet: Spatial and Part-Aware Aggregation Network for 3D Object Detection. In: Pham, D.N., Theeramunkong, T., Governatori, G., Liu, F. (eds.) PRICAI 2021: Trends in Artificial Intelligence. PRICAI 2021. Lecture Notes in Computer Science, vol. 13033. Springer, Cham. https://doi.org/10.1007/978-3-030-89370-5_23


  • DOI: https://doi.org/10.1007/978-3-030-89370-5_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89369-9

  • Online ISBN: 978-3-030-89370-5

  • eBook Packages: Computer Science (R0)
