QuadSampling: A Novel Sampling Method for Remote Implicit Neural 3D Reconstruction Based on Quad-Tree

  • Conference paper
Computer-Aided Design and Computer Graphics (CADGraphics 2023)

Abstract

Implicit neural representations have shown potential advantages for 3D reconstruction, but implicit neural 3D reconstruction methods require high-performance graphics computing power, which limits their application on low-power platforms. A remote 3D reconstruction framework can be employed to address this issue, but its sampling method needs further improvement.

We present a novel sampling method, QuadSampling, for remote implicit neural 3D reconstruction. By hierarchically sampling pixels within blocks with larger loss values, QuadSampling yields a larger average loss and aids the neural learning process by better representing the shapes of regions with different loss values. Thus, under the same transmission budget, QuadSampling obtains a more accurate and complete implicit neural representation of the scene. Extensive evaluations show that, compared with prior methods (i.e., random sampling and active sampling), our QuadSampling framework improves accuracy by up to 4% and the completion ratio by about 1–2%.
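The core idea above can be illustrated with a short sketch: subdivide the image into quadrants quad-tree style, split the pixel-sampling budget among quadrants in proportion to their accumulated loss, and recurse until blocks are small, then sample uniformly inside each leaf block. This is only a minimal illustration of the general technique; the function name, the budget-splitting rule (multinomial draw), and the stopping criteria are assumptions, not the authors' reference implementation.

```python
import numpy as np

def quad_sample(loss_map, budget, min_block=8, rng=None):
    """Hierarchically allocate a pixel-sampling budget with a quad-tree,
    giving more samples to blocks whose accumulated loss is larger.
    Illustrative sketch only -- not the paper's actual implementation."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = loss_map.shape
    # Leaf case: block (or budget) is small -> sample pixels uniformly inside it.
    if h <= min_block or w <= min_block or budget <= 4:
        ys = rng.integers(0, h, size=budget)
        xs = rng.integers(0, w, size=budget)
        return np.stack([ys, xs], axis=1)
    # Split into four quadrants and weight each by its total loss.
    hy, hx = h // 2, w // 2
    quads = [(0, 0, loss_map[:hy, :hx]), (0, hx, loss_map[:hy, hx:]),
             (hy, 0, loss_map[hy:, :hx]), (hy, hx, loss_map[hy:, hx:])]
    weights = np.array([q[2].sum() for q in quads], dtype=np.float64)
    total = weights.sum()
    weights = weights / total if total > 0 else np.full(4, 0.25)
    # Draw a proportional split of the budget across the four quadrants.
    counts = rng.multinomial(budget, weights)
    out = []
    for (oy, ox, sub), n in zip(quads, counts):
        if n > 0:
            pix = quad_sample(sub, int(n), min_block, rng)
            pix[:, 0] += oy  # shift local coords back to the parent frame
            pix[:, 1] += ox
            out.append(pix)
    return np.concatenate(out, axis=0)
```

Because the budget split is proportional to per-quadrant loss, high-loss regions receive more samples at every level of the tree, which matches the abstract's claim of a larger average loss per transmitted sample.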



Author information

Correspondence to Yu-Ping Wang.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Hu, XQ., Wang, YP. (2024). QuadSampling: A Novel Sampling Method for Remote Implicit Neural 3D Reconstruction Based on Quad-Tree. In: Hu, SM., Cai, Y., Rosin, P. (eds) Computer-Aided Design and Computer Graphics. CADGraphics 2023. Lecture Notes in Computer Science, vol 14250. Springer, Singapore. https://doi.org/10.1007/978-981-99-9666-7_21

  • DOI: https://doi.org/10.1007/978-981-99-9666-7_21

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-9665-0

  • Online ISBN: 978-981-99-9666-7

  • eBook Packages: Computer Science (R0)
