A Mixed Time-of-Flight and Stereoscopic Camera System

Chapter in the book Time-of-Flight Cameras

Part of the book series: SpringerBriefs in Computer Science (BRIEFSCOMPUTER)


Abstract

Several methods that combine range and color data have been investigated and successfully used in various applications. Most of these systems suffer from noise in the range data and from the resolution mismatch between the range sensor and the color cameras. High-resolution depth maps can be obtained by stereo matching, but this often fails to produce accurate depth maps of weakly or repetitively textured scenes. Range sensors, by contrast, provide coarse depth information regardless of the presence or absence of texture. We propose a novel ToF-stereo fusion method based on an efficient seed-growing algorithm, which uses the ToF data projected onto the stereo image pair as an initial set of correspondences. These initial “seeds” are then propagated to nearby pixels using a matching score that combines an image similarity criterion with rough depth priors computed from the low-resolution range data. The overall result is a dense and accurate depth map at the resolution of the color cameras. We show that the proposed algorithm outperforms 2D image-based stereo algorithms and produces results of higher resolution than off-the-shelf RGB-D sensors, e.g., Kinect.
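
As a rough illustration of the seed-growing idea described above, the following sketch (which is not the chapter's implementation) grows ToF-derived seed correspondences across a rectified stereo pair. The inputs left and right (rectified grayscale images), seeds (row, column, disparity triples obtained by projecting the ToF measurements onto the image pair), and prior (the ToF disparity upsampled to image resolution) are assumptions made for the example, and the zero-mean NCC score weighted by a Gaussian depth prior merely stands in for the matching score defined in the chapter.

    # Illustrative sketch of ToF-stereo seed growing; names and the particular
    # score are assumptions, not the chapter's actual algorithm.
    import heapq
    import numpy as np

    def patch_score(left, right, r, cl, cr, half=2):
        """Zero-mean normalized cross-correlation of two (2*half+1)^2 patches."""
        a = left[r - half:r + half + 1, cl - half:cl + half + 1].astype(float)
        b = right[r - half:r + half + 1, cr - half:cr + half + 1].astype(float)
        a -= a.mean()
        b -= b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
        return float((a * b).sum() / denom)        # in [-1, 1]

    def grow_disparity(left, right, seeds, prior, sigma=3.0, tau=0.6, half=2):
        """Propagate ToF seeds (row, col, disparity) to neighbouring pixels."""
        h, w = left.shape
        disp = np.full((h, w), np.nan)             # output disparity map
        heap = [(-1.0, r, c, d) for (r, c, d) in seeds]
        heapq.heapify(heap)                        # best-first growth queue
        while heap:
            _, r, c, d = heapq.heappop(heap)
            if not np.isnan(disp[r, c]):
                continue                           # pixel already assigned
            disp[r, c] = d
            for rn, cn in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if not (half <= rn < h - half and half <= cn < w - half):
                    continue
                if not np.isnan(disp[rn, cn]):
                    continue
                best = None
                for dn in (d - 1, d, d + 1):       # small search around the parent disparity
                    if not (half <= cn - dn < w - half):
                        continue
                    photo = patch_score(left, right, rn, cn, cn - dn, half)
                    depth = np.exp(-0.5 * ((dn - prior[rn, cn]) / sigma) ** 2)
                    score = photo * depth          # image similarity weighted by the ToF prior
                    if best is None or score > best[0]:
                        best = (score, dn)
                if best is not None and best[0] > tau:
                    heapq.heappush(heap, (-best[0], rn, cn, best[1]))
        return disp

In practice the seeds would come from projecting each calibrated ToF measurement into both color images, and growth stops wherever neither the photometric evidence nor the depth prior supports a confident match, leaving those pixels unassigned.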


Notes

  1. All experiments described in this chapter use the Mesa SR4000 camera [18].

  2. http://www.4dviews.com

  3. The disparity space is a space of all potential correspondences [22].

  4. http://www.ptgrey.com/

  5. http://vision.middlebury.edu/stereo/data/scenes2006/

References

  1. Alenyà, G., Dellen, B., Torras, C.: 3D modelling of leaves from color and ToF data for robotized plant measuring. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 3408–3414 (2011)

  2. Attamimi, M., Mizutani, A., Nakamura, T., Nagai, T., Funakoshi, K., Nakano, M.: Real-time 3D visual sensor for robust object recognition. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4560–4565 (2010)

  3. Castañeda, V., Mateus, D., Navab, N.: SLAM combining ToF and high-resolution cameras. In: IEEE Workshop on Motion and Video Computing (2011)

  4. Čech, J., Matas, J., Perdoch, M.: Efficient sequential correspondence selection by cosegmentation. IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1568–1581 (2010)

  5. Čech, J., Sanchez-Riera, J., Horaud, R.: Scene flow estimation by growing correspondence seeds. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3129–3136 (2011)

  6. Čech, J., Šára, R.: Efficient sampling of disparity space for fast and accurate matching. In: Proceedings of BenCOS Workshop, CVPR (2007)

  7. Chan, D., Buisman, H., Theobalt, C., Thrun, S.: A noise-aware filter for real-time depth upsampling. In: ECCV Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications (2008)

  8. Dal Mutto, C., Zanuttigh, P., Cortelazzo, G.M.: A probabilistic approach to ToF and stereo data fusion. In: Proceedings of 3D Data Processing, Visualization and Transmission, Paris (2010)

  9. Diebel, J., Thrun, S.: An application of Markov random fields to range sensing. In: Advances in Neural Information Processing Systems (NIPS) (2005)

  10. Dobias, M., Šára, R.: Real-time global prediction for temporally stable stereo. In: Proceedings of IEEE International Conference on Computer Vision Workshops, pp. 704–707 (2011)

  11. Droeschel, D., Stückler, J., Holz, D., Behnke, S.: Towards joint attention for a domestic service robot—person awareness and gesture recognition using time-of-flight cameras. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Shanghai, pp. 1205–1210 (2011)

  12. Fischer, J., Arbeiter, G., Verl, A.: Combination of time-of-flight depth and stereo using semiglobal optimization. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 3548–3553 (2011)

  13. Freedman, B., Shpunt, A., Machline, M., Arieli, Y.: Depth mapping using projected patterns. US Patent No. 8150412 (2012)

  14. Geiger, A., Roser, M., Urtasun, R.: Efficient large-scale stereo matching. In: Proceedings of the Asian Conference on Computer Vision (ACCV), pp. 25–38 (2010)

  15. Gould, S., Baumstarck, P., Quigley, M., Ng, A.Y., Koller, D.: Integrating visual and range data for robotic object detection. In: Proceedings of European Conference on Computer Vision Workshops (2008)

  16. Hansard, M., Horaud, R., Amat, M., Lee, S.: Projective alignment of range and parallax data. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3089–3096 (2011)

  17. Jebari, I., Bazeille, S., Battesti, E., Tekaya, H., Klein, M., Tapus, A., Filliat, D., Meyer, C., Sio-Hoi, I., Benosman, R., Cizeron, E., Mamanna, J.-C., Pothier, B.: Multi-sensor semantic mapping and exploration of indoor environments. In: Technologies for Practical Robot Applications (TePRA), pp. 151–156 (2011)

  18. Mesa Imaging AG. http://www.mesa-imaging.ch

  19. Moravec, H.P.: Toward automatic visual obstacle avoidance. In: Proceedings of the 5th International Joint Conference on Artificial Intelligence (IJCAI), pp. 584–594 (1977)

  20. Newcombe, R.A., Davison, A.J.: Live dense reconstruction with a single moving camera. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1498–1505 (2010)

  21. Park, J., Kim, H., Tai, Y.-W., Brown, M.S., Kweon, I.S.: High quality depth map upsampling for 3D-TOF cameras. In: Proceedings of IEEE International Conference on Computer Vision (ICCV) (2011)

  22. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vision 47, 7–42 (2002)

  23. Seitz, S., Curless, B., Diebel, J., Scharstein, D., Szeliski, R.: A comparison and evaluation of multi-view stereo reconstruction algorithms. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 519–528 (2006)

  24. Stückler, J., Behnke, S.: Combining depth and color cues for scale- and viewpoint-invariant object segmentation and recognition using random forests. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4566–4571 (2010)

  25. Wang, L., Yang, R.: Global stereo matching leveraged by sparse ground control points. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3033–3040 (2011)

  26. Zhu, J., Wang, L., Yang, R., Davis, J.: Fusion of time-of-flight depth and stereo for high accuracy depth maps. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8 (2008)

Author information

Correspondence to Miles Hansard.


Copyright information

© 2013 Miles Hansard

About this chapter

Cite this chapter

Hansard, M., Lee, S., Choi, O., Horaud, R. (2013). A Mixed Time-of-Flight and Stereoscopic Camera System. In: Time-of-Flight Cameras. SpringerBriefs in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-4658-2_5

  • DOI: https://doi.org/10.1007/978-1-4471-4658-2_5

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-4657-5

  • Online ISBN: 978-1-4471-4658-2

  • eBook Packages: Computer Science, Computer Science (R0)
