DOI: 10.1145/2668904.2668931

Line-preserving hole-filling for 2D-to-3D conversion

Published: 13 November 2014

ABSTRACT

Many 2D-to-3D conversion techniques rely on image-based rendering methods to synthesize 3D views from monoscopic images. This leads to holes in the generated views where previously occluded objects become visible but no texture information is available for them. Approaches that attempt to alleviate these artifacts are referred to as hole-filling. This paper proposes a method that determines a non-uniform deformation of the stereoscopic view such that no holes remain visible. Additionally, an energy term is devised that prevents straight lines in the input image from being bent by the non-uniform image warp. This is achieved by constructing a triangle mesh that approximates the depth map of the input image and by integrating a set of detected lines into it. The line information is incorporated into the underlying optimization problem to prevent bending of the lines. Evaluation of the proposed algorithm on a comprehensive dataset covering a variety of scenes shows that holes are filled efficiently without obvious background distortions.
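
The abstract does not state the form of the line-preserving energy term, so the following is only an illustrative sketch of how such a constraint is commonly expressed in image-domain warping, not the paper's actual formulation; every symbol is an assumption introduced here. For a detected line with endpoint mesh vertices p and q and intermediate mesh vertices v_1, ..., v_{n-1} integrated into the triangle mesh, a collinearity penalty over the warped vertex positions (denoted by hats) can be written as

    E_{\mathrm{line}} = \sum_{\ell} \sum_{i=1}^{n_\ell - 1}
        \Bigl\lVert \hat{v}_{\ell,i} - \bigl( (1 - \alpha_{\ell,i})\,\hat{p}_\ell + \alpha_{\ell,i}\,\hat{q}_\ell \bigr) \Bigr\rVert^{2},
    \qquad
    \alpha_{\ell,i} = \frac{\lVert v_{\ell,i} - p_\ell \rVert}{\lVert q_\ell - p_\ell \rVert},

where the fixed coefficients alpha encode each vertex's original relative position along its line. Adding a term of this kind to the warp's least-squares objective keeps the vertices of every detected line collinear after the non-uniform deformation, which is the effect the abstract describes as preventing the lines from being bent.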

Published in

CVMP '14: Proceedings of the 11th European Conference on Visual Media Production
November 2014, 153 pages
ISBN: 9781450331852
DOI: 10.1145/2668904
Copyright © 2014 ACM

Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers

research-article

Acceptance Rates

Overall Acceptance Rate: 40 of 67 submissions, 60%
