
Modeling hair from an RGB-D camera

Published: 4 December 2018

Abstract

Creating realistic 3D hair that closely matches real-world input remains challenging. With the increasing popularity of lightweight depth cameras featured in devices such as the iPhone X, Intel RealSense, and DJI drones, depth cues can be very helpful in consumer applications, for example, animated emoji. In this paper, we introduce a fully automatic, data-driven approach that models hair geometry and computes a complete strand-level 3D hair model closely resembling the input from a single RGB-D camera. Our method heavily exploits the geometric cues contained in the depth channel and leverages exemplars in a 3D hair database for high-fidelity hair synthesis. The core of our method is a local-similarity-based search-and-synthesis algorithm that simultaneously reasons about hair geometry, strand connectivity, strand orientation, and structural plausibility of the hair. We demonstrate the efficacy of our method on a variety of complex hairstyles and compare it with prior art.
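To give a flavor of what a local-similarity search over strand exemplars can look like, here is a minimal, hypothetical sketch: strand segments are fixed-length polylines, each described by its root position and unit tangent directions, and a query segment is matched to the nearest database exemplar by descriptor distance. The descriptor design and function names are illustrative assumptions, not the authors' actual algorithm.

```python
# Illustrative sketch (assumed, not the paper's implementation) of a
# local-similarity exemplar search over hair-strand segments.
import numpy as np

def segment_descriptor(seg):
    """Describe a strand segment (N x 3 polyline) by its root position
    plus unit tangent directions, so the search accounts for both
    geometry and strand orientation."""
    seg = np.asarray(seg, dtype=float)
    tangents = np.diff(seg, axis=0)                      # (N-1) x 3 edge vectors
    norms = np.linalg.norm(tangents, axis=1, keepdims=True)
    tangents = tangents / np.maximum(norms, 1e-9)        # normalize, avoid /0
    return np.concatenate([seg[0], tangents.ravel()])

def nearest_exemplar(query_seg, exemplar_segs):
    """Return the index of the database segment whose descriptor is
    closest in L2 distance to the query's -- a brute-force stand-in
    for a local-similarity search step."""
    q = segment_descriptor(query_seg)
    dists = [np.linalg.norm(q - segment_descriptor(e)) for e in exemplar_segs]
    return int(np.argmin(dists))
```

A full strand-synthesis pipeline would additionally blend the retrieved exemplars and enforce connectivity between neighboring segments; this sketch covers only the matching step.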


Supplemental Material

a205-zhang.mp4 (MP4, 88.3 MB)



Published in

ACM Transactions on Graphics, Volume 37, Issue 6
December 2018, 1401 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3272127

        Copyright © 2018 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States



        Qualifiers

        • research-article
