Abstract
Creating realistic 3D hair that closely matches real-world input remains challenging. With the growing popularity of lightweight depth cameras in devices such as the iPhone X, Intel RealSense, and DJI drones, depth cues have become valuable in consumer applications such as animated emoji. In this paper, we introduce a fully automatic, data-driven approach that models hair geometry and computes a complete strand-level 3D hair model closely resembling the input from a single RGB-D camera. Our method heavily exploits the geometric cues contained in the depth channel and leverages exemplars in a 3D hair database for high-fidelity hair synthesis. The core of our method is a local-similarity-based search-and-synthesis algorithm that simultaneously reasons about hair geometry, strand connectivity, strand orientation, and the structural plausibility of the hairstyle. We demonstrate the efficacy of our method on a variety of complex hairstyles and compare it with prior art.
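The abstract describes a local-similarity-based search over a 3D hair database. As a rough illustration only (the paper's actual descriptor and search procedure are not specified here), a brute-force nearest-neighbor match between a query strand segment and database exemplar segments might be sketched as follows; `segment_descriptor` and its centered-points-plus-tangents encoding are hypothetical choices, not the authors' method:

```python
import numpy as np

def segment_descriptor(points):
    """Describe a strand segment by its mean-centered point positions and
    unit tangent directions (a simple translation-invariant local-shape
    descriptor; an illustrative assumption, not the paper's descriptor)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    tangents = np.diff(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    return np.concatenate([centered.ravel(), tangents.ravel()])

def best_match(query_segment, exemplar_segments):
    """Brute-force local-similarity search: return the index of the
    exemplar segment whose descriptor is closest (in L2 distance)
    to the query segment's descriptor."""
    q = segment_descriptor(query_segment)
    dists = [np.linalg.norm(q - segment_descriptor(s))
             for s in exemplar_segments]
    return int(np.argmin(dists))
```

For example, a straight query segment along the x-axis matches a translated copy of itself in the database exactly, since the descriptor is invariant to translation.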
Index Terms
- Modeling hair from an RGB-D camera