
Dynamic 3D facial expression modeling using Laplacian smooth and multi-scale mesh matching

  • Original Article
  • Published in The Visual Computer

Abstract

We propose a novel algorithm for high-resolution modeling of dynamic 3D facial expressions from a sequence of unstructured face point clouds captured at video rate. The algorithm can reconstruct not only the global facial deformations caused by muscular movements, but also the expressional details generated by local skin deformations. Our algorithm consists of two parts: extraction of expressional details and reconstruction of expressions. In the extraction part, we extract subtle expressional details such as wrinkles and folds from each point cloud with a Laplacian smoothing operator. In the reconstruction part, we use a multi-scale deformable mesh model to match each point cloud and reconstruct the time-varying expressions. In each matching step, we first use the low-scale mesh to match the global deformations of the point cloud obtained after filtering out the expressional details, and then use the high-scale mesh to match the extracted details. Compared with many existing non-rigid ICP-based algorithms that match the mesh model directly to the entire point cloud, our algorithm avoids the potentially large errors that arise where local sharp deformations are matched, since it extracts the expressional details for separate matching; it can therefore produce a high-resolution dynamic model that reflects time-varying expressions. Additionally, the multi-scale mesh model makes our algorithm fast because it reduces the number of iterative optimizations needed in matching. Experiments demonstrate the efficiency of our algorithm.
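The paper provides no code, but the detail-extraction idea in the abstract can be illustrated with a minimal sketch: apply umbrella Laplacian smoothing to a toy vertex grid to obtain the low-frequency base shape, and take the residual as the high-frequency "detail" layer (wrinkles, folds). The grid, adjacency, and smoothing parameters below are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def laplacian_smooth(verts, neighbors, lam=0.5, iterations=10):
    """Umbrella-operator Laplacian smoothing: each vertex moves a fraction
    lam toward the centroid of its one-ring neighbors, per iteration."""
    v = verts.copy()
    for _ in range(iterations):
        centroids = np.array([v[n].mean(axis=0) for n in neighbors])
        v += lam * (centroids - v)
    return v

# Toy "face patch": a flat 3x3 grid with one raised vertex acting as a wrinkle.
verts = np.array([[x, y, 0.0] for y in range(3) for x in range(3)])
verts[4, 2] = 1.0  # center vertex bulges out of the plane
neighbors = [      # 4-connected grid adjacency (hypothetical mesh topology)
    [1, 3], [0, 2, 4], [1, 5],
    [0, 4, 6], [1, 3, 5, 7], [2, 4, 8],
    [3, 7], [4, 6, 8], [5, 7],
]

smoothed = laplacian_smooth(verts, neighbors)  # low-frequency base shape
details = verts - smoothed                     # high-frequency residual layer
```

By construction, `smoothed + details` reproduces the input exactly, which mirrors the paper's decomposition: the low-scale mesh would be matched to the smoothed geometry and the high-scale mesh to the detail layer.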



References

  1. Igarashi, T., Nishino, K., Nayar, S.K.: The appearance of human skin. Tech. Rep. CUCS-024-05 (2005)

  2. Joshi, P., Tien, W.C., Desbrun, M., Pighin, F.: Learning controls for blendshape based realistic facial animation. In: SCA ’03, pp. 187–192 (2003)

  3. Dellepiane, M., Pietroni, N., Tsingos, N., Asselot, M., Scopigno, R.: Reconstructing head models from photographs for individualized 3d-audio processing. In: PG, pp. 1719–1727 (2008)

  4. Goldenstein, S., Vogler, C., Metaxas, D.: 3D facial tracking from corrupted movie sequences. In: CVPR, pp. 1880–1885 (2004)

  5. Huang, X., Zhang, S., Wang, Y., Metaxas, D., Samaras, D.: A hierarchical framework for high resolution facial expression tracking. In: CVPRW’04, pp. 22–29 (2004)

  6. Zhang, L., Snavely, N., Curless, B., Seitz, S.M.: Spacetime faces: high resolution capture for modeling and animation. ACM Trans. Graph. 23(3), 548–558 (2004)

  7. Amberg, B., Romdhani, S., Vetter, T.: Optimal step nonrigid ICP algorithms for surface registration. In: CVPR, pp. 1–8 (2007)

  8. Minoi, J.L., Gillies, D.: 3D facial expression analysis and deformation. In: The 4th Symposium on Applied Perception in Graphics and Visualization, p. 138 (2007)

  9. Schneider, D.C., Eisert, P.: Fast nonrigid mesh registration with a data-driven deformation prior. In: ICCV Workshops, pp. 304–311 (2009)

  10. Hyneman, W., Itokazu, H., Williams, L., Zhao, X.: Human face project. In: ACM SIGGRAPH, Article 5 (2005)

  11. Oat, C.: Animated wrinkle maps. In: ACM SIGGRAPH, pp. 33–37 (2007)

  12. Bickel, B., Botsch, M., Angst, R., Matusik, W., Otaduy, M., Pfister, H., Gross, M.: Multi-scale capture of facial geometry and motion. ACM Trans. Graph. 26(3), 33 (2007)

  13. Huang, H., Chai, J., Tong, X., Wu, H.: Leveraging motion capture and 3D scanning for high-fidelity facial performance acquisition. In: ACM SIGGRAPH, Article 74 (2011)

  14. Huang, H., Yin, K., Zhao, L., Qi, Y., Yu, Y., Tong, X.: Detail-preserving controllable deformation from sparse examples. IEEE Trans. Vis. Comput. Graph. 18(8), 1215–1227 (2012)

  15. Furukawa, Y., Ponce, J.: Dense 3D motion capture for human faces. In: CVPR, pp. 1674–1681 (2009)

  16. Le, B.H., Zhu, M., Deng, Z.: Marker optimization for facial motion acquisition and deformation. IEEE Trans. Vis. Comput. Graph. 19(11), 1859–1871 (2013)

  17. Li, H., Adams, B., Guibas, L.J., Pauly, M.: Robust single-view geometry and motion reconstruction. ACM Trans. Graph. 28(5), Article 175 (2009)

  18. Zeng, Y., Wang, C., Wang, Y., Gu, D., Samaras, D., Paragios, N.: Intrinsic dense 3d surface tracking. In: CVPR, pp. 1225–1232 (2011)

  19. Süßmuth, J., Winter, M., Greiner, G.: Reconstructing animated meshes from time-varying point clouds. Comput. Graph. Forum 27(5), 1469–1476 (2008)

  20. Wand, M., Adams, B., Ovsjanikov, M., Berner, A., Bokeloh, M., Jenke, P., Guibas, L., Seidel, H.-P., Schilling, A.: Efficient reconstruction of nonrigid shape and motion from real-time 3D scanner data. ACM Trans. Graph. 28(2), Article 15 (2009)

  21. Popa, T., South-Dickinson, I., Bradley, D., Sheffer, A., Heidrich, W.: Globally consistent space-time reconstruction. Comput. Graph. Forum 29(5), 1633–1642 (2010)

  22. Li, H., Luo, L., Vlasic, D., Peers, P., Popovic, J., Pauly, M., Rusinkiewicz, S.: Temporally coherent completion of dynamic shapes. ACM Trans. Graph. 31(1), Article 2 (2012)

  23. Wang, Y., Gupta, M., Zhang, S., Wang, S., Gu, X.F., Samaras, D., Huang, P.S.: High resolution tracking of non-rigid motion of densely sampled 3d data using Harmonic Maps. Int. J. Comput. Vis. 76(3), 283–300 (2008)

  24. Bradley, D., Heidrich, W., Popa, T., Sheffer, A.: High-resolution passive facial performance capture. In: ACM SIGGRAPH, Article 41 (2010)

  25. Beeler, T., Hahn, F., Bradley, D., Bickel, B., Beardsley, P., Gotsman, C., Gross, M.: High-quality passive facial performance capture using anchor frames. ACM Trans. Graph. 30(4), Article 75 (2011)

  26. Sibbing, D., Habbecke, M., Kobbelt, L.: Markerless reconstruction and synthesis of dynamic facial expression. Comput. Vis. Image Underst. 115(5), 668–680 (2011)

  27. Huang, Y., Zhang, X., Fan, Y., Yin, L., Seversky, L., Allen, J., Lei, T., Dong, W.: Reshaping 3D facial scans for facial appearance modeling and 3D facial expression analysis. Image Vis. Comput. 30(10), 750–761 (2012)

  28. Chi, J., Zhang, C.: Automated capture of real-time 3D facial geometry and motion. Comput. Aided Des. Appl. 8(6), 859–871 (2011)

Acknowledgments

We would like to thank the authors of [6] and [25] for sharing the face data used in our experiments. This work was supported by the National Natural Science Foundation of China under Grants 61303088, 61020106001, 61332015, 61272242, and U1201258; the Natural Science Foundation of Shandong Province under Grant BS2013DX039; and the Sci-tech Development Project of Jinan City under Grant 201303021.

Author information

Correspondence to Jing Chi.

Cite this article

Chi, J., Tu, C. & Zhang, C. Dynamic 3D facial expression modeling using Laplacian smooth and multi-scale mesh matching. Vis Comput 30, 649–659 (2014). https://doi.org/10.1007/s00371-014-0960-3
