ABSTRACT
This paper presents a novel facial animation retargeting system designed to support the animator's workflow. Our research is informed by observation and analysis of the key-frame animation process that animators often prefer when working with blendshape models. The retargeting system generates a set of blendshape weights similar to those an animator would produce. It does so by partitioning the blendshapes into several sequential retargeting groups and solving them with a matching-pursuit-like scheme inspired by traditional key-framing practice. Animators also typically spend a great deal of time simplifying the dense weight graphs that retargeting creates; our graph simplification technique produces editable weight graphs while preserving the visual characteristics of the original retargeting. Finally, we automatically create GUI controllers that help artists perform key-framing and editing efficiently. Together, these techniques greatly reduce the time and effort animators need to achieve high-quality retargeted facial animations.
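The two core ideas described above, sequential matching-pursuit-style weight solving and keyframe-preserving curve simplification, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the group ordering, the weight bounds, and the vertical-distance Douglas-Peucker variant are all assumptions made for the sketch.

```python
import numpy as np

def retarget_frame(blendshape_groups, target, lo=0.0, hi=1.0):
    """Greedy, matching-pursuit-style solve over sequential blendshape groups.

    Each group is an (n_coords, n_shapes) matrix of blendshape deltas.
    Groups are fit one after another against the remaining residual,
    mimicking an animator who keys coarse shapes first, then refinements.
    (Illustrative sketch only; not the paper's actual solver.)
    """
    residual = np.asarray(target, dtype=float).copy()
    weights = []
    for B in blendshape_groups:
        w, *_ = np.linalg.lstsq(B, residual, rcond=None)
        w = np.clip(w, lo, hi)           # keep weights in a plausible range
        residual = residual - B @ w      # later groups explain what is left
        weights.append(w)
    return np.concatenate(weights), residual

def simplify_curve(t, y, tol):
    """Douglas-Peucker-style simplification of one weight curve.

    Returns indices of the samples to keep as keyframes, using the
    vertical-distance variant common for time-series data.
    """
    keep = {0, len(t) - 1}
    def rec(i, j):
        if j <= i + 1:
            return
        # vertical distance of interior samples to the chord from i to j
        chord = y[i] + (t[i + 1:j] - t[i]) * (y[j] - y[i]) / (t[j] - t[i])
        d = np.abs(y[i + 1:j] - chord)
        k = i + 1 + int(np.argmax(d))
        if d[k - i - 1] > tol:
            keep.add(k)
            rec(i, k)
            rec(k, j)
    rec(0, len(t) - 1)
    return sorted(keep)
```

For instance, a triangular weight curve sampled as `[0, 0.5, 1, 0.5, 0]` simplifies to keys at indices 0, 2, and 4, while a straight ramp keeps only its endpoints.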
Index Terms
- Artist friendly facial animation retargeting