AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis

Authors

  • Dongze Li — School of Artificial Intelligence, University of Chinese Academy of Sciences; CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences
  • Kang Zhao — Alibaba Group
  • Wei Wang — CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences
  • Bo Peng — CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences
  • Yingya Zhang — Alibaba Group
  • Jing Dong — CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences
  • Tieniu Tan — CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences; Nanjing University

DOI:

https://doi.org/10.1609/aaai.v38i4.28086

Keywords:

CV: Biometrics, Face, Gesture & Pose, CV: 3D Computer Vision

Abstract

Audio-driven talking head synthesis is a promising topic with wide applications in digital humans, filmmaking, and virtual reality. Recent NeRF-based approaches have shown superior quality and fidelity compared to previous studies. However, for few-shot talking head generation, a practical scenario where only a few seconds of talking video are available for one identity, two limitations emerge: 1) they either have no base model, which serves as a facial prior for fast convergence, or ignore the importance of audio when building the prior; 2) most of them overlook the degree of correlation between different face regions and audio, e.g., the mouth is audio-related, while the ears are audio-independent. In this paper, we present Audio Enhanced Neural Radiance Field (AE-NeRF) to tackle the above issues; it can generate realistic portraits of a new speaker from a few-shot dataset. Specifically, we introduce an Audio Aware Aggregation module into the feature fusion stage of the reference scheme, where each weight is determined by the audio similarity between the reference and target image. We then propose an Audio-Aligned Face Generation strategy that models the audio-related and audio-independent regions separately with a dual-NeRF framework. Extensive experiments show that AE-NeRF surpasses the state of the art in image fidelity, audio-lip synchronization, and generalization ability, even with a limited training set or few training iterations.
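The abstract's Audio Aware Aggregation module weights each reference feature by how similar its audio is to the target frame's audio. A minimal sketch of that idea is below; the function name, shapes, and the use of cosine similarity with a softmax are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def audio_aware_aggregation(ref_feats, ref_audio, tgt_audio):
    """Fuse per-reference image features, weighting each reference by the
    similarity of its audio feature to the target audio feature.

    ref_feats: (N, D) image features from N reference frames (hypothetical shapes)
    ref_audio: (N, A) audio features of the reference frames
    tgt_audio: (A,)   audio feature of the target frame
    Returns a (D,) fused feature.
    """
    # Cosine similarity between each reference audio and the target audio.
    ref_n = ref_audio / np.linalg.norm(ref_audio, axis=1, keepdims=True)
    tgt_n = tgt_audio / np.linalg.norm(tgt_audio)
    sims = ref_n @ tgt_n                       # (N,) similarity scores

    # Softmax turns similarities into normalized aggregation weights,
    # so audio-matching references contribute more to the fused feature.
    w = np.exp(sims - sims.max())
    w /= w.sum()
    return w @ ref_feats                       # (D,) weighted fusion
```

References whose audio closely matches the target thus dominate the fused feature, which is the intuition behind conditioning the fusion weights on audio rather than treating all references equally.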

Published

2024-03-24

How to Cite

Li, D., Zhao, K., Wang, W., Peng, B., Zhang, Y., Dong, J., & Tan, T. (2024). AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3037-3045. https://doi.org/10.1609/aaai.v38i4.28086

Section

AAAI Technical Track on Computer Vision III