ABSTRACT
This paper reports on the GENEA Challenge 2023, in which participating teams built speech-driven gesture-generation systems using the same speech and motion dataset, followed by a joint evaluation. This year’s challenge provided data on both sides of a dyadic interaction, allowing teams to generate full-body motion for an agent given its speech (text and audio) and the speech and motion of the interlocutor. We evaluated 12 submissions and 2 baselines together with held-out motion-capture data in several large-scale user studies. The studies focused on three aspects: 1) the human-likeness of the motion, 2) the appropriateness of the motion for the agent’s own speech whilst controlling for the human-likeness of the motion, and 3) the appropriateness of the motion for the behaviour of the interlocutor in the interaction, using a setup that controls for both the human-likeness of the motion and the agent’s own speech. We found a large span in human-likeness between challenge submissions, with a few systems rated close to human mocap. Appropriateness seems far from being solved, with most submissions performing in a narrow range slightly above chance, far behind natural motion. The effect of the interlocutor is even more subtle, with submitted systems at best performing barely above chance. Interestingly, a dyadic system being highly appropriate for agent speech does not necessarily imply high appropriateness for the interlocutor. Additional material is available via the project website at svito-zar.github.io/GENEAchallenge2023/.
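The appropriateness results above are stated relative to chance level: raters compare the same motion paired with matched versus mismatched speech, and a system scores above chance only if matched pairings are preferred more often than mismatched ones. As a rough illustration of how such a score and its uncertainty might be computed (a minimal sketch under assumed response coding, not the challenge's exact analysis protocol), consider:

```python
# Illustrative sketch (assumptions, not the paper's exact method): estimating
# how far above chance a system's motion is judged appropriate for its speech.
# Assumed trial design: a rater sees the same motion with matched vs.
# mismatched speech and indicates which pairing fits better (ties allowed).
from statistics import NormalDist

def mean_appropriateness(responses):
    """responses: list of +1 (matched preferred), -1 (mismatched preferred),
    0 (tie). Returns the mean score; 0.0 corresponds to chance level."""
    return sum(responses) / len(responses)

def normal_ci(responses, level=0.95):
    """Normal-approximation confidence interval for the mean score."""
    n = len(responses)
    m = mean_appropriateness(responses)
    var = sum((r - m) ** 2 for r in responses) / (n - 1)  # sample variance
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half = z * (var / n) ** 0.5
    return m - half, m + half

# Hypothetical response counts for a system slightly above chance, as most
# submissions were; these numbers are invented for illustration only.
responses = [+1] * 330 + [-1] * 270 + [0] * 100
print(mean_appropriateness(responses))  # ~0.086, i.e. slightly above chance
print(normal_ci(responses))             # an interval excluding 0 suggests the
                                        # preference is statistically reliable
```

A score near the maximum would indicate raters almost always identify the matched speech, which the abstract notes holds for natural motion but not for the submitted systems.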