GNetIc – Using Bayesian Decision Networks for Iconic Gesture Generation

  • Conference paper
Intelligent Virtual Agents (IVA 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5773)

Abstract

Iconic gestures are abundantly used to express spatial information in human communication, and producing them requires transforming a representation of the referent into a resembling gestural form. This task is challenging because the mapping is determined by the visuo-spatial features of the referent, the overall discourse context, and the concomitant speech, and because its outcome varies considerably across speakers. We present GNetIc, a framework that combines data-driven and model-based techniques to model the generation of iconic gestures with Bayesian decision networks. Drawing on extensive empirical data, we discuss how this method allows for simulating both speaker-specific and speaker-independent gesture production. Modeling results from a prototype implementation are presented and evaluated.
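To make the combination of learned and hand-crafted choices concrete, the following is a minimal, purely illustrative Python sketch, not the GNetIc networks themselves and not drawn from the paper's corpus: a conditional probability table over gesture techniques is estimated from a toy annotated corpus (the data-driven part), and a hand-written rule then fixes handedness (the model-based part). All feature names, values, and counts are hypothetical.

```python
# Minimal illustrative sketch: combining a corpus-estimated distribution
# (data-driven) with hand-written rules (model-based) for gesture choices.
# All feature names, values, and counts are hypothetical placeholders and
# do NOT reproduce the GNetIc networks or the paper's corpus.

from collections import defaultdict

# Hypothetical annotated corpus: (referent shape, information status,
# observed gesture technique). Restricting it to one speaker would give a
# speaker-specific model; pooling all speakers a speaker-independent one.
CORPUS = [
    ("round",   "new",   "drawing"),
    ("round",   "new",   "shaping"),
    ("round",   "given", "drawing"),
    ("longish", "new",   "shaping"),
    ("longish", "given", "posturing"),
    ("longish", "new",   "shaping"),
]

def estimate_cpt(data, smoothing=1.0):
    """Data-driven part: Laplace-smoothed conditional probability table
    P(technique | shape, information status)."""
    techniques = sorted({tech for _, _, tech in data})
    counts = defaultdict(lambda: defaultdict(float))
    for shape, status, tech in data:
        counts[(shape, status)][tech] += 1.0
    cpt = {}
    for parents, tech_counts in counts.items():
        total = sum(tech_counts.values()) + smoothing * len(techniques)
        cpt[parents] = {t: (tech_counts[t] + smoothing) / total
                        for t in techniques}
    return cpt

def choose_handedness(technique, shape):
    """Model-based part: a hand-written decision rule, no corpus counts."""
    if technique == "shaping" and shape == "round":
        return "two-handed"  # e.g. enclosing a round referent with both hands
    return "right-handed"

def generate_gesture(cpt, shape, status):
    """Query the toy network: take the most probable technique given the
    evidence, then derive handedness with the rule-based node."""
    probs = cpt[(shape, status)]
    technique = max(probs, key=probs.get)
    return {"technique": technique,
            "handedness": choose_handedness(technique, shape),
            "p_technique": round(probs[technique], 3)}

if __name__ == "__main__":
    cpt = estimate_cpt(CORPUS)
    print(generate_gesture(cpt, "round", "new"))
    print(generate_gesture(cpt, "longish", "given"))
```

The actual framework covers more gesture features and conditions them on richer referent, discourse, and speech features, with some tables learned per speaker; the sketch only illustrates how learned probabilities and hand-crafted rules can coexist within one generation step.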




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bergmann, K., Kopp, S. (2009). GNetIc – Using Bayesian Decision Networks for Iconic Gesture Generation. In: Ruttkay, Z., Kipp, M., Nijholt, A., Vilhjálmsson, H.H. (eds) Intelligent Virtual Agents. IVA 2009. Lecture Notes in Computer Science (LNAI), vol. 5773. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04380-2_12

  • DOI: https://doi.org/10.1007/978-3-642-04380-2_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04379-6

  • Online ISBN: 978-3-642-04380-2

  • eBook Packages: Computer Science, Computer Science (R0)
