Activity Recognition for Natural Human Robot Interaction

  • Conference paper
Social Robotics (ICSR 2014)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8755)

Abstract

The ability to recognize human activities is necessary for natural interaction between humans and robots. While humans can readily distinguish communicative actions from activities of daily living, robots cannot draw such inferences effectively. To enable intuitive human-robot interaction, we propose the use of human-like stylized gestures as communicative actions and contrast them with conventional activities of daily living. We present a simple yet effective approach to modelling pose trajectories: the directions traversed by human joints over the duration of an activity are accumulated into a histogram of direction vectors that represents the action. The descriptor is computationally efficient as well as scale and speed invariant. In our evaluation, it achieved state-of-the-art classification accuracies on multiple datasets using off-the-shelf classification algorithms.
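The descriptor summarized above — quantizing the directions of per-joint displacement vectors into a normalized histogram — could be sketched roughly as follows. The abstract does not specify the binning scheme, so the azimuth/elevation quantization, the bin counts, and the function name below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def direction_histogram(joints, n_az=8, n_el=4, eps=1e-8):
    """Sketch of a histogram-of-direction-vectors descriptor.

    joints: (T, J, 3) array of 3-D joint positions over T frames.
    Returns a normalized histogram of length J * n_az * n_el.
    """
    disp = np.diff(joints, axis=0)                 # (T-1, J, 3) frame-to-frame displacements
    norms = np.linalg.norm(disp, axis=-1)          # (T-1, J) displacement magnitudes
    J = joints.shape[1]
    # Direction of each displacement as (azimuth, elevation) angles;
    # using only direction (not magnitude) gives scale invariance.
    az = np.arctan2(disp[..., 1], disp[..., 0])    # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(disp[..., 2] / np.maximum(norms, eps), -1.0, 1.0))
    az_bin = np.minimum(((az + np.pi) / (2 * np.pi) * n_az).astype(int), n_az - 1)
    el_bin = np.minimum(((el + np.pi / 2) / np.pi * n_el).astype(int), n_el - 1)
    hist = np.zeros((J, n_az * n_el))
    for j in range(J):
        moving = norms[:, j] > eps                 # ignore frames where the joint is stationary
        bins = az_bin[moving, j] * n_el + el_bin[moving, j]
        np.add.at(hist[j], bins, 1.0)              # unbuffered accumulation into direction bins
    total = hist.sum()
    if total > 0:
        hist /= total                              # normalized counts: roughly duration/speed invariant
    return hist.ravel()
```

A normalized histogram of directions depends on where joints move, not how far or how fast, which is one plausible reading of the scale and speed invariance claimed above.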





Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Chrungoo, A., Manimaran, S.S., Ravindran, B. (2014). Activity Recognition for Natural Human Robot Interaction. In: Beetz, M., Johnston, B., Williams, M.-A. (eds) Social Robotics. ICSR 2014. Lecture Notes in Computer Science, vol 8755. Springer, Cham. https://doi.org/10.1007/978-3-319-11973-1_9

  • DOI: https://doi.org/10.1007/978-3-319-11973-1_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11972-4

  • Online ISBN: 978-3-319-11973-1

  • eBook Packages: Computer Science (R0)
