Abstract
The ability to recognize human activities is necessary to facilitate natural interaction between humans and robots. While humans can readily distinguish communicative actions from activities of daily living, robots cannot draw such inferences effectively. To enable intuitive human-robot interaction, we propose the use of human-like stylized gestures as communicative actions and contrast them with conventional activities of daily living. We present a simple yet effective approach that models pose trajectories using the directions traversed by human joints over the duration of an activity and represents the action as a histogram of direction vectors. The descriptor is computationally efficient as well as scale- and speed-invariant. In our evaluation, it achieved state-of-the-art classification accuracies on multiple datasets using off-the-shelf classification algorithms.
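To make the idea concrete, the sketch below shows one plausible way to build such a histogram-of-direction-vectors descriptor from a sequence of 3-D joint positions. This is an illustrative reconstruction, not the authors' implementation: the binning scheme (a 2-D azimuth/elevation grid), the bin count, and the handling of near-static joints are all assumptions.

```python
import numpy as np

def direction_histogram(poses, n_bins=8):
    """Histogram-of-direction-vectors descriptor (illustrative sketch).

    poses: array of shape (T, J, 3) -- T frames, J joints, 3-D coordinates.
    Frame-to-frame joint displacements are reduced to unit direction
    vectors, so uniform scaling of the skeleton does not change the
    descriptor; normalizing the histogram makes it independent of how
    many frames (i.e. how fast) the activity was performed.
    """
    disp = np.diff(poses, axis=0).reshape(-1, 3)   # (T-1)*J displacement vectors
    norms = np.linalg.norm(disp, axis=1)
    disp = disp[norms > 1e-6]                      # discard near-static joints
    disp /= np.linalg.norm(disp, axis=1, keepdims=True)  # keep direction only
    azim = np.arctan2(disp[:, 1], disp[:, 0])            # azimuth in [-pi, pi]
    elev = np.arcsin(np.clip(disp[:, 2], -1.0, 1.0))     # elevation in [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        azim, elev, bins=n_bins,
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    hist = hist.ravel()
    total = hist.sum()
    return hist / total if total > 0 else hist     # speed/length invariance
```

The resulting fixed-length vector (here `n_bins * n_bins` entries) can be fed directly to any off-the-shelf classifier such as an SVM.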
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Chrungoo, A., Manimaran, S.S., Ravindran, B. (2014). Activity Recognition for Natural Human Robot Interaction. In: Beetz, M., Johnston, B., Williams, MA. (eds) Social Robotics. ICSR 2014. Lecture Notes in Computer Science(), vol 8755. Springer, Cham. https://doi.org/10.1007/978-3-319-11973-1_9
DOI: https://doi.org/10.1007/978-3-319-11973-1_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-11972-4
Online ISBN: 978-3-319-11973-1