Abstract
We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance while the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized sequentially, with the obstacle acting as an intermediary target. Furthermore, we demonstrate that a notion of the workspace to be travelled by the hand is explicitly embedded in a forward planning scheme, which is actively involved in detecting obstacles along the way during reaching. We find that gaze proactively coordinates the pattern of eye–arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye, arm and hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking the similar control scheme found in humans. We validate our model for visuomotor control of a humanoid robot.
Notes
Humans can perform prehensile actions without visual feedback, by relying on the tactile and auditory senses.
Active vision systems employ gaze control mechanisms to actively position the camera coordinate system in order to manipulate the visual constraints.
These sensors are not controlled in the sense of the active vision paradigm.
At the end of all trials, we asked 2 subjects to try to reach for the target when the champagne glass (obstacle) was present, but without modifying their path (as in the no-obstacle setup). Unsurprisingly, the arm/hand always collided with the champagne glass when it was positioned at obs2, obs3 and obs4; for obs1 and obs5 the hand collided in 6 out of 8 trials. The hand never collided when the obstacle was at positions obs6, obs7 and obs8.
The gaze exit time from the obstacle is defined as the time from the beginning of a trial until the onset of a saccade away from the fixated obstacle. The arm exit time is defined as the time from the beginning of a trial until the moment when the arm reaches the closest distance to the obstacle and starts moving toward the target.
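The two exit-time measures defined above can be sketched in code. This is a minimal illustration only: the trajectory format (time stamps, a boolean on-obstacle trace for gaze, 2-D arm positions) is an assumption for the sketch, not the authors' actual processing pipeline.

```python
import math

def gaze_exit_time(t, on_obstacle):
    """Onset of the saccade away from the obstacle: time of the last
    sample at which the gaze is still on the obstacle (boolean trace)."""
    last = max((i for i, on in enumerate(on_obstacle) if on), default=None)
    return t[last] if last is not None else None

def arm_exit_time(t, arm_xy, obstacle_xy):
    """Time at which the arm reaches its closest distance to the
    obstacle, after which it moves on toward the target."""
    dists = [math.dist(p, obstacle_xy) for p in arm_xy]
    return t[dists.index(min(dists))]
```

The gaze–arm exit lag analyzed in the study is then simply `arm_exit_time(...) - gaze_exit_time(...)` for each trial.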
The coordination of the gaze and arm exit times from the obstacle for Subject 1 substantially differed from that of the other subjects. She showed a significantly larger gaze–arm lag when exiting the zone of the obstacle (mean 448 ms, SD 210.5 ms) compared to the rest of the subjects (mean 220.78 ms, SD 135.75 ms), and this difference reached statistical significance [one-way ANOVA: \(F(1,39)=10.93,p=0.002\)]. A careful analysis of the video from the eye tracker revealed her visuomotor strategy. Interestingly, her eye and arm movements were normal and the gaze guided the arm in all trials. However, she mostly used a coordination strategy in which the gaze first visits the obstacle, and only at the moment the gaze switches toward the target did she start to move the arm, i.e., the start of her arm movement was significantly postponed. In all the other measures she did not significantly differ from the rest of the subjects.
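The one-way ANOVA reported above reduces to the ratio of between-group to within-group mean squares. The following pure-Python sketch computes the F statistic for arbitrary groups; it is illustrative only, and the tiny example groups in the test are invented, not the study's data.

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total sample size
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between, df_within = k - 1, n - k  # e.g., (1, 39) for 2 groups, 41 trials
    return (ss_between / df_between) / (ss_within / df_within)
```

Comparing Subject 1's per-trial lags against the pooled lags of the remaining subjects as two groups yields the \(F(1,39)\) statistic quoted in the note.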
It is important to note that Johansson et al. (2001) focused most of their analysis on gaze and arm timing with respect to entering or exiting the so-called landmark zones. They defined a landmark zone as an area with a radius of 3\(^{\circ }\) of visual angle (2 cm) in the work plane in all directions from the corresponding object in the workspace, including the obstacle. They found that the gaze and arm have almost identical exit times from the obstacle landmark zone. Considering that the approximate overall vertical arm displacement in their experiment was 12 cm, these landmark zones established a coarse representation of the workspace. However, from the plots where precise spatio-temporal measures are presented (Fig. 6A in their paper), it can be seen that the median gaze and arm exit times at the exact location of the obstacle differ by approximately 200 ms, with the gaze exiting the obstacle first. Similar gaze–arm exit lags hold for the other intermediary targets (e.g., support surface, target switch and bar tool).
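The 3\(^{\circ }\) \(\leftrightarrow\) 2 cm correspondence stated above implies a viewing distance of roughly 38 cm; that distance is inferred here from the two numbers, not a figure reported by Johansson et al. The exact conversion between visual angle and workspace size can be sketched as:

```python
import math

def angle_to_size(theta_deg, viewing_dist_cm):
    """Workspace size (cm) subtended by a visual angle at a given
    viewing distance (exact formula, not the small-angle approximation)."""
    return 2 * viewing_dist_cm * math.tan(math.radians(theta_deg) / 2)

def size_to_angle(size_cm, viewing_dist_cm):
    """Visual angle (deg) subtended by a workspace size."""
    return 2 * math.degrees(math.atan(size_cm / (2 * viewing_dist_cm)))
```

At a viewing distance of about 38.2 cm, a 3\(^{\circ }\) angle subtends very nearly 2 cm, consistent with the landmark-zone definition.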
References
Abrams R, Meyer D, Kornblum S (1990) Eye-hand coordination: oculomotor control in rapid aimed limb movements. J Exp Psychol Hum Percept Perform 16(2):248
Aivar M, Brenner E, Smeets J (2008) Avoiding moving obstacles. Exp Brain Res 190(3):251–264
Alberts JL, Saling M, Stelmach GE (2002) Alterations in transport path differentially affect temporal and spatial movement parameters. Exp Brain Res 143(4):417–425
Aloimonos J, Weiss I, Bandyopadhyay A (1988) Active vision. Int J Comput Vis 1(4):333–356
Andersen RA, Cui H (2009) Intention, action planning, and decision making in parietal–frontal circuits. Neuron 63(5):568–583
Bajcsy R (1988) Active perception. Proc IEEE 76(8):966–1005
Bajcsy R, Campos M (1992) Active and exploratory perception. CVGIP Image Underst 56(1):31–40
Baldauf D, Deubel H (2010) Attentional landscapes in reaching and grasping. Vis Res 50(11):999–1013
Ballard D (1991) Animate vision. Artif Intell 48(1):57–86
Ballard DH, Hayhoe MM, Pelz JB (1995) Memory representations in natural tasks. J Cogn Neurosci 7(1):66–80
Bendahan P, Gorce P (2006) A neural network architecture to learn arm motion planning in grasping tasks with obstacle avoidance. Robotica 24(2):197–204
Berthier NE, Clifton RK, Gullapalli V, McCall DD, Robin DJ (1996) Visual information and object size in the control of reaching. J Mot Behav 28(3):187–197
Bishop CM (2007) Pattern recognition and machine learning (information science and statistics). Springer, New York
Bowman M, Johannson R, Flanagan J (2009) Eye-hand coordination in a sequential target contact task. Exp Brain Res 195(2):273–283
Brouwer A, Franz V, Gegenfurtner K (2009) Differences in fixations between grasping and viewing objects. J Vis 9(1):1–8
Castiello U, Bennett K, Mucignat C (1993) The reach to grasp movement of blind subjects. Exp Brain Res 96(1):152–162
Castiello U, Bennett K, Stelmach G (1993) Reach to grasp: the natural response to perturbation of object size. Exp Brain Res 94(1):163–178
Chaumette F, Hutchinson S (2008) Visual servoing and visual tracking. In: Siciliano B, Khatib O (eds) Springer Handbook of Robotics. Springer, Berlin, Heidelberg, pp 563–583
Dalton K, Nacewicz B, Johnstone T, Schaefer H, Gernsbacher M, Goldsmith H, Alexander A, Davidson R (2005) Gaze fixation and the neural circuitry of face processing in autism. Nat Neurosci 8(4):519–526
Dean J, Brüwer M (1994) Control of human arm movements in two dimensions: paths and joint control in avoiding simple linear obstacles. Exp Brain Res 97(3):497–514
Deubel H, O’Regan JK, Radach R (2000) Attention, information processing, and eye movement control. In: Kennedy A, Radach R, Heller D, Pynte J (eds) Reading as a perceptual process. Elsevier, Oxford, pp 355–374
Engbert R, Kliegl R et al (2003) Microsaccades uncover the orientation of covert attention. Vis Res 43(9):1035–1045
Espiau B, Chaumette F, Rives P (1992) A new approach to visual servoing in robotics. IEEE Trans Robot Autom 8(3):313–326
Fisk J, Goodale M (1985) The organization of eye and limb movements during unrestricted reaching to targets in contralateral and ipsilateral visual space. Exp Brain Res 60(1):159–178
Gentilucci M, Toni I, Chieffi S, Pavesi G (1994) The role of proprioception in the control of prehension movements: a kinematic study in a peripherally deafferented patient and in normal subjects. Exp Brain Res 99(3):483–500
Gibson JJ (1950) The perception of the visual world. Houghton Mifflin, Boston
González-Alvarez C, Subramanian A, Pardhan S (2007) Reaching and grasping with restricted peripheral vision. Ophthalmic Physiol Opt 27(3):265–274
Goodale MA (2011) Transforming vision into action. Vis Res 51(13):1567–1587
Goodale MA, Haffenden A (1998) Frames of reference for perception and action in the human visual system. Neurosci Biobehav Rev 22(2):161–172
Grasso R, Prévost P, Ivanenko Y, Berthoz A et al (1998) Eye–head coordination for the steering of locomotion in humans: an anticipatory synergy. Neurosci Lett 253(2):115–118
Haggard P, Wing A (1991) Remote responses to perturbation in human prehension. Neurosci Lett 122(1):103–108
Haggard P, Wing A (1995) Coordinated responses following mechanical perturbation of the arm during prehension. Exp Brain Res 102(3):483–494
Hayhoe M, Ballard D (2005) Eye movements in natural behavior. Trends Cogn Sci 9(4):188–194
Hayhoe M, Shrivastava A, Mruczek R, Pelz J (2003) Visual memory and motor planning in a natural task. J Vis 3(1):49–63
Henderson JM, Hollingworth A (1999) The role of fixation position in detecting scene changes across saccades. Psychol Sci 10(5):438–443
Hesse C, Deubel H (2010) Effects of altered transport paths and intermediate movement goals on human grasp kinematics. Exp Brain Res 201(1):93–109
Hesse C, Deubel H (2011) Efficient grasping requires attentional resources. Vis Res 51(11):1223–1231
Hicheur H, Berthoz A (2005) How do humans turn? head and body movements for the steering of locomotion. In: IEEE-RAS international conference on humanoid robots (Humanoids), IEEE, pp 265–270
Hoffmann H, Schenck W, Möller R (2005) Learning visuomotor transformations for gaze-control and grasping. Biol Cybern 93(2):119–130
Hulse M, McBride S, Lee M (2009) Robotic hand-eye coordination without global reference: a biologically inspired learning scheme. In: IEEE international conference on development and learning (ICDL), IEEE, pp 1–6
Inhoff AW, Radach R (1998) Definition and computation of oculomotor measures in the study of cognitive processes. In: Underwood G (ed) Eye guidance in reading and scene perception. Elsevier, Amsterdam, pp 29–53
Jacob R, Karn K (2003) Eye tracking in human–computer interaction and usability research: ready to deliver the promises. Mind 2(3):4
Jakobson L, Goodale M (1991) Factors affecting higher-order movement planning: a kinematic analysis of human prehension. Exp Brain Res 86(1):199–208
Jamone L, Natale L, Nori F, Metta G, Sandini G (2012) Autonomous online learning of reaching behavior in a humanoid robot. Int J Humanoid Robot 9(03):1–26
Javier Traver V, Bernardino A (2010) A review of log-polar imaging for visual perception in robotics. Robot Auton Syst 58(4):378–398
Jeannerod M (1984) The timing of natural prehension movements. J Mot Behav 16(3):235–254
Johansson R, Westling G, Bäckström A, Flanagan J (2001) Eye–hand coordination in object manipulation. J Neurosci 21(17):6917–6932
Johansson RS, Flanagan JR (2009) Sensory control of object manipulation. In: Sensorimotor control of grasping: physiology and pathophysiology. Cambridge University Press, Cambridge
Kavraki LE, Svestka P, Latombe JC, Overmars MH (1996) Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Trans Robot Autom 12(4):566–580
Khansari-Zadeh S, Billard A (2011) Learning stable nonlinear dynamical systems with Gaussian mixture models. IEEE Trans Robot 27(5):943–957
Khansari-Zadeh SM, Billard A (2012) A dynamical system approach to realtime obstacle avoidance. Auton Robots 32(4):433–454
Khatib O (1986) Real-time obstacle avoidance for manipulators and mobile robots. Int J Robot Res 5(1):90–98
Kuffner Jr J, LaValle S (2000) RRT-connect: an efficient approach to single-query path planning. In: IEEE international conference on robotics and automation (ICRA), IEEE, vol 2, pp 995–1001
Land M (1999) Motion and vision: why animals move their eyes. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 185(4):341–352
Land M, Mennie N, Rusted J et al (1999) The roles of vision and eye movements in the control of activities of daily living. Perception 28(11):1311–1328
Land MF, Furneaux S (1997) The knowledge base of the oculomotor system. Philos Trans R Soc Lond Ser B Biol Sci 352(1358):1231–1239
Liversedge S, Findlay J (2000) Saccadic eye movements and cognition. Trends Cogn Sci 4(1):6–14
Lukic L, Santos-Victor J, Billard A (2012) Learning coupled dynamical systems from human demonstration for robotic eye–arm–hand coordination. In: Proceedings of the IEEE-RAS international conference on humanoid robots (Humanoids), Osaka, Japan
Lumelsky V, Skewis T (1990) Incorporating range sensing in the robot navigation function. IEEE Trans Syst Man Cybern 20(5):1058–1069
Mansard N, Lopes M, Santos-Victor J, Chaumette F (2006) Jacobian learning methods for tasks sequencing in visual servoing. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, pp 4284–4290
Metta G, Gasteratos A, Sandini G (2004) Learning to track colored objects with log-polar vision. Mechatronics 14(9):989–1006
Metta G, Natale L, Nori F, Sandini G, Vernon D, Fadiga L, von Hofsten C, Rosander K, Lopes M, Santos-Victor J et al (2010) The iCub humanoid robot: an open-systems platform for research in cognitive development. Neural Netw 23(8–9):1125–1134
Mishra A, Aloimonos Y, Fah CL (2009a) Active segmentation with fixation. In: 12th international conference on computer vision (ICCV), IEEE, pp 468–475
Mishra A, Aloimonos Y, Fermuller C (2009b) Active segmentation for robotics. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, pp 3133–3139
Mon-Williams M, Tresilian J, Coppard V, Carson R (2001) The effect of obstacle position on reach-to-grasp movements. Exp Brain Res 137(3):497–501
Natale L, Metta G, Sandini G (2005) A developmental approach to grasping. In: Developmental robotics AAAI spring symposium, vol 44
Natale L, Nori F, Sandini G, Metta G (2007) Learning precise 3d reaching in a humanoid robot. In: IEEE international conference on development and learning (ICDL), IEEE, pp 324–329
Neggers S, Bekkering H (2000) Ocular gaze is anchored to the target of an ongoing pointing movement. J Neurophysiol 83(2):639–651
Noris B, Keller J, Billard A (2010) A wearable gaze tracking system for children in unconstrained environments. Comput Vis Image Underst 115(4):476–486
Paillard J (1982) The contribution of peripheral and central vision to visually guided reaching. In: Ingle D, Goodale M, Marsfield R (eds) Analysis of visual behavior. MIT Press, Cambridge, pp 367–385
Pattacini U, Nori F, Natale L, Metta G, Sandini G (2010) An experimental evaluation of a novel minimum-jerk Cartesian controller for humanoid robots. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, pp 1668–1674
Paulignan Y, MacKenzie C, Marteniuk R, Jeannerod M (1991) Selective perturbation of visual input during prehension movements. Exp Brain Res 83(3):502–512
Pelisson D, Prablanc C, Goodale M, Jeannerod M (1986) Visual control of reaching movements without vision of the limb. Exp Brain Res 62(2):303–311
Prablanc C, Echallier J, Komilis E, Jeannerod M (1979) Optimal response of eye and hand motor systems in pointing at a visual target. Biol Cybern 35(2):113–124
Purdy KA, Lederman SJ, Klatzky RL (1999) Manipulation with no or partial vision. J Exp Psychol Hum Percept Perform 25(3):755
Rayner K (1998) Eye movements in reading and information processing: 20 years of research. Psychol Bull 124(3):372
Rizzolatti G, Fogassi L, Gallese V (1997) Parietal cortex: from sight to action. Curr Opin Neurobiol 7(4):562–567
Rossetti Y, Stelmach G, Desmurget M, Prablanc C, Jeannerod M (1994) The effect of viewing the static hand prior to movement onset on pointing kinematics and variability. Exp Brain Res 101(2):323–330
Rothkopf C, Ballard D (2009) Image statistics at the point of gaze during human navigation. Vis Neurosci 26(01):81–92
Rothkopf C, Ballard D, Hayhoe M (2007) Task and context determine where you look. J Vis 7(14):1–16
Sahbani A, El-Khoury S, Bidaud P (2012) An overview of 3d object grasp synthesis algorithms. Robot Auton Syst 60(3):326–336
Saling M, Alberts J, Stelmach G, Bloedel J (1998) Reach-to-grasp movements during obstacle avoidance. Exp Brain Res 118(2):251–258
Schenck W, Hoffmann H, Möller R (2011) Grasping of extrafoveal targets: a robotic model. New Ideas Psychol 29(3):235–259
Seara JF, Strobl KH, Schmidt G (2003) Path-dependent gaze control for obstacle avoidance in vision guided humanoid walking. In: IEEE international conference on robotics and automation (ICRA), IEEE, vol 1, pp 887–892
Shukla A, Billard A (2011) Coupled dynamical system based arm–hand grasping model for learning fast adaptation strategies. Robot Auton Syst 60(3):424–440
Simmons R (1996) The curvature–velocity method for local obstacle avoidance. In: IEEE international conference on robotics and automation (ICRA), IEEE, vol 4, pp 3375–3382
Sivak B, MacKenzie CL (1990) Integration of visual information and motor output in reaching and grasping: the contributions of peripheral and central vision. Neuropsychologia 28(10):1095–1116
Spijkers WA, Lochner P (1994) Partial visual feedback and spatial end-point accuracy of discrete aiming movements. J Mot Behav 26(3):283–295
Srinivasa SS, Berenson D, Cakmak M, Collet A, Dogar MR, Dragan AD, Knepper RA, Niemueller T, Strabala K et al (2012) Herb 2.0: lessons learned from developing a mobile manipulator for the home. Proc IEEE 100(8):2410–2428
Sung HG (2004) Gaussian mixture regression and classification. PhD thesis, Rice University
Tatler BW, Hayhoe MM, Land MF, Ballard DH (2011) Eye guidance in natural vision: reinterpreting salience. J Vis 11(5):1–23
Timmann D, Stelmach G, Bloedel J (1996) Grasping component alterations and limb transport. Exp Brain Res 108(3):486–492
Tresilian J (1998) Attention in action or obstruction of movement? A kinematic analysis of avoidance behavior in prehension. Exp Brain Res 120(3):352–368
Triesch J, Ballard DH, Hayhoe MM, Sullivan BT (2003) What you see is what you need. J Vis 3(1):86–94
Vernon D, von Hofsten C, Fadiga L (2010) A roadmap for cognitive development in humanoid robots, vol 11. Springer, Berlin
Viola P, Jones M (2001) Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society conference on computer vision and pattern recognition, IEEE, vol 1, p I-511
Wolpert D, Miall R, Kawato M (1998) Internal models in the cerebellum. Trends Cogn Sci 2(9):338–347
Wolpert D, Flanagan J et al (2001) Motor prediction. Curr Biol 11(18):R729–R732
Acknowledgments
This work was supported in part by EU projects POETICON++ (FP7-ICT-288382) and FIRST-MM (FP7-ICT 248258) and Fundação para a Ciência e a Tecnologia (FCT) doctoral grant (SFRH/BD/51072/2010) under IST-EPFL Joint Doctoral Initiative.
Lukic, L., Santos-Victor, J. & Billard, A. Learning robotic eye–arm–hand coordination from human demonstration: a coupled dynamical systems approach. Biol Cybern 108, 223–248 (2014). https://doi.org/10.1007/s00422-014-0591-9