A biologically inspired method for vision-based docking of wheeled mobile robots☆
Introduction
Drawing inspiration from the animal kingdom is currently very popular among roboticists [1], [2]; this trend is termed “biomimetics”. Robot navigation strategies derived in this way, often categorized as “behavioral” or “reactive” robotics, aim at the construction of simple control strategies that use direct sensory information rather than a structured environmental model. Such strategies demonstrate an intimate relationship between movement control and vision. The use of vision is particularly appropriate when robots must operate in a dynamic environment.
In this paper we present one such control strategy, together with its experimental evaluation, for the problem of positioning a wheeled robot at a target location with a specified heading, i.e. docking, using information provided by a video camera. The kinematics of the robot are non-holonomic, so standard techniques of visual servoing (see, e.g., [3]) cannot be applied directly. We introduce a change of variables and a camera-space regulation condition that allow the problem to be solved with a relatively simple nonlinear control law.
This paper draws on previous work in precision missile guidance with an impact-angle constraint [4], [5], which was built on a combination of geometric considerations and recent results in robust control and filtering theory [6], [7], [8], [9], [10].
The remarkable ability of honeybees and other insects to navigate effectively using very little information is a source of inspiration for the proposed control strategy. In particular, the work of Srinivasan and his co-authors [11], [12], [13] explains the use of optical flow in honeybee navigation, where a honeybee makes a smooth landing on a surface without knowing its vertical height above that surface. Analogously, the control strategy we present, which was originally published by the co-authors [14], is based solely on instantaneously available visual information and requires no information about the distance to the target. It is therefore particularly suitable for robots equipped with a video camera as their primary sensor.
From a behavioral point of view, the problem of controlling eye–head systems is a fundamental issue in the completion of specific tasks [15]. In this paper, we describe and experimentally investigate a vision-based docking system [16] for controlling a wheeled mobile robot’s approach to a static target using a video camera. The docking system consists of a behavior-based control law and a vision system design. The vision system design includes a pan video camera with a visual gaze algorithm that mimics the ability of many insects to control their direction of gaze, enabling fixation on a specific part of the environment. As a result, it captures images that are better suited to completing the task.
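The gaze algorithm itself is presented later with the vision system design; purely as a rough sketch of the fixation idea, a minimal proportional pan controller that keeps the target near the image centre could look as follows (the function name, gain value, and sign convention are ours, not the paper’s):

    def pan_rate_command(target_x_px, image_width_px, k_pan=0.002):
        # Horizontal pixel error between the tracked target and the image centre.
        error_px = target_x_px - image_width_px / 2.0
        # Proportional pan-rate command (rad/s); the gain and sign convention are
        # assumptions and depend on the particular pan unit being driven.
        return -k_pan * error_px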
Computer vision techniques [17] allow wheeled robots to interpret their environment. Underlying all of these techniques is the need to recognize an object of interest in the environment. We use both edge and region detection techniques: intersecting the detected edges of a rectangular target produces its corners, and region information is employed in the form of optical flow. These visual parameters are then provided to the control law, which regulates the motion of the wheeled robot toward the target.
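The corner-extraction and optical-flow routines actually used are detailed in the appendix; as an illustrative sketch only, the same kind of measurement (a few corner features plus their sparse flow) could be obtained with standard OpenCV calls, where the parameter values below are placeholders:

    import cv2

    def corners_and_flow(prev_gray, gray):
        # Detect a few strong corner features (e.g. the four corners of a
        # rectangular target) in the previous grayscale frame.
        corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=4,
                                          qualityLevel=0.01, minDistance=10)
        if corners is None:
            return None, None
        # Track the corners into the current frame with pyramidal Lucas-Kanade;
        # the displacement of each successfully tracked point is a sparse
        # optical-flow sample.
        next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, corners, None)
        ok = status.ravel() == 1
        return corners[ok], (next_pts - corners)[ok]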
The proposed vision-based docking system was implemented and verified in a series of experiments using a wheeled robot and a pan video camera in a laboratory setting. In each experiment, the aim was to dock the wheeled robot at a specified location, with a different heading prescribed in each case. The experimental results demonstrate the effectiveness of the control law.
Docking is required in almost all applications of wheeled robots, particularly when they must recharge their batteries for long-term operation. It is envisaged that wheeled robots will play a significant role in search and rescue operations, in port automation, and even in autonomous highway systems.
The rest of the paper is organized as follows. Section 2 discusses work in the literature related to the docking problem. Section 3 defines the problem statement. Section 4 introduces the control law for docking a wheeled mobile robot and presents the derivation and mathematically rigorous analysis of the control law. Section 5 includes simulation studies on the robustness of the control law. Section 6 describes the design of the vision-based docking system and reports experimental results. Our conclusions are drawn in Section 7. Lastly, an appendix includes computer vision algorithms used in this work.
Section snippets
Related work
Most studies of this problem can be roughly grouped into two approaches. One focuses on the robot’s “configuration space”, i.e. the relative positions and angles of the robot and target, and perhaps obstacles, in the plane. All these relations are assumed to be available to the control law, and from them it chooses some desirable path. Examples are found in [18], [19], [20], [21] and references therein.
The method described in [18] is similar in its approach to the method presented in this
Problem statement
Our aim is to design a control law by which a car-like vehicle may dock to a target point. The information available to the control law is consistent with the use of a video camera as the main sensor.
We now describe the kinematic model of the robot and the measurements available to it, and then give a complete definition of the problem.
The relative position of vehicle and target is given in polar form (see Fig. 1). The vehicle’s position is an extension-less point in the plane, and
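For concreteness, the relative kinematics of a unicycle-type vehicle and a fixed target are commonly written in the following polar form; the symbols here are ours and give only a plausible reconstruction of the model sketched in Fig. 1:

    \dot r = -v\cos\alpha, \qquad \dot\lambda = \frac{v}{r}\sin\alpha, \qquad \dot\alpha = \frac{v}{r}\sin\alpha - \omega,

where r is the distance to the target, \lambda is the line-of-sight angle, \varphi is the vehicle heading, \alpha = \lambda - \varphi is the bearing of the target relative to the heading, v is the forward speed, and \omega is the turning rate (the control input).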
Control law
From the optical flow measurements we can cancel the component due to the robot’s rotation and retain only the component due to the relative motion of robot and dock-target; we call this remaining flow the translational flow. The control input is then chosen as a weighted combination of two terms: one can be thought of as the heading error and the other as the curvature error, as the car describes a path toward the target.
The two gains should both be positive, and can be
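Schematically, and with hypothetical variable names and gains rather than the exact expressions derived in this section, the rotation-compensation step and the resulting control computation can be sketched in Python as:

    def docking_control(flow_measured, flow_rotational, heading_error, k1=1.0, k2=0.5):
        # Remove the optical-flow component induced by the robot's (and camera's)
        # own rotation; what remains is due only to translation relative to the
        # dock-target.
        translational_flow = flow_measured - flow_rotational
        # Combine a heading-error term with a curvature-error-like term built from
        # the translational flow, each weighted by a positive gain.  This is a
        # schematic form, not the exact control law of the paper.
        return -(k1 * heading_error + k2 * translational_flow)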
Robustness
It has been mentioned in the literature that a particularly important test of a docking algorithm is the robustness of its terminal positioning precision to imperfect modeling of the kinematics and camera calibration [27], [19].
The simulation was run with a fixed set of parameter values and initial conditions for the robot.
These parameters determine the approximate size of the region within which the path could begin to diverge. Note that in all
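To indicate how such a robustness test can be set up, the following minimal simulation drives a unicycle-type robot to a target using a generic bearing-feedback steering law while the measured bearing is scaled by a calibration error; the law, the gains, and the initial pose are illustrative assumptions, not the control law or the parameter values of this paper:

    import numpy as np

    def final_docking_error(calib_gain=1.0, k=1.5, v=0.2, dt=0.01, t_max=60.0):
        # Terminal distance from the target after a run in which the controller
        # sees a mis-scaled bearing (calib_gain = 1.0 means perfect calibration).
        x, y, phi = -2.0, 1.0, 0.0                    # illustrative initial pose
        for _ in range(int(t_max / dt)):
            r = np.hypot(x, y)
            if r < 0.02:                              # within 2 cm: treat as docked
                break
            lam = np.arctan2(-y, -x)                  # line of sight to the target at the origin
            alpha = np.arctan2(np.sin(lam - phi), np.cos(lam - phi))
            omega = k * calib_gain * alpha            # steering from the corrupted bearing
            x += v * np.cos(phi) * dt
            y += v * np.sin(phi) * dt
            phi += omega * dt
        return np.hypot(x, y)

    # Compare terminal precision with perfect and with 20% mis-scaled calibration.
    print(final_docking_error(1.0), final_docking_error(1.2))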
A vision-based docking system
In this section, we present the vision-based docking system depicted in Fig. 7 for implementing the behavior-based control law, which is strongly dependent on information from a video camera. We first describe a vision system for recognizing a target of interest and keeping it on the image plane, as well as providing control information to the control law. We then discuss the experimental setup and finally report the experimental results.
Our vision-based docking system uses three reference frames
Conclusion
We have presented a vision-based docking system for controlling a wheeled robot as it performs docking. The docking system consists of a behavior-based control law, based on a navigation technique called the CNG Principle, and a vision system design. The behavior-based control law is strongly dependent on information from a video camera. We have described a vision system design that consists of a pan video camera and a visual gaze algorithm which mimics the gaze behavior of insects. Also, computer vision
References (38)
- Biomimetic robot navigation, Robotics and Autonomous Systems (2000)
- A connection between H-infinity control and absolute stabilizability of uncertain systems, Systems and Control Letters (1994)
- Behaviour-based Robotics (1998)
- A tutorial on visual servo control, IEEE Transactions on Robotics and Automation (1996)
- Circular navigation guidance law for precision missile target engagements, Journal of Guidance, Control, and Dynamics (2006)
- The problem of precision missile guidance: LQR and H-infinity frameworks, IEEE Transactions on Aerospace and Electronic Systems (2003)
- Robust Control Design using H-infinity Methods (2000)
- Robust Kalman Filtering for Signals and Systems with Large Uncertainties (1999)
- Nonlinear versus linear control in the absolute stabilizability of uncertain linear systems with structured uncertainty, IEEE Transactions on Automatic Control (1995)
- Recursive state estimation for uncertain systems with an integral quadratic constraint, IEEE Transactions on Automatic Control (1995)
- How honeybees make grazing landings on flat surfaces, Biological Cybernetics
- Honeybee navigation en route to the goal: Visual flight control and odometry, Journal of Experimental Biology
- Biomimetic visual sensing and flight control, Aeronautical Journal
- On perceptual advantages of active robot vision, Journal of Robotic Systems
- Robot Vision
- Exponential stabilization of mobile robots with nonholonomic constraints, IEEE Transactions on Automatic Control
Emily Low was born in 1979 in Singapore. She is currently a Ph.D. candidate in Electrical Engineering at the University of New South Wales, Sydney, Australia. She received her B.E. (Hons 1) degree in 2003 from Nanyang Technological University, Singapore. From 1999 to 2000, she worked as a technologist on navigation technology at the Defence Science Organization, Singapore. From 2003 to 2004, she worked as a software engineer on smart card technology at Gemplus Technologies Asia. Her current research interests include vision-based navigation and control of mobile robots.
Ian R. Manchester was born in Sydney, Australia, in 1979. He completed the B.E. (Hons 1) degree in 2001 and the Ph.D. degree in 2005, both in Electrical Engineering at the University of New South Wales, Sydney, Australia. Since 2005 he has held a post-doctoral position in the control systems group at the Department of Applied Physics and Electronics, Umeå University, Umeå, Sweden. In 2006 he spent a month as a visiting researcher at the Intelligent Systems Research Centre, Sungkyunkwan University, Suwon, South Korea. His current research interests include vision-based guidance and robotics, control of underactuated and non-holonomic systems, and modelling and identification of the human cerebrospinal fluid system. He has published several journal and conference articles on these topics.
Andrey V. Savkin was born in 1965 in Norilsk, USSR. He received the M.S. degree (1987) and the Ph.D. degree (1991) from The Leningrad University, USSR. From 1987 to 1992, he worked in the All-Union Television Research Institute, Leningrad. From 1992 to 1994, he held a postdoctoral position in the Department of Electrical Engineering, Australian Defence Force Academy, Canberra. From 1994 to 1996, he was a Research Fellow with the Department of Electrical and Electronic Engineering and the Cooperative Research Center for Sensor Signal and Information Processing at the University of Melbourne, Australia. Since 1996, he has been a Senior Lecturer, and then an Associate Professor with the Department of Electrical and Electronic Engineering at the University of Western Australia, Perth. Since 2000, he has been a Professor with the School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney.
His current research interests include robust control and filtering, hybrid dynamical systems, missile guidance, networked control systems, computer-integrated manufacturing, control of mobile robots, computer vision, and application of control and signal processing to biomedical engineering and medicine. He has published four books and numerous journal and conference papers on these topics and served as an Associate Editor for several international journals and conferences.
☆ This work was supported by the Australian Research Council.