
Parallel control model for navigation tasks on service robots

Published under licence by IOP Publishing Ltd
Citation: Holman Montiel et al 2021 J. Phys.: Conf. Ser. 2135 012002. DOI: 10.1088/1742-6596/2135/1/012002


Abstract

Autonomous mobility remains an open research problem in robotics. It is a complex problem whose characteristics depend on the type of task and the environment in which the robot is intended to operate. Service robotics, in particular, still poses problems that have not been solved satisfactorily. Service robots must interact with people in environments designed for human beings, which implies that the basic sensors for structuring motion control and navigation schemes are those that replicate the human sense of sight. In normal operation, robots are expected to interpret visual information from the environment while following a motion policy that allows them to move from one point to another, consistent with their tasks. A good optical sensing system can be built around digital cameras, which allow visual identification routines to be applied both to the trajectory and to the surroundings. This research proposes a parallel control scheme with two loops that defines the movements of a service robot from images. The first loop implements a visual memory strategy based on a convolutional neural network: a deep learning model trained on images of the environment containing the characteristic elements of the navigation setting (several types of obstacles and different cases of free trajectories, with and without a marked navigation path). Connected in parallel to this first loop is a second loop responsible for estimating the specific distances to obstacles using a stereo vision system. The goal of this parallel loop is to quickly identify the obstacle points in front of the robot from the images using a bacterial interaction model. Together, the two loops form an information-feedback motion control framework that rapidly analyzes the environment and defines motion strategies from digital images, achieving real-time control driven by visual information. Among the advantages of the scheme are the low processing and memory costs on the robot and the fact that the environment does not need to be modified to facilitate navigation. The performance of the system is validated through simulation and laboratory experiments.
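As a rough illustration of the architecture described above, the following Python sketch shows one way the two loops could run in parallel and share their outputs through a simple fusion step. It is a minimal sketch under stated assumptions: the cnn_policy placeholder, the stereo focal length and baseline, the loop rates, and the distance-based fusion rule are illustrative and are not taken from the paper; in particular, the authors' bacterial interaction model for obstacle detection is not reproduced here, and a plain nearest-disparity depth estimate stands in for it.

# Minimal sketch of the two-loop parallel structure described in the abstract.
# The CNN policy, stereo geometry values, and the fusion rule below are
# illustrative assumptions, not the authors' implementation.
import threading
import time
import numpy as np

class SharedState:
    """Latest outputs of both loops, protected by a lock."""
    def __init__(self):
        self.lock = threading.Lock()
        self.policy_cmd = (0.0, 0.0)      # (linear v, angular w) from visual-memory loop
        self.nearest_obstacle_m = np.inf  # nearest obstacle distance from stereo loop

def cnn_policy(image):
    # Placeholder for the trained visual-memory CNN (hypothetical).
    # Returns a coarse motion command inferred from the scene.
    return 0.3, 0.0

def stereo_nearest_distance(disparity_px, focal_px=700.0, baseline_m=0.12):
    # Standard pinhole stereo relation: depth = f * B / disparity.
    # The nearest obstacle corresponds to the largest valid disparity.
    valid = disparity_px[disparity_px > 0]
    if valid.size == 0:
        return np.inf
    return float(focal_px * baseline_m / valid.max())

def visual_memory_loop(state, get_image, stop):
    # First loop: CNN-based visual memory policy.
    while not stop.is_set():
        v, w = cnn_policy(get_image())
        with state.lock:
            state.policy_cmd = (v, w)
        time.sleep(0.05)   # policy loop rate (~20 Hz, assumed)

def stereo_loop(state, get_disparity, stop):
    # Second loop: fast obstacle-distance estimation from stereo images.
    while not stop.is_set():
        d = stereo_nearest_distance(get_disparity())
        with state.lock:
            state.nearest_obstacle_m = d
        time.sleep(0.02)   # faster obstacle-checking rate (~50 Hz, assumed)

def fuse_command(state, safety_m=0.5):
    # Simple fusion rule (assumed): scale the CNN command down as the
    # nearest stereo obstacle approaches the safety distance.
    with state.lock:
        (v, w), d = state.policy_cmd, state.nearest_obstacle_m
    scale = min(1.0, max(0.0, (d - safety_m) / safety_m))
    return v * scale, w

if __name__ == "__main__":
    state, stop = SharedState(), threading.Event()
    fake_image = lambda: np.zeros((120, 160, 3), dtype=np.uint8)        # stand-in camera frame
    fake_disparity = lambda: np.random.uniform(1.0, 40.0, (120, 160))   # stand-in disparity map
    threads = [
        threading.Thread(target=visual_memory_loop, args=(state, fake_image, stop)),
        threading.Thread(target=stereo_loop, args=(state, fake_disparity, stop)),
    ]
    for t in threads:
        t.start()
    for _ in range(10):
        print("command (v, w):", fuse_command(state))
        time.sleep(0.1)
    stop.set()
    for t in threads:
        t.join()

The design point the sketch tries to capture is that the slower, learned policy loop and the faster, geometry-based obstacle loop run independently and only meet at the command-fusion step, so the obstacle check can override or attenuate the learned command in real time.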


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
