Elsevier

Neurocomputing

Volume 151, Part 1, 3 March 2015, Pages 101-107

Vision-based anticipatory controller for the autonomous navigation of an UAV using artificial neural networks

https://doi.org/10.1016/j.neucom.2014.09.077

Abstract

A vision-based anticipatory controller for the autonomous indoor navigation of an unmanned aerial vehicle (UAV) is the topic of this paper. A dual Feedforward/Feedback architecture serves as the UAV's controller, and a K-NN classifier using the gray-level image histogram as the discriminant variables is applied for landmark recognition. After a brief description of the UAV, we first identify the two main components of its autonomous navigation, namely, landmark recognition and the dual controller inspired by the cerebellar system of living beings; then we focus on the anticipatory module, which has been implemented as an artificial neural network. Afterwards, the paper describes the experimental setup and discusses the experimental results, centered mainly on the UAV's basic behavior of the landmark approximation maneuver, which in topological navigation is known as the beaconing or homing problem.

Introduction

The control of unmanned aerial vehicles (UAVs) has lately been growing at unprecedented rates. For example, the European Union estimates that drones could account for up to 10% of the aviation market within 10 years. The wide variety of civil and military applications makes UAVs a perfect testbed for advanced intelligent techniques. In this paper we apply a dual Feedforward/Feedback architecture to the autonomous navigation of a UAV using visual topological maps [17], [5], which describe the environment as a set of landmarks or relevant places, modeled as the vertices of a labeled graph whose edges correspond to specific UAV maneuvers. As the landmarks are visual references, a fundamental problem in visual topological navigation is landmark recognition, so we devote a complete section of the paper to this topic (see the section "Recognition of visual landmarks" below).

The UAV's dual controller architecture is based on a computationally efficient internal model [7] that, from sensory images, provides a reactive response by a Feedback module combined with a predictive or anticipatory response by a Feedforward module. Through the Feedback module, a reactive control signal is obtained from the sensorial inputs; this behavior is aligned with brain activity. The anticipatory control signal, in turn, is the output of the Feedforward controller, which is aligned with cerebellar activity and based on a process of continuous knowledge acquisition in real time during the autonomous navigation of the UAV. This knowledge is consolidated in the Feedforward module, yielding automated behavior and performance benefits similar to those of the cerebellum [2].

We focus on the anticipatory (Feedforward) controller, which generates anticipatory control signals during the navigation of the UAV. The dual control architecture is based on Feedback-Error-Learning [9], where the Feedback controller trains the artificial neural network (the Feedforward controller) in real time at each iteration of the control loop.
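As a minimal illustration of the Feedback-Error-Learning idea, the following sketch (hypothetical names, gains, and plant, not the authors' code) uses the feedback command itself as the training signal for a one-layer linear stand-in for the ANN, so the anticipatory module gradually absorbs the control effort:

```python
import numpy as np

class LinearFeedforward:
    """One-layer stand-in for the paper's ANN: u_ff = w . x."""
    def __init__(self, n_in, lr=0.1):
        self.w = np.zeros(n_in)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def train(self, x, u_fb):
        # Feedback-Error-Learning rule: the feedback command u_fb is the
        # error signal that pushes the network toward the inverse model.
        self.w += self.lr * u_fb * x

def run(steps=200, kp=0.5, kd=0.1, target=1.0):
    net = LinearFeedforward(n_in=2)
    y = y_prev = 0.0
    fb = []
    for _ in range(steps):
        err = target - y
        u_fb = kp * err - kd * (y - y_prev)   # reactive PD command
        x = np.array([target, 1.0])           # simple state features
        u = u_fb + net.predict(x)             # dual command: reactive + anticipatory
        net.train(x, u_fb)
        y_prev, y = y, 0.8 * y + 0.2 * u      # toy first-order plant
        fb.append(abs(u_fb))
    return fb

fb = run()
print(fb[0], fb[-1])  # the feedback share of the command shrinks as the net learns
```

As the network converges, the feedback command tends to zero and the feedforward term carries the control, mirroring the cerebellar automation described above.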

Throughout the paper, we describe the components of the UAV's navigation: first, landmark recognition; second, the dual control architecture; and third, the anticipatory Feedforward controller based on an artificial neural network.

The experimental work carried out in our laboratory on the beaconing or homing problem is detailed, in which the UAV executes an approximation maneuver toward a target landmark in the topological map.

The paper ends with the results and benefits, the final conclusions, and future research lines.


Recognition of visual landmarks

The UAV's navigation system uses the onboard camera to capture images of the environment. These images are classified and used by the controller to generate the control commands in real time. More specifically, as the navigation system is based on a topological map, an efficient classification of the landmark images is vital to guarantee correct guidance of the UAV.

For landmark recognition we have used the standard gray-level histogram as the discriminant variables [16]
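A minimal sketch of this feature-plus-classifier pairing follows; the gray-level histogram as feature vector and the K-NN classifier follow the paper, while the synthetic data, bin count, and k value are illustrative assumptions:

```python
import numpy as np

def gray_histogram(img, bins=16):
    """Normalized gray-level histogram used as the discriminant feature vector."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def knn_classify(query_img, train_feats, train_labels, k=3):
    """Label a query image by majority vote among the k nearest histograms."""
    q = gray_histogram(query_img)
    d = np.linalg.norm(train_feats - q, axis=1)        # Euclidean distance
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return int(labels[np.argmax(counts)])

# Toy data: "dark" vs "bright" synthetic landmark patches.
rng = np.random.default_rng(1)
dark = [rng.integers(0, 100, size=(8, 8)) for _ in range(5)]
bright = [rng.integers(150, 256, size=(8, 8)) for _ in range(5)]
feats = np.array([gray_histogram(im) for im in dark + bright])
labels = np.array([0] * 5 + [1] * 5)

pred = knn_classify(rng.integers(150, 256, size=(8, 8)), feats, labels)
print(pred)  # → 1 (classified as a "bright" landmark)
```

Normalizing the histogram makes the feature invariant to image size, which suits comparisons between frames captured at different resolutions.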

The Feedforward/Feedback controller

The block diagram of the dual Feedforward/Feedback controller [7], [2] is shown in Fig. 3. Notice that the Feedback or reactive controller [9], [10] receives as input the same image error ε from the sensorial module.

We have implemented the controller following a dual Feedforward/Feedback control architecture [6], which combines a Feedback module (either based on a conventional PD control or on an error-gradient control [15]) and a Feedforward module (based on an
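For the PD variant of the Feedback module, a discrete PD law on the image error ε might look like the following sketch; the gains and the sampling period are illustrative assumptions, not values from the paper:

```python
class PDFeedback:
    """Reactive controller: command proportional to the image error and its rate."""
    def __init__(self, kp=0.6, kd=0.15, dt=1 / 30):   # dt ~ camera frame period
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_err = 0.0

    def command(self, err):
        d_err = (err - self.prev_err) / self.dt       # finite-difference derivative
        self.prev_err = err
        return self.kp * err + self.kd * d_err

pd = PDFeedback()
print(pd.command(1.0))   # step error: P term 0.6 plus a large transient D term
print(pd.command(1.0))   # error unchanged: D term vanishes, output 0.6
```

In the dual architecture this reactive command would be summed with the Feedforward module's output and would simultaneously serve as the network's training signal.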

Experimental work. UAV's approximation maneuver to a landmark

For the experiments we have used a Parrot AR.Drone 2.0 quadrotor [18], a robotics research platform [12] available to the general public.

In 2004 the Parrot company launched the project named AR.Drone with the final objective of producing a micro Unmanned Aerial Vehicle aimed at both the mass market of video games and the home industry [3].

All commands and images can be exchanged with a central controller (the dual control architecture) via an ad hoc Wi-Fi connection. The AR.Drone has an onboard HD

Conclusions and future work

This paper has presented a vision-based dual Feedforward/Feedback controller, based on the internal-model paradigm, for the indoor navigation of a UAV that uses a visual topological map to navigate autonomously in the environment. We have also proposed a solution to the problem of visual landmark recognition by means of a dual control architecture, in such a way that the convergence of the UAV's controller to a state of zero error is equivalent to the final recognition of the corresponding visual


References (21)


Cited by (30)

  • Heterogeneous formation control of multiple UAVs with limited-input leader via reinforcement learning

    2020, Neurocomputing
    Citation Excerpt:

    Nowadays, unmanned aerial vehicles (UAVs) have become increasingly important in many applications. UAVs can be used to perform crucial tasks, such as cargo transportation, continuous monitoring, topographic survey, and large-scale disaster relief (see, [1–4]). The key advantage of UAVs is the high flexibility: UAVs possess better cost-effectiveness, stronger adaptability, and better maneuverability, and do not suffer from casualties, compared to ground robots [5].

  • Neuroevolution-based autonomous robot navigation: A comparative study

    2020, Cognitive Systems Research
    Citation Excerpt:

    In addition to autonomous navigation domain, AI approaches have shown their superiority over classical methods for the various real-world problems (Hasani, Jalali, Rezaei, & Maleki, 2018; Jalali, Park, Vanani, & Pho, 2020; Maleki, Contreras-Reyes, & Mahmoudi, 2019; Moravveji, Khodadadi, & Maleki, 2019; Sohrabi, Vanani, Jalali, & Abedin, 2019; Zarrin, Maleki, Khodadai, & Arellano-Valle, 2019). Different types of ANNs such as feed-forward, radial basis function or recurrent have been extensively used in navigating mobile robots (Amiri, Abedi-Koupai, Jafar Jalali, & Mousavi, 2017; Jalali et al., 2019; Kebria, Khosravi, Nahavandi, Najdovski, & Hilton, 2018; Maravall, de Lope, & Fuentes, 2015; Pomerleau, 1991). A key trigger for interest in ANNs is in their properties including nonlinear mapping, massively parallel processing, great generalization power, and the ability to learn from examples (Jalali, Moro, et al., 2017; Khatami, Babaie, Khosravi, Tizhoosh, & Nahavandi, 2018; Moro, Ramos, Esmerado, & Jalali, 2019).

  • Deep reinforcement learning for controlling frontal person close-up shooting

    2019, Neurocomputing
    Citation Excerpt:

    These limitations increase the cost of aerial cinematography using drones, while, at the same time, possibly reduce the quality of the obtained shots. Several techniques have been proposed to overcome some of the previous limitations, ranging from planning methods [1,4,5], to crowd avoidance [6] and methods for intelligent control of drones [2]. Even though these techniques are capable of partially automating the control and shooting processes, it is not yet possible to develop drones that will be able to fly fully autonomously and shoot high-quality footage according to the director’s plan.

  • Energy evaluation of low-level control in UAVs powered by lithium polymer battery

    2017, ISA Transactions
    Citation Excerpt:

    The mid-level control is one of the most addressed topics in UAV motion control. Several control techniques have been widely studied and reported in the literature, such as neural networks [8,9], sliding mode control [10–12], Lyapunov theory [13–15], predictive control [16,17], state feedback control [18,19], fuzzy logic [20,21], optimal control [22,23], linear algebra theory [24,25], and a mix of them [26], among others. In most cases, the main objective is to track a desired reference, with bounded positions errors, using a stabilizing control law; but the real energy involved in the control strategy is not taken into account.

  • Adaptive RBFNNs/integral sliding mode control for a quadrotor aircraft

    2016, Neurocomputing
    Citation Excerpt:

    Quadrotor UAVs have received increasing attentions in the past decade, because of their specific characteristics such as autonomous flight, low cost, vertical takeoff/landing ability and onboard vision system [1,2], and their wide applications like surveillance, building exploration and information collection [3,4].


Dario Maravall received his M.Sc. in Telecommunication Engineering from the Universidad Politecnica de Madrid in 1978 and his Ph.D. degree at the same university in 1980. From 1980 to 1988 he was an Associate Professor of Computer Science and Cybernetics Engineering at the School of Telecommunication Engineering, Universidad Politecnica de Madrid. In 1988 he was promoted to full University Professor of Computer Science and Artificial Intelligence at the Computer Science Faculty, Universidad Politecnica de Madrid. From 2000 to 2004 he was the director of the Department of Artificial Intelligence, Universidad Politecnica de Madrid. His current research interests include pattern recognition, computer vision and cognitive autonomous robots. He has published extensively on these subjects and has directed more than 20 funded projects, including a five-year R&D project for the automated inspection of wooden pallets using computer vision techniques and robotic mechanisms, with several operating plants in a number of European countries and in the U.S. As a result of this multinational project he holds a patent issued by the European Patent Office at The Hague, The Netherlands. He has acted as a Technical Consultant for numerous private firms and is currently a Senior Researcher in the Computational Cognitive Robotics Group of the CAR (Centro de Automática y Robótica), an official research center belonging to the Universidad Politecnica de Madrid and to the CSIC (the Spanish National Council of Scientific Research), where his group is working on the development of heterogeneous multi-robot systems (mainly formed by UGVs and UAVs).

Javier de Lope (SM'94, M'98) received the M.Sc. in Computer Science from the Universidad Politécnica de Madrid in 1994 and the Ph.D. degree at the same university in 1998. Currently, he is an Associate Professor in the Department of Artificial Intelligence at the Universidad Politécnica de Madrid and a senior researcher in the Computational Cognitive Robotics Group at the Centro de Automática y Robótica. His current research interest is centered on the study and design of multi-robot systems, particularly on autonomous coordination and language emergence in robot teams. In the past he has led an R&D project for developing industrial robotic mechanisms that follow the guidelines of multi-robot systems and reconfigurable robotics, and he has also worked on projects related to computer-aided automatic driving by means of external cameras and range sensors and to the design and control of humanoid robots and unmanned flying vehicles.

Juan Pablo Fuentes received the Ph.D. in Artificial Intelligence from the Universidad Politécnica de Madrid (UPM) in 2014 and the M.Sc. in Artificial Intelligence from the same university in 2011. He received the B.Sc. in Software Engineering from UPM in 2010 and the B.Sc. in Computer Science from the same university in 1999. He is currently working as a Senior Researcher in Robotics at the Computational Cognitive Robotics Group (UPM). Since 1999 he has worked as a Software Engineer and has more than 15 years of professional experience in IT. His research interests are primarily in the areas of Robotics: Cognitive Robotics, Developmental Robotics, Internal Models, Teachable Robots, Machine Consciousness and UAVs (Unmanned Aerial Vehicles).
