Elsevier

Robotics and Autonomous Systems

Volume 110, December 2018, Pages 73-84

Analysis of cooperative localisation performance under varying sensor qualities and communication rates

https://doi.org/10.1016/j.robot.2018.09.010

Highlights

  • We alter and analyse parameters affecting Cooperative Localisation (CL).

  • Simulations are performed in MATLAB and validated with hardware-in-the-loop experiments.

  • Our findings will help determine the suitability of CL for a system.

Abstract

Cooperative Localisation (CL) is a robust technique used to improve localisation accuracy in multi-robot systems. However, there is a lack of research on how CL performs under different conditions. It is unclear when CL is worthwhile, and how CL performance is affected if the system changes. This information is particularly important for systems with robots that have limited power and processing, which cannot afford to constantly perform CL. This paper investigates CL under varying sensor qualities (position accuracy, yaw accuracy, sample rate), communication rates, and number of robots for both homogeneous and heterogeneous multi-robot systems. Trends are found in MATLAB simulations using the UTIAS dataset, and then validated on Kobuki robots using an OptiTrack-based system. We find that yaw accuracy has a substantial effect on performance, a communication rate that is too fast can be detrimental, and heterogeneous systems are better candidates for cooperative localisation than homogeneous systems.

Introduction

A fundamental challenge for mobile robotics is calculating the position and orientation (pose) of robots within their environment, known as localisation. This is necessary for robots to accurately interact with their environment, as well as with one another. In many industries, multiple robots operate within the same environment, forming multi-robot systems. Industries such as agriculture [1], warehouse automation [2], search and rescue [3], environment monitoring [4], healthcare assistance [5], mining [6], transport [7], and assembly [8] are beginning to use mobile robots in everyday operations. To address the localisation problem, robots are typically equipped with two types of sensors. The internal state of a robot is measured by interoceptive sensors, such as gyroscopes, accelerometers, and wheel encoders. Interoceptive sensors reliably provide data, but their pose errors accumulate over time. Exteroceptive sensors interact with the environment; examples include GPS, cameras, and LIDARs. Exteroceptive sensors do not suffer from error accumulation, but they are sensitive to environmental conditions. For example, localisation using GPS requires satellite signals, LIDARs require static terrain or recognisable features, and cameras require certain lighting conditions. There is a substantial body of research on creating systems that are robust to exteroceptive sensor outages [9], [10]. In multi-robot systems, the environment can be measured by multiple robots, which can then share their information to improve localisation. This is known as Cooperative Localisation (CL).

There are two major areas of research for CL. The first is Cooperative Simultaneous Localisation and Mapping (C-SLAM), where robots independently produce maps of the environment and then share and combine those maps. C-SLAM was a key contributor for the winning team of the MAGIC 2010 competition [11], where robots had to autonomously survey and map a 500 m × 500 m dynamic urban environment. Communication was not always available, so individual mapping and map fusion were necessary to continue surveillance during communication down-times. C-SLAM has also been used for tasks such as mapping a large area with aerial vehicles [12], localising underwater vehicles to reduce the need for surfacing [13], and identifying and tracking dynamic targets [14].

C-SLAM can be powerful, but it has requirements that make it unsuitable in certain systems. Firstly, each robot must have SLAM capabilities. This can inflate the cost of multi-robot systems, as effective SLAM often makes use of high quality sensors such as 3D LIDARs. Each robot must also be capable of processing data quickly, either through on-board processing or communication; C-SLAM is therefore not suitable for systems with inexpensive processors or unreliable communication. Secondly, SLAM performance is dependent on the type and number of landmarks in the environment [15]. For example, SLAM does not operate well in open areas where there are few recognisable features.

The other major area of CL research involves measuring and communicating inter-robot observations. This differs from C-SLAM in that no map sharing occurs. Robots observe one another, estimate each other's position, and communicate their estimates to the observed robots. There are no requirements on how robots localise and perform inter-robot measurements, allowing individual robots to have different sensors, processing capabilities, and internal representations of the environment. There is also less dependence on the environment, as the method operates provided the robots can detect one another. This method is the focus of this paper.

Communicating and processing inter-robot measurements incurs a cost in bandwidth, energy, and processing time. To date, CL papers have assumed these costs are negligible, and therefore communicate whenever possible. However, there are scenarios where bandwidth and power are not readily available, and over-use of these resources can lead to mission failure. Prior work has shown that CL can be used, but not when it should be used, nor what system changes can be made to improve its effectiveness.

We analyse the efficiency and effectiveness of using CL for indoor ground vehicles under varying sensor quality, inter-robot communication rate, and number of robots. We first establish trends using a commonly used dataset, and then compare these results against real indoor robots.

Section 1 introduces the concept of cooperative localisation and other recent works. Section 2 discusses the CL approach, including the implementation, sensor fusion, and calibration. Section 3 contains the simulation results, showing how the system configuration affects the effectiveness and efficiency of cooperative localisation. Section 4 describes the physical system, including hardware information, software flow, and implementation points of note. Section 5 presents results from physical tests, which are compared to simulation. Section 6 fits a multivariate performance function to the results. Section 7 discusses the results and their relevance to previous and future research. Section 8 concludes the paper.

Cooperative localisation (CL) was first addressed in 1994 [16], where robots periodically ‘leap-frogged’ past one another. Robots that remained stationary served as landmarks, allowing the moving robots to more accurately measure their own movement. More recent CL techniques involve communication between the robots, allowing cooperation without constraining robot movement.

An obvious use case for CL is localisation assistance: robots with accurate localisation capabilities assist in the localisation of robots with less expensive or broken sensors. This was shown to be beneficial in experiments where ground vehicle leaders localised followers [17]. CL can also be used as a backup localisation method, where robots attempt to communicate when their primary localisation source is unavailable. This was shown to work in a simulation of smart cars moving through tunnels and urban canyons [18]. GPS signals are unavailable in these areas, so CL was used until they returned.

CL is not restricted to land; it has also been used with aerial and underwater vehicles. A team of simulated underwater vehicles localised themselves by measuring their distance from a surface vehicle equipped with a GPS unit [19]. This provided accurate localisation for all vehicles without the need for high quality sensors in every vehicle. Ground and air vehicles have also performed CL to improve performance [20]: ground vehicles were equipped with QR codes, and drones carried cameras able to estimate distances and angles.

Inspired by the original 1994 technique, CL has been used to improve 3D mapping of large buildings [21]. Child robots periodically take the role of stationary landmarks in order to improve the localisation of a parent robot. The parent robot is equipped with a 3D LIDAR, and controls the child robots as a means to maintain a high localisation accuracy, which results in more accurate 3D maps.

There are a number of different ways to implement CL. Communication is often two-way so that communicated messages benefit both robots, and it was found that using range and bearing together is more useful than using either individually [22]. While many papers assume that inter-robot measurements will also identify the observed robot, some have dealt with anonymous detections [23]. There has also been work on network topologies for instances where communication between robots is non-trivial [24].

CL implementations most commonly use a decentralised framework [17], [25], [26], [27], citing the fragility and lack of scalability of centralised systems, and often use a centralised system as a benchmark. Recent research in this area [17], [24], [25], [28] has used a publicly available multi-robot collaboration dataset known as Multi-robot Cooperative Localisation And Mapping (MRCLAM) [29]. This dataset was collected from five ground vehicles in an indoor location equipped with visual markers for inter-robot observations. The dataset contains the raw inter-robot observations, as well as the ground truth position and orientation of each robot as recorded using a 10-camera Vicon motion capture system with millimetre accuracy.

Recent literature has proposed a large number of cooperative localisation algorithms for improving scalability, reliability, and accuracy for different scenarios. Wanasinghe et al. [17] used a Cubature Kalman Filter (Gaussian-based particle filters that can intrinsically carry covariances, [30]) to show the improvements that can be made when a subset of robots have superior sensing capabilities. Čurn et al. [27] applied a Common Past-Invariant Ensemble Kalman Filter to improve localisation of a number of road vehicles with regions of no GPS coverage. They argue that cooperative localisation is much cheaper to implement than introducing localisation infrastructure such as beacons.

De Silva et al. [25] developed an algorithm that tracks other robots with registration and correction stages. Each robot communicates its inter-robot observations as well as its own velocity. The robots use a standard Kalman Filter for fusing interoceptive sensors, and Covariance Intersection to incorporate the tracking information. Covariance Intersection is very similar to a Kalman Filter; the key difference is that Kalman Filters assume all inputs are independent, whereas Covariance Intersection allows inputs to be correlated to an unknown degree.

The reason for these different filters is that cooperative localisation violates the assumption of independence used in a Kalman Filter. If a robot influences another robot’s pose estimate, then inter-robot observations from that robot are no longer independent. This circular reasoning problem is called data incest. Data incest leads to improper covariance values within a filter, which in turn leads to sub-optimal fusion.

Li et al. [26] developed a method using a Split Covariance Intersection Filter that specifically aims to deal with data incest in cooperative localisation. Regular sensory information is fused in an EKF, while the inter-robot information is fused separately by Covariance Intersection.
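The fusion rule that these filters share can be made concrete with a short sketch. The following is a minimal, generic implementation of the Covariance Intersection update with a caller-supplied weight; it is illustrative only, not the specific filter of any of the cited papers. Because it never assumes independence, the fused covariance remains consistent even when the two estimates share unknown correlation, which is exactly the data incest situation described above.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """Fuse two estimates (x1, P1) and (x2, P2) that may be correlated
    to an unknown degree. omega in [0, 1] weights the two sources.
    Illustrative sketch, not the implementation from any cited paper."""
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    # Convex combination in information (inverse-covariance) space.
    P = np.linalg.inv(omega * P1_inv + (1.0 - omega) * P2_inv)
    x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
    return x, P
```

In practice omega is usually chosen to minimise the trace or determinant of the fused covariance; a fixed value is shown here only to keep the sketch short.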

Section snippets

Approach

The cooperative localisation method used in this paper is as follows. If a robot detects another robot, it measures the range and bearing between them. The inter-robot observation is combined with the observing robot’s pose to produce a position estimate of the observed robot. This can be seen graphically in Fig. 1(a). The position uncertainty of the observed robot is calculated from the pose uncertainty of the observing robot and the uncertainty of the inter-robot measurement, as shown
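The observation step described above can be sketched as follows. This is an illustrative reconstruction under our own naming (the function and variable names are not from the paper): the observer's pose and a range/bearing measurement produce a position estimate of the observed robot, with uncertainty propagated to first order through the Jacobians.

```python
import numpy as np

def observe_robot(pose, pose_cov, rng, brg, meas_cov):
    """Estimate the observed robot's position from the observer's pose
    (x, y, yaw) and a range/bearing measurement. pose_cov is the 3x3
    pose covariance; meas_cov is the 2x2 range/bearing covariance.
    Illustrative sketch of the described step, not the paper's code."""
    x, y, yaw = pose
    a = yaw + brg  # bearing expressed in the world frame
    est = np.array([x + rng * np.cos(a),
                    y + rng * np.sin(a)])

    # Jacobian of the estimate w.r.t. the observer's pose (x, y, yaw).
    J_pose = np.array([[1.0, 0.0, -rng * np.sin(a)],
                       [0.0, 1.0,  rng * np.cos(a)]])
    # Jacobian of the estimate w.r.t. the measurement (range, bearing).
    J_meas = np.array([[np.cos(a), -rng * np.sin(a)],
                       [np.sin(a),  rng * np.cos(a)]])

    # First-order propagation of both uncertainty sources.
    est_cov = J_pose @ pose_cov @ J_pose.T + J_meas @ meas_cov @ J_meas.T
    return est, est_cov
```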

Results

Cooperative localisation simulations have been performed in MATLAB. Each test is in one of two scenarios: homogeneous, where each robot has the same sensor quality, and heterogeneous, where one of the robots has superior sensing capabilities. The superior robot is known as the parent, and the other robots are known as the children. In every test the robots have the same interoceptive sensors, and only the exteroceptive sensors are altered. The exact exteroceptive sensor specifications are

Experimental system

A hardware-in-the-loop system was used to demonstrate that the results found in simulation have relevance to the real world. The physical system uses YujinRobot Kobukis (Fig. 8) equipped with controllers in the form of Raspberry Pi 3 Model B’s running Robot Operating System (ROS). This system is an example of low-cost hardware that would not be able to perform cooperative localisation all the time, as its limited processing power would be needed for other duties.

ROS is a widely used robotics

Experimental results

The tests in simulation, detailed in Section 3, were repeated for the hardware-in-the-loop system. This was done to determine whether physical implementation had significant effects on the results found in simulation. Elements such as time delay, data loss, and lack of synchronisation could have pertinent effects on the performance of cooperative localisation. While the experimental results are only for one physical system, they are not intended as a rigorous investigation in how these elements

Multivariate performance

A function has been created to estimate the predicted cooperative localisation improvement based on the input parameters. This was performed using MATLAB's curve fitting toolbox.

y_sa = 1.03 x_sa^0.4889
y_sp = 0.11 log(x_sp)^2 + 0.992 log(x_sp)
y_cr = 9.51 x_cr^0.01594 + 10.59
y_ya = 1.614 x_ya^0.259 + 2.25
y_nr = 0.364 x_nr^0.7634 - 0.37
y_total = y_sa · y_sp · y_cr · y_ya · y_nr

where x_sa is the exteroceptive sensor accuracy (m), x_sp is the exteroceptive sensor period (s), x_cr is the communication rate (Hz), x_ya is the yaw accuracy (rad), and x_nr is the number of robots.
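For readers who want to experiment with the predictor, a sketch evaluating the fitted curves is given below. The coefficients are transcribed from the text as printed; treat the signs and exponents as indicative only, since the original typesetting may not have survived extraction intact (in particular, the sign joining the two terms of y_nr is assumed negative here).

```python
import math

def predicted_improvement(x_sa, x_sp, x_cr, x_ya, x_nr):
    """Evaluate the per-parameter fitted curves and their product.
    x_sa: exteroceptive sensor accuracy (m); x_sp: sensor period (s);
    x_cr: communication rate (Hz); x_ya: yaw accuracy (rad);
    x_nr: number of robots. Coefficients transcribed from the text."""
    y_sa = 1.03 * x_sa ** 0.4889
    y_sp = 0.11 * math.log(x_sp) ** 2 + 0.992 * math.log(x_sp)
    y_cr = 9.51 * x_cr ** 0.01594 + 10.59
    y_ya = 1.614 * x_ya ** 0.259 + 2.25
    y_nr = 0.364 * x_nr ** 0.7634 - 0.37  # joining sign assumed negative
    return y_sa * y_sp * y_cr * y_ya * y_nr
```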

Discussion

The simulation and hardware-in-the-loop results match what has been observed (and sometimes assumed) in literature, while also providing new data which have not previously been analysed. We highlight results and discuss their relevance to previous literature and future research.

The use of cooperative localisation is more effective when the robots have poor localisation. While most literature on this subject assumes this is the case (the authors could not find research on CL in systems where

Conclusion

This paper was written to help future roboticists design systems using cooperative localisation. We have profiled its properties in simulation and through hardware-in-the-loop experiments in order to better understand the conditions where CL is effective and efficient. We have indicated where the commonly available EKF is limited, and hence where state-of-the-art filters may be required.

These results provide information for systems where the power and processing costs for performing CL need to

Acknowledgements

The Commonwealth of Australia (represented by the Defence Science and Technology Group) supported this research through a Defence Science Partnerships agreement.

Nick Sullivan received his bachelor degrees in mechatronics engineering and computer science from the University of Adelaide, Australia, in 2016. He is currently working toward the Ph.D. degree in robotics at the University of Adelaide. His research interests include multi-robot task allocation, unmanned ground vehicles, and machine learning.

References (32)

  • Chris Brown, Autonomous vehicle technology in mining, Eng. Min. J. (2012)

  • Kichun Jo et al., Development of autonomous car-part I: distributed system architecture and development process, IEEE Trans. Ind. Electron. (2014)

  • Ross A. Knepper et al., Ikeabot: An autonomous multi-robot coordinated furniture assembly system

  • Xi-Yuan Chen et al., Theoretical analysis and application of Kalman filters for ultra-tight global position system/inertial navigation system integration, Trans. Inst. Meas. Control (2012)

  • Edwin Olson et al., Progress toward multirobot reconnaissance and the MAGIC 2010 competition, J. Field Robot. (2012)

  • Christian Forster et al., Collaborative monocular SLAM with multiple micro aerial vehicles


    Steven Grainger obtained his Ph.D. on the control of electric drives from Glasgow Caledonian University, Scotland and holds B.E. degrees in computing and electronic engineering. He is a lecturer in Control and Embedded Systems at the University of Adelaides School of Mechanical Engineering. His current research interests include nanopositioning systems and autonomous vehicles.

    Ben Cazzolato received his B.E. in mechanical engineering at the University of Adelaide, Australia, in 1990. At the same university, he received his Ph.D. in the field of active control for sound transmission in 1998. He is currently a professor at the University of Adelaide, teaching and researching in the fields of dynamics and control. Current research interests include modelling of complex electro-mechanical systems, control of unstable vehicles, active control and nano-positioning.
