Mobile robot indoor dual Kalman filter localisation based on inertial measurement and stereo vision

This study presents a novel navigation method that supports real-time, efficient, and accurate indoor localisation for mobile robot systems. It applies to inertial measurement units (IMUs) consisting of gyroscopes, accelerometers, and magnetometers, combined with stereo vision (SV). Current indoor mobile robot localisation technology typically adopts active sensing devices such as laser and ultrasonic rangefinders, which belong to the class of signal-based localisation and navigation methods and suffer from low efficiency, complex structure, and poor anti-interference ability. Through a dual Kalman filter (DKF) algorithm, the accumulated error of the gyroscope can be reduced, and by combining it with SV, binocular-SV-aided inertial localisation of the mobile robot can be realised under the introduced DKF mechanism. First, high-precision posture information of the mobile robot is obtained by a Kalman filter that fuses accelerometer, gyroscope, and magnetometer data. Second, the inertial measurement precision is optimised by a Kalman filter combined with the machine vision localisation algorithm. The results indicate that the IMU/SV fusion algorithm achieves high location accuracy: <0.0066 static RMS error and <0.0056 dynamic RMS error. Indoor localisation of a mobile robot using the DKF algorithm with inertial navigation and SV is therefore feasible.


Introduction
Over the past few years, indoor robot navigation, such as LP [1], Bluetooth [2], WiFi [3], Zigbee [4] and computer vision, has achieved tremendous progress and is receiving more attention. The method combining inertial navigation and stereo vision (SV) localisation is considered an efficient mechanism for mobile robot localisation, because it has the advantages of high localisation accuracy, high real-time performance, strong anti-interference ability and simple structure. It can effectively solve the problem of mobile robot indoor localisation in complex environments. A method combining inertial measurement and SV localisation is proposed in this paper. It does not rely on active sensors, such as laser or ultrasonic equipment, and only uses the inertial measurement module and SV to achieve the localisation of the mobile robot in a complex environment.
The Artificial Intelligence Laboratory at the Massachusetts Institute of Technology was the earliest to put forward the visual computing theory used for binocular matching, in which a depth map is produced from two images with visual disparity. All of the abovementioned work laid the foundation for the development of binocular SV theory. In recent years, machine vision has received wide attention and related literature continues to emerge; however, results on localisation by binocular SV remain relatively scarce. Wu et al. [5] realised the recognition of target tasks, distance estimation and location by a computer vision navigation method for mobile robot tracking. The papers [6, 7] proposed a method of depth detection based on binocular salient areas, using the deviation between the left and right images, the hue, saturation and intensity colour space, and the mean-shift algorithm for image decomposition. Samia et al. [8] realised 3D perception of a humanoid robot and three-dimensional image reconstruction by binocular camera. To get an accurate depth map, Lin et al. [9-11] adopted different algorithms to realise binocular SV matching.
By using inertial measurement components (such as gyroscopes, accelerometers and magnetometers), inertial navigation technology outputs linear acceleration and angular velocity, from which the location and posture at every step are integrated and calculated without any outside signal. A single inertial measurement technology is not suitable for high-precision indoor localisation and navigation because the inertial measurement has an accumulated error. Therefore, another technology needs to be introduced alongside inertial measurement units (IMUs) to improve the location precision. Salsh et al. [12] presented a high-integrity navigation system based on GPS and IMU, in which GPS global information corrects the drift error of inertial navigation, so as to realise the autonomous navigation of the vehicle. Wang et al. [13] proposed an autonomous navigation method using speed/distance correction of IMU navigation, eliminating the initial navigation error and the IMU measurement error based on measurement innovations from external speed and distance corrections.
In conclusion, single IMU localisation and single SV localisation each have weaknesses. IMU localisation uses the mobile robot's initial location and the posture of the robot calculated by integration to determine the mobile state at each moment; the location of the mobile robot can thus be calculated, but drift appears after a certain time. The estimate from SV localisation is uncertain due to the extraction of feature points in space; moreover, SV localisation can only obtain a local location, and location error occurs quickly. In order to solve these problems, this paper proposes a novel mechanism to realise robot indoor localisation by introducing a dual Kalman filter (DKF) to decrease the IMU accumulative posture error and combining it with SV location to optimise the IMU location. Under this mechanism, the mobile robot achieves fast and autonomous navigation using environment feature locations in an indoor fuzzy map, with good location accuracy and real-time performance.
The major research works were conducted as follows: (i) By analysing the inertial measurement and SV localisation scheme, the principle and design idea of mobile robot for localisation are introduced.
(ii) Through introducing indoor 'Fuzzy Map', a novel method is proposed to realise mobile robot indoor localisation based on DKF mechanisms.
The organisation of the paper is as follows. Section 2 presents the DKF algorithm to solve the mobile robot indoor localisation problem. Section 3 gives the results of IMU/SV indoor localisation using the DKF algorithm and analyses the RMS of robot location for IMU localisation and IMU/SV fusion localisation. Finally, the conclusion and future work are summarised in Section 4.

Scheme and principle
Yu et al. [14] conducted related research on the inertial navigation unit and binocular vision, establishing the feasibility of implementing posture estimation and location on a UAV by analysing a vision posture estimation algorithm combining the inertial navigation unit and vision. Here, a system solution using IMU/SV is put forward to solve indoor mobile robot localisation and navigation; the software system design and implementation of the technology are realised on the Visual Studio platform. In the navigation process of the mobile robot moving forward to the target point, a cyclic navigation principle is followed: inertial measurement -> visual localisation -> inertial measurement, as shown in Fig. 1. (i) There are environmental feature landmarks in the Fuzzy Map; the mobile robot moves quickly to the local environmental feature landmark, with the speed and attitude of the movement calculated using inertial measurement technology. (ii) When the mobile robot arrives at the environment feature landmark, machine vision corrects the robot's inertial measurement location and, at the same time, determines whether it has arrived at the global environmental feature landmark. If it is not the global environment feature landmark, the robot continues to cycle through (i) and (ii) until the global environmental feature landmark is detected.
In the physical experiment, the JY901 module is fixed on the MT-R mobile robot platform so as to keep the module as level as possible. On the basis of the algorithm simulation validation, the corresponding software system is developed and transplanted to the MT-R hardware platform. The rapid autonomous localisation algorithm of the mobile robot in the indoor environment is then verified, following the symbolic flowchart shown in Fig. 2.

Construction of indoor fuzzy map
The concept of the indoor 'Fuzzy Map' is set forth through some analysis. The indoor Fuzzy Map of the environment around the mobile robot serves as a simplified map for the task (an example is given in Fig. 3). It mainly describes the important feature locations of the environment (such as doors, windows, corners etc.), including the distribution of the landmarks and the topology of the turning points; the internal map for the mobile robot is in proportion to the indoor environment map. The landmarks are features whose location distribution is known as multiple coordinates, and with them the fuzzy description of the mobile robot within its environment is completed. The three-dimensional (3D) coordinates of the environmental feature landmarks are recorded when the mobile robot identifies the landmarks in the environment. From the distance solved to the centre of the target, the vision location of the mobile robot can be worked out in combination with the Fuzzy Map.

SV localisation
In this section, SV consists of two parts: identifying and locating. Because the environment feature landmarks used in the experiments in this paper are QR codes, the recognition of QR codes and SV localisation are introduced.

QR code recognition:
A QR code is a special pattern consisting of a series of small black and white blocks in arbitrary combination; the black and white blocks represent the binary numbers 1 and 0, respectively, and different QR codes represent different binary strings, so QR code recognition has good uniqueness. The QR code recognition steps are as follows. First, use the Otsu [15] algorithm to binarise the grayscale image. The Otsu principle is to choose the threshold that maximises the between-class variance

σ_B^2(t) = q_1(t)[u_1(t) − u]^2 + q_2(t)[u_2(t) − u]^2 = q_1(t) q_2(t)[u_1(t) − u_2(t)]^2,

where t is the segmentation threshold dividing the image into foreground and background, q_1(t) is the ratio of the foreground to the image, q_2(t) is the ratio of the background to the image, u_1(t) is the average grey level of the foreground, u_2(t) is the average grey level of the background, u is the average grey level of the image, and σ_B^2(t) is the variance between the two groups. When the variance between the two groups is largest, the segmentation threshold is determined.
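The thresholding step above can be sketched as follows. This is a minimal, illustrative implementation of the Otsu between-class-variance search (the function name and the exhaustive 256-level scan are our own choices, not taken from the paper):

```python
import numpy as np

def otsu_threshold(gray):
    """Find the threshold t that maximises the between-class variance
    sigma_B^2(t) = q1(t) * q2(t) * (u1(t) - u2(t))^2 for an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                        # grey-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        q1, q2 = p[:t].sum(), p[t:].sum()        # class probabilities
        if q1 == 0 or q2 == 0:
            continue
        u1 = (np.arange(t) * p[:t]).sum() / q1          # foreground mean
        u2 = (np.arange(t, 256) * p[t:]).sum() / q2     # background mean
        var_b = q1 * q2 * (u1 - u2) ** 2                # between-class variance
        if var_b > best_var:
            best_t, best_var = t, var_b
    return best_t
```

Pixels below the returned threshold are then mapped to black (1) and the rest to white (0) for the subsequent pattern search.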
Secondly, find the finder and positioning patterns of the QR code image. The finder and positioning patterns have a black-and-white block width ratio of black:white:black:white:black = 1:1:3:1:1. When the corners of the QR code are scanned from left to right and from top to bottom, the lengths of the white and black runs can be measured, and the graphics matching this ratio are the finder and positioning patterns. A sampling grid can then be established according to the centres of the finder and positioning patterns, and the image is converted into a data matrix.
Finally, the black and white blocks are determined according to the segmentation threshold in the sampling grid. As mentioned above, a black block represents 1 and a white block represents 0; therefore, the binary string corresponding to the QR code can be obtained, and thus the QR code information, such as the pixel coordinate of the QR code centre point in the horizontal direction of the image.

SV localisation:
The binocular distance model is based on the pattern of retinal imaging in the human eye. The relative distance between the observed point in the navigation frame and the binocular camera model is calculated using the left and right camera imaging parallax [16]. The SV model is shown in Fig. 4.
The imaging of any point by the binocular camera is shown in the vertical cross-section of Fig. 5. The principle of binocular distance measurement is to use the difference in horizontal image coordinates of any 3D point in the left and right camera images (disparity d = X_left − X_right). The parallax is inversely proportional to the distance between the imaged point and the camera plane [17], where X_left and X_right are the pixel values in the horizontal direction of the target point (the centre point of the QR code) in the left and right camera image coordinate systems, respectively.
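The inverse relationship between disparity and distance can be written as Z = f T / d, with f the focal length in pixels and T the baseline. A minimal sketch (function name and the positivity check are our additions):

```python
def stereo_depth(x_left, x_right, f, T):
    """Depth from horizontal disparity: Z = f * T / d, with d = x_left - x_right.
    f is the focal length in pixels, T the baseline between the two cameras."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * T / d
```

For example, with f = 700 px, T = 0.12 m and a disparity of 42 px, the target lies 2 m away.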
Through the camera imaging principle and the triangular similarity of the target image, the relationship between the camera image coordinates and the pixel coordinates can be described by the conversion matrix Q in expression (3), where c_x and c_y are the offsets of the camera's optical centre in the horizontal and vertical directions of the image, c′_x is the corresponding horizontal offset of the right camera's optical centre, f is the focal length of the camera, and T is the baseline distance between the two cameras.
In the 3D reconstruction of an object, the homogeneous linear relationship between the pixel coordinates and the navigation coordinates is shown in the following formula (4), Y left is the pixel value of the target point in the vertical direction of the left camera image.
Through (3) and (4), the navigation coordinates of the target point, X/W, Y/W, Z/W, can be obtained. In order to obtain Q, preparatory work such as calibration of the binocular camera needs to be done; the internal and external parameters of the binocular camera are calibrated using the checkerboard method.
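The reprojection in (3) and (4) can be sketched with the standard stereo reprojection matrix. The exact sign convention and the role of c′_x follow the common rectified-stereo form, which is an assumption on our part; the paper's own Q may differ in layout:

```python
import numpy as np

def reprojection_matrix(cx, cy, f, T, cx_prime=None):
    """Standard rectified-stereo reprojection: [X Y Z W]^T = Q @ [x y d 1]^T,
    where (cx, cy) is the left optical centre, cx_prime that of the right
    camera, f the focal length in pixels and T the baseline."""
    if cx_prime is None:
        cx_prime = cx               # assume identical principal points
    return np.array([
        [1.0, 0.0, 0.0,   -cx],
        [0.0, 1.0, 0.0,   -cy],
        [0.0, 0.0, 0.0,    f],
        [0.0, 0.0, 1.0 / T, -(cx - cx_prime) / T],
    ])

def triangulate(x, y, d, Q):
    """Homogeneous reprojection of pixel (x, y) with disparity d."""
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W
```

Dividing by the homogeneous coordinate W recovers Z = fT/d, consistent with the scalar disparity relation above.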

Fusion localisation algorithm
The basic mechanism of localisation and navigation combining IMU and SV was explained earlier. The following analysis focuses on the implementation of the DKF algorithm fusing inertial measurement and binocular SV localisation. In the indoor Fuzzy Map, the mobile robot first locates itself with the inertial measurement module. Combining the current robot location with the local environment feature landmark, the IMU-derived parameters are converted into robot motor control and the mobile robot moves to the local environment feature location. The mobile robot then uses SV to locate itself and fuses the result with the inertial measurement location by the KF algorithm, ensuring that the robot navigates smoothly to the global target.

Inertial measurement posture algorithm:
The posture decoding algorithm is one of the key technologies for realising precise localisation and navigation in an inertial measurement system; the quaternion vector posture algorithm has proven effectiveness and correctness [18, 19]. The robot's posture changes continually during operation. In order to measure the robot posture angles (φ, θ, ψ), this paper uses the quaternion to solve the robot posture. The quaternion differential equation can be written as q̇ = (1/2)Ω(ω)q (5), where ω = (ω_x, ω_y, ω_z) is the angular velocity measured by the gyroscope and Ω(ω) is the corresponding 4 × 4 skew-symmetric matrix (6). In order to facilitate solving, the first-order Runge-Kutta numerical method is used for the quaternion differential equation (8). Combining (6) and (8) yields the quaternion iterative relationship q(k + 1) = [I + (T/2)Ω(ω)]q(k), so the quaternion q is available in real time once ω and the sampling frequency 1/T are known. Simultaneously, the quaternion and the Euler angles are related as in (10); the coordinate transformation matrix can be expressed in terms of the quaternion as (11), or in terms of the Euler angles as (12), where s and c are abbreviations for sin and cos, respectively [10].
According to the equality C_B^E = Q_B^E of the Euler-angle and quaternion forms of the coordinate transformation matrix, the robot Euler angles can be calculated as (13). Owing to the accumulated error of the gyroscope, the coordinate transformation matrix, and therefore the robot posture, will drift over time.
In order to make the Euler angle state converge, the solved quantities can be used as observations in a KF process, which makes the robot posture angles converge. Equation (14) is the state equation defined in the KF process, and its observation equation is (15). The five basic equations of the KF then apply. The quaternion is defined as the robot's quantity of state X_k = (q_0, q_1, q_2, q_3)^T; the gyroscope measurements and calculation constitute the quaternion state equation (17), with state transition matrix A(k | k − 1) = I + (T/2)Ω(ω).
The posture angles computed from the accelerometer are converted back to a quaternion, which constitutes the observation. The gravitational acceleration value in the inertial navigation system and the body-frame measurement [a_x, a_y, a_z]^T are related by the coordinate transformation; expanding (15), equation (19) can be obtained. For the magnetometer yaw-angle calculation, the magnetometer readings are first rotated to the horizontal plane, and the resulting two expressions are put into (21). From (19) and (21), the true quaternion can be worked out. Therefore, the mobile robot quaternion observation equation is built as Z_k = [q_0, q_1, q_2, q_3]^T with H(k) = I. After obtaining the state equation and the observation equation, the quantity of state and the quantity of observation are put into the five KF equations to update the posture of the mobile robot in real time.
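The five basic KF equations referred to above (two prediction steps, three update steps) can be sketched generically; for the posture filter, X_k would be the quaternion, A = I + (T/2)Ω(ω) and H = I, but the class below is deliberately generic and the matrix names are our own:

```python
import numpy as np

class KalmanFilter:
    """The five basic Kalman filter equations."""
    def __init__(self, A, H, Q, R, x0, P0):
        self.A, self.H, self.Q, self.R = A, H, Q, R
        self.x, self.P = x0, P0

    def step(self, z):
        # (1) state prediction and (2) covariance prediction
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.Q
        # (3) Kalman gain
        K = P_pred @ self.H.T @ np.linalg.inv(self.H @ P_pred @ self.H.T + self.R)
        # (4) state update with observation z and (5) covariance update
        self.x = x_pred + K @ (z - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x
```

Feeding repeated observations of a constant state drives the estimate toward that state, which is the convergence behaviour the posture filter relies on.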

Inertial measurement localisation algorithm:
By using the KF algorithm to fuse the nine-axis sensor data, the robot's posture can be calculated. It then becomes convenient to convert the accelerometer values into the navigation system for location calculation; equation (23) describes the conversion relationship between them. On the one hand, integrating the accelerometer values in navigation coordinates produces the instantaneous velocity estimate of the mobile robot; on the other hand, integrating the instantaneous velocity produces the instantaneous location of the mobile robot. The implementation process is shown in Fig. 6. The accelerometer and gyroscope data are integrated to calculate the mobile robot's location and velocity in the navigation system. At the same time, the coordinate transformation, posture calculation and average filtering of the data reduce the non-linear location error caused by sensor drift, and the accumulated error of the gyroscope is reduced by Kalman fusion of the accelerometer and magnetometer information.
Meanwhile, under the inertial measurement system, based on the local feature location area of the Fuzzy Map, the vector from the mobile robot to the centre of the local feature is converted into mobile robot control, and the mobile robot navigates rapidly to the local feature location area. The quantity of state of the mobile robot at time k is defined as X_k = [x(k), ẋ(k), y(k), ẏ(k)]^T, where its location is (x_k, y_k) and its velocity is (ẋ_k, ẏ_k). a_Ex(k) and a_Ey(k) are, respectively, the east and north accelerations of the mobile robot at time k, and T is the sampling period of the sensors. Once the vehicle acceleration and output frequency are known, the speed and location of the robot can be solved at any time. Equation (24) describes the state equation driven by the acceleration.
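The constant-acceleration state equation over X = [x, ẋ, y, ẏ]^T can be sketched as follows; the block layout of A and B is the standard discrete kinematic model and matches the state ordering above (the function name is ours):

```python
import numpy as np

def state_transition(T):
    """Discrete constant-acceleration model for X = [x, vx, y, vy]^T:
    X(k+1) = A X(k) + B [a_Ex(k), a_Ey(k)]^T, with sample period T."""
    A = np.array([[1.0, T,   0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, T  ],
                  [0.0, 0.0, 0.0, 1.0]])
    B = np.array([[T**2 / 2, 0.0],
                  [T,        0.0],
                  [0.0, T**2 / 2],
                  [0.0, T       ]])
    return A, B
```

Iterating this model with a constant 1 m/s² east acceleration for 1 s reproduces the closed-form result x = at²/2 = 0.5 m and ẋ = at = 1 m/s exactly.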

Fusion localisation algorithm by IMU/SV:
In this paper, the implementation process of inertial measurement localisation is analysed. After obtaining the acceleration value of the mobile robot, using the sensor output period as the integration cycle, the inertial measurement location of the mobile robot can be acquired at any time through the integral algorithm. However, the accelerometer values are affected by outside noise interference, leading to location error that accumulates over time. If this error is not corrected, the mobile robot's location estimate degrades and becomes unusable after a period of time. In order to solve such problems, the fusion between inertial measurement localisation and stereo visual localisation is realised by the KF algorithm, and the location error of the mobile robot is thereby reduced to some degree.
When the mobile robot moves to an environment feature location landmark, it can use SV to locate the landmark in the environment. At the same time, the distance between the robot and the local landmark can be calculated by the SV mobile robot location algorithm. By solving the robot attitude angle, the real-time 3D distance information can be converted into the world coordinate system.
As shown in Fig. 7, let ψ be the robot's heading angle. The coordinates of the target in the robot camera coordinate system are defined as (x_c, y_c, z_c). Since the robot moves in the horizontal plane, the vertical information y_c can be ignored. According to the geometric relationship, (25) can be obtained, where (x_E, y_E) is the location of the robot in the navigation coordinate system. Kalman filter fusion processing is then applied to the inertial measurement location in real time. In this DKF fusion localisation technology based on IMU/SV, our research interest is focused on how to observe the mobile robot's velocity; as mentioned above, the observation of the mobile robot's velocity is assumed to be 0 at the landmark. Combined with the Kalman filter equations, a state variable X(k) = [x(k), y(k), ẋ(k), ẏ(k)]^T is composed of the location and speed of the mobile robot, and the related control input parameters can be obtained from the Kalman filter equations, where Z_k = [x_E, y_E]^T is the quantity of observation containing the location measurement of the vision system at time k. Although the proposed fusion method works in a time-sharing mode, it optimises the inertial navigation and location accuracy simultaneously, ensuring real-time location when the mobile robot reaches an environment feature and detects the landmark. Visual location and inertial measurement navigation are fused by the Kalman filter algorithm; inertial measurement navigation is decomposed into block processing operations, at which time the observation gain takes the corresponding block form.
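A minimal sketch of this vision-correction step follows. The state ordering [x, y, ẋ, ẏ], the observation matrix selecting position only, and the example covariances are our illustrative assumptions, not values from the paper:

```python
import numpy as np

def fuse_vision_fix(x_pred, P_pred, z_vision, R_vision):
    """One IMU/SV fusion update: the inertial prediction (x_pred, P_pred) of
    the state [x, y, vx, vy] is corrected by a stereo-vision position fix
    z_vision = [x_E, y_E] with measurement covariance R_vision."""
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])       # vision observes position only
    S = H @ P_pred @ H.T + R_vision            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # observation gain
    x = x_pred + K @ (z_vision - H @ x_pred)   # corrected state
    P = (np.eye(4) - K @ H) @ P_pred           # corrected covariance
    return x, P
```

When the inertial position is uncertain and the vision fix is precise, the gain pulls the fused position almost onto the vision measurement while leaving the velocity estimate untouched, which is exactly the drift-correction behaviour intended at each landmark.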

Demonstration
After this series of theoretical analyses, according to the inertial measurement and SV navigation scheme and the algorithm implementation process, we carry out the following simulation and physical experiments to verify the effectiveness of robot localisation and navigation using inertial measurement and SV in a complicated indoor environment. This paper focuses on the realisation of the fusion algorithm, with inertial navigation as the main method and SV as a supplement; therefore, the following experiments compare the fusion experiments with the inertial navigation experiments.

Simulation results
Through the transformation matrix, the acceleration values from the JY901 module are converted into the navigation system; the values are imported into the Matlab simulation platform and passed through a low-pass filter to separate the gravitational acceleration. The inertial measurement and posture algorithms are then simulated (the acceleration values are shown in Fig. 8). The red curve represents the acceleration in the carrier system, and the blue curve represents the acceleration in the navigation system. The transition matrix is close to the unit matrix and the accelerations before and after the transformation are similar because the JY901 module keeps an approximately linear motion without horizontal rotation; moreover, it produces no acceleration in the vertical direction, so the vertical acceleration value is approximately zero. Fig. 9 (top) shows the robot's attitude angles calculated by the quaternion during the robot's movement; because the robot moves in the horizontal plane, the roll and pitch angles remain close to zero, and only the yaw (heading) angle changes. The robot's velocity (bottom) shows the change of the east and north velocity components during the movement. Fig. 10 describes the robot's navigation location. The black curve shows the true path of the robot measured by manual operation. The red and green curves are the paths calculated using IMU localisation technology and IMU/SV fusion localisation technology, respectively. As shown in Fig. 10, the path using IMU/SV fusion localisation technology is closer to the real path than that using IMU localisation technology.
It is common to quantify the robot localisation performance as the static and dynamic root-mean-square (RMS) errors of the location parameters describing the east displacement Se and the north displacement Sn, respectively. Relative to the calibrated location measurements, both the IMU localisation and the proposed IMU/SV fusion localisation are estimated. The errors of the estimated location parameters Se and Sn were computed as the difference between the estimated values and the calibrated measurements. Results were obtained for a sequence of data with the robot motionless and moving in the indoor environment. The static and dynamic RMS errors of the IMU localisation and the proposed IMU/SV fusion localisation are given in Table 1.
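The RMS error computation described above can be sketched as follows (the function name is our own; it applies identically to Se and Sn, and to static and dynamic sequences):

```python
import numpy as np

def rms_error(estimated, reference):
    """RMS error between estimated displacements (Se or Sn) and the
    calibrated reference measurements over a data sequence."""
    e = np.asarray(estimated, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))
```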
The results of the dynamic RMS values of Se and Sn are summarised in Fig. 11.

Physical results
The proposed algorithm is tested on the MT-R mobile robot as shown in Fig. 12. It includes the JY901 inertial measurement module, containing tri-axis gyroscopes, accelerometers and magnetometers, and the Bumblebee2 stereo camera providing depth of the surroundings. Sensor data is logged to a PC at 512 Hz and imported into the accompanying software to provide the calibrated sensor measurements, which are processed by the proposed DKF algorithm. The stereo camera provides SV information for the robot to fuse with the IMU location. By fusing the gyroscopes, accelerometers and magnetometers based on the KF algorithm, the robot obtains the IMU location in the navigation coordinate system. Since both the IMU algorithm and the proposed DKF for IMU/SV were computed using identical sensor data, the performance of each algorithm could be evaluated relative to the other, independent of the sensor performance. A vision localisation system providing two-dimensional location was used to supply reference measurements of the actual path. To do so, the JY901 module was fixed to the centre of the mobile platform with its orientation coinciding with the axis of the robot. For the measurements of the robot location in the navigation coordinate system, the stereo camera was required; its data was logged to a PC at 10 Hz, and the visual location and IMU location were fused to obtain the best robot location. The mobile robot localisation system based on machine SV was introduced earlier. The visual localisation system is aimed at environment feature landmark localisation; the recognition of the sign itself is not the focus of this article. Compared with recognition based on colour, shape, texture etc., QR code recognition is preferable because QR code identification has a higher recognition rate, rotation invariance, high stability and other features.
Therefore, this physical experiment chooses the QR code as the environment feature sign, and seven pictures of different QR codes are placed for robot vision recognition and localisation. In order to guarantee that the robot can observe them at any time during operation, the QR code images and the Bumblebee2 are placed at roughly the same height. The metric positions of the QR codes were set by hand; the absolute coordinates of the seven QR code pictures in the laboratory are shown in Table 2.
The system uses a serial-to-USB module to collect the sensor data, and the Kalman fusion of the accelerometer, gyroscope and magnetometer to calculate the attitude angle of the robot. At the same time, the on-board acceleration value is converted to a value in the navigation system for calculating the location of the robot by the navigation algorithm. When the mobile robot arrives near a QR code in the distribution of indoor environment feature landmarks (the QR code distribution is shown in Fig. 13), the stereo visual location can be obtained through the Bumblebee2. At this time, the mobile robot again fuses the inertial measurement location and the machine vision location based on the Kalman filter algorithm to acquire the optimised location of the mobile robot, which is displayed in real time on the mobile robot PC interface.
The physical experiment site is laboratory 602 of the Institute of Information Science and Engineering, Wuhan University of Science and Technology. For convenience of description, the following data units are in metres; the robot's initial location is set as (0, 0) and its final location as (0, 15.5). The simulated motion path of the robot (blue curve) is as shown in Fig. 11. In the experiment, the mobile robot moves along a straight line from the initial location (0, 0) to the first QR code target at (3.2, 3.2), with a safety radius of 0.2 m set for the mobile robot. The angle between the initial point and the end point is therefore 45°, and the measured orientation of the laboratory is 60° east of north; combining the two calculations gives a heading of 105° east of north for the mobile robot. The posture decoding algorithm is used to obtain the rotation angle of the mobile robot, and the difference from the advancing angle gives the required rotation attitude angle. The mobile robot then turns to the corresponding heading by controlling the chassis and proceeds in rectilinear motion. At the same time, the IMU location is calculated by the inertial measurement module during the movement. When the distance to the QR code is <0.2 m, the mobile robot stops and the Bumblebee2 module operates immediately, recognising the QR code and calculating the 3D distance information. This experiment records the location data of the mobile robot in Matlab and plots it as shown in Fig. 15. The red solid line is the real path of the mobile robot and the blue dotted lines are the paths calculated by the different localisation algorithms; it is obvious that the path using IMU/SV fusion localisation (Fig. 15a) is closer to the real path of the mobile robot than that using IMU localisation (Fig. 15b). The experimental results show that the proposed dual Kalman filtering fusion mechanism of inertial navigation and SV measurement is effective for mobile robot indoor localisation accuracy.

Conclusion and future work
As can be seen from the experiments, using inertial measurement and binocular SV for mobile robot localisation is feasible. A dual Kalman filtering mechanism was introduced to improve the localisation accuracy. First, high-precision location information of the mobile robot is obtained by a Kalman filter fusing accelerometer, gyroscope and magnetometer data. Second, the inertial measurement precision is optimised by a Kalman filter combined with the machine vision localisation algorithm. This method has good real-time performance and precision, with a maximum RMS value <0.5 m.

Acknowledgments
This work was supported by the National Natural Science Foundation of China (60705035, 61075087, 61203331), and by the project supported by the Zhejiang Open Foundation of the Most Important Subjects.