Radar/INS Integration and Map Matching for Land Vehicle Navigation in Urban Environments

Autonomous navigation requires multi-sensor fusion to achieve a high level of accuracy in different environments. Global navigation satellite system (GNSS) receivers are the main components in most navigation systems. However, GNSS signals are subject to blockage and multipath effects in challenging areas, e.g., tunnels, underground parking, and downtown or urban areas. Therefore, different sensors, such as inertial navigation systems (INSs) and radar, can be used to compensate for GNSS signal deterioration and to meet continuity requirements. In this paper, a novel algorithm was applied to improve land vehicle navigation in GNSS-challenging environments through radar/INS integration and map matching. Four radar units were utilized in this work. Two units were used to estimate the vehicle’s forward velocity, and the four units were used together to estimate the vehicle’s position. The integrated solution was estimated in two steps. First, the radar solution was fused with an INS through an extended Kalman filter (EKF). Second, map matching was used to correct the radar/INS integrated position using OpenStreetMap (OSM). The developed algorithm was evaluated using real data collected in Calgary’s urban area and downtown Toronto. The results show the efficiency of the proposed method, which had a horizontal position RMS error percentage of less than 1% of the distance traveled for three minutes of a simulated GNSS outage.


Introduction
Autonomous navigation relies on a global navigation satellite system (GNSS) to estimate the vehicle's location in open-sky and suburban environments [1]. However, relying only on the GNSS is inadequate because the GNSS signal can be affected by the surrounding environment. For example, the GNSS signal will be blocked in underground and indoor areas. Moreover, the GNSS signal suffers from multipath errors and loss of lock in GNSS-challenging areas such as downtown [2]. On the other hand, an inertial navigation system (INS) can accurately estimate the vehicle's relative position, velocity, and attitude but only for a short time, as the INS solution deteriorates with time [3]. Therefore, GNSS/INS integration is employed in various navigation applications to overcome the limitations of both the GNSS and INS. There are two major types of GNSS/INS integration: loosely coupled integration [4] and tightly coupled integration [5].
Although GNSS/INS integration estimates the vehicle's navigation states with reliable accuracy in the short term, the accuracy will deteriorate during long GNSS outages. During these outages, the system relies only on inertial measurement unit (IMU) sensors (accelerometers and gyroscopes) whose output drifts rapidly with time, especially when utilizing low-cost IMUs [6]. Therefore, in autonomous navigation systems, more sensors, such as vehicle onboard sensors, lidar, vision sensors, and radar, are required to aid the GNSS/INS system. However, each of these sensors has its advantages and disadvantages. Gao et al. [7] used a vehicle's chassis sensors to compensate for the GNSS outage. Wheel speed sensors were used to estimate the vehicle's forward velocity, while the lateral velocity was estimated using the steering wheel angle sensor. The fusion of the kinematic model with the estimated velocity from the onboard sensors improved the accuracy of the vehicle localization during GNSS signal outages.
Lidar has mainly been used to perceive the surrounding environment [8]. Recently, lidar has been used to aid IMUs in GNSS-denied environments and for simultaneous localization and mapping (SLAM) [9][10][11]. However, lidar is affected by snowy weather and requires a high level of computational power. Moreover, lidar is a high-cost sensor. On the other hand, a vision sensor is mainly used to detect highway lanes and objects in the surrounding environment, and it can be fused with an INS to limit the IMU's errors [12,13]. Nevertheless, vision sensors degrade under poor lighting conditions and are affected by rain, fog, and snowy weather. An odometer can also aid the INS in autonomous navigation applications by providing the vehicle's forward velocity; however, it is affected by slippery roads, unequal wheel diameters, and unequal tire pressures [14].
In contrast, radar is a low-cost sensor that works in different weather and light conditions. Thus, it is known as an all-weather sensor. Therefore, the research in this paper is based on the fusion of radar and an INS in environments that are challenging for a GNSS. Furthermore, the motion constraints of a land vehicle [15] are also applied for better performance, e.g., the non-holonomic constraint (NHC) [15], zero-velocity update (ZUPT) [16], and the zero-integrated heading rate (ZIHR) [16].
Radar is mainly used in adaptive cruise control (ACC) to detect the ranges and relative speeds of surrounding vehicles and objects to make autonomous driving safe.
There are two types of radar: 360° radar and static radar. The 360° radar rotates through 360 degrees to scan the surrounding environment. It is known as imaging radar since it returns only the range, azimuth, and density value for each detected object. In contrast, static radar does not rotate and has a limited field of view. There are two types of static radar. The first type, continuous wave (CW) radar, can measure the Doppler velocity between an object and the radar unit, whereas the second type, known as frequency modulated CW (FMCW) radar, can measure both the range and the Doppler velocity of the objects. Both 360° radar and static radar are used in autonomous navigation applications.
Elkholy et al. [17,18] applied the oriented fast and rotated brief (ORB) technique to detect and match the objects from 360° radar frames to estimate the translation and rotation between every two frames. The estimated solution was integrated with an IMU through an EKF to improve the navigation solution. In Elkholy et al. [19], FMCW radar was utilized, and a novel algorithm was adopted to estimate the vehicle's relative position and rotation. With the knowledge of the vehicle's average speed and radar data rate, the change in the range and azimuth between the objects in any two successive radar frames can be estimated. Then, the corresponding objects can be determined. The corresponding points were used to estimate the rotation and translation between any two radar frames. Then, the radar solution was integrated with an IMU for an ego-motion estimation. Rashed et al. [20] used an FMCW radar fixed at the center of the front bumper to estimate the distance traveled. The radar output was integrated with a reduced inertial sensor system (RISS). A RISS employs only three sensors (one vertical gyroscope and two horizontal accelerometers), whereas the full IMU contains three gyroscopes and three accelerometers. In Abosekeen et al. [21], the FMCW radar was fixed facing the ground to estimate the vehicle's forward speed from the Doppler frequency of the reflected radar waves. A RISS solution was integrated with an IMU through an EKF to improve the navigation solution in GNSS-denied environments.
The iterative closest point (ICP) method is the most commonly used method for estimating the relative translation and rotation between any two radar frames by minimizing the distance between point clouds [22][23][24]. Normal distribution transform (NDT) is another technique that relies on building a normal distribution model for point clouds in every radar frame. These models are then matched to estimate the vehicle's ego-motion [25]. To improve the accuracy of the vehicle's position, a predefined map can be utilized to correct the integrated solution by matching the integrated position with map links. The predefined map can be generated using sensors such as cameras, lidar, or radar [26]. However, using these sensors to make the maps is time-consuming due to the high computational cost and the high volume of data to be stored. The best solution is to use navigation maps such as Google Maps or OpenStreetMap (OSM) [27]. OSM was created by volunteers using standard surveying techniques. OSM provides rich information, including road links, buildings, traffic signs, the number of lanes, and other data. Radar frames can be matched with predefined digital maps to estimate the vehicle's translation and rotation [28].
The first step in map matching is to determine the vehicle's location on the map. In other words, to select the correct map link where the position will be projected. Different algorithms can be utilized to determine this map link. Geometrical and topological techniques depend on finding the distances between the position and the map links, and then the map link with the minimum distance will be selected [29]. The second technique is the probabilistic algorithm. This algorithm draws a confidence ellipse from the position's standard deviation and determines which map links intersect with this ellipse [30]. Another technique to determine the map link is to use the fuzzy logic algorithm. The fuzzy logic algorithm takes the inputs from the navigation sensors and returns the map links as likelihoods [31]. The main difference between the previous techniques is the method of selecting the correct map link for matching.
This research focuses on radar and INS integration to meet the continuity requirements, compensate for GNSS outages, and improve the navigation solution for land vehicles. Moreover, the integrated solution is corrected by applying map matching through open access OSM. Section 2 describes the methodology and the integration scheme used in this research. Section 3 shows the experimental work and the results from the proposed method. Finally, Section 4 concludes the presented work.

Methodology
The navigation system proposed in this research relies on four FMCW radar units, an IMU, and OpenStreetMap (OSM). The four radar units were used for the ego-motion estimation, while two of the four units were used to estimate the vehicle's forward speed. The map-matching technique was applied at the end to correct the integrated position to OSM. A successive EKF was implemented to correct the navigation solution and limit the IMU errors. Figure 1 shows a block diagram of the proposed method. The next subsections describe the radar-based ego-motion estimation, the estimation of the vehicle's forward velocity, the EKF prediction and update stages, and map matching.

Radar for Ego-Motion Estimation
The four static radar units were fixed on the roof of the vehicle to detect the objects around it. The four frames from all the radar units were synchronized and combined into one frame representing the environment around the vehicle. Thus, the four static radar units worked as a single 360° radar unit.
Before processing the radar data, noisy data and outliers were detected and removed by applying a set of constraints. First, the close and far points were removed because these points were considered ghost points. The Doppler frequency was also used to differentiate between static and moving objects. The moving objects were removed from the radar frames. Finally, a threshold was applied to remove the outliers with intensity values lower than the threshold.
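As a minimal sketch, the three preprocessing constraints above can be expressed as a boolean mask over the detections. The threshold values and the function name here are hypothetical; the paper does not report the values it used.

```python
import numpy as np

def filter_radar_points(ranges, doppler, intensity,
                        r_min=1.0, r_max=80.0,
                        doppler_tol=0.5, min_intensity=20.0,
                        expected_doppler=None):
    """Remove ghost points, moving objects, and low-intensity outliers.

    expected_doppler: predicted Doppler of a static object for each
    detection (computed from the current speed estimate); if None, the
    static/moving separation is skipped.
    """
    keep = (ranges > r_min) & (ranges < r_max)   # drop too-close/too-far ghost points
    keep &= intensity >= min_intensity           # intensity threshold for outliers
    if expected_doppler is not None:
        # keep only detections whose Doppler matches that of a static object
        keep &= np.abs(doppler - expected_doppler) < doppler_tol
    return keep
```

The mask is then applied to the raw point cloud before data association.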
After the data analysis and preprocessing, data association was applied to find the corresponding points between radar frames. The change in the range and azimuth between any two corresponding points in two successive radar frames could be estimated based on the knowledge of the radar data rate (about 8 Hz) and the average vehicle speed.
A novel algorithm was implemented to estimate the vehicle's ego-motion (Figure 2). This algorithm calculates the coordinates of the objects $(x_i^t, y_i^t)$ in the first radar frame. It then uses these calculated coordinates and the range in the second frame to estimate the coordinates of the radar center $(x_c^{t+\Delta t}, y_c^{t+\Delta t})$. The difference between the calculated radar center in the second frame and the one in the first frame represents the translation between the two frames.
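A sketch of this idea, assuming bearings measured clockwise from north and the frame-1 radar center at the origin; the function name, the averaging step (a simple stand-in for the paper's least-squares refinement), and the rotation taken as the mean bearing difference are illustrative, not the authors' implementation:

```python
import numpy as np

def ego_motion(r1, az1, r2, az2):
    """Frame-to-frame translation and rotation from corresponding points.

    r1, az1: ranges and bearings (rad) of the objects in frame 1.
    r2, az2: ranges and bearings of the same objects in frame 2.
    """
    # Object coordinates computed from frame 1 (radar center at origin).
    x_obj = r1 * np.sin(az1)
    y_obj = r1 * np.cos(az1)
    # Frame-2 radar center implied by each object's new range/bearing.
    xc = x_obj - r2 * np.sin(az2)
    yc = y_obj - r2 * np.cos(az2)
    # Average over all corresponding objects.
    dx, dy = xc.mean(), yc.mean()
    # Rotation from the change in object bearings between the frames.
    dtheta = np.mean(az2 - az1)
    return dx, dy, dtheta
```

For example, an object 10 m ahead whose range shrinks to 8 m at the same bearing yields a 2 m forward translation and zero rotation.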

The rotation between the two frames can be estimated by calculating the difference between the bearings of the objects at time $(t)$ and time $(t + \Delta t)$. If there are multiple corresponding objects between any two radar frames, the least squares method is applied to obtain the final translation and rotation between the two frames:

$$\delta x = \left(A^{T} C^{-1} A\right)^{-1} A^{T} C^{-1} w$$

where $\delta x = [dx\;\; dy\;\; d\theta]^{T}$ is the vector containing the corrections for the translation ($dx$ and $dy$) and the rotation ($\theta$) between any two radar frames, $A$ is the design matrix, $C$ is the measurement variance-covariance matrix, and $w$ is the misclosure vector.
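The correction step is a standard weighted least-squares solve. A minimal NumPy sketch, with the design matrix, measurement variance-covariance matrix, and misclosure vector as inputs (the helper name is illustrative):

```python
import numpy as np

def solve_corrections(A, C, w):
    """Weighted least-squares corrections delta_x = [dx, dy, dtheta].

    Computes delta_x = (A^T C^-1 A)^-1 A^T C^-1 w, where A is the design
    matrix, C the measurement variance-covariance matrix, and w the
    misclosure vector.
    """
    Ci = np.linalg.inv(C)          # weight matrix
    N = A.T @ Ci @ A               # normal matrix
    return np.linalg.solve(N, A.T @ Ci @ w)
```

With uncorrelated, equal-variance measurements (C = I), this reduces to the ordinary least-squares solution.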

Estimating the Vehicle's Forward Speed
FMCW radar provides information about the Doppler frequency between the radar unit and the detected objects. Therefore, the relative velocity between the radar and the objects can be estimated from the Doppler frequency. This relative velocity will be the vehicle's forward velocity if the detected objects are static.
There were four radar units fixed on the roof of the vehicle. The two radar units at the front right and left of the vehicle were chosen to estimate the vehicle's forward velocity. Then, the average forward velocity was calculated from the two units. The radar provides the relative velocity of each object based on the Doppler frequency. The relative velocity value could be positive if the objects are moving away from the vehicle and negative if the objects are coming toward the vehicle. The vehicle velocity estimation depends on the static objects, while the moving objects act as outliers and should be excluded. The static objects will appear to approach the vehicle through the two front radar units. Therefore, only objects with a negative relative velocity were kept to exclude the objects moving away from the vehicle.
If a static object ($i$) is detected by a radar unit, as shown in Figure 3, the relative velocity between this object and the radar ($v_i$) will represent the absolute radar velocity ($v_R$) since the static object's velocity is zero. Using the bearing angle ($\theta_i$), the radar velocity can be decomposed into the x and y components in the radar frame using the equation:

$$v_i = v_{Rx}\cos\theta_i + v_{Ry}\sin\theta_i$$

where $v_{Rx}$ and $v_{Ry}$ are the radar velocity components in the radar frame (x and y).

Suppose there are multiple objects ($n$) in the radar frame. Then, the radar velocity components in x and y in the radar local frame can be estimated by writing one such equation per object, and least squares estimation is applied to obtain a reliable solution.
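Under the Doppler radial-velocity model, each static detection contributes one equation in the two unknowns, so the per-frame radar velocity reduces to a small least-squares problem. A sketch (function name illustrative), which also returns the magnitude and orientation angle used in the following conversion:

```python
import numpy as np

def radar_velocity(doppler, bearing):
    """Least-squares radar ego-velocity from static detections.

    Each static object i satisfies v_i = v_Rx*cos(theta_i) + v_Ry*sin(theta_i),
    where doppler[i] = v_i and bearing[i] = theta_i (rad).
    """
    A = np.column_stack([np.cos(bearing), np.sin(bearing)])  # design matrix
    v_rx, v_ry = np.linalg.lstsq(A, doppler, rcond=None)[0]
    V_R = np.hypot(v_rx, v_ry)        # radar speed magnitude
    alpha = np.arctan2(v_ry, v_rx)    # orientation angle in the radar frame
    return V_R, alpha
```

With three or more well-spread bearings the system is overdetermined, which also averages down the Doppler measurement noise.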
Now, the magnitude ($V_R$) and the orientation angle ($\alpha$) can be calculated from $v_{Rx}$ and $v_{Ry}$:

$$V_R = \sqrt{v_{Rx}^2 + v_{Ry}^2}, \qquad \alpha = \tan^{-1}\left(\frac{v_{Ry}}{v_{Rx}}\right)$$
The final step is to convert the radar velocity ($V_R$) from the radar frame into the vehicle frame to estimate the vehicle's forward speed ($V_{veh}$):

$$V_{veh} = V_R \cos(\alpha - \varphi)$$

where $\varphi$ is the boresight angle between the radar and vehicle frames.
Since the IMU body frame was aligned with the vehicle frame, the vehicle's estimated forward speed is used to estimate the vehicle's translation in the east and north directions. Then, the updated position of the vehicle can be estimated:

$$\Delta x_t = V_{veh}\,\Delta t\,\sin(az_t), \qquad \Delta y_t = V_{veh}\,\Delta t\,\cos(az_t)$$

where $\Delta x_t$ and $\Delta y_t$ are the translations in the east and north directions at time $t$, $\Delta t$ is the time difference between the radar frames, and $az_t$ is the heading angle at time $t$.
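This dead-reckoning step is a two-line update. A sketch assuming the heading is measured clockwise from north (the helper name is illustrative):

```python
import numpy as np

def propagate(x, y, v_veh, az, dt):
    """East/north position update from forward speed and heading.

    x, y: current east/north position (m); v_veh: forward speed (m/s);
    az: heading angle (rad, clockwise from north); dt: radar frame interval (s).
    """
    dx = v_veh * dt * np.sin(az)   # east translation
    dy = v_veh * dt * np.cos(az)   # north translation
    return x + dx, y + dy
```

For example, 10 m/s due north for one second moves the position 10 m north and 0 m east.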

Radar/INS Integration
In open-sky environments, loosely coupled GNSS/INS integration was used to estimate the vehicle's navigation solution. However, a successive Kalman filter was implemented in case of a GNSS outage. First, the radar solution was integrated with the INS solution through a closed-loop EKF. Then, the corrected position, obtained via map matching, was also integrated through an EKF to aid the IMU and overcome long-time errors. One of the practical aspects of multi-sensor fusion is data synchronization, which can be achieved in various ways, such as interpolation or by searching for the closest values [32]. In our work, due to the high IMU rate compared to the radar rate, the algorithm searched for the closest IMU timestamp for each radar frame.
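Because the IMU timestamps are sorted and far denser than the radar frames, the closest-timestamp search can be done with a binary search rather than a linear scan. A sketch of this pairing (helper name illustrative):

```python
import numpy as np

def closest_imu_index(imu_times, radar_time):
    """Index of the IMU sample closest in time to a radar frame.

    imu_times must be sorted ascending; the IMU rate is much higher
    than the radar rate, so nearest-sample pairing is adequate.
    """
    i = np.searchsorted(imu_times, radar_time)
    if i == 0:
        return 0
    if i == len(imu_times):
        return len(imu_times) - 1
    # pick the nearer of the two neighbouring samples
    if imu_times[i] - radar_time < radar_time - imu_times[i - 1]:
        return i
    return i - 1
```

With a 100 Hz IMU and an 8 Hz radar, the pairing error is bounded by half the IMU sample interval (5 ms).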
A KF consists of two phases (the prediction stage and the update stage). There are two main models needed for the KF. The system model is the INS model, and it is the core for the prediction stage to predict the error states and the covariance matrix. The other model is the observation model, which depends on radar to update the navigation filter to estimate the error state to correct the navigation solution. There are 15 error states (δx).
$$\delta x = \left[\delta p_{1\times3}\;\; \delta v_{1\times3}\;\; \delta\theta_{1\times3}\;\; \delta b_{a,1\times3}\;\; \delta b_{g,1\times3}\right]^{T}$$

where $\delta p_{1\times3}$ is the position error, $\delta v_{1\times3}$ is the velocity error, $\delta\theta_{1\times3}$ is the attitude error, $\delta b_{a,1\times3}$ is the accelerometer bias, and $\delta b_{g,1\times3}$ is the gyroscope bias.
The system model in the continuous case is described as follows:

$$\dot{x} = Fx + Gw$$

where $\dot{x}$ is the time rate of change of the state vector, $F$ is the dynamic matrix, $x$ is the state vector, $G$ is the noise coefficient matrix, and $w$ is the system noise.
The system model in the discrete case is as follows:

$$x^{-}_{t+\Delta t} = \Phi x_t + w_t, \qquad P^{-}_{t+\Delta t} = \Phi P_t \Phi^{T} + Q_t$$

where $x^{-}_{t+\Delta t}$ is the predicted state vector at the time $(t + \Delta t)$, $\Phi$ is the transition matrix, $x_t$ is the state vector at the time $(t)$, $P^{-}_{t+\Delta t}$ is the predicted covariance matrix at the time $(t + \Delta t)$, $P_t$ is the covariance matrix at the time $(t)$, $w_t$ is the system noise at the time $(t)$, and $Q_t$ is the process noise matrix. The observation model is the core of the update stage, and it depends on the other available sensors to aid the IMU; in this case, the estimated position and heading from the radar were used to build the observation model:

$$z = Hx + \eta$$
where $z$ represents the measurements, $H$ represents the design matrix, and $\eta$ represents the measurement noise. The update stage is as follows:

$$K_{t+\Delta t} = P^{-}_{t+\Delta t} H^{T}\left(H P^{-}_{t+\Delta t} H^{T} + R_{t+\Delta t}\right)^{-1}$$
$$\hat{x}_{t+\Delta t} = x^{-}_{t+\Delta t} + K_{t+\Delta t}\left(z - H x^{-}_{t+\Delta t}\right)$$
$$\hat{P}_{t+\Delta t} = \left(I - K_{t+\Delta t} H\right) P^{-}_{t+\Delta t}$$

where $K_{t+\Delta t}$ is the gain matrix, $R_{t+\Delta t}$ is the covariance matrix of the measurement noise, $\hat{x}_{t+\Delta t}$ is the updated state vector, and $\hat{P}_{t+\Delta t}$ is the updated covariance matrix of the state vector.
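The two stages amount to a few lines of matrix algebra. A generic sketch (not the authors' 15-state implementation; function names are illustrative):

```python
import numpy as np

def ekf_predict(x, P, Phi, Q):
    """Prediction stage: propagate the error state and its covariance."""
    return Phi @ x, Phi @ P @ Phi.T + Q

def ekf_update(x, P, z, H, R):
    """Update stage using the radar position/heading measurements z."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)              # corrected error state
    P_new = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x_new, P_new
```

In the closed-loop configuration, the estimated error states are fed back to correct the INS mechanization and then reset to zero.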

Map Matching
OSM is an open access map that anyone can download, edit, or update. Maps of Calgary and Toronto were downloaded and employed in this research. The map-matching technique was applied in two steps. The first step was to select the correct map link. Geometrical and topological techniques were exploited to determine the correct map link. A map link consists of points or nodes. The distance between the integrated position and the nodes was calculated (Figure 4), and the map link with the minimum distance was selected. Another constraint that can be adopted to ensure the selected map link is correct is the azimuth: the integrated azimuth should be consistent with the map link azimuth.

The second step was to project the integrated position onto the selected map link to find the vehicle's corrected position. These two steps were implemented at each epoch unless the vehicle began to turn. A turn could be detected by checking the azimuth angular velocity measurements: if their absolute values were higher than a threshold, the vehicle had reached an intersection and was making a turn. Moreover, the azimuth angular velocity measurements could be utilized to determine whether the vehicle was turning right or left, which aided in selecting the new map link (Figure 5).
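A minimal sketch of both steps on straight segments, assuming each map link is reduced to a single two-node segment (real OSM ways are polylines, and the azimuth consistency check and turn handling are omitted here; the helper name is illustrative):

```python
import numpy as np

def match_to_link(p, links):
    """Select the nearest map link and project the position onto it.

    p: integrated 2-D position; links: list of (node_a, node_b) pairs,
    each node a 2-D coordinate.
    """
    best, best_d2 = None, np.inf
    for a, b in links:
        a, b = np.asarray(a, float), np.asarray(b, float)
        ab = b - a
        # parameter of the perpendicular foot, clamped to the segment
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        proj = a + t * ab                  # projection onto the link
        d2 = np.sum((p - proj) ** 2)      # squared distance to the link
        if d2 < best_d2:
            best, best_d2 = proj, d2
    return best
```

In practice, the azimuth check would discard candidate links whose direction disagrees with the integrated heading before the distance comparison.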

Experimental Work and Results
Two real datasets were collected. The first was collected in a suburban area of Calgary, Alberta, Canada, while the second was collected in downtown Toronto, Ontario, Canada. For the Calgary dataset, four FMCW UMRR-11 Type 132 radar units were mounted on the vehicle's roof [33]. In addition, the Xsens MTi-G-710 module was used to collect the IMU and GNSS measurements [34], and the reference data were collected by a Novatel SPAN-SE system with an IMU-FSAS (Figures 6 and 7). On the other hand, in the Toronto test, four FMCW UMRR-96 Type 153 radar units were used [35]. In addition, the u-blox ZED-F9R module was used to collect the IMU and GNSS measurements [36]. Finally, the reference data were collected by a Novatel PwrPak7 system with an IMU-KVH1750.


Figure 6. The four UMRR-11 Type 132 radar units used to collect the data in Calgary.
The first algorithm applied was to estimate the vehicle's forward velocity from two of the four radar units. Data from the two radar units at the front right and left of the vehicle were utilized for this purpose. Then, radar odometry was applied to the four radar units. Finally, the radar solution was integrated with the INS, and map matching was applied to the integrated position. The next subsections show the results for the two datasets.

Calgary Data
For the Calgary data, Figure 8 shows the estimated average forward velocity from the two radar units compared to the forward velocity from the reference system. The RMSE of the estimated average forward velocity was 1.08 m/s, the distance traveled was 2.1 km, and the test duration was 9.98 min. The map of the area in which the data was collected was downloaded from the OSM website for map matching (Figure 9). A simulated GNSS outage was introduced for 90 s, and the corrected position after applying the map-matching technique is shown in Figure 10.


Toronto Data
The vehicle's forward velocity was estimated, as shown in Figure 11. The RMSE for the estimated average forward velocity is 2.65 m/s, the distance traveled was 9.94 km, and the test duration was 42.7 min.

Figure 11. The estimated average forward speed from the radar unit versus the forward reference speed for Toronto data.
Two different simulated GNSS signal outages were introduced to prove the efficiency of the proposed algorithm. The simulated outage duration ranged from 30 s to 180 s. Finally, the road map was downloaded to apply the map-matching technique (Figure 12). Since these data were collected in the downtown area, the GNSS signal suffered from blockage and multipath errors. However, Figure 13 shows that the radar/INS integrated solution with map matching is better than the GNSS solution, demonstrating the benefit of the proposed integrated solution. From Table 2, the position percentage error is less than 0.5% for the three minutes of the simulated GNSS outage. The position RMSE is 2.72 m, and the distance traveled was 611.69 m for the three-minute outage (Figure 13). The percentage errors are 0.61%, 0.86%, and 1.01% for the 60 s, 90 s, and 120 s GNSS outages, respectively, while the traveled distance ranged from 204 m to 304 m.
Figure 14 shows the integrated solution with map matching during another simulated GNSS outage, and Table 3 shows the corresponding position errors. The GNSS outage duration was again from 30 s to 180 s. The position RMSE is 4.47 m for a traveled distance of 845.74 m, and the percentage error during the three-minute GNSS signal outage is 0.53%. For 60 s, 90 s, and 120 s of GNSS outage, the percentage errors are 0.71%, 0.64%, and 0.61%, respectively, and the traveled distance ranged from 495.16 m to 679.49 m.
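The percentage figures above are simply the position RMSE expressed as a fraction of the distance traveled during the outage, as the following check of the two 180 s cases shows:

```python
def percent_error(rmse_m, distance_m):
    """Position RMSE as a percentage of the distance traveled."""
    return 100.0 * rmse_m / distance_m

# 180 s outages reported in the text:
print(round(percent_error(2.72, 611.69), 2))  # 0.44 -> "less than 0.5%" (Table 2)
print(round(percent_error(4.47, 845.74), 2))  # 0.53 (Table 3)
```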

Enable and Disable Map Matching
The map-matching technique was applied during the GNSS outage. However, at road intersections, the map-matching algorithm was disabled: the map lines intersect at 90° (straight lines), whereas the vehicle moves from one line to another through a curve. Therefore, the map-matching technique was disabled during intersections and re-enabled once the vehicle left the intersection and moved in a straight line, as shown in Figure 15.
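Since intersections are detected from the azimuth (yaw) angular velocity, the enable/disable logic can be sketched as a simple gate on the gyro z-axis rate. The threshold value below is illustrative only; the paper does not state one.

```python
def map_matching_enabled(gyro_z_dps, threshold_dps=3.0):
    """Gate the map-matching correction on the azimuth angular rate:
    while the vehicle is turning through an intersection (high yaw rate),
    map matching is disabled; on straight segments it is re-enabled.
    threshold_dps is an assumed, illustrative value."""
    return abs(gyro_z_dps) < threshold_dps
```

A practical version would add hysteresis or a short hold-off time so the gate does not chatter on noisy gyro samples near the threshold.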

Conclusions
This research proposed a new technique based on radar/INS integration and map matching for robust vehicle navigation in GNSS-challenging environments. The proposed system consists of four static FMCW radar units and OSM for map matching. Two radar units were used to estimate the vehicle's forward speed, while all four radar units were used to estimate the vehicle's translation and rotation. The radar solution was first integrated with the INS solution, and then map-matching techniques were employed to correct the integrated position and obtain the final navigation solution. Road intersections were detected from the azimuth angular velocity measurements, and map matching was deactivated at the intersections. Real driving data were collected to test the proposed method, and simulated GNSS signal outages were introduced. The position RMS error was less than 1% of the traveled distance in most cases during two- and three-minute GNSS signal outages.
One of the limitations of the proposed algorithm is that the map accuracy limits the final accuracy. Additionally, OSM maps are open source, so anyone can edit or update them. Therefore, it is recommended to use high-definition maps to improve the corrected position accuracy. In future work, a comparison of the estimated forward velocity from the radar units with the forward velocity from the vehicle odometer will be investigated. Furthermore, the use of a single EKF or different sensor-fusion filters and architectures will be studied. Finally, a smart module will be developed to automatically switch from GNSS/INS integration to radar/INS integration based on factors such as the satellite geometry and the number of visible satellites.