Camera, LiDAR and Multi-modal SLAM Systems for Autonomous Ground Vehicles: a Survey

  • Survey Paper
  • Published:
Journal of Intelligent & Robotic Systems

Abstract

Simultaneous Localization and Mapping (SLAM) has been widely studied for autonomous vehicles over the past years. SLAM constructs a map of an unknown environment while simultaneously keeping track of the vehicle's location within it. A major challenge, which is paramount in the design of SLAM systems, lies in the efficient use of onboard sensors to perceive the environment. The most widely applied algorithms are camera-based and LiDAR-based SLAM. Recent research focuses on fusing camera-based and LiDAR-based frameworks, which shows promising results. In this paper, we first present a study of commonly used sensors and the fundamental theories behind SLAM algorithms, followed by the hardware architectures used to run these algorithms and, where available, the performance obtained on them. Second, we highlight state-of-the-art methodologies in each modality and in the multi-modal framework, and we conclude with a brief comparison and an outline of future challenges. Additionally, we provide insights into possible fusion approaches that can increase the robustness and accuracy of modern SLAM algorithms, thereby enabling the hardware-software co-design of embedded systems that accounts for algorithmic complexity, embedded architectures, and real-time constraints.
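
As standard background (this formulation is textbook material, not taken verbatim from the survey), the full SLAM problem summarized above can be written as the estimation of the joint posterior over the vehicle trajectory x_{0:T} and the map m, given the sensor observations z_{1:T} (e.g., camera images or LiDAR scans) and the odometry or control inputs u_{1:T}:

\[
p(x_{0:T}, m \mid z_{1:T}, u_{1:T}) \;\propto\; p(x_0) \prod_{t=1}^{T} \underbrace{p(x_t \mid x_{t-1}, u_t)}_{\text{motion model}} \; \underbrace{p(z_t \mid x_t, m)}_{\text{observation model}}
\]

Camera-based, LiDAR-based, and multi-modal systems differ mainly in the sensor that defines the observation model and in how this posterior is approximated (filtering, graph optimization, or learning-based inference).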

Code Availability

Not applicable

Funding

This work was supported by the French Ministry of Higher Education, Research and Innovation.

Author information

Contributions

All authors contributed to the study conception and methodology. Abdelhafid El Ouardi and Sergio Rodriguez had the idea for the article and supervised the process. Sergio Rodriguez was in charge of providing useful insights into the algorithmic aspects of the discussed SLAM methods. Abdelhafid El Ouardi was in charge of providing useful insights into the hardware aspects of the discussed SLAM systems. Mohammed Chghaf was in charge of synthesizing recent mono-modal and multi-modal SLAM strategies and prepared the first draft of the manuscript. He was in charge of investigation, literature search, and data analysis. Sergio Rodriguez and Abdelhafid El Ouardi commented on previous versions of the manuscript and critically revised the work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mohammed Chghaf.

Ethics declarations

Ethics approval and consent to participate

Not applicable

Consent for Publication

Not applicable

Conflicts of interest/Competing interests

The authors declare that they have no known conflicts or competing interests that could have appeared to influence the work reported in this paper.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Chghaf, M., Rodriguez, S. & Ouardi, A.E. Camera, LiDAR and Multi-modal SLAM Systems for Autonomous Ground Vehicles: a Survey. J Intell Robot Syst 105, 2 (2022). https://doi.org/10.1007/s10846-022-01582-8

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s10846-022-01582-8

Keywords

Navigation