
Vehicle-Related Distance Estimation Using Customized YOLOv7

  • Conference paper
  • First Online:
Image and Vision Computing (IVCNZ 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13836)

Abstract

With the growing popularity of autonomous driving, the development of ADAS (Advanced Driver Assistance Systems), especially collision avoidance systems, has become an important branch of deep learning research. Faced with complex traffic environments, a collision avoidance system needs to detect vehicles quickly and accurately and estimate the distance to the vehicle in front. Against this background, this paper investigates how to build a fast and robust model for vehicle distance estimation. Theoretical insights from odometry and a customized YOLOv7 are synthesized, on the basis of which a conceptual framework is proposed. KITTI is employed as the dataset for model training and testing. As one of the pioneering works on distance estimation using KITTI, the unique value of this work lies in being the first to employ YOLOv7 with an attention module as a distance estimation model, achieving an RMSE of 4.253.
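
Although the paper's full model is not reproduced here, the two ingredients named in the abstract can be illustrated with a minimal sketch: a CBAM-style channel-attention block of the kind that could be inserted into a YOLOv7 backbone, and the RMSE metric used to score predicted distances against ground truth. This is not the authors' released code; all class, variable, and tensor names below are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): a CBAM-style channel-attention
# block plus the RMSE metric used for distance-estimation evaluation.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """CBAM-style channel attention: squeeze spatial dims, re-weight channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shared MLP applied to both average- and max-pooled descriptors.
        attn = self.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * attn  # channel-wise re-weighting of the feature map


def rmse(pred_dist: torch.Tensor, gt_dist: torch.Tensor) -> float:
    """Root-mean-square error between predicted and ground-truth distances (metres)."""
    return torch.sqrt(torch.mean((pred_dist - gt_dist) ** 2)).item()


if __name__ == "__main__":
    feat = torch.randn(1, 256, 20, 20)        # hypothetical backbone feature map
    print(ChannelAttention(256)(feat).shape)  # torch.Size([1, 256, 20, 20])
    print(rmse(torch.tensor([12.1, 30.4]), torch.tensor([11.8, 31.0])))
```

In this reading, the attention block is a drop-in feature refiner, while distance regression is evaluated per detected vehicle with RMSE, matching the 4.253 figure reported in the abstract as the evaluation metric rather than a detection score.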

Author information

Corresponding author

Correspondence to Xiaoxu Liu.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, X., Yan, W.Q. (2023). Vehicle-Related Distance Estimation Using Customized YOLOv7. In: Yan, W.Q., Nguyen, M., Stommel, M. (eds) Image and Vision Computing. IVCNZ 2022. Lecture Notes in Computer Science, vol 13836. Springer, Cham. https://doi.org/10.1007/978-3-031-25825-1_7

  • DOI: https://doi.org/10.1007/978-3-031-25825-1_7

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25824-4

  • Online ISBN: 978-3-031-25825-1

  • eBook Packages: Computer Science, Computer Science (R0)
