
Eye Fixation Location Recommendation in Advanced Driver Assistance System

  • Original Article
Journal of Electrical Engineering & Technology

Abstract

Recent progress in visual attention modeling for mediated perception in advanced driver assistance systems (ADAS) has drawn the attention of both computer vision and human vision researchers. However, it remains debatable whether the driver’s actual eye fixation locations (EFLs) or the EFLs predicted by computational visual attention models (CVAMs) are more reliable for safe driving under real-life conditions. Using ten typical categories of natural driving video clips, we analyzed the suitability of two kinds of EFLs: those of human drivers and those predicted by CVAMs. In this analysis, EFLs confirmed by two experienced drivers served as the reference. We found that neither approach alone is suitable for safe driving, and that which EFL is suitable depends on the driving conditions. Based on this finding, we propose a novel strategy that recommends an EFL to the driver for ADAS under ten predefined real-life driving conditions, selecting one of three EFL modes per condition: driver’s EFL only, CVAM’s EFL only, and interchangeable EFL, in which the driver’s EFL and the CVAM’s EFL may be used interchangeably. Because the selection between the two EFLs is a typical binary classification problem, we apply support vector machines (SVMs) to solve it, and we provide a quantitative evaluation of the classifiers. The performance evaluation of the proposed recommendation method indicates that it is potentially useful to ADAS for future safe driving.
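
As an illustration of the classification step described above: the choice between the driver’s EFL and the CVAM’s EFL is a binary decision, which the abstract says is solved with SVMs. The minimal Python sketch below shows this setup under stated assumptions only; the feature set, data, and labels are hypothetical stand-ins (the abstract does not specify them), and scikit-learn’s SVC, which wraps LIBSVM, stands in for whatever SVM implementation the authors used.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import classification_report

    # Hypothetical per-clip features describing the driving condition
    # (e.g., scene clutter, ego speed, illumination). Labels: 0 = recommend
    # the driver's EFL, 1 = recommend the CVAM's EFL.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                   # 200 clips, 3 features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    # RBF-kernel SVM with feature standardization.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_train, y_train)

    # Quantitative evaluation of the classifier: per-class precision,
    # recall and F-measure on the held-out clips.
    print(classification_report(y_test, clf.predict(X_test),
                                target_names=["driver EFL", "CVAM EFL"]))

The interchangeable EFL mode admits a third outcome; a natural extension of this sketch would be a multi-class (one-vs-rest) SVM, but since the abstract frames the core selection as binary, the sketch keeps two classes.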




Author information

Corresponding author

Correspondence to Seop Hyeong Park.


About this article


Cite this article

Xu, J., Guo, K., Menchinelli, F. et al. Eye Fixation Location Recommendation in Advanced Driver Assistance System. J. Electr. Eng. Technol. 14, 965–978 (2019). https://doi.org/10.1007/s42835-019-00091-3
