
Automatic Detection of a Driver’s Complex Mental States

  • Conference paper
Computational Science and Its Applications – ICCSA 2017 (ICCSA 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10406)


Abstract

Automatic classification of drivers’ mental states is an important yet relatively unexplored topic. In this paper, we define a taxonomy of complex mental states relevant to driving, namely: Happy, Bothered, Concentrated and Confused. We present our methodology for segmenting and annotating a spontaneous dataset of natural driving videos from 10 different drivers. We also present the real-time annotation tool used to label the dataset via an emotion perception experiment, and discuss the challenges faced in obtaining the ground-truth labels. Finally, we present a methodology for automatic classification of drivers’ mental states. Using facial Action Units as input features, we compare SVM models trained on our dataset with an existing nearest-neighbour model pre-trained on a posed dataset, and demonstrate that our temporal SVM approach yields better results. The dataset’s extracted features and validated emotion labels, together with the annotation tool, will be made available to the research community.
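The classification pipeline described above can be sketched in miniature: an SVM is trained on facial Action Unit (AU) feature vectors and predicts one of the four mental-state classes. The AU values, labels, and feature layout below are synthetic placeholders for illustration only; the paper extracts its features from real driving videos, and its exact feature representation and SVM configuration may differ.

```python
# Hypothetical sketch of the AU-feature SVM classification step.
# All data here is synthetic; only the overall shape of the pipeline
# (AU features in, one of four mental-state labels out) follows the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
STATES = ["Happy", "Bothered", "Concentrated", "Confused"]

# Synthetic dataset: 200 video segments, each summarised by the mean
# intensity of 17 AUs (an assumed AU count, not the paper's).
X = rng.random((200, 17))
y = rng.integers(0, len(STATES), size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0)  # RBF-kernel SVM; one-vs-one multiclass
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
print("predicted state of first test segment:", STATES[preds[0]])
```

A temporal variant, as the paper proposes, would summarise each segment over a sliding window of frames rather than a single static feature vector.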

Notes

  1. Please email marwa.mahmoud@cl.cam.ac.uk for the link and password.

  2. https://github.com/mzy0369/VideoAnnotator.



Acknowledgment

The work presented in this paper was funded and supported by Jaguar Land Rover, Coventry, UK.

Author information

Corresponding author

Correspondence to Zhiyi Ma.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Ma, Z., Mahmoud, M., Robinson, P., Dias, E., Skrypchuk, L. (2017). Automatic Detection of a Driver’s Complex Mental States. In: Gervasi, O., et al. (eds.) Computational Science and Its Applications – ICCSA 2017. ICCSA 2017. Lecture Notes in Computer Science, vol. 10406. Springer, Cham. https://doi.org/10.1007/978-3-319-62398-6_48

  • DOI: https://doi.org/10.1007/978-3-319-62398-6_48

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-62397-9

  • Online ISBN: 978-3-319-62398-6

  • eBook Packages: Computer Science (R0)
