
Recognizing Driver Activities Using Deep Learning Approaches Based on Smartphone Sensors

  • Conference paper
Multi-disciplinary Trends in Artificial Intelligence (MIWAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13651)


Abstract

Human motion detection based on smartphone sensors has gained popularity for identifying everyday activities and enhancing situational awareness in pervasive and ubiquitous computing research. Modern machine learning and deep learning classifiers have been demonstrated on benchmark datasets to interpret human behavior, including driving activities. Recognizing driver behavior can, for instance, help trigger accident detection. In this paper, we investigate driving-activity detection using deep learning techniques and smartphone sensors. We propose the DriveNeXt classifier, which employs convolutional layers to extract spatial features together with a multi-branch aggregation transformation. We evaluated the proposed model on a publicly available benchmark dataset covering four activities: a driver entering, exiting, sitting in, and standing outside a vehicle. Classifier performance was measured using two common HAR metrics, accuracy and F1-score. Across multiple experiments, the proposed DriveNeXt outperformed previous baseline deep learning models, achieving the highest accuracy of 96.95% and the highest F1-score of 96.82%.
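The abstract does not detail DriveNeXt's architecture, but the "multi-branch aggregation transformation" it names is the ResNeXt-style idea of applying several parallel branch transformations to the same input window and summing their outputs. The sketch below illustrates that aggregation for a 1D sensor window in plain NumPy; the function names (`conv1d`, `multi_branch_block`), the channel counts, the branch cardinality of 4, and the window length are all illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def conv1d(x, w):
    """Valid 1D convolution. x: (C_in, T), w: (C_out, C_in, K) -> (C_out, T-K+1)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            # Correlate the kernel with the window starting at time step t.
            out[o, t] = np.sum(w[o] * x[:, t:t + k])
    return out

def multi_branch_block(x, branch_weights):
    """ResNeXt-style aggregated transformation: every branch applies its own
    convolution to the same input, and the branch outputs are summed."""
    return sum(conv1d(x, w) for w in branch_weights)

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 128))  # 6 sensor channels (e.g. 3-axis acc + gyro), 128 time steps
branches = [rng.standard_normal((16, 6, 3)) * 0.1 for _ in range(4)]  # cardinality 4
y = multi_branch_block(x, branches)
print(y.shape)  # (16, 126): 16 output channels, 128 - 3 + 1 time steps
```

In a trained network each branch would typically project to a low-dimensional embedding before its convolution, and the aggregated output would pass through a nonlinearity and possibly a residual connection; the sketch keeps only the aggregation step itself.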



Acknowledgment

This research project was supported by the Thailand Science Research and Innovation fund; the University of Phayao (Grant No. FF65-RIM041); National Science, Research and Innovation (NSRF); and King Mongkut’s University of Technology North Bangkok with Contract No. KMUTNB-FF-66-07.

The authors also gratefully acknowledge the support provided by Thammasat University Research fund under the TSRI, Contract No. TUFF19/2564 and TUFF24/2565, for the project of “AI Ready City Networking in RUN”, based on the RUN Digital Cluster collaboration scheme.

Author information


Correspondence to Sakorn Mekruksavanich.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Mekruksavanich, S., Jantawong, P., Hnoohom, N., Jitpattanakul, A. (2022). Recognizing Driver Activities Using Deep Learning Approaches Based on Smartphone Sensors. In: Surinta, O., Kam Fung Yuen, K. (eds) Multi-disciplinary Trends in Artificial Intelligence. MIWAI 2022. Lecture Notes in Computer Science, vol. 13651. Springer, Cham. https://doi.org/10.1007/978-3-031-20992-5_13


  • DOI: https://doi.org/10.1007/978-3-031-20992-5_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20991-8

  • Online ISBN: 978-3-031-20992-5

  • eBook Packages: Computer Science (R0)
