Preprint / Version 1

Human Emotion Recognition and Song Recommendation Model

Authors

  • Advik Katiyar, Oberoi International School
  • Mohith Manohar

DOI:

https://doi.org/10.58445/rars.1046

Keywords:

computer science, machine learning, emotion recognition, song recommendation

Abstract

In this paper, I present a project that recognizes human emotions and recommends a song based on them. The model uses machine learning techniques to classify facial images into five basic emotional states: happiness, anger, sadness, surprise, and neutral. It employs a single machine learning model, trained on a dataset of labeled facial images, that uses a Haar Cascade classifier for face detection and the Local Binary Patterns Histograms (LBPH) algorithm for feature extraction and classification to recognize emotions in uploaded images. Supporting code then recommends songs based on the recognized emotion. This paper evaluates the accuracy of the emotion recognition model and the effectiveness of the song recommendation system. These findings can contribute to the development of more advanced emotion recognition models and music recommendation systems in the future.
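The pipeline the abstract describes (detect a face, predict an emotion label, map the label to a song) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual code: the label numbering, song choices, and function names are hypothetical. With OpenCV, `detector` would be a `cv2.CascadeClassifier` and `recognizer` a trained `cv2.face.LBPHFaceRecognizer`; here they are passed in as parameters so the flow is visible on its own.

```python
# Hypothetical sketch of the recognize-then-recommend pipeline.
# Label numbering and song titles are illustrative assumptions.

EMOTIONS = {0: "happiness", 1: "anger", 2: "sadness", 3: "surprise", 4: "neutral"}

# Illustrative emotion-to-song mapping for the recommendation step.
SONGS = {
    "happiness": "Here Comes the Sun",
    "anger": "Break Stuff",
    "sadness": "Someone Like You",
    "surprise": "Bohemian Rhapsody",
    "neutral": "Clair de Lune",
}

def recommend_song(label):
    """Map a predicted class label to an emotion name and a song."""
    emotion = EMOTIONS.get(label, "neutral")
    return emotion, SONGS[emotion]

def recognize_and_recommend(image_gray, detector, recognizer):
    """Run face detection, LBPH-style prediction, then recommendation.

    `detector` must expose detectMultiScale(image) returning (x, y, w, h)
    boxes, and `recognizer` must expose predict(face_roi) returning
    (label, confidence), matching OpenCV's interfaces.
    """
    faces = detector.detectMultiScale(image_gray)
    if len(faces) == 0:
        return None  # no face found in the uploaded image
    x, y, w, h = faces[0]                      # use the first detected face
    face_roi = image_gray[y:y + h, x:x + w]    # crop the face region
    label, _confidence = recognizer.predict(face_roi)
    return recommend_song(label)
```

In a real deployment the recognizer would first be trained on the labeled facial-image dataset (e.g. `recognizer.train(faces, labels)` in OpenCV), and the recommendation step could draw from a playlist per emotion rather than a single fixed song.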

Posted

2024-03-25