BTSAMA: A Personalized Music Recommendation Method Combining TextCNN and Attention

Shaomin Lv, Li Pan
International Journal of Ambient Computing and Intelligence (IJACI)
Copyright: © 2023 | Volume: 14 | Issue: 1 | Pages: 23
ISSN: 1941-6237 | EISSN: 1941-6245 | EISBN13: 9781668479247 | DOI: 10.4018/IJACI.327351

Abstract

To address the problems of existing personalized music recommendation methods, such as poor explainability, low recommendation accuracy, and difficulty in extracting information effectively, a personalized music recommendation method combining TextCNN and attention is proposed. First, the TextCNN model is combined with BERT to capture local continuous music features. Second, self-attention is introduced to capture the non-continuous features that TextCNN overlooks. Finally, a multi-head attention mechanism is used to extract the features of hotspot music and of the music the user is interested in, and a cascading fusion method is used to perform click prediction. Experiments show that the proposed model can effectively recommend personalized music: its MAE values on the FMA and GTZAN datasets are 0.156 and 0.146, respectively, an improvement of at least 6.6% and 3.3% over the comparison models, and its RMSE values on the FMA and GTZAN datasets are 0.185 and 0.164, respectively, an improvement of at least 12.4% and 5.2% over the comparison models.
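
The abstract describes the model only at a high level, so the following is a minimal PyTorch sketch of how such a pipeline could be wired together; it is not the authors' implementation. The class name BTSAMASketch, all dimensions, and the use of nn.Embedding as a stand-in for pretrained BERT embeddings are assumptions made only to keep the example self-contained and runnable.

# A minimal sketch of a BTSAMA-style pipeline (assumed structure, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BTSAMASketch(nn.Module):
    def __init__(self, vocab_size=30522, emb_dim=128, n_filters=64,
                 kernel_sizes=(2, 3, 4), n_heads=4):
        super().__init__()
        # Stand-in for BERT: plain token embeddings of the input sequence.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # TextCNN branch: convolutions of several widths capture local,
        # continuous feature patterns.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes)
        # Self-attention branch: recovers non-continuous (long-range)
        # dependencies that the convolutions miss.
        self.self_attn = nn.MultiheadAttention(emb_dim, n_heads, batch_first=True)
        # Multi-head attention fusing hotspot-music features with the
        # user's interest features.
        self.cross_attn = nn.MultiheadAttention(emb_dim, n_heads, batch_first=True)
        fused_dim = n_filters * len(kernel_sizes) + emb_dim + emb_dim
        # Cascading fusion: concatenated features feed an MLP for click prediction.
        self.predictor = nn.Sequential(
            nn.Linear(fused_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, token_ids, hotspot_feats, user_feats):
        # token_ids: (B, L); hotspot_feats, user_feats: (B, n_items, emb_dim)
        x = self.embed(token_ids)                            # (B, L, E)
        # TextCNN: convolve over the sequence, then global max pooling.
        c = x.transpose(1, 2)                                # (B, E, L)
        local = torch.cat([F.relu(conv(c)).max(dim=2).values
                           for conv in self.convs], dim=1)   # (B, F * K)
        # Self-attention over the same sequence, mean-pooled.
        sa, _ = self.self_attn(x, x, x)                      # (B, L, E)
        global_feat = sa.mean(dim=1)                         # (B, E)
        # Cross attention: user-interest queries attend to hotspot music.
        ca, _ = self.cross_attn(user_feats, hotspot_feats, hotspot_feats)
        interest = ca.mean(dim=1)                            # (B, E)
        fused = torch.cat([local, global_feat, interest], dim=1)
        return torch.sigmoid(self.predictor(fused))          # click probability

# Toy usage with random inputs.
model = BTSAMASketch()
ids = torch.randint(0, 30522, (2, 20))
hot = torch.randn(2, 5, 128)
usr = torch.randn(2, 5, 128)
print(model(ids, hot, usr).shape)  # torch.Size([2, 1])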