A Cross-Domain Recommender System for Literary Books Using Multi-Head Self-Attention Interaction and Knowledge Transfer Learning

Yuan Cui, Yuexing Duan, Yueqin Zhang, Li Pan
Copyright: © 2023 | Volume: 19 | Issue: 1 | Pages: 22
ISSN: 1548-3924 | EISSN: 1548-3932 | EISBN13: 9781668479025 | DOI: 10.4018/IJDWM.334122
Cite Article

MLA

Cui, Yuan, et al. "A Cross-Domain Recommender System for Literary Books Using Multi-Head Self-Attention Interaction and Knowledge Transfer Learning." IJDWM, vol. 19, no. 1, 2023, pp. 1-22. http://doi.org/10.4018/IJDWM.334122

APA

Cui, Y., Duan, Y., Zhang, Y., & Pan, L. (2023). A Cross-Domain Recommender System for Literary Books Using Multi-Head Self-Attention Interaction and Knowledge Transfer Learning. International Journal of Data Warehousing and Mining (IJDWM), 19(1), 1-22. http://doi.org/10.4018/IJDWM.334122

Chicago

Cui, Yuan, et al. "A Cross-Domain Recommender System for Literary Books Using Multi-Head Self-Attention Interaction and Knowledge Transfer Learning," International Journal of Data Warehousing and Mining (IJDWM) 19, no. 1 (2023): 1-22. http://doi.org/10.4018/IJDWM.334122


Abstract

Existing book recommendation methods often overlook the rich information contained in review text, which limits their effectiveness. To address this, a cross-domain recommender system for literary books is proposed that leverages multi-head self-attention interaction and knowledge transfer learning. First, the BERT model is employed to obtain word vectors, and a CNN extracts user and item features. Then, higher-level features are captured by fusing multi-head self-attention with additive pooling. Finally, knowledge transfer learning is introduced to jointly model different domains, simultaneously extracting domain-specific features and features shared across domains. On the Amazon dataset, the proposed model achieves an MAE and MSE of 0.801 and 1.058 on the "movie-book" recommendation task and 0.787 and 0.805 on the "music-book" task, respectively, significantly outperforming other state-of-the-art recommendation models. The model also generalizes well to a Chinese dataset.
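To make the fusion step concrete, the following is a minimal NumPy sketch of multi-head self-attention over a sequence of feature vectors followed by additive (sum) pooling into a single representation. It illustrates the general mechanism only; the dimensions, weight matrices, and function names are illustrative assumptions, not the authors' actual architecture, which also involves BERT embeddings and CNN feature extractors.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """X: (seq_len, d_model). Project to Q/K/V, split into heads,
    apply scaled dot-product attention per head, then concatenate."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    def split(M):  # (seq_len, d_model) -> (n_heads, seq_len, d_head)
        return M.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)  # (n_heads, seq, seq)
    attn = softmax(scores, axis=-1)
    out = attn @ Vh                                        # (n_heads, seq, d_head)
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)

def additive_pooling(H):
    # sum-pool over the sequence dimension to get one feature vector
    return H.sum(axis=0)

rng = np.random.default_rng(0)
d_model, n_heads, seq_len = 8, 2, 5
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
H = multi_head_self_attention(X, Wq, Wk, Wv, n_heads)
z = additive_pooling(H)
print(z.shape)  # (8,)
```

In the paper's pipeline, `X` would stand in for the CNN-extracted user or item features derived from BERT word vectors, and `z` for the higher-level fused representation fed to the downstream rating predictor.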