Time Series Forecasting Model Based on Domain Adaptation and Shared Attention

  • Conference paper

In: Advances and Trends in Artificial Intelligence. Theory and Applications (IEA/AIE 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13926)

Abstract

Time series forecasting is an essential problem in many fields. Recently, with the development of big data technology, deep learning methods have been widely studied and have achieved promising performance on time series forecasting tasks. However, their performance degrades when only a limited number of time series, or of observations per time series, is available. For this case, a time series forecasting model based on domain adaptation and shared attention (DA-SA) is proposed in this study. First, we employ the Transformer architecture as the basic framework of our model. Then, we design a selectively shared attention module that transfers valuable information from the data-rich domain to the data-poor domain by inducing domain-invariant latent features (queries and keys) and retraining domain-specific features (values). In addition, a convolutional neural network is introduced to incorporate local context into the self-attention mechanism and capture the short-term dependencies of the data. Finally, adversarial training is utilized to enhance the robustness of the model and improve prediction accuracy. The practicality and effectiveness of DA-SA for time series forecasting are verified on real-world datasets.
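The shared-attention idea in the abstract can be sketched in code: queries and keys come from projections whose weights are shared across domains (and use causal 1-D convolutions to add local context), while each domain keeps its own value projection. This is a minimal single-head PyTorch sketch under our own assumptions; the class name `SharedAttention`, the dimensions, and the single-head simplification are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SharedAttention(nn.Module):
    """Single-head attention with Q/K projections shared across domains
    and a separate (domain-specific) V projection per domain."""

    def __init__(self, d_model: int, n_domains: int = 2, kernel_size: int = 3):
        super().__init__()
        # Causal 1-D convolutions project queries/keys with local context,
        # so attention scores reflect short-term patterns, not single points.
        pad = kernel_size - 1
        self.q_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.k_conv = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        # One value projection per domain retains domain-specific information.
        self.v_projs = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(n_domains)
        )
        self.out = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        xc = x.transpose(1, 2)  # (batch, d_model, seq_len) for Conv1d
        # Truncating the padded conv output on the right makes it causal.
        q = self.q_conv(xc)[..., :seq_len].transpose(1, 2)
        k = self.k_conv(xc)[..., :seq_len].transpose(1, 2)
        v = self.v_projs[domain](x)  # domain-specific values
        scores = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return self.out(scores @ v)


# Source (data-rich, domain 0) and target (data-poor, domain 1) batches
# pass through the same Q/K weights but different V projections.
attn = SharedAttention(d_model=16)
src = torch.randn(4, 24, 16)
tgt = torch.randn(4, 24, 16)
out_src = attn(src, domain=0)
out_tgt = attn(tgt, domain=1)
```

Because gradients from both domains flow through the shared Q/K convolutions, those projections are pushed toward domain-invariant features, while each domain's value projection is free to specialize.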

Supported by the National Nature Science Foundation of China under Grant 62003344 and the National Key Research and Development Program of China under Grant 2022YFB3304602.


Notes

  1. https://www.microsoft.com/en-us/research/project/urban-air/.

  2. https://mimic.physionet.org/gettingstarted/demo/.



Author information

Correspondence to Jie Tan.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Li, Y., Li, J., Liu, C., Tan, J. (2023). Time Series Forecasting Model Based on Domain Adaptation and Shared Attention. In: Fujita, H., Wang, Y., Xiao, Y., Moonis, A. (eds) Advances and Trends in Artificial Intelligence. Theory and Applications. IEA/AIE 2023. Lecture Notes in Computer Science, vol 13926. Springer, Cham. https://doi.org/10.1007/978-3-031-36822-6_19

  • DOI: https://doi.org/10.1007/978-3-031-36822-6_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36821-9

  • Online ISBN: 978-3-031-36822-6

  • eBook Packages: Computer Science (R0)
