
Transfer Learning in Complex Environments

Chapter in: Introduction to Transfer Learning

Abstract

Application environments change dynamically, and learning algorithms must change with them. To cope with changing environments, several new transfer learning algorithms have been developed. We define a complex environment mainly by the traits of its training data, since data is the key to modern transfer learning. In this chapter, we briefly introduce some of these complex environments and show how transfer learning algorithms can be adapted to meet the new challenges they pose. Note that recent literature contains many new settings, and we cannot cover them all.


Notes

  1. In general, m_i may be different for different time segments.

  2. When r = 1, the problem becomes the one-step prediction problem.
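Note 2 refers to r-step-ahead time series prediction. As a minimal sketch (assuming the standard sliding-window formulation; the function name and window size here are illustrative, not from the chapter), the series is turned into supervised pairs whose target lies r steps beyond the input window, so that r = 1 recovers ordinary one-step prediction:

```python
import numpy as np


def make_supervised(series, window, r):
    """Turn a 1-D series into (X, y) pairs for r-step-ahead prediction."""
    X, y = [], []
    for t in range(len(series) - window - r + 1):
        X.append(series[t:t + window])        # past `window` observations
        y.append(series[t + window + r - 1])  # target r steps ahead
    return np.array(X), np.array(y)


series = np.sin(np.linspace(0, 8 * np.pi, 200))
X1, y1 = make_supervised(series, window=10, r=1)  # one-step prediction
X3, y3 = make_supervised(series, window=10, r=3)  # three-step prediction

# Fit a simple least-squares linear predictor as a stand-in model.
w1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
```

Any forecasting model from the chapter's setting can replace the linear predictor; only the construction of the (X, y) pairs depends on r.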




Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Wang, J., Chen, Y. (2023). Transfer Learning in Complex Environments. In: Introduction to Transfer Learning. Machine Learning: Foundations, Methodologies, and Applications. Springer, Singapore. https://doi.org/10.1007/978-981-19-7584-4_13


  • DOI: https://doi.org/10.1007/978-981-19-7584-4_13

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-7583-7

  • Online ISBN: 978-981-19-7584-4

  • eBook Packages: Computer Science (R0)
