Abstract
Application environments change dynamically, and learning algorithms must change with them. To cope with such changing environments, several new transfer learning algorithms have been developed. We characterize a complex environment mainly by the traits of the training data, since data is the key to modern transfer learning. In this chapter, we briefly introduce some of these complex environments and show how transfer learning algorithms can be adapted to deal with the new challenges. Note that the recent literature contains many new settings, and we cannot cover them all.
Notes
- 1. In general, m_i may be different for different time segments.
- 2. When r = 1, it becomes the one-step prediction problem.
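To make the r-step setting in the notes concrete, the sketch below builds (input, target) pairs for r-step-ahead forecasting from a univariate series using a sliding window; with r = 1 it reduces to the one-step prediction problem mentioned above. The function name `make_windows` and the toy series are illustrative assumptions, not part of the chapter.

```python
import numpy as np

def make_windows(series, window, r):
    """Build (input, target) pairs for r-step-ahead forecasting:
    each input is `window` consecutive past values, and the target
    is the value r steps after the window ends."""
    X, y = [], []
    for t in range(len(series) - window - r + 1):
        X.append(series[t:t + window])
        y.append(series[t + window + r - 1])
    return np.array(X), np.array(y)

series = np.arange(10.0)                      # toy series 0..9
X, y = make_windows(series, window=3, r=1)    # one-step prediction
print(X[0], y[0])                             # [0. 1. 2.] 3.0
X2, y2 = make_windows(series, window=3, r=2)  # two-step-ahead
print(X2[0], y2[0])                           # [0. 1. 2.] 4.0
```

Note that the number of usable windows shrinks as r grows, which is why, as the first note points out, the segment sizes m_i may differ across time segments.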
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Wang, J., Chen, Y. (2023). Transfer Learning in Complex Environments. In: Introduction to Transfer Learning. Machine Learning: Foundations, Methodologies, and Applications. Springer, Singapore. https://doi.org/10.1007/978-981-19-7584-4_13
DOI: https://doi.org/10.1007/978-981-19-7584-4_13
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-7583-7
Online ISBN: 978-981-19-7584-4
eBook Packages: Computer Science (R0)