
Canonical Correlation Discriminative Learning for Domain Adaptation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 12269))

Abstract

Domain adaptation aims to reduce the discrepancy between the source and target domains and to improve classification of target samples by exploiting well-labeled source-domain data. However, most existing methods concentrate on learning domain-invariant features for cross-domain tasks while ignoring the correlation and discriminative information between the two domains. If the features learned from the source and target domains are uncorrelated, the adaptability of a domain adaptation method degrades substantially. To address this deficiency, we propose a novel domain adaptation approach, referred to as Canonical Correlation Discriminative Learning (CCDL). By introducing a novel correlation representation, CCDL maximizes the correlation between the features learned from the two domains. Specifically, CCDL learns a latent feature representation that reduces the domain difference by jointly adapting the marginal and conditional distributions of the source and target domains, while simultaneously maximizing the inter-class distance and minimizing the intra-class scatter. Experiments show that CCDL outperforms several state-of-the-art methods on four visual benchmark databases.
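The two ingredients the abstract combines can be sketched in a few lines of NumPy — a hypothetical illustration of the underlying quantities, not the authors' CCDL implementation. The first function computes a linear-kernel maximum mean discrepancy, a standard measure of the marginal-distribution gap between domains; the second computes canonical correlations (Hotelling's CCA) by whitening each view and taking the singular values of the whitened cross-covariance. Function names and the regularization constant are illustrative assumptions.

```python
import numpy as np

def linear_mmd(Xs, Xt):
    """Squared MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2.

    A common proxy for the marginal-distribution discrepancy
    between source samples Xs and target samples Xt (rows = samples).
    """
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def cca_correlations(X, Y, reg=1e-6):
    """Canonical correlations between two paired views X and Y.

    Whitens each view with the (regularized) within-view covariance;
    the singular values of the whitened cross-covariance are the
    canonical correlations, each in [0, 1].
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whitening transforms from Cholesky factors of the covariances.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    # Singular values of the whitened cross-covariance.
    return np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
```

In this reading, a correlation-maximizing method drives the leading canonical correlations between source and target features toward 1, while a distribution-adaptation term drives the MMD toward 0; CCDL additionally imposes the class-scatter criteria described above.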


Notes

  1. https://github.com/jindongwang/transferlearning.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61672357, 61732011), the Guangdong Basic and Applied Basic Research Foundation (2019A1515011493), the Natural Science Foundation of Shenzhen University (No. 2019046), and the Science Foundation of Shenzhen (Grant No. JCYJ20160422144110140).

Author information


Correspondence to Wenjing Wang.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, W., Lu, Y., Lai, Z. (2020). Canonical Correlation Discriminative Learning for Domain Adaptation. In: Bäck, T., et al. Parallel Problem Solving from Nature – PPSN XVI. PPSN 2020. Lecture Notes in Computer Science, vol 12269. Springer, Cham. https://doi.org/10.1007/978-3-030-58112-1_39


  • DOI: https://doi.org/10.1007/978-3-030-58112-1_39


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58111-4

  • Online ISBN: 978-3-030-58112-1

  • eBook Packages: Computer Science, Computer Science (R0)
