SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation

Authors

  • Wanqing Zhu — Fujian Province Key Laboratory of Information Security and Network Systems, Fuzhou 350108, China; College of Computer Science and Big Data, Fuzhou University, Fuzhou 350108, China
  • Jia-Li Yin — Fujian Province Key Laboratory of Information Security and Network Systems, Fuzhou 350108, China; College of Computer Science and Big Data, Fuzhou University, Fuzhou 350108, China
  • Bo-Hao Chen — Department of Computer Science and Engineering, Yuan Ze University, Taiwan
  • Ximeng Liu — Fujian Province Key Laboratory of Information Security and Network Systems, Fuzhou 350108, China; College of Computer Science and Big Data, Fuzhou University, Fuzhou 350108, China

DOI:

https://doi.org/10.1609/aaai.v37i3.25498

Keywords:

CV: Adversarial Attacks & Robustness, ML: Adversarial Learning & Robustness, ML: Transfer, Domain Adaptation, Multi-Task Learning

Abstract

As acquiring manual labels on data can be costly, unsupervised domain adaptation (UDA), which transfers knowledge learned from a richly labeled source dataset to an unlabeled target dataset, has gained increasing popularity. While extensive studies have been devoted to improving model accuracy on the target domain, the important issue of model robustness has been neglected. To make things worse, conventional adversarial training (AT) methods for improving model robustness are inapplicable in the UDA scenario, since they train models on adversarial examples generated via a supervised loss function. In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving the adversarial robustness of UDA models. Based on the self-training paradigm, SRoUDA starts by pre-training a source model, applying a UDA baseline to labeled source data and unlabeled target data with a newly developed random masked augmentation (RMA), and then alternates between adversarial target-model training on pseudo-labeled target data and fine-tuning of the source model via a meta step. While self-training allows the direct incorporation of AT into UDA, the meta step in SRoUDA further helps mitigate error propagation from noisy pseudo labels. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SRoUDA, which achieves significant improvements in model robustness without harming clean accuracy.
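The alternating structure described in the abstract can be illustrated with a toy sketch. Everything below is a simplified stand-in, not the paper's method: logistic-regression models replace deep networks, FGSM replaces the paper's AT procedure, and the meta step is approximated by a small extra source-loss update; the actual UDA baseline, RMA, and meta-objective are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logistic(w, X, y):
    # Gradient of the binary cross-entropy loss for a linear model.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def fgsm(w, X, y, eps):
    # FGSM stand-in: perturb each input along the sign of the
    # input-gradient of the loss, (p - y) * w per sample.
    p = sigmoid(X @ w)
    return X + eps * np.sign(np.outer(p - y, w))

# Toy data: same labeling rule, covariate-shifted target domain.
Xs = rng.normal(size=(200, 2))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(float)
Xt = rng.normal(size=(200, 2)) + 0.5          # unlabeled target data
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(float)  # held out, evaluation only

# Step 1: pre-train the source model on labeled source data
# (the UDA baseline and RMA from the paper are omitted in this sketch).
w_src = np.zeros(2)
for _ in range(200):
    w_src -= 0.5 * grad_logistic(w_src, Xs, ys)

# Step 2: alternate adversarial target training with a meta step.
w_tgt = w_src.copy()
for _ in range(20):
    pseudo = (sigmoid(Xt @ w_src) > 0.5).astype(float)  # pseudo-label targets
    X_adv = fgsm(w_tgt, Xt, pseudo, eps=0.1)            # adversarial examples
    w_tgt -= 0.5 * grad_logistic(w_tgt, X_adv, pseudo)  # adversarial training
    # Meta step (toy stand-in): keep refining the source model so its
    # pseudo labels stay useful for the target model.
    w_src -= 0.1 * grad_logistic(w_src, Xs, ys)

acc = float(((sigmoid(Xt @ w_tgt) > 0.5).astype(float) == yt).mean())
print(f"clean target accuracy: {acc:.2f}")
```

The two loops mirror the pipeline's structure only: in SRoUDA the meta step fine-tunes the source model using feedback from the target model, which this scalar toy does not capture.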

Published

2023-06-26

How to Cite

Zhu, W., Yin, J.-L., Chen, B.-H., & Liu, X. (2023). SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3852-3860. https://doi.org/10.1609/aaai.v37i3.25498

Section

AAAI Technical Track on Computer Vision III