Unsupervised Domain Adaptation Through Transferring both the Source-Knowledge and Target-Relatedness Simultaneously

Unsupervised domain adaptation (UDA) is an emerging research topic in the field of machine learning and pattern recognition, which aims to facilitate learning in unlabeled target domains by transferring knowledge from the source domain.


Introduction
In this paper, we concentrate on the 1SmT (one-source, multiple-target) scenario and propose a UDA model through transferring both the Source-Knowledge and Target-Relatedness, coined as UDA-SKTR for short. In addition, we present an alternating optimization algorithm to solve the proposed model with convergence guarantee. Overall, the contributions of this paper are threefold as follows.
1. A 1SmT UDA model, coined as UDA-SKTR for short, is constructed by transferring both the source-knowledge and the target-relatedness, and is solved by a specially designed alternating algorithm with convergence guarantee.
2. Different from existing methods that transfer directly from source domain data, we perform domain knowledge transfer from the source domain projection rather than the source domain data itself, which better protects the privacy of the source data.
3. Extensive evaluation experiments testify to the effectiveness and superiority of the proposed method.

Related work
In this section, we briefly review the method most closely related to our work, i.e., SLMC.
where W ∈ ℝ^{d×c} is the projection matrix, with d being the feature dimension and c being the total number of data classes, N denotes the number of instances, u_ij denotes the clustering degree of membership of the i-th instance to class j, and e_j = [0, ⋯ , 0, 1, 0, ⋯ , 0]^⊤ ∈ ℝ^c represents the one-hot coding label for the j-th class, with the j-th element being 1 and all other elements being 0.
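To fix notation, the SLMC objective these symbols belong to can be sketched in the following form, consistent with the definitions above (the squared membership exponent and the Frobenius regularizer weighted by λ are standard choices assumed here, not reproduced from the original equation):

```latex
\min_{\mathbf{W},\,\mathbf{U}}\;
\sum_{i=1}^{N}\sum_{j=1}^{c} u_{ij}^{2}\,
\bigl\|\mathbf{W}^{\top}\mathbf{x}_{i}-\mathbf{e}_{j}\bigr\|_{2}^{2}
\;+\;\lambda\,\|\mathbf{W}\|_{F}^{2},
\qquad \text{s.t.}\;\; \sum_{j=1}^{c} u_{ij}=1,\;\; u_{ij}\ge 0.
```

That is, each instance x_i is projected by W into the label space and softly assigned, via u_ij, toward the one-hot prototype e_j of its nearest class.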

Unsupervised Domain Adaptation through transferring source-knowledge and target-relatedness
3.1. Formulation

Knowledge transfer from source domain to target domains
To perform 1SmT UDA, we obviously need to transfer knowledge from the source domain to the target domains. Considering that the target domains are not consistent with each other and that domain shift exists from the source to these targets, it is necessary to introduce a transforming matrix (denoted as Q) to increase their matching flexibility. Along this line, we can mathematically formulate the scheme above as (2). In (2), the first term is responsible for knowledge transfer from the source domain projection W_S to the target domain projections W_t, characterized by the domain transforming matrix Q and the domain representation matrix V_t, where W_S is learned via the K-means algorithm. The second term encourages the UDA learning to select the most related knowledge components from the source domain for the targets. λ is a nonnegative tradeoff parameter that keeps a balance between the two terms. In order to prevent degenerate solutions of the transforming matrix Q, we restrict it to be column-orthogonal, i.e., Q^⊤Q = I.
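Under the column-orthogonality constraint Q^⊤Q = I, the Q-subproblem of minimizing ||W_t − Q W_S V_t||_F^2 is an orthogonal-Procrustes-type problem with a closed-form SVD solution. The sketch below illustrates this single update step with numpy; the matrix shapes and variable names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Illustrative shapes (assumed): d-dim features, c_s source classes, c_t target classes.
rng = np.random.default_rng(0)
d, c_s, c_t = 8, 5, 3

W_s = rng.standard_normal((d, c_s))    # source projection (fixed, pre-learned)
W_t = rng.standard_normal((d, c_t))    # current target projection
V_t = rng.standard_normal((c_s, c_t))  # knowledge-selection coefficients

def update_Q(W_t, W_s, V_t):
    """Minimize ||W_t - Q @ W_s @ V_t||_F^2 over orthogonal Q (Q.T @ Q = I).

    Orthogonal-Procrustes step: with M = W_t @ (W_s @ V_t).T = U @ S @ Vh,
    the minimizer is Q = U @ Vh.
    """
    M = W_t @ (W_s @ V_t).T            # d x d cross-correlation matrix
    U, _, Vh = np.linalg.svd(M)
    return U @ Vh

Q = update_Q(W_t, W_s, V_t)
```

By construction Q stays exactly orthogonal, and the fit ||W_t − Q W_S V_t||_F can only improve over leaving Q at the identity.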

Knowledge transfer between target domains
In the 1SmT scenario, multiple target domains are involved, which frequently exhibit potentially promising correlations among them. Such relatedness may contribute to the training of the target models. To exploit the shared knowledge among these target domains, we propose to establish an over-complete representation dictionary to extract such target-relatedness. Along this line, we can construct the formulation for knowledge transfer between the multiple target domains as follows. In (3), the shared dictionary D among the target domains plays the role of bridging them and exploring their relatedness to facilitate their learning. More specifically, the shared dictionary D is established in the targets' common space and is over-complete so as to cover each of the target domains. That is, the projection matrix W_t of the t-th target domain can be recovered from D with its reconstruction coefficient matrix V_t. In this way, the potential relatedness among the target domains is integrated into their learning. The second term of (3) aims to select the most related knowledge components from the dictionary for the corresponding target domain.
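The role of the shared dictionary can be sketched numerically: each target projection W_t is reconstructed as D V_t from a common over-complete D. The snippet below uses a ridge-regularized coefficient update as a simplified stand-in for the selection term of (3); all shapes and the regularizer choice are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d, c = 8, 3   # feature dimension, classes per target (assumed equal across targets)
k = 10        # dictionary size; k > d makes D over-complete
T = 4         # number of target domains

# Hypothetical per-target projection matrices and a shared dictionary.
W = [rng.standard_normal((d, c)) for _ in range(T)]
D = rng.standard_normal((d, k))

def update_V(D, W_t, lam=0.1):
    """Coefficients V_t minimizing ||W_t - D @ V_t||_F^2 + lam * ||V_t||_F^2.

    Closed-form ridge solution; the paper's selection term is approximated
    here by the Frobenius penalty for simplicity.
    """
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ W_t)

V = [update_V(D, W_t) for W_t in W]
errs = [np.linalg.norm(W_t - D @ V_t) for W_t, V_t in zip(W, V)]
```

Because the T targets share one D, structure learned while fitting one target's projection is available when reconstructing the others, which is precisely how the relatedness is propagated.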

Unsupervised Domain Adaptation by considering both source-knowledge and target-relatedness
Through taking into account both of the considerations in Sections 3.1.1 and 3.1.2, we incorporate both the source-knowledge and the target-relatedness into 1SmT UDA. Considering that we concentrate on a supervised source domain and unsupervised target domains, without loss of generality, we take the SLMC objective function (1) for target domain clustering. Eventually, we can build the complete objective function of the UDA model through transferring Source-Knowledge and Target-Relatedness, UDA-SKTR for short, as follows, where λ1 to λ4 are nonnegative tradeoff parameters. The first two terms are the objective w.r.t. SLMC on the target domains, the third term models the target-relatedness among the target domains, while the fourth term transfers knowledge from the source to the target domains.
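Schematically, the four-term structure just described can be sketched as below; the exact placement of λ1–λ4 and the regularizer forms are assumptions consistent with the surrounding description, not a reproduction of the original equation (4):

```latex
\min_{\{\mathbf{W}_t,\mathbf{U}_t,\mathbf{V}_t,\tilde{\mathbf{V}}_t\},\mathbf{D},\mathbf{Q}}\;
\sum_{t=1}^{T}\Bigl[
\underbrace{\sum_{i,j} \bigl(u_{ij}^{(t)}\bigr)^{2}
\bigl\|\mathbf{W}_t^{\top}\mathbf{x}_i^{(t)}-\mathbf{e}_j\bigr\|^{2}
+\lambda_1\|\mathbf{W}_t\|_F^{2}}_{\text{SLMC on target }t}
+\underbrace{\lambda_2\bigl\|\mathbf{W}_t-\mathbf{D}\mathbf{V}_t\bigr\|_F^{2}
+\lambda_3\,\Omega(\mathbf{V}_t)}_{\text{target-relatedness}}
+\underbrace{\lambda_4\bigl\|\mathbf{W}_t-\mathbf{Q}\mathbf{W}_S\tilde{\mathbf{V}}_t\bigr\|_F^{2}}_{\text{source transfer}}
\Bigr],
\qquad \text{s.t.}\;\; \mathbf{Q}^{\top}\mathbf{Q}=\mathbf{I},
```

where Ω(·) denotes the knowledge-selection penalty on the coefficients. Each variable block (U_t, W_t, V_t, D, Q) appears in only a subset of the terms, which is what makes the alternating optimization natural.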

Time complexity analysis
The time complexity of UDA-SKTR mainly consists of the alternating optimization steps. Assume that our model converges after T_max iterations, and let N_max and c_max denote the maximum sample number and class number of the target domains, respectively. Taking into account all the time costs, we conclude that the total time complexity is O(T_max N_max d^2 + T_max (c_max d)^3 + d^3).

Conclusion
In this paper, we proposed a 1SmT UDA model through transferring both the Source-Knowledge and Target-Relatedness, i.e., UDA-SKTR. In this way, not only the supervision knowledge from the source domain but also the potential relatedness among the target domains is modeled and exploited in 1SmT UDA. In addition, we constructed an alternating optimization algorithm to solve for the variables of the proposed model with convergence guarantee. Finally, through extensive experiments on both benchmark and real datasets, we validated the effectiveness and superiority of the proposed method. In the future, we will consider extending the model to more challenging multi-source multi-target (mSmT) scenarios and to practical applications, such as disease detection [2] in the medical field and fault detection [3] in industry.