Pred&Guide: Labeled Target Class Prediction for Guiding Semi-Supervised Domain Adaptation
Abstract
Semi-supervised domain adaptation aims to classify data belonging to a target domain by utilizing a related label-rich source domain and very few labeled examples of the target domain. Here, we propose a novel framework, Pred&Guide, which leverages the inconsistency between the predicted and the actual class labels of the few labeled target examples to effectively guide the domain adaptation in a semi-supervised setting. Pred&Guide consists of three stages: (1) First, in order to treat all the target samples equally, we perform unsupervised domain adaptation coupled with self-training; (2) Second is the label prediction stage, where the current model is used to predict the labels of the few labeled target examples; and (3) Finally, the correctness of the label predictions is used to effectively weigh source examples class-wise to better guide the domain adaptation process. Extensive experiments show that the proposed Pred&Guide framework achieves state-of-the-art results for two large-scale benchmark datasets, namely Office-Home and DomainNet.
Index Terms:
semi-supervised domain adaptation, source example weighting, pseudo-labeling.
1 Introduction
In the real world, the training data distribution is often different from the testing data distribution, necessitating that deep learning models learned on data from one domain be adapted to data from a different distribution. Unsupervised Domain Adaptation (UDA) [1, 2, 3, 4] addresses this problem by utilizing a related label-rich source domain and unlabeled samples from the target domain. Recent research has shown that unlabeled target samples alone may not be sufficient to counter the domain shift, and thus the area of Semi-supervised Domain Adaptation (SSDA) [5, 6, 7, 8, 9, 10] is gaining traction, where, additionally, a few labeled targets are utilized to aid the adaptation process.
The advantage of SSDA over UDA approaches lies in the manner in which the few labeled targets are utilized in the adaptation process. A straightforward approach is to include them along with the labeled source examples in the Cross-Entropy loss [5, 6]. Though this approach helps in improving the final performance, it does not fully leverage the usefulness of the labeled target samples, since they form a very small part of the final loss as compared to the large number of source examples.
In this work, we propose a novel framework, Pred&Guide, which utilizes the class label prediction inconsistencies on the few labeled target examples to effectively guide the domain adaptation process. Pred&Guide has three main stages: (1) UDA with Self-Training: Access to a few labeled target data can bias the domain adaptation process towards those samples from the very beginning, as also noted in [3, 11]. Thus, we propose to utilize a UDA framework to start the domain adaptation process in an unbiased manner, which utilizes all the target data but ignores the available labels of the few target examples. Simultaneously, we also compute the pseudo-labels of all the unlabeled data using the current model, and perform strong augmentations of those target examples for which we are confident about their pseudo-labels [12, 13]. (2) Labeled Target Prediction: The model trained in the first stage is used to predict the labels of the targets (which have ground-truth labels available) to analyze the adaptation process and further guide it. (3) Source Example Weighting: We surmise that different source examples either help or hinder the process of domain adaptation. Based on the inconsistency of the predicted and true labels of the few labeled target examples, Pred&Guide steers the adaptation process by appropriately weighing the source samples according to their cosine similarity to the labeled target examples of the corresponding class. Specifically, the source samples in the neighborhood of the incorrectly classified targets are given more weight and the ones which are far away are down-weighed, following a linear weighing scheme as described in Section 4.3, so that the classifier can eventually classify the labeled targets correctly.
We extensively evaluate the proposed Pred&Guide framework on two large-scale benchmark datasets, namely Office-Home [14] and DomainNet [15]. Pred&Guide outperforms all the state-of-the-art approaches for both the datasets. The main contributions of our work are as follows:
1. We propose a novel approach for semi-supervised domain adaptation, termed Pred&Guide, by leveraging the prediction inconsistency of the labeled targets.
2. We propose to effectively weigh the source examples based on the label prediction inconsistency to guide the DA process.
3. Extensive experiments on two benchmark datasets show the effectiveness of the proposed framework.
We now describe the related work in literature followed by the proposed approach and results of extensive evaluation.
2 Related Work
Here, we provide pointers to the related work in literature on domain adaptation, data augmentation and semi-supervised learning.
Domain Adaptation:
Most of the current literature in domain adaptation can be broadly categorized into unsupervised (UDA) and semi-supervised domain adaptation (SSDA).
UDA approaches, which use only unlabeled target domain data have been explored rigorously in [1, 2, 11, 16, 17]. Initial approaches used statistical methods such as Maximum Mean Discrepancy [18], Correlation Alignment (CORAL) [19] and Geodesic Flow Kernel [4] to handle the mismatch in feature distribution.
More recently, DL-based UDA approaches such as [2, 1] aim to learn domain-invariant features via a domain confusion module for feature alignment. It has been observed that UDA performance can be boosted significantly with just a few labeled examples from the target domain, as noted in works such as [5, 8, 9, 20].
Most of the recent SSDA approaches use a few labeled target examples per class.
The seminal work in [5] uses minimax entropy to compute domain-invariant class representatives.
In [21], the authors propose an adversarial loss instead of the minimax entropy.
In [9], adversarial examples are used to bridge the source and target domain gap.
[20, 22] use a style-agnostic network and contrastive learning, respectively, for reducing the source and target domain gaps.
However, the majority of approaches treat the labeled target examples no differently from the labeled source examples, thus not fully leveraging the information contained in them. Some works have attempted to use the labeled targets in a triplet loss for bringing the source closer to the target [23], and [24] proposes an effective pseudo-labeling strategy using the labeled targets.
The presence of a few labeled target examples in the SSDA setting results in intra-domain discrepancy, which has recently been noted and addressed in [8]. We also address this critical issue in Pred&Guide.
Data Augmentation:
Data augmentation [25, 26, 27] is widely used as a regularizer to prevent overfitting.
Different data augmentation policies have been proposed in works such as [26, 25, 12].
From a pool of augmentations, [25] randomly selects augmentation to be used.
[26] selects an augmentation during model training using reinforcement learning.
Other approaches such as CTAugment [12, 28] use a similar strategy to AutoAugment [26], but with a fixed algorithm for assigning augmentation magnitudes.
In our work, we use an augmentation based consistency-regularization, based on RandAugment [25] to assign pseudo-labels to the unlabeled targets to boost the performance.
Semi-Supervised Learning (SSL):
SSL approaches utilize a small amount of labeled data and a huge amount of unlabeled data for the required task [29, 30, 31, 12, 32, 33, 13, 28], which reduces the labeling cost.
Several SSL algorithms use a consistency constraint, i.e., they aim to minimize the discrepancy between the predictions for different augmented versions of a data point [12, 13, 33].
Mean-Teacher [33] trains a student and a teacher model and forces their predictions to match. The student model is updated at every iteration, while the teacher model is updated with a suitable momentum. Virtual Adversarial Training [32] uses adversarial perturbations to train the SSL model. Pseudo-labeling [31] predicts labels of the unlabeled data and uses these as true labels for supervised learning. Recent approaches such as MixMatch [13], ReMixMatch [28] and FixMatch [12] provide an elegant framework for SSL, leveraging consistency regularization and mixup [34] to obtain state-of-the-art results. The proposed SSDA framework Pred&Guide is inspired by these seminal SSL approaches, but utilizes the few labeled target domain examples in a novel manner to guide the adaptation process.
3 Problem Definition
We first discuss the problem statement and the notations used. In this work, we address the SSDA task, where very few labeled target examples are available (i.e., one- and three-shot settings). The training data consists of (1) labeled source: $\mathcal{S} = \{(x^s_i, y^s_i)\}_{i=1}^{N_s}$, where $N_s$ is the number of labeled source examples, (2) unlabeled target: $\mathcal{T}_u = \{x^u_i\}_{i=1}^{N_u}$, where $N_u$ is the number of unlabeled target examples, and (3) few labeled target: $\mathcal{T}_l = \{(x^t_i, y^t_i)\}_{i=1}^{N_l}$, where $N_l$ is the number of labeled target examples. Here $x$ denotes the data point and $y$ denotes the corresponding class label, which belongs to one of the $K$ classes (the same for both source and target domains). We test the model's performance on $\mathcal{T}_u$. For all the experiments, we have a feature extractor network $F$ (with parameters $\Theta_F$), sequentially followed by a classifier $C$ (with parameters $\Theta_C$).
4 Proposed Pred&Guide Framework

The proposed SSDA framework has three sequential modules, namely (1) unsupervised domain adaptation (UDA) with self-training, (2) labeled target prediction, where we use the current model to predict the labels of the target domain data for which we have the ground-truth labels, and finally (3) source example weighing, where the source examples are effectively weighted using the prediction inconsistencies of the labeled target examples from the previous step. Figure 1 illustrates the proposed Pred&Guide framework and its different modules. Each of the modules, with its motivation, is explained below in detail.
4.1 UDA with Self-Training
In SSDA, though a few labeled target samples are available in the training data, using them from the beginning may bias the domain adaptation process towards these examples, as noted in recent works [3, 11]. This has motivated us to use a UDA approach to initiate the adaptation process in an unbiased manner, treating all the target samples equally. For this, any UDA approach can be utilized; most UDA approaches learn a domain-invariant feature extractor and a classifier [2, 1, 3]. Let us denote this model as $M = C \circ F$.
Inspired by the recent works in consistency regularization and self-training/pseudo-labeling [12, 28, 13], while training $M$, we incorporate an augmentation-consistency based self-training to further aid the unsupervised learning, which is briefly described below for completeness.
During every training iteration, for all the unlabeled target data $x \in \mathcal{T}_u$ we compute

$$\tilde{x} = \mathcal{A}_s(x), \qquad \hat{x} = \mathcal{A}_w(x) \tag{1}$$
where $\tilde{x}$ and $\hat{x}$ are the strong and weak augmentations of the given data $x$. The strong augmentation function $\mathcal{A}_s$ is obtained using RandAugment [25], while the weak augmentation function $\mathcal{A}_w$ is simply flipping, random cropping and padding. In every forward pass of the current model $M$, for all the unlabeled target samples, we compute the predicted class probability distributions of the weak and strong augmentations, denoted by $p(\hat{x})$ and $p(\tilde{x})$ respectively. Using $p(\hat{x})$, we compute the one-hot pseudo-label
$$\bar{y} = \mathrm{OneHot}\big(\arg\max_{k}\, p_k(\hat{x})\big) \tag{2}$$
where the function $\mathrm{OneHot}(\cdot)$ produces a valid one-hot vector corresponding to the predicted label. Our per-example pseudo-labeled target loss is:
$$\mathcal{L}_{pl} = \mathbb{1}\big(\max_{k}\, p_k(\hat{x}) \geq \tau\big)\; \mathcal{H}\big(\bar{y},\, p(\tilde{x})\big) \tag{3}$$
Here, we only consider the loss for the unlabeled targets whose prediction confidence is greater than a threshold $\tau$. $\mathcal{H}(\cdot, \cdot)$ is the standard cross-entropy function, where the first argument is the ground-truth label and the second is the predicted class probability distribution over the labels. The proposed method calculates the pseudo-labels on the fly for every example, as opposed to static pseudo-labels being updated at different time-steps [31].
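To make the on-the-fly pseudo-labeling concrete, the following is a minimal PyTorch sketch of Equations (1)-(3), assuming a `model` that maps image batches to class logits; the function name and the threshold value are illustrative, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_weak, x_strong, tau=0.95):
    """Consistency loss of Eq. (3): pseudo-labels come from the weak
    view (Eq. 2) and supervise the strong view, masked by confidence."""
    with torch.no_grad():                         # pseudo-labels are not back-propagated
        p_weak = F.softmax(model(x_weak), dim=1)  # predicted distribution on the weak view
        conf, pseudo = p_weak.max(dim=1)          # confidence and hard pseudo-label
        mask = (conf >= tau).float()              # keep only confident targets
    loss = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (mask * loss).mean()
```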
For the source, we consider the standard cross-entropy loss $\mathcal{L}_{ce}$ until iteration $T_{sew}$. Here, $T_{sew}$ is the number of iterations for which UDA with self-training runs, i.e., until the validation accuracy saturates. After $T_{sew}$, we use a weighted cross-entropy for the source examples, as explained later in Section 4.3. The loss for a source example $x^s_i$ is given as:
$$\mathcal{L}_{ce} = \mathcal{H}\big(y^s_i,\, p(x^s_i)\big) \tag{4}$$
where $y^s_i$ is the label of the source example $x^s_i$ and $p(x^s_i)$ is the predicted class probability distribution of $x^s_i$.
For the unlabeled target examples, we use the minimax entropy approach [5], with the loss function denoted by $\mathcal{L}_H$, which is given by:
$$\mathcal{L}_H = -\sum_{k=1}^{K} p_k(x^u) \log p_k(x^u) \tag{5}$$
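As an illustrative sketch of how this minimax over Eq. (5) can be realized with a gradient-reversal layer (in the style of the usual implementation of [5]; the helper names and the value of `mu` are assumptions):

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the
    backward pass, so one minimization step plays the minimax game."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def minimax_entropy_loss(classifier, features, mu=0.1):
    """Adversarial form of Eq. (5): minimizing this term drives the
    classifier to maximize the entropy of the unlabeled-target
    predictions, while the reversed gradient makes the feature
    extractor minimize it."""
    logits = classifier(GradReverse.apply(features))
    p = F.softmax(logits, dim=1)
    return mu * (p * torch.log(p + 1e-8)).sum(dim=1).mean()
```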
Thus, in this step, the domain adaptation uses all the target samples, but without the labels of the few target examples that are available. In the next stage, we check how good the current model is at predicting the labels of the labeled targets and their augmentations, which is then used to guide the adaptation process.
4.2 Labeled Target Prediction
In this stage, we leverage the additional label information provided for the few target examples to analyze how well the current model has adapted to the target domain. Since the number of labeled targets is very small, we also leverage their strong and weak augmentations, denoted by $\tilde{x}^t$ and $\hat{x}^t$, for this analysis, calculated following Equation (1). Let us denote this new augmented data as $\mathcal{T}_l^{aug}$.
First, the current model is used to predict the class labels of the examples in $\mathcal{T}_l^{aug}$, for which we know the ground-truth class labels. Computing the label prediction accuracy for each class in this manner gives a weak indication of the class-wise adaptation of the current model. For example, if most of the (labeled) target samples of a class have been classified correctly, it indicates that domain adaptation is successful for that class; otherwise, it is not satisfactory. Obviously, as the number of labeled target samples increases, this becomes a stronger indicator of the class-wise domain adaptation accuracy; hence we use the labeled data along with its augmentations. We define the class-wise accuracy vector as follows:
$$A = [a_1, a_2, \ldots, a_K] \tag{6}$$
where the $a_k$'s are the individual class accuracies as calculated on $\mathcal{T}_l^{aug}$ using the current model $M$, and $K$ is the total number of classes. Pred&Guide aims to utilize this class-wise information about adaptability, rather than just the domain-wise adaptability, since different classes vary in their ability to adapt. This follows from the fact that it may be easier for some classes of a domain to adapt (if those classes look similar across domains), while it may be difficult for others.
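A small sketch of how the accuracy vector of Eq. (6) can be computed, assuming a hypothetical data loader that yields (image, label) batches over $\mathcal{T}_l^{aug}$:

```python
import torch

@torch.no_grad()
def classwise_accuracy(model, aug_loader, num_classes):
    """Eq. (6): accuracy vector A = [a_1, ..., a_K] on the labeled
    targets and their weak/strong augmentations."""
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    for x, y in aug_loader:
        pred = model(x).argmax(dim=1)         # predicted class labels
        for k in range(num_classes):
            in_class = (y == k)
            total[k] += in_class.sum()
            correct[k] += (pred[in_class] == k).sum()
    return correct / total.clamp(min=1)       # a_k per class; guard empty classes
```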
The goal is to aid the classes which have not yet adapted satisfactorily, based on the computed accuracy. For the classes with lower accuracy, more (less) weightage is given to the neighboring (far) source samples of the corresponding labeled target examples. This helps the corresponding class prototypes adapt to the target domain samples, as will be explained in detail next.
4.3 Source Example Weighing
Once UDA is performed, the label prediction in the previous module is used to weigh source examples accordingly at regular intervals.
If a target example is wrongly classified, we weigh its neighbouring source samples of the same class relatively more (based on a linear weighing scheme as explained next), which will guide the model towards classifying this target sample correctly.
To further aid the adaptation, we also relatively down-weigh the source samples of the same class far away from the wrongly classified labeled targets, since these source examples hinder the adaptation process.
To perform this step efficiently with reduced computational complexity, we use a feature bank to store the representations of the source examples as described next.
Feature Bank Based Source Identification: Weighing the source examples requires computing the distance between each labeled target example and the source examples of the same class.
To compute these distances efficiently, we maintain a feature bank as defined below:

$$\mathcal{F} = [f_1, f_2, \dots, f_{N_s}]^{\top} \in \mathbb{R}^{N_s \times d} \tag{7}$$
where $f_i$ is the representation of the $i$-th source example, $N_s$ denotes the number of examples in the source domain (as defined in Section 3), and $d$ is the dimension of the representation space. $\mathcal{F}$ is updated on the fly, batch-wise, with momentum $\lambda$ as:
$$\mathcal{F}^{(j)} = \lambda\, \mathcal{F}^{(j-1)} + (1-\lambda)\, \hat{\mathcal{F}}^{(j)} \tag{8}$$
where $\hat{\mathcal{F}}^{(j)}$ are the representations of the current batch of source examples, and $\mathcal{F}^{(j)}$ denotes the feature bank at a given iteration step $j$. For simplicity, we assume that in (8), only the entries corresponding to the source examples in the mini-batch are updated.
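A minimal sketch of the momentum feature bank of Equations (7)-(8); the class name, dimensions, and the momentum value are illustrative assumptions:

```python
import torch

class SourceFeatureBank:
    """Stores one d-dimensional feature per source example (Eq. 7) and
    updates only the rows of the current mini-batch (Eq. 8)."""
    def __init__(self, num_source, feat_dim, momentum=0.99):
        self.bank = torch.zeros(num_source, feat_dim)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, indices, features):
        old = self.bank[indices]
        self.bank[indices] = self.momentum * old + (1 - self.momentum) * features
```

We now describe how our linear weighing scheme works.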
Let $S_k$ denote the set of source examples belonging to class $k$. Similarly, let $T_k$ be the set of labeled targets belonging to class $k$. Now, for every element $x^t \in T_k$ (the sample index is omitted for simplicity), we compute an ordered set $D_k = \{\mathrm{sim}(f(x^t), f_i) : i \in S_k\}$, where $f_i$ is the corresponding feature from the feature bank for sample $i$, $f(x^t)$ is the feature of $x^t$ obtained from $F$, and $\mathrm{sim}(\cdot,\cdot)$ is the cosine similarity between the feature representations of the data points. Let $s_{min}$ and $s_{max}$ denote the minimum and maximum cosine similarities over the set of source examples $S_k$ of the given class $k$. We compute the class-wise max and min weights as:
$$w^{k}_{max} = 1 + \alpha\,(1 - a_k), \qquad w^{k}_{min} = 1 - \beta\,(1 - a_k) \tag{9}$$
where $a_k$ is the accuracy computed using the predictions of the labeled targets of class $k$, as computed in the previous step of Pred&Guide, and $\alpha, \beta$ are hyper-parameters. Thus, the source sample $x^s_i$ belonging to class $k$ is weighted with $w_i$ as
$$w_i = w^{k}_{min} + m\,\big(\mathrm{sim}(f(x^t), f_i) - s_{min}\big) \tag{10}$$
where $m$ is the slope of the linear weighting scheme:

$$m = \frac{w^{k}_{max} - w^{k}_{min}}{s_{max} - s_{min}} \tag{11}$$
In other words, we up-weigh the near source samples and down-weigh the far ones, with the weight varying according to a linear function of the similarity of the source sample to the labeled target sample, as in Equation (10). If the accuracy for class $k$ is low (i.e., $a_k$ is small), the prototype for that particular class has probably not adapted properly. For these classes, the weights of the source examples in the neighbourhood of the labeled targets of that class are increased, and those of the far-away examples are decreased, which helps the domain adaptation process. Thus, the weighted loss for a source example $x^s_i$ with predicted class probability distribution $p(x^s_i)$ is expressed as follows:
$$\mathcal{L}_{ws} = w_i\, \mathcal{H}\big(y^s_i,\, p(x^s_i)\big) \tag{12}$$
where $w_i$ is the weight of the example whose loss is being calculated. Figure 2 illustrates the near and far examples at different iterations of Pred&Guide using t-SNE plots [35].
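Putting Equations (9)-(11) together, a sketch of the per-class weight computation for one labeled target; `alpha` and `beta` stand in for the hyper-parameters of Eq. (9), and their default values here are illustrative:

```python
import torch
import torch.nn.functional as F

def source_weights(bank, class_idx, target_feat, acc_k, alpha=0.5, beta=0.5):
    """Linear weighing scheme: source examples of class k similar to the
    labeled target get weights up to w_max, dissimilar ones down to w_min."""
    f_src = F.normalize(bank[class_idx], dim=1)       # features of S_k from the bank
    sims = f_src @ F.normalize(target_feat, dim=0)    # cosine similarities
    s_min, s_max = sims.min(), sims.max()
    w_max = 1 + alpha * (1 - acc_k)                   # Eq. (9)
    w_min = 1 - beta * (1 - acc_k)
    slope = (w_max - w_min) / (s_max - s_min + 1e-8)  # Eq. (11)
    return w_min + slope * (sims - s_min)             # Eq. (10)
```

The returned weights then scale the per-example source cross-entropy, giving the weighted loss of Eq. (12).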




Once we have used the labeled target accuracy to calculate and assign weights for the source examples for a number of iterations as described above, we bring the labeled target examples $x^t$, with predicted class probability distribution $p(x^t)$, into the training. For these samples, we use the standard cross-entropy loss starting at iteration $T_{lt}$ (the iteration after which the source example weighing validation accuracy saturates), as follows:
$$\mathcal{L}_{lt} = \mathcal{H}\big(y^t,\, p(x^t)\big) \tag{13}$$
Algorithm 1: The Pred&Guide training procedure.
Input: Feature extractor $F$, classifier $C$, total number of iterations $T$.
Data: Labeled source $\mathcal{S}$, unlabeled target $\mathcal{T}_u$, labeled target $\mathcal{T}_l$, augmented labeled target $\mathcal{T}_l^{aug}$.
While $t < T$: run UDA with self-training (Section 4.1); from iteration $T_{sew}$, predict the labels of $\mathcal{T}_l^{aug}$ and weigh the source examples accordingly at regular intervals (Sections 4.2 and 4.3); from iteration $T_{lt}$, additionally apply the labeled-target cross-entropy loss of Equation (13).
4.4 Complete Pred&Guide Framework
The complete Pred&Guide framework is summarized in this section. As stated earlier, we optimize the parameters of the feature extractor and classifier using the minimax approach [5]:
$$\hat{\Theta}_F = \operatorname*{arg\,min}_{\Theta_F}\; \big(\mathcal{L}_{total} + \mu\, \mathcal{L}_H\big), \qquad \hat{\Theta}_C = \operatorname*{arg\,min}_{\Theta_C}\; \big(\mathcal{L}_{total} - \mu\, \mathcal{L}_H\big) \tag{14}$$
Here, $\mathcal{L}_{total}$ is the sum of all the losses except the entropy loss $\mathcal{L}_H$, given by
$$\mathcal{L}_{total} = \mathcal{L}_{pl} + \begin{cases} \mathcal{L}_{ce} & t < T_{sew} \\ \mathcal{L}_{ws} & T_{sew} \le t < T_{lt} \\ \mathcal{L}_{ws} + \mathcal{L}_{lt} & t \ge T_{lt} \end{cases} \tag{15}$$
Here, $T_{sew}$ is the iteration after which we start weighing the source examples, and $T_{lt}$ is the iteration at which the labels of the few target examples are exposed to the training algorithm. More details about calculating $T_{sew}$ and $T_{lt}$ are given in Section 5. The complete Pred&Guide algorithm is given in Algorithm 1.
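As a condensed sketch of one training step under the schedule of Eq. (15), reusing `pseudo_label_loss` from the earlier sketch; the batch layout and argument names are assumptions, and the minimax entropy term of Eq. (5) (handled by the gradient-reversal sketch above) is omitted for brevity:

```python
import torch.nn.functional as F

def pred_and_guide_step(model, optimizer, batch, t, T_sew, T_lt, weights):
    """One Pred&Guide iteration (Algorithm 1). `weights` holds the
    per-source-example weights of Eq. (10), refreshed at regular
    intervals from the labeled-target predictions."""
    x_s, y_s, idx_s, xu_weak, xu_strong, x_t, y_t = batch
    loss = pseudo_label_loss(model, xu_weak, xu_strong)   # Eq. (3), all stages
    if t < T_sew:                                         # stage 1: plain UDA
        loss = loss + F.cross_entropy(model(x_s), y_s)    # Eq. (4)
    else:                                                 # stage 3: weighted source CE
        ce = F.cross_entropy(model(x_s), y_s, reduction="none")
        loss = loss + (weights[idx_s] * ce).mean()        # Eq. (12)
    if t >= T_lt:                                         # labeled targets exposed
        loss = loss + F.cross_entropy(model(x_t), y_t)    # Eq. (13)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```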
5 Experimental Evaluation
Here, we provide details of the extensive experiments conducted to evaluate the efficacy of Pred&Guide for semi-supervised domain adaptation.
First, we discuss the benchmark datasets used and the implementation details, before reporting the results.
We also conduct detailed analysis of Pred&Guide and perform ablation studies to evaluate the usefulness of the individual modules.
Datasets Used and Evaluation Protocol:
To evaluate Pred&Guide, we use two large-scale benchmark domain adaptation datasets, DomainNet [15] and Office-Home [14]. DomainNet consists of 6 domains with 345 classes, comprising about 0.6 million images. Since the entire DomainNet dataset is noisy, we select 4 domains (Real, Painting, Clipart, Sketch) and 126 classes with 7 standard domain adaptation scenarios, which are generally used to benchmark DA methods [5, 9, 8, 6].
In addition to DomainNet, we also evaluate Pred&Guide on the Office-Home dataset, which is a smaller but challenging dataset. It consists of 65 classes and 4 domains (Real, Clipart, Product, Art). We evaluate Pred&Guide on all the possible adaptation scenarios for Office-Home.
Implementation Details:
The code is implemented using PyTorch [36] on a single Nvidia RTX-2080 GPU.
The underlying domain adaptation technique that we use in Pred&Guide is the highly successful minimax entropy approach [5].
We use ResNet34 [37] and AlexNet [38] as the feature extractor network $F$ to compare with state-of-the-art approaches.
For the classifier $C$, we use a fully-connected layer for both the AlexNet and ResNet34 backbones.
We use the SGD optimizer with standard choices of starting learning rate, momentum and weight decay, and a fixed weight $\mu$ for the entropy loss $\mathcal{L}_H$.
We use separate batch sizes for the ResNet34 and AlexNet backbones on the labeled samples, and twice the batch size for unlabeled data. We set $\alpha$ and $\beta$ in Equation (9) separately for DomainNet and Office-Home, and fix the momentum $\lambda$ in Equation (8) for all our experiments.
We run the first module, i.e., UDA with self-training, for $T_{sew}$ iterations, where $T_{sew}$ is the iteration step at which UDA with self-training has converged.
Then, the weights of the source examples are recomputed every $T_{int}$ iterations, with different values of $T_{int}$ used for DomainNet and Office-Home.
Finally, we bring the labeled target examples into the CE loss at iteration $T_{lt}$, which is chosen as the point when the source example weighing performance has converged.
To calculate $T_{sew}$ and $T_{lt}$, we consider the validation accuracy to have converged when it has not increased for a fixed number of iterations.
Evaluation for SSDA:
The results of Pred&Guide on the Office-Home dataset with the AlexNet backbone for both 1-shot and 3-shot scenarios are reported in Table I.
Comparison with recent approaches shows that Pred&Guide achieves state-of-the-art results on average for both settings, and also outperforms the others for most of the individual domain pair settings.
The results on the DomainNet dataset for both 1-shot and 3-shot settings (i.e., one and three target samples of each class in the target domain are labeled) using AlexNet and ResNet34 backbones are reported in Table II.
The results of all the other approaches have been taken directly from [5].
We observe that the proposed approach significantly outperforms all the recent SSDA approaches, thus justifying its usefulness.
Method | R to C | R to P | R to A | P to R | P to C | P to A | A to P | A to C | A to R | C to R | C to A | C to P | Mean |
One-Shot | |||||||||||||
S+T [39] | 37.5 | 63.1 | 44.8 | 54.3 | 31.7 | 31.5 | 48.8 | 31.1 | 53.3 | 48.5 | 33.9 | 50.8 | 44.1 |
DANN [2] | 42.5 | 64.2 | 45.1 | 56.4 | 36.6 | 32.7 | 43.5 | 34.4 | 51.9 | 51.0 | 33.8 | 49.4 | 45.1 |
ADR [6] | 37.8 | 63.5 | 45.4 | 53.5 | 32.5 | 32.2 | 49.5 | 31.8 | 53.4 | 49.7 | 34.2 | 50.4 | 44.5 |
CDAN [40] | 36.1 | 62.3 | 42.2 | 52.7 | 28.0 | 27.8 | 48.7 | 28.0 | 51.3 | 41.0 | 26.8 | 49.9 | 41.2 |
ENT [30] | 26.8 | 65.8 | 45.8 | 56.3 | 23.5 | 21.9 | 47.4 | 22.1 | 53.4 | 30.8 | 18.1 | 53.6 | 38.8 |
BiAT [9] | - | - | - | - | - | - | - | - | - | - | - | - | 49.6 |
MME [5] | 42.0 | 69.6 | 48.3 | 58.7 | 37.8 | 34.9 | 52.5 | 36.4 | 57.0 | 54.1 | 39.5 | 59.1 | 49.2 |
Pred&Guide | 44.4 | 73.2 | 50.0 | 59.6 | 38.2 | 37.0 | 54.4 | 34.8 | 55.6 | 52.8 | 38.0 | 59.2 | 49.8 |
Three-Shot | |||||||||||||
S+T [39] | 44.6 | 66.7 | 47.7 | 57.8 | 44.4 | 36.1 | 57.6 | 38.8 | 57.0 | 54.3 | 37.5 | 57.9 | 50.0 |
DANN [2] | 47.2 | 66.7 | 46.6 | 58.1 | 44.4 | 36.1 | 57.2 | 39.8 | 56.6 | 54.3 | 38.6 | 57.9 | 50.3 |
ADR [6] | 45.0 | 69.3 | 46.9 | 57.3 | 38.9 | 36.3 | 57.5 | 40.0 | 57.8 | 53.4 | 37.3 | 57.7 | 49.5 |
CDAN [40] | 41.8 | 69.9 | 43.2 | 53.6 | 35.8 | 32.0 | 56.3 | 34.5 | 53.5 | 49.3 | 27.9 | 56.2 | 46.2 |
ENT [30] | 44.9 | 70.4 | 47.1 | 60.3 | 41.2 | 34.6 | 60.7 | 37.8 | 60.5 | 58.0 | 31.8 | 63.4 | 50.9 |
BiAT [9] | - | - | - | - | - | - | - | - | - | - | - | - | 56.4 |
MME [5] | 51.2 | 73.0 | 50.3 | 61.6 | 47.2 | 40.7 | 63.9 | 43.8 | 61.4 | 59.9 | 44.7 | 64.7 | 55.2 |
APE [8] | 51.9 | 74.6 | 51.2 | 61.6 | 47.9 | 42.1 | 65.5 | 44.5 | 60.9 | 58.1 | 44.3 | 64.8 | 55.6 |
Pred&Guide | 53.4 | 75.0 | 51.9 | 64.0 | 48.7 | 43.6 | 65.7 | 45.7 | 60.6 | 60.0 | 43.0 | 67.5 | 56.6 |
Net | Method | R to C | R to P | P to C | C to S | S to P | R to S | P to R | Mean | ||||||||
1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | ||
Alexnet | S+T [39] | 43.3 | 47.1 | 42.4 | 45.0 | 40.1 | 44.9 | 33.6 | 36.4 | 35.7 | 38.4 | 29.1 | 33.3 | 55.8 | 58.7 | 40.0 | 43.4 |
DANN [2] | 43.3 | 46.1 | 41.6 | 43.8 | 39.1 | 41.0 | 35.9 | 36.5 | 36.9 | 38.9 | 32.5 | 33.4 | 53.6 | 57.3 | 40.4 | 42.4 | |
ADR [6] | 43.1 | 46.2 | 41.4 | 44.4 | 39.3 | 43.6 | 32.8 | 36.4 | 33.1 | 38.9 | 29.1 | 32.4 | 55.9 | 57.3 | 39.2 | 42.7 | |
CDAN [40] | 46.3 | 46.8 | 45.7 | 45.0 | 38.3 | 42.3 | 27.5 | 29.5 | 30.2 | 33.7 | 28.8 | 31.3 | 56.7 | 58.7 | 39.1 | 41.0 | |
ENT [30] | 37.0 | 45.5 | 35.6 | 42.6 | 26.8 | 40.4 | 18.9 | 31.1 | 15.1 | 29.6 | 18.0 | 29.6 | 52.2 | 60.0 | 29.1 | 39.8 | |
MME [5] | 48.9 | 55.6 | 48.0 | 49.0 | 46.7 | 51.7 | 36.3 | 39.4 | 39.4 | 43.0 | 33.3 | 37.9 | 56.8 | 60.7 | 44.2 | 48.2 | |
BiAT [9] | 54.2 | 58.6 | 49.2 | 50.6 | 44.0 | 52.0 | 37.7 | 41.9 | 39.6 | 42.1 | 37.2 | 42.0 | 56.9 | 58.8 | 45.5 | 49.4 | |
APE [8] | 47.7 | 54.6 | 49.0 | 50.5 | 46.9 | 52.1 | 38.5 | 42.6 | 38.5 | 42.2 | 33.8 | 38.7 | 57.5 | 61.4 | 44.6 | 48.9 | |
Pred&Guide | 54.5 | 57.3 | 54.4 | 56.7 | 52.8 | 56.9 | 41.6 | 48.3 | 36.9 | 44.9 | 38.4 | 46.9 | 61.9 | 65.4 | 48.6 | 53.8 | |
Resnet34 | S+T [39] | 55.6 | 60.0 | 60.6 | 62.2 | 56.8 | 59.4 | 50.8 | 55.0 | 56.0 | 59.5 | 46.3 | 50.1 | 71.8 | 73.9 | 56.9 | 60.0 |
DANN [2] | 58.2 | 59.8 | 61.4 | 62.8 | 56.3 | 59.6 | 52.8 | 55.4 | 57.4 | 59.9 | 52.2 | 54.9 | 70.3 | 72.2 | 58.4 | 60.7 | |
ADR [6] | 57.1 | 60.7 | 61.3 | 61.9 | 57.0 | 60.7 | 51.0 | 54.4 | 56.0 | 59.9 | 49.0 | 51.1 | 72.0 | 74.2 | 57.6 | 60.4 | |
CDAN [40] | 65.0 | 69.0 | 64.9 | 67.3 | 63.7 | 68.4 | 53.1 | 57.8 | 63.4 | 65.3 | 54.5 | 59.0 | 73.2 | 78.5 | 62.5 | 66.5 | |
ENT [30] | 65.2 | 71.0 | 65.9 | 69.2 | 65.4 | 71.1 | 54.6 | 60.0 | 59.7 | 62.1 | 52.1 | 61.1 | 75.0 | 78.6 | 62.6 | 67.6 | |
MME [5] | 70.0 | 72.2 | 67.7 | 69.7 | 69.0 | 71.7 | 56.3 | 61.8 | 64.8 | 66.8 | 61.0 | 61.9 | 76.1 | 78.5 | 66.4 | 68.9 | |
BiAT [9] | 73.0 | 74.9 | 68.0 | 68.8 | 71.6 | 74.6 | 57.9 | 61.5 | 63.9 | 67.5 | 58.5 | 62.1 | 77.0 | 78.6 | 67.1 | 69.7 | |
APE [8] | 70.4 | 76.6 | 70.8 | 72.1 | 72.9 | 76.7 | 56.7 | 63.1 | 64.5 | 66.1 | 63.0 | 67.8 | 76.6 | 79.4 | 67.6 | 71.7 | |
Pred&Guide | 74.7 | 79.2 | 75.9 | 74.8 | 74.8 | 75.9 | 66.0 | 69.5 | 71.4 | 71.2 | 68.7 | 72.5 | 78.5 | 80.2 | 72.9 | 74.8 |
6 Additional Analysis
Here, we perform extensive analysis and ablation studies of the proposed Pred&Guide framework.
Importance of Source Example Weighing:
To analyze the importance of weighting the source examples in the proposed Pred&Guide framework, we run the algorithm with and without the source example weighing on the DomainNet dataset.
The results in Table IV show that when the algorithm is trained using the standard cross entropy loss without the weighting (i.e. No weights), the performance is considerably lower as compared to when the source samples are properly weighted.
Figure 3 gives a visual illustration of the near and far source examples of the labeled targets for one class of DomainNet.
We would like to highlight that the proposed source example weighting (SEW) can be seamlessly integrated with other SSDA methods to improve their performance. We experiment with a recent SSDA approach, CDAC [41], and observe that SEW improves the mean performance (%) of CDAC for both the 1-shot and 3-shot settings, from 73.6 to 74.4 and from 76.0 to 76.2, respectively.
Weighting Scheme:
Now, we analyze the effectiveness of the proposed weighting strategy based on class-wise adaptability.
Specifically, we compare with fixed up-weighing and down-weighing. For this experiment, we used fixed weights for the near and far samples, respectively.
We observe from Table III that fixed weighting can give unpredictable results, i.e. for some cases (1-shot, R to C), it improves the results, and for other cases (3-shot R to S and R to C), the performance degrades. We observe that for almost all scenarios, the proposed source example weighting helps to boost the performance.
We also compare with another popular weighting scheme, namely Focal Loss [42], which emphasizes on correct classification of hard training examples by weighing them more.
We observe that for all scenarios, the proposed technique outperforms focal loss.
As an example, for 3-shot setting, the proposed technique gives 72.5 and 79.2 for R to S and R to C as compared to 69.6 and 74.8 obtained using Focal loss.
Near & Far Weighing: Next, we analyze the effect of weighing only the near or far samples, and the results are reported in Table IV.
Near source samples are defined as the samples having weight $w_i > 1$, and far samples are defined as the ones having $w_i < 1$.
The results obtained using only UDA with self-training is shown in the first row.
If we introduce the labeled target samples right after this stage, as expected, the performance improves as shown in the second row.
The third and fourth rows show the performance when only near and far away source examples are weighted.
We observe that both these steps help to boost the performance.
The final row is the complete Pred&Guide, which weighs both the near and far away source examples and outperforms all the other variants.
Sensitivity Analysis of Hyperparameters:
There are two primary hyperparameters in Pred&Guide, $\alpha$ and $\beta$, which control the amount of weighing for the source examples. We plot the SSDA results (%) for a wide range of values for the 3-shot R to S setting of DomainNet in Fig. 4 (using ResNet34).
We observe that the performance is quite stable for a wide range of parameter values, and even the lowest accuracy is better than the second best approach.
We also vary the value of $T_{int}$ between 1000 and 2000 for DomainNet and observe that the performance is stable for all the values. Note that we have used the same $\alpha$ and $\beta$ values for all the domain settings of a dataset.
Weighing Strategy | R to S (1-shot) | R to S (3-shot) | R to C (1-shot) | R to C (3-shot)
---|---|---|---|---
No Weights | 68.8 | 70.8 | 71.3 | 76.3
Fixed Weights | 68.8 | 68.6 | 74.3 | 73.2
Class-wise Adaptability Based Weights | 68.7 | 72.5 | 74.7 | 79.2

UDA | N | F | LT | R to S (1-shot) | R to S (3-shot) | R to C (1-shot) | R to C (3-shot)
---|---|---|---|---|---|---|---
✓ | | | | 63.5 | 63.5 | 69.2 | 69.2
✓ | | | ✓ | 68.8 | 70.8 | 71.3 | 76.3
✓ | ✓ | | ✓ | 69.0 | 68.9 | 71.5 | 77.4
✓ | | ✓ | ✓ | 68.1 | 71.9 | 72.4 | 77.3
✓ | ✓ | ✓ | ✓ | 68.7 | 72.5 | 74.7 | 79.2
7 Conclusion
In this work, we addressed the importance of using the labeled target samples effectively for the domain adaptation task in the semi-supervised setting. The proposed Pred&Guide framework initially performs domain adaptation in an unsupervised manner to avoid any bias that may be introduced due to the few labeled target examples. After this stage, we introduce two modules, labeled target prediction and source example weighing, which effectively weigh the source examples to better guide the adaptation. With Pred&Guide, we set a new state-of-the-art for semi-supervised domain adaptation, as illustrated by extensive experiments on two large-scale benchmark datasets.
References
- [1] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” in JMLR, 2016.
- [2] Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in ICML, 2015.
- [3] W. Li and S. Chen, “Unsupervised domain adaptation with progressive adaptation of subspaces,” in arXiv, 2020.
- [4] B. Gong, Y. Shi, F. Sha, and K. Grauman, “Geodesic flow kernel for unsupervised domain adaptation,” in CVPR, 2012.
- [5] K. Saito, D. Kim, S. Sclaroff, T. Darrell, and K. Saenko, “Semi-supervised domain adaptation via minimax entropy,” in ICCV, 2019.
- [6] K. Saito, Y. Ushiku, T. Harada, and K. Saenko, “Adversarial dropout regularization,” in ICLR, 2018.
- [7] J. Donahue, J. Hoffman, E. Rodner, K. Saenko, and T. Darrell, “Semi-supervised domain adaptation with instance constraints,” in CVPR, 2013.
- [8] T. Kim and C. Kim, “Attract, perturb, and explore: Learning a feature alignment network for semi-supervised domain adaptation,” in ECCV, 2020.
- [9] P. Jiang, A. Wu, Y. Han, Y. Shao, M. Qi, and B. Li, “Bidirectional adversarial training for semi-supervised domain adaptation,” in IJCAI, 2020.
- [10] T. Yao, Yingwei Pan, C. Ngo, Houqiang Li, and Tao Mei, “Semi-supervised domain adaptation with subspace learning for visual recognition,” in CVPR, 2015.
- [11] M. Long, H. Zhu, J. Wang, and M. I. Jordan, “Unsupervised domain adaptation with residual transfer networks,” in NIPS, 2016.
- [12] K. Sohn, D. Berthelot, C.-L. Li, Z. Zhang, N. Carlini, E. D. Cubuk, A. Kurakin, H. Zhang, and C. Raffel, “Fixmatch: Simplifying semi-supervised learning with consistency and confidence,” in NIPS, 2020.
- [13] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. Raffel, “Mixmatch: A holistic approach to semi-supervised learning,” in NIPS, 2019.
- [14] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan, “Deep hashing network for unsupervised domain adaptation,” in CVPR, 2017.
- [15] X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang, “Moment matching for multi-source domain adaptation,” in ICCV, 2019.
- [16] K. Saito, K. Watanabe, Y. Ushiku, and T. Harada, “Maximum classifier discrepancy for unsupervised domain adaptation,” in CVPR, 2018.
- [17] B. Gong, K. Grauman, and F. Sha, “Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation,” in ICML, 2013.
- [18] D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu, “Equivalence of distance-based and rkhs-based statistics in hypothesis testing,” The Annals of Statistics, 2013.
- [19] B. Sun, J. Feng, and K. Saenko, “Correlation alignment for unsupervised domain adaptation,” in Domain Adaptation in Computer Vision Applications, 2016.
- [20] H. Nam, H. Lee, J. Park, W. Yoon, and D. Yoo, “Reducing domain gap via style-agnostic networks,” in arXiv, 2020.
- [21] J. Li, G. Li, Y. Shi, and Y. Yu, “Cross-domain adaptive clustering for semi-supervised domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2505–2514.
- [22] A. Singh, “Clda: Contrastive learning for semi-supervised domain adaptation,” Advances in Neural Information Processing Systems, vol. 34, pp. 5089–5101, 2021.
- [23] K. Li, C. Liu, H. Zhao, Y. Zhang, and Y. Fu, “Ecacl: A holistic framework for semi-supervised domain adaptation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 8578–8587.
- [24] J. Liang, D. Hu, and J. Feng, “Domain adaptation with auxiliary target domain-oriented classifier,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 16632–16642.
- [25] E. D. Cubuk, B. Zoph, J. Shlens, and Q. V. Le, “Randaugment: Practical automated data augmentation with a reduced search space,” in NIPS, 2020.
- [26] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le, “Autoaugment: Learning augmentation policies from data,” in CVPR, 2019.
- [27] F. H. K. dos Santos Tanaka and C. Aranha, “Data augmentation using gans,” in arXiv, 2019.
- [28] D. Berthelot, N. Carlini, E. D. Cubuk, A. Kurakin, K. Sohn, H. Zhang, and C. Raffel, “Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring,” in ICLR, 2020.
- [29] Y. Chen, C. Wei, A. Kumar, and T. Ma, “Self-training avoids using spurious features under domain shift,” in NIPS, 2020.
- [30] Y. Grandvalet and Y. Bengio, “Semi-supervised learning by entropy minimization,” in NIPS, 2004.
- [31] D.-H. Lee, “Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks,” in ICML 2013 Workshop: Challenges in Representation Learning (WREPL), 2013.
- [32] T. Miyato, S. ichi Maeda, M. Koyama, and S. Ishii, “Virtual adversarial training: A regularization method for supervised and semi-supervised learning,” in IEEE Transactions on PAMI, 2018.
- [33] A. Tarvainen and H. Valpola, “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,” in NIPS, 2017.
- [34] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “mixup: Beyond empirical risk minimization,” in ICLR, 2018.
- [35] L. van der Maaten and G. Hinton, “Visualizing data using t-sne,” JMLR, vol. 9, 2008. [Online]. Available: http://jmlr.org/papers/v9/vandermaaten08a.html
- [36] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in NIPS, 2019.
- [37] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, 2016.
- [38] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012.
- [39] R. Ranjan, C. D. Castillo, and R. Chellappa, “L2-constrained softmax loss for discriminative face verification,” in arXiv, 2017.
- [40] M. Long, Z. Cao, J. Wang, and M. I. Jordan, “Conditional adversarial domain adaptation,” in NIPS, 2018.
- [41] J. Li, G. Li, Y. Shi, and Y. Yu, “Cross-domain adaptive clustering for semi-supervised domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2505–2514.
- [42] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980–2988.
Megh Bhalerao received his Bachelor of Technology degree in Electrical Engineering from the National Institute of Technology Karnataka, India, in 2020. He is currently an MS EE student at the University of Washington, Seattle.
Anurag Singh is currently an MS student in Computer Science at TU Munich. He completed his undergraduate degree in CS from NSIT, University of Delhi, in 2018.
Soma Biswas received the PhD degree in Electrical Engineering from the University of Maryland, College Park, in 2009. She is currently an Associate Professor in the Electrical Engineering Department, Indian Institute of Science, Bangalore. Her research interests are in Computer Vision, Pattern Recognition, Machine Learning and related areas. She is a Senior Member of the IEEE.
8 Supplementary Material
8.1 Modularity of our Source Example Weighing Framework
We can seamlessly integrate our source example weighing scheme with any given SSDA framework. In Table V, we show that it can improve the performance of the recently proposed SSDA method Cross-Domain Adaptive Clustering (CDAC) [41].
Net | Method | R to C | R to P | P to C | C to S | S to P | R to S | P to R | Mean | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | 1 shot | 3 shot | ||
ResNet34 | CDAC | 77.4 | 79.6 | 74.2 | 75.1 | 75.5 | 79.3 | 67.6 | 69.9 | 71.0 | 73.4 | 69.2 | 72.5 | 80.4 | 81.9 | 73.6 | 76.0 |
CDAC + SEW | 78.1 | 80.1 | 75.1 | 75.5 | 75.7 | 78.7 | 68.7 | 70.7 | 71.8 | 73.7 | 70.8 | 72.8 | 80.9 | 82.6 | 74.4 | 76.2 |
8.2 Comparison of SEW with standard focal loss
In Table VI, we compare our proposed source example weighing scheme with the standard focal loss, which assigns weights to examples based on their hardness. In contrast, our SEW scheme assigns example weights based on domain similarity.
Weighting Strategy | R to S (3-shot) | R to C (3-shot)
---|---|---|
No Weights | 70.8 | 76.3 |
Fixed Weights | 68.6 | 73.2 |
Focal Loss | 69.6 | 74.8 |
Proposed Class-wise Adaptability Based Weights | 72.5 | 79.2 |