
Pred&Guide: Labeled Target Class Prediction for Guiding Semi-Supervised Domain Adaptation

Megh Bhalerao, Anurag Singh, and Soma Biswas

Megh Bhalerao, Anurag Singh and Soma Biswas collaborated in this work at the Department of Electrical Engineering, Indian Institute of Science, Bangalore, India. E-mail (corresponding author): somabiswas@iisc.ac.in
Abstract

Semi-supervised domain adaptation aims to classify data belonging to a target domain by utilizing a related label-rich source domain and very few labeled examples of the target domain. Here, we propose a novel framework, Pred&Guide, which leverages the inconsistency between the predicted and the actual class labels of the few labeled target examples to effectively guide the domain adaptation in a semi-supervised setting. Pred&Guide consists of three stages: (1) first, in order to treat all the target samples equally, we perform unsupervised domain adaptation coupled with self-training; (2) second is the label prediction stage, where the current model is used to predict the labels of the few labeled target examples; and (3) finally, the correctness of the label predictions is used to effectively weigh source examples class-wise to better guide the domain adaptation process. Extensive experiments show that the proposed Pred&Guide framework achieves state-of-the-art results on two large-scale benchmark datasets, namely Office-Home and DomainNet.

Index Terms:
semi-supervised domain adaptation, source example weighting, pseudo-labeling.

1 Introduction

In the real world, the training data distribution is often different from the testing data distribution, necessitating that deep learning models learnt using data from one domain are adapted to data from a different distribution. Unsupervised Domain Adaptation (UDA) [1, 2, 3, 4] addresses this problem by utilizing a related label-rich source domain and unlabeled samples from the target domain. Recent research has shown that unlabeled target samples alone may not be sufficient to counter the domain shift, and thus the area of Semi-supervised Domain Adaptation (SSDA) [5, 6, 7, 8, 9, 10] is gaining traction, where, additionally, a few labeled target samples are utilized to aid the adaptation process.

The advantage of SSDA over UDA approaches lies in the manner in which the few labeled targets are utilized in the adaptation process. A straightforward approach is to include them along with the labeled source examples in the Cross-Entropy loss  [5, 6]. Though this approach helps in improving the final performance, it does not fully leverage the usefulness of the labeled target samples, since they form a very small part of the final loss as compared to the large number of source examples.

In this work, we propose a novel framework, Pred&Guide, which utilizes the class label prediction inconsistencies on the few labeled target examples to effectively guide the domain adaptation process. Pred&Guide has three main stages: (1) UDA with Self-Training: Access to a few labeled target samples can bias the domain adaptation process towards those samples from the very beginning, as also noted in [3, 11]. Thus, we propose to utilize a UDA framework to start the domain adaptation process in an unbiased manner, which utilizes all the target data, but ignores the available labels of the few target examples. Simultaneously, we also compute the pseudo-labels of all the unlabeled data using the current model, and perform strong augmentations of those target examples for which we are confident about their pseudo-labels [12, 13]. (2) Labeled Target Prediction: The model trained in the first stage is used to predict the labels of the targets for which ground truth labels are available, in order to analyze the adaptation process and further guide it. (3) Source Example Weighting: We surmise that different source examples either help or hinder the process of domain adaptation. Based on the inconsistency between the predicted and true labels of the few labeled target examples, Pred&Guide steers the adaptation process by appropriately weighing the source samples according to their cosine similarity to the labeled target examples of the corresponding class. Specifically, the source samples in the neighborhood of the incorrectly classified targets are given more weight and the ones which are far away are down-weighed, following a linear weighing scheme as described in Section 4.3, so that the classifier can eventually classify the labeled targets correctly.

We extensively evaluate the proposed Pred&Guide framework on two large-scale benchmark datasets, namely Office-Home [14] and DomainNet [15]. Pred&Guide outperforms all the state-of-the-art approaches for both the datasets. The main contributions of our work are as follows:

  1. We propose a novel approach for semi-supervised domain adaptation, termed Pred&Guide, by leveraging the prediction inconsistency of the labeled targets.

  2. We propose to effectively weigh the source examples based on the label prediction inconsistency to guide the DA process.

  3. Extensive experiments on two benchmark datasets show the effectiveness of the proposed framework.

We now describe the related work in literature followed by the proposed approach and results of extensive evaluation.

2 Related Work

Here, we provide pointers to the related work in the literature on domain adaptation, data augmentation and semi-supervised learning.
Domain Adaptation: Most of the current literature on domain adaptation can be broadly categorized into unsupervised (UDA) and semi-supervised domain adaptation (SSDA). UDA approaches, which use only unlabeled target domain data, have been explored rigorously in [1, 2, 11, 16, 17]. Initial approaches used statistical methods such as Maximum Mean Discrepancy [18], Correlation Alignment (CORAL) [19] and the Geodesic Flow Kernel [4] to handle the mismatch in feature distributions. More recently, deep learning based UDA approaches such as [2, 1] aim to learn domain-invariant features using a domain confusion module for feature alignment. It has been observed that UDA performance can be boosted significantly with just a few labeled examples from the target domain, as noted in works such as [5, 8, 9, 20]. Most of the recent SSDA approaches use 1-3 labeled target examples per class. The seminal work in [5] uses a mini-max entropy loss to compute domain-invariant class representatives. The authors of [21] propose an adversarial loss instead of the mini-max entropy. In [9], adversarial examples are used to bridge the source and target domain gap. [20, 22] use a style-agnostic network and contrastive learning, respectively, for reducing the source and target domain gaps.

However, the majority of approaches treat the labeled target examples no differently than the labeled source examples, thus not fully leveraging the information contained in them. Some works use the labeled target samples in a triplet loss to bring the source closer to the target [23], while [24] proposes an effective pseudo-labeling strategy using the labeled target samples. The presence of few labeled target examples in the SSDA setting results in intra-domain discrepancy, which has recently been noted and addressed in [8]. We also address this critical issue in Pred&Guide.
 
Data Augmentation: Data augmentation [25, 26, 27] is widely used as a regularizer to prevent overfitting. Different data augmentation policies have been proposed in works such as [26, 25, 12]. From a pool of augmentations, [25] randomly selects the augmentations to be used, while [26] selects augmentations during model training using reinforcement learning. Other approaches such as CTAugment [12, 28] use a strategy similar to AutoAugment [26], but with a fixed algorithm for assigning augmentation magnitudes. In our work, we use augmentation-based consistency regularization, built on RandAugment [25], to assign pseudo-labels to the unlabeled targets and boost the performance.
 
Semi-Supervised Learning (SSL): SSL approaches utilize a small amount of labeled data and a large amount of unlabeled data for the required task [29, 30, 31, 12, 32, 33, 13, 28], which reduces the labeling cost. Several SSL algorithms use a consistency constraint, i.e., they aim to minimize the discrepancy between the predictions for different augmented versions of a data-point [12, 13, 33].

Mean-Teacher [33] trains a student and a teacher model and forces their predictions to match. The student model is updated at every iteration, while the teacher model is updated with a suitable momentum. Virtual Adversarial Training [32] uses adversarial perturbations to train the SSL model. Pseudo-labeling [31] predicts labels of the unlabeled data and uses these as true labels for supervised learning. Recent approaches such as MixMatch [13], ReMixMatch [28] and FixMatch [12] provide an elegant framework for SSL, leveraging consistency regularization and MixUp [34] to obtain state-of-the-art results. The proposed SSDA framework Pred&Guide is inspired by these seminal SSL approaches, but utilizes the few labeled target domain examples in a novel manner to guide the adaptation process.

3 Problem Definition

We first discuss the problem statement and the notations used. In this work, we address the SSDA task, where there are very few labeled target examples available (i.e. one and three shot settings). The training data consists of (1) labeled source: $\mathcal{D}_{ls}=\{x^{s}_{i},y^{s}_{i}\}_{i=1}^{n_{s}}$, where $n_{s}$ is the number of labeled source examples, (2) unlabeled target: $\mathcal{D}_{ut}=\{x^{ut}_{i}\}_{i=1}^{n_{ut}}$, where $n_{ut}$ is the number of unlabeled target examples, and (3) few labeled target: $\mathcal{D}_{lt}=\{x^{t}_{i},y^{t}_{i}\}_{i=1}^{n_{t}}$, where $n_{t}$ is the number of labeled target examples. Here $x_{i}$ denotes the $i^{th}$ data-point and $y_{i}$ denotes the corresponding class label, which belongs to one of the $K$ classes (same for both source and target domains). We test the model's performance on $\mathcal{D}_{ut}$. For all the experiments, we have a feature extractor network $F$ (with parameters $\Theta_{F}$), sequentially followed by a classifier $C$ (with parameters $\Theta_{C}$).

4 Proposed Pred&Guide Framework

Figure 1: Figure showing the complete Pred&Guide framework and its different modules. The different modules are activated at different iteration steps, when the corresponding losses are computed, as seen from the indicator functions in equation (15). The distances to the source samples are calculated efficiently using a feature bank (not shown in the figure) in the feature space.

The proposed SSDA framework has three sequential modules, namely (1) unsupervised domain adaptation (UDA) with self-training, (2) labeled target prediction, where we use the current model to predict the labels of the few target domain samples for which ground truth labels are available, and finally (3) source sample weighting, where the source examples are effectively weighted using the prediction inconsistencies of the labeled target examples from the previous step. Figure 1 illustrates the proposed Pred&Guide framework and its different modules. Each of the modules, along with its motivation, is explained below in detail.

4.1 UDA with Self-Training

In SSDA, though a few labeled target samples are available in the training data, using them from the beginning may bias the domain adaptation process towards these examples, as noted in recent works [3, 11]. This has motivated us to use a UDA approach to initiate the adaptation process in an unbiased manner, which treats all the target samples equally. For this, any UDA approach can be utilized. Most UDA approaches learn a domain-invariant feature extractor and a classifier [2, 1, 3]. Let us denote this model as $\mathbf{U_{M}}$.

Inspired by the recent works on consistency regularization and self-training/pseudo-labeling [12, 28, 13], while training $\mathbf{U_{M}}$, we incorporate an augmentation-consistency based self-training to further aid the unsupervised learning, which is briefly described below for completeness.

During every training iteration, for all the unlabeled target data $x^{ut}\in\mathcal{D}_{ut}$ we compute

$$x^{ut_{s}}=\mathcal{S}(x^{ut});\qquad x^{ut_{w}}=\mathcal{W}(x^{ut})$$ (1)

where $x^{ut_{s}}$ and $x^{ut_{w}}$ are the strong and weak augmentations of the given data $x^{ut}$. The strong augmentation function $\mathcal{S}$ is obtained using RandAugment [25], while the weak augmentation function $\mathcal{W}$ is simply flipping, random cropping and padding. In every forward pass of the current model $\mathbf{U_{M}}$, for all the unlabeled target samples, we compute the predicted class probability distributions of the weak and strong augmentations, denoted by $p^{ut_{w}}$ and $p^{ut_{s}}$ respectively. Using $p^{ut_{w}}$, we compute the one-hot pseudo-label

$$y^{ut_{w}}=\mathbf{O}(\arg\max(p^{ut_{w}}))$$ (2)

where the function $\mathbf{O}$ produces a valid one-hot vector corresponding to the predicted label. Our per example pseudo-labeled target loss is:

$$\mathcal{L}_{p}=\mathbf{1}(\max(p^{ut_{w}})>\tau)\,\mathbf{H}(y^{ut_{w}},p^{ut_{s}})$$ (3)

Here, we only consider the loss for the unlabeled targets whose prediction confidence is greater than a threshold $\tau$. $\mathbf{H}(.,.)$ is the standard cross-entropy function, where the first parameter is the ground truth label and the second is the predicted class probability distribution over the labels. The proposed method calculates the pseudo-labels on-the-fly for every example as opposed to static pseudo-labels being updated at different time-steps [31].
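To make the computation concrete, a minimal PyTorch-style sketch of the per-batch pseudo-label loss of equations (1)-(3) is given below. It is a sketch rather than the exact implementation: `model` is assumed to output class logits, `strong_aug`/`weak_aug` are hypothetical stand-ins for RandAugment and the flip/crop augmentations, and the default value of the threshold $\tau$ is illustrative.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_ut, strong_aug, weak_aug, tau=0.95):
    """Confidence-thresholded consistency loss of Eq. (3) (sketch).

    `model` maps a batch of images to class logits; `strong_aug` / `weak_aug`
    are assumed stand-ins for RandAugment and flip/crop-style augmentations;
    `tau` is the confidence threshold (default value here is illustrative).
    """
    x_s, x_w = strong_aug(x_ut), weak_aug(x_ut)          # Eq. (1)
    with torch.no_grad():
        p_w = torch.softmax(model(x_w), dim=1)            # predictions on the weak view
        conf, y_w = p_w.max(dim=1)                         # pseudo-label, Eq. (2)
        mask = (conf > tau).float()                        # indicator 1(max p > tau)
    loss_per_example = F.cross_entropy(model(x_s), y_w, reduction="none")
    return (mask * loss_per_example).mean()                # Eq. (3), averaged over the batch
```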

For the source examples, we use the standard cross-entropy loss $\mathbf{H}(.,.)$ while $t<T_{1}$. Here, $T_{1}$ is the number of iterations for which UDA with self-training runs, i.e., until the validation accuracy saturates. After $T_{1}$, we use the weighted cross-entropy loss for the source examples, as explained later in Section 4.3. The loss for a source example $(x^{s},y^{s})$ is given as:

$$\mathcal{L}_{s}=\mathbf{H}(y^{s},p^{s})$$ (4)

where $y^{s}$ is the label of the source example and $p^{s}$ is the predicted class probability distribution of $x^{s}$.

For the weakly augmented unlabeled target examples $x^{ut_{w}}$, we use the mini-max entropy approach [5], with the loss function denoted by $\mathcal{L}_{ULT}$ and given by:

$$\mathcal{L}_{ULT}=-\sum_{k=1}^{K}p^{ut_{w}}_{k}\log(p^{ut_{w}}_{k})$$ (5)

Thus, in this step, the domain adaptation uses all the target samples, but without using the labels that are available for the few labeled target examples. In the next stage, we check how well the current model predicts the labels of the labeled targets and their augmentations, which is then used to guide the adaptation process.

4.2 Labeled Target Prediction

In this stage, we leverage the additional label information provided for the few target examples to analyze how well the current model has adapted to the target domain. Since the number of labeled targets is very small, we also leverage their strong and weak augmentations, denoted by $x^{t_{s}}$ and $x^{t_{w}}$, computed following equation (1). Let us denote this new augmented data as $\mathcal{D}_{lt_{a}}$.

First, the current model is used to predict the class labels of the examples in $\mathcal{D}_{lt_{a}}$, for which we know the ground truth class labels. Computing the label prediction accuracy for each class in this manner gives a weak indication of the class-wise adaptation of the current model. For example, if most of the (labeled) target samples of a class have been classified correctly, it indicates that domain adaptation is successful for that class; otherwise, it is not satisfactory. Obviously, as the number of labeled target samples increases, this becomes a stronger indicator of the class-wise domain adaptation accuracy, hence we use the labeled data along with its augmentations. We define the class-wise accuracy vector as follows:

$$\mathbf{A}=[a_{1},a_{2},\dots,a_{K}],\quad\mathbf{A}\in\mathbb{R}^{K\times 1}$$ (6)

where the $a_{i}$'s are the individual class accuracies calculated on $\mathcal{D}_{lt_{a}}$ using the current model $\mathbf{U_{M}}$, and $K$ is the total number of classes. Pred&Guide aims to utilize this class-wise information about adaptability, rather than just the domain-wise adaptability, since different classes vary in their ability to adapt. This follows from the fact that it may be easier for some classes of a domain to adapt (if those classes look similar across domains), while it may be difficult for others.
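As an illustration, the class-wise accuracy vector $\mathbf{A}$ of equation (6) can be computed on the augmented labeled target set with a few lines of PyTorch; this is only a sketch, the tensor names are ours, and `model` is assumed to return class logits.

```python
import torch

def classwise_accuracy(model, x_lta, y_lta, num_classes):
    """Class-wise accuracy vector A of Eq. (6) on the augmented labeled targets (sketch)."""
    with torch.no_grad():
        preds = model(x_lta).argmax(dim=1)              # predicted labels on D_lta
    acc = torch.zeros(num_classes)
    for k in range(num_classes):
        mask = (y_lta == k)
        if mask.any():
            acc[k] = (preds[mask] == k).float().mean()  # a_k for class k
    return acc                                           # A = [a_1, ..., a_K]
```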

The goal is to aid the classes which have not yet adapted satisfactorily, based on the computed accuracy. For the classes with lower accuracy, more (less) weight is given to the nearby (far-away) source samples of the corresponding labeled target examples. This helps the corresponding class prototypes adapt to the target domain samples, as will be explained in detail next.

4.3 Source Example Weighing

Once UDA is performed, the label prediction in the previous module is used to weigh source examples accordingly at regular intervals. If a target example is wrongly classified, we weigh its neighbouring source samples of the same class relatively more (based on a linear weighing scheme as explained next), which will guide the model towards classifying this target sample correctly. To further aid the adaptation, we also relatively down-weigh the source samples of the same class far away from the wrongly classified labeled targets, since these source examples hinder the adaptation process. To perform this step efficiently with reduced computational complexity, we use a feature bank to store the representations of the source examples as described next.

Feature Bank Based Source Identification: To weigh the source examples, we need the distances between each labeled target example and the source examples of the same class. To compute these distances efficiently, we maintain a feature bank $\mathbf{S}$ as defined below:

$$\mathbf{S}=[s_{1},s_{2},\dots,s_{n_{s}}],\quad\mathbf{S}\in\mathbb{R}^{d_{f}\times n_{s}}$$ (7)

where $s_{i}$ is the representation of the $i^{th}$ source example, $n_{s}$ denotes the number of examples in the source domain (as defined in Section 3), and $d_{f}$ is the dimension of the representation space. $\mathbf{S}$ is updated on the fly batch-wise with momentum $m_{s}$ as:

$$\mathbf{S}_{t+1}\leftarrow m_{s}\mathbf{S}_{t}+(1-m_{s})\,\mathbf{f}_{bs}$$ (8)

where $\mathbf{f}_{bs}$ are the representations of the current batch of source examples, and $\mathbf{S}_{t}$ denotes the feature bank at iteration step $t$. For simplicity, we assume that in (8) only the entries corresponding to the source examples in the current mini-batch are updated; a short sketch of this update is given below, after which we describe how our linear weighing scheme works.
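The sketch assumes the bank is stored row-wise (one row per source example) and that integer example indices are available for the current mini-batch; the class name and interface are ours.

```python
import torch

class SourceFeatureBank:
    """Momentum feature bank S of Eqs. (7)-(8) for the labeled source examples (sketch)."""

    def __init__(self, n_source, feat_dim, momentum=0.1):
        self.bank = torch.zeros(n_source, feat_dim)   # one row per source example
        self.m = momentum                              # m_s in Eq. (8)

    @torch.no_grad()
    def update(self, indices, feats):
        # S_{t+1} <- m_s * S_t + (1 - m_s) * f_bs, applied only to the rows
        # corresponding to the source examples in the current mini-batch
        self.bank[indices] = self.m * self.bank[indices] + (1.0 - self.m) * feats
```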

Let $\mathcal{D}^{k}_{ls}\subset\mathcal{D}_{ls}$ denote the set of source examples belonging to class $k\in\{1,2,\dots,K\}$. Similarly, let $\mathcal{D}^{k}_{lt}\subset\mathcal{D}_{lt}$ be the set of labeled targets belonging to class $k$. Now, for every element $x^{t,k}$ in $\mathcal{D}_{lt}^{k}$ (the sample index $i$ is omitted for simplicity), we compute an ordered set $\tilde{\mathcal{D}}_{ls}^{k}=\{x^{s,k}_{j}\in\mathcal{D}_{ls}^{k}\text{ s.t. }d(s^{s,k}_{j+1},F(x^{t,k}))>d(s^{s,k}_{j},F(x^{t,k}))\}$, where $s^{s,k}_{j}$ is the feature of sample $x^{s,k}_{j}$ from the feature bank $\mathbf{S}_{t}$, and $d(.,.)$ is the cosine similarity between the feature representations of the data points. Let $min_{sim}$ and $max_{sim}$ denote the minimum and maximum cosine similarities over the set of source examples ($\tilde{\mathcal{D}}^{k}_{ls}$) of the given class $k$. We compute the class-wise max and min weights as:

$$max_{w}=1+\phi/\exp(a_{k}),\qquad min_{w}=1-\phi/\exp(a_{k})$$ (9)

where $a_{k}$ is the accuracy computed using the predictions of the labeled targets of class $k$, as computed in the previous step of Pred&Guide, and $\phi$ is a hyper-parameter. Thus, the $i^{th}$ source sample belonging to class $k$ is weighted with $w_{i}$ as

$$w_{i}=m\times\left(d(s^{s,k}_{i},F(x^{t,k}))-min_{sim}\right)+min_{w}$$ (10)

where $m$ is the slope of the linear weighting scheme

$$m=\frac{max_{w}-min_{w}}{max_{sim}-min_{sim}}$$ (11)

In other words, we up-weigh the near source samples and down-weigh the far source samples, with the weight varying as a linear function of the similarity of the source sample to the labeled target sample, as in equation (10). If the accuracy for class $k$ is low (i.e., $a_{k}$ is small), the prototype for that particular class has probably not adapted properly. For these classes, the weights $w_{i}$ of the source examples in the neighbourhood of the labeled targets of that class are increased, and those of the far-away examples are decreased, which helps the domain adaptation process. Thus, the weighted loss for a source example $(x^{s},y^{s})$ with predicted class probability distribution $p^{s}$ is expressed as follows:

$$\mathcal{L}^{w}_{s}=w^{s}\times\mathbf{H}(y^{s},p^{s})$$ (12)

where $w^{s}$ is the weight of the example whose loss is being calculated. Figure 2 illustrates the near and far examples at different iterations of Pred&Guide using t-SNE plots [35].
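A compact sketch of equations (9)-(11) for a single class $k$ is given below; it assumes the class-$k$ bank features and the feature of one labeled target of that class are already extracted, and the function name and small epsilon added for numerical stability are ours.

```python
import math
import torch
import torch.nn.functional as F

def source_weights(class_feats, target_feat, a_k, phi=0.5):
    """Linear source-example weights of Eqs. (9)-(11) for one class k (sketch).

    class_feats: feature-bank entries of the class-k source examples, shape (n_k, d_f)
    target_feat: feature F(x^{t,k}) of a labeled target of class k, shape (d_f,)
    a_k:         class-k accuracy from the labeled target prediction step
    """
    # cosine similarity d(s_i^{s,k}, F(x^{t,k})) for every class-k source example
    sim = F.cosine_similarity(class_feats, target_feat.unsqueeze(0), dim=1)
    max_w = 1.0 + phi / math.exp(a_k)                           # Eq. (9)
    min_w = 1.0 - phi / math.exp(a_k)
    slope = (max_w - min_w) / (sim.max() - sim.min() + 1e-8)    # Eq. (11)
    return slope * (sim - sim.min()) + min_w                     # Eq. (10)
```

With this form, a source example at $max_{sim}$ receives weight $max_{w}$, one at $min_{sim}$ receives $min_{w}$, and the spread of the weights shrinks as the class accuracy $a_{k}$ grows.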

Figure 2: Figure showing the t-SNE [35] plots at different stages of training for one particular class in the P to C domain adaptation setting for DomainNet. The labeled source and labeled target (above) and the unlabeled target (below) are shown in separate plots for clarity (but share the same set of axes). The target classification accuracies for this class at these three stages (from left to right) are 47.9%, 76.7% and 81.5% for the 2000th, 7500th, and 15000th iterations respectively, as can be clearly seen from the increasing number of correctly classified target examples. Color coding - ×: unweighted labeled source; ▼ and ▲: 5 near and 5 far weighted samples respectively; orange square: labeled targets; ×: wrongly classified unlabeled targets; ×: correctly classified unlabeled targets.
Figure 3: Figure showing 5 nearest and farthest source samples corresponding to labeled targets of four different classes. Green (red) boundaries of the labeled targets indicate whether they are correctly (incorrectly) classified after UDA with self-training (first column) and after Pred&Guide training is complete with effective source weighting (last column). We observe that for the first two rows, Pred&Guide is able to effectively correct the labels, while in the third row, though the near and far source examples are quite intuitive, the final prediction is still incorrect. The last row shows one example which is always correctly classified. This example is for the Real to Sketch setting of DomainNet with ResNet34 backbone.

Once the labeled target accuracy has been used to calculate and assign weights for the source examples over a number of iterations (starting at $T_{1}$) as described above, we bring the labeled target examples $(x^{t},y^{t})$, with predicted class probability $p^{t}$, into the training. For these samples, we use the standard cross-entropy loss starting at iteration $T_{2}$ (the iteration at which the validation accuracy with source example weighing saturates), as follows:

$$\mathcal{L}_{lt}=\mathbf{H}(y^{t},p^{t})$$ (13)

Input: Feature extractor $F$, classifier $C$, total number of iterations $T$
Data: Labeled source $\mathcal{D}_{ls}$, unlabeled target $\mathcal{D}_{ut}$, labeled target $\mathcal{D}_{lt}$, augmented labeled target $\mathcal{D}_{lt_{a}}$.

while $t<T$ do
       Calculate pseudo-labeled target loss $\mathcal{L}_{p}$ according to (3).
       Calculate unlabeled target loss $\mathcal{L}_{ULT}$ according to (5).
       if $t>T_{1}$ then
              if $t \bmod T_{n}==0$ then
                     Calculate accuracy on $\mathcal{D}_{lt_{a}}$ to obtain $\mathbf{A}$ as in equation (6).
                     Calculate and update source example weights using equation (10).
              end if
              Calculate and backpropagate $\mathcal{L}^{w}_{s}$ according to equation (12).
       else if $t<T_{1}$ then
              Calculate cross-entropy loss for the labeled source examples $\mathcal{L}_{s}$ according to (4).
       end if
       if $t>T_{2}$ then
              Calculate cross-entropy loss for the labeled target examples $\mathcal{L}_{lt}$ according to (13).
       end if
       $t\leftarrow t+1$
end while
Algorithm 1: Proposed Pred&Guide algorithm

4.4 Complete Pred&Guide Framework

The complete Pred&Guide framework is summarized in this section. As stated earlier, we optimize the parameters of the feature extractor $\Theta_{F}$ and the classifier $\Theta_{C}$ using the mini-max approach [5]:

$$\hat{\Theta}_{F}=\underset{\Theta_{F}}{\operatorname{argmin}}\;\mathcal{L}_{1}+\lambda\mathcal{L}_{ULT},\qquad \hat{\Theta}_{C}=\underset{\Theta_{C}}{\operatorname{argmin}}\;\mathcal{L}_{1}-\lambda\mathcal{L}_{ULT}$$ (14)

Here, $\mathcal{L}_{1}$ is the sum of all the losses except the $\mathcal{L}_{ULT}$ loss, given by

$$\mathcal{L}_{1}=\mathcal{L}_{p}+\mathbf{1}(t<T_{1})\mathcal{L}_{s}+\mathbf{1}(t>T_{1})\mathcal{L}^{w}_{s}+\mathbf{1}(t>T_{2})\mathcal{L}_{lt}$$ (15)

Here, $T_{1}$ is the iteration after which we start weighing the source examples and $T_{2}$ is the iteration at which the labels of the few target examples are exposed to the training algorithm. More details about calculating $T_{1}$ and $T_{2}$ are given in Section 5. The complete Pred&Guide algorithm is given in Algorithm 1.
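For clarity, a sketch of how the losses of equation (15) and the opposing signs of equation (14) can be combined in a single backward pass is shown below. MME-style implementations typically realize the sign flip with a gradient-reversal layer between $F$ and $C$; the sketch assumes that design, and the helper names are ours.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass (sketch)."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def combined_loss(t, T1, T2, L_p, L_s, L_s_w, L_lt, feat_ut_w, classifier, lam=0.1):
    """One combined objective for Eqs. (14)-(15) (sketch).

    L_p, L_s, L_s_w, L_lt are the already-computed loss terms of Eq. (15);
    feat_ut_w are the extractor features of the weakly augmented unlabeled targets.
    """
    L1 = L_p + (L_s if t < T1 else L_s_w)     # indicators 1(t<T1) / 1(t>T1)
    if t > T2:
        L1 = L1 + L_lt                         # indicator 1(t>T2)
    # Adversarial entropy term: minimizing lam * sum(p log p) = -lam * H updates the
    # classifier to maximize the entropy, while the gradient reversal flips the sign
    # so the feature extractor minimizes it, matching the opposite signs of Eq. (14).
    p = torch.softmax(classifier(GradReverse.apply(feat_ut_w)), dim=1)
    L_adv = lam * (p * torch.log(p + 1e-8)).sum(dim=1).mean()
    return L1 + L_adv
```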

5 Experimental Evaluation

Here, we provide details of the extensive experiments conducted to evaluate the efficacy of Pred&Guide for semi-supervised domain adaptation. First, we discuss the benchmark datasets used and the implementation details before reporting the results. We also conduct a detailed analysis of Pred&Guide and perform ablation studies to evaluate the usefulness of the individual modules.

Datasets Used and Evaluation Protocol: To evaluate Pred&Guide, we use two large-scale benchmark domain adaptation datasets, DomainNet [15] and Office-Home [14]. DomainNet consists of 6 domains with 345 classes, comprising about 0.6M images. Since the entire DomainNet dataset is noisy, we select 4 domains (Real, Painting, Clipart, Sketch) and 126 classes with 7 standard domain adaptation scenarios, which are generally used to benchmark DA methods [5, 9, 8, 6]. In addition to DomainNet, we also evaluate Pred&Guide on the Office-Home dataset, which is a smaller, but challenging dataset. It consists of 65 classes and 4 domains (Real, Clipart, Product, Art). We evaluate Pred&Guide on all the 12 possible adaptation scenarios for Office-Home.

Implementation Details: The code is implemented using PyTorch [36] on a single Nvidia RTX-2080 GPU. The underlying domain adaptation technique used in Pred&Guide is the very successful mini-max entropy approach [5]. We use ResNet34 [37] and AlexNet [38] for the feature extractor network $F$ to compare with state-of-the-art approaches. For the classifier $C$, we use a $4096\times K$ fully-connected layer for AlexNet and a $512\times K$ fully-connected layer for ResNet34. We use the SGD optimizer, with a starting learning rate of 0.01, momentum of 0.9, weight decay of 0.0005 and $\lambda=0.1$ for the weight of $\mathcal{L}_{ULT}$. We use a batch size of 24 and 32 for the ResNet34 and AlexNet backbones respectively on labeled samples, and twice the batch size for unlabeled data. We set $\phi=0.5$ in equation (9) for DomainNet and $\phi=0.1$ for Office-Home, and $m_{s}=0.1$ in equation (8) for all our experiments. We run the first module, i.e., UDA with self-training, for $T_{1}$ iterations, where $T_{1}$ is the iteration step at which UDA with self-training has converged. Then, the weights of the source examples are computed every $T_{n}$ iterations, with $T_{n}=1000$ for DomainNet and $T_{n}=140$ for Office-Home. Finally, we bring in the labeled target examples in the CE loss at iteration $T_{2}$, which is set once the performance with source example weighing has converged. To calculate $T_{1}$ and $T_{2}$, we consider the validation accuracy to be converged when it has not increased for 500 iterations.
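A small sketch of the plateau check used above to set $T_{1}$ and $T_{2}$ is shown below; the class name and interface are ours, while the 500-iteration patience matches the criterion stated in the text.

```python
class PlateauDetector:
    """Flags convergence once validation accuracy has not improved for `patience` iterations (sketch)."""

    def __init__(self, patience=500):
        self.patience = patience
        self.best_acc = float("-inf")
        self.last_improvement = 0

    def update(self, iteration, val_acc):
        if val_acc > self.best_acc:
            self.best_acc = val_acc
            self.last_improvement = iteration
        # True once the accuracy has been flat for `patience` iterations,
        # i.e., the point taken as T1 (or, later, T2)
        return (iteration - self.last_improvement) >= self.patience
```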

Evaluation for SSDA: The results of Pred&Guide on the Office-Home dataset with the AlexNet backbone for both 1-shot and 3-shot scenarios are reported in Table I. Comparison with recent approaches shows that Pred&Guide achieves state-of-the-art results on average for both settings and also outperforms the others for most of the individual domain pair settings. The results on the DomainNet dataset for both 1 and 3 shots (i.e., one and three target samples of each class in the target domain are labeled) using the AlexNet and ResNet34 backbones are reported in Table II. The results of all the other approaches have been directly taken from [5]. We observe that the proposed approach significantly outperforms all the recent SSDA approaches, thus justifying its usefulness.

Method R to C R to P R to A P to R P to C P to A A to P A to C A to R C to R C to A C to P Mean
One-Shot
S+T [39] 37.5 63.1 44.8 54.3 31.7 31.5 48.8 31.1 53.3 48.5 33.9 50.8 44.1
DANN [2] 42.5 64.2 45.1 56.4 36.6 32.7 43.5 34.4 51.9 51.0 33.8 49.4 45.1
ADR [6] 37.8 63.5 45.4 53.5 32.5 32.2 49.5 31.8 53.4 49.7 34.2 50.4 44.5
CDAN [40] 36.1 62.3 42.2 52.7 28.0 27.8 48.7 28.0 51.3 41.0 26.8 49.9 41.2
ENT [30] 26.8 65.8 45.8 56.3 23.5 21.9 47.4 22.1 53.4 30.8 18.1 53.6 38.8
BiAT [9] - - - - - - - - - - - - 49.6
MME [5] 42.0 69.6 48.3 58.7 37.8 34.9 52.5 36.4 57.0 54.1 39.5 59.1 49.2
Pred&Guide 44.4 73.2 50.0 59.6 38.2 37.0 54.4 34.8 55.6 52.8 38.0 59.2 49.8
Three-Shot
S+T [39] 44.6 66.7 47.7 57.8 44.4 36.1 57.6 38.8 57.0 54.3 37.5 57.9 50.0
DANN [2] 47.2 66.7 46.6 58.1 44.4 36.1 57.2 39.8 56.6 54.3 38.6 57.9 50.3
ADR [6] 45.0 69.3 46.9 57.3 38.9 36.3 57.5 40.0 57.8 53.4 37.3 57.7 49.5
CDAN [40] 41.8 69.9 43.2 53.6 35.8 32.0 56.3 34.5 53.5 49.3 27.9 56.2 46.2
ENT [30] 44.9 70.4 47.1 60.3 41.2 34.6 60.7 37.8 60.5 58.0 31.8 63.4 50.9
BiAT [9] - - - - - - - - - - - - 56.4
MME [5] 51.2 73.0 50.3 61.6 47.2 40.7 63.9 43.8 61.4 59.9 44.7 64.7 55.2
APE [8] 51.9 74.6 51.2 61.6 47.9 42.1 65.5 44.5 60.9 58.1 44.3 64.8 55.6
Pred&Guide 53.4 75.0 51.9 64.0 48.7 43.6 65.7 45.7 60.6 60.0 43.0 67.5 56.6
TABLE I: Semi-supervised DA results (%) on Office-Home data for both 1 and 3-shot protocols using the AlexNet backbone for four domains with 12 total domain combinations.
Net Method R to C R to P P to C C to S S to P R to S P to R Mean
1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot
Alexnet S+T [39] 43.3 47.1 42.4 45.0 40.1 44.9 33.6 36.4 35.7 38.4 29.1 33.3 55.8 58.7 40.0 43.4
DANN [2] 43.3 46.1 41.6 43.8 39.1 41.0 35.9 36.5 36.9 38.9 32.5 33.4 53.6 57.3 40.4 42.4
ADR [6] 43.1 46.2 41.4 44.4 39.3 43.6 32.8 36.4 33.1 38.9 29.1 32.4 55.9 57.3 39.2 42.7
CDAN [40] 46.3 46.8 45.7 45.0 38.3 42.3 27.5 29.5 30.2 33.7 28.8 31.3 56.7 58.7 39.1 41.0
ENT [30] 37.0 45.5 35.6 42.6 26.8 40.4 18.9 31.1 15.1 29.6 18.0 29.6 52.2 60.0 29.1 39.8
MME [5] 48.9 55.6 48.0 49.0 46.7 51.7 36.3 39.4 39.4 43.0 33.3 37.9 56.8 60.7 44.2 48.2
BiAT [9] 54.2 58.6 49.2 50.6 44.0 52.0 37.7 41.9 39.6 42.1 37.2 42.0 56.9 58.8 45.5 49.4
APE [8] 47.7 54.6 49.0 50.5 46.9 52.1 38.5 42.6 38.5 42.2 33.8 38.7 57.5 61.4 44.6 48.9
Pred&Guide 54.5 57.3 54.4 56.7 52.8 56.9 41.6 48.3 36.9 44.9 38.4 46.9 61.9 65.4 48.6 53.8
Resnet34 S+T [39] 55.6 60.0 60.6 62.2 56.8 59.4 50.8 55.0 56.0 59.5 46.3 50.1 71.8 73.9 56.9 60.0
DANN [2] 58.2 59.8 61.4 62.8 56.3 59.6 52.8 55.4 57.4 59.9 52.2 54.9 70.3 72.2 58.4 60.7
ADR [6] 57.1 60.7 61.3 61.9 57.0 60.7 51.0 54.4 56.0 59.9 49.0 51.1 72.0 74.2 57.6 60.4
CDAN [40] 65.0 69.0 64.9 67.3 63.7 68.4 53.1 57.8 63.4 65.3 54.5 59.0 73.2 78.5 62.5 66.5
ENT [30] 65.2 71.0 65.9 69.2 65.4 71.1 54.6 60.0 59.7 62.1 52.1 61.1 75.0 78.6 62.6 67.6
MME [5] 70.0 72.2 67.7 69.7 69.0 71.7 56.3 61.8 64.8 66.8 61.0 61.9 76.1 78.5 66.4 68.9
BiAT [9] 73.0 74.9 68.0 68.8 71.6 74.6 57.9 61.5 63.9 67.5 58.5 62.1 77.0 78.6 67.1 69.7
APE [8] 70.4 76.6 70.8 72.1 72.9 76.7 56.7 63.1 64.5 66.1 63.0 67.8 76.6 79.4 67.6 71.7
Pred&Guide 74.7 79.2 75.9 74.8 74.8 75.9 66.0 69.5 71.4 71.2 68.7 72.5 78.5 80.2 72.9 74.8
TABLE II: Semi-supervised DA performance (%) on the DomainNet dataset for both 1 and 3-shot protocols on 4 domains, R: Real, C: Clipart, P: Painting, S: Sketch.

6 Additional Analysis

Here, we perform extensive analysis and ablation studies of the proposed Pred&Guide framework.
 
Importance of Source Example Weighing: To analyze the importance of weighting the source examples in the proposed Pred&Guide framework, we run the algorithm with and without the source example weighing on the DomainNet dataset. The results in Table IV show that when the algorithm is trained using the standard cross entropy loss $\mathcal{L}_{s}$ without the weighting (i.e. No weights), the performance is considerably lower as compared to when the source samples are properly weighted. Figure 3 gives a visual illustration of the near and far source examples of the labeled targets for one class of DomainNet. We would like to highlight that the proposed source example weighting (SEW) can be seamlessly integrated with other SSDA methods to improve their performance. We experiment with a recent SSDA approach CDAC [41], and observe that SEW improves the mean performance (%) of CDAC for both 1-shot and 3-shot settings from 73.6 to 74.4 and from 76.0 to 76.2 respectively.
 
Weighting Scheme: Now, we analyze the effectiveness of the proposed weighting strategy based on class-wise adaptability. Specifically, we compare with fixed up-weighing and down-weighing. For this experiment, we used fixed weights of 1.5 and 0.5 for the near and far samples respectively. We observe from Table III that fixed weighting can give unpredictable results, i.e., for some cases (1-shot, R to C) it improves the results, and for other cases (3-shot R to S and R to C) the performance degrades. We observe that for almost all scenarios, the proposed source example weighting helps to boost the performance. We also compare with another popular weighting scheme, namely Focal Loss [42], which emphasizes the correct classification of hard training examples by weighing them more. We observe that for all scenarios, the proposed technique outperforms focal loss. As an example, for the 3-shot setting, the proposed technique gives 72.5% and 79.2% for R to S and R to C, as compared to 69.6% and 74.8% obtained using focal loss.
 

Near & Far Weighing: Next, we analyze the effect of weighing only the near or only the far samples, and the results are reported in Table IV. Near source samples are defined as the samples having weight $w_{i}>1$ and far samples are defined as the ones having $w_{i}<1$. The results obtained using only UDA with self-training are shown in the first row. If we introduce the labeled target samples right after this stage, as expected, the performance improves, as shown in the second row. The third and fourth rows show the performance when only the near and only the far-away source examples are weighted, respectively. We observe that both these steps help to boost the performance. The final row is the complete Pred&Guide, which weighs both the near and far-away source examples and outperforms all the other variants.
 
Sensitivity Analysis of Hyperparameters: There are two primary hyperparameters in Pred&Guide: $\phi$, which controls the amount of weighing for the source examples, and $T_{n}$. We plot the SSDA results (%) for a wide range of $\phi$ values for the 3-shot R to S setting of DomainNet in Fig. 4 (using ResNet34). We observe that the performance is quite stable for a wide range of parameter values, and even the lowest accuracy is better than the second best approach. We also vary the value of $T_{n}$ between 1000 and 2000 for DomainNet and observe that the performance is stable for all the $T_{n}$ values. Note that we have used the same $\phi$ and $T_{n}$ values for all the domain settings of a dataset.

Weighing Strategy R to S R to C
1 shot 3 shot 1 shot 3 shot
No Weights 68.8 70.8 71.3 76.3
Fixed Weights 68.8 68.6 74.3 73.2
Class-wise Adaptability Based Weights 68.7 72.5 74.7 79.2
TABLE III: Performance of Pred&Guide with no weighting, fixed weighting and classwise adaptability based weighing of source examples.
Figure 4: Plot of hyperparameter $\phi$ vs. accuracy for the 3-shot R to S setting of the DomainNet dataset.
Components R to S R to C
UDA N F LT 1 shot 3 shot 1 shot 3 shot
✓ - - - 63.5 63.5 69.2 69.2
✓ - - ✓ 68.8 70.8 71.3 76.3
✓ ✓ - ✓ 69.0 68.9 71.5 77.4
✓ - ✓ ✓ 68.1 71.9 72.4 77.3
✓ ✓ ✓ ✓ 68.7 72.5 74.7 79.2
TABLE IV: Ablation study for Pred&Guide depicting performance of individual components using ResNet34: (1) UDA: Unsupervised domain adaptation with self-training, (2) N and F: class-wise source weighing of Near and Far examples respectively, (3) LT: labeled target examples are included in training starting at iteration $T_{2}$.

7 Conclusion

In this work, we addressed the importance of using labeled target samples effectively for the domain adaptation task in the semi-supervised setting. The proposed Pred&Guide framework initially performs the domain adaptation in an unsupervised manner to avoid any bias that may be introduced due to the few labeled target examples. After this stage, we introduce two effective modules, labeled target prediction and source example weighting to effectively weigh the source examples to better guide the adaptation. With Pred&Guide, we set a new state-of-the-art for semi-supervised domain adaptation as illustrated by extensive experiments on two large-scale benchmark datasets.

References

  • [1] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” in JMLR, 2016.
  • [2] Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in ICML, 2015.
  • [3] W. Li and S. Chen, “Unsupervised domain adaptation with progressive adaptation of subspaces,” in arXiv, 2020.
  • [4] B. Gong, Y. Shi, F. Sha, and K. Grauman, “Geodesic flow kernel for unsupervised domain adaptation,” in CVPR, 2012.
  • [5] K. Saito, D. Kim, S. Sclaroff, T. Darrell, and K. Saenko, “Semi-supervised domain adaptation via minimax entropy,” in ICCV, 2019.
  • [6] K. Saito, Y. Ushiku, T. Harada, and K. Saenko, “Adversarial dropout regularization,” in ICLR, 2018.
  • [7] J. Donahue, J. Hoffman, E. Rodner, K. Saenko, and T. Darrell, “Semi-supervised domain adaptation with instance constraints,” in CVPR, 2013.
  • [8] T. Kim and C. Kim, “Attract, perturb, and explore: Learning a feature alignment network for semi-supervised domain adaptation,” in ECCV, 2020.
  • [9] P. Jiang, A. Wu, Y. Han, Y. Shao, M. Qi, and B. Li, “Bidirectional adversarial training for semi-supervised domain adaptation,” in IJCAI, 2020.
  • [10] T. Yao, Y. Pan, C. Ngo, H. Li, and T. Mei, “Semi-supervised domain adaptation with subspace learning for visual recognition,” in CVPR, 2015.
  • [11] M. Long, H. Zhu, J. Wang, and M. I. Jordan, “Unsupervised domain adaptation with residual transfer networks,” in NIPS, 2016.
  • [12] K. Sohn, D. Berthelot, C.-L. Li, Z. Zhang, N. Carlini, E. D. Cubuk, A. Kurakin, H. Zhang, and C. Raffel, “Fixmatch: Simplifying semi-supervised learning with consistency and confidence,” in NIPS, 2020.
  • [13] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. Raffel, “Mixmatch: A holistic approach to semi-supervised learning,” in NIPS, 2019.
  • [14] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan, “Deep hashing network for unsupervised domain adaptation,” in CVPR, 2017.
  • [15] X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang, “Moment matching for multi-source domain adaptation,” in ICCV, 2019.
  • [16] K. Saito, K. Watanabe, Y. Ushiku, and T. Harada, “Maximum classifier discrepancy for unsupervised domain adaptation,” in CVPR, 2018.
  • [17] B. Gong, K. Grauman, and F. Sha, “Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation,” in ICML, 2013.
  • [18] D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu, “Equivalence of distance-based and rkhs-based statistics in hypothesis testing,” The Annals of Statistics, 2013.
  • [19] B. Sun, J. Feng, and K. Saenko, “Correlation alignment for unsupervised domain adaptation,” in Domain Adaptation in Computer Vision Applications, 2016.
  • [20] H. Nam, H. Lee, J. Park, W. Yoon, and D. Yoo, “Reducing domain gap via style-agnostic networks,” in arXiv, 2020.
  • [21] J. Li, G. Li, Y. Shi, and Y. Yu, “Cross-domain adaptive clustering for semi-supervised domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2505–2514.
  • [22] A. Singh, “Clda: Contrastive learning for semi-supervised domain adaptation,” Advances in Neural Information Processing Systems, vol. 34, pp. 5089–5101, 2021.
  • [23] K. Li, C. Liu, H. Zhao, Y. Zhang, and Y. Fu, “Ecacl: A holistic framework for semi-supervised domain adaptation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 8578–8587.
  • [24] J. Liang, D. Hu, and J. Feng, “Domain adaptation with auxiliary target domain-oriented classifier,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 16 632–16 642.
  • [25] E. D. Cubuk, B. Zoph, J. Shlens, and Q. V. Le, “Randaugment: Practical automated data augmentation with a reduced search space,” in NIPS, 2020.
  • [26] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le, “Autoaugment: Learning augmentation policies from data,” in CVPR, 2019.
  • [27] F. H. K. dos Santos Tanaka and C. Aranha, “Data augmentation using GANs,” in arXiv, 2019.
  • [28] D. Berthelot, N. Carlini, E. D. Cubuk, A. Kurakin, K. Sohn, H. Zhang, and C. Raffel, “Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring,” in ICLR, 2020.
  • [29] Y. Chen, C. Wei, A. Kumar, and T. Ma, “Self-training avoids using spurious features under domain shift,” in NIPS, 2020.
  • [30] Y. Grandvalet and Y. Bengio, “Semi-supervised learning by entropy minimization,” in NIPS, 2004.
  • [31] D.-H. Lee, “Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks,” in ICML Workshop on Challenges in Representation Learning (WREPL), 2013.
  • [32] T. Miyato, S.-i. Maeda, M. Koyama, and S. Ishii, “Virtual adversarial training: A regularization method for supervised and semi-supervised learning,” IEEE Transactions on PAMI, 2018.
  • [33] A. Tarvainen and H. Valpola, “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,” in NIPS, 2017.
  • [34] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “mixup: Beyond empirical risk minimization,” in ICLR, 2018.
  • [35] L. van der Maaten and G. Hinton, “Visualizing data using t-sne,” JMLR, vol. 9, 2008. [Online]. Available: http://jmlr.org/papers/v9/vandermaaten08a.html
  • [36] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in NIPS, 2019.
  • [37] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, 2016.
  • [38] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012.
  • [39] R. Ranjan, C. D. Castillo, and R. Chellappa, “L2-constrained softmax loss for discriminative face verification,” in arXiv, 2017.
  • [40] M. Long, Z. Cao, J. Wang, and M. I. Jordan, “Conditional adversarial domain adaptation,” in NIPS, 2018.
  • [41] J. Li, G. Li, Y. Shi, and Y. Yu, “Cross-domain adaptive clustering for semi-supervised domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2505–2514.
  • [42] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980–2988.
Megh Bhalerao received his Bachelor of Technology degree in Electrical Engineering from the National Institute of Technology Karnataka, India in 2020. He is currently an MS EE student at the University of Washington, Seattle.
Anurag Singh is currently an MS student in Computer Science at TU Munich. He completed his undergraduate degree in CS from NSIT, University of Delhi in 2018.
Soma Biswas received the PhD degree in Electrical Engineering from the University of Maryland, College Park in 2009. She is currently an Associate Professor in the Electrical Engineering Department, Indian Institute of Science, Bangalore. Her research interests are in Computer Vision, Pattern Recognition, Machine Learning and related areas. She is a Senior Member of IEEE.

8 Supplementary Material

8.1 Modularity of our Source Example Weighing Framework

We can seamlessly integrate our source example weighing scheme with any given SSDA framework. In Table V, we show that the proposed source example weighing framework can improve the performance of a recently proposed SSDA method, Cross-Domain Adaptive Clustering (CDAC) [41].

Net Method R to C R to P P to C C to S S to P R to S P to R Mean
1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot 1 shot 3 shot
ResNet34 CDAC 77.4 79.6 74.2 75.1 75.5 79.3 67.6 69.9 71.0 73.4 69.2 72.5 80.4 81.9 73.6 76.0
CDAC + SEW 78.1 80.1 75.1 75.5 75.7 78.7 68.7 70.7 71.8 73.7 70.8 72.8 80.9 82.6 74.4 76.2
TABLE V: Comparison of CDAC performance (%) with CDAC + SEW (source example weighing) on the DomainNet dataset for both 1 and 3-shot protocols on 4 domains, R: Real, C: Clipart, P: Painting, S: Sketch.

8.2 Comparison of SEW with standard focal loss

In Table VI, we compare our proposed source example weighing scheme with the standard focal loss, which assigns weights to examples based on their hardness. In contrast, our SEW scheme assigns example weights based on domain similarity.

Weighting-Strategy R to S R to C
No Weights 70.8 76.3
Fixed Weights 68.6 73.2
Focal Loss 69.6 74.8
Proposed Class-wise Adaptability Based Weights 72.5 79.2
TABLE VI: Comparison of the proposed weighting scheme and focal loss in the 3-shot setting on the DomainNet dataset with ResNet-34 as the backbone.