
1 New Laboratory of Pattern Recognition, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China.
2 University of Chinese Academy of Sciences, Beijing, China.
3 Centre for Artificial Intelligence and Robotics, Hong Kong Institute of Science & Innovation, Chinese Academy of Sciences, Hong Kong, China.
4 Shanghai Jiao Tong University, Shanghai, China.
5 Chinese University of Hong Kong, Hong Kong, China.
Zhaoxiang Zhang is the corresponding author.
E-mail: {wanghaochen2022, zhaoxiang.zhang}@ia.ac.cn.

Using Unreliable Pseudo-Labels for Label-Efficient Semantic Segmentation

Haochen Wang1,2    Yuchao Wang4    Yujun Shen5    Junsong Fan3    Yuxi Wang3    Zhaoxiang Zhang1,2,3∗
(Received: date / Accepted: date)
Abstract

The crux of label-efficient semantic segmentation is to produce high-quality pseudo-labels so as to leverage a large amount of unlabeled or weakly labeled data. A common practice is to select the highly confident predictions as the pseudo-ground-truths for each pixel, but this leaves most pixels unused because of their unreliability. We argue instead that every pixel matters to model training, even the unreliable and ambiguous ones. Intuitively, an unreliable prediction may get confused among its top classes, yet it is still confident that the pixel does not belong to the remaining classes. Hence, such a pixel can be convincingly treated as a negative key for those most unlikely categories. Therefore, we develop an effective pipeline to make sufficient use of unlabeled data. Concretely, we separate reliable and unreliable pixels via the entropy of predictions, push each unreliable pixel into a category-wise queue of negative keys, and thereby train the model with all candidate pixels. Considering the training evolution, we adaptively adjust the threshold for the reliable-unreliable partition. Experimental results on various benchmarks and training settings demonstrate the superiority of our approach over the state-of-the-art alternatives.

Keywords:
Semi-Supervised Learning · Domain Adaptation · Weakly Supervised Learning · Semantic Segmentation

1 Introduction

Figure 1: Category-wise performance and statistics on the number of pixels with reliable and unreliable predictions. The model is trained using 732 labeled images on PASCAL VOC 2012 (Everingham et al., 2010) and evaluated on the remaining 9,850 images.

Semantic segmentation is a fundamental task in the computer vision field and has been significantly advanced along with the rise of deep neural networks (Long et al., 2015a; Ronneberger et al., 2015; Zhao et al., 2017; Chen et al., 2017a). However, existing supervised approaches rely on large-scale annotated data, which can be too costly to acquire in practice. For instance, it takes around 90 minutes to annotate just a single image of Cityscapes (Cordts et al., 2016), and this number increases to 200 minutes under adverse conditions (Sakaridis et al., 2021). To alleviate this problem, many attempts have been made towards label-efficient semantic segmentation (Ru et al., 2022; Wang et al., 2022b; Hoyer et al., 2022). Under this setting, self-training, i.e., assigning pixel-level pseudo-labels to each weakly-labeled sample, becomes a typical solution. Specifically, given a weakly-labeled image, prior arts (Lee et al., 2013; Xie et al., 2020) borrow predictions from the model trained on labeled data or leverage Class Activation Maps (CAMs) (Zhou et al., 2016) to obtain pixel-wise predictions, and then use them as the "ground-truth" to, in turn, boost the model. Along this line, many attempts have been made to produce high-quality pseudo-labels. A typical solution is to filter the predictions using their confidence scores (Yang et al., 2022; Zou et al., 2020; Zuo et al., 2021; Xu et al., 2021b; Hoyer et al., 2022; Ru et al., 2022). In this way, only the highly confident predictions serve as pseudo-labels, while the ambiguous ones are simply discarded.

One potential problem caused by this paradigm is that some pixels may never be learned during the entire training process. For example, if the model cannot satisfactorily predict a certain class, it becomes difficult to assign accurate pseudo-labels to the pixels of that class, which may lead to insufficient and categorically imbalanced training. For instance, as illustrated in Fig. 1, the underperforming category chair tends to have fewer reliable predictions, and the model will probably be biased toward dominant classes, e.g., background and cat, when we keep only the reliable predictions as ground-truths for supervision and simply discard the ambiguous ones. This issue becomes even more severe under the domain adaptive and weakly supervised settings, where predictions on unlabeled or weakly-labeled images are usually highly noisy, and thus only a few pixels can be regarded as reliable when selecting highly confident pseudo-labels. From this perspective, we argue that to make full use of the unlabeled data, every pixel should be properly utilized.

However, how to use these unreliable pixels appropriately is non-trivial. Directly using unreliable predictions as pseudo-labels causes performance degradation (Arazo et al., 2020) because it is almost impossible for unreliable predictions to yield exactly correct pseudo-labels. Therefore, in this paper, we propose an alternative way of Using Unreliable Pseudo-Labels (U2PL).

First, we observe that an unreliable prediction usually gets confused among only a few classes instead of all classes. Taking Fig. 2 as an instance, the pixel with a white cross is an unreliable prediction that receives similar probabilities for class motorbike and class person, but the model is quite sure that this pixel does not belong to class car or class train. Based on this observation, we reconsider those unreliable pixels as negative keys for their unlikely categories, which is a simple and intuitive way to make full use of all predictions. Specifically, after getting the predictions of an unlabeled image, we first leverage the per-pixel entropy as the metric (see Fig. 2a) to separate all pixels into two groups, i.e., reliable ones and unreliable ones. All reliable predictions are then used to derive positive pseudo-labels, while the pixels with unreliable predictions are pushed into a memory bank full of negative keys. To avoid all negative pseudo-labels coming from only a subset of categories, we employ a queue for each category. Such a design ensures that the number of negative keys for each class is balanced, preventing the bank from being overwhelmed by dominant categories. Meanwhile, considering that the quality of pseudo-labels becomes higher as the model gets more accurate, we come up with a strategy to adaptively adjust the threshold for the partition of reliable and unreliable pixels.

Extensions of the conference version (Wang et al., 2022b, ).

To better demonstrate the efficacy of using unreliable pseudo-labels, instead of studying only the semi-supervised setting, we extend our original conference publication (Wang et al., 2022b) to the domain adaptive and weakly-supervised settings, showing that using unreliable pseudo-labels is crucial and effective across various label-efficient settings and consistently brings significant improvements. Moreover, to produce high-quality pseudo-labels, we further make the category-wise prototypes momentum updated during training to build a consistent set of keys and propose a denoising technique to enhance the quality of pseudo-labels. Additionally, a symmetric cross-entropy loss (Wang et al., 2019) is used, considering that the pseudo-labels are still noisy even after filtering and denoising. We call the extended framework U2PL+.

In the following, we briefly discuss how these three settings differ. Semi-supervised (SS) approaches aim to train a segmentation model with only a few pixel-level ground-truths (Yang et al., 2022; Chen et al., 2021c; Chen et al., 2021a; Alonso et al., 2021; French et al., 2020; Ouali et al., 2020; Wang et al., 2022b) together with numerous unlabeled images. Domain adaptive (DA) alternatives (Hoyer et al., 2022; Zhang et al., 2021b; Li et al., 2022b; Hoffman et al., 2018; Wang et al., 2020a; Wang et al., 2023b) introduce synthetic (source) datasets (Richter et al., 2016; Ros et al., 2016) into training and try to generalize the segmentation model to real-world (target) domains (Cordts et al., 2016; Sakaridis et al., 2021) without access to target labels; this setting is practical because, in most cases, dense labels for synthetic datasets can be obtained with minor effort. Weakly-supervised (WS) methods (Ru et al., 2022; Fan et al., 2020c; Fan et al., 2020b; Fan et al., 2020a; Li et al., 2022a; Zhang et al., 2020a) leverage weak supervision signals that are easier to obtain, such as image-level labels (Papandreou et al., 2015), bounding boxes (Dai et al., 2015), points (Fan et al., 2022), and scribbles (Lin et al., 2016), instead of pixel-level dense annotations to train a segmentation model.

Figure 2: Illustration on unreliable pseudo-labels. (a) Pixel-wise entropy predicted from an unlabeled image. (b) Pixel-wise pseudo-labels from reliable predictions only, where pixels within the white region are not assigned a pseudo-label. (c) Category-wise probability of a reliable prediction (i.e., the yellow cross). (d) Category-wise probability of an unreliable prediction (i.e., the white cross), which hovers between motorbike and person, yet is confident enough of not belonging to car and train.

We evaluate the proposed U2PL+ on all three settings, i.e., (1) SS, (2) DA, and (3) WS semantic segmentation, where U2PL+ consistently brings significant improvements over the baselines. In SS, we evaluate U2PL+ on PASCAL VOC 2012 (Everingham et al., 2010) and Cityscapes (Cordts et al., 2016) under a wide range of training settings. In DA, we evaluate U2PL+ on two widely adopted benchmarks, i.e., GTA5 (Richter et al., 2016) $\to$ Cityscapes (Cordts et al., 2016) and SYNTHIA (Ros et al., 2016) $\to$ Cityscapes (Cordts et al., 2016). In WS, we evaluate U2PL+ on the PASCAL VOC 2012 (Everingham et al., 2010) benchmark using only image-level supervision. Furthermore, by visualizing the segmentation results, we find that our method achieves much better performance on ambiguous regions (e.g., the borders between different objects), thanks to our adequate use of unreliable pseudo-labels. Our contributions are summarized as follows:

  1. Based on the observation that unreliable predictions usually get confused among only a few classes instead of all classes, we build an intuitive framework, U2PL, that mines the inherent information of discarded unreliable keys.

  2. We extend the original version of U2PL to (1) domain adaptive and (2) weakly supervised semantic segmentation settings, demonstrating that using unreliable pseudo-labels is crucial in both settings.

  3. To produce high-quality pseudo-labels, we further incorporate three carefully designed techniques, i.e., momentum prototype updating, prototypical denoising, and a symmetric cross-entropy loss.

  4. U2PL+ outperforms previous methods across extensive settings on SS, DA, and WS benchmarks.

2 Related Work

Semantic segmentation aims to assign each pixel a pre-defined class label, and deep convolutional neural networks (CNNs) have brought tremendous success to this task (Ronneberger et al., 2015; Long et al., 2015a; Yu and Koltun, 2015; Chen et al., 2017a; Chen et al., 2017b; Zhao et al., 2017; Chen et al., 2018; Badrinarayanan et al., 2017). Recently, Vision Transformers (ViTs) (Dosovitskiy et al., 2021) have provided a new feature extractor for images, and researchers have successfully demonstrated the feasibility of using ViTs in semantic segmentation (Zheng et al., 2021; Xie et al., 2021; Cheng et al., 2021; Strudel et al., 2021; Cheng et al., 2022; Xu et al., 2022). However, despite their success, these deep models usually thrive on dense per-pixel annotations, which are extremely expensive and laborious to obtain (Cordts et al., 2016; Sakaridis et al., 2021).

Semi-supervised semantic segmentation

methods aim to train a segmentation model with only a few labeled images and a large number of unlabeled images. There are two typical paradigms for semi-supervised learning: consistency regularization (Bachman et al., , 2014; Ouali et al., , 2020; French et al., , 2020; Sajjadi et al., , 2016; Xu et al., 2021b, ) and entropy minimization (Grandvalet and Bengio, , 2004; Chen et al., 2021a, ). Recently, a variant framework of entropy minimization, i.e., self-training (Lee et al., , 2013), has become the mainstream thanks to its simplicity and efficacy. On the basis of self-training, several methods (French et al., , 2020; Yuan et al., , 2021; Yang et al., , 2022; Wang et al., 2023e, ; Du et al., 2022b, ) further leverage strong data augmentation techniques such as (DeVries and Taylor, , 2017; Yun et al., , 2019; Olsson et al., , 2021), to produce meaningful supervision signals. However, in the typical weak-to-strong self-training paradigm (Sohn et al., , 2020), unreliable pixels are usually simply discarded. U2{}^{\text{2}}PL+, on the contrary, fully utilizes those discarded unreliable pseudo-labels, contributing to boosted segmentation results.

Domain adaptive semantic segmentation

focuses on training a model on a labeled source (synthetic) domain and generalizing it to an unlabeled target (real-world) domain. This is a more complicated task compared with semi-supervised semantic segmentation due to the domain shift between source and target domains. To overcome the domain gap, most previous methods optimize some custom distance (Long et al., 2015b, ; Lee et al., 2019a, ; Wang et al., 2023b, ) or apply adversarial training (Goodfellow et al., , 2014; Nowozin et al., , 2016), in order to align distributions at the image level (Hoffman et al., , 2018; Murez et al., , 2018; Sankaranarayanan et al., , 2018; Li et al., , 2019; Gong et al., , 2019; Choi et al., , 2019; Wu et al., , 2019; Abramov et al., , 2020; Zhang et al., 2020b, ), intermediate feature level (Hoffman et al., , 2016; Hong et al., , 2018; Hoffman et al., , 2018; Saito et al., , 2018; Chang et al., , 2019; Chen et al., , 2019; Wan et al., , 2020; Li et al., 2021a, ; Wang et al., 2023b, ), or output level (Tsai et al., , 2018; Luo et al., , 2019; Melas-Kyriazi and Manrai, , 2021). Few studies pay attention to unreliable pseudo-labels under this setting. To the best of our knowledge, we are the first to recycle those unreliable predictions when there exists a distribution shift between different domains.

Weakly supervised semantic segmentation

seeks to train semantic segmentation models using only weak annotations, and can be mainly categorized into image-level labels (Du et al., 2022a, ; Ru et al., , 2022, 2023; Fan et al., 2020c, ; Fan et al., 2020a, ; Ahn and Kwak, , 2018; Lee et al., 2021c, ; Wu et al., , 2021; Li et al., 2021b, ; Lee et al., 2019b, ; Lee et al., 2021b, ), points (Fan et al., , 2022), scribbles (Lin et al., , 2016), and bounding boxes (Dai et al., , 2015). This paper mainly discusses the image-level supervision setting, which is the most challenging among all weakly supervised scenarios. Most methods (Wei et al., , 2017; Zhang et al., 2021a, ; Sun et al., , 2021; Jiang et al., , 2019; Kim et al., , 2021; Yao et al., , 2021) are designed with a multi-stage process, where a classification network is trained to produce the initial pseudo-masks at pixel level using CAMs (Zhou et al., , 2016). This paper focuses on end-to-end frameworks (Pinheiro and Collobert, , 2015; Papandreou et al., , 2015; Roy and Todorovic, , 2017; Zhang et al., 2020a, ; Araslanov and Roth, , 2020) in weakly supervised semantic segmentation with the goal of making full use of pixel-level predictions, i.e., CAMs.

Contrastive learning

is widely used by many successful works in unsupervised visual representation learning (Chen et al., , 2020; Chen et al., 2021b, ; Wang et al., 2023c, ; Wang et al., 2023a, ). In semantic segmentation, contrastive learning has become a promising new paradigm (Liu et al., , 2021; Wang et al., , 2021; Zhao et al., , 2021). The following methods try to go deeper by adopting the contrastive learning framework for semi-supervised semantic segmentation tasks. (Zhong et al., , 2021) minimizes the mean square error between two positive samples and introduces several strategies to sample negative pixels. (Alonso et al., , 2021) utilizes a class-wise memory bank to store representative negative pixels for each class. However, these methods ignore the common false negative samples in semi-supervised segmentation, where unreliable pixels may be wrongly pushed away in a contrastive loss. Based on the observation that unreliable predictions usually get confused among only a few categories, U2{}^{\text{2}}PL+ alleviates this problem by discriminating the unlikely categories of unreliable pixels. In the field of domain adaptive semantic segmentation, only a few methods apply contrastive learning. (Kang et al., , 2020) adopts pixel-level cycle association in conducting positive pairs. (Zhou et al., , 2021) apply regional contrastive consistency regularization. (Wang et al., 2023b, ) introduces an image translation engine to ensure cross-domain positive pairs are matched precisely. The underlying motivation of these methods is to build a category-discriminative target representation space. However, we focus on how to make full use of unreliable pixels, which is quite different from existing contrastive learning-based DA alternatives. Contrastive learning is also studied in weakly supervised semantic segmentation. For instance, (Du et al., 2022a, ) proposes pixel-to-prototype contrast to improve the quality of CAMs by pulling pixels close to their positive prototypes. (Ru et al., , 2023) extend this idea by incorporating the self-attention map of ViTs. On the contrary, the goal of using contrastive learning in U2{}^{\text{2}}PL+ is not to improve the quality of CAMs. The contrast of U2{}^{\text{2}}PL+ is conducted after CAM values are obtained with the goal of fully using unreliable predictions.

Segment Anything Model

(SAM) (Kirillov et al., 2023) shows strong generalization capabilities by training on over 1 billion object masks. However, it is important to note that SAM is not designed for semantic segmentation; instead, it produces binary masks within an image. Given the impracticality of manually classifying each object mask of the SA-1B (Kirillov et al., 2023) dataset into specific categories, label-efficient semantic segmentation is still worth studying. Moreover, only ≈1% of the masks in SA-1B are manually annotated. The authors leveraged a segmentation model to assist in data collection and enhance mask diversity, and they even included a fully automated annotation phase, where the model generates masks without any human input, accounting for 99.1% of the masks. Based on this, we hypothesize that an appropriate self-training pipeline might contribute to a more powerful segmentation model. It is also worth noting that SAM is not universally effective across all domains; for instance, it fails in medical segmentation and camouflaged object segmentation (Ji et al., 2023). To adapt SAM to specific domains, additional domain-specific annotations are typically required. Specifically, Ma et al. (2024) combined a large number of publicly available medical segmentation datasets and built a dataset with over 1.5M image-mask pairs, which is much less diverse than SA-1B due to the absence of unlabeled images. We believe that developing an efficient pipeline for adapting SAM to specific domains, leveraging unlabeled data with minimal annotation costs, is also a worthwhile direction for future research.

3 Method

Figure 3: Illustration of U2PL+. Segmentation predictions are first split into reliable and unreliable ones based on their pixel-level entropy. The reliable predictions are used as pseudo-labels and to compute category-wise prototypes. Each unreliable prediction is pushed into a category-wise memory bank and regarded as a negative key for its unlikely classes. Pixels in each memory bank serve as negative samples for the corresponding class, as formulated in Eq. (10).

In this section, we first introduce the background knowledge of label-efficient semantic segmentation in Sec. 3.1. Next, U2PL+ is elaborated in Sec. 3.2. In Sec. 3.3 and Sec. 3.4, we introduce how to filter high-quality pseudo-labels and the denoising technique, respectively. Then, in Sec. 3.5, we specify how U2PL+ is used in three label-efficient learning tasks, i.e., the semi-supervised, domain adaptive, and weakly supervised settings.

3.1 Preliminaries

Label-efficient semantic segmentation. The common goal of label-efficient semantic segmentation methods is to leverage an incomplete set of labels $\mathcal{Y}=\{\mathbf{y}_{i}\}_{i=1}^{N_{l}}$ to train a model that segments well on the whole set of images $\mathcal{X}=\{\mathbf{x}_{i}\}_{i=1}^{N}$, where $N$ indicates the total number of images and $N_{l}$ is the number of labeled samples. $\mathbf{x}_{i}\in\mathbb{R}^{H\times W\times 3}$, where $H$ and $W$ are the input resolutions. The first $N_{l}$ image samples $\mathcal{X}_{l}=\{\mathbf{x}^{l}_{i}\}_{i=1}^{N_{l}}$ are matched with unique labels and constitute the labeled set $\mathcal{D}_{l}=\{(\mathbf{x}_{i}^{l},\mathbf{y}_{i}^{l})\}_{i=1}^{N_{l}}$, while the remaining samples constitute the unlabeled set $\mathcal{D}_{u}=\{\mathbf{x}_{i}^{u}\}_{i=1}^{N_{u}}$.

In practice, we usually have $N_{l}\ll N$, and each label $\mathbf{y}_{i}\in\{0,1\}^{H\times W\times C}$ is a one-hot pixel-level annotation, where $C$ is the number of categories. The labeled set is collected from a distribution $\mathcal{P}$ while the unlabeled set is sampled from a distribution $\mathcal{Q}$. A general case is $\mathcal{P}\neq\mathcal{Q}$, and the problem then falls into domain adaptive semantic segmentation. Otherwise, if $\mathcal{P}=\mathcal{Q}$, it is usually treated as a semi-supervised semantic segmentation task.

However, when $\mathcal{P}=\mathcal{Q}$ but the labels are not collected at the pixel level, the problem becomes weakly supervised semantic segmentation. Under this setting, each image $\mathbf{x}_{i}$ is matched with a corresponding weak label $\mathbf{y}_{i}$, such as image-level labels (Papandreou et al., 2015), bounding boxes (Dai et al., 2015), points (Fan et al., 2022), or scribbles (Lin et al., 2016), and thus $N=N_{l}$. This paper studies the image-level supervision setting, i.e., $\mathbf{y}_{i}\in\{0,1\}^{C}$, which is the most challenging scenario. Note that $\mathbf{y}_{i}$ is not a one-hot label since each image usually contains more than one category.

The overall objective of all these settings usually contains a supervised term $\mathcal{L}_{s}$ and an unsupervised term $\mathcal{L}_{u}$:

$\mathcal{L}=\mathcal{L}_{s}+\lambda_{u}\mathcal{L}_{u}$,   (1)

where $\lambda_{u}$ is the weight of the unsupervised loss, which controls the balance between the two terms. Next, we introduce the conventional pipeline of each setting, respectively.

Semi-supervised and domain adaptive semantic segmentation.

The typical paradigm of these two settings is the self-training framework (Tarvainen and Valpola, 2017; Sohn et al., 2020), which consists of two models with the same architecture, named teacher and student, respectively. The two models differ only in how their weights are updated: the student's weights $\theta_{s}$ are updated by standard back-propagation, while the teacher's weights $\theta_{t}$ are updated as an exponential moving average (EMA) of the student's weights:

$\theta_{t}\leftarrow m\theta_{t}+(1-m)\theta_{s}$,   (2)

where $m$ is the momentum coefficient.
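For clarity, a minimal PyTorch sketch of the EMA update in Eq. (2) is given below; the function name and the default momentum value are illustrative rather than taken from our released code.

import torch


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.999):
    """Exponential moving average (EMA) update of the teacher, Eq. (2)."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        # theta_t <- m * theta_t + (1 - m) * theta_s
        t_param.data.mul_(momentum).add_(s_param.data, alpha=1.0 - momentum)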

Each model consists of an encoder $h$, a segmentation head $f$, and a representation head $g$ used only for U2PL+. At each training step, we equally sample $B$ labeled images $\mathcal{B}_{l}$ and $B$ unlabeled images $\mathcal{B}_{u}$, and try to minimize Eq. (1). Mathematically, $\mathcal{L}_{s}$ is the vanilla pixel-level cross-entropy:

$\mathcal{L}_{s}=\frac{1}{|\mathcal{B}_{l}|}\sum_{(\mathbf{x}_{i}^{l},\mathbf{y}_{i}^{l})\in\mathcal{B}_{l}}\ell_{ce}(f\circ h(\mathbf{x}_{i}^{l};\theta),\mathbf{y}_{i}^{l})$,   (3)
$\ell_{ce}(\mathbf{p},\mathbf{y})=-\mathbf{y}^{\top}\log\mathbf{p}$,

where $\mathbf{x}_{i}^{l}$ indicates the $i$-th labeled image and $\mathbf{y}_{i}^{l}$ represents the corresponding one-hot hand-annotated segmentation map. $f\circ h$ is the composition of $h$ and $f$, which means the images are first fed into $h$ and then into $f$ to get segmentation results. For the unsupervised term $\mathcal{L}_{u}$, we adopt the symmetric cross-entropy (SCE) loss (Wang et al., 2019) for a stable training procedure, especially in the early stage. When computing $\mathcal{L}_{u}$, we first feed each unlabeled sample $\mathbf{x}^{u}_{i}$ into the teacher model to get predictions. Then, based on the pixel-level entropy map, we ignore unreliable pseudo-labels. Specifically,

$\mathcal{L}_{u}=\frac{1}{|\mathcal{B}_{u}|}\sum_{\mathbf{x}_{i}^{u}\in\mathcal{B}_{u}}\ell_{sce}(f\circ h(\mathbf{x}_{i}^{u};\theta),\tilde{\mathbf{y}}_{i}^{u})$,   (4)
$\ell_{sce}(\mathbf{p},\mathbf{y})=\xi_{1}\ell_{ce}(\mathbf{p},\mathbf{y})+\xi_{2}\ell_{ce}(\mathbf{y},\mathbf{p})$,

where $\tilde{\mathbf{y}}_{i}^{u}$ is the one-hot pseudo-label for the $i$-th unlabeled image. We set $\xi_{1}$ to $1$ and $\xi_{2}$ to $0.5$ following (Wang et al., 2019) and (Zhang et al., 2021b).
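A possible PyTorch realization of the symmetric cross-entropy in Eq. (4) is sketched below; it assumes the pseudo-labels have already been converted to one-hot maps and unreliable pixels have been filtered out beforehand, and the clamping constant is an implementation detail we assume for numerical stability.

import torch
import torch.nn.functional as F


def symmetric_cross_entropy(logits, target_onehot, xi1=1.0, xi2=0.5, eps=1e-4):
    """Symmetric cross-entropy of Eq. (4), averaged over the (reliable) pixels.

    logits:        (B, C, H, W) raw predictions of the student.
    target_onehot: (B, C, H, W) one-hot pseudo-labels from the teacher.
    """
    prob = F.softmax(logits, dim=1).clamp(min=eps)
    target = target_onehot.clamp(min=eps)              # avoid log(0) in the reverse term
    ce = -(target_onehot * prob.log()).sum(dim=1)      # l_ce(p, y)
    rce = -(prob * target.log()).sum(dim=1)            # l_ce(y, p)
    return (xi1 * ce + xi2 * rce).mean()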

By minimizing $\mathcal{L}_{s}$ and $\mathcal{L}_{u}$ simultaneously, the model is able to leverage both the small labeled set $\mathcal{D}_{l}$ and the larger unlabeled set $\mathcal{D}_{u}$.

Weakly supervised semantic segmentation using image-level labels.

Weakly supervised semantic segmentation methods aim to leverage CAMs (Zhou et al., , 2016) to produce pixel-level pseudo-masks first and then train a segmentation module synchronously (Ru et al., , 2022; Araslanov and Roth, , 2020), i.e., end-to-end, or asynchronously (Du et al., 2022a, ; Lee et al., 2021c, ), i.e., multi-stage. We begin with a brief review of how to generate CAMs.

Given an image classifier, e.g., ResNet (He et al., 2016) or ViT (Dosovitskiy et al., 2021), we denote the last feature maps as $\mathbf{F}\in\mathbb{R}^{d\times hw}$, where $hw$ is the spatial size and $d$ indicates the channel dimension. The activation map $\mathbf{M}^{c}$ for class $c$ is generated by weighting the feature maps $\mathbf{F}$ with their contribution to class $c$:

$\mathbf{M}^{c}=\texttt{ReLU}\left(\sum_{i=1}^{d}\mathbf{w}_{c,i}\mathbf{F}_{i,:}\right)$,   (5)

where $\mathbf{w}\in\mathbb{R}^{C\times d}$ denotes the parameters of the last fully connected layer. Then, min-max normalization is applied to re-scale $\mathbf{M}^{c}$ to $[0,1]$. A threshold $\beta\in(0,1)$ is used to discriminate foreground regions from the background, i.e., when the value of $\mathbf{M}^{c}$ at a particular pixel is larger than $\beta$, the pixel is regarded as a reliable prediction and used as a pseudo-label.

At each training step, we randomly sample $B$ images and their corresponding image-level labels, resulting in a batch $\mathcal{B}=\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i=1}^{B}$. Since we only have image-level labels in this case, an additional MLP $f_{\mathrm{cls}}$ is introduced to perform image-level classification, and the supervised loss is thus the multi-label soft margin loss:

$\mathcal{L}_{s}=\frac{1}{|\mathcal{B}|}\sum_{(\mathbf{x}_{i},\mathbf{y}_{i})\in\mathcal{B}}\ell_{ce}(f_{\mathrm{cls}}\circ h(\mathbf{x}_{i};\theta),\mathbf{y}_{i})$.   (6)

As for the unsupervised term $\mathcal{L}_{u}$, it is the vanilla cross-entropy loss that leverages $\mathbf{M}_{i}$ as the supervision signal:

$\mathcal{L}_{u}=\frac{1}{|\mathcal{B}|}\sum_{(\mathbf{x}_{i},\mathbf{y}_{i})\in\mathcal{B}}\ell_{ce}(f\circ h(\mathbf{x}_{i};\theta),\tilde{\mathbf{y}}_{i})$,   (7)

where the pixel-level pseudo-label $\tilde{\mathbf{y}}_{i}$ is generated by a simple threshold strategy following (Ru et al., 2022):

$\tilde{y}_{ic}=\mathbbm{1}[\mathbf{M}_{i}^{c}>\beta]$,   (8)

where $\mathbbm{1}[\cdot]$ is the indicator function.
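For illustration only, the following PyTorch sketch computes the CAMs of Eq. (5), normalizes them, and thresholds them in the spirit of Eq. (8); the function name, the ignore index 255, and the default $\beta$ value are our own assumptions rather than the exact released implementation, and for brevity each pixel receives a single arg-max label instead of a per-class indicator.

import torch
import torch.nn.functional as F


def cams_to_pseudo_labels(features, fc_weight, image_labels, beta=0.45, ignore_index=255):
    """Compute CAMs (Eq. (5)), normalize them, and threshold them (Eq. (8)).

    features:     (B, d, h, w) last feature maps of the classifier.
    fc_weight:    (C, d) weights of the last fully connected layer.
    image_labels: (B, C) binary image-level labels.
    """
    B = features.size(0)
    cam = F.relu(torch.einsum('cd,bdhw->bchw', fc_weight, features))     # Eq. (5)
    # min-max normalize each activation map to [0, 1]
    cam_min = cam.flatten(2).min(dim=2).values.view(B, -1, 1, 1)
    cam_max = cam.flatten(2).max(dim=2).values.view(B, -1, 1, 1)
    cam = (cam - cam_min) / (cam_max - cam_min + 1e-6)
    # suppress categories that are absent from the image-level label
    cam = cam * image_labels.view(B, -1, 1, 1)
    # Eq. (8): keep the arg-max class where the CAM value exceeds beta, ignore otherwise
    score, label = cam.max(dim=1)
    label[score <= beta] = ignore_index
    return cam, label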

3.2 Elaboration of U2PL+

In label-efficient learning, discarding unreliable pseudo-labels is a widely used strategy to prevent performance degradation (Zou et al., 2020; Yang et al., 2022; Sohn et al., 2020; Xie et al., 2020). However, such contempt for unreliable pseudo-labels may result in information loss, since unreliable pseudo-labels can still provide information for better discrimination. For example, the white cross in Fig. 2 marks a typical unreliable pixel: its distribution shows the model's uncertainty in distinguishing between class person and class motorbike, yet it also shows the model's certainty that the pixel does not belong to class car, class train, class bicycle, and so on. This characteristic provides the main insight for using unreliable pseudo-labels in label-efficient semantic segmentation.

Mathematically, U2PL+ aims to recycle unreliable predictions into training by adding an extra term $\mathcal{L}_{c}$ to Eq. (1). The overall objective therefore becomes

$\mathcal{L}_{\mathrm{U^{2}PL+}}=\mathcal{L}_{s}+\lambda_{u}\mathcal{L}_{u}+\lambda_{c}\mathcal{L}_{c}$,   (9)

where $\lambda_{c}$ is an extra weight to balance its contribution. Specifically, $\mathcal{L}_{c}$ is the pixel-level InfoNCE (Oord et al., 2018) loss:

$\mathcal{L}_{c}=-\frac{1}{C\times M}\sum_{c=0}^{C-1}\sum_{i=1}^{M}\log\left[\frac{e^{\langle\mathbf{z}_{ci},\mathbf{z}_{ci}^{+}\rangle/\tau}}{e^{\langle\mathbf{z}_{ci},\mathbf{z}_{ci}^{+}\rangle/\tau}+\sum_{j=1}^{N}e^{\langle\mathbf{z}_{ci},\mathbf{z}_{cij}^{-}\rangle/\tau}}\right]$,   (10)

where $M$ is the total number of anchor pixels, and $\mathbf{z}_{ci}$ denotes the representation of the $i$-th anchor of class $c$. Each anchor pixel is paired with one positive key and $N$ negative keys, whose representations are $\mathbf{z}_{ci}^{+}$ and $\{\mathbf{z}_{cij}^{-}\}_{j=1}^{N}$, respectively. $\mathbf{z}=g\circ h(\mathbf{x})$ is the output of the representation head. $\langle\cdot,\cdot\rangle$ is the cosine similarity between features from two different pixels, whose range is limited between $-1$ and $1$, hence the need for the temperature $\tau$. We set $M=256$, $N=50$, and $\tau=0.5$ in practice. How to select (a) anchor pixels (queries) $\mathbf{z}_{ci}$, (b) the positive key $\mathbf{z}_{ci}^{+}$ for each anchor, and (c) negative keys $\mathbf{z}_{cij}^{-}$ for each anchor differs among the label-efficient settings.
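For concreteness, a minimal PyTorch sketch of the per-class InfoNCE term in Eq. (10) is provided below; it assumes the anchors, the positive key (prototype), and the sampled negative keys are already gathered, and the full loss of Eq. (10) would average this term over all classes.

import torch
import torch.nn.functional as F


def info_nce_per_class(anchors, positive, negatives, temperature=0.5):
    """Inner term of Eq. (10) for a single class.

    anchors:   (M, D) anchor (query) representations of this class.
    positive:  (1, D) or (M, D) positive key, i.e., the class prototype.
    negatives: (M, N, D) negative keys sampled from the class memory bank.
    """
    anchors = F.normalize(anchors, dim=-1)
    positive = F.normalize(positive, dim=-1).expand_as(anchors)
    negatives = F.normalize(negatives, dim=-1)

    pos_logit = (anchors * positive).sum(-1, keepdim=True) / temperature       # (M, 1)
    neg_logit = torch.einsum('md,mnd->mn', anchors, negatives) / temperature   # (M, N)

    logits = torch.cat([pos_logit, neg_logit], dim=1)                          # (M, 1 + N)
    labels = torch.zeros(anchors.size(0), dtype=torch.long, device=anchors.device)
    # cross-entropy with label 0 equals -log(exp(pos) / (exp(pos) + sum exp(neg)))
    return F.cross_entropy(logits, labels)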

Fig. 3 illustrates how U2PL+ constructs contrastive pairs. Concretely, predictions are first split into reliable and unreliable ones by leveraging the pixel-level entropy map. Reliable predictions (pixels marked in colors other than white) are first averaged and then used to update the category-wise prototypes, which serve as positive keys for given queries. Unreliable predictions (pixels marked in white) serve as negative keys for their unlikely classes and are pushed into the category-wise memory banks.

Next, we introduce how to apply U2PL+ to label-efficient semantic segmentation. Specifically, we first introduce how we produce pseudo-labels in Sec. 3.3. Next, the denoising technique is described in Sec. 3.4. Finally, how to sample contrastive pairs is explored in Sec. 3.5. Note that pseudo-labeling and denoising are used only in semi-supervised and domain adaptive semantic segmentation.

3.3 Pseudo-Labeling

To avoid overfitting incorrect pseudo-labels, we utilize the entropy of every pixel's probability distribution to filter high-quality pseudo-labels for further supervision. Specifically, we denote by $\mathbf{p}_{ij}\in\mathbb{R}^{C}$ the softmax probabilities generated by the segmentation head of the teacher model for the $i$-th unlabeled image at pixel $j$, where $C$ is the number of classes. Its entropy is computed by:

$\mathcal{H}(\mathbf{p}_{ij})=-\sum_{c=0}^{C-1}p_{ij}(c)\log p_{ij}(c)$,   (11)

where $p_{ij}(c)$ is the value of $\mathbf{p}_{ij}$ at its $c$-th dimension.

Then, we define pixels whose entropy ranks in the top $\alpha_{t}$ proportion as unreliable pseudo-labels at training epoch $t$. Such unreliable pseudo-labels are not qualified for supervision. Therefore, we define the pseudo-label for the $i$-th unlabeled image at pixel $j$ as:

$\hat{y}_{ij}^{u}=\begin{cases}\arg\max_{c}p_{ij}(c),&\mathrm{if}\ \mathcal{H}(\mathbf{p}_{ij})<\gamma_{t},\\ \mathrm{ignore},&\mathrm{otherwise},\end{cases}$   (12)

where $\gamma_{t}$ represents the entropy threshold at the $t$-th training step. We set $\gamma_{t}$ as the quantile corresponding to $\alpha_{t}$, i.e., $\gamma_{t}$ = np.percentile(H.flatten(), 100*(1-$\alpha_{t}$)), where H is the per-pixel entropy map. We adopt the following adjustment strategies in the pseudo-labeling process for better performance.
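The filtering step of Eqs. (11) and (12) can be sketched as follows in PyTorch/NumPy; the function name and the ignore index 255 are our own conventions.

import numpy as np
import torch


def entropy_filter(prob, alpha_t, ignore_index=255):
    """Per-pixel entropy (Eq. (11)) and pseudo-label assignment (Eq. (12)).

    prob:    (B, C, H, W) softmax probabilities from the teacher.
    alpha_t: proportion of pixels treated as unreliable at the current epoch.
    """
    entropy = -(prob * prob.clamp(min=1e-12).log()).sum(dim=1)        # Eq. (11)
    # gamma_t: the (1 - alpha_t) quantile of the entropy map
    gamma_t = np.percentile(entropy.detach().cpu().numpy().flatten(), 100 * (1 - alpha_t))
    pseudo = prob.argmax(dim=1)
    pseudo[entropy >= gamma_t] = ignore_index                         # Eq. (12)
    return pseudo, entropy, gamma_t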

Dynamic Partition Adjustment.

During the training procedure, the pseudo-labels gradually become more reliable. Based on this intuition, we adjust the proportion $\alpha_{t}$ of unreliable pixels with a linear strategy every epoch:

$\alpha_{t}=\alpha_{0}\cdot\left(1-\frac{t}{\mathrm{total\ epochs}}\right)$,   (13)

where $\alpha_{0}$ is the initial proportion, set to $20\%$, and $t$ is the current training epoch.

Adaptive Weight Adjustment.

After obtaining reliable pseudo-labels, we involve them in the unsupervised loss in Eq. (4). The weight $\lambda_{u}$ for this loss is defined as the reciprocal of the percentage of pixels with entropy smaller than the threshold $\gamma_{t}$ in the current mini-batch, multiplied by a base weight $\eta$:

$\lambda_{u}=\eta\cdot\frac{|\mathcal{B}_{u}|\times H\times W}{\sum_{i=1}^{|\mathcal{B}_{u}|}\sum_{j=1}^{H\times W}\mathbbm{1}\left[\hat{y}_{ij}^{u}\neq\mathrm{ignore}\right]}$,   (14)

where $\mathbbm{1}[\cdot]$ is the indicator function and $\eta$ is set to $1$.
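Eqs. (13) and (14) amount to the two small helpers sketched below (names are illustrative):

import torch


def unreliable_proportion(alpha_0, epoch, total_epochs):
    """Dynamic partition adjustment, Eq. (13): linearly decay alpha_t over training."""
    return alpha_0 * (1.0 - epoch / total_epochs)


def unsupervised_weight(pseudo_labels, eta=1.0, ignore_index=255):
    """Adaptive weight adjustment, Eq. (14).

    pseudo_labels: (B, H, W) pseudo-label map where unreliable pixels hold ignore_index.
    """
    total = pseudo_labels.numel()
    reliable = (pseudo_labels != ignore_index).sum().clamp(min=1)
    return eta * total / reliable.float()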

3.4 Pseudo-Label Denoising

It is widely known that self-training-based methods often suffer from confirmation bias (Arazo et al., 2020). Filtering reliable pseudo-labels (Yang et al., 2022; Zou et al., 2020), as introduced in Sec. 3.3, and applying strong data augmentation (Yun et al., 2019; DeVries and Taylor, 2017; Olsson et al., 2021) are two typical ways to address this issue. However, considering the domain shift between the labeled source domain and the unlabeled target domain, the model tends to be over-confident (Zhang et al., 2021b), and it is not enough to simply select pseudo-labels based on their reliability. To this end, we maintain a prototype for each class and use these prototypes to denoise the pseudo-labels.

Let $\mathbf{z}^{\mathrm{proto}}_{c}$ be the prototype of class $c$, i.e., the center of the representation space $\mathcal{P}_{c}$:

$\mathbf{z}^{\mathrm{proto}}_{c}=\frac{1}{|\mathcal{P}_{c}|}\sum_{\mathbf{z}_{c}\in\mathcal{P}_{c}}\mathbf{z}_{c}$,   (15)

where, concretely, $\mathcal{P}_{c}$ contains all labeled pixels belonging to class $c$ and all reliable unlabeled pixels predicted to belong to class $c$:

$\mathcal{P}_{c}=\mathcal{P}_{c}^{l}\cup\mathcal{P}_{c}^{u}$,   (16)

where $\mathcal{P}_{c}^{l}$ and $\mathcal{P}_{c}^{u}$ denote the representation spaces of labeled and unlabeled pixels, respectively:

$\mathcal{P}_{c}^{l}=\left\{\mathbf{z}_{ij}=g\circ h(\mathbf{x}^{l}_{i};\theta_{t})_{j}\mid y_{ij}=c,\ (\mathbf{x}^{l}_{i},\mathbf{y}_{i})\in\mathcal{B}_{l}\right\}$,   (17)
$\mathcal{P}_{c}^{u}=\left\{\mathbf{z}_{ij}=g\circ h(\mathbf{x}^{u}_{i};\theta_{t})_{j}\mid\hat{y}_{ij}=c,\ (\mathbf{x}^{u}_{i},\hat{\mathbf{y}}_{i})\in\mathcal{B}_{u}\right\}$,

where index $i$ denotes the $i$-th labeled (unlabeled) image, and index $j$ denotes the $j$-th pixel of that image.

Momentum Prototype.

For the stability of training, all representations are supposed to be consistent (He et al., 2020); hence they are forwarded by the teacher model and momentum updated by the centroid of the current mini-batch. Specifically, at each training step, the prototype of class $c$ is estimated as

$\mathbf{z}^{\mathrm{proto}}_{c}\leftarrow m^{\mathrm{proto}}\mathbf{z}^{\prime\mathrm{proto}}_{c}+(1-m^{\mathrm{proto}})\mathbf{z}^{\mathrm{proto}}_{c}$,   (18)

where $\mathbf{z}^{\mathrm{proto}}_{c}$ on the right-hand side is the current mini-batch center defined by Eq. (15), $\mathbf{z}^{\prime\mathrm{proto}}_{c}$ is the prototype from the last training step, and $m^{\mathrm{proto}}$ is the momentum coefficient, which is set to $0.999$.
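A minimal sketch of the momentum prototype update of Eqs. (15)-(18) is given below, assuming the representations and (pseudo-)labels of the current mini-batch have already been flattened into per-pixel vectors; names and defaults are illustrative.

import torch


@torch.no_grad()
def update_prototypes(prototypes, reps, labels, momentum=0.999):
    """Momentum prototype update, Eqs. (15)-(18).

    prototypes: (C, D) running prototypes z_c^proto.
    reps:       (P, D) representations of labeled and reliable unlabeled pixels in the batch.
    labels:     (P,)   their (pseudo-)labels.
    """
    for c in range(prototypes.size(0)):
        mask = labels == c
        if mask.any():
            centroid = reps[mask].mean(dim=0)       # mini-batch centroid, Eq. (15)
            # Eq. (18): keep most of the previous prototype, blend in the new centroid
            prototypes[c] = momentum * prototypes[c] + (1.0 - momentum) * centroid
    return prototypes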

Prototypical Denoising.

Let $\mathbf{w}_{i}\in\mathbb{R}^{C}$ be the weight between the $i$-th unlabeled image $\mathbf{x}_{i}^{u}$ and each class. Concretely, we define $w_{i}(c)$ as the softmax over feature distances to the prototypes:

$w_{i}(c)=\frac{\exp\left(-\|f(\mathbf{x}_{i}^{u};\theta_{t})-\mathbf{z}^{\mathrm{proto}}_{c}\|\right)}{\sum_{c^{\prime}=1}^{C}\exp\left(-\|f(\mathbf{x}_{i}^{u};\theta_{t})-\mathbf{z}^{\mathrm{proto}}_{c^{\prime}}\|\right)}$.   (19)

Then, we get the denoised prediction vector $\mathbf{p}^{*}_{ij}$ for pixel $j$ of the $i$-th unlabeled image based on its original prediction vector $\mathbf{p}_{ij}$ and the class weight $\mathbf{w}_{i}$:

$\mathbf{p}^{*}_{ij}=\mathbf{w}_{i}\odot\mathbf{p}_{ij}$,   (20)

where $\odot$ denotes the element-wise product.

Finally, we plug $\mathbf{p}^{*}_{ij}$ into Eq. (12) and obtain the denoised pseudo-label $\tilde{y}^{u}_{ij}=\arg\max_{c}p^{*}_{ij}(c)$ for pixel $j$ of the $i$-th unlabeled image, which is then used in Eq. (4).
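The denoising of Eqs. (19) and (20) can be sketched as follows; note that this sketch computes the weight per pixel from representation-to-prototype distances, which is one plausible reading of Eq. (19), and the final renormalization is optional since only the arg max of Eq. (12) is used afterwards.

import torch
import torch.nn.functional as F


def prototypical_denoise(prob, reps, prototypes):
    """Prototypical denoising, Eqs. (19) and (20).

    prob:       (B, C, H, W) original softmax predictions for unlabeled images.
    reps:       (B, D, H, W) pixel representations from the teacher.
    prototypes: (C, D) class prototypes.
    """
    B, D, H, W = reps.shape
    flat = reps.permute(0, 2, 3, 1).reshape(-1, D)                 # (B*H*W, D)
    dist = torch.cdist(flat, prototypes)                           # distance to each prototype
    weight = F.softmax(-dist, dim=1)                               # Eq. (19)
    weight = weight.reshape(B, H, W, -1).permute(0, 3, 1, 2)       # (B, C, H, W)
    denoised = weight * prob                                       # Eq. (20), element-wise product
    # renormalization is optional: the arg max used in Eq. (12) is unaffected
    return denoised / denoised.sum(dim=1, keepdim=True).clamp(min=1e-12)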

3.5 Contrastive Pairs Sampling

As formulated in Eq. (10), our U2PL+ aims to make sufficient use of unreliable predictions by leveraging an extra contrastive objective that pulls positive pairs $(\mathbf{z},\mathbf{z}^{+})$ together while pushing negative pairs $(\mathbf{z},\mathbf{z}^{-})$ away. In the following, we provide a detailed description of how contrastive pairs are constructed in each setting.

Initialize $\mathcal{L}\leftarrow 0$;
Sample labeled images $\mathcal{B}_{l}$ and unlabeled images $\mathcal{B}_{u}$;
for $\mathbf{x}_{i}\in\mathcal{B}_{l}\cup\mathcal{B}_{u}$ do
      Get probabilities: $\mathbf{p}_{i}\leftarrow f\circ h(\mathbf{x}_{i};\theta_{t})$;
      Get representations: $\mathbf{z}_{i}\leftarrow g\circ h(\mathbf{x}_{i};\theta_{s})$;
      for $c\leftarrow 0$ to $C-1$ do
            Get anchors $\mathcal{A}_{c}$ based on Eq. (23) or Eq. (29);
            Sample $M$ anchors: $\mathcal{B}_{A}\leftarrow$ sample$(\mathcal{A}_{c})$;
            Get negatives $\mathcal{N}_{c}$ based on Eq. (28) or Eq. (31);
            Push $\mathcal{N}_{c}$ into memory bank $\mathcal{Q}_{c}$;
            Pop the oldest ones out of $\mathcal{Q}_{c}$ if necessary;
            Sample $N$ negatives: $\mathcal{B}_{N}\leftarrow$ sample$(\mathcal{Q}_{c})$;
            Get $\mathbf{z}^{+}$ based on Eq. (24) or Eq. (30);
            $\mathcal{L}\leftarrow\mathcal{L}+\ell(\mathcal{B}_{A},\mathcal{B}_{N},\mathbf{z}^{+})$ based on Eq. (10);
      end for
end for
Output: contrastive loss $\mathcal{L}_{c}\leftarrow\frac{1}{|\mathcal{B}|\times C}\mathcal{L}$
Algorithm 1 Using Unreliable Pseudo-Labels

3.5.1 For SS and DA Settings

Anchor Pixels (Queries). During training, we sample anchor pixels (queries) for each class that appears in the current mini-batch. We denote the set of features of all labeled candidate anchor pixels for class $c$ as $\mathcal{A}_{c}^{l}$:

$\mathcal{A}_{c}^{l}=\left\{\mathbf{z}_{ij}\mid y_{ij}=c,\ p_{ij}(c)>\delta_{p}\right\}$,   (21)

where $y_{ij}$ is the ground truth for the $j$-th pixel of labeled image $i$, and $\delta_{p}$ denotes the positive threshold for a particular class, which is set to $0.3$ following (Liu et al., 2021). $\mathbf{z}_{ij}$ denotes the representation of the $j$-th pixel of labeled image $i$. For unlabeled data, the counterpart $\mathcal{A}_{c}^{u}$ can be computed as:

$\mathcal{A}_{c}^{u}=\left\{\mathbf{z}_{ij}\mid\tilde{y}_{ij}=c,\ p_{ij}(c)>\delta_{p}\right\}$.   (22)

It is similar to $\mathcal{A}_{c}^{l}$, the only difference being that we use the pseudo-label $\hat{y}_{ij}$ based on Eq. (12) rather than the hand-annotated label, which implies that qualified anchor pixels are reliable, i.e., $\mathcal{H}(\mathbf{p}_{ij})\leq\gamma_{t}$. Therefore, for class $c$, the set of all qualified anchors is

$\mathcal{A}_{c}=\mathcal{A}_{c}^{l}\cup\mathcal{A}_{c}^{u}$.   (23)
Positive Keys.

The positive key is the same for all anchors of the same class: it is the prototype of class $c$ defined in Eq. (15):

$\mathbf{z}_{c}^{+}=\mathbf{z}^{\mathrm{proto}}_{c}$.   (24)
Negative Keys.

We define a binary variable $n_{ij}(c)$ to indicate whether the $j$-th pixel of image $i$ is qualified to be a negative sample of class $c$:

$n_{ij}(c)=\begin{cases}n_{ij}^{l}(c),&\mathrm{if\ image\ }i\mathrm{\ is\ labeled},\\ n_{ij}^{u}(c),&\mathrm{otherwise},\end{cases}$   (25)

where $n_{ij}^{l}(c)$ and $n_{ij}^{u}(c)$ indicate whether the $j$-th pixel of a labeled or an unlabeled image $i$, respectively, is qualified to be a negative sample of class $c$.

For the $i$-th labeled image, a qualified negative sample of class $c$ should (a) not belong to class $c$, and (b) be difficult to distinguish between class $c$ and its ground-truth category. Therefore, we introduce the pixel-level category order $\mathcal{O}_{ij}=\texttt{argsort}(\mathbf{p}_{ij})$, sorted in descending order of probability. Obviously, we have $\mathcal{O}_{ij}(\arg\max\mathbf{p}_{ij})=0$ and $\mathcal{O}_{ij}(\arg\min\mathbf{p}_{ij})=C-1$.

$n_{ij}^{l}(c)=\mathbbm{1}\left[y_{ij}\neq c\right]\cdot\mathbbm{1}\left[0\leq\mathcal{O}_{ij}(c)<r_{l}\right]$,   (26)

where $r_{l}$ is the low-rank threshold, which we set to $3$. A small $r_{l}$ represents an aggressive strategy that tries to make full use of unreliable predictions but may introduce too much noise. The two indicator terms reflect conditions (a) and (b), respectively.

For the $i$-th unlabeled image, a qualified negative sample of class $c$ should (a) be unreliable, (b) probably not belong to class $c$, and (c) not belong to the most unlikely classes. Similarly, we also use $\mathcal{O}_{ij}$ to define $n_{ij}^{u}(c)$:

$n_{ij}^{u}(c)=\mathbbm{1}\left[\mathcal{H}(\mathbf{p}_{ij})>\gamma_{t}\right]\cdot\mathbbm{1}\left[r_{l}\leq\mathcal{O}_{ij}(c)<r_{h}\right]$,   (27)

where $r_{h}$ is the high-rank threshold and is set to $20$. Finally, the set of negative samples of class $c$ is

$\mathcal{N}_{c}=\left\{\mathbf{z}_{ij}\mid n_{ij}(c)=1\right\}$.   (28)
Category-wise Memory Bank.

Due to the long-tail distribution of the dataset, negative candidates of some particular categories are extremely limited in a mini-batch. In order to maintain a stable number of negative samples, we use a category-wise memory bank $\mathcal{Q}_{c}$ (a FIFO queue) to store the negative samples for class $c$, as sketched below. We simply follow MoCo (He et al., 2020) and set the queue size to $65{,}536$. A large queue provides diverse negative samples at the cost of extra computation.
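A minimal sketch of the category-wise memory bank and of the negative-key selection rule of Eq. (27) is given below; the class name, method names, and defaults are illustrative, not the released implementation.

from collections import deque

import torch


class CategoryMemoryBank:
    """Category-wise FIFO queues of negative keys (queue size 65,536 in practice)."""

    def __init__(self, num_classes, max_size=65536):
        self.queues = [deque(maxlen=max_size) for _ in range(num_classes)]

    @torch.no_grad()
    def push(self, features, class_id):
        # features: (K, D) negative keys of one class; the oldest entries are dropped automatically
        for f in features:
            self.queues[class_id].append(f.detach().cpu())

    def sample(self, class_id, num):
        # assumes the queue is non-empty; sampling is with replacement for simplicity
        queue = self.queues[class_id]
        idx = torch.randint(len(queue), (num,))
        return torch.stack([queue[int(i)] for i in idx])


def negative_mask_unlabeled(prob, entropy, gamma_t, class_id, r_l=3, r_h=20):
    """Eq. (27): unreliable pixels whose category rank for class_id lies in [r_l, r_h)."""
    order = prob.argsort(dim=1, descending=True).argsort(dim=1)    # per-pixel category order O_ij
    rank = order[:, class_id]                                      # (B, H, W)
    return (entropy > gamma_t) & (rank >= r_l) & (rank < r_h)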

Finally, the whole procedure of using unreliable pseudo-labels is summarized in Algorithm 1. All anchor features are attached to the gradient and hence come from the student, while the features of positive and negative keys come from the teacher.

3.5.2 For the WS Setting

The main difference when applying U2PL+ to the WS setting is that we no longer have any pixel-level annotations, and thus Eqs. (21) and (26) are invalid. Also, the pseudo-labeling and denoising techniques introduced in Secs. 3.3 and 3.4 are not necessary here, because applying a simple threshold $\beta\in(0,1)$ to the re-scaled CAMs $\mathbf{M}^{c}$ to discriminate foreground regions from the background is already effective (Ru et al., 2022). To this end, the elaboration of U2PL+ for weakly supervised semantic segmentation becomes much easier. Next, how to select (1) anchor pixels, (2) positive keys, and (3) negative keys is described in detail.

Anchor Pixels (Queries).

This time, as segmentation maps of the whole dataset $\mathcal{D}$ are inaccessible, we cannot sample anchors using ground-truth labels as Eq. (21) does. Instead, we simply regard pixels whose CAM value is larger than $\beta$ as candidates:

$\mathcal{A}_{c}=\{\mathbf{z}_{ij}\mid\mathbf{M}^{c}_{ij}>\beta\}$.   (29)
Positive Keys.

Positive keys for each query $\mathbf{q}_{c}\in\mathcal{A}_{c}$ are again the prototype of class $c$, i.e., $\mathbf{z}_{c}^{+}=\mathbf{z}_{c}^{\mathrm{proto}}$. The prototypes are momentum updated as described in Eq. (18), but the category-wise centroids of the current mini-batch $\mathbf{z}_{c}^{\prime\mathrm{proto}}$ are computed by

$\mathbf{z}_{c}^{\prime\mathrm{proto}}=\frac{1}{|\mathcal{A}_{c}|}\sum_{\mathbf{z}_{c}\in\mathcal{A}_{c}}\mathbf{z}_{c}$.   (30)
Negative Keys.

Determining negative keys is much easier under the weakly supervised setting because a set of image-level labels is given for each image. Concretely, given an image $\mathbf{x}_{i}$, its image-level label is $\mathbf{y}_{i}\in\{0,1\}^{C}$, where $y_{i}^{c}=1$ indicates that class $c$ exists in this image and vice versa. Therefore, the indicator $n_{ij}(c)$ representing whether pixel $j$ of the $i$-th image is a qualified negative key for class $c$ naturally becomes

$n_{ij}(c)=\mathbbm{1}[y_{i}^{c}=0]\ \mathrm{or}\ \mathbbm{1}[\mathbf{M}^{c}_{ij}<\beta]$,   (31)

where the first term $\mathbbm{1}[y_{i}^{c}=0]$ means the image does not contain category $c$ at all, while the second term $\mathbbm{1}[\mathbf{M}^{c}_{ij}<\beta]$ indicates that the prediction is too unreliable to assign the pixel to category $c$. A prediction becomes a qualified negative key when either of these two conditions is met.
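The indicator of Eq. (31) can be sketched as a simple mask computation; the function name and the default $\beta$ are assumptions for illustration.

import torch


def ws_negative_mask(cam, image_labels, class_id, beta=0.45):
    """Eq. (31): negative-key indicator for class c under the weakly supervised setting.

    cam:          (B, C, h, w) min-max normalized CAMs.
    image_labels: (B, C) binary image-level labels.
    """
    absent = image_labels[:, class_id] == 0        # 1[y_i^c = 0]
    low_cam = cam[:, class_id] < beta              # 1[M_ij^c < beta]
    return absent.view(-1, 1, 1) | low_cam         # a pixel qualifies if either condition holds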

4 Experiments

We conduct experiments on (1) semi-supervised, (2) domain adaptive, and (3) weakly supervised semantic segmentation benchmarks to verify the effectiveness of U2PL+. The settings, baselines, and quantitative and qualitative results for the three tasks are provided in Secs. 4.1, 4.2, and 4.3, respectively. Due to limited computational resources, ablation studies are conducted only on the semi-supervised benchmarks.

Table 1: Comparison with state-of-the-art methods on the classic PASCAL VOC 2012 val set under different partition protocols. The labeled images are selected from the original VOC train set, which consists of 1,464 samples in total. The fractions denote the percentage of labeled data used for training, followed by the actual number of images. All the images from SBD are regarded as unlabeled data. "SupOnly" stands for supervised training without using any unlabeled data. † means we reproduce the approach. The best performances are highlighted in bold font and the second scores are underlined.
Method 1/16 (92) 1/8 (183) 1/4 (366) 1/2 (732) Full (1464)
SupOnly 45.77 54.92 65.88 71.69 72.50
MT 51.72 58.93 63.86 69.51 70.96
CutMix 52.16 63.47 69.46 73.73 76.54
PseudoSeg 57.60 65.50 69.14 72.41 73.23
PC2Seg 57.00 66.28 69.78 73.05 74.15
U2PL 67.98 69.15 73.66 76.16 79.49
U2PL+ 69.29 73.40 75.03 77.09 79.52

4.1 Experiments on Semi-Supervised Segmentation

In this section, we provide experimental results on semi-supervised semantic segmentation benchmarks. We first describe the experimental setup, including datasets, network structure, evaluation metric, and implementation details. Then, we compare with recent methods in Sec. 4.1.1, and perform ablation studies on both PASCAL VOC 2012 and Cityscapes in Sec. 4.1.2. Moreover, in Sec. 4.1.3, we provide qualitative segmentation results.

Table 2: Comparison with state-of-the-art methods on the blender PASCAL VOC 2012 val set and the Cityscapes val set under different partition protocols. For blender VOC, all labeled images are selected from the augmented VOC train set, which consists of 10,582 samples in total. For Cityscapes, all labeled images are selected from the Cityscapes train set, which contains 2,975 samples in total. "SupOnly" stands for supervised training without using any unlabeled data. † means we reproduce the approach. The best performances are highlighted in bold font and the second scores are underlined.
Method Blender PASCAL VOC 2012 Method Cityscapes
1/16 (662) 1/8 (1323) 1/4 (2646) 1/2 (5291) 1/16 (186) 1/8 (372) 1/4 (744) 1/2 (1488)
SupOnly 67.87 71.55 75.80 77.13 SupOnly 65.74 72.53 74.43 77.83
MT 70.51 71.53 73.02 76.58 MT 69.03 72.06 74.20 78.15
CutMix 71.66 75.51 77.33 78.21 CutMix 67.06 71.83 76.36 78.25
CCT 71.86 73.68 76.51 77.40 CCT 69.32 74.12 75.99 78.10
GCT 70.90 73.29 76.66 77.98 GCT 66.75 72.66 76.11 78.34
CPS 74.48 76.44 77.68 78.64 CPS 69.78 74.31 74.58 76.81
AEL 77.20 77.57 78.06 80.29 AEL 74.45 75.55 77.48 79.01
U2PL 77.21 79.01 79.30 80.50 U2PL 74.90 76.48 78.51 79.12
U2PL+ 77.23 79.35 80.21 80.78 U2PL+ 76.09 78.00 79.02 79.62
Datasets.

In semi-supervised semantic segmentation, PASCAL VOC 2012 (Everingham et al., 2010) and Cityscapes (Cordts et al., 2016) are two widely used datasets for evaluation. The PASCAL VOC 2012 dataset is a standard semantic segmentation benchmark with 20 semantic object classes and 1 background class. The training set and the validation set include 1,464 and 1,449 images, respectively. Following (Hu et al., 2021; Yang et al., 2022; Chen et al., 2021c), we use SBD (Hariharan et al., 2011) as the augmented set with 9,118 additional training images. Since the SBD dataset is coarsely annotated, (Zou et al., 2020) takes only the standard 1,464 images as the whole labeled set, while other methods (Chen et al., 2021c; Hu et al., 2021) take all 10,582 images as candidate labeled data. Therefore, we evaluate our method on both the classic set (1,464 candidate labeled images) and the blender set (10,582 candidate labeled images). Cityscapes, a dataset designed for urban scene understanding, consists of 2,975 training images with fine-annotated masks and 500 validation images. For each dataset, we compare U2PL+ with other methods under the 1/2, 1/4, 1/8, and 1/16 partition protocols.

Network Structure.

For SS segmentation, we use ResNet-101 (He et al., 2016) pre-trained on ImageNet-1K (Deng et al., 2009) as the backbone and DeepLabv3+ (Chen et al., 2018) as the decoder. The representation head consists of two Conv-BN-ReLU blocks, where both blocks preserve the feature map resolution and the first block halves the number of channels, mapping the extracted features into a 256-dimensional representation space. The architecture of the representation head remains the same across the three settings. The extra representation head introduces approximately 2.8M additional parameters, resulting in roughly 10% more computational overhead compared to the baseline.

Evaluation Metric.

We adopt the mean Intersection over Union (mIoU) as the evaluation metric. For the SS segmentation task, all results are measured on the val sets of both Cityscapes (Cordts et al., 2016) and PASCAL VOC 2012 (Everingham et al., 2010), where VOC images are center-cropped to a fixed resolution while sliding-window evaluation is used for Cityscapes, following common practice (Hu et al., 2021; Wang et al., 2022b).

Implementation Details.

For training on the blender and classic PASCAL VOC 2012 datasets, we use the stochastic gradient descent (SGD) optimizer with an initial learning rate of 0.001, a weight decay of 0.0001, a crop size of 513×513, a batch size of 16, and 80 training epochs. For training on the Cityscapes dataset, we also use the SGD optimizer, with an initial learning rate of 0.01, a weight decay of 0.0005, a crop size of 769×769, a batch size of 16, and 200 training epochs. In all experiments, the decoder's learning rate is ten times that of the backbone. We use poly scheduling to decay the learning rate during training: $lr=lr_{\mathrm{base}}\cdot\left(1-\frac{\mathrm{iter}}{\mathrm{total\ iter}}\right)^{0.9}$. All SS experiments are conducted on 8 Tesla V100 GPUs.

Table 3: Ablation study on using pseudo pixels with different reliability as negative keys. The reliability is measured by the entropy of the pixel-wise prediction (see Sec. 3.5). "U" denotes unreliable, which takes pixels with the top 20% highest entropy scores as negative candidates. "R" indicates reliable, meaning the bottom 20% counterpart. We prove this effectiveness under the 1/4 and 1/8 partition protocols on the blender PASCAL VOC 2012 val set, and the 1/2 and 1/4 partition protocols on the Cityscapes val set, respectively. The best performances are highlighted in bold font and the second scores are underlined.
$\mathcal{D}_{l}$ $\mathcal{D}_{u}$ Blender VOC Cityscapes
R U 1/8 (1323) 1/4 (2646) 1/4 (744) 1/2 (1488)
74.29 75.76 74.39 78.10
75.15 76.48 74.60 77.93
78.37 79.01 77.19 78.16
77.30 77.35 75.16 77.19
79.35 80.21 79.02 79.62
77.40 77.57 74.51 76.96

4.1.1 Comparison with State-of-the-Art Alternatives

We compare our method with the following recent representative semi-supervised semantic segmentation methods: Mean Teacher (MT) (Tarvainen and Valpola, 2017), CCT (Ouali et al., 2020), GCT (Ke et al., 2020), PseudoSeg (Zou et al., 2020), CutMix (French et al., 2020), CPS (Chen et al., 2021c), PC2Seg (Zhong et al., 2021), and AEL (Hu et al., 2021). We re-implement MT (Tarvainen and Valpola, 2017) and CutMix (Yun et al., 2019) for a fair comparison. For Cityscapes (Cordts et al., 2016), we also reproduce CPS (Chen et al., 2021c) and AEL (Hu et al., 2021). All methods are equipped with the same network architecture (DeepLabv3+ as the decoder and ResNet-101 as the encoder). It is important to note that the classic PASCAL VOC 2012 dataset and the blender PASCAL VOC 2012 dataset only differ in the training set; their validation set is the same, with 1,449 images.

Results on classic PASCAL VOC 2012 Dataset.

Tab. 1 compares our method with the other state-of-the-art methods on the classic PASCAL VOC 2012 dataset. U2PL+ outperforms the supervised baseline by +23.52%, +18.48%, +9.15%, +5.40%, and +7.02% under the 1/16, 1/8, 1/4, 1/2, and full partition protocols, respectively, indicating that semi-supervised learning fully mines the inherent information of unlabeled images. When compared with the state of the art, our U2PL+ outperforms PC2Seg under all partition protocols by +12.29%, +7.12%, +5.25%, +4.04%, and +5.37%, respectively. Note that when labeled data is extremely limited, e.g., when we only have 92 labeled images, our U2PL+ outperforms previous methods by a large margin (+12.29% under the 1/16 split for classic PASCAL VOC 2012), proving the effectiveness of using unreliable pseudo-labels. Furthermore, by introducing extra strategies, U2PL+ outperforms its previous version U2PL by +1.31%, +4.25%, +1.37%, +0.93%, and +0.03%, respectively. The fewer labeled images we have, the larger the improvement, indicating that U2PL+ is more capable of dealing with training noise than U2PL.

Results on blender PASCAL VOC 2012 Dataset.

Tab. 2 shows the comparison results on the blender PASCAL VOC 2012 Dataset. Our method U2{}^{\text{2}}PL+ outperforms all the other methods under various partition protocols. Compared with the supervised baseline, U2{}^{\text{2}}PL+ achieves improvements of +9.36%, +7.80%, +4.41%, and +3.65% under the 1/16, 1/8, 1/4, and 1/2 partition protocols respectively. Compared with the existing state-of-the-art methods, U2{}^{\text{2}}PL+ surpasses them under all partition protocols; especially under the 1/8 and 1/4 protocols, U2{}^{\text{2}}PL+ outperforms AEL by +1.78% and +2.15%. When comparing U2{}^{\text{2}}PL+ to its previous version U2{}^{\text{2}}PL, it brings improvements of +0.02%, +0.34%, +0.91%, and +0.28% respectively. Compared to the classic VOC counterpart, U2{}^{\text{2}}PL+ brings smaller improvements once adequate labeled images are available (i.e., when the number of labeled images is extended from 1,464 to 10,582).

Table 4: Ablation study on the effectiveness of various components in our U2{}^{\text{2}}PL+, including unsupervised loss \mathcal{L}_{u}, contrastive loss \mathcal{L}_{c}, category-wise memory bank \mathcal{Q}_{c}, Prototypical Denoising (PD), Momentum Prototype (MP), Symmetric Cross-Entropy (SCE) for unlabeled images, Dynamic Partition Adjustment (DPA), Probability Rank Threshold (PRT), and using unreliable pseudo-labels in contrastive learning (Un).
\mathcal{L}_{c} \mathcal{Q}_{c} PD MP SCE DPA PRT Un 1/4 (2646)
77.33
77.08
78.49
79.07
77.57
79.30
77.93
77.69
78.01
78.24
80.21
Table 5: Ablation study on prototypical denoising (PD) on blender PASCAL VOC 2012 val set under different partition protocols.
1/16 (662) 1/8 (1323) 1/4 (2646) 1/2 (5291)
U2{}^{\text{2}}PL+ (w/o PD) 77.09 79.10 79.48 80.43
U2{}^{\text{2}}PL+ (w/ PD) 77.23 79.35 80.21 80.78
Table 6: Ablation study on \beta introduced in Eq. (18) on blender PASCAL VOC val set and Cityscapes val set.
β\beta Blender VOC Cityscapes
1/8 (1323) 1/4 (2646) 1/4 (744) 1/2 (1488)
0.9 79.07 79.29 77.81 78.51
0.99 78.91 79.37 77.93 78.54
0.999 79.35 80.21 79.02 79.62
0.9999 79.18 79.91 78.82 79.11
Table 7: Ablation study on (\xi_{1},\xi_{2}) on blender PASCAL VOC 2012 val set and Cityscapes val set.
\xi_{1} \xi_{2} Blender VOC Cityscapes
1/8 (1323) 1/4 (2646) 1/8 (372) 1/4 (744)
0 1 77.76 78.63 76.71 78.02
1 0 79.06 79.68 77.03 78.53
0.1 1 79.11 79.83 77.43 78.72
1 0.5 79.35 80.21 78.00 79.02
0.5 0.5 79.33 80.20 77.79 78.81
Results on Cityscapes Dataset.

Tab. 2 illustrates the comparison results on the Cityscapes val set. U2{}^{\text{2}}PL+ improves the supervised-only baseline by +10.35%, +5.47%, +4.59%, and +1.79% under the 1/16, 1/8, 1/4, and 1/2 partition protocols, and outperforms existing state-of-the-art methods by a notable margin. In particular, U2{}^{\text{2}}PL+ outperforms AEL by +1.64%, +2.45%, +1.54%, and +0.61% under the 1/16, 1/8, 1/4, and 1/2 partition protocols. Compared to its previous version U2{}^{\text{2}}PL, U2{}^{\text{2}}PL+ achieves improvements of +1.19%, +1.52%, +0.51%, and +0.50% respectively.

4.1.2 Ablation Studies

In this section, we first design experiments in Tab. 3 to validate our main insight: using unreliable pseudo-labels matters for semi-supervised semantic segmentation. Next, we ablate each component of our proposed U2{}^{\text{2}}PL+ in Tab. 4, including using contrastive learning (\mathcal{L}_{c}), applying a memory bank to store abundant negative samples (\mathcal{Q}_{c}), prototype-based pseudo-label denoising (PD), momentum prototype (MP), using symmetric cross-entropy for unlabeled images (SCE), dynamic partition adjustment (DPA), probability rank threshold (PRT), and only regarding unreliable pseudo-labels as negative samples (Un). In Tab. 5, we evaluate the performance of prototypical denoising under different partition protocols on both PASCAL VOC 2012 and Cityscapes. Finally, we ablate the hyper-parameter \beta for the momentum prototype, (\xi_{1},\xi_{2}) for the symmetric cross-entropy loss, (r_{l},r_{h}) for the probability rank threshold, the initial reliable-unreliable partition \alpha_{0}, the base learning rate lr_{\mathrm{base}}, and the temperature \tau, respectively.

Effectiveness of Using Unreliable Pseudo-Labels.

To prove our core insight, i.e., that using unreliable pseudo-labels promotes semi-supervised semantic segmentation, we conduct experiments on selecting negative candidates (described in Sec. 3.5) with different reliability, i.e., whether only unreliable pseudo-labels are regarded as negative samples of a particular query.

Tab. 3 reports the mIoU results on the PASCAL VOC 2012 val set and the Cityscapes val set, respectively. Including “U” in the negative keys outperforms the other options, proving that using unreliable pseudo-labels does help and that our U2{}^{\text{2}}PL+ fully mines the information of all pixels, especially the unreliable ones. Note that “R” in Tab. 3 indicates that the negative keys in the memory banks are reliable, while including both “U” and “R” means all features are stored in the memory banks without filtering. It is worth noticing that including features from the labeled set \mathcal{D}_{l} brings only marginal improvements, whereas significant improvements come from the introduction of unreliable predictions, i.e., “U” in Tab. 3.
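To make the reliable-unreliable partition concrete, the following is a minimal sketch (our own illustration, not the released code) of splitting pixels by the entropy of their softmax predictions, with the unreliable proportion set to 20% as used for the initial partition in Tab. 9.

```python
import torch
import torch.nn.functional as F

def split_by_entropy(logits, alpha=0.20):
    """logits: (B, C, H, W). Returns boolean masks of unreliable / reliable pixels."""
    prob = F.softmax(logits, dim=1)
    entropy = -(prob * torch.log(prob + 1e-10)).sum(dim=1)   # (B, H, W)
    flat = entropy.flatten()
    thresh_hi = torch.quantile(flat, 1.0 - alpha)  # top-alpha entropy -> unreliable
    thresh_lo = torch.quantile(flat, alpha)        # bottom-alpha entropy -> reliable
    unreliable = entropy >= thresh_hi   # candidates for negative keys
    reliable = entropy <= thresh_lo     # candidates for pseudo-supervision
    return unreliable, reliable
```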

Effectiveness of Components.

We conduct experiments in Tab. 4 to ablate each component of U2{}^{\text{2}}PL+ step by step. For a fair comparison, all the ablations are under 1/4 partition protocol on the blender PASCAL VOC 2012 Dataset.

Above all, we take the model trained without \mathcal{L}_{c} as our baseline, achieving an mIoU of 77.33% (CutMix in Tab. 2). We first ablate the components related to contrastive learning, including \mathcal{Q}_{c}, DPA, PRT, and Un. Simply adding the vanilla \mathcal{L}_{c} even degrades performance by -0.27%. The category-wise memory bank \mathcal{Q}_{c}, together with PRT and high-entropy filtering, improves the vanilla \mathcal{L}_{c} by +1.41%. Dynamic Partition Adjustment (DPA), together with high-entropy filtering, improves the vanilla \mathcal{L}_{c} by +1.99%. Note that DPA is a linear adjustment without tuning (refer to Eq. (13) and the sketch below), which is simple yet effective. For the Probability Rank Threshold (PRT) component, we set the corresponding parameters according to Tab. 8. Without high-entropy filtering, the improvement decreases significantly to +0.49%.
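For reference, the linear DPA schedule can be sketched as follows; Eq. (13) itself is given earlier in the paper, so the decay rule below is only our reading of “linear adjustment without tuning”.

```python
def dpa_alpha(alpha_0, cur_epoch, total_epoch):
    """Linearly shrink the unreliable proportion from alpha_0 to 0 over training."""
    return alpha_0 * (1.0 - cur_epoch / total_epoch)

# e.g., alpha_0 = 20% (Tab. 9): halfway through an 80-epoch run, alpha_t = 0.10
print(dpa_alpha(0.20, cur_epoch=40, total_epoch=80))
```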

Then, we ablate the components for noisy-label learning and denoising, including PD, MP, and SCE. Introducing Prototypical Denoising (PD) on pseudo-labels improves the performance by +0.60% over CutMix. Using Symmetric Cross-Entropy (SCE) when computing the unsupervised loss improves the performance by +0.36% over CutMix. Adding them together brings an improvement of +0.68%. Adopting an extra Momentum Prototype (MP) on top of the above two techniques brings an improvement of +0.91%.
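As a hedged sketch, the SCE term for unlabeled pixels can be written as below, following the general formulation of Wang et al. (2019) with weights (\xi_{1}, \xi_{2}); the clamping constants are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sce_loss(logits, pseudo_label, xi1=1.0, xi2=0.5, num_classes=21):
    """logits: (N, C); pseudo_label: (N,) hard pseudo-labels from the teacher."""
    ce = F.cross_entropy(logits, pseudo_label)
    # Reverse CE: swap the roles of the prediction and the (clamped) one-hot label.
    pred = F.softmax(logits, dim=1).clamp(min=1e-7)
    one_hot = F.one_hot(pseudo_label, num_classes).float().clamp(min=1e-4)
    rce = -(pred * torch.log(one_hot)).sum(dim=1).mean()
    return xi1 * ce + xi2 * rce
```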

Finally, when adding all the contributions together, our method achieves state-of-the-art results under the 1/4 partition protocol with an mIoU of 80.21%. Following this result, we apply these components and the corresponding parameters in all experiments in Tab. 1 and Tab. 2.

Table 8: Ablation study on PRT on PASCAL VOC 2012 val set and Cityscapes val set.
r_{l} r_{h} Blender VOC Cityscapes
1/8 (1323) 1/4 (2646) 1/8 (372) 1/4 (744)
1 3 78.57 79.03 73.44 77.27
1 20 78.64 79.07 75.03 78.04
3 10 78.27 78.91 76.12 78.01
3 20 79.35 80.21 78.00 79.02
10 20 78.62 78.94 75.33 77.18
Table 9: Ablation study on \alpha_{0} in Eq. (13) on blender PASCAL VOC 2012 val set and Cityscapes val set, which controls the initial proportion between reliable and unreliable pixels.
\alpha_{0} Blender VOC Cityscapes
1/8 (1323) 1/4 (2646) 1/8 (372) 1/4 (744)
40% 76.77 76.92 75.07 77.20
30% 77.34 76.38 75.93 78.08
20% 79.35 80.21 78.00 79.02
10% 77.80 77.95 74.63 78.40
Table 10: Ablation study on base learning rate and temperature under 1/4 partition protocol (2646) on blender PASCAL VOC 2012 Dataset.
lr_{\mathrm{base}} 10^{-1} 10^{-2} 10^{-3} 10^{-4} 10^{-5}
mIoU 3.49 77.82 80.21 74.58 65.69
\tau 10 1 0.5 0.1 0.01
mIoU 78.88 78.91 80.21 79.22 78.78
Effectiveness of Prototypical Denoising.

To further see the impact of prototypical denoising (PD), we conduct experiments in Tab. 5 to find out how PD affects the performance under different partition protocols of blender PASCAL VOC 2012. From Tab. 5, we can tell that PD brings improvements of +0.14%, +0.25%, +0.73%, and +0.35% under the 1/16, 1/8, 1/4, and 1/2 partition protocols respectively, indicating that PD manages to enhance the quality of pseudo-labels by weighted averaging of predictions from similar feature representations.
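As a rough illustration of such weighted averaging (our assumed mechanism, not the exact equation used in the paper), pixel-wise predictions can be re-weighted by the similarity between each pixel's feature and the class prototypes, so that predictions supported by nearby features are strengthened.

```python
import torch
import torch.nn.functional as F

def prototypical_denoise(prob, feat, prototypes, temperature=1.0):
    """prob: (N, C) softmax predictions; feat: (N, D) pixel features;
    prototypes: (C, D) class-wise prototypes. Returns denoised (N, C) probabilities."""
    feat = F.normalize(feat, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    sim = feat @ prototypes.t() / temperature     # (N, C) cosine similarities
    weight = F.softmax(sim, dim=1)
    denoised = prob * weight
    return denoised / denoised.sum(dim=1, keepdim=True)
```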

Table 11: Semi-supervised semantic segmentation on Cityscapes with a transformer-based network architecture. We adopt DAFormer (Hoyer et al., 2022) with MiT-B5 (Xie et al., 2021). All methods are trained for 40k iterations for efficient evaluation. The input resolution is 512×512. U2{}^{\text{2}}PL+ manages to bring significant improvements when using transformer-based models.
Method 1/16 (186) 1/8 (372) 1/4 (744) 1/2 (1488)
SupOnly 63.11 68.84 70.83 74.56
CutMix 67.84 71.29 72.37 75.46
U2{}^{\text{2}}PL+ 72.45 75.03 75.70 76.75
Table 12: Experiments on semi-supervised image classification. We test our U2{}^{\text{2}}PL+ on CIFAR-100 (Krizhevsky et al., 2009).
Method 1/125 (400) 1/20 (2500) 1/5 (10000)
FixMatch 51.15 71.71 77.40
U2{}^{\text{2}}PL+ (w/ FixMatch) 57.33 73.07 79.02
FreeMatch 62.02 73.53 78.32
U2{}^{\text{2}}PL+ (w/ FreeMatch) 63.74 74.01 79.18
Momentum Coefficient β\beta for Updating Prototypes.

We conduct experiments in Tab. 6 to find the best \beta for the momentum prototype (MP) under the 1/8 and 1/4 partition protocols on blender PASCAL VOC 2012, and the 1/4 and 1/2 partition protocols on Cityscapes, respectively. \beta=0.999 yields slightly better results than the other settings, indicating that our U2{}^{\text{2}}PL+ is quite robust against different \beta.
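The momentum update itself is a standard exponential moving average; a one-line sketch with the best-performing \beta = 0.999 (function and variable names are ours):

```python
import torch

@torch.no_grad()
def update_prototype(prototype, batch_mean_feat, beta=0.999):
    """prototype, batch_mean_feat: (D,) class-wise feature vectors."""
    return beta * prototype + (1.0 - beta) * batch_mean_feat
```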

Weights in Symmetric Cross-Entropy Loss ξ1\xi_{1} and ξ2\xi_{2}.

To find the optimal \xi_{1} and \xi_{2} for the symmetric cross-entropy (SCE) loss, we conduct experiments in Tab. 7 under the 1/8 and 1/4 partition protocols on both blender PASCAL VOC 2012 and Cityscapes. When \xi_{1}=1 and \xi_{2}=0.5, it achieves the best performance, better than the standard cross-entropy loss (i.e., \xi_{1}=1 and \xi_{2}=0) by +0.29% and +0.53% under the 1/8 and 1/4 partition protocols respectively. Note that the performance drops heavily when using only the reverse CE loss (i.e., \xi_{1}=0 and \xi_{2}=1).

Probability Rank Thresholds rlr_{l} and rhr_{h}.

Sec. 3.5 proposes to use a probability rank threshold to balance the informativeness and the confusion caused by unreliable pixels. Tab. 8 verifies that such a balance promotes performance: r_{l}=3 and r_{h}=20 outperform the other options by a large margin. When r_{l}=1, false-negative candidates are not filtered out, causing intra-class features to be incorrectly pushed apart by \mathcal{L}_{c}. When r_{l}=10, negative candidates tend to be semantically irrelevant to the corresponding anchor pixels, making such discrimination less informative.
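A minimal sketch of this rank-based filtering (function and variable names are ours), assuming softmax probabilities are available for every candidate pixel:

```python
import torch

def prt_mask(prob, cls, r_l=3, r_h=20):
    """prob: (N, C) softmax predictions; cls: anchor class index.
    Keep a pixel as a negative candidate for `cls` only if the rank of `cls`
    in its predicted probabilities lies within [r_l, r_h]."""
    order = prob.argsort(dim=1, descending=True)     # classes sorted by probability
    rank = (order == cls).float().argmax(dim=1) + 1  # 1-based rank of `cls` per pixel
    return (rank >= r_l) & (rank <= r_h)
```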

Initial Reliable-Unreliable Partition α0\alpha_{0}.

Tab. 9 studies the impact of the initial reliable-unreliable partition \alpha_{0}, which has a certain impact on performance. We find that \alpha_{0}=20% achieves the best performance. A small \alpha_{0} introduces incorrect pseudo-labels for supervision, while a large \alpha_{0} leaves the information of some high-confidence samples underutilized.

Base Learning Rate lrbaselr_{\mathrm{base}}.

The impact of the base learning rate is shown in Tab. 10. Results are based on the blender PASCAL VOC 2012 dataset. We find that 0.001 outperforms the other alternatives.

Temperature τ\tau.

Tab. 10 studies the effect of the temperature \tau, which plays an important role in adjusting the importance of hard samples. When \tau=0.5, our U2{}^{\text{2}}PL+ achieves the best results. A \tau that is too large or too small adversely affects the overall performance.
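For reference, the role of \tau can be seen in a generic InfoNCE-style formulation; this is a simplified single-anchor sketch, not the exact pixel-wise contrastive loss used here.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.5):
    """anchor, positive: (D,); negatives: (K, D). All vectors L2-normalized.
    A smaller tau sharpens the logits and up-weights hard negatives."""
    pos = (anchor * positive).sum() / tau        # positive similarity
    neg = (negatives @ anchor) / tau             # (K,) negative similarities
    logits = torch.cat([pos.view(1), neg]).view(1, -1)
    target = torch.zeros(1, dtype=torch.long)    # the positive sits at index 0
    return F.cross_entropy(logits, target)
```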

Figure 4: Qualitative results on PASCAL VOC 2012 val set. All models are trained under the 1/4 partition protocol of the blender set, which contains 2,646 labeled images and 7,936 unlabeled images. (a) Input images. (b) Labels for the corresponding image. (c) Only labeled images are used for training without any unlabeled data. (d) Predictions from our conference version U2{}^{\text{2}}PL. (e) Predictions from U2{}^{\text{2}}PL+.
Extend U2{}^{\text{2}}PL+ to Transformer-based Models.

To further verify the generalization of U2{}^{\text{2}}PL+ across different network architectures, we apply the network architecture of DAFormer (Hoyer et al., 2022) to the semi-supervised benchmarks in Tab. 11. We train models for 40k iterations instead of 200 epochs for efficient evaluation. U2{}^{\text{2}}PL+ manages to bring significant improvements when using transformer-based models.

Extend U2{}^{\text{2}}PL+ to Semi-Supervised Image Classification.

To further demonstrate the generalization of U2{}^{\text{2}}PL+, we extend it to semi-supervised image classification. We implement U2{}^{\text{2}}PL+ on the basis of FixMatch (Sohn et al., 2020) and FreeMatch (Wang et al., 2023d), and almost no modifications are needed when adapting U2{}^{\text{2}}PL+ from segmentation to classification: similarly, we make full use of unreliable image-level pseudo-labels under the U2{}^{\text{2}}PL+ pipeline. Our implementation is based on USB (Wang et al., 2022a). From Tab. 12, we can tell that incorporating U2{}^{\text{2}}PL+ brings significant improvements.

Figure 5: Qualitative results on Cityscapes val set. All models are trained under the 1/2 partition protocol, which contains 1,488 labeled images and 1,487 unlabeled images. (a) Input images. (b) Hand-annotated labels for the corresponding image. (c) Only labeled images are used for training. (d) Predictions from our conference version, i.e., U2{}^{\text{2}}PL (Wang et al., 2022b). (e) Predictions from U2{}^{\text{2}}PL+. Yellow rectangles highlight the promotion by adequately using unreliable pseudo-labels.

4.1.3 Qualitative Results

Fig. 4 and Fig. 5 show the results of different methods on the PASCAL VOC 2012 val set and the Cityscapes val set, respectively. Benefiting from using unreliable pseudo-labels, U2{}^{\text{2}}PL+ outperforms the supervised baseline and generates more accurate segmentation maps.

Furthermore, by visualizing the segmentation results, we find that our method performs much better on ambiguous regions (e.g., the borders between different objects). This visual difference demonstrates that our method ultimately strengthens the reliability of initially unreliable predictions.

Table 13: Comparison with state-of-the-art methods for DA. The results are averaged over 3 random seeds. Note that for SYNTHIA \to Cityscapes, we only compute the mIoU over 16 classes (mIoU16). The top performance is highlighted in bold font and the second score is underlined.
GTA5 \to Cityscapes
Method Road S.walk Build. Wall Fence Pole T.light Sign Veget. Terrain Sky Person Rider Car Truck Bus Train M.bike Bike mIoU
AdaptSeg 86.5 25.9 79.8 22.1 20.0 23.6 33.1 21.8 81.8 25.9 75.9 57.3 26.2 76.3 29.8 32.1 7.2 29.5 32.5 41.4
CyCADA 86.7 35.6 80.1 19.8 17.5 38.0 39.9 41.5 82.7 27.9 73.6 64.9 19.0 65.0 12.0 28.6 4.5 31.1 42.0 42.7
ADVENT 89.4 33.1 81.0 26.6 26.8 27.2 33.5 24.7 83.9 36.7 78.8 58.7 30.5 84.8 38.5 44.5 1.7 31.6 32.4 45.5
CBST 91.8 53.5 80.5 32.7 21.0 34.0 28.9 20.4 83.9 34.2 80.9 53.1 24.0 82.7 30.3 35.9 16.0 25.9 42.8 45.9
FADA 92.5 47.5 85.1 37.6 32.8 33.4 33.8 18.4 85.3 37.7 83.5 63.2 39.7 87.5 32.9 47.8 1.6 34.9 39.5 49.2
CAG_DA 90.4 51.6 83.8 34.2 27.8 38.4 25.3 48.4 85.4 38.2 78.1 58.6 34.6 84.7 21.9 42.7 41.1 29.3 37.2 50.2
FDA 92.5 53.3 82.4 26.5 27.6 36.4 40.6 38.9 82.3 39.8 78.0 62.6 34.4 84.9 34.1 53.1 16.9 27.7 46.4 50.5
PIT 87.5 43.4 78.8 31.2 30.2 36.3 39.3 42.0 79.2 37.1 79.3 65.4 37.5 83.2 46.0 45.6 25.7 23.5 49.9 50.6
IAST 93.8 57.8 85.1 39.5 26.7 26.2 43.1 34.7 84.9 32.9 88.0 62.6 29.0 87.3 39.2 49.6 23.2 34.7 39.6 51.5
ProDA 91.5 52.4 82.9 42.0 35.7 40.0 44.4 43.3 87.0 43.8 79.5 66.5 31.4 86.7 41.1 52.5 0.0 45.4 53.8 53.7
DAFormer 95.7 70.2 89.4 53.5 48.1 49.6 55.8 59.4 89.9 47.9 92.5 72.2 44.7 92.3 74.5 78.2 65.1 55.9 61.8 68.3
U2{}^{\text{2}}PL+ 95.7 70.9 89.7 53.3 46.3 47.0 59.1 58.3 90.4 48.6 92.9 73.2 44.6 92.6 77.5 77.8 76.7 59.3 67.5 69.6
SYNTHIA \to Cityscapes
Method Road S.walk Build. Wall Fence Pole T.light Sign Veget. Terrain Sky Person Rider Car Truck Bus Train M.bike Bike mIoU16
ADVENT 85.6 42.2 79.7 8.7 0.4 25.9 5.4 8.1 80.4 - 84.1 57.9 23.8 73.3 - 36.4 - 14.2 33.0 41.2
CBST 68.0 29.9 76.3 10.8 1.4 33.9 22.8 29.5 77.6 - 78.3 60.6 28.3 81.6 - 23.5 - 18.8 39.8 42.6
CAG_DA 84.7 40.8 81.7 7.8 0.0 35.1 13.3 22.7 84.5 - 77.6 64.2 27.8 80.9 - 19.7 - 22.7 48.3 44.5
PIT 83.1 27.6 81.5 8.9 0.3 21.8 26.4 33.8 76.4 - 78.8 64.2 27.6 79.6 - 31.2 - 31.0 31.3 44.0
FADA 84.5 40.1 83.1 4.8 0.0 34.3 20.1 27.2 84.8 - 84.0 53.5 22.6 85.4 - 43.7 - 26.8 27.8 45.2
PyCDA 75.5 30.9 83.3 20.8 0.7 32.7 27.3 33.5 84.7 - 85.0 64.1 25.4 85.0 - 45.2 - 21.2 32.0 46.7
IAST 81.9 41.5 83.3 17.7 4.6 32.3 30.9 28.8 83.4 - 85.0 65.5 30.8 86.5 - 38.2 - 33.1 52.7 49.8
SAC 89.3 47.2 85.5 26.5 1.3 43.0 45.5 32.0 87.1 - 89.3 63.6 25.4 86.9 - 35.6 - 30.4 53.0 52.6
ProDA 87.1 44.0 83.2 26.9 0.7 42.0 45.8 34.2 86.7 - 81.3 68.4 22.1 87.7 - 50.0 - 31.4 38.6 51.9
DAFormer 84.5 40.7 88.4 41.5 6.5 50.0 55.0 54.6 86.0 - 89.8 73.2 48.2 87.2 - 53.2 - 53.9 61.7 60.9
U2{}^{\text{2}}PL+ 85.3 45.7 87.6 42.8 5.0 41.9 57.8 49.4 86.8 - 89.9 75.8 49.0 88.0 - 61.6 - 54.1 64.8 61.6

4.2 Experiments on Domain Adaptive Segmentation

In this section, we evaluate the efficacy of our U2{}^{\text{2}}PL+ under the domain adaptation (DA) setting. Different from semi-supervised learning, DA often suffers from a domain shift between the labeled source domain and the unlabeled target domain (Hoyer et al., 2022; Zhang et al., 2021b; Li et al., 2022b; Chen et al., 2019; Hoffman et al., 2018; Li et al., 2019; Luo et al., 2019), which makes it even more important to make sufficient use of all pseudo-labels. We first describe the experimental settings for domain adaptive semantic segmentation. Then, we compare our U2{}^{\text{2}}PL+ with state-of-the-art alternatives. Finally, we provide qualitative results.

Datasets.

In this setting, we use synthetic images from either GTA5 (Richter et al., 2016) or SYNTHIA (Ros et al., 2016) as the source domain and real-world images from Cityscapes (Cordts et al., 2016) as the target domain. GTA5 (Richter et al., 2016) consists of 24,996 images with a resolution of 1914×1052, and SYNTHIA (Ros et al., 2016) contains 9,400 images with a resolution of 1280×760.

Network Structure.

For DA segmentation, we use MiT-B5 (Xie et al., 2021) pre-trained on ImageNet-1K (Deng et al., 2009) as the backbone and DAFormer (Hoyer et al., 2022) as the decoder. The representation head is the same as that in the semi-supervised setting.

Evaluation Metric.

For the DA segmentation task, we report results on the Cityscapes val set. Particularly, on SYNTHIA \to Cityscapes DA benchmark, 16 of the 19 classes of Cityscapes are used to calculate mIoU, following the common practice (Hoyer et al., , 2022).
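In practice this amounts to averaging the per-class IoU over the 16 shared classes; a small sketch, assuming the standard Cityscapes train-ID ordering (terrain, truck, and train are the three classes absent from SYNTHIA, cf. Tab. 13):

```python
import numpy as np

# Cityscapes train IDs of the classes missing from SYNTHIA: terrain, truck, train.
EXCLUDED = {9, 14, 16}

def miou16(iou_per_class):
    """iou_per_class: length-19 array of per-class IoU on the Cityscapes val set."""
    kept = [iou for i, iou in enumerate(iou_per_class) if i not in EXCLUDED]
    return float(np.mean(kept))
```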

Table 14: Ablation study on using pseudo pixels with different reliability as negative keys. The reliability is measured by the entropy of pixel-wise prediction (see Sec. 3.5). “U” denotes unreliable, which takes pixels with the top 20% highest entropy scores as negative candidates. “R” indicates reliable, and means the bottom 20% counterpart. 𝒟l\mathcal{D}_{l} and 𝒟u\mathcal{D}_{u} are the labeled source domain and the unlabeled target domain, respectively. The best performances are highlighted in bold font and the second scores are underlined.
\mathcal{D}_{l} \mathcal{D}_{u} GTA5 \to Cityscapes SYNTHIA \to Cityscapes
R U mIoU mIoU16
68.4 60.8
68.6 60.9
69.3 61.2
68.7 60.9
69.6 61.6
68.5 60.8
Table 15: Performance on tailed classes under domain adaptation settings. The results are averaged over 3 random seeds. U2{}^{\text{2}}PL+ contributes more to tailed classes.
GTA5 \to Cityscapes
Method Wall T.light Sign Rider Truck Bus Train M.bike Bike mIoU
DAFormer 53.5 55.8 59.4 44.7 74.5 78.2 65.1 55.9 61.8 60.9
U2{}^{\text{2}}PL+ 53.3 59.1 58.3 44.6 77.5 77.8 76.7 59.3 67.5 63.8
Figure 6: Qualitative results on Cityscapes val set under GTA5 \to Cityscapes DA benchmark. (a) Input images. (b) Hand-annotated labels for the corresponding image. (c) Only labeled source images are used for training. (d) Predictions from DAFormer (Hoyer et al., , 2022). (e) Predictions from U2{}^{\text{2}}PL+. Yellow rectangles highlight the promotion by adequately using unreliable pseudo-labels.
Implementation Details.

Following previous methods (Zhang et al., 2021b; Hoyer et al., 2022), we first resize the images to 1024×512 for Cityscapes and 1280×720 for GTA5, and then randomly crop them to 512×512 for training. At test time, we simply resize the images to 1024×512. We use the AdamW (Loshchilov and Hutter, 2017) optimizer with a weight decay of 0.01 and initial learning rates of lr_{\mathrm{base}}=6\times 10^{-5} for the encoder and 6\times 10^{-4} for the decoder, following (Hoyer et al., 2022). In accordance with DAFormer (Hoyer et al., 2022), we warm up the learning rate with a linear schedule for 1.5k steps and decay it linearly afterwards. In total, the model is trained with a batch of one 512×512 labeled source image and one 512×512 unlabeled target image for 40k iterations. We adopt DAFormer (Hoyer et al., 2022) as our baseline and apply extra contrastive learning with our sampling strategy. We evaluate our proposed U2{}^{\text{2}}PL+ under two DA benchmarks (i.e., GTA5 \to Cityscapes and SYNTHIA \to Cityscapes). All DA experiments are conducted on a single Tesla A100 GPU.
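The warm-up-then-decay schedule can be sketched as follows; the linear decay to zero after warm-up is our assumption of the exact endpoint.

```python
def warmup_linear_lr(base_lr, step, warmup_steps=1500, total_steps=40000):
    """Linear warm-up over the first 1.5k steps, then linear decay."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (1.0 - (step - warmup_steps) / (total_steps - warmup_steps))
```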

Comparisons with State-of-the-Art Alternatives.

We compare our proposed U2{}^{\text{2}}PL+ with state-of-the-art alternatives, including adversarial training methods (AdaptSeg (Tsai et al., , 2018), CyCADA (Hoffman et al., , 2018), FADA (Wang et al., 2020a, ), and ADVENT (Vu et al., , 2019)), and self-training methods (CBST (Zou et al., , 2018), IAST (Mei et al., , 2020), CAG_DA (Zhang et al., , 2019), ProDA (Zhang et al., 2021b, ), CPSL (Li et al., 2022b, ), SAC (Araslanov and Roth, , 2021), and DAFormer (Hoyer et al., , 2022)).

GTA5 \to Cityscapes.

As shown in Tab. 13, our proposed U2{}^{\text{2}}PL+ achieves the best IoU performance on 13 out of 19 categories, and the IoU scores of the remaining 6 classes rank second. Its mIoU is 69.6% and outperforms DAFormer (Hoyer et al., 2022) by 1.3%. Since DAFormer (Hoyer et al., 2022) is our baseline, the results prove that mining all unreliable pseudo-labels does help the model. Note that there is no need to apply knowledge distillation as ProDA (Zhang et al., 2021b) and CPSL (Li et al., 2022b) do, which demonstrates the efficiency and simplicity of our method and the robustness of our core insight. From Tab. 13, we can also tell that by introducing unreliable pseudo-labels into training, U2{}^{\text{2}}PL+ attains 76.7% IoU on the difficult long-tailed class train, on which ProDA (Zhang et al., 2021b), FADA (Wang et al., 2020a), and ADVENT (Vu et al., 2019) even fail.

SYNTHIA \to Cityscapes.

This DA task is more challenging than GTA5 \to Cityscapes due to the larger domain gap between the labeled source domain and the unlabeled target domain. As shown in Tab. 13, our proposed U2{}^{\text{2}}PL+ achieves the best IoU performance on 8 out of 16 categories, and the mIoU over 16 classes (mIoU16) of our method outperforms the other state-of-the-art alternatives, reaching 61.6%. Specifically, U2{}^{\text{2}}PL+ performs especially well on the class bus, surpassing DAFormer (Hoyer et al., 2022) by a large margin of 8.4%.

Effectiveness of Using Unreliable Pseudo-Labels.

To study our core insight, i.e., using unreliable pseudo-labels to promote domain adaptive semantic segmentation, we conduct experiments on selecting negative candidates with different reliability in Tab. 14. As illustrated in the table, including features from the source domain \mathcal{D}_{l} brings only marginal improvements; significant improvements come from the introduction of unreliable predictions.

Performance on Tailed Classes.

From Tab. 15, we can tell that the improvements of our U2{}^{\text{2}}PL+ on tailed classes are much more significant than those over all categories. For instance, on the GTA5 \to Cityscapes benchmark, U2{}^{\text{2}}PL+ outperforms DAFormer (Hoyer et al., 2022) by +1.3 mIoU over all classes (69.6 vs. 68.3) but by +2.9 mIoU over tailed classes (63.8 vs. 60.9).

Qualitative Results.

Fig. 6 shows the improvements in segmentation results of different methods. Benefiting from using unreliable pseudo-labels, the model is able to correct predictions that go wrong when only reliable pseudo-labels are used (e.g., by DAFormer (Hoyer et al., 2022)).

Discussion.

Both DA and SS aim at leveraging a large amount of unlabeled data, but DA often suffers from domain shift, making it more important to learn domain-invariant feature representations and to build a cross-category-discriminative feature embedding space. Although our U2{}^{\text{2}}PL+ mainly focuses on semi-supervised learning and does not pay much attention to bridging the gap between the source and target domains, using unreliable pseudo-labels provides another efficient route to generalization. This might be because, in DA, the quality of pseudo-labels is much worse than in SS due to the limited generalization ability of the segmentation model; hence, making sufficient use of all pixels remains a valuable issue.

4.3 Experiments on Weakly Supervised Segmentation

Table 16: Comparison with state-of-the-art methods on the weakly supervised semantic segmentation benchmark. We report the mIoU on PASCAL VOC 2012 val set and test set. “Sup.” indicates the supervision type. \mathcal{F}, \mathcal{I}, and \mathcal{S} represent full supervision, image-level supervision, and saliency supervision, respectively. {\dagger} indicates our implementation.
Method Sup. Backbone val test
Fully Supervised Methods
DeepLab \mathcal{F} ResNet-101 77.6 79.7
WideResNet38 WR-38 80.8 82.5
SegFormer MiT-B1 78.7 -
Multi-Stage Weakly Supervised Methods
OAA+ \mathcal{I}+\mathcal{S} ResNet-101 65.2 66.4
MCIS ResNet-101 66.2 66.9
AuxSegNet WR-38 69.0 68.6
NSROM ResNet-101 70.4 70.2
EPS ResNet-101 70.9 70.8
SEAM \mathcal{I} WR-38 64.5 65.7
SC-CAM ResNet-101 66.1 65.9
CDA WR-38 66.1 66.8
AdvCAM ResNet-101 68.1 68.0
CPN ResNet-101 67.8 68.5
RIB ResNet-101 68.3 68.6
Single-Stage Weakly Supervised Methods
EM \mathcal{I} VGG-16 38.2 39.6
MIL - 42.0 40.6
CRF-RNN VGG-16 52.8 53.7
RRM WR-38 62.6 62.9
1Stage WR-38 62.7 64.3
AFA MiT-B1 64.9 66.1
U2{}^{\text{2}}PL+ MiT-B1 66.4 67.0
Table 17: Ablation study on using pseudo pixels with different reliability as negative keys. The reliability is measured by the scores of CAMs (see Sec. 3.5.2 for details). Specifically, we study the formulation of n_{ij}(c) defined in Eq. (31), which indicates whether pixel j from the i-th image is a qualified negative key for class c. Incorporating \mathbbm{1}[\mathbf{M}_{ij}^{c}<\beta] means that unreliable predictions for class c are considered as negative keys. The mIoU on PASCAL VOC 2012 val set is reported.
n_{ij}(c) mIoU
\mathbbm{1}[y_{i}^{c}=0] only 65.0
\mathbbm{1}[\mathbf{M}_{ij}^{c}<\beta] only 65.7
\mathbbm{1}[y_{i}^{c}=0] and \mathbbm{1}[\mathbf{M}_{ij}^{c}<\beta] 66.4

In this section, we evaluate the efficacy of our U2{}^{\text{2}}PL+ under the weakly supervised (WS) setting. Under this setting, pixel-level annotations are entirely inaccessible; only image-level labels are available, making it crucial to make sufficient use of all pixel-level predictions. We first describe the experimental settings for weakly supervised semantic segmentation. Then, we compare our U2{}^{\text{2}}PL+ with state-of-the-art alternatives. Finally, we provide qualitative results.

Several WS methods with image-level labels adopt a multi-stage framework, e.g., (Jiang et al., 2019; Sun et al., 2020). They first train a classification model and generate CAMs to serve as pseudo-labels; additional refinement techniques are usually required to improve the quality of these pseudo-labels. Finally, a standalone semantic segmentation network is trained using these pseudo-labels. This type of training pipeline is clearly inefficient. Therefore, we select a representative single-stage method, AFA (Ru et al., 2022), as the baseline, where the final segmentation model is trained end-to-end.

Datasets.

In this setting, we conduct experiments on PASCAL VOC 2012 (Everingham et al., 2010), which contains 21 semantic classes (including the background class). Following common practices (Fan et al., 2020c; Fan et al., 2020b; Fan et al., 2020a), it is augmented with the SBD dataset (Hariharan et al., 2011), resulting in 10,582, 1,449, and 1,464 images for training, validation, and testing, respectively.

Network Structure.

We take AFA (Ru et al., 2022) as the baseline, which uses the Mix Transformer (MiT) (Xie et al., 2021) as the backbone. A simple MLP head following (Xie et al., 2021) serves as the segmentation head. The backbone is pre-trained on ImageNet-1K (Deng et al., 2009), while the other parameters are randomly initialized. The representation head is the same as in the semi-supervised setting.

Figure 7: Qualitative results on PASCAL VOC 2012 val set under the weakly supervised setting. (a) Input images. (b) Labels for the corresponding image. (c) Predictions from our baseline AFA (Ru et al., , 2022). (d) Segmentation maps predicted by our U2{}^{\text{2}}PL+.
Evaluation Metric.

By default, we report the mIoU on both the validation set and testing set as the evaluation criteria.

Implementation Details.

Following the standard configuration of AFA (Ru et al., 2022), the AdamW optimizer is used to train the model, with an initial learning rate of 6\times 10^{-5} for the backbone parameters that decays every iteration with a polynomial scheduler. The learning rates for the other parameters are ten times that of the backbone. The weight decay is fixed at 0.01. Random rescaling with a range of [0.5, 2.0], random horizontal flipping, and random cropping with a crop size of 512×512 are adopted. The network is trained for 20,000 iterations with a batch size of 8. All WS experiments are conducted with 2 2080Ti GPUs.
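A minimal sketch of setting up such parameter groups in PyTorch; the module names below are placeholders for the MiT backbone and the MLP head, not the authors' code.

```python
import torch

backbone = torch.nn.Linear(8, 8)     # stands in for the MiT backbone
seg_head = torch.nn.Linear(8, 21)    # stands in for the MLP segmentation head
base_lr = 6e-5
optimizer = torch.optim.AdamW(
    [
        {"params": backbone.parameters(), "lr": base_lr},
        {"params": seg_head.parameters(), "lr": base_lr * 10},  # 10x the backbone rate
    ],
    weight_decay=0.01,
)
```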

Comparisons with State-of-the-Art Alternatives.

In Tab. 16, we compare our proposed U2{}^{\text{2}}PL+ with a wide range of representative state-of-the-art alternatives, including multi-stage methods and single-stage methods. Some multi-stage methods further leverage saliency maps, including OAA+ (Jiang et al., , 2019), MCIS (Sun et al., , 2020), AuxSegNet (Xu et al., 2021a, ), NSROM (Yao et al., , 2021), and EPS (Lee et al., 2021c, ), while others do not, including SEAM (Wang et al., 2020b, ), SC-CAM (Chang et al., , 2020), CDA (Su et al., , 2021), AdvCAM (Lee et al., 2021b, ), CPN (Zhang et al., 2021a, ), and RIB (Lee et al., 2021a, ). Single-stage methods include EM (Papandreou et al., , 2015), MIL (Pinheiro and Collobert, , 2015), CRF-RNN (Roy and Todorovic, , 2017), RRM (Zhang et al., 2020a, ), 1Stage (Araslanov and Roth, , 2020), and AFA (Ru et al., , 2022). All methods are refined using CRF.

Tab. 16 shows that U2{}^{\text{2}}PL+ brings improvements of +1.5% mIoU and +0.9% mIoU over AFA (Ru et al., 2022) on the val set and the test set, respectively. U2{}^{\text{2}}PL+ clearly surpasses previous state-of-the-art single-stage methods by significant margins. It is worth noticing that U2{}^{\text{2}}PL+ manages to achieve competitive results even when compared with multi-stage methods, e.g., OAA+ (Jiang et al., 2019), MCIS (Sun et al., 2020), SEAM (Wang et al., 2020b), SC-CAM (Chang et al., 2020), and CDA (Su et al., 2021).

Effectiveness of Using Unreliable Pseudo-Labels.

To verify our core insight, i.e., that using unreliable pseudo-labels promotes weakly supervised semantic segmentation, we conduct experiments on selecting negative candidates with different reliability in Tab. 17. As illustrated in the table, incorporating unreliable predictions, i.e., \mathbbm{1}[\mathbf{M}_{ij}^{c}<\beta], brings significant improvements. It is worth noticing that, since we do not have any dense annotations, leveraging the image-level annotations, i.e., \mathbbm{1}[y_{i}^{c}=0], when selecting negative candidates becomes crucial for a stable training procedure.
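Under our reading of Eq. (31), a pixel qualifies as a negative key for class c when the image-level label marks c as absent, or when its CAM score for c falls below \beta; the logical-OR combination and the value of \beta below are illustrative assumptions.

```python
import torch

def negative_key_mask(cam, image_label, beta=0.2):
    """cam: (C, H, W) class activation maps in [0, 1];
    image_label: (C,) binary image-level labels.
    Returns a (C, H, W) boolean mask of qualified negative keys."""
    absent = (image_label == 0).view(-1, 1, 1)   # class c is absent from the image
    unreliable = cam < beta                      # low CAM score for class c at pixel j
    return absent | unreliable
```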

Qualitative Results.

Fig. 7 shows the improvements in segmentation results using different methods. Those predictions are refined by CRF (Krähenbühl and Koltun, , 2011). Benefiting from using unreliable pseudo-labels, the model is able to classify ambiguous regions into correct classes.

5 Conclusion

In this paper, we extend our original U2{}^{\text{2}}PL to U2{}^{\text{2}}PL+, a unified framework for label-efficient semantic segmentation, by including unreliable pseudo-labels in training. Our main insight is that unreliable predictions are usually confused only among a few classes, yet they are confident enough about not belonging to the remaining, remote classes. U2{}^{\text{2}}PL+ outperforms many existing state-of-the-art methods in semi-supervised, domain adaptive, and weakly supervised semantic segmentation, suggesting that our framework provides a promising new paradigm for label-efficient learning research. Our ablation experiments confirm that the insight of this work is solid, i.e., significant improvements arise only when unreliable predictions are introduced into the contrastive learning paradigm, consistently across all settings. Qualitative results give visual proof of its effectiveness, especially the better performance on borders between semantic objects and other ambiguous regions.

Declarations

Acknowledgements This work was supported in part by the National Key R&D Program of China (No. 2022ZD0116500), the National Natural Science Foundation of China (No. U21B2042), and in part by the 2035 Innovation Program of CAS, and the InnoHK program.

Data Availability. The datasets generated and/or analyzed during the current study are available from PASCAL VOC 2012 (http://host.robots.ox.ac.uk/pascal/VOC/), SBD (https://ieeexplore.ieee.org/abstract/document/6126343), GTA5 (https://arxiv.org/pdf/1608.02192v1.pdf), Synthia (https://synthia-dataset.net/), and Cityscapes (https://www.cityscapes-dataset.com/).

References

  • Abramov et al., (2020) Abramov, A., Bayer, C., and Heller, C. (2020). Keep it simple: Image statistics matching for domain adaptation. arXiv preprint arXiv:2005.12551.
  • Ahn and Kwak, (2018) Ahn, J. and Kwak, S. (2018). Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In CVPR.
  • Alonso et al., (2021) Alonso, I., Sabater, A., Ferstl, D., Montesano, L., and Murillo, A. C. (2021). Semi-supervised semantic segmentation with pixel-level contrastive learning from a class-wise memory bank. In ICCV.
  • Araslanov and Roth, (2020) Araslanov, N. and Roth, S. (2020). Single-stage semantic segmentation from image labels. In CVPR.
  • Araslanov and Roth, (2021) Araslanov, N. and Roth, S. (2021). Self-supervised augmentation consistency for adapting semantic segmentation. In CVPR.
  • Arazo et al., (2020) Arazo, E., Ortego, D., Albert, P., O’Connor, N. E., and McGuinness, K. (2020). Pseudo-labeling and confirmation bias in deep semi-supervised learning. In International Joint Conference on Neural Networks (IJCNN).
  • Bachman et al., (2014) Bachman, P., Alsharif, O., and Precup, D. (2014). Learning with pseudo-ensembles. NeurIPS.
  • Badrinarayanan et al., (2017) Badrinarayanan, V., Kendall, A., and Cipolla, R. (2017). Segnet: A deep convolutional encoder-decoder architecture for image segmentation. TPAMI, 39(12):2481–2495.
  • Chang et al., (2019) Chang, W.-L., Wang, H.-P., Peng, W.-H., and Chiu, W.-C. (2019). All about structure: Adapting structural information across domains for boosting semantic segmentation. In CVPR.
  • Chang et al., (2020) Chang, Y.-T., Wang, Q., Hung, W.-C., Piramuthu, R., Tsai, Y.-H., and Yang, M.-H. (2020). Weakly-supervised semantic segmentation via sub-category exploration. In CVPR.
  • Chen et al., (2019) Chen, C., Xie, W., Huang, W., Rong, Y., Ding, X., Huang, Y., Xu, T., and Huang, J. (2019). Progressive feature alignment for unsupervised domain adaptation. In CVPR.
  • (12) Chen, H., Jin, Y., Jin, G., Zhu, C., and Chen, E. (2021a). Semi-supervised semantic segmentation by improving prediction confidence. IEEE Transactions on Neural Networks and Learning Systems, 13(1).
  • (13) Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. (2017a). Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI, 40(4):834–848.
  • (14) Chen, L.-C., Papandreou, G., Schroff, F., and Adam, H. (2017b). Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587.
  • Chen et al., (2018) Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018). Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV.
  • Chen et al., (2020) Chen, T., Kornblith, S., Swersky, K., Norouzi, M., and Hinton, G. (2020). Big self-supervised models are strong semi-supervised learners. NeurIPS.
  • (17) Chen, X., Xie, S., and He, K. (2021b). An empirical study of training self-supervised vision transformers. In ICCV.
  • (18) Chen, X., Yuan, Y., Zeng, G., and Wang, J. (2021c). Semi-supervised semantic segmentation with cross pseudo supervision. In CVPR.
  • Cheng et al., (2022) Cheng, B., Misra, I., Schwing, A. G., Kirillov, A., and Girdhar, R. (2022). Masked-attention mask transformer for universal image segmentation. In CVPR.
  • Cheng et al., (2021) Cheng, B., Schwing, A., and Kirillov, A. (2021). Per-pixel classification is not all you need for semantic segmentation. NeurIPS.
  • Choi et al., (2019) Choi, J., Kim, T., and Kim, C. (2019). Self-ensembling with gan-based data augmentation for domain adaptation in semantic segmentation. In ICCV.
  • Cordts et al., (2016) Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., and Schiele, B. (2016). The cityscapes dataset for semantic urban scene understanding. In CVPR.
  • Dai et al., (2015) Dai, J., He, K., and Sun, J. (2015). Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In ICCV.
  • Deng et al., (2009) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In CVPR.
  • DeVries and Taylor, (2017) DeVries, T. and Taylor, G. W. (2017). Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552.
  • Dosovitskiy et al., (2021) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR.
  • (27) Du, Y., Fu, Z., Liu, Q., and Wang, Y. (2022a). Weakly supervised semantic segmentation by pixel-to-prototype contrast. In CVPR.
  • (28) Du, Y., Shen, Y., Wang, H., Fei, J., Li, W., Wu, L., Zhao, R., Fu, Z., and Liu, Q. (2022b). Learning from future: A novel self-training framework for semantic segmentation. NeurIPS.
  • Everingham et al., (2010) Everingham, M., Van Gool, L., Williams, C. K., Winn, J., and Zisserman, A. (2010). The pascal visual object classes (voc) challenge. IJCV, 88(2):303–338.
  • (30) Fan, J., Zhang, Z., Song, C., and Tan, T. (2020a). Learning integral objects with intra-class discriminator for weakly-supervised semantic segmentation. In CVPR.
  • (31) Fan, J., Zhang, Z., and Tan, T. (2020b). Employing multi-estimations for weakly-supervised semantic segmentation. In ECCV.
  • Fan et al., (2022) Fan, J., Zhang, Z., and Tan, T. (2022). Pointly-supervised panoptic segmentation. In ECCV. Springer.
  • (33) Fan, J., Zhang, Z., Tan, T., Song, C., and Xiao, J. (2020c). Cian: Cross-image affinity net for weakly supervised semantic segmentation. In AAAI.
  • French et al., (2020) French, G., Laine, S., Aila, T., Mackiewicz, M., and Finlayson, G. (2020). Semi-supervised semantic segmentation needs strong, varied perturbations. In BMVC.
  • Gong et al., (2019) Gong, R., Li, W., Chen, Y., and Gool, L. V. (2019). Dlow: Domain flow for adaptation and generalization. In CVPR.
  • Goodfellow et al., (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. NeurIPS.
  • Grandvalet and Bengio, (2004) Grandvalet, Y. and Bengio, Y. (2004). Semi-supervised learning by entropy minimization. NeurIPS.
  • Hariharan et al., (2011) Hariharan, B., Arbeláez, P., Bourdev, L., Maji, S., and Malik, J. (2011). Semantic contours from inverse detectors. In ICCV.
  • He et al., (2020) He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. (2020). Momentum contrast for unsupervised visual representation learning. In CVPR.
  • He et al., (2016) He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In CVPR.
  • Hoffman et al., (2018) Hoffman, J., Tzeng, E., Park, T., Zhu, J.-Y., Isola, P., Saenko, K., Efros, A., and Darrell, T. (2018). Cycada: Cycle-consistent adversarial domain adaptation. In ICML.
  • Hoffman et al., (2016) Hoffman, J., Wang, D., Yu, F., and Darrell, T. (2016). Fcns in the wild: Pixel-level adversarial and constraint-based adaptation. arXiv preprint arXiv:1612.02649.
  • Hong et al., (2018) Hong, W., Wang, Z., Yang, M., and Yuan, J. (2018). Conditional generative adversarial network for structured domain adaptation. In CVPR.
  • Hoyer et al., (2022) Hoyer, L., Dai, D., and Van Gool, L. (2022). Daformer: Improving network architectures and training strategies for domain-adaptive semantic segmentation. In CVPR.
  • Hu et al., (2021) Hu, H., Wei, F., Hu, H., Ye, Q., Cui, J., and Wang, L. (2021). Semi-supervised semantic segmentation via adaptive equalization learning. NeurIPS.
  • Ji et al., (2023) Ji, W., Li, J., Bi, Q., Liu, T., Li, W., and Cheng, L. (2023). Segment anything is not always perfect: An investigation of sam on different real-world applications. arXiv preprint arXiv:2304.05750.
  • Jiang et al., (2019) Jiang, P.-T., Hou, Q., Cao, Y., Cheng, M.-M., Wei, Y., and Xiong, H.-K. (2019). Integral object mining via online attention accumulation. In ICCV.
  • Kang et al., (2020) Kang, G., Wei, Y., Yang, Y., Zhuang, Y., and Hauptmann, A. (2020). Pixel-level cycle association: A new perspective for domain adaptive semantic segmentation. NeurIPS.
  • Ke et al., (2020) Ke, Z., Qiu, D., Li, K., Yan, Q., and Lau, R. W. (2020). Guided collaborative training for pixel-wise semi-supervised learning. In ECCV.
  • Kim et al., (2021) Kim, B., Han, S., and Kim, J. (2021). Discriminative region suppression for weakly-supervised semantic segmentation. In AAAI.
  • Kirillov et al., (2023) Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y., et al. (2023). Segment anything. In ICCV.
  • Krähenbühl and Koltun, (2011) Krähenbühl, P. and Koltun, V. (2011). Efficient inference in fully connected crfs with gaussian edge potentials. NeurIPS.
  • Krizhevsky et al., (2009) Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple layers of features from tiny images.
  • (54) Lee, C.-Y., Batra, T., Baig, M. H., and Ulbricht, D. (2019a). Sliced wasserstein discrepancy for unsupervised domain adaptation. In CVPR.
  • Lee et al., (2013) Lee, D.-H. et al. (2013). Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML, volume 3, page 896.
  • (56) Lee, J., Choi, J., Mok, J., and Yoon, S. (2021a). Reducing information bottleneck for weakly supervised semantic segmentation. NeurIPS.
  • (57) Lee, J., Kim, E., Lee, S., Lee, J., and Yoon, S. (2019b). Ficklenet: Weakly and semi-supervised semantic image segmentation using stochastic inference. In CVPR.
  • (58) Lee, J., Kim, E., and Yoon, S. (2021b). Anti-adversarially manipulated attributions for weakly and semi-supervised semantic segmentation. In CVPR.
  • (59) Lee, S., Lee, M., Lee, J., and Shim, H. (2021c). Railroad is not a train: Saliency as pseudo-pixel supervision for weakly supervised semantic segmentation. In CVPR.
  • (60) Li, J., Fan, J., and Zhang, Z. (2022a). Towards noiseless object contours for weakly supervised semantic segmentation. In CVPR.
  • (61) Li, R., Jia, X., He, J., Chen, S., and Hu, Q. (2021a). T-svdnet: Exploring high-order prototypical correlations for multi-source domain adaptation. In ICCV.
  • (62) Li, R., Li, S., He, C., Zhang, Y., Jia, X., and Zhang, L. (2022b). Class-balanced pixel-level self-labeling for domain adaptive semantic segmentation. In CVPR.
  • (63) Li, Y., Kuang, Z., Liu, L., Chen, Y., and Zhang, W. (2021b). Pseudo-mask matters in weakly-supervised semantic segmentation. In ICCV.
  • Li et al., (2019) Li, Y., Yuan, L., and Vasconcelos, N. (2019). Bidirectional learning for domain adaptation of semantic segmentation. In CVPR.
  • Lin et al., (2016) Lin, D., Dai, J., Jia, J., He, K., and Sun, J. (2016). Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. In CVPR.
  • Liu et al., (2021) Liu, S., Zhi, S., Johns, E., and Davison, A. J. (2021). Bootstrapping semantic segmentation with regional contrast. arXiv preprint arXiv:2104.04465.
  • (67) Long, J., Shelhamer, E., and Darrell, T. (2015a). Fully convolutional networks for semantic segmentation. In CVPR.
  • (68) Long, M., Cao, Y., Wang, J., and Jordan, M. (2015b). Learning transferable features with deep adaptation networks. In ICML.
  • Loshchilov and Hutter, (2017) Loshchilov, I. and Hutter, F. (2017). Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
  • Luo et al., (2019) Luo, Y., Zheng, L., Guan, T., Yu, J., and Yang, Y. (2019). Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In CVPR.
  • Ma et al., (2024) Ma, J., He, Y., Li, F., Han, L., You, C., and Wang, B. (2024). Segment anything in medical images. Nature Communications, 15(1):654.
  • Mei et al., (2020) Mei, K., Zhu, C., Zou, J., and Zhang, S. (2020). Instance adaptive self-training for unsupervised domain adaptation. In ECCV.
  • Melas-Kyriazi and Manrai, (2021) Melas-Kyriazi, L. and Manrai, A. K. (2021). Pixmatch: Unsupervised domain adaptation via pixelwise consistency training. In CVPR.
  • Murez et al., (2018) Murez, Z., Kolouri, S., Kriegman, D., Ramamoorthi, R., and Kim, K. (2018). Image to image translation for domain adaptation. In CVPR.
  • Nowozin et al., (2016) Nowozin, S., Cseke, B., and Tomioka, R. (2016). f-gan: Training generative neural samplers using variational divergence minimization. NeurIPS.
  • Olsson et al., (2021) Olsson, V., Tranheden, W., Pinto, J., and Svensson, L. (2021). Classmix: Segmentation-based data augmentation for semi-supervised learning. In CVPR.
  • Oord et al., (2018) Oord, A. v. d., Li, Y., and Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
  • Ouali et al., (2020) Ouali, Y., Hudelot, C., and Tami, M. (2020). Semi-supervised semantic segmentation with cross-consistency training. In CVPR.
  • Papandreou et al., (2015) Papandreou, G., Chen, L.-C., Murphy, K. P., and Yuille, A. L. (2015). Weakly-and semi-supervised learning of a deep convolutional network for semantic image segmentation. In ICCV.
  • Pinheiro and Collobert, (2015) Pinheiro, P. O. and Collobert, R. (2015). From image-level to pixel-level labeling with convolutional networks. In CVPR.
  • Richter et al., (2016) Richter, S. R., Vineet, V., Roth, S., and Koltun, V. (2016). Playing for data: Ground truth from computer games. In ECCV.
  • Ronneberger et al., (2015) Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer Assisted Intervention, pages 234–241. Springer.
  • Ros et al., (2016) Ros, G., Sellart, L., Materzynska, J., Vazquez, D., and Lopez, A. M. (2016). The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In CVPR.
  • Roy and Todorovic, (2017) Roy, A. and Todorovic, S. (2017). Combining bottom-up, top-down, and smoothness cues for weakly supervised image segmentation. In CVPR.
  • Ru et al., (2022) Ru, L., Zhan, Y., Yu, B., and Du, B. (2022). Learning affinity from attention: end-to-end weakly-supervised semantic segmentation with transformers. In CVPR.
  • Ru et al., (2023) Ru, L., Zheng, H., Zhan, Y., and Du, B. (2023). Token contrast for weakly-supervised semantic segmentation. In CVPR.
  • Saito et al., (2018) Saito, K., Watanabe, K., Ushiku, Y., and Harada, T. (2018). Maximum classifier discrepancy for unsupervised domain adaptation. In CVPR.
  • Sajjadi et al., (2016) Sajjadi, M., Javanmardi, M., and Tasdizen, T. (2016). Regularization with stochastic transformations and perturbations for deep semi-supervised learning. NeurIPS.
  • Sakaridis et al., (2021) Sakaridis, C., Dai, D., and Van Gool, L. (2021). Acdc: The adverse conditions dataset with correspondences for semantic driving scene understanding. In ICCV.
  • Sankaranarayanan et al., (2018) Sankaranarayanan, S., Balaji, Y., Jain, A., Lim, S. N., and Chellappa, R. (2018). Learning from synthetic data: Addressing domain shift for semantic segmentation. In CVPR.
  • Sohn et al., (2020) Sohn, K., Berthelot, D., Carlini, N., Zhang, Z., Zhang, H., Raffel, C. A., Cubuk, E. D., Kurakin, A., and Li, C.-L. (2020). Fixmatch: Simplifying semi-supervised learning with consistency and confidence. NeurIPS.
  • Strudel et al., (2021) Strudel, R., Garcia, R., Laptev, I., and Schmid, C. (2021). Segmenter: Transformer for semantic segmentation. In ICCV.
  • Su et al., (2021) Su, Y., Sun, R., Lin, G., and Wu, Q. (2021). Context decoupling augmentation for weakly supervised semantic segmentation. In ICCV.
  • Sun et al., (2020) Sun, G., Wang, W., Dai, J., and Van Gool, L. (2020). Mining cross-image semantics for weakly supervised semantic segmentation. In ECCV. Springer.
  • Sun et al., (2021) Sun, K., Shi, H., Zhang, Z., and Huang, Y. (2021). Ecs-net: Improving weakly supervised semantic segmentation by using connections between class activation maps. In ICCV.
  • Tarvainen and Valpola, (2017) Tarvainen, A. and Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. NeurIPS.
  • Tsai et al., (2018) Tsai, Y.-H., Hung, W.-C., Schulter, S., Sohn, K., Yang, M.-H., and Chandraker, M. (2018). Learning to adapt structured output space for semantic segmentation. In CVPR.
  • Vu et al., (2019) Vu, T.-H., Jain, H., Bucher, M., Cord, M., and Pérez, P. (2019). Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In CVPR.
  • Wan et al., (2020) Wan, Z., Zhang, B., Chen, D., Zhang, P., Chen, D., Liao, J., and Wen, F. (2020). Bringing old photos back to life. In CVPR.
  • (100) Wang, H., Fan, J., Wang, Y., Song, K., Wang, T., and Zhang, Z. (2023a). Droppos: Pre-training vision transformers by reconstructing dropped positions. Advances in Neural Information Processing Systems (NeurIPS).
  • (101) Wang, H., Shen, T., Zhang, W., Duan, L.-Y., and Mei, T. (2020a). Classes matter: A fine-grained adversarial approach to cross-domain semantic segmentation. In ECCV.
  • (102) Wang, H., Shen, Y., Fei, J., Li, W., Wu, L., Wang, Y., and Zhang, Z. (2023b). Pulling target to source: A new perspective on domain adaptive semantic segmentation. arXiv preprint arXiv:2305.13752.
  • (103) Wang, H., Song, K., Fan, J., Wang, Y., Xie, J., and Zhang, Z. (2023c). Hard patches mining for masked image modeling. In CVPR.
  • Wang et al., (2021) Wang, W., Zhou, T., Yu, F., Dai, J., Konukoglu, E., and Van Gool, L. (2021). Exploring cross-image pixel contrast for semantic segmentation. In ICCV.
  • (105) Wang, Y., Chen, H., Fan, Y., Sun, W., Tao, R., Hou, W., Wang, R., Yang, L., Zhou, Z., Guo, L.-Z., Qi, H., Wu, Z., Li, Y.-F., Nakamura, S., Ye, W., Savvides, M., Raj, B., Shinozaki, T., Schiele, B., Wang, J., Xie, X., and Zhang, Y. (2022a). Usb: A unified semi-supervised learning benchmark for classification. In NeurIPS.
  • (106) Wang, Y., Chen, H., Heng, Q., Hou, W., Fan, Y., Wu, Z., Wang, J., Savvides, M., Shinozaki, T., Raj, B., et al. (2023d). Freematch: Self-adaptive thresholding for semi-supervised learning. In ICLR.
  • (107) Wang, Y., Fei, J., Wang, H., Li, W., Wu, L., Zhao, R., and Shen, Y. (2023e). Balancing logit variation for long-tail semantic segmentation. In CVPR.
  • Wang et al., (2019) Wang, Y., Ma, X., Chen, Z., Luo, Y., Yi, J., and Bailey, J. (2019). Symmetric cross entropy for robust learning with noisy labels. In CVPR.
  • (109) Wang, Y., Wang, H., Shen, Y., Fei, J., Li, W., Jin, G., Wu, L., Zhao, R., and Le, X. (2022b). Semi-supervised semantic segmentation using unreliable pseudo labels. In CVPR.
  • (110) Wang, Y., Zhang, J., Kan, M., Shan, S., and Chen, X. (2020b). Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. In CVPR.
  • Wei et al., (2017) Wei, Y., Feng, J., Liang, X., Cheng, M.-M., Zhao, Y., and Yan, S. (2017). Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. In CVPR.
  • Wu et al., (2021) Wu, T., Huang, J., Gao, G., Wei, X., Wei, X., Luo, X., and Liu, C. H. (2021). Embedded discriminative attention mechanism for weakly supervised semantic segmentation. In CVPR.
  • Wu et al., (2019) Wu, Z., Wang, X., Gonzalez, J. E., Goldstein, T., and Davis, L. S. (2019). Ace: Adapting to changing environments for semantic segmentation. In ICCV.
  • Xie et al., (2021) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., and Luo, P. (2021). Segformer: Simple and efficient design for semantic segmentation with transformers. NeurIPS.
  • Xie et al., (2020) Xie, Q., Luong, M.-T., Hovy, E., and Le, Q. V. (2020). Self-training with noisy student improves imagenet classification. In CVPR.
  • Xu et al., (2022) Xu, J., De Mello, S., Liu, S., Byeon, W., Breuel, T., Kautz, J., and Wang, X. (2022). Groupvit: Semantic segmentation emerges from text supervision. In CVPR.
  • (117) Xu, L., Ouyang, W., Bennamoun, M., Boussaid, F., Sohel, F., and Xu, D. (2021a). Leveraging auxiliary tasks with affinity learning for weakly supervised semantic segmentation. In ICCV.
  • (118) Xu, Y., Shang, L., Ye, J., Qian, Q., Li, Y.-F., Sun, B., Li, H., and Jin, R. (2021b). Dash: Semi-supervised learning with dynamic thresholding. In ICML.
  • Yang et al., (2022) Yang, L., Zhuo, W., Qi, L., Shi, Y., and Gao, Y. (2022). St++: Make self-training work better for semi-supervised semantic segmentation. In CVPR.
  • Yao et al., (2021) Yao, Y., Chen, T., Xie, G.-S., Zhang, C., Shen, F., Wu, Q., Tang, Z., and Zhang, J. (2021). Non-salient region object mining for weakly supervised semantic segmentation. In CVPR.
  • Yu and Koltun, (2015) Yu, F. and Koltun, V. (2015). Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122.
  • Yuan et al., (2021) Yuan, J., Liu, Y., Shen, C., Wang, Z., and Li, H. (2021). A simple baseline for semi-supervised semantic segmentation with strong data augmentation. In ICCV.
  • Yun et al., (2019) Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., and Yoo, Y. (2019). Cutmix: Regularization strategy to train strong classifiers with localizable features. In ICCV.
  • (124) Zhang, B., Xiao, J., Wei, Y., Sun, M., and Huang, K. (2020a). Reliability does matter: An end-to-end weakly supervised semantic segmentation approach. In AAAI.
  • (125) Zhang, F., Gu, C., Zhang, C., and Dai, Y. (2021a). Complementary patch for weakly supervised semantic segmentation. In ICCV.
  • (126) Zhang, P., Zhang, B., Chen, D., Yuan, L., and Wen, F. (2020b). Cross-domain correspondence learning for exemplar-based image translation. In CVPR.
  • (127) Zhang, P., Zhang, B., Zhang, T., Chen, D., Wang, Y., and Wen, F. (2021b). Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation. In CVPR.
  • Zhang et al., (2019) Zhang, Q., Zhang, J., Liu, W., and Tao, D. (2019). Category anchor-guided unsupervised domain adaptation for semantic segmentation. NeurIPS.
  • Zhao et al., (2017) Zhao, H., Shi, J., Qi, X., Wang, X., and Jia, J. (2017). Pyramid scene parsing network. In CVPR.
  • Zhao et al., (2021) Zhao, X., Vemulapalli, R., Mansfield, P. A., Gong, B., Green, B., Shapira, L., and Wu, Y. (2021). Contrastive learning for label efficient semantic segmentation. In ICCV.
  • Zheng et al., (2021) Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., Fu, Y., Feng, J., Xiang, T., Torr, P. H., et al. (2021). Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In CVPR.
  • Zhong et al., (2021) Zhong, Y., Yuan, B., Wu, H., Yuan, Z., Peng, J., and Wang, Y.-X. (2021). Pixel contrastive-consistent semi-supervised semantic segmentation. In ICCV.
  • Zhou et al., (2016) Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. (2016). Learning deep features for discriminative localization. In CVPR.
  • Zhou et al., (2021) Zhou, Q., Zhuang, C., Lu, X., and Ma, L. (2021). Domain adaptive semantic segmentation with regional contrastive consistency regularization. arXiv preprint arXiv:2110.05170.
  • Zou et al., (2018) Zou, Y., Yu, Z., Kumar, B., and Wang, J. (2018). Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In ECCV.
  • Zou et al., (2020) Zou, Y., Zhang, Z., Zhang, H., Li, C.-L., Bian, X., Huang, J.-B., and Pfister, T. (2020). Pseudoseg: Designing pseudo labels for semantic segmentation. In ICLR.
  • Zuo et al., (2021) Zuo, S., Yu, Y., Liang, C., Jiang, H., Er, S., Zhang, C., Zhao, T., and Zha, H. (2021). Self-training with differentiable teacher. arXiv preprint arXiv:2109.07049.