PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency Training

    Luke Melas-Kyriazi 1,2
    Arjun K. Manrai 1,3
    1 Harvard University
    2 Oxford University
    3 Boston Children's Hospital

Abstract

Unsupervised domain adaptation is a promising technique for semantic segmentation and other computer vision tasks for which large-scale data annotation is costly and time-consuming. In semantic segmentation, it is attractive to train models on annotated images from a simulated (source) domain and deploy them on real (target) domains. In this work, we present a novel framework for unsupervised domain adaptation based on the notion of target-domain consistency training. Intuitively, our work is based on the idea that in order to perform well on the target domain, a model’s output should be consistent with respect to small perturbations of inputs in the target domain. Specifically, we introduce a new loss term to enforce pixelwise consistency between the model's predictions on a target image and a perturbed version of the same image. In comparison to popular adversarial adaptation methods, our approach is simpler, easier to implement, and more memory-efficient during training. Experiments and extensive ablation studies demonstrate that our simple approach achieves remarkably strong results on two challenging synthetic-to-real benchmarks, GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes.

Approach

PixMatch, our proposed approach to unsupervised domain adaptation for semantic segmentation, employs consistency training and pseudolabeling to enforce prediction consistency on the target domain.
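To make the consistency term concrete, below is a minimal PyTorch sketch of pixelwise consistency training with pseudolabels. The names `model` and `augment`, the 0.9 confidence threshold, and the confidence-masking scheme are illustrative assumptions, not the exact settings used in the paper; it also assumes the perturbation is photometric, so that pixel correspondence between the clean and perturbed images is preserved.

import torch
import torch.nn.functional as F

def pixelwise_consistency_loss(model, target_image, augment, threshold=0.9):
    """Cross-entropy between predictions on a perturbed target image and
    pseudolabels computed from the unperturbed image."""
    with torch.no_grad():
        # Pseudolabels from the clean (unperturbed) target image, no gradient.
        logits = model(target_image)                  # (B, C, H, W)
        probs = torch.softmax(logits, dim=1)
        confidence, pseudo_labels = probs.max(dim=1)  # each (B, H, W)

    # Predictions on a perturbed version of the same image.
    perturbed_logits = model(augment(target_image))

    # Per-pixel cross-entropy against the pseudolabels, keeping only
    # pixels whose pseudolabel confidence exceeds the threshold.
    loss = F.cross_entropy(perturbed_logits, pseudo_labels, reduction="none")
    mask = (confidence >= threshold).float()
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

In the full method, a term of this form is combined with the standard supervised cross-entropy loss on labeled source images during training.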

Examples

Qualitative examples of our consistency training method and prior methods on SYNTHIA-to-Cityscapes. The final column shows our baseline model with augmentation-based perturbations. Note that these images are not hand-picked; they are the first 5 images in the Cityscapes validation set.

Citation

@inproceedings{melaskyriazi2021pixmatch,
    title={PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency Training},
    author={Luke Melas-Kyriazi and Arjun K. Manrai},
    year={2021},
    booktitle={CVPR}
}

Related Work