Figure 2.
Graphical representation of the unsupervised domain adaptation process. A task loss (e.g., a cross-entropy loss) is used for a supervised training stage on the source domain using the semantic annotations. Unsupervised adaptation to the unlabeled target data can be performed at different levels (e.g., input, feature, or output) with different strategies.
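To make the scheme of Figure 2 concrete, the snippet below is a minimal sketch, assuming a PyTorch-style setup, of one training step combining the supervised cross-entropy task loss on labeled source data with an output-level adversarial adaptation loss on unlabeled target data. The names segmentation_net, discriminator, opt_seg, opt_disc, and lambda_adv are hypothetical placeholders and do not refer to any specific method surveyed here; input- and feature-level adaptation would follow the same pattern applied at different points of the pipeline.

```python
import torch
import torch.nn.functional as F

# Hypothetical modules (placeholders, not from any specific surveyed method):
# segmentation_net: images (B, 3, H, W) -> logits (B, num_classes, H, W)
# discriminator:    softmax maps (B, num_classes, H, W) -> source/target scores

def train_step(segmentation_net, discriminator, opt_seg, opt_disc,
               src_images, src_labels, tgt_images, lambda_adv=0.001):
    # Supervised task loss on the labeled source domain (cross-entropy).
    src_logits = segmentation_net(src_images)
    task_loss = F.cross_entropy(src_logits, src_labels, ignore_index=255)

    # Output-level adversarial adaptation on unlabeled target data:
    # push target predictions to look source-like to the discriminator.
    tgt_logits = segmentation_net(tgt_images)
    tgt_prob = F.softmax(tgt_logits, dim=1)
    d_tgt_for_gen = discriminator(tgt_prob)
    adv_loss = F.binary_cross_entropy_with_logits(
        d_tgt_for_gen, torch.ones_like(d_tgt_for_gen))

    opt_seg.zero_grad()
    (task_loss + lambda_adv * adv_loss).backward()
    opt_seg.step()

    # Discriminator update: distinguish source predictions (label 1)
    # from target predictions (label 0); detach to freeze the segmenter.
    src_prob = F.softmax(src_logits.detach(), dim=1)
    d_src = discriminator(src_prob)
    d_tgt = discriminator(tgt_prob.detach())
    disc_loss = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) +
                 F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))

    opt_disc.zero_grad()
    disc_loss.backward()
    opt_disc.step()
    return task_loss.item(), adv_loss.item(), disc_loss.item()
```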
Figure 3.
Autonomous cars, industrial robots and home assistant robots are just some of the possible real-world applications of Unsupervised Domain Adaptation (UDA) in semantic segmentation. (The images are modified versions of pictures obtained with kind permission from Shutterstock, Inc., New York, NY, USA. The original versions were created (from left to right) by Scharfsinn, Monopoly919 and PaO_Studio.)
Figure 7.
Mean IoU (mIoU) of different methods grouped by backbone in the scenario adapting source knowledge from GTA5 to Cityscapes (see Table 1). Backbones are sorted by decreasing number of entries. Orange crosses represent the per-backbone mean mIoU. Only backbones with 3 or more entries are displayed.
Table 1.
Mean IoU (mIoU) for different methods grouped by backbone in the scenario adapting source knowledge from GTA5 to Cityscapes.
Method | Backbone | mIoU | Method | Backbone | mIoU
------ | -------- | ---- | ------ | -------- | ----
Biasetton et al. [65] | ResNet-101 | 30.4 | Chen et al. [46] | VGG-16 | 35.9
Chang et al. [62] | ResNet-101 | 45.4 | Chen et al. [51] | VGG-16 | 38.1
Chen et al. [46] | ResNet-101 | 39.4 | Choi et al. [78] | VGG-16 | 42.5
Chen et al. [95] | ResNet-101 | 46.4 | Du et al. [55] | VGG-16 | 37.7
Du et al. [55] | ResNet-101 | 45.4 | Hoffman et al. [45] | VGG-16 | 27.1
Gong et al. [75] | ResNet-101 | 42.3 | Hoffman et al. [50] | VGG-16 | 35.4
Hoffman et al. [50] | ResNet-101 | 42.7 * | Huang et al. [49] | VGG-16 | 32.6
Li et al. [48] | ResNet-101 | 48.5 | Li et al. [48] | VGG-16 | 41.3
Lian et al. [101] | ResNet-101 | 47.4 | Lian et al. [101] | VGG-16 | 37.2
Luo et al. [52] | ResNet-101 | 42.6 | Luo et al. [52] | VGG-16 | 34.2
Luo et al. [63] | ResNet-101 | 43.2 | Luo et al. [63] | VGG-16 | 36.6
Michieli et al. [66] | ResNet-101 | 33.3 | Saito et al. [89] | VGG-16 | 28.8
Spadotto et al. [67] | ResNet-101 | 35.1 | Sankaranarayanan et al. [59] | VGG-16 | 37.1
Tsai et al. [60] | ResNet-101 | 42.4 | Tsai et al. [60] | VGG-16 | 35.0
Tsai et al. [70] | ResNet-101 | 46.5 | Tsai et al. [70] | VGG-16 | 37.5
Vu et al. [68] | ResNet-101 | 45.5 | Vu et al. [68] | VGG-16 | 36.1
Wu et al. [82] | ResNet-101 | 38.5 | Wu et al. [82] | VGG-16 | 36.2
Yang et al. [25] | ResNet-101 | 50.5 | Yang et al. [25] | VGG-16 | 42.2
Zhang et al. [47] | ResNet-101 | 47.8 | Zhang et al. [96] | VGG-16 | 28.9
Zou et al. [94] | ResNet-101 | 47.1 | Zhang et al. [97] | VGG-16 | 31.4
Murez et al. [58] | ResNet-34 | 31.8 | Zhou et al. [71] | VGG-16 | 47.8
Lian et al. [101] | ResNet-38 | 48.0 | Zhu et al. [57] | VGG-16 | 38.1 *
Zou et al. [93] | ResNet-38 | 47.0 | Zou et al. [93] | VGG-16 | 36.1
Zou et al. [94] | ResNet-38 | 49.8 | Hong et al. [79] | VGG-19 | 44.5
Lee et al. [91] | ResNet-50 | 35.8 | Chen et al. [51] | DRN-26 | 45.1
Saito et al. [88] | ResNet-50 | 33.3 | Dundar et al. [84] | DRN-26 | 38.3
Wu et al. [82] | ResNet-50 | 41.7 | Hoffman et al. [50] | DRN-26 | 39.5
Hoffman et al. [50] | MobileNet-v2 | 37.3 * | Huang et al. [49] | DRN-26 | 40.2
Toldo et al. [53] | MobileNet-v2 | 41.1 | Liu et al. [120] | DRN-26 | 39.1 *
Zhu et al. [76] | MobileNet-v2 | 29.3 * | Yang et al. [74] | DRN-26 | 42.6
Murez et al. [58] | DenseNet | 35.7 | Zhu et al. [76] | DRN-26 | 39.6 *
Huang et al. [49] | ERFNet | 31.3 | Saito et al. [89] | DRN-105 | 39.7
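The per-backbone statistics visualized in Figure 7 (the orange crosses) can be reproduced from the entries of Table 1 with a few lines of code. The following is a minimal sketch assuming the table has been transcribed into a list of (method, backbone, mIoU) tuples; only a handful of entries from Table 1 are included for brevity, and the variable names are illustrative.

```python
from collections import defaultdict

# A few (method, backbone, mIoU) entries transcribed from Table 1 as an example;
# the full table would be transcribed the same way.
entries = [
    ("Biasetton et al. [65]", "ResNet-101", 30.4),
    ("Chang et al. [62]", "ResNet-101", 45.4),
    ("Chen et al. [46]", "ResNet-101", 39.4),
    ("Chen et al. [46]", "VGG-16", 35.9),
    ("Chen et al. [51]", "VGG-16", 38.1),
    ("Chen et al. [51]", "DRN-26", 45.1),
]

# Group scores by backbone, keep only backbones with 3 or more entries (as in
# Figure 7), sort by decreasing number of entries, and report the mean mIoU.
by_backbone = defaultdict(list)
for _, backbone, miou in entries:
    by_backbone[backbone].append(miou)

for backbone, scores in sorted(by_backbone.items(),
                               key=lambda kv: len(kv[1]), reverse=True):
    if len(scores) >= 3:
        mean_miou = sum(scores) / len(scores)
        print(f"{backbone}: {len(scores)} entries, mean mIoU = {mean_miou:.1f}")
```

With the example entries above, the script reports the mean mIoU only for ResNet-101, since it is the only backbone with at least three transcribed entries.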