Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics. Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class. However, real-world training data collected with different composition biases often exhibits severe distribution gaps across domains and classes, leading to substantial performance degradation. In this paper, we propose a self-balanced domain generalization framework that adaptively learns the weights of losses to alleviate the bias caused by the different distributions of the multi-domain source data. The self-balanced scheme is based on an auxiliary reweighting network that iteratively updates the weight of each loss conditioned on domain and class information by leveraging balanced meta data. Experimental results demonstrate the effectiveness of our method, which outperforms state-of-the-art works for domain generalization.
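The loss-reweighting idea described above can be illustrated with a minimal toy sketch. This is a hypothetical simplification, not the paper's implementation: instead of an auxiliary reweighting network, it keeps an explicit weight table indexed by (domain, class) and nudges each group's weight up when that group's loss on a small balanced meta set is above average. All data shapes, group sizes, and the update heuristic here are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of (domain, class)-conditioned loss reweighting with a
# balanced meta set. NOT the paper's method: the auxiliary reweighting network
# is replaced by an explicit per-group weight table for clarity.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, y):
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

def make_data(n_per, shift):
    """Toy 2-domain, 2-class Gaussian data; n_per[d][c] samples per group."""
    X, y, d = [], [], []
    for dom_id, sh in enumerate(shift):
        for cls in range(2):
            n = n_per[dom_id][cls]
            X.append(rng.normal(loc=(2 * cls - 1) + sh, size=(n, 2)))
            y += [cls] * n
            d += [dom_id] * n
    return np.vstack(X), np.array(y), np.array(d)

# Severely imbalanced source data vs. a small balanced meta set.
X, y, dom = make_data([[200, 20], [20, 200]], shift=[0.0, 0.5])
Xm, ym, dm = make_data([[20, 20], [20, 20]], shift=[0.0, 0.5])

W = np.zeros((2, 2))        # linear classifier (2 features -> 2 classes)
w_tab = np.ones((2, 2))     # loss weights indexed by (domain, class)

for step in range(200):
    # Weighted training update: each sample's loss is scaled by its group weight.
    p = softmax(X @ W)
    sw = w_tab[dom, y]
    grad = X.T @ ((p - np.eye(2)[y]) * sw[:, None]) / len(y)
    W -= 0.5 * grad

    # Meta step: raise the weight of groups whose meta loss is above average,
    # a crude stand-in for the meta-gradient update of the reweighting network.
    ml = cross_entropy(softmax(Xm @ W), ym)
    for a in range(2):
        for c in range(2):
            mask = (dm == a) & (ym == c)
            w_tab[a, c] += 0.05 * (ml[mask].mean() - ml.mean())
    w_tab = np.clip(w_tab, 0.1, 10.0)
    w_tab /= w_tab.mean()
```

After training, the classifier can be evaluated on the balanced meta set; the per-group weights drift away from uniform, upweighting the minority (domain, class) groups.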
|Title of host publication
|2021 IEEE International Conference on Image Processing, ICIP 2021 - Proceedings
|IEEE Computer Society
|Number of pages
|Published - 2021
|2021 IEEE International Conference on Image Processing, ICIP 2021 - Anchorage, United States
Duration: 19 Sep 2021 → 22 Sep 2021
|Proceedings - International Conference on Image Processing, ICIP
|2021 IEEE International Conference on Image Processing, ICIP 2021
|19/09/21 → 22/09/21
Bibliographical note
Funding Information:
This research was supported by the Yonsei University Research Fund of 2021 (2021-22-0001).
∗Corresponding author. This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00056, To create AI systems that act appropriately and effectively in novel situations that occur in open worlds).
© 2021 IEEE
- Class imbalance
- Domain generalization
- Domain imbalance