TY - JOUR
T1 - Single Image Deraining Using Time-Lapse Data
AU - Cho, Jaehoon
AU - Kim, Seungryong
AU - Min, Dongbo
AU - Sohn, Kwanghoon
N1 - Funding Information:
Manuscript received December 23, 2019; revised May 3, 2020; accepted May 25, 2020. Date of publication June 12, 2020; date of current version July 13, 2020. This work was supported in part by the Research and Development Program for Advanced Integrated-Intelligence for Identification (AIID) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT under Grant NRF-2018M3E3A1057289 and in part by the Ewha Womans University Research Grant of 2018. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Emanuele Salerno. (Corresponding authors: Dongbo Min; Kwanghoon Sohn.) Jaehoon Cho and Kwanghoon Sohn are with the School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, South Korea (e-mail: rehoon@yonsei.ac.kr; khsohn@yonsei.ac.kr).
Publisher Copyright:
© 2020 IEEE.
PY - 2020
Y1 - 2020
N2 - Leveraging recent advances in deep convolutional neural networks (CNNs), single image deraining has been studied as a learning task, achieving outstanding performance over traditional hand-designed approaches. Current CNN-based deraining approaches adopt a supervised learning framework that uses massive training data generated with synthetic rain streaks, and thus have limited generalization ability on real rainy images. To address this problem, we propose a novel learning framework for single image deraining that leverages time-lapse sequences instead of synthetic image pairs. The deraining networks are trained on time-lapse sequences in which both the camera and the scene are static except for time-varying rain streaks. Specifically, we formulate a background consistency loss such that the deraining networks consistently generate the same derained image from the frames of a time-lapse sequence. We additionally introduce two loss functions: a structure similarity loss that encourages the derained image to be structurally similar to the input rainy image, and a directional gradient loss based on the assumption that the estimated rain streaks are likely to be sparse and have dominant directions. To handle various rain conditions, we leverage a dynamic fusion module that effectively fuses multi-scale features. We also build a novel large-scale time-lapse dataset providing real-world rainy images under various rain conditions. Experiments demonstrate that the proposed method outperforms state-of-the-art techniques on synthetic and real rainy images, both qualitatively and quantitatively. We further show that the proposed method can serve as a preprocessing step for high-level vision tasks under severe rainy conditions.
AB - Leveraging recent advances in deep convolutional neural networks (CNNs), single image deraining has been studied as a learning task, achieving outstanding performance over traditional hand-designed approaches. Current CNN-based deraining approaches adopt a supervised learning framework that uses massive training data generated with synthetic rain streaks, and thus have limited generalization ability on real rainy images. To address this problem, we propose a novel learning framework for single image deraining that leverages time-lapse sequences instead of synthetic image pairs. The deraining networks are trained on time-lapse sequences in which both the camera and the scene are static except for time-varying rain streaks. Specifically, we formulate a background consistency loss such that the deraining networks consistently generate the same derained image from the frames of a time-lapse sequence. We additionally introduce two loss functions: a structure similarity loss that encourages the derained image to be structurally similar to the input rainy image, and a directional gradient loss based on the assumption that the estimated rain streaks are likely to be sparse and have dominant directions. To handle various rain conditions, we leverage a dynamic fusion module that effectively fuses multi-scale features. We also build a novel large-scale time-lapse dataset providing real-world rainy images under various rain conditions. Experiments demonstrate that the proposed method outperforms state-of-the-art techniques on synthetic and real rainy images, both qualitatively and quantitatively. We further show that the proposed method can serve as a preprocessing step for high-level vision tasks under severe rainy conditions.
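N1 - Illustrative loss sketch: the abstract names three training losses (background consistency, structure similarity, directional gradient). The PyTorch-style sketch below shows one plausible way to realize them; the function names, the simplified SSIM-style term, and the loss weights are assumptions for illustration, not the authors' released code.

import torch
import torch.nn.functional as F

def background_consistency_loss(derained_a, derained_b):
    # Frames of one time-lapse sequence share a static background,
    # so their derained outputs should agree (L1 distance here).
    return F.l1_loss(derained_a, derained_b)

def structure_similarity_loss(derained, rainy):
    # Keeps the derained image structurally close to the rainy input;
    # a simplified SSIM-style term over 3x3 local windows.
    mu_x = F.avg_pool2d(derained, 3, 1, 1)
    mu_y = F.avg_pool2d(rainy, 3, 1, 1)
    var_x = F.avg_pool2d(derained ** 2, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(rainy ** 2, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(derained * rainy, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return (1 - ssim).mean()

def directional_gradient_loss(rain_streaks):
    # Streaks are assumed sparse with a dominant (near-vertical)
    # direction, so horizontal gradients of the streak map are
    # penalized; the paper's handling of general dominant
    # directions may differ.
    grad_x = rain_streaks[:, :, :, 1:] - rain_streaks[:, :, :, :-1]
    return grad_x.abs().mean()

# Hypothetical usage with two (N, C, H, W) frames of one sequence;
# `net`, `rainy_a`, and `rainy_b` are assumed, and the 0.1 weights
# are placeholders rather than values from the paper.
out_a, out_b = net(rainy_a), net(rainy_b)
streaks = rainy_a - out_a  # estimated rain layer
loss = (background_consistency_loss(out_a, out_b)
        + 0.1 * structure_similarity_loss(out_a, rainy_a)
        + 0.1 * directional_gradient_loss(streaks))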
KW - Single image deraining
KW - convolutional neural networks (CNNs)
KW - dynamic fusion module
KW - time-lapse dataset
UR - http://www.scopus.com/inward/record.url?scp=85088304878&partnerID=8YFLogxK
U2 - 10.1109/TIP.2020.3000612
DO - 10.1109/TIP.2020.3000612
M3 - Article
AN - SCOPUS:85088304878
SN - 1057-7149
VL - 29
SP - 7274
EP - 7289
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
M1 - 9115884
ER -