Unsupervised Domain Adaptation for Low-Dose Computed Tomography Denoising

Jaa Yeon Lee, Wonjin Kim, Yebin Lee, Ji Yeon Lee, Eunji Ko, Jang Hwan Choi

Research output: Contribution to journal › Article › peer-review



Deep neural networks have shown great improvements in low-dose computed tomography (CT) denoising. Early deep learning-based low-dose CT denoising algorithms were primarily based on supervised learning. However, supervised learning requires a large number of paired training samples, which is impractical in real-world scenarios. To address this problem, we propose a novel unsupervised domain adaptation approach for low-dose CT denoising. The proposed framework adapts a network pretrained with paired low- and normal-dose phantom images (source domain) to denoise unlabeled low-dose human CT images (target domain). Our framework modifies the action of the domain classifier, enabling the denoising network to be adapted to the target domain. Furthermore, we introduce a new backpropagation method, which we call domain-independent weighted backpropagation. By combining these techniques, we demonstrate that the denoising network can be properly trained without using clinical clean CT images. The experimental results showed that our method exhibited better performance in terms of both objective and perceptual image quality when compared with current unsupervised denoising algorithms. To the best of our knowledge, this is the first application of domain adaptation to CT denoising, and the approach can potentially be extended to other image restoration tasks.
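The abstract's core mechanism — a domain classifier whose gradients steer the denoiser toward the unlabeled target domain — is a hedged sketch of adversarial domain adaptation in general; a common building block is a gradient reversal layer (as in DANN-style methods). The toy example below illustrates only that idea with hand-computed gradients; the paper's actual modifications (the altered domain-classifier action and domain-independent weighted backpropagation) are not reproduced here, and all names are illustrative.

```python
# Minimal, dependency-free sketch of a DANN-style gradient reversal layer,
# a common mechanism behind adversarial domain adaptation. This is NOT the
# paper's exact method; it only shows how a domain classifier's gradient is
# flipped before it reaches the feature extractor / denoiser.

def grl_forward(x):
    # Identity in the forward pass: features flow to the domain
    # classifier unchanged.
    return x

def grl_backward(grad, lam=1.0):
    # In the backward pass the gradient is negated (and scaled by lam),
    # so the feature extractor is trained to CONFUSE the domain classifier.
    return -lam * grad

# Toy scalar model: "feature extractor" f(x) = w * x,
# domain-classifier loss L = 0.5 * (pred - d)^2 with domain label d.
w, x, d = 0.5, 2.0, 0.0
feat = w * x                                  # forward through extractor
pred = grl_forward(feat)                      # forward through GRL (identity)
loss = 0.5 * (pred - d) ** 2                  # domain-classification loss
grad_pred = pred - d                          # dL/dpred
grad_feat = grl_backward(grad_pred, lam=1.0)  # gradient after reversal
grad_w = grad_feat * x                        # gradient reaching extractor weight
```

A plain gradient-descent step on `w` with `grad_w` would therefore move the extractor in the direction that *increases* the domain classifier's loss, which is the adversarial signal that makes source- and target-domain features indistinguishable.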

Original language: English
Pages (from-to): 126580-126592
Number of pages: 13
Journal: IEEE Access
State: Published - 2022

Bibliographical note

Publisher Copyright:
© 2013 IEEE.


Keywords:

  • Low-dose computed tomography (LDCT) denoising
  • deep learning
  • domain adaptation
  • low-dose CT
  • unsupervised learning


