No-reference perceptual CT image quality assessment based on a self-supervised learning framework

Wonkyeong Lee, Eunbyeol Cho, Wonjin Kim, Hyebin Choi, Kyongmin Sarah Beck, Hyun Jung Yoon, Jongduk Baek, Jang Hwan Choi

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

Accurate image quality assessment (IQA) is crucial to optimize computed tomography (CT) image protocols while keeping the radiation dose as low as reasonably achievable. In the medical domain, IQA is based on how well an image provides a useful and efficient presentation necessary for physicians to make a diagnosis. Moreover, IQA results should be consistent with radiologists’ opinions on image quality, which is accepted as the gold standard for medical IQA. As such, the goals of medical IQA are greatly different from those of natural image IQA. In addition, the lack of pristine reference images or radiologists’ opinions in a real-time clinical environment makes IQA challenging. Thus, no-reference IQA (NR-IQA) is more desirable in clinical settings than full-reference IQA (FR-IQA). Leveraging an innovative self-supervised training strategy for object detection models by detecting virtually inserted objects with geometrically simple forms, we propose a novel NR-IQA method, named deep detector IQA (D2IQA), that can automatically calculate the quantitative quality of CT images. Extensive experimental evaluations on clinical and anthropomorphic phantom CT images demonstrate that our D2IQA is capable of robustly computing perceptual image quality as it varies according to relative dose levels. Moreover, when considering the correlation between the evaluation results of IQA metrics and radiologists’ quality scores, our D2IQA is marginally superior to other NR-IQA metrics and even shows performance competitive with FR-IQA metrics.
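The core idea of the abstract — scoring image quality by how detectable a virtually inserted, geometrically simple object is — can be illustrated with a toy sketch. The snippet below is NOT the authors' D2IQA implementation (which trains an object detection model); it is a minimal, hypothetical stand-in that inserts a disk into a synthetic 2-D image and uses the object's contrast-to-noise ratio as a crude detectability proxy. The helper names (`insert_disk`, `detectability_score`) and all parameters are illustrative assumptions; a lower simulated dose is modeled simply as higher additive noise.

```python
import numpy as np

def insert_disk(image, center, radius, intensity):
    """Insert a simple disk-shaped object into a 2-D image (hypothetical helper)."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out = image.copy()
    out[mask] += intensity
    return out, mask

def detectability_score(noisy, clean, mask):
    """Contrast-to-noise ratio of the inserted object: a crude stand-in for
    the detection-confidence signal a trained detector would provide."""
    contrast = noisy[mask].mean() - noisy[~mask].mean()
    noise = (noisy - clean).std()  # residual noise level
    return contrast / noise

rng = np.random.default_rng(0)
background = np.full((128, 128), 100.0)
inserted, mask = insert_disk(background, center=(64, 64), radius=6, intensity=20.0)

# Higher simulated dose -> lower noise -> the object is easier to detect,
# so the score rises, mirroring the dose/quality trend described in the abstract.
for label, sigma in [("low dose", 30.0), ("high dose", 5.0)]:
    noisy = inserted + rng.normal(0.0, sigma, inserted.shape)
    print(label, round(detectability_score(noisy, inserted, mask), 2))
```

In the actual method, the detectability signal comes from a self-supervised object detector rather than a closed-form CNR, but the monotonic relationship between dose level and detectability is the same intuition.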

Original language: English
Article number: 045033
Journal: Machine Learning: Science and Technology
Volume: 3
Issue number: 4
DOIs
State: Published - 1 Dec 2022

Bibliographical note

Funding Information:
This work was partly supported by the Technology development Program of MSS [S3146559], the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2022M3A9I2017587, NRF-2022R1A2C1092072), and by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Nos. 1711174276, RS-2020-KD000016). This work was also partly supported by Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government (21YR2400, Development of image and medical intelligence core technology for rehabilitation diagnosis and treatment of brain and spinal cord diseases).

Publisher Copyright:
© 2022 The Author(s). Published by IOP Publishing Ltd

Keywords

  • computed tomography
  • no-reference image quality assessment
  • perceptual image quality
  • radiation dose
  • self-supervised learning

