TY - JOUR
T1 - No-reference perceptual CT image quality assessment based on a self-supervised learning framework
AU - Lee, Wonkyeong
AU - Cho, Eunbyeol
AU - Kim, Wonjin
AU - Choi, Hyebin
AU - Beck, Kyongmin Sarah
AU - Yoon, Hyun Jung
AU - Baek, Jongduk
AU - Choi, Jang Hwan
N1 - Funding Information:
This work was partly supported by the Technology development Program of MSS [S3146559], the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2022M3A9I2017587, NRF-2022R1A2C1092072), and by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Nos. 1711174276, RS-2020-KD000016). This work was also partly supported by Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government (21YR2400, Development of image and medical intelligence core technology for rehabilitation diagnosis and treatment of brain and spinal cord diseases).
Publisher Copyright:
© 2022 The Author(s). Published by IOP Publishing Ltd
PY - 2022/12/1
Y1 - 2022/12/1
AB - Accurate image quality assessment (IQA) is crucial to optimize computed tomography (CT) image protocols while keeping the radiation dose as low as reasonably achievable. In the medical domain, IQA is based on how well an image provides a useful and efficient presentation necessary for physicians to make a diagnosis. Moreover, IQA results should be consistent with radiologists’ opinions on image quality, which is accepted as the gold standard for medical IQA. As such, the goals of medical IQA are greatly different from those of natural IQA. In addition, the lack of pristine reference images or radiologists’ opinions in a real-time clinical environment makes IQA challenging. Thus, no-reference IQA (NR-IQA) is more desirable in clinical settings than full-reference IQA (FR-IQA). Leveraging an innovative self-supervised training strategy for object detection models by detecting virtually inserted objects with geometrically simple forms, we propose a novel NR-IQA method, named deep detector IQA (D2IQA), that can automatically calculate the quantitative quality of CT images. Extensive experimental evaluations on clinical and anthropomorphic phantom CT images demonstrate that our D2IQA is capable of robustly computing perceptual image quality as it varies according to relative dose levels. Moreover, when considering the correlation between the evaluation results of IQA metrics and radiologists’ quality scores, our D2IQA is marginally superior to other NR-IQA metrics and even shows performance competitive with FR-IQA metrics.
KW - computed tomography
KW - no-reference image quality assessment
KW - perceptual image quality
KW - radiation dose
KW - self-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85145662107&partnerID=8YFLogxK
U2 - 10.1088/2632-2153/aca87d
DO - 10.1088/2632-2153/aca87d
M3 - Article
AN - SCOPUS:85145662107
SN - 2632-2153
VL - 3
JO - Machine Learning: Science and Technology
JF - Machine Learning: Science and Technology
IS - 4
M1 - 045033
ER -