Abstract
An efficient deep-detector image quality assessment (EDIQA) model is proposed to address the need for objective and efficient medical image quality assessment (IQA) without reference images or ground-truth scores from expert radiologists. Existing methods fall short in diagnostic quality and computational efficiency, especially when reference images are unavailable. EDIQA leverages knowledge distillation in a two-stage training procedure: the modified deep-detector IQA (mD2IQA), a task-based IQA model, serves as the teacher, and a novel student model is designed for effective learning. This approach enables the student model to compute task-based image scores without complex signal insertion and multiple predictions, yielding a speedup of more than 1.6 × 10⁴ over the teacher model. A deep-learning architecture is developed that allows the student model to learn hierarchical multiscale image features, from low-level detail to high-level semantics. Rigorous evaluations demonstrate that the proposed model generalizes across modalities and anatomical regions, marking a step toward a universal IQA metric in medical imaging.
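The teacher-student distillation described in the abstract can be illustrated with a minimal sketch: an expensive "teacher" scorer labels training images, and a lightweight student is fit to regress those scores directly, so it can later score images in a single cheap forward pass. The linear models, feature vectors, and function names below are illustrative assumptions, not the authors' actual mD2IQA or student architecture.

```python
# Toy knowledge-distillation sketch (assumed setup, not the paper's model).
# The teacher stands in for the slow task-based mD2IQA scorer; the student
# is a linear regressor trained by plain stochastic gradient descent on the
# teacher's outputs, mimicking the second (distillation) training stage.

def teacher_score(features):
    # Stand-in for the expensive task-based teacher: a fixed scoring rule.
    return sum(f * w for f, w in zip(features, [0.5, 0.3, 0.2]))

def distill(dataset, lr=0.05, epochs=2000):
    # Fit student weights to the teacher's scores with an MSE objective.
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x in dataset:
            target = teacher_score(x)          # teacher label (no ground truth needed)
            pred = sum(f * w for f, w in zip(x, weights))
            err = pred - target                # gradient of 0.5 * err**2 w.r.t. pred
            weights = [w - lr * err * f for w, f in zip(weights, x)]
    return weights

dataset = [[1.0, 0.2, 0.5], [0.3, 0.8, 0.1], [0.6, 0.4, 0.9]]
student = distill(dataset)
```

After training, the student reproduces the teacher's scores without invoking the teacher, which is the mechanism behind the reported speedup: the cost of signal insertion and repeated predictions is paid once, at training time.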
Original language | English
---|---
Article number | 4501715
Pages (from-to) | 1-15
Number of pages | 15
Journal | IEEE Transactions on Instrumentation and Measurement
Volume | 73
DOIs |
State | Published - 2024
Bibliographical note
Publisher Copyright: © 1963-2012 IEEE.
Keywords
- Deep learning
- diagnostic quality
- image quality assessment (IQA)
- knowledge distillation
- medical image quality
- no-reference IQA
- visual perception