Published April 20, 2023 | Version v1
Dataset | Open Access

Low-dose Computed Tomography Perceptual Image Quality Assessment Grand Challenge Dataset (MICCAI 2023)

  • 1. Ewha Womans University
  • 2. Friedrich-Alexander-Universität Erlangen-Nürnberg
  • 3. Stanford University
  • 4. Yonsei University
  • 5. Mayo Clinic

Contributors

Producer:

  • 1. Ewha Womans University
  • 2. Seoul National University Hospital

Description

Image quality assessment (IQA) is extremely important in computed tomography (CT) imaging, since it facilitates the optimization of radiation dose and the development of novel algorithms in medical imaging, such as restoration. In addition, because an excessive radiation dose can harm patients, generating high-quality images from low-dose acquisitions is a popular topic in the medical domain. However, even though the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) are the most widely used evaluation metrics for these algorithms, previous studies have shown that their correlation with radiologists' opinion of image quality is insufficient, since they compute image scores purely from numeric pixel values (1-3). In addition, the need for pristine reference images makes these metrics impractical in real clinical environments, where pristine, high-quality images are often impossible to obtain because of the radiation risk to patients. To overcome these limitations, several studies have aimed to develop no-reference image quality assessment (NR-IQA) metrics that correlate well with radiologists' opinion of image quality without requiring any reference image (2, 4, 5).
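For illustration, the minimal sketch below (not part of the challenge materials; it assumes NumPy and scikit-image and uses synthetic arrays as hypothetical stand-ins for CT slices) shows how PSNR and SSIM are computed and why both require a pristine reference image:

```python
# Minimal sketch: PSNR and SSIM are *full-reference* metrics -- both need a
# pristine reference image, which is rarely available in clinical low-dose CT.
# Assumes NumPy and scikit-image; the arrays below are hypothetical stand-ins.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)

# Stand-ins for a normal-dose reference slice and its low-dose counterpart.
reference = rng.random((512, 512)).astype(np.float32)
low_dose = reference + 0.05 * rng.standard_normal((512, 512)).astype(np.float32)

# Both metrics need the intensity range of the data in addition to the images.
data_range = float(reference.max() - reference.min())

psnr = peak_signal_noise_ratio(reference, low_dose, data_range=data_range)
ssim = structural_similarity(reference, low_dose, data_range=data_range)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```

Without the `reference` array, neither score can be computed, which is exactly the limitation NR-IQA methods aim to remove.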

Nevertheless, due to the lack of open-source datasets specifically for CT IQA, experiments have been conducted on datasets that differ from one another, rendering their results incomparable and making it difficult to establish a standard image quality metric for CT imaging. Moreover, unlike real low-dose CT images, whose quality is degraded by various combinations of artifacts, most studies consider only a single type of artifact (e.g., low-dose noise (6-11), view aliasing (12), metal artifacts (13), scattering (14-16), or motion artifacts (17-22)). Therefore, this challenge aims to 1) evaluate various NR-IQA models on CT images containing complex noise and artifacts, 2) compare their correlations with scores produced by radiologists, and 3) provide insight into which metric for CT imaging correlates best with radiologists' perception.
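A minimal sketch of this kind of evaluation, assuming SciPy and hypothetical score arrays in place of the challenge's radiologist annotations, computes the standard rank and linear correlation coefficients between model outputs and radiologist scores:

```python
# Minimal sketch of correlation-based evaluation of an NR-IQA model.
# The score arrays are hypothetical; the challenge provides its own
# radiologist-annotated test set. Assumes NumPy and SciPy.
import numpy as np
from scipy.stats import spearmanr, kendalltau, pearsonr

# Hypothetical per-image scores.
radiologist_scores = np.array([3.2, 1.5, 4.0, 2.8, 0.9, 3.6])
model_scores = np.array([0.71, 0.35, 0.88, 0.60, 0.22, 0.80])

srocc, _ = spearmanr(radiologist_scores, model_scores)   # rank correlation
krocc, _ = kendalltau(radiologist_scores, model_scores)  # rank correlation
plcc, _ = pearsonr(radiologist_scores, model_scores)     # linear correlation

print(f"SROCC={srocc:.3f}, KROCC={krocc:.3f}, PLCC={plcc:.3f}")
```

Higher rank correlation indicates that the model orders images by quality in the same way radiologists do, even if its absolute score scale differs.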

Furthermore, considering that low-dose CT images are acquired by reducing the number of projections per rotation and by reducing the X-ray tube current, this challenge deals with the combination of the two major artifacts these methods generate, namely sparse-view streaks and noise, so that the best-performing IQA model applicable in real clinical environments can be identified.
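As a rough illustration of how these two degradations arise (this is not the challenge's actual simulation pipeline; it assumes scikit-image and uses the Shepp-Logan phantom as a stand-in for patient data), one can reconstruct from a sparse, noisy sinogram:

```python
# Minimal sketch: fewer projections per rotation -> sparse-view streaks;
# lower tube current -> noise. A phantom stands in for real CT data, and
# Gaussian sinogram noise is a crude surrogate for a proper Poisson model.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()

# Sparse-view acquisition: far fewer projection angles than a full scan.
theta_sparse = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(image, theta=theta_sparse)

# Low-dose noise added in the projection domain.
rng = np.random.default_rng(0)
noisy_sinogram = sinogram + 0.5 * rng.standard_normal(sinogram.shape)

# Filtered back-projection shows the combined streaks and noise.
recon = iradon(noisy_sinogram, theta=theta_sparse)
print(recon.shape)
```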

 

Funding Declaration:

This research was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155966, Artificial Intelligence Convergence Innovation Human Resources Development (Ewha Womans University)), by the National Research Foundation of Korea (NRF-2022R1A2C1092072), and by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT; the Ministry of Trade, Industry and Energy; the Ministry of Health & Welfare; the Ministry of Food and Drug Safety) (Project Numbers: 1711174276, RS-2020-KD000016).

 

References:

  1. Lee W, Cho E, Kim W, Choi J-H. Performance evaluation of image quality metrics for perceptual assessment of low-dose computed tomography images. Medical Imaging 2022: Image Perception, Observer Performance, and Technology Assessment: SPIE, 2022.
  2. Lee W, Cho E, Kim W, Choi H, Beck KS, Yoon HJ, Baek J, Choi J-H. No-reference perceptual CT image quality assessment based on a self-supervised learning framework. Machine Learning: Science and Technology 2022.
  3. Choi D, Kim W, Lee J, Han M, Baek J, Choi J-H. Integration of 2D iteration and a 3D CNN-based model for multi-type artifact suppression in C-arm cone-beam CT. Machine Vision and Applications 2021;32(116):1-14.
  4. Pal D, Patel B, Wang A. SSIQA: Multi-task learning for non-reference CT image quality assessment with self-supervised noise level prediction. 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI): IEEE, 2021; p. 1962-1965.
  5. Mittal A, Moorthy AK, Bovik AC. No-reference image quality assessment in the spatial domain. IEEE Trans Image Process 2012;21(12):4695-4708. doi: 10.1109/TIP.2012.2214050
  6. Lee J-Y, Kim W, Lee Y, Lee J-Y, Ko E, Choi J-H. Unsupervised Domain Adaptation for Low-dose Computed Tomography Denoising. IEEE Access 2022.
  7. Jeon S-Y, Kim W, Choi J-H. MM-Net: Multi-frame and Multi-mask-based Unsupervised Deep Denoising for Low-dose Computed Tomography. IEEE Transactions on Radiation and Plasma Medical Sciences 2022.
  8. Kim W, Lee J, Kang M, Kim JS, Choi J-H. Wavelet subband-specific learning for low-dose computed tomography denoising. PloS one 2022;17(9):e0274308.
  9. Han M, Shim H, Baek J. Low-dose CT denoising via convolutional neural network with an observer loss function. Med Phys 2021;48(10):5727-5742. doi: 10.1002/mp.15161
  10. Kim B, Shim H, Baek J. Weakly-supervised progressive denoising with unpaired CT images. Med Image Anal 2021;71:102065. doi: 10.1016/j.media.2021.102065
  11. Wagner F, Thies M, Gu M, Huang Y, Pechmann S, Patwari M, Ploner S, Aust O, Uderhardt S, Schett G, Christiansen S, Maier A. Ultralow-parameter denoising: Trainable bilateral filter layers in computed tomography. Med Phys 2022;49(8):5107-5120. doi: 10.1002/mp.15718
  12. Kim B, Shim H, Baek J. A streak artifact reduction algorithm in sparse-view CT using a self-supervised neural representation. Med Phys 2022. doi: 10.1002/mp.15885
  13. Kim S, Ahn J, Kim B, Kim C, Baek J. Convolutional neural network-based metal and streak artifacts reduction in dental CT images with sparse-view sampling scheme. Med Phys 2022;49(9):6253-6277. doi: 10.1002/mp.15884
  14. Bier B, Berger M, Maier A, Kachelrieß M, Ritschl L, Müller K, Choi JH, Fahrig R. Scatter correction using a primary modulator on a clinical angiography C-arm CT system. Med Phys 2017;44(9):e125-e137.
  15. Maul N, Roser P, Birkhold A, Kowarschik M, Zhong X, Strobel N, Maier A. Learning-based occupational x-ray scatter estimation. Phys Med Biol 2022;67(7). doi: 10.1088/1361-6560/ac58dc
  16. Roser P, Birkhold A, Preuhs A, Syben C, Felsner L, Hoppe E, Strobel N, Kowarschik M, Fahrig R, Maier A. X-Ray Scatter Estimation Using Deep Splines. IEEE Trans Med Imaging 2021;40(9):2272-2283. doi: 10.1109/TMI.2021.3074712
  17. Maier J, Nitschke M, Choi JH, Gold G, Fahrig R, Eskofier BM, Maier A. Rigid and Non-Rigid Motion Compensation in Weight-Bearing CBCT of the Knee Using Simulated Inertial Measurements. IEEE Trans Biomed Eng 2022;69(5):1608-1619. doi: 10.1109/TBME.2021.3123673
  18. Choi JH, Maier A, Keil A, Pal S, McWalter EJ, Beaupré GS, Gold GE, Fahrig R. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. II. Experiment. Med Phys 2014;41(6 Part 1):061902.
  19. Choi JH, Fahrig R, Keil A, Besier TF, Pal S, McWalter EJ, Beaupré GS, Maier A. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. Part I. Numerical model-based optimization. Med Phys 2013;40(9):091905.
  20. Berger M, Müller K, Aichert A, Unberath M, Thies J, Choi JH, Fahrig R, Maier A. Marker-free motion correction in weight-bearing cone-beam CT of the knee joint. Med Phys 2016;43(3):1235-1248. doi: 10.1118/1.4941012
  21. Ko Y, Moon S, Baek J, Shim H. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module. Med Image Anal 2021;67:101883. doi: 10.1016/j.media.2020.101883
  22. Preuhs A, Manhart M, Roser P, Hoppe E, Huang Y, Psychogios M, Kowarschik M, Maier A. Appearance Learning for Image-Based Motion Estimation in Tomography. IEEE Trans Med Imaging 2020;39(11):3667-3678. doi: 10.1109/TMI.2020.3002695

Files (728.6 MB)

LDCTIQAG2023_train.zip
  Size: 728.6 MB
  md5: dd385eaee0dbf35b503afeef54c2957e