Nuklearmedizin 2021; 60(02): 150
DOI: 10.1055/s-0041-1726739
WIS-Vortrag
Medizinische Physik

Development of a deep learning method for CT-free attenuation correction for an ultra-long axial field of view PET scanner

S Xue (1), KP Bohn (1), R Guo (2), H Sari (1, 3), M Viscione (1), A Rominger (1), B Li (2), K Shi (1, 4)

1   University of Bern, Dept. Nuclear Medicine, Bern, Switzerland
2   Shanghai Jiaotong University, Dept. Nuclear Medicine, Shanghai, China
3   Siemens Healthcare AG, Advanced Clinical Imaging Technology, Lausanne, Switzerland
4   Dept. Informatics, Technical University of Munich, Germany

Ziel/Aim The possibility of reducing the radiation dose with ultra-high-sensitivity total-body PET makes the attenuation computed tomography (CT) scan a critical contributor to the radiation burden in clinical applications. Artificial intelligence has shown the potential to generate attenuation-corrected PET images directly from non-attenuation-corrected PET images. Our aim in this work is to develop a CT-free attenuation correction (AC) method for an ultra-long axial field of view (FOV) PET scanner.

Methodik/Methods Whole-body PET images of 165 patients scanned with digital regular FOV PET scanners (Siemens Biograph Vision, in Shanghai and Bern) were included for the development and testing of the deep learning method. Furthermore, the developed algorithm was tested on data of 10 patients scanned with an ultra-long axial FOV scanner (Siemens Biograph Vision Quadra, in Bern). A 2D generative adversarial network (GAN) was developed featuring a residual dense block, which enables the model to fully exploit hierarchical features from all network layers. The normalized root mean squared error (NRMSE) and peak signal-to-noise ratio (PSNR) were calculated to evaluate the results generated by deep learning.
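As an illustration of the architectural component named above, the following is a minimal PyTorch sketch of a residual dense block of the kind used in image-to-image GAN generators. It is not the authors' implementation; the layer count, channel widths, activation and residual scaling are assumptions chosen for illustration.

# Minimal sketch of a residual dense block (assumed hyperparameters, not the authors' code).
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Dense connections feed all preceding feature maps into each convolution,
    letting the block exploit hierarchical features; a scaled residual connection
    adds the block input back to the fused output."""
    def __init__(self, channels=64, growth=32, n_layers=4, res_scale=0.2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1)
            for i in range(n_layers)
        )
        # 1x1 convolution fuses all accumulated features back to the input width
        self.fuse = nn.Conv2d(channels + n_layers * growth, channels, kernel_size=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)
        self.res_scale = res_scale

    def forward(self, x):
        features = [x]
        for conv in self.convs:
            features.append(self.act(conv(torch.cat(features, dim=1))))
        fused = self.fuse(torch.cat(features, dim=1))
        return x + self.res_scale * fused  # local residual learning

# Example: a batch of 2D feature maps from PET slices (batch, channels, H, W)
block = ResidualDenseBlock(channels=64)
x = torch.randn(1, 64, 128, 128)
print(block(x).shape)  # torch.Size([1, 64, 128, 128])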

Ergebnisse/Results The preliminary results showed that the developed deep learning method achieved an average NRMSE of 0.4 ± 0.3 % and a PSNR of 51.4 ± 6.4 for the test on the Biograph Vision, and an average NRMSE of 1.0 ± 0.3 % and a PSNR of 40.3 ± 3.1 for the validation on the Biograph Vision Quadra.
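For reference, the following is a minimal NumPy sketch of how NRMSE and PSNR values such as those reported above are commonly computed. The abstract does not state the exact definitions used; in particular, normalizing the RMSE by the reference dynamic range and taking the reference maximum as the PSNR peak are assumptions.

# Minimal sketch of NRMSE and PSNR between a reference (e.g. CT-based AC PET)
# and a predicted (deep-learning AC) image; definitions are assumed, see lead-in.
import numpy as np

def nrmse(reference, prediction):
    """Root mean squared error normalized by the reference dynamic range, in %."""
    rmse = np.sqrt(np.mean((reference - prediction) ** 2))
    return 100.0 * rmse / (reference.max() - reference.min())

def psnr(reference, prediction):
    """Peak signal-to-noise ratio in dB, using the reference maximum as peak."""
    mse = np.mean((reference - prediction) ** 2)
    return 10.0 * np.log10(reference.max() ** 2 / mse)

# Example with synthetic volumes standing in for reference and predicted PET
ref = np.random.rand(128, 128, 64).astype(np.float32)
pred = ref + 0.01 * np.random.randn(128, 128, 64).astype(np.float32)
print(f"NRMSE = {nrmse(ref, pred):.2f} %, PSNR = {psnr(ref, pred):.1f} dB")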

Schlussfolgerungen/Conclusions The developed deep learning method shows the potential for CT-free AC for an ultra-long axial FOV PET scanner. Work in progress includes the clinical assessment of the PET images by independent nuclear medicine physicians. Training and fine-tuning with more datasets will be performed to further consolidate the development.



Publication History

Article published online:
08 April 2021

© 2021. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany