Segmentation-free direct tumor volume and metabolic activity estimation from PET scans
Introduction
Positron emission tomography (PET) is a medical imaging modality used to image the distribution of radiotracers and thereby observe functional processes in the body. Quantitative analysis of radiotracer uptake in tumor tissue in PET images is a crucial step towards precise diagnosis and treatment response assessment (Bailey et al., 2005). A critical challenge in quantitative imaging is the ability to reproducibly estimate key metrics such as tumor volume, standardized uptake value (SUV), and total lesion glycolysis (TLG). The aim of this paper is to propose and evaluate a novel machine learning-based method for direct estimation of these metrics, without the requirement for segmentation. We focus on 18F-FDG, by far the most widely used radiotracer in oncological PET imaging studies.
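For reference, SUV normalizes the measured tissue activity concentration by the injected dose per unit body weight, and TLG is the product of mean SUV and metabolic tumor volume. The following is a minimal sketch of these standard definitions (function and variable names are illustrative, not taken from the paper):

```python
def suv(activity_conc_bq_per_ml: float,
        injected_dose_bq: float,
        body_weight_g: float) -> float:
    """Body-weight-normalized standardized uptake value.

    SUV = tissue activity concentration / (injected dose / body weight).
    Decay correction of the injected dose to scan time is assumed
    to have been applied already.
    """
    return activity_conc_bq_per_ml / (injected_dose_bq / body_weight_g)


def tlg(suv_mean: float, metabolic_volume_ml: float) -> float:
    """Total lesion glycolysis = SUVmean x metabolic tumor volume."""
    return suv_mean * metabolic_volume_ml


# Example: 20 kBq/mL uptake, 370 MBq injected dose, 70 kg patient
s = suv(20_000.0, 370e6, 70_000.0)  # dimensionless, ~3.78
```

Because TLG factors into an SUV term and a volume term, a method that estimates activity and volume directly can derive TLG without an intermediate segmentation.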
Currently, the maximum intensity (SUVmax) in a user-defined volume of interest is the most widely used metric to assess tumor metabolism (Wahl et al., 2009). Other popular approaches use manual “expert” segmentation or a fixed threshold value to obtain the average radiotracer uptake in a region around the maximal pixel. However, all these approaches have limitations. The accuracy of uptake quantitation depends on the signal-to-background ratio, the size of the object, the resolution of the scanning equipment, and the reconstruction algorithm and its sensitivity to noise. Depending on the chosen threshold value, thresholding also tends to over- or underestimate lesion boundaries (Foster et al., 2014a). These limitations, in addition to the relatively poor image resolution of PET (as compared to CT and MRI), mean that manual segmentation is a difficult task characterized by low reproducibility due to high inter- and intra-observer variability (Foster et al., 2014a). Achieving reproducibility in quantitative measurement requires a consistent assessment methodology (Wahl et al., 2009), as well as the use of more advanced image interpretation methods. Currently, the vast majority of literature proposing advanced image analysis tools for PET quantification is centered on the challenge of lesion volume estimation. Sophisticated approaches for this task have recently been proposed based on belief function theory (Lelandais et al., 2012), Bayesian classification (Hatt et al., 2010), and possibility theory (Dewalle-Vignion et al., 2011). Graph-theoretic methods based on the random walker algorithm as well as the maximum-flow method have also been reported recently (Cui et al., 2015; Song et al., 2013; Ju et al., 2015). While tumor volume determination is an important step in PET quantification, it is insufficient for gaining an overall picture of treatment response.
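The threshold sensitivity noted above can be illustrated with a toy example: a fixed percentage-of-SUVmax rule (40% and 50% are common choices) yields noticeably different volumes on the same image. The following sketch uses a synthetic Gaussian blob as a stand-in for a blurred lesion; all names and numbers are illustrative:

```python
import numpy as np


def threshold_volume(img: np.ndarray, frac: float, voxel_ml: float) -> float:
    """Volume of voxels whose intensity exceeds frac * max (fixed-threshold rule)."""
    return float(np.count_nonzero(img >= frac * img.max())) * voxel_ml


# Synthetic 3D "lesion": a Gaussian blob mimicking PET blur, peak-normalized
x = np.linspace(-3, 3, 61)
xx, yy, zz = np.meshgrid(x, x, x, indexing="ij")
img = np.exp(-(xx**2 + yy**2 + zz**2) / 2.0)

v40 = threshold_volume(img, 0.40, voxel_ml=0.1)
v50 = threshold_volume(img, 0.50, voxel_ml=0.1)
# The 40% threshold always yields a larger volume than the 50% threshold,
# so the "measured" volume depends directly on the chosen parameter.
```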
Achieving the latter requires the computation of tumor metabolic activity and, specifically, of metrics based on SUV and TLG (Wahl et al., 2009). This is because, due to variable metabolism of the tumor tissue and also the partial volume effect (PVE) observed in PET images, simple segmentation methods based on a physical tumor volume (which would be visible in CT or MRI) typically yield inaccurate estimates of metabolic tumor volume and activity. In order to achieve more accurate results, additional processing of the data should be performed. User input is generally required for this, which opens up the possibility of error, causing accuracy and reproducibility to suffer. To perform activity quantification, George et al. (2011) proposed a fuzzy Gaussian mixture model to classify (segment) the voxels into two classes, tumor and background, and directly estimated total lesion activity from the resulting classification. The method showed poor performance on non-spherical lesions. In a later paper (George et al., 2012), a multiclass extension of this work was proposed, which introduced a fuzzy model with a stochastic expectation maximization algorithm. In this work, however, activity estimation of lesions is still initialized by clustering-based segmentation, which is a time-consuming step. The methods also depend on how many ‘hard’ classes are defined. Although the authors have tried to compensate for the PVE of large lesions (e.g., 27.67 mL), their method might fail for tiny lesions that are highly affected by PVE. Moreover, there are other sources of variability which have not been investigated in their studies, e.g., variability due to different reconstruction methods, scanning durations, and different levels of background activity.
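The two-class idea behind such mixture-model approaches can be sketched as follows: fit a two-component 1D Gaussian mixture to voxel intensities by expectation maximization, then sum the tumor-class posterior weighted by voxel activity to obtain a soft estimate of total lesion activity. This is a minimal illustration of the general technique, not the authors' implementation:

```python
import numpy as np


def two_class_em(vals: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """Fit a 2-component 1D Gaussian mixture by EM; return tumor-class posteriors."""
    mu = np.array([vals.min(), vals.max()], dtype=float)  # low/high init
    var = np.array([vals.var(), vals.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-voxel responsibilities of each class
        pdf = pi * np.exp(-(vals[:, None] - mu) ** 2 / (2 * var)) \
              / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and variances
        n = r.sum(axis=0)
        pi = n / len(vals)
        mu = (r * vals[:, None]).sum(axis=0) / n
        var = (r * (vals[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return r[:, 1]  # posterior of the high-uptake (tumor) class


rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(1.0, 0.2, 900),   # background voxels
                       rng.normal(5.0, 0.5, 100)])  # lesion voxels
p_tumor = two_class_em(vals)
total_activity = float((p_tumor * vals).sum())  # soft-weighted lesion activity
```

The soft weighting avoids a hard boundary, but as the text notes, performance still hinges on the number of classes and on how well the intensity distributions separate.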
Table 1 lists several of the most recent methods proposed for PET and PET-CT analysis according to the quantification goal, methodology, and data used. As can be observed from the table, most quantification methods focus solely on tumor segmentation. In addition, many methods require the selection of multiple parameters, which reduces reproducibility. Also noteworthy is the generally limited range of acquisition and image reconstruction conditions under which most methods were tested: apart from the proposed work, only three other methods were tested with data acquired over multiple scan durations, while only one other method was tested with data reconstructed using different algorithms. Furthermore, in clinical FDG PET images there may be several volumes/organs containing high activity in addition to the analyzed lesions; for example, the brain, kidneys, and heart all take up large amounts of radiotracer, and we refer to this uptake as “normal”. We note from the table, however, that when images are created using phantoms, background activity is frequently constant; only four studies modeled additional sources of normal activity. The reader is also referred to Foster et al. (2014a) for a recent survey of PET quantification methods. Based on our literature review, our hypothesis is that the development of a method capable of joint volume and activity estimation, without depending on preceding segmentation and requiring minimal yet intuitive user interaction, could significantly simplify the interpretation of PET images obtained in a clinical environment.
In this paper, we propose a machine learning-based method to achieve this objective. To the best of our knowledge, this is the first attempt to directly (i.e., without even a rough delineation) estimate volume and activity within a specified PET region of interest (ROI). Our method automatically obtains estimates of activity and volume metrics (which can then be used to derive other measures, e.g., SUVmean and TLG) given a center point and ROI, without the requirement for tumor segmentation, PVE correction, or the selection of image-specific parameters. In addition, our proposed method is robust to noise (which arises from different sources, e.g., scanning duration) and to the choice of reconstruction algorithm and its parameters. In this way, our method allows for faster processing and a more efficient workflow, and provides a tool for capturing key measurements in a way that is robust and reproducible. It is possible to provide an irregularly shaped patch which closely encloses a lesion or roughly delineates it (this can be considered a coarse segmentation), but this requires more interaction from the user or a robust automatic segmentation method. Our goal, however, is to reduce user interaction substantially without requiring any segmentation. In particular, our method requires the user to provide only a patch (e.g., a box) surrounding a lesion, which can be supplied much more quickly than a segmentation, even a coarse one.
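The direct-estimation idea can be pictured as a regression from simple ROI patch descriptors to the target metrics, skipping the segmentation step entirely. The following toy version uses closed-form ridge regression on intensity-histogram features; the feature set, regressor, and synthetic data are all illustrative assumptions, not the paper's actual model:

```python
import numpy as np


def patch_features(patch: np.ndarray) -> np.ndarray:
    """Simple descriptors of a 3D ROI patch: intensity histogram + summary stats."""
    hist, _ = np.histogram(patch, bins=16, range=(0.0, 10.0), density=True)
    return np.concatenate([hist, [patch.mean(), patch.max(), patch.std()]])


def fit_ridge(X: np.ndarray, y: np.ndarray, lam: float = 1e-3) -> np.ndarray:
    """Closed-form ridge regression: w = (X^T X + lam I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)


rng = np.random.default_rng(1)
# Synthetic training set: spherical "lesions" of varying radius in background noise
patches, volumes = [], []
for _ in range(200):
    r = rng.uniform(1.0, 4.0)
    x = np.linspace(-5, 5, 21)
    xx, yy, zz = np.meshgrid(x, x, x, indexing="ij")
    lesion = 5.0 * ((xx**2 + yy**2 + zz**2) <= r**2)
    patches.append(lesion + rng.normal(0.5, 0.1, lesion.shape))
    volumes.append(4 / 3 * np.pi * r**3)

X = np.array([patch_features(p) for p in patches])
w = fit_ridge(X, np.array(volumes))
pred = X @ w  # direct volume estimates: no segmentation step involved
```

The key design point is that the regressor learns the mapping from patch appearance to the metric itself, so segmentation errors and threshold choices never enter the pipeline.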
Section snippets
Methods
Our proposed method for segmentation-free, machine learning-based direct estimation of volume and activity consists of several steps. As shown in Fig. 1, we have two main phases, training and testing, which are described in the following sections.
Data
As described in Section 2.1, our proposed machine learning method was trained once for phantom experiments, using 3D random patches extracted from 55 different PET images scanned from the Elliptical Lung-Spine Body Phantom™ (ELSB), and once for the clinical tests, with an additional 85 phantom images from The National Electrical Manufacturers Association (NEMA) International Electrotechnical Commission (IEC) PET Body Phantom. It was then tested on both unseen (i.e., not included in training set)
Experimental results
In the following subsections, we report our experimental results. We start by describing the robustness study and report these results in Section 4.1. In Section 4.2, we describe and present the results of our validation experiments against both popular and recently proposed approaches in the literature. In Section 4.6, we present the results of the TLG estimation experiments. In Section 4.7, we describe our results on clinical images.
Discussion and conclusion
The task of quantifying lesion parameters in clinical practice is frequently a demanding manual process, requiring the selection of threshold values, as well as scanner-specific corrections due to partial volume effect and noise. Sophisticated automated volume segmentation methods have been proposed which alleviate this burden, but very often these methods rely on manual parameter tweaking or initialization which requires experience. In addition, quantifying activity is still a manual process
Conflict of interest
The authors have no conflict of interest to declare.
References (33)
- et al. Joint segmentation of anatomical and functional images: applications in quantification of lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT images. Med. Image Anal. (2013)
- et al. Exploring feature-based approaches in PET images for predicting cancer treatment outcomes. Pattern Recognit. (2009)
- et al. A review on segmentation of positron emission tomography images. Comput. Biol. Med. (2014)
- et al. Accurate automatic delineation of heterogeneous functional volumes in positron emission tomography for oncology applications. Int. J. Radiat. Oncol. Biol. Phys. (2010)
- Fast noise variance estimation. Comput. Vis. Image Understand. (1996)
- et al. Denoising of PET images by combining wavelets and curvelets for improved preservation of resolution and quantitation. Med. Image Anal. (2013)
- et al. Automated radiation targeting in head-and-neck cancer using region-based texture analysis of PET and CT images. Int. J. Radiat. Oncol. Biol. Phys. (2009)
- et al. Unsupervised tumour segmentation in PET using local and global intensity-fitting active surface and alpha matting. Comput. Biol. Med. (2013)
- et al. Contourlet-based active contour model for PET image segmentation. Med. Phys. (2013)
- et al. Globally convergent image reconstruction for emission tomography using relaxed ordered subsets algorithms. IEEE Trans. Med. Imaging (2003)