Visualization of 4D-PET/CT, Target Volumes and Dose Distribution: Applications in Radiotherapy Planning

In radiation treatment (RT) planning, medical doctors need to consider a variety of information sources for anatomical and functional target volume delineation. The validation and inspection of the defined target volumes and the resulting RT plan is a complex task, especially in the presence of moving target areas, as is the case for tumors of the chest and the upper abdomen. A 4D-PET/CT visualization system may become a helpful tool for validating RT plans. We define major requirements such a visualization system should fulfill to provide medical doctors with the necessary visual information to validate tumor delineations and review the dose distribution of an RT plan. We present an implementation of such a system, and show qualitative results of its application to a lung cancer patient.


Introduction
For lung cancer, the most prominent functional imaging modality in use is PET, along with CT as the anatomical imaging modality. PET/CT imaging with the 18F-fludeoxyglucose (18F-FDG) tracer is an accurate diagnostic method for non-small cell lung cancer, and is used for the delineation of the gross tumor volume (GTV) [11]. Respiration causes target areas to move, which cannot be captured by the planning CT, as it is only a static image. 4D imaging techniques can be used to image patients under free-breathing conditions and to define target volumes representing the lesion over the whole breathing cycle. For example, movement-related volume definitions are possible by using time-averaged CT scans for planning [17]. 4D PET/CT can be used for delineating the tumor on images of each breathing phase, and the union of the contours can be used to define target volumes, e.g. the internal target volume (ITV) [11]. The inspection of target volumes is usually done slice-wise, often combined with a video of the maximum intensity projection (MIP) of the 4D data sets. However, this makes it hard to capture the real 3D motion of target areas, and might give false impressions about tumor coverage by the defined target volumes. Therefore, a 4D-PET/CT visualization system can assist in RT planning and in validating treatment plans, especially in the presence of moving structures such as tumors of the chest and the upper abdomen.
In this paper we present a 3D multi-modal visualization framework which focuses on the validation and inspection of target volumes and dose distributions of RT plans. In order for a 3D visualization to assist physicians in this task, we define the following major requirements which need to be addressed:
1. Support for 4D (3D+t) PET and CT data sets and fusion of these image modalities: PET and CT signals should be fused in a 3D rendering. Support for changing time bins should be provided to give access to the whole breathing cycle of the patient.
2. Visualization of segmentation data: Defined structures such as the GTV, ITV, planning target volume (PTV) or organs at risk (OAR) should be included and combined with the 3D visualization of PET and CT for evaluating the spatial configuration and ensuring optimal coverage of moving target areas.
3. Visualization of dose information: Visualizing dose information as iso-dose surfaces together with defined structures like the GTV, ITV or OARs should allow evaluating the spatial configuration and coverage of moving target areas, complementary to dose volume histograms (DVH).
4. Clipping and/or masking parts of the volume: Hiding parts of the volume which are not relevant in the current situation (e.g. visualizing only the PET signal inside a region of interest (ROI)) should be supported.
5. Interactivity without pre-processing: There should be no pre-processing involved, such as re-sampling data sets to a common size or offline fusion of volumes into a new data set. Parameters such as the data sets to be visualized, clipping, and visual appearance should be modifiable on-the-fly.
The proposed visualization framework performs fusion of 4D PET/CT images, combined with defined target volumes and segmentation information of OARs. Furthermore, the visualization of dose volumes provides the necessary information for a visual review and validation of the dose distribution of the computed treatment plan. We describe an implementation of such a system, and show how it can be used to review target volumes and dose distributions for a lung cancer patient.

Related Work
SlicerRT [13] is a freely available RT research toolkit implemented as an extension to 3D Slicer [3]. It provides functionality for visualizing iso-dose surfaces and for calculating and plotting dose volume histograms (DVH). The visualization is based on the Visualization Toolkit (VTK) [14]. The Medical Imaging Interaction Toolkit (MITK) [16] provides a platform similar to 3D Slicer, with a plug-in system, and combines functionality of VTK and ITK [6]. It provides DICOM data import, visualization and various plug-ins, e.g. for registration and segmentation, and supports 4D data sets. However, to the best of our knowledge, neither SlicerRT nor 3D Slicer nor MITK supports direct volume rendering (DVR) of multiple volume data sets combined with translucent boundary visualization of segmentation data. State-of-the-art visualization approaches exist for a number of our requirements. PET/CT visualization with advanced fusion functionality focused on enhancing the visibility of ROIs can be found in [18] and [7]. [4] and [1] combine DVR of multi-modality data sets with segmentation data, use segmentation information for enabling or disabling volumes per segmentation object, and support different rendering modes such as MIP or iso-surfacing. Translucent boundary visualization of segmentation data and DVR with advanced support for clipping can be found in [2,8,15]. 4D approaches focused on extracting and visualizing tumor motion [5] and organ motion [10] in CT images exist as well.
However, none of the above mentioned approaches is available in platforms like 3D Slicer or MITK.

Data Sets
The data sets we focus on consist of the following types: 4D PET/CT, planning CT, segmentation data sets, and dose volumes. The planning CT has a voxel size of 1.17mm x 1.17mm x 3mm and pixel dimensions of 512 x 512 x 107. Segmentation data sets represent OARs and target volumes such as GTV and ITV. They were converted from RT-Struct format to binary volumes by rasterizing the slice-wise polygons in planning CT resolution. The 4D CT consists of 10 time bins with a voxel size of 1.17mm x 1.17mm x 2mm and pixel dimensions of 512 x 512 x 77. The 4D FDG-PET data set has a voxel size of 4mm x 4mm x 4mm and pixel dimensions of 144 x 144 x 45 consisting of 10 time bins. The dose volume holds the relevant dose distribution information in reference to the planning CT and was exported from the planning system software. It has a voxel size of 2.5mm x 2.5mm x 3mm with pixel dimensions of 212 x 119 x 107.
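The conversion of RT-Struct contours to binary volumes can be sketched as a slice-wise point-in-polygon test at planning-CT resolution. The following Python sketch is purely illustrative and not part of the described system; the function name `rasterize_contour` and its parameters are assumptions:

```python
import numpy as np
from matplotlib.path import Path

def rasterize_contour(polygon_xy, shape_yx, spacing_xy=(1.17, 1.17), origin_xy=(0.0, 0.0)):
    """Rasterize one slice-wise RT-Struct polygon (vertices in mm) into a
    binary mask at planning-CT resolution. Hypothetical helper for illustration."""
    ny, nx = shape_yx
    # Pixel-center coordinates in mm for every grid position of the slice
    xs = origin_xy[0] + spacing_xy[0] * np.arange(nx)
    ys = origin_xy[1] + spacing_xy[1] * np.arange(ny)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    # Point-in-polygon test for all pixel centers at once
    mask = Path(polygon_xy).contains_points(pts).reshape(ny, nx)
    return mask.astype(np.uint8)
```

Applying this per slice and stacking the masks yields a binary volume aligned with the planning CT grid.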

3D Multi-Modal Rendering Core
The functionality of the 3D multi-modal rendering consists of three main parts: fusion of PET and CT, rendering of binary volumes and iso-dose rendering of dose volumes.
The volume rendering is based on ray-casting with front-to-back blending [9]. Figure 1 gives an overview of the fusion pipeline. The fusion of PET/CT is implemented on the data level. For each voxel, color and opacity values are fused by a weighted linear combination and blended in front-to-back order in the ray-casting algorithm. The fusion can be combined with information from binary volumes to define ROIs which encode which modality to visualize: either just one modality (e.g. only PET inside the ITV), the fused PET/CT, or background for hiding parts of the volume.
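The weighted linear combination and front-to-back compositing described above can be sketched as follows. This is a minimal Python illustration of the scheme only; the actual system runs as a CUDA ray-caster, and the function names and the single fusion weight `w_pet` are assumptions:

```python
import numpy as np

def fuse(ct_rgba, pet_rgba, w_pet=0.5):
    """Weighted linear combination of the CT and PET color/opacity (RGBA)
    values of one sample. Sketch, assuming RGBA components in [0, 1]."""
    return (1.0 - w_pet) * ct_rgba + w_pet * pet_rgba

def cast_ray(samples_ct, samples_pet, w_pet=0.5):
    """Front-to-back alpha blending along one viewing ray.
    samples_* : sequences of RGBA samples, front-most sample first."""
    color = np.zeros(3)
    alpha = 0.0
    for ct, pet in zip(samples_ct, samples_pet):
        rgba = fuse(ct, pet, w_pet)
        a = rgba[3] * (1.0 - alpha)   # contribution scaled by remaining transparency
        color += a * rgba[:3]
        alpha += a
        if alpha > 0.99:              # early ray termination
            break
    return color, alpha
```

The ROI-based modality selection would simply replace `fuse` per sample with the pure CT value, pure PET value, or background, depending on which binary volume the sample falls into.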
Binary volumes which represent target volumes and segmentations of OARs are visualized as surfaces. Each volume gets rendered in a separate pass into a depth texture, and a texture for color and opacity. The first hit of the viewing ray with the surface of the binary volume determines the depth value (similar to rendering the front face of a triangulated mesh). Color and opacity values can be assigned to each binary volume individually. For each intersection point, we include the color and opacity values into the blending scheme of our ray-casting algorithm. In this way we can blend surfaces at the correct depth, and therefore preserve their spatial ordering.
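The first-hit depth computation can be illustrated by stepping along the viewing ray until the first set voxel of the binary volume is reached. The following Python sketch is a simplification of the GPU implementation (names, nearest-neighbour sampling, and the fixed step size are assumptions):

```python
import numpy as np

def first_hit_depth(binary_vol, origin, direction, step=0.5, max_t=512.0):
    """Parametric depth t of the first voxel of a binary volume hit by a
    viewing ray, using nearest-neighbour sampling in voxel index space.
    Illustrative sketch of the first-hit test, not the authors' CUDA code."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    o = np.asarray(origin, float)
    t = 0.0
    while t < max_t:
        p = np.round(o + t * d).astype(int)          # nearest voxel index
        if np.all(p >= 0) and np.all(p < binary_vol.shape):
            if binary_vol[tuple(p)]:
                return t                              # first intersection -> depth value
        t += step
    return None                                       # ray misses the structure
```

The returned depth is what would be written to the per-volume depth texture and later compared against sample depths during blending.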
Dose information is rendered as iso-dose surfaces. We extract the surfaces during the ray-casting pass by testing whether a voxel belongs to a defined iso-dose surface. If it does, we include the color and opacity of the dose value into the blending scheme of our ray-casting pass. The respective dose levels for the surfaces can be specified as a list of values in Gray (Gy).

Implementation and Integration
The 3D multi-modal rendering framework is mostly implemented in CUDA [12], and consists of three main parts (see fig. 2(b)): the data store module, the rendering module and an interface to VTK. The data store is responsible for storing volumes in GPU memory and makes them available to our rendering module. Data sets are organized in a unified coordinate system which takes into account spatial transformations between data sets. The core of the rendering module is responsible for PET/CT fusion, binary volume rendering and dose volume visualization.
Our visualization framework has been integrated into MITK. Figure 2(a) shows the GUI of the MITK platform. Part 1 of fig. 2(a) shows functionality already available in MITK. This includes a data set manager, an image navigator (3D+time navigation) and 2D slice views. The integration of our multi-modal rendering core is realized via a MITK plug-in which forms the connection between MITK and our rendering core (see fig. 2(b)). Figure 2(a), part 3, shows the GUI of our MITK plug-in. It communicates with our rendering core via a VTK interface (see fig. 2(b)), and is responsible for setting and changing parameters such as data sets, their visual appearance and parameters for clipping and fusion. It is also responsible for calculating and plotting the dose volume histogram for selected target volumes and OARs. Finally, the result of our 3D rendering is integrated into MITK and replaces the standard 3D view (see fig. 2(a), part 2).
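The cumulative DVH computed by the plug-in can be sketched as follows: for each dose level D, it reports the fraction of the structure's volume receiving at least D. This Python sketch assumes the dose volume and the binary structure mask share one grid (in practice the dose grid would first be resampled); the function name and binning are assumptions:

```python
import numpy as np

def cumulative_dvh(dose, mask, bin_width_gy=0.5):
    """Cumulative dose-volume histogram for one structure: for each dose
    level, the fraction of the structure's voxels receiving at least that
    dose. Illustrative sketch assuming dose and mask are on the same grid."""
    d = np.asarray(dose, float)[np.asarray(mask, bool)]
    edges = np.arange(0.0, d.max() + 2 * bin_width_gy, bin_width_gy)
    # Fraction of structure voxels with dose >= each bin edge
    frac = np.array([(d >= e).mean() for e in edges])
    return edges, frac
```

Plotting `frac` over `edges` gives the familiar monotonically decreasing DVH curve per structure.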

Results and Discussion
Our 3D multi-modal rendering framework combines PET, CT, segmentation and dose information. Figure 3 shows qualitative results of our 3D rendering. The result of the 4D PET/CT fusion is shown in fig. 3(a) (requirement 1). Segmentation data sets, the ITV and a safety margin around the trachea (SMT), were added in fig. 3(b) (requirement 2). Clipping was applied to hide parts of the thorax (requirement 4). The parameters can be adjusted depending on the ROI (see fig. 3(c)). Volumes can be enabled or disabled (the PET was disabled in fig. 3(d)), and the time bins can be changed from within our MITK plug-in (requirement 5). Seeing the information of the modalities inside a target volume can help to verify its coverage of the tumor, or to analyze its spatial configuration with respect to OAR segmentations, especially where it intersects regions where the delivered dose should be low (see fig. 3(d)).
Figure 4 shows fusion controlled by binary volume information. In fig. 4(a), clipping was used to hide parts of the volume and view the inside of the segmented structures. Figure 4(b) shows the result of fusion with binary volume information together with their surface rendering, and fig. 4(c) the same result without the surface rendering. The inside of the lung is defined as background (BG), the SMT as CT, the ITV as PET and the GTV as CT; the assignment is applied in this order.
Results of the dose visualization are shown in fig. 5 (requirement 3). We defined four different iso-dose values, and used clipping to hide parts of the thorax which would occlude the target area. A qualitative result of fusing the planning CT with the iso-dose rendering is shown in fig. 5(a). Figure 5(b) shows the result of PET/CT fusion and iso-dose rendering. In fig. 5(c) the PET was disabled and the binary volumes of ITV and SMT were added. Our MITK plug-in also supports DVH computation and visualization for the segmentation data sets selected for the 3D rendering (see fig. 5(d)).
Figure 6 shows two different time bins of a 4D PET/CT data set, together with segmentation data (ITV, SMT) and the 15 Gy iso-dose surface. Using the animation plug-in of MITK (or changing time bins manually) provides an interactive 4D visualization of the breathing patient. Comparing fig. 6(a) with fig. 6(b) shows how the PET signal moves inside the ITV. This can help to analyze the respiration-induced movement of target areas relative to static structures or dose regions represented by iso-dose surfaces. For example, the coverage of the ITV by the 4D PET signal can be reviewed, and, if necessary, adjustments can be made.

Conclusion and Future Work
By integrating our 3D multi-modality rendering framework into MITK, we implemented requirements 1-5 stated in section 1. Making this functionality available to medical doctors gives them a set of tools to interactively explore the target volumes and dose distribution data of an RT plan. The proposed functionality can be applied to a multitude of scenarios, including: checking the spatial configuration of target volumes defined on volumes of different modalities, checking the coverage of the ITV by the 4D PET signal (respiratory motion of the tumor area), or checking the spatial configuration of dose areas inside OARs.
In future work, we plan to include motion information, e.g. information from registration algorithms for motion modeling and the definition of movement-related target volumes, as well as extracting and visualizing spatial uncertainty. Extended user testing will explore the possible use of this tool in clinical practice.