
Tissue segmentation from head MRI: a ground truth validation for feature-enhanced tracking

  • Tobias Wissel, Patrick Stüber, Benjamin Wagner, Achim Schweikard and Floris Ernst

Abstract

Accuracy is essential for optical head-tracking in cranial radiotherapy. Recently, the exploitation of local patterns of tissue information was proposed to achieve a more robust registration. Here, we validate a ground truth for this information obtained from high-resolution MRI scans. In five subjects we compared the segmentation accuracy of a semi-automatic algorithm with that of five human experts. While the algorithm segmented the skin and bone surfaces with average errors of less than 0.1 mm and 0.2 mm, respectively, the mean error on the tissue thickness was 0.17 mm. We conclude that this accuracy is a reasonable basis for extracting reliable cutaneous structures to support surface registration.

1 Introduction

Precise localization of the cranium is vital for effective head radiotherapy. Currently, patient immobilization is achieved with inconvenient thermoplastic masks, which offer repositioning accuracies only in the millimeter range [1]. Marker-less optical head-tracking provides a fast and more comfortable alternative for monitoring during treatment. To achieve the desired sub-millimeter accuracy and robustness of skin surface registration for continuous scanning over time, feature-supported tracking was introduced recently [2]. Using machine learning models, optical backscatter features across the forehead are converted into a tissue thickness measure, which is joined with the 3D spatial information and then used for enhanced 4D surface registration.

These models are generated in a supervised manner based on a high resolution ground truth for tissue thickness across the forehead. Here, we validate the ground truth accuracy and show that the segmentation procedure used in [2] suffices to provide smooth subcutaneous structures across the forehead.

2 Material and methods

2.1 Data acquisition

High-resolution MR images (0.1025 mm × 0.1025 mm × 1 mm) of the forehead region were acquired on an Ingenia 3.0T scanner (Philips Healthcare) from five subjects (3 male, 2 female, aged 25–64). Fig. 1 marks the region of interest in a lower-resolution T1 scan of the entire head.

Figure 1: Clinical T1 contrast scan of a subject’s head. The volume-of-interest (VOI) is marked in green. It contains the forehead region and MR-visible marker spheres for registering the volume into other coordinate spaces.

The imaging sequence was a gradient echo (FFE-T1) with a flip angle of 15° to rapidly acquire a volume of 210 mm × 210 mm × 70 mm. The echo time was kept as low as possible (TE/TR = 5/17 ms) to minimize susceptibility artifacts at tissue-air/bone interfaces [3]. The volume was aligned parallel to the AC-PC line and approximately orthogonal to the forehead surface.

2.2 Tissue segmentation

The segmentation pipeline consists of nine major steps and was performed slice-wise. First, the initial segmentation of five components (grey matter, white matter, cerebrospinal fluid (CSF), meninges and skull/skin) was performed using SPM8 [4]. The output consists of probability maps indicating, for each voxel, the likelihood of belonging to a certain type of anatomy. Second, the first four components were merged into a negative mask and the last component (extra-cranial tissue and skull) formed a positive mask. Masking the original image with both cut out the interior of the skull, keeping only voxels with a probability of more than 25% of belonging to component five and of more than 68% of not belonging to one of the others (cf. Fig. 2A-D). The volume-of-interest (VOI) was then restricted to the forehead (Fig. 2E).
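As a minimal sketch of this masking step (assuming the five SPM8 probability maps have already been loaded as NumPy arrays, e.g. via nibabel, and reading "not belonging to one of the others" as a threshold on the summed intra-cranial probability; the dictionary keys are hypothetical):

```python
import numpy as np

def mask_extracranial(image, prob_maps, p_tissue=0.25, p_other=0.68):
    """Keep only voxels likely belonging to extra-cranial tissue/skull.

    image     : raw MR volume as an ndarray
    prob_maps : dict of SPM8 probability maps (keys are hypothetical):
                'gm', 'wm', 'csf', 'meninges', 'skull_skin'
    """
    # positive mask: > 25% probability of extra-cranial tissue/skull
    positive = prob_maps['skull_skin'] > p_tissue
    # negative mask: > 68% probability of NOT belonging to any of the
    # four intra-cranial components (our reading of the criterion)
    intra = (prob_maps['gm'] + prob_maps['wm'] +
             prob_maps['csf'] + prob_maps['meninges'])
    negative = (1.0 - intra) > p_other
    # cut out the skull interior by applying both masks to the image
    return np.where(positive & negative, image, 0)
```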

Figure 2: Illustration of the segmentation pipeline. A: raw volume slice; B: negative mask of intra-cranial sites; C: positive mask of likely extra-cranial tissue sites; D: raw slice after masking; E: forehead region from D; F: region-growing output; G: active bone contour detection using snakes (blue: initial contour, red: output contour); H: snake output segment; I: output from F and H after Canny edge detection.

In a third step, manual inspection allowed us to correct for rare drops in local contrast or for large vessels. In rare cases these lead to holes in the tissue region which cannot be closed by the previously applied morphological closing operator and median low-pass filter. Fourth, a 2D region growing algorithm extracted the largest connected tissue segment (Fig. 2F). To obtain a smooth edge along the tissue-bone boundary, 2D snakes were then applied [5] (200 support points, 150 gradient descent steps; the weights for tension and stiffness were 0.03 and 0.005, respectively). The active contour also made use of a balloon force and of gradient vector flow as an external force. Fig. 2G shows the initial contour and the optimized result for one slice; Fig. 2H plots the segment contained within this optimized contour. Canny edge detection [6] then yielded the air-tissue boundary (cf. Fig. 2I). In the eighth step, agglomerative clustering rejected minor contours (e.g. around vessels) and identified the point clouds for the skin and bone surfaces.
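The snake refinement and edge detection steps could be approximated as follows. Note that scikit-image's active_contour implements neither the balloon force nor the gradient vector flow used in the pipeline, so this is only a sketch that reuses the published tension/stiffness weights and iteration count; the Gaussian smoothing sigma is an assumption:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour
from skimage.feature import canny

def refine_bone_contour(slice_img, init_contour):
    """Smooth the tissue-bone boundary of one slice with a 2D snake.

    init_contour : (200, 2) array of initial support points, e.g.
                   sampled from the region-growing output (Fig. 2F).
    """
    smoothed = gaussian(slice_img, sigma=2, preserve_range=True)
    return active_contour(smoothed, init_contour,
                          alpha=0.03,        # tension weight (paper)
                          beta=0.005,        # stiffness weight (paper)
                          max_num_iter=150)  # gradient descent steps

def air_tissue_boundary(masked_slice):
    """Air-tissue boundary of a masked slice via Canny edge detection [6]."""
    return canny(masked_slice, sigma=2.0)  # sigma is an assumption
```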

From these clouds the actual tissue thickness was obtained as the distance between each point on the skin surface and the penetration point of the corresponding normal vector at the bone surface (cf. Fig 3).
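A sketch of this distance computation, assuming unit surface normals at the skin points have already been estimated (e.g. from a local plane fit) and approximating the penetration point by the first ray sample that comes close to the bone cloud; step size, maximum depth and tolerance are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def tissue_thickness(skin_pts, skin_normals, bone_pts,
                     step=0.05, max_depth=15.0, tol=0.3):
    """Thickness per skin point: march along the inward normal until
    the ray comes within `tol` mm of the bone point cloud.

    skin_pts     : (N, 3) skin surface points in mm
    skin_normals : (N, 3) unit normals pointing into the tissue
    bone_pts     : (M, 3) bone surface points in mm
    """
    tree = cKDTree(bone_pts)
    depths = np.arange(step, max_depth, step)
    thickness = np.full(len(skin_pts), np.nan)
    for i, (p, n) in enumerate(zip(skin_pts, skin_normals)):
        ray = p + depths[:, None] * n   # sample points along the normal
        dist, _ = tree.query(ray)       # distance to nearest bone point
        hits = np.nonzero(dist < tol)[0]
        if hits.size > 0:
            thickness[i] = depths[hits[0]]  # first approach to the bone
    return thickness
```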

Figure 3: 3D point clouds for the skin (green) and bone (red) surface. Tissue thickness is computed by normal vector (black) penetration through both surfaces. The orthogonality condition holds for the skin surface.

2.3 Expert segmentation and validation

The algorithmic segmentation described above was validated against the results of five skilled human experts. Each of them manually segmented the skin and bone surfaces in one slice per subject. Each slice was selected from the middle of the volume to be representative of the forehead region an optical tracking system would scan. To evaluate how segmentation errors propagate to the skin thickness measure, the experts were also asked to segment five slices for subject three; five slices were sufficient to apply the normal vector penetration procedure and to compare against the tissue thickness output by the segmentation algorithm. The deviation of the algorithmic from the expert segmentation was computed as a 3D vector using point-to-point correspondences obtained via Euclidean nearest-neighbor search. Finally, expert five segmented one slice of subject three five times to get an impression of the intra-operator reproducibility.
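The deviation measure itself could look like the following sketch (variable names are hypothetical; both contours are assumed to be given as (N, 3) point arrays in mm):

```python
import numpy as np
from scipy.spatial import cKDTree

def segmentation_deviation(algo_pts, expert_pts):
    """Deviation of the algorithmic from an expert contour, with
    correspondences from a Euclidean nearest-neighbor search.

    Returns the mean and standard deviation of the absolute
    point-to-point distances plus the per-point 3D deviation vectors.
    """
    dist, idx = cKDTree(expert_pts).query(algo_pts)
    vectors = expert_pts[idx] - algo_pts  # 3D deviation per point
    return dist.mean(), dist.std(), vectors
```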

3 Results

Fig. 4 shows the algorithmic segmentation result for subject three. The top part shows the 3D skin surface overlaid with color-coded tissue thickness; the surface reveals smooth subcutaneous and cutaneous structures. The histogram shows that large areas on the forehead had a tissue thickness between 3.4 mm and 5.6 mm; this interval corresponds to 80% of the histogram area. Corresponding intervals for the other subjects (not shown) spanned 3.35 mm (S1, 70%), 3.87 mm (S2, 70%), 2.60 mm (S4, 80%), and 1.98 mm (S5, 80%).
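Such an interval can be reproduced, for instance, as the narrowest window covering the stated fraction of all per-vertex thickness values; a small sketch (for subject 3 with fraction=0.8 this would be expected to return roughly [3.4, 5.6] mm):

```python
import numpy as np

def shortest_interval(thickness, fraction=0.8):
    """Narrowest interval containing `fraction` of all thickness
    values, i.e. the stated share of the histogram area."""
    t = np.sort(np.asarray(thickness).ravel())
    k = int(np.ceil(fraction * t.size))        # samples the window must cover
    widths = t[k - 1:] - t[:t.size - k + 1]    # width of each k-sample window
    i = int(np.argmin(widths))
    return t[i], t[i + k - 1]
```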

Figure 4: Segmentation result for subject 3. A: 3D skin surface overlaid with color-coded tissue thickness. B: histogram of the tissue thickness across the patch shown in A.

The deviation of the algorithmic from the expert segmentation is provided in Table 1. As illustrated in Fig. 5, the skin boundary was always identified with an average error of less than 0.1 mm and the bone boundary with less than 0.2 mm. The standard deviations shown in Table 1 contain on average 0.019 mm of inter-operator variability for the skin and 0.031 mm for the bone surface (root-mean-square across all subjects). Expert five reproduced his results with mean deviations of 0.084 mm for the skin and 0.086 mm for the bone.

Figure 5: Deviations of the algorithmic from the expert segmentation for all subjects, averaged across all experts.

Table 1

Mean and standard deviation of the absolute differences between expert and algorithmic segmentation for skin and bone contours (in mm).

            Skin [mm]          Bone [mm]
Subject 1   0.093 ± 0.081      0.168 ± 0.098
Subject 2   0.082 ± 0.072      0.199 ± 0.128
Subject 3   0.101 ± 0.082      0.154 ± 0.126
Subject 4   0.102 ± 0.071      0.155 ± 0.095
Subject 5   0.095 ± 0.078      0.185 ± 0.118

All experts segmented five slices for subject three to evaluate how the segmentation errors on the skin and bone boundaries translate to the tissue thickness measure. The results are shown in Table 2. On average, the error on the tissue thickness in this region was 0.173 mm, and it was less than 0.21 mm for every expert.

Table 2

Mean and standard deviation of the absolute differences on the computed tissue thickness (in mm).

      Expert 1         Expert 2         Expert 3         Expert 4         Expert 5
S3    0.161 ± 0.123    0.183 ± 0.151    0.162 ± 0.145    0.154 ± 0.128    0.205 ± 0.159

4 Discussion and conclusion

A qualitative evaluation of the skin patches reveals that cutaneous structures were smoothly visible after segmentation. These variations in tissue thickness originate from subcutaneous vessels, facial muscles or changes in the subcutaneous fat layer. The segmentation errors for the air-tissue and tissue-bone interfaces were on average less than twice the in-plane voxel size. With respect to the theoretical limit of half the voxel size and the main thickness interval of tissue across the forehead (here the structures varied over ranges of 1.98 mm or more), this is acceptable. Moreover, the repeated manual segmentation by one expert suggests that significant parts of this error may be due to intra-operator variability. The accuracy may vary across subjects due to motion during the acquisition process, which lasted approximately 16 min; this was the case for subjects two and five. Such motion leads to much noisier images and blurred boundaries, and large motion results in halo-like noise patterns around the head.

In general, the segmentation of the bone was more prone to errors due to poorer contrast with respect to the adjacent cranium and meningeal structures. Nevertheless, the mean boundary deviations were found not to translate additively to the skin thickness measure. One possible explanation is that the thickness was extracted along the forehead normal direction, while the slice orientation was not precisely orthogonal to the surface.

In conclusion, we have shown that the accuracy of the tissue thickness segmentation is acceptable and provides a reasonable basis for revealing smooth and prominent cutaneous structures across the forehead. These can add valuable information to a surface registration process and constrain poorly defined degrees of freedom.

Acknowledgment

The authors acknowledge the support of Dr. Uwe Melchert and Christian Erdmann from the Clinic of Neuroradiology, University Hospital Schleswig-Holstein.

Funding

This work was supported by Varian Medical Systems, Inc. and partially funded by the Graduate School for Computing in Medicine and Life Sciences, German Excellence Initiative [DFG GSC 235/1].

Author’s Statement

Conflict of interest: Authors state no conflict of interest. Informed consent: Informed consent has been obtained from all individuals included in this study. Ethical approval: The research related to human use complies with all relevant national regulations and institutional policies, was performed in accordance with the tenets of the Helsinki Declaration, and has been approved by the authors’ institutional review board or equivalent committee.

References

[1] Robar JL, Clark BG, Schella JW, Kim CS. Analysis of patient repositioning accuracy in precision radiation therapy using automated image fusion. Journal of Applied Clinical Medical Physics 2005; 6(1): 71–83. DOI: 10.1120/jacmp.v6i1.1998

[2] Wissel T, Stüber P, Wagner B, Bruder R, Schweikard A, Ernst F. Tissue thickness estimation for high precision head-tracking using a galvanometric laser scanner – a case study. In: 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC ’14), Chicago, USA, 2014: 3106–3109. DOI: 10.1109/EMBC.2014.6944280

[3] Farahani K, Sinha U, Sinha S, Chiu LCL, Lufkin R. Effect of field strength on susceptibility artifacts in magnetic resonance imaging. Computerized Medical Imaging and Graphics 1990; 14(6): 409–413. DOI: 10.1016/0895-6111(90)90040-I

[4] Friston KJ, Ashburner JT, Kiebel SJ, Nichols TE, Penny WD. Statistical Parametric Mapping: The Analysis of Functional Brain Images. London: Elsevier; 2006.

[5] Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. International Journal of Computer Vision 1988; 1(4): 321–331. DOI: 10.1007/BF00133570

[6] Canny J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 1986; PAMI-8(6): 679–698. DOI: 10.1016/B978-0-08-051581-6.50024-6

Published Online: 2015-9-12
Published in Print: 2015-9-1

© 2015 by Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
