Classification of teeth in cone-beam CT using deep convolutional neural network
Introduction
Dental records play an important role in forensic identification after large-scale disasters [1], [2], [3]. Forensic dentistry is an important field because dental information can be used to identify a person even when the body has been severely damaged; moreover, antemortem (AM) x-ray images are easier to collect than DNA samples. For dental identification, postmortem (PM) dental findings and tooth conditions are recorded in a dental chart. However, most dentists are inexperienced at recording dental charts for corpses, which can impose a psychological burden. Such stress may also lead to incorrect data recording and even psychiatric disorders.
To overcome these drawbacks, studies have proposed automatically obtaining dental information from dental x-ray images, creating panoramic-like images from CT data for better image comparison, and/or matching the AM and PM images [4], [5], [6], [7], [8], [9], [10]. Jain et al. [4] investigated a computerized method for matching AM and PM dental images. Each tooth was first isolated from its neighbors and the tooth contour was extracted on the basis of intensity. The corresponding image was then searched for by matching the extracted contours under a rigid transformation. Among 38 AM/PM image pairs, 25 were correctly matched, while the genuine AM image was ranked as the second-best match in 5 of the remaining 13 cases. Zhou et al. [5] proposed a three-step method to retrieve matched images. Images were first classified into bitewing, periapical, or panoramic images, and the teeth on bitewing images were segmented using a top-hat filter and an active contour method. The corresponding image was then searched for by matching the boundary shape.
Lin et al. [8] proposed a method to classify teeth on bitewing images. After tooth segmentation, the length-to-width ratio and crown size were used as features to classify each tooth as a molar or premolar using a support vector machine. They achieved an overall classification accuracy of 95% on 47 images containing 369 teeth. Hosntalab et al. [10] proposed a multi-stage technique for the classification and numbering of teeth on multi-slice CT images. The three-step process comprised segmentation of tooth regions using a variational level set, feature extraction with a wavelet-Fourier descriptor, and classification of the teeth into 4 groups using a supervised classifier. With this technique, they achieved classification accuracies above 94% for 804 teeth from 30 CT cases [10].
In this study, as a component of an automated dental chart filing system, we investigated an automated method for classifying tooth types on dental cone-beam CT images using a deep convolutional neural network (DCNN). Since the success of Krizhevsky et al. [11] in the ImageNet 2012 competition, DCNNs have shown outstanding ability in object recognition and natural image classification. The application of DCNNs to medical images has been increasingly investigated by many groups, with considerable success [12], [13], [14], [15], [16], [17]. However, a standard application procedure has not yet been established, and to our knowledge DCNNs have been applied to dental image processing in only one study: Wang et al. [18] reported a comparison of dental radiography analysis algorithms for the grand challenge held at the IEEE International Symposium on Biomedical Imaging 2015, in which Ronneberger et al. employed a u-shaped deep convolutional neural network to segment bitewing radiographs for caries detection. In this preliminary study, we investigated the utility of a DCNN in classifying teeth into 7 types using rectangular regions of interest (ROIs), each of which enclosed one tooth on an axial slice.
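At its core, the classification task maps each tooth ROI to a score for each of the 7 tooth types and takes the highest-scoring type as the prediction. The sketch below illustrates only this interface, not the paper's actual network: a single random linear layer stands in for the DCNN, and the 7-type grouping in `TOOTH_TYPES` is our assumption rather than the paper's definition.

```python
import numpy as np

# Assumed 7-type grouping (one label per tooth type on a jaw quadrant);
# the grouping actually used in the paper may differ.
TOOTH_TYPES = ["central incisor", "lateral incisor", "canine",
               "first premolar", "second premolar",
               "first molar", "second molar"]

def classify_roi(roi, weights, bias):
    """Toy stand-in for the DCNN: flatten the ROI, apply one linear
    layer, softmax the scores, and return the best tooth type."""
    logits = roi.ravel() @ weights + bias
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return TOOTH_TYPES[int(np.argmax(probs))], probs

rng = np.random.default_rng(0)
roi = rng.random((256, 256))                # one resized ROI, as in the paper
w = rng.normal(size=(256 * 256, 7)) * 1e-3  # random weights for illustration
b = np.zeros(7)
label, probs = classify_roi(roi, w, b)      # label: one of the 7 types
```

In the actual system the linear layer is replaced by the trained DCNN, but the input/output contract is the same: a 256×256 ROI in, a 7-way probability vector out.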
Section snippets
Image dataset
The images used in this study were obtained using two dental CT units, namely Veraviewepocs 3D (J.Morita Mfg, Corp., Kyoto, Japan) and Alphard VEGA (Asahi Roentgen Ind. Co., Ltd., Kyoto, Japan), which were used to acquire images in 33 and 19 cases, respectively. The images were obtained from Asahi University Hospital, Gifu, Japan. The diameter of the field of view ranged from 51 to 200 mm, and the voxel resolution ranged from 0.1 to 0.39 mm. The institutional review boards of Gifu University and
Experimental results
A DCNN was employed to classify each ROI into one of the 7 tooth types. We compared the classification results obtained with training dataset A (no data augmentation), dataset B (image rotation), dataset C (intensity transformation), and dataset D (both image rotation and intensity transformation). The numbers of training samples in these datasets are listed in Table 2. The original hand-cropped ROIs had different sizes; these images were automatically resized to 256×256 pixels prior to being randomly
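The rotation and intensity augmentations used to build datasets B–D could be sketched as follows. The angles and gamma values below are illustrative assumptions, not the paper's settings, and a production pipeline would normally rotate with a library routine (e.g. `scipy.ndimage.rotate`) rather than the hand-rolled nearest-neighbour mapping shown here.

```python
import numpy as np

def rotate_nn(img, deg):
    """Nearest-neighbour rotation about the image centre (illustrative)."""
    h, w = img.shape
    theta = np.deg2rad(deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find its source pixel.
    sx = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    sy = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    return img[sy, sx]

def shift_intensity(img, gamma):
    """Gamma-style intensity transformation on a [0, 1]-scaled image."""
    return np.clip(img, 0.0, 1.0) ** gamma

def augment(roi, angles=(-10, 0, 10), gammas=(0.8, 1.0, 1.2)):
    """Every rotation/intensity combination, in the spirit of dataset D;
    the specific angles and gammas are assumptions."""
    return [shift_intensity(rotate_nn(roi, a), g)
            for a in angles for g in gammas]

roi = np.random.default_rng(1).random((256, 256))
samples = augment(roi)   # 3 angles x 3 gammas = 9 samples per ROI
```

Datasets B and C correspond to applying only one of the two transformation families; dataset D combines both, multiplying the number of training samples per ROI.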
Discussion
Using the DCNN, most teeth could be correctly classified into one of the 7 tooth types. One advantage of this method is that it does not require the precise tooth segmentation that conventional feature-based classification typically demands. Because convolution combined with pooling is robust to image shifts, the tooth type can be recognized and classified automatically from larger regions covering the whole dentition.
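The shift robustness of pooling mentioned above can be illustrated with a small numpy sketch: when a feature moves by one pixel but stays within the same pooling cell, the pooled output is unchanged. The toy 8×8 image and 2×2 max pooling below are purely illustrative.

```python
import numpy as np

def max_pool(img, k=2):
    """Non-overlapping k x k max pooling."""
    h, w = img.shape
    return (img[:h - h % k, :w - w % k]
            .reshape(h // k, k, w // k, k)
            .max(axis=(1, 3)))

# A single bright pixel, then the same pixel shifted one column right.
img = np.zeros((8, 8))
img[3, 2] = 1.0
shifted = np.roll(img, 1, axis=1)   # shift stays inside the same 2x2 cell

p1, p2 = max_pool(img), max_pool(shifted)
# p1 and p2 are identical: the pooling stage absorbed the shift.
```

A shift that crosses a pooling-cell boundary does change the pooled map by one cell, but stacked convolution/pooling layers keep the representation approximately stable under such small displacements, which is why exact tooth segmentation is not needed.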
Deep learning in general is considered to require
Conclusion
We have investigated the utility of a DCNN for classifying tooth types on dental cone-beam CT images. By increasing the number of training samples through rotation and intensity transformation, the classification performance was improved, and a high accuracy of 91.0% was achieved. The 7-tooth-type classification results can be used for the automatic preparation of dental charts, which may be useful in forensic identification.
Conflict of interest
None declared.
Acknowledgement
This study was partly supported by a Grant-in-Aid for Scientific Research (B) (No. 26293402) from the Japan Society for the Promotion of Science and a Grant-in-Aid for Scientific Research on Innovative Areas (Multidisciplinary Computational Anatomy, No. 26108005) from the Ministry of Education, Culture, Sports, Science and Technology, Japan.
References (21)
- et al., Validation of post mortem dental CT for disaster victim identification, J. Forensic Radiol. Imaging (2016)
- et al., A content-based system for human identification based on bitewing dental X-ray images, Pattern Recognit. (2005)
- et al., An effective classification and numbering system for dental bitewing radiographs using teeth region and contour information, Pattern Recognit. (2010)
- et al., A benchmark for comparison of dental radiography analysis algorithms, Med. Image Anal. (2016)
- Forensic dental identification in mass disasters: the current status, J. Calif. Dent. Assoc. (2014)
- et al., A review of dental biometrics from teeth feature extraction and matching techniques, Int. J. Sci. Res. (2014)
- et al., Matching of dental X-ray images for human identification, Pattern Recognit. (2015)
- S. Tohnak, A. Mehnert, M. Mahoney, S. Crozier, Dental identification system based on unwrapped CT images, in: ...
- et al., Generation of intra-oral-like images from cone beam computed tomography volumes for dental forensic image comparison, J. Forensic Sci. (2015)
- A.Z. Arifin, M. Hadi, A. Yuniarti, W. Khotimah, A. Yudhi, E.R. Astuti, Classification and numbering on posterior dental...