
Computers in Biology and Medicine

Volume 80, 1 January 2017, Pages 24-29

Classification of teeth in cone-beam CT using deep convolutional neural network

https://doi.org/10.1016/j.compbiomed.2016.11.003

Highlights

  • A new application of a deep convolutional neural network to dental images was explored.

  • Good classification performance was obtained even with a small amount of data.

  • Data augmentation, especially the intensity transformation, was effective in improving classification performance.

  • Performance was not strongly dependent on resizing methods except for the crop method.

  • The DCNN was effective in tooth classification without the need for precise tooth segmentation.

Abstract

Dental records play an important role in forensic identification. To this end, postmortem dental findings and tooth conditions are recorded in a dental chart and compared with antemortem records. However, most dentists are inexperienced in recording dental charts for corpses, and it is a physically and mentally laborious task, especially in large-scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs), each including a single tooth, were extracted from CT slices. Fifty-two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used for training the DCNN. To examine the sampling effect, random sampling was performed 3 times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 fully connected layers. To reduce overtraining, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the training data augmented by both image rotation and intensity transformation was 88.8%. Compared with the result without data augmentation, data augmentation yielded an improvement of approximately 5% in classification accuracy, which indicates that further improvement can be expected by expanding the CT dataset. Unlike conventional methods, the proposed method achieves high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful for the automatic filing of dental charts for forensic identification.
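
As a rough sketch of the network described above, the code below defines an AlexNet-style classifier with 5 convolution layers, 3 pooling layers, and 2 fully connected layers that outputs scores for 7 tooth types. This is a minimal illustration assuming a PyTorch reimplementation; the study itself used the AlexNet reference model in Caffe, and the layer sizes shown are standard AlexNet values rather than parameters reported in the paper.

    # Minimal AlexNet-style sketch (PyTorch): 5 conv + 3 pooling + 2 fully connected
    # layers ending in 7 tooth-type scores. Layer sizes are standard AlexNet values,
    # not parameters taken from the paper.
    import torch
    import torch.nn as nn

    class ToothTypeNet(nn.Module):
        def __init__(self, num_classes: int = 7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),   # conv1
                nn.MaxPool2d(kernel_size=3, stride=2),                                # pool1
                nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),  # conv2
                nn.MaxPool2d(kernel_size=3, stride=2),                                # pool2
                nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True), # conv3
                nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True), # conv4
                nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True), # conv5
                nn.MaxPool2d(kernel_size=3, stride=2),                                # pool3
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Dropout(0.5),
                nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),  # fully connected 1
                nn.Linear(4096, num_classes),                         # fully connected 2: 7 scores
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # A 256x256 ROI is typically randomly cropped to 227x227 (the Caffe AlexNet input
    # size) during training; grayscale CT ROIs are assumed here to be replicated to
    # 3 channels to match the first layer.
    scores = ToothTypeNet()(torch.randn(4, 3, 227, 227))  # -> shape (4, 7)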

Introduction

Dental records play an important role in forensic identification after large-scale disasters [1], [2], [3]. Forensic dentistry is an important field because dental information can be used to identify a person even when his/her body has been severely damaged; moreover, antemortem (AM) x-ray images are easier to collect than DNA samples. For dental identification, postmortem (PM) dental findings and tooth conditions are recorded in a dental chart. However, most dentists are inexperienced in recording dental charts for corpses, and the task can impose a heavy psychological burden. Such stress may also lead to incorrect data recording and even psychiatric disorders.

To overcome these drawbacks, studies have proposed automatically obtaining dental information from dental x-ray images, creating panoramic-like images from CT data for better image comparison, and/or matching the AM and PM images [4], [5], [6], [7], [8], [9], [10]. Jain et al. [4] investigated a computerized method for matching AM and PM dental images. Each tooth was first isolated from its neighbors, and the tooth contour was extracted on the basis of intensity. The corresponding image was then searched for by matching the extracted contours under a rigid transformation. Among 38 AM/PM image pairs, 25 were correctly matched, while the genuine AM image was selected as the second-best match in 5 of the remaining 13 cases. Zhou et al. [5] proposed a 3-step method to retrieve matched images. Images were first classified as bitewing, periapical, or panoramic images, and the teeth on bitewing images were segmented using a top-hat filter and an active contour method. The corresponding image was then searched for by matching the boundary shapes.

Lin et al. [8] proposed a method to classify teeth on bitewing images. After tooth segmentation, the length-to-width ratio and the crown size were used as features to classify each tooth as a molar or premolar by using a support vector machine. They achieved an overall classification accuracy of 95% on 47 images containing 369 teeth. Hosntalab et al. [10] proposed a multi-stage technique for the classification and numbering of teeth on multi-slice CT images. The 3-step process included segmentation of the tooth regions using a variational level set, feature extraction with a wavelet-Fourier descriptor, and classification of the teeth into 4 groups using a supervised classifier. Using this technique, they achieved high classification accuracies, above 94%, for 804 teeth from 30 CT cases [10].

In this study, as a component of an automated dental chart filing system, we investigated an automated method for classifying tooth types on dental cone-beam CT images using a deep convolutional neural network (DCNN). Since the success of Krizhevsky et al. [11] in the ImageNet 2012 competition, DCNNs have demonstrated outstanding performance in object recognition and natural image classification. The application of DCNNs to medical images has been increasingly investigated by many groups, which have achieved certain degrees of success [12], [13], [14], [15], [16], [17]. However, a successful application procedure has not yet been established, and to our knowledge, DCNNs have been applied to dental image processing in only one study. Wang et al. [18] reported a comparison of dental radiography analysis algorithms from the grand challenge held at the IEEE International Symposium on Biomedical Imaging 2015, in which Ronneberger et al. employed a U-shaped deep convolutional neural network to segment bitewing radiographs for caries detection. In this preliminary study, we investigated the utility of a DCNN in classifying teeth into 7 types by using rectangular regions of interest (ROIs), each of which encloses a single tooth in an axial slice.

Section snippets

Image dataset

The images used in this study were obtained using two dental CT units, namely Veraviewepocs 3D (J.Morita Mfg, Corp., Kyoto, Japan) and Alphard VEGA (Asahi Roentgen Ind. Co., Ltd., Kyoto, Japan), which were used to acquire images in 33 and 19 cases, respectively. The images were obtained from Asahi University Hospital, Gifu, Japan. The diameter of the field of view ranged from 51 to 200 mm, and the voxel resolution ranged from 0.1 to 0.39 mm. The institutional review boards of Gifu University and

Experimental results

A DCNN was employed to classify each ROI into one of 7 tooth types. We compared the classification results obtained using training dataset A without data augmentation, dataset B with image rotation, dataset C with intensity transformation, and dataset D with both image rotation and intensity transformation. The numbers of training samples in these datasets are listed in Table 2. The original hand-cropped ROIs had different sizes. These images were resized to 256×256 pixels automatically prior to being randomly
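
As a concrete illustration of the augmentation variants compared here, the sketch below expands a single hand-cropped ROI by image rotation and an intensity transformation and then resizes it to the fixed 256×256 network input. It is written in Python with Pillow and NumPy; the specific rotation angles and the gamma-style intensity mapping are illustrative assumptions, not the parameters used in the study.

    # Sketch of the augmentation steps compared in datasets B-D: image rotation,
    # intensity transformation, and resizing to the 256x256 network input.
    # The angle and gamma values below are placeholders, not the paper's settings.
    import numpy as np
    from PIL import Image

    def augment_roi(roi: np.ndarray, angle_deg: float, gamma: float) -> Image.Image:
        """roi: 2-D uint8 array holding one hand-cropped tooth ROI."""
        img = Image.fromarray(roi)                                   # grayscale image
        img = img.rotate(angle_deg, resample=Image.BILINEAR)         # image rotation
        lut = [int(255 * (i / 255.0) ** gamma) for i in range(256)]  # intensity transform
        img = img.point(lut)
        return img.resize((256, 256), resample=Image.BILINEAR)       # fixed input size

    # Example: expand one ROI into 9 augmented training samples (dataset D style:
    # rotation combined with intensity transformation).
    roi = np.random.randint(0, 256, size=(90, 60), dtype=np.uint8)   # placeholder ROI
    samples = [augment_roi(roi, a, g) for a in (-10, 0, 10) for g in (0.8, 1.0, 1.2)]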

Discussion

Using the DCNN, most teeth could be correctly classified into one of the 7 tooth types. One advantage of this method is that it does not require the precise tooth segmentation that conventional feature-based classification methods typically require. Because the combination of convolution and pooling is robust to image shifts, tooth types can be recognized and classified automatically from larger regions covering the whole dental area.
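
The shift-robustness argument can be illustrated with a small, self-contained experiment: convolving an image containing a bright blob with a filter and then pooling the responses. The example below uses a random filter in PyTorch and is purely illustrative; it is not an analysis of the trained network.

    # Toy illustration of shift robustness from convolution + pooling. A translated
    # copy of the input produces clearly different raw feature maps, but the pooled
    # summary response is far less sensitive (identical here for global max pooling).
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    kernel = torch.randn(1, 1, 5, 5)                 # one random 5x5 filter
    roi = torch.zeros(1, 1, 32, 32)
    roi[..., 10:20, 12:18] = 1.0                     # a bright tooth-like blob
    shifted = torch.roll(roi, shifts=2, dims=-1)     # translate the blob by 2 pixels

    fmap = F.conv2d(roi, kernel, padding=2)
    fmap_shifted = F.conv2d(shifted, kernel, padding=2)
    print((fmap - fmap_shifted).abs().max())         # raw feature maps differ
    print(F.max_pool2d(fmap, 32).item(),             # global max pooling:
          F.max_pool2d(fmap_shifted, 32).item())     # identical responses here

In the network itself the pooling is local rather than global, so the invariance is only partial, but the same mechanism reduces sensitivity to the exact position of a tooth within the ROI.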

Deep learning in general is considered to require

Conclusion

We investigated the utility of a DCNN for classifying tooth types on dental cone-beam CT images. By increasing the number of training samples through image rotation and intensity transformation, the classification performance was improved, and a high accuracy of 91.0% was achieved. The 7-tooth-type classification results can be effectively used for the automatic preparation of dental charts, which may be useful in forensic identification.

Conflict of interest

None declared.

Acknowledgement

This study was supported in part by a Grant-in-Aid for Scientific Research (B) (No. 26293402) from the Japan Society for the Promotion of Science and by a Grant-in-Aid for Scientific Research on Innovative Areas (Multidisciplinary Computational Anatomy, No. 26108005) from the Ministry of Education, Culture, Sports, Science and Technology, Japan.

