
OBM-CNN: a new double-stream convolutional neural network for shield pattern segmentation in ancient oracle bones


Abstract

The rejoining of oracle bone (OB) rubbings is a fundamental topic in oracle bone research. However, severely damaged OB fragments often lack the material information that is essential for rejoining, which has limited the progress of OB rejoining research. Identifying material information from broken OBs is difficult because the interclass differences are extremely subtle; in such cases, a judgment can be made only from the “shield pattern” presented in OB rubbings. However, identifying materials directly from “shield patterns” is time-consuming and laborious, and the classification accuracy is typically very low. We therefore propose a novel two-stream convolutional neural network (OBM-CNN), consisting of a segmentation subnetwork and a detection subnetwork, to address this challenge. First, the segmentation subnetwork is based on UNet++ and improved with residual blocks and bilinear interpolation. Then, in the detection subnetwork, the backbone feature extraction network of Faster R-CNN is replaced with the encoder feature extraction network, and detection accuracy is significantly improved through cross-training. In addition, a novel dataset named OB-Material was constructed, with labels for both “shield pattern” segmentation and detection, compensating for the lack of oracle bone material datasets. The experimental results show that the proposed method reaches an F1-score of 95.23% for “shield pattern” segmentation. For “shield pattern” detection at IoU = 0.5, the F-score is 9.72% higher than that of the best contrast model. In material classification, the best accuracy reaches an excellent 91.8%. In conclusion, this paper provides valuable data for combining oracle bone inscription research with AI technology and for applying AI to the study of ancient Chinese characters.
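The abstract only sketches the architecture in prose. The following is a minimal PyTorch-style sketch, not the authors' code, of a two-stream layout in which a residual-block encoder is shared by a bilinear-upsampling segmentation decoder and a placeholder detection head. All module names, channel widths, and the toy detection head are illustrative assumptions; the actual OBM-CNN uses a nested UNet++ decoder and a full Faster R-CNN detection subnetwork trained with cross-training.

# Illustrative sketch only; module names, channel widths, and the toy
# detection head are assumptions, not the published OBM-CNN code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """3x3 convolution block with an identity shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(y + self.skip(x))

class SharedEncoder(nn.Module):
    """Encoder reused by both streams; each stage halves the resolution."""
    def __init__(self, channels=(1, 32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList(
            ResidualBlock(channels[i], channels[i + 1]) for i in range(len(channels) - 1)
        )
    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)          # keep multi-scale features for skip connections
            x = F.max_pool2d(x, 2)
        return feats

class SegDecoder(nn.Module):
    """Upsamples with bilinear interpolation and fuses encoder skips
    (a simplified stand-in for the nested UNet++ decoder)."""
    def __init__(self, channels=(128, 64, 32)):
        super().__init__()
        self.blocks = nn.ModuleList(
            ResidualBlock(channels[i] + channels[i + 1], channels[i + 1])
            for i in range(len(channels) - 1)
        )
        self.head = nn.Conv2d(channels[-1], 1, 1)  # binary "shield pattern" mask
    def forward(self, feats):
        x = feats[-1]
        for block, skip in zip(self.blocks, reversed(feats[:-1])):
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = block(torch.cat([x, skip], dim=1))
        return torch.sigmoid(self.head(x))

class OBMCNN(nn.Module):
    """Two streams sharing one encoder: a segmentation mask plus a
    placeholder per-cell (score, box) detection map."""
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        self.decoder = SegDecoder()
        self.det_head = nn.Conv2d(128, 5, 1)  # toy stand-in for the Faster R-CNN stream
    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), self.det_head(feats[-1])

if __name__ == "__main__":
    model = OBMCNN()
    rubbing = torch.randn(1, 1, 128, 128)      # one grayscale OB rubbing
    mask, det = model(rubbing)
    print(mask.shape, det.shape)                # (1, 1, 128, 128), (1, 5, 32, 32)

In this sketch the detection branch is reduced to a single 1x1 convolution purely to keep the example self-contained; replacing it with a region-proposal detector over the shared encoder features would mirror the Faster R-CNN stream and cross-training described in the abstract.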



Author information

Corresponding author

Correspondence to Shanxiong Chen.


About this article


Cite this article

Gao, W., Chen, S., Zhang, C. et al. OBM-CNN: a new double-stream convolutional neural network for shield pattern segmentation in ancient oracle bones. Appl Intell 52, 12241–12257 (2022). https://doi.org/10.1007/s10489-021-03111-w

