Automated segmentation of computed tomography images of fiber-reinforced composites by deep learning

Abstract

A deep learning procedure has been examined for automatic segmentation of 3D tomography images of fiber-reinforced ceramic composites consisting of fibers and matrix of the same material (SiC) and thus with identical image intensities. The analysis uses a neural network to distinguish phases from shape and edge information rather than intensity differences. It was used successfully to segment phases in a unidirectional composite that also contained a coating of similar image intensity. It was also used to segment matrix cracks generated during in situ tensile loading of the composite, thereby demonstrating the influence of nonuniform fiber distribution on the nature of matrix cracking. By avoiding the need for manual segmentation of thousands of image slices, the procedure overcomes a major impediment to the extraction of quantitative information from such images. The analysis was performed using recently developed software that provides a general framework for executing both training and inference.

Notes

  1. The variational method for segmentation at the fiber tow scale begins with a prior geometric model of the weave topology that is iteratively matched to the μCT image via an optimization process [47].

  2. Object Research Systems, Montreal, Canada (free of charge for non-commercial use) [23].

  3. The test specimen was supplied by Prof. G. Morscher; further details of the fabrication method are given in [48].

References

  1. Bale HA, Blacklock M, Begley MR et al (2012) Characterizing three-dimensional textile ceramic composites using synchrotron X-ray micro-computed-tomography. J Am Ceram Soc 95:392–402. https://doi.org/10.1111/j.1551-2916.2011.04802.x

  2. Bale HA, Haboub A, Macdowell AA et al (2013) Real-time quantitative imaging of failure events in materials under load at temperatures above 1600 °C. Nat Mater 12:40–46. https://doi.org/10.1038/nmat3497

  3. Chateau C, Gélébart L, Bornert M et al (2011) In situ X-ray microtomography characterization of damage in SiCf/SiC minicomposites. Compos Sci Technol 71:916–924. https://doi.org/10.1016/j.compscitech.2011.02.008

  4. Wright P, Fu Z, Sinclair I, Spearing SM (2008) Ultra high resolution computed tomography of damage in notched carbon fiber-epoxy composites. J Compos Mater 42:1993–2002. https://doi.org/10.1177/0021998308092211

  5. Moffat AJ, Wright P, Buffière JY et al (2008) Micromechanisms of damage in 0° splits in a [90/0]s composite material using synchrotron radiation computed tomography. Scr Mater 59:1043–1046. https://doi.org/10.1016/j.scriptamat.2008.07.034

  6. Mazars V, Caty O, Couégnat G et al (2017) Damage investigation and modeling of 3D woven ceramic matrix composites from X-ray tomography in situ tensile tests. Acta Mater 140:130–139. https://doi.org/10.1016/j.actamat.2017.08.034

  7. Cox BN, Bale HA, Begley MR et al (2014) Stochastic virtual tests for high-temperature ceramic matrix composites. Annu Rev Mater Res 44:479–529. https://doi.org/10.1146/annurev-matsci-122013-025024

  8. Saucedo-Mora L, Zou C, Lowe T, Marrow TJ (2017) Three-dimensional measurement and cohesive element modelling of deformation and damage in a 2.5-dimensional woven ceramic matrix composite. Fatigue Fract Eng Mater Struct 40:683–695. https://doi.org/10.1111/ffe.12537

  9. Barnard HS, MacDowell AA, Parkinson DY et al (2017) Synchrotron X-ray micro-tomography at the advanced light source: developments in high-temperature in situ mechanical testing. J Phys: Conf Ser 849:012043. https://doi.org/10.1088/1742-6596/849/1/012043

  10. Larson NM, Cuellar C, Zok FW (2019) X-ray computed tomography of microstructure evolution during matrix impregnation and curing in unidirectional fiber beds. Compos Part A Appl Sci Manuf 117:243–259. https://doi.org/10.1016/j.compositesa.2018.11.021

  11. Marshall DB, Cox BN (2008) Integral textile ceramic structures. Annu Rev Mater Res 38:425–443. https://doi.org/10.1146/annurev.matsci.38.060407.130214

  12. Saucedo-Mora L, Lowe T, Zhao S et al (2016) In situ observation of mechanical damage within a SiC–SiC ceramic matrix composite. J Nucl Mater 481:13–23. https://doi.org/10.1016/j.jnucmat.2016.09.007

  13. Perciano T, Ushizima DM, Krishnan H et al (2017) Insight into 3D micro-CT data: exploring segmentation algorithms through performance metrics. J Synchrotron Radiat 24:1065–1077. https://doi.org/10.1107/S1600577517010955

  14. Straumit I, Lomov SV, Wevers M (2015) Quantification of the internal structure and automatic generation of voxel models of textile composites from X-ray computed tomography data. Compos Part A Appl Sci Manuf 69:150–158. https://doi.org/10.1016/j.compositesa.2014.11.016

  15. Czabaj MW, Riccio ML, Whitacre WW (2014) Three-dimensional imaging and numerical reconstruction of graphite/epoxy composite microstructure based on ultra-high resolution X-ray computed tomography. In: Proceedings of the American Society for Composites—29th technical conference ASC 2014; 16th US-Japan conference on composite materials ASTM-D30 Meeting, vol 105, pp 174–182

  16. Haberl MG, Churas C, Tindall L et al (2018) CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation. Nat Methods 15:677–680. https://doi.org/10.1038/s41592-018-0106-z

  17. Sinchuk Y, Kibleur P, Aelterman J et al (2020) Variational and deep learning segmentation of very-low-contrast X-ray computed tomography images of carbon/epoxy woven composites. Materials (Basel) 13:936. https://doi.org/10.3390/ma13040936

  18. Haboub A, Bale HA, Nasiatka JR et al (2014) Tensile testing of materials at high temperatures above 1700 °C with in situ synchrotron X-ray micro-tomography. Rev Sci Instrum 85:1–13. https://doi.org/10.1063/1.4892437

  19. Pandolfi RJ, Allan DB, Arenholz E et al (2018) Xi-cam: a versatile interface for data visualization and analysis. J Synchrotron Radiat 25:1261–1270. https://doi.org/10.1107/S1600577518005787

  20. Greenspan H, Van Ginneken B, Summers RM (2016) Guest editorial deep learning in medical imaging: overview and future promise of an exciting new technique. IEEE Trans Med Imaging 35:1153–1159

  21. Farfade SS, Saberian M, Li LJ (2015) Multi-view face detection using Deep convolutional neural networks. In: ICMR 2015—proceedings of the 2015 ACM international conference on multimedia retrieval. Association for Computing Machinery, Inc, New York, New York, USA, pp 643–650

  22. Badrinarayanan V, Kendall A, Cipolla R (2017) SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell 39:2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615

  23. Dragonfly|3D visualization and analysis solutions for scientific and industrial data|ORS. https://www.theobjects.com/dragonfly/index.html. Accessed 21 Mar 2020

  24. Hamwood J, Alonso-Caneiro D, Read SA et al (2018) Effect of patch size and network architecture on a convolutional neural network approach for automatic segmentation of OCT retinal layers. Biomed Opt Express 9:3049. https://doi.org/10.1364/boe.9.003049

  25. Sudre CH, Li W, Vercauteren T et al (2017) Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In: Cardoso MJ, Arbel T, Carneiro G et al (eds) Deep learning in medical image analysis and multimodal learning for clinical decision support. Springer, Cham, pp 240–248

  26. Losses - Keras Documentation. https://keras.io/losses/. Accessed 21 Mar 2020

  27. Géron A (2019) Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: concepts, tools, and techniques to build intelligent systems, 2nd edn. O'Reilly, Sebastopol

  28. Optimizers - Keras Documentation. https://keras.io/optimizers/. Accessed 21 Mar 2020

  29. Milletari F, Navab N, Ahmadi SA (2016) V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: Proceedings—2016 4th international conference on 3D vision, 3DV 2016. Institute of Electrical and Electronics Engineers Inc., pp 565–571

  30. Jegou S, Drozdzal M, Vazquez D et al (2017) The one hundred layers tiramisu: fully convolutional densenets for semantic segmentation. In: IEEE computer society conference on computer vision and pattern recognition workshops 2017 July, pp 1175–1183. https://doi.org/10.1109/CVPRW.2017.156

  31. Arhatari BD, Zonneveldt M, Thornton J, Abbey B (2017) Local structural damage evaluation of a C/C-SiC ceramic matrix composite. Microsc Microanal 23:518–526. https://doi.org/10.1017/S1431927617000459

  32. Spearing SM, Zok FW, Evans AG (1994) Stress corrosion cracking in a unidirectional ceramic-matrix composite. J Am Ceram Soc 77:562–570. https://doi.org/10.1111/j.1151-2916.1994.tb07030.x

  33. Marshall DB (1984) An indentation method for measuring matrix-fiber frictional stresses in ceramic composites. J Am Ceram Soc 67:C259–C260. https://doi.org/10.1111/j.1151-2916.1984.tb19690.x

  34. Marshall DB, Cox BN, Evans AG (1985) The mechanics of matrix cracking in brittle-matrix fiber composites. Acta Metall 33:2013–2021. https://doi.org/10.1016/0001-6160(85)90124-5

  35. Marshall DB, Evans AG (1985) Failure mechanisms in ceramic-fiber/ceramic-matrix composites. J Am Ceram Soc 68:225–231. https://doi.org/10.1111/j.1151-2916.1985.tb15313.x

  36. Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention, pp 234–241

  37. Kamnitsas K, Ledig C, Newcombe VFJ et al (2017) Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal 36:61–78. https://doi.org/10.1016/j.media.2016.10.004

  38. Prasoon A, Petersen K, Igel C et al (2013) Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. In: Mori K, Sakuma I, Sato Y, Barillot C, Navab N (eds) Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer, Berlin, pp 246–253

  39. Ronneberger O, Fischer P, Brox T (2015) U-net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A (eds) Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer, Berlin, pp 234–241

  40. Garcia-Garcia A, Orts-Escolano S, Oprea S et al (2017) A review on deep learning techniques applied to semantic segmentation. arXiv:1704.06857

  41. Aloysius N, Geetha M (2018) A review on deep convolutional neural networks. In: Proceedings of the 2017 IEEE international conference on communication and signal processing, ICCSP 2017. Institute of Electrical and Electronics Engineers Inc., pp 588–592

  42. Dettmers T (2015) Deep learning in a nutshell: core concepts|NVIDIA Developer Blog. In: Nvidia Dev. https://developer.nvidia.com/blog/deep-learning-nutshell-core-concepts/. Accessed 30 Jun 2020

  43. Badran A, Marshall DB, Legault Z et al (2020) XCT dataset and deep learning models for automated segmentation of computed tomography images of fiber-reinforced composites. Mater Data Facil Open. https://doi.org/10.18126/SAIM-CV6C

  44. Arganda-Carreras I, Turaga SC, Berger DR et al (2015) Crowdsourcing the creation of image segmentation algorithms for connectomics. Front Neuroanat 9:1–13. https://doi.org/10.3389/fnana.2015.00142

  45. Cardona A, Saalfeld S, Preibisch S et al (2010) An integrated micro- and macroarchitectural analysis of the Drosophila brain by computer-assisted serial section electron microscopy. PLoS Biol 8:e1000502. https://doi.org/10.1371/journal.pbio.1000502

  46. Cheng D, Meng G, Xiang S, Pan C (2017) FusionNet: edge aware deep convolutional networks for semantic segmentation of remote sensing harbor images. IEEE J Sel Top Appl Earth Obs Remote Sens 10:5769–5783. https://doi.org/10.1109/JSTARS.2017.2747599

  47. Bénézech J, Couégnat G (2019) Variational segmentation of textile composite preforms from X-ray computed tomography. Compos Struct 230:111496. https://doi.org/10.1016/j.compstruct.2019.111496

  48. Zhou J, Almansour AS, Chase GG, Morscher GN (2017) Enhanced oxidation resistance of SiC/SiC minicomposites via slurry infiltration of oxide layers. J Eur Ceram Soc 37:3241–3253. https://doi.org/10.1016/j.jeurceramsoc.2017.03.065

Acknowledgements

This research was supported by the US National Science Foundation PIRE program, Grant number 1743701, led by Prof. G Singh at Kansas State University. Beam time at the Berkeley Advanced Light Source was provided under an ALS Approved Program led by M. Czabaj. We thank P. Creveling, M. Czabaj, G. Morscher and D. Parkinson for assistance with in situ experiments at the Berkeley ALS.

Author information

Corresponding author

Correspondence to Aly Badran.

Ethics declarations

Conflict of interest

Authors affiliated with Object Research Systems (ORS) developed Dragonfly, the software package used in this work. The software is licensed commercially, at a cost to most industry licensees but at no cost to non-commercial users. The authors affiliated with the University of Colorado have no competing interests to declare.

Additional information

Handling Editor: Avinash Dongare.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (MPG 52820 kb)

Supplementary material 2 (MPG 19590 kb)

Supplementary material 3 (MPG 10713 kb)

Appendices

Appendix A: Deep-learning image segmentation by CNNs

Semantic image segmentation—the labeling of each pixel in an image according to the object it belongs to—was first applied to scientific imaging with the description of the U-Net architecture in 2015 [36], although non-scientific applications predate that work [40]. These network models are built as CNNs, in which image data are subdivided into patches and fed through a network of neurons arranged in sequential layers. Each neuron in a given layer receives input from neurons in the previous layer, transforms the signal, and passes the result to a set of neurons in the next layer. The signals are integrated through successive layers of the network, so that higher-order neurons can acquire remarkable discriminative value by selectively amplifying signals from previous layers. CNNs employ convolution operations in their first layer(s) of neurons. The coefficients of the convolution kernels are seeded randomly and then refined iteratively during the learning phase, in which the output of the CNN is compared with a manually segmented image (training). The learned weights of the different neurons confer the extreme selectivity that gives these networks their power. Like biological neural networks, these models can interpret texture observed in the image and use it to distinguish hallmarks of one visual object from another. The early neurons encode texture, but not because they are programmed to recognize specific patterns, edges, gradients, or other primitive image descriptors; rather, the kernel coefficients of those early-stage convolutional filters are learned through the reinforcement process of network training. Consequently, a convolutional kernel that conveys a meaningful, discriminating signal is preserved and up-weighted. CNNs are also used in object detection and other deep-learning-enabled computer vision techniques. Further discussion of CNNs can be found in Refs. [41, 42].
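
To make this structure concrete, the following minimal sketch shows how such a patch-based segmentation CNN can be assembled in Keras, the library underlying the implementation in Appendix B. It is illustrative only: the layer widths and class count are arbitrary choices, not the FCDenseNet or U-Net architectures used in this work.

    # Minimal illustrative patch-based segmentation CNN in Keras.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def tiny_segmentation_cnn(n_classes=4):
        # Fully convolutional, so any input size (with even dimensions) works;
        # during training it is fed small gray-scale patches (e.g., 64 x 64).
        inputs = layers.Input(shape=(None, None, 1))
        # Early convolutional layers: kernels start random and learn to encode
        # texture, edges, and gradients through training, as described above.
        x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
        x = layers.MaxPooling2D(2)(x)  # downsample to widen spatial context
        x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
        x = layers.UpSampling2D(2)(x)  # return to full resolution
        x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
        # One softmax probability vector per pixel = semantic segmentation.
        outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
        return models.Model(inputs, outputs)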

Appendix B: Implementation of deep learning

To make the deep-learning method easy to apply to a broad class of image segmentation problems, a general framework for executing both the training and inference stages of the deep-learning cycle was developed. Both the training and inference tools rely on the TensorFlow and Keras software libraries from Google (Mountain View, California, USA) [27]. These libraries have been integrated into Dragonfly, a desktop software platform for image manipulation and analysis (developed and licensed by Object Research Systems, Montreal, Canada), available at no cost under non-commercial licensing terms. Further details on the software integration and instructions for downloading Dragonfly, the software described in this paper, can be found on the main Object Research Systems website [23]. The trained deep-learning models, training images, and raw data used in this work, along with the CMC image data set, are available from the Materials Data Facility repository [43]. Other deep-learning models can be found in the Object Research Systems online tool-sharing community repository (Infinite Toolbox).

An important goal is to make the deep-learning solution accessible to non-experts while providing flexibility for experts who want to use the same system. Since the inference stage has no major runtime parameters, this goal of serving both classes of users is addressed in the interface through which users configure and train their models.

When setting up model training, the basic panel exposes standard parameters: patch size, stride ratio, batch size, optimization function, and loss function. The training for the microstructural segmentation in the "CMC microstructure segmentation" section was done using the FCDenseNet architecture [29, 30], with a patch size of 64 × 64 pixels, a stride ratio of 0.5, and a batch size of 16. The loss function ("categorical crossentropy") and optimization algorithm ("Adam") were the defaults of the Keras library. For the segmentation of matrix cracks, the U-Net CNN [36] was used with a patch size of 128 × 128 pixels, a stride-to-input ratio of 1, a batch size of 32, the loss function "categorical crossentropy," and the optimization function "Adadelta." Initially, a U-Net model was also trained for the microstructural segmentation. However, the inference results contained many errors, so a model with a deeper architecture, FCDenseNet, was deployed (at the cost of a much longer training time). The patch size was set to 64 × 64 pixels because the features of the microstructure, such as individual fibers, span small areas in the image. For the crack segmentation, a U-Net model proved adequate: the cracks span larger horizontal pixel areas in the longitudinal images, allowing a larger patch size (128 × 128 pixels). In both cases, default optimization functions from the Keras library were used. A sketch of how these settings map onto plain Keras calls is given below.
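
For orientation, the following sketch shows how the reported settings (patch sizes, batch sizes, loss and optimization functions) could be expressed as plain Keras calls. It reuses tiny_segmentation_cnn from Appendix A as a stand-in for the actual FCDenseNet and U-Net architectures, and the random arrays stand in for real CT patches and manual labels; this is not the Dragonfly implementation.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.optimizers import Adam, Adadelta

    # Stand-in data shaped like the training patches described above; replace
    # with real patches extracted from the CT slices and their manual labels.
    x64 = np.random.rand(160, 64, 64, 1).astype("float32")
    y64 = tf.keras.utils.to_categorical(
        np.random.randint(0, 4, (160, 64, 64)), num_classes=4)

    # Microstructure model: 64 x 64 patches, batch size 16, Adam optimizer,
    # categorical crossentropy loss (FCDenseNet in the actual work).
    micro_model = tiny_segmentation_cnn(n_classes=4)
    micro_model.compile(optimizer=Adam(), loss="categorical_crossentropy")
    micro_model.fit(x64, y64, batch_size=16, epochs=5)

    # Crack model: 128 x 128 patches, batch size 32, Adadelta optimizer
    # (U-Net in the actual work).
    x128 = np.random.rand(64, 128, 128, 1).astype("float32")
    y128 = tf.keras.utils.to_categorical(
        np.random.randint(0, 2, (64, 128, 128)), num_classes=2)
    crack_model = tiny_segmentation_cnn(n_classes=2)
    crack_model.compile(optimizer=Adadelta(), loss="categorical_crossentropy")
    crack_model.fit(x128, y128, batch_size=32, epochs=5)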

To support greater control, an optional advanced parameters panel exposes options for additional logging (including support for TensorBoard), fine-tuning parameters of the optimization function, conditions for early termination of training, and conditions for learning-rate reduction. These parameters are beyond the scope of non-experts but match many of the controls experts would have if they were programming their own solution directly with lower-level tools. The platform also integrates methods for data augmentation, which reduce the amount of raw training data that must be manually prepared, and for setting data aside for validation during the course of training. Previously trained models can be accessed for further iterations of training. A rough equivalent of these controls, in plain Keras, is sketched below.
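
As an assumed equivalent for illustration (standard Keras callbacks rather than Dragonfly's own interface), the kinds of controls described above could be written as:

    import tensorflow as tf
    from tensorflow.keras.callbacks import (EarlyStopping, ReduceLROnPlateau,
                                            TensorBoard)

    callbacks = [
        # Terminate training early once the validation loss stops improving.
        EarlyStopping(monitor="val_loss", patience=10,
                      restore_best_weights=True),
        # Reduce the learning rate when progress stalls.
        ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5),
        # Additional logging, viewable in TensorBoard.
        TensorBoard(log_dir="./logs"),
    ]

    # Simple geometric augmentation (flips/rotations) stretches the limited
    # manually segmented training data.
    augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
        rotation_range=90, horizontal_flip=True, vertical_flip=True)

    # Setting data aside for validation and training with the callbacks:
    # micro_model.fit(x64, y64, batch_size=16, epochs=100,
    #                 validation_split=0.2, callbacks=callbacks)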

In the inference stage, the deep-learning model behaves as a simple image transform engine that takes a single gray-scale image and returns a single segmented image. From the user's perspective, the trained deep-learning model behaves as a simple image filter. To simplify interaction, the interface allows selection from a library of trained models, each of which can be applied like any standard image filter. A preview mode allows the output of any trained model to be viewed on any single image or any sub-area of an image, permitting rapid assessment of whether a model is suitable for the application at hand.
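
A minimal sketch of that inference step, assuming a trained fully convolutional Keras model such as the example above (the Dragonfly interface wraps this behind its filter-style UI):

    import numpy as np

    def segment_slice(model, ct_slice):
        """Apply a trained model to one gray-scale slice: image in, labels out.

        Assumes even slice dimensions so that the 2x down/up-sampling in the
        example network round-trips to the original size.
        """
        x = ct_slice.astype("float32")[np.newaxis, ..., np.newaxis]  # (1, H, W, 1)
        probs = model.predict(x)              # (1, H, W, n_classes) softmax maps
        return np.argmax(probs[0], axis=-1)   # integer phase label per pixel

    # labels = segment_slice(micro_model, ct_slice)  # ct_slice: 2D numpy array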

Appendix C: Validation of deep learning

The general applicability of the deep-learning solution described here was validated by applying it to a set of serial-section transmission electron micrographs of Drosophila melanogaster neurons that has been used as an image segmentation challenge [44, 45]. These images contain many structural features (plasma membranes) that are difficult to discriminate with standard algorithms. The challenge micrographs comprise a stack of 30 unprocessed raw images and a stack of 20 manually segmented images to be used as training data for machine-learning technique development. A recent report documented the successful segmentation of these micrographs with a CNN called FusionNet [46]. The FusionNet architecture was reproduced here and implemented as a model in the new Dragonfly toolkit. Following training, a set of 10 validation slices was used to assess the inference quality. Visual inspection confirmed that proper segmentation of the plasma membranes was achieved.
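
The assessment here was by visual inspection; a quantitative check of the same inference quality could use an overlap metric such as the Dice score (cf. the Dice-based loss of Ref. [25]). A minimal sketch, assuming predicted and manual label arrays of the same shape:

    import numpy as np

    def dice_score(pred, truth, label=1):
        """Dice overlap between predicted and manual masks for one class."""
        p = (pred == label)
        t = (truth == label)
        inter = np.logical_and(p, t).sum()
        return 2.0 * inter / (p.sum() + t.sum() + 1e-9)  # 1.0 = perfect overlap

    # score = np.mean([dice_score(segment_slice(model, s), gt)
    #                  for s, gt in zip(validation_slices, validation_labels)])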

About this article

Cite this article

Badran, A., Marshall, D., Legault, Z. et al. Automated segmentation of computed tomography images of fiber-reinforced composites by deep learning. J Mater Sci 55, 16273–16289 (2020). https://doi.org/10.1007/s10853-020-05148-7

Download citation

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s10853-020-05148-7

Navigation