Predicting the Content of the Main Components of Gardeniae Fructus Praeparatus Based on Deep Learning

  • Original Paper
  • Published in Statistics in Biosciences

Abstract

Gardeniae Fructus (GF) and its stir-fried product, Gardeniae Fructus Praeparatus (GFP), are herbal medicines commonly used in traditional Chinese clinical practice. However, rapidly measuring the content of GFP's main components during processing remains challenging. In this paper, an MLP-based method for predicting the component content of GFP is proposed. Ten deep learning models, including CNN and Transformer architectures, are used to extract features from the constructed image dataset. Combined with the measured component content data, the extracted features are used to train an MLP regression model, and its performance is evaluated. The results demonstrate that the proposed method can be used for the rapid and nondestructive determination of the content of the main components of Chinese herbal pieces, and the study provides insights for similar work.
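
To make the pipeline concrete, the sketch below illustrates the two-stage approach described in the abstract: deep features are extracted from GFP images with a pretrained backbone, and an MLP regressor is then fitted against the measured component contents. This is a minimal sketch assuming a PyTorch (torchvision >= 0.13) and scikit-learn environment; the ResNet-50 backbone, the layer sizes, and the helper names extract_features and fit_content_mlp are illustrative placeholders rather than the authors' exact configuration.

import numpy as np
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Pretrained backbone with its classification head removed, so it emits a
# fixed-length feature vector per image (2048-d for ResNet-50).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()
backbone.eval()

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Return an (N, 2048) array of deep features for a list of image files."""
    feats = [backbone(preprocess(Image.open(p).convert("RGB")).unsqueeze(0))
             .squeeze(0).numpy() for p in image_paths]
    return np.stack(feats)

def fit_content_mlp(features, contents):
    """Fit an MLP regressor mapping image features to measured contents."""
    mlp = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=2000,
                       random_state=0)
    mlp.fit(features, contents)
    return mlp

# Hypothetical usage (image paths and measured contents come from the study's
# own dataset and chemical assays, which are not reproduced here):
#   X = extract_features(train_paths)
#   model = fit_content_mlp(X, train_contents)
#   print(r2_score(test_contents, model.predict(extract_features(test_paths))))

In the study itself, any of the ten candidate networks could play the role of the feature extractor, and the fitted regressor would be evaluated on held-out samples against the chemically measured component contents.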



Funding

This research was funded by the Scientific and Technological Innovation Project of the China Academy of Chinese Medical Sciences (No. CI2021A04204), the National Natural Science Foundation of China (Nos. 82173979 and 81873010), and the NATCM project for the traditional Chinese medicine processing technology inheritance base (National Science and Technology of Traditional Chinese Medicine-Chinese Materia Medica [2022] No. 59).

Author information

Corresponding authors

Correspondence to Pengle Cheng or Cun Zhang.

Ethics declarations

Competing interests

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence this work, and no professional or other personal interest of any nature or kind in any product, service, or company.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wang, C., Wang, Y., Cheng, P. et al. Predicting the Content of the Main Components of Gardeniae Fructus Praeparatus Based on Deep Learning. Stat Biosci (2024). https://doi.org/10.1007/s12561-024-09421-0

  • DOI: https://doi.org/10.1007/s12561-024-09421-0
