
Efficient and accurate detection of herd pigs based on Ghost-YOLOv7-SIoU

  • S.I.: Machine Learning and Big Data Analytics for IoT Security and Privacy (SPIoT 2022)
  • Published in: Neural Computing and Applications

Abstract

Computer vision methods for non-contact detection of herd pigs could help detect disease early and reduce mortality rates by analyzing pig behavior. Because breeding space and cost are limited, pigs are typically housed at high density, which makes continuous, uninterrupted detection of every pig difficult. To improve detection performance, this paper proposes Ghost-YOLOv7-SIoU, an end-to-end, efficient, and accurate herd-pig detection framework based on the YOLOv7 object detection model. In this framework, the feature extraction backbone consists of a series of directly connected efficient layer aggregation network (ELAN) and downsampling modules, while the neck combines a feature pyramid network with a path aggregation network. Ghost convolution replaces the \(3 \times 3\) standard convolutions of the ELAN modules in the backbone and of the scaled-up ELAN modules in the neck, yielding rich features while reducing the number of parameters and the computational cost. Furthermore, to speed up convergence and improve robustness and accuracy, the SIoU loss is used for bounding-box regression during training. On the VOC2012 dataset, the number of parameters and floating-point operations decreased by 13.4% and 15.7%, respectively, compared to YOLOv7, with comparable detection accuracy; on our pig dataset, they decreased by 13.7% and 16.1%. Ghost-YOLOv7-SIoU also outperforms YOLOv4-CSP and YOLOR-CSP in accuracy. Experimental results demonstrate that the proposed method improves detection efficiency while maintaining detection accuracy.
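
Source code is not provided on this page, so the following is a minimal PyTorch sketch of the kind of Ghost convolution block (in the spirit of Han et al.'s GhostNet) that could stand in for a \(3 \times 3\) standard convolution inside an ELAN module. The class name, the SiLU activation, and the ratio of 2 between primary and "ghost" channels are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical Ghost convolution block: a standard conv produces the "primary"
# feature maps, and a cheap depthwise conv derives the remaining "ghost" maps.
import torch
import torch.nn as nn


class GhostConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, ratio=2):
        super().__init__()
        primary_ch = out_ch // ratio          # intrinsic feature maps
        cheap_ch = out_ch - primary_ch        # "ghost" feature maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size, stride,
                      kernel_size // 2, bias=False),
            nn.BatchNorm2d(primary_ch),
            nn.SiLU(inplace=True),            # SiLU is an assumption (YOLOv7-style)
        )
        self.cheap = nn.Sequential(           # depthwise 5x5 "cheap operation"
            nn.Conv2d(primary_ch, cheap_ch, 5, 1, 2,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(cheap_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


# Example: drop-in for a 3x3 Conv(256 -> 256) inside an ELAN block
x = torch.randn(1, 256, 40, 40)
print(GhostConv(256, 256)(x).shape)  # torch.Size([1, 256, 40, 40])
```

The saving comes from generating roughly half of the output channels with a cheap depthwise convolution instead of a full standard convolution, which is the mechanism behind the reported reductions in parameters and floating-point operations.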


Data availability

Data sharing is not applicable to this article as no new data were created or analyzed in this study.


Funding

This research was funded by the Science and Technology Project of Hebei Education Department (grant no. QN2022078).

Author information


Corresponding author

Correspondence to Lijuan Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sun, D., Zhang, L., Wang, J. et al. Efficient and accurate detection of herd pigs based on Ghost-YOLOv7-SIoU. Neural Comput & Applic 36, 2339–2352 (2024). https://doi.org/10.1007/s00521-023-09093-9

