
YOLO-CEA: a real-time industrial defect detection method based on contextual enhancement and attention

Published in: Cluster Computing

Abstract

This paper proposes a real-time industrial defect detection method based on contextual enhancement and attention, addressing the difficulty that current general-purpose object detectors rarely achieve high detection accuracy and fast detection speed simultaneously. First, a modified MobileNetV3 is used as the backbone network to reduce the number of parameters and improve detection speed, and a lightweight TRANS module at the end of the backbone combines multi-layer features with global contextual information for detecting small targets against complex backgrounds. Second, a cross-layer multi-scale feature fusion network is designed to fully fuse the fine-grained and semantic feature information extracted by the backbone and to enhance the spatial location information between neighboring feature layers. Finally, a cascaded Two-channel Efficient Space attention module extracts texture and semantic features from the defective regions, allowing the model to focus on defect locations and improving the feature representation capability of the network. The NEU-DET steel and PCB datasets are used to test the effectiveness of the proposed model. The experimental results show that, compared to the original YOLOv5s algorithm, mAP improves by 5.9% and 0.6% and F1 by 4.82% and 0.93% on the two datasets, respectively, while the parameter count is reduced by 33.77 M, enabling fast detection of industrial surface defects and meeting the needs of industrial applications.
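The parameter reduction attributed to the MobileNetV3-style backbone comes largely from its use of depthwise-separable convolutions, which factor a standard convolution into a per-channel depthwise step and a 1x1 pointwise step. A minimal sketch of the parameter-count comparison (the layer sizes below are illustrative, not taken from the paper):

```python
# Parameter counts for a standard convolution vs. a depthwise-separable
# convolution, the factorization used by MobileNet-family backbones.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """k x k convolution mapping c_in channels to c_out channels (no bias)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k conv (one k x k filter per input channel)
    followed by a 1 x 1 pointwise conv mixing channels."""
    depthwise = k * k * c_in      # spatial filtering, channel-wise
    pointwise = c_in * c_out      # channel mixing
    return depthwise + pointwise

if __name__ == "__main__":
    k, c_in, c_out = 3, 64, 128   # illustrative layer sizes
    std = standard_conv_params(k, c_in, c_out)        # 73728
    sep = depthwise_separable_params(k, c_in, c_out)  # 8768
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this illustrative layer the factorization cuts parameters by roughly 8x, which is the kind of saving that compounds across a full backbone.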



Data availability

The data used to support the findings of this study are available from the corresponding author upon request.


Acknowledgements

The authors would like to thank all the anonymous reviewers for their insightful comments and constructive suggestions.

Funding

This work was supported by the Taishan Scholars Program (No. tsqn202103097) and the Key R and D Plan of Shandong Province (Soft Science Project) (2022RZB02012).

Author information

Authors and Affiliations

Authors

Contributions

Project administration, GL; data curation and writing (original draft), SZ; writing (review and editing), ML; funding acquisition, MZ.

Corresponding author

Correspondence to Gang Li.

Ethics declarations

Conflict of interest

The authors declare that there are no conflicts of interest regarding this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhao, S., Li, G., Zhou, M. et al. YOLO-CEA: a real-time industrial defect detection method based on contextual enhancement and attention. Cluster Comput 27, 2329–2344 (2024). https://doi.org/10.1007/s10586-023-04079-7

