
Weight-adaptive channel pruning for CNNs based on closeness-centrality modeling


Abstract

Neural network pruning is an effective means of reducing the resource requirements for deploying deep convolutional models. Recent pruning techniques concentrate on eliminating less important or redundant channels from the network. However, these criteria can conflict: a filter judged important by an importance-based method may be regarded as redundant by a similarity-based one, which calls the correctness of some existing methods into question. In this paper, a novel pruning approach, entitled weight-adaptive channel pruning (WACP), is presented to address this problem. Our approach takes full advantage of feature similarity information instead of simply treating similar features as redundant. Specifically, we first show that there is a stable similarity relationship between different output features, independent of the batch size of the input images. Then, based on this similarity information, we propose a weight-adaptive compensation strategy to minimize the performance loss caused by pruning. Moreover, we design a novel channel pruning algorithm that determines which features should be retained from a set of similar features by introducing closeness centrality from graph theory. Extensive experiments demonstrate the effectiveness of the proposed WACP for compressing networks. The comparison results show that WACP achieves state-of-the-art performance on several benchmark networks and datasets, even at very high compression rates. For example, WACP improves accuracy by 0.46% while reducing FLOPs by 52.2% and parameters by 43.5% with ResNet-56 on CIFAR-10. For ResNet-50 on ImageNet, WACP prunes more than 55% of FLOPs with only a 0.70%/0.42% decline in top-1/top-5 accuracy. The code is available at https://github.com/lsianke/WACP.
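To make the abstract concrete, below is a minimal, self-contained sketch of the three steps it describes: measuring channel-wise output-feature similarity, selecting which channels to retain via closeness centrality on a similarity graph, and compensating the weights of the next layer for the pruned channels. All names, the cosine-similarity measure, the edge threshold, and the linear form f_pruned ≈ a · f_kept assumed for the compensation are illustrative assumptions on our part, not the authors' released implementation (see the linked repository for that).

```python
import torch
import networkx as nx


def channel_similarity(feats: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between the channels of a (N, C, H, W) feature batch."""
    c = feats.shape[1]
    flat = feats.permute(1, 0, 2, 3).reshape(c, -1)   # one row per channel
    flat = torch.nn.functional.normalize(flat, dim=1)
    return flat @ flat.t()                            # (C, C), entries in [-1, 1]


def select_by_closeness(sim: torch.Tensor, threshold: float = 0.9,
                        keep_ratio: float = 0.5) -> list:
    """Connect channels whose similarity exceeds `threshold`, then keep the
    channels with the highest closeness centrality (illustrative criterion)."""
    c = sim.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(c))
    for i in range(c):
        for j in range(i + 1, c):
            if sim[i, j].item() > threshold:
                g.add_edge(i, j)
    centrality = nx.closeness_centrality(g)           # isolated nodes get 0.0
    ranked = sorted(range(c), key=lambda k: centrality[k], reverse=True)
    return sorted(ranked[: max(1, int(c * keep_ratio))])


def compensate_next_layer(weight: torch.Tensor, feats: torch.Tensor,
                          kept: list, sim: torch.Tensor) -> torch.Tensor:
    """Fold each pruned channel into its most similar kept channel, assuming
    f_pruned ≈ a · f_kept with a least-squares scale a (our reading of the
    'weight-adaptive compensation'). `weight` is the (C_out, C_in, k, k)
    kernel of the layer that consumes these features."""
    c = sim.shape[0]
    flat = feats.permute(1, 0, 2, 3).reshape(c, -1)
    w = weight.clone()
    kept_t = torch.tensor(kept)
    for j in range(c):
        if j in kept:
            continue
        i = kept_t[sim[j, kept_t].argmax()].item()    # closest retained channel
        a = (flat[j] @ flat[i]) / (flat[i] @ flat[i]).clamp_min(1e-8)
        w[:, i] += a * w[:, j]                        # absorb the pruned channel
    return w[:, kept_t]                               # input channels now pruned
```

On a toy batch the sketch runs end to end:

```python
feats = torch.randn(8, 16, 32, 32)                    # toy (N, C, H, W) features
sim = channel_similarity(feats)
kept = select_by_closeness(sim, threshold=0.5)
new_w = compensate_next_layer(torch.randn(32, 16, 3, 3), feats, kept, sim)
print(len(kept), tuple(new_w.shape))                  # e.g. 8 (32, 8, 3, 3)
```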


Data Availability

The code is available at https://github.com/lsianke/WACP.


Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61976246 and U20A20227), the Natural Science Foundation of Chongqing (Grant No. CSTB2023NSCQ-MSX0018), and the Fundamental Research Funds for the Central Universities (Grant No. SWU-KR22046).

Author information


Contributions

Zhao Dong: Conceptualization, Methodology, Software, Data curation, Writing – original draft. Yuanzhi Duan: Methodology, Software, Writing – reviewing. Yue Zhou: Writing – reviewing & editing. Shukai Duan: Supervision. Xiaofang Hu: Validation, Formal analysis, Investigation, Funding acquisition.

Corresponding author

Correspondence to Xiaofang Hu.

Ethics declarations

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethical and informed consent for data used

The relevant datasets are publicly available, and the data used in this article do not raise ethical concerns.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Dong, Z., Duan, Y., Zhou, Y. et al. Weight-adaptive channel pruning for CNNs based on closeness-centrality modeling. Appl Intell 54, 201–215 (2024). https://doi.org/10.1007/s10489-023-05164-5

