
Accelerating Computer Vision Tasks on GPUs using Ramanujan Graph Product Framework

Published: 4 January 2023 · DOI: 10.1145/3570991.3571044

ABSTRACT

Sparse neural networks have been shown to achieve better runtimes than dense neural networks, and runtime acceleration is best achieved with structured sparsity. However, designing a sparsity structure that preserves both runtime and accuracy is challenging. In this paper, we implement the RBGP4 sparsity pattern derived from the Ramanujan Bipartite Graph Product (RBGP) framework on various computer vision tasks and evaluate how well it performs with respect to accuracy and runtime. Using this approach, we generate structured sparse neural networks that have multiple levels of block sparsity and good connectivity, owing to the presence of Ramanujan bipartite graphs. We benchmark our approach on semantic segmentation and pose estimation tasks on an edge device (Jetson Nano 2GB) as well as a server GPU (V100). Compared to unstructured and block sparsity patterns, RBGP4 obtains significant speedups while maintaining accuracy.
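To make the idea concrete, below is a minimal illustrative sketch, not the authors' implementation, of how a multi-level block-sparse mask can be built as a product of bipartite graphs and applied to a layer. Random d-regular bipartite graphs stand in for Ramanujan graphs here (random regular graphs are near-Ramanujan with high probability), and the two-factor Kronecker product, the mask sizes, and the degrees are illustrative assumptions rather than the RBGP4 construction itself.

    import numpy as np
    import torch
    import torch.nn as nn

    def random_regular_biadjacency(n, degree, rng):
        # Biadjacency matrix of a random d-regular bipartite graph on n + n
        # vertices, built as a union of `degree` random perfect matchings.
        # (Matchings may overlap, so some vertex degrees can fall slightly
        # below `degree`; that is acceptable for an illustration.)
        adj = np.zeros((n, n), dtype=np.float32)
        for _ in range(degree):
            perm = rng.permutation(n)
            adj[np.arange(n), perm] = 1.0
        return adj

    rng = np.random.default_rng(0)
    # Two small factor graphs; their Kronecker product yields a 64x64 mask
    # with two levels of block sparsity: the outer graph decides which 8x8
    # blocks are nonzero, the inner graph gives the pattern inside each block.
    outer = random_regular_biadjacency(8, degree=2, rng=rng)
    inner = random_regular_biadjacency(8, degree=4, rng=rng)
    mask = torch.from_numpy(np.kron(outer, inner))  # overall density <= 1/8

    layer = nn.Linear(64, 64, bias=False)
    with torch.no_grad():
        layer.weight.mul_(mask)  # zero out pruned connections
    # During sparse training, the mask is reapplied after every optimizer
    # step (or enforced via a gradient hook) so pruned weights stay zero.

Note that the dense masked layer above only mimics the connectivity pattern; the GPU speedups reported in the paper come from executing only the nonzero blocks with block-sparse kernels rather than masking a dense matrix.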


Published in

CODS-COMAD '23: Proceedings of the 6th Joint International Conference on Data Science & Management of Data (10th ACM IKDD CODS and 28th COMAD)
January 2023, 357 pages
ISBN: 9781450397971
DOI: 10.1145/3570991
Copyright © 2023 ACM
Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: short paper · research · refereed limited

Overall Acceptance Rate: 197 of 680 submissions, 29%