
InArt: In-Network Aggregation with Route Selection for Accelerating Distributed Training

Published: 13 May 2024

ABSTRACT

Deep learning has brought about a revolutionary transformation in network applications, particularly in domains such as e-commerce and online advertising. Distributed training (DT), a critical means of expediting model training, has progressively become a key piece of foundational infrastructure for such applications. However, with the rapid advancement of hardware accelerators, the performance bottleneck in DT has shifted from computation to communication. In-network aggregation (INA) solutions have shown promise in alleviating this communication bottleneck. Regrettably, current INA solutions focus primarily on improving efficiency under the traditional parameter server (PS) architecture and do not fully address the bottleneck caused by limited PS ingress bandwidth. To bridge this gap, we propose InArt, the first work to introduce INA with route selection in a multi-PS architecture. InArt splits DT tasks among multiple PSs and selects appropriate routing schemes to fully harness INA capabilities. To accommodate traffic dynamics, InArt adopts a two-phase approach: splitting the training model among the parameter servers and selecting routing paths for INA. We propose a Lagrange-multiplier algorithm and a randomized-rounding algorithm for these two phases, respectively. We implement InArt and evaluate it through experiments on a physical platform (Tofino switches) and in Mininet emulation (P4 software switches). Experimental results show that InArt reduces communication time by 48%~57% compared with state-of-the-art solutions.
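
The route-selection phase above names randomized rounding, but the abstract gives no pseudocode. As a minimal, hypothetical sketch (the worker/switch/PS names, the path encoding, and the fractional weights are illustrative assumptions, not the paper's actual formulation), the following Python fragment shows the core idea: obtain fractional path weights from a relaxation of the routing problem, then sample one concrete INA path per worker with probability equal to its weight.

    import random

    # Hypothetical fractional solution from a relaxation of the route-selection
    # problem: each worker maps to candidate paths toward its assigned PS, with
    # fractional weights summing to 1. Names and values are illustrative only.
    frac_solution = {
        "worker1": {("worker1", "switch1", "ps1"): 0.7,
                    ("worker1", "switch2", "ps1"): 0.3},
        "worker2": {("worker2", "switch1", "ps2"): 0.4,
                    ("worker2", "switch3", "ps2"): 0.6},
    }

    def randomized_rounding(frac):
        """Sample one path per worker with probability equal to its weight."""
        chosen = {}
        for worker, path_weights in frac.items():
            paths = list(path_weights)             # candidate paths
            weights = list(path_weights.values())  # fractional weights
            chosen[worker] = random.choices(paths, weights=weights, k=1)[0]
        return chosen

    print(randomized_rounding(frac_solution))

In expectation, such sampling preserves the objective value of the fractional solution; in practice the rounding is typically repeated and the best feasible outcome kept, so as to bound violations of constraints such as link capacity.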


Supplemental Material

rfp0463.mp4: supplemental video (MP4, 35 MB)


Published in

WWW '24: Proceedings of the ACM on Web Conference 2024
May 2024, 4826 pages
ISBN: 9798400701719
DOI: 10.1145/3589334

Copyright © 2024 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


        Qualifiers

        • research-article

        Acceptance Rates

Overall Acceptance Rate: 1,899 of 8,196 submissions, 23%
