ABSTRACT
Deep learning has brought about a revolutionary transformation in network applications, particularly in domains such as e-commerce and online advertising. Distributed training (DT), a critical means of expediting model training, has progressively become a key piece of infrastructure for such applications. However, with the rapid advancement of hardware accelerators, the performance bottleneck in DT has shifted from computation to communication. In-network aggregation (INA) solutions have shown promise in alleviating this communication bottleneck. Regrettably, current INA solutions primarily focus on improving efficiency under the traditional parameter server (PS) architecture and do not fully address the bottleneck caused by limited PS ingress bandwidth. To bridge this gap, we propose InArt, the first work to introduce INA with route selection in a multi-PS architecture. InArt splits DT tasks among multiple PSs and selects appropriate routing schemes to fully harness INA capabilities. To accommodate traffic dynamics, InArt adopts a two-phase approach: splitting the training model among multiple parameter servers and selecting routing paths for INA. We propose a Lagrange multiplier algorithm and a randomized rounding algorithm for these two phases, respectively. We implement InArt and evaluate its performance through experiments on a physical platform (Tofino switches) and Mininet emulation (P4 software switches). Experimental results show that InArt reduces communication time by 48% to 57% compared with state-of-the-art solutions.
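For intuition only, the snippet below sketches the kind of randomized rounding the second phase (routing-path selection) relies on: a fractional solution from a relaxed optimization assigns each worker a probability distribution over candidate aggregation paths, and one path is sampled per worker according to those fractions. The abstract does not give InArt's actual formulation, so the input format, variable names, and normalization step here are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch: randomized rounding for routing-path selection.
# `frac_solution` is a hypothetical stand-in for the fractional LP/relaxation
# output; it is NOT InArt's real data structure.
import random

def randomized_round(frac_solution):
    """Pick one routing path per worker from a fractional solution.

    frac_solution: dict mapping worker -> {path_id: fractional value},
    where the fractions for each worker sum to (approximately) 1.
    Returns a dict mapping worker -> selected path_id.
    """
    selection = {}
    for worker, path_fracs in frac_solution.items():
        paths = list(path_fracs.keys())
        weights = list(path_fracs.values())
        total = sum(weights)
        # Normalize in case the relaxed solution sums to slightly != 1.
        probs = [w / total for w in weights]
        # Sample a single path with probability proportional to its fraction.
        selection[worker] = random.choices(paths, weights=probs, k=1)[0]
    return selection

# Toy usage: two workers, each with candidate in-network aggregation paths.
frac = {
    "worker0": {"path_a": 0.7, "path_b": 0.3},
    "worker1": {"path_a": 0.4, "path_c": 0.6},
}
print(randomized_round(frac))
```

In expectation, the rounded selection preserves the load that the fractional solution places on each path, which is the usual rationale for this style of rounding; the paper's specific guarantees would follow from its own constraints and analysis.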