Abstract
Federated learning (FL) has been widely deployed in edge computing scenarios. However, FL-related technologies still face severe challenges even as they evolve rapidly. Among them, statistical heterogeneity (i.e., non-IID data) seriously hinders the wide deployment of FL. In this work, we propose FedPrune, a new framework for communication-efficient and personalized federated learning. More specifically, under the proposed framework, each client first trains a model to convergence locally to identify critical parameters and a substructure, which then guide the pruning of the network participating in FL. FedPrune achieves high accuracy while greatly reducing communication overhead. Moreover, each client learns a personalized model in FedPrune. Experimental results demonstrate that FedPrune achieves the best accuracy on image recognition tasks with varying degrees of reduced communication cost compared to three baseline methods.
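The core idea described above, using a locally converged model to identify critical parameters that guide pruning of the network shared in FL, can be illustrated with a minimal magnitude-based masking sketch. This is an assumption-laden illustration, not the paper's actual algorithm: the names `prune_mask` and `keep_ratio` are hypothetical, and FedPrune's real importance criterion and substructure selection may differ.

```python
import numpy as np

def prune_mask(weights, keep_ratio=0.5):
    """Binary mask keeping the largest-magnitude weights.

    Hypothetical sketch of importance-guided pruning: magnitude is used
    as a stand-in for whatever criterion the paper actually employs.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_ratio))
    threshold = np.partition(flat, -k)[-k]  # k-th largest magnitude
    return (np.abs(weights) >= threshold).astype(weights.dtype)

# Each client derives a mask from its locally converged model and would
# only communicate the surviving (masked) parameters to the server,
# shrinking the upload in proportion to the pruned fraction.
rng = np.random.default_rng(0)
local_weights = rng.normal(size=(4, 4))
mask = prune_mask(local_weights, keep_ratio=0.25)
upload = local_weights * mask  # sparse update: only 25% of entries survive
```

Because each client builds its mask from its own non-IID local data, the retained substructure differs per client, which is one way a personalized model can emerge from a shared global one.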
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Liu, Y., Zhao, Y., Zhou, G., Xu, K. (2021). FedPrune: Personalized and Communication-Efficient Federated Learning on Non-IID Data. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Communications in Computer and Information Science, vol 1516. Springer, Cham. https://doi.org/10.1007/978-3-030-92307-5_50
DOI: https://doi.org/10.1007/978-3-030-92307-5_50
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-92306-8
Online ISBN: 978-3-030-92307-5
eBook Packages: Computer Science (R0)