PerFedRLNAS: One-for-All Personalized Federated Neural Architecture Search

Authors

  • Dixi Yao, University of Toronto
  • Baochun Li, University of Toronto

DOI:

https://doi.org/10.1609/aaai.v38i15.29576

Keywords:

ML: Distributed Machine Learning & Federated Learning, ML: Auto ML and Hyperparameter Tuning

Abstract

Personalized federated learning is a new paradigm for addressing heterogeneity problems (e.g., issues with non-i.i.d. data) in federated learning. However, existing personalized federated learning methods lack standards for how the personalized and shared parts of the models should be designed. Sometimes, manual design can even lead to worse performance than no personalization at all. As a result, we propose a new algorithm for personalized federated neural architecture search, called PerFedRLNAS, to automatically personalize the architectures and weights of the models on each client. With this algorithm, we can solve the problems of low efficiency and failure to adapt to new search spaces that affect previous federated neural architecture search work. We further show that automatically assigning different architectures to different clients can address the heterogeneity of data distribution, efficiency, and memory in federated learning. In our experiments, we empirically show that our framework achieves much better performance with respect to personalized accuracy and overall time compared to state-of-the-art methods. Furthermore, PerFedRLNAS generalizes well to new clients and is easy to deploy in practice.
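To make the idea of automatically assigning each client its own architecture more concrete, the toy sketch below shows one way an RL-style policy on the server could sample a different architecture per client and update itself from client feedback. This is only an illustrative sketch under simplified assumptions; all names (num_layers, candidate_ops, client_feedback, the cost and accuracy models) are hypothetical and do not reflect the authors' actual PerFedRLNAS implementation or search space.

```python
# Illustrative sketch: a server-side policy samples a per-client architecture
# from a toy search space and is updated with a REINFORCE-style rule using the
# reward reported by each client. Hypothetical names; not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

num_layers = 4                                         # depth of the toy search space
candidate_ops = ["conv3x3", "conv5x5", "skip"]         # per-layer operation choices
logits = np.zeros((num_layers, len(candidate_ops)))    # policy parameters
lr = 0.5                                               # policy learning rate

def sample_architecture():
    """Sample one operation index per layer from the softmax policy."""
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    choice = np.array([rng.choice(len(candidate_ops), p=p) for p in probs])
    return choice, probs

def client_feedback(choice, memory_budget):
    """Stand-in for local training: reward = toy accuracy minus a resource penalty."""
    cost = np.sum(choice == 0) * 1.0 + np.sum(choice == 1) * 2.0   # toy cost model
    accuracy = rng.uniform(0.6, 0.9) + 0.02 * np.sum(choice != 2)  # toy accuracy model
    penalty = max(0.0, cost - memory_budget) * 0.1
    return accuracy - penalty

# One communication round over clients with heterogeneous memory budgets.
for memory_budget in [2.0, 4.0, 8.0]:
    choice, probs = sample_architecture()
    reward = client_feedback(choice, memory_budget)
    # REINFORCE-style update: push probability mass toward well-rewarded choices.
    for layer, op in enumerate(choice):
        grad = -probs[layer]
        grad[op] += 1.0
        logits[layer] += lr * reward * grad
    ops = [candidate_ops[i] for i in choice]
    print(f"budget={memory_budget}: ops={ops}, reward={reward:.3f}")
```

In this simplified setting, clients with tighter memory budgets penalize expensive operations more heavily, so the sampled architectures naturally diverge across clients, which is the intuition behind personalizing architectures rather than only weights.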

Published

2024-03-24

How to Cite

Yao, D., & Li, B. (2024). PerFedRLNAS: One-for-All Personalized Federated Neural Architecture Search. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16398-16406. https://doi.org/10.1609/aaai.v38i15.29576

Section

AAAI Technical Track on Machine Learning VI