DOI: 10.1145/3632410.3632501

Extended Abstract

Exploiting Sparsity in Over-parameterized Federated Learning over Multiple Access Channels

Published: 4 January 2024

ABSTRACT

Federated Learning (FL) is a privacy-preserving distributed Machine Learning (ML) paradigm in which a global model is learned by aggregating local models trained on each edge user's local data. Realizing the benefits of FL is challenging in communication-constrained environments, such as traditional wireless uplinks with low-bandwidth links. A well-known FL protocol for such resource-constrained channels is FL over multiple access channels (FL-MAC), where edge users transmit their model parameters simultaneously, avoiding the need for orthogonal resources. Modern neural networks are over-parameterized, so in FL a large number of parameters must be communicated over the resource-constrained uplink between each edge user and the server. However, if such networks are trained in the lazy training regime (i.e., training follows neural tangent kernel (NTK) dynamics), the model weights change very little across gradient-descent epochs while the loss still converges exponentially. This motivates communicating incremental weight updates: such updates are highly sparse and therefore amenable to compressive sensing, enabling compressed communication of model updates. Accordingly, we propose Compressed FL-MAC (CFL-MAC), a framework in which local training at the clients is carried out on over-parameterized neural networks, but the model updates sent to the server are compressed before transmission. We empirically demonstrate that the proposed framework achieves good performance with a substantial saving in the number of communicated parameters.
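The abstract does not include the authors' implementation, but the pipeline it describes (sparse incremental updates from lazy-regime training, linear compression, superposition over the multiple access channel, sparse recovery at the server) can be sketched in a few lines of numpy. Everything below — the dimensions, the Gaussian measurement matrix, and the iterative-hard-thresholding recovery step — is an illustrative assumption, not the paper's CFL-MAC algorithm:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative sizes (assumptions, not from the paper):
d = 2_000     # model parameters (over-parameterized network)
m = 400       # compressed measurements per round, m << d
k = 5         # nonzeros per client's incremental update
n_clients = 5

# Shared random measurement matrix, known to both clients and the server.
A = rng.standard_normal((m, d)) / np.sqrt(m)

def client_update():
    """Stand-in for one round of lazy-regime local training: in the NTK
    regime the weights barely move, so the incremental update Δw is
    modeled here as a k-sparse vector with small entries."""
    delta = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)
    delta[support] = 1e-2 * rng.standard_normal(k)
    return delta

deltas = [client_update() for _ in range(n_clients)]

# Each client transmits its compressed update A @ Δw_i over the MAC; the
# channel superimposes the analog signals, so the server receives the sum
# (channel noise omitted for clarity).
y = sum(A @ delta for delta in deltas)

def iht(A, y, k, n_iters=300):
    """Iterative hard thresholding: recover a k-sparse x from y ≈ A @ x."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + A.T @ (y - A @ x)                      # gradient step
        support = np.argpartition(np.abs(x), -k)[-k:]  # keep k largest
        pruned = np.zeros_like(x)
        pruned[support] = x[support]
        x = pruned
    return x

# By linearity, y = A @ (Σ_i Δw_i): the server recovers the *aggregate*
# update directly; it is at most (k * n_clients)-sparse.
agg_hat = iht(A, y, k=k * n_clients)
agg_true = sum(deltas)
print("compression ratio:", d / m)
print("relative recovery error:",
      np.linalg.norm(agg_hat - agg_true) / np.linalg.norm(agg_true))
```

The sketch relies on the key property that makes compressive sensing compatible with FL-MAC: compression is a matrix multiply, so the over-the-air sum of compressed updates equals the compression of the summed update, and the server reconstructs the aggregate in a single sparse-recovery step. Reliable recovery requires roughly m = O(k log d) measurements, which is where the communication saving comes from when k << d.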


Published in:

CODS-COMAD '24: Proceedings of the 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD). January 2024, 627 pages.

Copyright © 2024 Owner/Author. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States.
