Algorithmic Foundation of Federated Learning with Sequential Data

Authors

  • Mingrui Liu, George Mason University

DOI:

https://doi.org/10.1609/aaai.v38i20.30291

Keywords:

Optimization, Federated Learning, Algorithmic Foundation, Sequential Data

Abstract

Existing analyses of federated optimization algorithms for training deep neural networks assume that the data is non-sequential (e.g., images), which yields a smooth loss objective. In contrast, edge devices generate large amounts of sequential data every day, and these sequences exhibit significant correlation across time stamps (e.g., text messages). To learn from such sequential data, people typically use a class of neural networks that is inherently nonsmooth, with a potentially unbounded smoothness parameter; examples include recurrent neural networks, long short-term memory networks, and transformers. It remains unclear how to design provably efficient algorithms for training these neural networks to learn from sequential data. My goal is to lay the algorithmic foundation of federated learning with sequential data, contributing novel algorithms for learning from a range of real-world sequential data (e.g., natural language, electronic health records, transportation, time series) using state-of-the-art deep neural networks. In this talk, I will first motivate the problem by showing that the transformer, which is widely used for sequential data learning, has a loss landscape with unbounded smoothness. Then, I will introduce provably efficient federated deep learning algorithms in the presence of unbounded smoothness. In particular, I will present efficient algorithms for several federated learning settings, including homogeneous data, heterogeneous data, and partial client participation. The main results are twofold. First, we show that the designed algorithms provably achieve small computational and communication complexities. Second, we establish fundamental hardness results in the unbounded smoothness setting. Finally, I will discuss the challenges of extending our research framework from small-scale neural networks to large language models.
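
The abstract does not state the formal definition used in the talk, but in the relaxed-smoothness literature "unbounded smoothness" is commonly formalized by the (L_0, L_1)-smoothness condition of Zhang et al. (ICLR 2020): a twice-differentiable loss f satisfies, for all x,

    \| \nabla^2 f(x) \| \le L_0 + L_1 \, \| \nabla f(x) \|

When L_1 > 0, the effective smoothness constant grows with the gradient norm and is therefore unbounded over the domain; gradient clipping and normalization are the standard algorithmic responses in this regime. For illustration only, and not the specific algorithms presented in the talk, the minimal sketch below shows a FedAvg-style round in which each client runs clipped local SGD; all names (federated_round, clip, the threshold tau) are hypothetical.

    import numpy as np

    def clip(g, tau):
        # Scale the gradient down to norm tau when its norm exceeds tau.
        norm = np.linalg.norm(g)
        return g if norm <= tau else (tau / norm) * g

    def local_updates(w, grad_fn, data, lr, tau, num_steps):
        # Clipped SGD on one client's local data.
        w = w.copy()
        for _ in range(num_steps):
            w -= lr * clip(grad_fn(w, data), tau)
        return w

    def federated_round(w, clients, grad_fn, lr, tau, num_steps):
        # Each participating client updates locally; the server
        # averages the resulting models (FedAvg-style aggregation).
        models = [local_updates(w, grad_fn, d, lr, tau, num_steps) for d in clients]
        return np.mean(models, axis=0)

    # Toy usage: heterogeneous least-squares clients, one (A, b) pair per client.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(4)]
    grad_fn = lambda w, d: d[0].T @ (d[0] @ w - d[1]) / len(d[1])
    w = np.zeros(5)
    for _ in range(50):
        w = federated_round(w, clients, grad_fn, lr=0.1, tau=1.0, num_steps=5)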

Published

2024-03-24

How to Cite

Liu, M. (2024). Algorithmic Foundation of Federated Learning with Sequential Data. Proceedings of the AAAI Conference on Artificial Intelligence, 38(20), 22675-22675. https://doi.org/10.1609/aaai.v38i20.30291