Robust Distributed Gradient Aggregation Using Projections onto Gradient Manifolds

Authors

  • Kwang In Kim, POSTECH

DOI:

https://doi.org/10.1609/aaai.v38i12.29214

Keywords:

ML: Distributed Machine Learning & Federated Learning, ML: Adversarial Learning & Robustness

Abstract

We study the distributed gradient aggregation problem, in which individual clients contribute to learning a central model by sharing parameter gradients constructed from local losses. However, errors in some gradients, caused by low-quality data or adversaries, can degrade the learning process when naively combined. Existing robust gradient aggregation approaches assume that local data represent the global data-generating distribution, an assumption that may not hold for heterogeneous (non-i.i.d.) client data. We propose a new algorithm that robustly aggregates gradients from potentially heterogeneous clients. Our approach exploits the manifold structure inherent in heterogeneous client gradients and scores the anomaly degree of each gradient by projecting it onto this manifold. The algorithm is implemented as a simple and efficient procedure that accumulates random projections within the subspace spanned by each gradient's nearest neighbors in the gradient cloud. Our experiments demonstrate consistent performance improvements over state-of-the-art robust aggregation algorithms.
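As a rough illustration of the projection idea sketched in the abstract, the following Python snippet scores each client gradient by its residual after projection onto the subspace spanned by its k nearest neighbors, then averages the lowest-residual gradients. This is not the paper's implementation: the function name aggregate_robust, the parameters k and keep_frac, and the use of an exact QR-based projection (in place of the paper's accumulated random projections within the neighbor subspace) are all illustrative assumptions.

import numpy as np

def aggregate_robust(grads, k=10, keep_frac=0.8):
    # Stack the client gradients into an (n, d) "gradient cloud".
    G = np.stack(grads)
    n = G.shape[0]
    # Pairwise squared distances; exclude each gradient from its
    # own neighborhood.
    d2 = ((G[:, None, :] - G[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    residuals = np.empty(n)
    for i in range(n):
        # Columns are the k nearest neighbors of gradient i; their
        # span is a local linear approximation of the gradient manifold.
        N = G[np.argsort(d2[i])[:k]].T
        # Orthonormal basis of the neighbor span, then an exact
        # projection. The paper instead accumulates random projections
        # within this subspace, approximating the same quantity cheaply.
        Q, _ = np.linalg.qr(N)
        residuals[i] = np.linalg.norm(G[i] - Q @ (Q.T @ G[i]))
    # A large residual means the gradient lies far from the local
    # manifold, i.e., a high anomaly degree; keep the rest and average.
    keep = np.argsort(residuals)[: max(1, int(keep_frac * n))]
    return G[keep].mean(axis=0)

For example, with 100 client gradients of which some fraction are corrupted, aggregate_robust(grads, k=10, keep_frac=0.8) averages the 80 gradients with the smallest projection residuals; both hyperparameter values here are placeholders, not values reported in the paper.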

Published

2024-03-24

How to Cite

Kim, K. I. (2024). Robust Distributed Gradient Aggregation Using Projections onto Gradient Manifolds. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13151-13159. https://doi.org/10.1609/aaai.v38i12.29214

Issue

Vol. 38 No. 12 (2024)

Section

AAAI Technical Track on Machine Learning III