Bringing UMAP Closer to the Speed of Light with GPU Acceleration

Authors

  • Corey J. Nolet (NVIDIA; University of Maryland, Baltimore County)
  • Victor Lafargue (NVIDIA)
  • Edward Raff (University of Maryland, Baltimore County; Booz Allen Hamilton)
  • Thejaswi Nanditale (NVIDIA)
  • Tim Oates (University of Maryland, Baltimore County)
  • John Zedlewski (NVIDIA)
  • Joshua Patterson (NVIDIA)

DOI:

https://doi.org/10.1609/aaai.v35i1.16118

Keywords:

Software Engineering, Dimensionality Reduction/Feature Selection, Learning with Manifolds, Scalability of ML Systems

Abstract

The Uniform Manifold Approximation and Projection (UMAP) algorithm has become widely popular for its ease of use, quality of results, and support for exploratory, unsupervised, supervised, and semi-supervised learning. While many algorithms can be ported to a GPU in a simple and direct fashion, such efforts have resulted in inefficient and inaccurate versions of UMAP. We show a number of techniques that can be used to make a faster and more faithful GPU version of UMAP, and obtain speedups of up to 100x in practice. Many of these design choices and lessons are general purpose and may inform the conversion of other graph and manifold learning algorithms to GPUs. Our implementation has been made publicly available as part of the open-source RAPIDS cuML library (https://github.com/rapidsai/cuml).
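
Because the implementation ships with RAPIDS cuML, it can be used as a drop-in, GPU-accelerated counterpart to umap-learn. The sketch below is illustrative only (it assumes cuML and CuPy are installed; the dataset and parameter values are placeholders, not taken from the paper):

    # Minimal usage sketch of the GPU-accelerated UMAP in RAPIDS cuML.
    # The random matrix below stands in for a real feature matrix.
    import cupy as cp
    from cuml.manifold import UMAP

    X = cp.random.random((10000, 64)).astype(cp.float32)  # rows = samples

    # Parameters mirror the familiar umap-learn interface; values are illustrative.
    reducer = UMAP(n_neighbors=15, n_components=2, min_dist=0.1)
    embedding = reducer.fit_transform(X)  # 2-D embedding computed on the GPU
    print(embedding.shape)  # (10000, 2)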

Published

2021-05-18

How to Cite

Nolet, C. J., Lafargue, V., Raff, E., Nanditale, T., Oates, T., Zedlewski, J., & Patterson, J. (2021). Bringing UMAP Closer to the Speed of Light with GPU Acceleration. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1), 418-426. https://doi.org/10.1609/aaai.v35i1.16118

Section

AAAI Technical Track on Application Domains