Towards Real-Time Segmentation on the Edge

Authors

  • Yanyu Li Northeastern University
  • Changdi Yang Northeastern University
  • Pu Zhao Northeastern University
  • Geng Yuan Northeastern University
  • Wei Niu College of William & Mary
  • Jiexiong Guan College of William & Mary
  • Hao Tang CVL, ETH Zurich
  • Minghai Qin Northeastern University
  • Qing Jin Northeastern University
  • Bin Ren College of William & Mary
  • Xue Lin Northeastern University
  • Yanzhi Wang Northeastern University

DOI:

https://doi.org/10.1609/aaai.v37i2.25232

Keywords:

CV: Segmentation, ML: Auto ML and Hyperparameter Tuning, ML: Learning on the Edge & Model Compression

Abstract

Research on real-time segmentation has mainly focused on desktop GPUs. However, autonomous driving and many other applications rely on real-time segmentation on the edge, and current methods fall far short of this goal. In addition, recent advances in vision transformers inspire us to re-design the network architecture for dense prediction tasks. In this work, we propose to combine the self-attention block with lightweight convolutions to form new building blocks, and employ latency constraints to search for an efficient sub-network. We train an MLP latency model on generated architecture configurations and their latencies measured on mobile devices, so that we can predict the latency of subnets during the search phase. To the best of our knowledge, we are the first to achieve over 74% mIoU on Cityscapes with semi-real-time inference (over 15 FPS) on the mobile GPU of an off-the-shelf phone.
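The MLP latency model described above can be sketched as a small regressor that maps an encoded architecture configuration to a predicted on-device latency. The snippet below is a minimal illustrative sketch, not the paper's implementation: the feature encoding (`encode_config`), hidden size, learning rate, and the synthetic training data are all assumptions, standing in for real (configuration, measured-latency) pairs collected from a phone.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_config(widths, depths, use_attention):
    """Hypothetical encoding: flatten one subnet configuration
    (per-stage channel widths, block depths, attention on/off flags)
    into a normalized feature vector."""
    return np.concatenate([
        np.asarray(widths, dtype=float) / 256.0,
        np.asarray(depths, dtype=float) / 8.0,
        np.asarray(use_attention, dtype=float),
    ])

class MLPLatencyModel:
    """One-hidden-layer MLP latency regressor, trained with
    plain batch gradient descent on mean-squared error."""
    def __init__(self, in_dim, hidden=64, lr=1e-2):
        self.w1 = rng.normal(0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, x):
        self.h = np.maximum(x @ self.w1 + self.b1, 0.0)  # ReLU hidden layer
        return self.h @ self.w2 + self.b2                # predicted latency

    def train_step(self, x, y):
        pred = self.forward(x)
        err = pred - y[:, None]                          # (N, 1) residuals
        # Backpropagate the MSE loss through both layers.
        g2 = self.h.T @ err / len(x)
        gh = (err @ self.w2.T) * (self.h > 0)
        g1 = x.T @ gh / len(x)
        self.w2 -= self.lr * g2; self.b2 -= self.lr * err.mean(0)
        self.w1 -= self.lr * g1; self.b1 -= self.lr * gh.mean(0)
        return float((err ** 2).mean())

# Synthetic stand-in for measured data: each row of X is one encoded
# configuration; latency is assumed to grow with widths/depths.
X = rng.random((256, 9))
true_w = rng.random(9)
y = X @ true_w + 5.0

model = MLPLatencyModel(in_dim=9)
for _ in range(2000):
    loss = model.train_step(X, y)

# During the search phase, candidate subnets are scored without any
# device measurement, e.g.:
cfg = encode_config([64, 128, 256], [2, 4, 6], [0, 1, 1])
predicted_latency = float(model.forward(cfg[None, :])[0, 0])
```

Once trained, such a predictor lets the architecture search reject latency-violating subnets cheaply, replacing thousands of slow on-device measurements with a single forward pass per candidate.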

Published

2023-06-26

How to Cite

Li, Y., Yang, C., Zhao, P., Yuan, G., Niu, W., Guan, J., Tang, H., Qin, M., Jin, Q., Ren, B., Lin, X., & Wang, Y. (2023). Towards Real-Time Segmentation on the Edge. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1468-1476. https://doi.org/10.1609/aaai.v37i2.25232

Section

AAAI Technical Track on Computer Vision II