Dynamic Ensemble of Low-Fidelity Experts: Mitigating NAS “Cold-Start”

Authors

  • Junbo Zhao, Department of Electronic Engineering, Tsinghua University; Tsinghua Shenzhen International Graduate School
  • Xuefei Ning, Department of Electronic Engineering, Tsinghua University
  • Enshu Liu, Department of Electronic Engineering, Tsinghua University
  • Binxin Ru, SailYond Technology & Research Institute of Tsinghua University in Shenzhen
  • Zixuan Zhou, Department of Electronic Engineering, Tsinghua University
  • Tianchen Zhao, Department of Electronic Engineering, Tsinghua University
  • Chen Chen, Huawei Technologies Co., Ltd
  • Jiajin Zhang, Huawei Technologies Co., Ltd
  • Qingmin Liao, Tsinghua Shenzhen International Graduate School
  • Yu Wang, Department of Electronic Engineering, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v37i9.26339

Keywords:

ML: Auto ML and Hyperparameter Tuning, ML: Deep Neural Architectures, ML: Deep Neural Network Algorithms, ML: Ensemble Methods

Abstract

Predictor-based Neural Architecture Search (NAS) employs an architecture performance predictor to improve sample efficiency. However, predictor-based NAS suffers from a severe “cold-start” problem, since a large amount of architecture-performance data is required to train a working predictor. In this paper, we focus on exploiting cheaper-to-obtain performance estimations (i.e., low-fidelity information) to reduce the data requirements of predictor training. Despite the intuitiveness of this idea, we observe that using inappropriate low-fidelity information can even damage the prediction ability, and that different search spaces prefer different types of low-fidelity information. To address this and better fuse the beneficial information provided by different types of low-fidelity information, we propose a novel dynamic ensemble predictor framework that comprises two steps. In the first step, we train separate sub-predictors on the different types of available low-fidelity information to extract beneficial knowledge as low-fidelity experts. In the second step, we learn a gating network that dynamically outputs a set of weighting coefficients conditioned on each input neural architecture; these coefficients combine the predictions of the low-fidelity experts in a weighted sum. The overall predictor is optimized on a small set of actual architecture-performance data so that the knowledge from the different low-fidelity experts is fused into the final prediction. We conduct extensive experiments across five search spaces with different architecture encoders under various experimental settings. For example, our method improves the Kendall's Tau correlation coefficient between actual performance and predicted scores from 0.2549 to 0.7064 with only 25 actual architecture-performance data points on NDS-ResNet. Our method can easily be incorporated into existing predictor-based NAS frameworks to discover better architectures. Our method will be implemented in Mindspore (Huawei 2020), and the example code is published at https://github.com/A-LinCui/DELE.
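
To give a concrete picture of the two-step framework described in the abstract, the following PyTorch sketch shows one way a gated ensemble of low-fidelity experts could be wired up. It is an illustrative assumption, not the authors' released implementation (see the linked repository for that): the class names LowFidelityExpert and DynamicEnsemblePredictor, the MLP expert and gating architectures, and the softmax-normalized weights are placeholders standing in for whatever encoder and gating design the paper actually uses.

    # Minimal sketch of a dynamic ensemble of low-fidelity experts.
    # Assumed structure for illustration only; not the official DELE code.
    import torch
    import torch.nn as nn

    class LowFidelityExpert(nn.Module):
        """Sub-predictor trained on one type of low-fidelity information (step 1)."""
        def __init__(self, enc_dim: int, hidden: int = 64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(enc_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )

        def forward(self, arch_encoding: torch.Tensor) -> torch.Tensor:
            # One scalar score per encoded architecture.
            return self.mlp(arch_encoding).squeeze(-1)

    class DynamicEnsemblePredictor(nn.Module):
        """Gating network combines expert predictions with input-conditioned weights (step 2)."""
        def __init__(self, experts: list, enc_dim: int, hidden: int = 64):
            super().__init__()
            self.experts = nn.ModuleList(experts)
            self.gate = nn.Sequential(
                nn.Linear(enc_dim, hidden), nn.ReLU(), nn.Linear(hidden, len(experts))
            )

        def forward(self, arch_encoding: torch.Tensor) -> torch.Tensor:
            # Per-architecture weighting coefficients over the K experts: shape (B, K).
            weights = torch.softmax(self.gate(arch_encoding), dim=-1)
            # Stack the K expert predictions: shape (B, K).
            expert_preds = torch.stack(
                [expert(arch_encoding) for expert in self.experts], dim=-1
            )
            # Weighted sum of expert predictions: shape (B,).
            return (weights * expert_preds).sum(dim=-1)

In such a setup, each expert would first be fitted on its own low-fidelity signal (e.g., early-stopped accuracy or one-shot scores), and the ensemble would then be tuned on the small set of ground-truth architecture-performance pairs, for example with a regression or ranking objective, since NAS ultimately only needs the relative ordering of architectures.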

Published

2023-06-26

How to Cite

Zhao, J., Ning, X., Liu, E., Ru, B., Zhou, Z., Zhao, T., Chen, C., Zhang, J., Liao, Q., & Wang, Y. (2023). Dynamic Ensemble of Low-Fidelity Experts: Mitigating NAS “Cold-Start”. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11316-11326. https://doi.org/10.1609/aaai.v37i9.26339

Section

AAAI Technical Track on Machine Learning IV