ScarletNAS: Bridging the Gap Between Scalability and Fairness in Neural Architecture Search

08/16/2019
by Xiangxiang Chu, et al.

One-shot neural architecture search features fast training of a supernet in a single run. A pivotal issue for this weight-sharing approach is its lack of scalability. A simple adjustment with identity blocks renders a scalable supernet, but it causes unstable training, which makes the subsequent model ranking unreliable. In this paper, we introduce a linearly equivalent transformation to calm the training turbulence, along with a proof that the transformed path is identical to the original one in representational power. The overall method is named SCARLET (SCAlable supeRnet with Linearly Equivalent Transformation). We show through experiments that linearly equivalent transformations can indeed harmonize supernet training. With an EfficientNet-like search space and a multi-objective reinforced evolutionary backend, it generates a series of competitive models: Scarlet-A achieves 76.9% top-1 accuracy on ImageNet, outperforming EfficientNet-B0 by a large margin; the shallower Scarlet-B exemplifies the proposed scalability, attaining the same 76.3% accuracy with far fewer FLOPs; Scarlet-C scores a competitive 75.6% at a comparable size. The models and evaluation code are released online at https://github.com/xiaomi-automl/ScarletNAS .
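
The abstract does not spell out the transformation itself; as a rough sketch only, the PyTorch snippet below illustrates one way an identity path could be replaced by a linearly equivalent counterpart: a 1x1 convolution with no activation, which is a purely linear map and therefore spans at least the same function class as the identity it stands in for. The class name LinearlyEquivalentTransform, the identity (Dirac) initialization, and the use of PyTorch are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class LinearlyEquivalentTransform(nn.Module):
    """Hypothetical sketch: a learnable, purely linear replacement for an
    identity block. A 1x1 convolution without bias or non-linearity is a
    linear map per spatial position; with a Dirac-initialized kernel it
    starts out exactly equal to the identity path it replaces."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv, no bias, no activation -> a purely linear transform
        self.linear = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        # initialize the kernel as the identity so training begins from
        # the behavior of the original identity block
        nn.init.dirac_(self.linear.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

if __name__ == "__main__":
    let = LinearlyEquivalentTransform(channels=32)
    x = torch.randn(2, 32, 14, 14)
    # at initialization the transformed path matches the plain identity path
    assert torch.allclose(let(x), x, atol=1e-6)

Because the path stays linear end to end, the 1x1 kernel can in principle be folded back into an identity or absorbed by adjacent layers after training, which is what makes such a replacement "equivalent" in representational power rather than an additional non-linear block.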
