RNAS: Architecture Ranking for Powerful Networks
Neural Architecture Search (NAS) is attractive for automatically producing deep networks with excellent performance at acceptable computational cost. In most existing NAS algorithms, the performance of an intermediate network is represented by its results on a small proxy dataset after insufficient training, in order to save computational resources and time. Although these representations can help distinguish between some searched architectures, they remain far from the exact performance or ranking order of all networks sampled from the given search space. We therefore propose to learn a performance predictor that ranks different models during the search, using only a few networks pre-trained on the entire dataset. We represent each neural architecture as a feature tensor and use the predictor to further refine the representations of networks in the search space. The resulting performance predictor can then be used to search for desired architectures without additional evaluation. Experimental results show that, using only 0.1% of the entire NASBench dataset (424 models) to construct an accurate predictor, we can efficiently find an architecture with 93.90% accuracy (within the top 0.04% of the whole search space), which is about 0.5% higher than that of state-of-the-art methods.
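The abstract does not include code; the following is a minimal sketch of the general idea of a learned ranking predictor over architecture feature tensors, assuming a PyTorch setup and a NASBench-style encoding (flattened adjacency matrix plus operation one-hots). All names here (`ArchRankPredictor`, `pairwise_ranking_loss`, the feature dimension) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ArchRankPredictor(nn.Module):
    """Hypothetical predictor: maps an architecture's feature tensor
    (e.g. flattened adjacency matrix + one-hot op encoding, as in
    NASBench-style encodings) to a scalar performance score."""
    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def pairwise_ranking_loss(scores_a, scores_b, labels):
    """Hinge-style pairwise loss: labels = +1 if architecture A truly
    outperforms architecture B on the full dataset, else -1. Trains the
    predictor to preserve ground-truth ranking order rather than to
    regress exact accuracies."""
    return torch.clamp(1.0 - labels * (scores_a - scores_b), min=0).mean()

# Usage sketch: train on a small set of fully trained architectures
# (the paper reports using 424 models, 0.1% of NASBench), then rank
# the remaining candidates without any additional evaluation.
predictor = ArchRankPredictor(feat_dim=56)   # 56 is an illustrative dimension
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

feats_a = torch.randn(32, 56)                # batch of architecture encodings
feats_b = torch.randn(32, 56)
labels = torch.sign(torch.randn(32))         # placeholder ground-truth order

loss = pairwise_ranking_loss(predictor(feats_a), predictor(feats_b), labels)
opt.zero_grad(); loss.backward(); opt.step()
```

A ranking objective of this kind fits the abstract's framing: for architecture search, preserving the relative ordering of candidates matters more than predicting their exact accuracies.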