An Approximation Algorithm for Optimal Subarchitecture Extraction
We consider the problem of finding, for a chosen deep neural network, the set of architectural parameters that is optimal under three metrics: parameter size, inference speed, and error rate. In this paper we state the problem formally and present an approximation algorithm that, for a large subset of instances, behaves like an FPTAS with an approximation error of ρ ≤ |1 − ϵ|, and that runs in O(|Ξ| + |W^*_T|(1 + |Θ||B||Ξ|/(ϵ s^{3/2}))) steps, where ϵ and s are input parameters; |B| is the batch size; |W^*_T| denotes the cardinality of the largest weight set assignment; and |Ξ| and |Θ| are the cardinalities of the candidate architecture and hyperparameter spaces, respectively.
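To make the problem statement concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm): it enumerates a small candidate architecture space Ξ and ranks each configuration under the three metrics named above. The Candidate class, the metric functions, and the scalarization weights are all illustrative assumptions introduced here for exposition.

```python
# Hypothetical illustration of the optimization problem only; the names and
# metric models below are assumptions, not the paper's method.

from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class Candidate:
    depth: int   # number of layers
    width: int   # hidden size per layer


def parameter_size(c: Candidate) -> int:
    # Rough parameter count for a stack of dense layers (illustrative only).
    return c.depth * c.width * c.width


def inference_cost(c: Candidate) -> float:
    # Proxy for inference speed: more parameters -> slower (illustrative only).
    return float(parameter_size(c))


def error_rate(c: Candidate) -> float:
    # Placeholder error model: larger models err less, with diminishing returns.
    return 1.0 / (1.0 + 0.001 * parameter_size(c))


def scalarized_score(c: Candidate, w=(1e-6, 1e-6, 1.0)) -> float:
    # Combine the three metrics into a single score used to rank candidates.
    return (w[0] * parameter_size(c)
            + w[1] * inference_cost(c)
            + w[2] * error_rate(c))


if __name__ == "__main__":
    # Candidate space Xi: a small grid over depth and width.
    xi = [Candidate(d, w) for d, w in product([2, 4, 8], [64, 128, 256])]
    best = min(xi, key=scalarized_score)
    print("best candidate:", best, "score:", scalarized_score(best))
```

This brute-force enumeration is only meant to show what is being optimized; the paper's contribution is an approximation algorithm with the runtime bound stated above, not an exhaustive search.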