Multi-device, Multi-tenant Model Selection with GP-EI

03/17/2018
by Chen Yu, et al.

Bayesian optimization is the core technique behind the emergence of AutoML, which holds the promise of automatically searching for models and hyperparameters to make machine learning more accessible. As such services move toward the cloud, we ask: when multiple AutoML users share the same computational infrastructure, how should we allocate resources to maximize the "global happiness" of all users? We focus on GP-EI (Gaussian process optimization with the expected improvement acquisition function), one of the most popular algorithms for automatic model selection and hyperparameter tuning, and develop a novel multi-device, multi-tenant extension that is aware of both the multiple computation devices and the multiple users sharing them. Theoretically, given N users and M devices, we obtain a regret bound of O((MIU(T,K) + M)N^2/M), where MIU(T,K) is the maximal incremental uncertainty up to time T for the covariance matrix K. Empirically, we evaluate our algorithm on two applications of automatic model selection and show that it significantly outperforms the strategy of serving each user independently. Moreover, when multiple computation devices are available, we achieve near-linear speedup when the number of users is much larger than the number of devices.
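For readers unfamiliar with the GP-EI building block the paper extends, the sketch below shows a single expected-improvement acquisition step for one user: given a Gaussian-process posterior (mean and standard deviation) over candidate model configurations, each candidate is scored by its expected improvement over the best loss observed so far, and the argmax is evaluated next. This is a minimal illustration of standard GP-EI, not the paper's multi-device, multi-tenant scheduler; all names (`expected_improvement`, `xi`, the toy numbers) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far, xi=0.01):
    """EI for minimization: E[max(best_so_far - f(x) - xi, 0)]
    under a Gaussian posterior N(mu, sigma^2) at each candidate."""
    sigma = np.maximum(sigma, 1e-12)      # guard against zero predictive std
    improvement = best_so_far - mu - xi   # predicted gain over incumbent
    z = improvement / sigma
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)

# Toy usage: GP posterior over five candidate configurations.
mu = np.array([0.30, 0.25, 0.40, 0.28, 0.35])     # predicted validation loss
sigma = np.array([0.05, 0.10, 0.02, 0.08, 0.20])  # predictive std
best = 0.27                                       # best loss observed so far
scores = expected_improvement(mu, sigma, best)
print("next candidate to evaluate:", int(np.argmax(scores)))
```

In the single-tenant setting this acquisition loop runs once per evaluation on one device; the paper's contribution is deciding, at each step, which of the N users' acquisitions to run on which of the M shared devices.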
