RLFlow: Optimising Neural Network Subgraph Transformation with World Models

05/03/2022
by Sean Parker, et al.

We explore the use of reinforcement learning (RL) agents that learn to perform neural network subgraph transformations without the need for expertly designed heuristics to achieve a high level of performance. Reducing the compute requirements of deep learning models is the focus of extensive research, and many systems, optimisations, and just-in-time (JIT) compilers have been proposed to decrease runtime. Recent work has applied reinforcement learning to computer systems with some success, mostly using model-free RL techniques. Model-based reinforcement learning methods have attracted increasing research attention because they learn the transition dynamics of the environment; an agent can then be trained inside this learned, "hallucinated" environment, increasing sample efficiency compared to model-free approaches. Furthermore, with a world model acting as a simulated environment, batched rollouts can run safely in parallel, and, especially in systems settings, the agent avoids the latency of acting on the real system, where a single action can take orders of magnitude longer than in simple video-game emulators. We propose a design for a model-based agent that learns to optimise the architecture of neural networks by performing a sequence of subgraph transformations to reduce model runtime. We show that our approach matches the performance of the state of the art on common convolutional networks and outperforms it by up to 5%.
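As a rough illustration of the world-model idea described above, the sketch below trains a policy on batched imagined rollouts from a learned transition model, in the Dyna style. It is a minimal sketch under stated assumptions: the class names, dimensions, and the REINFORCE-style update are illustrative choices, not the paper's actual implementation, and flat state vectors stand in for embeddings of the computation graph being transformed.

```python
# A minimal, hypothetical Dyna-style sketch of training with a learned
# world model. Class names, dimensions, and the policy-gradient update
# are illustrative assumptions, not the RLFlow implementation; flat
# state vectors stand in for computation-graph embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS, HORIZON, BATCH = 32, 8, 10, 64

class WorldModel(nn.Module):
    """Learned transition dynamics: (state, action) -> (next state, reward)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_ACTIONS, 128), nn.ReLU(),
            nn.Linear(128, STATE_DIM + 1),
        )

    def forward(self, state, action):
        x = torch.cat([state, F.one_hot(action, N_ACTIONS).float()], dim=-1)
        out = self.net(x)
        return out[..., :STATE_DIM], out[..., STATE_DIM]  # next state, reward

policy = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                       nn.Linear(128, N_ACTIONS))
world_model = WorldModel()  # in practice, fitted to transitions from the real system
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Batched imagined rollouts: BATCH trajectories are unrolled inside the
# learned model in parallel, avoiding the latency of the real systems
# environment (e.g. timing a network after every subgraph transformation).
state = torch.randn(BATCH, STATE_DIM)  # placeholder initial states
returns = torch.zeros(BATCH)
log_probs = []
for _ in range(HORIZON):
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()
    log_probs.append(dist.log_prob(action))
    state, reward = world_model(state, action)
    state = state.detach()  # stop gradients flowing through imagined states
    returns = returns + reward.detach()

# REINFORCE-style update on the imagined returns.
loss = -(torch.stack(log_probs).sum(dim=0) * returns).mean()
optimiser.zero_grad()
loss.backward()
optimiser.step()
```

Because every step happens inside the learned model, the whole batch advances with a single forward pass per timestep, which is what makes parallel rollouts cheap compared with measuring the real model's runtime after each transformation.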
