Deep Tile Coder: An Efficient Sparse Representation Learning Approach with Applications in Reinforcement Learning
Representation learning is critical to the success of modern large-scale reinforcement learning systems. Previous work shows that sparse representations can effectively reduce catastrophic interference and hence provide relatively stable and consistent bootstrap targets when training reinforcement learning algorithms. Tile coding is a well-known sparse feature generation method in reinforcement learning. However, its application is largely restricted to small, low-dimensional domains, as its computational and memory requirements grow exponentially with the dimension. This paper proposes a simple and novel tile coding operation, the deep tile coder, which adapts tile coding to the deep learning setting and scales easily to high-dimensional problems. The key distinction between our method and previous sparse representation learning methods is that we generate sparse features by construction, whereas most previous works focus on designing regularization techniques. We are able to theoretically guarantee sparsity and, importantly, our method ensures sparsity from the beginning of learning, without the need to tune a regularization weight. Furthermore, our approach maps from a low-dimensional feature space to a high-dimensional sparse feature space without introducing any additional training parameters. Our empirical demonstration covers classic discrete-action control and MuJoCo continuous robotic control problems. We show that reinforcement learning algorithms equipped with our deep tile coder achieve superior performance. To the best of our knowledge, our work is the first to demonstrate a successful application of a sparse representation learning method in online deep reinforcement learning algorithms for challenging tasks without using a target network.
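The abstract does not spell out the exact deep tile coding operation, so the following is only a minimal illustrative sketch of the general idea it describes: binning each dimension of a bounded hidden activation into tiles so that sparsity holds by construction and the mapping introduces no trainable parameters. The function name `tile_code` and the parameters `num_tiles`, `low`, and `high` are assumptions for illustration, not the paper's API, and gradient handling for the hard binning is omitted.

```python
import numpy as np

def tile_code(h, num_tiles=8, low=-1.0, high=1.0):
    """Hypothetical sketch: map a dense feature vector h (shape [d]) to a
    sparse binary vector of shape [d * num_tiles] by binning each dimension
    into one of `num_tiles` intervals over [low, high].  Exactly d of the
    d * num_tiles outputs are active, so sparsity is guaranteed by
    construction and no trainable parameters are added."""
    h = np.clip(h, low, high - 1e-8)                           # keep bin indices in range
    idx = ((h - low) / (high - low) * num_tiles).astype(int)   # one bin index per dimension
    out = np.zeros((h.size, num_tiles))
    out[np.arange(h.size), idx] = 1.0                          # activate one tile per dimension
    return out.reshape(-1)

# Example: an 8-dim bounded hidden activation becomes a 64-dim vector with 8 ones.
sparse = tile_code(np.tanh(np.random.randn(8)), num_tiles=8)
print(sparse.sum(), sparse.size)   # 8.0 64
```

In this sketch the expansion from a low-dimensional dense feature to a high-dimensional sparse one is parameter-free, which matches the property claimed in the abstract; how the actual deep tile coder is made compatible with end-to-end training is described in the full paper.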