Dynamic Weights in Multi-Objective Deep Reinforcement Learning
Many real-world decision problems are characterized by multiple objectives that must be balanced according to their relative importance. In the dynamic weights setting this relative importance changes over time, as recognized by Natarajan and Tadepalli (2005), who proposed a tabular reinforcement learning algorithm for this problem. However, that earlier work is not feasible in reinforcement learning settings where the input is high-dimensional, necessitating the use of function approximators such as neural networks. We propose two novel methods for multi-objective RL with dynamic weights: a multi-network approach and a single-network approach that conditions on the weights. Because the dynamic weights setting is inherently non-stationary, standard experience replay techniques are insufficient. We therefore propose diverse experience replay, a framework for maintaining a diverse set of experiences in the replay buffer, and show how it can be applied to make experience replay relevant in multi-objective RL. To evaluate our algorithms we introduce a new benchmark, the Minecart problem. We show empirically that our algorithms outperform more naive approaches. We also show that, while there are significant differences between settings with many small weight changes and settings with sparse, larger changes, the conditioned network with diverse experience replay consistently outperforms the other algorithms in both.
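To make the single-network idea concrete, below is a minimal sketch (in PyTorch) of a Q-network conditioned on the objective weights: the weight vector is concatenated to the state input, the network predicts a vector-valued Q estimate per action, and actions are selected by scalarising those vectors with the current weights. The layer sizes and names are illustrative assumptions, not the exact architecture from the paper.

import torch
import torch.nn as nn

class ConditionedQNetwork(nn.Module):
    """Sketch of a weight-conditioned multi-objective Q-network."""

    def __init__(self, state_dim, num_actions, num_objectives, hidden=128):
        super().__init__()
        self.num_actions = num_actions
        self.num_objectives = num_objectives
        # The weight vector is appended to the state features,
        # so a single network can adapt to changing weights.
        self.body = nn.Sequential(
            nn.Linear(state_dim + num_objectives, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        # One Q-vector (length num_objectives) per action.
        self.head = nn.Linear(hidden, num_actions * num_objectives)

    def forward(self, state, weights):
        x = torch.cat([state, weights], dim=-1)
        q = self.head(self.body(x))
        return q.view(-1, self.num_actions, self.num_objectives)

    def act(self, state, weights):
        # Scalarise the vector-valued Q estimates with the current weights
        # and pick the action with the highest scalarised value.
        with torch.no_grad():
            q_vectors = self.forward(state, weights)                # [B, A, O]
            scalar_q = (q_vectors * weights.unsqueeze(1)).sum(-1)   # [B, A]
            return scalar_q.argmax(dim=-1)

# Example usage with arbitrary dimensions:
net = ConditionedQNetwork(state_dim=6, num_actions=5, num_objectives=3)
state = torch.randn(1, 6)
weights = torch.tensor([[0.2, 0.3, 0.5]])
action = net.act(state, weights)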