Stealing Deep Reinforcement Learning Models for Fun and Profit
In this paper, we present the first attack methodology for extracting black-box Deep Reinforcement Learning (DRL) models solely from the actions they take in the environment. Model extraction attacks against supervised deep learning models have been widely studied, but those techniques cannot be applied directly to the reinforcement learning setting because DRL models are highly complex, stochastic, and expose only limited observable information. Our methodology overcomes these challenges with two techniques. The first is an RNN classifier that reveals the training algorithm of the target black-box DRL model based only on its predicted actions. The second is the adoption of imitation learning to replicate the model given the identified training algorithm. Experimental results indicate that integrating these two techniques can effectively recover DRL models with high fidelity. We also demonstrate a use case showing that our model extraction attack significantly improves the success rate of adversarial attacks, making DRL models more vulnerable.
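To make the two components concrete, the sketch below illustrates one way they could be realized: an LSTM classifier over observed action sequences that predicts which training algorithm produced them, followed by a plain behavioral-cloning step that fits a student policy to (state, action) pairs queried from the black-box target. This is a minimal illustration under assumed settings (discrete actions, a small candidate algorithm set, fixed observation size), not the paper's actual implementation; all class names and hyperparameters are hypothetical.

```python
# Illustrative sketch (not the paper's code): an RNN that classifies which DRL
# training algorithm produced an observed action sequence, followed by a simple
# behavioral-cloning step that imitates the target policy from (state, action)
# pairs. Sizes and the candidate algorithm set below are assumptions.
import torch
import torch.nn as nn

NUM_ACTIONS = 6          # assumed discrete action space
NUM_ALGOS = 3            # e.g., {DQN, A2C, PPO} -- illustrative candidate set
STATE_DIM = 128          # assumed flattened observation size


class AlgoClassifier(nn.Module):
    """LSTM over one-hot action sequences -> training-algorithm label."""

    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(NUM_ACTIONS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_ALGOS)

    def forward(self, action_seq):           # (batch, T, NUM_ACTIONS)
        _, (h, _) = self.rnn(action_seq)
        return self.head(h[-1])              # (batch, NUM_ALGOS) logits


class ClonedPolicy(nn.Module):
    """Student policy trained by behavioral cloning on the target's actions."""

    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_ACTIONS),
        )

    def forward(self, state):                # (batch, STATE_DIM)
        return self.net(state)               # action logits


def train_classifier(model, loader, epochs=10):
    """loader yields (action_seq, algo_label) batches observed from known models."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for seq, label in loader:
            opt.zero_grad()
            loss_fn(model(seq), label).backward()
            opt.step()


def behavioral_cloning(policy, loader, epochs=10):
    """loader yields (state, target_action) pairs queried from the black box."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for state, action in loader:
            opt.zero_grad()
            loss_fn(policy(state), action).backward()
            opt.step()
```

Behavioral cloning is shown here only as the simplest instance of imitation learning; in the paper's pipeline, the replication step is guided by the training algorithm identified by the classifier.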