Decisions that Explain Themselves: A User-Centric Deep Reinforcement Learning Explanation System

12/01/2022
by Xiaoran Wu, et al.

With deep reinforcement learning (RL) systems like autonomous driving being widely deployed but remaining largely opaque, developers frequently use explainable RL (XRL) tools to better understand and work with deep RL agents. However, prior XRL work employs a techno-centric research approach, ignoring how RL developers perceive the generated explanations. Through a pilot study, we identify the major goals RL practitioners pursue when using XRL methods, as well as four pitfalls that widen the gap between existing XRL methods and these goals. The pitfalls include inaccessible reasoning processes, inconsistent or unintelligible explanations, and explanations that cannot be generalized. To close the discovered gap, we propose a counterfactual-inference-based explanation method that uncovers the details of an RL agent's reasoning process and generates natural language explanations. Around this method, we build an interactive XRL system in which users can actively explore explanations and the information that influences them. In a user study with 14 participants, we validated that developers identified 20.9% more problematic agents with our system than with the baseline method, and that using our system helped end users improve their performance in actionability tests by 25.1% in an auto-driving task and by 16.9% in a second task.
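To give a flavor of the counterfactual-inference idea the abstract describes, below is a minimal Python sketch, not the paper's actual implementation. It assumes a hypothetical policy with a discrete action space over a vector-valued state: it perturbs one state feature at a time, re-queries the policy, and turns any perturbation that flips the chosen action into a natural language explanation. The policy, feature names, and perturbation values are all illustrative stand-ins.

from typing import Callable, Sequence
import numpy as np

def counterfactual_explanation(
    policy: Callable[[np.ndarray], int],   # maps a state to a discrete action
    state: np.ndarray,                     # the state whose action we explain
    feature_names: Sequence[str],
    deltas: Sequence[float] = (-1.0, 1.0), # perturbations to try per feature
) -> str:
    """Explain the policy's action via single-feature counterfactuals.

    For each feature, apply small counterfactual perturbations and re-query
    the policy; features whose perturbation flips the action are reported
    as influential in a natural language sentence.
    """
    action = policy(state)
    influential = []
    for i, name in enumerate(feature_names):
        for delta in deltas:
            cf_state = state.copy()
            cf_state[i] += delta
            cf_action = policy(cf_state)
            if cf_action != action:
                influential.append(
                    f"if {name} were {cf_state[i]:.2f} instead of "
                    f"{state[i]:.2f}, the agent would take action {cf_action}"
                )
                break  # one flipping counterfactual per feature is enough
    if not influential:
        return (f"The agent chose action {action}; no single-feature "
                f"change flipped it.")
    return (f"The agent chose action {action} because: "
            + "; ".join(influential) + ".")

# Toy usage: a hand-written driving policy that brakes (action 1) when the
# gap to the lead car is small relative to speed, else keeps speed (action 0).
if __name__ == "__main__":
    def toy_policy(s: np.ndarray) -> int:
        gap, speed = s
        return 1 if gap - 0.5 * speed < 2.0 else 0

    state = np.array([3.0, 2.5])
    print(counterfactual_explanation(toy_policy, state, ["gap_m", "speed_mps"]))

The paper's method and interactive system go well beyond this single-feature probing, but the sketch shows how counterfactual queries against a policy can be converted into human-readable explanations of individual decisions.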
