An Argumentation-based Approach for Explaining Goal Selection in Intelligent Agents

09/14/2020
by   Mariela Morveli-Espinoza, et al.

During the first step of practical reasoning, i.e., deliberation or goal selection, an intelligent agent generates a set of pursuable goals and then selects which of them it commits to achieve. Explainable Artificial Intelligence (XAI) systems, including intelligent agents, must be able to explain their internal decisions. In the context of goal selection, agents should be able to explain the reasoning path that leads them to select (or not select) a certain goal. In this article, we use an argumentation-based approach for generating explanations about that reasoning path. In addition, we enrich the explanations with information about conflicts that emerge during the selection process and how such conflicts were resolved. We propose two types of explanations, partial and complete, along with a set of explanatory schemes to generate pseudo-natural explanations. Finally, we apply our proposal to the cleaner world scenario.
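The abstract does not spell out the formal machinery, but the following minimal sketch illustrates, under assumed names and data structures (Goal, Conflict, explain_partial, explain_complete are all hypothetical, not the authors' definitions), how partial and complete pseudo-natural explanations might be assembled from records of pursuable goals and resolved conflicts.

```python
# Toy illustration (not the authors' formal framework): build "partial" and
# "complete" explanations for goal selection from recorded beliefs and conflicts.
from dataclasses import dataclass
from typing import List

@dataclass
class Goal:
    name: str
    supporting_beliefs: List[str]   # beliefs that made the goal pursuable
    selected: bool = False          # did the agent commit to it?

@dataclass
class Conflict:
    goal_a: str
    goal_b: str
    winner: str                     # goal kept after conflict resolution
    reason: str                     # e.g. "higher priority", "resource clash"

def explain_partial(goal: Goal) -> str:
    """Partial explanation: only why the goal became pursuable and its final status."""
    status = "selected" if goal.selected else "not selected"
    beliefs = ", ".join(goal.supporting_beliefs)
    return f"Goal '{goal.name}' was {status}; it became pursuable because: {beliefs}."

def explain_complete(goal: Goal, conflicts: List[Conflict]) -> str:
    """Complete explanation: adds the conflicts the goal was involved in and how they were resolved."""
    lines = [explain_partial(goal)]
    for c in conflicts:
        if goal.name in (c.goal_a, c.goal_b):
            other = c.goal_b if c.goal_a == goal.name else c.goal_a
            lines.append(f"It conflicted with '{other}'; '{c.winner}' prevailed due to {c.reason}.")
    return " ".join(lines)

# Toy run inspired by the cleaner world scenario mentioned in the abstract.
clean = Goal("clean slot (2,3)", ["dirt was perceived at (2,3)"], selected=False)
recharge = Goal("recharge battery", ["battery level is low"], selected=True)
conflicts = [Conflict("clean slot (2,3)", "recharge battery",
                      winner="recharge battery", reason="resource incompatibility")]

print(explain_partial(clean))
print(explain_complete(clean, conflicts))
```

Here the partial explanation reports only the beliefs behind a goal's pursuability and its outcome, while the complete one also traces the conflicts and their resolution, mirroring the two explanation types named in the abstract.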
