Cognitive Perspectives on Context-based Decisions and Explanations
When human cognition is modeled in philosophy and cognitive science, a pervasive idea is that humans employ mental representations in order to navigate the world and make predictions about the outcomes of future actions. By understanding how these representational structures work, we not only learn more about human cognition but also gain a better understanding of how humans rationalise and explain decisions. This bears directly on explainable AI (XAI), whose goal is to provide explanations of computer decision-making for a human audience. We show that the Contextual Importance and Utility (CIU) method for XAI overlaps with the current wave of action-oriented predictive representational structures, in ways that make CIU a reliable tool for creating explanations that humans can relate to and trust.
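As a rough illustration of what CIU computes, the sketch below estimates Contextual Importance (CI) and Contextual Utility (CU) for one feature of one instance, following the standard definitions from the CIU literature: CI measures how much of the model's output range the feature can span in the given context, and CU measures where the current output lies within that contextual range. The function name `ciu_for_feature`, the sampling-based estimation, and all parameter names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ciu_for_feature(predict, x, feature_idx, feature_range,
                    out_range=(0.0, 1.0), n_samples=100):
    """Estimate CI and CU for one feature of a single instance x.

    predict       : callable mapping a 2-D array of inputs to 1-D outputs
    x             : 1-D array, the instance (context) being explained
    feature_idx   : index of the feature to vary
    feature_range : (min, max) values the feature may take
    out_range     : (absmin, absmax) of the model output, for normalisation
    n_samples     : number of points sampled over the feature's range
    """
    # Model output for the instance as-is.
    out = float(predict(x.reshape(1, -1))[0])

    # Vary only the chosen feature over its range, keeping the rest
    # of the context fixed at the instance's values.
    grid = np.linspace(feature_range[0], feature_range[1], n_samples)
    variants = np.tile(x, (n_samples, 1))
    variants[:, feature_idx] = grid
    outputs = np.asarray(predict(variants), dtype=float)

    cmin, cmax = outputs.min(), outputs.max()
    absmin, absmax = out_range

    # CI: share of the output's total range this feature spans in this context.
    ci = (cmax - cmin) / (absmax - absmin)
    # CU: position of the current output within the feature's contextual range.
    cu = (out - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

A high CI with a high CU would then be read as "this feature matters a lot here, and its current value is favourable for the predicted outcome", which is the kind of context-relative explanation the abstract argues humans find natural.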