XAI in the context of Predictive Process Monitoring: Too much to Reveal
Predictive Process Monitoring (PPM) has been integrated into process mining tools as a value-adding task: it provides useful predictions about the further execution of running business processes. To this end, machine learning (ML)-based techniques are widely employed in the context of PPM. To gain stakeholders' trust in and advocacy of PPM predictions, eXplainable Artificial Intelligence (XAI) methods are employed to compensate for the lack of transparency of the most efficient predictive models. Yet even under identical settings regarding data, preprocessing techniques, and ML models, the explanations generated by different XAI methods differ profoundly. A comparison is missing that distinguishes which XAI characteristics or underlying conditions are decisive for an explanation. To address this gap, we provide a framework for studying the effect of different PPM-related settings and ML-model-related choices on the characteristics and expressiveness of the resulting explanations. In addition, we compare how the characteristics of different explainability methods shape the resulting explanations and enable them to reflect the underlying model's reasoning process.
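As a minimal illustration of the phenomenon the abstract describes (not code from the paper), the sketch below applies two common explanation techniques to the same model and data: the model's built-in impurity-based feature importances and model-agnostic permutation importance. The dataset, model, and method choices are assumptions for demonstration only; the point is that two XAI methods can attribute importance differently even under identical settings.

```python
# Illustrative sketch: two explanation methods, same model, same data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a PPM event-log feature matrix (assumption).
X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Explanation 1: impurity-based importances (model-internal).
imp_impurity = model.feature_importances_

# Explanation 2: permutation importance (model-agnostic, outcome-based).
imp_perm = permutation_importance(model, X, y, n_repeats=10,
                                  random_state=0).importances_mean

# The two methods may rank the same features differently.
print("impurity ranking:   ", np.argsort(-imp_impurity))
print("permutation ranking:", np.argsort(-imp_perm))
```

In a PPM setting, the same comparison could be run with prefix-encoded event-log features and methods such as SHAP or LIME in place of the two importance measures used here.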