Model Monitoring and Robustness of In-Use Machine Learning Models: Quantifying Data Distribution Shifts Using Population Stability Index
Safety comes first. Meeting and maintaining industry safety standards for the robustness of artificial intelligence (AI) and machine learning (ML) models requires continuous monitoring for faults and performance drops. Deep learning models are widely used in industrial applications, e.g., computer vision, but the susceptibility of their performance to environmental changes (e.g., noise) after deployment on the product is now well-known. A major challenge is detecting data distribution shifts between: (i) the development stage of AI and ML models, i.e., train/validation/test, and (ii) the deployment stage on the product (i.e., even after 'testing') in the environment. We focus on a computer vision example related to autonomous driving and aim at detecting shifts that occur as a result of adding noise to images. We use the population stability index (PSI) as a measure of the presence and intensity of shift and present results of our empirical experiments showing a promising potential for the PSI. We further discuss multiple aspects of model monitoring and robustness that need to be analyzed simultaneously to achieve robustness for industry safety standards. We propose the need for, and a research direction toward, categorizations of problem classes and examples where monitoring for robustness is required, and present challenges and pointers for future work from a practical perspective.
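To make the PSI measure concrete, the sketch below computes it for a scalar feature: bin the reference (development-time) sample, compute the fraction of reference and deployment-time values falling in each bin, and sum the weighted log-ratios, PSI = Σᵢ (aᵢ − eᵢ) ln(aᵢ / eᵢ). This is a minimal illustration, not the paper's implementation; the decile binning, the `eps` clipping for empty bins, and the Gaussian noise model standing in for post-deployment image noise are all assumptions.

```python
import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between a reference sample and a new sample.

    Bin edges are derived from the reference (expected) sample; eps guards
    against empty bins so the logarithm stays finite.
    """
    # Decile edges from the reference distribution, widened to cover all reals
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    # Fraction of each sample falling into every bin
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)

    # Avoid log(0) / division by zero for empty bins
    e_frac = np.clip(e_frac, eps, None)
    a_frac = np.clip(a_frac, eps, None)

    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)            # stand-in for a development-time feature
noisy = reference + rng.normal(0.0, 0.5, 10_000)    # additive noise, mimicking a deployment shift

psi_same = psi(reference, reference)  # identical samples -> PSI of 0
psi_shift = psi(reference, noisy)     # shifted sample -> larger PSI
print(psi_same, psi_shift)
```

A common rule of thumb in credit-risk practice (not specified in this abstract) reads PSI below 0.1 as no significant shift, 0.1 to 0.25 as moderate shift, and above 0.25 as a significant shift warranting investigation.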