Towards Probability-based Safety Verification of Systems with Components from Machine Learning

03/02/2020
by Hermann Kaindl, et al.

Machine learning (ML) has recently created many new success stories. Hence, there is a strong motivation to use ML technology in software-intensive systems, including safety-critical systems. This raises the issue of safety verification of ML-based systems, which is currently thought to be infeasible or, at least, very hard. We think that it requires taking into account specific properties of ML technology such as: (i) Most ML approaches are inductive, which is both the source of their power and a source of failure. (ii) Neural networks (NNs) resulting from deep learning are, at the current state of the art, not transparent. Consequently, some errors will always remain and, at least for deep NNs (DNNs), verification of their internal structure is extremely hard. However, traditional safety engineering also cannot provide full guarantees that no harm will ever occur. That is why probabilities are used, e.g., for specifying a risk or a Tolerable Hazard Rate (THR). Recent theoretical work has extended the scope of formal verification to probabilistic model-checking, but this requires behavioral models. Hence, we propose verification based on probabilities of errors, both estimated by controlled experiments and output by the inductively learned classifier itself. Generalization error bounds may propagate to the probability of a hazard, which must not exceed a THR. As a result, the quantitatively determined bound on the probability of a classification error of an ML component in a safety-critical system contributes in a well-defined way to the latter's overall safety verification.
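As a rough illustration of how such a propagation could look (a minimal sketch, not the paper's actual method), the following Python snippet combines a one-sided Hoeffding-style upper bound on a classifier's error probability, estimated from a controlled experiment, with an assumed conditional hazard probability and demand rate, and checks the result against an assumed THR. All figures and names (e.g., `error_upper_bound`, `p_error_to_hazard`) are hypothetical.

```python
import math

# Hypothetical sketch: propagate a statistically estimated bound on the
# classification-error probability of an ML component to a hazard rate
# and compare it against an assumed Tolerable Hazard Rate (THR).

def error_upper_bound(errors: int, n: int, delta: float) -> float:
    """One-sided Hoeffding bound: with probability >= 1 - delta, the true
    error probability is at most the returned value, given `errors`
    misclassifications observed on n i.i.d. test samples."""
    empirical = errors / n
    return empirical + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Assumed figures for this sketch only:
n_test = 100_000          # i.i.d. samples from a controlled experiment
observed_errors = 12      # misclassifications observed on the test set
delta = 1e-3              # bound holds with probability at least 0.999
p_error_to_hazard = 1e-3  # assumed prob. that a misclassification, together
                          # with failing downstream safeguards, causes harm
demands_per_hour = 3600   # assumed rate at which the classifier is invoked
thr = 1e-5                # assumed Tolerable Hazard Rate (per hour)

eps = error_upper_bound(observed_errors, n_test, delta)
hazard_rate = demands_per_hour * eps * p_error_to_hazard

print(f"error bound eps     = {eps:.3e}")
print(f"derived hazard rate = {hazard_rate:.3e} per hour")
print(f"THR satisfied       = {hazard_rate <= thr}")
```

With these illustrative numbers the check fails, which hints at how strongly a stringent THR constrains the amount of test evidence and the architectural safeguards required around an ML component.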
