Data Sanity Check for Deep Learning Systems via Learnt Assertions

09/06/2019
by Haochuan Lu, et al.

Deep learning (DL) techniques have demonstrated satisfactory performance in many tasks, even in safety-critical applications. Reliability is hence a critical consideration for DL-based systems. However, the statistical nature of DL makes it vulnerable to invalid inputs, i.e., cases that are not covered by the training phase of a DL model. This paper proposes a data sanity check to identify invalid inputs and thereby enhance the reliability of DL-based systems. To this end, we design and implement a tool that detects behavior deviation of a DL model when processing an input case and treats such deviation as a symptom of an invalid input. Via lightweight, automatic instrumentation of the target DL model, the tool extracts data-flow footprints and applies an assertion-based validation mechanism.
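
The abstract does not specify how footprints are extracted or how assertions are checked, so the following is only a minimal illustrative sketch of the general idea, assuming a PyTorch model, per-layer activation norms as "footprints", and a simple n-sigma bound learnt from trusted data as the assertion rule; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn


class ActivationFootprintChecker:
    """Learns per-layer activation statistics ("footprints") from trusted data
    and flags inputs whose footprints deviate at inference time.
    Illustrative sketch only, not the paper's actual tool."""

    def __init__(self, model: nn.Module, layer_names, n_sigmas: float = 4.0):
        self.model = model
        self.n_sigmas = n_sigmas
        self.stats = {}      # layer name -> (mean, std) of footprint values
        self._current = {}   # footprints from the most recent forward pass
        for name, module in model.named_modules():
            if name in layer_names:
                module.register_forward_hook(self._make_hook(name))

    def _make_hook(self, name):
        def hook(module, inputs, output):
            # Footprint: mean absolute activation per sample (one scalar per input).
            reduce_dims = tuple(range(1, output.dim()))
            self._current[name] = output.detach().abs().mean(dim=reduce_dims)
        return hook

    @torch.no_grad()
    def fit(self, loader):
        """Learn assertion bounds from data the model is known to handle well."""
        collected = {}
        for x, _ in loader:
            self.model(x)
            for name, val in self._current.items():
                collected.setdefault(name, []).append(val)
        for name, vals in collected.items():
            v = torch.cat(vals)
            self.stats[name] = (v.mean().item(), v.std().item())

    @torch.no_grad()
    def check(self, x):
        """Return True if the input's footprints satisfy all learnt assertions."""
        self.model(x)
        for name, (mu, sigma) in self.stats.items():
            deviation = (self._current[name] - mu).abs().max().item()
            if deviation > self.n_sigmas * sigma:
                return False  # behavior deviates -> likely invalid input
        return True
```

In this sketch, the forward hooks play the role of the lightweight instrumentation, and `fit`/`check` stand in for learning and evaluating the assertions; an input that drives any monitored layer's footprint outside the learnt bounds is rejected before its prediction is trusted.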
