Robust and Active Learning for Deep Neural Network Regression

07/28/2021
by   Xi Li, et al.

We describe a gradient-based method to discover local error maximizers of a deep neural network (DNN) used for regression, assuming the availability of an "oracle" capable of providing real-valued supervision (a regression target) for samples. For example, the oracle could be a numerical solver which, operationally, is much slower than the DNN. Given a discovered set of local error maximizers, the DNN is either fine-tuned or retrained in the manner of active learning.
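The core idea above can be sketched in a few lines: treat the squared discrepancy between the DNN and the oracle as an objective and run gradient *ascent* on the input to find a local error maximizer. The functions `model` and `oracle` below are hypothetical stand-ins (a linear surrogate versus `sin`), and the finite-difference gradient is an assumption made so the sketch does not depend on any particular autodiff framework; the paper's method would use the DNN's actual input gradients.

```python
import numpy as np

# Hypothetical stand-ins: `model` is the fast DNN surrogate,
# `oracle` is the slow but accurate numerical solver.
def model(x):
    return x              # e.g. a linear fit of sin(x) near the origin

def oracle(x):
    return np.sin(x)      # ground-truth regression target

def find_error_maximizer(x0, lr=0.2, steps=500, eps=1e-5,
                         lo=-np.pi, hi=np.pi):
    """Gradient ascent on e(x) = (model(x) - oracle(x))**2,
    restricted to the input domain [lo, hi]. The gradient is
    estimated by central finite differences for illustration."""
    x = float(x0)
    for _ in range(steps):
        e_plus  = (model(x + eps) - oracle(x + eps)) ** 2
        e_minus = (model(x - eps) - oracle(x - eps)) ** 2
        grad = (e_plus - e_minus) / (2 * eps)
        x = float(np.clip(x + lr * grad, lo, hi))  # ascend, stay in domain
    return x

x_star = find_error_maximizer(x0=0.5)
```

In an active-learning loop, each discovered maximizer `x_star` would be labeled by the oracle and added to the training set before the DNN is fine-tuned or retrained, as the abstract describes.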
