Towards Robust Direct Perception Networks for Automated Driving

09/30/2019
by   Chih-Hong Cheng, et al.

We consider the problem of engineering robust direct perception neural networks with regression outputs. Such networks take high-dimensional input images and produce affordances such as the curvature of the upcoming road segment or the distance to the front vehicle. Our proposal starts by allowing a neural network prediction to deviate from the label within a tolerance Δ. The tolerance can originate either from a contract or from practical labeling limitations, where two entities may label the same data with slightly different numerical values. This tolerance motivates a non-standard loss function in which the loss is set to 0 so long as the prediction-to-label distance is less than Δ. We further extend the loss function and define a new provably robust criterion that is parametric in the allowed output tolerance Δ, the layer index l̃ at which perturbation is considered, and the maximum perturbation amount κ. During training, the robust loss is computed by first propagating symbolic errors (bounded in magnitude by κ) from the l̃-th layer to the output layer, and then computing the overflow between the resulting error bounds and the allowed tolerance. The overall concept is evaluated by engineering a direct perception neural network that estimates the center position of the ego-lane in pixel coordinates.
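The abstract's two ingredients, a tolerance-based loss and a robust loss obtained by propagating bounded perturbations from an intermediate layer to the output, can be sketched as follows. This is not the authors' implementation: it uses PyTorch, replaces symbolic error propagation with simple interval bound propagation through linear and ReLU layers, and the names `tolerance_loss`, `robust_overflow_loss`, `head`, and `tail`, as well as the particular layer split at l̃, are illustrative assumptions.

```python
# Minimal sketch (assumptions: PyTorch; interval bound propagation as a
# stand-in for symbolic error propagation; the network is split into
# `head` (layers up to the l~-th layer) and `tail` (remaining layers)).
import torch
import torch.nn as nn


def tolerance_loss(pred, label, delta):
    """Zero loss while |pred - label| <= delta; linear penalty beyond."""
    return torch.clamp(torch.abs(pred - label) - delta, min=0.0).mean()


def robust_overflow_loss(tail, hidden, label, delta, kappa):
    """Propagate the interval [hidden - kappa, hidden + kappa] through the
    tail layers and penalise how far the output bounds overflow the
    tolerance band [label - delta, label + delta]."""
    lo, hi = hidden - kappa, hidden + kappa
    for layer in tail:
        if isinstance(layer, nn.Linear):
            center, radius = (lo + hi) / 2, (hi - lo) / 2
            center = center @ layer.weight.t() + layer.bias
            radius = radius @ layer.weight.abs().t()
            lo, hi = center - radius, center + radius
        elif isinstance(layer, nn.ReLU):
            # ReLU is monotone, so clamping the bounds is sound.
            lo, hi = lo.clamp(min=0.0), hi.clamp(min=0.0)
    over_hi = torch.clamp(hi - (label + delta), min=0.0)
    over_lo = torch.clamp((label - delta) - lo, min=0.0)
    return (over_hi + over_lo).mean()


# Toy usage: perturbation of magnitude kappa is applied to the activations
# after `head`, i.e. at the assumed l~-th layer.
head = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
tail = nn.Sequential(nn.Linear(32, 1))
x, y = torch.randn(8, 64), torch.randn(8, 1)
h = head(x)
loss = tolerance_loss(tail(h), y, delta=0.1) \
     + robust_overflow_loss(tail, h, y, delta=0.1, kappa=0.05)
```

Under these assumptions, the robust term contributes nothing as long as the perturbed output interval stays inside the tolerance band, mirroring the zero-loss region of the tolerance-based loss.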
