High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach
Deep neural networks are a powerful technique for learning complex functions from data. However, their appeal in real-world applications can be hindered by an inability to quantify the uncertainty of their predictions. In this paper, the generation of prediction intervals (PIs) for quantifying uncertainty in regression tasks is considered. It is axiomatic that high-quality PIs should be as narrow as possible whilst capturing a specified proportion of the data. We derive a loss function directly from this high-quality principle that requires no distributional assumption. We show how its form derives from a likelihood principle, that it can be used with gradient descent, and that, in ensembled form, it accounts for model uncertainty. This remedies limitations of a popular model developed on the same high-quality principle. Experiments are conducted on ten regression benchmark datasets. The proposed quality-driven (QD) method is shown to outperform current state-of-the-art uncertainty quantification methods, reducing average PI width by around 10%.
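As a rough illustration of how such a quality-driven loss can be made differentiable for gradient descent, the sketch below penalises the mean width of captured intervals while softly enforcing the target coverage 1 − α via a sigmoid relaxation of the capture indicator. This is a minimal sketch, not the authors' reference implementation: the function name `qd_loss`, the softening factor `s`, the penalty weight `lambda_`, and the exact penalty scaling are illustrative assumptions.

```python
import torch

def qd_loss(y_lower, y_upper, y_true, alpha=0.05, s=160.0, lambda_=15.0):
    """Sketch of a quality-driven PI loss: narrow intervals that still
    capture at least (1 - alpha) of the targets. All constants are
    illustrative, not taken from the paper."""
    n = y_true.shape[0]
    # Soft capture indicator: approximately 1 when y_lower <= y <= y_upper,
    # smoothed by sigmoids so the coverage term is differentiable.
    k_soft = torch.sigmoid(s * (y_upper - y_true)) * \
             torch.sigmoid(s * (y_true - y_lower))
    # Hard capture indicator, used only to select intervals for the width term.
    k_hard = ((y_upper >= y_true) & (y_lower <= y_true)).float()
    # Mean width of the intervals that actually capture their target.
    mpiw_capt = torch.sum((y_upper - y_lower) * k_hard) / (torch.sum(k_hard) + 1e-6)
    # Soft coverage probability (PICP) over the batch.
    picp = torch.mean(k_soft)
    # Quadratic penalty, active only when coverage falls short of 1 - alpha.
    penalty = torch.clamp((1.0 - alpha) - picp, min=0.0) ** 2
    return mpiw_capt + lambda_ * n / (alpha * (1.0 - alpha)) * penalty
```

In use, a network with two output heads would predict (y_lower, y_upper) for each input; an ensemble of such networks, each trained with this loss from a different random initialisation, yields wider aggregate intervals where the members disagree, which is one way the ensembled form can account for model uncertainty.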