Leveraging Intrinsic Gradient Information for Machine Learning Model Training
Designing models that produce accurate predictions is the fundamental objective of machine learning. This work presents methods demonstrating that when the derivatives of target variables with respect to inputs can be extracted from processes of interest, they can be leveraged to improve the accuracy of differentiable machine learning models. Four key ideas are explored: (1) improving the predictive accuracy of linear regression models and feedforward neural networks (NNs); (2) using the difference in performance between feedforward NNs trained with and without gradient information to tune NN complexity (in the form of the number of hidden nodes); (3) using gradient information to regularise linear regression; and (4) using gradient information to improve generative image models. Across these applications, gradient information is shown to enhance each predictive model, demonstrating its broad utility.
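As a minimal sketch of idea (1) for the linear case: for a linear model f(x) = w·x, the input gradient df/dx equals w, so noisy per-sample gradient observations can be stacked as extra rows of an augmented least-squares system. All names, the synthetic data, and the gradient weight `lam` below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: noisy value observations y = x . w_true + noise,
# plus much cleaner observations of the input gradient dy/dx, which for
# a linear model equals w_true at every input.
w_true = np.array([2.0, -1.0, 0.5])
n, d = 20, 3
X = rng.normal(size=(n, d))
y = X @ w_true + 1.0 * rng.normal(size=n)            # noisy values
G = w_true + 0.05 * rng.normal(size=(n, d))          # per-sample gradients

# Ordinary least squares on values alone: minimise ||Xw - y||^2.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Gradient-augmented least squares: each observed gradient row g_i adds
# the residual (w - g_i), appended as a sqrt(lam)-scaled identity block.
lam = 10.0  # assumed weight on the gradient residuals
X_aug = np.vstack([X] + [np.sqrt(lam) * np.eye(d)] * n)
y_aug = np.concatenate([y] + [np.sqrt(lam) * g for g in G])
w_grad, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

err_ols = np.linalg.norm(w_ols - w_true)
err_grad = np.linalg.norm(w_grad - w_true)
print(err_ols, err_grad)
```

With clean gradient data, the augmented fit recovers the true weights more accurately than values alone; the same value-plus-gradient loss carries over to differentiable models such as NNs, where the model gradient comes from automatic differentiation.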