How Much Chemistry Does a Deep Neural Network Need to Know to Make Accurate Predictions?

10/05/2017
by Garrett B. Goh, et al.

In the last few years, we have seen the rise of deep learning applications across a broad range of chemistry research problems. Recently, we reported on the development of Chemception, a deep convolutional neural network (CNN) architecture for general-purpose small-molecule property prediction. In this work, we investigate the effects of systematically removing and adding basic chemical information to the channels of the 2D images used to train Chemception. By augmenting the images with only three additional channels of basic chemical information, we demonstrate that Chemception now outperforms contemporary deep learning models trained on more sophisticated chemical representations (molecular fingerprints) for the prediction of toxicity, activity, and solvation free energy, as well as physics-based free energy simulation methods. Our work thus demonstrates that a firm grasp of first-principles chemical knowledge is not a prerequisite for deep learning models to predict chemical properties accurately. Lastly, by altering the chemical information content of the images and examining the resulting performance of Chemception, we identify two distinct learning patterns for predicting toxicity/activity as compared to solvation free energy. These patterns suggest that Chemception learns about its tasks in a manner consistent with established chemical knowledge.
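To make the channel-augmentation idea concrete, the sketch below (not the authors' code) shows one way a molecule could be rasterized into a small multi-channel image, with each extra channel carrying one piece of basic chemical information. RDKit and NumPy are assumed, and the specific channel choices here (atomic number, Gasteiger partial charge, bond order), image size, and resolution are illustrative; the exact encoding used to train Chemception may differ.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def mol_to_augmented_image(smiles, size=80, res=0.5):
    """Rasterize a molecule's 2D depiction into a (size, size, 3) array.

    Channels (illustrative, hypothetical choices):
      0 - atomic number, 1 - Gasteiger partial charge, 2 - bond order.
    """
    mol = Chem.MolFromSmiles(smiles)
    AllChem.Compute2DCoords(mol)           # generate a 2D layout of atoms/bonds
    AllChem.ComputeGasteigerCharges(mol)   # basic chemical info: partial charges

    conf = mol.GetConformer()
    img = np.zeros((size, size, 3), dtype=np.float32)
    offset = size // 2

    def to_pixel(x, y):
        # Map 2D depiction coordinates onto the image grid.
        return int(round(x / res)) + offset, int(round(y / res)) + offset

    # Atom channels: atomic number and partial charge at each atom's pixel.
    for atom in mol.GetAtoms():
        pos = conf.GetAtomPosition(atom.GetIdx())
        px, py = to_pixel(pos.x, pos.y)
        if 0 <= px < size and 0 <= py < size:
            img[py, px, 0] = atom.GetAtomicNum()
            img[py, px, 1] = atom.GetDoubleProp("_GasteigerCharge")

    # Bond channel: draw each bond as a line of pixels carrying its bond order.
    for bond in mol.GetBonds():
        p1 = conf.GetAtomPosition(bond.GetBeginAtomIdx())
        p2 = conf.GetAtomPosition(bond.GetEndAtomIdx())
        for t in np.linspace(0.0, 1.0, 20):
            px, py = to_pixel(p1.x + t * (p2.x - p1.x), p1.y + t * (p2.y - p1.y))
            if 0 <= px < size and 0 <= py < size:
                img[py, px, 2] = bond.GetBondTypeAsDouble()

    return img

# Example: a 3-channel image of caffeine, ready to feed to a CNN.
image = mol_to_augmented_image("Cn1cnc2c1c(=O)n(C)c(=O)n2C")
print(image.shape)  # (80, 80, 3)
```

Removing or adding a channel in such a representation is a straightforward array operation, which is what makes the systematic ablation described above possible.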
