Reinforced dynamics for enhanced sampling in large atomic and molecular systems. I. Basic Methodology
A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. The new approach has two major components. Like metadynamics, it enables efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously during the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Applications to the all-atom, explicit-solvent models of alanine dipeptide and tripeptide show some promise for this new approach.
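The on-the-fly training loop described above can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: a small ensemble of polynomial fits stands in for the deep neural networks, the spread of the ensemble's predictions plays the role of the uncertainty indicator, and the actual molecular dynamics and biasing forces are omitted. The one-dimensional "collective variable" landscape and the `threshold` trust level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_free_energy(s):
    # Hypothetical double-well landscape standing in for the real
    # free energy along a collective variable.
    return (s**2 - 1.0) ** 2

class Ensemble:
    """Ensemble of simple models; its prediction spread serves as the
    uncertainty indicator that drives data collection."""

    def __init__(self, n_models=4, degree=6):
        self.n_models = n_models
        self.degree = degree
        self.coefs = [np.zeros(degree + 1) for _ in range(n_models)]

    def fit(self, s, f):
        for i in range(self.n_models):
            # Each member is fit on a bootstrap resample of the data,
            # so the members disagree where data is scarce.
            idx = rng.integers(0, len(s), len(s))
            self.coefs[i] = np.polyfit(s[idx], f[idx], self.degree)

    def predict(self, s):
        preds = np.array([np.polyval(c, s) for c in self.coefs])
        return preds.mean(axis=0), preds.std(axis=0)

# Initial data from a short exploratory run.
ens = Ensemble()
s_data = rng.uniform(-1.5, 1.5, 20)
f_data = true_free_energy(s_data)
ens.fit(s_data, f_data)

threshold = 0.1  # hypothetical trust level for the uncertainty indicator
for _ in range(5):
    # Explore new configurations; where the ensemble is uncertain,
    # label them and add them to the training set (the "reward" signal
    # steering data collection), then retrain on the fly.
    s_new = rng.uniform(-1.5, 1.5, 50)
    _, sigma = ens.predict(s_new)
    uncertain = s_new[sigma > threshold]
    if len(uncertain) == 0:
        break
    s_data = np.concatenate([s_data, uncertain])
    f_data = true_free_energy(s_data)
    ens.fit(s_data, f_data)

grid = np.linspace(-1.2, 1.2, 50)
mean, sigma = ens.predict(grid)
```

In the full method, `-grad(mean)` would supply the biasing force applied to the dynamics in regions where the uncertainty is below the threshold, while high-uncertainty regions trigger new free-energy evaluations instead.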