Structure-Guided Processing Path Optimization with Deep Reinforcement Learning

09/21/2020
by Johannes Dornheim, et al.

A major goal of material design is the inverse optimization of processing-structure-property relationships. In this paper, we propose and investigate a deep reinforcement learning approach for the optimization of processing paths. The goal is to find optimal processing paths in the material structure space that lead to target structures which have been identified beforehand to yield the desired material properties. The contribution completes the desired inversion of the processing-structure-property chain in a flexible and generic way. Because the relation between properties and structures is generally non-unique, a whole set of goal structures that lead to the desired properties can typically be identified. Our proposed method optimizes processing paths from a start structure to one of these equivalent goal structures. The algorithm learns to find near-optimal paths by interacting with the structure-generating process. It is guided by structure descriptors, which serve as process state features, and by a reward signal formulated in terms of a distance function in structure space. The model-free reinforcement learning algorithm learns through trial and error while interacting with the process and does not rely on a priori sampled processing data. We instantiate and evaluate the proposed method by optimizing paths of a generic metal forming process to reach near-optimal structures, represented by one-point statistics of crystallographic textures.
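The abstract describes a model-free agent that interacts with the structure-generating process and receives a reward derived from a distance function over structure descriptors. The following is a minimal sketch of such a loop, not the authors' implementation: it assumes a toy surrogate process, tabular Q-learning, and illustrative goal structures, descriptors, and hyperparameters, none of which come from the paper.

```python
# Minimal sketch (illustrative only): tabular Q-learning on a toy
# structure-generating process. State = a low-dimensional structure
# descriptor vector; reward = negative distance to the nearest of several
# equivalent goal structures, with a bonus when a goal is reached.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 6                       # hypothetical discrete processing steps
GOALS = np.array([[0.7, 0.2, 0.1],  # equivalent goal structures (toy values)
                  [0.1, 0.2, 0.7]])
TOL = 0.05                          # distance threshold for "goal reached"

def step(state, action):
    """Toy surrogate for the structure-generating process (placeholder dynamics)."""
    drift = np.zeros(3)
    drift[action % 3] = 0.05 if action < 3 else -0.05
    new = np.clip(state + drift + 0.01 * rng.standard_normal(3), 0.0, 1.0)
    return new / (new.sum() + 1e-12)  # keep a valid volume-fraction-like vector

def reward(state):
    """Negative distance to the closest equivalent goal structure."""
    d = np.min(np.linalg.norm(GOALS - state, ord=np.inf, axis=1))
    return (100.0 if d < TOL else -d), d < TOL

def discretize(state, bins=10):
    return tuple((state * bins).astype(int))

Q = {}                              # tabular action-value function
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(2000):
    s = np.array([1 / 3, 1 / 3, 1 / 3])  # start structure
    for t in range(20):                   # bounded processing-path length
        key = discretize(s)
        q = Q.setdefault(key, np.zeros(N_ACTIONS))
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(q))
        s_next = step(s, a)
        r, done = reward(s_next)
        q_next = Q.setdefault(discretize(s_next), np.zeros(N_ACTIONS))
        q[a] += alpha * (r + gamma * (0.0 if done else q_next.max()) - q[a])
        s = s_next
        if done:
            break
```

In the paper, the agent interacts with the actual metal forming process and the structures are represented by one-point statistics of crystallographic textures; the surrogate dynamics, discretization, and tabular learner above are placeholders standing in for those components.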
