Towards model discovery with reinforcement learning

02/22/2020
by Adrián Lozano-Durán, et al.

We propose to learn models that are (i) expressed in analytical form, (ii) evaluated a posteriori, and (iii) informed exclusively by integral quantities from the reference solution as prior knowledge. In point (i), we pursue interpretable models expressed symbolically, as opposed to black-box neural networks; the latter are used only during learning to efficiently parameterize the large search space of possible models. In point (ii), learned models are dynamically evaluated a posteriori in the computational solver rather than judged against a priori information from preprocessed high-fidelity data, thereby accounting for the specificity of the solver at hand, such as its numerics. Finally, in point (iii), the exploration of new models is guided solely by predefined integral quantities, e.g., averaged quantities of engineering interest in Reynolds-averaged or large-eddy simulations (LES). This also enables the assimilation of sparse data from experimental measurements, which usually provide an averaged large-scale description of the system rather than a detailed small-scale one. We use a deep reinforcement learning framework coupled with a computational solver to achieve these objectives concurrently. The combination of reinforcement learning with objectives (i), (ii), and (iii) differentiates our work from previous modeling attempts based on machine learning.
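To make the coupled learning loop above concrete, here is a minimal conceptual sketch in Python. It is not the paper's implementation: the solver is a toy relaxation problem, the "agent" is a crude random-search stand-in for the deep RL policy, and all names (run_solver, integral_quantities, q_ref) are hypothetical. It only illustrates the central idea that the reward is computed a posteriori from integral quantities of the solver output, rather than from preprocessed high-fidelity fields.

```python
import numpy as np

def integral_quantities(field):
    """Reduce a solver field to averaged quantities of engineering
    interest (here simply the mean and variance, as an illustration)."""
    return np.array([field.mean(), field.var()])

def reward(q_model, q_ref):
    """A posteriori reward: negative error in the integral quantities
    relative to the reference solution."""
    return -np.linalg.norm(q_model - q_ref)

def run_solver(model_params, n_steps=100):
    """Stand-in for the computational solver: advance a toy state with an
    analytical closure parameterized by model_params, return the field."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(64)
    for _ in range(n_steps):
        # Candidate model in analytical form: a*u + b*u**3
        closure = model_params[0] * u + model_params[1] * u**3
        u = u + 0.01 * (-u + closure)
    return u

# Reference integral quantities (in the paper's setting these would come
# from high-fidelity data or sparse experimental measurements).
q_ref = integral_quantities(run_solver(np.array([0.5, -0.1])))

# Crude random search standing in for the deep RL agent: perturb the
# model coefficients and keep improvements, always scoring each
# candidate by running it inside the solver (a posteriori evaluation).
params, best = np.zeros(2), -np.inf
for episode in range(200):
    candidate = params + 0.05 * np.random.standard_normal(2)
    r = reward(integral_quantities(run_solver(candidate)), q_ref)
    if r > best:
        params, best = candidate, r

print("learned coefficients:", params, "reward:", best)
```

The sketch preserves the structure described in the abstract: the search is parameterized over models in analytical form, each candidate is judged by running it in the solver, and the only prior knowledge entering the reward is a small set of integral quantities.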
