Multi-fidelity data fusion through parameter space reduction with applications to automotive engineering
Multi-fidelity models are of great importance because of their capability to fuse information coming from different simulations and sensors. Gaussian processes are employed for nonparametric regression in a Bayesian setting. They generalize linear regression by embedding the inputs in a latent manifold inside an infinite-dimensional reproducing kernel Hilbert space. We can augment the inputs with the observations of low-fidelity models in order to learn a more expressive latent manifold and thus increase the model's accuracy. This can be realized recursively with a chain of Gaussian processes of incrementally higher fidelity. We extend these multi-fidelity model realizations to case studies characterized by a high-dimensional input space but low intrinsic dimensionality. In these cases, physically supported or purely numerical low-order models are still affected by the curse of dimensionality when queried for responses. When the model's gradient information is provided, the existence of an active subspace, or a nonlinear transformation of the input parameter space, can be exploited to design low-fidelity response surfaces and thus enable Gaussian process multi-fidelity regression, without the need to perform new simulations. This is particularly useful in the case of data scarcity. In this work, we present a new multi-fidelity approach involving active subspaces and the nonlinear level-set learning method. We test the proposed numerical method on two different high-dimensional benchmarks, and on a more complex car aerodynamics problem. We show how exploiting a low intrinsic dimensionality bias can increase the accuracy of Gaussian process response surfaces.
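The following is a minimal sketch, not the authors' implementation, of the idea summarized above: a one-dimensional active subspace is estimated from gradient samples, a cheap low-fidelity response surface is built over the active variable, and its predictions augment the inputs of a high-fidelity Gaussian process trained on scarce data. The test function, sample sizes, and the use of scikit-learn's GaussianProcessRegressor are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Toy high-dimensional model with low intrinsic dimensionality:
# f(x) = sin(w . x), so every gradient points along w (hypothetical example).
dim = 10
w = rng.standard_normal(dim)
w /= np.linalg.norm(w)
f = lambda X: np.sin(X @ w)
grad_f = lambda X: np.cos(X @ w)[:, None] * w

# Estimate the active subspace: eigendecomposition of the
# uncentered covariance of the gradients, C = E[grad f grad f^T].
X_grad = rng.uniform(-1, 1, size=(200, dim))
G = grad_f(X_grad)
C = G.T @ G / G.shape[0]
eigvals, eigvecs = np.linalg.eigh(C)
W1 = eigvecs[:, [-1]]  # leading eigenvector spans the 1D active subspace

# Low-fidelity response surface: a GP over the active variable,
# trained on plentiful cheap evaluations.
X_lo = rng.uniform(-1, 1, size=(100, dim))
gp_lo = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp_lo.fit(X_lo @ W1, f(X_lo))

# High-fidelity GP on scarce data, with inputs augmented by the
# low-fidelity prediction (a simple two-level fusion; the paper's
# chain of GPs applies this recursively).
X_hi = rng.uniform(-1, 1, size=(15, dim))
aug = lambda X: np.hstack([X, gp_lo.predict(X @ W1).reshape(-1, 1)])
gp_hi = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp_hi.fit(aug(X_hi), f(X_hi))

# Evaluate the multi-fidelity surrogate on held-out points.
X_test = rng.uniform(-1, 1, size=(500, dim))
err = np.abs(gp_hi.predict(aug(X_test)) - f(X_test)).mean()
print(f"mean absolute error of the multi-fidelity surrogate: {err:.3e}")
```

In this sketch the low-fidelity surface costs no new high-fidelity simulations, since it reuses gradient information to reduce the parameter space before regression.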