Learning Sparse Mixture Models

03/28/2022
by Fatima Antarou Ba, et al.

This work approximates high-dimensional probability density functions with an ANOVA-like sparse structure by mixtures of wrapped Gaussian and von Mises distributions. When the dimension d is very large, training the model parameters with the usual learning algorithms becomes infeasible due to the curse of dimensionality. Therefore, assuming that each mixture component depends on an a priori unknown number of variables that is much smaller than the space dimension d, we first define an algorithm that determines the model's set of active variables using Kolmogorov-Smirnov and correlation tests. Then, restricting the learning procedure to this set of active variables, we iteratively determine the set of variable interactions of the marginal density functions and simultaneously learn the parameters, combining the Kolmogorov-Smirnov and correlation coefficient tests with a proximal Expectation-Maximization algorithm. As the numerical examples show, this procedure considerably reduces the algorithm's complexity with respect to the input dimension d and increases the model's accuracy on the given samples.
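To illustrate the first step of the procedure described above, here is a minimal sketch of active-variable detection with a Kolmogorov-Smirnov test. It assumes samples have been rescaled to the torus [0, 1)^d, so that an inactive coordinate has a uniform marginal; the function name and the significance level alpha are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import kstest

def find_active_variables(samples, alpha=0.05):
    """Flag coordinates whose marginal deviates from the uniform
    distribution on [0, 1), i.e. candidate active variables.

    samples : (n, d) array of observations rescaled to [0, 1)^d.
    """
    n, d = samples.shape
    active = []
    for j in range(d):
        # H0: the j-th marginal is uniform, so the variable is inactive.
        stat, p_value = kstest(samples[:, j], "uniform")
        if p_value < alpha:  # reject H0 -> variable influences the density
            active.append(j)
    return active

# Toy example: only the first two of ten coordinates carry structure.
rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.uniform(size=(n, d))
X[:, :2] = (rng.vonmises(mu=0.0, kappa=4.0, size=(n, 2)) / (2 * np.pi)) % 1.0
print(find_active_variables(X))  # expected: [0, 1]
```

In the paper's setting, the learning of the mixture parameters (via a proximal EM algorithm) and the interaction tests would then be restricted to the variables returned by such a detection step.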
