Monotonic Alpha-divergence Minimisation
In this paper, we introduce a novel iterative algorithm that carries out α-divergence minimisation by ensuring a systematic decrease in the α-divergence at each step. In its most general form, our framework allows us to simultaneously optimise the weights and component parameters of a given mixture model. Notably, our approach makes it possible to build on various methods previously proposed for α-divergence minimisation, such as gradient or power descent schemes. Furthermore, we shed new light on an integrated Expectation Maximization algorithm. We provide empirical evidence that our methodology yields improved results, while illustrating the numerical benefits of introducing flexibility through the parameter α of the α-divergence.
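To make the objective concrete: under one common parametrisation (the abstract does not fix one), the α-divergence between densities p and q is D_α(p‖q) = (E_q[(p(X)/q(X))^α] − 1) / (α(α−1)) for α ∉ {0, 1}, recovering the forward and reverse KL divergences in the limits α → 1 and α → 0. The sketch below is a hypothetical Monte Carlo estimator of this quantity for two univariate Gaussians, not the paper's algorithm; all function and parameter names are our own.

```python
import math
import random

def gauss_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian N(mu, sigma^2)."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def alpha_divergence_mc(alpha, mu_p, s_p, mu_q, s_q, n=200_000, seed=0):
    """Monte Carlo estimate of D_alpha(p || q) for Gaussian p and q,
    using the parametrisation (E_q[(p/q)^alpha] - 1) / (alpha * (alpha - 1)).
    Valid for alpha not in {0, 1}; samples are drawn from q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_q, s_q)
        log_ratio = gauss_logpdf(x, mu_p, s_p) - gauss_logpdf(x, mu_q, s_q)
        total += math.exp(alpha * log_ratio)
    return (total / n - 1.0) / (alpha * (alpha - 1.0))

# Identical distributions give zero divergence; distinct ones give a positive value.
print(alpha_divergence_mc(0.5, 0.0, 1.0, 0.0, 1.0))
print(alpha_divergence_mc(0.5, 0.0, 1.0, 1.0, 1.5))
```

A minimisation scheme of the kind the paper describes would repeatedly adjust the parameters of q (here, a mixture model) so that such an estimate decreases at every step.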