Analysis of Gradient-Based Expectation-Maximization-Like Algorithms via Integral Quadratic Constraints
The Expectation-Maximization (EM) algorithm is one of the most popular methods for distribution-based clustering in unsupervised learning. In this paper, we analyze a generalized EM (GEM) algorithm and a newly designed EM-like algorithm by modeling them as linear time-invariant (LTI) systems in feedback with a nonlinearity, leveraging tools from robust control theory, in particular integral quadratic constraints (IQCs). To this end, we study the absolute stability of dynamical systems of this form with a sector-bounded feedback nonlinearity, which is precisely how the aforementioned algorithms are represented. This analysis allows us to craft a strongly convex objective function, which in turn leads to the design of the novel EM-like algorithm for Gaussian mixture models (GMMs). It also allows us to establish bounds on the convergence rates of both algorithms. In particular, the bounds derived for our proposed EM-like algorithm generalize known bounds for the EM algorithm on GMMs, while our analysis of an existing gradient ascent GEM algorithm based on the Q-function approximately recovers bounds found in the literature.
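As a rough illustration of the kind of IQC-based rate certificate the abstract refers to, the sketch below checks a linear convergence rate for plain gradient descent on a strongly convex, smooth objective, viewed as an LTI system in feedback with a sector-bounded gradient nonlinearity. This is a generic sector-IQC linear matrix inequality in the style commonly used for first-order methods, not the paper's formulation; the constants `m`, `L`, the step size `alpha`, and the candidate rate `rho` are illustrative assumptions.

```python
# Minimal, self-contained sketch of a sector-IQC rate certificate, applied to
# plain gradient descent rather than the paper's GEM/EM-like algorithms.
# All constants and the specific LMI are illustrative assumptions.
import numpy as np
import cvxpy as cp

def rate_certified(rho, alpha, m, L):
    """Check whether an LMI certifies that x_{k+1} = x_k - alpha*grad f(x_k)
    contracts at linear rate rho when grad f lies in the sector [m, L]."""
    # LTI part of the feedback interconnection (a scalar state suffices here).
    A = np.array([[1.0]])
    B = np.array([[-alpha]])
    C = np.array([[1.0]])
    # Sector condition for the gradient of an m-strongly convex, L-smooth f:
    # (u - m*y) * (L*y - u) >= 0  <=>  [y, u] M [y, u]^T >= 0.
    M = np.array([[-m * L, (m + L) / 2.0],
                  [(m + L) / 2.0, -1.0]])
    # Maps the state/input pair to the signals (y_k, u_k) seen by the IQC.
    G = np.block([[C, np.zeros((1, 1))],
                  [np.zeros((1, 1)), np.eye(1)]])

    P = cp.Variable((1, 1), PSD=True)   # Lyapunov certificate
    lam = cp.Variable(nonneg=True)      # IQC multiplier
    lmi = (cp.bmat([[A.T @ P @ A - rho**2 * P, A.T @ P @ B],
                    [B.T @ P @ A,              B.T @ P @ B]])
           + lam * (G.T @ M @ G))
    lmi = 0.5 * (lmi + lmi.T)           # enforce symmetry explicitly

    prob = cp.Problem(cp.Minimize(0),
                      [lmi << -1e-8 * np.eye(2), P >> 1e-8 * np.eye(1)])
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

if __name__ == "__main__":
    m, L = 1.0, 10.0
    alpha = 2.0 / (m + L)               # classical tuned step size
    rho = (L - m) / (L + m) + 1e-3      # slightly above the known tight rate
    print(rate_certified(rho, alpha, m, L))   # expected: True
```

Bisecting over `rho` with such a feasibility check yields the smallest certifiable rate; the paper's analysis plays an analogous role for the GEM and EM-like iterations on GMMs.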