A Fast Algorithm for Maximum Likelihood Estimation of Mixture Proportions Using Sequential Quadratic Programming
Maximum likelihood estimation of mixture proportions has a long history and continues to play an important role in modern statistics, notably in recently developed nonparametric empirical Bayes methods. Although this problem has traditionally been solved using an expectation maximization (EM) algorithm, recent work by Koenker and Mizera shows that modern convex optimization methods can be substantially faster and more accurate. In particular, they used an interior point (IP) method to solve a dual formulation of the original optimization problem. Here we develop a new optimization approach, based on sequential quadratic programming (SQP), that is substantially faster than the IP method without sacrificing accuracy. Our approach combines several ideas: first, it solves a reformulation of the original primal problem rather than the dual; second, it uses SQP instead of IP, with the goal of making full use of the expensive gradient and Hessian computations; third, it uses an active set method to solve the QP subproblem within the SQP algorithm, exploiting the sparse nature of the problem; and fourth, it uses low-rank approximations of the likelihood matrix for much faster gradient and Hessian computations at virtually no cost to accuracy. We illustrate all these ideas in experiments on synthetic data sets as well as on a large genetic association data set. For large data sets (e.g., n ≈ 10^6 observations and m ≈ 10^3 mixture components), our implementation achieves at least a 100-fold reduction in compute time compared with state-of-the-art IP methods. Our methods are implemented in Julia and R, and we have made the source code available at https://github.com/stephenslab/mixsqp-paper.
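To make the underlying optimization problem concrete, the sketch below (in R, one of the two languages the implementation uses) writes out the mixture log-likelihood objective, its gradient and Hessian, and the low-rank idea of approximating the likelihood matrix to speed up those computations. The synthetic normal-means data, the chosen rank, and all function names are illustrative assumptions; this is not the authors' mix-SQP implementation, which is available at the repository linked above.

```r
# Minimal sketch (assumed names and synthetic data, not the authors'
# implementation) of the optimization problem behind mix-SQP:
#   minimize  f(x) = -(1/n) * sum_i log( (L %*% x)[i] )
#   subject to x >= 0, sum(x) = 1,
# where L[i,k] is the likelihood of observation i under mixture component k.

set.seed(1)
n <- 5000
m <- 20
z <- rnorm(n)                                # synthetic observations
s <- seq(0.1, 3, length.out = m)             # grid of component std. deviations
L <- outer(z, s, function(z, s) dnorm(z, sd = s))  # n x m likelihood matrix
x <- rep(1/m, m)                             # uniform mixture proportions

# Objective value at x.
objective <- function(L, x) -mean(log(drop(L %*% x)))

# Gradient and Hessian of the objective; these are the expensive O(nm)
# computations that the SQP iterations aim to make full use of.
grad_hess <- function(L, x) {
  d <- 1 / drop(L %*% x)                     # d[i] = 1 / (L x)[i]
  list(gradient = -drop(crossprod(L, d)) / nrow(L),  # -(1/n) L' d
       hessian  = crossprod(L * d) / nrow(L))        # (1/n) L' diag(d)^2 L
}

# Low-rank idea (here via a truncated SVD; the rank r is an assumption):
# because L is numerically low rank, products with L can be computed in
# O((n + m) r) time instead of O(n m), speeding up gradients and Hessians.
r  <- 10
sv <- svd(L, nu = r, nv = r)
Lx_lowrank <- function(x)
  drop(sv$u %*% (sv$d[1:r] * drop(crossprod(sv$v, x))))

# The low-rank approximation barely changes the objective value.
c(exact  = objective(L, x),
  approx = -mean(log(Lx_lowrank(x))))
```

In an SQP iteration, a gradient and Hessian computed this way define a quadratic subproblem over the simplex; an active set method can solve that subproblem cheaply because, as the abstract notes, most coordinates of the solution are zero.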