Iterative Smoothing Proximal Gradient for Regression with Structured Sparsity
In the context of high-dimensional predictive models, we consider the problem of optimizing the sum of a smooth convex loss, a non-smooth convex penalty whose proximal operator is known, and a non-smooth convex structured penalty such as total variation or overlapping group lasso. We propose to smooth the structured penalty, which yields a generic framework in which a large range of non-smooth convex structured penalties can be minimized without computing their proximal operators, which are either unknown or expensive to compute. The resulting problem can be minimized with an accelerated proximal gradient method while still benefiting from the (non-smoothed) sparsity-inducing penalty. We derive an expression of the duality gap to control the convergence of the original non-smooth problem; this expression applies to a large range of structured penalties. However, plain smoothing methods have limitations that the proposed solver aims to overcome. We therefore propose a continuation algorithm, called CONESTA, that dynamically generates a decreasing sequence of smoothing parameters in order to maintain the optimal convergence speed towards any globally prescribed precision. At each continuation step, the duality gap provides the current error and hence the next, smaller, prescribed precision. Given this precision, we provide an expression for the optimal smoothing parameter, i.e., the one that minimizes the number of iterations needed to reach that precision. We demonstrate that CONESTA achieves an improved convergence rate compared with classical proximal gradient smoothing (without continuation). Moreover, experiments conducted on both simulated and high-dimensional neuroimaging (MRI) data show that CONESTA significantly outperforms the excessive gap method, ADMM, classical proximal gradient smoothing, and inexact FISTA in terms of convergence speed and/or precision of the solution.
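To make the continuation-over-smoothing idea concrete, below is a minimal, illustrative sketch in Python for the special case of a least-squares loss plus an L1 penalty (known proximal operator) and a Nesterov-smoothed 1D total-variation penalty. All function and variable names (fista_smoothed, tv_smoothed_grad, conesta_like, mu0, etc.) are hypothetical and not taken from the paper's code; in particular, the paper's duality-gap-based rule for choosing the next precision and the optimal smoothing parameter is replaced here by a simple halving schedule, so this is only a structural sketch, not the actual CONESTA algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_smoothed_grad(beta, D, lam_tv, mu):
    """Gradient of the Nesterov-smoothed TV penalty lam_tv * TV_mu(beta)."""
    alpha = np.clip(D @ beta / mu, -1.0, 1.0)  # maximizer of the smoothed dual
    return lam_tv * (D.T @ alpha)

def fista_smoothed(X, y, lam_l1, lam_tv, D, mu, beta0, n_iter=500):
    """FISTA on (loss + smoothed TV), with the L1 handled by its prox."""
    # Lipschitz constant of the smooth part: ||X||_2^2 + lam_tv * ||D||_2^2 / mu
    L = np.linalg.norm(X, 2) ** 2 + lam_tv * np.linalg.norm(D, 2) ** 2 / mu
    beta, z, t = beta0.copy(), beta0.copy(), 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y) + tv_smoothed_grad(z, D, lam_tv, mu)
        beta_new = soft_threshold(z - grad / L, lam_l1 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        z = beta_new + (t - 1.0) / t_new * (beta_new - beta)
        beta, t = beta_new, t_new
    return beta

def conesta_like(X, y, lam_l1, lam_tv, n_continuations=8, mu0=1.0):
    """Continuation over a decreasing sequence of smoothing parameters."""
    p = X.shape[1]
    D = np.eye(p, k=1)[:-1] - np.eye(p)[:-1]  # 1D finite-difference operator
    beta, mu = np.zeros(p), mu0
    for _ in range(n_continuations):
        beta = fista_smoothed(X, y, lam_l1, lam_tv, D, mu, beta)
        mu /= 2.0  # placeholder schedule; CONESTA derives mu from the duality gap
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 100))
    beta_true = np.zeros(100)
    beta_true[20:40] = 1.0  # piecewise-constant, sparse signal
    y = X @ beta_true + 0.1 * rng.standard_normal(50)
    beta_hat = conesta_like(X, y, lam_l1=0.1, lam_tv=1.0)
    print("nonzero coefficients:", int(np.sum(np.abs(beta_hat) > 1e-3)))
```

The sketch only conveys the overall structure: an inner accelerated proximal gradient solve on the smoothed problem, restarted with a smaller smoothing parameter at each continuation step; the paper's contribution lies precisely in choosing that sequence (and the stopping precisions) optimally via the duality gap.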