Reducing Sampling Ratios and Increasing Number of Estimates Improve Bagging in Sparse Regression
Bagging, a powerful ensemble method from machine learning, improves the performance of unstable predictors. Although the power of Bagging has been shown mostly in classification problems, we demonstrate its success in sparse regression over the baseline method (L1 minimization). The framework employs a generalized version of the original Bagging with various bootstrap ratios. The performance limits associated with different choices of the bootstrap sampling ratio L/m and the number of estimates K are analyzed theoretically. Simulations show that the proposed method yields state-of-the-art recovery performance, outperforming L1 minimization and Bolasso in the challenging case of low levels of measurements. A lower L/m ratio (60%-90%) leads to better performance than the conventional choice (100%), especially when few measurements are available. With the reduced sampling rate, SNR improves over the original Bagging by up to 24%. A small number of estimates, K = 30, gives a satisfying result, even though increasing K is found to always improve or at least maintain the performance.
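For intuition, here is a minimal sketch of the bagging scheme the abstract describes: draw L of the m measurements with replacement, solve a sparse regression on each bootstrap subset, and average the K resulting estimates. scikit-learn's Lasso stands in for the paper's L1 minimization solver, and all names (bootstrap_ratio for L/m, n_estimates for K, the alpha value) are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def bagged_sparse_regression(A, y, bootstrap_ratio=0.75, n_estimates=30,
                             alpha=0.01, rng=None):
    """Average K Lasso estimates, each fit on L = bootstrap_ratio * m
    measurement rows drawn with replacement (a generalized Bagging sketch)."""
    rng = np.random.default_rng(rng)
    m = A.shape[0]
    L = max(1, int(round(bootstrap_ratio * m)))
    estimates = []
    for _ in range(n_estimates):
        idx = rng.integers(0, m, size=L)          # bootstrap L of m rows
        model = Lasso(alpha=alpha, max_iter=10000)
        model.fit(A[idx], y[idx])                 # sparse fit on the subset
        estimates.append(model.coef_)
    return np.mean(estimates, axis=0)             # bagged (averaged) estimate

# Usage: recover a sparse x from noisy measurements y = A x + noise, m < n
rng = np.random.default_rng(0)
n, m, sparsity = 200, 50, 5
x_true = np.zeros(n)
x_true[rng.choice(n, sparsity, replace=False)] = rng.standard_normal(sparsity)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = bagged_sparse_regression(A, y, bootstrap_ratio=0.75, n_estimates=30)
```

Setting bootstrap_ratio below 1.0 mirrors the abstract's observation that subsampling fewer than m measurements per estimate can outperform the conventional full-size bootstrap.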