InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning
Data pruning aims to match the performance of training on the original data at a lower overall cost. A common approach is to simply filter out samples that contribute less to training, but this introduces a gradient expectation bias between the pruned and the original data. To solve this problem, we propose InfoBatch, a novel framework that achieves lossless training acceleration by unbiased dynamic data pruning. Specifically, InfoBatch randomly prunes a portion of less informative samples based on the loss distribution and rescales the gradients of the remaining samples to approximate the original gradient expectation. We train on the full data during the last few epochs, which further reduces the bias of the total update and improves performance. As a plug-and-play and architecture-agnostic framework, InfoBatch consistently obtains lossless training results on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet-1K, saving 40%, 33%, 30%, and 26% of the overall cost, respectively. We also extend InfoBatch to semantic segmentation and achieve lossless mIoU on ADE20K with a 20% overall cost saving. Last but not least, since InfoBatch accelerates training along the data dimension, it further speeds up large-batch training methods (e.g., LARS and LAMB) by 1.3 times without extra cost or performance drop. The code will be made public.
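To illustrate the core idea, the following is a minimal sketch (not the authors' released implementation) of an epoch-level selection step, assuming per-sample losses recorded from the previous epoch: samples whose stored loss is below the mean are dropped with some probability, and the surviving low-loss samples are up-weighted so the expected gradient stays close to that of full-data training. The function name, the `prune_ratio` parameter, and the mean-loss threshold are illustrative assumptions.

```python
import numpy as np

def prune_and_rescale(sample_losses, prune_ratio=0.5, rng=None):
    """Hypothetical sketch of unbiased dynamic pruning for one epoch.

    Samples with stored loss below the mean are dropped with probability
    `prune_ratio`; kept low-loss samples receive weight 1 / (1 - prune_ratio)
    so the expected gradient approximates full-data training.
    """
    if rng is None:
        rng = np.random.default_rng()
    losses = np.asarray(sample_losses, dtype=float)

    # Mark less informative samples (loss below the mean) as prune candidates.
    low_loss = losses < losses.mean()
    dropped = low_loss & (rng.random(losses.shape) < prune_ratio)
    kept_idx = np.flatnonzero(~dropped)

    # Up-weight the surviving low-loss samples to keep the update unbiased.
    weights = np.ones_like(losses)
    weights[low_loss] = 1.0 / (1.0 - prune_ratio)
    return kept_idx, weights[kept_idx]
```

In a training loop, `kept_idx` would select the samples seen this epoch and the returned weights would multiply the corresponding per-sample losses (or gradients) before the optimizer step; in the last few epochs one would skip pruning and train on the full data, as described above.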