Parallel training of linear models without compromising convergence

11/05/2018
by Nikolas Ioannou, et al.

In this paper we analyze, evaluate, and improve the performance of training generalized linear models on modern CPUs. We start with a state-of-the-art asynchronous parallel training algorithm, identify system-level performance bottlenecks, and apply optimizations that improve data parallelism, cache-line locality, and cache-line prefetching of the algorithm. These modifications reduce the per-epoch run-time significantly, but take a toll on algorithm convergence by increasing the number of epochs required. To alleviate these shortcomings of our systems-optimized version, we propose a novel, dynamic data partitioning scheme across threads which allows us to approach the convergence of the sequential version. The combined set of optimizations results in a consistent bottom-line speedup in convergence of up to ×12 compared to the initial asynchronous parallel training algorithm, and up to ×42 compared to state-of-the-art implementations (scikit-learn and h2o), on a range of multi-core CPU architectures.
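The abstract does not include code, but the general scheme it starts from, asynchronous parallel training in which worker threads update a shared model without locks while each thread iterates over its own partition of the data, can be illustrated with a small toy sketch. The C++ example below is an assumption, not the authors' implementation: it trains a least-squares linear model with Hogwild-style lock-free updates to a shared weight vector and reshuffles the per-thread partitions at every epoch as a stand-in for a dynamic partitioning scheme; all names (run_epoch, Example, and the toy data) are illustrative.

```cpp
// Toy sketch of Hogwild-style asynchronous SGD on a linear model.
// Not the paper's implementation; names and constants are illustrative.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <numeric>
#include <random>
#include <thread>
#include <vector>

struct Example { std::vector<double> x; double y; };

// One epoch of a worker thread over its partition of example indices.
// Updates to the shared weight vector are deliberately lock-free
// (a Hogwild-style benign race, tolerated in practice).
void run_epoch(const std::vector<Example>& data,
               const std::vector<size_t>& partition,
               std::vector<double>& w, double lr) {
    for (size_t idx : partition) {
        const Example& e = data[idx];
        double pred = std::inner_product(e.x.begin(), e.x.end(), w.begin(), 0.0);
        double g = pred - e.y;                       // squared-loss gradient factor
        for (size_t j = 0; j < w.size(); ++j)
            w[j] -= lr * g * e.x[j];                 // racy, lock-free update
    }
}

int main() {
    // Toy data: y = 2*x0 - x1 plus noise.
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.1);
    std::uniform_real_distribution<double> unif(-1.0, 1.0);
    std::vector<Example> data(4096);
    for (auto& e : data) {
        e.x = {unif(rng), unif(rng)};
        e.y = 2.0 * e.x[0] - e.x[1] + noise(rng);
    }

    const unsigned num_threads = std::max(2u, std::thread::hardware_concurrency());
    std::vector<double> w(2, 0.0);
    std::vector<size_t> order(data.size());
    std::iota(order.begin(), order.end(), 0);

    for (int epoch = 0; epoch < 20; ++epoch) {
        // Repartition between epochs: reshuffle the index order so each
        // thread sees a different slice of the data over time.
        std::shuffle(order.begin(), order.end(), rng);

        std::vector<std::thread> workers;
        const size_t chunk = order.size() / num_threads;
        for (unsigned t = 0; t < num_threads; ++t) {
            size_t lo = t * chunk;
            size_t hi = (t + 1 == num_threads) ? order.size() : lo + chunk;
            std::vector<size_t> part(order.begin() + lo, order.begin() + hi);
            workers.emplace_back(run_epoch, std::cref(data), std::move(part),
                                 std::ref(w), 0.01);
        }
        for (auto& th : workers) th.join();
    }
    std::printf("learned weights: %.3f %.3f (target: 2, -1)\n", w[0], w[1]);
    return 0;
}
```

The per-epoch reshuffle here is only a placeholder for the paper's dynamic partitioning idea: the point is that which examples a thread owns changes over time, rather than each thread repeatedly minimizing on a fixed static shard, which is what hurts convergence in the statically partitioned, systems-optimized variant described above.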
