SGLB: Stochastic Gradient Langevin Boosting

01/20/2020
by Aleksei Ustimenko, et al.

In this paper, we introduce Stochastic Gradient Langevin Boosting (SGLB), a powerful and efficient machine learning framework that can handle a wide range of loss functions and has provable generalization guarantees. The method is based on a special form of the Langevin diffusion equation specifically designed for gradient boosting. This allows us to guarantee global convergence, whereas standard gradient boosting algorithms can guarantee only convergence to a local optimum, which is a problem for multimodal loss functions. To illustrate the advantages of SGLB, we apply it to a classification task with the 0-1 loss function, which is known to be multimodal, and to a standard logistic regression task, which is convex. The algorithm is implemented as part of the CatBoost gradient boosting library and outperforms classic gradient boosting methods.
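To make the core idea concrete, below is a minimal Python sketch of a Langevin-style boosting loop, not the paper's exact algorithm: each iteration fits a shallow regression tree to the negative gradient of the logistic loss perturbed by Gaussian noise (the Langevin term), and the ensemble is shrunk toward zero so the iterates behave like a discretized Langevin diffusion. The parameter names (lr, beta, shrinkage) and their values are illustrative assumptions, not the paper's notation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import make_classification

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sglb_fit(X, y, n_iters=200, lr=0.1, beta=1e4, shrinkage=1e-3, seed=0):
    """Sketch of Langevin-style boosting for binary logistic loss.

    Hypothetical parameters: lr is the step size, beta an inverse
    temperature controlling the noise scale, shrinkage a regularization
    pulling the ensemble toward zero.
    """
    rng = np.random.default_rng(seed)
    F = np.zeros(len(y))  # current ensemble predictions (logits)
    for _ in range(n_iters):
        # Negative gradient of the logistic loss at the current logits.
        grad = y - sigmoid(F)
        # Langevin term: isotropic Gaussian noise with temperature 1/beta.
        noise = rng.normal(scale=np.sqrt(2.0 * lr / beta), size=len(y))
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, grad + noise)
        # Shrink the whole ensemble toward zero, then take the step.
        F = (1.0 - lr * shrinkage) * F + lr * tree.predict(X)
    return F

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
F = sglb_fit(X, y)
print("train accuracy:", np.mean((F > 0) == y))
```

In CatBoost itself, SGLB-style training is reportedly exposed through a Langevin-related training option; consult the library's documentation for the exact interface in your version.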

