Debiasing Stochastic Gradient Descent to handle missing values

02/21/2020
by Aude Sportisse, et al.

A major caveat of large-scale data is their incompleteness. We propose an averaged stochastic gradient algorithm that handles missing values in linear models. This approach has the merit of requiring no modeling of the data distribution and of accounting for heterogeneous missing proportions. In both the streaming and finite-sample settings, we prove that this algorithm achieves a convergence rate of O(1/n) at iteration n, the same as without missing values. We demonstrate the convergence behavior and the relevance of the algorithm not only on synthetic data but also on real data sets, including data collected from a medical registry.
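To make the idea concrete, here is a minimal sketch of a debiased averaged SGD for least squares under MCAR missingness. All names, the step-size schedule, and the exact form of the debiasing correction are illustrative assumptions, not the authors' code: zero-imputed features are rescaled by known observation probabilities `p` to get an unbiased estimate of each sample, and the inflated diagonal of the resulting second-moment term is subtracted off so the stochastic gradient is unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = x^T beta + noise, features observed MCAR
# with known per-coordinate observation probabilities p (assumption).
d, n = 5, 20000
beta_true = rng.normal(size=d)
p = np.full(d, 0.7)

X = rng.normal(size=(n, d))
y = X @ beta_true + 0.1 * rng.normal(size=n)
mask = rng.random((n, d)) < p        # True where the entry is observed
X_tilde = np.where(mask, X, 0.0)     # zero-imputed design matrix

def debiased_averaged_sgd(X_tilde, y, p, step=0.05):
    """One pass of SGD with MCAR-debiased gradients and Polyak-Ruppert averaging."""
    d = X_tilde.shape[1]
    beta = np.zeros(d)
    beta_bar = np.zeros(d)
    for k in range(X_tilde.shape[0]):
        xt = X_tilde[k] / p                       # unbiased estimate of x_k
        grad = xt * (xt @ beta - y[k])            # naive plug-in gradient
        # Remove the bias on the diagonal of E[xt xt^T]:
        # E[(x_tilde_j / p_j)^2] = x_j^2 / p_j, not x_j^2.
        grad -= (1.0 - p) / p**2 * X_tilde[k] ** 2 * beta
        beta -= step / np.sqrt(k + 1) * grad
        beta_bar += (beta - beta_bar) / (k + 1)   # running average of iterates
    return beta_bar

beta_hat = debiased_averaged_sgd(X_tilde, y, p)
print(np.linalg.norm(beta_hat - beta_true))
```

On this synthetic MCAR example the averaged iterate lands close to the true coefficients, whereas running plain SGD on the zero-imputed data without the rescaling and diagonal correction would converge to a biased solution.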
