Learning from aggregated data with a maximum entropy model

10/05/2022
by Alexandre Gilotte et al.

Aggregating a dataset and then injecting noise is a simple and common way to release differentially private data. However, aggregated data, even without noise, is not a suitable input for machine learning classifiers. In this work, we show how a new model, similar to a logistic regression, may be learned from aggregated data alone by approximating the unobserved feature distribution with a maximum entropy hypothesis. The resulting model is a Markov Random Field (MRF), and we detail how to apply, modify, and scale an MRF training algorithm to our setting. Finally, we present empirical evidence on several public datasets that a model learned this way can achieve performance comparable to that of a logistic model trained on the full, unaggregated data.
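
A minimal sketch of the idea, under assumptions not taken from the paper: a toy setting with a few binary features, where the only inputs are aggregated co-occurrence statistics between each feature and the label (to which noise could be added for differential privacy). The maximum entropy distribution matching those moments is an exponential-family MRF, fitted below by moment matching with exact enumeration, which stands in for the scalable MRF training algorithm the paper describes. Function and variable names (fit_maxent_from_aggregates, predict_proba) are illustrative, not from the paper.

import itertools
import numpy as np

def fit_maxent_from_aggregates(m, a, b, n_iters=5000, lr=0.2):
    """Fit p(x, y) proportional to exp(sum_j theta_j x_j y + sum_j alpha_j x_j + beta y)
    so that its moments match the aggregated statistics
    m_j = E[x_j y], a_j = E[x_j], b = E[y] (the maximum entropy solution)."""
    d = len(m)
    theta, alpha, beta = np.zeros(d), np.zeros(d), 0.0
    # Exact inference by enumerating all 2^(d+1) configurations (toy-sized d only).
    configs = np.array(list(itertools.product([0, 1], repeat=d + 1)), dtype=float)
    X, Y = configs[:, :d], configs[:, d]
    for _ in range(n_iters):
        log_p = (X @ theta) * Y + X @ alpha + beta * Y
        p = np.exp(log_p - log_p.max())
        p /= p.sum()
        # Moments under the current model.
        m_model = (X * (Y * p)[:, None]).sum(axis=0)  # E[x_j y]
        a_model = (X * p[:, None]).sum(axis=0)        # E[x_j]
        b_model = (Y * p).sum()                       # E[y]
        # Dual gradient ascent: observed aggregated moments minus model moments.
        theta += lr * (m - m_model)
        alpha += lr * (a - a_model)
        beta += lr * (b - b_model)
    return theta, alpha, beta

def predict_proba(theta, beta, x):
    # Conditional of the fitted MRF: a logistic-style scorer, sigmoid(beta + theta . x).
    return 1.0 / (1.0 + np.exp(-(beta + x @ theta)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 6, 50_000
    w_true = rng.normal(size=d)
    X = rng.integers(0, 2, size=(n, d)).astype(float)
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ w_true - 0.5)))).astype(float)
    # Only aggregated statistics leave the data holder; noise could be injected here.
    m, a, b = (X * y[:, None]).mean(axis=0), X.mean(axis=0), y.mean()
    theta, alpha, beta = fit_maxent_from_aggregates(m, a, b)
    print("true weights:  ", np.round(w_true, 2))
    print("fitted weights:", np.round(theta, 2))  # approximately recovers the signal

The conditional p(y=1 | x) of the fitted MRF is sigmoid(beta + theta . x), which is why the resulting model behaves like a logistic regression even though it was trained from aggregates only.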
