MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery

10/22/2020
by   Xiaoxiao Li, et al.

To address the vulnerability of deep neural networks (DNNs) to model inversion attacks, we design an objective function that adjusts the separability of the hidden data representations, as a way to control the trade-off between data utility and vulnerability to inversion attacks. Our method is motivated by theoretical insights into data separability in neural network training and results on the hardness of model inversion. Empirically, by adjusting the separability of data representations, we show that there exist sweet spots for data separability such that it is difficult to recover data during inference while data utility is maintained.
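The abstract describes combining a task loss with a term that tunes the separability of hidden representations. The sketch below illustrates this idea with a simple, hypothetical separability proxy (inter-class centroid distance minus intra-class spread) and a weight `lam` acting as the trade-off knob; the paper's actual objective and separability measure may differ.

```python
import numpy as np

def separability(h, y):
    """A hypothetical separability proxy for hidden representations h with
    labels y: mean distance between class centroids minus mean within-class
    spread. Larger values mean more separable representations."""
    classes = np.unique(y)
    centroids = np.stack([h[y == c].mean(axis=0) for c in classes])
    inter = np.mean([np.linalg.norm(a - b)
                     for i, a in enumerate(centroids)
                     for b in centroids[i + 1:]])
    intra = np.mean([np.linalg.norm(h[y == c] - centroids[i], axis=1).mean()
                     for i, c in enumerate(classes)])
    return inter - intra

def combined_objective(task_loss, h, y, lam):
    """Sketch of a trade-off objective: the task loss plus lam times the
    separability of the hidden representations. Tuning lam moves along the
    utility-vs-recoverability trade-off discussed in the abstract."""
    return task_loss + lam * separability(h, y)
```

For example, two tight, well-separated clusters score higher on this proxy than two overlapping clouds, and setting `lam = 0` recovers the plain task loss.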
