Training reliable deep learning models which avoid making overconfident ...
Differentially private (stochastic) gradient descent is the workhorse of...
We design, to the best of our knowledge, the first differentially privat...
ML models are ubiquitous in real world applications and are a constant f...
We consider the problem of minimizing a non-convex objective while prese...
In the privacy-utility tradeoff of a model trained on benchmark language...
Leveraging transfer learning has recently been shown to be an effective ...
We introduce new differentially private (DP) mechanisms for gradient-bas...
Models need to be trained with privacy-preserving learning algorithms to...
All state-of-the-art (SOTA) differentially private machine learning (DP ...
We study the problem of differentially private linear regression where e...
Differential Privacy (DP) provides a formal framework for training machi...
In this paper we revisit the problem of differentially private empirical...
In this paper we revisit the problem of private empirical risk minimizat...
Differentially Private Stochastic Gradient Descent (DP-SGD) forms a fund...
Poisoning attacks have emerged as a significant security threat to machi...
Differential privacy is an information theoretic constraint on algorithm...
We study differentially private (DP) algorithms for stochastic convex op...
Sensitive statistics are often collected across sets of users, with repe...
Many commonly used learning algorithms work by iteratively updating an i...
We design differentially private learning algorithms that are agnostic t...
We study the problem of privacy-preserving collaborative filtering where...
Training deep belief networks (DBNs) requires optimizing a non-convex fu...
Empirical Risk Minimization (ERM) is a standard technique in machine lea...
In this paper, we initiate a systematic investigation of differentially ...
In this paper, we consider the problem of preserving privacy in the onli...
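The algorithm that recurs throughout the abstracts above is differentially private stochastic gradient descent (DP-SGD). Purely as a hedged illustration of that common thread, and not as the method of any listed paper, the minimal NumPy sketch below shows the standard per-example clipping and Gaussian-noise update; the function name dp_sgd_step, the logistic-regression loss, and all hyperparameter values are assumptions chosen for this example.

import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # One DP-SGD update for logistic regression on a minibatch (X, y).
    # Illustrative sketch only; names and constants are assumptions.
    rng = np.random.default_rng() if rng is None else rng
    # Per-example gradients of the logistic loss, shape (batch, dim).
    probs = 1.0 / (1.0 + np.exp(-(X @ w)))
    per_example_grads = (probs - y)[:, None] * X
    # Clip each example's gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    per_example_grads *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise calibrated to the clipping norm, then average.
    noisy_sum = per_example_grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)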