Identification and Formal Privacy Guarantees
Empirical economic research relies crucially on highly sensitive individual-level datasets. At the same time, the growing availability of public individual-level data makes it possible for adversaries to de-identify anonymized records in sensitive research datasets. This increasing disclosure risk has incentivized large data curators, most notably the US Census Bureau and several large companies including Apple, Facebook, and Microsoft, to look for algorithmic solutions that provide formal non-disclosure guarantees for the data they hold. The most widely accepted formal data security concept in the computer science community is differential privacy. It restricts researchers' interaction with the data to the issuing of queries, and a differentially private mechanism then replaces the true outcome of each query with a randomized one. While differential privacy does provide formal data security guarantees, its impact on the identification of empirical economic models and on the performance of estimators in those models has not been sufficiently studied. Since privacy protection mechanisms are inherently finite-sample procedures, we define identifiability of the parameter of interest as a property of the limit of experiments, linked to the asymptotic behavior in measure of differentially private estimators. We demonstrate that particular instances of regression discontinuity design and average treatment effect estimation may be problematic for inference under differential privacy, because their estimators can only be guaranteed to converge weakly, with the asymptotic limit remaining random; the parameters thus may not be estimated consistently. Our simulation evidence clearly supports this result. Our analysis suggests that many other estimators that rely on nuisance parameters may behave similarly under the requirement of differential privacy.
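The abstract describes the query-response model of differential privacy only at a high level. The sketch below illustrates the idea with the canonical Laplace mechanism applied to a bounded mean query; this is an illustrative assumption, not necessarily the mechanism analyzed in the paper, and the bounds, sample size, and privacy budget are hypothetical.

```python
import numpy as np

def laplace_mechanism(query_result, sensitivity, epsilon, rng=None):
    """Return an epsilon-differentially private version of a scalar query result.

    Adds Laplace noise with scale sensitivity / epsilon, which satisfies
    epsilon-differential privacy for a query with the given L1 sensitivity.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return query_result + rng.laplace(loc=0.0, scale=scale)

# Hypothetical example: a mean query over incomes bounded in [0, 100_000].
rng = np.random.default_rng(0)
incomes = rng.uniform(0, 100_000, size=1_000)
true_mean = incomes.mean()

# For a mean of n values bounded in [0, B], the L1 sensitivity is B / n.
sensitivity = 100_000 / len(incomes)

# The researcher only ever sees the randomized outcome, not the true mean.
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=1.0, rng=rng)
print(true_mean, private_mean)
```

The key design point is that the noise scale shrinks with the sample size only through the query's sensitivity; for more complex estimands, such as regression discontinuity or average treatment effects, the injected randomness need not vanish relative to the estimator's own sampling variation, which is the source of the inconsistency the abstract describes.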