ER-AE: Differentially-private Text Generation for Authorship Anonymization
Most privacy protection studies for textual data focus on removing explicit sensitive identifiers. However, personal writing style, a strong indicator of authorship, is often neglected. Recent studies on writing style anonymization can only output numeric vectors, which are difficult for recipients to interpret. We propose a novel text generation model for authorship anonymization. Combined with a semantic embedding reward loss function and the exponential mechanism, our proposed auto-encoder can generate differentially-private sentences that preserve the semantics and grammatical structure of the original text while removing personal traits of the writing style. It does not require any conditioned labels or parallel text data during training. We evaluate the performance of the proposed model on a real-life peer review dataset and the Yelp review dataset. The results suggest that our model outperforms the state-of-the-art on semantic preservation, authorship obfuscation, and stylometric transformation.
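The abstract mentions using the exponential mechanism to select output words under differential privacy. As a minimal sketch of that general idea (not the paper's exact method), the snippet below samples a candidate token with probability proportional to exp(ε·u / 2Δ), where the utility u is a hypothetical embedding-similarity score; the function name, scores, and budget are illustrative assumptions, not values from the paper.

```python
import numpy as np

def exponential_mechanism_sample(candidate_scores, epsilon, sensitivity=1.0, rng=None):
    """Sample one candidate index via the exponential mechanism.

    candidate_scores: utility score for each candidate token (e.g. a
        semantic-similarity score to the intended word); higher is better.
    epsilon: privacy budget spent on this single selection.
    sensitivity: maximum change in any single score when one record changes.
    """
    rng = rng or np.random.default_rng()
    scores = np.asarray(candidate_scores, dtype=float)
    # Pr[i] proportional to exp(epsilon * u_i / (2 * sensitivity));
    # subtracting the max score keeps the exponentials numerically stable.
    logits = epsilon * (scores - scores.max()) / (2.0 * sensitivity)
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Example: choose among three candidate tokens with hypothetical scores.
idx = exponential_mechanism_sample([0.91, 0.85, 0.40], epsilon=2.0)
```

Higher-utility candidates are favored, but every candidate retains nonzero probability, which is what bounds the privacy loss of each word choice.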