X-Secure T-Private Federated Submodel Learning

10/02/2020
by Zhuqing Jia, et al.

The problem of (information-theoretic) X-secure T-private federated submodel learning represents a setting where a large-scale machine learning model is partitioned into K submodels and stored across N distributed servers according to an X-secure threshold secret sharing scheme. Various users wish to successively train (update) the submodel that is most relevant to their local data while keeping the identity of their relevant submodel private from any set of up to T colluding servers. Inspired by the idea of cross-subspace alignment (CSA) for X-secure T-private information retrieval, we propose a novel CSA-RW (read-write) scheme for efficiently (in terms of communication cost) and privately reading from and writing to a distributed database. CSA-RW is shown to be asymptotically/approximately optimal in download/upload communication cost, and improves significantly upon available baselines from prior work. It also answers in the affirmative an open question by Kairouz et al. by exploiting synergistic gains from the joint design of private read and write operations.
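To make the storage setting concrete, below is a minimal sketch of polynomial-based X-secure threshold secret sharing of a single submodel across N servers, so that any X colluding servers learn nothing about its contents. This is an illustration of the general secret-sharing primitive only, not the paper's CSA-RW scheme; the field size P, parameters X and N, and the helper names share_submodel and reconstruct are all assumptions chosen for the example.

```python
import random

# Toy prime field GF(P); P is an illustrative choice, not from the paper.
P = 2_147_483_647


def share_submodel(submodel, alphas, X):
    """Share a submodel (list of field elements) across N = len(alphas) servers.

    Server n stores W + Z_1*alpha_n + ... + Z_X*alpha_n^X element-wise,
    where the Z_i are uniformly random, so any X colluding servers see
    only uniformly random values (X-security).
    """
    noise = [[random.randrange(P) for _ in submodel] for _ in range(X)]
    shares = []
    for a in alphas:
        share = [
            (w + sum(z[j] * pow(a, i + 1, P) for i, z in enumerate(noise))) % P
            for j, w in enumerate(submodel)
        ]
        shares.append(share)
    return shares


def reconstruct(shares, alphas):
    """Recover the submodel from any X+1 shares via Lagrange interpolation at 0."""
    secret = [0] * len(shares[0])
    for k, a_k in enumerate(alphas):
        # Lagrange coefficient for evaluation point a_k at x = 0.
        num, den = 1, 1
        for m, a_m in enumerate(alphas):
            if m != k:
                num = (num * (-a_m)) % P
                den = (den * (a_k - a_m)) % P
        coeff = num * pow(den, P - 2, P) % P
        for j, s in enumerate(shares[k]):
            secret[j] = (secret[j] + coeff * s) % P
    return secret


if __name__ == "__main__":
    X, N = 2, 5                      # security threshold and number of servers
    alphas = list(range(1, N + 1))   # distinct nonzero evaluation points
    W = [7, 42, 1000]                # a toy "submodel" of field elements
    shares = share_submodel(W, alphas, X)
    # Any X + 1 = 3 servers suffice to reconstruct; here servers 1, 3, 5.
    subset = [0, 2, 4]
    recovered = reconstruct([shares[i] for i in subset],
                            [alphas[i] for i in subset])
    assert recovered == W
```

In the paper's setting, this kind of sharing is applied to every submodel, and the CSA-RW scheme additionally hides which submodel a user reads and updates from any T colluding servers; that private read/write machinery is beyond this sketch.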
