Mechanism Design for Multi-Party Machine Learning
In a multi-party machine learning system, different parties cooperate to train better models by sharing data in a privacy-preserving way. A major challenge in such learning is the incentive issue: for example, if the parties compete with one another, a party may strategically withhold its data to prevent the others from obtaining better models. In this paper, we study this problem through the lens of mechanism design. Compared with the standard mechanism design setting, ours differs in two fundamental ways. First, each agent's valuation has externalities that depend on the other agents' true types; we call this setting mechanism design with type-imposed externalities. Second, each agent can misreport only a lower type, never a higher one. We show that some results from the standard setting (e.g., the truthfulness of the VCG mechanism) fail to hold here. We give the optimal truthful mechanism in the quasi-monotone utility setting, and we provide necessary and sufficient conditions for truthful mechanisms in the most general case. Finally, we show that the existence of such mechanisms is highly affected by the market growth rate, and we give an empirical analysis.
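As a brief sketch of the setting (the notation here is illustrative, not taken from the paper): in standard mechanism design, agent $i$'s quasi-linear utility depends on the outcome and its own type only, whereas under type-imposed externalities the valuation also depends on the other agents' true types, and reports are one-sided:

$$u_i(o, \theta) = v_i\bigl(o, \theta_i, \theta_{-i}\bigr) - p_i, \qquad \hat{\theta}_i \le \theta_i,$$

where $o$ is the chosen outcome, $\theta_i$ is agent $i$'s true type, $\theta_{-i}$ denotes the other agents' true types, $p_i$ is agent $i$'s payment, and $\hat{\theta}_i$ is the reported type. These two modifications are exactly what the paper identifies as the source of the failure of standard results such as VCG truthfulness.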