To Save Crowdsourcing from Cheap-Talk: Strategic Learning from Biased Users
Today many users are invited by crowdsourcing platforms (e.g., TripAdvisor and Waze) to provide anonymous reviews about their service experiences (e.g., in hotels, restaurants, and trips), yet many reviews are found to be biased toward extreme positivity or negativity. It is difficult to infer the actual service state from biased users' reviews, as the service state itself can be extreme and the platform cannot verify it immediately. Further, due to anonymity, reviewers can hide their bias types (positive or negative) from the platform and adaptively adjust their reviews against the platform's inference. To the best of our knowledge, we are the first to study how to save crowdsourcing from cheap-talk and strategically learn the actual service state from biased users' reviews. We formulate the problem as a dynamic Bayesian game, comprising the unknown users' messaging and the platform's follow-up inference. Through an involved analysis, we provide closed-form expressions for the Perfect Bayesian Equilibrium (PBE). Our PBE shows that the platform's strategic learning can successfully prevent biased users from cheap-talk in most cases, where even a user with extreme bias still messages honestly to convince the platform to listen to his review. We prove that the price of anarchy (PoA) is 2, meaning that the social cost is at most doubled in the worst case. As the number of users becomes large, our platform always learns the actual service state. Perhaps surprisingly, the platform's expected cost may be worse off after adding one more user.