Seminar: NS seminar

Vijay Kamble
- Ph.D. from the University of California, Berkeley

Date: May 5, 2015

Truthful output agreement mechanisms for evaluating a large number of similar objects.
A majority of tasks on crowdsourcing platforms such as Amazon Mechanical Turk involve asking a worker multiple questions, often in multiple-choice format, e.g., labeling tasks for machine learning applications. Peer-grading in massive open online courses (MOOCs) involves a similar procedure. It is often the case that the principal who posts the questions does not have access to the true answer to any of them. We consider the problem of incentivizing the workers or the graders to report their answers truthfully in such a setting. This class of problems, pioneered by the peer-prediction method and the Bayesian truth serum, is now quite well studied in the literature.

In this work we propose new mechanisms that, unlike most prior work on this topic, require no extraneous elicitation from the workers and, furthermore, allow the agents' beliefs to be (almost) arbitrary. Moreover, these mechanisms have the structure of output agreement mechanisms, which are simple, intuitive, and quite popular in practice. They operate in regimes where the number of questions is large, making them suitable for most tasks in crowdsourcing and peer-grading.
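To make the "output agreement" structure concrete, here is a minimal toy sketch (not the talk's actual mechanism, and all names are hypothetical): each worker answers several questions, and for each question a worker is paid a fixed reward whenever their answer matches that of a randomly selected peer who answered the same question.

```python
import random

def output_agreement_payments(answers, reward=1.0, rng=None):
    """Toy output agreement mechanism (illustrative only).

    `answers` maps worker -> {question: answer}. For each question a
    worker answered, pay `reward` if their answer matches that of a
    randomly chosen peer who answered the same question.
    """
    rng = rng or random.Random(0)
    payments = {w: 0.0 for w in answers}
    for worker, responses in answers.items():
        for question, answer in responses.items():
            # Peers who also answered this question.
            peers = [w for w in answers
                     if w != worker and question in answers[w]]
            if not peers:
                continue  # No peer to compare against: no payment.
            peer = rng.choice(peers)
            if answers[peer][question] == answer:
                payments[worker] += reward
    return payments
```

In this simple form the mechanism rewards agreement rather than truthfulness per se; the point of the work described above is to design output-agreement-style payments that make truthful reporting an equilibrium when the number of questions is large.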