Peer prediction aims to incentivize truthful reports from agents whose reports cannot be assessed against any objective ground truth. In the multi-task setting, where each agent is asked multiple questions, a sequence of mechanisms has been proposed that are truthful (truth-telling is guaranteed to be an equilibrium) or, even better, informed truthful (truth-telling is guaranteed to be one of the best-paid equilibria). However, these guarantees assume agents' strategies are restricted to be task-independent: an agent's report on a task is not affected by her information about other tasks.
We provide the first study of how to design (informed) truthful mechanisms for task-dependent strategies, which allow an agent to report based on all of her information about the assigned tasks. We call such stronger mechanisms (informed) omni-truthful. In particular, we propose the joint-disjoint task framework, a new paradigm that builds on the earlier penalty-bonus task framework. First, we show a natural reduction from mechanisms in the penalty-bonus task framework to mechanisms in the joint-disjoint task framework that maps every truthful mechanism to an omni-truthful mechanism. Such a reduction is non-trivial: we show that existing penalty-bonus task mechanisms are not, in general, omni-truthful. Second, for a stronger guarantee, we design the matching agreement (MA) mechanism, which is informed omni-truthful. Finally, for the MA mechanism in the detail-free setting, where no prior knowledge is assumed, we show how many tasks are required to (approximately) retain these guarantees.
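Neither framework is defined in the abstract, but a minimal sketch of a classic penalty-bonus instantiation, in the spirit of the agreement-based scoring used by multi-task mechanisms such as correlated agreement, may help fix ideas. Everything here (the function and variable names, the 0/1 scoring) is illustrative, not the paper's construction.

```python
def penalty_bonus_payment(reports_i, reports_j, bonus_task,
                          penalty_task_i, penalty_task_j):
    """Illustrative agreement-based penalty-bonus score for agents i and j.

    reports_i, reports_j: dicts mapping task id -> report.
    bonus_task: a task both agents were assigned.
    penalty_task_i, penalty_task_j: two distinct tasks, one per agent.
    """
    # Bonus: reward agreement on a shared task.
    bonus = 1.0 if reports_i[bonus_task] == reports_j[bonus_task] else 0.0
    # Penalty: punish "agreement" across unrelated tasks, which deters
    # blind strategies such as always reporting the most popular answer.
    penalty = 1.0 if reports_i[penalty_task_i] == reports_j[penalty_task_j] else 0.0
    return bonus - penalty
```

The truthfulness analysis of such mechanisms treats each task in isolation; the paper's point is that agents who condition their reports on all of their assigned tasks jointly break this analysis, which is exactly what current penalty-bonus mechanisms fail to handle and what omni-truthfulness is designed to rule out.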
Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach
Peer prediction mechanisms incentivize agents to truthfully report their signals, even in the absence of verification, by comparing agents' reports with those of their peers. In the detail-free multi-task setting, agents are asked to respond to multiple independent and identically distributed tasks, and the mechanism does not know the prior distribution of agents' signals. The goal is to provide an ε-strongly truthful mechanism, where truth-telling rewards agents strictly more than any other strategy profile (up to an additive ε error), even for heterogeneous agents, and to do so while requiring as few tasks as possible.
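The abstract states this guarantee only informally; one way to write it down (notation ours: τ is the truth-telling strategy profile and u_i(σ) is agent i's expected payment under profile σ) is

```latex
u_i(\tau) \;\ge\; u_i(\sigma) - \epsilon
\qquad \text{for every agent } i \text{ and every strategy profile } \sigma,
```

with the comparison strict for any σ that is not equivalent to truth-telling (for strongly truthful mechanisms, relabelings such as permutation strategies are the usual exception).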
We design a family of mechanisms with a scoring function that maps a pair of reports to a score. The mechanism is strongly truthful if the scoring function is "prior ideal". Moreover, the mechanism is ε-strongly truthful as long as the scoring function used is sufficiently close to the ideal scoring function. This reduces the above mechanism design problem to a learning problem: specifically, learning an ideal scoring function. Because learning the prior distribution is sufficient (but not necessary) to learn the scoring function, we can apply standard learning-theory techniques that leverage side information about the prior (e.g., that it is close to some parametric model). Furthermore, we derive a variational representation of an ideal scoring function and reduce the learning problem to an empirical risk minimization.
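The abstract does not reproduce the representation itself, but the standard variational (Fenchel-dual) form of an f-divergence, due to Nguyen, Wainwright, and Jordan, matches the recipe described: take P to be the joint distribution of a pair of agents' signals and Q the product of its marginals, so that D_f(P‖Q) is an f-mutual information, and

```latex
D_f(P \,\|\, Q) \;=\; \sup_{S}\;
  \mathbb{E}_{(x,y)\sim P}\!\left[ S(x,y) \right]
  \;-\; \mathbb{E}_{(x,y)\sim Q}\!\left[ f^{*}\!\big( S(x,y) \big) \right],
```

where f* is the convex conjugate of f. Restricting S to a hypothesis class and replacing the expectations with sample averages over observed report pairs turns the supremum into exactly the empirical risk minimization mentioned above, with the maximizing S playing the role of the ideal scoring function.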
We leverage this reduction to obtain very general results for peer prediction in the multi-task setting. Specifically,
Sample Complexity. We show how to derive good bounds on the number of tasks required for different types of priors, in some cases exponentially improving on previous results. In particular, we can upper bound the required number of tasks for parametric models with bounded learning complexity. Furthermore, our reduction applies to myriad continuous signal space settings. To the best of our knowledge, this is the first peer-prediction mechanism for continuous signals designed for the multi-task setting.
Connection to Machine Learning. We show how to turn a soft predictor of an agent's signals (given the other agents' signals) into a mechanism; a minimal sketch follows this list. This allows the practical use of machine learning algorithms that give good results even when many agents provide noisy information.
Stronger Properties. In the finite setting, we obtain ε-strongly truthful mechanisms for any stochastically relevant prior. Prior works either apply only to more restrictive settings, or achieve a weaker notion of truthfulness (informed truthfulness).
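As promised above, here is a minimal sketch of one natural way to turn a learned soft predictor into a multi-task payment: score the agent's report with the logarithmic scoring rule applied to the predictor's output, and subtract the same quantity computed with peers' reports from a different task as a baseline. The names (`predict`, `peer_reports`, `other_task_peer_reports`) are hypothetical, and this illustrates the idea rather than the paper's exact construction.

```python
import math

def soft_predictor_payment(predict, report_i, peer_reports,
                           other_task_peer_reports):
    """Pay agent i using a learned soft predictor p_hat(signal | peers).

    predict: maps a collection of peer reports to a dict from each
             possible signal to its predicted probability.
    report_i: agent i's report on the bonus task.
    peer_reports: the peers' reports on that same bonus task.
    other_task_peer_reports: the peers' reports on a different task,
             used as a baseline so uninformative strategies net ~0.
    """
    # Log scoring rule: reward the log-likelihood of i's report under
    # the predictor fed the peers' reports on the same task ...
    reward = math.log(predict(peer_reports)[report_i])
    # ... minus the same score against peers' reports on another task.
    baseline = math.log(predict(other_task_peer_reports)[report_i])
    return reward - baseline
```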
- Award ID(s): 2007256
- NSF-PAR ID: 10316786
- Journal Name: 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)
- Sponsoring Org: National Science Foundation
More Like this
Peer prediction mechanisms incentivize self-interested agents to truthfully report their signals, even in the absence of verification, by comparing agents' reports with those of their peers. We propose two new mechanisms, Source and Target Differential Peer Prediction, and prove very strong guarantees in a very general setting. Our Differential Peer Prediction mechanisms are strongly truthful: truth-telling is a strict Bayesian Nash equilibrium, and it pays strictly more than any other equilibrium, excluding permutation equilibria, which pay the same amount as truth-telling. The guarantees hold for asymmetric priors among agents, which the mechanisms need not know (detail-free), in the single-question setting. Moreover, they require only three agents, each of whom submits a single-item report: two report their signals (answers), and the third reports her forecast (a prediction of one of the other agents' reports). Our proof technique is straightforward, conceptually motivated, and turns on the logarithmic scoring rule's special properties. Moreover, we can recast the Bayesian Truth Serum mechanism into our framework. We can also extend our results to the setting of continuous signals, with a slightly weaker guarantee on the optimality of the truthful equilibrium.
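A minimal sketch of the differential-payment idea behind these guarantees, under hypothetical names (the paper's actual mechanisms assign the source, target, and forecaster roles more carefully):

```python
import math

def log_score(forecast, outcome):
    """Logarithmic scoring rule: the forecaster's reward for assigning
    probability forecast[outcome] to the realized outcome."""
    return math.log(forecast[outcome])

def differential_payment(prior_forecast, posterior_forecast, target_report):
    """Pay the forecaster the *improvement* in her log score for the
    target agent's report after she sees the source agent's signal.
    Properness of the log score makes truthful updating optimal."""
    return (log_score(posterior_forecast, target_report)
            - log_score(prior_forecast, target_report))
```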
We initiate the study of information elicitation mechanisms for a crowd containing both self-interested agents, who respond to incentives, and adversarial agents, who may collude to disrupt the system. Our mechanisms work in the peer prediction setting, where ground truth need not be accessible to the mechanism or even exist. We provide a meta-mechanism that reduces the design of peer prediction mechanisms to a related robust learning problem. The resulting mechanisms are ε-informed truthful, which means truth-telling is the highest-paid ε-Bayesian Nash equilibrium (up to ε error) and pays strictly more than uninformative equilibria. The value of ε depends on the properties of the robust learning algorithm and typically tends to 0 as the number of tasks and agents increases. We show how to use our meta-mechanism to design mechanisms with provable guarantees in two important crowdsourcing settings, even when some agents are self-interested and others are adversarial.
We consider the crowdsourcing setting where, in response to the assigned tasks, agents strategically decide both how much effort to exert (from a continuum) and whether to manipulate their reports. The goal is to design payment mechanisms that (1) satisfy limited liability (all payments are non-negative), (2) reduce the principal's cost of budget, (3) incentivize effort, and (4) incentivize truthful responses. In our framework, the payment mechanism composes a performance measurement, which noisily evaluates agents' effort based on their reports, with a payment function, which converts the scores output by the performance measurement into payments. Previous literature suggests applying a peer prediction mechanism combined with a linear payment function. This method can achieve either (1), (3), and (4), or (2), (3), and (4) in the binary-effort setting. In this paper, we suggest using a rank-order payment function (a tournament). Assuming Gaussian noise, we analytically optimize the rank-order payment function and identify a sufficient statistic, sensitivity, which serves as a metric for optimizing the performance measurements. This helps us obtain (1), (2), and (3) simultaneously. Additionally, we show that adding noise to agents' scores can preserve the truthfulness of the performance measurements under the non-linear tournament, which gives us all four objectives. Our agent-based model experiments, estimated from real data, show that our method can greatly reduce the payments required for effort elicitation while preserving the truthfulness of the performance measurement. In addition, we empirically evaluate several commonly used performance measurements in terms of their sensitivities and strategic robustness.
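A minimal sketch of a rank-order payment function, with illustrative names; the prize schedule here is an assumption, not the paper's optimized one:

```python
def rank_order_payments(scores, prizes):
    """Tournament-style payments: agents are ranked by their
    performance-measurement scores and paid by rank alone.

    scores: dict mapping agent -> performance score.
    prizes: non-increasing list of non-negative prizes (prizes[0] for
            the top rank), so limited liability holds by construction.
    """
    ranking = sorted(scores, key=scores.get, reverse=True)
    return {agent: prizes[rank] for rank, agent in enumerate(ranking)}
```

The abstract's further step, adding independent noise to the scores before ranking, can be layered on top of this without changing the payment function itself.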
Peer prediction refers to a collection of mechanisms for eliciting information from human agents when direct verification of the obtained information is unavailable. They are designed to have a game-theoretic equilibrium where everyone reveals their private information truthfully. This result holds under the assumption that agents are Bayesian and that each adopts a fixed strategy across all tasks. Human agents, however, are observed in many domains to exhibit learning behavior in sequential settings. In this paper, we explore the dynamics of sequential peer prediction mechanisms when participants are learning agents. We first show that the no-regret property alone for the agents' learning algorithms cannot guarantee convergence to the truthful strategy. We then focus on a family of learning algorithms in which strategy updates depend only on agents' cumulative rewards, and we prove that agents' strategies in the popular Correlated Agreement (CA) mechanism converge to truthful reporting when they use algorithms from this family. This family of algorithms is not necessarily no-regret, but it includes several familiar no-regret learning algorithms (e.g., multiplicative weight update and Follow the Perturbed Leader) as special cases. Simulations of several algorithms in this family, as well as the ε-greedy algorithm (which is outside this family), show convergence to the truthful strategy in the CA mechanism.
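For concreteness, a minimal sketch of one member of that family: an exponential-weights (multiplicative weight update) rule in which the mixed reporting strategy depends only on each pure strategy's cumulative reward. The names and learning rate are illustrative.

```python
import math

def exp_weights_strategy(cumulative_rewards, eta=0.1):
    """Play each pure reporting strategy with probability proportional
    to exp(eta * its cumulative reward) -- an update that depends only
    on cumulative rewards, as required by the family discussed above.

    cumulative_rewards: dict mapping pure strategy -> cumulative reward.
    """
    weights = {s: math.exp(eta * r) for s, r in cumulative_rewards.items()}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}
```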