

This content will become publicly available on May 19, 2026

Title: Maximizing Truth Learning in a Social Network is NP-Hard
Sequential learning models situations where agents predict a ground truth in sequence, using their private, noisy measurements and the predictions of agents who came earlier in the sequence. We study sequential learning in a social network, where agents observe only the actions of the previous agents in their own neighborhood. The fraction of agents who predict the ground truth correctly depends heavily on both the network topology and the order in which the predictions are made. A natural question is, given a network, to find an ordering that maximizes the expected number of agents who predict the ground truth correctly. In this paper, we show that answering this question is in fact NP-hard for a general network, under both the Bayesian learning model and a simple majority-rule model. Finally, we show that even approximating the answer is hard.
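As a concrete illustration of the simple majority-rule model, the sketch below simulates sequential decisions on a small network and estimates the expected fraction of correct agents for a given ordering. It is a minimal sketch under assumed parameters: the star graph, the signal error rate, and the tie-breaking rule are illustrative choices, not the paper's construction.

```python
import random

def simulate_majority(neighbors, order, error=0.3, truth=1, trials=2000):
    """Estimate the expected fraction of agents whose action matches the
    ground truth under a simple majority rule, for one decision ordering.

    neighbors: dict mapping agent -> set of neighboring agents
    order:     list of all agents in the order they decide
    error:     probability that a private signal is wrong (assumed constant)
    """
    n = len(order)
    total = 0
    for _ in range(trials):
        action = {}
        for agent in order:
            # Private signal: correct with probability 1 - error.
            signal = truth if random.random() > error else 1 - truth
            # Actions of neighbors who have already decided.
            seen = [action[b] for b in neighbors[agent] if b in action]
            votes = seen + [signal]
            ones = sum(votes)
            # Majority rule; the private signal breaks ties.
            if ones * 2 > len(votes):
                action[agent] = 1
            elif ones * 2 < len(votes):
                action[agent] = 0
            else:
                action[agent] = signal
        total += sum(1 for a in order if action[a] == truth)
    return total / (trials * n)

# Ordering matters even on a star: a center that decides last aggregates
# all the leaves' guesses; a center that decides first gains nothing.
star = {0: set(range(1, 8)), **{i: {0} for i in range(1, 8)}}
print(simulate_majority(star, [1, 2, 3, 4, 5, 6, 7, 0]))  # center decides last
print(simulate_majority(star, [0, 1, 2, 3, 4, 5, 6, 7]))  # center decides first
```

Finding the ordering that maximizes this quantity over all permutations is exactly the optimization the paper proves NP-hard, and hard even to approximate.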
Award ID(s):
2229876
PAR ID:
10594761
Author(s) / Creator(s):
; ;
Publisher / Repository:
Proceedings of the 24th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS)
Date Published:
Format(s):
Medium: X
Location:
Detroit, MI
Sponsoring Org:
National Science Foundation
More Like this
  1. Consider a network of agents who all want to guess the correct value of some ground truth state. In a sequential order, each agent makes its decision using a single private signal, which has a constant probability of error, together with the observed actions of its network neighbors earlier in the order. We are interested in enabling network-wide asymptotic truth learning: in a network of n agents, almost all agents make a correct prediction with probability approaching one as n goes to infinity. In this paper we study both random orderings and carefully crafted decision orders with respect to the graph topology, as well as sufficient or necessary conditions for a graph to support such a good ordering. We first show that on a sparse graph of constant average degree with a random ordering, asymptotic truth learning does not happen. We then show a rather modest sufficient condition that enables asymptotic truth learning. With the help of this condition we characterize graphs generated from the Erdős–Rényi model and the preferential attachment model. In an Erdős–Rényi graph, unless the graph is super sparse (with O(n) edges) or super dense (nearly a complete graph), there exists a decision ordering that supports asymptotic truth learning. Similarly, any preferential attachment network with a constant number of edges per node can achieve asymptotic truth learning under a carefully designed ordering, but under neither a random ordering nor the arrival order. We also evaluate a variant of the decision ordering on different network topologies and demonstrate clear improvements in truth learning over random orderings. (A toy comparison of random and crafted orderings appears after this list.)
  2. Strictly proper scoring rules (SPSR) are incentive compatible for eliciting information about random variables from strategic agents when the principal can reward agents after the realization of the random variables. They also quantify the quality of elicited information, with more accurate predictions receiving higher scores in expectation. In this paper, we extend such scoring rules to settings where a principal elicits private probabilistic beliefs but only has access to the agents' reports. We name our solution Surrogate Scoring Rules (SSR). SSR are built on a bias correction step and an error rate estimation procedure for a reference answer defined using agents' reports. We show that, with a little information about the prior distribution of the random variables, SSR in a multi-task setting recover SPSR in expectation, as if the ground truth were accessible. A salient feature of SSR is therefore that they quantify the quality of information despite the lack of ground truth, just as SPSR do in the setting with ground truth. As a by-product, SSR induce dominant uniform strategy truthfulness in reporting. Our method is verified both theoretically and empirically using data collected from real human forecasters. (A minimal example of a strictly proper scoring rule appears after this list.)
  3. Merlo, Paola; Tiedemann, Jörg; Tsarfaty, Reut (Eds.)
    GQA (CITATION) is a dataset for real-world visual reasoning and compositional question answering. We found that many answers predicted by the best vision-language models on the GQA dataset do not match the ground-truth answer but are still semantically meaningful and correct in the given context. In fact, this is the case with most existing visual question answering (VQA) datasets, which assume only one ground-truth answer for each question. To address this limitation, we propose Alternative Answer Sets (AAS) of ground-truth answers, created automatically using off-the-shelf NLP tools. We introduce a semantic metric based on AAS and modify top VQA solvers to support multiple plausible answers for a question. We implement this approach on the GQA dataset and show the performance improvements. (A toy version of the AAS metric appears after this list.)
  4. We initiate the study of information elicitation mechanisms for a crowd containing both self-interested agents, who respond to incentives, and adversarial agents, who may collude to disrupt the system. Our mechanisms work in the peer prediction setting, where ground truth need not be accessible to the mechanism or even exist. We provide a meta-mechanism that reduces the design of peer prediction mechanisms to a related robust learning problem. The resulting mechanisms are ϵ-informed truthful, which means truth-telling is the highest-paid ϵ-Bayesian Nash equilibrium (up to ϵ-error) and pays strictly more than uninformative equilibria. The value of ϵ depends on the properties of the robust learning algorithm, and typically tends to 0 as the number of tasks and agents increases. We show how to use our meta-mechanism to design mechanisms with provable guarantees in two important crowdsourcing settings, even when some agents are self-interested and others are adversarial. (A simplified agreement-based peer prediction payment appears after this list.)
  5. Abstract: We present an approach to analyse learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel “prediction accuracy” order over subjective models and observe that this makes it possible to partially restore the standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions that determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e. from some or all initial beliefs). We show that these conditions can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to analyse environments where learning is “slow”, such as costly information acquisition and sequential social learning. In such environments, we illustrate that even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can generate extreme failures of learning. (The classical misspecified-learning benchmark is sketched after this list.)
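For the first related work above, here is a toy comparison of a random ordering against a crafted one on an Erdős–Rényi graph, reusing simulate_majority from the sketch after the main abstract. The low-degree-first heuristic is an illustrative assumption, not the ordering constructed in the paper.

```python
import random

def er_graph(n, p, seed=0):
    """Sample an Erdős–Rényi graph G(n, p) as an adjacency dict."""
    rng = random.Random(seed)
    nbrs = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                nbrs[u].add(v)
                nbrs[v].add(u)
    return nbrs

n = 200
g = er_graph(n, 0.05)
random_order = random.sample(range(n), n)
# Illustrative heuristic: low-degree agents decide first, so that
# well-connected agents aggregate many earlier observations.
crafted_order = sorted(range(n), key=lambda v: len(g[v]))
# simulate_majority is defined in the sketch after the main abstract.
print("random ordering :", simulate_majority(g, random_order))
print("crafted ordering:", simulate_majority(g, crafted_order))
```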
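For the second related work, the standard quadratic (Brier) scoring rule makes strict properness concrete: the expected score is uniquely maximized by reporting one's true belief. This is textbook background, not the paper's surrogate construction, which must work without observing the realized outcome.

```python
def quadratic_score(report, outcome):
    """Quadratic (Brier-style) strictly proper scoring rule for a binary event.
    report: reported probability that outcome == 1."""
    return 1 - (outcome - report) ** 2

def expected_score(belief, report):
    """Expected score of `report` when the agent's true belief is `belief`."""
    return belief * quadratic_score(report, 1) + (1 - belief) * quadratic_score(report, 0)

belief = 0.7
for report in [0.5, 0.6, 0.7, 0.8, 0.9]:
    print(report, round(expected_score(belief, report), 3))
# The expectation peaks exactly at report == belief, which is what
# "strictly proper" means; SSR aim to preserve this property when the
# realized outcome is replaced by a reference answer built from reports.
```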
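For the third related work, a toy version of evaluation with alternative answer sets: a prediction counts as correct when it lands anywhere in the gold answer's set, not only on exact string match. The sets below are hand-made stand-ins; the actual AAS are generated automatically with off-the-shelf NLP tools.

```python
# Hypothetical alternative answer sets (illustrative only).
aas = {
    "sofa": {"sofa", "couch"},
    "man": {"man", "guy", "person"},
}

def aas_accuracy(predictions, gold_answers):
    """Fraction of predictions matching any alternative of the gold answer."""
    hits = sum(
        pred in aas.get(gold, {gold})  # fall back to exact match
        for pred, gold in zip(predictions, gold_answers)
    )
    return hits / len(gold_answers)

print(aas_accuracy(["couch", "guy", "dog"], ["sofa", "man", "cat"]))  # 2/3
```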
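For the fourth related work, some background on the peer prediction setting: a simplified multi-task agreement payment in the spirit of correlated agreement, which scores agreement with a peer on shared tasks against a baseline of agreement across unrelated tasks. It is a sketch of the setting only, not the paper's meta-mechanism, which derives payments from a robust learning problem; all parameters below are assumptions.

```python
import random

def agreement_payment(reports_i, reports_j, bonus, penalty_i, penalty_j):
    """Pay agent i for agreeing with peer j on shared `bonus` tasks,
    minus baseline agreement measured across unrelated task pairs.

    reports_*: dict task -> binary report
    """
    agree_bonus = sum(reports_i[t] == reports_j[t] for t in bonus) / len(bonus)
    agree_base = sum(reports_i[s] == reports_j[t]
                     for s, t in zip(penalty_i, penalty_j)) / len(penalty_i)
    return agree_bonus - agree_base

rng = random.Random(1)
truth = {t: rng.randint(0, 1) for t in range(100)}
noisy = lambda: {t: truth[t] if rng.random() < 0.8 else 1 - truth[t] for t in truth}
tasks = list(truth)
ri, rj = noisy(), noisy()
print(agreement_payment(ri, rj, tasks[:50], tasks[50:75], tasks[75:]))       # > 0
collude = {t: 1 for t in truth}
print(agreement_payment(collude, collude, tasks[:50], tasks[50:75], tasks[75:]))  # == 0
```

Informative, correlated reports beat the cross-task baseline, while an uninformative constant report earns nothing, which is the intuition behind paying informed equilibria strictly more than uninformative ones.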
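For the fifth related work, the classical benchmark behind such convergence analyses, stated for contrast (this is the standard Berk-style result for i.i.d. misspecified Bayesian learning, included only as background; the paper's “prediction accuracy” order goes beyond it):

```latex
% Misspecified Bayesian updating: prior \mu_0 on a set of subjective
% models \Theta that may exclude the true signal distribution p^*.
\mu_t(\theta) \propto \mu_0(\theta) \prod_{s=1}^{t} q_\theta(x_s),
  \qquad x_s \sim p^* \ \text{i.i.d.}
% Under regularity conditions, beliefs concentrate on the models whose
% predictions are closest to p^* in Kullback--Leibler divergence:
\Theta^* = \operatorname*{arg\,min}_{\theta \in \Theta}
  D_{\mathrm{KL}}\left(p^* \,\middle\|\, q_\theta\right),
  \qquad \mu_t(\Theta^*) \to 1 \ \text{as } t \to \infty .
```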