We investigate the problem of determining a binary ground truth using advice from a group of independent reviewers (experts), each of whom reports a guess that matches the ground truth with some independent probability (competence) p_i. In this setting, when all reviewers are competent with p_i >= 0.5, the Condorcet Jury Theorem tells us that adding more reviewers increases the overall accuracy, and if all p_i's are known, then there exists an optimal weighting of the reviewers. However, in practical settings, reviewers may be noisy or incompetent, i.e., p_i < 0.5, and the number of experts may be small, so the asymptotic Condorcet Jury Theorem is not practically relevant. In such cases we explore appointing one or more chairs (judges) who determine the weight of each reviewer for aggregation, creating multiple levels. However, these chairs may be unable to correctly identify the competence of the reviewers they oversee, and therefore unable to compute the optimal weighting. We give conditions under which a set of chairs is able to weight the reviewers optimally and, depending on the competence distribution of the agents, give results about when it is better to have more chairs or more reviewers. Through numerical simulations we show that in some cases it is better to have more chairs, but in many cases it is better to have more reviewers.
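As a quick illustration of the Condorcet Jury Theorem claim above, the following sketch (function name is ours) computes the exact probability that a simple majority of n independent reviewers, each correct with probability p, recovers the ground truth. With p > 0.5 the accuracy grows with n; with p < 0.5 it shrinks, which is why incompetent reviewers break the asymptotic guarantee.

```python
import math

def majority_accuracy(p: float, n: int) -> float:
    """Exact probability that a simple majority of n independent
    reviewers, each correct with probability p, is correct.
    Assumes n is odd so ties cannot occur."""
    return sum(
        math.comb(n, k) * p**k * (1 - p)**(n - k)
        for k in range(n // 2 + 1, n + 1)
    )

# Competent reviewers (p > 0.5): accuracy increases with n.
for n in (1, 3, 5, 11, 51):
    print(n, round(majority_accuracy(0.6, n), 4))

# Incompetent reviewers (p < 0.5): adding more makes things worse.
print(round(majority_accuracy(0.4, 11), 4))
```

The same computation shows why the theorem is only asymptotic: at small n the gain per added reviewer is modest, so a better weighting can matter more than a larger panel.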
Towards Group Learning: Distributed Weighting of Experts
Aggregating signals from a collection of noisy sources is a fundamental problem in many domains, including crowd-sourcing, multi-agent planning, sensor networks, signal processing, voting, ensemble learning, and federated learning. The core question is how to aggregate signals from multiple sources (e.g., experts) in order to reveal an underlying ground truth. While a full answer depends on the type of signals, the correlation among them, and the desired output, a problem common to all of these applications is that of differentiating sources based on their quality and weighting them accordingly. It is often assumed that this differentiation and aggregation is done by a single, accurate central mechanism or agent (e.g., a judge). We complicate this model in two ways. First, we investigate both the setting with a single judge and one with multiple judges. Second, given this multi-agent interaction of judges, we investigate various constraints on the judges' reporting space. We build on known results for the optimal weighting of experts and prove that an ensemble of sub-optimal mechanisms can perform optimally under certain conditions. We then show empirically that the ensemble approximates the performance of the optimal mechanism under a broader range of conditions.
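The known optimal weighting referenced above is the classical log-odds rule (Nitzan and Paroush): with independent experts of known competences p_i, the accuracy-maximizing aggregation is a weighted majority with weights w_i = log(p_i / (1 - p_i)). The Monte Carlo sketch below (function names and the example competences are ours) compares this rule against unweighted majority; note that an expert with p_i < 0.5 receives a negative weight, so their vote is effectively flipped rather than discarded.

```python
import math
import random

def optimal_weights(ps):
    # Nitzan-Paroush log-odds weights for independent experts
    # with known competences p_i.
    return [math.log(p / (1 - p)) for p in ps]

def aggregate(votes, weights):
    # votes in {+1, -1}; the sign of the weighted sum is the decision.
    s = sum(w * v for w, v in zip(weights, votes))
    return 1 if s > 0 else -1

def accuracy(ps, weights, trials=20000, seed=0):
    # Monte Carlo estimate of the group's probability of being correct.
    rng = random.Random(seed)
    truth = 1
    correct = 0
    for _ in range(trials):
        votes = [truth if rng.random() < p else -truth for p in ps]
        correct += aggregate(votes, weights) == truth
    return correct / trials

ps = [0.9, 0.6, 0.55, 0.55, 0.45]   # last expert is worse than chance
uniform = [1.0] * len(ps)
print("unweighted majority:", accuracy(ps, uniform))
print("log-odds weighting: ", accuracy(ps, optimal_weights(ps)))
```

With these example competences the log-odds rule is dominated by the p = 0.9 expert and clearly beats the unweighted majority, which is the gap a judge who misestimates the p_i's stands to lose.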
- Award ID(s): 2007955
- PAR ID: 10386119
- Date Published:
- Journal Name: The 13th Workshop on Optimization and Learning in Multiagent Systems at AAMAS 2022
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation