This work explores sequential Bayesian binary hypothesis testing in a social learning setup with diverse expertise. We consider a two-agent (advisor-learner) sequential binary hypothesis test in which the learner infers the hypothesis from the advisor's decision, a private signal, and an individual belief. The agents differ in expertise, modeled as the noise variance of their private signals. In this setting, we first investigate the behavior of optimal agent beliefs and observe that the roles of the optimal agents can invert depending on the expertise levels. We also discuss the suboptimality of the Prelec reweighting function under diverse expertise. Next, we consider an advisor selection problem in which the learner's belief is fixed and an advisor must be chosen for a given prior. We characterize the decision region for choosing such an advisor and show that a learner whose belief deviates from the true prior often selects a suboptimal advisor.
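To make the learner's fusion step concrete, here is a minimal sketch (not the paper's implementation) of how a learner could combine a prior, a private Gaussian signal, and an advisor's binary decision, assuming hypothesis means of ±1 and a MAP-thresholding advisor; the means, noise levels, and function names are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

MU0, MU1 = -1.0, 1.0  # illustrative hypothesis means under H0 and H1

def advisor_threshold(sigma_adv, prior_h1):
    """Signal threshold above which the advisor declares H1 (MAP rule
    for Gaussian signals with equal variance under both hypotheses)."""
    return (MU0 + MU1) / 2 + sigma_adv**2 / (MU1 - MU0) * np.log((1 - prior_h1) / prior_h1)

def learner_posterior_h1(own_signal, advisor_dec, sigma_lrn, sigma_adv, prior_h1):
    """Learner's P(H1 | own private signal, advisor's binary decision)."""
    t = advisor_threshold(sigma_adv, prior_h1)
    # probability the advisor reports 1 under each hypothesis
    p_dec1_h1 = 1 - norm.cdf(t, MU1, sigma_adv)
    p_dec1_h0 = 1 - norm.cdf(t, MU0, sigma_adv)
    lr_advisor = (p_dec1_h1 / p_dec1_h0) if advisor_dec else ((1 - p_dec1_h1) / (1 - p_dec1_h0))
    lr_signal = norm.pdf(own_signal, MU1, sigma_lrn) / norm.pdf(own_signal, MU0, sigma_lrn)
    odds = prior_h1 / (1 - prior_h1) * lr_advisor * lr_signal
    return odds / (1 + odds)

# example: a noisy learner (sigma 2.0) leaning on a precise advisor (sigma 0.5)
print(learner_posterior_h1(own_signal=0.2, advisor_dec=1,
                           sigma_lrn=2.0, sigma_adv=0.5, prior_h1=0.5))
```

The expertise asymmetry shows up directly: with these numbers the precise advisor's decision dominates the learner's own weak signal.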
The effects of base rate neglect on sequential belief updating and real-world beliefs
Base-rate neglect is a pervasive bias in judgment that is conceptualized as underweighting of prior information and can have serious consequences in real-world scenarios. This bias is thought to reflect variability in inferential processes, but empirical support for a cohesive theory of base-rate neglect with sufficient explanatory power to account for longer-term and real-world beliefs is lacking. A Bayesian formalization of base-rate neglect in the context of sequential belief updating predicts that belief trajectories should exhibit dynamic patterns of dependence on the order in which evidence is presented and its consistency with prior beliefs. To test this, we developed a novel ‘urn-and-beads’ task that systematically manipulated the order of colored bead sequences and elicited beliefs via an incentive-compatible procedure. Our results in two independent online studies confirmed the predictions of the sequential base-rate neglect model: people exhibited beliefs that were more influenced by recent evidence and by evidence inconsistent with prior beliefs. We further found support for a noisy-sampling inference model whereby base-rate neglect results from rational discounting of noisy internal representations of prior beliefs. Finally, we found that model-derived indices of base-rate neglect, including a noisier prior representation, correlated with propensity for unusual beliefs outside the laboratory. Our work supports the relevance of Bayesian accounts of sequential base-rate neglect to real-world beliefs and hints at strategies to minimize deleterious consequences of this pervasive bias.
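One common Bayesian formalization of sequential base-rate neglect raises the prior odds to a power w < 1 at each update. The sketch below (illustrative, not the authors' code or fitted model) shows how this single change turns order-independent evidence into order-dependent, recency-weighted beliefs; the urn parameters and w are assumed values.

```python
import numpy as np

def sequential_update(beads, p_majority=0.6, prior_h1=0.5, w=0.7):
    """Sequential belief updating over an urn-and-beads sequence, with
    base-rate neglect modeled as underweighting (w < 1) of the prior
    odds at each step; w = 1 recovers the ideal Bayesian observer.
    beads: sequence of 1s (color favoring H1) and 0s (favoring H0)."""
    belief = prior_h1
    trajectory = []
    for bead in beads:
        lr = p_majority / (1 - p_majority) if bead == 1 else (1 - p_majority) / p_majority
        prior_odds = belief / (1 - belief)
        post_odds = prior_odds**w * lr  # the neglected prior compounds into recency
        belief = post_odds / (1 + post_odds)
        trajectory.append(belief)
    return trajectory

# identical evidence in different orders: the ideal observer (w = 1) ends
# at 0.5 either way, while the w < 1 observer tracks the most recent beads
print(sequential_update([1, 1, 0, 0], w=0.7)[-1])  # ~0.41, pulled toward H0
print(sequential_update([0, 0, 1, 1], w=0.7)[-1])  # ~0.59, pulled toward H1
```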
- Award ID(s):
- 1949418
- PAR ID:
- 10518162
- Editor(s):
- Gershman, Samuel J
- Publisher / Repository:
- PLoS Computational Biology
- Date Published:
- Journal Name:
- PLOS Computational Biology
- Edition / Version:
- 1
- Volume:
- 18
- Issue:
- 12
- ISSN:
- 1553-7358
- Page Range / eLocation ID:
- e1010796
- Subject(s) / Keyword(s):
- computational psychiatry; Bayesian models of cognition; leaky memory; urn-and-beads task
- Format(s):
- Medium: X; Size: 3MB; Other: pdf
- Size(s):
- 3MB
- Sponsoring Org:
- National Science Foundation
More Like this
-
Reward learning as a method for inferring human intent and preferences has been studied extensively. Prior approaches make the implicit assumption that the human maintains a correct belief about the robot's domain dynamics. This may not always hold: the human's belief may be biased, which can ultimately lead to a misguided estimate of the human's intent and preferences, since these are often derived from human feedback on the robot's behaviors. In this paper, we remove this restrictive assumption by allowing the human to have an inaccurate understanding of the robot. We propose a method called Generalized Reward Learning with biased beliefs about domain dynamics (GeReL) to infer both the reward function and the human's belief about the robot in a Bayesian setting from human ratings. Because the posteriors take complex forms, we formulate the inference as a variational inference problem over the parameters that govern the reward function and the human's belief about the robot simultaneously. We evaluate our method in a simulated domain and in a user study where the user holds a bias based on the robot's appearance. The results show that our method recovers the true human preferences even under such biased beliefs, in contrast to prior approaches that could misinterpret them completely.
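As a rough illustration of the joint inference GeReL performs, here is a toy grid approximation (standing in for the paper's variational inference) that jointly infers a scalar reward weight and the human's believed action-success probability from noisy ratings; the rating model, grids, and numbers are all invented for illustration.

```python
import numpy as np

# grid approximation of the joint posterior over a scalar reward weight
# `theta` and the human's believed success probability `b`
thetas = np.linspace(0.0, 2.0, 101)
bs = np.linspace(0.05, 1.0, 96)
TH, B = np.meshgrid(thetas, bs, indexing="ij")

def log_likelihood(ratings, sigma=0.2):
    """Toy rating model: the human scores a behavior by its expected
    return under *their believed* dynamics, plus Gaussian noise."""
    ll = np.zeros_like(TH)
    for r in ratings:
        ll += -0.5 * ((r - B * TH) / sigma) ** 2
    return ll

ratings = [0.52, 0.48, 0.55]            # observed human ratings (made up)
log_post = log_likelihood(ratings)       # flat prior over the grid
post = np.exp(log_post - log_post.max())
post /= post.sum()

# the marginals expose the coupling: a high reward with a pessimistic
# belief explains the same ratings as a low reward with an optimistic one,
# which is why the two must be inferred jointly
print("E[theta] =", (post.sum(axis=1) * thetas).sum())
print("E[b]     =", (post.sum(axis=0) * bs).sum())
```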
-
We propose a criterion of stability for two-sided markets with asymmetric information. A central idea is to formulate off-path beliefs conditional on counterfactual pairwise deviations and on-path beliefs in the absence of such deviations. A matching-belief configuration is stable if the matching is individually rational with respect to the system of on-path beliefs and is not blocked with respect to the system of off-path beliefs. The formulation provides a language for assessing matching outcomes with respect to their supporting beliefs and opens the door to further belief-based refinements. The main refinement analyzed in the paper requires the Bayesian consistency of on-path and off-path beliefs with prior beliefs. We define concepts of Bayesian efficiency, the rational expectations competitive equilibrium, and the core. Their contrast with pairwise stability manifests the role of information asymmetry in matching formation. (JEL C78, D40, D82, D83)
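To illustrate the belief-conditional notion of stability, here is a toy sketch (not from the paper) that checks individual rationality under on-path beliefs and the absence of blocking pairs under off-path beliefs in a small one-to-one market; all payoffs, types, and beliefs are made up.

```python
# toy one-to-one market: worker payoffs are known; firms evaluate workers
# by expected payoff under a belief P(worker is hi-type)
workers, firms = ["w1", "w2"], ["f1", "f2"]
firm_payoff = {("f1", "hi"): 3.0, ("f1", "lo"): 0.5,
               ("f2", "hi"): 2.0, ("f2", "lo"): 1.0}
worker_payoff = {("w1", "f1"): 2.0, ("w1", "f2"): 1.0,
                 ("w2", "f1"): 2.0, ("w2", "f2"): 1.5}

def firm_value(firm, belief_hi):
    """Expected payoff to `firm` under belief P(hi) about its partner."""
    return belief_hi * firm_payoff[(firm, "hi")] + (1 - belief_hi) * firm_payoff[(firm, "lo")]

def is_stable(matching, on_path, off_path, outside=0.0):
    """matching: worker -> firm (one-to-one). on_path / off_path map
    (worker, firm) to the firm's belief P(hi), held on the matched path
    and conditional on a counterfactual pairwise deviation, respectively."""
    partner = {f: w for w, f in matching.items()}
    # individual rationality under on-path beliefs
    for w, f in matching.items():
        if worker_payoff[(w, f)] < outside or firm_value(f, on_path[(w, f)]) < outside:
            return False
    # no blocking pair under off-path (deviation-conditional) beliefs
    for w in workers:
        for f in firms:
            if matching[w] == f:
                continue
            w_gains = worker_payoff[(w, f)] > worker_payoff[(w, matching[w])]
            f_gains = firm_value(f, off_path[(w, f)]) > firm_value(f, on_path[(partner[f], f)])
            if w_gains and f_gains:
                return False
    return True

matching = {"w1": "f1", "w2": "f2"}
on_path = {("w1", "f1"): 0.8, ("w2", "f2"): 0.6}
# pessimistic off-path beliefs about deviating workers can sustain a
# matching that optimistic off-path beliefs would break
off_path = {(w, f): 0.1 for w in workers for f in firms}
print(is_stable(matching, on_path, off_path))  # True under these beliefs
```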
-
Recent years have seen a surge in research on why people fall for misinformation and what can be done about it. Drawing on a framework that conceptualizes truth judgments of true and false information as a signal-detection problem, the current article identifies three inaccurate assumptions in the public and scientific discourse about misinformation: (1) People are bad at discerning true from false information, (2) partisan bias is not a driving force in judgments of misinformation, and (3) gullibility to false information is the main factor underlying inaccurate beliefs. Counter to these assumptions, we argue that (1) people are quite good at discerning true from false information, (2) partisan bias in responses to true and false information is pervasive and strong, and (3) skepticism against belief-incongruent true information is much more pronounced than gullibility to belief-congruent false information. These conclusions have significant implications for person-centered misinformation interventions to tackle inaccurate beliefs.
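The signal-detection framing maps onto two standard indices: d' (discernment between true and false items) and the criterion c (overall skepticism). A minimal sketch with illustrative acceptance rates, not data from the article:

```python
from statistics import NormalDist

def sdt_indices(hit_rate, false_alarm_rate):
    """Signal-detection indices for truth judgments: d' measures how well
    true items are discerned from false ones; criterion c measures overall
    response bias (c > 0 indicates a tendency to reject items, i.e. skepticism)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# illustrative rates: accept 70% of true items and 20% of false items
# -> solid discernment (d' ~ 1.37) with mild overall skepticism (c ~ 0.16)
print(sdt_indices(hit_rate=0.70, false_alarm_rate=0.20))
```

Computing c separately for belief-congruent and belief-incongruent items is one way to express the article's third point: the criterion shifts far more toward rejection for incongruent true items than toward acceptance for congruent false ones.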
-
Accurate detection of infected individuals is one of the critical steps in stopping any pandemic. When the underlying infection rate of the disease is low, testing people in groups, instead of testing each individual in the population, can be more efficient. In this work, we consider a noisy adaptive group testing design with specified test sensitivity and specificity that selects the optimal group, given previous test results, based on a pre-selected utility function. As in prior studies on group testing, we model this problem as a sequential Bayesian Optimal Experimental Design (BOED) problem to adaptively design the groups for each test. We analyze the required number of group tests when using the updated posterior on the infection status and the corresponding Mutual Information (MI) as the utility function for selecting new groups. More importantly, we study how potential bias in the assumed ground-truth noise of group tests may affect the group-testing sample complexity.
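As a sketch of the mutual-information utility (a simplified stand-in for the paper's BOED objective), the snippet below scores a candidate group by I(test outcome; infection statuses) under independent per-person posteriors and a noisy test that fires on any infected member; all rates and names are assumptions.

```python
import numpy as np
from itertools import combinations

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def group_info_gain(q, group, sensitivity=0.95, specificity=0.98):
    """Mutual information (bits) between a noisy group-test outcome Y and
    the infection statuses X, given independent per-person posteriors q[i].
    Y depends on X only through Z = 'anyone in the group infected', so
    I(Y; X) = I(Y; Z) = H(Y) - [P(Z=1) H(Se) + P(Z=0) H(1 - Sp)]."""
    p_any = 1 - np.prod([1 - q[i] for i in group])
    p_pos = sensitivity * p_any + (1 - specificity) * (1 - p_any)
    h_y_given_z = p_any * binary_entropy(sensitivity) + \
                  (1 - p_any) * binary_entropy(1 - specificity)
    return binary_entropy(p_pos) - h_y_given_z

# one greedy step of the adaptive design: pick the most informative pair
q = {0: 0.02, 1: 0.05, 2: 0.30, 3: 0.01}   # current infection posteriors
best = max(combinations(q, 2), key=lambda g: group_info_gain(q, g))
print(best, group_info_gain(q, best))
```

The paper's bias question can be probed in this sketch by scoring groups with one (sensitivity, specificity) pair while simulating outcomes with another.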