Search for: All records

Award ID contains: 2049896

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Abstract: Adaptive design optimization (ADO) is a state-of-the-art technique for experimental design (Cavagnaro et al., 2010). ADO dynamically identifies stimuli that, in expectation, yield the most information about a hypothetical construct of interest (e.g., parameters of a cognitive model). To calculate this expectation, ADO leverages the modeler's existing knowledge, specified in the form of a prior distribution. Informative priors align with the distribution of the focal construct in the participant population; this alignment is assumed by ADO's internal assessment of expected information gain. If the prior is instead misinformative, i.e., does not align with the participant population, ADO's estimates of expected information gain can be inaccurate. In many cases, the true distribution that characterizes the participant population is unknown, and experimenters rely on heuristics in their choice of prior, without an understanding of how this choice affects ADO's behavior. Our work introduces a mathematical framework for investigating how the choice of prior distribution affects the efficiency of experiments designed using ADO. Through theoretical and empirical results, we show that, under prior misinformation, measures of expected information gain are distinct from the correctness of the corresponding inference. Through a series of simulation experiments, we show that, in the case of parameter estimation, ADO nevertheless outperforms other design methods. Conversely, in the case of model selection, misinformative priors can lead inference to favor the wrong model, and rather than mitigating this pitfall, ADO exacerbates it.
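The abstract above describes ADO as scoring candidate designs by expected information gain computed under the modeler's prior. The following Python sketch illustrates that mechanism for a simple parameter-estimation case: expected information gain is the mutual information between a gridded parameter and the experimental outcome, evaluated per design, and the design with the highest score is selected. This is a hedged illustration, not the authors' implementation; the Bernoulli response model, grids, and uniform prior are hypothetical choices made for the example.

```python
# Minimal sketch of ADO's design-scoring step (hypothetical model and grids).
import numpy as np

def expected_information_gain(prior, likelihoods):
    """Mutual information I(theta; y) for one candidate design.

    prior       : (n_theta,) probabilities over a parameter grid
    likelihoods : (n_theta, n_outcomes) array of p(y | theta, design)
    """
    marginal = prior @ likelihoods                      # p(y | design)
    # I(theta; y) = H(y) - E_theta[ H(y | theta) ]
    h_marginal = -np.sum(marginal * np.log(marginal + 1e-12))
    h_conditional = -np.sum(
        prior[:, None] * likelihoods * np.log(likelihoods + 1e-12)
    )
    return h_marginal - h_conditional

# Hypothetical cognitive model: Bernoulli response with
# p(y = 1 | theta, d) = theta * d for stimulus intensity d.
theta_grid = np.linspace(0.05, 0.95, 50)
prior = np.ones_like(theta_grid) / theta_grid.size      # the modeler's prior
designs = np.linspace(0.1, 1.0, 10)                     # candidate stimuli

gains = []
for d in designs:
    p1 = theta_grid * d
    lik = np.stack([1 - p1, p1], axis=1)                # (n_theta, 2)
    gains.append(expected_information_gain(prior, lik))

best = designs[int(np.argmax(gains))]                   # ADO's chosen stimulus
print(f"design with highest expected information gain: {best:.2f}")
```

Because the prior enters the score directly (through both the marginal and the conditional entropy), replacing the uniform prior with one concentrated away from the data-generating parameter changes which designs look most informative, which is the sense in which a misinformative prior can distort ADO's internal assessment.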
  2. Abstract: Scholars heavily rely on theoretical scope as a tool to challenge existing theory. We advocate that scientific discovery could be accelerated if far more effort were invested in also overtly specifying and painstakingly delineating the intended purview of any proposed new theory at the time of its inception. As a case study, we consider Tversky and Kahneman (1992). They motivated their Nobel-Prize-winning cumulative prospect theory with evidence that, in each of two studies, roughly half of the participants violated independence, a property required by expected utility theory (EUT). Yet even at the time of inception, new theories may reveal signs of their own limited scope. For example, we show that Tversky and Kahneman's findings in their own test of loss aversion provide evidence that, in that study, at least half of their participants in turn violated their theory. We highlight a combination of conflicting findings in the original article that makes it ambiguous to evaluate both cumulative prospect theory's scope and its parsimony on the authors' own evidence. The Tversky and Kahneman article is illustrative of a social and behavioral research culture in which theoretical scope plays an extremely asymmetric role: to call existing theory into question and motivate surrogate proposals.