Search for: All records

Creators/Authors contains: "Kellen, David"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. In recent years, discussions comparing high-threshold and continuous accounts of recognition-memory judgments have increasingly turned their attention toward critical testing. One of the defining features of this approach is its requirement that the relationship between theoretical assumptions and predictions be laid out in a transparent and precise way. One of the (fortunate) consequences of this requirement is that it encourages researchers to debate the merits of the different assumptions at play. The present work addresses a recent attempt to overturn the dismissal of high-threshold models by discarding a background selective-influence assumption. However, it can be shown that the contrast process proposed to explain this violation undermines a more general assumption that we dubbed “single-item generalization.” We argue that the case for the dismissal of these assumptions, and the claimed support for the proposed high-threshold contrast account, do not withstand the scrutiny of their theoretical properties and empirical implications.
    Free, publicly-accessible full text available August 1, 2026
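    For context, the two-high-threshold (2HT) model is the canonical high-threshold account in this debate. A minimal sketch of its textbook response probabilities (standard in the literature, not taken from this paper), where D_o and D_n are the detection probabilities for old and new items and g is the guessing rate:

        P(\text{``old''} \mid \text{old}) = D_o + (1 - D_o)\,g, \qquad
        P(\text{``old''} \mid \text{new}) = (1 - D_n)\,g

    Selective-influence assumptions of the kind discussed above specify which experimental manipulations are allowed to affect which of these parameters, and critical tests derive order constraints on the observable response probabilities from those restrictions.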
  2. In everyday life, people routinely make decisions that involve irredeemable risks such as death (e.g., while driving). Even though these decisions under extinction risk are common, practically important, and have different properties compared to the types of decisions typically studied by decision scientists, they have received little research attention. The present work advances the formal understanding of decision making under extinction risk by introducing a novel experimental paradigm, the Extinction Gambling Task (EGT). We derive optimal strategies for three different types of extinction and near-extinction events, and compare them to participants’ choices in three experiments. Leveraging computational modelling to describe strategies at the individual level, we document strengths and shortcomings in participants’ decisions under extinction risk. Specifically, we find that, while participants are relatively good in terms of the qualitative strategies they employ, their decisions are nevertheless affected by loss chasing, scope insensitivity, and opportunity cost neglect. We hope that by formalising decisions under extinction risk and providing a task to study them, this work will facilitate future research on an important topic that has been largely ignored. 
    Free, publicly-accessible full text available July 1, 2026
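    The task itself is only summarized here, but the core tension the paper formalizes can be illustrated with a toy stopping problem. The payoff structure and parameter names below are illustrative assumptions, not the actual Extinction Gambling Task:

        # Toy model: each round adds 1 unit to a provisional pot, but with
        # probability p an extinction event destroys the entire pot. A player
        # who banks after n rounds keeps the pot only if no extinction has
        # occurred, so the expected payoff is n * (1 - p)**n, which is
        # maximized near n = -1 / ln(1 - p) (roughly 1/p for small p).
        import math

        def expected_payoff(n_rounds: int, p_extinction: float) -> float:
            """Expected banked winnings when stopping after n_rounds."""
            return n_rounds * (1.0 - p_extinction) ** n_rounds

        def optimal_stop(p_extinction: float) -> int:
            """Number of rounds that maximizes the expected payoff."""
            return max(range(1, 1000), key=lambda n: expected_payoff(n, p_extinction))

        for p in (0.05, 0.10, 0.20):
            n_star = optimal_stop(p)
            print(f"p = {p:.2f}: bank after {n_star} rounds "
                  f"(analytic ~ {-1 / math.log(1 - p):.1f}), "
                  f"expected payoff {expected_payoff(n_star, p):.2f}")

    In a toy problem of this kind, loss chasing and opportunity-cost neglect of the sort the paper documents would surface as banking far later or far earlier than the optimum.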
  3. Abstract: We develop alternative families of Bayes factors for use in hypothesis tests as alternatives to the popular default Bayes factors. The alternative Bayes factors are derived for the statistical analyses most commonly used in psychological research – one-sample and two-sample t tests, regression, and ANOVA analyses. They possess the same desirable theoretical and practical properties as the default Bayes factors and satisfy additional theoretical desiderata while mitigating two features of the default priors that we consider implausible. They can be conveniently computed via an R package that we provide. Furthermore, hypothesis tests based on Bayes factors and those based on significance tests are juxtaposed. This discussion leads to the insight that default Bayes factors, as well as the alternative Bayes factors, are equivalent to test-statistic-based Bayes factors as proposed by Johnson (Journal of the Royal Statistical Society Series B: Statistical Methodology, 67, 689–701, 2005). We highlight test-statistic-based Bayes factors as a general approach to Bayes-factor computation that is applicable to many hypothesis-testing problems for which an effect-size measure has been proposed and for which test power can be computed.
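    As a point of reference for what “default Bayes factors” means here, below is a minimal sketch of the standard JZS default Bayes factor for a one-sample t test (Rouder et al., 2009), computed by numerical integration. The function is my own illustration and is not the authors' alternative Bayes factor or their R package:

        # Default (JZS) Bayes factor BF10 for a one-sample t statistic.
        # Under H1, delta ~ Cauchy(0, r), written as a scale mixture:
        # delta | g ~ N(0, g), with g ~ InverseGamma(1/2, r^2 / 2).
        import numpy as np
        from scipy import integrate

        def jzs_bf10(t: float, n: int, r: float = np.sqrt(2) / 2) -> float:
            nu = n - 1

            # Marginal likelihood under H0 (delta = 0), up to a constant
            # that cancels in the ratio.
            m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

            def integrand(g):
                prior_g = r / np.sqrt(2 * np.pi) * g**-1.5 * np.exp(-r**2 / (2 * g))
                like = (1 + n * g) ** -0.5 * \
                       (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                return like * prior_g

            m1, _ = integrate.quad(integrand, 0, np.inf)
            return m1 / m0

        print(jzs_bf10(t=2.5, n=30))   # modest evidence for an effect
        print(jzs_bf10(t=0.3, n=30))   # evidence favoring the null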
  4. Measurement literacy is required for strong scientific reasoning, effective experimental design, conceptual and empirical validation of measurement quantities, and the intelligible interpretation of error in theory construction. This discourse examines how issues in measurement are posed and resolved and addresses potential misunderstandings. Examples drawn from across the sciences are used to show that measurement literacy promotes the goals of scientific discourse and provides the necessary foundation for carving out perspectives and carrying out interventions in science. 
  5. The ability to distinguish between different explanations of human memory abilities continues to be the subject of many ongoing theoretical debates. These debates attempt to account for a growing corpus of empirical phenomena in item-memory judgments, which include the list-strength effect, the strength-based mirror effect, and output interference. One of the main theoretical contenders is the Retrieving Effectively from Memory (REM) model. We show that REM, in its current form, has difficulties accounting for source-memory judgments – a situation that calls for its revision. We propose an extended REM model that assumes a local-matching process for source judgments alongside source differentiation. We report a first evaluation of this model’s predictions using three experiments in which we manipulated the relative source-memory strength of different lists of items. Analogous to item-memory judgments, we observed a null list-strength effect and a strength-based mirror effect in the case of source memory. In a second evaluation, which relied on a novel experiment alongside two previously published datasets, we evaluated the model’s predictions regarding the manifestation of output interference in item-memory judgments and its absence in source-memory judgments. Our results showed output interference severely affecting the accuracy of item-memory judgments but having a null or negligible impact on source-memory judgments. Altogether, these results support REM’s core notion of differentiation (for both item and source information) as well as the concept of local matching proposed by the present extension.
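    For readers unfamiliar with REM, below is a minimal sketch of its standard global-matching rule for item recognition (Shiffrin & Steyvers, 1997). Parameter values are illustrative, and the paper’s local-matching source-memory extension is not reproduced here:

        # REM sketch: items are vectors of geometric feature values; study
        # stores noisy, incomplete traces; at test, the probe's odds of
        # being "old" is the average likelihood ratio across traces.
        import numpy as np

        rng = np.random.default_rng(0)
        G, C, U_STORE, N_FEATURES = 0.4, 0.7, 0.8, 20

        def make_item():
            # Feature values are geometric: P(V = v) = g * (1 - g)**(v - 1)
            return rng.geometric(G, N_FEATURES)

        def store(item):
            trace = np.zeros(N_FEATURES, dtype=int)           # 0 = nothing stored
            stored = rng.random(N_FEATURES) < U_STORE
            correct = rng.random(N_FEATURES) < C
            trace[stored & correct] = item[stored & correct]  # veridical copy
            resample = stored & ~correct
            trace[resample] = rng.geometric(G, resample.sum())  # noisy copy
            return trace

        def likelihood_ratio(probe, trace):
            nonzero = trace > 0
            match = nonzero & (trace == probe)
            mismatch = nonzero & (trace != probe)
            v = trace[match].astype(float)
            base = G * (1 - G) ** (v - 1)                # chance-match probability
            lam = np.prod((C + (1 - C) * base) / base)   # matching features
            lam *= (1 - C) ** mismatch.sum()             # mismatching features
            return lam

        study = [make_item() for _ in range(40)]
        memory = [store(item) for item in study]

        # Global odds: average likelihood ratio; respond "old" iff odds > 1.
        probe = study[0]                                 # an old item
        odds = np.mean([likelihood_ratio(probe, t) for t in memory])
        print("old probe ->", "old" if odds > 1 else "new", f"(odds={odds:.2f})")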
  6. Individuals’ decisions under risk tend to be in line with the notion that “losses loom larger than gains.” This loss aversion in decision making is commonly understood as a stable individual preference that is manifested across different contexts. The presumed stability and generality, which underlies the prominence of loss aversion in the literature at large, has been recently questioned by studies reporting how loss aversion can disappear, and even reverse, as a function of the choice context. The present study investigated whether loss aversion reflects a trait-like attitude of avoiding losses or rather individuals’ adaptability to different contexts. We report three experiments investigating the within-subject context sensitivity of loss aversion in a two-alternative forced-choice task. Our results show that the choice context can shift people’s loss aversion, though somewhat inconsistently. Moreover, individual estimates of loss aversion are shown to have a considerable degree of stability. Altogether, these results indicate that even though the absolute value of loss aversion can be affected by external factors such as the choice context, estimates of people’s loss aversion still capture the relative dispositions toward gains and losses across individuals.
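    A minimal sketch of how a loss-aversion coefficient is commonly estimated from accept/reject decisions on 50/50 mixed gambles; the value function, choice rule, and data below are standard modeling conventions and made-up numbers, not necessarily the paper’s exact specification:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def subjective_value(x, lam, alpha=0.88):
            """Prospect-theory-style value: |x|**alpha, losses scaled by lam."""
            v = np.abs(x) ** alpha
            return np.where(x >= 0, v, -lam * v)

        def neg_log_lik(lam, gains, losses, accepted, temperature=1.0):
            """Likelihood of accept/reject choices under a logistic choice rule."""
            ev = 0.5 * subjective_value(gains, lam) + 0.5 * subjective_value(losses, lam)
            p_accept = 1.0 / (1.0 + np.exp(-temperature * ev))
            p = np.where(accepted, p_accept, 1.0 - p_accept)
            return -np.sum(np.log(np.clip(p, 1e-10, 1.0)))

        # Hypothetical choices: (gain, loss) pairs and whether each was accepted.
        gains    = np.array([10.0, 12.0, 20.0,  8.0, 15.0])
        losses   = np.array([-10.0, -6.0, -15.0, -9.0, -5.0])
        accepted = np.array([False, True, False, False, True])

        fit = minimize_scalar(neg_log_lik, bounds=(0.1, 5.0), method="bounded",
                              args=(gains, losses, accepted))
        print(f"estimated lambda = {fit.x:.2f} (values > 1 indicate loss aversion)")

    Context sensitivity of the kind studied here would correspond to the fitted lambda shifting when the same person faces a different distribution of gambles.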
  7. Abstract: This commentary argues against the indictment of current experimental practices, such as piecemeal testing, and against the proposed integrated experiment design (IED) approach, which we see as yet another attempt at automating scientific thinking. We identify a number of undesirable features of IED that lead us to believe that its broad application will hinder scientific progress.
  8. Abstract: Statistical modeling is generally meant to describe patterns in data in service of the broader scientific goal of developing theories to explain those patterns. Statistical models support meaningful inferences when models are built so as to align parameters of the model with potential causal mechanisms and how they manifest in data. When statistical models are instead based on assumptions chosen by default, attempts to draw inferences can be uninformative or even paradoxical: in essence, the tail is trying to wag the dog. These issues are illustrated by van Doorn et al. (this issue) in the context of using Bayes factors to identify effects and interactions in linear mixed models. We show that the problems identified in their applications (along with other problems identified here) can be circumvented by using priors over inherently meaningful units instead of default priors on standardized scales. This case study illustrates how researchers must directly engage with a number of substantive issues in order to support meaningful inferences, of which we highlight two. The first is the problem of coordination, which requires a researcher to specify how the theoretical constructs postulated by a model are functionally related to observable variables. The second is the problem of generalization, which requires a researcher to consider how a model may represent theoretical constructs shared across similar but non-identical situations, along with the fact that model-comparison metrics like Bayes factors do not directly address this form of generalization. For statistical modeling to serve the goals of science, models cannot be based on default assumptions, but should instead be based on an understanding of their coordination function and on how they represent causal mechanisms that may be expected to generalize to other related scenarios.
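    The coordination problem can be made concrete with a small simulation: a default prior on a standardized effect (delta = beta / sigma) implies a different raw-scale prior for every residual standard deviation, whereas a prior stated in meaningful units does not. The numbers below are illustrative assumptions, not an analysis from the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        # Default Cauchy(0, 0.707) prior on the standardized effect delta.
        delta_prior = rng.standard_cauchy(100_000) * 0.707

        for sigma_ms in (50.0, 200.0):            # two plausible residual SDs
            beta_ms = delta_prior * sigma_ms      # implied raw-scale prior
            med_abs = np.percentile(np.abs(beta_ms), 50)
            print(f"sigma = {sigma_ms:>5.0f} ms -> median |effect| implied "
                  f"by the default prior: {med_abs:.0f} ms")

        # A prior chosen in meaningful units, e.g. beta ~ Normal(0, 30 ms),
        # asserts the same thing about the effect regardless of how noisy a
        # particular dataset happens to be.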