

Search for: All records

Creators/Authors contains: "Kvam, Peter D"


  1. Grigorenko, Elena L (Ed.)
    Free, publicly-accessible full text available April 1, 2026
  2. Subjective value has long been measured using binary choice experiments, yet responses like willingness-to-pay prices can be an effective and efficient way to assess individual differences in risk preferences and value. Tony Marley’s work illustrated that dynamic, stochastic models permit meaningful inferences about cognition from process-level data on paradigms beyond binary choice, yet many of these models remain difficult to use because their likelihoods must be approximated from simulation. In this paper, we develop and test an approach that uses deep neural networks to estimate the parameters of otherwise-intractable behavioral models. Once trained, these networks allow for accurate and near-instantaneous parameter estimation. We compare different network architectures and show that they accurately recover true risk preferences related to utility, response caution, anchoring, and non-decision processes. To illustrate the usefulness of the approach, we then applied it to estimate model parameters for a large, demographically representative sample of U.S. participants who completed a 20-question pricing task, an estimation problem that is not feasible with previous methods. The results illustrate the utility of machine-learning approaches for fitting cognitive and economic models, providing efficient methods for quantifying meaningful differences in risk preferences from sparse data.
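The simulation-based estimation idea in the abstract above can be sketched in miniature: simulate the behavioral model across many parameter settings, then train a regressor from summary statistics of the simulated data back to the generating parameters. The sketch below is illustrative only; it uses a toy risky-choice utility task and plain least-squares regression as a stand-in for the paper's deep neural networks, and the task structure, summary statistics, and parameter ranges are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summary(alpha, n_trials=200):
    """Simulate a toy risky-choice task (hypothetical, not the paper's task).

    alpha is a risk-preference (utility curvature) parameter; the agent
    chooses between a sure amount and a 50/50 gamble via a softmax rule.
    Returns summary statistics of the simulated choices.
    """
    sure = rng.uniform(1, 10, n_trials)
    win = rng.uniform(2, 20, n_trials)
    u_sure = sure ** alpha
    u_gamble = 0.5 * win ** alpha
    p_gamble = 1.0 / (1.0 + np.exp(u_sure - u_gamble))
    choices = rng.random(n_trials) < p_gamble
    # Summary stats: overall gamble rate, and gamble rate on high-stakes trials.
    return np.array([choices.mean(), choices[win > 10].mean()])

# "Training set": simulated (summary statistics, true parameter) pairs.
train_alpha = rng.uniform(0.3, 1.2, 2000)
stats = np.array([simulate_summary(a) for a in train_alpha])

# Least-squares regression on polynomial features stands in for the deep
# network: once "trained", estimation is a single matrix product.
features = np.column_stack([np.ones(len(stats)), stats, stats ** 2])
weights, *_ = np.linalg.lstsq(features, train_alpha, rcond=None)

def estimate_alpha(summary):
    feats = np.concatenate([[1.0], summary, summary ** 2])
    return float(feats @ weights)
```

The key property, shared with the neural-network version, is amortization: all the simulation cost is paid up front, so estimating parameters for a new participant requires no further simulation.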
  3. When making decisions based on probabilistic outcomes, people guide their behavior using knowledge gathered through both indirect descriptions and direct experience. Paradoxically, the way people obtain this information significantly impacts their apparent preferences. A ubiquitous example is the description-experience gap: individuals seemingly overweight low-probability events when probabilities are described yet underweight them when probabilities must be experienced firsthand. A leading explanation for this fundamental gap in decision-making is that probabilities are weighted differently when learned through description relative to experience, yet a formal theoretical account of the mechanism responsible for such weighting differences remains elusive. We demonstrate how various learning and memory retention models incorporating neuroscientifically motivated learning mechanisms can explain why probability weighting and valuation parameters are often found to vary across description and experience. In a simulation study, we show how learning through experience can lead to systematically biased estimates of probability weighting when using a traditional cumulative prospect theory model. We then use hierarchical Bayesian modeling and Bayesian model comparison to show how various learning and memory retention models capture participants’ behavior over and above changes in outcome valuation and probability weighting, accounting for description- and experience-based decisions in a within-subject experiment. We conclude with a discussion of how substantive models of psychological processes can lead to insights that heuristic statistical models fail to capture.
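One way to see the biased-estimation point in the abstract above: a learner who tracks an outcome probability with a simple delta (recency-weighted) rule holds beliefs that are correct on average but right-skewed for rare events, so the typical learner underestimates the rare outcome, and a standard prospect-theory weighting fit would register this as underweighting. The sketch below is a minimal illustration with invented settings (true probability, learning rate, trial counts); the delta rule and the Tversky-Kahneman weighting function are standard textbook forms, not the specific models compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def tk_weight(p, gamma):
    # Tversky-Kahneman one-parameter probability weighting function;
    # with gamma < 1, described rare events are overweighted.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def delta_rule_belief(p_true, n_trials=50, lr=0.2, start=0.5):
    # Recency-weighted running estimate of an outcome's probability,
    # updated trial by trial from experienced outcomes.
    belief = start
    for _ in range(n_trials):
        outcome = float(rng.random() < p_true)
        belief += lr * (outcome - belief)
    return belief

p_rare = 0.1
beliefs = np.array([delta_rule_belief(p_rare) for _ in range(5000)])
```

Because the belief distribution is right-skewed, its mean sits near the true probability while its median sits below it: most simulated learners act as if the rare event is less likely than it is, mimicking the experience side of the description-experience gap without any change in the weighting function itself.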
  4. The Implicit Association Test (IAT), like many behavioral measures, seeks to quantify meaningful individual differences in cognitive processes that are difficult to assess with approaches like self-reports. However, many IATs appear to show low test-retest reliability, and typical scoring methods fail to quantify all of the decision-making processes that generate overt task performance. Here, we develop a new modeling approach for IATs based on the geometric similarity representation (GSR) model. This model leverages both response times and accuracy on IATs to make inferences about representational similarity between the stimuli and categories. The model disentangles processes related to response caution, stimulus encoding, similarities between concepts and categories, and response processes unrelated to the choice itself. This approach to analyzing IAT data illustrates that the unreliability in IATs is almost entirely attributable to the methods used to analyze data from the task: GSR model parameters show test-retest reliability around .80-.90, on par with reliable self-report measures. Furthermore, we demonstrate that model parameters yield greater validity than the IAT D-score, Quad model, and simple diffusion model contrasts, predicting outcomes related to intergroup contact and motivation. Finally, we present a simple point-and-click software tool for fitting the model, which uses a pre-trained neural network to estimate best-fit parameters of the GSR model. This approach allows easy and instantaneous fitting of IAT data with minimal demands on coding or technical expertise on the part of the user, making the new model accessible and effective.
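The parameter roles the abstract above describes (similarity-driven evidence, response caution, non-decision time) can be illustrated with a bare-bones sequential-sampling simulation. This is not the GSR model itself, only a generic drift-diffusion sketch with invented parameter values, showing how boundary separation (caution) trades speed for accuracy while non-decision time shifts all response times.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(drift, boundary, ndt, n_trials=500, dt=0.005, noise=1.0):
    """Generic two-boundary drift-diffusion sketch (not the GSR model).

    drift    - evidence quality (e.g., stimulus-category similarity)
    boundary - response caution (evidence required before responding)
    ndt      - non-decision time (encoding and motor processes)
    """
    rts = np.empty(n_trials)
    upper = np.empty(n_trials, dtype=bool)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        # Accumulate noisy evidence until either boundary is crossed.
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t + ndt
        upper[i] = x > 0
    return rts, upper

# Same evidence quality, different response caution:
rt_low, acc_low = simulate_ddm(drift=1.0, boundary=0.8, ndt=0.3)
rt_high, acc_high = simulate_ddm(drift=1.0, boundary=1.6, ndt=0.3)
```

With a positive drift, the upper boundary is the "correct" response; raising the boundary raises accuracy and mean response time together. Separating that caution trade-off from similarity and non-decision components is what lets a process model score the task more reliably than a single difference score.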