Building on Pomatto, Strack, and Tamuz (2020), we identify a tight condition for when background risk can induce first-order stochastic dominance. Using this condition, we show that under plausible levels of background risk, no theory of choice under risk can simultaneously satisfy the following three economic postulates: (i) decision-makers are risk averse over small gambles, (ii) their preferences respect stochastic dominance, and (iii) they account for background risk. This impossibility result applies to expected utility theory, prospect theory, rank-dependent utility, and many other models. (JEL D81, D91)
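For reference, the dominance notion invoked above is the standard one (our gloss, not part of the abstract): a lottery with CDF $F$ first-order stochastically dominates one with CDF $G$ if $F(x) \le G(x)$ for every $x$, or equivalently if every decision-maker with a nondecreasing utility function weakly prefers $F$ to $G$.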
-
We study how long‐lived, rational agents learn in a social network. In every period, after observing the past actions of his neighbors, each agent receives a private signal, and chooses an action whose payoff depends only on the state. Since equilibrium actions depend on higher‐order beliefs, it is difficult to characterize behavior. Nevertheless, we show that regardless of the size and shape of the network, the utility function, and the patience of the agents, the speed of learning in any equilibrium is bounded from above by a constant that only depends on the private signal distribution.
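One concrete way to formalize "speed of learning" (a formalization we supply for illustration; the paper's exact definition may differ): if $a_t$ is an agent's action in period $t$ and $a^*$ is the action that is optimal under the true state, learning at rate $r$ means $\Pr(a_t \neq a^*)$ decays like $e^{-rt}$. The result then says that in any equilibrium $r$ is at most a constant determined by the private signal distribution alone, however large or densely connected the network.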
-
We develop an axiomatic theory of information acquisition that captures the idea of constant marginal costs in information production: the cost of generating two independent signals is the sum of their costs, and generating a signal with probability half costs half its original cost. Together with Blackwell monotonicity and a continuity condition, these axioms determine the cost of a signal up to a vector of parameters. These parameters have a clear economic interpretation and determine the difficulty of distinguishing states. (JEL D82, D83)
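In symbols (notation ours), the two constant-marginal-cost axioms read: additivity, $C(\sigma_1 \otimes \sigma_2) = C(\sigma_1) + C(\sigma_2)$ for independent signals $\sigma_1$ and $\sigma_2$; and linearity under dilution, $C(\alpha \cdot \sigma) = \alpha\, C(\sigma)$, where $\alpha \cdot \sigma$ denotes the experiment that runs $\sigma$ with probability $\alpha$ and returns an uninformative signal otherwise (the abstract's case is $\alpha = 1/2$).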
-
A single seller faces a sequence of buyers with unit demand. The buyers are forward‐looking and long‐lived. Each buyer has private information about his arrival time and valuation, where the latter evolves according to a geometric Brownian motion. Any incentive‐compatible mechanism has to induce truth‐telling about the arrival time and the evolution of the valuation. We establish that the optimal stationary allocation policy can be implemented by a simple posted price. The truth‐telling constraint regarding the arrival time can be represented as an optimal stopping problem that determines the first time at which the buyer participates in the mechanism. The optimal mechanism thus induces progressive participation by each buyer: he either participates immediately or at a future random time.
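A minimal simulation sketch of the posted-price dynamics described above, assuming (for illustration only) that the buyer purchases the first time his valuation reaches the posted price; in the paper the participation threshold is derived from an optimal stopping problem and need not equal the price, and all parameter values below are made up.

```python
import numpy as np

def first_purchase_time(p, v0, mu, sigma, T=10.0, dt=1e-3, seed=0):
    """Buyer's valuation follows a geometric Brownian motion
    dv = mu*v*dt + sigma*v*dW; the buyer purchases at the first time
    v reaches the posted price p (simplifying assumption). Returns the
    purchase time, or None if no purchase occurs before horizon T."""
    rng = np.random.default_rng(seed)
    v, t = v0, 0.0
    while t < T:
        if v >= p:   # participate: valuation has reached the posted price
            return t
        # exact GBM transition over a small step dt
        v *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal())
        t += dt
    return None

# The buyer either participates immediately (v0 >= p) or at a random future time.
print(first_purchase_time(p=1.0, v0=0.8, mu=0.05, sigma=0.3))
```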
-
We study how an agent learns from endogenous data when their prior belief is misspecified. We show that only uniform Berk–Nash equilibria can be long‐run outcomes, and that all uniformly strict Berk–Nash equilibria have an arbitrarily high probability of being the long‐run outcome for some initial beliefs. When the agent believes the outcome distribution is exogenous, every uniformly strict Berk–Nash equilibrium has positive probability of being the long‐run outcome for any initial belief. We generalize these results to settings where the agent observes a signal before acting.
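A toy illustration (ours, not the paper's setting) of the exogenous-outcome case: by Berk's classical result, a Bayesian whose model excludes the true distribution concentrates posterior mass on the parameter minimizing Kullback–Leibler divergence to the truth, which is the force behind Berk–Nash equilibrium.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 1.0                              # data are N(1, 1)
model_means = np.array([-1.0, 0.0, 3.0])     # misspecified model: excludes the truth
log_post = np.zeros_like(model_means)        # uniform prior, in logs

for x in rng.normal(true_mean, 1.0, size=5000):
    log_post += -0.5 * (x - model_means) ** 2    # Gaussian log-likelihood, unit variance

log_post -= log_post.max()                   # normalize for numerical stability
post = np.exp(log_post)
post /= post.sum()
print(dict(zip(model_means, post.round(4)))) # mass concentrates on 0.0, the KL minimizer
```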
-
We study repeated independent Blackwell experiments; standard examples include drawing multiple samples from a population, or performing a measurement in different locations. In the baseline setting of a binary state of nature, we compare experiments in terms of their informativeness in large samples. Addressing a question due to Blackwell (1951), we show that generically an experiment is more informative than another in large samples if and only if it has higher Rényi divergences. We apply our analysis to the problem of measuring the degree of dissimilarity between distributions by means of divergences. A useful property of Rényi divergences is their additivity with respect to product distributions. Our characterization of Blackwell dominance in large samples implies that every additive divergence that satisfies the data processing inequality is an integral of Rényi divergences.
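For reference, the Rényi divergence of order $\alpha \in (0,1) \cup (1,\infty)$ between discrete distributions $P$ and $Q$ is $D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log \sum_x P(x)^\alpha Q(x)^{1-\alpha}$, and the additivity mentioned above is $D_\alpha(P_1 \otimes P_2 \,\|\, Q_1 \otimes Q_2) = D_\alpha(P_1 \| Q_1) + D_\alpha(P_2 \| Q_2)$.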
-
The drift-diffusion model (DDM) is a model of sequential sampling with diffusion signals, where the decision maker accumulates evidence until the process hits either an upper or lower stopping boundary and then stops and chooses the alternative that corresponds to that boundary. In perceptual tasks, the drift of the process is related to which choice is objectively correct, whereas in consumption tasks, the drift is related to the relative appeal of the alternatives. The simplest version of the DDM assumes that the stopping boundaries are constant over time. More recently, a number of papers have used nonconstant boundaries to better fit the data. This paper provides a statistical test for DDMs with general, nonconstant boundaries. As a by-product, we show that the drift and the boundary are uniquely identified. We use our condition to nonparametrically estimate the drift and the boundary and construct a test statistic based on finite samples.
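A minimal simulation sketch of a DDM trial with a nonconstant (here, collapsing) boundary; the drift, boundary shape, and all parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

def simulate_ddm(drift, boundary, T=5.0, dt=1e-3, seed=None):
    """Simulate one drift-diffusion trial: evidence Z follows
    dZ = drift*dt + dW, and the trial stops when Z hits +boundary(t)
    (choice +1) or -boundary(t) (choice -1). Returns (choice, time),
    or (None, None) if no boundary is hit before horizon T."""
    rng = np.random.default_rng(seed)
    z, t = 0.0, 0.0
    while t < T:
        b = boundary(t)
        if z >= b:
            return +1, t
        if z <= -b:
            return -1, t
        z += drift * dt + np.sqrt(dt) * rng.normal()  # Euler step for the diffusion
        t += dt
    return None, None

# Example: positive drift with a collapsing boundary b(t) = 1 / (1 + t).
trials = [simulate_ddm(0.5, lambda t: 1.0 / (1.0 + t), seed=s) for s in range(100)]
upper = sum(1 for choice, _ in trials if choice == +1)
print(f"upper-boundary choices: {upper}/100")
```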