- We present an approach to analyse learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel “prediction accuracy” order over subjective models and observe that this makes it possible to partially restore standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions to determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e. from some or all initial beliefs). We show that these conditions can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to analyse environments where learning is “slow”, such as costly information acquisition and sequential social learning. In such environments, we illustrate that even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can generate extreme failures of learning.
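The following is standard background on misspecified Bayesian learning, included only to fix ideas for the abstract above; the notation is assumed here and the paper's "prediction accuracy" order is not reproduced. Under correct specification the posterior is a martingale with respect to the true data-generating process, which is what the usual convergence arguments exploit; under misspecification it is a martingale only under the agent's subjective model, which is why different tools are needed.

```latex
% Background sketch (standard material, notation assumed): an agent entertains
% subjective models \theta, each specifying a likelihood q_\theta over observations.
% After observing y_1, ..., y_t, Bayes' rule gives the posterior
\[
  \mu_t(\theta) \;=\;
  \frac{\mu_0(\theta)\,\prod_{s=1}^{t} q_\theta(y_s)}
       {\int \mu_0(\theta')\,\prod_{s=1}^{t} q_{\theta'}(y_s)\,\mathrm{d}\theta'} .
\]
% For i.i.d. observations with true density p^*, Berk (1966) shows that the
% posterior concentrates on the subjective models closest to the truth in
% Kullback-Leibler divergence,
\[
  \Theta^{*} \;=\; \arg\min_{\theta}\;
    D_{\mathrm{KL}}\!\bigl(p^{*} \,\|\, q_{\theta}\bigr),
\]
% but in the environments discussed above (e.g. social learning, where
% observations are endogenous) the data are not i.i.d., so this result does
% not directly apply.
```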
- We formulate a model of social interactions and misinferences by agents who neglect assortativity in their society, mistakenly believing that they interact with a representative sample of the population. A key component of our approach is the interplay between this bias and agents’ strategic incentives. We highlight a mechanism through which assortativity neglect, combined with strategic complementarities in agents’ behavior, drives up action dispersion in society (e.g., socioeconomic disparities in education investment). We also suggest that the combination of assortativity neglect and strategic incentives may be relevant in understanding empirically documented misperceptions of income inequality and political attitude polarization. (JEL C78, D11, D31, D72, D82, D91)
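A minimal numerical sketch of the dispersion mechanism described above. The linear best-response rule, the normally distributed types, and the block-assortative groups are assumptions made for illustration, not the model in the paper: agents who neglect assortativity extrapolate society's average action from their own group, and under strategic complementarity this amplifies the spread of equilibrium actions relative to correctly specified agents.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylized illustration (not the paper's model): types (e.g. family income) are
# assortatively clustered into groups, and actions follow linear best responses
#     a_i = theta_i + beta * (agent i's perceived society-wide mean action),
# where beta in (0, 1) captures strategic complementarity.
n_groups, group_size, beta = 500, 10, 0.6
group_means = rng.normal(0.0, 1.0, size=(n_groups, 1))
theta = group_means + rng.normal(0.0, 0.5, size=(n_groups, group_size))

def equilibrium_actions(theta, beta, assortativity_neglect, tol=1e-12):
    """Iterate the linear best responses to their fixed point."""
    a = theta.copy()
    for _ in range(10_000):
        if assortativity_neglect:
            # Biased agents treat their own (assortative) group's mean action
            # as representative of society as a whole.
            perceived_mean = a.mean(axis=1, keepdims=True)
        else:
            # Correctly specified agents respond to the true society-wide mean.
            perceived_mean = a.mean()
        a_next = theta + beta * perceived_mean
        if np.max(np.abs(a_next - a)) < tol:
            return a_next
        a = a_next
    return a

a_correct = equilibrium_actions(theta, beta, assortativity_neglect=False)
a_biased = equilibrium_actions(theta, beta, assortativity_neglect=True)
print(f"std of equilibrium actions, correctly specified:  {a_correct.std():.2f}")
print(f"std of equilibrium actions, assortativity neglect: {a_biased.std():.2f}")
# The biased equilibrium is markedly more dispersed: high-type agents respond to
# (and further amplify) the high actions of their high-type neighbors, and
# symmetrically for low types.
```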
- We propose a class of multiple‐prior representations of preferences under ambiguity, where the belief the decision‐maker (DM) uses to evaluate an uncertain prospect is the outcome of a game played by two conflicting forces, Pessimism and Optimism. The model does not restrict the sign of the DM's ambiguity attitude, and we show that it provides a unified framework through which to characterize different degrees of ambiguity aversion, and to represent the co‐existence of negative and positive ambiguity attitudes within individuals as documented in experiments. We prove that our baseline representation, dual‐self expected utility (DSEU), yields a novel representation of the class of invariant biseparable preferences (Ghirardato, Maccheroni, and Marinacci (2004)), which drops uncertainty aversion from maxmin expected utility (Gilboa and Schmeidler (1989)), while extensions of DSEU allow for more general departures from independence. We also provide foundations for a generalization of prior‐by‐prior belief updating to our model.
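For reference, the benchmark the abstract builds on, together with one way to write the dual-self idea in formulas; the exact DSEU functional form below is an assumption of this sketch, not a quotation from the paper.

```latex
% Maxmin expected utility (Gilboa and Schmeidler, 1989): an act f is evaluated
% by its worst-case expected utility over a single set of priors C,
\[
  V_{\mathrm{MEU}}(f) \;=\; \min_{p \in C} \int_{\Omega} u\bigl(f(\omega)\bigr)\, \mathrm{d}p(\omega).
\]
% One way to write a dual-self evaluation in the spirit of the abstract
% (functional form assumed here, not quoted from the paper): Optimism first
% selects a set of priors from a collection \mathbb{C}, then Pessimism selects
% a prior within that set,
\[
  V_{\mathrm{DSEU}}(f) \;=\; \max_{C \in \mathbb{C}} \; \min_{p \in C}
      \int_{\Omega} u\bigl(f(\omega)\bigr)\, \mathrm{d}p(\omega).
\]
% When the collection \mathbb{C} contains a single set this reduces to maxmin
% expected utility; richer collections allow ambiguity-seeking as well as
% ambiguity-averse behavior.
```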
- We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long‐run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long‐run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long‐run outcomes. Third, we show that how fragile information aggregation is against misperception depends on the richness of agents' payoff‐relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents' learning environment. The key feature behind our findings is that agents' belief‐updating becomes “decoupled” from the true state over time. We point to other environments where this feature is present and leads to similar fragility results.
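A stylized numerical illustration of why small misperceptions can matter once actions are only weakly state-dependent. The reduced-form action probabilities and all numbers below are assumptions made for illustration; this is not the paper's sequential model. An observer interprets a stream of observed actions using action frequencies computed from a slightly misperceived type share. Because the state shifts the true action frequency by only two percentage points, a misperception of about three percentage points in the type share already makes the wrong state fit the data better, and the posterior then converges to certainty on that wrong state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reduced form (not the paper's model): in the long run actions are
# driven mostly by idiosyncratic types and only weakly by the state.
# A "high"-taste agent plays action 1 with prob 0.80 in state H and 0.78 in state L;
# a "low"-taste agent with prob 0.30 in state H and 0.28 in state L.
PLAY1 = {("high", "H"): 0.80, ("high", "L"): 0.78,
         ("low", "H"): 0.30, ("low", "L"): 0.28}

q_true = 0.50                      # true share of "high"-taste agents
p_true = q_true * PLAY1[("high", "H")] + (1 - q_true) * PLAY1[("low", "H")]  # true state is H

def perceived_freq(q_hat, state):
    """Action-1 frequency the observer expects in `state`, computed with her
    (possibly misperceived) high-type share q_hat."""
    return q_hat * PLAY1[("high", state)] + (1 - q_hat) * PLAY1[("low", state)]

def llr_drift(q_hat):
    """Expected per-observation increment of log P(a | H) / P(a | L) under the
    true action frequency: positive means beliefs drift to the true state H,
    negative means beliefs drift to the wrong state L."""
    pH, pL = perceived_freq(q_hat, "H"), perceived_freq(q_hat, "L")
    return p_true * np.log(pH / pL) + (1 - p_true) * np.log((1 - pH) / (1 - pL))

for q_hat in (0.50, 0.53, 0.56):
    print(f"perceived high-type share {q_hat:.2f}: belief drift {llr_drift(q_hat):+.5f}")
# With the correct share (0.50) the drift is positive and beliefs converge to the
# true state; a roughly 3-percentage-point misperception flips the sign.

# Monte Carlo check with q_hat = 0.56: posterior on H after 20,000 observed actions.
q_hat = 0.56
pH, pL = perceived_freq(q_hat, "H"), perceived_freq(q_hat, "L")
actions = rng.random(20_000) < p_true
log_posterior_ratio = np.sum(np.where(actions, np.log(pH / pL),
                                      np.log((1 - pH) / (1 - pL))))
print("posterior probability of the true state H:",
      1.0 / (1.0 + np.exp(-log_posterior_ratio)))
```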