Title: Infant Statisticians: The Origins of Reasoning Under Uncertainty
Humans frequently make inferences about uncertain future events with limited data. A growing body of work suggests that infants and other primates make surprisingly sophisticated inferences under uncertainty. First, we ask what underlying cognitive mechanisms allow young learners to make such sophisticated inferences. We outline three possibilities (the logic, probabilistic, and heuristics views) and assess the empirical evidence for each. We argue that the weight of the empirical work favors the probabilistic view, in which early reasoning under uncertainty is grounded in inferences about the relationship between samples and populations rather than in simple heuristics. Second, we discuss the apparent contradiction between this early-emerging sensitivity to probabilities and the decades of literature suggesting that adults make limited use of base-rate and sampling principles in their inductive inferences. Third, we ask how these early inductive abilities can be harnessed to improve later mathematics education and inductive inference. We make several suggestions for future empirical work that should go a long way toward addressing the many remaining open questions in this growing research area.
Award ID(s): 1640816
NSF-PAR ID: 10118690
Author(s) / Creator(s): ;
Date Published:
Journal Name: Perspectives on Psychological Science
Volume: 14
Issue: 4
ISSN: 1745-6916
Page Range / eLocation ID: 499 to 509
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Probabilistic predictions support public health planning and decision making, especially in infectious disease emergencies. Aggregating outputs from multiple models yields more robust predictions of outcomes and associated uncertainty. While the selection of an aggregation method can be guided by retrospective performance evaluations, this is not always possible. For example, if predictions are conditional on assumptions about how the future will unfold (e.g. possible interventions), these assumptions may never materialize, precluding any direct comparison between predictions and observations. Here, we summarize literature on aggregating probabilistic predictions, illustrate various methods for infectious disease predictions via simulation, and present a strategy for choosing an aggregation method when empirical validation cannot be used. We focus on the linear opinion pool (LOP) and Vincent average, common methods that make different assumptions about between-prediction uncertainty. We contend that assumptions of the aggregation method should align with a hypothesis about how uncertainty is expressed within and between predictions from different sources. The LOP assumes that between-prediction uncertainty is meaningful and should be retained, while the Vincent average assumes that between-prediction uncertainty is akin to sampling error and should not be preserved. We provide an R package for implementation. Given the rising importance of multi-model infectious disease hubs, our work provides useful guidance on aggregation and a deeper understanding of the benefits and risks of different approaches. 
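The contrast the abstract draws can be sketched numerically. In this hypothetical example (the Gaussian components, grid, and variable names are illustrative, not taken from the paper or its R package), two predictive distributions with different centers are pooled both ways: the Vincent average keeps the spread of a single component, while the LOP widens the interval to retain the between-prediction uncertainty.

```python
# Illustrative comparison of the linear opinion pool (LOP) and the Vincent
# (quantile) average for two hypothetical Gaussian predictions.
import math
import numpy as np

def norm_cdf(x, loc, scale):
    # Gaussian CDF via the error function (no SciPy dependency).
    z = (np.asarray(x, dtype=float) - loc) / (scale * math.sqrt(2.0))
    return 0.5 * (1.0 + np.vectorize(math.erf)(z))

components = [(100.0, 10.0), (140.0, 10.0)]   # (mean, sd); values are made up
probs = np.linspace(0.01, 0.99, 99)           # probability levels to report
grid = np.linspace(40.0, 200.0, 8001)         # evaluation grid for inversion

cdfs = np.array([norm_cdf(grid, m, s) for m, s in components])

# Vincent average: invert each component CDF, then average quantiles pointwise.
component_q = np.array([np.interp(probs, c, grid) for c in cdfs])
vincent_q = component_q.mean(axis=0)

# Linear opinion pool: average the CDFs first, then invert the pooled CDF.
lop_q = np.interp(probs, cdfs.mean(axis=0), grid)

# Width of the central 98% interval under each aggregation method.
vincent_width = vincent_q[-1] - vincent_q[0]  # ~46.5: same spread as one component
lop_width = lop_q[-1] - lop_q[0]              # ~81: between-model spread retained
```

The Vincent average here behaves exactly like a single component shifted to the average center, consistent with treating between-prediction disagreement as sampling error; the LOP's wider interval reflects treating that disagreement as real uncertainty.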
  2. Personalization on digital platforms drives a broad range of harms, including misinformation, manipulation, social polarization, subversion of autonomy, and discrimination. In recent years, policy makers, civil society advocates, and researchers have proposed a wide range of interventions to address these challenges. This Article argues that the emerging toolkit reflects an individualistic view of both personal data and data-driven harms that will likely be inadequate to address growing harms in the global data ecosystem. It maintains that interventions must be grounded in an understanding of the fundamentally collective nature of data, wherein platforms leverage complex patterns of behaviors and characteristics observed across a large population to draw inferences and make predictions about individuals. Using the lens of the collective nature of data, this Article evaluates various approaches to addressing personalization-driven harms under current consideration. It also frames concrete guidance for future legislation in this space and for meaningful transparency that goes far beyond current transparency proposals. It offers a roadmap for what meaningful transparency must constitute: a collective perspective providing a third party with ongoing insight into the information gathered and observed about individuals and how it correlates with any personalized content they receive across a large, representative population. These insights would enable the third party to understand, identify, quantify, and address cases of personalization-driven harms. This Article discusses how such transparency can be achieved without sacrificing privacy and provides guidelines for legislation to support the development of such transparency. 
  3. dos Reis, Mario (Ed.)
    Abstract: Ancestral sequence reconstruction (ASR) uses an alignment of extant protein sequences, a phylogeny describing the history of the protein family, and a model of the molecular-evolutionary process to infer the sequences of ancient proteins, allowing researchers to directly investigate the impact of sequence evolution on protein structure and function. Like all statistical inferences, ASR can be sensitive to violations of its underlying assumptions. Previous studies have shown that, whereas phylogenetic uncertainty has only a very weak impact on ASR accuracy, uncertainty in the protein sequence alignment can more strongly affect inferred ancestral sequences. Here, we show that errors in sequence alignment can produce errors in ASR across a range of realistic and simplified evolutionary scenarios. Importantly, sequence reconstruction errors can lead to errors in estimates of structural and functional properties of ancestral proteins, potentially undermining the reliability of analyses relying on ASR. We introduce an alignment-integrated ASR approach that combines information from many different sequence alignments. We show that integrating alignment uncertainty improves ASR accuracy and the accuracy of downstream structural and functional inferences, often performing as well as highly accurate structure-guided alignment. Given the growing evidence that sequence alignment errors can impact the reliability of ASR studies, we recommend that future studies incorporate approaches to mitigate the impact of alignment uncertainty. Probabilistic modeling of insertion and deletion events has the potential to radically improve ASR accuracy when the model reflects the true underlying evolutionary history, but further studies are required to thoroughly evaluate the reliability of these approaches under realistic conditions.
  4. This research paper focuses on the effect of recent national events on first-year engineering students’ attitudes about their political identity, social welfare, perspectives of diversity, and approaches to social situations. Engineering classrooms and cultures often focus on mastery of content and technical expertise, with little prioritization given to integrating social issues into engineering. This depoliticization (i.e., the removal of social issues) in engineering diminishes the importance of issues related to including diverse individuals in engineering, working in diverse teams, and developing cultural sensitivity. This study resulted from the shift in the national discourse, during the 2016 presidential election, around diversity and identities in and out of the academy. We were collecting interview data as a part of a larger study on students’ attitudes about diversity in teams. Because these national events could affect students’ perceptions of our research topic, we changed a portion of our interviews to discuss national events in science, technology, engineering, and mathematics (STEM) classrooms and how students viewed these events in relation to engineering. We interviewed first-year undergraduate students (n = 12) who indicated large differences in attitudes toward diverse individuals, experiences with diverse team members, and/or residing at the intersection of multiple diversity markers. We asked participants during the Spring of 2017 to reflect on the personal impact of recent national events and how political discussions have or have not been integrated into their STEM classrooms. During interviews students were asked: 1) Have recent national events impacted you in any way? 2) Have national events been discussed in your STEM classes? 3) If so, what was discussed and how was it discussed? 4) Do these conversations have a place in STEM classes? 5) Are there events you wish were discussed that have not been?
Inductive coding was used to analyze interviews and develop themes that were audited for quality by the author team. Two preliminary themes emerged from analysis: political awareness and future-self impact. Students expressed awareness of current political events at the local, national, and global levels. They recognized personal and social impacts that these events imposed on close friends, family members, and society. However, students were unsure of how to interpret political dialogue as it relates to policy in engineering disciplines and practices. This uncertainty led students to question their future selves or careers in engineering. As participants continued to discuss their uncertainty, they expressed a desire to make explicit connections between politics and STEM and their eventual careers in STEM. These findings suggest that depoliticization in the classroom leaves engineering students with limited consciousness of how political issues are relevant to their field. This disconnect of political discourse in the classroom gives us a better understanding of how engineering students make sense of current national events in the face of depoliticization. By re-politicizing STEM classrooms in a way relevant to students’ futures, educators can better utilize important dialogues to help students understand how their role as engineers influences society and how the experiences of society can influence their practice of engineering.
  5. Parameter-space regularization in neural network optimization is a fundamental tool for improving generalization. However, standard parameter-space regularization methods make it challenging to encode explicit preferences about desired predictive functions into neural network training. In this work, we approach regularization in neural networks from a probabilistic perspective and show that by viewing parameter-space regularization as specifying an empirical prior distribution over the model parameters, we can derive a probabilistically well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training. This method—which we refer to as function-space empirical Bayes (FS-EB)—includes both parameter- and function-space regularization, is mathematically simple, easy to implement, and incurs only minimal computational overhead compared to standard regularization techniques. We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection, highly calibrated predictive uncertainty estimates, successful task adaptation from pre-trained models, and improved generalization under covariate shift.
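The probabilistic reading of parameter-space regularization rests on a standard correspondence: an L2 (weight-decay) penalty equals, up to an additive constant, the negative log density of a zero-mean isotropic Gaussian prior over the parameters. A minimal sketch of that correspondence follows (variable names and the penalty strength are illustrative; this is the textbook identity, not the paper's FS-EB method itself):

```python
# Weight decay as a Gaussian prior: lam * ||theta||^2 matches the negative log
# density of N(0, sigma^2 I) with sigma^2 = 1/(2*lam), up to a constant.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=5)      # stand-in for flattened network parameters
lam = 0.1                       # weight-decay strength (illustrative)

# Parameter-space regularizer and its gradient.
penalty = lam * np.sum(theta ** 2)
penalty_grad = 2.0 * lam * theta

# The same quantity read as a prior: negative log density of N(0, sigma^2 I)
# with the normalizing constant dropped, and its gradient.
sigma2 = 1.0 / (2.0 * lam)
neg_log_prior = 0.5 * np.sum(theta ** 2) / sigma2
neg_log_prior_grad = theta / sigma2
```

Minimizing loss-plus-penalty is thus MAP estimation under this prior; per the abstract, FS-EB builds on this view by making the prior empirical and adding a function-space term, so that preferences about predictive functions, not just parameter magnitudes, enter training.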