Title: A cultural evolutionary theory that explains both gradual and punctuated change
Cumulative cultural evolution (CCE) occurs among humans who may be presented with many similar options from which to choose, as well as many social influences and diverse environments. It is unknown what general principles underlie the wide range of CCE dynamics and whether they can all be explained by the same unified paradigm. Here, we present a scalable evolutionary model of discrete choice with social learning, based on a few behavioural science assumptions. This paradigm connects the degree of transparency in social learning to the human tendency to imitate others. Computer simulations and quantitative analysis show that the interaction of three primary factors (information transparency, popularity bias and population size) drives the pace of CCE. The model predicts a stable rate of evolutionary change for modest degrees of popularity bias. As popularity bias grows, a transition from gradual to punctuated change occurs, with maladaptive subpopulations emerging spontaneously. When popularity bias becomes too severe, CCE stops. This provides a consistent framework for explaining the rich and complex adaptive dynamics taking place in the real world, such as in modern digital media.
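The abstract does not specify the model's equations, so the following is only a minimal illustrative sketch, not the authors' model: agents repeatedly choose among discrete options with probability proportional to a quality signal times popularity raised to a hypothetical conformity exponent beta, which stands in for the paper's popularity bias.

```python
import random

def simulate_cce(n_agents=200, n_options=10, beta=1.0, steps=50, seed=0):
    """Toy discrete-choice model with popularity-biased social learning.

    beta is a hypothetical conformity exponent: beta=0 ignores popularity,
    while large beta amplifies it (a stand-in for popularity bias).
    """
    rng = random.Random(seed)
    # each option has a fixed latent quality; agents start with random choices
    quality = [rng.random() for _ in range(n_options)]
    choices = [rng.randrange(n_options) for _ in range(n_agents)]
    for _ in range(steps):
        counts = [choices.count(k) for k in range(n_options)]
        # choice weight = quality signal times popularity raised to beta
        weights = [quality[k] * (counts[k] + 1) ** beta
                   for k in range(n_options)]
        total = sum(weights)
        probs = [w / total for w in weights]
        choices = rng.choices(range(n_options), probs, k=n_agents)
    return choices

# with strong popularity bias the population tends to lock onto few options
final = simulate_cce(beta=3.0)
print(len(set(final)))  # number of options still in use
```

Varying beta in such a sketch is one way to probe the gradual-to-punctuated transition the abstract describes, though the paper's actual dynamics depend on its specific transparency mechanism.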
Award ID(s):
1749348
PAR ID:
10434592
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Journal of The Royal Society Interface
Volume:
19
Issue:
196
ISSN:
1742-5662
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Machine learning (ML) has been shown to be an effective alternative to physical models for quality prediction and process optimization of metal additive manufacturing (AM). However, the inherent “black box” nature of ML techniques such as those represented by artificial neural networks has often made it difficult to interpret ML outcomes in the framework of the complex thermodynamics that govern AM. While the practical benefits of ML provide an adequate justification, its utility as a reliable modeling tool ultimately relies on assured consistency with physical principles and model transparency. To meet these fundamental needs, physics-informed machine learning (PIML) has emerged as a hybrid machine learning paradigm that imbues ML models with physical domain knowledge such as thermomechanical laws and constraints. The distinguishing feature of PIML is the synergistic integration of data-driven methods that reflect system dynamics in real time with the governing physics underlying AM. In this paper, the current state of the art in metal AM is reviewed and opportunities for a paradigm shift to PIML are discussed, thereby identifying relevant future research directions.
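As a hedged illustration of the PIML idea described above (not taken from any specific reviewed work), a composite loss can combine a data-misfit term with a physics penalty built from a discretized governing law; here the residual of the 1D heat equation is used as an example, and the function names and weighting parameter lam are hypothetical.

```python
import numpy as np

def heat_residual(u, dx, dt, alpha):
    """Finite-difference residual of u_t - alpha * u_xx for a field u[t, x].

    A zero residual means the sampled field satisfies the heat equation
    at the interior collocation points.
    """
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx ** 2
    return u_t - alpha * u_xx

def piml_loss(pred, target, residual, lam=0.5):
    """Hypothetical composite loss: data misfit plus a weighted physics penalty."""
    data_loss = np.mean((pred - target) ** 2)
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss
```

In an actual PIML model the residual would be evaluated on the network's predictions and lam tuned to balance fidelity to data against fidelity to the governing physics.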
  2. Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM’s process for solving a task. This level of transparency into LLMs’ predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model’s prediction. We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs—e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always “(A)”—which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods. 
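The biasing manipulation described above, reordering multiple-choice options so the correct answer is always "(A)", can be sketched as follows; bias_to_a is a hypothetical helper for constructing such prompts, not the authors' code.

```python
def bias_to_a(question, options, answer_index):
    """Reorder options so the correct answer always appears as '(A)'.

    This is the kind of biasing feature a CoT-faithfulness probe inserts
    into few-shot prompts; the model should, but typically does not,
    mention the reordering in its explanation.
    """
    reordered = [options[answer_index]] + [
        opt for i, opt in enumerate(options) if i != answer_index
    ]
    letters = "ABCDEFGH"
    lines = [question] + [
        f"({letters[i]}) {opt}" for i, opt in enumerate(reordered)
    ]
    return "\n".join(lines)
```

Comparing model accuracy and explanations on biased versus unbiased prompts is the basic experimental contrast the abstract reports.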
  3. Cities with strong local music scenes enjoy many social and economic benefits. To this end, we are interested in developing a locally focused artist and event recommendation system called Localify.org that supports and promotes local music scenes. In this demo paper, we describe both the overall system architecture and our core recommendation algorithm. This algorithm uses artist-artist similarity information, rather than user-artist preference information, to bootstrap recommendations while the user base grows. The overall design of Localify was chosen based on the fact that local artists tend to be relatively obscure and reside in the long tail of the artist popularity distribution. We discuss the role of popularity bias and how we attempt to ameliorate it in the context of local music recommendation.
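A minimal sketch of item-item bootstrapping of the kind described above, assuming each artist is represented by a feature vector (e.g., genre-tag counts) and ranked by cosine similarity to a seed artist; this is an illustrative assumption, not the actual Localify algorithm.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similar_artists(seed, vectors, k=2):
    """Rank artists by similarity to a seed artist.

    Uses only artist-artist information, so recommendations work even
    before any user-artist preference data has accumulated.
    """
    scores = [(other, cosine(vectors[seed], vec))
              for other, vec in vectors.items() if other != seed]
    scores.sort(key=lambda t: t[1], reverse=True)
    return [name for name, _ in scores[:k]]
```

Because similarity is computed over content features rather than play counts, obscure long-tail artists can surface without first becoming popular.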
  4. The idealization of a static machine-learned model, trained once and deployed forever, is not practical. As input distributions change over time, not only will the model lose accuracy, but any constraints intended to reduce bias against a protected class may also fail to work as intended. Thus, researchers have begun to explore ways to maintain algorithmic fairness over time. One line of work focuses on dynamic learning (retraining after each batch); the other on robust learning, which tries to make algorithms robust against all possible future changes. Dynamic learning seeks to reduce biases soon after they have occurred, and robust learning often yields (overly) conservative models. We propose an anticipatory dynamic learning approach that corrects the algorithm to mitigate bias before it occurs. Specifically, we use anticipations regarding the relative distributions of population subgroups (e.g., relative ratios of male and female applicants) in the next cycle to identify the right parameters for an importance-weighting fairness approach. Results from experiments over multiple real-world datasets suggest that this approach has promise for anticipatory bias correction.
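The importance-weighting idea can be sketched as follows, assuming subgroup proportions are known for the current training cycle and anticipated for the next; the function and its signature are hypothetical, not the paper's implementation.

```python
def anticipatory_weights(example_groups, current_ratio, anticipated_ratio):
    """Per-example importance weights w_g = anticipated_g / current_g.

    example_groups : subgroup label for each training example
    current_ratio  : subgroup proportions in the current training data
    anticipated_ratio : subgroup proportions expected in the next cycle

    Upweighting groups expected to grow makes training reflect the
    anticipated composition before the shift actually occurs.
    """
    return [anticipated_ratio[g] / current_ratio[g] for g in example_groups]
```

For example, if female applicants are expected to rise from 50% to 60% of the pool, their training examples receive weight 1.2 and male examples weight 0.8.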
  5. By quantifying Twitter activity and sentiment for each of 274 neighborhood areas in New York City, this study introduces the Neighborhood Popularity Index and correlates changes in the index with real estate prices, a common measure of neighborhood change. Results show that social media provide both a near-real-time indicator of shifting attitudes toward neighborhoods and an early warning measure of future changes in neighborhood composition and demand. Although social media data provide an important complement to traditional data sources, the use of social media for neighborhood studies raises concerns regarding data accessibility and equity issues in data representativeness and bias. 
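The study's actual index construction is not specified here; as a toy illustration only, one could combine normalized tweet activity with mean sentiment per neighborhood, which conveys the general shape of an activity-and-sentiment index.

```python
def popularity_index(tweet_counts, sentiments):
    """Toy index: normalized tweet activity times mean sentiment.

    tweet_counts : tweets mentioning each neighborhood
    sentiments   : per-tweet sentiment scores for each neighborhood
    """
    max_count = max(tweet_counts.values())
    return {
        n: (tweet_counts[n] / max_count) * (sum(scores) / len(scores))
        for n, scores in sentiments.items()
    }
```

Tracking changes in such an index over time, and correlating them with real estate prices, is the kind of analysis the abstract describes.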