Title: City Population, Majority Group Size, and Residential Segregation Drive Implicit Racial Biases in U.S. Cities
Implicit biases, expressed as differential treatment towards out-group members, are pervasive in human societies. These biases are often racial or ethnic in nature and create disparities and inequities across many aspects of life. Recent research has revealed that implicit biases are, for the most part, driven by social contexts and local histories. However, it has remained unclear whether and how the regular ways in which human societies self-organize in cities produce systematic variation in implicit bias strength. Here we leverage extensions of the mathematical models of urban scaling theory to predict and test between-city differences in implicit racial biases. Our model comprehensively links scales of organization from city-wide infrastructure to individual psychology to quantitatively predict that cities that are (1) more populous, (2) more diverse, and (3) less segregated have lower levels of implicit biases. We find broad empirical support for each of these predictions in U.S. cities for data spanning a decade of racial implicit association tests from millions of individuals. We conclude that the organization of cities strongly drives the strength of implicit racial biases and provides potential systematic intervention targets for the development and planning of more equitable societies.
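The core prediction can be illustrated with a small cross-sectional regression sketch. This is a minimal illustration, not the paper's actual model: the data below are simulated, and the linear-in-logs specification and predictors (log population, majority-group share, and a dissimilarity-style segregation index) are assumptions standing in for the scaling-theory derivation.

```python
import numpy as np

# Simulated city-level data; in the study these would come from aggregated
# IAT D-scores and U.S. Census demographics (all values here are illustrative).
rng = np.random.default_rng(0)
n_cities = 200
log_pop = rng.uniform(4.5, 7.0, n_cities)          # log10 of city population
majority_share = rng.uniform(0.4, 0.95, n_cities)  # share of the largest racial group
segregation = rng.uniform(0.2, 0.8, n_cities)      # dissimilarity-style index in [0, 1]

# Outcome generated to match the predicted signs: bias falls with population
# and diversity, and rises with segregation (coefficients are made up).
bias = (0.45 - 0.03 * log_pop + 0.20 * majority_share
        + 0.15 * segregation + rng.normal(0, 0.02, n_cities))

# Ordinary least squares: bias ~ log10(population) + majority share + segregation.
X = np.column_stack([np.ones(n_cities), log_pop, majority_share, segregation])
coef, *_ = np.linalg.lstsq(X, bias, rcond=None)
print(dict(zip(["intercept", "log_pop", "majority_share", "segregation"],
               np.round(coef, 3))))
```

Under the paper's predictions, real data would yield a negative coefficient on log population and positive coefficients on majority-group share and segregation.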
Award ID(s):
1952050
PAR ID:
10466670
Author(s) / Creator(s):
Publisher / Repository:
NSF-PAR
Date Published:
Journal Name:
SSRN Electronic Journal
ISSN:
1556-5068
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Implicit biases - differential attitudes towards members of distinct groups - are pervasive in human societies and create inequities across many aspects of life. Recent research has revealed that implicit biases are generally driven by social contexts, but not whether they are systematically influenced by the ways that humans self-organize in cities. We leverage complex system modeling in the framework of urban scaling theory to predict differences in these biases between cities. Our model links spatial scales from city-wide infrastructure to individual psychology to predict that cities that are more populous, more diverse, and less segregated are less biased. We find empirical support for these predictions in U.S. cities with Implicit Association Test data spanning a decade from 2.7 million individuals and U.S. Census demographic data. Additionally, we find that changes in cities' social environments precede changes in implicit biases at short time-scales, but this relationship is bi-directional at longer time-scales. We conclude that the social organization of cities may influence the strength of these biases.
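The lead-lag finding in this abstract can be probed with lagged cross-correlations between city-level time series. A minimal sketch under stated assumptions: the yearly series below are simulated, and correlating year-over-year changes at several lags is one simple stand-in for the paper's temporal analysis.

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Correlation between x[t] and y[t + lag]; a positive lag means x leads y."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

# Simulated decade-long yearly series for one city (illustrative numbers):
# bias is constructed to respond to the *previous* year's diversity.
rng = np.random.default_rng(1)
diversity = np.cumsum(rng.normal(0.01, 0.005, 12))
lagged_div = np.concatenate(([diversity[0]], diversity[:-1]))
bias = 0.4 - 0.5 * lagged_div + rng.normal(0, 0.005, 12)

# Differencing focuses the test on changes preceding changes.
d_div, d_bias = np.diff(diversity), np.diff(bias)
for lag in (-2, -1, 0, 1, 2):
    print(f"lag {lag:+d}: r = {lagged_corr(d_div, d_bias, lag):+.3f}")
```

If environment changes precede bias changes, the correlation should peak at a positive lag (diversity leading); a bi-directional relationship would show comparable correlations at positive and negative lags.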
  2. While advances in fairness and alignment have helped mitigate overt biases exhibited by large language models (LLMs) when explicitly prompted, we hypothesize that these models may still exhibit implicit biases when simulating human behavior. To test this hypothesis, we propose a technique to systematically uncover such biases across a broad range of sociodemographic categories by assessing decision-making disparities among agents with LLM-generated, sociodemographically informed personas. Using our technique, we tested six LLMs across three sociodemographic groups and four decision-making scenarios. Our results show that state-of-the-art LLMs exhibit significant sociodemographic disparities in nearly all simulations, with more advanced models exhibiting greater implicit biases despite reducing explicit biases. Furthermore, when comparing our findings to real-world disparities reported in empirical studies, we find that the biases we uncovered are directionally aligned but markedly amplified. This directional alignment highlights the utility of our technique in uncovering systematic biases in LLMs rather than random variations; moreover, the presence and amplification of implicit biases emphasize the need for novel strategies to address these biases.
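The persona-audit idea can be sketched as a repeated, otherwise-identical decision query whose persona field is the only thing that varies. Everything below is hypothetical: query_llm is a stand-in for whatever model client is used (mocked here with random choices), and the scenario and persona strings are invented for illustration.

```python
import random

def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; replace with your model client.
    # The random mock means the measured gap here will hover near zero.
    return random.choice(["approve", "deny"])

SCENARIO = ("You are a loan officer. Applicant: {persona}. Income, savings, "
            "and credit history are identical across all applicants. "
            "Respond with exactly one word: approve or deny.")

PERSONAS = {  # hypothetical sociodemographic personas
    "group_a": "a 35-year-old white man",
    "group_b": "a 35-year-old Black man",
}

def approval_rate(persona: str, trials: int = 100) -> float:
    """Fraction of 'approve' responses over repeated, otherwise-identical prompts."""
    hits = sum(query_llm(SCENARIO.format(persona=persona)).strip().lower()
               .startswith("approve") for _ in range(trials))
    return hits / trials

rates = {g: approval_rate(p) for g, p in PERSONAS.items()}
print(rates, "gap:", round(rates["group_a"] - rates["group_b"], 3))
```

Because the scenario text holds every decision-relevant attribute fixed, any persistent gap in approval rates is attributable to the persona, which is the disparity the paper measures.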
  3. Abstract
Background: Accumulating evidence suggests that the human microbiome impacts individual and public health. City subway systems are human-dense environments, where passengers often exchange microbes. The MetaSUB project participants collected samples from subway surfaces in different cities and performed metagenomic sequencing. Previous studies focused on the taxonomic composition of these microbiomes, and no explicit functional analysis had been done until now.
Results: As a part of the 2018 CAMDA challenge, we functionally profiled the available ~400 subway metagenomes and built a predictor of city origin. In cross-validation, our model reached 81% accuracy when only the top-ranked city assignment was considered and 95% accuracy if the second city was taken into account as well. Notably, this performance was only achievable if the distribution of cities in the training and testing sets was similar. To ensure that our methods are applicable without such biased assumptions, we balanced our training data to account for all represented cities equally well. After balancing, the performance of our method was slightly lower (76%/94%, respectively, for one or two top-ranked cities), but still consistently high, with the added benefit of independence from the training set's city representation. In testing, our unbalanced model thus reached an (over-estimated) performance of 90%/97%, while our balanced model was at a more reliable 63%/90% accuracy. While, by definition of our model, we were not able to predict the origins of microbiomes from previously unseen cities, our balanced model correctly judged them to be NOT from the training cities over 80% of the time. Our function-based outlook on microbiomes also allowed us to note similarities between both regionally close and far-away cities. Curiously, we identified a depletion in mycobacterial functions as a signature of cities in New Zealand, while photosynthesis-related functions fingerprinted New York, Porto, and Tokyo.
Conclusions: We demonstrated the power of our high-speed function annotation method, mi-faser, by analysing ~400 shotgun metagenomes in 2 days, with the results recapitulating functional signals of different city subway microbiomes. We also showed the importance of balanced data in avoiding over-estimated performance. Our results revealed similarities between both geographically close (Ofa and Ilorin) and distant (Boston and Porto, Lisbon and New York) city subway microbiomes. The photosynthesis-related functional signatures of NYC were previously unseen in taxonomy studies, highlighting the strength of functional analysis.
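The balancing-and-evaluation step described in this abstract can be sketched with standard tooling. A minimal sketch, with assumptions flagged: the feature matrix below is random stand-in data (not mi-faser profiles), scikit-learn's class_weight="balanced" stands in for the paper's balancing procedure, and the top-1/top-2 scoring mirrors the reported one- and two-city accuracies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict

# Stand-in for functional profiles: rows = metagenomes, columns = per-function
# relative abundances; y = integer city-of-origin labels (8 pretend cities).
rng = np.random.default_rng(2)
X = rng.random((400, 50))
y = rng.integers(0, 8, 400)

# class_weight='balanced' reweights samples so every city counts equally,
# standing in for the balancing step that avoided inflated accuracy estimates.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
proba = cross_val_predict(clf, X, y, cv=StratifiedKFold(5),
                          method="predict_proba")

top1 = (proba.argmax(axis=1) == y).mean()
# Top-2: count a sample as correct if the true city is among the two
# highest-probability predictions.
top2 = np.any(proba.argsort(axis=1)[:, -2:] == y[:, None], axis=1).mean()
print(f"top-1: {top1:.2f}, top-2: {top2:.2f}")  # ~chance here: the data are random
```

On the random stand-in data both scores sit near chance; the point of the sketch is the evaluation structure, not the numbers.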
  4. The spatial patterning of present-day racial bias in Southern states is predicted by the prevalence of slavery in 1860 and the structural inequalities that followed. Here we extend the investigation of the historical roots of implicit bias to areas outside the South by tracing the Great Migration of Black southerners to Northern and Western states. We found that the proportion of Black residents in each county (N = 1,981 counties) during the years of the Great Migration (1900–1950) was significantly associated with greater implicit bias among White residents today. The association was statistically explained by measures of structural inequalities. Results parallel the pattern seen in Southern states but reflect population changes that occurred decades later, as cities reacted to larger Black populations. These findings suggest that implicit biases reflect structural inequalities and the historical conditions that produced them.
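The phrase "statistically explained by" describes a mediation-style attenuation: the history-to-bias slope should shrink once structural-inequality measures enter the regression. A minimal sketch on simulated county data (variable names and coefficients are invented; the study's N = 1,981 is borrowed only for scale):

```python
import numpy as np

# Simulated county-level data:
#   hist_prop  - Black population share during the Great Migration (1900-1950)
#   inequality - composite index of present-day structural inequalities
#   bias       - present-day mean implicit bias among White residents
rng = np.random.default_rng(3)
n = 1981
hist_prop = rng.beta(2, 8, n)
inequality = 0.6 * hist_prop + rng.normal(0, 0.05, n)  # mediator tracks history
bias = 0.5 * inequality + rng.normal(0, 0.05, n)       # bias tracks the mediator

def first_slope(y, *predictors):
    """Slope on the first predictor from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Mediation logic: controlling for the mediator should pull the
# history -> bias slope toward zero.
print("total effect:  ", round(first_slope(bias, hist_prop), 3))
print("with mediator: ", round(first_slope(bias, hist_prop, inequality), 3))
```

The attenuated second coefficient is the sketch's analogue of the paper's finding that structural inequalities statistically account for the historical association.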
  5. Automatic scene classification has applications ranging from urban planning to autonomous driving, yet little is known about how well these systems work across social differences. We investigate explicit and implicit biases in deep learning architectures, including deep convolutional neural networks (dCNNs) and multimodal large language models (MLLMs). We examined nearly one million images from user-submitted photographs and Airbnb listings from over 200 countries as well as all 3320 US counties. To isolate scene-specific biases, we ensured no people were in any of the photos. We found significant explicit socioeconomic biases across all models, including lower classification accuracy, higher classification uncertainty, and increased tendencies to assign labels that could be offensive when applied to homes (e.g., “slum”) in images from homes with lower socioeconomic status. We also found significant implicit biases, with pictures from lower socioeconomic conditions more aligned with word embeddings from negative concepts. All trends were consistent across countries and within the diverse economic and racial landscapes of the United States. This research thus demonstrates a novel bias in computer vision, emphasizing the need for more inclusive and representative training datasets. 
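The two bias families this abstract reports can be sketched as group-wise summary statistics over model outputs. Everything below is simulated and illustrative: the SES split, accuracies, softmax distributions, and embeddings are invented stand-ins, with entropy as the uncertainty measure and cosine similarity to a negative-concept vector as the implicit-bias probe.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# Simulated per-image records for home scenes, split by a binary
# socioeconomic indicator (True = lower-SES source).
low_ses = rng.integers(0, 2, n).astype(bool)

# Explicit bias, part 1: a classification-accuracy gap between groups.
correct = rng.random(n) < np.where(low_ses, 0.70, 0.85)

# Explicit bias, part 2: higher uncertainty (flatter softmax) for low-SES images.
logits = rng.normal(0.0, np.where(low_ses[:, None], 1.0, 2.0), (n, 5))
proba = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)

for name, mask in [("low-SES ", low_ses), ("high-SES", ~low_ses)]:
    print(name, "accuracy:", round(correct[mask].mean(), 3),
          "mean entropy:", round(entropy[mask].mean(), 3))

# Implicit bias probe: cosine similarity between image embeddings and a
# negative-concept word embedding (all vectors random here).
img_emb = rng.normal(size=(n, 64))
neg_emb = rng.normal(size=64)
cos = img_emb @ neg_emb / (np.linalg.norm(img_emb, axis=1)
                           * np.linalg.norm(neg_emb))
print("negative alignment, low vs high SES:",
      round(cos[low_ses].mean(), 3), round(cos[~low_ses].mean(), 3))
```

In the paper's findings, the low-SES group shows lower accuracy, higher entropy, and stronger alignment with negative concepts; here only the first two gaps are built into the simulation.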