This content will become publicly available on December 1, 2026

Title: Digital divides in scene recognition: uncovering socioeconomic biases in deep learning systems
Automatic scene classification has applications ranging from urban planning to autonomous driving, yet little is known about how well these systems work across social differences. We investigate explicit and implicit biases in deep learning architectures, including deep convolutional neural networks (dCNNs) and multimodal large language models (MLLMs). We examined nearly one million images from user-submitted photographs and Airbnb listings from over 200 countries as well as all 3320 US counties. To isolate scene-specific biases, we ensured no people were in any of the photos. We found significant explicit socioeconomic biases across all models, including lower classification accuracy, higher classification uncertainty, and increased tendencies to assign labels that could be offensive when applied to homes (e.g., “slum”) in images from homes with lower socioeconomic status. We also found significant implicit biases, with pictures from lower socioeconomic conditions more aligned with word embeddings from negative concepts. All trends were consistent across countries and within the diverse economic and racial landscapes of the United States. This research thus demonstrates a novel bias in computer vision, emphasizing the need for more inclusive and representative training datasets.
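A minimal sketch of the kind of image-to-concept comparison behind the implicit-bias measure, assuming a CLIP joint image-text embedding model loaded through the Hugging Face transformers library; the concept word lists, model choice, and scoring below are illustrative placeholders rather than the authors' exact setup.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative concept lists; the paper's actual word sets may differ.
NEGATIVE = ["poverty", "unpleasant", "dirty", "unsafe"]
POSITIVE = ["wealth", "pleasant", "clean", "safe"]

def negative_association(image_path: str) -> float:
    """Mean cosine similarity of the image embedding to the negative concepts
    minus its mean similarity to the positive concepts; larger values indicate
    a stronger negative association for the scene."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=NEGATIVE + POSITIVE, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    sims = (img @ txt.T).squeeze(0)          # one similarity per concept word
    k = len(NEGATIVE)
    return (sims[:k].mean() - sims[k:].mean()).item()

Comparing this score across images grouped by household income (or by listing price as a proxy) would show whether scenes from lower socioeconomic conditions sit closer to the negative concepts in the embedding space, which is the pattern the abstract reports.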
Award ID(s):
1920896
PAR ID:
10599036
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
Springer Nature
Date Published:
Journal Name:
Humanities and Social Sciences Communications
Volume:
12
Issue:
1
ISSN:
2662-9992
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. While advances in fairness and alignment have helped mitigate overt biases exhibited by large language models (LLMs) when explicitly prompted, we hypothesize that these models may still exhibit implicit biases when simulating human behavior. To test this hypothesis, we propose a technique to systematically uncover such biases across a broad range of sociodemographic categories by assessing decision-making disparities among agents with LLM-generated, sociodemographically-informed personas. Using our technique, we tested six LLMs across three sociodemographic groups and four decision-making scenarios. Our results show that state-of-the-art LLMs exhibit significant sociodemographic disparities in nearly all simulations, with more advanced models exhibiting greater implicit biases despite reducing explicit biases. Furthermore, when comparing our findings to real-world disparities reported in empirical studies, we find that the biases we uncovered are directionally aligned but markedly amplified. This directional alignment highlights the utility of our technique in uncovering systematic biases in LLMs rather than random variations; moreover, the presence and amplification of implicit biases emphasizes the need for novel strategies to address these biases. 
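    A minimal sketch of the persona-based probing idea, assuming only a generic llm callable that returns a text completion; the personas, scenario, and sample size are hypothetical placeholders, not the prompts or models used in the study.

    from typing import Callable

    # Hypothetical persona pair and scenario; the study used LLM-generated
    # personas spanning several sociodemographic categories and scenarios.
    PERSONAS = {"group_a": "a 35-year-old man", "group_b": "a 35-year-old woman"}
    SCENARIO = ("You are {persona}. You receive a job offer of $60,000. "
                "Respond with exactly one word: 'negotiate' or 'accept'.")

    def decision_rate(llm: Callable[[str], str], persona: str, n: int = 100) -> float:
        """Fraction of n independent simulations in which the persona-conditioned
        agent chooses to negotiate."""
        answers = (llm(SCENARIO.format(persona=persona)) for _ in range(n))
        return sum(a.strip().lower().startswith("negotiate") for a in answers) / n

    def disparity(llm: Callable[[str], str]) -> float:
        """Gap in negotiation rates between the two persona groups; a persistent
        nonzero gap indicates an implicit sociodemographic bias in the simulation."""
        return decision_rate(llm, PERSONAS["group_a"]) - decision_rate(llm, PERSONAS["group_b"])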
  2. Abstract Underwater imaging enables nondestructive plankton sampling at frequencies, durations, and resolutions unattainable by traditional methods. These systems necessitate automated processes to identify organisms efficiently. Early underwater image processing used a standard approach: binarizing images to segment targets, then integrating deep learning models for classification. While intuitive, this infrastructure has limitations in handling high concentrations of biotic and abiotic particles, rapid changes in dominant taxa, and highly variable target sizes. To address these challenges, we introduce a new framework that starts with a scene classifier to capture large within‐image variation, such as disparities in the layout of particles and dominant taxa. After scene classification, scene‐specific Mask regional convolutional neural network (Mask R‐CNN) models are trained to separate target objects into different groups. The procedure allows information to be extracted from different image types, while minimizing potential bias for commonly occurring features. Using in situ coastal plankton images, we compared the scene‐specific models to the Mask R‐CNN model encompassing all scene categories as a single full model. Results showed that the scene‐specific approach outperformed the full model by achieving a 20% accuracy improvement in complex noisy images. The full model yielded counts that were up to 78% lower than those enumerated by the scene‐specific model for some small‐sized plankton groups. We further tested the framework on images from a benthic video camera and an imaging sonar system with good results. The integration of scene classification, which groups similar images together, can improve the accuracy of detection and classification for complex marine biological images. 
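    The two-stage design (a scene classifier that routes each image to a scene-specific Mask R-CNN) can be sketched with off-the-shelf torchvision components; the scene categories, class counts, and untrained weights below are placeholders standing in for the trained models described in the abstract.

    import torch
    from torchvision.models import resnet18
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    SCENES = ["sparse", "dense_detritus", "bloom"]      # hypothetical scene categories

    # Stage 1: lightweight scene classifier (would be trained separately).
    scene_classifier = resnet18(num_classes=len(SCENES)).eval()
    # Stage 2: one Mask R-CNN instance-segmentation model per scene category.
    detectors = {s: maskrcnn_resnet50_fpn(num_classes=5).eval() for s in SCENES}

    def segment(image: torch.Tensor):
        """image: float tensor of shape (3, H, W) scaled to [0, 1]."""
        with torch.no_grad():
            scene = SCENES[scene_classifier(image.unsqueeze(0)).argmax(dim=1).item()]
            prediction = detectors[scene]([image])[0]   # boxes, labels, scores, masks
        return scene, prediction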
  3. Implicit biases, expressed as differential treatment towards out-group members, are pervasive in human societies. These biases are often racial or ethnic in nature and create disparities and inequities across many aspects of life. Recent research has revealed that implicit biases are, for the most part, driven by social contexts and local histories. However, it has remained unclear how and if the regular ways in which human societies self-organize in cities produce systematic variation in implicit bias strength. Here we leverage extensions of the mathematical models of urban scaling theory to predict and test between-city differences in implicit racial biases. Our model comprehensively links scales of organization from city-wide infrastructure to individual psychology to quantitatively predict that cities that are (1) more populous, (2) more diverse, and (3) less segregated have lower levels of implicit biases. We find broad empirical support for each of these predictions in U.S. cities for data spanning a decade of racial implicit association tests from millions of individuals. We conclude that the organization of cities strongly drives the strength of implicit racial biases and provides potential systematic intervention targets for the development and planning of more equitable societies.
  4. Although scholars have long studied circumstances that shape prejudice, inquiry into factors associated with long-term prejudice reduction has been more limited. Using a 6-year longitudinal study of non-Black physicians in training (N = 3,134), we examined the effect of three medical-school factors—interracial contact, medical-school environment, and diversity training—on explicit and implicit racial bias measured during medical residency. When accounting for all three factors, previous contact, and baseline bias, we found that quality of contact continued to predict lower explicit and implicit bias, although the effects were very small. Racial climate, modeling of bias, and hours of diversity training in medical school were not consistently related to less explicit or implicit bias during residency. These results highlight the benefits of interracial contact during an impactful experience such as medical school. Ultimately, professional institutions can play a role in reducing anti-Black bias by encouraging more frequent, and especially more favorable, interracial contact.
  5. Sherr, Micah; Shafiq, Zubair (Ed.)
    As smart home devices proliferate, protecting the privacy of those who encounter the devices is of the utmost importance both within their own home and in other people's homes. In this study, we conducted a large-scale survey (N=1459) with primary users of and bystanders to smart home devices. While previous work has studied people's privacy experiences and preferences either as smart home primary users or as bystanders, there is a need for a deeper understanding of privacy experiences and preferences in different contexts and across different countries. Instead of classifying people as either primary users or bystanders, we surveyed the same participants across different contexts. We deployed our survey in four countries (Germany, Mexico, the United Kingdom, and the United States) and in two languages (English and Spanish). We found that participants were generally more concerned about devices in their own homes, but perceived video cameras—especially unknown ones—and usability as more concerning in other people's homes. Compared to male participants, female and non-binary participants had less control over configuration of devices and privacy settings—regardless of whether they were the most frequent user. Comparing countries, participants in Mexico were more likely to be comfortable with devices, but also more likely to take privacy precautions around them. We also make cross-contextual recommendations for device designers and policymakers, such as nudges to facilitate social interactions. 