Title: Machine learning to model gentrification: A synthesis of emerging forms
Abstract: Gentrification is a complex and context-specific process that involves changes in the built environment and social fabric of neighborhoods, often resulting in the displacement of vulnerable communities. Machine Learning (ML) has emerged as a powerful predictive tool capable of circumventing the methodological challenges that historically prevented researchers from producing reliable forecasts of gentrification. Additionally, computer vision ML algorithms for landscape character assessment, or deep mapping, can now capture a wider range of built-environment metrics related to gentrification-induced redevelopment.
These novel ML applications promise to rapidly advance our understanding of gentrification and our capacity to translate academic findings into more productive directions for communities and stakeholders, but with this sudden development comes a steep learning curve. The current paper aims to bridge this divide by providing an overview of recent progress and an actionable template of use that is accessible to researchers across a wide array of academic fields. As a secondary point of emphasis, the review covers Explainable Artificial Intelligence (XAI) tools for gentrification models and opens a discussion on the nuanced challenges that arise when applying black-box models to human systems.
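The workflow the abstract describes — fitting a predictive model to neighborhood-level data and then applying a model-agnostic XAI technique — can be sketched in a few lines. The following is an illustrative toy, not the paper's method: the tract features, labels, and logistic model are synthetic placeholders, and permutation importance stands in for the richer XAI tools the review covers.

```python
# Illustrative sketch only (not the paper's method): a toy tract-level
# "gentrification" classifier with a model-agnostic explanation step.
# All features and labels here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, d = 800, 4
# Hypothetical tract features: rent change, building permits, education, income.
X = rng.normal(size=(n, d))
# Synthetic label driven mostly by the first two features.
y = (0.9 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Logistic regression fit by plain gradient descent.
w = np.zeros(d)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

def accuracy(Xm):
    p = 1.0 / (1.0 + np.exp(-(Xm @ w + b)))
    return ((p > 0.5) == (y > 0.5)).mean()

base = accuracy(X)  # in-sample accuracy, for brevity

# Permutation importance: shuffle one feature at a time and record the
# accuracy drop; larger drops indicate more influential features.
importance = np.zeros(d)
for j in range(d):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[j] = base - accuracy(Xp)

print("baseline accuracy:", round(base, 3))
print("permutation importance per feature:", np.round(importance, 3))
```

The same shuffle-and-score loop works unchanged around any black-box predictor, which is why permutation importance is a common first XAI step before moving to heavier attribution methods.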
Award ID(s):
2312047
PAR ID:
10508191
Author(s) / Creator(s):
; ;
Publisher / Repository:
Elsevier
Date Published:
Journal Name:
Computers, Environment and Urban Systems
Volume:
111
ISSN:
0198-9715
Page Range / eLocation ID:
102119
Subject(s) / Keyword(s):
Gentrification; Machine learning; Built environment; Neighborhood change; Computer vision
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Recently, artificial intelligence (AI) and machine learning (ML) models have demonstrated remarkable progress, with applications developed in various domains. It is also increasingly argued that AI and ML models and applications should be transparent, explainable, and trustworthy. Accordingly, the field of Explainable AI (XAI) is expanding rapidly. XAI holds substantial promise for improving trust and transparency in AI-based systems by explaining how complex models such as deep neural networks (DNNs) produce their outcomes. Moreover, many researchers and practitioners consider that using provenance to explain these complex models will help improve transparency in AI-based systems. In this paper, we conduct a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium to help accomplish explainability in AI-based systems. Moreover, we discuss patterns of recent development in this area and offer a vision for research in the near future. We hope this literature review will serve as a starting point for scholars and practitioners interested in learning about the essential components of provenance, XAI, and TAI.
  2. Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap: the divide between technical affordances and social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability. Utilizing two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines in the context of XAI and elucidating how to use them to address the gap. We apply the framework to a third case in a new domain, showcasing its affordances. Finally, we discuss conceptual implications of the framework, share practical considerations in its operationalization, and offer guidance on transferring it to new contexts. By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.
  3. Machine learning (ML) algorithms have advanced significantly in recent years, progressively evolving into artificial intelligence (AI) agents capable of solving complex, human-like intellectual challenges. Despite the advancements, the interpretability of these sophisticated models lags behind, with many ML architectures remaining black boxes that are too intricate and expansive for human interpretation. Recognizing this issue, there has been a revived interest in the field of explainable AI (XAI) aimed at explaining these opaque ML models. However, XAI tools often suffer from being tightly coupled with the underlying ML models and are inefficient due to redundant computations. We introduce provenance-enabled explainable AI (PXAI). PXAI decouples XAI computation from ML models through a provenance graph that tracks the creation and transformation of all data within the model. PXAI improves XAI computational efficiency by excluding irrelevant and insignificant variables and computation in the provenance graph. Through various case studies, we demonstrate how PXAI enhances computational efficiency when interpreting complex ML models, confirming its potential as a valuable tool in the field of XAI. 
  4. The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life. Explainable AI (XAI) systems are intended to self-explain the reasoning behind system decisions and predictions. Researchers from different disciplines work together to define, design, and evaluate explainable systems. However, scholars from different disciplines focus on different objectives and fairly independent topics of XAI research, which poses challenges for identifying appropriate design and evaluation methodology and for consolidating knowledge across efforts. To this end, this article presents a survey and framework intended to share knowledge and experiences of XAI design and evaluation methods across multiple disciplines. After a thorough review of XAI-related papers in the fields of machine learning, visualization, and human-computer interaction, we present a categorization of XAI design goals and evaluation methods that supports diverse design goals and evaluation methods in XAI research. Our categorization maps design goals for different XAI user groups to their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized, ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.
  5. Abstract: Recent advances in explainable artificial intelligence (XAI) methods show promise for understanding predictions made by machine learning (ML) models. XAI explains how the input features are relevant or important for the model predictions. We train linear regression (LR) and convolutional neural network (CNN) models to make 1-day predictions of sea ice velocity in the Arctic from inputs of present-day wind velocity and previous-day ice velocity and concentration. We apply XAI methods to the CNN and compare explanations to variance explained by LR. We confirm the feasibility of using a novel XAI method [i.e., global layerwise relevance propagation (LRP)] to understand ML model predictions of sea ice motion by comparing it to established techniques. We investigate a suite of linear, perturbation-based, and propagation-based XAI methods in both local and global forms. Outputs from different explainability methods are generally consistent in showing that wind speed is the input feature with the highest contribution to ML predictions of ice motion, and we discuss inconsistencies in the spatial variability of the explanations. Additionally, we show that the CNN relies on both linear and nonlinear relationships between the inputs and uses nonlocal information to make predictions. LRP shows that wind speed over land is highly relevant for predicting ice motion offshore. This provides a framework to show how knowledge of environmental variables (i.e., wind) on land could be useful for predicting other properties (i.e., sea ice velocity) elsewhere.
Significance Statement: Explainable artificial intelligence (XAI) is useful for understanding predictions made by machine learning models. Our research establishes trust in a novel implementation of an explainable AI method known as layerwise relevance propagation for Earth science applications. To do this, we provide a comparative evaluation of a suite of explainable AI methods applied to machine learning models that make 1-day predictions of Arctic sea ice velocity. We use explainable AI outputs to understand how the input features are used by the machine learning models to predict ice motion. Additionally, we show that a convolutional neural network uses nonlinear and nonlocal information in making its predictions. We take advantage of the nonlocality to investigate the extent to which knowledge of wind on land is useful for predicting sea ice velocity elsewhere.
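The comparison this abstract describes — a linear model whose coefficients are directly interpretable versus a neural network explained post hoc — can be illustrated with a minimal numpy sketch. This is not the paper's pipeline: the data are synthetic, the network is a toy one-hidden-layer model rather than a CNN, and plain gradient saliency stands in for layerwise relevance propagation.

```python
# Minimal sketch (not the paper's pipeline): compare a linear model's
# coefficients with a gradient-saliency explanation of a small neural net
# on synthetic data where the first feature dominates the target.
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 3
X = rng.normal(size=(n, d))
# Synthetic target: feature 0 dominates, loosely mimicking "wind speed".
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Linear regression via least squares: the coefficients ARE the explanation.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-hidden-layer tanh network trained by plain gradient descent on MSE.
W1 = rng.normal(scale=0.1, size=(d, 8)); b1 = np.zeros(8)
w2 = rng.normal(scale=0.1, size=8); b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ w2 + b2) - y
    gh = np.outer(err, w2) * (1 - h**2)       # backprop through tanh
    w2 -= lr * (h.T @ err) / n; b2 -= lr * err.mean()
    W1 -= lr * (X.T @ gh) / n;  b1 -= lr * gh.mean(axis=0)

# Gradient saliency: |d(prediction)/d(input)|, averaged over all samples.
h = np.tanh(X @ W1 + b1)
grads = ((1 - h**2) * w2) @ W1.T              # shape (n, d)
saliency = np.abs(grads).mean(axis=0)

print("linear coefficients:", np.round(coef, 2))
print("mean |gradient| saliency:", np.round(saliency, 2))
```

If the network has learned the (mostly linear) relationship, its saliency ranking should agree with the linear coefficients — the same kind of cross-method consistency check the study performs between LR variance and the CNN's XAI attributions.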