Conflict, manifesting as riots and protests, is a common occurrence in urban environments worldwide. Understanding where these events are likely to occur is crucial for policymakers, who may, for example, seek to provide overseas travelers with guidance on safe areas, or to equip local policymakers to pre-position medical aid or police presence to mitigate the negative impacts associated with riot events. Past efforts to forecast these events have relied on news and social media, restricting their applicability to areas where such data are available. This study uses a ResNet convolutional neural network and high-resolution satellite imagery to estimate the spatial distribution of riots and protests within urban environments. At a global scale (N = 18,631 conflict events), by training the model to learn relationships between urban form and riot events, we are able to predict the likelihood that a given urban area will experience a riot or protest with accuracy as high as 97%. This research has the potential to improve our ability to forecast and understand the relationship between urban form and conflict events, even in data-sparse regions.
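As a concrete illustration of the kind of pipeline the abstract describes, the sketch below fine-tunes a pretrained ResNet as a binary riot/non-riot classifier on satellite image tiles. It is a minimal sketch, not the authors' code: the tile directory layout, tile size, ResNet variant, and hyperparameters are all assumptions.

```python
# Minimal sketch (assumed, not the authors' pipeline): fine-tune a pretrained
# ResNet as a binary riot / non-riot classifier on satellite image tiles.
import torch
from torch import nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-style preprocessing for RGB tiles (tile size is an assumption).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes tiles are stored as tiles/riot/*.png and tiles/no_riot/*.png.
train_set = datasets.ImageFolder("tiles", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Swap the ImageNet head for a 2-class output (riot vs. non-riot).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```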
This content will become publicly available on January 1, 2026
From Prediction to Explanation: Using Explainable AI to Understand Satellite-Based Riot Forecasting Models
This study investigates the application of explainable AI (XAI) techniques to understand the deep learning models used for predicting urban conflict from satellite imagery. First, a ResNet18 convolutional neural network achieved 89% accuracy in distinguishing riot and non-riot urban areas. The Score-CAM technique was then used to identify regions critical to the model’s predictions; masking these regions caused a 20.9% drop in classification accuracy, highlighting their importance. However, Score-CAM’s ability to consistently localize key features was found to be limited, particularly in complex, multi-object urban environments. Analysis revealed minimal alignment between the model-identified features and traditional land use metrics, suggesting that deep learning captures unique patterns not represented in existing GIS datasets. These findings underscore the potential of deep learning to uncover previously unrecognized socio-spatial dynamics while revealing the need for improved interpretability methods. This work sets the stage for future research to enhance explainable AI techniques, bridging the gap between model performance and interpretability and advancing our understanding of urban conflict drivers.
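The masking evaluation described above can be approximated with the open-source pytorch-grad-cam package, which provides a Score-CAM implementation. The sketch below is an assumption-laden outline, not the authors' implementation: the target layer (the last block of a torchvision ResNet) and the top-20% saliency threshold are illustrative choices.

```python
# Assumed sketch of a Score-CAM masking evaluation, using the open-source
# pytorch-grad-cam package (not necessarily the authors' implementation).
import numpy as np
import torch
from pytorch_grad_cam import ScoreCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

def masked_accuracy_drop(model, images, labels, top_fraction=0.2):
    """Mask the most salient pixels per Score-CAM and measure the accuracy drop."""
    model.eval()
    cam = ScoreCAM(model=model, target_layers=[model.layer4[-1]])  # assumes a torchvision ResNet
    targets = [ClassifierOutputTarget(int(y)) for y in labels]
    heatmaps = cam(input_tensor=images, targets=targets)  # (N, H, W), scaled to [0, 1]

    masked = images.clone()
    for i, heat in enumerate(heatmaps):
        cutoff = np.quantile(heat, 1.0 - top_fraction)
        mask = torch.from_numpy(heat >= cutoff).to(images.device)  # most salient pixels
        masked[i][:, mask] = 0.0                                   # zero them out

    with torch.no_grad():
        acc_full = (model(images).argmax(1) == labels).float().mean().item()
        acc_masked = (model(masked).argmax(1) == labels).float().mean().item()
    return acc_full - acc_masked
```

Comparing the full and masked accuracies on a held-out set gives the kind of accuracy-drop figure (20.9% in the study) used to gauge whether the highlighted regions actually drive the prediction.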
- Award ID(s): 2317591
- PAR ID: 10631652
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Remote Sensing
- Volume: 17
- Issue: 2
- ISSN: 2072-4292
- Page Range / eLocation ID: 313
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Explainability is essential for AI models, especially in clinical settings where understanding a model’s decisions is crucial. Despite their impressive performance, black-box AI models are unsuitable for clinical use if their operations cannot be explained to clinicians. While deep neural networks (DNNs) represent the forefront of model performance, their explanations are often not easily interpreted by humans. Hand-crafted features that represent different aspects of the input data, combined with traditional machine learning models, are generally more understandable, but they often lack the effectiveness of advanced models because of human limitations in feature design. To address this, we propose the explainable shallow convolutional neural network (ExShall-CNN), which combines the interpretability of hand-crafted features with the performance of advanced deep convolutional networks such as U-Net for medical image segmentation. Built on recent advancements in machine learning, ExShall-CNN incorporates widely used kernels while ensuring transparency, making its decisions visually interpretable by physicians and clinicians. This balanced approach offers both the accuracy of deep learning models and the explainability needed for clinical applications.
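As a rough illustration of pairing fixed, human-readable kernels with a small learnable head, the sketch below seeds the first layer of a shallow segmentation network with Sobel edge filters. This is not the published ExShall-CNN architecture; the kernel choice, layer sizes, and single-channel input are assumptions made for brevity.

```python
# Illustrative only (not the published ExShall-CNN): a shallow segmentation
# network whose first layer is seeded with fixed, human-readable Sobel edge
# kernels, followed by a small learnable head.
import torch
from torch import nn

class ShallowExplainableSegNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        # Fixed, interpretable edge detectors as the first feature layer.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        kernels = torch.stack([sobel_x, sobel_y]).unsqueeze(1).repeat(1, in_channels, 1, 1)
        self.edges = nn.Conv2d(in_channels, 2, kernel_size=3, padding=1, bias=False)
        self.edges.weight = nn.Parameter(kernels, requires_grad=False)
        # A small learnable head keeps the model shallow and inspectable.
        self.head = nn.Sequential(
            nn.Conv2d(2, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, num_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, x):
        return self.head(self.edges(x))

# Example: segment a batch of single-channel images into two classes.
logits = ShallowExplainableSegNet()(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2, 128, 128])
```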
The widespread use of Artificial Intelligence (AI) has highlighted the importance of understanding AI model behavior. This understanding is crucial for practical decision-making, assessing model reliability, and ensuring trustworthiness. Interpreting time series forecasting models faces unique challenges compared to image and text data. These challenges arise from the temporal dependencies between time steps and the evolving importance of input features over time. My thesis focuses on addressing these challenges by aiming for more precise explanations of feature interactions, uncovering spatiotemporal patterns, and demonstrating the practical applicability of these interpretability techniques using real-world datasets and state-of-the-art deep learning models.
We propose and test a novel graph learning-based explainable artificial intelligence (XAI) approach to address the challenge of developing explainable predictions of patient length of stay (LoS) in intensive care units (ICUs). Specifically, we address a notable gap in the literature on XAI methods that identify interactions between model input features to predict patient health outcomes. Our model intrinsically constructs a patient-level graph, which identifies the importance of feature interactions for prediction of health outcomes. It demonstrates state-of-the-art explanation capabilities based on identification of salient feature interactions compared with traditional XAI methods for prediction of LoS. We supplement our XAI approach with a small-scale user study, which demonstrates that our model can lead to greater user acceptance of artificial intelligence (AI) model-based decisions by contributing to greater interpretability of model predictions. Our model lays the foundation to develop interpretable, predictive tools that healthcare professionals can utilize to improve ICU resource allocation decisions and enhance the clinical relevance of AI systems in providing effective patient care. Although our primary research setting is the ICU, our graph learning model can be generalized to other healthcare contexts to accurately identify key feature interactions for prediction of other health outcomes, such as mortality, readmission risk, and hospitalizations.
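A generic sketch of the patient-level graph idea (not the authors' model) is shown below: nodes carry clinical features, edges encode candidate feature interactions, and a graph attention layer's edge weights serve as a crude proxy for interaction importance when predicting length of stay. The node dimensionality, edge set, and use of PyTorch Geometric's GATConv are assumptions.

```python
# Generic sketch (not the authors' model): a patient-level graph where nodes
# carry clinical features and edges encode candidate feature interactions.
# Graph-attention edge weights act as a rough proxy for interaction importance
# when predicting length of stay (LoS). Uses PyTorch Geometric.
import torch
from torch import nn
from torch_geometric.nn import GATConv, global_mean_pool

class PatientGraphLoS(nn.Module):
    def __init__(self, node_dim=8, hidden=32):
        super().__init__()
        self.gat = GATConv(node_dim, hidden, heads=1)
        self.readout = nn.Linear(hidden, 1)  # regression head: predicted LoS (days)

    def forward(self, x, edge_index, batch):
        h, (_, alpha) = self.gat(x, edge_index, return_attention_weights=True)
        h = torch.relu(h)
        los = self.readout(global_mean_pool(h, batch)).squeeze(-1)
        return los, alpha  # alpha: per-edge attention as interaction salience

# Toy patient graph: 5 feature nodes and a few candidate interactions.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
batch = torch.zeros(5, dtype=torch.long)
los, alpha = PatientGraphLoS()(x, edge_index, batch)
```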
Recent progress in deep learning has significantly impacted materials science, leading to accelerated material discovery and innovation. ElemNet, a deep neural network model that predicts formation energy from elemental compositions, exemplifies the application of deep learning techniques in this field. However, the “black-box” nature of deep learning models often raises concerns about their interpretability and reliability. In this study, we propose XElemNet to explore the interpretability of ElemNet by applying a series of explainable artificial intelligence (XAI) techniques, focusing on post-hoc analysis and model transparency. The experiments with artificial binary datasets reveal ElemNet’s effectiveness in predicting convex hulls of element-pair systems across periodic table groups, indicating its capability to effectively discern elemental interactions in most cases. Additionally, feature importance analysis within ElemNet highlights alignment with chemical properties of elements such as reactivity and electronegativity. XElemNet provides insights into the strengths and limitations of ElemNet and offers a potential pathway for explaining other deep learning models in materials science.
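In the same post-hoc spirit, the sketch below estimates permutation importance for elemental-fraction inputs of a trained composition-to-formation-energy regressor. It is illustrative only, not the XElemNet code; the model (with an sklearn-style predict method), X, and y are hypothetical placeholders.

```python
# Illustrative post-hoc analysis (not the XElemNet code): permutation importance
# of elemental-fraction inputs for a trained composition -> formation-energy
# regressor. `model` (sklearn-style .predict), X, and y are placeholders.
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean increase in MSE when each elemental-fraction column is shuffled."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((model.predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break column j's link to y
            mse = np.mean((model.predict(Xp) - y) ** 2)
            importances[j] += (mse - base_mse) / n_repeats
    return importances  # one score per element column

# e.g. rank elements by how much shuffling them degrades the energy predictions:
# ranking = np.argsort(permutation_importance(model, X_val, y_val))[::-1]
```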
