This content will become publicly available on November 6, 2026

Title: LLM-Based Multi-Agent System and Simplicial Self-Supervised Learning Model for Regional Cancer Prevalence Estimation Using Satellite Imagery
Traditional cancer rate estimations are often limited in spatial resolution and lack consideration of environmental factors. Satellite imagery has become a vital data source for monitoring diverse urban environments, supporting applications across environmental, socio-demographic, and public health domains. However, while deep learning (DL) tools, particularly convolutional neural networks, have demonstrated strong performance in extracting features from high-resolution imagery, their reliance on local spatial cues often limits their ability to capture complex, non-local, and higher-order structural information. To overcome this limitation, we propose a novel LLM-based multi-agent coordination system for satellite image analysis, which integrates visual and contextual reasoning through a simplicial contrastive learning framework (Agent-SNN). Agent-SNN constructs two augmented superpixel-based graphs and maximizes mutual information between their latent simplicial complex representations, thereby enabling the system to learn both local and global topological features. The LLM-based agents generate structured prompts that guide the alignment of these representations across modalities. Experiments with satellite imagery of Los Angeles and San Diego demonstrate that Agent-SNN achieves significant improvements over state-of-the-art baselines in regional cancer prevalence estimation tasks.
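The abstract above centers on maximizing mutual information between embeddings of two augmented superpixel-graph views. As a rough illustration of that kind of contrastive objective, the sketch below implements an InfoNCE-style loss between two batches of view embeddings in PyTorch. The function and variable names are placeholders, and the paper's simplicial-complex encoder and LLM prompt alignment are not reproduced here.

```python
# Minimal sketch of a contrastive mutual-information objective between two
# augmented views, in the spirit of the framework described above. All names
# (view_a, view_b, temperature) are illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce(view_a: torch.Tensor, view_b: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """InfoNCE loss: each region's embedding in view A should match the same
    region's embedding in view B, against all other regions in the batch."""
    a = F.normalize(view_a, dim=1)          # (N, d) embeddings from augmentation 1
    b = F.normalize(view_b, dim=1)          # (N, d) embeddings from augmentation 2
    logits = a @ b.t() / temperature        # (N, N) cosine-similarity matrix
    targets = torch.arange(a.size(0), device=a.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 superpixel-graph embeddings of dimension 64 from each view.
za, zb = torch.randn(8, 64), torch.randn(8, 64)
loss = info_nce(za, zb)
```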
Award ID(s):
2523484 2335846
PAR ID:
10639311
Author(s) / Creator(s):
Publisher / Repository:
33rd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Satellite imagery is a readily available data source for monitoring a broad range of urban geographical contexts related to environmental, socio-demographic, and health disparities. To analyze satellite images, deep learning (DL) tools efficiently extract latent multi-dimensional characteristics, beyond identifying specific urban elements like roads and houses. However, current DL approaches tend to largely rely on Convolutional Neural Networks applied to high-resolution imagery, and as such may be limited to capturing only local contextual information. To address this fundamental limitation, we propose to fuse the modalities of satellite imagery and a large language model (LLM). In particular, we develop a novel LLM-based Simplicial Contrastive Learning model (LLM-SCL) based on mutual information maximization between the latent simplicial complex-level representations of two kinds of augmented (superpixel) graphs, which allows for cohesive integration of LLM prompts and learning of both local and global higher-order properties of satellite imagery (from all pixels in an image). Extensive experiments on satellite imagery at several resolutions in Tijuana, Mexico, Los Angeles and San Diego, USA, suggest that LLM-SCL significantly outperforms state-of-the-art baselines on unsupervised image classification tasks. As such, the proposed LLM-SCL opens a new path for more accurate evaluations of latent urban forms and their associations with environmental and health outcome disparities. 
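LLM-SCL, like the main paper, builds augmented superpixel graphs from satellite imagery. The sketch below shows one plausible way to construct such a graph: SLIC superpixels with edges between superpixels that share a boundary, assuming scikit-image is available. It is illustrative only and omits the simplicial-complex lifting and LLM prompting described in the abstract.

```python
# Rough sketch of a superpixel region-adjacency graph; parameters are assumptions.
import numpy as np
from skimage.segmentation import slic

def superpixel_graph(image: np.ndarray, n_segments: int = 200):
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    # Edges: pairs of distinct superpixel labels adjacent horizontally or vertically.
    edges = set()
    right = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    down = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    for a, b in np.concatenate([right, down]):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    return labels, sorted(edges)

# Usage with a random RGB tile standing in for a satellite image patch.
labels, edges = superpixel_graph(np.random.rand(128, 128, 3))
```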
  2. Abstract: In this article we present results from transect walks and participatory mapping done in Burkina Faso. Since the Sahelian drought of the 1970s, researchers have continued to depict the Sahelian region of West Africa as an environment experiencing severe degradation, a narrative that persists over time. Recently, however, analyses of satellite imagery have identified remarkable patterns of greening across the Sahel. The causes of this greening are hotly debated. Through this project we aim to inform these debates with the on-the-ground perceptions of local farmers and pastoralists. The transect walk method is a community-based process that collects information on land-use/land-cover (LULC) features across villages. Transects help triangulate data by combining high-resolution satellite imagery, firsthand observations, and local experiences of ecological processes. We describe the methodology behind transects and discuss how they contextualize an otherwise removed process of environmental analysis. We also describe the challenges that arise throughout the fieldwork process.
  3. Accurate mapping of nearshore bathymetry is essential for coastal management, navigation, and environmental monitoring. Traditional bathymetric mapping methods such as sonar surveys and LiDAR are often time-consuming and costly. This paper introduces BathyFormer, a novel vision transformer- and encoder-based deep learning model designed to estimate nearshore bathymetry from high-resolution multispectral satellite imagery. This methodology involves training the BathyFormer model on a dataset comprising satellite images and corresponding bathymetric data obtained from the Continuously Updated Digital Elevation Model (CUDEM). The model learns to predict water depths by analyzing the spectral signatures and spatial patterns present in the multispectral imagery. Validation of the estimated bathymetry maps using independent hydrographic survey data produces a root mean squared error (RMSE) ranging from 0.55 to 0.73 m at depths of 2 to 5 m across three different locations within the Chesapeake Bay, which were independent of the training set. This approach shows significant promise for large-scale, cost-effective shallow water nearshore bathymetric mapping, providing a valuable tool for coastal scientists, marine planners, and environmental managers. 
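For reference, the validation reported above (RMSE of 0.55 to 0.73 m at 2 to 5 m depth) amounts to the following computation over co-located predicted and surveyed depths; the array names and synthetic data are placeholders, not the authors' code or data.

```python
# Hedged sketch of an RMSE check restricted to a 2-5 m depth band.
import numpy as np

def rmse_in_depth_band(pred_depth: np.ndarray, survey_depth: np.ndarray,
                       lo: float = 2.0, hi: float = 5.0) -> float:
    mask = (survey_depth >= lo) & (survey_depth <= hi)
    return float(np.sqrt(np.mean((pred_depth[mask] - survey_depth[mask]) ** 2)))

# Example: RMSE on synthetic depths with roughly 0.6 m of noise.
truth = np.random.uniform(1.0, 6.0, size=1000)
pred = truth + np.random.normal(0.0, 0.6, size=1000)
print(rmse_in_depth_band(pred, truth))
```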
  4. The recently discovered spatial-temporal information processing capability of bio-inspired spiking neural networks (SNNs) has enabled some interesting models and applications. However, designing large-scale, high-performance models remains a challenge due to the lack of robust training algorithms. A bio-plausible SNN model with spatial-temporal properties is a complex dynamic system. Synapses and neurons behave as filters capable of preserving temporal information. Because neuron dynamics and filter effects are ignored in existing training algorithms, the SNN downgrades into a memoryless system and loses the ability to process temporal signals. Furthermore, spike timing plays an important role in information representation, but conventional rate-based spike coding models only consider spike trains statistically and discard the information carried by their temporal structures. To address these issues and exploit the temporal dynamics of SNNs, we formulate the SNN as a network of infinite impulse response (IIR) filters with neuron nonlinearity. We propose a training algorithm capable of learning spatial-temporal patterns by searching for the optimal synapse filter kernels and weights. The proposed model and training algorithm are applied to construct associative memories and classifiers for synthetic and public datasets including MNIST, NMNIST, and DVS128. Their accuracy outperforms state-of-the-art approaches.
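To make the IIR-filter view of a synapse concrete, the sketch below applies a one-pole exponential filter to an input spike train and emits a spike when the filtered value crosses a threshold. The parameter values and the reset rule are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative one-pole IIR synapse followed by a threshold nonlinearity.
import numpy as np

def iir_synapse_neuron(spikes_in: np.ndarray, alpha: float = 0.9,
                       weight: float = 0.5, threshold: float = 1.0) -> np.ndarray:
    """s[t] = alpha * s[t-1] + weight * x[t]; emit a spike when s crosses threshold."""
    s, out = 0.0, np.zeros_like(spikes_in, dtype=float)
    for t, x in enumerate(spikes_in):
        s = alpha * s + weight * x        # IIR filtering preserves temporal context
        if s >= threshold:
            out[t] = 1.0
            s = 0.0                       # reset after a spike (an assumed rule)
    return out

# Usage: a random binary input spike train of length 100.
out = iir_synapse_neuron(np.random.binomial(1, 0.2, size=100))
```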
  5. Abstract: Superresolution is the general task of artificially increasing the spatial resolution of an image. The recent surge in machine learning (ML) research has yielded many promising ML-based approaches for performing single-image superresolution, including applications to satellite remote sensing. We develop a convolutional neural network (CNN) to superresolve the 1- and 2-km bands on the GOES-R series Advanced Baseline Imager (ABI) to a common high resolution of 0.5 km. Access to 0.5-km imagery from ABI band 2 enables the CNN to realistically sharpen lower-resolution bands without significant blurring. We first train the CNN on a proxy task that allows us to use only ABI imagery: degrading the resolution of ABI bands and training the CNN to restore the original imagery. Comparisons at reduced resolution and at full resolution with Landsat-8/Landsat-9 observations illustrate that the CNN produces images with realistic high-frequency detail that is not present in a bicubic interpolation baseline. Estimating all ABI bands at 0.5-km resolution allows for more easily combining information across bands without reconciling differences in spatial resolution. However, more analysis is needed to determine impacts on derived products or multispectral imagery that use superresolved bands. This approach is extensible to other remote sensing instruments that have bands with different spatial resolutions and requires only a small amount of data and knowledge of each channel's modulation transfer function. Significance Statement: Satellite remote sensing instruments often have bands with different spatial resolutions. This work shows that we can artificially increase the resolution of some lower-resolution bands by taking advantage of the texture of higher-resolution bands on the GOES-16 ABI instrument using a convolutional neural network. This may help reconcile differences in spatial resolution when combining information across bands, but future analysis is needed to precisely determine impacts on derived products that might use superresolved bands.
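The proxy task described above (degrade ABI bands, then train the CNN to restore the originals) can be sketched as follows with a toy residual CNN over a bicubic upsampling baseline. The network, scale factor, and random data are placeholders for illustration, not the operational GOES-R model.

```python
# Sketch of a degrade-and-restore proxy training step for single-image superresolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, lowres):
        up = F.interpolate(lowres, scale_factor=2, mode="bicubic", align_corners=False)
        return up + self.net(up)          # learn a residual over the bicubic baseline

model = TinySR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
band = torch.rand(4, 1, 64, 64)           # stand-in for a high-resolution band
degraded = F.interpolate(band, scale_factor=0.5, mode="bicubic", align_corners=False)
loss = F.mse_loss(model(degraded), band)  # train the CNN to restore the original imagery
opt.zero_grad(); loss.backward(); opt.step()
```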