

Title: Distributed Orchestration of Regression Models Over Administrative Boundaries
Geospatial data collections are now available in a multiplicity of domains. The accompanying data volumes, variety, and diversity of encoding formats within these collections have all continued to grow. These data offer opportunities to extract patterns, understand phenomena, and inform decision-making by fitting models to the data. To ensure accuracy and effectiveness, these models need to be constructed at geospatial extents/scopes that are aligned with the nature of decision-making: administrative boundaries such as census tracts, towns, counties, and states. This entails constructing a large number of models and orchestrating their accompanying resource requirements (CPU, RAM, and I/O) within shared computing clusters. In this study, we describe our methodology to facilitate model construction at scale by substantively alleviating resource requirements while preserving accuracy. Our benchmarks demonstrate the suitability of our methodology.
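To make the per-boundary model construction concrete, here is a minimal sketch (not the paper's implementation) of fitting an independent regression model for every administrative boundary in a tabular geospatial dataset. The column names (a hypothetical "county_fips" key, feature list, and target) and the choice of scikit-learn's LinearRegression are assumptions for illustration only.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def fit_per_boundary(df, boundary_col, feature_cols, target_col):
    """Fit an independent regression model for each administrative boundary."""
    models = {}
    for boundary, group in df.groupby(boundary_col):
        X = group[feature_cols].values
        y = group[target_col].values
        models[boundary] = LinearRegression().fit(X, y)
    return models

# Hypothetical usage:
# df = pd.read_parquet("observations.parquet")
# models = fit_per_boundary(df, "county_fips", ["precip", "temp"], "yield")
```

In practice, each group's fit would be dispatched as a separate task on a shared cluster with its own CPU, memory, and I/O limits; orchestrating those many tasks efficiently is the concern the paper addresses.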
Award ID(s):
1931363
PAR ID:
10352252
Author(s) / Creator(s):
; ; ; ; ; ; ;
Date Published:
Journal Name:
IEEE/ACM 8th International Conference on Big Data Computing, Applications and Technologies (BDCAT)
Page Range / eLocation ID:
80 to 90
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Scientists design models to understand phenomena, make predictions, and/or inform decision-making. This study targets models that encapsulate spatially evolving phenomena. Given a model, our objective is to identify the accuracy of the model across all geospatial extents. A scientist may expect these validations to occur at varying spatial resolutions (e.g., states, counties, towns, and census tracts). Assessing a model with all available ground-truth data is infeasible due to the data volumes involved. We propose a framework to assess the performance of models at scale over diverse spatial data collections. Our methodology orchestrates validation workloads while reducing memory strain, alleviating contention, enabling concurrency, and sustaining high throughput. We introduce the notion of a validation budget that represents an upper bound on the total number of observations used to assess the performance of models across spatial extents. The validation budget attempts to capture the distribution characteristics of observations and is informed by multiple sampling strategies. Our design decouples the validation from the underlying model-fitting libraries so that it interoperates with models constructed using different libraries and analytical engines; our advanced research prototype currently supports Scikit-learn, PyTorch, and TensorFlow.
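As a rough illustration of the validation-budget idea, the sketch below spreads a fixed number of validation observations across spatial extents in proportion to how many ground-truth rows each extent holds, then scores a fitted model on only the sampled rows. The proportional allocation rule, column names, and MSE metric are assumptions for illustration, not the paper's sampling strategies.

```python
from sklearn.metrics import mean_squared_error

def budgeted_validation(df, extent_col, feature_cols, target_col,
                        model, budget, seed=0):
    """Score `model` in every spatial extent using at most `budget` rows overall."""
    scores = {}
    counts = df[extent_col].value_counts()
    for extent, n in counts.items():
        # proportional share of the budget; at least one observation per extent
        k = min(n, max(1, round(budget * n / len(df))))
        sample = df[df[extent_col] == extent].sample(n=k, random_state=seed)
        preds = model.predict(sample[feature_cols].values)
        scores[extent] = mean_squared_error(sample[target_col].values, preds)
    return scores
```

Because the function only calls `model.predict`, it stays decoupled from the library that produced the model, in the spirit of the framework described above.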
  2. Spatial data volumes have increased exponentially over the past couple of decades. This growth has been fueled by networked observational devices, remote sensing sources such as satellites, and simulations that characterize the spatiotemporal dynamics of phenomena (e.g., climate). Manual inspection of these data becomes infeasible at such scales. Fitting models to the data offers an avenue to extract patterns, make predictions, and leverage them to understand phenomena and inform decision-making. Innovations in deep learning and their ability to capture non-linear interactions between features make them particularly relevant for spatial datasets. However, deep learning workloads tend to be resource-intensive. In this study, we design and contrast transfer learning schemes to substantively alleviate resource requirements for training deep learning models over spatial data at scale. We profile the suitability of our methodology using deep networks built over satellite datasets and gridded data. Empirical benchmarks demonstrate that our spatiotemporally aligned transfer learning scheme ensures a ~2.87-5.3 fold reduction in completion times for each model without sacrificing the accuracy of the models.
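A minimal PyTorch sketch of the general warm-start idea behind such transfer learning schemes: a model for one spatial extent is initialized from the weights of a model already trained on a spatiotemporally similar extent and then fine-tuned for a few epochs. The two-layer architecture, optimizer, learning rate, and epoch count are placeholders, not the networks evaluated in the paper.

```python
import torch
import torch.nn as nn

def make_model(n_features: int) -> nn.Module:
    # placeholder architecture; the paper's networks are more elaborate
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

def fine_tune(parent: nn.Module, X: torch.Tensor, y: torch.Tensor,
              epochs: int = 5, lr: float = 1e-3) -> nn.Module:
    """Warm-start a child model from a spatially similar parent, then fine-tune."""
    child = make_model(X.shape[1])
    child.load_state_dict(parent.state_dict())  # copy the parent's weights
    opt = torch.optim.Adam(child.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(child(X), y.view(-1, 1))
        loss.backward()
        opt.step()
    return child
```

The savings reported above come from the child needing far fewer epochs than a model trained from random initialization.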
  3. Spatial data volumes have grown exponentially over the past several years. The domains in which spatial data are extensively leveraged include atmospheric sciences, environmental monitoring, ecological modeling, epidemiology, sociology, commerce, and social media, among others. These data are often used to understand phenomena and inform decision-making by fitting models to them. In this study, we present our methodology to fit models at scale over spatial data. Our methodology encompasses segmentation, spatial similarity based on the dataset(s) under consideration, and transfer learning schemes informed by that spatial similarity to train models faster while utilizing fewer resources. We consider several model-fitting algorithms and execution within containerized environments as we profile the suitability of our methodology. Our benchmarks validate the suitability of our methodology to facilitate faster, resource-efficient training of models over spatial data.
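One simple way spatial similarity could drive the choice of a transfer source is sketched below: candidate extents are ranked by the distance between their per-feature summary statistics and those of the target extent, and the closest candidate supplies the pretrained model. The summary-statistic distance is an assumed stand-in for whatever similarity measure the paper actually uses.

```python
import numpy as np

def summary(X: np.ndarray) -> np.ndarray:
    # crude distribution fingerprint: per-feature means and standard deviations
    return np.concatenate([X.mean(axis=0), X.std(axis=0)])

def pick_transfer_source(target_X: np.ndarray, candidates: dict) -> str:
    """candidates maps extent id -> feature matrix for that extent."""
    t = summary(target_X)
    return min(candidates,
               key=lambda k: np.linalg.norm(summary(candidates[k]) - t))
```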
  4. We have more data about wildlife trafficking than ever before, but it remains underutilized for decision-making. Central to effective wildlife trafficking interventions is the collection, aggregation, and analysis of data across a range of source, transit, and destination geographies. Many of these data are geospatial, but they cannot be effectively accessed or aggregated without appropriate geospatial data standards. Our goal was to create geospatial data standards to help advance efforts to combat wildlife trafficking. We achieved our goal using voluntary, participatory, and engagement-based workshops with diverse and multisectoral stakeholders, online portals, and electronic communication with more than 100 participants on three continents. The standards support data-to-decision efforts in the field, for example, indictments of key figures within wildlife trafficking and disruption of their networks. Geospatial data standards help enable broader utilization of wildlife trafficking data across disciplines and sectors, accelerate aggregation and analysis of data across space and time, advance evidence-based decision-making, and reduce wildlife trafficking.
  5. Challenges in interactive visualizations over satellite data collections stem primarily from their inherent data volumes. Enabling interactive visualizations of such data entails both processing and I/O (network and disk) on the server side. These are further exacerbated by multiple, concurrent requests issued by different clients. Hotspots may also arise when multiple users are interested in a particular geographical extent. We propose a novel methodology to support interactive visualizations over voluminous satellite imagery. Our system, codenamed Glance, generates models that, once installed on the client side, substantially alleviate resource requirements on the server side. Our system dynamically generates imagery during zoom-in operations. Glance also supports image refinements using partial high-resolution information when available. Glance is based broadly on a deep Generative Adversarial Network, and our model is space-efficient to facilitate memory residency at the clients. We supplement Glance with a module to estimate rendering errors when using the model to generate imagery as opposed to a resource-intensive query-and-retrieve operation to the server. Benchmarks to profile our methodology show substantive improvements in interactivity, with up to a 23x reduction in time lags without utilizing a GPU and a 297x-6627x reduction while harnessing a GPU. Further, the perceptual quality of the images from our generative model is robust, with PSNR values ranging from 32.2 to 40.5 depending on the scenario and upscale factor.
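For reference, the PSNR between a client-side generated tile and a server-fetched reference tile can be computed as below; this is the generic definition of the metric, not Glance's error-estimation module, and it assumes 8-bit imagery (max value 255).

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (in dB) between two equally sized image arrays."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```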