

Search for: All records

Award ID contains: 1931363


  1. Abstract

    Synoptic sampling of streams is an inexpensive way to gain insight into the spatial distribution of dissolved constituents in the subsurface critical zone. Few spatial synoptics have focused on urban watersheds, although this approach is useful in urban areas where monitoring wells are uncommon. Baseflow stream sampling was used to quantify spatial variability of water chemistry in a highly developed Piedmont watershed in suburban Baltimore, MD, having no permitted point discharges. Six synoptic surveys were conducted from 2014 to 2016 after an average of 10 days of no rain, when stream discharge was composed of baseflow from groundwater. Samples collected every 50 m over 5 km were analyzed for nitrate, sulfate, chloride, fluoride, and water stable isotopes. Longitudinal spatial patterns differed across constituents for each survey, but the pattern for each constituent varied little across synoptics. Results suggest a spatially heterogeneous, three-dimensional pattern of localized groundwater contaminant zones steadily contributing solutes to the stream network, where high concentrations result from current and legacy land use practices. By contrast, observations from 35 point piezometers indicate that sparse groundwater measurements are not a good predictor of baseflow stream chemistry in this geologic setting. Cross-covariance analysis of stream solute concentrations with groundwater model/backward particle tracking results suggests that spatial changes in baseflow solute concentrations are associated with urban features such as impervious surface area, fill, and leaking potable water and sanitary sewer pipes. Predicted subsurface residence times suggest that legacy solute sources drive baseflow stream chemistry in the urban critical zone.

     
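    A minimal sketch of the kind of lagged cross-covariance computation mentioned above, assuming evenly spaced (50 m) baseflow samples and a co-located, model-derived covariate such as impervious-surface fraction; the series, names, and values are synthetic illustrations, not the study's data or exact method.

```python
import numpy as np

def cross_covariance(x, y, max_lag):
    """Cross-covariance at lags 0..max_lag for two equally spaced series
    (e.g., samples every 50 m along the stream); both series are demeaned."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    return np.array([np.sum(x[:n - k] * y[k:]) / n for k in range(max_lag + 1)])

# Synthetic series: chloride concentration and a modeled impervious-surface
# fraction sampled every 50 m over 5 km of stream (101 points).
rng = np.random.default_rng(0)
impervious = np.clip(0.3 + np.cumsum(rng.normal(0, 0.02, 101)), 0, 1)
chloride = 40 + 80 * impervious + rng.normal(0, 5, 101)

ccov = cross_covariance(chloride, impervious, max_lag=10)
print(np.round(ccov, 2))  # large values at short lags point to a shared spatial driver
```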
  2. Abstract

    The estimation of exceedance probabilities for extreme climatic events is critical for infrastructure design and risk assessment. Climatic events occur over areas far larger than the point-scale in situ gauges that measure them. In extreme value theory, the block maxima approach for spatial analysis of extremes depends on properly modeling the spatially varying Generalized Extreme Value (GEV) marginal parameters (i.e., trend surfaces). Fitting these trend surfaces can be challenging since numerous spatial and temporal covariates are potentially relevant for any given event type and region. Traditionally, covariate selection is based on assumptions about the most relevant drivers of the event. This work demonstrates the benefit of utilizing elastic-net regression to support automatic selection from a relatively large set of physically relevant covariates during trend surface estimation. The trend surfaces presented are based on 24-hr annual maximum precipitation for northeastern Colorado and the Texas-Louisiana Gulf Coast.

     
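    A minimal sketch of how elastic-net regression could screen candidate covariates for a GEV trend surface, assuming site-wise GEV location estimates are regressed on standardized covariates; the data are synthetic and scikit-learn/SciPy are assumed available, so this illustrates the general approach rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import genextreme
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_sites, n_years, n_cov = 60, 40, 8

# Candidate spatial covariates at each site (elevation, distance to coast, ...).
X = rng.normal(size=(n_sites, n_cov))
true_beta = np.array([1.5, 0.0, 0.0, -0.8, 0.0, 0.0, 0.0, 0.0])
mu_true = 50 + X @ true_beta                      # spatially varying GEV location

# Synthetic 24-hr annual maximum precipitation at each site.
annmax = np.array([genextreme.rvs(c=-0.1, loc=m, scale=10, size=n_years, random_state=rng)
                   for m in mu_true])

# Site-wise GEV location estimates become the response for the trend surface.
mu_hat = np.array([genextreme.fit(row)[1] for row in annmax])

# Elastic net shrinks irrelevant covariates toward zero automatically.
Xs = StandardScaler().fit_transform(X)
enet = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.9], cv=5).fit(Xs, mu_hat)
print(np.round(enet.coef_, 2))  # near-zero coefficients flag covariates to drop
```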
  3. Abstract

    Detection and attribution studies generally examine individual climate variables such as temperature and precipitation. Thus, we lack a strong understanding of climate change impacts on correlated climate extremes and compound events, which have become more common in recent years. Here we present a monthly-scale compound warm and dry attribution study, examining CMIP6 climate models with and without the influence of anthropogenic forcing. We show that most regions have experienced large increases in concurrent warm and dry months in historical simulations with human emissions, while no coherent change has occurred in historical natural-only simulations without human emissions. At the global scale, the likelihood of compound warm-dry months has increased 2.7 times due to anthropogenic emissions. With this multivariate perspective, we highlight that anthropogenic emissions have impacted not only individual extremes but also compound extremes. Due to amplified risks from multivariate extremes, our results can provide important insights into the risks of associated climate impacts.

     
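    A minimal sketch of the compound-event counting behind a risk-ratio statement like the 2.7-fold increase above, assuming monthly temperature and precipitation series from forced and natural-only simulations for one grid cell; the series and thresholds are synthetic placeholders, not CMIP6 output.

```python
import numpy as np

def compound_warm_dry_fraction(temp, precip, t_thresh, p_thresh):
    """Fraction of months that are simultaneously warm (temp above its
    threshold) and dry (precip below its threshold)."""
    return np.mean((temp > t_thresh) & (precip < p_thresh))

rng = np.random.default_rng(2)
# Illustrative monthly anomalies for one grid cell (65 years of months).
t_nat, p_nat = rng.normal(0, 1, 780), rng.normal(0, 1, 780)
t_hist = t_nat + 0.8            # forced simulation is warmer on average
p_hist = p_nat - 0.2            # and slightly drier

# Thresholds defined from the natural-only climate (90th / 10th percentiles).
t90, p10 = np.percentile(t_nat, 90), np.percentile(p_nat, 10)

p_hist_cw = compound_warm_dry_fraction(t_hist, p_hist, t90, p10)
p_nat_cw = compound_warm_dry_fraction(t_nat, p_nat, t90, p10)
print(f"risk ratio = {p_hist_cw / p_nat_cw:.1f}")  # >1 means forcing raised the likelihood
```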
  4. Abstract

    Infrastructure systems must change to match the growing complexity of the environments they operate in. Yet the models of governance and the core technologies they rely on are structured around assumptions of relative long-term stability that appear increasingly insufficient and even problematic. As the environments in which infrastructure functions become more complex, infrastructure systems must adapt to develop a repertoire of responses sufficient to meet the increasing variety of conditions and challenges. Whereas in the past infrastructure leadership and system design have emphasized organizational strategies that primarily focus on exploitation (e.g., efficiency and production, amenable to conditions of stability), in the future they must create space for exploration, the innovation of what the organization is and does. They will need to develop the ability to maintain themselves in the face of growing complexity by creating the knowledge, processes, and technologies necessary to engage environmental complexity. We refer to this capacity as infrastructure autopoiesis. In doing so, infrastructure organizations should focus on four key tenets. First, a shift to sustained adaptation (perpetual change in the face of destabilizing conditions, often marked by uncertainty) and away from rigid processes and technologies is necessary. Second, infrastructure organizations should restructure their bureaucracies to distribute more resources and decision-making capacity horizontally, across the organization's hierarchy. Third, they should build capacity for horizon scanning, the process of systematically searching the environment for opportunities and threats. Fourth, they should emphasize loose-fit design, the flexibility of assets to pivot function as the environment changes. The inability to engage with complexity can be expected to result in a decoupling between what our infrastructure systems can do and what we need them to do, and autopoietic capabilities may help close this gap by creating the conditions for a sufficient repertoire to emerge.

     
  5. Abstract

    Merging multiple data streams can improve the overall length of record and achieve the number of observations required for robust statistical analysis. We merge complementary information from different data streams with a regression-based approach to estimate the 1 April snow water equivalent (SWE) volume over the Sierra Nevada, USA. We more than double the length of available data-driven SWE volume records by leveraging in-situ snow depth observations from longer-length snow course records and SWE volumes from a shorter-length snow reanalysis. With the resulting data-driven merged time series (1940–2018), we conduct frequency analysis to estimate return periods and associated uncertainty, which can inform decisions about water supply, drought response, and flood control. We show that the shorter (~30-year) reanalysis results in an underestimation of the 100-year return period by ~25 years (relative to the ~80-year merged dataset). Drought and flood risk and water resources planning can be substantially affected if return periods of SWE, which are closely related to potential flooding in spring and water availability in summer, are misrepresented.

     
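    A minimal sketch of the two steps described above, regression-based merging of a long snow-course record with a shorter reanalysis, followed by a frequency analysis for the 100-year value; the numbers are synthetic and the GEV is just one plausible distribution choice, so this is an illustration rather than the paper's dataset or exact method.

```python
import numpy as np
from scipy.stats import genextreme, linregress

rng = np.random.default_rng(3)

# Overlap period: reanalysis SWE volume (km^3) and a snow-course depth index.
depth_index = rng.gamma(4, 2, 34)                        # e.g., the reanalysis era
swe_reanalysis = 3.0 * depth_index + rng.normal(0, 2, 34)

# Regress reanalysis SWE volume on the longer snow-course record, then hindcast
# SWE volume for the pre-reanalysis years to build the merged ~80-year series.
fit = linregress(depth_index, swe_reanalysis)
depth_early = rng.gamma(4, 2, 45)
swe_early = fit.intercept + fit.slope * depth_early
swe_merged = np.concatenate([swe_early, swe_reanalysis])

# Frequency analysis: fit a GEV and read off the 100-year 1 April SWE volume.
c, loc, scale = genextreme.fit(swe_merged)
swe_100yr = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(f"100-year SWE volume ~ {swe_100yr:.1f} km^3")
```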
  6. Free, publicly-accessible full text available August 1, 2024
  7. Interactive visual analytics over distributed systems housing voluminous datasets is hindered by three main factors: disk I/O, network I/O, and data processing overhead. Requests over geospatial data are prone to erratic query load and hotspots due to users’ simultaneous interest in a small sub-domain of the overall data space at a time. Interactive analytics in a distributed setting is further hindered in cases of voluminous datasets with large, high-dimensional data objects, such as multi-spectral satellite imagery. The size of the data objects prohibits efficient caching mechanisms that could significantly reduce response latencies. Additionally, extracting information from these large data objects incurs significant data processing overheads and often entails resource-intensive computational methods. Here, we present our framework, ARGUS, which extracts low-dimensional representations (embeddings) of high-dimensional satellite images during ingestion and houses them in the cache for use in model-driven analysis relating to wildfire detection. These embeddings are versatile and are used to perform model-based extraction of analytical information for a set of different scenarios, to reduce the high computational costs that are involved with typical transformations over high-dimensional datasets. The models for each such analytical process are trained in a distributed manner in a connected, multi-task learning fashion, along with the encoder network that generates the original embeddings.
    Free, publicly-accessible full text available July 8, 2024
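    Not the ARGUS implementation, but a minimal sketch of the general pattern it describes: compute a low-dimensional embedding once at ingest, cache it by tile id, and let downstream models read the embedding instead of the raw imagery. The encoder here is a stand-in random projection; in ARGUS it would be the trained encoder network, and all names and sizes are illustrative.

```python
import numpy as np

class EmbeddingCache:
    """Sketch: keep low-dimensional embeddings of large satellite tiles in memory
    so downstream analysis models never re-read the full-resolution imagery."""

    def __init__(self, encoder, max_items=10_000):
        self.encoder = encoder
        self.max_items = max_items
        self._store = {}

    def ingest(self, tile_id, image):
        if len(self._store) >= self.max_items:
            self._store.pop(next(iter(self._store)))  # naive eviction of oldest entry
        self._store[tile_id] = self.encoder(image)

    def embedding(self, tile_id):
        return self._store[tile_id]

# Stand-in for a trained encoder network: a fixed random projection mapping a
# multi-spectral tile (64 x 64 pixels x 12 bands here) to a 32-dimensional vector.
rng = np.random.default_rng(4)
proj = rng.normal(size=(64 * 64 * 12, 32)) / 100.0
encoder = lambda img: img.reshape(-1) @ proj

cache = EmbeddingCache(encoder)
cache.ingest("tile_0001", rng.random((64, 64, 12)))
print(cache.embedding("tile_0001").shape)  # (32,) -- cheap to cache and to feed to analysis models
```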
  8. Grewe, Lynne L. ; Blasch, Erik P. ; Kadar, Ivan (Ed.)
    Sensor fusion combines data from a suite of sensors into an integrated solution that represents the target environment more accurately than any individual sensor can. New developments in Machine Learning (ML) algorithms are leading to increased accuracy, precision, and reliability in sensor fusion performance. However, these gains are accompanied by increases in system costs. Aircraft sensor systems have limited computing, storage, and bandwidth resources and must balance the system objectives of monetary, computational, and throughput cost, sensor fusion performance, aircraft safety, data security, robustness, and modularity, all while meeting strict timing requirements. Trade studies of these system objectives should be performed before incorporating new ML models into the sensor fusion software. A scalable and automated solution is needed to quickly analyze how providing additional resources to the new inference models affects the system’s objectives. Given that model-based systems engineering (MBSE) is a focus of much of the aerospace industry for designing aircraft mission systems, leveraging these system models can provide the scalability the needed system analyses require. This paper proposes adding empirically derived sensor fusion recurrent neural network (RNN) performance and cost measurement data to machine-readable Model Cards. Furthermore, it proposes a scalable and automated sensor fusion system analysis process that ingests SysML system model information and RNN Model Cards for system analyses. The value of this process is the integration of data analysis and system design, which enables rapid enhancements of sensor system development.
    Free, publicly-accessible full text available June 14, 2024
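    A minimal sketch of what a machine-readable Model Card carrying empirically measured RNN performance and cost figures might look like; the field names and numbers are illustrative assumptions, not a published schema or the paper's actual measurements.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorFusionModelCard:
    """Sketch of a machine-readable Model Card holding the measured performance
    and cost figures a system analysis process would ingest alongside SysML data.
    Fields and values are illustrative only."""
    model_name: str
    architecture: str
    accuracy: float            # fused-track accuracy on a benchmark set
    latency_ms: float          # mean inference latency per frame
    memory_mb: float           # peak working-set size on the target hardware
    throughput_hz: float       # sustainable update rate
    training_data: str

card = SensorFusionModelCard(
    model_name="fusion-rnn-v2",
    architecture="2-layer GRU",
    accuracy=0.94,
    latency_ms=8.5,
    memory_mb=112.0,
    throughput_hz=60.0,
    training_data="synthetic multi-sensor tracks",
)

# Serialized cards can be stored with the system model and read by automated trade studies.
print(json.dumps(asdict(card), indent=2))
```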
  9. Gridded spatial datasets arise naturally in environmental, climatic, meteorological, and ecological settings. Each grid point encapsulates a vector of variables representing different measures of interest. Gridded datasets tend to be voluminous since they encapsulate observations for long timescales. Visualizing such datasets poses significant challenges stemming from the need to preserve interactivity, manage I/O overheads, and cope with data volumes. Here we present our methodology to significantly alleviate I/O requirements by leveraging deep neural network-based models. 
    Free, publicly-accessible full text available May 1, 2024
  10. Gridded spatial datasets arise naturally in environmental, climatic, meteorological, and ecological settings. Each grid point encapsulates a vector of variables representing different measures of interest. Gridded datasets tend to be voluminous since they encapsulate observations for long timescales. Visualizing such datasets poses significant challenges stemming from the need to preserve interactivity, manage I/O overheads, and cope with data volumes. Here we present our methodology to significantly alleviate I/O requirements by leveraging deep neural network-based models and a distributed, in-memory cache to facilitate interactive visualizations. Our benchmarks demonstrate that deploying our lightweight models coupled with back-end caching and prefetching schemes can reduce the client's query response time by 92.3% while maintaining a high perceptual quality with a PSNR (peak signal-to-noise ratio) of 38.7 dB. 
    Free, publicly-accessible full text available May 1, 2024
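    A minimal sketch of the PSNR check behind a perceptual-quality figure like the 38.7 dB reported above, assuming the original gridded tile and the lightweight model's reconstruction are both available as arrays; the tiles here are synthetic placeholders.

```python
import numpy as np

def psnr(original, reconstructed, data_range=None):
    """Peak signal-to-noise ratio (dB) between an original gridded tile and a
    model-generated reconstruction served from the cache."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    if data_range is None:
        data_range = original.max() - original.min()
    mse = np.mean((original - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(5)
tile = rng.random((128, 128))                       # original grid tile
approx = tile + rng.normal(0, 0.01, tile.shape)     # model reconstruction
print(f"PSNR = {psnr(tile, approx):.1f} dB")        # higher is better; ~40 dB is near-lossless
```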