This paper extends the star set reachability approach to verify the robustness of feed-forward neural networks (FNNs) with sigmoidal activation functions such as Sigmoid and TanH. The main drawbacks of the star set approach in Sigmoid/TanH FNN verification are scalability, feasibility, and optimality issues that arise in some cases from the use of a linear programming solver. We overcome this challenge by proposing a relaxed star (RStar) with symbolic intervals, which allows the use of the back-substitution technique from DeepPoly to find bounds when overapproximating activation functions while maintaining the valuable features of a star set. RStar can overapproximate a sigmoidal activation function using four linear constraints (RStar4), two linear constraints (RStar2), or only the output bounds (RStar0). We implement our RStar reachability algorithms in NNV and compare them to DeepPoly via robustness verification of image classification DNN benchmarks. The experimental results show that the original star approach (i.e., no relaxation) is the least conservative of all methods yet the slowest. RStar4 is computationally much faster than the original star method and is the second least conservative approach. It certifies up to 40% more images against adversarial attacks than DeepPoly and runs on average 51 times faster than the star set. Last but not least, RStar0 is the most conservative method, which could only verify two cases for the CIFAR10 small Sigmoid network at δ = 0.014. However, it is the fastest method, verifying neural networks up to 3528 times faster than the star set and up to 46 times faster than DeepPoly in our evaluation.
-
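The two-constraint relaxation idea behind RStar2 can be sketched with a DeepPoly-style bound on the sigmoid over an input interval. The construction below (a shared slope equal to the smaller endpoint derivative) is one standard sound choice for illustration, not necessarily the exact constraints used in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_relaxation(l, u):
    """Two sound linear constraints bounding sigmoid(x) on [l, u]
    (DeepPoly-style): lower(x) <= sigmoid(x) <= upper(x).
    The shared slope is the smaller endpoint derivative, which is
    never larger than sigmoid'(x) anywhere on [l, u], so each line
    stays on its side of the S-shaped curve."""
    lam = min(sigmoid(l) * (1 - sigmoid(l)),   # sigmoid'(l)
              sigmoid(u) * (1 - sigmoid(u)))   # sigmoid'(u)
    lower = lambda x: sigmoid(l) + lam * (x - l)   # touches the curve at x = l
    upper = lambda x: sigmoid(u) + lam * (x - u)   # touches the curve at x = u
    return lower, upper
```

In this picture, an RStar4-style relaxation would add two more constraints (e.g., chords or additional tangents) to tighten the region, while RStar0 would keep only the constant output bounds sigmoid(l) and sigmoid(u).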
Increases in evapotranspiration (ET) from global warming are decreasing streamflow in headwater basins worldwide. However, these streamflow losses do not occur uniformly due to complex topography. To better understand the heterogeneity of streamflow loss, we use the Budyko shape parameter (ω) as a diagnostic tool. We fit ω to 37 years of hydrologic simulation output in the Upper Colorado River Basin (UCRB), an important headwater basin in the US. We split the UCRB into two categories: peak watersheds with high elevation and steep slopes, and valley watersheds with lower elevation and gradual slopes. Our results demonstrate a relationship between streamflow loss and ω. The valley watersheds with greater streamflow loss have ω higher than 3.1, while the peak watersheds with less streamflow loss have an average ω of 1.3. This work highlights the use of ω as an indicator of streamflow loss and could be generalized to other headwater basin systems.
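As a rough illustration of how ω can be fit, the sketch below assumes the Fu (1981) form of the Budyko curve and recovers ω from synthetic aridity/evaporation data by bisection; the study's actual fitting procedure and data are not reproduced here.

```python
import numpy as np

def fu_curve(phi, omega):
    """Fu (1981) form of the Budyko curve: evaporative index ET/P
    as a function of the aridity index phi = PET/P."""
    return 1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega)

def fit_omega(phi, et_over_p, lo=1.01, hi=10.0, iters=60):
    """Fit the shape parameter omega by bisection on the mean
    residual; fu_curve increases monotonically with omega, so the
    sign of the residual tells us which half to keep."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.mean(fu_curve(phi, mid) - et_over_p) > 0:
            hi = mid   # curve sits above the data: omega is smaller
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With 37 years of simulated P, PET, and ET per watershed, phi and et_over_p would be the long-term aridity and evaporative indices of each watershed.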
-
Integrated hydrological modeling is an effective method for understanding interactions between parts of the hydrologic cycle, quantifying water resources, and furthering knowledge of hydrologic processes. However, these models are dependent on robust and accurate datasets that physically represent spatial characteristics as model inputs. This study evaluates multiple data-driven approaches for estimating hydraulic conductivity and subsurface properties at the continental scale, constructed from existing subsurface dataset components. Each subsurface configuration represents upper (unconfined) hydrogeology, lower (confined) hydrogeology, and the presence of a vertical flow barrier. Configurations are tested in two large-scale U.S. watersheds using an integrated model. Model results are compared to observed streamflow and steady state water table depth (WTD). We provide model results for a range of configurations and show that both WTD and surface water partitioning are important indicators of performance. We also show that geology data source, total subsurface depth, anisotropy, and inclusion of a vertical flow barrier are the most important considerations for subsurface configurations. While a range of configurations proved viable, we provide a recommended Selected National Configuration 1 km resolution subsurface dataset for use in distributed large- and continental-scale hydrologic modeling.
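A subsurface "configuration" of the kind described (upper unconfined unit, lower confined unit, optional vertical flow barrier, anisotropy ratio) can be sketched as a simple layered grid. The function and parameter names below are illustrative only, not the dataset's actual schema.

```python
import numpy as np

def build_subsurface(nz, ny, nx, k_upper, k_lower, upper_layers,
                     anisotropy=1.0, barrier_layer=None, k_barrier=1e-6):
    """Assemble horizontal (kh) and vertical (kv) hydraulic-conductivity
    grids: upper (unconfined) layers over lower (confined) layers, with
    an optional low-K vertical flow barrier between them.
    Layer index 0 is taken to be the land surface here."""
    kh = np.full((nz, ny, nx), float(k_lower))
    kh[:upper_layers] = k_upper
    if barrier_layer is not None:
        kh[barrier_layer] = k_barrier    # thin confining unit
    kv = kh / anisotropy                 # vertical K from the Kh/Kv ratio
    return kh, kv
```

In a real configuration the upper and lower K fields would be filled from a geology data source per grid cell rather than from two constants, which is exactly the choice (data source, depth, anisotropy, barrier) the study identifies as most important.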
-
This study synthesizes two different methods for estimating hydraulic conductivity (K) at large scales. We derive analytical approaches that estimate K and apply them to the contiguous United States. We then compare these analytical approaches to three-dimensional, national gridded K data products and three transmissivity (T) data products developed from publicly available sources. We evaluate these data products using multiple approaches: comparing their statistics qualitatively and quantitatively and with hydrologic model simulations. Some of these datasets were used as inputs for an integrated hydrologic model of the Upper Colorado River Basin, and the comparison of the results with observations was used to further evaluate the K data products. Simulated average daily streamflow was compared to daily flow data from 10 USGS stream gages in the domain, and annually averaged simulated groundwater depths were compared to observations from nearly 2000 monitoring wells. We find streamflow predictions from analytically informed simulations to be similar in relative bias and Spearman's rho to the geologically informed simulations.
R-squared values for groundwater depth predictions are close between the best performing analytically and geologically informed simulations at 0.68 and 0.70 respectively, with RMSE values under 10 m. We also show that the analytical approach derived by this study produces estimates of K that are similar in spatial distribution, standard deviation, mean value, and modeling performance to geologically informed estimates. The results of this work are used to inform a follow-on study that tests additional data-driven approaches in multiple basins within the contiguous United States.
-
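The evaluation metrics named in the preceding abstract (relative bias, Spearman's rho, R-squared, RMSE) can be computed in a few lines of NumPy; this is a generic sketch of the standard formulas, not the study's evaluation code.

```python
import numpy as np

def relative_bias(sim, obs):
    """Fractional over/under-prediction of the mean."""
    return float((np.mean(sim) - np.mean(obs)) / np.mean(obs))

def rmse(sim, obs):
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

def r_squared(sim, obs):
    """Coefficient of determination against the observed mean."""
    sim, obs = np.asarray(sim), np.asarray(obs)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def spearman_rho(a, b):
    """Pearson correlation of ranks (assumes no ties, for brevity)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])
```

Here `sim` and `obs` would be, e.g., daily streamflow at a gage or annually averaged groundwater depth at the monitoring wells.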
Deep Neural Networks (DNNs) have become a popular instrument for solving various real-world problems. DNNs' sophisticated structure allows them to learn complex representations and features, but their architecture and reliance on floating-point arithmetic also increase their computational cost. For this reason, Binary Neural Networks (BNNs) are widely used on edge devices, such as microcomputers. Like other DNNs, BNNs are vulnerable to adversarial attacks; even a small perturbation of the input may lead to an errant output. Unfortunately, only a few approaches have been proposed for verifying BNNs. This paper proposes an approach to verify BNNs on a continuous input space using star reachability analysis. Our approach can compute both exact and overapproximate reachable sets of BNNs with Sign activation functions and use them for verification. The proposed approach is also efficient in constructing a complete set of counterexamples in case a network is unsafe. We implemented our approach in NNV, a neural network verification tool for DNNs and learning-enabled Cyber-Physical Systems. The experimental results show that our star-based approach is less conservative, more efficient, and scalable than the recent SMT-based method implemented in Marabou. We also provide a comparison with the quantization-based tool EEVBNN.
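The effect of a Sign activation on a set of inputs can be sketched with a simple interval bound. Unlike the paper's star sets, intervals discard linear dependencies between neurons, so this only illustrates why neurons whose pre-activation range spans zero force either a case split (exact analysis) or an overapproximation.

```python
import numpy as np

def sign_reach_interval(W, b, x_lo, x_hi):
    """Interval overapproximation of sign(W x + b) for x in the box
    [x_lo, x_hi], with sign(z) = +1 if z >= 0 else -1.
    Returns per-neuron lower/upper outputs in {-1, +1}; a neuron whose
    pre-activation interval spans zero can produce either value."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_lo = Wp @ x_lo + Wn @ x_hi + b   # tightest lower pre-activation
    pre_hi = Wp @ x_hi + Wn @ x_lo + b   # tightest upper pre-activation
    out_lo = np.where(pre_lo >= 0, 1.0, -1.0)   # sign is monotone
    out_hi = np.where(pre_hi >= 0, 1.0, -1.0)
    return out_lo, out_hi
```

An exact star-based analysis would instead split the input set on the hyperplane where each undetermined neuron's pre-activation is zero, which is where the complete set of counterexamples comes from.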