This content will become publicly available on December 1, 2025

Title: A Stability Analysis of Neural Networks and Its Application to Tsunami Early Warning
Neural networks (NNs) enable precise modeling of complicated geophysical phenomena but can be sensitive to small input changes. In this work, we present a new method for analyzing this instability in NNs. We focus our analysis on adversarial examples, test-time inputs with carefully crafted human-imperceptible perturbations that expose the worst-case instability in a model's predictions. Our stability analysis is based on a low-rank expansion of NNs at a fixed input, and we apply our analysis to an NN model for tsunami early warning that takes geodetic measurements as input and forecasts tsunami waveforms. The result is an improved description of local stability that explains adversarial examples generated by a standard gradient-based algorithm and allows the generation of other comparable examples. Our analysis can predict whether noise in the geodetic input will produce an unstable output, and it identifies a potential approach to filtering the input that enables more robust forecasting.
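
For intuition, a minimal sketch of one plausible reading of this local stability analysis follows, treating the low-rank expansion as a truncated SVD of the network's input-output Jacobian at a fixed input; the stand-in model, the sizes, and that reading are assumptions for illustration, not the paper's actual method or tsunami-forecasting network.

    # Illustrative sketch only: local linearization of a stand-in NN at a fixed
    # input, read here as a truncated SVD of the input-output Jacobian.  The
    # model, sizes, and this reading of the "low-rank expansion" are assumptions,
    # not the authors' tsunami-forecasting network.
    import torch

    model = torch.nn.Sequential(                 # stand-in forecasting model
        torch.nn.Linear(64, 128), torch.nn.Tanh(), torch.nn.Linear(128, 32)
    )
    x0 = torch.randn(64)                         # fixed (illustrative) geodetic input

    # Jacobian of the forecast with respect to the input, evaluated at x0.
    J = torch.autograd.functional.jacobian(model, x0)      # shape (32, 64)

    # Truncated SVD: leading right singular vectors span the most unstable
    # input directions; perturbing along them approximates a worst case.
    U, S, Vh = torch.linalg.svd(J, full_matrices=False)
    delta = 1e-2 * Vh[0]
    print("leading sensitivities:", S[:3])
    print("output change along top direction:",
          torch.norm(model(x0 + delta) - model(x0)).item())
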
Award ID(s): 2103713
PAR ID: 10579421
Author(s) / Creator(s): ; ; ; ;
Publisher / Repository: AGU
Date Published:
Journal Name: Journal of Geophysical Research: Machine Learning and Computation
Volume: 1
Issue: 4
ISSN: 2993-5210
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The pervasiveness of neural networks (NNs) in critical computer vision and image processing applications makes them very attractive targets for adversarial manipulation. A large body of existing research thoroughly investigates two broad categories of attacks targeting the integrity of NN models. The first category, commonly called adversarial examples, perturbs the model's inference by carefully adding noise to input examples. In the second category, adversaries try to manipulate the model during the training process by implanting Trojan backdoors. Researchers have shown that such attacks pose severe threats to the growing applications of NNs and have proposed several defenses against each attack type individually. However, such one-sided defense approaches leave potentially unknown risks in real-world scenarios, where an adversary can unify different attacks to create new and more lethal ones that bypass existing defenses. In this work, we show how to jointly exploit adversarial-perturbation and model-poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan. AdvTrojan is stealthy because it is activated only when 1) a carefully crafted adversarial perturbation is injected into the input examples during inference, and 2) a Trojan backdoor has been implanted during the training process of the model. We leverage adversarial noise in the input space to move Trojan-infected examples across the model's decision boundary, making the attack difficult to detect. This stealthy behavior fools users into trusting the infected model as a robust classifier against adversarial examples. AdvTrojan can be implemented by poisoning only the training data, similar to conventional Trojan backdoor attacks. Our thorough analysis and extensive experiments on several benchmark datasets show that AdvTrojan bypasses existing defenses with a success rate close to 100% in most of our experimental scenarios and can be extended to attack federated learning as well as high-resolution images.
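
For intuition only, the sketch below combines the two ingredients described above, a backdoor trigger stamped onto the input and a small gradient-based perturbation, at inference time; the FGSM-style step, the corner patch, and the parameter values are illustrative assumptions rather than the actual AdvTrojan construction, and such an attack would only activate on a model whose training data had already been poisoned.

    # Minimal sketch, for intuition only, of the two ingredients combined at
    # inference time: a backdoor trigger stamped onto the input plus a small
    # gradient-based (FGSM-style) perturbation.  The 4x4 corner patch, epsilon,
    # and loss are illustrative assumptions, not the AdvTrojan construction.
    import torch
    import torch.nn.functional as F

    def trigger_plus_perturbation(model, x, y, eps=0.03, patch_value=1.0):
        """Return inputs carrying a toy trigger and an adversarial nudge."""
        x = x.clone()                             # x: (N, C, H, W) images in [0, 1]
        x[..., -4:, -4:] = patch_value            # toy trigger: bottom-right patch
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)       # untargeted loss on true labels
        grad = torch.autograd.grad(loss, x)[0]
        # The perturbation pushes trigger-bearing examples across the boundary.
        return (x + eps * grad.sign()).clamp(0, 1).detach()
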
  2. Tsunamis generated by seafloor displacements accompanying large submarine earthquakes provide sensitivity to absolute slip position and distribution for offshore faulting analogous to that of geodetic observations for landward faulting. Tsunami recordings at deep‐water and near‐shore ocean bottom pressure sensors and tide gauges, along with runup and inundation measurements, can now be reliably modeled using detailed bathymetric structures and robust numerical codes. As a result, tsunami observations now play an important role in quantifying coseismic slip distributions for large submarine earthquakes in subduction zones and other tectonic environments. Applications of joint modeling or inversion of seismic, geodetic and tsunami observations for recent major earthquakes are described, highlighting the specific contributions of the tsunami observations to source model resolution. Tsunami observations provide unique information on the up‐dip extent of earthquake coseismic slip on subduction zone megathrust faults and occurrence of near‐trench slip, which are usually not well constrained by seismic and land‐based geodetic signals. Tsunami signals also help to detect offshore slow slip that is not evident in seismic or land‐based geodetic data and to balance geophysical constraints on ruptures that extend from on‐shore to off‐shore. Tsunami runup measurements and stratigraphic deposits further provide unique constraints on large earthquake ruptures that occurred prior to modern geophysical instrumentation. 
  3. Neural networks (NNs) are increasingly employed in safety-critical systems. It is therefore necessary to ensure that these NNs are robust against malicious interference in the form of adversarial attacks, which cause an NN to misclassify inputs. Many proposed defenses against such attacks incorporate randomness in order to make it harder for an attacker to find small input modifications that result in misclassification. Stochastic computing (SC) is a type of approximate computing based on pseudo-random bit-streams that has been successfully used to implement convolutional neural networks (CNNs). Some results have previously suggested that such stochastic CNNs (SCNNs) are partially robust against adversarial attacks. In this work, we demonstrate that SCNNs do indeed possess inherent protection against some powerful adversarial attacks. Our results show that the white-box C&W attack is up to 16x less successful against an SCNN than against an equivalent binary NN, and the Boundary Attack even fails to generate adversarial inputs in many cases.
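
As a minimal illustration of the stochastic-computing primitive these SCNNs build on, the sketch below encodes values in [0, 1] as pseudo-random bitstreams and multiplies them with a bitwise AND, the standard unipolar SC multiplier; the stream length and generator are arbitrary choices for illustration, not details of the hardware studied in the paper.

    # Minimal sketch of the stochastic-computing primitive behind SCNNs:
    # a value p in [0, 1] is encoded as a pseudo-random bitstream whose
    # fraction of 1s is about p, and unipolar multiplication is a bitwise AND.
    # Stream length and generator are arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)

    def to_stream(p, n=4096):
        """Encode p as a bitstream with a ~p fraction of 1s."""
        return (rng.random(n) < p).astype(np.uint8)

    def sc_multiply(p, q, n=4096):
        """Unipolar SC multiplication: AND two independent bitstreams."""
        return float(np.mean(to_stream(p, n) & to_stream(q, n)))

    print(sc_multiply(0.6, 0.5))   # roughly 0.30, plus inherent stochastic noise
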
  4. Deep Neural Networks (DNNs) have shown phenomenal success in a wide range of real-world applications. However, a concerning weakness of DNNs is that they are vulnerable to adversarial attacks. Although methods exist to detect adversarial attacks, they are often constrained to specific attack types and provide limited information to downstream systems. We specifically note that existing adversarial detectors are often binary classifiers, which only differentiate clean from adversarial examples. However, detecting adversarial examples is much more complicated than such a binary scenario. Our key insight is that the confidence probability with which an input sample is detected as an adversarial example is more useful for the system to properly take action against potential attacks. In this work, we propose an innovative method for fast confidence detection of adversarial attacks based on the integrity of the sensor pattern noise embedded in input examples. Experimental results show that our proposed method is capable of providing a confidence distribution model for most popular adversarial attacks. Furthermore, the presented method can provide early attack warnings, and even identify the attack type, based on different properties of the confidence distribution models. Since fast confidence detection is a computationally heavy task, we propose an FPGA-based hardware architecture built on a series of optimization techniques, such as incremental multi-level quantization. We realize our proposed method on an FPGA platform and achieve a high efficiency of 29.740 IPS/W with a power consumption of only 0.7626 W.
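
A hypothetical sketch of how a sensor-pattern-noise integrity check could yield a confidence score follows: extract a noise residual from the input image and correlate it against the camera's reference pattern, with low correlation suggesting the embedded noise has been disturbed; the denoiser, the score, and all names here are stand-ins, not the paper's extraction, confidence-distribution modeling, or FPGA pipeline.

    # Hypothetical sketch of a sensor-pattern-noise integrity check: extract a
    # noise residual from the incoming image and correlate it with a camera's
    # reference pattern.  The Gaussian-blur denoiser, the correlation score, and
    # all names here are stand-ins, not the paper's extraction or FPGA pipeline.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img):
        """Crude noise extraction: image minus a low-pass (denoised) version."""
        return img - gaussian_filter(img, sigma=1.0)

    def spn_confidence(img, reference_pattern):
        """Normalized correlation of the residual with the reference pattern;
        low values suggest the embedded sensor noise has been disturbed."""
        r = noise_residual(img).ravel()
        k = reference_pattern.ravel()
        r, k = r - r.mean(), k - k.mean()
        return float(np.dot(r, k) / (np.linalg.norm(r) * np.linalg.norm(k) + 1e-12))
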
  5. Formal certification of Neural Networks (NNs) is crucial for ensuring their safety, fairness, and robustness. Unfortunately, on the one hand, sound and complete certification algorithms for ReLU-based NNs do not scale to large-scale NNs. On the other hand, incomplete certification algorithms are easier to compute, but they result in loose bounds that deteriorate with the depth of the NN, which diminishes their effectiveness. In this paper, we ask the following question: can we replace the ReLU activation function with one that opens the door to incomplete certification algorithms that are easy to compute but can produce tight bounds on the NN's outputs? We introduce DeepBern-Nets, a class of NNs with activation functions based on Bernstein polynomials instead of the commonly used ReLU activation. Bernstein polynomials are smooth and differentiable functions with desirable properties such as the so-called range-enclosure and subdivision properties. We design a novel Interval Bound Propagation (IBP) algorithm, called Bern-IBP, to efficiently compute tight bounds on DeepBern-Nets outputs. Our approach leverages the properties of Bernstein polynomials to improve the tractability of neural network certification tasks while maintaining the accuracy of the trained networks. We conduct experiments in adversarial robustness and reachability analysis settings to assess the effectiveness of the approach. Our proposed framework achieves high certified accuracy for adversarially trained NNs, which is often a challenging task for certifiers of ReLU-based NNs. This work establishes Bernstein polynomial activations as a promising alternative for improving NN certification across various applications.
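
The range-enclosure property mentioned above can be demonstrated in a few lines: a Bernstein polynomial is a convex combination of its coefficients, so its values over its domain lie between the smallest and largest coefficient, which is what makes interval bounds cheap to propagate; the degree, coefficients, and interval in the sketch below are illustrative, not the Bern-IBP implementation.

    # Sketch of the range-enclosure property that cheap Bernstein bounds rest on:
    # a Bernstein polynomial is a convex combination of its coefficients, so on
    # its domain it lies between the smallest and largest coefficient.  Degree,
    # coefficients, and interval are illustrative; this is not Bern-IBP itself.
    import math

    def bernstein_eval(coeffs, x, lo=0.0, hi=1.0):
        """Evaluate the Bernstein polynomial with these coefficients at x."""
        n = len(coeffs) - 1
        t = (x - lo) / (hi - lo)
        return sum(c * math.comb(n, i) * t**i * (1 - t)**(n - i)
                   for i, c in enumerate(coeffs))

    def bernstein_enclosure(coeffs):
        """Valid output bounds for any input in [lo, hi], in O(n) time."""
        return min(coeffs), max(coeffs)

    coeffs = [0.0, 0.2, 0.9, 1.0]                # illustrative activation coefficients
    print(bernstein_eval(coeffs, 0.3), bernstein_enclosure(coeffs))
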