

Title: Quantification and propagation of aleatoric uncertainties in topological structures
Quantification and propagation of aleatoric uncertainties distributed in complex topological structures remain a challenge. Existing uncertainty quantification and propagation approaches can only handle parametric uncertainties or high-dimensional random quantities distributed in a simply connected spatial domain; a systematic method that captures the topological characteristics of the structural domain in uncertainty analysis is still lacking. Therefore, this paper presents a new methodology that quantifies and propagates aleatoric uncertainties, such as spatially varying local material properties and defects, distributed in a topological spatial domain. We propose a new random field-based uncertainty representation approach that captures the topological characteristics using the shortest interior path distance. Parameterization methods such as probabilistic principal component analysis (PPCA) and the β-variational autoencoder (β-VAE) are employed to convert the random field representation of uncertainty into a small set of independent random variables. Non-intrusive uncertainty propagation methods, such as polynomial chaos expansion and univariate dimension reduction, are then employed to propagate the parametric uncertainties to the output of the problem. The effectiveness of the proposed methodology is demonstrated by engineering case studies, and its accuracy and computational efficiency are confirmed by comparison with reference values from Monte Carlo simulations with a sufficiently large number of samples.
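
Below is a minimal, self-contained sketch (not the authors' implementation) of the pipeline the abstract outlines: a spatially varying property is represented as a random field whose correlation depends on a distance between points (the paper uses the shortest interior path distance; plain Euclidean distance stands in here), the field is reduced to a few independent random variables by a truncated eigen-expansion in the spirit of the PPCA step, and those variables are propagated through a placeholder model by plain Monte Carlo sampling as a simple non-intrusive stand-in for polynomial chaos expansion or univariate dimension reduction. All variable names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: 200 nodes of a 2-D structural domain.
nodes = rng.uniform(0.0, 1.0, size=(200, 2))

# Correlation matrix from pairwise distances (exponential kernel, length scale 0.2).
# The paper uses the shortest interior path distance; Euclidean distance is a stand-in.
dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
corr = np.exp(-dist / 0.2)

# Truncated eigen-expansion: keep the modes carrying ~99% of the variance.
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.99)) + 1

def sample_field(xi):
    """Map k independent standard normal variables to a correlated field realization."""
    return eigvec[:, :k] @ (np.sqrt(eigval[:k]) * xi)

def model(field):
    """Placeholder black-box response (a structural FE solver in practice)."""
    return float(np.mean(np.exp(0.1 * field)))

# Non-intrusive propagation over the reduced variables (plain Monte Carlo here).
outputs = [model(sample_field(rng.standard_normal(k))) for _ in range(2000)]
print("output mean %.4f, std %.4f" % (np.mean(outputs), np.std(outputs)))
```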
Award ID(s):
2142290
NSF-PAR ID:
10469945
Author(s) / Creator(s):
; ;
Publisher / Repository:
Elsevier
Date Published:
Journal Name:
Reliability Engineering & System Safety
Volume:
233
Issue:
C
ISSN:
0951-8320
Page Range / eLocation ID:
109122
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Quantifying uncertainties for machine learning models is a critical step toward reducing human verification effort by detecting predictions with low confidence. This paper proposes a method for uncertainty quantification (UQ) of table structure recognition (TSR). The proposed UQ method is built upon a mixture-of-experts approach termed Test-Time Augmentation (TTA). Our key idea is to enrich and diversify the table representations in order to spotlight the cells with high recognition uncertainty. To evaluate the effectiveness, we propose two heuristics to differentiate highly uncertain cells from normal cells, namely masking and cell complexity quantification. Masking involves varying the pixel intensity to assess the detection uncertainty, while cell complexity quantification gauges the uncertainty of each cell by its topological relation with neighboring cells. The evaluation results on standard benchmark datasets demonstrate that the proposed method is effective in quantifying uncertainty in TSR models. To the best of our knowledge, this study is the first of its kind to enable UQ in TSR tasks.
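
     As a rough illustration (not the paper's code), the snippet below shows the TTA idea in miniature: a hypothetical TSR model `recognize_cells` is run on several intensity-jittered copies of the same page image, and the spread of its per-cell confidence scores across the augmented copies serves as the uncertainty signal for routing cells to human review. The model, its output format, and the thresholds are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def recognize_cells(image):
    """Stand-in for a TSR model: returns one confidence score per detected cell.

    Here the 'cells' are simply five fixed image patches and the confidence is a
    deterministic function of patch intensity, so any spread across augmented
    copies comes from the augmentation itself.
    """
    patches = np.array_split(image, 5, axis=1)
    return np.array([np.clip(0.6 + 0.4 * p.mean(), 0.0, 1.0) for p in patches])

def tta_uncertainty(image, n_aug=8, intensity_jitter=0.15):
    """Per-cell uncertainty = std of confidences across intensity-jittered copies."""
    scores = []
    for _ in range(n_aug):
        factor = 1.0 + rng.uniform(-intensity_jitter, intensity_jitter)
        scores.append(recognize_cells(np.clip(image * factor, 0.0, 1.0)))
    scores = np.stack(scores)                      # shape: (n_aug, n_cells)
    return scores.mean(axis=0), scores.std(axis=0)

page = rng.uniform(0.0, 1.0, size=(64, 64))        # toy grayscale page image
mean_conf, uncertainty = tta_uncertainty(page)
flagged = np.where(uncertainty > 0.01)[0]          # cells to route to human review
print(mean_conf.round(3), uncertainty.round(3), flagged)
```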
  2. Hyperspectral imaging (HSI) technology captures spectral information across a broad wavelength range, providing richer pixel features than traditional color images with only three channels. Although pixel classification in HSI has been extensively studied, especially with graph convolutional neural networks (GCNs), quantifying the epistemic and aleatoric uncertainties associated with HSI classification (HSIC) results remains an unexplored area. These two uncertainties are effective for out-of-distribution (OOD) and misclassification detection, respectively. In this paper, we adapt two advanced uncertainty quantification models designed for node classification in graphs, evidential GCNs (EGCN) and graph posterior networks (GPN), to HSIC. We first show theoretically that the popular uncertainty cross-entropy (UCE) loss function is insufficient to produce good epistemic uncertainty when learning EGCNs. To mitigate this limitation, we propose two regularization terms. One leverages the inherent property of HSI data that each feature vector is a linear combination of the spectral signatures of the confounding materials, while the other is a total variation (TV) regularization that enforces spatial smoothness of the evidence in an edge-preserving manner. We demonstrate the effectiveness of the proposed regularization terms on both EGCN and GPN on three real-world HSIC datasets for OOD and misclassification detection tasks.
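
     A small numpy sketch of the two ingredients named in the abstract follows, under the caveat that it is an illustration rather than the authors' code: the uncertainty cross-entropy (UCE) loss evaluated on Dirichlet evidence, and a total variation (TV) term that penalizes L1 differences of the evidence across graph edges so that smoothness is encouraged while sharp class boundaries are preserved. The unmixing-based regularizer mentioned in the abstract is omitted here, and the weight 0.1 is an arbitrary placeholder.

```python
import numpy as np
from scipy.special import digamma

def uce_loss(evidence, labels):
    """UCE loss for Dirichlet evidence: alpha = evidence + 1, labels one-hot."""
    alpha = evidence + 1.0                          # (n_nodes, n_classes)
    strength = alpha.sum(axis=1, keepdims=True)     # Dirichlet strength per node
    return np.mean(np.sum(labels * (digamma(strength) - digamma(alpha)), axis=1))

def tv_regularizer(evidence, edges):
    """Edge-preserving spatial smoothness: L1 difference of evidence over graph edges."""
    i, j = edges[:, 0], edges[:, 1]
    return np.mean(np.abs(evidence[i] - evidence[j]).sum(axis=1))

# Toy example: 4 nodes, 3 classes, a small chain of edges.
rng = np.random.default_rng(0)
evidence = rng.exponential(1.0, size=(4, 3))
labels = np.eye(3)[[0, 0, 1, 2]]
edges = np.array([[0, 1], [1, 2], [2, 3]])
total = uce_loss(evidence, labels) + 0.1 * tv_regularizer(evidence, edges)
print(round(float(total), 4))
```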
  3. Abstract

    Models of bathymetry derived from satellite radar altimetry are essential for modeling many marine processes. They are affected by uncertainties that require quantification. We propose an uncertainty model that assumes errors are caused by the lack of high‐wavenumber content within the altimetry data. The model is then applied to a tsunami hazard assessment. We build a bathymetry uncertainty model for northern Chile, obtaining the statistical properties of the altimetry‐predicted bathymetry error from multibeam data. We find that a von Kármán correlation function and a Laplacian marginal distribution can be used to define an uncertainty model based on a random field. We also propose a method for generating synthetic bathymetry samples conditional on shipboard measurements. The method is further extended to account for interpolation uncertainties when the bathymetry data resolution is finer than 10 km. We illustrate the usefulness of the method by quantifying the bathymetry‐induced uncertainty of a tsunami hazard estimate. We demonstrate that tsunami leading-wave predictions at middle/near-field tide gauges and buoys are insensitive to bathymetry uncertainties in Chile. This result implies that tsunami early warning approaches can take full advantage of altimetry‐predicted bathymetry in numerical simulations. Finally, we evaluate the feasibility of modeling uncertainties in regions without multibeam data by assessing the bathymetry error statistics of 15 globally distributed regions. We find that a general von Kármán correlation and a Laplacian marginal distribution can serve as a first‐order approximation. The standard deviation of the uncertainty random field model varies regionally and is estimated from a proposed scaling law.
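
    A brief sketch of the random-field construction described above, with illustrative parameter values rather than the paper's calibrated ones: a correlated Gaussian field is drawn on a 1-D transect using a von Kármán correlation matrix, then each value is passed through the Gaussian CDF and the inverse Laplace CDF so that the marginals become Laplacian while the spatial correlation structure is approximately retained.

```python
import numpy as np
from scipy.special import gamma, kv
from scipy.stats import laplace, norm

def von_karman_corr(r, a=20.0, nu=0.5):
    """Von Karman correlation: (r/a)^nu * K_nu(r/a), normalized to 1 as r -> 0."""
    r = np.maximum(r, 1e-9)                         # avoid the singular point r = 0
    c = (r / a) ** nu * kv(nu, r / a)
    return c / (2.0 ** (nu - 1.0) * gamma(nu))

# 1-D transect of bathymetry nodes, 1 km spacing; a and nu are placeholder values.
x = np.arange(0.0, 100.0, 1.0)
corr = von_karman_corr(np.abs(x[:, None] - x[None, :]))
L = np.linalg.cholesky(corr + 1e-8 * np.eye(len(x)))

rng = np.random.default_rng(0)
gauss_field = L @ rng.standard_normal(len(x))       # correlated Gaussian field
u = norm.cdf(gauss_field)                           # uniform marginals
error_field = laplace.ppf(u, loc=0.0, scale=30.0)   # Laplacian marginals (meters)
print(error_field[:5].round(2))
```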

     