- Award ID(s): 2142290
- NSF-PAR ID: 10469945
- Publisher / Repository: Elsevier
- Date Published:
- Journal Name: Reliability Engineering & System Safety
- Volume: 233
- Issue: C
- ISSN: 0951-8320
- Page Range / eLocation ID: 109122
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Quantifying uncertainties for machine learning models is a critical step toward reducing human verification effort by detecting predictions with low confidence. This paper proposes a method for uncertainty quantification (UQ) of table structure recognition (TSR). The proposed UQ method is built upon a mixture-of-experts approach termed Test-Time Augmentation (TTA). Our key idea is to enrich and diversify the table representations in order to spotlight the cells with high recognition uncertainty. To evaluate the effectiveness, we propose two heuristics to differentiate highly uncertain cells from normal cells: masking and cell complexity quantification. Masking involves varying the pixel intensity to reveal detection uncertainty. Cell complexity quantification gauges the uncertainty of each cell through its topological relation with neighboring cells. Evaluation results on standard benchmark datasets demonstrate that the proposed method is effective in quantifying uncertainty in TSR models. To the best of our knowledge, this study is the first of its kind to enable UQ in TSR tasks.
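The record does not include an implementation, but the general test-time-augmentation idea described above can be illustrated with a short sketch: run a TSR model on several augmented views of the same table image and treat the spread of the per-cell confidences as an uncertainty score. The `model` and `augmentations` interfaces below are assumptions for illustration, not the paper's API.

```python
import numpy as np

def tta_uncertainty(model, image, augmentations):
    """Rough TTA-style uncertainty estimate (hypothetical interface).

    `model(image)` is assumed to return an array of per-cell confidences
    of a fixed shape; `augmentations` is a list of callables, e.g. the
    intensity masking mentioned in the abstract.
    """
    predictions = [model(aug(image)) for aug in augmentations]
    stacked = np.stack(predictions)        # (n_augmentations, n_cells)
    mean_conf = stacked.mean(axis=0)       # averaged cell confidence
    uncertainty = stacked.std(axis=0)      # disagreement across views
    return mean_conf, uncertainty

# Cells whose uncertainty exceeds a chosen threshold would be flagged
# for human verification.
```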
-
Hyperspectral imaging (HSI) technology captures spectral information across a broad wavelength range, providing richer pixel features than traditional color images with only three channels. Although pixel classification in HSI has been extensively studied, especially using graph convolutional neural networks (GCNs), quantifying the epistemic and aleatoric uncertainties associated with HSI classification (HSIC) results remains an unexplored area. These two uncertainties are effective for out-of-distribution (OOD) and misclassification detection, respectively. In this paper, we adapt two advanced uncertainty quantification models designed for node classification in graphs, evidential GCNs (EGCN) and graph posterior networks (GPN), to the realm of HSIC. We first show theoretically that the popular uncertainty cross-entropy (UCE) loss function is insufficient to produce good epistemic uncertainty when learning EGCNs. To mitigate this limitation, we propose two regularization terms. One leverages the inherent property of HSI data that each feature vector is a linear combination of the spectral signatures of the confounding materials, while the other is a total variation (TV) regularization that enforces spatial smoothness of the evidence in an edge-preserving manner. We demonstrate the effectiveness of the proposed regularization terms on both EGCN and GPN on three real-world HSIC datasets for OOD and misclassification detection tasks.
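As a point of reference for the uncertainty cross-entropy (UCE) loss and the total-variation (TV) smoothness penalty mentioned above, the following is a minimal PyTorch sketch of their standard forms. The tensor shapes and the edge-list representation of the pixel graph are assumptions; the paper's exact formulation may differ.

```python
import torch

def uce_loss(alpha, y_onehot):
    """Uncertainty cross-entropy for Dirichlet parameters `alpha`
    (shape: n_pixels x n_classes): sum_c y_c * (digamma(S) - digamma(alpha_c)),
    the loss the abstract argues is insufficient on its own for good
    epistemic uncertainty."""
    S = alpha.sum(dim=-1, keepdim=True)
    return (y_onehot * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=-1).mean()

def tv_regularizer(evidence, edges):
    """A rough TV-style penalty on per-pixel evidence, encouraging
    spatial smoothness across neighboring pixels. `edges` is a
    hypothetical list of (i, j, weight) tuples over the pixel graph."""
    penalty = 0.0
    for i, j, w in edges:
        penalty = penalty + w * torch.abs(evidence[i] - evidence[j]).sum()
    return penalty / max(len(edges), 1)
```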
-
Models of bathymetry derived from satellite radar altimetry are essential for modeling many marine processes, and they are affected by uncertainties that require quantification. We propose an uncertainty model that assumes errors are caused by the lack of high‐wavenumber content within the altimetry data, and apply it to a tsunami hazard assessment. We build a bathymetry uncertainty model for northern Chile. Statistical properties of the altimetry‐predicted bathymetry error are obtained using multibeam data. We find that a Von Karman correlation function and a Laplacian marginal distribution can be used to define an uncertainty model based on a random field. We also propose a method for generating synthetic bathymetry samples conditioned on shipboard measurements. The method is further extended to account for interpolation uncertainties when the bathymetry data resolution is finer than ∼10 km. We illustrate the usefulness of the method by quantifying the bathymetry‐induced uncertainty of a tsunami hazard estimate. We demonstrate that tsunami leading-wave predictions at middle- and near-field tide gauges and buoys are insensitive to bathymetry uncertainties in Chile. This result implies that tsunami early warning approaches can take full advantage of altimetry‐predicted bathymetry in numerical simulations. Finally, we evaluate the feasibility of modeling uncertainties in regions without multibeam data by assessing the bathymetry error statistics of 15 globally distributed regions. We find that a general Von Karman correlation and a Laplacian marginal distribution can serve as a first‐order approximation. The standard deviation of the uncertainty random field model varies regionally and is estimated from a proposed scaling law.
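For readers unfamiliar with the Von Karman correlation function mentioned above, the snippet below evaluates its standard Matérn-type form, C(r) = 2^(1-ν)/Γ(ν) · (r/a)^ν · K_ν(r/a). The parameter names and the note on generating Laplacian-marginal samples are illustrative assumptions, not values or procedures taken from the paper.

```python
import numpy as np
from scipy.special import kv, gamma

def von_karman_correlation(r, corr_length, nu):
    """Von Karman correlation at lag distance(s) `r`.

    `corr_length` (a) and `nu` (smoothness) are illustrative parameter
    names; K_nu is the modified Bessel function of the second kind.
    """
    r = np.atleast_1d(np.asarray(r, dtype=float))
    x = np.maximum(r / corr_length, 1e-12)   # avoid the singularity at r = 0
    c = (2.0 ** (1.0 - nu) / gamma(nu)) * (x ** nu) * kv(nu, x)
    c[r == 0] = 1.0                          # correlation is exactly 1 at zero lag
    return c

# A Gaussian random field with this correlation could then be mapped to
# Laplacian marginals (e.g., via a probability integral transform) and
# scaled by a regionally varying standard deviation, as the abstract
# suggests the full uncertainty model does.
```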