This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and discrepancies in topology between original and decompressed datasets can lead to erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of the minimum/maximum labels induced by the integral line of each vertex. The key is to derive a series of edits at compression time; the edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed bound. To this end, we developed a workflow that fixes extrema and integral lines alternately until convergence within finitely many iterations; we accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, showing significant acceleration with an NVIDIA A100 GPU.
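A minimal Python sketch of the repair idea described above, assuming a 2D regular-grid scalar field for brevity (the paper targets piecewise linear fields and also repairs ascending integral lines; descent_labels, fix_segmentation, and the crude restore-to-original repair are hypothetical illustrations, not the authors' kernels):

```python
import numpy as np

def descent_labels(f):
    """Label each grid vertex with the flat index of the local minimum
    reached by repeatedly stepping to the smallest 4-neighbor -- a
    discrete stand-in for tracing descending integral lines."""
    h, w = f.shape
    labels = np.empty((h, w), dtype=np.int64)
    for i in range(h):
        for j in range(w):
            ci, cj = i, j
            while True:
                ni, nj = ci, cj
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ti, tj = ci + di, cj + dj
                    if 0 <= ti < h and 0 <= tj < w and f[ti, tj] < f[ni, nj]:
                        ni, nj = ti, tj
                if (ni, nj) == (ci, cj):
                    break  # reached a local minimum
                ci, cj = ni, nj
            labels[i, j] = ci * w + cj
    return labels

def fix_segmentation(orig, dec, eps, max_iters=50):
    """Iteratively edit the decompressed field until its descent labels
    match the original's, keeping every vertex within the bound eps."""
    out = dec.copy()
    target = descent_labels(orig)
    for _ in range(max_iters):
        bad = descent_labels(out) != target
        if not bad.any():
            break  # converged: segmentation is preserved
        # Crude repair: restore true values at mismatched vertices
        # (trivially within eps of the original). The paper instead
        # derives minimal per-vertex edits within the error bound.
        out[bad] = orig[bad]
    # The edits (out - dec) would be stored losslessly alongside the
    # compressed stream and applied after decompression.
    return out
```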
We explore an error-bounded lossy compression approach for reducing scientific data associated with 2D/3D unstructured meshes. While existing lossy compressors offer high compression ratios with bounded error for regular grid data, methodologies tailored for unstructured mesh data are lacking; for example, one can compress nodal data as 1D arrays, but doing so neglects the spatial coherency of the mesh nodes. Inspired by the SZ compressor, which predicts and quantizes values in a multidimensional array, we dynamically reorganize nodal data into sequences. Each sequence starts with a seed cell; based on a predefined traversal order, the next cell is added to the sequence if the current cell can predict and quantize the nodal data in the next cell within the given error bound. As a result, one can efficiently compress the quantized nodal data in each sequence until all mesh nodes are traversed. This paper also introduces a suite of novel error metrics, namely continuous mean squared error (CMSE) and continuous peak signal-to-noise ratio (CPSNR), to assess compression results for unstructured mesh data. The continuous error metrics are defined by integrating the error function over all cells, providing objective statistics across nonuniformly distributed nodes/cells in the mesh. We evaluate our methods with several scientific simulations, ranging from ocean-climate models to computational fluid dynamics simulations, using both traditional and continuous error metrics, and demonstrate superior compression ratios and quality compared with existing lossy compressors.
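Spelled out from the description above (a plausible reconstruction; the paper's exact normalization may differ), the continuous metrics integrate the pointwise error over every cell $c$ of the mesh domain $\Omega$:

$$\mathrm{CMSE}(f,\tilde f)=\frac{1}{|\Omega|}\sum_{c}\int_{c}\bigl(f(x)-\tilde f(x)\bigr)^{2}\,dx,\qquad \mathrm{CPSNR}=20\log_{10}\frac{\max f-\min f}{\sqrt{\mathrm{CMSE}}}.$$

The greedy sequence construction can be sketched in Python as follows; the names build_sequences, cell_nodes, and cell_neighbors, as well as the mean-value stand-in predictor, are assumptions for illustration rather than the paper's actual predictor:

```python
import numpy as np

def build_sequences(cell_nodes, cell_neighbors, values, eps):
    """Grow each sequence from a seed cell, appending the next cell in
    traversal order only while its nodal values can be predicted from
    the current cell within the error bound eps."""
    visited, sequences = set(), []
    for seed in range(len(cell_nodes)):
        if seed in visited:
            continue
        visited.add(seed)
        seq, cur = [seed], seed
        while True:
            nxt = next((n for n in cell_neighbors[cur]
                        if n not in visited), None)
            if nxt is None:
                break
            # Stand-in predictor: the mean of the current cell's nodal
            # values predicts every node of the next cell (the paper
            # follows SZ-style prediction and quantization instead).
            pred = values[cell_nodes[cur]].mean()
            if np.all(np.abs(values[cell_nodes[nxt]] - pred) <= eps):
                visited.add(nxt)
                seq.append(nxt)
                cur = nxt
            else:
                break  # unpredictable: start a new sequence elsewhere
        sequences.append(seq)
    return sequences
```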
Lossy compression has been employed to reduce the unprecedented amount of data produced by today's large-scale scientific simulations and high-resolution instruments. To avoid loss of critical information, state-of-the-art scientific lossy compressors provide error controls on relatively simple metrics such as the absolute error bound. However, preserving these metrics does not translate to the preservation of topological features, such as critical points in vector fields. To address this problem, we investigate how to effectively preserve the sign of the determinant in error-controlled lossy compression, as it is an important quantity of interest used for the robust detection of many topological features. Our contribution is threefold. (1) We develop a generic theory to derive the allowable perturbation of one row of a matrix while preserving the sign of its determinant. As a practical use case, we apply this theory to preserve critical points in vector fields, because critical point detection can be reduced to a point-in-simplex test that relies purely on the signs of determinants. (2) We optimize this algorithm with a speculative compression scheme to allow for high compression ratios and efficiently parallelize it in distributed environments. (3) We perform extensive experiments with real-world datasets, demonstrating that our method achieves up to 440% improvements in compression ratio over state-of-the-art lossy compressors when all critical points need to be preserved. Using the parallelization strategies, our method delivers up to 1.25X and 4.38X performance speedups in data writing and reading, respectively, compared with the vanilla approach without compression.
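A minimal Python sketch of the two ingredients named above, assuming 2D triangles and the standard cofactor expansion (function names and the L-infinity bound are simplified illustrations, not the paper's derivation):

```python
import numpy as np

def origin_in_triangle(v0, v1, v2):
    """Point-in-simplex test via determinant signs: a piecewise linear
    2D vector field has a critical point inside a triangle iff the
    origin lies in the convex hull of the three vertex vectors
    (degenerate zero signs are ignored in this sketch)."""
    def sgn(a, b):  # sign of det([[ax, ay], [bx, by]])
        return np.sign(a[0] * b[1] - a[1] * b[0])
    s = [sgn(v1 - v0, -v0), sgn(v2 - v1, -v1), sgn(v0 - v2, -v2)]
    return all(x > 0 for x in s) or all(x < 0 for x in s)

def row_perturbation_bound(M, i):
    """Largest L-infinity perturbation of row i that provably keeps
    sign(det(M)). det is affine in row i: perturbing it by delta
    changes det by delta . c, where c is the cofactor vector of row i,
    so any ||delta||_inf < |det(M)| / ||c||_1 is safe."""
    M = np.asarray(M, dtype=np.float64)
    d = np.linalg.det(M)
    c = d * np.linalg.inv(M)[:, i]  # row-i cofactors via adj(M) = d * inv(M)
    return abs(d) / np.abs(c).sum()
```

Clamping a compressor's per-vertex perturbations to such a bound for every incident simplex keeps every point-in-simplex test from flipping, which is roughly the flavor of the error control derived in the paper.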