Title: Multilevel Robustness for 2D Vector Field Feature Tracking, Selection and Comparison
Abstract: Critical point tracking is a core topic in scientific visualization for understanding the dynamic behaviour of time-varying vector field data. The topological notion of robustness has been introduced recently to quantify the structural stability of critical points; that is, the robustness of a critical point is the minimum amount of perturbation to the vector field necessary to cancel it. A theoretical basis established previously relates critical point tracking with the notion of robustness: in particular, critical points can be tracked based on their closeness in stability, measured by robustness, rather than just their spatial proximity within the domain. In practice, however, the computation of classic robustness may produce artifacts when a critical point is close to the boundary of the domain; thus, we do not have a complete picture of the vector field behaviour within its local neighbourhood. To alleviate these issues, we introduce a multilevel robustness framework for the study of 2D time-varying vector fields. We compute the robustness of critical points across varying neighbourhoods to capture the multiscale nature of the data and to mitigate the boundary effect suffered by the classic robustness computation. We demonstrate via experiments that this new notion of robustness can be combined seamlessly with existing feature tracking algorithms to improve the visual interpretability of vector fields in terms of feature tracking, selection and comparison for large-scale scientific simulations. We observe, for the first time, that the minimum multilevel robustness is highly correlated with physical quantities used by domain scientists in studying a real-world tropical cyclone dataset. This observation helps to increase the physical interpretability of robustness.
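One ingredient behind robustness can be sketched concretely: by degree theory, a critical point of nonzero degree cannot be cancelled by any perturbation whose sup-norm stays below the minimum field magnitude on the boundary of a surrounding neighbourhood. Evaluating that lower bound over neighbourhoods of growing radius gives a crude multilevel picture. The sketch below is illustrative only, not the paper's algorithm; `critical_cells` and `multilevel_robustness_proxy` are hypothetical names.

```python
import numpy as np

def critical_cells(u, v):
    """Flag grid cells whose bilinear interpolant may contain a zero:
    both vector components change sign among the four corners
    (a necessary condition, not the full root test)."""
    def sign_change(f):
        c = np.stack([f[:-1, :-1], f[1:, :-1], f[:-1, 1:], f[1:, 1:]])
        return (c.min(axis=0) < 0) & (c.max(axis=0) > 0)
    return sign_change(u) & sign_change(v)

def multilevel_robustness_proxy(u, v, i, j, radii):
    """Lower-bound proxy for robustness at grid index (i, j): for each radius r,
    the minimum of ||f|| on the boundary ring of the r-neighbourhood. If the
    degree inside is nonzero, no perturbation below this value (sup-norm) can
    cancel the critical point. Rings are clamped at the domain boundary, which
    is exactly where the classic single-scale computation degrades."""
    mag = np.hypot(u, v)
    out = {}
    for r in radii:
        i0, i1 = max(i - r, 0), min(i + r, mag.shape[0] - 1)
        j0, j1 = max(j - r, 0), min(j + r, mag.shape[1] - 1)
        ring = np.concatenate([mag[i0, j0:j1 + 1], mag[i1, j0:j1 + 1],
                               mag[i0:i1 + 1, j0], mag[i0:i1 + 1, j1]])
        out[r] = float(ring.min())
    return out
```

For a linear field with a single source away from the boundary, the proxy grows with the radius, matching the intuition that well-isolated critical points are more robust.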
Award ID(s):
1910733 2145499
PAR ID:
10419834
Author(s) / Creator(s):
 ;  ;  ;  ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
42
Issue:
6
ISSN:
0167-7055
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. The objective of this work is to develop error-bounded lossy compression methods that preserve topological features in 2D and 3D vector fields. Specifically, we explore the preservation of critical points in piecewise linear and bilinear vector fields. We define the preservation of critical points as (1) keeping each critical point in its original cell and (2) retaining the type of each critical point (e.g., saddle or attracting node), with no false positives, false negatives, or false types in the decompressed data. The key to our method is to adapt a vertex-wise error bound for each grid point and to compress the input data together with the error bound field using a modified lossy compressor. Our compression algorithm can also be embarrassingly parallelized for large data handling and in situ processing. We benchmark our method by comparing it with existing lossy compressors in terms of false positive/negative/type rates, compression ratio, and various vector field visualizations with several scientific applications. 
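The vertex-wise error bound idea above admits a simple sketch: if the decompression error at every grid point stays strictly below that point's field magnitude, no corner value flips sign, so the sign-change pattern that locates critical cells is unchanged. This is a simplified stand-in for the abstract's adaptive bound, with illustrative names and a safety factor `theta` as an assumption.

```python
import numpy as np

def vertexwise_error_bound(u, v, theta=0.5):
    """Per-vertex allowable error: strictly below min(|u|, |v|) at each grid
    point, so neither component can change sign after decompression.
    theta < 1 leaves slack; the paper's actual bound is more sophisticated."""
    return theta * np.minimum(np.abs(u), np.abs(v))

# Demonstration: perturb a random field within the bound and check that
# the sign pattern (and hence every sign-change-based critical cell) survives.
rng = np.random.default_rng(0)
u = rng.standard_normal((32, 32))
v = rng.standard_normal((32, 32))
eb = vertexwise_error_bound(u, v)
noise = rng.uniform(-1.0, 1.0, size=(2, 32, 32)) * eb  # error within the bound
su, sv = np.sign(u), np.sign(v)
su2, sv2 = np.sign(u + noise[0]), np.sign(v + noise[1])
```

Compressing the error-bound field alongside the data, as the abstract describes, lets the decompressor enforce a different tolerance at every vertex.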
  2. Data compression is a powerful solution for addressing big data challenges in database and data management. In scientific data compression for vector fields, preserving topological information is essential for accurate analysis and visualization. The topological skeleton, a fundamental component of vector field topology, consists of critical points and their connectivity, known as separatrices. While previous work has focused on preserving critical points in error-controlled lossy compression, little attention has been given to preserving separatrices, which are equally important. In this work, we introduce TspSZ, an efficient error-bounded lossy compression framework designed to preserve both critical points and separatrices. Our key contributions are threefold: First, we propose TspSZ, a topological-skeleton-preserving lossy compression framework that integrates two algorithms. This allows existing critical-point-preserving compressors to also retain separatrices, significantly enhancing their ability to preserve topological structures. Second, we optimize TspSZ for efficiency through tailored improvements and parallelization. Specifically, we introduce a new error control mechanism to achieve high compression ratios and implement a shared-memory parallelization strategy to boost compression throughput. Third, we evaluate TspSZ against state-of-the-art lossy and lossless compressors using four real-world scientific datasets. Experimental results show that TspSZ achieves compression ratios of up to 7.7 times while effectively preserving the topological skeleton. This ensures efficient storage and transmission of scientific data without compromising topological integrity. 
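Separatrix tracing, the part of the topological skeleton emphasized above, is conventionally done by seeding streamlines just off a saddle along the eigenvectors of its Jacobian. A minimal sketch, not TspSZ's implementation, follows; the function name and the explicit Euler integrator are assumptions (production tracers use adaptive Runge-Kutta schemes).

```python
import numpy as np

def trace_separatrix(field, jacobian, saddle, direction, h=1e-2, steps=500):
    """Trace one separatrix of a 2D vector field: seed slightly off the saddle
    along an eigenvector of its Jacobian, then integrate with explicit Euler.
    direction: "unstable" follows the positive-eigenvalue direction forward;
    "stable" follows the negative-eigenvalue direction backward in time."""
    J = jacobian(saddle)
    w, V = np.linalg.eig(J)
    col = np.argmax(w.real) if direction == "unstable" else np.argmin(w.real)
    p = np.asarray(saddle, dtype=float) + 1e-3 * V[:, col].real
    sgn = 1.0 if direction == "unstable" else -1.0  # backward for stable
    path = [p.copy()]
    for _ in range(steps):
        p = p + sgn * h * np.asarray(field(p), dtype=float)
        path.append(p.copy())
    return np.array(path)
```

For the saddle field f(x, y) = (x, -y), the unstable separatrix traced this way stays on the x-axis, as expected.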
  3. Lossy compression has been employed to reduce the unprecedented amount of data produced by today's large-scale scientific simulations and high-resolution instruments. To avoid loss of critical information, state-of-the-art scientific lossy compressors provide error controls on relatively simple metrics such as the absolute error bound. However, preserving these metrics does not translate to the preservation of topological features, such as critical points in vector fields. To address this problem, we investigate how to effectively preserve the sign of the determinant in error-controlled lossy compression, as it is an important quantity of interest used for the robust detection of many topological features. Our contribution is three-fold. (1) We develop a generic theory to derive the allowable perturbation for one row of a matrix while preserving the sign of its determinant. As a practical use case, we apply this theory to preserve critical points in vector fields, because critical point detection can be reduced to a point-in-simplex test that relies purely on the signs of determinants. (2) We optimize this algorithm with a speculative compression scheme to allow for high compression ratios and efficiently parallelize it in distributed environments. (3) We perform solid experiments with real-world datasets, demonstrating that our method achieves up to 440% improvements in compression ratios over state-of-the-art lossy compressors when all critical points need to be preserved. Using the parallelization strategies, our method delivers up to 1.25X and 4.38X performance speedups in data writing and reading compared with the vanilla approach without compression. 
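Both pieces of the abstract above, the row-perturbation bound and the sign-of-determinant point-in-simplex test, are easy to illustrate in the 2x2 case: replacing row r of M by a perturbation delta changes det(M) by at most ||delta|| times the norm of the other row (Cauchy-Schwarz), which yields a safe perturbation radius. A hedged sketch with illustrative names, not the paper's general theory:

```python
import numpy as np

def row_perturbation_bound(M, row):
    """Largest 2-norm perturbation of M[row] guaranteed to preserve
    sign(det(M)), for a 2x2 matrix: perturbing one row by delta changes the
    determinant by det with that row replaced by delta, whose magnitude is
    at most ||delta|| * ||other row||."""
    d = np.linalg.det(M)
    other = M[1 - row]
    return abs(d) / np.linalg.norm(other)

def origin_in_triangle(a, b, c):
    """Point-in-simplex test for the origin using only determinant signs:
    the origin lies inside iff the three 2x2 determinants of consecutive
    vertex pairs all share the same sign (consistent orientation)."""
    def det2(p, q):
        return p[0] * q[1] - p[1] * q[0]
    s = [det2(a, b), det2(b, c), det2(c, a)]
    return all(x > 0 for x in s) or all(x < 0 for x in s)
```

In a piecewise linear vector field, testing whether a cell contains a critical point is exactly this test applied to the vector values at the cell's vertices, which is why keeping determinant signs through compression keeps the critical points.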
  4. Grid-free Monte Carlo methods such as walk on spheres can be used to solve elliptic partial differential equations without mesh generation or global solves. However, such methods independently estimate the solution at every point, and hence do not take advantage of the high spatial regularity of solutions to elliptic problems. We propose a fast caching strategy which first estimates solution values and derivatives at randomly sampled points along the boundary of the domain (or a local region of interest). These cached values then provide cheap, output-sensitive evaluation of the solution (or its gradient) at interior points, via a boundary integral formulation. Unlike classic boundary integral methods, our caching scheme introduces zero statistical bias and does not require a dense global solve. Moreover, we can handle imperfect geometry (e.g., with self-intersections) and detailed boundary/source terms without repairing or resampling the boundary representation. Overall, our scheme is similar in spirit to virtual point light methods from photorealistic rendering: it suppresses the typical salt-and-pepper noise characteristic of independent Monte Carlo estimates, while still retaining the many advantages of Monte Carlo solvers: progressive evaluation, trivial parallelization, geometric robustness, etc. We validate our approach using test problems from visual and geometric computing. 
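The walk-on-spheres estimator the abstract builds on is itself compact: for a harmonic function with Dirichlet boundary data, repeatedly jump to a uniform point on the largest sphere (circle, in 2D) that fits inside the domain, and return the boundary value once within epsilon of the boundary. A minimal 2D sketch of the baseline estimator, not the paper's caching scheme; function names are illustrative:

```python
import math
import random

def walk_on_spheres(p, dist_to_boundary, boundary_value, eps=1e-3, rng=random):
    """One walk-on-spheres sample of a harmonic function with Dirichlet data.
    dist_to_boundary(q): distance from q to the domain boundary;
    boundary_value(q): Dirichlet data g, evaluated where the walk stops."""
    x, y = p
    while True:
        r = dist_to_boundary((x, y))
        if r < eps:
            return boundary_value((x, y))
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

# Sanity check on the unit disk with g(x, y) = x, whose harmonic extension
# is u(x, y) = x, so the estimator should average to ~0.3 at (0.3, 0.2).
dist = lambda q: 1.0 - math.hypot(q[0], q[1])
g = lambda q: q[0]
rng = random.Random(1)
est = sum(walk_on_spheres((0.3, 0.2), dist, g, rng=rng) for _ in range(2000)) / 2000
```

Because every interior query restarts from scratch, estimates at nearby points share no work; this is the per-point independence (and salt-and-pepper noise) the boundary cache is designed to remove.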
  5. We present the Feature Tracking Kit (FTK), a framework that simplifies, scales, and delivers various feature-tracking algorithms for scientific data. The key to FTK is our simplicial spacetime meshing scheme that generalizes both regular and unstructured spatial meshes to spacetime while tessellating spacetime mesh elements into simplices. The benefits of using simplicial spacetime meshes include (1) reducing ambiguity cases for feature extraction and tracking, (2) simplifying the handling of degeneracies using symbolic perturbations, and (3) enabling scalable and parallel processing. The use of simplicial spacetime meshing simplifies and improves the implementation of several feature-tracking algorithms for critical points, quantum vortices, and isosurfaces. As a software framework, FTK provides end users with VTK/ParaView filters, Python bindings, a command line interface, and programming interfaces for feature-tracking applications. We demonstrate use cases as well as scalability studies through both synthetic data and scientific applications including tokamak, fluid dynamics, and superconductivity simulations. We also conduct end-to-end performance studies on the Summit supercomputer. 
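The core meshing idea above, tessellating spacetime elements into simplices, can be illustrated on the simplest case: extruding one 2D triangle over a time step gives a triangular prism, which splits into three tetrahedra. The split below is a standard Freudenthal/Kuhn-style decomposition, not FTK's actual code; the names are illustrative. Ordering each triangle's vertices consistently across neighbouring prisms makes adjacent splits agree on shared faces.

```python
import numpy as np

def prism_to_tets(tri, t0, t1):
    """Split the spacetime prism obtained by extruding the 2D triangle `tri`
    from time t0 to t1 into 3 tetrahedra. With vertices (a, b, c) ordered
    consistently, the split is compatible across neighbouring prisms."""
    (ax, ay), (bx, by), (cx, cy) = tri
    a0, b0, c0 = (ax, ay, t0), (bx, by, t0), (cx, cy, t0)
    a1, b1, c1 = (ax, ay, t1), (bx, by, t1), (cx, cy, t1)
    return [(a0, b0, c0, a1), (b0, c0, a1, b1), (c0, a1, b1, c1)]

def tet_volume(tet):
    """Unsigned volume of a tetrahedron given as four 3D points."""
    v = np.array(tet, dtype=float)
    return abs(np.linalg.det(v[1:] - v[0])) / 6.0
```

Since the three tetrahedra are built from prism vertices inside a convex prism and their volumes sum to the prism's volume, they partition it; feature extraction then only ever faces simplices, which removes the ambiguity cases the abstract mentions.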