Title: Compressive Neural Representations of Volumetric Scalar Fields
Abstract: We present an approach for compressing volumetric scalar fields using implicit neural representations. Our approach represents a scalar field as a learned function, wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressed representations of scalar fields, thus framing compression as a type of function approximation. Combined with careful quantization of the network weights, we show that this approach yields highly compact representations that outperform state‐of‐the‐art volume compression approaches. The conceptual simplicity of our approach enables a number of benefits, such as support for time‐varying scalar fields, optimizing to preserve spatial gradients, and random‐access field evaluation. We study the impact of network design choices on compression performance, highlighting how simple network architectures are effective for a broad range of volumes.
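As a rough illustration of the idea (a minimal sketch, not the authors' exact architecture; the layer sizes, GELU activation, and training loop are assumptions), a small coordinate network can be fit to a volume as follows:

import torch
import torch.nn as nn

# A coordinate network: maps an (x, y, z) point in [-1, 1]^3 to a scalar value.
# Compression comes from the network having far fewer weights than the volume has voxels.
class CoordinateMLP(nn.Module):
    def __init__(self, hidden=64, depth=4):
        super().__init__()
        layers, d = [], 3
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.GELU()]
            d = hidden
        layers.append(nn.Linear(d, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, xyz):  # xyz: (N, 3) coordinates
        return self.net(xyz)

# Fit the network to randomly sampled voxels of a (D, H, W) volume tensor.
def fit(model, vol, steps=10_000, batch=4096, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    dims = torch.tensor(vol.shape, dtype=torch.float32)
    for _ in range(steps):
        idx = torch.stack([torch.randint(0, s, (batch,)) for s in vol.shape], dim=1)
        coords = idx.float() / (dims - 1) * 2 - 1  # voxel indices -> [-1, 1]^3
        target = vol[idx[:, 0], idx[:, 1], idx[:, 2]].unsqueeze(1)
        loss = ((model(coords) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

Once trained, the network can be evaluated at any coordinate, which is what enables random-access field evaluation; quantizing the stored weights, as the abstract notes, shrinks the representation further.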
Award ID(s):
2007444 2006710
PAR ID:
10370178
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
40
Issue:
3
ISSN:
0167-7055
Page Range / eLocation ID:
p. 135-146
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Abstract: In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time‐varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations and, in turn, by the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, yet incomplete representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time‐varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem (learning a function‐space neural network to reproduce flow map samples under a fixed integration scheme) leads to representations that generalize strongly, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves upon a variety of data reduction methods across a range of measures, including the recovered vector fields, flow maps, and features derived from the flow map.
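A minimal sketch of this training signal (assuming forward Euler integration and a network field(x, t) -> velocity in PyTorch; the paper fixes its own integration scheme, and these names are illustrative):

import torch

# Approximate the flow map F(x0; t0, tau) by integrating the learned field.
def integrate(field, x0, t0, tau, n_steps=16):
    x, dt = x0, tau / n_steps
    for k in range(n_steps):
        x = x + dt * field(x, t0 + k * dt)  # forward Euler step
    return x

# Penalize mismatch between integrated endpoints and stored flow map samples.
def flow_map_loss(field, x0, t0, tau, x_end):
    return ((integrate(field, x0, t0, tau) - x_end) ** 2).mean()

Because the integration loop is differentiable, the endpoint error backpropagates through every evaluation of the field, which is how sparse flow map samples can supervise the network along entire trajectories.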
2. Abstract: The rise of exascale supercomputing has motivated an increase in high‐fidelity computational fluid dynamics (CFD) simulations. The detail in these simulations, often involving shape‐dependent, time‐variant flow domains and low‐speed, complex, turbulent flows, is essential for fueling innovations in fields like wind, civil, automotive, or aerospace engineering. However, the massive amount of data these simulations produce can overwhelm storage systems and negatively affect conventional data management and postprocessing workflows, including iterative procedures such as design space exploration, optimization, and uncertainty quantification. This study proposes SDF‐biased flow importance sampling (BiFIS), a novel sampling method harnessing the signed distance function (SDF) concept, together with implicit compression based on implicit neural network representations, for transforming large, shape‐dependent flow fields into reduced‐size, shape‐agnostic images. Designed to alleviate the above problems, our approach achieves near‐lossless compression, substantially reducing the size of a bridge aerodynamics forced‐vibration simulation while maintaining low reproduction errors, which is unachievable with other sampling approaches. Our approach also allows for real‐time analysis and visualization of these massive simulations and does not involve decompression preprocessing steps that yield the full simulation data again. Given that image sampling is a fundamental step for any image‐based flow field prediction model, the proposed BiFIS method can significantly improve the accuracy and efficiency of such models, helping any application that relies on precise flow field predictions. The BiFIS code is available on GitHub.
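The sketch below conveys only the flavor of SDF-biased sampling, not the published BiFIS procedure: candidate points are accepted with a probability that decays with distance from the body surface, concentrating samples where the flow is most shape-dependent. The sdf callable, the exponential weighting, and the scale parameter are all assumptions:

import numpy as np

def sdf_biased_sample(sdf, n, lo, hi, scale=0.05, rng=None):
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    samples = []
    while len(samples) < n:
        p = rng.uniform(lo, hi)  # uniform candidate in the domain
        # Accept with probability decaying in |SDF|: dense near the zero level set.
        if rng.uniform() < np.exp(-abs(sdf(p)) / scale):
            samples.append(p)
    return np.asarray(samples)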
3. Implicit neural representations (INRs) have recently been proposed as deep learning (DL) based solutions for image compression. An image can be compressed by training an INR model, with fewer weights than the number of image pixels, to map the coordinates of the image to the corresponding pixel values. While traditional training approaches for INRs are based on enforcing pixel-wise image consistency, we propose to further improve image quality by using a new structural regularizer. We present structural regularization for INR compression (SINCO), a novel INR method for image compression. SINCO imposes structural consistency of the compressed images with the ground truth by using a segmentation network to penalize discrepancies between segmentation masks predicted from the compressed and ground-truth images. We validate SINCO on brain MRI images, showing that it can achieve better performance than some recent INR methods.
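A compact sketch of how such a structural regularizer can enter the training objective (assuming PyTorch, a frozen pretrained segmentation network seg, and a weighting lam; the paper's exact discrepancy measure may differ):

import torch.nn.functional as F

def sinco_style_loss(recon, image, seg, lam=0.1):
    pixel = F.mse_loss(recon, image)                      # pixel-wise consistency
    struct = F.mse_loss(seg(recon), seg(image).detach())  # masks: reconstruction vs. ground truth
    return pixel + lam * struct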
4. Physics-informed neural networks (PINNs) have been widely used to solve partial differential equations (PDEs) in both forward and inverse settings. However, balancing individual loss terms can be challenging, particularly when training these networks for stiff PDEs and in scenarios requiring the enforcement of numerous constraints. Even though statistical methods can be applied to assign relative weights to the regression loss on data, assigning relative weights to equation-based loss terms remains a formidable task. This paper proposes a method for assigning relative weights to the mean squared loss terms in the objective function used to train PINNs. Because the governing equation contains temporal gradients, the physics-informed loss can be recast using numerical integration through backward Euler discretization. The physics-uninformed and physics-informed networks should then yield identical predictions when assessed at corresponding spatiotemporal positions; we refer to this consistency as "temporal consistency." This consistency offers a distinctive way of training PINNs, redefining the loss function so that relative weights can be assigned using statistical properties of the observed data. In this work, we consider the two- and three-dimensional Navier–Stokes equations and determine the kinematic viscosity using spatiotemporal data on the velocity and pressure fields. We use numerical datasets to test our method and examine its sensitivity to the timestep size, the number of timesteps, noise in the data, and spatial resolution. Finally, we use a velocity field obtained from particle image velocimetry experiments to generate a reference pressure field and test our framework on the resulting velocity and pressure fields.
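To make the recasting concrete, for a PDE of the generic form $\partial u / \partial t = \mathcal{N}(u)$ (the Navier–Stokes operator in the paper; the generic form here is only illustrative), a backward Euler step reads

$$u^{n+1} = u^{n} + \Delta t \, \mathcal{N}\!\left(u^{n+1}\right),$$

so the physics-informed residual $r^{n+1} = u^{n+1} - u^{n} - \Delta t \, \mathcal{N}(u^{n+1})$ is expressed in the same units as the field data itself, which is what permits assigning it relative weights from the statistics of the observed data.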
5. Abstract: In this paper, we study the problem of learning the weights of a deep convolutional neural network. We consider a network where convolutions are carried out over non-overlapping patches. We develop an algorithm for simultaneously learning all the kernels from the training data. Our approach, dubbed deep tensor decomposition (DeepTD), is based on a low-rank tensor decomposition. We theoretically investigate DeepTD under a realizable model for the training data, where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted convolutional kernels. We show that DeepTD is sample efficient and provably works as soon as the sample size exceeds the total number of convolutional weights in the network.
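A toy instance of the underlying observation, for a single linear kernel over non-overlapping patches with a planted kernel (the published DeepTD algorithm is more general; everything here is illustrative):

import numpy as np

rng = np.random.default_rng(0)
k, p, n = 4, 8, 2000                      # kernel size, patches per input, samples
w_true = rng.standard_normal(k)           # planted convolutional kernel

X = rng.standard_normal((n, p, k))        # i.i.d. Gaussian patch inputs
y = (X @ w_true).sum(axis=1)              # labels generated by the planted kernel

# The label/patch correlation matrix is approximately rank one: outer(ones(p), w_true).
T = np.einsum('i,ijk->jk', y, X) / n
_, _, Vt = np.linalg.svd(T)
w_hat = Vt[0] * np.sign(Vt[0] @ w_true)   # top right singular vector gives the kernel direction

print(np.linalg.norm(w_hat - w_true / np.linalg.norm(w_true)))  # small once n >> k

The sample-size condition in the abstract shows up here in miniature: the correlation estimate T concentrates, and the decomposition recovers the kernel, once the number of samples sufficiently exceeds the number of kernel weights.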