Title: Compressive Neural Representations of Volumetric Scalar Fields
Abstract

We present an approach for compressing volumetric scalar fields using implicit neural representations. Our approach represents a scalar field as a learned function, wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressed representations of scalar fields, thus framing compression as a type of function approximation. Combined with careful quantization of the network weights, we show that this approach yields highly compact representations that outperform state‐of‐the‐art volume compression approaches. The conceptual simplicity of our approach enables a number of benefits, such as support for time‐varying scalar fields, optimizing to preserve spatial gradients, and random‐access field evaluation. We study the impact of network design choices on compression performance, highlighting how simple network architectures are effective for a broad range of volumes.
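A minimal PyTorch sketch of this idea follows; the architecture, sizes, and training loop are illustrative assumptions, not the paper's exact configuration (which also includes weight quantization):

```python
# Sketch: compress a scalar field by fitting a small coordinate MLP.
# Compression comes from the network having far fewer weights than voxels.
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    def __init__(self, hidden=64, depth=4):
        super().__init__()
        layers = [nn.Linear(3, hidden), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers += [nn.Linear(hidden, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, xyz):      # xyz: (N, 3) coordinates in [-1, 1]^3
        return self.net(xyz)     # (N, 1) predicted scalar values

def fit(model, coords, values, steps=2000, batch=4096):
    """coords: (N, 3) grid points; values: (N, 1) scalar samples."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        idx = torch.randint(0, coords.shape[0], (batch,))
        loss = ((model(coords[idx]) - values[idx]) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
```

Because the learned field is a continuous function of position, decompression amounts to evaluating the network at any query point, which is what enables the random‐access field evaluation noted above.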

 
Award ID(s):
2007444 2006710
NSF-PAR ID:
10370178
Publisher / Repository:
Wiley-Blackwell
Journal Name:
Computer Graphics Forum
Volume:
40
Issue:
3
ISSN:
0167-7055
Page Range / eLocation ID:
p. 135-146
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time‐varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, yet incomplete representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time‐varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem — learning a function‐space neural network to reproduce flow map samples under a fixed integration scheme — leads to representations that generalize well, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves over a variety of data reduction methods across a range of measures, from the reconstructed vector fields and flow maps to features derived from the flow map.
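    A minimal sketch of such a training objective, assuming a 2D unsteady field and a fixed-step RK4 integrator (the names, sizes, and integrator below are illustrative assumptions, not the paper's exact setup):

    ```python
    # Sketch: learn a neural vector field v(x, t) whose integration reproduces
    # flow map samples (x0, t0, tau, x_end); gradients flow through the integrator.
    import torch
    import torch.nn as nn

    class VectorField(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, 2))        # (x, y, t) -> (u, v)

        def forward(self, x, t):             # x: (N, 2), t: (N, 1)
            return self.net(torch.cat([x, t], dim=-1))

    def integrate(v, x0, t0, tau, steps=8):
        """Fixed-step RK4 from (x0, t0) for duration tau through the network."""
        x, t, h = x0, t0, tau / steps
        for _ in range(steps):
            k1 = v(x, t)
            k2 = v(x + 0.5 * h * k1, t + 0.5 * h)
            k3 = v(x + 0.5 * h * k2, t + 0.5 * h)
            k4 = v(x + h * k3, t + h)
            x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            t = t + h
        return x

    # Per-batch loss against stored flow map endpoints x_end:
    # loss = ((integrate(v, x0, t0, tau) - x_end) ** 2).mean()
    ```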

     
  2. Implicit neural representations (INRs) have recently been proposed as deep learning (DL)-based solutions for image compression. An image can be compressed by training an INR model, with fewer weights than the number of image pixels, to map the coordinates of the image to the corresponding pixel values. While traditional training approaches for INRs are based on enforcing pixel-wise image consistency, we propose to further improve image quality by using a new structural regularizer. We present structural regularization for INR compression (SINCO) as a novel INR method for image compression. SINCO imposes structural consistency of the compressed images with the ground truth by using a segmentation network to penalize the discrepancy between segmentation masks predicted from compressed and ground-truth images. We validate SINCO on brain MRI images, showing that it can achieve better performance than some recent INR methods.
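    The loss can be pictured as a pixel term plus a structural term from a frozen, pretrained segmentation network; the function names, the weighting, and the use of an MSE mask penalty below are illustrative assumptions:

    ```python
    # Sketch of a SINCO-style objective: pixel consistency plus a penalty on
    # the discrepancy between segmentation masks of decoded and true images.
    import torch
    import torch.nn.functional as F

    def sinco_loss(inr, coords, image, seg_net, lam=0.1):
        pred = inr(coords).reshape(image.shape)           # decode compressed image
        pixel_loss = F.mse_loss(pred, image)
        with torch.no_grad():                             # seg_net stays frozen
            target_mask = seg_net(image.unsqueeze(0))     # masks from ground truth
        pred_mask = seg_net(pred.unsqueeze(0))            # masks from decoded image
        return pixel_loss + lam * F.mse_loss(pred_mask, target_mask)
    ```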
  3. Abstract

    In this paper, we study the problem of learning the weights of a deep convolutional neural network. We consider a network where convolutions are carried out over non-overlapping patches. We develop an algorithm for simultaneously learning all the kernels from the training data. Our approach, dubbed deep tensor decomposition (DeepTD), is based on a low-rank tensor decomposition. We theoretically investigate DeepTD under a realizable model for the training data, where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted convolutional kernels. We show that DeepTD is sample-efficient and provably works as soon as the sample size exceeds the total number of convolutional weights in the network.
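    The computational primitive DeepTD builds on is a rank-r CP decomposition; the sketch below shows only that primitive on a synthetic planted tensor using tensorly (the paper's construction of the tensor from labels and input patches is omitted here):

    ```python
    # Sketch: CP (PARAFAC) decomposition recovering planted low-rank structure.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac
    from tensorly.cp_tensor import cp_to_tensor

    rank, dims = 3, (10, 12, 14)                        # illustrative sizes
    factors = [np.random.randn(d, rank) for d in dims]
    T = cp_to_tensor((np.ones(rank), factors))          # planted rank-3 tensor

    est = parafac(tl.tensor(T), rank=rank)              # alternating least squares
    residual = np.linalg.norm(T - cp_to_tensor(est)) / np.linalg.norm(T)
    print(residual)                                     # near zero on noiseless data
    ```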

     
  4. This paper introduces a publicly available PyTorch-ABAQUS deep-learning framework for a family of plasticity models where the yield surface is implicitly represented by a scalar-valued function. In particular, our focus is to introduce a practical framework that can be deployed for engineering analysis via a user-defined material subroutine (UMAT/VUMAT) for ABAQUS, which is written in FORTRAN. To accomplish this task while leveraging the back-propagation learning algorithm to speed up neural-network training, we introduce an interface code, sketched below, in which the weights and biases of the trained neural networks obtained via the PyTorch library are automatically converted into generic FORTRAN code that can become part of the UMAT/VUMAT algorithm. To enable third-party validation, we purposely make all the datasets, the source code used to train the neural-network-based constitutive models, and the trained models available in a public repository. Furthermore, the practicality of the workflow is tested on a dataset for an anisotropic yield function to showcase the extensibility of the proposed framework. A number of representative numerical experiments are used to examine the accuracy, robustness, and reproducibility of the results generated by the neural network models.
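    One way to picture the interface step is a small export script that writes each trained layer's weights and biases to a plain-text file that a FORTRAN routine can read back; the file layout and the network below are assumptions for illustration, not the repository's actual format:

    ```python
    # Sketch: serialize a trained PyTorch MLP for consumption by FORTRAN code.
    import torch
    import torch.nn as nn

    # Stand-in for a trained yield-function network f(stress measures) -> scalar.
    model = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))

    with open('nn_weights.dat', 'w') as f:
        for layer in model:
            if isinstance(layer, nn.Linear):
                w = layer.weight.detach().numpy()
                b = layer.bias.detach().numpy()
                f.write(f'{w.shape[0]} {w.shape[1]}\n')                   # layer dims
                f.write(' '.join(f'{v:.17e}' for v in w.ravel()) + '\n')  # weights, row-major
                f.write(' '.join(f'{v:.17e}' for v in b) + '\n')          # biases
    ```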
  5. In this work, we propose a linear machine-learning force-matching approach that can directly extract pair atomic interactions from ab initio calculations in amorphous structures. The local feature representation is chosen specifically so that the linear weights form a force field, i.e., a force/potential function of the atom-pair distance. Consequently, this set of functions is the closest representation of the ab initio forces attainable under the two-body approximation and finite sampling of the configurational space. We validate this approach on amorphous silica. Potentials in the new force field (consisting of tabulated Si–Si, Si–O, and O–O potentials) are significantly different from existing potentials that are commonly used for silica, even though all of them produce the tetrahedral network structure and roughly similar glass properties. This suggests that the commonly used classical force fields do not offer fundamentally accurate representations of the atomic interactions in silica. The new force field furthermore produces a lower glass transition temperature (Tg ∼ 1800 K) and a positive liquid thermal expansion coefficient, suggesting that the extraordinarily high Tg and negative liquid thermal expansion of simulated silica could be artifacts of previously developed classical potentials. Overall, the proposed approach provides a fundamental yet intuitive way to evaluate two-body potentials against ab initio calculations, thereby offering an efficient way to guide the development of classical force fields.
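    The linearity can be made concrete with a small least-squares sketch: expand the pair force in a fixed basis of the interatomic distance, so the predicted force on each atom is linear in the basis weights. The Gaussian basis and all sizes below are illustrative assumptions:

    ```python
    # Sketch: linear pair force matching by least squares against reference forces.
    import numpy as np

    def basis(r, centers, width=0.2):
        """Gaussian basis functions of pair distance r; shape (len(r), len(centers))."""
        return np.exp(-((r[:, None] - centers[None, :]) / width) ** 2)

    N, M = 64, 30
    pos = np.random.rand(N, 3) * 10.0            # toy snapshot of N atoms
    F_ref = np.random.randn(N, 3)                # stand-in for ab initio forces
    centers = np.linspace(0.5, 6.0, M)           # tabulation grid for f(r)

    # Force on atom i: sum over j of f(r_ij) * unit(r_ij), with f linear in w.
    A = np.zeros((3 * N, M))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            d = pos[i] - pos[j]
            r = np.linalg.norm(d)
            A[3 * i:3 * i + 3] += basis(np.array([r]), centers)[0] * (d / r)[:, None]

    w, *_ = np.linalg.lstsq(A, F_ref.ravel(), rcond=None)
    # basis(r_grid, centers) @ w then tabulates the fitted pair force f(r).
    ```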

     