Title: Compressive Neural Representations of Volumetric Scalar Fields
Abstract

We present an approach for compressing volumetric scalar fields using implicit neural representations. Our approach represents a scalar field as a learned function, wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressed representations of scalar fields, thus framing compression as a type of function approximation. Combined with carefully quantizing network weights, we show that this approach yields highly compact representations that outperform state‐of‐the‐art volume compression approaches. The conceptual simplicity of our approach enables a number of benefits, such as support for time‐varying scalar fields, optimizing to preserve spatial gradients, and random‐access field evaluation. We study the impact of network design choices on compression performance, highlighting how simple network architectures are effective for a broad range of volumes.
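As a minimal sketch of the core idea (the layer sizes and all names below are illustrative, not the paper's architecture), a small coordinate MLP can stand in for a volume, and the compression ratio is simply the voxel count divided by the network's parameter count:

```python
import numpy as np

# Hypothetical sketch of the core idea: a coordinate MLP f(x, y, z) -> scalar
# whose parameter count is far smaller than the number of voxels it encodes.
# Layer sizes are illustrative, not the paper's architecture.

def mlp_param_count(layer_sizes):
    """Total weights + biases of a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

def forward(params, coords):
    """Evaluate the MLP at a batch of 3D coordinates (random-access queries)."""
    h = coords
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:      # ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return h

layer_sizes = [3, 64, 64, 64, 1]     # 3D point in, scalar out
rng = np.random.default_rng(0)
params = [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
          for i, o in zip(layer_sizes[:-1], layer_sizes[1:])]

n_voxels = 256 ** 3                  # a 256^3 scalar field
n_params = mlp_param_count(layer_sizes)
ratio = n_voxels / n_params
print(n_params, round(ratio, 1))     # far fewer weights than voxels
```

Because the field is a function of continuous coordinates, any point can be evaluated without decompressing the whole volume, which is the random-access property the abstract mentions.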

 
Award ID(s):
2007444 2006710
NSF-PAR ID:
10370178
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
40
Issue:
3
ISSN:
0167-7055
Page Range / eLocation ID:
p. 135-146
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time‐varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is a set of samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, yet incomplete representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time‐varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem, learning a function‐space neural network to reproduce flow map samples under a fixed integration scheme, leads to representations that generalize strongly, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves on a variety of data reduction methods across a range of measures, from the recovered vector fields to flow maps and features derived from them.
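The flow-map idea above can be sketched as follows; the vector field here is a closed-form stand-in for a trained network, and the RK4 integrator plays the role of the fixed integration scheme (all names and choices are illustrative, not the paper's implementation):

```python
import numpy as np

# Illustrative sketch (not the paper's code): a flow map sample is obtained by
# integrating a time-varying vector field v(x, t) for a fixed duration tau.
# Here v is a simple rotation field standing in for a trained neural network.

def v(x, t):
    """Toy unsteady vector field; a learned network would be queried here."""
    return np.array([-x[1], x[0]]) * (1.0 + 0.1 * t)

def flow_map(x0, t0, tau, n_steps=100):
    """RK4 integration of v from (x0, t0) over duration tau."""
    x, t = np.asarray(x0, float), t0
    h = tau / n_steps
    for _ in range(n_steps):
        k1 = v(x, t)
        k2 = v(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = v(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = v(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# A single flow map sample: where does the point (1, 0) end up after tau = 1?
print(flow_map([1.0, 0.0], t0=0.0, tau=1.0))
```

Training would compare such integrated endpoints against the stored flow map samples and back-propagate through the integration steps into the network's weights.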

     
  2. This paper introduces a publicly available PyTorch-ABAQUS deep-learning framework for a family of plasticity models in which the yield surface is implicitly represented by a scalar-valued function. In particular, our focus is to introduce a practical framework that can be deployed for engineering analysis via a user-defined material subroutine (UMAT/VUMAT) for ABAQUS, which is written in FORTRAN. To accomplish this task while leveraging the back-propagation learning algorithm to speed up neural-network training, we introduce an interface code in which the weights and biases of the trained neural networks obtained via the PyTorch library are automatically converted into generic FORTRAN code that can be part of the UMAT/VUMAT algorithm. To enable third-party validation, we purposely make all the datasets, the source code used to train the neural-network-based constitutive models, and the trained models available in a public repository. Furthermore, the practicality of the workflow is further tested on a dataset for an anisotropic yield function to showcase the extensibility of the proposed framework. A number of representative numerical experiments are used to examine the accuracy, robustness, and reproducibility of the results generated by the neural network models.
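A minimal sketch of the weight hand-off step described above, assuming a hypothetical export format (the array names, column-major layout, and emitted Fortran style are illustrative, not the framework's actual interface code):

```python
import numpy as np

# Hypothetical sketch of the weight hand-off: flatten a trained layer's
# weights and biases into Fortran source that a UMAT/VUMAT could compile in.
# The array names and layout are illustrative, not the framework's format.

def to_fortran(name, arr):
    """Emit a Fortran parameter-array initializer for a NumPy array."""
    flat = np.asarray(arr, dtype=np.float64).ravel(order="F")  # column-major
    vals = ", ".join(f"{v:.8e}".replace("e", "d") for v in flat)
    dims = ",".join(str(d) for d in arr.shape)
    return (f"      real(8), parameter :: {name}({dims}) = &\n"
            f"     &  reshape([{vals}], [{dims}])")

W1 = np.array([[0.5, -0.25], [1.0, 0.75]])   # stand-in for a trained weight
b1 = np.array([0.1, -0.2])
print(to_fortran("w1", W1))
print(to_fortran("b1", b1))
```

Note the column-major (`order="F"`) flattening: Fortran stores arrays column-first, so a row-major PyTorch tensor must be transposed or reordered at this boundary.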
  3. Blasch, Erik; Ravela, Sai (Eds.)
    A coupled path-planning and sensor configuration method is proposed. The path-planning objective is to minimize exposure to an unknown, spatially-varying, and temporally static scalar field called the threat field. The threat field is modeled as a weighted sum of several scalar fields, each representing a mode of threat. A heterogeneous sensor network takes noisy measurements of the threat field. Each sensor in the network observes one or more threat modes within a circular field of view (FoV). The sensors are configurable, i.e., parameters such as the location and size of the field of view can be changed. The measurement noise is assumed to be normally distributed with zero mean and a variance that monotonically increases with the size of the FoV, emulating the FoV vs. resolution trade-off in most sensors. Gaussian process regression is used to estimate the threat field from these measurements. The main innovation of this work is that sensor configuration is performed by maximizing a so-called task-driven information gain (TDIG) metric, which quantifies uncertainty reduction in the cost of the planned path. Because the TDIG does not have any convenient structural properties, a surrogate function called the self-adaptive mutual information (SAMI) is considered. Sensor configuration based on the TDIG or SAMI introduces coupling with path-planning in accordance with the dynamic data-driven application systems paradigm. In comparison to decoupled path-planning and sensor configuration based on traditional information-driven metrics, the proposed coupled sensor configuration and path-planning (CSCP) method finds near-optimal plans with fewer measurements.
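A minimal sketch of the estimation step, assuming an illustrative squared-exponential kernel and a hypothetical noise model that grows quadratically with FoV radius (none of these specific choices are from the paper):

```python
import numpy as np

# Illustrative sketch (assumed details, not the paper's implementation):
# GP regression of a scalar threat field from noisy sensor measurements,
# where the measurement noise variance grows with the FoV radius.

def rbf(A, B, length=0.5):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, noise_var, Xq):
    """Posterior mean and variance of the field at query points Xq."""
    K = rbf(X, X) + np.diag(noise_var)       # per-sensor noise on the diagonal
    Kq = rbf(Xq, X)
    mean = Kq @ np.linalg.solve(K, y)
    var = rbf(Xq, Xq).diagonal() - np.einsum(
        "ij,ji->i", Kq, np.linalg.solve(K, Kq.T))
    return mean, var

rng = np.random.default_rng(1)
threat = lambda p: np.sin(3 * p[:, 0]) + np.cos(2 * p[:, 1])  # unknown field
X = rng.uniform(0, 1, (20, 2))                # sensor locations
fov = rng.uniform(0.05, 0.3, 20)              # field-of-view radii
noise_var = 0.01 + 0.5 * fov ** 2             # noise grows with FoV (assumed)
y = threat(X) + rng.normal(0, np.sqrt(noise_var))
mean, var = gp_posterior(X, y, noise_var, X)
print(float(np.mean(var)))                    # posterior uncertainty at sensors
```

The heteroscedastic diagonal is the piece that encodes the FoV vs. resolution trade-off: widening a sensor's FoV covers more of the field but raises its entry in `noise_var`.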
  4. Abstract

    This paper focuses on the implications of a commutative formulation that integrates branch‐cutting cosmology, the Wheeler–DeWitt equation, and Hořava–Lifshitz quantum gravity. Building on a mini‐superspace structure, we explore the impact of an inflaton‐type scalar field on the wave function of the Universe. Specifically analyzing the dynamical solutions of branch‐cut gravity within a mini‐superspace framework, we emphasize the scalar field's influence on the evolution of the wave function of the Universe. Our research unveils a helix‐like function that characterizes a topologically foliated spacetime structure. The starting point is the Hořava–Lifshitz action, which depends on the scalar curvature of the branched Universe and its derivatives, with running coupling constants. The corresponding wave equations are derived and solved. The commutative quantum gravity approach preserves the diffeomorphism property of General Relativity, maintaining compatibility with the Arnowitt–Deser–Misner formalism. Additionally, we delve into a mini‐superspace of variables, incorporating scalar‐inflaton fields and exploring inflationary models, particularly chaotic and nonchaotic scenarios. We obtain solutions for the wave equations without resorting to numerical approximations.
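For orientation only, a generic mini-superspace Wheeler–DeWitt-type equation for a scale factor $a$ and an inflaton field $\phi$ reads as follows (schematic; it omits the Hořava–Lifshitz running couplings and the branch-cut structure specific to this work):

```latex
% Schematic mini-superspace Wheeler--DeWitt equation (generic form, not the
% paper's exact operator): p is a factor-ordering parameter and U(a, \phi)
% collects curvature and potential contributions.
\left[
  \frac{\partial^2}{\partial a^2}
  + \frac{p}{a}\frac{\partial}{\partial a}
  - \frac{1}{a^2}\frac{\partial^2}{\partial \phi^2}
  - U(a,\phi)
\right] \Psi(a,\phi) = 0
```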

     
  5. Deep convolutional neural networks (DNNs) have demonstrated phenomenal success and been widely used in many computer vision tasks. However, their enormous model size and high computing complexity prohibit wide deployment in resource-limited embedded systems such as FPGAs and mGPUs. As the two most widely adopted model compression techniques, weight pruning and quantization compress a DNN model by introducing weight sparsity (i.e., forcing partial weights to zero) and quantizing weights into limited bit-width values, respectively. Although there are works attempting to combine weight pruning and quantization, we still observe disharmony between the two, especially when more aggressive compression schemes (e.g., structured pruning and low bit-width quantization) are used. In this work, taking FPGA as the test computing platform and Processing Elements (PEs) as the basic parallel computing unit, we first propose a PE-wise structured pruning scheme, which introduces weight sparsification that takes the PE architecture into account. In addition, we integrate it with an optimized weight ternarization approach that quantizes weights into ternary values ({-1, 0, +1}), thus converting the dominant convolution operations in the DNN from multiplication-and-accumulation (MAC) to addition-only, as well as compressing the original model (from 32-bit floating point to 2-bit ternary representation) by at least 16 times. We then investigate and solve the coexistence issue between PE-wise structured pruning and ternarization by proposing a Weight Penalty Clipping (WPC) technique with a self-adapting threshold. Our experiments show that the fusion of the proposed techniques achieves a state-of-the-art ∼21× PE-wise structured compression rate with merely 1.74%/0.94% (top-1/top-5) accuracy degradation for ResNet-18 on the ImageNet dataset.
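A minimal sketch of the two compression steps, with assumed details (the threshold heuristic, the row-wise pruning criterion, and all names are illustrative stand-ins for the paper's PE-wise scheme and WPC technique):

```python
import numpy as np

# Illustrative sketch (assumed details): ternarize weights to {-1, 0, +1} with
# a per-tensor threshold and scaling factor, then apply a structured mask that
# zeroes whole rows -- a stand-in for PE-wise structured pruning.

def prune_rows(W, keep_ratio=0.5):
    """Keep the rows with the largest L1 norm, zero the rest (structured)."""
    norms = np.abs(W).sum(axis=1)
    k = max(1, int(round(keep_ratio * W.shape[0])))
    keep = np.argsort(norms)[-k:]
    M = np.zeros_like(W)
    M[keep] = 1.0
    return W * M

def ternarize(W, t=0.7):
    """Threshold-based ternarization with a scaling factor for nonzeros."""
    delta = t * np.mean(np.abs(W))            # common heuristic threshold
    Q = np.where(W > delta, 1, np.where(W < -delta, -1, 0)).astype(np.int8)
    mask = Q != 0
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
    return alpha, Q

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
alpha, Q = ternarize(prune_rows(W))
print(sorted(np.unique(Q)))                   # values drawn from {-1, 0, 1}
```

With weights in {-1, 0, +1}, each "multiplication" in a convolution reduces to an add, subtract, or skip, which is what makes the addition-only datapath on FPGA possible.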