Abstract: We present an approach for compressing volumetric scalar fields using implicit neural representations. Our approach represents a scalar field as a learned function, wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressed representations of scalar fields, thus framing compression as a type of function approximation. Combined with careful quantization of the network weights, we show that this approach yields highly compact representations that outperform state-of-the-art volume compression approaches. The conceptual simplicity of our approach enables a number of benefits, such as support for time-varying scalar fields, optimizing to preserve spatial gradients, and random-access field evaluation. We study the impact of network design choices on compression performance, highlighting how simple network architectures are effective for a broad range of volumes.
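As a rough illustration of the core idea, the sketch below (PyTorch assumed; the `CoordinateMLP` architecture and the analytic toy volume are illustrative choices, not the paper's configuration) fits a small coordinate-to-scalar MLP to a sampled volume. Compression follows from the parameter count being far smaller than the number of grid samples; the paper's weight quantization step is omitted here.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Maps a 3D point in [-1, 1]^3 to a scalar value (illustrative architecture)."""
    def __init__(self, hidden=64, layers=4):
        super().__init__()
        dims = [3] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.ReLU())
        self.net = nn.Sequential(*blocks)

    def forward(self, xyz):
        return self.net(xyz)

# Toy 64^3 scalar field sampled on a regular grid (stand-in for a real volume).
res = 64
axes = [torch.linspace(-1, 1, res)] * 3
grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(-1, 3)
values = torch.sin(3 * grid[:, :1]) * torch.cos(3 * grid[:, 1:2]) + grid[:, 2:3]

model = CoordinateMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    idx = torch.randint(0, grid.shape[0], (4096,))      # random-access minibatch
    loss = nn.functional.mse_loss(model(grid[idx]), values[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Rough compression ratio: grid samples per network weight (before quantization).
n_params = sum(p.numel() for p in model.parameters())
print(f"~{grid.shape[0] / n_params:.1f}x fewer parameters than samples")
```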
Neural Flow Map Reconstruction
Abstract: In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, yet incomplete representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem — learning a function-space neural network to reproduce flow map samples under a fixed integration scheme — leads to representations that generalize well, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves on a variety of data reduction methods across a range of measures, from the reconstructed vector fields and flow maps to features derived from the flow map.
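The sketch below illustrates this optimization in a deliberately simplified setting (2D, PyTorch assumed, a synthetic rotational flow supplying the flow map samples, and fixed-step RK4 standing in for the fixed integration scheme): the neural vector field receives gradients only through the integrator, by penalizing the endpoint error of reproduced flow map samples.

```python
import math
import torch
import torch.nn as nn

class VectorFieldNet(nn.Module):
    """Maps (x, y, t) to a 2D velocity; a toy stand-in for the learned field."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def rk4_flow_map(field, x0, t0, tau, steps=8):
    """Integrate the field from (x0, t0) for duration tau with fixed-step RK4."""
    h = tau / steps
    x, t = x0, t0
    for _ in range(steps):
        k1 = field(x, t)
        k2 = field(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = field(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = field(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return x

# Synthetic flow map samples from a solid-body rotation (illustrative only):
# integrating v = (-y, x) for duration tau rotates each seed point by tau.
x0 = torch.rand(1024, 2) * 2 - 1
t0 = torch.zeros(1024, 1)
tau = 0.5
c, s = math.cos(tau), math.sin(tau)
end = torch.stack([c * x0[:, 0] - s * x0[:, 1],
                   s * x0[:, 0] + c * x0[:, 1]], dim=-1)

field_net = VectorFieldNet()
opt = torch.optim.Adam(field_net.parameters(), lr=1e-3)
for it in range(500):
    loss = nn.functional.mse_loss(rk4_flow_map(field_net, x0, t0, tau), end)
    opt.zero_grad()
    loss.backward()
    opt.step()
```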
- Award ID(s): 2007444
- PAR ID: 10406064
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: Computer Graphics Forum
- Volume: 41
- Issue: 3
- ISSN: 0167-7055
- Format(s): Medium: X
- Size(s): p. 391-402
- Sponsoring Org: National Science Foundation
More Like this
We introduce Neural Flow Maps, a novel simulation method bridging the emerging paradigm of implicit neural representations with fluid simulation based on the theory of flow maps, to achieve state-of-the-art simulation of inviscid fluid phenomena. We devise a novel hybrid neural field representation, Spatially Sparse Neural Fields (SSNF), which fuses small neural networks with a pyramid of overlapping, multi-resolution, and spatially sparse grids, to compactly represent long-term spatiotemporal velocity fields at high accuracy. With this neural velocity buffer in hand, we compute long-term, bidirectional flow maps and their Jacobians in a mechanistically symmetric manner, to facilitate drastic accuracy improvement over existing solutions. These long-range, bidirectional flow maps enable high advection accuracy with low dissipation, which in turn facilitates high-fidelity incompressible flow simulations that manifest intricate vortical structures. We demonstrate the efficacy of our neural fluid simulation in a variety of challenging simulation scenarios, including leapfrogging vortices, colliding vortices, vortex reconnections, as well as vortex generation from moving obstacles and density differences. Our examples show increased performance over existing methods in terms of energy conservation, visual complexity, adherence to experimental observations, and preservation of detailed vortical structures.
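As a loose, much-reduced sketch of this kind of hybrid representation (PyTorch assumed, 2D, dense rather than spatially sparse grids, and none of the paper's flow-map machinery), a pyramid of multi-resolution feature grids can be interpolated at query points and decoded by a small MLP:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResFieldNet(nn.Module):
    """Dense multi-resolution 2D feature grids + small MLP (sparsity omitted)."""
    def __init__(self, resolutions=(16, 32, 64), feat_dim=4, out_dim=2):
        super().__init__()
        self.grids = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(1, feat_dim, r, r)) for r in resolutions])
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * len(resolutions), 64), nn.ReLU(),
            nn.Linear(64, out_dim))

    def forward(self, xy):                 # xy in [-1, 1]^2, shape (N, 2)
        pts = xy.view(1, -1, 1, 2)         # grid_sample wants (N, H_out, W_out, 2)
        feats = []
        for g in self.grids:
            f = F.grid_sample(g, pts, align_corners=True)   # (1, C, N, 1)
            feats.append(f.squeeze(0).squeeze(-1).t())       # (N, C)
        return self.mlp(torch.cat(feats, dim=-1))

model = MultiResFieldNet()
velocity = model(torch.rand(128, 2) * 2 - 1)   # query the field at 128 points
```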
We propose a method for autonomously learning an object-centric representation of a continuous and high-dimensional environment that is suitable for planning. Such representations can immediately be transferred between tasks that share the same types of objects, resulting in agents that require fewer samples to learn a model of a new task. We first demonstrate our approach on a 2D crafting domain consisting of numerous objects where the agent learns a compact, lifted representation that generalises across objects. We then apply it to a series of Minecraft tasks to learn object-centric representations and object types---directly from pixel data---that can be leveraged to solve new tasks quickly. The resulting learned representations enable the use of a task-level planner, resulting in an agent capable of transferring learned representations to form complex, long-term plans.
Field inversion machine learning (FIML) has the advantages of model consistency and low data dependency and has been used to augment imperfect turbulence models. However, solver-intrusive field inversion has a high barrier to entry, and existing FIML studies have focused on improving only steady-state or time-averaged periodic flow predictions. To address this limitation, this paper develops an open-source FIML framework for time-accurate unsteady flow, where both spatial and temporal variations of the flow are of interest. We augment a Reynolds-Averaged Navier–Stokes (RANS) turbulence model's production term with a scalar field. We then integrate a neural network (NN) model into the flow solver to compute this augmentation scalar field from local flow features at each time step. Finally, we optimize the weights and biases of the built-in NN model to minimize the regularized spatial-temporal prediction error between the augmented flow solver and reference data. We consider the spatial-temporal evolution of unsteady flow over a 45° ramp and use only the surface pressure as the training data. The unsteady-FIML-trained model accurately predicts the spatial-temporal variations of unsteady flow fields. In addition, the trained model exhibits reasonably good prediction accuracy for various ramp angles, Reynolds numbers, and flow variables (e.g., velocity fields) that are not used in training, highlighting its generalizability. The FIML capability has been integrated into our open-source framework DAFoam. It has the potential to train more accurate RANS turbulence models for other unsteady flow phenomena, such as wind gust response, bubbly flow, and particle dispersion in the atmosphere.
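A conceptual sketch of this training loop is given below; it is not DAFoam's API, and the "solver" is a toy differentiable update standing in for a time-accurate RANS solve. The structure it shows is the one described above: an NN maps local flow features to an augmentation scalar that scales the production term at each time step, and the NN weights are optimized against reference surface-pressure data.

```python
import torch
import torch.nn as nn

# Local flow features -> positive augmentation scalar (multiplies production).
beta_net = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Softplus())

def toy_unsteady_solve(features, n_steps=10):
    """Toy differentiable stand-in for a time-accurate solve; returns a pressure history."""
    state = torch.zeros(features.shape[0], 1)
    pressures = []
    for _ in range(n_steps):
        beta = beta_net(features)                   # per-cell augmentation field
        production = beta * features[:, :1].abs()   # augmented production term
        state = state + 0.1 * (production - 0.5 * state)   # toy transport update
        pressures.append(state.mean())              # toy surface-pressure probe
    return torch.stack(pressures)

features = torch.randn(256, 4)          # e.g. strain rate, vorticity, wall distance, ...
p_ref = torch.linspace(0.0, 1.0, 10)    # reference surface pressure (illustrative)

opt = torch.optim.Adam(beta_net.parameters(), lr=1e-3)
for it in range(200):
    loss = nn.functional.mse_loss(toy_unsteady_solve(features), p_ref)
    opt.zero_grad()
    loss.backward()
    opt.step()
```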
We report the presence of a simple neural mechanism that represents an input-output function as a vector within autoregressive transformer language models (LMs). Using causal mediation analysis on a diverse range of in-context-learning (ICL) tasks, we find that a small number of attention heads transport a compact representation of the demonstrated task, which we call a function vector (FV). FVs are robust to changes in context, i.e., they trigger execution of the task on inputs, such as zero-shot and natural-text settings, that do not resemble the ICL contexts from which they are collected. We test FVs across a range of tasks, models, and layers and find strong causal effects across settings in middle layers. We investigate the internal structure of FVs and find that while they often contain information that encodes the output space of the function, this information alone is not sufficient to reconstruct an FV. Finally, we test semantic vector composition in FVs and find that to some extent they can be summed to create vectors that trigger new complex tasks. Our findings show that compact, causal internal vector representations of function abstractions can be explicitly extracted from LLMs.
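The sketch below conveys the flavor of extracting and reinjecting such a vector, under heavy simplifications: it uses Hugging Face transformers with GPT-2 as an assumed stand-in model, and it substitutes the mean last-token hidden state at one middle layer for the paper's attention-head-based, causally mediated function vector.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
LAYER = 6  # a middle layer of the 12-layer model

# Average the last-token hidden state over a few ICL prompts (a crude proxy
# for the paper's head-based FV extraction).
icl_prompts = ["big:small, hot:cold, fast:", "tall:short, wet:dry, up:"]
states = []
for p in icl_prompts:
    ids = tok(p, return_tensors="pt").input_ids
    with torch.no_grad():
        hs = model(ids, output_hidden_states=True).hidden_states[LAYER]
    states.append(hs[0, -1])
fv = torch.stack(states).mean(0)

def add_fv(module, inputs, output):
    # Inject the vector into the residual stream at the last position.
    # (In this simplified version the hook fires at every decoding step.)
    hidden = output[0]
    hidden[:, -1] = hidden[:, -1] + fv
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_fv)
ids = tok("light:", return_tensors="pt").input_ids
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=3, pad_token_id=tok.eos_token_id)
handle.remove()
print(tok.decode(out[0]))
```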