Title: Neural Flow Map Reconstruction
Abstract

In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, and yet incomplete, representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem, namely learning a function-space neural network to reproduce flow map samples under a fixed integration scheme, leads to representations that generalize well, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves on a variety of data reduction methods, across measures ranging from the quality of the reconstructed vector field and flow map to features derived from the flow map.
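To make the setup concrete, below is a minimal sketch, not the authors' code, of this kind of training loop: an MLP represents the time-varying vector field, a fixed-step RK4 integrator approximates the flow map through it, and the loss penalizes endpoint error against flow map samples. The architecture, integrator settings, 2D domain, and all names (e.g., VectorFieldINR) are illustrative assumptions, with random placeholder data standing in for real in situ samples.

```python
# Minimal sketch: train an implicit neural representation of a time-varying
# vector field so that integrating it reproduces flow map samples.
import torch
import torch.nn as nn

class VectorFieldINR(nn.Module):
    """MLP mapping a space-time point (x, y, t) to a 2D velocity."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def integrate(field, x0, t0, tau, steps=8):
    """Fixed-step RK4 through the learned field: approximates the flow map."""
    x, t = x0, t0
    h = tau / steps
    for _ in range(steps):
        k1 = field(x, t)
        k2 = field(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = field(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = field(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return x

field = VectorFieldINR()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

# In practice these would be flow map samples acquired in situ:
# start point, start time, duration, and integrated end point.
x_start = torch.rand(256, 2); t_start = torch.rand(256, 1)
tau = torch.tensor(0.1);      x_end = torch.rand(256, 2)  # placeholder targets

for step in range(1000):
    opt.zero_grad()
    pred = integrate(field, x_start, t_start, tau)
    loss = ((pred - x_end) ** 2).mean()  # endpoint error on flow map samples
    loss.backward()
    opt.step()
```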

 
Award ID(s):
2007444
NSF-PAR ID:
10406064
Author(s) / Creator(s):
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
41
Issue:
3
ISSN:
0167-7055
Format(s):
Medium: X
Size(s):
p. 391-402
Sponsoring Org:
National Science Foundation
More Like this
  1. We present an approach for compressing volumetric scalar fields using implicit neural representations. Our approach represents a scalar field as a learned function, wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressed representations of scalar fields, thus framing compression as a type of function approximation. Combined with carefully quantizing network weights, we show that this approach yields highly compact representations that outperform state-of-the-art volume compression approaches. The conceptual simplicity of our approach enables a number of benefits, such as support for time-varying scalar fields, optimizing to preserve spatial gradients, and random-access field evaluation. We study the impact of network design choices on compression performance, highlighting how simple network architectures are effective for a broad range of volumes.
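For intuition, here is a minimal sketch of the basic recipe under assumed names and a toy volume: fit a small MLP to (coordinate, value) pairs so that the network weights, fewer than the voxel count, become the compressed representation. The quantization step the abstract mentions is omitted.

```python
# Minimal sketch: a scalar field compressed as an implicit neural
# representation; the network itself is the compressed payload.
import torch
import torch.nn as nn

volume = torch.rand(32, 32, 32)  # stand-in for a real scalar field

# Pair normalized grid coordinates with their scalar values.
axes = [torch.linspace(-1, 1, s) for s in volume.shape]
coords = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(-1, 3)
values = volume.reshape(-1, 1)

inr = nn.Sequential(  # ~2.6k weights vs. 32768 voxels: ~12x reduction
    nn.Linear(3, 48), nn.SiLU(),
    nn.Linear(48, 48), nn.SiLU(),
    nn.Linear(48, 1),
)
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ((inr(coords) - values) ** 2).mean()
    loss.backward()
    opt.step()

# Random-access decompression: evaluate the network at any point.
sample = inr(torch.tensor([[0.25, -0.5, 0.0]]))
```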

     
  2. We propose a method for autonomously learning an object-centric representation of a continuous and high-dimensional environment that is suitable for planning. Such representations can immediately be transferred between tasks that share the same types of objects, resulting in agents that require fewer samples to learn a model of a new task. We first demonstrate our approach on a 2D crafting domain consisting of numerous objects, where the agent learns a compact, lifted representation that generalises across objects. We then apply it to a series of Minecraft tasks to learn object-centric representations and object types, directly from pixel data, that can be leveraged to solve new tasks quickly. The resulting learned representations enable the use of a task-level planner, yielding an agent capable of transferring learned representations to form complex, long-term plans.
  3. Field inversion machine learning (FIML) has the advantages of model consistency and low data dependency and has been used to augment imperfect turbulence models. However, solver-intrusive field inversion has a high entry bar, and existing FIML studies have focused on improving only steady-state or time-averaged periodic flow predictions. To move beyond this limitation, this paper develops an open-source FIML framework for time-accurate unsteady flow, where both spatial and temporal variations of the flow are of interest. We augment a Reynolds-Averaged Navier-Stokes (RANS) turbulence model's production term with a scalar field. We then integrate a neural network (NN) model into the flow solver to compute this augmentation scalar field from local flow features at each time step. Finally, we optimize the weights and biases of the built-in NN model to minimize the regularized spatiotemporal prediction error between the augmented flow solver and reference data. We consider the spatiotemporal evolution of unsteady flow over a 45° ramp and use only the surface pressure as the training data. The unsteady-FIML-trained model accurately predicts the spatiotemporal variations of unsteady flow fields. In addition, the trained model exhibits reasonably good prediction accuracy for various ramp angles, Reynolds numbers, and flow variables (e.g., velocity fields) that are not used in training, highlighting its generalizability. The FIML capability has been integrated into our open-source framework DAFoam. It has the potential to train more accurate RANS turbulence models for other unsteady flow phenomena, such as wind gust response, bubbly flow, and particle dispersion in the atmosphere.
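As a conceptual illustration only (this is not DAFoam code, and the feature choices and names are assumptions), the sketch below shows the core FIML idea: a small network maps local flow features at each cell and time step to a positive scalar that multiplies the baseline turbulence production term. In the actual framework the network weights are the design variables, optimized through the solver's adjoint against the mismatch with reference data such as surface pressure.

```python
# Conceptual sketch: NN-computed augmentation field scaling the baseline
# RANS production term, as in field inversion machine learning (FIML).
import torch
import torch.nn as nn

beta_net = nn.Sequential(
    nn.Linear(3, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1), nn.Softplus(),  # keep the multiplier positive
)

def augmented_production(features, production):
    """Scale the baseline production term cell by cell."""
    beta = beta_net(features)  # (n_cells, 1) augmentation scalar field
    return beta * production

# Placeholder inputs: local, nondimensional flow features per cell
# (e.g., strain/vorticity ratios) and the baseline production term.
features = torch.rand(1000, 3)
production = torch.rand(1000, 1)
aug = augmented_production(features, production)
```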

     
  4. Larochelle, Hugo; Kamath, Gautam; Hadsell, Raia; Cho, Kyunghyun (Eds.)
    Neural scene representations, both continuous and discrete, have recently emerged as a powerful new paradigm for 3D scene understanding. Recent efforts have tackled unsupervised discovery of object-centric neural scene representations. However, the high cost of ray-marching, exacerbated by the fact that each object representation has to be ray-marched separately, leads to insufficiently sampled radiance fields and thus noisy renderings, poor framerates, and high memory and time complexity during training and rendering. Here, we propose to represent objects in an object-centric, compositional scene representation as light fields. We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields. Dubbed Compositional Object Light Fields (COLF), our method enables unsupervised learning of object-centric neural scene representations, state-of-the-art reconstruction and novel view synthesis performance on standard datasets, and rendering and training speeds orders of magnitude faster than existing 3D approaches.
  5. We report the presence of a simple neural mechanism that represents an input-output function as a vector within autoregressive transformer language models (LMs). Using causal mediation analysis on a diverse range of in-context-learning (ICL) tasks, we find that a small number of attention heads transport a compact representation of the demonstrated task, which we call a function vector (FV). FVs are robust to changes in context, i.e., they trigger execution of the task on inputs such as zero-shot and natural text settings that do not resemble the ICL contexts from which they are collected. We test FVs across a range of tasks, models, and layers and find strong causal effects across settings in middle layers. We investigate the internal structure of FVs and find that while they often contain information that encodes the output space of the function, this information alone is not sufficient to reconstruct an FV. Finally, we test semantic vector composition in FVs, and find that to some extent they can be summed to create vectors that trigger new complex tasks. Our findings show that compact, causal internal vector representations of function abstractions can be explicitly extracted from LLMs.
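A minimal sketch of the core intervention follows, assuming GPT-2 via Hugging Face transformers: a precomputed function vector (here a random placeholder) is added to the residual stream at a middle layer through a forward hook during a zero-shot pass. The extraction step, averaging the outputs of the identified attention heads over ICL prompts, is omitted, and the layer choice is illustrative.

```python
# Sketch: inject a "function vector" into the residual stream of a
# middle transformer layer during a zero-shot forward pass.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

fv = torch.randn(model.config.n_embd) * 0.1  # placeholder function vector

def add_fv(module, inputs, output):
    hidden = output[0]
    hidden[:, -1, :] += fv  # add the FV at the last token position
    return (hidden,) + output[1:]

layer = model.transformer.h[6]  # a middle layer, where effects are strongest
handle = layer.register_forward_hook(add_fv)
with torch.no_grad():
    out = model(**tok("The capital of France is", return_tensors="pt"))
handle.remove()
```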