Title: Learning Controllable Adaptive Simulation for Multi-resolution Physics
Simulating the time evolution of physical systems is pivotal in many scientific and engineering problems. An open challenge in simulating such systems is their multi-resolution dynamics: a small fraction of the system is extremely dynamic and requires very fine-grained resolution, while the majority of the system changes slowly and can be modeled at coarser spatial scales. Typical learning-based surrogate models use a uniform spatial scale, which must resolve to the finest required scale and can waste a huge amount of compute to achieve the required accuracy. We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model that jointly learns the evolution model and optimizes appropriate spatial resolutions, devoting more compute to the highly dynamic regions. LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening. We introduce learning techniques that optimize LAMP with a weighted sum of error and computational cost as the objective, allowing LAMP to adapt to the varying relative importance of error vs. computation at inference time. We evaluate our method on a 1D benchmark of nonlinear PDEs and a challenging 2D mesh-based simulation. We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models and can adaptively trade off computation to improve long-term prediction error: it achieves an average of 33.7% error reduction for 1D nonlinear PDEs, and outperforms MeshGraphNets + classical Adaptive Mesh Refinement (AMR) in 2D mesh-based simulations.
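The controllability described above comes down to a single trade-off weight. As a minimal sketch (not the authors' code; the names `lamp_objective`, `num_cells`, and `beta` are illustrative assumptions), the objective weighting error against compute might look like:

```python
import torch

def lamp_objective(pred, target, num_cells, beta):
    """Weighted sum of rollout error and compute cost (illustrative sketch).

    beta sets the relative importance of error vs. computation; varying it
    at inference time yields the controllable trade-off in the abstract.
    """
    evolution_error = torch.mean((pred - target) ** 2)  # forward-model error
    mesh_cost = num_cells.float().mean()                # compute proxy: cells the GNN must update
    return evolution_error + beta * mesh_cost
```

Sweeping `beta` then traces out the error-vs-compute curve; the refinement/coarsening policy is rewarded for lowering this combined objective.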
Award ID(s): 1835598, 1918940
PAR ID: 10471866
Publisher / Repository: NSF-PAR
Journal Name: International Conference on Learning Representations (ICLR)
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. Simulating the time evolution of Partial Differential Equations (PDEs) for large-scale systems is crucial in many scientific and engineering domains such as fluid dynamics, weather forecasting, and their inverse optimization problems. However, both classical solvers and recent deep learning-based surrogate models are typically extremely computationally intensive because of their local evolution: they need to update the state of each discretized cell at each time step during inference. Here we develop Latent Evolution of PDEs (LE-PDE), a simple, fast, and scalable method to accelerate the simulation and inverse optimization of PDEs. LE-PDE learns a compact, global representation of the system and efficiently evolves it fully in latent space with learned latent evolution models. LE-PDE achieves speedup by having a much smaller latent dimension to update during long rollouts compared to updating in the input space. We introduce new learning objectives that effectively learn such latent dynamics and ensure long-term stability. We further introduce techniques for speeding up inverse optimization of boundary conditions for PDEs via backpropagation through time in latent space, and an annealing technique to address the non-differentiability and sparse interaction of boundary conditions. We test our method on a 1D benchmark of nonlinear PDEs, 2D Navier-Stokes flow into the turbulent phase, and inverse optimization of boundary conditions in 2D Navier-Stokes flow. Compared to state-of-the-art deep learning-based surrogate models and other strong baselines, we demonstrate up to 128x reduction in the dimensions to update and up to 15x improvement in speed, while achieving competitive accuracy.
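The speedup mechanism is that the latent vector, not the full discretized state, is what gets stepped forward in time. A minimal sketch of that encode-evolve-decode pattern, assuming simple MLP modules (all module names and sizes are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class LatentEvolver(nn.Module):
    """Illustrative LE-PDE-style rollout: encode once, evolve in latent space."""

    def __init__(self, n_cells, latent_dim=64):
        super().__init__()
        self.encoder = nn.Linear(n_cells, latent_dim)   # compress full state -> global latent
        self.evolver = nn.Sequential(                   # learned latent dynamics
            nn.Linear(latent_dim, latent_dim), nn.ELU(),
            nn.Linear(latent_dim, latent_dim),
        )
        self.decoder = nn.Linear(latent_dim, n_cells)

    def rollout(self, u0, steps):
        z = self.encoder(u0)
        for _ in range(steps):
            z = z + self.evolver(z)   # cheap update: latent_dim << n_cells
        return self.decoder(z)        # decode only when the full field is needed
```

Because each step updates only the small latent vector rather than every cell, long rollouts amortize the one-time encode/decode cost.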
2. Surrogate models driven by sizeable datasets and scientific machine-learning methods have emerged as an attractive microstructure simulation tool, with the potential to deliver predictive microstructure evolution dynamics at huge savings in computational cost. Taking 2D and 3D grain growth simulations as an example, we present a completely overhauled computational framework based on graph neural networks that not only shows excellent agreement with both ground-truth phase-field methods and theoretical predictions, but also achieves enhanced accuracy and efficiency compared to previous works based on convolutional neural networks. These improvements can be attributed to the graph representation, which offers both improved predictive power and a more flexible data structure amenable to adaptive mesh refinement. As the simulated microstructures coarsen, our method can adaptively adopt remeshed grids and larger timesteps to achieve further speedup. The data-to-model pipeline, training procedures, and source code are provided.
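The AMR-friendliness claimed here follows from message passing being defined on arbitrary connectivity: remeshing only rewrites an edge list, not the network. A generic sketch (names and feature choices are assumptions, not the paper's implementation):

```python
import torch

def message_passing_step(node_feats, edge_index, mlp):
    """One generic message-passing update over a mesh graph (illustrative).

    node_feats: (N, F) per-vertex features (e.g., local order parameters)
    edge_index: (2, E) mesh connectivity; remeshing during coarsening
                only changes this tensor, leaving the network unchanged.
    mlp: maps concatenated (2*F) edge inputs to F-dimensional messages.
    """
    src, dst = edge_index
    messages = mlp(torch.cat([node_feats[src], node_feats[dst]], dim=-1))
    aggregated = torch.zeros_like(node_feats).index_add_(0, dst, messages)
    return node_feats + aggregated  # residual node update
```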
3. The inverse problem for radiative transfer is important in many applications, such as optical tomography and remote sensing. Major challenges include large memory requirements and computational expense, which arise from high dimensionality and the need for iterations in solving the inverse problem. Here, to alleviate these issues, we propose adaptive-mesh inversion: a goal-oriented hp-adaptive mesh refinement method for solving inverse radiative transfer problems. One novel aspect is that the two optimizations (one for inversion, one for mesh adaptivity) are treated simultaneously and blended together. By exploiting the connection between duality-based mesh adaptivity and adjoint-based inversion techniques, we propose a goal-oriented error estimator that is cheap to compute and can efficiently guide mesh refinement for numerically solving the inverse problem. We use discontinuous Galerkin spectral element methods to discretize the forward and adjoint problems. Then, based on the goal-oriented error estimator, we propose an hp-adaptive algorithm to refine the meshes. Numerical experiments show convergence speed-up and reduced memory occupation with the goal-oriented mesh-adaptive method.
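The division of labor in such hp-schemes is standard: an estimator decides where to refine, a smoothness test decides how. A schematic pass in that spirit (the element interface `raise_degree`/`split` and both callbacks are hypothetical, not the paper's API):

```python
def hp_adapt_step(elements, estimator, tol, is_smooth):
    """Illustrative goal-oriented hp-refinement sweep.

    estimator(e): cheap duality-based error indicator per element
    is_smooth(e): heuristic choosing p-refinement over h-refinement
    """
    new_mesh = []
    for e in elements:
        if estimator(e) <= tol:
            new_mesh.append(e)                  # accurate enough: keep as is
        elif is_smooth(e):
            new_mesh.append(e.raise_degree())   # smooth: raise polynomial degree (p)
        else:
            new_mesh.extend(e.split())          # localized feature: subdivide (h)
    return new_mesh
```

In the paper's setting the same estimator also serves the inversion, which is what allows the two optimizations to be blended.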
4. Today's scientific simulations require significant reduction of data volume because of the extremely large amounts of data they produce and the limited I/O bandwidth and storage space. Error-bounded lossy compression has been considered one of the most effective solutions to this problem. However, little work has been done to improve error-bounded lossy compression for Adaptive Mesh Refinement (AMR) simulation data. Unlike previous work that only leverages 1D compression, in this work we propose to leverage high-dimensional (e.g., 3D) compression for each refinement level of AMR data. To remove data redundancy across different levels, we propose three pre-processing strategies and adaptively apply them based on the data characteristics. Experiments on seven AMR datasets from a real-world large-scale AMR simulation demonstrate that our proposed approach can improve the compression ratio by up to 3.3X under the same data distortion, compared to the state-of-the-art method. In addition, we leverage the flexibility of our approach to tune the error bound for each level, which achieves much lower data distortion on two application-specific metrics.
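The pipeline implied by the abstract is: per level, strip cross-level redundancy, then hand a 3D array and a per-level error bound to an error-bounded compressor such as SZ or ZFP. A minimal sketch, where `compress_3d` and `deduplicate` are placeholder callbacks (the paper's three pre-processing strategies are not reproduced here):

```python
def compress_amr(levels, error_bounds, compress_3d, deduplicate):
    """Illustrative per-level AMR compression pipeline.

    levels: list of 3D arrays, one per refinement level
    error_bounds: per-level absolute error bounds (tunable, per the abstract)
    compress_3d: error-bounded 3D compressor (e.g., an SZ/ZFP wrapper)
    deduplicate: pre-processing that removes redundancy w.r.t. the prior level
    """
    streams, prev = [], None
    for data, bound in zip(levels, error_bounds):
        residual = deduplicate(data, prev)            # cross-level redundancy removal
        streams.append(compress_3d(residual, bound))  # high-dimensional (3D) compression
        prev = data
    return streams
```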
  5. An adaptive mesh refinement method for numerically solving optimal control problems is developed using Legendre-Gauss-Radau direct collocation. In regions of the solution where the desired accuracy tolerance has not been met, the mesh is refined by either increasing the degree of the approximating polynomial in a mesh interval or dividing a mesh interval into subintervals. In regions of the solution where the desired accuracy tolerance has been met, the mesh size may be reduced by either merging adjacent mesh intervals or decreasing the degree of the approximating polynomial in a mesh interval. Coupled with the mesh refinement method described in this paper is a newly developed relative error estimate that is based on the differences between solutions obtained from the collocation method and those obtained by solving initial-value and terminal-value problems in each mesh interval using an interpolated control obtained from the collocation method. Because the error estimate is based on explicit simulation, the solution obtained via collocation is in close agreement with the solution obtained via explicit simulation using the control on the final mesh, which ensures that the control is an accurate approximation of the true optimal control. The method is demonstrated on three examples from the open literature, and the results obtained show an improvement in final mesh size when compared against previously developed mesh refinement methods. 
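Because that error estimate is defined by explicit simulation, it can be sketched concretely: re-integrate each interval's dynamics with the interpolated collocation control and compare end states. A minimal illustration using SciPy (the function and argument names are assumptions, and the tolerances are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def interval_error(dynamics, u_interp, t0, t1, x0, x_colloc_end):
    """Illustrative relative error estimate for one mesh interval.

    Propagates the dynamics over [t0, t1] with the interpolated control
    u_interp from the collocation solution, then compares the simulated
    end state against the collocation end state x_colloc_end.
    """
    sol = solve_ivp(lambda t, x: dynamics(t, x, u_interp(t)),
                    (t0, t1), x0, rtol=1e-10, atol=1e-10)
    x_sim_end = sol.y[:, -1]
    return np.max(np.abs(x_sim_end - x_colloc_end) / (1.0 + np.abs(x_colloc_end)))
```

Intervals whose estimate exceeds the tolerance are candidates for p- or h-refinement; intervals well under it may be merged or have their degree reduced, as described above.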