


Title: End-to-end differentiability and tensor processing unit computing to accelerate materials’ inverse design
Numerical simulations have revolutionized material design. However, although simulations excel at mapping an input material to its output property, their direct application to inverse design has traditionally been limited by their high computing cost and lack of differentiability. Here, taking the example of the inverse design of a porous matrix featuring a targeted sorption isotherm, we introduce a computational inverse design framework that addresses these challenges by programming a differentiable simulation on the TensorFlow platform, which leverages automated end-to-end differentiation. Thanks to its differentiability, the simulation is used to directly train a deep generative model, which outputs an optimal porous matrix for an arbitrary input sorption isotherm curve. Importantly, this inverse design pipeline leverages the power of tensor processing units (TPUs), an emerging family of dedicated chips which, although specialized for deep learning, are flexible enough for intensive scientific simulations. This approach holds promise to accelerate inverse materials design.
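The following is a minimal illustrative sketch of the general pattern described above: a simulation written entirely in TensorFlow ops so that gradients flow through it and directly train a generator network. It is not the authors' code; the toy simulate_isotherm surrogate, the two-layer generator, and the mean-squared loss are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): a differentiable "simulation" written in
# TensorFlow ops, so gradients flow through it and into a generator network.
import tensorflow as tf

n_pixels, n_pressures = 64, 32
pressures = tf.linspace(0.01, 1.0, n_pressures)[tf.newaxis, :]   # (1, n_pressures)

def simulate_isotherm(matrix):
    """Toy differentiable surrogate for a sorption simulation.

    `matrix` is a flattened porous-structure field in [0, 1]; the returned
    Langmuir-like uptake curve is a smooth function of it, standing in for
    the real (much more expensive) sorption simulation.
    """
    porosity = tf.reduce_mean(matrix, axis=-1, keepdims=True)             # (batch, 1)
    affinity = tf.reduce_mean(tf.square(matrix), axis=-1, keepdims=True)  # (batch, 1)
    return porosity * pressures / (affinity + pressures)                  # (batch, n_pressures)

# Generator: target isotherm -> candidate porous matrix.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(n_pixels, activation="sigmoid"),
])
generator.build(input_shape=(None, n_pressures))
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(target_isotherm):
    with tf.GradientTape() as tape:
        matrix = generator(target_isotherm)          # proposed structure
        predicted = simulate_isotherm(matrix)        # differentiable forward simulation
        loss = tf.reduce_mean(tf.square(predicted - target_isotherm))
    # End-to-end gradients: through the simulation, into the generator weights.
    grads = tape.gradient(loss, generator.trainable_variables)
    optimizer.apply_gradients(zip(grads, generator.trainable_variables))
    return loss

target = pressures / (0.3 + pressures)               # one toy target isotherm
for _ in range(200):
    loss = train_step(target)
```

Because the loss is differentiable with respect to the generator's weights through the simulation itself, no separate surrogate model or gradient-free search is needed, which is the property that also makes accelerator (TPU) execution attractive.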
Award ID(s): 1922167
NSF-PAR ID: 10467396
Author(s) / Creator(s): ; ; ; ; ; ; ;
Publisher / Repository: Nature
Date Published:
Journal Name: npj Computational Materials
Volume: 9
Issue: 1
ISSN: 2057-3960
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Polarized resonant soft X-ray scattering (P-RSoXS) has emerged as a powerful synchrotron-based tool that combines the principles of X-ray scattering and X-ray spectroscopy. P-RSoXS provides unique sensitivity to molecular orientation and chemical heterogeneity in soft materials such as polymers and biomaterials. Quantitative extraction of orientation information from P-RSoXS pattern data is challenging, however, because the scattering processes originate from sample properties that must be represented as energy-dependent three-dimensional tensors with heterogeneities at nanometre to sub-nanometre length scales. This challenge is overcome here by developing an open-source virtual instrument that uses graphical processing units (GPUs) to simulate P-RSoXS patterns from real-space material representations with nanoscale resolution. This computational framework, called CyRSoXS (https://github.com/usnistgov/cyrsoxs), is designed to maximize GPU performance, including algorithms that minimize both communication and memory footprints. The accuracy and robustness of the approach are demonstrated by validating against an extensive set of test cases, including both analytical solutions and numerical comparisons, and an acceleration of over three orders of magnitude is shown relative to the current state-of-the-art P-RSoXS simulation software. Such fast simulations open up a variety of applications that were previously computationally infeasible, including pattern fitting, co-simulation with the physical instrument for operando analytics, data exploration and decision support, data creation and integration into machine learning workflows, and utilization in multi-modal data assimilation approaches. Finally, the complexity of the computational framework is abstracted away from the end user by exposing CyRSoXS to Python using Pybind. This eliminates input/output requirements for large-scale parameter exploration and inverse design, and democratizes usage by enabling seamless integration with a Python ecosystem (https://github.com/usnistgov/nrss) that can include parametric morphology generation, simulation result reduction, comparison with experiment, and data fitting approaches.
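    As a rough illustration of the kind of computation such a virtual instrument performs, the sketch below builds a per-voxel dielectric-contrast tensor from a composition field and an orientation director, and approximates the far-field pattern by the squared modulus of a Fourier transform of the induced polarization. This is a simplified Born-approximation picture in NumPy, not the CyRSoXS algorithm or API; the 2D geometry and all array names are illustrative assumptions.

```python
# Schematic NumPy sketch (simplified Born approximation; not the CyRSoXS API):
# a per-voxel dielectric-contrast tensor acts on the incident polarization, and
# the far-field pattern is approximated by |FFT(induced polarization)|^2.
import numpy as np

nx, ny = 128, 128
rng = np.random.default_rng(0)

# Real-space morphology: scalar composition field plus an in-plane director.
composition = rng.random((nx, ny))
theta = 2.0 * np.pi * rng.random((nx, ny))
director = np.stack([np.cos(theta), np.sin(theta)], axis=-1)        # (nx, ny, 2)

# Per-voxel 2x2 tensor: contrast weighted by the outer product of the director.
delta_eps = composition[..., None, None] * director[..., :, None] * director[..., None, :]

# Horizontally polarized incident beam; induced polarization at each voxel.
e_in = np.array([1.0, 0.0])
p_induced = np.einsum("xyij,j->xyi", delta_eps, e_in)               # (nx, ny, 2)

# Far-field intensity on the detector ~ squared modulus of the Fourier transform.
p_q = np.fft.fftshift(np.fft.fft2(p_induced, axes=(0, 1)), axes=(0, 1))
intensity = np.sum(np.abs(p_q) ** 2, axis=-1)                        # (nx, ny) pattern
```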

     
  2. Simulating the time evolution of Partial Differential Equations (PDEs) of large-scale systems is crucial in many scientific and engineering domains such as fluid dynamics, weather forecasting, and their inverse optimization problems. However, both classical solvers and recent deep learning-based surrogate models are typically extremely computationally intensive because of their local evolution: they need to update the state of each discretized cell at each time step during inference. Here we develop Latent Evolution of PDEs (LE-PDE), a simple, fast, and scalable method to accelerate the simulation and inverse optimization of PDEs. LE-PDE learns a compact, global representation of the system and efficiently evolves it fully in the latent space with learned latent evolution models. LE-PDE achieves speedup by having a much smaller latent dimension to update during long rollouts, compared to updating in the input space. We introduce new learning objectives to effectively learn such latent dynamics and ensure long-term stability. We further introduce techniques for speeding up inverse optimization of boundary conditions for PDEs via backpropagation through time in latent space, and an annealing technique to address the non-differentiability and sparse interaction of boundary conditions. We test our method on a 1D benchmark of nonlinear PDEs, 2D Navier-Stokes flows into the turbulent phase, and inverse optimization of boundary conditions in 2D Navier-Stokes flow. Compared to state-of-the-art deep learning-based surrogate models and other strong baselines, we demonstrate up to a 128x reduction in the dimensions to update and up to a 15x improvement in speed, while achieving competitive accuracy.
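    A minimal sketch of the latent-evolution idea (not the LE-PDE reference implementation): the full discretized state is encoded once, many time steps are taken in a small latent space, and the state is decoded only when needed. The random linear maps below stand in for the learned encoder, latent evolution model, and decoder.

```python
# Minimal NumPy sketch of latent evolution (not the LE-PDE reference code):
# encode the full field once, evolve a small latent vector for many steps,
# and decode only when a full-resolution state is needed.
import numpy as np

n_cells, latent_dim = 4096, 32
rng = np.random.default_rng(0)

# Random linear maps stand in for the learned encoder, evolver, and decoder.
W_enc = rng.standard_normal((latent_dim, n_cells)) / np.sqrt(n_cells)
W_lat = rng.standard_normal((latent_dim, latent_dim)) / np.sqrt(latent_dim)
W_dec = rng.standard_normal((n_cells, latent_dim)) / np.sqrt(latent_dim)

encode = lambda u: np.tanh(W_enc @ u)    # full state (4096,) -> latent code (32,)
evolve = lambda z: np.tanh(W_lat @ z)    # one latent time step: cheap 32x32 update
decode = lambda z: W_dec @ z             # latent code -> full state

u0 = rng.standard_normal(n_cells)        # initial PDE state on the grid
z = encode(u0)
for _ in range(1000):                    # long rollout touches only 32 numbers per step
    z = evolve(z)
u_final = decode(z)                      # decode once at the end of the rollout
```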
  3. Multi-robot cooperative control has been extensively studied using model-based distributed control methods. However, such control methods rely on sensing and perception modules in a sequential pipeline design, and the separation of perception and controls may cause processing latencies and compounding errors that affect control performance. End-to-end learning overcomes this limitation by implementing direct learning from onboard sensing data, with control commands output to the robots. Challenges exist in end-to-end learning for multi-robot cooperative control, and previous results are not scalable. We propose in this article a novel decentralized cooperative control method for multi-robot formations using deep neural networks, in which inter-robot communication is modeled by a graph neural network (GNN). Our method takes LiDAR sensor data as input, and the control policy is learned from demonstrations that are provided by an expert controller for decentralized formation control. Although it is trained with a fixed number of robots, the learned control policy is scalable. Evaluation in a robot simulator demonstrates the triangular formation behavior of multi-robot teams of different sizes under the learned control policy.
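    A minimal sketch of the kind of decentralized policy structure described above, assuming one round of message passing over the communication graph; the layer sizes, LiDAR feature encoder, and neighbor aggregation are illustrative assumptions rather than the paper's architecture.

```python
# Minimal NumPy sketch of a decentralized GNN-style control policy
# (illustrative sizes and aggregation; not the paper's architecture).
import numpy as np

rng = np.random.default_rng(0)
n_robots, lidar_dim, feat_dim = 5, 64, 16

W_lidar = 0.1 * rng.standard_normal((feat_dim, lidar_dim))  # local LiDAR encoder
W_msg = 0.1 * rng.standard_normal((feat_dim, feat_dim))     # neighbor message transform
W_ctrl = 0.1 * rng.standard_normal((2, 2 * feat_dim))       # (vx, vy) command head

def policy(lidar_scans, adjacency):
    """One forward pass: each robot combines its own LiDAR features with an
    aggregate of its neighbors' features (a single message-passing round)."""
    h = np.tanh(lidar_scans @ W_lidar.T)                     # (n_robots, feat_dim)
    msgs = adjacency @ np.tanh(h @ W_msg.T)                  # sum over communicating neighbors
    return np.concatenate([h, msgs], axis=1) @ W_ctrl.T      # (n_robots, 2) velocity commands

lidar_scans = rng.random((n_robots, lidar_dim))
adjacency = (rng.random((n_robots, n_robots)) < 0.4).astype(float)
np.fill_diagonal(adjacency, 0.0)                             # communication graph, no self-loops
commands = policy(lidar_scans, adjacency)
# During training, `commands` would be regressed onto an expert controller's demonstrations.
```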

     
  4. Bhatele, A.; Hammond, J.; Baboulin, M.; Kruse, C. (Eds.)
    The reactive force field (ReaxFF) interatomic potential is a powerful tool for simulating the behavior of molecules in a wide range of chemical and physical systems at the atomic level. Unlike traditional classical force fields, ReaxFF employs dynamic bonding and polarizability to enable the study of reactive systems. Over the past couple of decades, highly optimized parallel implementations have been developed for ReaxFF to efficiently utilize modern hardware such as multi-core processors and graphics processing units (GPUs). However, the complexity of the ReaxFF potential poses challenges for portability to new architectures (AMD and Intel GPUs, RISC-V processors, etc.) and limits the ability of computational scientists to tailor its functional form to their target systems. In this regard, the convergence of cyber-infrastructure for high performance computing (HPC) and machine learning (ML) presents new opportunities for customization, programmer productivity, and performance portability. In this paper, we explore the benefits and limitations of JAX, a modern ML library in Python representing a prime example of the convergence of HPC and ML software, for implementing ReaxFF. We demonstrate that by leveraging the auto-differentiation, just-in-time compilation, and vectorization capabilities of JAX, one can attain portable, performant, and easy-to-maintain ReaxFF software. Beyond enabling MD simulations, the end-to-end differentiability of trajectories produced by ReaxFF implemented with JAX makes it possible to perform related tasks such as force field parameter optimization and meta-analysis without requiring any significant software development. We also discuss scalability limitations of the current version of JAX for ReaxFF simulations.
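    A toy sketch of the JAX pattern described here, using a simple Morse-like pair potential rather than the actual ReaxFF functional form: forces come from automatic differentiation of the energy, jit compiles the computation, vmap vectorizes it over frames, and a second gradient with respect to the force-field parameters supports parameter fitting.

```python
# Toy JAX sketch (a Morse-like pair potential, not the actual ReaxFF form):
# forces via autodiff, jit for compilation, vmap for batching over frames,
# and a parameter gradient for force-field fitting.
import jax
import jax.numpy as jnp

def energy(positions, params):
    """Morse-like pair energy; a stand-in for the far richer ReaxFF functional form."""
    d_e, r_e, alpha = params
    diff = positions[:, None, :] - positions[None, :, :]
    # Add the identity before the sqrt so the diagonal (self-distances) stays finite.
    r = jnp.sqrt(jnp.sum(diff ** 2, axis=-1) + jnp.eye(positions.shape[0]))
    pair = d_e * (1.0 - jnp.exp(-alpha * (r - r_e))) ** 2
    mask = 1.0 - jnp.eye(positions.shape[0])
    return 0.5 * jnp.sum(pair * mask)

# Forces = -dE/dpositions, obtained by automatic differentiation and jit-compiled.
forces = jax.jit(lambda x, p: -jax.grad(energy)(x, p))
# Vectorize over a batch of frames without rewriting the kernel.
batched_forces = jax.vmap(forces, in_axes=(0, None))

# Parameter fitting: gradient of a loss with respect to the force-field parameters.
def loss(params, positions, ref_energy):
    return (energy(positions, params) - ref_energy) ** 2

grad_params = jax.jit(jax.grad(loss))

key = jax.random.PRNGKey(0)
positions = jax.random.uniform(key, (8, 3))
frames = jax.random.uniform(key, (10, 8, 3))
params = jnp.array([1.0, 1.2, 2.0])
print(forces(positions, params).shape)            # (8, 3)
print(batched_forces(frames, params).shape)       # (10, 8, 3)
print(grad_params(params, positions, 0.0).shape)  # (3,)
```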
  5. The water retention behavior, a critical factor of unsaturated flow in porous media, can be strongly affected by deformation in the solid matrix. However, it remains challenging to model the water retention behavior with explicit consideration of its dependence on deformation. Here, we propose a data-driven approach that can automatically discover an interpretable model describing the water retention behavior of a deformable porous material, which can be as accurate as non-interpretable models obtained by other data-driven approaches. Specifically, we present a divide-and-conquer approach for discovering a mathematical expression that best fits a neural network trained with the data collected from a series of image-based drainage simulations at the pore scale. We validate the predictive capability of the symbolically regressed counterpart of the trained neural network against unseen pore-scale simulations. Further, by incorporating the discovered symbolic function into a continuum-scale simulation, we showcase the inherent portability of the proposed approach: the discovered water retention model can provide results comparable to those from a hierarchical multi-scale model, while bypassing the need for sub-scale simulations at individual material points.
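    A minimal sketch of the final step described above: fitting an interpretable closed-form expression to a trained surrogate of the pore-scale data. The surrogate below is a stand-in function rather than a real neural network, and the van Genuchten-like candidate form with a porosity-dependent parameter is purely an example, not the expression discovered in the paper.

```python
# Minimal sketch: fit an interpretable candidate form to a trained surrogate
# (the surrogate is a stand-in; the van Genuchten-like form is only an example).
import numpy as np
from scipy.optimize import curve_fit

def surrogate(capillary_pressure, porosity):
    """Stand-in for a neural network trained on pore-scale drainage simulations."""
    alpha = 0.08 * (1.0 + 2.0 * porosity)        # deformation enters through porosity
    return (1.0 + (alpha * capillary_pressure) ** 2.0) ** -0.5

def candidate(x, a0, a1, n, m):
    """Candidate closed-form water retention model with a porosity-dependent alpha."""
    pc, phi = x
    return (1.0 + ((a0 + a1 * phi) * pc) ** n) ** (-m)

# Sample the surrogate over the conditions of interest and fit the candidate form.
pc = np.linspace(1.0, 100.0, 50)
phi = np.linspace(0.2, 0.5, 10)
PC, PHI = np.meshgrid(pc, phi)
saturation = surrogate(PC, PHI)

popt, _ = curve_fit(candidate, (PC.ravel(), PHI.ravel()), saturation.ravel(),
                    p0=[0.05, 0.1, 2.0, 0.5],
                    bounds=([0.0, 0.0, 1.0, 0.1], [1.0, 1.0, 5.0, 2.0]))
print(popt)   # fitted interpretable parameters (a0, a1, n, m)
```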

     