

This content will become publicly available on June 1, 2025

Title: Operator learning for homogenizing hyperelastic materials, without PDE data
In this work, we address operator learning for stochastic homogenization in nonlinear elasticity. A Fourier neural operator is employed to learn the map between the input field describing the material at the fine scale and the deformation map. We propose a variationally consistent loss function that does not involve solution field data. The methodology is tested on materials described either by piecewise constant fields at the microscale or by random fields at the mesoscale. High prediction accuracy is obtained for both the solution field and the homogenized response. We show, in particular, that the accuracy achieved with the proposed strategy is comparable to that obtained with the conventional data-driven training method.
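
The abstract gives no implementation, but the idea of a variationally consistent loss can be sketched. The following is a minimal, hypothetical PyTorch sketch, not the authors' code: the operator's predicted deformation is scored by the total potential energy it induces, so training requires no solution field data. The operator `model`, the plane-strain neo-Hookean energy, the periodic finite-difference gradient, and all names and shapes are assumptions for illustration.

```python
# Hypothetical sketch of a variationally consistent training loss:
# the loss is the potential energy of the predicted deformation,
# so no PDE solution data are needed. All names are illustrative.
import torch

def neo_hookean_energy(F, mu, lam):
    """Plane-strain neo-Hookean strain energy density W(F); one common choice."""
    J = torch.det(F)
    I1 = (F.transpose(-1, -2) @ F).diagonal(dim1=-2, dim2=-1).sum(-1)
    return 0.5 * mu * (I1 - 2.0) - mu * torch.log(J) + 0.5 * lam * torch.log(J) ** 2

def variational_loss(model, micro, F_macro, mu, lam, h):
    """Average cell energy of the predicted periodic fluctuation field.

    micro:   (B, 1, N, N) fine-scale material field (network input)
    F_macro: (2, 2) prescribed macroscopic deformation gradient
    h:       grid spacing of the N x N periodic unit cell
    """
    u = model(micro)                                   # (B, 2, N, N) fluctuation
    # periodic central differences -> grad u, shape (B, 2, 2, N, N)
    du_dx = (torch.roll(u, -1, dims=-1) - torch.roll(u, 1, dims=-1)) / (2 * h)
    du_dy = (torch.roll(u, -1, dims=-2) - torch.roll(u, 1, dims=-2)) / (2 * h)
    grad_u = torch.stack((du_dx, du_dy), dim=2)
    F = F_macro.view(1, 2, 2, 1, 1) + grad_u           # total deformation gradient
    F = F.permute(0, 3, 4, 1, 2)                       # (B, N, N, 2, 2)
    W = neo_hookean_energy(F, mu, lam)                 # (B, N, N)
    return (W.sum(dim=(-1, -2)) * h * h).mean()        # energy, averaged over batch
```

Minimizing such a loss over microstructure samples trains the operator by energy minimization alone; the homogenized response can then be evaluated from the predicted deformation afterwards.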
Award ID(s):
2022040
NSF-PAR ID:
10539887
Author(s) / Creator(s):
Publisher / Repository:
ScienceDirect
Date Published:
Journal Name:
Mechanics Research Communications
Volume:
138
Issue:
C
ISSN:
0093-6413
Page Range / eLocation ID:
104281
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The solution of a partial differential equation can be obtained by computing the inverse operator map between the input and the solution space. Towards this end, we introduce a multiwavelet-based neural operator learning scheme that compresses the associated operator's kernel using fine-grained wavelets. By explicitly embedding the inverse multiwavelet filters, we learn the projection of the kernel onto fixed multiwavelet polynomial bases. The projected kernel is trained at multiple scales obtained by repeated application of the multiwavelet transform. This allows learning the complex dependencies at various scales and results in a resolution-independent scheme. In contrast to prior works, we exploit fundamental properties of the operator's kernel that enable a numerically efficient representation. We perform experiments on the Korteweg-de Vries (KdV) equation, Burgers' equation, Darcy flow, and the Navier-Stokes equations. Compared with existing neural operator approaches, our model shows significantly higher accuracy and achieves state-of-the-art performance on a range of datasets. For the time-varying equations, the proposed method exhibits a 2X-10X improvement (0.0018 and 0.0033 relative L2 error for Burgers' and the KdV equation, respectively). By learning mappings between function spaces, the proposed method can find the solution for a high-resolution input after learning from lower-resolution data.
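
The multiwavelet filters themselves are specific to that paper, but the multiscale decomposition they rely on can be illustrated with an ordinary multilevel wavelet transform. The sketch below uses the PyWavelets package (an assumption; the paper's multiwavelet polynomial bases are not implemented there) to split a signal into one coarse approximation plus detail coefficients at three scales, then reconstruct it exactly:

```python
# Illustration of multiscale wavelet decomposition (standard wavelets,
# not the paper's multiwavelet polynomial bases).
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 1024)
signal = np.sin(2 * np.pi * 4 * x) + 0.2 * np.sin(2 * np.pi * 64 * x)

# Three-level decomposition: [coarse, detail_level3, detail_level2, detail_level1]
coeffs = pywt.wavedec(signal, wavelet="db4", level=3)
for i, c in enumerate(coeffs):
    name = "coarse approximation" if i == 0 else f"detail coefficients, band {i}"
    print(f"{name}: {len(c)} coefficients")

# The transform is invertible: all scales together reconstruct the signal
recon = pywt.waverec(coeffs, wavelet="db4")
print("max reconstruction error:", float(np.max(np.abs(recon - signal))))
```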
  2. This article presents a numerical strategy for actively manipulating electromagnetic (EM) fields in layered media. In particular, we develop a scheme to characterize an EM source that will generate predetermined field patterns in prescribed disjoint exterior regions of layered media. Specifying such an EM source is not an inverse source problem (ISP), since the existence of a solution is not guaranteed. Moreover, our problem allows for the possibility of prescribing different EM fields in mutually disjoint exterior regions. This question involves a linear inverse problem that requires solving a severely ill-posed optimization problem (i.e., one suffering from possible non-existence or non-uniqueness of a solution). The forward operator is defined by expressing the EM fields as a function of the current at the source using the layered media Green's function (LMGF), accounting for the physical parameters of the layered media. This results in integral equations that are then discretized using the method of moments (MoM), yielding an ill-posed system of linear equations. Unlike in ISPs, stability with respect to data is not an issue here, since no data are measured; rather, stability with respect to the input current approximation is important. To obtain stable solutions, we applied two regularization methods: the truncated singular value decomposition (TSVD) method and the Tikhonov regularization method with the Morozov discrepancy principle. We performed several numerical simulations to support the theoretical framework and analyses, and to demonstrate the accuracy and feasibility of the proposed numerical algorithms.
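
Neither the layered-media Green's function nor the MoM discretization can be reconstructed from the abstract alone, but the two regularizers it names are standard. Below is a minimal numpy sketch on a synthetic ill-posed system Ax = b; the matrix A is a stand-in with a rapidly decaying spectrum, not the actual MoM matrix:

```python
# TSVD and Tikhonov regularization for a generic ill-posed system A x = b.
import numpy as np

rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.linspace(0, 12, n)      # rapidly decaying spectrum -> ill-posed
A = U @ np.diag(s) @ V.T
b = A @ rng.standard_normal(n)

def tsvd_solve(A, b, tol=1e-8):
    """Truncated SVD: keep only singular values above a relative cutoff."""
    U, s, Vt = np.linalg.svd(A)
    k = int(np.sum(s > tol * s[0]))
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

def tikhonov_solve(A, b, alpha=1e-6):
    """Tikhonov: minimize ||A x - b||^2 + alpha^2 ||x||^2."""
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(A.shape[1]), A.T @ b)

print("residual (TSVD):    ", np.linalg.norm(A @ tsvd_solve(A, b) - b))
print("residual (Tikhonov):", np.linalg.norm(A @ tikhonov_solve(A, b) - b))
```

In the Tikhonov case, the Morozov discrepancy principle mentioned in the abstract would select alpha so that the residual norm matches the estimated noise level in the prescribed fields.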
  3. Abstract

    Magnetic fields and their dynamical interplay with matter in galaxy clusters contribute to the physical properties and evolution of the intracluster medium. However, the current understanding of the origin and properties of cluster magnetic fields is still limited by observational challenges. In this article, we map the magnetic fields, on scales of hundreds of kpc, of five clusters (RXC J1314.4-2515, Abell 2345, Abell 3376, MCXC J0352.4-7401, and El Gordo) using the synchrotron intensity gradient technique in conjunction with high-resolution radio observations from the Jansky Very Large Array (JVLA) and the MeerKAT telescope. We demonstrate that the magnetic field orientation of radio relics derived from the synchrotron intensity gradient agrees with that obtained from synchrotron polarization. Most importantly, the synchrotron intensity gradient is not limited by Faraday depolarization in the cluster central regions, which allows us to map magnetic fields in the radio halos of RXC J1314.4-2515 and El Gordo. We find that magnetic fields in radio halos exhibit a preferential direction along the major merger axis and show turbulent structures at higher angular resolution. These results are consistent with expectations from numerical simulations, which predict turbulent magnetic fields in cluster mergers that are stirred and amplified by matter motions.

     
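
The key step of the synchrotron intensity gradient technique, as commonly described in the literature, is to compute local gradients of the radio intensity map and rotate them by 90 degrees, since MHD turbulence tends to align intensity structures with the magnetic field. A schematic numpy sketch on a toy map follows; real pipelines add sub-block averaging and noise handling that are omitted here:

```python
# Schematic sketch of the synchrotron intensity gradient (SIG) idea:
# intensity gradients, rotated by 90 degrees, trace the plane-of-sky
# magnetic field orientation. Toy data; not a real reduction pipeline.
import numpy as np

def sig_orientation(intensity):
    """Inferred plane-of-sky field angle (radians) per pixel."""
    gy, gx = np.gradient(intensity)          # image gradients (rows = y)
    gradient_angle = np.arctan2(gy, gx)
    return gradient_angle + np.pi / 2.0      # rotate 90 degrees

# Toy map: horizontal filaments, so the inferred field should be horizontal
yy, _ = np.mgrid[0:128, 0:128]
toy_map = 1.0 + 0.5 * np.sin(2.0 * np.pi * yy / 16.0)
angles = sig_orientation(toy_map) % np.pi    # orientation is defined mod 180 deg
print("median inferred orientation (deg):", float(np.degrees(np.median(angles))))
```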
  4. Abstract

    In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation are samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, yet incomplete representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem (learning a function-space neural network to reproduce flow map samples under a fixed integration scheme) leads to representations that generalize strongly, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves on a variety of data reduction methods across several measures: the reconstructed vector field, the flow map, and features derived from the flow map.

     
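
A hedged PyTorch sketch of the central mechanism, with every name and shape an illustrative assumption: a small MLP stands in for the time-varying vector field, and training drives a fixed RK4 integration of the network to reproduce stored flow map samples (x0, t0, tau, x_end). Real inputs would be flow map samples exported from a simulation rather than the random stand-ins used here:

```python
# Hypothetical sketch: an implicit neural vector field trained so that a
# fixed RK4 integration of the network reproduces flow map samples.
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """MLP mapping (x, y, t) -> (vx, vy)."""
    def __init__(self, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, 2),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def rk4_flow(field, x0, t0, tau, steps=8):
    """Integrate the learned field from (x0, t0) for duration tau."""
    x, t = x0, t0
    h = tau / steps
    for _ in range(steps):
        k1 = field(x, t)
        k2 = field(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = field(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = field(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return x

field = NeuralField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

# One training step on a batch of flow map samples (random stand-ins here)
x0 = torch.rand(256, 2); t0 = torch.rand(256, 1)
tau = torch.full((256, 1), 0.1); x_end = torch.rand(256, 2)
loss = ((rk4_flow(field, x0, t0, tau) - x_end) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
print("flow-map reconstruction loss:", loss.item())
```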