This content will become publicly available on November 6, 2025

Title: Geometric Neural Operators (GNPs) for Data-Driven Deep Learning in Non-Euclidean Settings
We introduce Geometric Neural Operators (GNPs) for data-driven deep learning of geometric features for tasks in non-Euclidean settings. We present a formulation that accounts for geometric contributions, along with practical neural network architectures and factorizations for training. We then demonstrate how GNPs can be used (i) to estimate geometric properties, such as the metric and curvatures of surfaces, (ii) to approximate solutions of geometric partial differential equations on manifolds, and (iii) to solve Bayesian inverse problems for identifying manifold shapes. These results show a few ways GNPs can be used to incorporate the role of geometry in the data-driven learning of operators.
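As a rough illustration of this operator-learning setting, here is a minimal sketch (not the GNP architecture from the paper; the class name, layer sizes, and activation are assumptions) of a kernel-integral neural operator layer acting on a function sampled on a manifold point cloud, with geometry entering through a learned pairwise kernel over the point coordinates:

    import torch

    class KernelIntegralLayer(torch.nn.Module):
        # One layer of a generic kernel-integral neural operator on a point cloud:
        #   v_out(x_i) = gelu( W v(x_i) + (1/N) sum_j k_theta(x_i, x_j) v(x_j) ),
        # where the learned kernel k_theta sees the coordinates of both points,
        # so geometric structure can enter through the pairwise dependence.
        def __init__(self, dim_in, dim_out, coord_dim=3, hidden=64):
            super().__init__()
            self.linear = torch.nn.Linear(dim_in, dim_out)
            self.kernel = torch.nn.Sequential(
                torch.nn.Linear(2 * coord_dim, hidden), torch.nn.GELU(),
                torch.nn.Linear(hidden, dim_in * dim_out))
            self.dim_in, self.dim_out = dim_in, dim_out

        def forward(self, x, v):
            # x: (N, coord_dim) coordinates of sample points on the manifold
            # v: (N, dim_in) values of the input function at those points
            N = x.shape[0]
            pairs = torch.cat([x.unsqueeze(1).expand(N, N, -1),
                               x.unsqueeze(0).expand(N, N, -1)], dim=-1)
            K = self.kernel(pairs).view(N, N, self.dim_out, self.dim_in)
            integral = torch.einsum('ijoc,jc->io', K, v) / N  # Monte Carlo quadrature
            return torch.nn.functional.gelu(self.linear(v) + integral)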
Award ID(s):
2306101
PAR ID:
10611638
Author(s) / Creator(s):
Publisher / Repository:
IOP Science
Date Published:
Journal Name:
Machine Learning: Science and Technology
Volume:
5
Issue:
4
ISSN:
2632-2153
Page Range / eLocation ID:
045033
Subject(s) / Keyword(s):
Machine Learning; Neural Operator; Deep Learning; Geometric Learning; Differential Geometry; Partial Differential Equations; Bayesian Learning
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We develop data-driven methods incorporating geometric and topological information to learn parsimonious representations of nonlinear dynamics from observations. The approaches learn nonlinear state-space models of the dynamics for general manifold latent spaces using training strategies related to Variational Autoencoders (VAEs). We refer to our methods as Geometric Dynamic (GD) Variational Autoencoders (GD-VAEs). We learn encoders and decoders for the system states and evolution based on deep neural network architectures, including general Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and other architectures. Motivated by problems arising in parameterized PDEs and physics, we investigate the performance of our methods on tasks for learning reduced-dimensional representations of the nonlinear Burgers equation, constrained mechanical systems, and spatial fields of reaction-diffusion systems. GD-VAEs provide methods that can be used to obtain representations in manifold latent spaces for diverse learning tasks involving dynamics.
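As a toy illustration of the manifold-latent-space idea (a minimal sketch only, not the GD-VAE architecture; the class name, the unit circle S^1 as latent manifold, and the layer sizes are assumptions, and the VAE training loss is omitted), an encoder output can be projected onto the latent manifold and reprojected after the reparameterized noise step:

    import torch

    class CircleLatentAutoencoder(torch.nn.Module):
        # Toy VAE-style model whose latent space is the unit circle S^1:
        # the encoder output is normalized onto the circle, noise is added in
        # the ambient plane (reparameterization), and the sample is reprojected.
        def __init__(self, dim_x, hidden=128):
            super().__init__()
            self.encoder = torch.nn.Sequential(
                torch.nn.Linear(dim_x, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, 2))
            self.log_sigma = torch.nn.Parameter(torch.zeros(2))  # learned noise scale
            self.decoder = torch.nn.Sequential(
                torch.nn.Linear(2, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, dim_x))

        def forward(self, x):
            raw = self.encoder(x)
            z_mean = raw / raw.norm(dim=-1, keepdim=True).clamp_min(1e-8)  # project to S^1
            z = z_mean + torch.exp(self.log_sigma) * torch.randn_like(z_mean)
            z = z / z.norm(dim=-1, keepdim=True).clamp_min(1e-8)           # reproject
            return self.decoder(z), z_mean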
  2. We investigated a mechanism for quick release and transfer of gold nanoparticles (GNPs) from a soft substrate to another substrate under laser illumination. Heating GNPs on a soft substrate with a continuous-wave laser causes a rapid thermal expansion of the substrate, which can be used to selectively release and place GNPs onto another surface. In-plane and out-of-plane nanostructures are successfully fabricated using this method. This rapid release-and-place process can be used for additive nanomanufacturing of metallic nanostructures under ambient conditions, which paves the way for affordable nanomanufacturing and enables a wide variety of applications in nanophotonics, ultrasensitive sensing, and nonlinear plasmonics.
  3. The computational efficiency of many neural operators, widely used for learning solutions of PDEs, relies on the fast Fourier transform (FFT) for performing spectral computations. Because the FFT is limited to equispaced (rectangular) grids, the efficiency of such neural operators suffers when the input and output functions must be processed on general non-equispaced point distributions. Leveraging the observation that a limited set of Fourier (spectral) modes suffices to provide the required expressivity of a neural operator, we propose a simple method, based on efficient direct evaluation of the underlying spectral transformation, to extend neural operators to arbitrary domains. An efficient implementation of such direct spectral evaluations is coupled with existing neural operator models to allow the processing of data on arbitrary non-equispaced distributions of points. With extensive empirical evaluation, we demonstrate that the proposed method extends neural operators to arbitrary point distributions with significant gains in training speed over baselines, while retaining or improving the accuracy of Fourier neural operators (FNOs) and related neural operators.
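A sketch of the core trick in one dimension (hedged: the function names and the [0, 1) domain are illustrative assumptions, not the paper's implementation): because only a few spectral modes are retained, the truncated Fourier transform can be evaluated as a small dense matrix product at arbitrary sample points, bypassing the FFT's equispaced-grid requirement:

    import numpy as np

    def direct_fourier_modes(x, f, modes):
        # Coefficients of the first `modes` Fourier modes of samples f taken at
        # arbitrary points x in [0, 1): direct O(N * modes) summation instead of
        # an FFT, which would require an equispaced grid.
        k = np.arange(modes)
        basis = np.exp(-2j * np.pi * np.outer(k, x))  # (modes, N) matrix
        return basis @ f / len(x)

    def evaluate_at_points(coeffs, y):
        # Evaluate the truncated Fourier series at arbitrary query points y.
        k = np.arange(len(coeffs))
        return np.exp(2j * np.pi * np.outer(y, k)) @ coeffs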
  4. Time evolution of partial differential equations is key to modeling many dynamical processes and forecasting events, but the operators associated with such problems are non-linear. We propose a Padé-approximation-based exponential neural operator scheme for efficiently learning the map between a given initial condition and activities at a later time. Multiwavelet bases are used for space discretization. By explicitly embedding the exponential operators in the model, we reduce the training parameters and make it more data-efficient, which is essential in dealing with scarce real-world datasets. The Padé exponential operator models the non-linearity, in contrast to recent neural operators that rely on multiple linear operator layers in succession. We show theoretically that the gradients associated with the recurrent Padé network are bounded across the recurrent horizon. We perform experiments on non-linear systems such as the Korteweg-de Vries (KdV) and Kuramoto-Sivashinsky (KS) equations to show that the proposed approach achieves the best performance while remaining data-efficient. We also show that urgent real-world problems like epidemic forecasting (for example, COVID-19) can be formulated as a 2D time-varying operator problem. The proposed Padé exponential operators yield better prediction results (lower MAE than the best neural operator and the best non-neural-operator deep learning model) compared to state-of-the-art forecasting models.
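For intuition only, here is a toy sketch of the rational-approximation mechanism (not the paper's multiwavelet-based operator; pade_exp_step and the dense linear generator are simplifying assumptions): a [1/1] Padé approximant advances the state under exp(dt*L) without ever forming the matrix exponential explicitly:

    import numpy as np

    def pade_exp_step(L, u, dt):
        # Advance u(t + dt) = exp(dt * L) @ u(t) via the [1/1] Pade approximant
        # exp(A) ~ (I - A/2)^(-1) (I + A/2) (the Crank-Nicolson rational form),
        # avoiding an explicit matrix exponential.
        A = dt * L
        I = np.eye(L.shape[0])
        return np.linalg.solve(I - 0.5 * A, (I + 0.5 * A) @ u)

    # Toy check on decay dynamics du/dt = -u, so u(1) = exp(-1) * u(0):
    L = -np.eye(3)
    u = np.ones(3)
    for _ in range(10):
        u = pade_exp_step(L, u, dt=0.1)  # u -> ~0.368 after t = 1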