Title: Choose a Transformer: Fourier or Galerkin
In this paper, we apply the self-attention from the state-of-the-art Transformer in Attention Is All You Need for the first time to a data-driven operator learning problem related to partial differential equations. An effort is made to explain the heuristics of, and to improve the efficacy of, the attention mechanism. By employing the operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable layer-wise to that of a Petrov-Galerkin projection, and the estimate is independent of the sequence length. A new layer normalization scheme mimicking the Petrov-Galerkin projection is proposed to allow a scaling to propagate through attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed, simple attention-based operator learner, Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts.
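For illustration, the softmax-free, Galerkin-type attention described above can be sketched in a few lines of PyTorch. This is a minimal single-head sketch, not the authors' released implementation; the class name, dimensions, and the placement of the 1/n scaling are assumptions made for readability.

```python
import torch
import torch.nn as nn

class GalerkinAttention(nn.Module):
    """Softmax-free attention: layer normalization is applied to the keys and
    values instead of a row-wise softmax, so K^T V can be formed first and the
    cost is linear in the sequence length n."""
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.ln_k = nn.LayerNorm(d_model)
        self.ln_v = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, d_model); n is the number of grid points / tokens
        n = x.shape[1]
        q = self.q(x)
        k = self.ln_k(self.k(x))
        v = self.ln_v(self.v(x))
        # The small (d x d) matrix K^T V plays the role of learned basis inner
        # products; Q is then "projected" onto it, echoing a Petrov-Galerkin projection.
        return q @ (k.transpose(-2, -1) @ v) / n

x = torch.randn(4, 256, 64)        # a batch of 4 functions sampled at 256 points
y = GalerkinAttention(64)(x)       # output has the same shape as x
```

Because no softmax is taken, no row-wise normalization over the sequence is needed, and the intermediate product is only d-by-d rather than n-by-n.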
Award ID(s): 2136075
PAR ID: 10342811
Author(s) / Creator(s):
Date Published:
Journal Name: Advances in Neural Information Processing Systems
Volume: 34
ISSN: 1049-5258
Page Range / eLocation ID: 24924-24940
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. In this article, we introduce an error representation function to perform adaptivity in time for the recently developed time-marching Discontinuous Petrov–Galerkin (DPG) scheme. We first provide an analytical expression for the error, namely the Riesz representation of the residual. We then approximate the error by enriching the test space so that it contains the optimal test functions. The local error contributions can be computed efficiently by adding a few equations to the time-marching scheme. We analyze the quality of this approximation by constructing a Fortin operator and providing an a posteriori error estimate. The time-marching scheme proposed in this article delivers an optimal solution along with a set of efficient and reliable local error contributions for performing adaptivity. We validate the method on both parabolic and hyperbolic problems.
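To make the adaptivity idea concrete, here is a generic, heavily simplified Python sketch of error-indicator-driven time marching: a step is redone with a halved time step whenever its local indicator exceeds a tolerance. The indicator below (a step-doubling comparison) merely stands in for the DPG error representation, and the function names and toy backward-Euler problem are illustrative assumptions rather than the scheme from the article.

```python
import numpy as np

def adaptive_march(step, indicator, u0, t_end, dt0, tol):
    """Generic error-indicator-driven time marching (illustration only).
    `step(u, dt)` advances the state one step; `indicator(u, dt)` returns a
    local error measure standing in for the DPG error representation."""
    t, u, dt = 0.0, np.asarray(u0, dtype=float), dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        eta = indicator(u, dt)
        if eta > tol and dt > 1e-12:    # reject the step and refine in time
            dt *= 0.5
            continue
        u, t = step(u, dt), t + dt      # accept the step
        if eta < 0.25 * tol:            # coarsen when comfortably accurate
            dt *= 2.0
    return u

# Toy usage on u' = -u with backward Euler; the indicator compares one full
# step against two half steps.
be = lambda u, dt: u / (1.0 + dt)
ind = lambda u, dt: float(np.max(np.abs(be(u, dt) - be(be(u, 0.5 * dt), 0.5 * dt))))
u_T = adaptive_march(be, ind, u0=np.array([1.0]), t_end=1.0, dt0=0.5, tol=1e-3)
```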
  2. We present a novel method for learning reduced-order models of dynamical systems using nonlinear manifolds. First, we learn the manifold by identifying nonlinear structure in the data through a general representation learning problem. The proposed approach is driven by embeddings of low-order polynomial form. A projection onto the nonlinear manifold reveals the algebraic structure of the reduced-space system that governs the problem of interest. The matrix operators of the reduced-order model are then inferred from the data using operator inference. Numerical experiments on a number of nonlinear problems demonstrate the generalizability of the methodology and the increase in accuracy that can be obtained over reduced-order modeling methods that employ a linear subspace approximation. 
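A minimal sketch of the operator-inference step follows, assuming a plain POD (linear) basis in place of the learned polynomial manifold described above; the function name, the quadratic model form dz/dt ≈ A z + H (z ⊗ z), and the toy data are illustrative assumptions.

```python
import numpy as np

def operator_inference(X, dt, r):
    """Infer reduced operators A, H of the quadratic model dz/dt ≈ A z + H (z ⊗ z)
    by least squares, using a plain POD basis here in place of a learned
    polynomial manifold embedding. X is an (n, m) snapshot matrix whose
    columns are states at uniform times dt apart."""
    V = np.linalg.svd(X, full_matrices=False)[0][:, :r]     # POD basis (n, r)
    Z = V.T @ X                                             # reduced trajectories (r, m)
    dZ = np.gradient(Z, dt, axis=1)                         # finite-difference time derivatives
    ZZ = np.einsum("ik,jk->ijk", Z, Z).reshape(r * r, -1)   # z ⊗ z for every snapshot
    D = np.vstack([Z, ZZ]).T                                # data matrix, one row per snapshot
    O = np.linalg.lstsq(D, dZ.T, rcond=None)[0].T           # stacked operators [A  H]
    return V, O[:, :r], O[:, r:]

# Toy usage: a random lifting of a smooth 2-state trajectory to 50 dimensions.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 201)
z_true = np.vstack([np.exp(-t), np.exp(-2.0 * t)])
X = rng.standard_normal((50, 2)) @ z_true
V, A, H = operator_inference(X, dt=t[1] - t[0], r=2)
```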
  3. A rigorous physics-informed learning methodology is proposed for predicting wave solutions and band structures in electronic and optical superlattice structures. The methodology is enabled by proper orthogonal decomposition (POD) and Galerkin projection of the wave equation. The approach solves the wave eigenvalue problem in a POD space constituted by a finite set of basis functions (POD modes). The POD ensures that the generated modes are optimized and tailored to the parametric variations of the system, while the Galerkin projection enforces physical principles in the methodology to further enhance its accuracy and efficiency. The POD-Galerkin methodology is demonstrated to reduce the degrees of freedom by four orders of magnitude compared to direct numerical simulation (DNS). A computational speedup of nearly 15,000 times over DNS, with high accuracy, can be achieved for either superlattice structure if only the band structure is computed without the wave solution. If both the wave function and the band structure are needed, a two-order-of-magnitude reduction in computational time is achieved with a relative least-squares error (LSE) near 1%. When the training is incomplete or the desired eigenstates lie slightly beyond the training bounds, an accurate prediction with an LSE near 1%-2% can still be reached by including more POD modes. This reveals the method's remarkable ability to reach correct solutions under the guidance of the physical principles provided by the Galerkin projection.
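The POD-Galerkin workflow can be illustrated on a toy 1D eigenvalue problem standing in for the superlattice wave equation (not the model from the paper): collect wave solutions over sampled parameters, extract POD modes by SVD, Galerkin-project the operator at an unseen parameter, and solve a much smaller eigenproblem. Grid size, potential profile, and mode counts below are arbitrary choices for the illustration.

```python
import numpy as np

n, h = 200, 1.0 / 201
x = np.linspace(h, 1.0 - h, n)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2          # -d^2/dx^2 with Dirichlet ends

# "Training": low-lying eigenvectors (wave solutions) over sampled potential
# strengths, followed by POD modes from an SVD of the snapshot matrix.
snapshots = []
for s in np.linspace(0.0, 40.0, 9):
    H = K + np.diag(s * x * (1.0 - x))               # -u'' + s x(1-x) u = E u
    snapshots.append(np.linalg.eigh(H)[1][:, :6])    # keep the 6 lowest states
U = np.linalg.svd(np.hstack(snapshots), full_matrices=False)[0][:, :20]

# "Prediction": Galerkin-project the operator at an unseen parameter value and
# solve a 20x20 eigenproblem instead of the 200x200 one.
H_new = K + np.diag(23.0 * x * (1.0 - x))
E_pod = np.linalg.eigvalsh(U.T @ H_new @ U)[:6]
E_dns = np.linalg.eigvalsh(H_new)[:6]
print(np.max(np.abs(E_pod - E_dns) / np.abs(E_dns)))  # relative error of the bands
```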
  4. At the core of the popular Transformer architecture is the self-attention mechanism, which dynamically assigns softmax weights to each input token so that the model can focus on the most salient information. However, the softmax structure slows down the attention computation due to its row-wise nature, and it inherently introduces competition among tokens: as the weight assigned to one token increases, the weights of others decrease. This competitive dynamic may narrow the focus of self-attention to a limited set of features, potentially overlooking other informative characteristics. Recent experimental studies have shown that using the element-wise sigmoid function helps eliminate token competition and reduce the computational overhead. Despite these promising empirical results, a rigorous comparison between sigmoid and softmax self-attention mechanisms remains absent in the literature. This paper closes this gap by theoretically demonstrating that sigmoid self-attention is more sample-efficient than its softmax counterpart. Toward that goal, we represent the self-attention matrix as a mixture of experts and show that "experts" in sigmoid self-attention require significantly less data to achieve the same approximation error as those in softmax self-attention.
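The two mechanisms being compared can be written side by side in a few lines of PyTorch. This is a schematic single-head sketch; the 1/sqrt(d) scaling and the 1/n factor after the sigmoid are common choices assumed here, not prescriptions from the paper.

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # Row-wise softmax: the weights in each row compete (they must sum to one).
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

def sigmoid_attention(q, k, v):
    # Element-wise sigmoid: each score is squashed independently, so there is
    # no token competition and no row-wise normalization pass over the scores.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.sigmoid(scores) @ v / k.shape[-2]   # 1/n scaling, one common choice

q = k = v = torch.randn(2, 128, 64)   # (batch, tokens, head dimension)
out_softmax = softmax_attention(q, k, v)
out_sigmoid = sigmoid_attention(q, k, v)
```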
  5. A Transformer-based deep direct sampling method is proposed for electrical impedance tomography, a well-known, severely ill-posed nonlinear boundary value inverse problem. Real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and the reconstructed images. An effort is made to give a concrete answer to a fundamental question: whether, and how, one can benefit from the theoretical structure of a mathematical problem to develop task-oriented and structure-conforming deep neural networks. Specifically, inspired by direct sampling methods for inverse problems, the 1D boundary data at different frequencies are preprocessed by a partial differential equation-based feature map to yield 2D harmonic extensions as different input channels. Then, by introducing learnable non-local kernels, direct sampling is recast as a modified attention mechanism. The new method achieves superior accuracy over its predecessors and contemporary operator learners and shows robustness to noise in benchmarks. This research strengthens the insight that, despite being invented for natural language processing tasks, the attention mechanism offers great flexibility to be modified in conformity with a priori mathematical knowledge, which ultimately leads to the design of more physics-compatible neural architectures.
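As a toy illustration of the preprocessing step, one input channel can be produced by harmonically extending 1D Dirichlet boundary data into the interior. The simple Jacobi solver, grid size, and synthetic sinusoidal "measurement" below are assumptions made for illustration; they are not the paper's PDE-based feature map, which works with multi-frequency boundary data.

```python
import numpy as np

def harmonic_extension(boundary, n=64, sweeps=5000):
    """Approximately extend Dirichlet boundary data into the unit square by
    Jacobi sweeps on Laplace's equation; the resulting 2D field serves as one
    input channel for a learned reconstruction network."""
    u = np.zeros((n, n))
    u[0, :], u[-1, :] = boundary["top"], boundary["bottom"]
    u[:, 0], u[:, -1] = boundary["left"], boundary["right"]
    for _ in range(sweeps):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

# Toy usage: a sinusoidal "measurement" on one side and zeros elsewhere become
# a smooth 2D channel for the downstream attention-based reconstruction model.
n = 64
side = np.sin(2.0 * np.pi * np.linspace(0.0, 1.0, n))
channel = harmonic_extension(
    {"top": side, "bottom": np.zeros(n), "left": np.zeros(n), "right": np.zeros(n)}, n=n)
```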