Title: Neural Network Method for Integral Fractional Laplace Equations
A neural network method for fractional order diffusion equations with integral fractional Laplacian is studied. We employ the Ritz formulation for the corresponding fractional equation and then derive an approximate solution of an optimization problem in the function class of neural network sets. Connecting the neural network sets with weighted Sobolev spaces, we prove the convergence and establish error estimates of the neural network method in the energy norm. To verify the theoretical results, we carry out numerical experiments and report their outcome.
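For orientation, the model problem and its Ritz energy typically take the following form (a minimal sketch assuming homogeneous exterior Dirichlet conditions and the standard Gagliardo-type energy; the constant, function classes, and network-set notation are generic conventions, not taken from the paper):

```latex
% Sketch (standard conventions assumed, not taken from the paper):
% fractional model problem, Ritz energy, and minimization over a network class N.
\begin{gathered}
(-\Delta)^s u = f \ \text{in } \Omega, \qquad u = 0 \ \text{in } \mathbb{R}^n \setminus \Omega, \qquad 0 < s < 1, \\[2pt]
\mathcal{E}(v) = \frac{C_{n,s}}{4} \int_{\mathbb{R}^n}\!\int_{\mathbb{R}^n}
    \frac{\bigl(v(x)-v(y)\bigr)^2}{|x-y|^{n+2s}} \, dx \, dy
  - \int_{\Omega} f\, v \, dx, \\[2pt]
u_{\theta} \approx \operatorname*{arg\,min}_{v \in \mathcal{N}} \mathcal{E}(v),
\qquad \mathcal{N} = \{\, v_{\theta} : \text{neural network functions vanishing outside } \Omega \,\}.
\end{gathered}
```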
Award ID(s): 2110571
PAR ID: 10612367
Author(s) / Creator(s): ; ;
Publisher / Repository: Global-Science Press
Date Published:
Journal Name: East Asian Journal on Applied Mathematics
Volume: 13
Issue: 1
ISSN: 2079-7362
Page Range / eLocation ID: 95–118
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Fractional calculus is an increasingly important tool for modeling complicated dynamics in modern engineering systems. While in some ways fractional derivatives are a straightforward generalization of the integer-order derivatives that are ubiquitous in engineering modeling, in other ways their use requires considerable mathematical expertise and familiarity with concepts that are not in everyday use across the broad spectrum of engineering disciplines. In more colloquial terms, the learning curve is steep. While the authors recognize the need for fundamental competence in engineering tools, a computational tool that provides an alternative means of computing fractional derivatives has a useful role in engineering modeling. This paper presents a symmetric neural network, trained entirely on integer-order derivatives, as a means of computing fractional derivatives; the training data contain no fractional-order derivatives at all. A fractional derivative is obtained by requiring the neural network to be symmetric, that is, the composition of two identical sets of layers trained on integer-order derivatives. The nodes between the two sets of layers then contain half-order derivative information.
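A minimal sketch of that compositional idea, assuming PyTorch and a toy setup in which functions are represented by their values on a uniform grid and the training set consists of sines with known first derivatives (all names and hyperparameters are illustrative, not from the paper):

```python
# Sketch: learn a "half-derivative" by forcing a network to be the composition
# of two identical blocks whose composition matches d/dx on integer-order data.
import torch
import torch.nn as nn

N = 64                                  # grid points on [0, 2*pi)
x = torch.linspace(0, 2 * torch.pi, N + 1)[:-1]

half = nn.Sequential(                   # one "half" block, applied twice
    nn.Linear(N, 256), nn.Tanh(),
    nn.Linear(256, N),
)

def full(u):                            # symmetric composition: half(half(u))
    return half(half(u))

opt = torch.optim.Adam(half.parameters(), lr=1e-3)
for step in range(2000):
    k = torch.randint(1, 6, (32, 1)).float()        # random frequencies
    phase = 2 * torch.pi * torch.rand(32, 1)
    u = torch.sin(k * x + phase)                    # training functions on the grid
    du = k * torch.cos(k * x + phase)               # exact first derivatives
    loss = nn.functional.mse_loss(full(u), du)      # only integer-order data
    opt.zero_grad(); loss.backward(); opt.step()

# After training, the intermediate output is read off as an approximate
# half-order derivative (up to the chosen convention).
u_test = torch.sin(x).unsqueeze(0)
half_derivative = half(u_test)
```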
  2. Partial differential equations are common models in biology for predicting and explaining complex behaviors. Nevertheless, deriving the equations and estimating the corresponding parameters from data remains challenging. In particular, finely describing the interactions between species requires care in accounting for various regimes, such as saturation effects. We apply a neural-network-based method to discover, from observed data, underlying PDE systems that involve fractional terms and may also contain integration terms. Our proposed framework, called Frac-PDE-Net, adapts PDE-Net 2.0 by adding layers designed to learn fractional and integration terms. The key technical challenge is identifiability: one must identify the main terms and combine similar terms among a huge number of candidates in fractional form generated by the neural network scheme through the division operation. To overcome this barrier, we impose assumptions reflecting realistic biological behavior. Additionally, we use an L2-norm-based term selection criterion and sparse regression to obtain a parsimonious model. Frac-PDE-Net recovers the main terms with accurate coefficients, allowing effective long-term prediction. We demonstrate the method on a biological PDE model proposed to study the pollen tube growth problem.
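The term-selection step can be illustrated with a generic sequentially thresholded least-squares pass over a candidate-term library (a sketch of the general technique, not the Frac-PDE-Net code; the library, threshold, and toy data below are assumptions):

```python
# Generic sparse-regression step for keeping only the main terms of a learned
# PDE right-hand side; candidate library and threshold are illustrative.
import numpy as np

def select_terms(Theta, dudt, threshold=0.05, iters=10):
    """Sequentially thresholded least squares over a candidate-term library.

    Theta : (m, p) matrix, columns = candidate terms evaluated on the data
    dudt  : (m,)   observed time derivative
    Returns a sparse coefficient vector (zeros = discarded terms).
    """
    xi, *_ = np.linalg.lstsq(Theta, dudt, rcond=None)
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big], *_ = np.linalg.lstsq(Theta[:, big], dudt, rcond=None)
    return xi

# Toy usage: recover the single term 0.5 * u_xx from noisy data.
rng = np.random.default_rng(0)
u, u_x, u_xx = rng.normal(size=(3, 200))
Theta = np.stack([u, u_x, u_xx, u * u_x], axis=1)   # candidate terms
dudt = 0.5 * u_xx + 0.01 * rng.normal(size=200)
print(select_terms(Theta, dudt))
```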
  3. There has been increasing interest in using neural networks in closed-loop control systems to improve performance and reduce computational costs for online implementation. However, providing safety and stability guarantees for these systems is challenging due to the nonlinear and compositional structure of neural networks. In this paper, we propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback interconnection. Our technical approach relies on abstracting the nonlinear activation functions by quadratic constraints, which leads to an outer-approximation of forward reachable sets of the closed-loop system. We show that we can compute these approximate reachable sets using semidefinite programming. We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system.
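To illustrate the outer-approximation idea in a runnable form, the sketch below replaces the paper's quadratic-constraint/SDP machinery with the coarser, well-known triangle LP relaxation of ReLU, applied to a made-up one-step linear system with a single-layer controller; it is an illustrative stand-in, not the authors' tool:

```python
# Outer-approximate the one-step reachable set of x+ = A x + B relu(W x)
# over an input box, using the triangle LP relaxation of ReLU.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])      # made-up plant matrices
B = np.array([[0.0], [0.1]])
W = np.array([[-1.0, -0.5]])                # one-layer "controller" u = relu(W x)
x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

# Pre-activation bounds for z = W x over the input box (interval arithmetic).
z_lo = W @ np.where(W[0] >= 0, x_lo, x_hi)
z_hi = W @ np.where(W[0] >= 0, x_hi, x_lo)
slope = z_hi / (z_hi - z_lo)

x = cp.Variable(2)
z = cp.Variable(1)
y = cp.Variable(1)                          # relaxed relu(z)
cons = [x >= x_lo, x <= x_hi, z == W @ x,
        y >= 0, y >= z,
        y <= cp.multiply(slope, z - z_lo)]  # triangle upper bound

# Componentwise bounds on the next state give a box outer-approximation.
for i in range(2):
    for sense in (cp.Maximize, cp.Minimize):
        val = cp.Problem(sense((A @ x + B @ y)[i]), cons).solve()
        print(f"x+[{i}] {'max' if sense is cp.Maximize else 'min'}: {val:.3f}")
```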
  4. Comparing representations of complex stimuli in neural network layers to human brain representations or behavioral judgments can guide model development. However, even qualitatively distinct neural network models often predict similar representational geometries of typical stimulus sets. We propose a Bayesian experimental design approach to synthesizing stimulus sets for adjudicating among representational models efficiently. We apply our method to discriminate among candidate neural network models of behavioral face dissimilarity judgments. Our results indicate that a neural network trained to invert a 3D-face-model graphics renderer is more human-aligned than the same architecture trained on identification, classification, or autoencoding. Our proposed stimulus synthesis objective is generally applicable to designing experiments to be analyzed by representational similarity analysis for model comparison. 
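The model-comparison step the abstract refers to is representational similarity analysis; a minimal sketch with placeholder data (random features and judgments, illustrative model names) looks like this:

```python
# Build a representational dissimilarity matrix (RDM) per candidate model and
# correlate it with behavioral dissimilarity judgments. All data are random
# stand-ins, not the study's stimuli or models.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 40
behavior_rdm = pdist(rng.normal(size=(n_stimuli, 5)))    # stand-in judgments

model_features = {
    "inverse_graphics": rng.normal(size=(n_stimuli, 128)),
    "classification":   rng.normal(size=(n_stimuli, 128)),
}

for name, feats in model_features.items():
    model_rdm = pdist(feats, metric="correlation")       # condensed RDM
    rho, _ = spearmanr(model_rdm, behavior_rdm)
    print(f"{name}: Spearman rho with behavior = {rho:.3f}")
```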
  5. We introduce a general method for the interpretation and comparison of neural models. The method is used to factor a complex neural model into functional components, which consist of sets of co-firing neurons that cut across layers of the network architecture and which we call neural pathways. The function of these pathways can be understood by identifying correlated task-level and linguistic heuristics, so that this knowledge acts as a lens for approximating what the network has learned to apply to its intended task. As a case study of the utility of these pathways, we examine pathways identified in models trained for two standard tasks, namely Named Entity Recognition and Recognizing Textual Entailment.
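One generic way to extract such co-firing groups, sketched here with random stand-in activations rather than the paper's procedure, is to cluster units by their activation profiles over a probe set:

```python
# Cluster units (across all layers) by how they co-activate over a probe set;
# each cluster is a candidate "pathway". Activations are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_examples = 500
layer_sizes = [64, 64, 32]                     # hypothetical network layers

# activations[l] has shape (n_examples, layer_sizes[l]); random stand-ins here.
activations = [rng.normal(size=(n_examples, d)) for d in layer_sizes]

# One row per unit: its activation profile over the probe set.
profiles = np.concatenate(activations, axis=1).T          # (total_units, n_examples)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(profiles)

# Report how each candidate pathway spans the layers.
offsets = np.cumsum([0] + layer_sizes)
for c in range(8):
    units = np.flatnonzero(labels == c)
    per_layer = [int(np.sum((units >= offsets[l]) & (units < offsets[l + 1])))
                 for l in range(len(layer_sizes))]
    print(f"pathway {c}: units per layer = {per_layer}")
```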