Title: Approximate solutions to second-order parabolic equations: Evolution systems and discretization
Abstract: We study the discretization of a linear evolution partial differential equation when its Green's function is known or well approximated. We provide error estimates both for the spatial approximation and for the time-stepping approximation. We show that, in fact, an approximation of the Green's function is almost as good as the Green's function itself. For suitable time-dependent parabolic equations, we explain how to obtain good, explicit approximations of the Green's function using the Dyson-Taylor commutator method that we developed in J. Math. Phys. 51 (2010), no. 10, 103502 (reference [15]). This short-time approximation, combined with a bootstrap argument, yields an approximate solution on any fixed time interval within any prescribed tolerance.
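To make the strategy concrete, here is a minimal numerical sketch (not the paper's Dyson-Taylor construction) on the 1-D heat equation, whose exact Green's function is the Gaussian heat kernel: a short-time kernel is discretized on a spatial grid and then composed, i.e. bootstrapped, to reach a fixed final time. The grid, step count, and initial datum below are illustrative choices.

```python
import numpy as np

# Minimal illustration (not the paper's Dyson-Taylor construction): the 1-D
# heat equation u_t = u_xx, whose exact Green's function is the Gaussian heat
# kernel G(t, x, y) = exp(-(x - y)^2 / (4t)) / sqrt(4*pi*t). A short-time
# kernel is discretized on a grid and composed ("bootstrapped") to reach a
# fixed final time T. Grid, step count, and initial datum are illustrative.

x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]

def heat_kernel(t):
    """Exact heat kernel on the grid, as a dense matrix."""
    diff = x[:, None] - x[None, :]
    return np.exp(-diff ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def step(u, t_step):
    """One short-time step: quadrature of the kernel integral on the grid."""
    return heat_kernel(t_step) @ u * dx

u0 = np.exp(-x ** 2)                    # initial datum
T, n_steps = 1.0, 20                    # bootstrap: 20 short steps of length T/20

u = u0.copy()
for _ in range(n_steps):
    u = step(u, T / n_steps)

u_exact = heat_kernel(T) @ u0 * dx      # single application of the exact kernel at T
print("max deviation of the bootstrapped solution:", np.abs(u - u_exact).max())
```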
Award ID(s):
1909103
PAR ID:
10474804
Author(s) / Creator(s):
; ;
Publisher / Repository:
American Institute of Mathematical Sciences
Date Published:
Journal Name:
Discrete and Continuous Dynamical Systems - S
Volume:
15
Issue:
12
ISSN:
1937-1632
Page Range / eLocation ID:
3571 to 3602
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We propose an empirical relative value learning (ERVL) algorithm for non-parametric MDPs with a continuous state space, finitely many actions, and the average-reward criterion. The ERVL algorithm relies on function approximation via nearest neighbors and on minibatch samples for the value-function update. It is universal (it works for any MDP) and computationally simple, yet it provides arbitrarily good approximation with high probability in finite time. To our knowledge, this is the first algorithm for non-parametric (continuous-state) MDPs with the average-reward criterion that has these provable properties. Numerical evaluation on a benchmark optimal-replacement problem suggests good performance.
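As a rough illustration only (not the authors' exact ERVL algorithm), the sketch below shows the two ingredients named in the abstract: a value function stored at sampled anchor states and read back by nearest-neighbor lookup, and updates formed from minibatches of simulated next states. The toy MDP, reward, and all parameter choices are hypothetical.

```python
import numpy as np

# Rough sketch only -- not the authors' ERVL algorithm. Nearest-neighbor
# function approximation over sampled anchor states, with relative value
# updates computed from minibatches of simulated next states.

rng = np.random.default_rng(0)
anchors = rng.uniform(0.0, 1.0, size=200)      # sampled states in [0, 1]
values = np.zeros_like(anchors)                # relative value estimates
actions = (0, 1)

def reward(s, a):
    return -abs(s - 0.5) + 0.1 * a             # toy reward

def simulate(s, a, n):
    """Minibatch of n next states from a toy transition kernel."""
    return np.clip(s + 0.1 * (a - 0.5) + 0.05 * rng.standard_normal(n), 0.0, 1.0)

def value(states):
    """Nearest-neighbor read-out of the value function at the given states."""
    idx = np.abs(np.atleast_1d(states)[:, None] - anchors[None, :]).argmin(axis=1)
    return values[idx]

for _ in range(200):                           # empirical relative value updates
    new = np.empty_like(values)
    for i, s in enumerate(anchors):
        q = [reward(s, a) + value(simulate(s, a, 32)).mean() for a in actions]
        new[i] = max(q)
    values = new - new[0]                      # keep values relative to a reference state
```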
  2. It has long been a challenging problem to design algorithms for Markov decision processes (MDPs) with continuous states and actions that are provably approximately optimal and can provide arbitrarily good approximation for any MDP. In this paper, we propose an empirical value learning algorithm for average-reward MDPs with continuous states and actions that combines empirical value iteration with function-parametric approximation and with kernel density estimation of the transition probability distribution. We view each iteration as the application of a random operator and establish convergence using the probabilistic contraction analysis method that the authors (along with others) have recently developed.
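A hedged sketch of one ingredient mentioned above: a Gaussian kernel density estimate of the next-state distribution, fit to simulated samples and used to take an expectation by quadrature, as one would inside an empirical value update. The bandwidth, grid, and toy distribution are illustrative, not taken from the paper.

```python
import numpy as np

# Gaussian KDE of the next-state distribution from simulated samples, used to
# approximate an expectation by quadrature. Illustrative parameters only.

rng = np.random.default_rng(1)

def kde_expectation(next_samples, f, grid, bandwidth=0.05):
    """Approximate E[f(s')] where the law of s' is a Gaussian KDE fit to next_samples."""
    diffs = grid[:, None] - next_samples[None, :]
    density = np.exp(-0.5 * (diffs / bandwidth) ** 2).mean(axis=1)
    density /= bandwidth * np.sqrt(2.0 * np.pi)
    dg = grid[1] - grid[0]
    return float(np.sum(f(grid) * density) * dg)

# Example: 500 simulated next states near 0.6; estimate E[(s')^2].
samples = 0.6 + 0.05 * rng.standard_normal(500)
grid = np.linspace(0.0, 1.0, 501)
print(kde_expectation(samples, lambda s: s ** 2, grid))
```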
  3. Variational methods, such as mean-field (MF) and tree-reweighted (TRW), provide computationally efficient approximations of the log-partition function for generic graphical models, but their approximation ratio is generally not quantified. As the primary contribution of this work, we provide an approach to quantify their approximation ratio for any discrete pairwise graphical model with non-negative potentials through a property of the underlying graph structure G. Specifically, we argue that (a variant of) TRW produces an estimate within a factor K(G), which captures how far G is from a tree structure. As a consequence, the approximation ratio is 1 for trees. The quantity K(G) is the solution of a min-max problem associated with the spanning tree polytope of G that can be evaluated in polynomial time for any graph. We provide a near linear-time variant that achieves an approximation ratio depending on the minimal (across edges) effective resistance of the graph. We connect our results to the graph partition approximation method and thus provide a unified perspective.
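The near linear-time variant mentioned above depends on the minimal effective resistance across the edges of G; the quantity K(G) itself (a min-max over the spanning tree polytope) is not reproduced here. The sketch below computes edge effective resistances with the standard Laplacian-pseudoinverse formula on an arbitrary small graph.

```python
import numpy as np

# Minimal effective resistance across the edges of a graph, via the standard
# Laplacian-pseudoinverse formula. The example graph is arbitrary; the paper's
# K(G) is a different (min-max) quantity and is not computed here.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a 4-cycle plus one chord
n = 4

L = np.zeros((n, n))                                # graph Laplacian
for u, v in edges:
    L[u, u] += 1.0; L[v, v] += 1.0
    L[u, v] -= 1.0; L[v, u] -= 1.0

Lp = np.linalg.pinv(L)                              # Moore-Penrose pseudoinverse

def effective_resistance(u, v):
    return Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v]

r_min = min(effective_resistance(u, v) for u, v in edges)
print("minimal edge effective resistance:", r_min)
```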
  4. Standard regularized training procedures correspond to maximizing a posterior distribution over parameters, known as maximum a posteriori (MAP) estimation. However, model parameters are of interest only insofar as they combine with the functional form of a model to provide a function that can make good predictions. Moreover, the most likely parameters under the parameter posterior do not generally correspond to the most likely function induced by the parameter posterior. In fact, we can re-parametrize a model such that any setting of parameters can maximize the parameter posterior. As an alternative, we investigate the benefits and drawbacks of directly estimating the most likely function implied by the model and the data. We show that this procedure leads to pathological solutions when using neural networks, and we prove conditions under which the procedure is well-behaved, as well as a scalable approximation. Under these conditions, we find that function-space MAP estimation can lead to flatter minima, better generalization, and improved robustness to overfitting.
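The first sentence is a standard fact worth spelling out: with a Gaussian prior over the weights, L2-regularized (weight-decayed) training is exactly parameter-space MAP estimation,

$$ \hat\theta_{\mathrm{MAP}} = \arg\max_\theta\big[\log p(\mathcal{D}\mid\theta) + \log p(\theta)\big], \qquad p(\theta)\propto e^{-\lambda\|\theta\|_2^2} \;\Longrightarrow\; \hat\theta_{\mathrm{MAP}} = \arg\min_\theta\big[-\log p(\mathcal{D}\mid\theta) + \lambda\|\theta\|_2^2\big]. $$

The abstract's point is that this parameter-space optimum need not correspond to the most likely function the model induces.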