Title: Uniform Asymptotic Approximation Method with Pöschl–Teller Potential
In this paper, we study analytical approximate solutions of second-order homogeneous differential equations with exactly two turning points (but without poles) by using the uniform asymptotic approximation (UAA) method. To be concrete, we consider the Pöschl–Teller (PT) potential, for which exact analytical solutions are known. Depending on the values of the parameters in the PT potential, we find that the upper bounds of the errors of the first-order UAA approximations are typically ≲ 0.15–10%. The approximations can be straightforwardly extended to higher orders, for which the errors are expected to be much smaller. The analytical solutions obtained in this way can be used to study cosmological perturbations in the framework of quantum cosmology, as well as quasi-normal modes of black holes.
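For readers who want to experiment with the setting, a minimal numerical sketch follows. It integrates the mode equation u'' + (ω² − V)u = 0 for the standard Pöschl–Teller barrier V(x) = V₀/cosh²(αx); the potential normalization and all parameter values are illustrative assumptions, and the code performs a direct numerical integration, not the paper's UAA construction.

```python
# Minimal sketch (assumed PT form and parameters, not the paper's UAA scheme):
# integrate u'' + (omega^2 - V(x)) u = 0 for the Poschl-Teller barrier
# V(x) = V0 / cosh(alpha x)^2, which admits exact hypergeometric solutions.
import numpy as np
from scipy.integrate import solve_ivp

V0, alpha, omega = 1.0, 1.0, 2.0  # illustrative values

def rhs(x, y):
    u, du = y
    V = V0 / np.cosh(alpha * x) ** 2
    return [du, (V - omega**2) * u]  # u'' = (V - omega^2) u

# Start far to the left with the real part of a plane wave cos(omega x),
# where the potential is negligible, and integrate across the barrier.
x0, x1 = -20.0, 20.0
y0 = [np.cos(omega * x0), -omega * np.sin(omega * x0)]
sol = solve_ivp(rhs, (x0, x1), y0, rtol=1e-10, atol=1e-12)
print("u(x1) =", sol.y[0, -1])
```

Comparing such a numerical solution against a candidate approximation is one simple way to estimate error bounds of the kind quoted in the abstract.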
Award ID(s):
2308845
PAR ID:
10489493
Author(s) / Creator(s):
Editor(s):
Lorenzo Iorio (Academic Editor)
Publisher / Repository:
MDPI, Basel, Switzerland
Date Published:
Journal Name:
Universe
Volume:
9
Issue:
11
ISSN:
2218-1997
Page Range / eLocation ID:
471
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we study ultra-weak discontinuous Galerkin methods with generalized numerical fluxes for multi-dimensional high-order partial differential equations on both unstructured simplex and Cartesian meshes. The equations we consider as examples are the nonlinear convection–diffusion equation and the biharmonic equation. Optimal error estimates are obtained for both equations under certain conditions, and the key step is to carefully design global projections to eliminate numerical errors in the cell-interface terms of ultra-weak schemes in general dimensions. The well-posedness and approximation capability of these global projections are established for polynomial spaces of arbitrary order, based on a wide class of generalized numerical fluxes on regular meshes. These projections can serve as general analytical tools that apply naturally to a wide class of high-order equations. Numerical experiments are conducted to demonstrate these theoretical results.
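Optimal error estimates of this kind are typically verified numerically by measuring the empirical order of convergence under mesh refinement. The sketch below shows that standard check; the error values are made-up placeholders, not results from the paper.

```python
# Minimal sketch of a convergence-order check under mesh refinement.
# The error values are illustrative placeholders, not results from the paper:
# for a degree-k ultra-weak DG scheme one expects order k+1 in the L2 norm.
import math

h = [1/8, 1/16, 1/32, 1/64]                # mesh sizes, halved each refinement
errors = [2.1e-3, 2.7e-4, 3.4e-5, 4.2e-6]  # hypothetical L2 errors (order ~3)

for i in range(1, len(h)):
    rate = math.log(errors[i-1] / errors[i]) / math.log(h[i-1] / h[i])
    print(f"h = {h[i]:.5f}: observed order = {rate:.2f}")
```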
  2. A canonical feature of constraint satisfaction problems in NP is approximation hardness: in the worst case, finding approximate solutions of sufficient quality is exponentially hard for all known methods. Fundamentally, the lack of any guided local-minimum escape method ensures both exact and approximate classical hardness, but the equivalent mechanism(s) for quantum algorithms are poorly understood. For algorithms based on Hamiltonian time evolution, we explore this question through the prototypically hard MAX-3-XORSAT problem class. We conclude that the mechanisms for quantum exact and approximation hardness are fundamentally distinct. We review known results from the literature and identify mechanisms that make conventional quantum methods (such as adiabatic quantum computing) weak approximation algorithms in the worst case. We construct a family of spectrally filtered quantum algorithms that escape these issues and develop analytical theories for their performance. We show that, for random hypergraphs in the approximation-hard regime, if we define the energy to be E = N_unsat − N_sat, spectrally filtered quantum optimization will return states with E ≤ q_m E_GS (where E_GS is the ground-state energy) in sub-quadratic time, where, conservatively, q_m ≃ 0.59. This is in contrast to q_m → 0 for the hardest instances with classical searches. We test all of these claims with extensive numerical simulations. We do not claim that this approximation guarantee holds for all possible hypergraphs, though our algorithm's mechanism can likely generalize widely. These results suggest that quantum computers are more powerful for approximate optimization than had previously been assumed.
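To make the energy convention E = N_unsat − N_sat concrete, here is a small classical sketch that builds a random MAX-3-XORSAT instance and brute-forces its ground-state energy; the instance sizes are toy values chosen for illustration, and this is not the paper's quantum algorithm.

```python
# Classical illustration of the MAX-3-XORSAT energy E = N_unsat - N_sat.
# Toy sizes only; this brute force is exponential and is NOT the paper's
# spectrally filtered quantum algorithm.
import itertools
import random

random.seed(0)
n, m = 12, 24  # 12 spins, 24 clauses (illustrative)
# Each clause (i, j, k, parity) enforces x_i XOR x_j XOR x_k == parity.
clauses = [(*random.sample(range(n), 3), random.randint(0, 1)) for _ in range(m)]

def energy(x):
    unsat = sum((x[i] ^ x[j] ^ x[k]) != p for i, j, k, p in clauses)
    return unsat - (len(clauses) - unsat)  # E = N_unsat - N_sat

e_gs = min(energy(x) for x in itertools.product((0, 1), repeat=n))
print("ground-state energy E_GS =", e_gs)
```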
  3. Abstract The replacement of a nonlinear parameter-to-observable mapping with a linear (affine) approximation is often carried out to reduce the computational cost of solving large-scale inverse problems governed by partial differential equations (PDEs). In the case of a linear parameter-to-observable mapping with normally distributed additive noise and a Gaussian prior measure on the parameters, the posterior is Gaussian. However, substituting a (possibly well-justified) linear surrogate for an accurate model can give misleading results if the induced model approximation error is not accounted for. To account for these errors, the Bayesian approximation error (BAE) approach can be utilised, in which the first- and second-order statistics of the errors are computed via sampling. The most common linear approximation is the linear Taylor expansion, which requires computing (Fréchet) derivatives of the parameter-to-observable mapping with respect to the parameters of interest. In this paper, we prove that the (approximate) posterior measure obtained by replacing the nonlinear parameter-to-observable mapping with a linear approximation is in fact independent of the choice of the linear approximation when the BAE approach is employed. Thus, somewhat non-intuitively, employing the zero model as the linear approximation gives the same approximate posterior as any other choice of linear approximation of the parameter-to-observable model. The independence of the linear approximation is demonstrated mathematically and illustrated with two numerical PDE-based problems: an inverse scattering type problem and an inverse conductivity type problem.
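The independence result can be probed numerically in a low-dimensional toy setting. The sketch below (my construction, not the paper's PDE examples) builds the joint-Gaussian BAE error model, including the parameter/error cross-covariance, and shows that the approximate posterior mean is the same, up to Monte Carlo sampling error, whether the surrogate is a Taylor-like linearization or the zero model.

```python
# Toy numerical illustration of the BAE independence result: with the
# joint-Gaussian error model (including the parameter/error cross-covariance),
# the approximate posterior is the same for ANY linear surrogate A, even A = 0.
import numpy as np

rng = np.random.default_rng(0)
p, d, ns = 3, 4, 200_000  # parameters, observations, prior samples

G = rng.standard_normal((d, p))
F = lambda th: G @ th + 0.1 * np.sin(th).sum()  # toy nonlinear forward map
Gamma_pr = np.eye(p)                            # prior N(0, I)
Gamma_n = 0.01 * np.eye(d)                      # noise covariance
y = F(np.array([0.5, -1.0, 0.3])) + rng.multivariate_normal(np.zeros(d), Gamma_n)

def bae_posterior_mean(A):
    # epsilon = F(theta) - A theta; fit a joint Gaussian to (theta, epsilon).
    TH = rng.standard_normal((ns, p))                  # prior samples
    EPS = np.array([F(t) - A @ t for t in TH])
    eps_bar = EPS.mean(axis=0)
    C_te = (TH - TH.mean(0)).T @ (EPS - eps_bar) / ns  # cross-covariance
    C_ee = np.cov(EPS.T)
    # y = A theta + epsilon + e  =>  condition theta on y (Gaussian formulas).
    C_ty = Gamma_pr @ A.T + C_te
    C_yy = A @ Gamma_pr @ A.T + A @ C_te + C_te.T @ A.T + C_ee + Gamma_n
    return C_ty @ np.linalg.solve(C_yy, y - eps_bar)

print(bae_posterior_mean(G))                 # Taylor-like surrogate
print(bae_posterior_mean(np.zeros((d, p))))  # zero model: ~same mean
```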
  4. Many chemical reactions and molecular processes occur on time scales that are significantly longer than those accessible by direct simulations. One successful approach to estimating dynamical statistics for such processes is to use many short time series of observations of the system to construct a Markov state model, which approximates the dynamics of the system as memoryless transitions between a set of discrete states. The dynamical Galerkin approximation (DGA) is a closely related framework for estimating dynamical statistics, such as committors and mean first passage times, by approximating solutions to their equations with a projection onto a basis. Because the projected dynamics are generally not memoryless, the Markov approximation can result in significant systematic errors. Inspired by quasi-Markov state models, which employ the generalized master equation to encode memory resulting from the projection, we reformulate DGA to account for memory and analyze its performance on two systems: a two-dimensional triple well and the AIB9 peptide. We demonstrate that our method is robust to the choice of basis and can decrease the time series length required to obtain accurate kinetics by an order of magnitude. 
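For context, the memoryless (Markov state model) baseline that the memory-aware DGA reformulation improves on can be illustrated with a tiny committor computation; the four-state transition matrix below is invented for illustration and is not data from the paper.

```python
# Toy illustration of the memoryless (Markov) baseline: the committor,
# the probability of reaching product B before reactant A, solves a linear
# system under a discrete-state transition matrix.
import numpy as np

P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.20, 0.60, 0.20, 0.00],
              [0.00, 0.20, 0.60, 0.20],
              [0.00, 0.00, 0.10, 0.90]])
A, B = [0], [3]          # reactant / product states
interior = [1, 2]

# Solve (I - P) q = b on interior states, with q = 0 on A and q = 1 on B.
M = (np.eye(len(P)) - P)[np.ix_(interior, interior)]
b = P[np.ix_(interior, B)].sum(axis=1)  # flux from interior directly into B
q = np.zeros(len(P)); q[B] = 1.0
q[interior] = np.linalg.solve(M, b)
print("committor:", q)  # monotone from 0 at A to 1 at B
```

When the projected dynamics retain memory, this single-matrix description is exactly what breaks down, which motivates the generalized-master-equation treatment described in the abstract.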
  5. Abstract. Numerical models are a powerful tool for investigating the dynamic processes in the interior of the Earth and other planets, but the reliability and predictive power of these discretized models depend on the numerical method as well as on an accurate representation of material properties in space and time. In the specific context of geodynamic models, particle methods have been applied extensively because of their suitability for advection-dominated processes. They have been used to track the composition of solid rock and melt in the Earth's mantle, fluids in lithospheric- and crustal-scale models, light elements in the liquid core, and deformation properties such as accumulated finite strain or mineral grain size, along with many applications outside the Earth sciences. There have been significant benchmarking efforts to measure the accuracy and convergence behavior of particle methods, but these efforts have largely been limited to instantaneous solutions or to time-dependent models without analytical solutions. As a consequence, there is little understanding of the interplay between particle advection errors and the errors introduced in solving the underlying transient, nonlinear flow equations. To address these limitations, we present two new dynamic benchmarks for transient Stokes flow with analytical solutions that allow us to quantify the accuracy of various advection methods in nonlinear flow. We use these benchmarks to measure the accuracy of our particle algorithm as implemented in the ASPECT geodynamic modeling software against commonly employed field methods and analytical solutions. In particular, we quantify whether an algorithm that is higher-order accurate in time allows for better overall model accuracy, and we verify that our algorithm reaches its intended optimal convergence rate. We then document that the increased accuracy of higher-order algorithms matters for geodynamic applications, using the example of small-scale convection underneath an oceanic plate, and show that the predicted place and time of onset of small-scale convection depend significantly on the chosen particle advection method. Descriptions and implementations of our benchmarks are openly available and can be used to verify other advection algorithms. The availability of accurate, scalable, and efficient particle methods as part of the widely used open-source code ASPECT will allow geodynamicists to investigate complex time-dependent processes such as elastic deformation, anisotropic fabric development, melt generation and migration, and grain damage.
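The temporal-convergence question such benchmarks address can be sketched in a few lines: advect a particle through a velocity field with a known exact trajectory and watch the error shrink with the step size. Solid-body rotation is my illustrative stand-in here, not one of the paper's transient Stokes benchmarks.

```python
# Minimal sketch of measuring a particle advection scheme's temporal order:
# advect one particle in solid-body rotation u = (-y, x), whose exact
# trajectory is a circle (back to the start after one full revolution).
import numpy as np

def velocity(x):
    return np.array([-x[1], x[0]])

def advect_rk2(x, dt, steps):
    for _ in range(steps):
        k1 = velocity(x)
        x = x + dt * velocity(x + 0.5 * dt * k1)  # midpoint (RK2) step
    return x

x0, T = np.array([1.0, 0.0]), 2 * np.pi  # one full revolution
for steps in (100, 200, 400):
    err = np.linalg.norm(advect_rk2(x0, T / steps, steps) - x0)
    print(f"steps={steps}: error={err:.3e}")  # drops ~4x per doubling (order 2)
```

A higher-order integrator such as classical RK4 would show the error dropping roughly sixteenfold per doubling, which is the kind of difference the paper argues matters for predictions like the onset of small-scale convection.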