Title: Uniform Asymptotic Approximation Method with Pöschl–Teller Potential

In this paper, we study analytical approximate solutions of second-order homogeneous differential equations that have only two turning points (and no poles), using the uniform asymptotic approximation (UAA) method. To be concrete, we consider the Pöschl–Teller (PT) potential, for which analytical solutions are known. Depending on the values of the parameters in the PT potential, we find that the upper bounds of the errors of the approximate solutions are in general ≲0.15–10% for the first-order approximation of the UAA method. The approximation can be easily extended to higher orders, for which the errors are expected to be much smaller. The analytical solutions obtained in this way can be used to study cosmological perturbations in the framework of quantum cosmology as well as quasi-normal modes of black holes.
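The class of equations described above can be explored numerically as a reference for checking approximate solutions. The following is a minimal sketch (not the UAA construction itself): it integrates u'' + (ω² − V(x))u = 0 for the PT potential V(x) = V₀/cosh²(αx). The parameter values, the choice ω² < V₀ (so that two real turning points exist), and the plane-wave initial condition are all illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pt_potential(x, v0=1.0, alpha=1.0):
    """Poschl-Teller potential V(x) = V0 / cosh^2(alpha * x)."""
    return v0 / np.cosh(alpha * x) ** 2

def solve_mode(omega, v0=1.0, x_span=(-20.0, 20.0)):
    """Integrate u'' + (omega^2 - V(x)) u = 0 across the barrier,
    starting from a plane wave far to the left where V is negligible."""
    def rhs(x, y):
        u, du = y
        return [du, -(omega ** 2 - pt_potential(x, v0)) * u]
    x0 = x_span[0]
    y0 = [np.cos(omega * x0), -omega * np.sin(omega * x0)]
    return solve_ivp(rhs, x_span, y0, rtol=1e-10, atol=1e-12)
```

With ω = 0.5 and V₀ = 1, the equation ω² − V(x) = 0 has two real roots, matching the two-turning-point setup the paper treats.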

 
Award ID(s):
2308845
NSF-PAR ID:
10489493
Editor(s):
Academic Editor: Lorenzo Iorio
Publisher / Repository:
MDPI, Basel, Switzerland
Journal Name:
Universe
Volume:
9
Issue:
11
ISSN:
2218-1997
Page Range / eLocation ID:
471
Sponsoring Org:
National Science Foundation
More Like this
  1. Dietary DNA metabarcoding enables researchers to identify and characterize trophic interactions with a high degree of taxonomic precision. It is also sensitive to sources of bias and contamination in the field and lab. One of the earliest and most common strategies for dealing with such sensitivities has been to filter resulting sequence data to remove low-abundance sequences before conducting ecological analyses based on the presence or absence of food taxa. Although this step is now often perceived to be both necessary and sufficient for cleaning up datasets, evidence to support this perception is lacking and more attention needs to be paid to the related risk of introducing other undesirable errors. Using computer simulations, we demonstrate that common strategies to remove low-abundance sequences can erroneously eliminate true dietary sequences in ways that impact downstream dietary inferences. Using real data from well-studied wildlife populations in Yellowstone National Park, we further show how these strategies can markedly alter the composition of individual dietary profiles in ways that scale up to obscure ecological interpretations about dietary generalism, specialism, and niche partitioning. Although the practice of removing low-abundance sequences may continue to be a useful strategy to address a subset of research questions that focus on a subset of relatively abundant food resources, its continued widespread use risks generating misleading perceptions about the structure of trophic networks. Researchers working with dietary DNA metabarcoding data—or similar data such as environmental DNA, microbiomes, or pathobiomes—should be aware of potential drawbacks and consider alternative bioinformatic, experimental, and statistical solutions. We used fecal DNA metabarcoding to characterize the diets of bison and bighorn sheep in winter and summer.
Our analyses are based on 35 samples (median per species per season = 10) analyzed using the P6 loop of the chloroplast trnL(UAA) intron together with publicly available plant reference data (Illumina sequence read data are available at NCBI (BioProject: PRJNA780500)). Obicut was used to trim reads with a minimum quality threshold of 30, and primers were removed from forward and reverse reads using cutadapt. All further sequence identifications were performed using obitools; forward and reverse sequences were aligned using the illuminapairedend command with a minimum alignment score of 40, and only joined sequences were retained. We used the obiuniq command to group identical sequences and tally them within samples, enabling us to quantify the relative read abundance (RRA) of each sequence. Sequences that occurred ≤2 times overall or that were ≤8 bp were discarded. Sequences were considered likely PCR artifacts if they were highly similar to another sequence (1 bp difference) and had a much lower abundance (0.05%) in the majority of samples in which they occurred; we discarded these sequences using the obiclean command. Overall, we characterized 357 plant sequences, 355 of which were retained in the dataset after rarefying samples to equal sequencing depth. We then applied relative read abundance thresholds from 0% to 5% to the fecal samples. We compared differences in the inferred dietary richness within and between species based on individual samples, based on average richness across samples, and based on the total richness of each population after accounting for differences in sample size. The readme file contains an explanation of each of the variables in the dataset. Information on the methodology can be found in the associated manuscript referenced above.
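The thresholding step described above can be sketched in a few lines; this toy example (read counts and taxon layout are invented for illustration, not from the dataset) shows how a 5% RRA cutoff silently drops genuinely present but rare food taxa from a presence/absence matrix:

```python
import numpy as np

def apply_rra_threshold(counts, threshold):
    """Convert per-sample read counts to relative read abundance (RRA)
    and return a presence/absence matrix keeping only taxa whose RRA
    meets the threshold in that sample."""
    counts = np.asarray(counts, dtype=float)
    rra = counts / counts.sum(axis=1, keepdims=True)
    return rra >= threshold

# Two samples x four food taxa (read counts); the last two taxa are
# real but rare diet items.
reads = np.array([[900, 60, 30, 10],
                  [500, 400, 80, 20]])
print(apply_rra_threshold(reads, 0.00).sum())  # detections with no filter
print(apply_rra_threshold(reads, 0.05).sum())  # rare taxa are dropped
```

At a 0% threshold all eight detections survive; at 5% only five do, so three true dietary records vanish before any ecological analysis begins, which is the simulated effect the abstract describes.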
  2. Abstract

    Physics‐based simulations of earthquake ground motion are useful to complement recorded ground motions. However, the computational expense of performing numerical simulations hinders their applicability to tasks that require real‐time solutions or ensembles of solutions for different earthquake sources. To enable rapid physics‐based solutions, we present a reduced‐order modeling approach based on interpolated proper orthogonal decomposition (POD) to predict peak ground velocities (PGVs). As a demonstrator, we consider PGVs from regional 3D wave propagation simulations at the location of the 2008 MW 5.4 Chino Hills earthquake using double‐couple sources with varying depth and focal mechanisms. These simulations resolve frequencies ≤1.0 Hz and include topography, viscoelastic attenuation, and S‐wave speeds ≥500 m/s. We evaluate the accuracy of the interpolated POD reduced‐order model (ROM) as a function of the approximation method. Comparing the radial basis function (RBF), multilayer perceptron neural network, random forest, and k‐nearest neighbor, we find that the RBF interpolation gives the lowest error (≈0.1 cm/s) when tested against an independent data set. We also find that evaluating the ROM is 10⁷–10⁸ times faster than the wave propagation simulations. We use the ROM to generate PGV maps for 1 million different focal mechanisms, in which we identify potentially damaging ground motions and quantify correlations between focal mechanism, depth, and accuracy of the predicted PGV. Our results demonstrate that the ROM can rapidly and accurately approximate the PGV from wave propagation simulations with variable source properties, topography, and complex subsurface structure.
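The interpolated-POD idea can be sketched generically: build a POD basis from the SVD of a snapshot matrix, then interpolate the modal coefficients over the source parameters with an RBF. Everything below is synthetic stand-in data (random "simulations" and parameters), not the Chino Hills setup, and serves only to show the structure of such a ROM.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic stand-ins: 50 training "simulations", each a PGV map with
# 200 grid points, parameterized by (depth, strike, dip, rake).
params = rng.uniform(0.0, 1.0, size=(50, 4))
snapshots = np.sin(params @ rng.normal(size=(4, 200)))  # (n_sims, n_grid)

# POD basis from the SVD of the mean-removed snapshot matrix.
mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 10                                  # retained POD modes
basis = Vt[:r]                          # (r, n_grid)
coeffs = (snapshots - mean) @ basis.T   # modal coefficients per simulation

# Interpolate the coefficients over the 4D source-parameter space.
interp = RBFInterpolator(params, coeffs)

def rom_predict(new_params):
    """Evaluate the reduced-order model at unseen source parameters."""
    c = interp(np.atleast_2d(new_params))
    return mean + c @ basis
```

Evaluating `rom_predict` costs one small RBF evaluation plus a rank-r matrix product, which is what makes sweeps over a million focal mechanisms feasible.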

     
  3. Abstract

    Bayesian Markov chain Monte Carlo explores tree space slowly, in part because it frequently returns to the same tree topology. An alternative strategy would be to explore tree space systematically, and never return to the same topology. In this article, we present an efficient parallelized method to map out the high likelihood set of phylogenetic tree topologies via systematic search, which we show to be a good approximation of the high posterior set of tree topologies on the data sets analyzed. Here, “likelihood” of a topology refers to the tree likelihood for the corresponding tree with optimized branch lengths. We call this method “phylogenetic topographer” (PT). The PT strategy is very simple: starting in a number of local topology maxima (obtained by hill-climbing from random starting points), explore out using local topology rearrangements, only continuing through topologies that are better than some likelihood threshold below the best observed topology. We show that the normalized topology likelihoods are a useful proxy for the Bayesian posterior probability of those topologies. By using a nonblocking hash table keyed on unique representations of tree topologies, we avoid visiting topologies more than once across all concurrent threads exploring tree space. We demonstrate that PT can be used directly to approximate a Bayesian consensus tree topology. When combined with an accurate means of evaluating per-topology marginal likelihoods, PT gives an alternative procedure for obtaining Bayesian posterior distributions on phylogenetic tree topologies.
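The search strategy described above can be sketched abstractly: best-first exploration from local maxima, a visited set so no topology is evaluated twice, and a cutoff some fixed log-likelihood drop below the best topology seen so far. The `neighbors` and `log_likelihood` callables are placeholders for local rearrangements (e.g. NNI) and branch-length-optimized tree likelihoods; the toy test data are invented.

```python
import heapq

def topographer(start, neighbors, log_likelihood, drop=5.0):
    """Explore outward from `start`, visiting each topology once, and
    keep every topology whose log-likelihood is within `drop` of the
    best observed. Returns {topology: log_likelihood}."""
    best = log_likelihood(start)
    visited = {start}                    # hash set: never revisit
    frontier = [(-best, start)]          # max-heap via negated values
    kept = {}
    while frontier:
        neg_ll, t = heapq.heappop(frontier)
        ll = -neg_ll
        best = max(best, ll)
        if ll < best - drop:             # below the likelihood threshold
            continue
        kept[t] = ll
        for n in neighbors(t):
            if n not in visited:
                visited.add(n)
                heapq.heappush(frontier, (-log_likelihood(n), n))
    return kept
```

In the real method the visited set is a nonblocking hash table keyed on unique topology encodings, shared across threads; the single-threaded dict above only illustrates the control flow.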

     
  4. The one‐dimensional steady state analytical solution of the energy conservation equation obtained by Robin (1955, https://doi.org/10.3189/002214355793702028) is frequently used in glaciology. This solution assumes a linear change in surface velocity from a minimum value equal to minus the mass balance at the surface to zero at the bed. Here we show that this assumption of a linear velocity profile leads to large errors in the calculated temperature profile and especially in basal temperature. By prescribing a nonlinear power function of elevation above the bed for the vertical velocity profile arising from use of the Shallow Ice Approximation, we derive a new analytical solution for temperature. We show that the solution produces temperature profiles identical to numerical temperature solutions with the Shallow Ice Approximation vertical velocity near ice divides. We quantify the importance of strain heating and demonstrate that integrating the strain heating and adding it to the geothermal heat flux at the bed is a reasonable approximation for the interior regions. Our analytical solution does not include horizontal advection components, so we compare our solution with numerical solutions of a two‐dimensional advection‐diffusion model and assess the applicability and errors of the analytical solution away from the ice divide. We show that several parameters and assumptions impact the spatial extent of applicability of the new solution including surface mass balance rate and surface temperature lapse rate. We delineate regions of Greenland and Antarctica within which the analytical solution at any depth is likely within 2 K of the actual temperatures with horizontal advection.
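The effect of a nonlinear vertical velocity on the steady temperature profile can be reproduced numerically. For κT'' − wT' = 0 the gradient satisfies T'(z) = T'(0)·exp((1/κ)∫₀ᶻ w dζ), with T'(0) fixed by the geothermal flux and T(H) by the surface temperature. The sketch below uses a power-law w(z) = −a(z/H)^m with an illustrative exponent m = 2 and typical Antarctic-interior parameter values; all numbers are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper).
H = 3000.0           # m, ice thickness
Ts = -50.0           # degC, surface temperature
a = 0.3 / 3.15e7     # m/s, surface mass balance rate (0.3 m/yr)
G = 0.05             # W/m^2, geothermal heat flux
k = 2.1              # W/(m K), thermal conductivity of ice
kappa = 1.1e-6       # m^2/s, thermal diffusivity of ice
m = 2.0              # assumed exponent of the power-law velocity

z = np.linspace(0.0, H, 2001)   # elevation above the bed
dz = z[1] - z[0]

def w(z):
    """Vertical velocity: zero at the bed, -a at the surface."""
    return -a * (z / H) ** m

# Steady advection-diffusion kappa*T'' - w*T' = 0:
# T'(z) = T'(0) * exp( (1/kappa) * int_0^z w dz ).
phi = np.exp(np.cumsum(w(z)) * dz / kappa)
Tprime = (-G / k) * phi                           # bed gradient from G
T = Ts - np.flip(np.cumsum(np.flip(Tprime))) * dz # integrate down from Ts
```

With these numbers the profile is nearly isothermal in the upper column and warms toward the bed, the behavior the linear-velocity Robin solution misrepresents.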

     
  5. Abstract

    Large-scale dynamics of the oceans and the atmosphere are governed by primitive equations (PEs). Due to the nonlinearity and nonlocality, the numerical study of the PEs is generally challenging. Neural networks have been shown to be a promising machine learning tool to tackle this challenge. In this work, we employ physics-informed neural networks (PINNs) to approximate the solutions to the PEs and study the error estimates. We first establish the higher-order regularity for the global solutions to the PEs with either full viscosity and diffusivity, or with only the horizontal ones. Such a result for the case with only the horizontal ones is new and required in the analysis under the PINNs framework. Then we prove the existence of two-layer tanh PINNs of which the corresponding training error can be arbitrarily small by taking the width of PINNs to be sufficiently wide, and the error between the true solution and its approximation can be arbitrarily small provided that the training error is small enough and the sample set is large enough. In particular, all the estimates are a priori, and our analysis includes higher-order (in spatial Sobolev norm) error estimates. Numerical results on prototype systems are presented to further illustrate the advantage of using the H^s norm during the training.
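The two-layer tanh PINN construction can be illustrated on a toy problem rather than the primitive equations themselves. In this sketch (all choices — the ODE u' = u with u(0) = 1, the width, the optimizer — are illustrative assumptions) the network derivative is written in closed form and the parameters are fit by minimizing the PDE residual plus the boundary term:

```python
import numpy as np
from scipy.optimize import minimize

# Toy PINN: u(x) = w2 . tanh(w1*x + b1) + b2, trained so that the
# residual of u'(x) = u(x) and the condition u(0) = 1 are both small.
WIDTH = 10
xs = np.linspace(0.0, 1.0, 32)   # collocation (sample) points

def unpack(p):
    w1, b1, w2 = p[:WIDTH], p[WIDTH:2 * WIDTH], p[2 * WIDTH:3 * WIDTH]
    return w1, b1, w2, p[3 * WIDTH]

def net(p, x):
    w1, b1, w2, b2 = unpack(p)
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def net_dx(p, x):
    """Closed-form derivative of the two-layer tanh network."""
    w1, b1, w2, _ = unpack(p)
    h = np.tanh(np.outer(x, w1) + b1)
    return (1.0 - h ** 2) @ (w1 * w2)

def loss(p):
    residual = net_dx(p, xs) - net(p, xs)     # equation residual
    bc = net(p, np.array([0.0]))[0] - 1.0     # boundary condition
    return np.mean(residual ** 2) + bc ** 2

rng = np.random.default_rng(1)
p0 = rng.normal(scale=0.5, size=3 * WIDTH + 1)
res = minimize(loss, p0, method="L-BFGS-B", options={"maxiter": 2000})
u = net(res.x, xs)   # trained approximation of e^x on [0, 1]
```

The paper's analysis bounds the solution error by this kind of training error (measured in Sobolev norms) once the width is large and the sample set dense enough; the toy above only mirrors the loss structure.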

     