Title: A Fully Explicit Integrator for Modeling Astrophysical Reactive Flows
Abstract

Simulating complex astrophysical reacting flows is computationally expensive—reactions are stiff and typically require implicit integration methods. The reaction update is often the most expensive part of a simulation, which motivates the exploration of more economical methods. In this research note, we investigate how the explicit Runge–Kutta–Chebyshev (RKC) method performs compared to an implicit method when applied to astrophysical reactive flows. These integrators are applied to simulations of X-ray bursts arising from unstable thermonuclear burning of accreted fuel on the surface of neutron stars. We show that the RKC method performs with similar accuracy to our traditional implicit integrator, but is more computationally efficient when run on CPUs.
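As a concrete illustration of the class of integrators studied here, below is a minimal sketch of a first-order Runge–Kutta–Chebyshev (RKC1) step in Python. This shows the general RKC recursion only; it is not the integrator used in the note, and the damping parameter, stage-count heuristic, and relaxation test problem are assumptions chosen for illustration. The appeal of the method is that $s$ explicit stages buy a stability interval along the negative real axis that grows like $2s^2$, so a mildly stiff system can be advanced with far fewer right-hand-side evaluations than a conventional explicit scheme requires, and with no linear algebra.

```python
# Minimal first-order Runge-Kutta-Chebyshev (RKC1) step for y' = f(y).
# Illustrative sketch only: the damping eps, stage heuristic, and test
# problem are assumptions, not the configuration used in the research note.
import numpy as np

def rkc1_step(f, y, h, s, eps=0.05):
    """Advance y by one step h using an s-stage RKC1 recursion."""
    w0 = 1.0 + eps / s**2
    # Chebyshev values T_j(w0) and derivatives T'_j(w0) via the
    # three-term recurrence T_j = 2*w0*T_{j-1} - T_{j-2}.
    T = np.zeros(s + 1)
    dT = np.zeros(s + 1)
    T[:2] = 1.0, w0
    dT[:2] = 0.0, 1.0
    for j in range(2, s + 1):
        T[j] = 2.0 * w0 * T[j - 1] - T[j - 2]
        dT[j] = 2.0 * T[j - 1] + 2.0 * w0 * dT[j - 1] - dT[j - 2]
    w1 = T[s] / dT[s]                  # first-order consistency condition
    y_prev, y_curr = y, y + h * (w1 / w0) * f(y)       # stages Y_0, Y_1
    for j in range(2, s + 1):          # three-term stage recurrence
        mu = 2.0 * w0 * T[j - 1] / T[j]
        nu = -T[j - 2] / T[j]
        mu_t = 2.0 * w1 * T[j - 1] / T[j]
        y_prev, y_curr = y_curr, mu * y_curr + nu * y_prev + h * mu_t * f(y_curr)
    return y_curr

# Stiff linear relaxation test: y' = -lam*(y - 1); forward Euler needs h < 2/lam.
lam = 1.0e3
f = lambda y: -lam * (y - 1.0)
h = 1.0e-2                                      # 5x the forward-Euler limit
s = int(np.ceil(np.sqrt(h * lam / 1.9))) + 1    # stages so ~2*s^2 covers h*lam
y = 0.0
for _ in range(100):
    y = rkc1_step(f, y, h, s)
print(y)  # approaches the equilibrium value 1.0
```

A production integrator of this type additionally estimates the local error and the Jacobian's spectral radius to pick $h$ and $s$ automatically; the sketch hard-codes both for brevity.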

 
NSF-PAR ID: 10481760
Publisher / Repository: DOI prefix 10.3847
Journal Name: Research Notes of the AAS
Volume: 7
Issue: 12
ISSN: 2515-5172
Size: Article No. 282
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract
    Adaptive time stepping methods for metastable dynamics of the Allen–Cahn and Cahn–Hilliard equations are investigated in the spatially continuous, semi-discrete setting. We analyse the performance of a number of first and second order methods, formally predicting step sizes required to satisfy a specified local truncation error $\sigma$ in the limit of small length scale parameter $\epsilon \rightarrow 0$ during metastable dynamics. The formal predictions are made under stability assumptions that include the preservation of the asymptotic structure of the diffuse interface, a concept we call profile fidelity. In this setting, definite statements about the relative behaviour of time stepping methods can be made. Some methods, including all so-called energy stable methods but also some fully implicit methods, require asymptotically more time steps than others. The formal analysis is confirmed in computational studies. We observe that some provably energy stable methods popular in the literature perform worse than some more standard schemes. We show further that when Backward Euler is applied to metastable Allen–Cahn dynamics, the energy decay and profile fidelity properties for these discretizations are preserved for much larger time steps than previous analysis would suggest. The results are established asymptotically for general interfaces, with a rigorous proof for radial interfaces. It is shown analytically and computationally that for most reaction terms, Eyre type time stepping performs asymptotically worse due to loss of profile fidelity.
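For readers wanting the flavor of the schemes being compared, the following is a minimal sketch of one such adaptive method: Backward Euler with a step-doubling local-error estimate, applied to a semi-discretized 1D Allen–Cahn equation. The equation scaling, grid, tolerance, and step-size controller are illustrative assumptions, not the paper's setup.

```python
# Sketch (not the paper's setup): adaptive Backward Euler for the 1D
# Allen-Cahn equation u_t = eps^2 u_xx + u - u^3 with Neumann boundaries,
# using a step-doubling local-error estimate against a target sigma.
import numpy as np

N, eps, sigma = 200, 0.03, 1e-5      # grid points, interface width, LTE target
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# Dense 1D Laplacian with homogeneous Neumann boundary conditions.
L = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
     + np.diag(np.full(N - 1, 1.0), 1))
L[0, 1] = L[-1, -2] = 2.0
L /= dx**2

def backward_euler(u, h, newton_iters=8):
    """Solve v - u - h*(eps^2*L@v + v - v**3) = 0 for v by Newton's method."""
    v = u.copy()
    for _ in range(newton_iters):
        F = v - u - h * (eps**2 * (L @ v) + v - v**3)
        J = np.eye(N) - h * (eps**2 * L + np.diag(1.0 - 3.0 * v**2))
        v -= np.linalg.solve(J, F)
    return v

u = np.tanh((x - 0.4) / (np.sqrt(2.0) * eps))   # single diffuse interface (metastable)
t, h = 0.0, 1e-4
while t < 1.0:
    big = backward_euler(u, h)                              # one step of size h
    half = backward_euler(backward_euler(u, h / 2), h / 2)  # two steps of h/2
    err = np.max(np.abs(big - half))                        # step-doubling LTE estimate
    if err <= sigma:
        u, t = half, t + h                                  # accept the finer solution
    # Deadbeat controller for a first-order method (estimate scales like h^2).
    h *= min(2.0, max(0.2, 0.9 * np.sqrt(sigma / (err + 1e-16))))
print(f"reached t = {t:.3f} with final step h = {h:.2e}")
```

During the metastable phase the interface barely moves, the error estimate collapses, and the controller grows the step rapidly, which is exactly the regime the step-size predictions above address.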
  2. Abstract

    Inverse methods involving compressive sensing are tested in the application of two‐dimensional aperture‐synthesis imaging of radar backscatter from field‐aligned plasma density irregularities in the ionosphere. We consider basis pursuit denoising, implemented with the fast iterative shrinkage thresholding algorithm (FISTA), and orthogonal matching pursuit (OMP) with a wavelet basis in the evaluation. These methods are compared with two more conventional optimization methods rooted in entropy maximization (MaxENT) and adaptive beamforming (linearly constrained minimum variance, often called "Capon's method"). Synthetic data corresponding to an extended ionospheric radar target are considered. We find that MaxENT outperforms the other methods in terms of its ability to recover imagery of an extended target with broad dynamic range. FISTA performs reasonably well but does not reproduce the full dynamic range of the target. It is also the most computationally expensive of the methods tested. OMP is very fast computationally but prone to a high degree of clutter in this application. We also point out that the formulation of MaxENT used here is very similar to OMP in some respects, the difference being that the former reconstructs the logarithm of the image rather than the image itself from basis vectors extracted from the observation matrix. MaxENT could in that regard be considered a form of compressive sensing.
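Of the methods compared, OMP is the simplest to state: greedily pick the dictionary column most correlated with the residual, then re-fit by least squares on the selected support. Below is a hedged sketch of generic OMP on synthetic data; the sensing matrix, sparsity level, and noise floor are illustrative stand-ins, not the aperture-synthesis configuration of the study.

```python
# Generic orthogonal matching pursuit on synthetic data (illustrative only).
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from y ~ A @ x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit on the chosen support keeps the residual
        # orthogonal to all selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64.0)  # random sensing matrix
x_true = np.zeros(256)
x_true[[10, 80, 200]] = [1.0, -0.5, 2.0]
y = A @ x_true + 1e-3 * rng.standard_normal(64)
print(np.flatnonzero(omp(A, y, k=3)))  # ideally prints [ 10  80 200 ]
```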

     
  3.
    Context. A realistic parametrization of convection and convective boundary mixing in conventional stellar evolution codes is still the subject of ongoing research. To improve the current situation, multidimensional hydrodynamic simulations are used to study convection in stellar interiors. Such simulations are numerically challenging, especially for flows at low Mach numbers, which are typical of convection during early evolutionary stages. Aims. We explore the benefits of using a low-Mach hydrodynamic flux solver and demonstrate its usability for simulations in the astrophysical context. Simulations of convection for a realistic stellar profile are analyzed regarding the properties of convective boundary mixing. Methods. The time-implicit Seven-League Hydro (SLH) code was used to perform multidimensional simulations of convective helium shell burning based on a 25 $M_\odot$ star model. The results obtained with the low-Mach AUSM$^{+}$-up solver were compared to results obtained with its non-low-Mach variant AUSM$_{B}^{+}$-up. We applied well-balancing of the gravitational source term to maintain the initial hydrostatic background stratification. The computational grids have resolutions ranging from $180 \times 90^{2}$ to $810 \times 540^{2}$ cells, and the nuclear energy release was boosted by factors of $3 \times 10^{3}$, $1 \times 10^{4}$, and $3 \times 10^{4}$ to study the dependence of the results on these parameters. Results. The boosted energy input results in convection at Mach numbers in the range of $10^{-3}$–$10^{-2}$. Standard mixing-length theory predicts convective velocities of about $1.6 \times 10^{-4}$ if no boosting is applied. The simulations with AUSM$^{+}$-up show a Kolmogorov-like inertial range in the kinetic energy spectrum that extends further toward smaller scales than that of its non-low-Mach variant. The kinetic energy dissipation of the AUSM$^{+}$-up solver converges at a lower resolution than that of AUSM$_{B}^{+}$-up. The extracted entrainment rates at the boundaries of the convection zone are well represented by the bulk Richardson entrainment law, and the corresponding fitting parameters are in agreement with published results for carbon shell burning. However, our study needs to be validated by simulations at higher resolution. Further, we find that a general increase in the entropy in the convection zone may contribute significantly to the measured entrainment at the top boundary. Conclusion. This study demonstrates the successful application of the AUSM$^{+}$-up solver to a realistic astrophysical setup. Compressible simulations of convection in early phases at nominal stellar luminosity will benefit from its low-Mach capabilities. As in other studies, our extrapolated entrainment rate for the helium-burning shell would lead to an unrealistic growth of the convection zone if it were applied over the lifetime of the zone. Studies at nominal stellar luminosities and different phases of the same convection zone are needed to detect a possible evolution of the entrainment rate and the impact of radiation on convective boundary mixing.
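The bulk Richardson entrainment law referenced above expresses the normalized entrainment speed as a power law in the bulk Richardson number, $v_e/v_c = A\,\mathrm{Ri}_\mathrm{B}^{-n}$, and the fitting parameters $A$ and $n$ are what get compared with published carbon-shell results. A minimal sketch of such a fit follows; the data points are synthetic placeholders, not measurements from the paper.

```python
# Log-linear least-squares fit of the bulk Richardson entrainment law,
# v_e/v_c = A * Ri_B**(-n). The sample points below are synthetic
# placeholders, not values extracted from the simulations.
import numpy as np

ri_b = np.array([20.0, 50.0, 120.0, 300.0])         # bulk Richardson numbers
ve_vc = np.array([5.0e-3, 2.2e-3, 1.0e-3, 4.5e-4])  # entrainment / convective speed

# log(ve/vc) = log(A) - n * log(Ri_B) is linear in log(Ri_B).
slope, intercept = np.polyfit(np.log(ri_b), np.log(ve_vc), 1)
A, n = np.exp(intercept), -slope
print(f"A = {A:.3g}, n = {n:.3g}")
```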
  4. Abstract

    Particle filters avoid parametric estimates for Bayesian posterior densities, which alleviates Gaussian assumptions in nonlinear regimes. These methods, however, are more sensitive to sampling errors than Gaussian-based techniques such as ensemble Kalman filters. A recent study by the authors introduced an iterative strategy for particle filters that match posterior moments, in which iterations improve the filter's ability to draw samples from non-Gaussian posterior densities. The iterations follow from a factorization of particle weights, providing a natural framework for combining particle filters with alternative filters to mitigate the impact of sampling errors. The current study introduces a novel approach to forming an adaptive hybrid data assimilation methodology, exploiting the theoretical strengths of nonparametric and parametric filters. At each data assimilation cycle, the iterative particle filter performs a sequence of updates while the prior sample distribution is non-Gaussian, then an ensemble Kalman filter provides the final adjustment when Gaussian distributions for marginal quantities are detected. The method employs the Shapiro–Wilk test, which has outstanding power for detecting departures from normality, to determine when to make the transition between filter algorithms. Experiments using low-dimensional models demonstrate that the approach has significant value, especially for nonhomogeneous observation networks and unknown model process errors. Moreover, hybrid factors are extended to consider marginals of more than one collocated variable using a test for multivariate normality. Findings from this study motivate the use of the proposed method for geophysical problems characterized by diverse observation networks and various dynamic instabilities, such as numerical weather prediction models.

    Significance Statement

    Data assimilation statistically processes observation errors and model forecast errors to provide optimal initial conditions for the forecast, playing a critical role in numerical weather forecasting. The ensemble Kalman filter, which has been widely adopted and developed in many operational centers, assumes Gaussianity of the prior distribution and solves a linear system of equations, leading to bias in strong nonlinear regimes. On the other hand, particle filters avoid many of those assumptions but are sensitive to sampling errors and are computationally expensive. We propose an adaptive hybrid strategy that combines their advantages and minimizes the disadvantages of the two methods. The hybrid particle filter–ensemble Kalman filter is achieved with the Shapiro–Wilk test to detect the Gaussianity of the ensemble members and determine the timing of the transition between these filter updates. Demonstrations in this study show that the proposed method is advantageous when observations are heterogeneous and when the model has an unknown bias. Furthermore, by extending the statistical hypothesis test to a test for multivariate normality, we consider marginals of more than one collocated variable. These results encourage further testing for real geophysical problems characterized by various dynamic instabilities, such as real numerical weather prediction models.
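The core gating idea, running a particle-style update while a Shapiro–Wilk test rejects Gaussianity of the prior ensemble and switching to a Kalman-type update otherwise, can be sketched in a few lines. The updates below are schematic scalar stand-ins, not the authors' iterative particle filter or their EnKF; the significance level and the toy bimodal prior are assumptions.

```python
# Schematic scalar version of the Gaussianity-gated hybrid update
# (crude stand-ins for the iterative PF and EnKF updates in the paper).
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)

def hybrid_update(ensemble, obs, obs_err, alpha=0.05):
    """Pick the update for one scalar marginal via a Shapiro-Wilk test."""
    _, p_value = shapiro(ensemble)
    if p_value < alpha:
        # Non-Gaussian prior: importance weights + resampling (PF stand-in).
        w = np.exp(-0.5 * ((obs - ensemble) / obs_err) ** 2)
        w /= w.sum()
        return ensemble[rng.choice(ensemble.size, ensemble.size, p=w)], "PF branch"
    # Gaussian prior: scalar Kalman update of mean and spread.
    var_b = ensemble.var(ddof=1)
    gain = var_b / (var_b + obs_err**2)
    mean_a = ensemble.mean() + gain * (obs - ensemble.mean())
    return mean_a + np.sqrt(1.0 - gain) * (ensemble - ensemble.mean()), "EnKF branch"

prior = np.concatenate([rng.normal(-2.0, 0.3, 50), rng.normal(1.0, 0.5, 50)])
posterior, branch = hybrid_update(prior, obs=0.8, obs_err=0.5)
print(branch)  # the bimodal prior fails the normality test, so the PF branch runs
```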
    The thermal radiative transfer (TRT) equations form an integro-differential system that describes the propagation and collisional interactions of photons. Computing accurate and efficient numerical solutions of the TRT equations is challenging for several reasons, the first of which is that TRT is defined on a high-dimensional phase space that includes the independent variables of time, space, and velocity. In order to reduce the dimensionality of the phase space, classical approaches such as the P$_N$ (spherical harmonics) or the S$_N$ (discrete ordinates) ansatz are often used in the literature. In this work, we introduce a novel approach: the hybrid discrete (H$^T_N$) approximation to the thermal radiative transfer equations. This approach acquires desirable properties of both P$_N$ and S$_N$, and indeed reduces to each of these approximations in various limits: H$^1_N$ $\equiv$ P$_N$ and H$^T_0$ $\equiv$ S$_T$. We prove that H$^T_N$ results in a system of hyperbolic partial differential equations for all $T\ge 1$ and $N\ge 0$. Another challenge in solving the TRT system is the inherent stiffness due to the large timescale separation between propagation and collisions, especially in the diffusive (i.e., highly collisional) regime. This stiffness can be partially overcome via implicit time integration, although fully implicit methods may become computationally expensive due to the strong nonlinearity and system size. On the other hand, explicit time-stepping schemes that are not also asymptotic-preserving in the highly collisional limit require resolving the mean free path between collisions, making such schemes prohibitively expensive. In this work we develop a numerical method that is based on a nodal discontinuous Galerkin discretization in space, coupled with a semi-implicit discretization in time. In particular, we make use of a second-order explicit Runge-Kutta scheme for the streaming term and an implicit Euler scheme for the material coupling term. Furthermore, in order to solve the material energy equation implicitly after each predictor and corrector step, we linearize the temperature term using a Taylor expansion; this avoids the need for an iterative procedure and therefore improves efficiency. In order to reduce unphysical oscillations, we apply a slope limiter after each time step. Finally, we conduct several numerical experiments to verify the accuracy, efficiency, and robustness of the H$^T_N$ ansatz and the numerical discretizations.
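The Taylor linearization used to avoid an iterative solve can be illustrated on a zero-dimensional version of the material energy equation: expanding $T^4$ about the current temperature turns each implicit Euler update into a single linear solve. The sketch below uses that linearization with illustrative constants in arbitrary units; the radiation energy is held fixed here, whereas the full scheme also updates it.

```python
# Zero-dimensional sketch of the linearized implicit material-energy update:
# backward Euler on C_v dT/dt = c*sigma_a*(E_r - a*T^4) with T^4 expanded as
# T_n^4 + 4*T_n^3*(T - T_n), so each step is one linear solve (no iteration).
# Constants are illustrative placeholders in arbitrary units.

def linearized_temperature_update(T_n, E_r, dt, c=1.0, sigma_a=1.0, a_rad=1.0, C_v=1.0):
    """One backward-Euler step with the emission term Taylor-linearized."""
    # C_v*(T - T_n)/dt = c*sigma_a*(E_r - a*T_n^4 - 4*a*T_n^3*(T - T_n))
    lhs = C_v / dt + 4.0 * c * sigma_a * a_rad * T_n**3
    rhs = C_v / dt * T_n + c * sigma_a * (E_r + 3.0 * a_rad * T_n**4)
    return rhs / lhs

T, E_r = 1.0, 2.0
for _ in range(20):
    T = linearized_temperature_update(T, E_r, dt=0.1)
print(T)  # relaxes toward radiative equilibrium a*T^4 = E_r, i.e. T = 2**0.25
```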