Title: An accurate treatment of scattering and diffusion in piecewise power-law models for cosmic ray and radiation/neutrino transport
ABSTRACT

A popular numerical method to model the dynamics of a ‘full spectrum’ of cosmic rays (CRs), also applicable to radiation/neutrino hydrodynamics, is to discretize the spectrum at each location/cell as a piecewise power law in ‘bins’ of momentum (or frequency) space. This gives rise to a pair of conserved quantities (e.g. CR number and energy) that are exchanged between cells or bins, which in turn give the update to the normalization and slope of the spectrum in each bin. While these methods can be evolved exactly in momentum space (e.g. considering injection, absorption, continuous losses/gains), numerical challenges arise in dealing with spatial fluxes if the scattering rates depend on momentum. This has often been treated either by neglecting the variation of those rates ‘within the bin’ or by sacrificing conservation, both of which introduce significant errors. Here, we derive a rigorous treatment of these terms, and show that the variation within the bin can be accounted for accurately with a simple set of scalar correction coefficients that can be written entirely in terms of other, explicitly evolved ‘bin-integrated’ quantities. This eliminates the relevant errors without added computational cost, has no effect on the numerical stability of the method, and retains manifest conservation. We derive correction terms both for methods that explicitly integrate flux variables (e.g. two-moment or M1-like methods) and for single-moment (advection-diffusion, FLD-like) methods, together with approximate corrections valid in various limits.
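To make the bin bookkeeping concrete, here is a minimal Python sketch (illustrative only, not the paper's code) of the inversion from the two conserved bin integrals (number N and energy E) to the local power-law slope, followed by a number-weighted bin average of a momentum-dependent scattering coefficient; the ratio of that average to the coefficient evaluated at a single reference momentum plays the role of a scalar correction factor. The ultra-relativistic scaling E ~ p c (units with c = 1), the diffusivity law kappa(p) = kappa0 (p/p_lo)^delta, and all symbol names are assumptions for illustration.

```python
# Hypothetical sketch of a piecewise power-law bin: recover the slope
# from the conserved integrals (N, E), then form a bin-averaged
# scattering coefficient. Assumes ultra-relativistic CRs (E ~ p, c = 1)
# and kappa(p) = kappa0 * (p / p_lo)**delta.
from scipy.optimize import brentq

def pl_integral(p_lo, p_hi, a):
    """Integral of p^a over [p_lo, p_hi] (a != -1 assumed for brevity)."""
    return (p_hi**(a + 1.0) - p_lo**(a + 1.0)) / (a + 1.0)

def slope_from_moments(N, E, p_lo, p_hi):
    """Invert bin number N and energy E for the slope gamma, where
    dN/dp = C * p^(-gamma); the ratio E/N depends only on gamma."""
    target = E / N
    def resid(gamma):
        return pl_integral(p_lo, p_hi, 1.0 - gamma) / pl_integral(p_lo, p_hi, -gamma) - target
    # Bracket chosen for typical steep CR spectra; the integrals have
    # removable singularities at gamma = 1, 2 a production code would handle.
    return brentq(resid, 2.1, 9.0)

def kappa_bar(N, E, p_lo, p_hi, kappa0, delta):
    """Number-weighted bin average of kappa(p), and the correction
    factor relative to evaluating kappa at the reference momentum p_lo."""
    gamma = slope_from_moments(N, E, p_lo, p_hi)
    avg = kappa0 * pl_integral(p_lo, p_hi, delta - gamma) \
          / pl_integral(p_lo, p_hi, -gamma) / p_lo**delta
    return avg, avg / kappa0

# Example: one bin spanning a decade in momentum with slope 4.5.
p_lo, p_hi, gamma_true = 1.0, 10.0, 4.5
N = pl_integral(p_lo, p_hi, -gamma_true)        # C = 1 for the demo
E = pl_integral(p_lo, p_hi, 1.0 - gamma_true)
print(slope_from_moments(N, E, p_lo, p_hi))     # recovers ~4.5
print(kappa_bar(N, E, p_lo, p_hi, kappa0=1.0, delta=0.5))
```

The point mirrored from the abstract is that the correction factor is computable entirely from quantities the scheme already evolves (N, E, and the fixed bin edges), so no extra state needs to be stored.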

 
Award ID(s):
2108318 1713353 1911233
NSF-PAR ID:
10385408
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Monthly Notices of the Royal Astronomical Society
Volume:
518
Issue:
4
ISSN:
0035-8711
Format(s):
Medium: X
Size(s):
p. 5882-5892
Sponsoring Org:
National Science Foundation
More Like this
  1. Background: When phenotypic characters are described in the literature, they may be constrained or clarified with additional information such as the location or degree of expression; such terms are called “modifiers”. With effort underway to convert narrative character descriptions to computable data, ontologies for such modifiers are needed. Such ontologies can also be used to guide term usage in future publications. Spatial and method modifiers are the subjects of ontologies that have already been developed or are under development. In this work, frequency (e.g., rarely, usually), certainty (e.g., probably, definitely), degree (e.g., slightly, extremely), and coverage modifiers (e.g., sparsely, entirely) are collected, reviewed, and used to create two modifier ontologies with different design considerations. The basic goal is to express the sequential relationships within a type of modifier (for example, usually is more frequent than rarely) in order to allow data annotated with ontology terms to be classified accordingly.

Method: Two designs are proposed for the ontology, both using the list pattern: a closed ordered list (i.e., a five-bin design) and an open ordered list design. The five-bin design puts the modifier terms into a set of five fixed bins with interval object properties, for example, one_level_more/less_frequently_than, where new terms can only be added as synonyms to existing classes. The open-list approach starts with five bins but supports the extensibility of the list via ordinal properties, for example, more/less_frequently_than, allowing new terms to be inserted as new classes anywhere in the list. The consequences of the different design decisions are discussed in the paper; a schematic comparison of the two designs is sketched after this item. CharaParser was used to extract modifiers from plant, ant, and other taxonomic descriptions. After a manual screening, 130 modifier words were selected as the candidate terms for the modifier ontologies. Four curators/experts (three biologists and one information scientist specialized in biosemantics) reviewed and categorized the terms into 20 bins using the Ontology Term Organizer (OTO) (http://biosemantics.arizona.edu/OTO). Inter-curator variations were reviewed and expressed in the final ontologies.

Results: Frequency, certainty, degree, and coverage terms with complete agreement among all curators were used as class labels or exact synonyms. Terms with different interpretations were either excluded or included using “broader synonym” or “not recommended” annotation properties. These annotations explicitly make users aware of the semantic ambiguity associated with the terms and of whether they should be used with caution or avoided. Expert categorization results showed that 16 out of 20 bins contained terms with full agreement, suggesting that differentiating the modifiers into five levels/bins balances the need to differentiate modifiers against the need for the ontology to reflect user consensus. The two ontologies, developed using the Protégé ontology editor, are made available as OWL files and can be downloaded from https://github.com/biosemantics/ontologies.

Contribution: We built the first two modifier ontologies following a consensus-based approach with terms commonly used in taxonomic literature. The five-bin ontology has been used in the Explorer of Taxon Concepts web toolkit to compute the similarity between characters extracted from literature, to facilitate taxon concept alignments. The two ontologies will also be used in an ontology-informed authoring tool for taxonomists to facilitate consistency in modifier term usage.
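The contrast between the two designs can be sketched in a few lines of Python (an illustration only; the published artifacts are OWL ontologies, and the term and property names below merely mirror the paper's examples):

```python
# Closed five-bin design: a fixed ordered list of classes; new terms can
# only attach as synonyms of an existing bin, so ordering reduces to
# comparing bin indices.
FIVE_BINS = ["never", "rarely", "sometimes", "usually", "always"]
SYNONYMS = {"occasionally": "sometimes", "typically": "usually"}

def more_frequent_than(a, b):
    idx = lambda t: FIVE_BINS.index(SYNONYMS.get(t, t))
    return idx(a) > idx(b)

# Open ordered-list design: pairwise ordinal assertions
# (more_frequently_than pairs), so a new class such as 'very_rarely'
# can be inserted between existing classes.
ORDER = [("rarely", "never"), ("sometimes", "rarely"),
         ("usually", "sometimes"), ("always", "usually"),
         ("very_rarely", "never"), ("rarely", "very_rarely")]  # inserted term

def entails_more_frequent(a, b, order=ORDER):
    """Transitive closure over the ordinal assertions."""
    frontier, seen = {a}, set()
    while frontier:
        x = frontier.pop()
        seen.add(x)
        for hi, lo in order:
            if hi == x and lo == b:
                return True
            if hi == x and lo not in seen:
                frontier.add(lo)
    return False

print(more_frequent_than("typically", "rarely"))         # True
print(entails_more_frequent("sometimes", "very_rarely"))  # True
```

The closed design makes reasoning trivial but freezes the vocabulary; the open design must reason transitively over pairwise assertions, which is exactly what permits extensibility.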
  2. It takes great effort to manually or semi-automatically convert free-text phenotype narratives (e.g., morphological descriptions in taxonomic works) to a computable format before they can be used in large-scale analyses. We argue that neither a manual curation approach nor an information extraction approach based on machine learning is a sustainable solution for producing computable phenotypic data that are FAIR (Findable, Accessible, Interoperable, Reusable) (Wilkinson et al. 2016). This is because these approaches do not scale to all biodiversity, and they do not stop the publication of free-text phenotypes that would need post-publication curation. In addition, both manual and machine learning approaches face great challenges: the problem of inter-curator variation (curators interpret/convert a phenotype differently from each other) in manual curation, and keyword-to-ontology-concept translation in automated information extraction, make it difficult for either approach to produce data that are truly FAIR. Our empirical studies show that inter-curator variation in translating phenotype characters to Entity-Quality statements (Mabee et al. 2007) is as high as 40% even within a single project. With this level of variation, curated data integrated from multiple curation projects may still not be FAIR. The key causes of this variation have been identified as semantic vagueness in original phenotype descriptions and difficulties in using standardized vocabularies (ontologies). We argue that the authors describing characters are the key to the solution. Given the right tools and appropriate attribution, the authors should be in charge of developing a project's semantics and ontology. This will speed up ontology development and improve the semantic clarity of the descriptions from the moment of publication. In this presentation, we will introduce the Platform for Author-Driven Computable Data and Ontology Production for Taxonomists, which consists of three components: a web-based, ontology-aware software application called 'Character Recorder,' which features a spreadsheet as the data entry platform and provides authors with the flexibility of using their preferred terminology in recording characters for a set of specimens (this application also facilitates semantic clarity and consistency across species descriptions); a set of services that produces RDF graph data, collects terms added by authors, detects potential conflicts between terms, dispatches conflicts to the third component, and updates the ontology with resolutions; and an Android mobile application, 'Conflict Resolver,' which displays ontological conflicts and accepts solutions proposed by multiple experts. Fig. 1 shows the system diagram of the platform.
The presentation will consist of: a report on the findings from a recent survey of 90+ participants on the need for a tool like Character Recorder; a methods section that describes how we provide semantics to an existing vocabulary of quantitative characters through a set of properties that explain where and how a measurement (e.g., length of perigynium beak) is taken, and how a custom color palette of RGB values obtained from real specimens or high-quality specimen images can be used to help authors choose standardized color descriptions for plant specimens; and a software demonstration, where we show how Character Recorder and Conflict Resolver can work together to construct both human-readable descriptions and RDF graphs using morphological data derived from species in the plant genus Carex (sedges). The key difference of this system from other ontology-aware systems is that authors can directly add needed terms to the ontology as they wish and can update their data according to ontology updates. The software modules currently incorporated in Character Recorder and Conflict Resolver have undergone formal usability studies. We are actively recruiting Carex experts to participate in a 3-day usability study of the entire system of the Platform for Author-Driven Computable Data and Ontology Production for Taxonomists. Participants will use the platform to record 100 characters about one Carex species. In addition to usability data, we will collect the terms that participants submit to the underlying ontology and the data related to conflict resolution. Such data allow us to examine the types and quantities of logical conflicts that may result from the terms added by the users and to use Discrete Event Simulation models to understand if and how term additions and conflict resolutions converge. We look forward to a discussion on how the tools (Character Recorder is online at http://shark.sbs.arizona.edu/chrecorder/public) described in our presentation can contribute to producing and publishing FAIR data in taxonomic studies.
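As one hedged illustration of the kind of ontological conflict such services might detect (the actual Character Recorder services are not described at this level of detail, so the function and term names below are hypothetical), contradictory order assertions contributed by different authors show up as a cycle in the directed graph of terms:

```python
# Hypothetical sketch: detect a cycle in author-contributed order
# assertions (e.g. broader_than pairs), which would make the term
# hierarchy logically inconsistent and need dispatch to Conflict Resolver.
from collections import defaultdict

def find_cycle(assertions):
    """assertions: iterable of (broader, narrower) pairs.
    Returns a list of terms forming a cycle, or None if consistent."""
    graph = defaultdict(list)
    for a, b in assertions:
        graph[a].append(b)

    WHITE, GRAY, BLACK = 0, 1, 2
    color, parent = defaultdict(int), {}

    def dfs(u):
        color[u] = GRAY
        for v in graph[u]:
            if color[v] == GRAY:           # back edge: cycle found
                cycle, x = [v], u
                while x != v:
                    cycle.append(x)
                    x = parent[x]
                return cycle[::-1]
            if color[v] == WHITE:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
        color[u] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE:
            found = dfs(node)
            if found:
                return found
    return None

# Two authors disagree: one says beak is broader than body, the other
# the reverse; the cycle is surfaced for expert resolution.
print(find_cycle([("perigynium", "beak"), ("beak", "body"), ("body", "beak")]))
```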
  3.
    Abstract. We present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. In this way, two-dimensional PMF can effectively perform a three-dimensional factorization of the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median value of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular-level resolution on other bulk aerosol components commonly observed by the AMS.
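A schematic of the binning step, as we read it from the description above (not the authors' code; the array shapes, names, and bin width are assumptions for illustration), might look like:

```python
# Schematic chromatogram binning: each sample's GC-MS data is a
# (retention_time x m/z) intensity array; mass spectra are summed within
# fixed-width retention-time bins and the (bin, m/z) block is unrolled
# into one row per sample of the PMF input matrix.
import numpy as np

def bin_chromatogram(intensity, times, bin_width, t_shift=0.0):
    """intensity: (n_times, n_mz) array; times: (n_times,) retention times.
    t_shift: per-sample retention-time correction (e.g. the median of the
    peak shifts relative to a reference sample)."""
    t = times - t_shift
    edges = np.arange(t.min(), t.max() + bin_width, bin_width)
    n_bins, n_mz = len(edges) - 1, intensity.shape[1]
    binned = np.zeros((n_bins, n_mz))
    idx = np.clip(np.digitize(t, edges) - 1, 0, n_bins - 1)
    for i, b in enumerate(idx):
        binned[b] += intensity[i]       # sum mass spectra within each bin
    return binned.ravel()               # one row of the PMF input matrix

# Stack one unrolled row per sample, then hand the matrix (with its
# uncertainty estimates) to a PMF solver.
rng = np.random.default_rng(0)
times = np.linspace(0.0, 30.0, 3000)                 # retention time, minutes
samples = [rng.random((3000, 200)) for _ in range(5)]  # toy data
X = np.vstack([bin_chromatogram(s, times, bin_width=1.0) for s in samples])
print(X.shape)                                       # (5, 30 * 200)
```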
  4. ABSTRACT

    The peculiar motion of galaxies probes structure growth in the universe. In this study, we employ the galaxy stellar mass-binding energy (massE) relation, with only two nuisance parameters, to build the largest peculiar-velocity (PV) catalogue to date, consisting of 229 890 ellipticals from the main galaxy sample (MGS) of the Sloan Digital Sky Survey (SDSS). We quantify the distribution of the massE-based distances in individual narrow redshift bins (dz = 0.005), and then estimate the PV of each galaxy based on its offset from the Gaussian mean of the distribution. As demonstrated with the Uchuu-SDSS mock data, the derived PV and momentum power spectra are insensitive to accurate calibration of the massE relation itself, enabling measurements out to a redshift of 0.2, well beyond the current limit of z = 0.1 using other galaxy scaling laws. We then measure the momentum power spectrum and demonstrate that it remains almost unchanged when we significantly vary the redshift bin size within which the distance is measured, as well as the intercept and slope of the massE relation. By fitting the spectra using the perturbation theory model with four free parameters, fσ8 is constrained to 0.459$^{+0.068}_{-0.069}$ over Δz = 0.02–0.2, 0.416$^{+0.074}_{-0.076}$ over Δz = 0.02–0.1, and 0.526$^{+0.133}_{-0.148}$ over Δz = 0.1–0.2. The error on fσ8 is 2.1 times smaller than that from the redshift space distortion (RSD) of the same sample. A Fisher matrix forecast illustrates that the constraint on fσ8 from the massE-based PV can potentially exceed that from the stage-IV RSD in the late universe (z < 0.5).
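For intuition, here is a hedged sketch of the per-bin PV estimate described above, written with the standard log-distance-ratio estimator as one plausible form (the paper's exact implementation, and its sign convention, may differ; all names are illustrative):

```python
# Sketch: within each narrow redshift bin, attribute the scatter of the
# massE-based log-distances around the bin's Gaussian mean to peculiar
# motion, v ~ c z / (1 + z) * ln(10) * (log_d - <log_d>_bin).
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def peculiar_velocities(z, log_d, dz=0.005, min_count=10):
    """z: redshifts; log_d: massE-based log10 distances.
    Returns per-galaxy PV estimates (NaN in under-populated bins)."""
    v = np.full_like(log_d, np.nan)
    for lo in np.arange(z.min(), z.max(), dz):
        in_bin = (z >= lo) & (z < lo + dz)
        if in_bin.sum() < min_count:
            continue
        eta = log_d[in_bin] - np.mean(log_d[in_bin])   # offset from bin mean
        v[in_bin] = C_KMS * z[in_bin] / (1.0 + z[in_bin]) * np.log(10.0) * eta
    return v

# Toy catalogue: pure Hubble-flow distances (H0 = 70) plus 0.1 dex scatter.
rng = np.random.default_rng(1)
z = rng.uniform(0.02, 0.2, 20000)
log_d = np.log10(C_KMS * z / 70.0) + rng.normal(0.0, 0.1, z.size)
print(np.nanstd(peculiar_velocities(z, log_d)))  # PV scatter in km/s
```

Note how, because each bin is compared only against its own mean, an overall miscalibration of the massE zero-point cancels out of the estimate, consistent with the insensitivity to calibration reported above.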

     
  5. ABSTRACT

    A reduced speed-of-light (RSOL) approximation is a useful technique for magnetohydrodynamic (MHD)-particle-in-cell (PIC) simulations. With an RSOL, an ‘in-code’ speed of light $\tilde{c}$ is set to a much lower value than the true c, allowing simulations to take larger time-steps (which are restricted by the Courant condition given the large CR speeds). However, in the absence of a well-formulated RSOL implementation in the literature, naive substitution of the true c with an RSOL causes the CR properties in MHD-PIC simulations (e.g. CR energy or momentum density, gyro radius) to vary artificially with respect to each other and with respect to the converged ($\tilde{c} \rightarrow c$) solutions, depending on the choice of RSOL. Here, we derive a new formulation of the MHD-PIC equations with an RSOL and show that (1) it guarantees that all steady-state properties of the CR distribution function and background plasma/MHD quantities are independent of the RSOL $\tilde{c}$, even for $\tilde{c} \ll c$; (2) it ensures that the simulation can simultaneously represent the real physical values of CR number, mass, momentum, and energy density; (3) it retains the correct physical meaning of various terms like the electric field; and (4) it ensures that the numerical time-step for CRs can always be safely increased by a factor $\sim c/\tilde{c}$. This new RSOL formulation should enable greater self-consistency and reduced CPU cost in simulations of CR–MHD interactions.
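The time-step gain in point (4) follows from the Courant-type constraint alone, as this back-of-the-envelope sketch shows (illustrative numbers and names, not the formulation derived in the paper):

```python
# Courant-limited time-step for CRs streaming at the in-code signal
# speed: replacing c by an RSOL c-tilde enlarges the stable dt by c/c-tilde.
C_CGS = 2.998e10          # true speed of light [cm/s]

def cr_timestep(dx, c_signal, cfl=0.4):
    """Courant-limited time-step for signals crossing a cell of size dx."""
    return cfl * dx / c_signal

dx = 3.086e18             # cell size of one parsec, say
ctilde = C_CGS / 1000.0   # an RSOL choice, c-tilde = c / 1000
print(cr_timestep(dx, C_CGS))   # ~4e7 s with the true c
print(cr_timestep(dx, ctilde))  # ~4e10 s: larger by the factor c/c-tilde
```

The substance of the paper is, of course, the harder part: constructing the modified equations so that this larger time-step does not distort steady-state CR and MHD quantities.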

     