

Title: Locust: C++ software for simulation of RF detection
Abstract

The Locust simulation package is a new C++ software tool developed to simulate the measurement of time-varying electromagnetic fields using RF detection techniques. Modularity and flexibility allow for arbitrary input signals, while concurrently supporting tight integration with physics-based simulations as input. External signals driven by the Kassiopeia particle tracking package are discussed, demonstrating conditional feedback between Locust and Kassiopeia during software execution. An application of the simulation to the Project 8 experiment is described. Locust is publicly available at https://github.com/project8/locust_mc.
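The modular signal chain and the Locust–Kassiopeia feedback loop described above can be illustrated with a short sketch. The following Python pseudocode is purely illustrative, with invented names throughout; it is not the Locust C++ API, which is documented in the repository linked above.

    # Hypothetical sketch (not the Locust API): a modular chain of signal
    # "generators", one of which wraps a particle tracker and exchanges
    # feedback with it while the signal is being sampled.
    import random

    class DummyTracker:
        """Stand-in for a Kassiopeia-like particle tracker."""
        def __init__(self):
            self.power = 1.0
        def step(self):
            self.power *= 0.999            # toy model: slowly radiating particle
            return self.power
        def feedback(self, sample):
            pass                           # e.g., update the fields the particle sees

    class TrackerGenerator:
        """Fills the signal from the tracker, sending feedback at each step."""
        def __init__(self, tracker):
            self.tracker = tracker
        def apply(self, signal):
            for i in range(len(signal)):
                signal[i] = self.tracker.step()
                self.tracker.feedback(signal[i])   # conditional feedback loop

    class NoiseGenerator:
        """Adds Gaussian receiver noise to an existing signal."""
        def __init__(self, sigma):
            self.sigma = sigma
        def apply(self, signal):
            for i in range(len(signal)):
                signal[i] += random.gauss(0.0, self.sigma)

    def run_chain(generators, n_samples=1024):
        signal = [0.0] * n_samples
        for g in generators:               # arbitrary, user-configured chain
            g.apply(signal)
        return signal

    signal = run_chain([TrackerGenerator(DummyTracker()), NoiseGenerator(0.01)])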

 
NSF-PAR ID:
10303287
Publisher / Repository:
IOP Publishing
Date Published:
Journal Name:
New Journal of Physics
Volume:
21
Issue:
11
ISSN:
1367-2630
Page Range / eLocation ID:
Article No. 113051
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Background

    Differential correlation networks are increasingly used to delineate changes in interactions among biomolecules. They characterize differences between omics networks under two different conditions, and can be used to delineate mechanisms of disease initiation and progression.

    Results

    We present a new R package, CorDiffViz, that facilitates the estimation and visualization of differential correlation networks using multiple correlation measures and inference methods. The software is implemented in R, HTML, and JavaScript, and is available at https://github.com/sqyu/CorDiffViz. Visualization has been tested for the Chrome and Firefox web browsers. A demo is available at https://diffcornet.github.io/CorDiffViz/demo.html.

    Conclusions

    Our software offers considerable flexibility by allowing the user to interact with the visualization and choose from different estimation methods and visualizations. It also allows the user to easily toggle between correlation networks for samples under one condition and differential correlations between samples under two conditions. Moreover, the software facilitates integrative analysis of cross-correlation networks between two omics data sets.
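    As a language-agnostic illustration of the core idea (CorDiffViz itself is an R package, and nothing below reflects its API or estimators), a differential correlation network can be sketched in Python as the thresholded difference of two correlation matrices:

        import numpy as np

        def differential_edges(X1, X2, threshold=0.5):
            """X1, X2: samples-by-variables matrices for two conditions.
            Returns the variable pairs whose correlation changes by more
            than the threshold between conditions."""
            R1 = np.corrcoef(X1, rowvar=False)   # network under condition 1
            R2 = np.corrcoef(X2, rowvar=False)   # network under condition 2
            diff = R2 - R1
            p = diff.shape[0]
            return [(i, j, diff[i, j]) for i in range(p)
                    for j in range(i + 1, p) if abs(diff[i, j]) > threshold]

        rng = np.random.default_rng(0)
        X1 = rng.normal(size=(100, 5))
        X2 = X1.copy()
        X2[:, 1] = X2[:, 0] + 0.1 * rng.normal(size=100)   # change one edge
        print(differential_edges(X1, X2))                  # reports the (0, 1) pair

    A real analysis would replace the fixed threshold with a formal inference procedure (e.g., a permutation test), which is the kind of estimation choice the package exposes to the user.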

     
  2. Abstract

    We present an expansion of FLEET, a machine-learning algorithm optimized to select transients that are most likely tidal disruption events (TDEs). FLEET is based on a random forest algorithm trained on both the light curves and host galaxy information of 4779 spectroscopically classified transients. We find that for transients with a probability of being a TDE, P(TDE) > 0.5, we can successfully recover TDEs with ≈40% completeness and ≈30% purity when using their first 20 days of photometry, or a similar completeness and ≈50% purity when including 40 days of photometry, an improvement of almost 2 orders of magnitude compared to random selection. Alternatively, we can recover TDEs with a maximum purity of ≈80% and a completeness of ≈30% when considering only transients with P(TDE) > 0.8. We explore the use of FLEET for future time-domain surveys such as the Legacy Survey of Space and Time on the Vera C. Rubin Observatory (Rubin) and the Nancy Grace Roman Space Telescope (Roman). We estimate that ∼10⁴ well-observed TDEs could be discovered every year by Rubin and ∼200 TDEs by Roman. Finally, we run FLEET on the TDEs from our Rubin survey simulation and find that we can recover ∼30% of them at redshift z < 0.5 with P(TDE) > 0.5, or ∼3000 TDEs yr⁻¹ that FLEET could uncover from the Rubin stream. We have demonstrated that we will be able to run FLEET on Rubin photometry as soon as this survey begins. FLEET is provided as an open source package on GitHub: https://github.com/gmzsebastian/FLEET.
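    The completeness and purity figures quoted above follow from the simple selection rule P(TDE) > threshold. A minimal sketch of that computation (with made-up scores, not FLEET's code) is:

        def completeness_purity(p_tde, is_tde, threshold=0.5):
            """Completeness: fraction of true TDEs selected.
            Purity: fraction of selected transients that are true TDEs."""
            selected = [t for p, t in zip(p_tde, is_tde) if p > threshold]
            n_true = sum(is_tde)
            completeness = sum(selected) / n_true if n_true else 0.0
            purity = sum(selected) / len(selected) if selected else 0.0
            return completeness, purity

        # Toy example: classifier scores for six transients, two of them TDEs.
        p_tde  = [0.9, 0.6, 0.4, 0.2, 0.7, 0.1]
        is_tde = [True, False, True, False, False, False]
        print(completeness_purity(p_tde, is_tde, threshold=0.5))
        # Raising the threshold (e.g., to 0.8) trades completeness for purity.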

     
  3. Abstract

    Background

    Advances in microbiome science are being driven in large part due to our ability to study and infer microbial ecology from genomes reconstructed from mixed microbial communities using metagenomics and single-cell genomics. Such omics-based techniques allow us to read genomic blueprints of microorganisms, decipher their functional capacities and activities, and reconstruct their roles in biogeochemical processes. Currently available tools for analyses of genomic data can annotate and depict metabolic functions to some extent; however, no standardized approaches are currently available for the comprehensive characterization of metabolic predictions, metabolite exchanges, microbial interactions, and microbial contributions to biogeochemical cycling.

    Results

    We present METABOLIC (METabolic And BiogeOchemistry anaLyses In miCrobes), scalable software to advance microbial ecology and biogeochemistry studies using genomes at the resolution of individual organisms and/or microbial communities. The genome-scale workflow includes annotation of microbial genomes, motif validation of biochemically validated conserved protein residues, metabolic pathway analyses, and calculation of contributions to individual biogeochemical transformations and cycles. The community-scale workflow supplements genome-scale analyses with determination of genome abundance in the microbiome, potential microbial metabolic handoffs and metabolite exchange, reconstruction of functional networks, and determination of microbial contributions to biogeochemical cycles. METABOLIC can take input genomes from isolates, metagenome-assembled genomes, or single-cell genomes. Results are presented in the form of tables for metabolism and a variety of visualizations, including biogeochemical cycling potential, representation of sequential metabolic transformations, community-scale microbial functional networks using a newly defined metric, the "MW-score" (metabolic weight score), and metabolic Sankey diagrams. METABOLIC takes ~ 3 h with 40 CPU threads to process ~ 100 genomes and the corresponding metagenomic reads, of which the most compute-intensive step, hmmsearch, takes ~ 45 min; completing hmmsearch for ~ 3600 genomes takes ~ 5 h. Tests of accuracy, robustness, and consistency suggest that METABOLIC performs better than comparable software and online servers. To highlight the utility and versatility of METABOLIC, we demonstrate its capabilities on diverse metagenomic datasets from the marine subsurface, terrestrial subsurface, meadow soil, deep sea, freshwater lakes, wastewater, and the human gut.

    Conclusion

    METABOLIC enables the consistent and reproducible study of microbial community ecology and biogeochemistry using a foundation of genome-informed microbial metabolism, and will advance the integration of uncultivated organisms into metabolic and biogeochemical models. METABOLIC is written in Perl and R and is freely available under GPLv3 at https://github.com/AnantharamanLab/METABOLIC.
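    The abstract does not define the MW-score or the other outputs in formulas, so the following Python fragment is only an invented illustration of the community-scale aggregation step it describes: weighting each genome's capacity for a biogeochemical transformation by that genome's abundance in the microbiome.

        # Invented illustration (not METABOLIC's MW-score): abundance-weighted
        # community capacity for a metabolic step, from per-genome annotations.
        genome_functions = {           # genome -> annotated functions
            "bin_1": {"sulfate_reduction", "denitrification"},
            "bin_2": {"sulfur_oxidation"},
            "bin_3": {"sulfate_reduction"},
        }
        genome_abundance = {"bin_1": 0.5, "bin_2": 0.3, "bin_3": 0.2}

        def community_capacity(step):
            """Abundance-weighted fraction of the community able to perform a step."""
            return sum(a for g, a in genome_abundance.items()
                       if step in genome_functions[g])

        print(community_capacity("sulfate_reduction"))   # 0.7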

     
  4. Abstract

    New observational facilities are probing astrophysical transients such as stellar explosions and gravitational-wave sources at ever-increasing redshifts, while also revealing new features in source property distributions. To interpret these observations, we need to compare them to predictions from stellar population models. Such models require the metallicity-dependent cosmic star formation history, S(Z,z), as an input. Large uncertainties remain in the shape and evolution of this function. In this work, we propose a simple analytical function for S(Z,z). Variations of this function can be easily interpreted because the parameters link to its shape in an intuitive way. We fit our analytical function to the star-forming gas of the cosmological TNG100 simulation and find that it is able to capture the main behavior well. As an example application, we investigate the effect of systematic variations in the S(Z,z) parameters on the predicted mass distribution of locally merging binary black holes. Our main findings are that (i) the locations of features are remarkably robust against variations in the metallicity-dependent cosmic star formation history, and (ii) the low-mass end is least affected by these variations. This is promising as it increases our chances of constraining the physics that govern the formation of these objects (https://github.com/LiekeVanSon/SFRD_fit/tree/7348a1ad0d2ed6b78c70d5100fb3cd2515493f02/).
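    The analytical form of S(Z,z) is given in the paper rather than in this abstract. As a hedged sketch of the fitting procedure it describes, the snippet below fits a Madau–Dickinson-style redshift dependence (a common parameterization used here as a stand-in, not the paper's actual S(Z,z)) to mock simulation output with scipy:

        import numpy as np
        from scipy.optimize import curve_fit

        def sfrd(z, a, b, c, d):
            """Madau-Dickinson-like shape: a * (1+z)^b / (1 + ((1+z)/c)^d)."""
            return a * (1 + z)**b / (1 + ((1 + z) / c)**d)

        # Mock "simulation" data: noisy samples of a known curve.
        z = np.linspace(0, 10, 50)
        rng = np.random.default_rng(1)
        data = sfrd(z, 0.015, 2.7, 2.9, 5.6) * (1 + 0.05 * rng.normal(size=z.size))

        params, _ = curve_fit(sfrd, z, data, p0=[0.01, 2.0, 3.0, 5.0])
        print(params)   # recovered (a, b, c, d); each parameter maps to the shape

    Because each parameter controls one feature of the curve, systematic variations of the kind studied in the paper amount to perturbing the fitted parameters one at a time.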

     
  5. Abstract

    Computational workflows are widely used in data analysis, enabling automated tracking of steps and storage of provenance information, supporting innovation and decision-making in the scientific community. However, the growing popularity of workflows has raised concerns about reproducibility and reusability, which can hinder collaboration between institutions and users. To address these concerns, it is important to standardize workflows or provide tools that offer a framework for describing workflows and enabling computational reusability. One such set of standards that has recently emerged is the Common Workflow Language (CWL), which offers a robust and flexible framework for data analysis tools and workflows. To promote portability, reproducibility, and interoperability of AI/ML workflows, we developed geoweaver_cwl, a Python package that automatically translates AI/ML workflows from the Geoweaver workflow management system (WfMS) into CWL. In this paper, we test our Python package on multiple use cases from different domains. Our objective is to demonstrate and verify the utility of this package. We make all the code and datasets openly available online and briefly describe the experimental implementation of the package in this paper, confirming that geoweaver_cwl can yield a well-described AI process while revealing opportunities for further extensions. The geoweaver_cwl package is publicly released at https://pypi.org/project/geoweaver-cwl/0.0.1/ and exemplar results are accessible at https://github.com/amrutakale08/geoweaver_cwl-usecases.
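    CWL descriptions are YAML documents. The abstract does not show geoweaver_cwl's API, so purely to illustrate the target format, the snippet below emits a minimal, valid CWL CommandLineTool description from Python (the wrapped command is invented and unrelated to geoweaver_cwl's actual output):

        import yaml   # PyYAML

        # Minimal CWL CommandLineTool. The top-level keys (cwlVersion, class,
        # baseCommand, inputs, outputs) follow the CWL standard; the wrapped
        # command is hypothetical.
        tool = {
            "cwlVersion": "v1.2",
            "class": "CommandLineTool",
            "baseCommand": ["python", "train_model.py"],
            "inputs": {
                "training_data": {"type": "File", "inputBinding": {"position": 1}},
            },
            "outputs": {
                "model": {"type": "File", "outputBinding": {"glob": "model.pkl"}},
            },
        }

        with open("train_model.cwl", "w") as f:
            yaml.safe_dump(tool, f, sort_keys=False)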

     