Title: sami2py – overview and applications
sami2py is a Python module that runs the SAMI2 (Sami2 is Another Model of the Ionosphere) ionospheric model, as well as loads and archives the results. SAMI2 is a model developed by the Naval Research Laboratory to simulate the motion of plasma in a two-dimensional ionospheric environment along a dipole magnetic field. SAMI2 solves for the chemical and dynamical evolution of seven ion species in this environment (H+, He+, N+, O+, N2+, NO+, and O2+). The Python implementation allows for additional modifications to the empirical models within SAMI2, including the exospheric temperature in the empirical thermosphere and the input of E×B ion drifts. The code is open source and available to the community on GitHub. The work here discusses the implementation and use of sami2py, including integration with the pysat ecosystem and the growin Python package for ionospheric calculations. As part of the Application Usability Level (AUL) framework, we discuss the usability of this code in terms of several ionospheric applications.
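For orientation, a minimal run-and-load session is sketched below. The run_model/Model/set_archive_dir entry points follow the sami2py documentation, but exact keyword arguments and defaults may differ across versions, so treat this as a sketch rather than a definitive recipe.

```python
import sami2py

# One-time setup: tell sami2py where archived model output should live.
sami2py.utils.set_archive_dir(path='/path/to/sami2py/archive')

# Run SAMI2 for one day at a fixed geographic longitude; additional
# keywords can adjust the empirical inputs noted above (e.g., exospheric
# temperature, ExB drifts).
sami2py.run_model(tag='example', lon=0, year=2012, day=210)

# Load the archived run for analysis alongside pysat-based workflows.
model = sami2py.Model(tag='example', lon=0, year=2012, day=210)
print(model)
```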
Award ID(s):
2029840
PAR ID:
10378385
Journal Name:
Frontiers in Astronomy and Space Sciences
ISSN:
2296-987X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Abstract: The Galactic electron density model NE2001 describes the multicomponent ionized structure of the Milky Way interstellar medium. NE2001 forward models the dispersion and scattering of compact radio sources, including pulsars, fast radio bursts, active galactic nuclei, and masers, and the model is routinely used to predict the distances of radio sources lacking independent distance measures. Here we present the open-source package NE2001p, a pure-Python implementation of NE2001. The model parameters are identical to NE2001, but the computational architecture is optimized for Python, yielding small (<1%) numerical differences between NE2001p and the Fortran code. NE2001p can be used from the command line and through Python scripts available on PyPI. Future package releases will include modular extensions aimed at providing short-term improvements to model accuracy, including a modified thick-disk scale height and additional clumps and voids. This implementation of NE2001 is a springboard to a next-generation Galactic electron density model now in development.
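Since NE2001p ships on PyPI with both command-line and scripting interfaces, a distance query might look roughly like the sketch below; the import path and function signature here are assumptions for illustration, not NE2001p's documented API.

```python
# Hypothetical sketch: the module and function names below are assumptions,
# not NE2001p's documented API; consult the package docs for the real interface.
from ne2001p import ne2001  # assumed entry point

# Such models map Galactic coordinates (l, b, in degrees) and a dispersion
# measure (pc cm^-3) to a model-predicted distance (kpc).
distance = ne2001(l=45.0, b=5.0, dm=60.0)
print(distance)
```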
2. Abstract: The escape of heavy ions from the Earth's atmosphere is a consequence of energization and transport mechanisms, including photoionization, electron precipitation, ion-electron-neutral chemistry, and collisions. Numerous studies have considered the outflow of O+ ions only, ignoring the observational record of outflowing N+. In spite of only a 12% mass difference, N+ and O+ ions have different ionization potentials, ionospheric chemistry, and scale heights. We expanded the Polar Wind Outflow Model (PWOM) to include N+ and key molecular ions in the polar wind. We refer to this model expansion as the Seven Ion Polar Wind Outflow Model (7iPWOM); it involves expanded schemes for suprathermal electron production and for ion-electron-neutral chemistry and collisions. Numerical experiments, designed to probe the influence of season as well as solar conditions, suggest that N+ is a significant ion species in the polar ionosphere and that its presence largely improves the polar wind solution, as compared to observations.
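The scale-height contrast mentioned above follows directly from the isothermal scale height H = kT/(mg); the short calculation below uses illustrative topside-ionosphere values (not numbers from the paper) to show the ~14% difference implied by the 14 vs. 16 amu ion masses.

```python
# Compare N+ and O+ plasma scale heights, H = k_B * T / (m * g), at
# illustrative topside-ionosphere conditions (values chosen for illustration).
K_B = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.66053907e-27  # atomic mass unit, kg
G = 8.9               # gravitational acceleration near ~500 km altitude, m/s^2
T = 2000.0            # assumed ion temperature, K

def scale_height_km(mass_amu):
    """Isothermal scale height in kilometers."""
    return K_B * T / (mass_amu * AMU * G) / 1e3

h_n, h_o = scale_height_km(14.0), scale_height_km(16.0)  # N+, O+
print(f"H(N+) = {h_n:.0f} km, H(O+) = {h_o:.0f} km, ratio = {h_n / h_o:.3f}")
# The lighter N+ has a ~14% larger scale height, so it extends to higher altitude.
```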
3. Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask, with a scalable field of view (FOV) and a thin device profile. Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and a multi-stage physics-informed deep learning model. This reduces computational resource demands by orders of magnitude and facilitates fast reconstruction. Our deep learning algorithm can reconstruct object volumes over 4 × 6 × 0.6 mm³. We demonstrated substantial improvements in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed 3D imaging applications with a compact device footprint.

# DeepInMiniscope: Deep-learning-powered physics-informed integrated miniscope

[https://doi.org/10.5061/dryad.6t1g1jx83](https://doi.org/10.5061/dryad.6t1g1jx83)

## Description of the data and file structure

Datasets, models, and code for 2D and 3D sample reconstructions.

The 2D dataset includes test data for green-stained lens tissue. Input: measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches. Output: the slide containing green lens tissue features.

The 3D dataset includes test data for 3D reconstruction of an in-vivo mouse brain video recording. Input: time-series standard deviation of the difference-to-local-mean weighted raw video. Output: reconstructed 4D volumetric video containing a 3-dimensional distribution of neural activity.

## Files and variables

### Download data, code, and sample results

1. Download data `data.zip`, code `code.zip`, and results `results.zip`.
2. Unzip the downloaded files and place them in the same main folder.
3. Confirm that the main folder contains three subfolders: `data`, `code`, and `results`. Inside the `data` and `code` folders, there should be a subfolder for each test case.

## Data

### 2D_lenstissue

**data_2d_lenstissue.mat:** Measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.

* **Xt:** stacked 108 FOVs of the measured image, each centered on one microlens unit, with 720 x 720 pixels per FOV. Dimensions are ordered (batch, height, width, FOV).
* **Yt:** placeholder variable for the reconstructed object, each FOV centered on the corresponding microlens unit, with 180 x 180 voxels. Dimensions are ordered (batch, height, width, FOV).

**reconM_0308:** Trained multi-FOV ADMM-Net model for 2D lens tissue reconstruction.

**gen_lenstissue.mat:** Lens tissue reconstruction generated by running the model with **2D_lenstissue.py**.

* **generated_images:** stacked 108 reconstructed FOVs of the lens tissue sample from the multi-FOV ADMM-Net; the assembled full-sample reconstruction is shown in results/2D_lenstissue_reconstruction.png.

### 3D_mouse

**reconM_g704_z5_v4:** Trained 3D multi-FOV ADMM-Net model for 3D sample reconstructions.

**t_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290.mat:** Time-series standard deviation of the difference-to-local-mean weighted raw video.

* **Xts:** test video with 290 frames and 6 FOVs per frame, with 1408 x 1408 pixels per FOV. Dimensions are ordered (frames, height, width, FOV).

**gen_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290_v4.mat:** Generated 4D volumetric video containing the 3-dimensional distribution of neural activity.

* **generated_images_fu:** frame-by-frame 3D reconstruction of the recorded video in uint8 format. Dimensions are ordered (batch, FOV, height, width, depth). Each frame contains 6 FOVs, and each FOV has 13 reconstruction depths with 416 x 416 voxels per depth.

Variables inside the saved-model subfolders (reconM_0308 and reconM_g704_z5_v4):

* **saved_model.pb:** model computation graph, including architecture and input/output definitions.
* **keras_metadata.pb:** Keras metadata for the saved model, including model class, training configuration, and custom objects.
* **assets:** external files for custom assets loaded during model training/inference. This folder is empty, as the model does not use custom assets.
* **variables.data-00000-of-00001:** numerical values of model weights and parameters.
* **variables.index:** index file that maps variable names to weight locations in .data.

## Code/software

### Set up the Python environment

1. Download and install the [Anaconda distribution](https://www.anaconda.com/download).
2. The code was tested with the following packages:
   * python=3.9.7
   * tensorflow=2.7.0
   * keras=2.7.0
   * matplotlib=3.4.3
   * scipy=1.7.1

## Code

**2D_lenstissue.py:** Python code for the multi-FOV ADMM-Net model to generate reconstruction results. The function of each script section is described at the beginning of that section.

**lenstissue_2D.m:** MATLAB code to display the generated image and reassemble sub-FOV patches.

**sup_psf.m:** MATLAB script to load the microlens coordinate data and generate the PSF pattern.

**lenscoordinates.xls:** Table of microlens unit coordinates.

**3D mouse.py:** Python code for the 3D multi-FOV ADMM-Net model to generate reconstruction results. The function of each script section is described at the beginning of that section.

**mouse_3D.m:** MATLAB code to display the reconstructed neural-activity video and calculate temporal correlation.

## Access information

Other publicly accessible locations of the data:

* [https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope](https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope)
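A sketch of the 2D reconstruction flow is given below, assuming the file and variable names from the description above (data_2d_lenstissue.mat, Xt, reconM_0308) and the tested TensorFlow/Keras versions; the 2D_lenstissue.py script shipped in code.zip remains the authoritative version, and the paths here assume the data/code/results layout described above.

```python
# Sketch of the 2D lens-tissue reconstruction flow; paths are assumptions
# based on the folder layout described in the dataset README.
import scipy.io as sio
import tensorflow as tf

# Xt holds 108 sub-FOV patches, dimensions (batch, height, width, FOV).
data = sio.loadmat('data/2D_lenstissue/data_2d_lenstissue.mat')
Xt = data['Xt']

# The trained multi-FOV ADMM-Net is a TensorFlow SavedModel directory.
model = tf.keras.models.load_model('data/2D_lenstissue/reconM_0308')

# Reconstruct all FOVs; lenstissue_2D.m reassembles the sub-FOV patches.
generated_images = model.predict(Xt)
print(generated_images.shape)
```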
4. The burgeoning sophistication of Artificial Intelligence (AI) has catalyzed the rapid proliferation of Large Language Models (LLMs) within software development. These models are increasingly employed to automate the generation of functionally correct code, address complex computational problems, and facilitate the debugging of existing software systems. However, LLM-generated code often suffers from redundant logical structures, factually inconsistent content (hallucinations), and programming errors. To address these issues, our research rigorously evaluated the computational efficiency of Python code generated by three prominent LLMs: GPT-4o-Mini, GPT-3.5-Turbo, and GPT-4-Turbo. The evaluation metrics encompass execution time, memory utilization, and peak memory consumption, while maintaining the functional correctness of the generated code. Leveraging the EffiBench benchmark datasets within the Google Vertex AI Workbench environment, across a spectrum of machine configurations, the study used a fixed seed parameter to ensure experimental reproducibility. Furthermore, we investigated the impact of two distinct optimization strategies: Chain-of-Thought (CoT) prompting and model fine-tuning. Our findings reveal a significant enhancement in efficiency metrics for GPT-4o-Mini and GPT-3.5-Turbo when employing CoT prompting; however, this trend was not observed for GPT-4-Turbo. Based on its promising performance with CoT prompting, we selected the GPT-4o-Mini model for subsequent fine-tuning, aiming to further enhance both its computational efficiency and accuracy. Contrary to our expectations, however, fine-tuning the GPT-4o-Mini model led to a discernible degradation in both accuracy and computational efficiency. In conclusion, this study provides empirical evidence that high-CPU machine configurations, combined with the GPT-4o-Mini model and CoT prompting techniques, yield demonstrably more efficient and accurate LLM-generated Python code, particularly in computationally intensive application scenarios.
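The efficiency metrics named above (execution time, memory utilization, peak memory) can be collected with the Python standard library alone; the harness below is a minimal sketch in that spirit, not the EffiBench tooling used in the study.

```python
# Minimal harness for timing a snippet and recording its peak memory use,
# illustrating the style of metrics described above (not EffiBench itself).
import time
import tracemalloc

def measure(func, *args, **kwargs):
    """Return (result, elapsed_seconds, peak_bytes) for a single call."""
    tracemalloc.start()
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# Example: evaluate a candidate LLM-generated function.
def candidate(n):
    return sum(i * i for i in range(n))

value, seconds, peak = measure(candidate, 1_000_000)
print(f"result={value}, time={seconds:.4f} s, peak={peak / 1e6:.2f} MB")
```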
5. We describe JetLag, a Python-based environment that provides access to a distributed, interactive, asynchronous many-task (AMT) computing framework called Phylanx. This environment encompasses the entire computing process, from a Jupyter front end for managing code and results to the collection and visualization of performance data. We use a Python decorator to access the abstract syntax tree of Python functions and transpile them into a set of C++ data structures that are then executed by the HPX runtime. The environment includes services for sending functions and their arguments to run as jobs on remote resources. A set of Docker and Singularity containers simplifies the setup of the JetLag environment. The JetLag system is suitable for a variety of array computation tasks, including machine learning and exploratory data analysis.
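The decorator-plus-AST step described above can be illustrated with the standard library alone; the sketch below captures a function's syntax tree the way a transpiler front end would, while the actual Phylanx/JetLag decorator goes on to emit C++ data structures for the HPX runtime (not reproduced here).

```python
# Illustration of the decorator + AST capture step, standard library only;
# the real JetLag/Phylanx decorator transpiles the tree to C++ structures.
import ast
import inspect
import textwrap

def capture_ast(func):
    """Attach the function's abstract syntax tree as an attribute."""
    source = textwrap.dedent(inspect.getsource(func))
    func.tree = ast.parse(source)  # a transpiler would walk this tree
    return func

@capture_ast
def axpy(a, x, y):
    return a * x + y

# The decorated function still runs normally, and its AST is available
# for inspection or translation.
print(axpy(2.0, 3.0, 4.0))
print(ast.dump(axpy.tree.body[0].body[0]))  # the return statement's AST
```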