This is software and data to support the manuscript "Variations in Tropical Cyclone Size and Rainfall Patterns based on Synoptic-Scale Moisture Environments in the North Atlantic," which we are submitting to the Journal of Geophysical Research: Atmospheres. The MIT license applies to all source code and scripts published in this dataset. The software includes all code that is necessary to follow and evaluate the work. Public datasets include (1) the Atlantic hurricane database HURDAT2 (https://www.nhc.noaa.gov/data/#hurdat), (2) NASA's Global Precipitation Measurement IMERG final precipitation (https://catalog.data.gov/dataset/gpm-imerg-final-precipitation-l3-half-hourly-0-1-degree-x-0-1-degree-v07-gpm-3imerghh-at-g), (3) the Tropical Cyclone Extended Best Track Dataset (https://rammb2.cira.colostate.edu/research/tropical-cyclones/tc_extended_best_track_dataset/), (4) the European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalysis (ERA5) (https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5), and (5) the Statistical Hurricane Intensity Prediction Scheme (SHIPS) dataset (https://rammb.cira.colostate.edu/research/tropical_cyclones/ships/data/). We are also including four datasets generated by the code that will be helpful in evaluating the work. Lastly, we used the eofs software package, a Python package for computing empirical orthogonal functions (EOFs), available publicly at https://doi.org/10.5334/jors.122.

All figures and tables in the manuscript are generated using Python, ArcGIS Pro, and GraphPad Prism 10:
- ArcGIS Pro: Figure 5
- GraphPad Prism 10: box plots in Figures 6-9
- Python: Figures 1-4, Figures 10-11, and Tables 1-5

Public datasets:
- HURDAT2: Landsea, C. and Beven, J., 2019: The revised Atlantic hurricane database (HURDAT2). March 2022, https://www.aoml.noaa.gov/hrd/hurdat/hurdat2-format.pdf
- IMERG: NASA EarthData: GPM IMERG Final Precipitation L3 Half Hourly 0.1 degree x 0.1 degree V06. 9 December 2024, https://catalog.data.gov/dataset/gpm-imerg-final-precipitation-l3-half-hourly-0-1-degree-x-0-1-degree-v07-gpm-3imerghh-at-g. Note that this dataset is no longer publicly available, as it has been replaced with IMERG version 7: https://disc.gsfc.nasa.gov/datasets/GPM_3IMERGHH_07/summary?keywords="IMERG final"
- Extended Best Track: Regional and Mesoscale Meteorology Branch, 2022: The Tropical Cyclone Extended Best Track Dataset (EBTRK). March 2022, https://rammb2.cira.colostate.edu/research/tropical-cyclones/tc_extended_best_track_dataset/
- ERA5: Guillory, A. (2022). ERA5. ECMWF [Dataset]. https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5 (accessed March 2, 2023). Also: Hersbach, H., and Coauthors, 2020: The ERA5 global reanalysis. Quarterly Journal of the Royal Meteorological Society, 146, 1999–2049, https://doi.org/10.1002/qj.3803
- SHIPS: SHIPS Predictor Files - Colorado State University (2022). Statistical Tropical Cyclone Intensity Forecast Technique Development. https://rammb.cira.colostate.edu/research/tropical_cyclones/ships/data/ships_predictor_file_2022.pdf. Also: DeMaria, M., and J. Kaplan, 1994: A Statistical Hurricane Intensity Prediction Scheme (SHIPS) for the Atlantic Basin. Weather and Forecasting, 9, 209–220, https://doi.org/10.1175/1520-0434(1994)009<0209:ASHIPS>2.0.CO;2
Public software:
- Dawson, A., 2016: eofs: A Library for EOF Analysis of Meteorological, Oceanographic, and Climate Data. JORS, 4, 14, https://doi.org/10.5334/jors.122
- van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., et al. (2014). scikit-image: Image processing in Python [Software]. PeerJ, 2, e453. https://doi.org/10.7717/peerj.453
Software and Data For "Evaluating the Skillfulness of Experimental High Resolution Model Forecasts of Tropical Cyclone Precipitation using an Object-Based Methodology"
This is software and data to support the manuscript "Evaluating the Skillfulness of Experimental High Resolution Model Forecasts of Tropical Cyclone Precipitation using an Object-Based Methodology," which we are submitting to the journal Weather and Forecasting. The software includes all code that is necessary to follow and evaluate the work. We are also including some of the HAFS and HWRF-B model output for testing the code. Additional model output is available upon request. Public datasets include the Atlantic hurricane database HURDAT2 (https://www.nhc.noaa.gov/data/#hurdat) and Stage IV precipitation (https://data.eol.ucar.edu/dataset/21.093).
- Award ID(s):
- 2011981
- PAR ID:
- 10644685
- Publisher / Repository:
- University Libraries, Virginia Tech
- Date Published:
- Subject(s) / Keyword(s):
- Atmospheric Sciences FOS: Earth and related environmental sciences
- Format(s):
- Medium: X; Size: 113104529 Bytes
- Size(s):
- 113104529 Bytes
- Institution:
- Virginia Tech
- Sponsoring Org:
- National Science Foundation
More Like this
-
First release of software for the article: Searching for Low-Mass Exoplanets Amid Stellar Variability with a Fixed Effects Linear Model of Line-by-Line Shape Changes. This repository includes code and summaries corresponding to the paper "Searching for Low-Mass Exoplanets Amid Stellar Variability with a Fixed Effects Linear Model of Line-by-Line Shape Changes". The code uses R (https://www.r-project.org/) and assumes you have installed the following packages: tidyverse, rhdf5, Matrix, patchwork, collapse, parallel, and pbmcapply. Data used for this paper can be found at: https://doi.org/10.5281/zenodo.14841436. If you come across any issues or bugs, please contact Joseph Salzer at jsalzer@wisc.edu.
-
Supplementary code and model files for the manuscript entitled "Elucidating the Magma Plumbing System of Ol Doinyo Lengai (Natron Rift, Tanzania) Using Satellite Geodesy and Numerical Modeling". OlDoinyoLengai_code_and_models.zip contains all necessary Matlab code, functions, and input and output files for the GNSS, InSAR, and joint inversions presented in our manuscript, necessary to reproduce the results. dMODELS is an open-source code developed by the United States Geological Survey. The originally published program is available here: https://pubs.usgs.gov/tm/13/b1/, and the revised software archived here will also be available through the USGS website code.usgs.gov/vsc/publications/OlDoinyoLengai or by contacting Maurizio Battaglia. With this manuscript we are providing an update to dMODELS that includes improved graphics and joint inversion capabilities for both InSAR and GNSS data. This work was funded by the National Science Foundation (NSF) grant number EAR-1943681, Virginia Tech, the Korea Institute of Geoscience and Mineral Resources (KIGAM), and Ardhi University. Funding for this work also came from USAID via the Volcano Disaster Assistance Program and from the U.S. Geological Survey (USGS) Volcano Hazards Program. This material is based on services provided by the GAGE Facility, operated by UNAVCO, Inc., with support from the National Science Foundation, the National Aeronautics and Space Administration, and the U.S. Geological Survey under NSF Cooperative Agreement EAR-1724794. We acknowledge and thank the Alaska Satellite Facility for making InSAR data freely available and TZVOLCANO GNSS data sets available through the UNAVCO data archive.
-
# DeepCaImX

## Introduction
Two-photon calcium imaging provides large-scale recordings of neuronal activities at cellular resolution. A robust, automated and high-speed pipeline to simultaneously segment the spatial footprints of neurons and extract their temporal activity traces, while decontaminating them from background, noise and overlapping neurons, is highly desirable for analyzing calcium imaging data. In this paper, we demonstrate DeepCaImX, an end-to-end deep learning method based on an iterative shrinkage-thresholding algorithm and a long short-term memory neural network that achieves the above goals altogether at very high speed and without any manually tuned hyperparameters. DeepCaImX is a multi-task, multi-class and multi-label segmentation method composed of a compressed-sensing-inspired neural network with a recurrent layer and fully connected layers. It represents the first neural network that can simultaneously generate accurate neuronal footprints and extract clean neuronal activity traces from calcium imaging data. We trained the neural network with simulated datasets and benchmarked it against existing state-of-the-art methods with in vivo experimental data. DeepCaImX outperforms existing methods in the quality of segmentation and temporal trace extraction as well as in processing speed. DeepCaImX is highly scalable and will benefit the analysis of mesoscale calcium imaging.

## System and Environment Requirements
#### 1. Both CPU and GPU are supported to run the code of DeepCaImX; a CUDA-compatible GPU is preferred.
* In our full-version demo, we use a Quadro RTX 8000 48 GB GPU to accelerate training.
* In our mini-version demo, at least 6 GB of GPU/CPU memory is required.
#### 2. Python 3.9 and TensorFlow 2.10.0
#### 3. Virtual environment: Anaconda Navigator 2.2.0
#### 4. Matlab 2023a

## Demo and Installation
#### 1. (_Optional_) GPU environment setup. We need NVIDIA's parallel computing platform and programming model, the _CUDA Toolkit_, and the GPU-accelerated library of primitives for deep neural networks, the _CUDA Deep Neural Network library (cuDNN)_, to build a GPU-supported environment for training and testing our model. CUDA installation guide: https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html; cuDNN installation guide: https://docs.nvidia.com/deeplearning/cudnn/installation/overview.html.
#### 2. Install Anaconda. Installation guide: https://docs.anaconda.com/free/anaconda/install/index.html
#### 3. Launch the Anaconda prompt and install Python 3.x and TensorFlow 2.9.0 as the virtual environment.
#### 4. Open the virtual environment, then pip install mat73, opencv-python, python-time and scipy.
#### 5. Download "DeepCaImX_training_demo.ipynb" in the folder "Demo (full-version)" for the full version, and the simulated dataset via the Google Drive link. Then create and put the training dataset in the path "./Training Dataset/". If your computing resources are limited, or for a quick test of our code, we highly recommend downloading the demo from the folder "Mini-version", which only requires around 6.3 GB of memory for training.
#### 6. Run: use Anaconda to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" or "DeepCaImX_testing_demo.ipynb". Then please check and follow the guide of "DeepCaImX_training_demo.ipynb" or "DeepCaImX_testing_demo.ipynb" for training or testing, respectively.
#### Note: every package can be installed in a few minutes.

## Run DeepCaImX
#### 1. Mini-version demo
* Download all the documents in the folder "Demo (mini-version)".
* Add training and testing datasets to the sub-folders "Training Dataset" and "Testing Dataset", respectively.
* (Optional) Put the pretrained model in the sub-folder "Pretrained Model".
* Use Anaconda Navigator to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.
#### 2. Full-version demo
* Download all the documents in the folder "Demo (full-version)".
* Add training and testing datasets to the sub-folders "Training Dataset" and "Testing Dataset", respectively.
* (Optional) Put the pretrained model in the sub-folder "Pretrained Model".
* Use Anaconda Navigator to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.

## Data Tailor
#### A data tailor developed in Matlab is provided to support basic data tiling. In the folder "Data Tailor", you can find a "tailor.m" script and an example "test.tiff". After running "tailor.m" in Matlab, the user can choose a "tiff" file from a GUI to load the sample to be tiled. Settings include the size of the FOV, the overlapping area, a normalization option, the name of the output file, and the output data format. The output files are written to the local folder containing "tailor.m".

## Simulated Dataset
#### 1. Dataset generator (FISSA version): the algorithm for generating the simulated dataset is based on the FISSA paper (_Keemink, S.W., Lowe, S.C., Pakan, J.M.P. et al. FISSA: A neuropil decontamination toolbox for calcium imaging signals. Sci Rep 8, 3493 (2018)_) and the SimCalc repository (https://github.com/rochefort-lab/SimCalc/). For the code used to generate the simulated data, please download the documents in the folder "Simulated Dataset Generator".
#### Training dataset: https://drive.google.com/file/d/1WZkIE_WA7Qw133t2KtqTESDmxMwsEkjJ/view?usp=share_link
#### Testing dataset: https://drive.google.com/file/d/1zsLH8OQ4kTV7LaqQfbPDuMDuWBcHGWcO/view?usp=share_link
#### 2. Dataset generator (NAOMi version): the algorithm for generating the simulated dataset is based on the NAOMi paper (_Song, A., Gauthier, J. L., Pillow, J. W., Tank, D. W. & Charles, A. S. Neural anatomy and optical microscopy (NAOMi) simulation for evaluating calcium imaging methods. Journal of Neuroscience Methods 358, 109173 (2021)_). For the code used to generate the simulated data, please go to: https://bitbucket.org/adamshch/naomi_sim/src/master/code/

## Experimental Dataset
#### We used samples from the ABO dataset: https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS.
#### The segmentation ground truth can be found in the folder "Manually Labelled ROIs".
#### The segmentation ground truth at depths of 175, 275, 375, 550 and 625 um was manually labeled by us.
#### The code for creating the ground truth of the extracted traces can be found in "Prepro_Exp_Sample.ipynb" in the folder "Preprocessing of Experimental Sample".
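The installation steps above (items 3-4) can be condensed into a shell sketch. This is an assumption-laden illustration, not part of the DeepCaImX repository: the environment name `deepcaimx` is made up, and the README lists both TensorFlow 2.10.0 (requirements) and 2.9.0 (install step), so pin whichever matches your CUDA/cuDNN setup.

```shell
# Hypothetical environment setup mirroring steps 3-4 of the guide above.
# Environment name and pinned versions are illustrative, not prescribed.
conda create -n deepcaimx python=3.9 -y
conda activate deepcaimx
pip install tensorflow==2.10.0 mat73 opencv-python python-time scipy
```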
-
These are supplementary data, code, and model files associated with the manuscript "Detecting Transient Deformation at the Active Volcano Ol Doinyo Lengai in Tanzania with the TZVOLCANO Network," in consideration for publication in Geophysical Research Letters. tzvolcano_code_and_models.zip contains all necessary Targeted Projection Operator (TPO) software, input, and output files for the GNSS inversions presented in our manuscript, necessary to reproduce the results. The TPO program is a Unix/Linux code developed by Kang-Hyeun Ji of the Korea Institute of Geoscience and Mineral Resources, Daejeon, South Korea. The source code is available in the supplementary Zenodo repository. We also include input and output model files for the USGS code dMODELS for reproducibility. Please see the README.txt file for more details. This study was funded by the US National Science Foundation grant number EAR-1943681 to Virginia Tech, internal university funds via Ardhi University, and Ministry of Science and ICT of Korea Basic Research Project GP2021-006 to the Korea Institute of Geoscience and Mineral Resources. We acknowledge and thank the EarthScope Consortium for archiving and making TZVOLCANO GNSS datasets freely available, supported by the National Science Foundation's Seismological Facility for the Advancement of Geoscience (SAGE) Award under Cooperative Support Agreement EAR-1851048 and Geodetic Facility for the Advancement of Geoscience (GAGE) Award under NSF Cooperative Agreement EAR-1724794.