{"Abstract":["Supplementary code and model files for the manuscript entitled "Elucidating the Magma Plumbing System of Ol Doinyo Lengai (Natron Rift, Tanzania) Using Satellite Geodesy and Numerical Modeling". OlDoinyoLengai_code_and_models.zip contains all necessary Matlab code, functions, input and output files for the GNSS, InSAR, and joint inversions presented in our manuscript necessary to reproduce the results. dMODELS is an open source code developed by the United States Geological Survey. The originally published program is available here: https://pubs.usgs.gov/tm/13/b1/ and the revised software archived here will also be available through the USGS website code.usgs.gov/vsc/publications/OlDoinyoLengai or by contacting Maurizio Battaglia. With this manuscript we are providing an update to dMODELS that includes improved graphics and joint inversion capabilities for both InSAR and GNSS data. <\/p>"],"Other":["This work was funded by the National Science Foundation (NSF) grant number EAR-1943681, Virginia Tech, Korean Institute of Geosciences and Minerals (KIGAM), and Ardhi University. Funding for this work also came from USAID via the Volcano Disaster Assistance Program and from the U.S. Geological Survey (USGS) Volcano Hazards Program.This material is based on services provided by the GAGE Facility, operated by UNAVCO, Inc., with support from the National Science Foundation, the National Aeronautics and Space Administration, and the U.S. Geological Survey under NSF Cooperative Agreement EAR-1724794. We acknowledge and thank Alaska Satellite Facility for making InSAR data freely available and TZVOLCANO GNSS data sets available through the UNAVCO data archive."]}
Data for paper on the limits of ocean forcing on the exchange flow
This folder contains model extractions for the manuscript "The Limits of Oceanic Forcing on the Exchange Flow of the Salish Sea" by Robert Sanchez, Sarah Giddings, and Emily Lemagie. At the moment, data are still being uploaded. The model extractions contain hourly fields of sea surface height (ssh), wind stress, u, v, temperature (temp), and salinity (salt). Feel free to reach out to the corresponding author (Robert Sanchez) for code or clarifications. Code to produce total exchange flow (TEF) estimates from these sections will be available on GitHub soon (a simplified, illustrative TEF sketch is included after the metadata listing below). The data come from the LiveOcean model; additional extractions and model details are available from the LiveOcean project pages.
- Award ID(s): 1944735
- PAR ID: 10623589
- Publisher / Repository: Zenodo
- Date Published:
- Format(s): Medium: X
- Right(s): Creative Commons Attribution 4.0 International
- Sponsoring Org: National Science Foundation
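As noted in the description above, the TEF code for these section extractions is not yet posted; the sketch below is only a minimal illustration of the standard TEF bookkeeping in salinity coordinates (after MacCready 2011). Variable names, array shapes, and salinity bins are assumptions and may not match the released extractions or the authors' forthcoming code.

```python
"""Minimal Total Exchange Flow (TEF) sketch in salinity coordinates.
Assumes a section extraction with velocity normal to the section (u, m/s),
salinity (salt), and cell areas (dA, m^2), all shaped (time, z, y).
Illustrative only -- not the authors' released code.
"""
import numpy as np

def tef_transports(u, salt, dA, sbins=np.linspace(20.0, 35.0, 151)):
    q = u * dA  # volume transport per cell (m^3/s)
    # Q(s): time-averaged transport of water saltier than each bin edge
    Q = np.array([np.nanmean(np.nansum(np.where(salt >= s, q, 0.0), axis=(1, 2)))
                  for s in sbins])
    dQds = np.gradient(Q, sbins)      # -dQ/ds is transport per salinity class
    ds = np.gradient(sbins)
    Qin = np.sum(np.where(-dQds > 0, -dQds, 0.0) * ds)    # inflowing branch
    Qout = np.sum(np.where(-dQds < 0, -dQds, 0.0) * ds)   # outflowing branch
    return sbins, Q, Qin, Qout
```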
More Like this
{"Abstract":["This is software and data to support the manuscript "Evaluating the Skillfulness of Experimental High Resolution Model Forecasts of Tropical Cyclone Precipitation using an Object-Based Methodology," which we are submitting to the journal Weather and Forecasting. The software includes all code that is necessary to follow and evaluate the work. We are also including some of the HAFS and HWRF-B model output for testing the code. Additional model output is available upon request. Public datasets include the Atlnatic hurricane database HURDAT2 (https://www.nhc.noaa.gov/data/#hurdat) and Stage IV precipitation (https://data.eol.ucar.edu/dataset/21.093)."]}more » « less
This resource contains source code and select data products behind the following Master's thesis: Platt, L. (2024). Basins modulate signatures of river salinization (Master's thesis). University of Wisconsin-Madison, Freshwater and Marine Sciences. The source code is an R-based data processing and modeling pipeline written using the R package "targets". Some of the folders in the source code zipfile are intentionally left empty (except for a hidden ".placeholder" file) so that the code repository is set up with the required folder structure. To execute this code, download the zip folder, unzip it, and open the salt-modeling-data.Rproj file. Then follow the instructions in README.md for installing packages, building the pipeline, and examining the results. Newer versions of this repository may be updated on GitHub at github.com/lindsayplatt/salt-modeling-data. In addition to the source code, this resource contains three data files with intermediate products of the pipeline; the first two represent data prepared for the random forest modeling. Data download and processing were completed in pipeline phases 1-5, and the random forest modeling was completed in phase 6 (see source code). site_attributes.csv contains the USGS gage site numbers and their associated basin attributes. site_classifications.csv contains the classification of each site for both episodic signatures ("Episodic" or "Not episodic") and baseflow salinization signatures ("positive", "none", "negative", or NA); an NA in the baseflow classification column means that the site did not meet minimum data requirements for calculating a trend and was not used in the random forest model for baseflow salinization. site_attribute_details.csv contains a table of each attribute shorthand used as column names in site_attributes.csv along with their names, units, descriptions, and data sources.
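The pipeline itself is R ("targets"); the short Python sketch below only illustrates combining the two intermediate csv products described above. The column names used here (site_no, episodic, baseflow) are assumptions; the real schema is documented in site_attribute_details.csv and the repository README.

```python
"""Sketch: join the intermediate csv products and tabulate classifications.
Column names are assumed, not taken from the archived files.
"""
import pandas as pd

attrs = pd.read_csv("site_attributes.csv", dtype={"site_no": str})
classes = pd.read_csv("site_classifications.csv", dtype={"site_no": str})

# Attach basin attributes to each classified site (assumed shared key: site_no)
sites = attrs.merge(classes, on="site_no", how="inner")

# Count sites in each episodic / baseflow-salinization combination
print(pd.crosstab(sites["episodic"], sites["baseflow"], dropna=False))
```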
{"Abstract":["We use open source human gut microbiome data to learn a microbial\n “language” model by adapting techniques from Natural Language Processing\n (NLP). Our microbial “language” model is trained in a self-supervised\n fashion (i.e., without additional external labels) to capture the\n interactions among different microbial taxa and the common compositional\n patterns in microbial communities. The learned model produces\n contextualized taxon representations that allow a single microbial taxon\n to be represented differently according to the specific microbial\n environment in which it appears. The model further provides a sample\n representation by collectively interpreting different microbial taxa in\n the sample and their interactions as a whole. We demonstrate that, while\n our sample representation performs comparably to baseline models in\n in-domain prediction tasks such as predicting Irritable Bowel Disease\n (IBD) and diet patterns, it significantly outperforms them when\n generalizing to test data from independent studies, even in the presence\n of substantial distribution shifts. Through a variety of analyses, we\n further show that the pre-trained, context-sensitive embedding captures\n meaningful biological information, including taxonomic relationships,\n correlations with biological pathways, and relevance to IBD expression,\n despite the model never being explicitly exposed to such signals."],"Methods":["No additional raw data was collected for this project. All inputs\n are available publicly. American Gut Project, Halfvarson, and Schirmer raw\n data are available from the NCBI database (accession numbers PRJEB11419,\n PRJEB18471, and PRJNA398089, respectively). We used the curated data\n produced by Tataru and David, 2020."],"TechnicalInfo":["# Code and data for "Learning a deep language model for microbiomes:\n the power of large scale unlabeled microbiome data" ## Data: *\n vocab_embeddings.npy * Fixed vocabulary embeddings produced from prior\n work: [Decoding the language of microbiomes using word-embedding\n techniques, and applications in inflammatory bowel\n disease](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859). Adapted from [here](http://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/embed/). * microbiomedata.zip * Contains the labels and data for the three datasets used in this study. Specifically, it includes: * IBD_(test|train)*(512|otu).npy and IBD*(test|train)_labels.npy * halfvarson_(512_otu|otu).npy and halfvarson_IBD_labels.npy * schirmer_IBD_(512_otu|otu).npy and schirmer_IBD_labels.npy * (test|train)encodings_(512|1897).npy * The data are stored as n_samples x max_sample_size x 2 numpy arrays, containing both the vocab IDs of the taxa in the samples, as well as the abundance values for each taxa. data[:,:,0] will give the vocab IDs, and data[:,:,1] will give the abundances. * Files which mention '512' are truncated to only have up to 512 taxa in them (max_sample_size = 512). * Note that we refer to the schirmer dataset as HMP2 in the paper. * (test|train)encodings_(512|1897).npy represents the full collection of [American Gut Project](https://doi.org/10.1128%2FmSystems.00031-18) data, regardless of whether that data has IBD labels or not, split into train / test splits. * Also contains the folders fruitdata and vegdata containing fruit and vegetable data respectively, and the file README, which documents the contents of the first two folders. 
* American Gut Project, Halfvarson, and Schirmer raw data are available from the NCBI database (accession numbers PRJEB11419, PRJEB18471, and PRJNA398089, respectively). We used the curated data produced by [Tataru and David, 2020](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859). * pretrainedmodels.zip * Contains a sequence of pretrained discriminator models across different epochs, allowing users to compute embeddings without having to pretrain models themselves. Each model is stored as a pair of a pytorch_model.bin file containing weights and a config.json file containing model config parameters. Each pair is located in its own folder whose name corresponds to epoch. E.g., "5head5layer_epoch60_disc" stores the discriminator model that were trained for 60 epochs. Model checkpoints can be loaded by providing a path to the pytorch_model.bin file in the --load_disc argument of begin.py in microbiome_transformers-master/finetune_discriminator. * ensemble.zip * Contains the result of an ensemble finetuning run, allowing users to perform interpretability / attribution experiments without having to train models themselves. Each model is similarly stored as a pytorch_model.bin file and config.json file in its own folder. E.g., the run3_epoch0_disc folder stores the model from the third finetuning run (with epoch0 reflecting that the finetuning only takes one epoch). * seqs_.07_embed.fasta * Contains the 16S sequences associated with each taxon vocabulary element of our study, originally produced by prior work: [Decoding the language of microbiomes using word-embedding techniques, and applications in inflammatory bowel disease](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859). Also available [here](http://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/embed/seqs_.07_embed.fasta). ## Code/Software: Note that the Dryad repository stores the code and software discussed here is available at [this](https://doi.org/10.5281/zenodo.13858903) site, which is linked under the "Software" tab on the current page.\\ The following software include hardcoded absolute paths to various files of interest (described above). These paths have been changed to be of the form "/path/to/file_of_interest", where the "path/to" portion must be changed to reflect the actual paths on whichever system you run these on. * Attribution_calculations.ipynb * Used to calculate per-sample model prediction scores, per-taxa attribution values (used for interpretability), as well as per-taxa averaged embeddings (used for plotting the taxa). Note the current file is set to compute attributions only for IBD, but can easily be changed for Schirmer/HMP2 and Halfvarson. * Process_Attributions_No_GPU.ipynb * Takes the per-sample prediction scores and the per-taxa attribution values (both from Attribution_calculations.ipynb) and identifies the taxa most and least associated with IBD. * assign_16S_to_phyla.R * An R script that makes phylogenetic assignments to the 16S sequences from seqs_.07_embed.fasta. Invoke with 'Rscript assign_16S_to_phyla.R' and no arguments. * run_blast_with_downloads.sh * Compares the overlap in ASVs between Halfvarson and AGP versus between HMP2 and AGP. Must have BLAST installed. BLAST parameters are set in file, via the results filtering lines ("awk '$5 < 1e-20 && $8 >= 99' | \\\\"), that set the e-value to 20^-20 and the percent similarity to 99%, with one line for each of the two pairwise comparisons. 
Simply run via "bash run_blast_with_downloads.sh". * Plot_microbiome_transformers_results.ipynb * Loads the averaged taxa embeddings (from Attribution_calculations.ipynb) and the vocabulary embeddings (from [Decoding the language of microbiomes using word-embedding techniques, and applications in inflammatory bowel disease](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859) / vocab_embeddings.npy), as well as the taxonomic assignments (from assign_16S_to_phyla.R), and generates the various TSNE-based plots of the embedding space geometry. It also generates plots to compare the clustering quality of the averaged embeddings and the vocabulary embeddings. * DeepMicro.zip * A modified version of [DeepMicro](https://github.com/minoh0201/DeepMicro), adapted to more easily run the DeepMicro-based baselines included in our paper. Most additional functionality is described in the 'help' strings of the additional arguments and the docstrings of the functions. In particular, since our data include unlabeled samples witch nonetheless contribute to learning an embedding space, we needed to add a "--pretraining_data" argument to allow such data to be included in the self-supervised learning portion of the baselines. * "convert_data.py" under the "data" folder serves as a utility to help convert from the coordinate-list format of this study to the one-hot abundance table format expected by DeepMicro. * "get_unlabeled_pretraining_data.py" under the "data" folder processes labeled microbiome datasets (fruit, vegetable, and IBD) and extends them with unlabeled data from the American Gut Project (AGP). * host_to_ids.py under the data/host_to_indices folder will combine metadata from err-to-qid.txt and AG_mapping.txt (both available at *[https://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/AG_new](https://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/AG_new)*) with the sequences in seqs_.07_embed.fasta and the numpy data files to create dictionaries that map from host ids to indices in the numpy files, then store those as pickle files. This allow for future training runs from the transformer or the baselines to block their train / validation / test splits by host id. * exps_ae.sh, exps_cae.sh, and exps_baselines.sh are shell scripts with the python commands that run the various DeepMicro-based baselines. * "display_results.py" is a helper for accumulating experimental results and displaying them in a table. * property_pathway_correlations.zip * A folder containing the required code and files to run the property and pathway correlation experiments. * property_pathway_correlations contains three subfolders: * figures: stores output figures such as the heatmap of property - pathway correlation strengths. * csvs: contains gen_hists.py, which takes the outputs of significant correlation counts / strength from metabolic_pathway_correlations.R and plots a histogram to compare the property correlations of the initial vocabulary embeddings with those of the learned embeddings. Also contains significant_correlations_tests.py, which applies non-parametric and permutation tests to statistically determine whether the learned embeddings tend to have stronger property correlations. Also reports the effect size via Cliff's Delta and Cohen's d statistics. 
* new_hists: will store the histogram generated from gen_hists.py * pathways: stores text and csv outputs, such as the correlation strengths between each property and pathway pair (property_pathway_dict_allsig.txt), the top 20 pathways associated with each property (top20Paths_per_property_(ids|names)_v2.csv), and list of which pathway is most correlated with each property (property_pathway_dict.txt). * metabolic_pathways: contains the code and data required to actually run the correlation tests. The code appears in metabolic_pathway_correlations.R, and simply runs with the command Rscript and no arguments. The data appears in the data subfolder, which itself contains three subfolders: * embed: contains embeddings to be loaded by metabolic_pathway_correlations.R, e.g., merged_sequences_embeddings.txt or glove_emb_AG_newfilter.07_100.txt. Also contains a script assemble_new_embs.py, which lets new embeddings txt files be formatted from a pytorch embeddings tensor, such as the one stored in epoch_120_IBD_avg_vocab_embeddings.pth, as well as seqs_.07_embed.txt. * AG_new/pathways: contains a bunch of files like "corr_matches_i_i+9.RDS", which store intermediate results of the permutation tests, so they don't all have to be calculated at once. Should be recomputed with each run. * pathways: mostly stores various other input and output RDS files: * corr_matches.rds : stores intermediate results of statistical significance testing with model embeddings. Recomputed each time. * corr_matches_pca.rds : stores prior result of statistical significance testing with PCA embeddings. Loaded from storage by default. * filtered_otu_pathway_table.RDS / txt : stores associations of each taxa vocab entry with metabolic pathways, filtered to exclude pathways that are no longer present in KEGG. * pathway_table.RDS : updated pathway table saved by metabolic_pathway_correlations.R each run. * pca_embedded_taxa.rds : stores PCA embeddings of all the vocab taxa entries. * microbiome_transformers.zip * A backup of our [GitHub repository](https://github.com/QuintinPope/microbiome_transformers) for the model architecture (both generator and discriminator), the pretraining processes for both, as well as the model finetuning scripts. Contains its own READMEs. * Has the code for pretraining generator models. See pretrain_generator/train_command.sh and pretrain_generator/README.MD * Has the code for using those models to pretrain discriminator models. See pretrain_discriminator/train_command.sh and pretrain_discriminator/README.MD * Has the code for finetuning those pretrained discriminator models on the classification data in our study (both within-distribution experiments and out of distribution experiments). * See finetune_discriminator/README.MD for general info on finetuning. * See finetune_discriminator/run_agp_agp_exps.sh for the commands to run the in-distribution experiments. * See finetune_discriminator/run_agp_HF_SH_cross_gen_ensemble_tests.sh to run the out of distribution experiments using an ensemble of models. * See finetune_discriminator/run_agp_HF_SH_cross_gen_val_set_tests.sh to run the out of distribution experiments without an ensemble and using a val set for stopping condition. 
## File Structures: **microbiomedata.zip** ``` |____total_IBD_otu.npy |____IBD_train_512.npy |____halfvarson_IBD_labels.npy |____IBD_train_otu.npy |____test_encodings_512.npy |____total_IBD_512.npy |____train_encodings_512.npy |____schirmer_IBD_labels.npy |____schirmer_IBD_512_otu.npy |____fruitdata | |____FRUIT_FREQUENCY_all_label.npy | |____FRUIT_FREQUENCY_otu_512.npy | |____FRUIT_FREQUENCY_binary24_labels.npy | |____FRUIT_FREQUENCY_all_otu.npy | |____FRUIT_FREQUENCY_binary34_labels.npy |____vegdata | |____VEGETABLE_FREQUENCY_all_label.npy | |____VEGETABLE_FREQUENCY_binary24_labels.npy | |____VEGETABLE_FREQUENCY_otu_512.npy | |____VEGETABLE_FREQUENCY_all_otu.npy | |____VEGETABLE_FREQUENCY_binary34_labels.npy |____README |____schirmer_IBD_otu.npy |____IBD_test_label.npy |____IBD_test_512.npy |____IBD_train_label.npy |____IBD_test_otu.npy |____test_encodings_1897.npy |____halfvarson_otu.npy |____halfvarson_512_otu.npy |____total_IBD_label.npy |____train_encodings_1897.npy ``` **pretrainedmodels.zip** ``` ____5head5layer_epoch60_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch30_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch105_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch0_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch45_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch90_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch120_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch15_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch75_disc | |____config.json | |____pytorch_model.bin ``` **ensemble.zip** ``` |____run4_epoch0_disc | |____config.json | |____pytorch_model.bin |____run8_epoch0_disc | |____config.json | |____pytorch_model.bin |____run1_epoch0_disc | |____config.json | |____pytorch_model.bin |____run2_epoch0_disc | |____config.json | |____pytorch_model.bin |____run10_epoch0_disc | |____config.json | |____pytorch_model.bin |____run7_epoch0_disc | |____config.json | |____pytorch_model.bin |____run9_epoch0_disc | |____config.json | |____pytorch_model.bin |____run5_epoch0_disc | |____config.json | |____pytorch_model.bin |____run6_epoch0_disc | |____config.json | |____pytorch_model.bin |____run3_epoch0_disc | |____config.json | |____pytorch_model.bin ``` **DeepMicro.zip** ``` |____LICENSE |____deep_env_config.yml |____DM.py |____exception_handle.py |____README.md |____exps_cae.sh |____exps_ae.sh |____exps_baselines.sh |____results | |____display_results.py | |____plots |____data | |____host_to_indices | | |____host_to_ids.py | |____marker.zip | |____UserLabelExample.csv | |____convert_data.py | |____get_unlabeled_pretraining_data.py | |____UserDataExample.csv | |____abundance.zip |____DNN_models.py ``` **property_pathway_correlations.zip** ``` |____metabolic_pathways | |____metabolic_pathway_correlations.R | |____data | | |____AG_new | | | |____pathways | | | | |____corr_matches_141_150.RDS | | | | |____corr_matches_81_90.RDS | | | | |____corr_matches_21_30.RDS | | | | |____corr_matches_51_60.RDS | | | | |____corr_matches_121_130.RDS | | | | |____corr_matches_101_110.RDS | | | | |____corr_matches_61_70.RDS | | | | |____corr_matches_31_40.RDS | | | | |____corr_matches_131_140.RDS | | | | |____corr_matches_181_190.RDS | | | | |____corr_matches_161_170.RDS | | | | |____corr_matches_11_20.RDS | | | | |____corr_matches_1_10.RDS | | | | |____corr_matches_191_200.RDS | | | | |____corr_matches_171_180.RDS | | | | 
|____corr_matches_71_80.RDS | | | | |____corr_matches_91_100.RDS | | | | |____corr_matches_111_120.RDS | | | | |____corr_matches_41_50.RDS | | | | |____corr_matches_151_160.RDS | | |____embed | | | |____seqs_.07_embed.txt | | | |____merged_sequences_embeddings.txt | | | |____assemble_new_embs.py | | | |____epoch_120_IBD_avg_vocab_embeddings.pth | | | |____glove_emb_AG_newfilter.07_100.txt | | |____pathways | | | |____filtered_otu_pathway_table.RDS | | | |____pca_embedded_taxa.rds | | | |____pathway_table.RDS | | | |____corr_matches.rds | | | |____filtered_otu_pathway_table.txt | | | |____corr_matches_pca.rds |____figures | |____csvs | | |____significant_correlations_tests.py | | |____gen_hists.py | |____new_hists |____pathways | |____top20Paths_per_property_ids_v2.csv | |____top20Paths_per_property_names_v2.csv | |____property_pathway_dict_allsig.txt | |____property_pathway_dict.txt ``` **microbiome_transformers.zip** ``` |____electra_trace.py |____multitaskfinetune | |____begin.py | |____pretrain_hf.py | |____electra_discriminator.py | |____dataset.py | |____startup |____finetune_discriminator | |____begin.py | |____pretrain_hf.py | |____electra_pretrain_model.py | |____electra_discriminator.py | |____run_agp_agp_exps.sh | |____run_agp_HF_SH_cross_gen_val_set_tests.sh | |____run_agp_HF_SH_cross_gen_ensemble_tests.sh | |____hf_startup_3 | |____hf_startup_4 | |____README.MD | |____dataset.py | |____torch_rbf.py |____combine_sets.py |____pretrain_discriminator | |____begin.py | |____pretrain_hf.py | |____electra_pretrain_model.py | |____hf_startup | |____README.MD | |____train_command.sh | |____dataset.py |____benchmark_startup |____pretrain_generator | |____begin.py | |____pretrain_hf.py | |____electra_pretrain_model.py | |____hf_startup | |____README.MD | |____train_command.sh | |____dataset.py |____README.md |____compress_data.py |____generate_commands.py |____attention_benchmark | |____begin.py | |____pretrain_hf.py | |____electra_discriminator.py | |____hf_startup | |____dataset.py |____data_analyze.py |____benchmarks.py ``` # Usage Instructions Intended to cover both repeating the experiments we performed in our paper, or extending our methods to new datasets: * Prepare input data and initial embeddings * Vocabulary: Set the initial vocabulary size to accommodate all the unique OTUs/ASVs found in the data, plus special tokens such as mask, padding, and cls tokens. * Initial embeddings: Each vocabulary element (including special tokens) is assigned a unique embedding vector. * Input data format: Given the highly sparse nature of most microbiome samples relative to vocabulary size, we store each sample’s abundance information in coordinate-list format. I.e., a data file is a numpy array of size (n_samples, max_sample_size, 2), and each sample is stored as a (max_sample_size, 2) array. * Pretrain a language model on those embeddings * ELECTRA generators: Pretrain a sequence of generator models on unsupervised microbiome data. See pretrain_generator/train_command.sh and pretrain_generator/README.MD in microbiome_transformers.zip * ELECTRA discriminators: Pretrain a sequence of discriminator models on unsupervised microbiome data using outputs from the previously trained generators to generate substitutions for the original sequences. 
See pretrain_discriminator/train_command.sh and pretrain_discriminator/README.MD in microbiome_transformers.zip * Characterize the language model with the following interpretability steps: * Perform taxonomic assignments: Use assign_16S_to_phyla.R (or similar R code) to map your sequences to the phylogenetic hierarchy. * Attribution calculations: Use Attribution_calculations.ipynb to calculate per-sample model prediction scores, per-taxa attribution values (used for interpretability), as well as per-taxa averaged embeddings (used for plotting the taxa). * Embeddings visualizations and embedding space clustering: * Provide Plot_microbiome_transformers_results.ipynb with the paths to your per-taxa averaged embeddings calculated above, initial vocabulary embeddings (equivalent of vocab_embeddings.npy), and taxonomic assignments. * It will help generate TSNE visualizations of the two embedding spaces, as well as cross-comparisons of where taxa in one embedding space appear in the other embedding space. * The notebook contains preset regions for which parts of the two embedding spaces to compare (via bounding boxes with the select_by_rectangles function). These regions will likely not work for a new dataset, so you'll have to change them. * Finally, the notebook will also plot graphs comparing the clusterability of the data in the original two embedding spaces (non TSNE), so as to not be fooled by the dimension reduction technique. * Identify high-attribution taxa: * Process_Attributions_No_GPU.ipynb takes the per-sample prediction scores and the per-taxa attribution values (both from Attribution_calculations.ipynb) and identifies the taxa most and least associated with IBD. * It also includes filtration steps for the attribution calculations (e.g., only analyze taxa that appear >= 5 times, only use attribution scores that are confident and correct, etc), reflecting those we used in the paper. * The notebook will identify the taxa IDs of the top and bottom attributed taxa, then it will use seqs_.07_embed.fasta (or similar taxa-ID mapping) to print the 16S sequences associated with those taxa. * Pathway correlations: * Use assemble_new_embs.py to format pytorch vocab embedding files into the expected format for metabolic_pathway_correlations.R * Use metabolic_pathway_correlations.R (in the metabolic_pathways folder of property_pathway_correlations.zip) to produce heatmaps of embedding dim / metabolic pathway correlation strengths, and to save a file with the statistically significant correlation data. * Use gen_hists.py (in the figures/csvs folder of property_pathway_correlations.zip) to generate histograms comparing embedding dim / pathway correlation strengths of the initial fixed embeddings with those of the learned contextual embeddings. * Use significant_correlations_tests.py (also in the figures/csvs folder of property_pathway_correlations.zip) to apply non-parametric statistical tests to determine whether the distribution of embedding dim / pathway correlation strengths from the learned contextual embeddings is shifted right compared to those from the fixed embeddings. * Evaluate the language model for downstream task * First, account for any patients who have multiple samples in the dataset by blocking out any train / validation / test splits you perform by patient ID. 
Future steps will assume you have dictionaries (stored as pickle files) that map from some patient ID strings (which just need to be unique per patient) to indices of the data files (i.e., you need one mapping dict per training data file). In general, the way to do this will depend on how your patient metadata is structured. You can look to host_to_ids.py (in DeepMicro.zip) to see how we combined metadata from multiple files and compared that with the different training data numpy files to produce this mapping. * To run experiments using our paper's transformer methods: * "Within distribution" evaluations: Relevant commands are in finetune_discriminator/run_agp_agp_exps.sh in microbiome_transformers.zip * "Out of distribution" evaluations: Relevant commands are in finetune_discriminator/run_agp_HF_SH_cross_gen_ensemble_tests.sh (when using an ensemble of models) and finetune_discriminator/run_agp_HF_SH_cross_gen_val_set_tests.sh (without using an ensemble and when using a val set for the stopping condition). Both are in microbiome_transformers.zip * See also finetune_discriminator/README.MD in microbiome_transformers.zip for more general information about the finetuning functionality * To run experiments using the DeepMicro-derived baseline methods: * See exps_ae.sh, exps_cae.sh, and exps_baselines.sh in DeepMicro.zip for the experiment commands (for both in-distribution and out-of-distribution experiments) * Also see README.md in DeepMicro.zip for more general information on using DeepMicro and our modifications to it.
## Changelog:
**01/29/2025** Updated significant_correlations_tests.py to apply permutation testing and report Cohen's d and Cliff's Delta (an illustrative sketch of these tests appears after this record). Added run_blast_with_downloads.sh, which reports how many taxa in Halfvarson match any taxa in AGP and how many taxa in Schirmer match any taxa in AGP. It is a way of comparing which of Schirmer or Halfvarson is more similar to AGP in terms of the taxa that are present. We also slightly clarified the README's language to make it clearer where the software can be found.
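As referenced in the microbiomedata.zip description above, the coordinate-list arrays can be inspected with a few lines of numpy. This is a minimal sketch assuming the documented (n_samples, max_sample_size, 2) layout and one of the listed files; it is not code from the archived repository, and the handling of padding/special-token IDs is an assumption.

```python
"""Minimal sketch: inspect the coordinate-list microbiome arrays and expand
one sample to a dense abundance vector. Layout follows the record's
description; vocab size and padding handling are assumptions.
"""
import numpy as np

data = np.load("IBD_train_512.npy")        # shape (n_samples, 512, 2)
vocab_ids = data[:, :, 0].astype(int)      # taxon vocabulary IDs per sample
abundances = data[:, :, 1]                 # matching abundance values

def to_dense(sample_ids, sample_abund, vocab_size):
    """Scatter one coordinate-list sample into a dense abundance vector.
    Note: padding / special-token IDs, if present, should be masked first."""
    dense = np.zeros(vocab_size)
    np.add.at(dense, sample_ids, sample_abund)
    return dense

dense0 = to_dense(vocab_ids[0], abundances[0], vocab_size=vocab_ids.max() + 1)
print(data.shape, dense0.shape)
```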
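The changelog above notes that significant_correlations_tests.py applies permutation testing and reports Cohen's d and Cliff's Delta. The sketch below is a generic implementation of those comparisons for two arrays of correlation strengths; it is not the archived script, and the input file names are hypothetical.

```python
"""Generic sketch of the statistics named in the changelog: a permutation
test on the difference in mean correlation strength, plus Cliff's Delta and
Cohen's d effect sizes. Not the archived significant_correlations_tests.py.
"""
import numpy as np

rng = np.random.default_rng(0)

def cliffs_delta(x, y):
    """(# pairs with x>y minus # pairs with x<y) / (n_x * n_y)."""
    diff = x[:, None] - y[None, :]
    return (np.sum(diff > 0) - np.sum(diff < 0)) / (len(x) * len(y))

def cohens_d(x, y):
    pooled_sd = np.sqrt(((len(x) - 1) * x.var(ddof=1) + (len(y) - 1) * y.var(ddof=1))
                        / (len(x) + len(y) - 2))
    return (x.mean() - y.mean()) / pooled_sd

def perm_test(x, y, n_perm=10_000):
    """One-sided permutation p-value for mean(x) > mean(y)."""
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if pooled[:len(x)].mean() - pooled[len(x):].mean() >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical usage with arrays of correlation strengths:
# learned = np.loadtxt("learned_correlations.txt")
# fixed = np.loadtxt("fixed_correlations.txt")
# print(cliffs_delta(learned, fixed), cohens_d(learned, fixed), perm_test(learned, fixed))
```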
{"Abstract":["# DeepCaImX## Introduction#### Two-photon calcium imaging provides large-scale recordings of neuronal activities at cellular resolution. A robust, automated and high-speed pipeline to simultaneously segment the spatial footprints of neurons and extract their temporal activity traces while decontaminating them from background, noise and overlapping neurons is highly desirable to analyze calcium imaging data. In this paper, we demonstrate DeepCaImX, an end-to-end deep learning method based on an iterative shrinkage-thresholding algorithm and a long-short-term-memory neural network to achieve the above goals altogether at a very high speed and without any manually tuned hyper-parameters. DeepCaImX is a multi-task, multi-class and multi-label segmentation method composed of a compressed-sensing-inspired neural network with a recurrent layer and fully connected layers. It represents the first neural network that can simultaneously generate accurate neuronal footprints and extract clean neuronal activity traces from calcium imaging data. We trained the neural network with simulated datasets and benchmarked it against existing state-of-the-art methods with in vivo experimental data. DeepCaImX outperforms existing methods in the quality of segmentation and temporal trace extraction as well as processing speed. DeepCaImX is highly scalable and will benefit the analysis of mesoscale calcium imaging. \n\n## System and Environment Requirements#### 1. Both CPU and GPU are supported to run the code of DeepCaImX. A CUDA compatible GPU is preferred. * In our demo of full-version, we use a GPU of Quadro RTX8000 48GB to accelerate the training speed.* In our demo of mini-version, at least 6 GB momory of GPU/CPU is required.#### 2. Python 3.9 and Tensorflow 2.10.0#### 3. Virtual environment: Anaconda Navigator 2.2.0#### 4. Matlab 2023a\n\n## Demo and installation#### 1 (_Optional_) GPU environment setup. We need a Nvidia parallel computing platform and programming model called _CUDA Toolkit_ and a GPU-accelerated library of primitives for deep neural networks called _CUDA Deep Neural Network library (cuDNN)_ to build up a GPU supported environment for training and testing our model. The link of CUDA installation guide is https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html and the link of cuDNN installation guide is https://docs.nvidia.com/deeplearning/cudnn/installation/overview.html. #### 2 Install Anaconda. Link of installation guide: https://docs.anaconda.com/free/anaconda/install/index.html#### 3 Launch Anaconda prompt and install Python 3.x and Tensorflow 2.9.0 as the virtual environment.#### 4 Open the virtual environment, and then pip install mat73, opencv-python, python-time and scipy.#### 5 Download the "DeepCaImX_training_demo.ipynb" in folder "Demo (full-version)" for a full version and the simulated dataset via the google drive link. Then, create and put the training dataset in the path "./Training Dataset/". If there is a limitation on your computing resource or a quick test on our code, we highly recommand download the demo from the folder "Mini-version", which only requires around 6.3 GB momory in training. #### 6 Run: Use Anaconda to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" or "DeepCaImX_testing_demo.ipynb". 
Then, please check and follow the guide of "DeepCaImX_training_demo.ipynb" or or "DeepCaImX_testing_demo.ipynb" for training or testing respectively.#### Note: Every package can be installed in a few minutes.\n\n## Run DeepCaImX#### 1. Mini-version demo* Download all the documents in the folder of "Demo (mini-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n#### 2. Full-version demo* Download all the documents in the folder of "Demo (full-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n## Data Tailor#### A data tailor developed by Matlab is provided to support a basic data tiling processing. In the folder of "Data Tailor", we can find a "tailor.m" script and an example "test.tiff". After running "tailor.m" by matlab, user is able to choose a "tiff" file from a GUI as loading the sample to be tiled. Settings include size of FOV, overlapping area, normalization option, name of output file and output data format. The output files can be found at local folder, which is at the same folder as the "tailor.m".\n\n## Simulated Dataset#### 1. Dataset generator (FISSA Version): The algorithm for generating simulated dataset is based on the paper of FISSA (_Keemink, S.W., Lowe, S.C., Pakan, J.M.P. et al. FISSA: A neuropil decontamination toolbox for calcium imaging signals. Sci Rep 8, 3493 (2018)_) and SimCalc repository (https://github.com/rochefort-lab/SimCalc/). For the code used to generate the simulated data, please download the documents in the folder "Simulated Dataset Generator". #### Training dataset: https://drive.google.com/file/d/1WZkIE_WA7Qw133t2KtqTESDmxMwsEkjJ/view?usp=share_link#### Testing Dataset: https://drive.google.com/file/d/1zsLH8OQ4kTV7LaqQfbPDuMDuWBcHGWcO/view?usp=share_link\n\n#### 2. Dataset generator (NAOMi Version): The algorithm for generating simulated dataset is based on the paper of NAOMi (_Song, A., Gauthier, J. L., Pillow, J. W., Tank, D. W. & Charles, A. S. Neural anatomy and optical microscopy (NAOMi) simulation for evaluating calcium imaging methods. Journal of neuroscience methods 358, 109173 (2021)_). For the code use to generate the simulated data, please go to this link: https://bitbucket.org/adamshch/naomi_sim/src/master/code/## Experimental Dataset#### We used the samples from ABO dataset:https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS.#### The segmentation ground truth can be found in the folder "Manually Labelled ROIs". #### The segmentation ground truth of depth 175, 275, 375, 550 and 625 um are manually labeled by us. #### The code for creating ground truth of extracted traces can be found in "Prepro_Exp_Sample.ipynb" in the folder "Preprocessing of Experimental Sample"."]}more » « less
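The Matlab "tailor.m" tool described above tiles a movie into overlapping fields of view with optional normalization. The sketch below is a hedged Python/numpy equivalent for illustration only; the tile size, overlap, normalization choice, and the tifffile dependency are assumptions, not part of the archived code.

```python
"""Hedged Python sketch of the data-tiling step performed by tailor.m:
split a (time, height, width) calcium-imaging stack into overlapping FOV
tiles, optionally normalizing each tile. Illustrative only.
"""
import numpy as np
import tifffile  # assumption: the input movie is stored as a multi-page TIFF

def tile_movie(movie, fov=256, overlap=32, normalize=True):
    """Yield (row, col, tile) for overlapping fov x fov tiles of the stack."""
    t, h, w = movie.shape
    step = fov - overlap
    for r in range(0, max(h - fov, 0) + 1, step):
        for c in range(0, max(w - fov, 0) + 1, step):
            tile = movie[:, r:r + fov, c:c + fov].astype(np.float32)
            if normalize:
                tile = (tile - tile.min()) / (np.ptp(tile) + 1e-9)
            yield r, c, tile

movie = tifffile.imread("test.tiff")   # shape assumed to be (time, H, W)
for r, c, tile in tile_movie(movie):
    tifffile.imwrite(f"tile_r{r}_c{c}.tiff", tile)
```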
