Title: Notebooks for combining the National Water Model results/inputs with observations from SNOTEL and MODIS at SNOTEL sites
This resource includes Jupyter Notebooks that combine (merge) National Water Model results with observations. There are four folders:

- NWM_SnowAssessment: Contains the code required to combine model results with observations, plus an output folder holding the outputs of the five Jupyter Notebooks in the code folder. Run the notebooks in this order: first run Combine_obs_mod_[*].ipynb, where [*] is P (precipitation), SWE (snow water equivalent), TAir (air temperature), or FSNO (snow-covered area fraction); each combines the model outputs and observations for that variable. Then run Combine_obs_mod_P_SWE_TAir_FSNO.ipynb to merge all four.
- NWM_Reanalysis: Contains the National Water Model version 2 retrospective simulations, retrieved and pre-processed at SNOTEL sites using https://doi.org/10.4211/hs.3d4976bf6eb84dfbbe11446ab0e31a0a and https://doi.org/10.4211/hs.1b66a752b0cc467eb0f46bda5fdc4b34.
- SNOTEL: Contains preprocessed SNOTEL observations created using https://doi.org/10.4211/hs.d1fe0668734e4892b066f198c4015b06.
- GEE: Contains MODIS observations downloaded using https://doi.org/10.4211/hs.d287f010b2dd48edb0573415a56d47f8. Note that the existing CSV file is the merge of the downloaded CSV files.
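As a rough illustration of what the per-variable combination step produces, here is a minimal pandas sketch (ours, not taken from the notebooks; file and column names are hypothetical) that merges model output with SNOTEL observations for one variable:

```python
# Hypothetical sketch of merging NWM retrospective output with SNOTEL
# observations for SWE; the real file and column names live in the notebooks.
import pandas as pd

mod = pd.read_csv("NWM_Reanalysis/SWE_model.csv", parse_dates=["date"])
obs = pd.read_csv("SNOTEL/SWE_obs.csv", parse_dates=["date"])

# Inner join on site and date keeps only days where both sources report.
merged = mod.merge(obs, on=["site_id", "date"], suffixes=("_mod", "_obs"))
merged.to_csv("Combined_obs_mod_SWE.csv", index=False)
```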
Award ID(s):
1664061
PAR ID:
10488109
Author(s) / Creator(s):
Publisher / Repository:
HydroShare
Date Published:
Subject(s) / Keyword(s):
Google Earth Engine; MODIS; Snow water equivalent; NDSI; SNOTEL; Snow covered area fraction; National Water Model
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This dataset of 7304 aluminum grain boundaries provides comprehensive coverage of the 5D space of crystallographic character. The dataset and some of its characteristics are described in detail in https://doi.org/10.1016/j.actamat.2022.118006. The dataset here includes a zip file with all 7304 minimum-energy grain boundary structure files, which are minimized dump files from LAMMPS. The dump files only include atoms within ±15 angstroms of the grain boundary plane. The CSV file contains information about all 7304 grain boundaries, including the crystallographic character and a few computed properties. A README file provides a description of the columns of the CSV file.
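A minimal sketch of working with the per-boundary CSV (ours; the column name below is hypothetical, and the actual schema is documented in the dataset's README):

```python
# Load the per-boundary table and select a subset of grain boundaries.
import pandas as pd

gb = pd.read_csv("grain_boundary_properties.csv")  # hypothetical file name
print(len(gb))  # expect 7304 rows, one per grain boundary

# e.g., keep boundaries below the median of a computed property column;
# "GB_energy" is an assumed column name, not taken from the dataset.
low = gb[gb["GB_energy"] < gb["GB_energy"].median()]
```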
  2. This dataset contains raw data, processed data, and the code used for data processing in our manuscript, covering our Fourier-transform infrared (FTIR) spectroscopy, nuclear magnetic resonance (NMR), Raman spectroscopy, and X-ray diffraction (XRD) experiments. The data and code for fitting our unpolarized Raman spectra to polypeptide spectra are also included. The folder structure, also explained in ReadMe.txt, is as follows (browsing the data in Tree view is recommended).

Codes
- Raman Data Processing: The MATLAB script RamanDecomposition.m decomposes the sub-peaks across the different polarized Raman spectra (XX, XZ, ZX, ZZ, and YY) under a set of pre-determined restrictions. The helper functions it uses are in the Helpers folder. RamanDecomposition.pdf is a PDF printout of the MATLAB code and output.
- P Value Simulation: 31_helix.ipynb and a_helix.ipynb are Jupyter Notebooks containing the intrinsic P value simulations for the 31-helix and alpha-helix structures; the simulation results were used to prepare Supplementary Table 4. Vector.py, Atom.py, Amino.py, and Helpers.py contain the class definitions used by the two notebooks. See the comments in each file for details.

FTIR
- FTIR Raw Transmission.opj: Raw transmission data measured on a single silk strand and used for the FTIR spectra analysis.
- FTIR Deconvoluted Oscillators.opj: Generated from the data in the previous file using the W-VASE software from J. A. Woollam, Inc.
- FTIR Unpolarized MultiStrand Raw Transmission.opj: Raw transmission data measured on multiple silk strands.
- The datasets in the first two files were used to plot Figure 2a-b, the FTIR data points in Figure 4a, and Supplementary Figure 6; the dataset in the third file was used to plot Supplementary Figure 3a.

NMR
- Raw data files of the 13C MAS NMR spectra: ascii-spec_CP.txt (cross-polarized spectrum) and ascii-spec_DP.txt (direct-polarized spectrum). Data are in ASCII format (comma-separated values) with the following columns: data point number, intensity, frequency [Hz], frequency [ppm].

Polypeptide Spectrum Fits
- MATLAB scripts and Helpers: Raman_Fitting_Process_Part_1.m and Raman_Fitting_Process_Part_2.m contain step-by-step instructions for fitting our calculated unpolarized Raman spectrum using digitized model polypeptide Raman spectra. The Helpers folder contains two helper functions used by these scripts. See the scripts for further instruction and information.
- Data: aPA.csv, bPA.csv, GlyI.csv, and GlyII.csv contain the digitized Raman spectra of poly-alanine, beta-alanine, poly-glycine-I, and poly-glycine-II. Raman_Exp_Data.mat contains the processed, polarized Raman spectra obtained from our experiments: the variable freq holds the wavenumber axis of each collected spectrum, and the variables xx, yy, zz, xz, and zx hold the polarized Raman spectra. These variables are used to calculate the unpolarized Raman spectrum in Raman_Fitting_Process_Part_2.m.

Raman
- Raman Raw Data.mat: All raw data used for the Raman spectra analysis. Every variable is a MATLAB structure with fields Freq and Raw, where Freq holds the wavenumber axis of the measured spectra and Raw holds five measured Raman signal strengths. Variables XX, XZ, ZX, ZZ, and YY were used for plotting and sub-peak analysis in Figure 2c-d, the Raman data points in Figure 4a, Figure 5b, Supplementary Figure 2, and Supplementary Figure 7; variable WideRange was used to plot and identify the peaks in Supplementary Figure 3b.

X-Ray
- X-Ray.mat: The raw X-ray data used for the diffraction analysis in Supplementary Figure 5.
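For readers working outside MATLAB, a minimal sketch of loading Raman_Exp_Data.mat with SciPy (the actual unpolarized-spectrum calculation is implemented in Raman_Fitting_Process_Part_2.m; the plain average below is only a placeholder, not the manuscript's formula):

```python
# Load the polarized Raman spectra described above; the variable names
# (freq, xx, yy, zz, xz, zx) are taken from the dataset description.
import numpy as np
from scipy.io import loadmat

d = loadmat("Raman_Exp_Data.mat")
freq = d["freq"].squeeze()  # wavenumber axis
channels = np.vstack([d[k].squeeze() for k in ("xx", "yy", "zz", "xz", "zx")])
unpolarized_placeholder = channels.mean(axis=0)  # placeholder combination only
```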
  3. {"Abstract":["We use open source human gut microbiome data to learn a microbial\n “language” model by adapting techniques from Natural Language Processing\n (NLP). Our microbial “language” model is trained in a self-supervised\n fashion (i.e., without additional external labels) to capture the\n interactions among different microbial taxa and the common compositional\n patterns in microbial communities. The learned model produces\n contextualized taxon representations that allow a single microbial taxon\n to be represented differently according to the specific microbial\n environment in which it appears. The model further provides a sample\n representation by collectively interpreting different microbial taxa in\n the sample and their interactions as a whole. We demonstrate that, while\n our sample representation performs comparably to baseline models in\n in-domain prediction tasks such as predicting Irritable Bowel Disease\n (IBD) and diet patterns, it significantly outperforms them when\n generalizing to test data from independent studies, even in the presence\n of substantial distribution shifts. Through a variety of analyses, we\n further show that the pre-trained, context-sensitive embedding captures\n meaningful biological information, including taxonomic relationships,\n correlations with biological pathways, and relevance to IBD expression,\n despite the model never being explicitly exposed to such signals."],"Methods":["No additional raw data was collected for this project. All inputs\n are available publicly. American Gut Project, Halfvarson, and Schirmer raw\n data are available from the NCBI database (accession numbers PRJEB11419,\n PRJEB18471, and PRJNA398089, respectively). We used the curated data\n produced by Tataru and David, 2020."],"TechnicalInfo":["# Code and data for "Learning a deep language model for microbiomes:\n the power of large scale unlabeled microbiome data" ## Data: *\n vocab_embeddings.npy * Fixed vocabulary embeddings produced from prior\n work: [Decoding the language of microbiomes using word-embedding\n techniques, and applications in inflammatory bowel\n disease](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859). Adapted from [here](http://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/embed/). * microbiomedata.zip * Contains the labels and data for the three datasets used in this study. Specifically, it includes: * IBD_(test|train)*(512|otu).npy and IBD*(test|train)_labels.npy * halfvarson_(512_otu|otu).npy and halfvarson_IBD_labels.npy * schirmer_IBD_(512_otu|otu).npy and schirmer_IBD_labels.npy * (test|train)encodings_(512|1897).npy * The data are stored as n_samples x max_sample_size x 2 numpy arrays, containing both the vocab IDs of the taxa in the samples, as well as the abundance values for each taxa. data[:,:,0] will give the vocab IDs, and data[:,:,1] will give the abundances. * Files which mention '512' are truncated to only have up to 512 taxa in them (max_sample_size = 512). * Note that we refer to the schirmer dataset as HMP2 in the paper. * (test|train)encodings_(512|1897).npy represents the full collection of [American Gut Project](https://doi.org/10.1128%2FmSystems.00031-18) data, regardless of whether that data has IBD labels or not, split into train / test splits. * Also contains the folders fruitdata and vegdata containing fruit and vegetable data respectively, and the file README, which documents the contents of the first two folders. 
* American Gut Project, Halfvarson, and Schirmer raw data are available from the NCBI database (accession numbers PRJEB11419, PRJEB18471, and PRJNA398089, respectively). We used the curated data produced by [Tataru and David, 2020](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859).
* pretrainedmodels.zip
  * Contains a sequence of pretrained discriminator models across different epochs, allowing users to compute embeddings without having to pretrain models themselves. Each model is stored as a pair of files, a pytorch_model.bin file containing the weights and a config.json file containing the model config parameters, located in its own folder whose name corresponds to the epoch. E.g., "5head5layer_epoch60_disc" stores the discriminator model that was trained for 60 epochs. Model checkpoints can be loaded by providing a path to the pytorch_model.bin file in the --load_disc argument of begin.py in microbiome_transformers-master/finetune_discriminator.
* ensemble.zip
  * Contains the result of an ensemble finetuning run, allowing users to perform interpretability / attribution experiments without having to train models themselves. Each model is similarly stored as a pytorch_model.bin file and a config.json file in its own folder. E.g., the run3_epoch0_disc folder stores the model from the third finetuning run (with epoch0 reflecting that the finetuning takes only one epoch).
* seqs_.07_embed.fasta
  * Contains the 16S sequences associated with each taxon vocabulary element of our study, originally produced by prior work: [Decoding the language of microbiomes using word-embedding techniques, and applications in inflammatory bowel disease](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859). Also available [here](http://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/embed/seqs_.07_embed.fasta).

## Code/Software:
Note that while the Dryad repository stores the data, the code and software discussed here are available at [this](https://doi.org/10.5281/zenodo.13858903) site, which is linked under the "Software" tab on the current page. The following software includes hardcoded absolute paths to various files of interest (described above). These paths have been changed to the form "/path/to/file_of_interest", where the "path/to" portion must be changed to reflect the actual paths on whichever system you run these on.
* Attribution_calculations.ipynb
  * Used to calculate per-sample model prediction scores, per-taxon attribution values (used for interpretability), and per-taxon averaged embeddings (used for plotting the taxa). Note that the current file is set to compute attributions only for IBD, but it can easily be changed for Schirmer/HMP2 and Halfvarson.
* Process_Attributions_No_GPU.ipynb
  * Takes the per-sample prediction scores and the per-taxon attribution values (both from Attribution_calculations.ipynb) and identifies the taxa most and least associated with IBD.
* assign_16S_to_phyla.R
  * An R script that makes phylogenetic assignments for the 16S sequences from seqs_.07_embed.fasta. Invoke with 'Rscript assign_16S_to_phyla.R' and no arguments.
* run_blast_with_downloads.sh
  * Compares the overlap in ASVs between Halfvarson and AGP versus between HMP2 and AGP. Requires BLAST to be installed. BLAST parameters are set in the file via the results-filtering lines ("awk '$5 < 1e-20 && $8 >= 99' | \"), which set the e-value threshold to 1e-20 and the percent similarity to 99%, with one line for each of the two pairwise comparisons.
  * Simply run via "bash run_blast_with_downloads.sh".
* Plot_microbiome_transformers_results.ipynb
  * Loads the averaged taxa embeddings (from Attribution_calculations.ipynb) and the vocabulary embeddings (from [Decoding the language of microbiomes using word-embedding techniques, and applications in inflammatory bowel disease](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859) / vocab_embeddings.npy), as well as the taxonomic assignments (from assign_16S_to_phyla.R), and generates the various TSNE-based plots of the embedding space geometry. It also generates plots comparing the clustering quality of the averaged embeddings and the vocabulary embeddings.
* DeepMicro.zip
  * A modified version of [DeepMicro](https://github.com/minoh0201/DeepMicro), adapted to more easily run the DeepMicro-based baselines included in our paper. Most additional functionality is described in the 'help' strings of the additional arguments and the docstrings of the functions. In particular, since our data include unlabeled samples which nonetheless contribute to learning an embedding space, we added a "--pretraining_data" argument to allow such data to be included in the self-supervised learning portion of the baselines.
  * convert_data.py, under the data folder, is a utility that converts from the coordinate-list format of this study to the one-hot abundance table format expected by DeepMicro (a sketch of this conversion follows the Code/Software list below).
  * get_unlabeled_pretraining_data.py, under the data folder, processes the labeled microbiome datasets (fruit, vegetable, and IBD) and extends them with unlabeled data from the American Gut Project (AGP).
  * host_to_ids.py, under the data/host_to_indices folder, combines metadata from err-to-qid.txt and AG_mapping.txt (both available at [https://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/AG_new](https://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/AG_new)) with the sequences in seqs_.07_embed.fasta and the numpy data files to create dictionaries that map from host IDs to indices in the numpy files, then stores those as pickle files. This allows future training runs of the transformer or the baselines to block their train / validation / test splits by host ID.
  * exps_ae.sh, exps_cae.sh, and exps_baselines.sh are shell scripts with the python commands that run the various DeepMicro-based baselines.
  * display_results.py is a helper for accumulating experimental results and displaying them in a table.
* property_pathway_correlations.zip
  * Contains the code and files required to run the property and pathway correlation experiments, organized into three subfolders (figures, pathways, and metabolic_pathways):
  * figures: stores output figures such as the heatmap of property-pathway correlation strengths. Contains two subfolders:
    * csvs: contains gen_hists.py, which takes the significant-correlation counts / strengths output by metabolic_pathway_correlations.R and plots a histogram comparing the property correlations of the initial vocabulary embeddings with those of the learned embeddings. Also contains significant_correlations_tests.py, which applies non-parametric and permutation tests to statistically determine whether the learned embeddings tend to have stronger property correlations, and reports effect sizes via Cliff's Delta and Cohen's d statistics.
    * new_hists: will store the histograms generated by gen_hists.py.
  * pathways: stores text and csv outputs, such as the correlation strengths between each property and pathway pair (property_pathway_dict_allsig.txt), the top 20 pathways associated with each property (top20Paths_per_property_(ids|names)_v2.csv), and a list of the pathway most correlated with each property (property_pathway_dict.txt).
  * metabolic_pathways: contains the code and data required to actually run the correlation tests. The code is in metabolic_pathway_correlations.R and runs via Rscript with no arguments. The data are in the data subfolder, which itself contains three subfolders:
    * embed: contains embeddings to be loaded by metabolic_pathway_correlations.R, e.g., merged_sequences_embeddings.txt or glove_emb_AG_newfilter.07_100.txt. Also contains a script, assemble_new_embs.py, which formats new embeddings txt files from a pytorch embeddings tensor, such as the one stored in epoch_120_IBD_avg_vocab_embeddings.pth, together with seqs_.07_embed.txt.
    * AG_new/pathways: contains files like "corr_matches_i_i+9.RDS", which store intermediate results of the permutation tests so they don't all have to be calculated at once. These should be recomputed with each run.
    * pathways: mostly stores various other input and output RDS files:
      * corr_matches.rds: stores intermediate results of statistical significance testing with model embeddings. Recomputed each time.
      * corr_matches_pca.rds: stores a prior result of statistical significance testing with PCA embeddings. Loaded from storage by default.
      * filtered_otu_pathway_table.RDS / .txt: stores associations of each taxon vocab entry with metabolic pathways, filtered to exclude pathways that are no longer present in KEGG.
      * pathway_table.RDS: updated pathway table saved by metabolic_pathway_correlations.R on each run.
      * pca_embedded_taxa.rds: stores PCA embeddings of all the vocab taxa entries.
* microbiome_transformers.zip
  * A backup of our [GitHub repository](https://github.com/QuintinPope/microbiome_transformers) for the model architecture (both generator and discriminator), the pretraining processes for both, and the model finetuning scripts. Contains its own READMEs.
  * Has the code for pretraining generator models. See pretrain_generator/train_command.sh and pretrain_generator/README.MD.
  * Has the code for using those models to pretrain discriminator models. See pretrain_discriminator/train_command.sh and pretrain_discriminator/README.MD.
  * Has the code for finetuning those pretrained discriminator models on the classification data in our study (both within-distribution and out-of-distribution experiments).
    * See finetune_discriminator/README.MD for general info on finetuning.
    * See finetune_discriminator/run_agp_agp_exps.sh for the commands to run the in-distribution experiments.
    * See finetune_discriminator/run_agp_HF_SH_cross_gen_ensemble_tests.sh to run the out-of-distribution experiments using an ensemble of models.
    * See finetune_discriminator/run_agp_HF_SH_cross_gen_val_set_tests.sh to run the out-of-distribution experiments without an ensemble, using a validation set for the stopping condition.
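As a rough illustration of the conversion described for convert_data.py above (the real utility lives in DeepMicro.zip; the padding convention below is our assumption):

```python
# Convert coordinate-list data of shape (n_samples, max_sample_size, 2) into
# the dense (n_samples, vocab_size) abundance table DeepMicro expects.
import numpy as np

def to_abundance_table(coord: np.ndarray, vocab_size: int) -> np.ndarray:
    table = np.zeros((coord.shape[0], vocab_size))
    ids = coord[:, :, 0].astype(int)   # vocab IDs per sample
    abund = coord[:, :, 1]             # matching abundance values
    for i in range(coord.shape[0]):
        # np.add.at accumulates correctly even if an ID repeats; assumes
        # padding slots carry abundance 0, so they contribute nothing.
        np.add.at(table[i], ids[i], abund[i])
    return table
```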
## File Structures:
**microbiomedata.zip**
```
|____total_IBD_otu.npy
|____IBD_train_512.npy
|____halfvarson_IBD_labels.npy
|____IBD_train_otu.npy
|____test_encodings_512.npy
|____total_IBD_512.npy
|____train_encodings_512.npy
|____schirmer_IBD_labels.npy
|____schirmer_IBD_512_otu.npy
|____fruitdata
| |____FRUIT_FREQUENCY_all_label.npy
| |____FRUIT_FREQUENCY_otu_512.npy
| |____FRUIT_FREQUENCY_binary24_labels.npy
| |____FRUIT_FREQUENCY_all_otu.npy
| |____FRUIT_FREQUENCY_binary34_labels.npy
|____vegdata
| |____VEGETABLE_FREQUENCY_all_label.npy
| |____VEGETABLE_FREQUENCY_binary24_labels.npy
| |____VEGETABLE_FREQUENCY_otu_512.npy
| |____VEGETABLE_FREQUENCY_all_otu.npy
| |____VEGETABLE_FREQUENCY_binary34_labels.npy
|____README
|____schirmer_IBD_otu.npy
|____IBD_test_label.npy
|____IBD_test_512.npy
|____IBD_train_label.npy
|____IBD_test_otu.npy
|____test_encodings_1897.npy
|____halfvarson_otu.npy
|____halfvarson_512_otu.npy
|____total_IBD_label.npy
|____train_encodings_1897.npy
```
**pretrainedmodels.zip**
```
|____5head5layer_epoch60_disc
| |____config.json
| |____pytorch_model.bin
|____5head5layer_epoch30_disc
| |____config.json
| |____pytorch_model.bin
|____5head5layer_epoch105_disc
| |____config.json
| |____pytorch_model.bin
|____5head5layer_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____5head5layer_epoch45_disc
| |____config.json
| |____pytorch_model.bin
|____5head5layer_epoch90_disc
| |____config.json
| |____pytorch_model.bin
|____5head5layer_epoch120_disc
| |____config.json
| |____pytorch_model.bin
|____5head5layer_epoch15_disc
| |____config.json
| |____pytorch_model.bin
|____5head5layer_epoch75_disc
| |____config.json
| |____pytorch_model.bin
```
**ensemble.zip**
```
|____run4_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____run8_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____run1_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____run2_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____run10_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____run7_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____run9_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____run5_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____run6_epoch0_disc
| |____config.json
| |____pytorch_model.bin
|____run3_epoch0_disc
| |____config.json
| |____pytorch_model.bin
```
**DeepMicro.zip**
```
|____LICENSE
|____deep_env_config.yml
|____DM.py
|____exception_handle.py
|____README.md
|____exps_cae.sh
|____exps_ae.sh
|____exps_baselines.sh
|____results
| |____display_results.py
| |____plots
|____data
| |____host_to_indices
| | |____host_to_ids.py
| |____marker.zip
| |____UserLabelExample.csv
| |____convert_data.py
| |____get_unlabeled_pretraining_data.py
| |____UserDataExample.csv
| |____abundance.zip
|____DNN_models.py
```
**property_pathway_correlations.zip**
```
|____metabolic_pathways
| |____metabolic_pathway_correlations.R
| |____data
| | |____AG_new
| | | |____pathways
| | | | |____corr_matches_141_150.RDS
| | | | |____corr_matches_81_90.RDS
| | | | |____corr_matches_21_30.RDS
| | | | |____corr_matches_51_60.RDS
| | | | |____corr_matches_121_130.RDS
| | | | |____corr_matches_101_110.RDS
| | | | |____corr_matches_61_70.RDS
| | | | |____corr_matches_31_40.RDS
| | | | |____corr_matches_131_140.RDS
| | | | |____corr_matches_181_190.RDS
| | | | |____corr_matches_161_170.RDS
| | | | |____corr_matches_11_20.RDS
| | | | |____corr_matches_1_10.RDS
| | | | |____corr_matches_191_200.RDS
| | | | |____corr_matches_171_180.RDS
| | | | |____corr_matches_71_80.RDS
| | | | |____corr_matches_91_100.RDS
| | | | |____corr_matches_111_120.RDS
| | | | |____corr_matches_41_50.RDS
| | | | |____corr_matches_151_160.RDS
| | |____embed
| | | |____seqs_.07_embed.txt
| | | |____merged_sequences_embeddings.txt
| | | |____assemble_new_embs.py
| | | |____epoch_120_IBD_avg_vocab_embeddings.pth
| | | |____glove_emb_AG_newfilter.07_100.txt
| | |____pathways
| | | |____filtered_otu_pathway_table.RDS
| | | |____pca_embedded_taxa.rds
| | | |____pathway_table.RDS
| | | |____corr_matches.rds
| | | |____filtered_otu_pathway_table.txt
| | | |____corr_matches_pca.rds
|____figures
| |____csvs
| | |____significant_correlations_tests.py
| | |____gen_hists.py
| |____new_hists
|____pathways
| |____top20Paths_per_property_ids_v2.csv
| |____top20Paths_per_property_names_v2.csv
| |____property_pathway_dict_allsig.txt
| |____property_pathway_dict.txt
```
**microbiome_transformers.zip**
```
|____electra_trace.py
|____multitaskfinetune
| |____begin.py
| |____pretrain_hf.py
| |____electra_discriminator.py
| |____dataset.py
| |____startup
|____finetune_discriminator
| |____begin.py
| |____pretrain_hf.py
| |____electra_pretrain_model.py
| |____electra_discriminator.py
| |____run_agp_agp_exps.sh
| |____run_agp_HF_SH_cross_gen_val_set_tests.sh
| |____run_agp_HF_SH_cross_gen_ensemble_tests.sh
| |____hf_startup_3
| |____hf_startup_4
| |____README.MD
| |____dataset.py
| |____torch_rbf.py
|____combine_sets.py
|____pretrain_discriminator
| |____begin.py
| |____pretrain_hf.py
| |____electra_pretrain_model.py
| |____hf_startup
| |____README.MD
| |____train_command.sh
| |____dataset.py
|____benchmark_startup
|____pretrain_generator
| |____begin.py
| |____pretrain_hf.py
| |____electra_pretrain_model.py
| |____hf_startup
| |____README.MD
| |____train_command.sh
| |____dataset.py
|____README.md
|____compress_data.py
|____generate_commands.py
|____attention_benchmark
| |____begin.py
| |____pretrain_hf.py
| |____electra_discriminator.py
| |____hf_startup
| |____dataset.py
|____data_analyze.py
|____benchmarks.py
```
# Usage Instructions
These instructions cover both repeating the experiments we performed in our paper and extending our methods to new datasets:
* Prepare input data and initial embeddings
  * Vocabulary: Set the initial vocabulary size to accommodate all the unique OTUs/ASVs found in the data, plus special tokens such as the mask, padding, and cls tokens.
  * Initial embeddings: Each vocabulary element (including special tokens) is assigned a unique embedding vector.
  * Input data format: Given the highly sparse nature of most microbiome samples relative to vocabulary size, we store each sample's abundance information in coordinate-list format. That is, a data file is a numpy array of size (n_samples, max_sample_size, 2), and each sample is stored as a (max_sample_size, 2) array.
* Pretrain a language model on those embeddings
  * ELECTRA generators: Pretrain a sequence of generator models on unsupervised microbiome data. See pretrain_generator/train_command.sh and pretrain_generator/README.MD in microbiome_transformers.zip.
  * ELECTRA discriminators: Pretrain a sequence of discriminator models on unsupervised microbiome data, using outputs from the previously trained generators to generate substitutions for the original sequences.
    See pretrain_discriminator/train_command.sh and pretrain_discriminator/README.MD in microbiome_transformers.zip.
* Characterize the language model with the following interpretability steps:
  * Perform taxonomic assignments: Use assign_16S_to_phyla.R (or similar R code) to map your sequences to the phylogenetic hierarchy.
  * Attribution calculations: Use Attribution_calculations.ipynb to calculate per-sample model prediction scores, per-taxon attribution values (used for interpretability), and per-taxon averaged embeddings (used for plotting the taxa).
  * Embedding visualizations and embedding space clustering:
    * Provide Plot_microbiome_transformers_results.ipynb with the paths to your per-taxon averaged embeddings calculated above, the initial vocabulary embeddings (the equivalent of vocab_embeddings.npy), and the taxonomic assignments.
    * It will help generate TSNE visualizations of the two embedding spaces, as well as cross-comparisons of where taxa from one embedding space appear in the other.
    * The notebook contains preset regions specifying which parts of the two embedding spaces to compare (via bounding boxes with the select_by_rectangles function). These regions will likely not work for a new dataset, so you'll have to change them.
    * Finally, the notebook will also plot graphs comparing the clusterability of the data in the original two embedding spaces (without TSNE), so as not to be fooled by the dimension-reduction technique.
  * Identify high-attribution taxa:
    * Process_Attributions_No_GPU.ipynb takes the per-sample prediction scores and the per-taxon attribution values (both from Attribution_calculations.ipynb) and identifies the taxa most and least associated with IBD.
    * It also includes filtration steps for the attribution calculations (e.g., only analyze taxa that appear >= 5 times, only use attribution scores that are confident and correct, etc.), reflecting those we used in the paper.
    * The notebook will identify the taxon IDs of the top and bottom attributed taxa, then use seqs_.07_embed.fasta (or a similar taxon-ID mapping) to print the 16S sequences associated with those taxa.
  * Pathway correlations:
    * Use assemble_new_embs.py to format pytorch vocab embedding files into the format expected by metabolic_pathway_correlations.R.
    * Use metabolic_pathway_correlations.R (in the metabolic_pathways folder of property_pathway_correlations.zip) to produce heatmaps of embedding-dimension / metabolic-pathway correlation strengths and to save a file with the statistically significant correlation data.
    * Use gen_hists.py (in the figures/csvs folder of property_pathway_correlations.zip) to generate histograms comparing the embedding-dimension / pathway correlation strengths of the initial fixed embeddings with those of the learned contextual embeddings.
    * Use significant_correlations_tests.py (also in the figures/csvs folder of property_pathway_correlations.zip) to apply non-parametric statistical tests to determine whether the distribution of embedding-dimension / pathway correlation strengths from the learned contextual embeddings is shifted right compared to that from the fixed embeddings.
* Evaluate the language model on downstream tasks
  * First, account for any patients who have multiple samples in the dataset by blocking any train / validation / test splits you perform by patient ID (a sketch appears after the changelog below).
    Future steps assume you have dictionaries (stored as pickle files) that map from patient ID strings (which just need to be unique per patient) to indices of the data files (i.e., you need one mapping dict per training data file). In general, the way to do this will depend on how your patient metadata is structured. You can look to host_to_ids.py (in DeepMicro.zip) to see how we combined metadata from multiple files and compared it with the different training data numpy files to produce this mapping.
  * To run experiments using our paper's transformer methods:
    * "Within distribution" evaluations: the relevant commands are in finetune_discriminator/run_agp_agp_exps.sh in microbiome_transformers.zip.
    * "Out of distribution" evaluations: the relevant commands are in finetune_discriminator/run_agp_HF_SH_cross_gen_ensemble_tests.sh (when using an ensemble of models) and finetune_discriminator/run_agp_HF_SH_cross_gen_val_set_tests.sh (without an ensemble, using a validation set for the stopping condition). Both are in microbiome_transformers.zip.
    * See also finetune_discriminator/README.MD in microbiome_transformers.zip for more general information about the finetuning functionality.
  * To run experiments using the DeepMicro-derived baseline methods:
    * See exps_ae.sh, exps_cae.sh, and exps_baselines.sh in DeepMicro.zip for the experiment commands (for both in-distribution and out-of-distribution experiments).
    * Also see README.md in DeepMicro.zip for more general information on using DeepMicro and our modifications to it.

## Changelog:
**01/29/2025** Updated significant_correlations_tests.py to apply permutation testing and report Cohen's d and Cliff's Delta. Added run_blast_with_downloads.sh, which reports how many taxa in Halfvarson match any taxa in AGP and how many taxa in Schirmer match any taxa in AGP; it is a way of comparing whether Schirmer or Halfvarson is more similar to AGP in terms of the taxa present. We also slightly clarified the README's language to make it clearer where the software can be found.
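Returning to the patient-blocking step in the usage instructions, here is a minimal sketch (ours; the file name and the use of scikit-learn are assumptions, and the repo's own splitting lives in its training scripts) of a host-blocked split driven by the pickled host-ID-to-indices dictionaries produced by host_to_ids.py:

```python
# Group samples by host ID so no patient spans train and test.
import pickle
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

with open("host_to_indices_train.pkl", "rb") as f:  # hypothetical file name
    host_to_idx = pickle.load(f)  # {host_id: [sample indices], ...}

samples = np.concatenate([np.asarray(v) for v in host_to_idx.values()])
groups = np.concatenate([[h] * len(v) for h, v in host_to_idx.items()])

train_pos, test_pos = next(
    GroupShuffleSplit(test_size=0.2, random_state=0).split(samples, groups=groups)
)
train_idx, test_idx = samples[train_pos], samples[test_pos]
```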
  4. This dataset provides segregation energy spectra for a cobalt solute in 7272 aluminum grain boundaries that span the 5D space of crystallographic character. The dataset and some of its characteristics are described in detail in https://doi.org/10.1016/j.actamat.2024.120448. The segregation energy spectra are included in a CSV file, in which each GB is identified by a computeID. The crystallographic character, selected properties, and structure of each GB are available in another dataset at https://doi.org/10.17632/4ykjz4ngwt, which is described in an article at https://doi.org/10.1016/j.actamat.2022.118006. A README file provides a description of the columns of the CSV file.
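Since each GB is keyed by its computeID, linking these spectra to the crystallographic-character dataset is a simple join; a minimal sketch (ours; the file names are hypothetical):

```python
# Join the segregation-energy spectra table with the GB character/property
# table from the companion dataset via the shared computeID key.
import pandas as pd

spectra = pd.read_csv("segregation_spectra.csv")  # hypothetical name
character = pd.read_csv("gb_character.csv")       # from the 2022 dataset
joined = spectra.merge(character, on="computeID", how="inner")
```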
  5. The radon isotope and stable water isotope data for Coal Creek Watershed, Colorado, consist of d2H, d18O, and 222Rn values from samples collected at 8 stream locations along Coal Creek, samples from 7 groundwater springs within the watershed, and precipitation isotope samples collected by the Next Generation Water Observing System (NGWOS) from a collector within the watershed. All stream and spring samples were collected between June and October 2021, and precipitation isotope samples were collected between November 2020 and September 2021. These data were collected to evaluate how groundwater contributions to Coal Creek originating from a fractured hillslope and alluvial fan respond to summer monsoon rains and seasonal drying. Understanding groundwater-surface water interactions in montane systems is critical for the future of water availability in the western US, as groundwater contributions are expected to become more important for sustaining summer stream flows. This data package contains: (1) a csv of all radon samples; (2) a csv of all stream and spring isotope samples; (3) a csv of precipitation isotope samples; and (4) a csv of locations for each sampling site. The dataset additionally includes a file-level metadata file (flmd.csv) that lists each file contained in the dataset with associated metadata, and a data dictionary file (dd.csv) that defines the column/row headers used throughout the files along with units and data types.
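A minimal sketch (ours; only flmd.csv and dd.csv are named in the description, and any other file or column names would need to be taken from the package itself) of using the package's metadata files to orient yourself before analysis:

```python
# List the files in the package, then look up column definitions and units.
import pandas as pd

flmd = pd.read_csv("flmd.csv")  # one row of metadata per file in the dataset
dd = pd.read_csv("dd.csv")      # column/row headers with definition, units, type
print(flmd.head())
print(dd.head())
```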