Title: Data from: Parallel processing in speech perception with local and global representations of linguistic context
Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence-level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.

MEG Data
The MEG data are in FIFF format and can be opened with MNE-Python (see the loading sketch below). The data were converted directly from the acquisition device's native format without any preprocessing. Events contained in the data indicate the stimuli in numerical order. Subjects R2650 and R2652 heard stimulus 11b instead of 11.

Predictor Variables
The original audio files are copyrighted and cannot be shared, but the make_audio folder contains make_clips.py, which can be used to extract the exact clips from the commercially available audiobook (ISBN 978-1480555280). The predictors directory contains all the predictors used in the original study as pickled eelbrain objects; they can be loaded in Python with the eelbrain.load.unpickle function (see the sketch below). The TextGrids directory contains the TextGrids aligned to the audio files.

Source Localization
The localization.zip file contains the files needed for source localization. Structural brain models used in the published analysis are reconstructed by scaling the FreeSurfer fsaverage brain (distributed with FreeSurfer) based on each subject's `MRI scaling parameters.cfg` file; this can be done using the `mne.scale_mri` function. Each subject's MEG folder contains a `subject-trans.fif` file with the coregistration between MEG sensor space and (scaled) MRI space, which is used to compute the forward solution (see the sketch below).
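As a minimal sketch of working with the raw recordings, the snippet below opens one subject's FIFF file with MNE-Python and reads the stimulus events from the trigger channel. The file path is an assumption for illustration; the actual file names inside each subject's MEG folder may differ.

```python
import mne

# Hypothetical path: the actual FIFF file names in each subject's MEG folder may differ.
raw = mne.io.read_raw_fif("R2650/R2650_raw.fif", preload=True)

# The data are unprocessed, so events are read straight from the trigger channel;
# event codes mark the stimuli in numerical order (R2650/R2652 heard 11b instead of 11).
events = mne.find_events(raw, shortest_event=1)
print(raw.info)
print(events[:10])
```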
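Loading a pickled predictor with eelbrain is a one-liner; a sketch follows, in which the predictor file name is hypothetical (substitute any file from the predictors directory).

```python
from eelbrain import load

# Hypothetical file name; use any pickled object from the predictors directory.
predictor = load.unpickle("predictors/example_predictor.pickle")
print(predictor)
```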
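The sketch below outlines the source-localization steps described above: scaling fsaverage to a subject and computing a forward solution from the stored coregistration. The directory layout, file names, and the scale factor are illustrative assumptions only; the real per-subject scaling factors are stored in each subject's `MRI scaling parameters.cfg` file.

```python
import mne

subjects_dir = "localization"   # assumed location of the unzipped localization.zip
subject = "R2650"               # example subject ID

# Recreate the scaled structural model from fsaverage. The scale value below is a
# placeholder; the real parameters are in the subject's "MRI scaling parameters.cfg".
mne.scale_mri("fsaverage", subject, scale=0.95, subjects_dir=subjects_dir,
              annot=True, overwrite=True)

# Forward solution using the coregistration stored in the subject's -trans.fif
# (file names are assumptions for illustration).
raw = mne.io.read_raw_fif(f"{subject}/{subject}_raw.fif")
src = mne.setup_source_space(subject, spacing="ico4", subjects_dir=subjects_dir)
model = mne.make_bem_model(subject, ico=4, conductivity=(0.3,), subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(raw.info, trans=f"{subject}/{subject}-trans.fif",
                                src=src, bem=bem, meg=True, eeg=False)
```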
Brodbeck, Christian; Bhattasali, Shohini; Cruz Heredia, Aura AL; Resnik, Philip; Simon, Jonathan Z; Lau, Ellen
(eLife)
{"Abstract":["Data files were used in support of the research paper titled \u201cMitigating RF Jamming Attacks at the Physical Layer with Machine Learning<\/em>" which has been submitted to the IET Communications journal.<\/p>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nAll data was collected using the SDR implementation shown here: https://github.com/mainland/dragonradio/tree/iet-paper. Particularly for antenna state selection, the files developed for this paper are located in 'dragonradio/scripts/:'<\/p>\n\n'ModeSelect.py': class used to defined the antenna state selection algorithm<\/li>'standalone-radio.py': SDR implementation for normal radio operation with reconfigurable antenna<\/li>'standalone-radio-tuning.py': SDR implementation for hyperparameter tunning<\/li>'standalone-radio-onmi.py': SDR implementation for omnidirectional mode only<\/li><\/ul>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nAuthors: Marko Jacovic, Xaime Rivas Rey, Geoffrey Mainland, Kapil R. Dandekar\nContact: krd26@drexel.edu<\/p>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nTop-level directories and content will be described below. Detailed descriptions of experiments performed are provided in the paper.<\/p>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nclassifier_training: files used for training classifiers that are integrated into SDR platform<\/p>\n\n'logs-8-18' directory contains OTA SDR collected log files for each jammer type and under normal operation (including congested and weaklink states)<\/li>'classTrain.py' is the main parser for training the classifiers<\/li>'trainedClassifiers' contains the output classifiers generated by 'classTrain.py'<\/li><\/ul>\n\npost_processing_classifier: contains logs of online classifier outputs and processing script<\/p>\n\n'class' directory contains .csv logs of each RTE and OTA experiment for each jamming and operation scenario<\/li>'classProcess.py' parses the log files and provides classification report and confusion matrix for each multi-class and binary classifiers for each observed scenario - found in 'results->classifier_performance'<\/li><\/ul>\n\npost_processing_mgen: contains MGEN receiver logs and parser<\/p>\n\n'configs' contains JSON files to be used with parser for each experiment<\/li>'mgenLogs' contains MGEN receiver logs for each OTA and RTE experiment described. Within each experiment logs are separated by 'mit' for mitigation used, 'nj' for no jammer, and 'noMit' for no mitigation technique used. File names take the form *_cj_* for constant jammer, *_pj_* for periodic jammer, *_rj_* for reactive jammer, and *_nj_* for no jammer. Performance figures are found in 'results->mitigation_performance'<\/li><\/ul>\n\nray_tracing_emulation: contains files related to Drexel area, Art Museum, and UAV Drexel area validation RTE studies.<\/p>\n\nDirectory contains detailed 'readme.txt' for understanding.<\/li>Please note: the processing files and data logs present in 'validation' folder were developed by Wolfe et al. and should be cited as such, unless explicitly stated differently. \n\tS. Wolfe, S. Begashaw, Y. Liu and K. R. Dandekar, "Adaptive Link Optimization for 802.11 UAV Uplink Using a Reconfigurable Antenna," MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM), 2018, pp. 
1-6, doi: 10.1109/MILCOM.2018.8599696.<\/li><\/ul>\n\t<\/li><\/ul>\n\nresults: contains results obtained from study<\/p>\n\n'classifier_performance' contains .txt files summarizing binary and multi-class performance of online SDR system. Files obtained using 'post_processing_classifier.'<\/li>'mitigation_performance' contains figures generated by 'post_processing_mgen.'<\/li>'validation' contains RTE and OTA performance comparison obtained by 'ray_tracing_emulation->validation->matlab->outdoor_hover_plots.m'<\/li><\/ul>\n\ntuning_parameter_study: contains the OTA log files for antenna state selection hyperparameter study<\/p>\n\n'dataCollect' contains a folder for each jammer considered in the study, and inside each folder there is a CSV file corresponding to a different configuration of the learning parameters of the reconfigurable antenna. The configuration selected was the one that performed the best across all these experiments and is described in the paper.<\/li>'data_summary.txt'this file contains the summaries from all the CSV files for convenience.<\/li><\/ul>"]}
Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV) and a thin device thickness. Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and a multi-stage physics-informed deep learning model. This reduces the computational resource demands by orders of magnitude and facilitates fast reconstruction. Our deep learning algorithm can reconstruct object volumes over 4×6×0.6 mm3. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed, 3D imaging applications with compact device footprint. # DeepInMiniscope: Deep-learning-powered physics-informed integrated miniscope [https://doi.org/10.5061/dryad.6t1g1jx83](https://doi.org/10.5061/dryad.6t1g1jx83) ## Description of the data and file structure ### DeepInMiniscope: Learned Integrated Miniscope ### Datasets, models and codes for 2D and 3D sample reconstructions. Dataset for 2D reconstruction includes test data for green stained lens tissue. Input: measured images of green fluorescent stained lens tissue, dissembled into sub-FOV patches. Output: the slide containing green lens tissue features. Dataset for 3D sample reconstructions includes test data for 3D reconstruction of in-vivo mouse brain video recording. Input: Time-series standard-derivation of difference-to-local-mean weighted raw video. Output: reconstructed 4-D volumetric video containing a 3-dimensional distribution of neural activities. ## Files and variables ### Download data, code, and sample results 1. Download data `data.zip`, code `code.zip`, results `results.zip`. 2. Unzip the downloaded files and place them in the same main folder. 3. Confirm that the main folder contains three subfolders: `data`, `code`, and `results`. Inside the `data` and `code` folder, there should be subfolders for each test case. ## Data 2D_lenstissue **data_2d_lenstissue.mat:** Measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches. * **Xt:** stacked 108 FOVs of measured image, each centered at one microlens unit with 720 x 720 pixels. Data dimension in order of (batch, height, width, FOV). * **Yt:** placeholder variable for reconstructed object, each centered at corresponding microlens unit, with 180 x 180 voxels. Data dimension in order of (batch, height, width, FOV). **reconM_0308:** Trained Multi-FOV ADMM-Net model for 2D lens tissue reconstruction. **gen_lenstissue.mat:** Generated lens tissue reconstruction by running the model with code **2D_lenstissue.py** * **generated_images:** stacked 108 reconstructed FOVs of lens tissue sample by multi-FOV ADMM-Net, the assembled full sample reconstruction is shown in results/2D_lenstissue_reconstruction.png 3D_mouse **reconM_g704_z5_v4:** Trained 3D Multi-FOV ADMM-Net model for 3D sample reconstructions **t_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290.mat:** Time-series standard-deviation of difference-to-local-mean weighted raw video. 
* **Xts:** test video with 290 frames and each frame 6 FOVs, with 1408 x 1408 pixels per FOV. Data dimension in order of (frames, height, width, FOV). **gen_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290_v4.mat:** Generated 4D volumetric video containing 3-dimensional distribution of neural activities. * **generated_images_fu:** frame-by-frame 3D reconstruction of recorded video in uint8 format. Data dimension in order of (batch, FOV, height, width, depth). Each frame contains 6 FOVs, and each FOV has 13 reconstruction depths with 416 x 416 voxels per depth. Variables inside saved model subfolders (reconM_0308 and reconM_g704_z5_v4): * **saved_model.pb:** model computation graph including architecture and input/output definitions. * **keras_metadata.pb:** Keras metadata for the saved model, including model class, training configuration, and custom objects. * **assets:** external files for custom assets loaded during model training/inference. This folder is empty, as the model does not use custom assets. * **variables.data-00000-of-00001:** numerical values of model weights and parameters. * **variables.index:** index file that maps variable names to weight locations in .data. ## Code/software ### Set up the Python environment 1. Download and install the [Anaconda distribution](https://www.anaconda.com/download). 2. The code was tested with the following packages * python=3.9.7 * tensorflow=2.7.0 * keras=2.7.0 * matplotlib=3.4.3 * scipy=1.7.1 ## Code **2D_lenstissue.py:** Python code for Multi-FOV ADMM-Net model to generate reconstruction results. The function of each script section is described at the beginning of each section. **lenstissue_2D.m:** Matlab code to display the generated image and reassemble sub-FOV patches. **sup_psf.m:** Matlab script to load microlens coordinates data and to generate PSF pattern. **lenscoordinates.xls:** Microlens units coordinates table. **3D mouse.py:** Python code for Multi-FOV ADMM-Net model to generate reconstruction results. The function of each script section is described at the beginning of each section. **mouse_3D.m:** Matlab code to display the reconstructed neural activity video and to calculate temporal correlation. ## Access information Other publicly accessible locations of the data: * [https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope](https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope)
{"Abstract":["We use open source human gut microbiome data to learn a microbial\n “language” model by adapting techniques from Natural Language Processing\n (NLP). Our microbial “language” model is trained in a self-supervised\n fashion (i.e., without additional external labels) to capture the\n interactions among different microbial taxa and the common compositional\n patterns in microbial communities. The learned model produces\n contextualized taxon representations that allow a single microbial taxon\n to be represented differently according to the specific microbial\n environment in which it appears. The model further provides a sample\n representation by collectively interpreting different microbial taxa in\n the sample and their interactions as a whole. We demonstrate that, while\n our sample representation performs comparably to baseline models in\n in-domain prediction tasks such as predicting Irritable Bowel Disease\n (IBD) and diet patterns, it significantly outperforms them when\n generalizing to test data from independent studies, even in the presence\n of substantial distribution shifts. Through a variety of analyses, we\n further show that the pre-trained, context-sensitive embedding captures\n meaningful biological information, including taxonomic relationships,\n correlations with biological pathways, and relevance to IBD expression,\n despite the model never being explicitly exposed to such signals."],"Methods":["No additional raw data was collected for this project. All inputs\n are available publicly. American Gut Project, Halfvarson, and Schirmer raw\n data are available from the NCBI database (accession numbers PRJEB11419,\n PRJEB18471, and PRJNA398089, respectively). We used the curated data\n produced by Tataru and David, 2020."],"TechnicalInfo":["# Code and data for "Learning a deep language model for microbiomes:\n the power of large scale unlabeled microbiome data" ## Data: *\n vocab_embeddings.npy * Fixed vocabulary embeddings produced from prior\n work: [Decoding the language of microbiomes using word-embedding\n techniques, and applications in inflammatory bowel\n disease](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859). Adapted from [here](http://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/embed/). * microbiomedata.zip * Contains the labels and data for the three datasets used in this study. Specifically, it includes: * IBD_(test|train)*(512|otu).npy and IBD*(test|train)_labels.npy * halfvarson_(512_otu|otu).npy and halfvarson_IBD_labels.npy * schirmer_IBD_(512_otu|otu).npy and schirmer_IBD_labels.npy * (test|train)encodings_(512|1897).npy * The data are stored as n_samples x max_sample_size x 2 numpy arrays, containing both the vocab IDs of the taxa in the samples, as well as the abundance values for each taxa. data[:,:,0] will give the vocab IDs, and data[:,:,1] will give the abundances. * Files which mention '512' are truncated to only have up to 512 taxa in them (max_sample_size = 512). * Note that we refer to the schirmer dataset as HMP2 in the paper. * (test|train)encodings_(512|1897).npy represents the full collection of [American Gut Project](https://doi.org/10.1128%2FmSystems.00031-18) data, regardless of whether that data has IBD labels or not, split into train / test splits. * Also contains the folders fruitdata and vegdata containing fruit and vegetable data respectively, and the file README, which documents the contents of the first two folders. 
* American Gut Project, Halfvarson, and Schirmer raw data are available from the NCBI database (accession numbers PRJEB11419, PRJEB18471, and PRJNA398089, respectively). We used the curated data produced by [Tataru and David, 2020](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859). * pretrainedmodels.zip * Contains a sequence of pretrained discriminator models across different epochs, allowing users to compute embeddings without having to pretrain models themselves. Each model is stored as a pair of a pytorch_model.bin file containing weights and a config.json file containing model config parameters. Each pair is located in its own folder whose name corresponds to epoch. E.g., "5head5layer_epoch60_disc" stores the discriminator model that were trained for 60 epochs. Model checkpoints can be loaded by providing a path to the pytorch_model.bin file in the --load_disc argument of begin.py in microbiome_transformers-master/finetune_discriminator. * ensemble.zip * Contains the result of an ensemble finetuning run, allowing users to perform interpretability / attribution experiments without having to train models themselves. Each model is similarly stored as a pytorch_model.bin file and config.json file in its own folder. E.g., the run3_epoch0_disc folder stores the model from the third finetuning run (with epoch0 reflecting that the finetuning only takes one epoch). * seqs_.07_embed.fasta * Contains the 16S sequences associated with each taxon vocabulary element of our study, originally produced by prior work: [Decoding the language of microbiomes using word-embedding techniques, and applications in inflammatory bowel disease](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859). Also available [here](http://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/embed/seqs_.07_embed.fasta). ## Code/Software: Note that the Dryad repository stores the code and software discussed here is available at [this](https://doi.org/10.5281/zenodo.13858903) site, which is linked under the "Software" tab on the current page.\\ The following software include hardcoded absolute paths to various files of interest (described above). These paths have been changed to be of the form "/path/to/file_of_interest", where the "path/to" portion must be changed to reflect the actual paths on whichever system you run these on. * Attribution_calculations.ipynb * Used to calculate per-sample model prediction scores, per-taxa attribution values (used for interpretability), as well as per-taxa averaged embeddings (used for plotting the taxa). Note the current file is set to compute attributions only for IBD, but can easily be changed for Schirmer/HMP2 and Halfvarson. * Process_Attributions_No_GPU.ipynb * Takes the per-sample prediction scores and the per-taxa attribution values (both from Attribution_calculations.ipynb) and identifies the taxa most and least associated with IBD. * assign_16S_to_phyla.R * An R script that makes phylogenetic assignments to the 16S sequences from seqs_.07_embed.fasta. Invoke with 'Rscript assign_16S_to_phyla.R' and no arguments. * run_blast_with_downloads.sh * Compares the overlap in ASVs between Halfvarson and AGP versus between HMP2 and AGP. Must have BLAST installed. BLAST parameters are set in file, via the results filtering lines ("awk '$5 < 1e-20 && $8 >= 99' | \\\\"), that set the e-value to 20^-20 and the percent similarity to 99%, with one line for each of the two pairwise comparisons. 
Simply run via "bash run_blast_with_downloads.sh". * Plot_microbiome_transformers_results.ipynb * Loads the averaged taxa embeddings (from Attribution_calculations.ipynb) and the vocabulary embeddings (from [Decoding the language of microbiomes using word-embedding techniques, and applications in inflammatory bowel disease](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007859) / vocab_embeddings.npy), as well as the taxonomic assignments (from assign_16S_to_phyla.R), and generates the various TSNE-based plots of the embedding space geometry. It also generates plots to compare the clustering quality of the averaged embeddings and the vocabulary embeddings. * DeepMicro.zip * A modified version of [DeepMicro](https://github.com/minoh0201/DeepMicro), adapted to more easily run the DeepMicro-based baselines included in our paper. Most additional functionality is described in the 'help' strings of the additional arguments and the docstrings of the functions. In particular, since our data include unlabeled samples witch nonetheless contribute to learning an embedding space, we needed to add a "--pretraining_data" argument to allow such data to be included in the self-supervised learning portion of the baselines. * "convert_data.py" under the "data" folder serves as a utility to help convert from the coordinate-list format of this study to the one-hot abundance table format expected by DeepMicro. * "get_unlabeled_pretraining_data.py" under the "data" folder processes labeled microbiome datasets (fruit, vegetable, and IBD) and extends them with unlabeled data from the American Gut Project (AGP). * host_to_ids.py under the data/host_to_indices folder will combine metadata from err-to-qid.txt and AG_mapping.txt (both available at *[https://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/AG_new](https://files.cqls.oregonstate.edu/David_Lab/microbiome_embeddings/data/AG_new)*) with the sequences in seqs_.07_embed.fasta and the numpy data files to create dictionaries that map from host ids to indices in the numpy files, then store those as pickle files. This allow for future training runs from the transformer or the baselines to block their train / validation / test splits by host id. * exps_ae.sh, exps_cae.sh, and exps_baselines.sh are shell scripts with the python commands that run the various DeepMicro-based baselines. * "display_results.py" is a helper for accumulating experimental results and displaying them in a table. * property_pathway_correlations.zip * A folder containing the required code and files to run the property and pathway correlation experiments. * property_pathway_correlations contains three subfolders: * figures: stores output figures such as the heatmap of property - pathway correlation strengths. * csvs: contains gen_hists.py, which takes the outputs of significant correlation counts / strength from metabolic_pathway_correlations.R and plots a histogram to compare the property correlations of the initial vocabulary embeddings with those of the learned embeddings. Also contains significant_correlations_tests.py, which applies non-parametric and permutation tests to statistically determine whether the learned embeddings tend to have stronger property correlations. Also reports the effect size via Cliff's Delta and Cohen's d statistics. 
* new_hists: will store the histogram generated from gen_hists.py * pathways: stores text and csv outputs, such as the correlation strengths between each property and pathway pair (property_pathway_dict_allsig.txt), the top 20 pathways associated with each property (top20Paths_per_property_(ids|names)_v2.csv), and list of which pathway is most correlated with each property (property_pathway_dict.txt). * metabolic_pathways: contains the code and data required to actually run the correlation tests. The code appears in metabolic_pathway_correlations.R, and simply runs with the command Rscript and no arguments. The data appears in the data subfolder, which itself contains three subfolders: * embed: contains embeddings to be loaded by metabolic_pathway_correlations.R, e.g., merged_sequences_embeddings.txt or glove_emb_AG_newfilter.07_100.txt. Also contains a script assemble_new_embs.py, which lets new embeddings txt files be formatted from a pytorch embeddings tensor, such as the one stored in epoch_120_IBD_avg_vocab_embeddings.pth, as well as seqs_.07_embed.txt. * AG_new/pathways: contains a bunch of files like "corr_matches_i_i+9.RDS", which store intermediate results of the permutation tests, so they don't all have to be calculated at once. Should be recomputed with each run. * pathways: mostly stores various other input and output RDS files: * corr_matches.rds : stores intermediate results of statistical significance testing with model embeddings. Recomputed each time. * corr_matches_pca.rds : stores prior result of statistical significance testing with PCA embeddings. Loaded from storage by default. * filtered_otu_pathway_table.RDS / txt : stores associations of each taxa vocab entry with metabolic pathways, filtered to exclude pathways that are no longer present in KEGG. * pathway_table.RDS : updated pathway table saved by metabolic_pathway_correlations.R each run. * pca_embedded_taxa.rds : stores PCA embeddings of all the vocab taxa entries. * microbiome_transformers.zip * A backup of our [GitHub repository](https://github.com/QuintinPope/microbiome_transformers) for the model architecture (both generator and discriminator), the pretraining processes for both, as well as the model finetuning scripts. Contains its own READMEs. * Has the code for pretraining generator models. See pretrain_generator/train_command.sh and pretrain_generator/README.MD * Has the code for using those models to pretrain discriminator models. See pretrain_discriminator/train_command.sh and pretrain_discriminator/README.MD * Has the code for finetuning those pretrained discriminator models on the classification data in our study (both within-distribution experiments and out of distribution experiments). * See finetune_discriminator/README.MD for general info on finetuning. * See finetune_discriminator/run_agp_agp_exps.sh for the commands to run the in-distribution experiments. * See finetune_discriminator/run_agp_HF_SH_cross_gen_ensemble_tests.sh to run the out of distribution experiments using an ensemble of models. * See finetune_discriminator/run_agp_HF_SH_cross_gen_val_set_tests.sh to run the out of distribution experiments without an ensemble and using a val set for stopping condition. 
## File Structures: **microbiomedata.zip** ``` |____total_IBD_otu.npy |____IBD_train_512.npy |____halfvarson_IBD_labels.npy |____IBD_train_otu.npy |____test_encodings_512.npy |____total_IBD_512.npy |____train_encodings_512.npy |____schirmer_IBD_labels.npy |____schirmer_IBD_512_otu.npy |____fruitdata | |____FRUIT_FREQUENCY_all_label.npy | |____FRUIT_FREQUENCY_otu_512.npy | |____FRUIT_FREQUENCY_binary24_labels.npy | |____FRUIT_FREQUENCY_all_otu.npy | |____FRUIT_FREQUENCY_binary34_labels.npy |____vegdata | |____VEGETABLE_FREQUENCY_all_label.npy | |____VEGETABLE_FREQUENCY_binary24_labels.npy | |____VEGETABLE_FREQUENCY_otu_512.npy | |____VEGETABLE_FREQUENCY_all_otu.npy | |____VEGETABLE_FREQUENCY_binary34_labels.npy |____README |____schirmer_IBD_otu.npy |____IBD_test_label.npy |____IBD_test_512.npy |____IBD_train_label.npy |____IBD_test_otu.npy |____test_encodings_1897.npy |____halfvarson_otu.npy |____halfvarson_512_otu.npy |____total_IBD_label.npy |____train_encodings_1897.npy ``` **pretrainedmodels.zip** ``` ____5head5layer_epoch60_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch30_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch105_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch0_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch45_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch90_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch120_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch15_disc | |____config.json | |____pytorch_model.bin |____5head5layer_epoch75_disc | |____config.json | |____pytorch_model.bin ``` **ensemble.zip** ``` |____run4_epoch0_disc | |____config.json | |____pytorch_model.bin |____run8_epoch0_disc | |____config.json | |____pytorch_model.bin |____run1_epoch0_disc | |____config.json | |____pytorch_model.bin |____run2_epoch0_disc | |____config.json | |____pytorch_model.bin |____run10_epoch0_disc | |____config.json | |____pytorch_model.bin |____run7_epoch0_disc | |____config.json | |____pytorch_model.bin |____run9_epoch0_disc | |____config.json | |____pytorch_model.bin |____run5_epoch0_disc | |____config.json | |____pytorch_model.bin |____run6_epoch0_disc | |____config.json | |____pytorch_model.bin |____run3_epoch0_disc | |____config.json | |____pytorch_model.bin ``` **DeepMicro.zip** ``` |____LICENSE |____deep_env_config.yml |____DM.py |____exception_handle.py |____README.md |____exps_cae.sh |____exps_ae.sh |____exps_baselines.sh |____results | |____display_results.py | |____plots |____data | |____host_to_indices | | |____host_to_ids.py | |____marker.zip | |____UserLabelExample.csv | |____convert_data.py | |____get_unlabeled_pretraining_data.py | |____UserDataExample.csv | |____abundance.zip |____DNN_models.py ``` **property_pathway_correlations.zip** ``` |____metabolic_pathways | |____metabolic_pathway_correlations.R | |____data | | |____AG_new | | | |____pathways | | | | |____corr_matches_141_150.RDS | | | | |____corr_matches_81_90.RDS | | | | |____corr_matches_21_30.RDS | | | | |____corr_matches_51_60.RDS | | | | |____corr_matches_121_130.RDS | | | | |____corr_matches_101_110.RDS | | | | |____corr_matches_61_70.RDS | | | | |____corr_matches_31_40.RDS | | | | |____corr_matches_131_140.RDS | | | | |____corr_matches_181_190.RDS | | | | |____corr_matches_161_170.RDS | | | | |____corr_matches_11_20.RDS | | | | |____corr_matches_1_10.RDS | | | | |____corr_matches_191_200.RDS | | | | |____corr_matches_171_180.RDS | | | | 
|____corr_matches_71_80.RDS | | | | |____corr_matches_91_100.RDS | | | | |____corr_matches_111_120.RDS | | | | |____corr_matches_41_50.RDS | | | | |____corr_matches_151_160.RDS | | |____embed | | | |____seqs_.07_embed.txt | | | |____merged_sequences_embeddings.txt | | | |____assemble_new_embs.py | | | |____epoch_120_IBD_avg_vocab_embeddings.pth | | | |____glove_emb_AG_newfilter.07_100.txt | | |____pathways | | | |____filtered_otu_pathway_table.RDS | | | |____pca_embedded_taxa.rds | | | |____pathway_table.RDS | | | |____corr_matches.rds | | | |____filtered_otu_pathway_table.txt | | | |____corr_matches_pca.rds |____figures | |____csvs | | |____significant_correlations_tests.py | | |____gen_hists.py | |____new_hists |____pathways | |____top20Paths_per_property_ids_v2.csv | |____top20Paths_per_property_names_v2.csv | |____property_pathway_dict_allsig.txt | |____property_pathway_dict.txt ``` **microbiome_transformers.zip** ``` |____electra_trace.py |____multitaskfinetune | |____begin.py | |____pretrain_hf.py | |____electra_discriminator.py | |____dataset.py | |____startup |____finetune_discriminator | |____begin.py | |____pretrain_hf.py | |____electra_pretrain_model.py | |____electra_discriminator.py | |____run_agp_agp_exps.sh | |____run_agp_HF_SH_cross_gen_val_set_tests.sh | |____run_agp_HF_SH_cross_gen_ensemble_tests.sh | |____hf_startup_3 | |____hf_startup_4 | |____README.MD | |____dataset.py | |____torch_rbf.py |____combine_sets.py |____pretrain_discriminator | |____begin.py | |____pretrain_hf.py | |____electra_pretrain_model.py | |____hf_startup | |____README.MD | |____train_command.sh | |____dataset.py |____benchmark_startup |____pretrain_generator | |____begin.py | |____pretrain_hf.py | |____electra_pretrain_model.py | |____hf_startup | |____README.MD | |____train_command.sh | |____dataset.py |____README.md |____compress_data.py |____generate_commands.py |____attention_benchmark | |____begin.py | |____pretrain_hf.py | |____electra_discriminator.py | |____hf_startup | |____dataset.py |____data_analyze.py |____benchmarks.py ``` # Usage Instructions Intended to cover both repeating the experiments we performed in our paper, or extending our methods to new datasets: * Prepare input data and initial embeddings * Vocabulary: Set the initial vocabulary size to accommodate all the unique OTUs/ASVs found in the data, plus special tokens such as mask, padding, and cls tokens. * Initial embeddings: Each vocabulary element (including special tokens) is assigned a unique embedding vector. * Input data format: Given the highly sparse nature of most microbiome samples relative to vocabulary size, we store each sample’s abundance information in coordinate-list format. I.e., a data file is a numpy array of size (n_samples, max_sample_size, 2), and each sample is stored as a (max_sample_size, 2) array. * Pretrain a language model on those embeddings * ELECTRA generators: Pretrain a sequence of generator models on unsupervised microbiome data. See pretrain_generator/train_command.sh and pretrain_generator/README.MD in microbiome_transformers.zip * ELECTRA discriminators: Pretrain a sequence of discriminator models on unsupervised microbiome data using outputs from the previously trained generators to generate substitutions for the original sequences. 
See pretrain_discriminator/train_command.sh and pretrain_discriminator/README.MD in microbiome_transformers.zip * Characterize the language model with the following interpretability steps: * Perform taxonomic assignments: Use assign_16S_to_phyla.R (or similar R code) to map your sequences to the phylogenetic hierarchy. * Attribution calculations: Use Attribution_calculations.ipynb to calculate per-sample model prediction scores, per-taxa attribution values (used for interpretability), as well as per-taxa averaged embeddings (used for plotting the taxa). * Embeddings visualizations and embedding space clustering: * Provide Plot_microbiome_transformers_results.ipynb with the paths to your per-taxa averaged embeddings calculated above, initial vocabulary embeddings (equivalent of vocab_embeddings.npy), and taxonomic assignments. * It will help generate TSNE visualizations of the two embedding spaces, as well as cross-comparisons of where taxa in one embedding space appear in the other embedding space. * The notebook contains preset regions for which parts of the two embedding spaces to compare (via bounding boxes with the select_by_rectangles function). These regions will likely not work for a new dataset, so you'll have to change them. * Finally, the notebook will also plot graphs comparing the clusterability of the data in the original two embedding spaces (non TSNE), so as to not be fooled by the dimension reduction technique. * Identify high-attribution taxa: * Process_Attributions_No_GPU.ipynb takes the per-sample prediction scores and the per-taxa attribution values (both from Attribution_calculations.ipynb) and identifies the taxa most and least associated with IBD. * It also includes filtration steps for the attribution calculations (e.g., only analyze taxa that appear >= 5 times, only use attribution scores that are confident and correct, etc), reflecting those we used in the paper. * The notebook will identify the taxa IDs of the top and bottom attributed taxa, then it will use seqs_.07_embed.fasta (or similar taxa-ID mapping) to print the 16S sequences associated with those taxa. * Pathway correlations: * Use assemble_new_embs.py to format pytorch vocab embedding files into the expected format for metabolic_pathway_correlations.R * Use metabolic_pathway_correlations.R (in the metabolic_pathways folder of property_pathway_correlations.zip) to produce heatmaps of embedding dim / metabolic pathway correlation strengths, and to save a file with the statistically significant correlation data. * Use gen_hists.py (in the figures/csvs folder of property_pathway_correlations.zip) to generate histograms comparing embedding dim / pathway correlation strengths of the initial fixed embeddings with those of the learned contextual embeddings. * Use significant_correlations_tests.py (also in the figures/csvs folder of property_pathway_correlations.zip) to apply non-parametric statistical tests to determine whether the distribution of embedding dim / pathway correlation strengths from the learned contextual embeddings is shifted right compared to those from the fixed embeddings. * Evaluate the language model for downstream task * First, account for any patients who have multiple samples in the dataset by blocking out any train / validation / test splits you perform by patient ID. 
Future steps will assume you have dictionaries (stored as pickle files) that map from some patient ID strings (which just need to be unique per patient) to indices of the data files (i.e., you need one mapping dict per training data file). In general, the way to do this will depend on how your patient metadata is structured. You can look to host_to_ids.py (in DeepMicro.zip) to see how we combined metadata from multiple files and compared that with the different training data numpy files to produce this mapping. * To run experiments using our paper's transformer methods: * "Within distribution" evaluations: Relevant commands are in finetune_discriminator/run_agp_agp_exps.sh in microbiome_transformers.zip * "Out of distribution" evaluations: Relevant commands are in finetune_discriminator/run_agp_HF_SH_cross_gen_ensemble_tests.sh (when using an ensemble of models) and finetune_discriminator/run_agp_HF_SH_cross_gen_val_set_tests.sh (without using an ensemble and when using a val set for stopping condition). Both are in microbiome_transformers.zip * See also finetune_discriminator/README.MD in microbiome_transformers.zip for more general information about the finetuning functionality * To run experiments using the DeepMicro-derived baseline methods: * See exps_ae.sh, exps_cae.sh, and exps_baselines.sh in DeepMicro.zip for the experiment commands (for both in-distribution and out of distribution experiments) * Also see README.md in DeepMicro.zip for more general information on using DeepMicro and our modifications to it. ## Changelog: **01/29/2025** Updated significant_correlations_tests.py to apply permutation testing and report Cohan's d and Cliff's Delta. Added run_blast_with_downloads.sh, which reports how many taxa in Halfvarson match to any taxa in AGP and how many taxa in Schirmer match any taxa in AGP. It's a way of comparing which of Schirmer or Halfvarson is more similar to AGP in terms of taxa that are present. We also slightly clarified the README's language to make it clearer where the software can be found."]}
{"Abstract":["Data files were used in support of the research paper titled "\u201cExperimentation Framework for Wireless\nCommunication Systems under Jamming Scenarios" which has been submitted to the IET Cyber-Physical Systems: Theory & Applications journal. <\/p>\n\nAuthors: Marko Jacovic, Michael J. Liston, Vasil Pano, Geoffrey Mainland, Kapil R. Dandekar\nContact: krd26@drexel.edu<\/p>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nTop-level directories correspond to the case studies discussed in the paper. Each includes the sub-directories: logs, parsers, rayTracingEmulation, results. <\/p>\n\n--------------------------------<\/p>\n\nlogs: - data logs collected from devices under test\n - 'defenseInfrastucture' contains console output from a WARP 802.11 reference design network. Filename structure follows '*x*dB_*y*.txt' in which *x* is the reactive jamming power level and *y* is the jaming duration in samples (100k samples = 1 ms). 'noJammer.txt' does not include the jammer and is a base-line case. 'outMedian.txt' contains the median statistics for log files collected prior to the inclusion of the calculation in the processing script. \n - 'uavCommunication' contains MGEN logs at each receiver for cases using omni-directional and RALA antennas with a 10 dB constant jammer and without the jammer. Omni-directional folder contains multiple repeated experiments to provide reliable results during each calculation window. RALA directories use s*N* folders in which *N* represents each antenna state. \n - 'vehicularTechnologies' contains MGEN logs at the car receiver for different scenarios. 'rxNj_5rep.drc' does not consider jammers present, 'rx33J_5rep.drc' introduces the periodic jammer, in 'rx33jSched_5rep.drc' the device under test uses time scheduling around the periodic jammer, in 'rx33JSchedRandom_5rep.drc' the same modified time schedule is used with a random jammer. <\/p>\n\n--------------------------------<\/p>\n\nparsers: - scripts used to collect or process the log files used in the study\n - 'defenseInfrastructure' contains the 'xputFiveNodes.py' script which is used to control and log the throughput of a 5-node WARP 802.11 reference design network. Log files are manually inspected to generate results (end of log file provides a summary). \n - 'uavCommunication' contains a 'readMe.txt' file which describes the parsing of the MGEN logs using TRPR. TRPR must be installed to run the scripts and directory locations must be updated. \n - 'vehicularTechnologies' contains the 'mgenParser.py' script and supporting 'bfb.json' configuration file which also require TRPR to be installed and directories to be updated. 
<\/p>\n\n--------------------------------<\/p>\n\nrayTracingEmulation: - 'wirelessInsiteImages': images of model used in Wireless Insite\n - 'channelSummary.pdf': summary of channel statistics from ray-tracing study\n - 'rawScenario': scenario files resulting from code base directly from ray-tracing output based on configuration defined by '*WI.json' file \n - 'processedScenario': pre-processed scenario file to be used by DYSE channel emulator based on configuration defined by '*DYSE.json' file, applies fixed attenuation measured externally by spectrum analyzer and additional transmit power per node if desired\n - DYSE scenario file format: time stamp (milli seconds), receiver ID, transmitter ID, main path gain (dB), main path phase (radians), main path delay (micro seconds), Doppler shift (Hz), multipath 1 gain (dB), multipath 1 phase (radians), multipath 1 delay relative to main path delay (micro seconds), multipath 2 gain (dB), multipath 2 phase (radians), multipath 2 delay relative to main path delay (micro seconds)\n - 'nodeMapping.txt': mapping of Wireless Insite transceivers to DYSE channel emulator physical connections required\n - 'uavCommunication' directory additionally includes 'antennaPattern' which contains the RALA pattern data for the omni-directional mode ('omni.csv') and directional state ('90.csv')<\/p>\n\n--------------------------------<\/p>\n\nresults: - contains performance results used in paper based on parsing of aforementioned log files\n <\/p>"]}
Brodbeck, Christian, Bhattasali, Shohini, Cruz Heredia, Aura A., Resnik, Philip, Simon, Jonathan Z., and Lau, Ellen. Data from: Parallel processing in speech perception with local and global representations of linguistic context. Web. doi:10.5061/dryad.nvx0k6dv0.
Brodbeck, Christian, Bhattasali, Shohini, Cruz Heredia, Aura A., Resnik, Philip, Simon, Jonathan Z., & Lau, Ellen. Data from: Parallel processing in speech perception with local and global representations of linguistic context. https://doi.org/10.5061/dryad.nvx0k6dv0
Brodbeck, Christian, Bhattasali, Shohini, Cruz Heredia, Aura A., Resnik, Philip, Simon, Jonathan Z., and Lau, Ellen.
"Data from: Parallel processing in speech perception with local and global representations of linguistic context". Country unknown/Code not available: Dryad. https://doi.org/10.5061/dryad.nvx0k6dv0.https://par.nsf.gov/biblio/10340192.
@article{osti_10340192,
place = {Country unknown/Code not available},
title = {Data from: Parallel processing in speech perception with local and global representations of linguistic context},
url = {https://par.nsf.gov/biblio/10340192},
DOI = {10.5061/dryad.nvx0k6dv0},
abstractNote = {{"Abstract":["Speech processing is highly incremental. It is widely accepted that human\n listeners continuously use the linguistic context to anticipate upcoming\n concepts, words, and phonemes. However, previous evidence supports two\n seemingly contradictory models of how a predictive context is integrated\n with the bottom-up sensory input: Classic psycholinguistic paradigms\n suggest a two-stage process, in which acoustic input initially leads to\n local, context-independent representations, which are then quickly\n integrated with contextual constraints. This contrasts with the view that\n the brain constructs a single coherent, unified interpretation of the\n input, which fully integrates available information across\n representational hierarchies, and thus uses contextual constraints to\n modulate even the earliest sensory representations. To distinguish these\n hypotheses, we tested magnetoencephalography responses to continuous\n narrative speech for signatures of local and unified predictive models.\n Results provide evidence that listeners employ both types of models in\n parallel. Two local context models uniquely predict some part of early\n neural responses, one based on sublexical phoneme sequences, and one based\n on the phonemes in the current word alone; at the same time, even early\n responses to phonemes also reflect a unified model that incorporates\n sentence-level constraints to predict upcoming phonemes. Neural source\n localization places the anatomical origins of the different predictive\n models in nonidentical parts of the superior temporal lobes bilaterally,\n with the right hemisphere showing a relative preference for more local\n models. These results suggest that speech processing recruits both local\n and unified predictive models in parallel, reconciling previous disparate\n findings. Parallel models might make the perceptual system more robust,\n facilitate processing of unexpected inputs, and serve a function in\n language acquisition."],"Other":["MEG Data MEG data is in FIFF format and can be opened with MNE-Python.\n Data has been directly converted from the acquisition device native format\n without any preprocessing. Events contained in the data indicate the\n stimuli in numerical order. Subjects R2650 and R2652 heard stimulus 11b\n instead of 11. Predictor Variables The original audio files are\n copyrighted and cannot be shared, but the make_audio folder contains\n make_clips.py which can be used to extract the exact clips from the\n commercially available audiobook (ISBN 978-1480555280). The predictors\n directory contains all the predictors used in the original study as\n pickled eelbrain objects. They can be loaded in Python with the\n eelbrain.load.unpickle function. The TextGrids directory contains the\n TextGrids aligned to the audio files. Source Localization The\n localization.zip file contains files needed for source localization.\n Structural brain models used in the published analysis are reconstructed\n by scaling the FreeSurfer fsaverage brain (distributed with FreeSurfer)\n based on each subject's `MRI scaling parameters.cfg` file. This can\n be done using the `mne.scale_mri` function. Each subject's MEG folder\n contains a `subject-trans.fif` file which contains the coregistration\n between MEG sensor space and (scaled) MRI space, which is used to compute\n the forward solution."]}},
journal = {},
publisher = {Dryad},
author = {Brodbeck, Christian and Bhattasali, Shohini and Cruz Heredia, Aura A. and Resnik, Philip and Simon, Jonathan Z. and Lau, Ellen},
}