Title: Simulation codes and data for "Reduced modeling and global instability of finite-Reynolds-number flow in compliant rectangular channels"
Abstract: Zip files with the codes and data needed to reproduce the plots in the manuscript "Reduced modeling and global instability of finite-Reynolds-number flow in compliant rectangular channels" by Wang & Christov (2022).
Award ID(s):
1705637
PAR ID:
10379351
Author(s) / Creator(s):
Publisher / Repository:
Purdue University Research Repository
Date Published:
Edition / Version:
1.0
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This dataset contains maps of water yield and nitrogen (N) yield for each year from 1980 to 2017, covering the Mississippi/Atchafalaya River Basin (MARB). The maps were reconstructed by aggregating daily estimates from the Dynamic Land Ecosystem Model (DLEM) and are at 5-min × 5-min (0.08333° Lat × 0.08333° Lon) resolution. There are two subfolders, "TT" and "DT", within this folder, indicating "traditional timing" and "dynamic timing" of nitrogen fertilizer applications, respectively, as in the model experiments in the main text. The "TT" folder contains the gridded model estimates of water yield (named "Runoff") and nitrogen yield (named "Nleach") on an annual basis. TT reflects our best estimate of water and N fluxes within the context of multi-factor environmental changes, including climate, atmospheric CO2 concentration, N deposition, land use, and human management practices (such as fertilizer use, tillage, tile drainage, etc.). The "DT" folder contains only the model estimates of nitrogen yield ("Nleach") under an alternative N management practice. More details can be found in the linked publication.
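As an illustration of the grid geometry described above, the 5-arcmin cell-center coordinates can be constructed as follows. This is a minimal sketch: the MARB bounding box below is approximate and purely illustrative, not taken from the dataset.

```python
import numpy as np

# 5 arc-minutes expressed in degrees (= 0.08333...)
RES = 5.0 / 60.0

# Approximate MARB bounding box (illustrative only)
LAT_MIN, LAT_MAX = 28.0, 50.0
LON_MIN, LON_MAX = -114.0, -77.0

# Cell-center coordinates of the 5-min x 5-min grid
lats = np.arange(LAT_MIN + RES / 2, LAT_MAX, RES)
lons = np.arange(LON_MIN + RES / 2, LON_MAX, RES)
```

Each annual "Runoff" or "Nleach" map is then a 2D array over these latitude/longitude coordinates.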
  2. # DeepCaImX

## Introduction
Two-photon calcium imaging provides large-scale recordings of neuronal activity at cellular resolution. A robust, automated, and high-speed pipeline that simultaneously segments the spatial footprints of neurons and extracts their temporal activity traces, while decontaminating them from background, noise, and overlapping neurons, is highly desirable for analyzing calcium imaging data. In this paper, we demonstrate DeepCaImX, an end-to-end deep learning method based on an iterative shrinkage-thresholding algorithm and a long short-term memory neural network that achieves all of these goals together at very high speed and without any manually tuned hyperparameters. DeepCaImX is a multi-task, multi-class, and multi-label segmentation method composed of a compressed-sensing-inspired neural network with a recurrent layer and fully connected layers. It is the first neural network that can simultaneously generate accurate neuronal footprints and extract clean neuronal activity traces from calcium imaging data. We trained the network on simulated datasets and benchmarked it against existing state-of-the-art methods on in vivo experimental data. DeepCaImX outperforms existing methods in segmentation quality, temporal trace extraction, and processing speed. DeepCaImX is highly scalable and will benefit the analysis of mesoscale calcium imaging. ![alt text](https://github.com/KangningZhang/DeepCaImX/blob/main/imgs/Fig1.png)

## System and Environment Requirements
1. Both CPU and GPU are supported for running the DeepCaImX code; a CUDA-compatible GPU is preferred. In the full-version demo, we use a Quadro RTX 8000 48 GB GPU to accelerate training. The mini-version demo requires at least 6 GB of GPU/CPU memory.
2. Python 3.9 and TensorFlow 2.10.0
3. Virtual environment: Anaconda Navigator 2.2.0
4. Matlab 2023a

## Demo and Installation
1. (_Optional_) GPU environment setup. Training and testing on a GPU require Nvidia's parallel computing platform and programming model, the _CUDA Toolkit_, and a GPU-accelerated library of deep neural network primitives, the _CUDA Deep Neural Network library (cuDNN)_. CUDA installation guide: https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html. cuDNN installation guide: https://docs.nvidia.com/deeplearning/cudnn/installation/overview.html.
2. Install Anaconda. Installation guide: https://docs.anaconda.com/free/anaconda/install/index.html
3. Launch the Anaconda prompt and install Python 3.x and TensorFlow 2.9.0 in the virtual environment.
4. In the virtual environment, pip install mat73, opencv-python, python-time, and scipy.
5. Download "DeepCaImX_training_demo.ipynb" from the folder "Demo (full-version)" for the full version, along with the simulated dataset via the Google Drive link. Then place the training dataset in the path "./Training Dataset/". If your computing resources are limited, or for a quick test of the code, we highly recommend downloading the demo from the folder "Mini-version", which requires only about 6.3 GB of memory for training.
6. Run: use Anaconda to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" or "DeepCaImX_testing_demo.ipynb"; then follow the guide in the respective notebook for training or testing.

Note: every package installs in a few minutes.

## Run DeepCaImX
1. Mini-version demo
   * Download all the documents in the folder "Demo (mini-version)".
   * Add the training and testing datasets to the sub-folders "Training Dataset" and "Testing Dataset", respectively.
   * (Optional) Put a pretrained model in the sub-folder "Pretrained Model".
   * Use Anaconda Navigator to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for prediction.
2. Full-version demo
   * Download all the documents in the folder "Demo (full-version)"; the remaining steps are the same as for the mini-version demo.

## Data Tailor
A data tailor developed in Matlab is provided to support basic data tiling. The folder "Data Tailor" contains a "tailor.m" script and an example "test.tiff". After running "tailor.m" in Matlab, the user can choose a "tiff" file from a GUI to load the sample to be tiled. Settings include the FOV size, overlap area, normalization option, output file name, and output data format. The output files are written to the same local folder as "tailor.m".

## Simulated Dataset
1. Dataset generator (FISSA version): the algorithm for generating the simulated dataset is based on the FISSA paper (_Keemink, S.W., Lowe, S.C., Pakan, J.M.P. et al. FISSA: A neuropil decontamination toolbox for calcium imaging signals. Sci Rep 8, 3493 (2018)_) and the SimCalc repository (https://github.com/rochefort-lab/SimCalc/). For the code used to generate the simulated data, download the documents in the folder "Simulated Dataset Generator".
   * Training dataset: https://drive.google.com/file/d/1WZkIE_WA7Qw133t2KtqTESDmxMwsEkjJ/view?usp=share_link
   * Testing dataset: https://drive.google.com/file/d/1zsLH8OQ4kTV7LaqQfbPDuMDuWBcHGWcO/view?usp=share_link
2. Dataset generator (NAOMi version): the algorithm for generating the simulated dataset is based on the NAOMi paper (_Song, A., Gauthier, J. L., Pillow, J. W., Tank, D. W. & Charles, A. S. Neural anatomy and optical microscopy (NAOMi) simulation for evaluating calcium imaging methods. Journal of Neuroscience Methods 358, 109173 (2021)_). For the code used to generate the simulated data, see https://bitbucket.org/adamshch/naomi_sim/src/master/code/

## Experimental Dataset
We used samples from the ABO dataset: https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS. The segmentation ground truth can be found in the folder "Manually Labelled ROIs". The segmentation ground truth at depths of 175, 275, 375, 550, and 625 um was manually labeled by us. The code for creating the ground truth of the extracted traces can be found in "Prepro_Exp_Sample.ipynb" in the folder "Preprocessing of Experimental Sample".
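The FOV tiling performed by the Data Tailor can be sketched in Python as follows. This is a minimal illustration, assuming a 2D image array and user-chosen tile size and overlap; the function name and parameters are hypothetical, and the Matlab tool itself additionally handles normalization and output formats.

```python
import numpy as np

def tile_image(img, fov=64, overlap=16):
    """Split a 2D image into FOV-sized tiles with the given overlap.

    The stride between tile origins is fov - overlap; the last row and
    column of tiles are shifted inward so every tile has the full FOV size.
    """
    h, w = img.shape
    stride = fov - overlap
    ys = list(range(0, max(h - fov, 0) + 1, stride))
    xs = list(range(0, max(w - fov, 0) + 1, stride))
    # Ensure the bottom/right edges are covered
    if ys[-1] != h - fov:
        ys.append(h - fov)
    if xs[-1] != w - fov:
        xs.append(w - fov)
    return [img[y:y + fov, x:x + fov] for y in ys for x in xs]

# Tile a 512 x 512 frame into overlapping 64 x 64 patches
tiles = tile_image(np.zeros((512, 512)), fov=64, overlap=16)
```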
  3. This dataset reports data from Nghiem et al. (2024), "Testing floc settling velocity models in rivers and freshwater wetlands." Please refer to "readme.xlsx" for a description of each data file. The original sediment grain-size distribution data for each sample can be found online in the NASA Delta-X repository.
  4. Dataset Description: This dataset contains 6710 structural configurations and solvophobicity values for topologically and chemically diverse coarse-grained polymer chains. Additionally, 480 polymers include shear-rate-dependent viscosity profiles at 2 wt% polymer concentration. The data are provided as serialized objects created with the Python pickle module; all files were generated with Python 3.10.
     Data: There are three pickle files containing serialized Python objects:
     * data_aug10.pickle contains the coarse-grained polymer dataset with 6710 entries. Each entry includes: polymer graph; squared radius of gyration (at lambda = 0); solvophobicity (lambda); bead count (N); chain virial number (Xi).
     * topo_param_visc.pickle contains the shear-rate-dependent viscosity profiles of 480 polymer systems.
     * target_curves.pickle contains 30 target viscosity profiles used for active learning.
     Usage: To load the dataset stored in data_aug10.pickle, use the following code:

         import pickle

         with open("data_aug10.pickle", "rb") as handle:
             (
                 (x_train, y_train, c_train, l_train, graph_train),
                 (x_valid, y_valid, c_valid, l_valid, graph_valid),
                 (x_test, y_test, c_test, l_test, graph_test),
                 NAMES,
                 SCALER,
                 SCALER_y,
                 le,
             ) = pickle.load(handle)

     Here x holds the node features for each polymer graph; y the labels (e.g., predicted properties); c the topological class indices; l the topological descriptors; graph the NetworkX graphs representing polymer topology; NAMES the list of topological class names; SCALER the fitted scaler for the topological descriptors (l); SCALER_y the fitted scaler for the property labels (y); and le the label encoder for the topological class indices.
     To load the dataset stored in topo_param_visc.pickle, use the following code:

         import pickle

         with open("topo_param_visc.pickle", "rb") as handle:
             desc_all, ps_all, curve_all, shear_rate, graph_all = pickle.load(handle)

     Here desc_all holds the topological descriptors for each polymer graph; ps_all the fitted Carreau–Yasuda model parameters; curve_all the fitted viscosity curves; shear_rate the shear rates corresponding to each viscosity curve; and graph_all the polymer graphs represented as NetworkX objects. The 480 systems are ordered as follows: the first 30 are the seed dataset; the next 150 come from 5 iterations (30 each) of class-balanced space-filling; the following 150 from space-filling without class balancing; and the final 150 are active-learning samples.
     To load the dataset stored in target_curves.pickle, use the following code:

         import pickle

         with open("target_curves.pickle", "rb") as handle:
             data = pickle.load(handle)

         curves = data["curves"]
         params = data["params"]
         shear_rate = data["xx"]

     Here curves holds the target viscosity curves used as design objectives; params the Carreau–Yasuda model parameters fitted to the target curves; and shear_rate the shear-rate values associated with the target curves.
     Help, Suggestions, Corrections? If you need help, have suggestions, identify issues, or have corrections, please send your comments to Shengli Jiang at sj0161@princeton.edu. Additional data and code relevant to this study are available on GitHub at https://github.com/webbtheosim/cg-topo-solv
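For reference, the Carreau–Yasuda model mentioned above relates viscosity to shear rate via eta(gdot) = eta_inf + (eta0 - eta_inf) * [1 + (lam * gdot)^a]^((n - 1)/a), interpolating between a zero-shear plateau eta0 and an infinite-shear plateau eta_inf. A minimal sketch follows; the parameter values are illustrative only, not taken from the fitted parameters in the dataset.

```python
import numpy as np

def carreau_yasuda(shear_rate, eta0, eta_inf, lam, a, n):
    """Carreau-Yasuda viscosity model:
    eta = eta_inf + (eta0 - eta_inf) * (1 + (lam * gdot)**a) ** ((n - 1) / a)
    """
    gdot = np.asarray(shear_rate, dtype=float)
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gdot) ** a) ** ((n - 1.0) / a)

# Illustrative parameters (not from the dataset): shear-thinning since n < 1
gdot = np.logspace(-2, 3, 50)  # shear rates, 1/s
eta = carreau_yasuda(gdot, eta0=1.0, eta_inf=0.01, lam=1.0, a=2.0, n=0.5)
```

With a = 2 the model reduces to the classic Carreau form; fitted parameter sets like those in ps_all typically collect (eta0, eta_inf, lam, a, n) per system.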
  5. Information about grants funded by NSF to support SES research from 2000-2015. The grants included in this dataset are a subset that we identified as having an SES research focus from a set of grants accessed using the Dimensions platform (https://dimensions.ai). CSV file with 35 columns and names in the header row:
     * "Grant Searched" lists the granting NSF program (text);
     * "Grant Searched 2" lists a secondary granting NSF program, if applicable (text);
     * "Grant ID" is the ID from the Dimensions platform (string);
     * "Grant Number" is the NSF award number (integer);
     * "Number of Papers (NSF)" is the count of publications listed under "PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH" on the NSF Award Search page for the grant (integer);
     * "Number of Pubs Tracked" is the count of publications from "Number of Papers (NSF)" included in our analysis (integer);
     * "Publication notes" are our notes about the publication information; we used "subset" to denote a grant associated with >10 publications, for which we used a random sample of 10 publications in our analysis (text);
     * "Unique ID" is our unique identifier for each grant in the dataset (integer);
     * "Collaborative/Cross Program" denotes whether the grant was submitted as part of a set of collaborative or cross-program proposals; in this case, all linked proposals are given the same unique identifier and treated together in the analysis (text);
     * "Title" is the title of the grant (text);
     * "Title translated" is the title of the grant translated to English, where applicable (text);
     * "Abstract" is the abstract of the grant (text);
     * "Abstract translated" is the abstract of the grant translated to English, where applicable (text);
     * "Funding Amount" is the numeric value of funding awarded to the grant (integer);
     * "Currency" is the currency associated with the field "Funding Amount" (text);
     * "Funding Amount in USD" is the numeric value of funding awarded to the grant expressed in US dollars (integer);
     * "Start Date" is the start date of the grant (mm/dd/yyyy);
     * "Start Year" is the year in which grant funding began (year);
     * "End Date" is the end date of the grant (mm/dd/yyyy);
     * "End Year" is the year in which the term of the grant expired (year);
     * "Researchers" lists the Principal Investigators on the grant in First Name Last Name format, separated by semicolons (text);
     * "Research Organization - original" gives the affiliation of the lead PI as listed in the grant (text);
     * "Research Organization - standardized" gives the affiliation of each PI in the list, separated by semicolons (text);
     * "GRID ID" lists the unique identifier of each Research Organization in the Global Research Identifier Database (https://grid.ac), separated by semicolons (string);
     * "Country of Research organization" lists the countries in which each Research Organization is located, separated by semicolons (text);
     * "Funder" gives the NSF Directorate that funded the grant (text);
     * "Source Linkout" is a link to the NSF Award Search page with information about the grant (URL);
     * "Dimensions URL" is a link to information about the grant in Dimensions (URL);
     * "FOR (ANZSRC) Categories" lists the Field of Research categories [from the Australian and New Zealand Standard Research Classification (ANZSRC) system] associated with each grant, separated by semicolons (string);
     * "FOR [1-5]" give the FOR categories in separate columns;
     * "NOTES" provides any other notes added by the authors of this dataset during our processing of these data.
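The semicolon-separated multi-value columns described above (e.g., "Researchers", "GRID ID", "Country of Research organization") can be split into lists after loading. A minimal sketch with pandas; the two sample rows below are hypothetical, not taken from the dataset.

```python
import io
import pandas as pd

# Hypothetical two-row sample mimicking a few of the dataset's 35 columns
csv_text = (
    '"Grant Number","Researchers","Country of Research organization"\n'
    '1111111,"Jane Doe; John Smith","United States; United States"\n'
    '2222222,"Ada Lovelace","United Kingdom"\n'
)

df = pd.read_csv(io.StringIO(csv_text))

# Split semicolon-separated multi-value fields into Python lists
for col in ["Researchers", "Country of Research organization"]:
    df[col] = df[col].str.split(";").apply(lambda xs: [x.strip() for x in xs])
```

For the real file, replace the in-memory buffer with the CSV's path; the same split applies to any of the semicolon-separated columns.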