

Title: Data from "Membrane Fusion and Drug Delivery with Carbon Nanotube Porins" by Nga T. Ho, Marc Siggel, Karen V. Camacho, Ramachandra M. Bhaskara, Jacqueline M. Hicks, Yun-Chiao Yao, Yuliang Zhang, Jurgen Kofinger, Gerhard Hummer, and Aleksandr Noy
{"Abstract":["Data from "Membrane Fusion and Drug Delivery with Carbon Nanotube Porins""]}  more » « less
Award ID(s):
1710211
PAR ID:
10344719
Author(s) / Creator(s):
Ho, Nga T.; Siggel, Marc; Camacho, Karen V.; Bhaskara, Ramachandra M.; Hicks, Jacqueline M.; Yao, Yun-Chiao; Zhang, Yuliang; Kofinger, Jurgen; Hummer, Gerhard; Noy, Aleksandr
Publisher / Repository:
figshare
Date Published:
Subject(s) / Keyword(s):
Biophysics
Format(s):
Medium: X
Size(s):
971428 Bytes
Sponsoring Org:
National Science Foundation
More Like this
  1. {"Abstract":["This is the 3-m resolution digital elevation model from "Microcontinent Breakup and Links to Possible Plate Boundary Reorganization in the Northern Gulf of California, México". Digital elevation was constructed from two 0.5-m resolution Pleiades satellite images (product type: 50cm Panchromatic + 2m (4-Band) Multispectral Bundle) using the NASA Ames Stereo Pipeline software. WGS1984 UTM Zone 12N."]} 
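     The record above does not state how the DEM is distributed, but single-band GeoTIFF is the common format for elevation models of this kind. A minimal, hedged sketch of reading such a file with rasterio, assuming a hypothetical filename dem_3m.tif:

     ```python
     import rasterio

     # Hypothetical filename; the record does not name the DEM file itself.
     with rasterio.open("dem_3m.tif") as src:
         elevation = src.read(1)  # band 1 as a 2D array of elevations
         print(src.crs)           # expected: WGS 1984 UTM Zone 12N (EPSG:32612)
         print(src.res)           # pixel size, nominally 3 m x 3 m
     ```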
  2. {"Abstract":["Contains the model output and topography files necessary to reproduce the results of "Linearity of the climate system response to raising and lowering West Antarctic and coastal Antarctic topography" by Andrew G. Pauling, Cecilia M. Bitz and Eric J. Steig. Published in Journal of Climate, https://doi.org/10.1175/JCLI-D-22-0416.1.<\/p>\n\nPlease download and extract the data from each of the tar.gz.files. A description of the directories, run names, and use of the topography files is given in the file readme.txt within the dataset.<\/p>"]} 
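     As a small convenience for the extraction step above, a minimal Python sketch using the standard-library tarfile module; the archive name here is a placeholder, since the actual tar.gz file names are part of the dataset itself:

     ```python
     import tarfile

     # Placeholder archive name; repeat for each tar.gz file in the dataset.
     with tarfile.open("model_output.tar.gz", "r:gz") as tar:
         tar.extractall(path=".")  # directory layout is documented in readme.txt
     ```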
  3. This dataset contains maps of water yield and nitrogen (N) yield for each year from 1980 to 2017, covering the Mississippi/Atchafalaya River Basin (MARB). The maps were reconstructed by aggregating daily estimates from the Dynamic Land Ecosystem Model (DLEM) and are at 5-min × 5-min (0.08333° Lat × 0.08333° Lon) resolution. There are two subfolders, "TT" and "DT", within this folder, which respectively denote "traditional timing" and "dynamic timing" of nitrogen fertilizer applications in the model experiments of the main text. The "TT" folder contains the gridded annual model estimates of water yield (named "Runoff") and nitrogen yield (named "Nleach"). TT reflects our best estimate of water and N fluxes in the context of multi-factor environmental changes, including climate, atmospheric CO2 concentration, N deposition, land use, and human management practices (such as fertilizer use, tillage, tile drainage, etc.). The "DT" folder contains only the model estimates of nitrogen yield ("Nleach") under an alternative N management practice. More details can be found in the linked publication.
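     The record above names the variables ("Runoff", "Nleach") and the grid, but not the raster format; gridded model output like this is commonly distributed as NetCDF. A hedged sketch with xarray, in which the file path and dimension names are assumptions:

     ```python
     import xarray as xr

     # Hypothetical path and dimension names; only the variable names
     # ("Runoff", "Nleach") and the 0.08333-degree grid come from the record.
     ds = xr.open_dataset("TT/Runoff_annual.nc")
     runoff = ds["Runoff"]                         # annual water yield over the MARB grid
     basin_mean = runoff.mean(dim=("lat", "lon"))  # basin-average value per year
     ```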
  4. {"Abstract":["# DeepCaImX## Introduction#### Two-photon calcium imaging provides large-scale recordings of neuronal activities at cellular resolution. A robust, automated and high-speed pipeline to simultaneously segment the spatial footprints of neurons and extract their temporal activity traces while decontaminating them from background, noise and overlapping neurons is highly desirable to analyze calcium imaging data. In this paper, we demonstrate DeepCaImX, an end-to-end deep learning method based on an iterative shrinkage-thresholding algorithm and a long-short-term-memory neural network to achieve the above goals altogether at a very high speed and without any manually tuned hyper-parameters. DeepCaImX is a multi-task, multi-class and multi-label segmentation method composed of a compressed-sensing-inspired neural network with a recurrent layer and fully connected layers. It represents the first neural network that can simultaneously generate accurate neuronal footprints and extract clean neuronal activity traces from calcium imaging data. We trained the neural network with simulated datasets and benchmarked it against existing state-of-the-art methods with in vivo experimental data. DeepCaImX outperforms existing methods in the quality of segmentation and temporal trace extraction as well as processing speed. DeepCaImX is highly scalable and will benefit the analysis of mesoscale calcium imaging. ![alt text](https://github.com/KangningZhang/DeepCaImX/blob/main/imgs/Fig1.png)\n\n## System and Environment Requirements#### 1. Both CPU and GPU are supported to run the code of DeepCaImX. A CUDA compatible GPU is preferred. * In our demo of full-version, we use a GPU of Quadro RTX8000 48GB to accelerate the training speed.* In our demo of mini-version, at least 6 GB momory of GPU/CPU is required.#### 2. Python 3.9 and Tensorflow 2.10.0#### 3. Virtual environment: Anaconda Navigator 2.2.0#### 4. Matlab 2023a\n\n## Demo and installation#### 1 (_Optional_) GPU environment setup. We need a Nvidia parallel computing platform and programming model called _CUDA Toolkit_ and a GPU-accelerated library of primitives for deep neural networks called _CUDA Deep Neural Network library (cuDNN)_ to build up a GPU supported environment for training and testing our model. The link of CUDA installation guide is https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html and the link of cuDNN installation guide is https://docs.nvidia.com/deeplearning/cudnn/installation/overview.html. #### 2 Install Anaconda. Link of installation guide: https://docs.anaconda.com/free/anaconda/install/index.html#### 3 Launch Anaconda prompt and install Python 3.x and Tensorflow 2.9.0 as the virtual environment.#### 4 Open the virtual environment, and then  pip install mat73, opencv-python, python-time and scipy.#### 5 Download the "DeepCaImX_training_demo.ipynb" in folder "Demo (full-version)" for a full version and the simulated dataset via the google drive link. Then, create and put the training dataset in the path "./Training Dataset/". If there is a limitation on your computing resource or a quick test on our code, we highly recommand download the demo from the folder "Mini-version", which only requires around 6.3 GB momory in training. #### 6 Run: Use Anaconda to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" or "DeepCaImX_testing_demo.ipynb". 
Then, please check and follow the guide of "DeepCaImX_training_demo.ipynb" or or "DeepCaImX_testing_demo.ipynb" for training or testing respectively.#### Note: Every package can be installed in a few minutes.\n\n## Run DeepCaImX#### 1. Mini-version demo* Download all the documents in the folder of "Demo (mini-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n#### 2. Full-version demo* Download all the documents in the folder of "Demo (full-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n## Data Tailor#### A data tailor developed by Matlab is provided to support a basic data tiling processing. In the folder of "Data Tailor", we can find a "tailor.m" script and an example "test.tiff". After running "tailor.m" by matlab, user is able to choose a "tiff" file from a GUI as loading the sample to be tiled. Settings include size of FOV, overlapping area, normalization option, name of output file and output data format. The output files can be found at local folder, which is at the same folder as the "tailor.m".\n\n## Simulated Dataset#### 1. Dataset generator (FISSA Version): The algorithm for generating simulated dataset is based on the paper of FISSA (_Keemink, S.W., Lowe, S.C., Pakan, J.M.P. et al. FISSA: A neuropil decontamination toolbox for calcium imaging signals. Sci Rep 8, 3493 (2018)_) and SimCalc repository (https://github.com/rochefort-lab/SimCalc/). For the code used to generate the simulated data, please download the documents in the folder "Simulated Dataset Generator". #### Training dataset: https://drive.google.com/file/d/1WZkIE_WA7Qw133t2KtqTESDmxMwsEkjJ/view?usp=share_link#### Testing Dataset: https://drive.google.com/file/d/1zsLH8OQ4kTV7LaqQfbPDuMDuWBcHGWcO/view?usp=share_link\n\n#### 2. Dataset generator (NAOMi Version): The algorithm for generating simulated dataset is based on the paper of NAOMi (_Song, A., Gauthier, J. L., Pillow, J. W., Tank, D. W. & Charles, A. S. Neural anatomy and optical microscopy (NAOMi) simulation for evaluating calcium imaging methods. Journal of neuroscience methods 358, 109173 (2021)_). For the code use to generate the simulated data, please go to this link: https://bitbucket.org/adamshch/naomi_sim/src/master/code/## Experimental Dataset#### We used the samples from ABO dataset:https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS.#### The segmentation ground truth can be found in the folder "Manually Labelled ROIs". #### The segmentation ground truth of depth 175, 275, 375, 550 and 625 um are manually labeled by us. #### The code for creating ground truth of extracted traces can be found in "Prepro_Exp_Sample.ipynb" in the folder "Preprocessing of Experimental Sample"."]} 
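     The DeepCaImX introduction above builds on an iterative shrinkage-thresholding algorithm (ISTA). For orientation only, here is a generic numpy sketch of classical ISTA for sparse recovery (minimizing 0.5·||Ax - b||² + λ·||x||₁); it illustrates the algorithm family, not the DeepCaImX network itself:

     ```python
     import numpy as np

     def soft_threshold(x, t):
         """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
         return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

     def ista(A, b, lam, n_iter=500):
         """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative shrinkage-thresholding."""
         L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
         x = np.zeros(A.shape[1])
         for _ in range(n_iter):
             grad = A.T @ (A @ x - b)       # gradient of the smooth least-squares term
             x = soft_threshold(x - grad / L, lam / L)
         return x
     ```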
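     Likewise, the Data Tailor section describes tiling a field of view with overlap. A rough Python analogue of that operation, with the function name, default sizes, and zero-padding behavior chosen for illustration rather than taken from tailor.m:

     ```python
     import numpy as np

     def tile_with_overlap(frame, fov=128, overlap=16):
         """Split a 2D frame into fov x fov tiles whose edges overlap by `overlap` pixels."""
         step = fov - overlap
         tiles = []
         for y in range(0, max(frame.shape[0] - overlap, 1), step):
             for x in range(0, max(frame.shape[1] - overlap, 1), step):
                 tile = frame[y:y + fov, x:x + fov]
                 # Zero-pad edge tiles so every output has the same shape.
                 padded = np.zeros((fov, fov), dtype=frame.dtype)
                 padded[:tile.shape[0], :tile.shape[1]] = tile
                 tiles.append(padded)
         return np.stack(tiles)
     ```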
  5. Dataset Description: This dataset contains 6710 structural configurations and solvophobicity values for topologically and chemically diverse coarse-grained polymer chains. Additionally, 480 polymers include shear-rate-dependent viscosity profiles at 2 wt% polymer concentration. The data are provided as serialized objects created with the pickle Python module; all files were generated using Python 3.10.

     Data: there are three pickle files containing serialized Python objects.
     * data_aug10.pickle: the coarse-grained polymer dataset with 6710 entries. Each entry includes the polymer graph, the squared radius of gyration (at lambda = 0), the solvophobicity (lambda), the bead count (N), and the chain virial number (Xi).
     * topo_param_visc.pickle: shear-rate-dependent viscosity profiles of 480 polymer systems.
     * target_curves.pickle: 30 target viscosity profiles used for active learning.

     Usage: to load the dataset stored in data_aug10.pickle, use the following code:

     ```python
     import pickle

     with open("data_aug10.pickle", "rb") as handle:
         (
             (x_train, y_train, c_train, l_train, graph_train),
             (x_valid, y_valid, c_valid, l_valid, graph_valid),
             (x_test, y_test, c_test, l_test, graph_test),
             NAMES,
             SCALER,
             SCALER_y,
             le,
         ) = pickle.load(handle)
     ```

     * x: node features for each polymer graph
     * y: labels (e.g., predicted properties)
     * c: topological class indices
     * l: topological descriptors
     * graph: NetworkX graphs representing polymer topology
     * NAMES: list of topological class names
     * SCALER: fitted scaler for topological descriptors (l)
     * SCALER_y: fitted scaler for property labels (y)
     * le: label encoder for topological class indices

     To load the dataset stored in topo_param_visc.pickle, use the following code:

     ```python
     import pickle

     with open("topo_param_visc.pickle", "rb") as handle:
         desc_all, ps_all, curve_all, shear_rate, graph_all = pickle.load(handle)
     ```

     * desc_all: topological descriptors for each polymer graph
     * ps_all: fitted Carreau–Yasuda model parameters (the model is sketched after this entry)
     * curve_all: fitted viscosity curves
     * shear_rate: shear rates corresponding to each viscosity curve
     * graph_all: polymer graphs represented as NetworkX objects

     The 480 systems are ordered as follows: the first 30 are the seed dataset; the next 150 come from 5 iterations (30 each) of class-balanced space-filling; the following 150 come from space-filling without class balancing; and the final 150 are active-learning samples.

     To load the dataset stored in target_curves.pickle, use the following code:

     ```python
     import pickle

     with open("target_curves.pickle", "rb") as handle:
         data = pickle.load(handle)

     curves = data["curves"]
     params = data["params"]
     shear_rate = data["xx"]
     ```

     * curves: target viscosity curves used as design objectives
     * params: Carreau–Yasuda model parameters fitted to the target curves
     * shear_rate: shear rate values associated with the target curves

     Help, Suggestions, Corrections? If you need help, have suggestions, identify issues, or have corrections, please send your comments to Shengli Jiang at sj0161@princeton.edu

     GitHub: additional data and code relevant to this study are accessible at https://github.com/webbtheosim/cg-topo-solv
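     For context on the ps_all and params entries above: the standard Carreau–Yasuda model gives viscosity as eta(gamma_dot) = eta_inf + (eta0 - eta_inf)·[1 + (lambda·gamma_dot)^a]^((n-1)/a). A minimal sketch follows; the parameter ordering stored in the pickles is not documented in this record, so the signature below is an assumption:

     ```python
     import numpy as np

     def carreau_yasuda(shear_rate, eta0, eta_inf, lam, a, n):
         """Standard Carreau-Yasuda viscosity model; argument order is illustrative only."""
         return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)
     ```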