

Title: The simulation experiment description markup language (SED-ML): language specification for level 1 version 4
Abstract Computational simulation experiments increasingly inform modern biological research, and bring with them the need to provide ways to annotate, archive, share and reproduce the experiments performed. These simulations increasingly require extensive collaboration among modelers, experimentalists, and engineers. The Minimum Information About a Simulation Experiment (MIASE) guidelines outline the information needed to share simulation experiments. SED-ML is a computer-readable format for the information outlined by MIASE, created as a community project and supported by many investigators and software tools. The first versions of SED-ML focused on deterministic and stochastic simulations of models. Level 1 Version 4 of SED-ML substantially expands these capabilities to cover additional types of models, model languages, parameter estimations, simulations and analyses of models, and analyses and visualizations of simulation results. To facilitate consistent practices across the community, Level 1 Version 4 also more clearly describes the use of SED-ML constructs, and includes numerous concrete validation rules. SED-ML is supported by a growing ecosystem of investigators, model languages, and software tools, including eight languages for constraint-based, kinetic, qualitative, rule-based, and spatial models, over 20 simulation tools, visual editors, model repositories, and validators. Additional information about SED-ML is available at https://sed-ml.org/ .
Award ID(s):
1933453
NSF-PAR ID:
10301098
Journal Name:
Journal of Integrative Bioinformatics
Volume:
18
Issue:
3
ISSN:
1613-4516
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Obeid, Iyad ; Picone, Joseph ; Selesnick, Ivan (Ed.)
    The Neural Engineering Data Consortium (NEDC) is developing a large open source database of high-resolution digital pathology images known as the Temple University Digital Pathology Corpus (TUDP) [1]. Our long-term goal is to release one million images. We expect to release the first 100,000 image corpus by December 2020. The data is being acquired at the Department of Pathology at Temple University Hospital (TUH) using a Leica Biosystems Aperio AT2 scanner [2] and consists entirely of clinical pathology images. More information about the data and the project can be found in Shawki et al. [3]. We currently have a National Science Foundation (NSF) planning grant [4] to explore how best the community can leverage this resource. One goal of this poster presentation is to stimulate community-wide discussions about this project and determine how this valuable resource can best meet the needs of the public. The computing infrastructure required to support this database is extensive [5] and includes two HIPAA-secure computer networks, dual petabyte file servers, and Aperio’s eSlide Manager (eSM) software [6]. We currently have digitized over 50,000 slides from 2,846 patients and 2,942 clinical cases. There is an average of 12.4 slides per patient and 10.5 slides per case with one report per case. 
The data is organized by tissue type as shown below:

Filenames:
  tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_0a001_00123456_lvl0001_s000.svs
  tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_00123456.docx

Explanation:
  tudp: root directory of the corpus
  v1.0.0: version number of the release
  svs: the image data type
  gastro: the type of tissue
  000001: six-digit sequence number used to control directory complexity
  00123456: eight-digit patient MRN
  2015_03_05: the date the specimen was captured
  0s15_12345: the clinical case name
  0s15_12345_0a001_00123456_lvl0001_s000.svs: the actual image filename, consisting of a repeat of the case name, a site code (e.g., 0a001), the type and depth of the cut (e.g., lvl0001), and a token number (e.g., s000)
  0s15_12345_00123456.docx: the filename for the corresponding case report

We currently recognize fifteen tissue types in the first installment of the corpus. The raw image data is stored in Aperio's “.svs” format, which is a multi-layered compressed JPEG format [3,7]. Pathology reports containing a summary of how a pathologist interpreted the slide are also provided in a flat text file format. A more complete summary of the demographics of this pilot corpus will be presented at the conference. Another goal of this poster presentation is to share our experiences with the larger community since many of these details have not been adequately documented in scientific publications. There are quite a few obstacles in collecting this data that have slowed the process and need to be discussed publicly. Our backlog of slides dates back to 1997, meaning there are many that need to be sifted through and discarded for peeling or cracking. Additionally, during scanning a slide can get stuck, stalling a scan session for hours and resulting in a significant loss of productivity.
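The image filename convention above is regular enough to parse programmatically. Below is a minimal, hypothetical Python sketch (the function and regex names are ours, not part of the corpus tooling) that splits a TUDP image filename into its documented fields:

```python
import re

# Hypothetical sketch: parse a TUDP image filename of the form
# 0s15_12345_0a001_00123456_lvl0001_s000.svs into its documented fields.
FILENAME_RE = re.compile(
    r"(?P<case>[0-9a-z]+_\d+)_"    # clinical case name, e.g. 0s15_12345
    r"(?P<site>[0-9a-z]+)_"        # site code, e.g. 0a001
    r"(?P<mrn>\d{8})_"             # eight-digit patient MRN
    r"(?P<level>lvl\d{4})_"        # type and depth of the cut
    r"(?P<token>s\d{3})\.svs$"     # token number
)

def parse_tudp_filename(name):
    """Return a dict of named fields, or None if the name does not match."""
    m = FILENAME_RE.match(name)
    return m.groupdict() if m else None

fields = parse_tudp_filename("0s15_12345_0a001_00123456_lvl0001_s000.svs")
```

A parser like this is useful for validating a local mirror of the corpus against the naming convention before processing.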
Over the past two years, we have accumulated significant experience with how to scan a diverse inventory of slides using the Aperio AT2 high-volume scanner. We have been working closely with the vendor to resolve many problems associated with the use of this scanner for research purposes. This scanning project began in January of 2018 when the scanner was first installed. The scanning process was slow at first since there was a learning curve with how the scanner worked and how to obtain samples from the hospital. From its start date until May of 2019, ~20,000 slides were scanned. In the past 6 months, from May to November, we have tripled that number and now hold ~60,000 slides in our database. This dramatic increase in productivity was due to additional undergraduate staff members and an emphasis on efficient workflow. The Aperio AT2 scans 400 slides a day, requiring at least eight hours of scan time. The efficiency of these scans can vary greatly. When our team first started, approximately 5% of slides failed the scanning process due to focal point errors. We have been able to reduce that to 1% through a variety of means: (1) best practices regarding daily and monthly recalibrations, (2) tweaking the software, such as the tissue finder parameter settings, and (3) experience with how to clean and prep slides so they scan properly. Nevertheless, this is not a completely automated process, making it very difficult to reach our production targets. With a staff of three undergraduate workers spending a total of 30 hours per week, we find it difficult to scan more than 2,000 slides per week using a single scanner (400 slides per night x 5 nights per week). The main limitation in achieving this level of production is the lack of a completely automated scanning process; it takes a couple of hours to sort, clean, and load slides. We have streamlined all other aspects of the workflow required to database the scanned slides so that there are no additional bottlenecks.
To bridge the gap between hospital operations and research, we are using Aperio’s eSM software. Our goal is to provide pathologists access to high quality digital images of their patients’ slides. eSM is a secure website that holds the images with their metadata labels, patient report, and path to where the image is located on our file server. Although eSM includes significant infrastructure to import slides into the database using barcodes, TUH does not currently support barcode use. Therefore, we manage the data using a mixture of Python scripts and manual import functions available in eSM. The database and associated tools are based on proprietary formats developed by Aperio, making this another important point of community-wide discussion on how best to disseminate such information. Our near-term goal for the TUDP Corpus is to release 100,000 slides by December 2020. We hope to continue data collection over the next decade until we reach one million slides. We are creating two pilot corpora using the first 50,000 slides we have collected. The first corpus consists of 500 slides with a marker stain and another 500 without it. This set was designed to let people debug their basic deep learning processing flow on these high-resolution images. We discuss our preliminary experiments on this corpus and the challenges in processing these high-resolution images using deep learning in [3]. We are able to achieve a mean sensitivity of 99.0% for slides with pen marks, and 98.9% for slides without marks, using a multistage deep learning algorithm. While this dataset was very useful in initial debugging, we are in the midst of creating a new, more challenging pilot corpus using actual tissue samples annotated by experts. The task will be to detect ductal carcinoma (DCIS) or invasive breast cancer tissue. There will be approximately 1,000 images per class in this corpus. 
Based on the number of features annotated, we can train on a two-class problem of DCIS or benign, or increase the difficulty by increasing the classes to include DCIS, benign, stroma, pink tissue, non-neoplastic, etc. Those interested in the corpus or in participating in community-wide discussions should join our listserv, nedc_tuh_dpath@googlegroups.com, to be kept informed of the latest developments in this project. You can learn more from our project website: https://www.isip.piconepress.com/projects/nsf_dpath.
  2. Abstract Computational models have great potential to accelerate bioscience, bioengineering, and medicine. However, it remains challenging to reproduce and reuse simulations, in part, because the numerous formats and methods for simulating various subsystems and scales remain siloed by different software tools. For example, each tool must be executed through a distinct interface. To help investigators find and use simulation tools, we developed BioSimulators (https://biosimulators.org), a central registry of the capabilities of simulation tools and consistent Python, command-line and containerized interfaces to each version of each tool. The foundation of BioSimulators is standards, such as CellML, SBML, SED-ML and the COMBINE archive format, and validation tools for simulation projects and simulation tools that ensure these standards are used consistently. To help modelers find tools for particular projects, we have also used the registry to develop recommendation services. We anticipate that BioSimulators will help modelers exchange, reproduce, and combine simulations. 
  3. This dataset contains monthly average output files from the iCAM6 simulations used in the manuscript "Enhancing understanding of the hydrological cycle via pairing of process-oriented and isotope ratio tracers," in review at the Journal of Advances in Modeling Earth Systems. A file corresponding to each of the tagged and isotopic variables used in this manuscript is included. Files are at 0.9° latitude x 1.25° longitude, and are in NetCDF format. Data from two simulations are included: 1) a simulation where the atmospheric model was "nudged" to ERA5 wind and surface pressure fields, by adding an additional tendency (see section 3.1 of associated manuscript), and 2) a simulation where the atmospheric state was allowed to freely evolve, using only boundary conditions imposed at the surface and top of atmosphere. Specific information about each of the variables provided is located in the "usage notes" section below. Associated article abstract: The hydrologic cycle couples the Earth's energy and carbon budgets through evaporation, moisture transport, and precipitation. Despite a wealth of observations and models, fundamental limitations remain in our capacity to deduce even the most basic properties of the hydrological cycle, including the spatial pattern of the residence time (RT) of water in the atmosphere and the mean distance traveled from evaporation sources to precipitation sinks. Meanwhile, geochemical tracers such as stable water isotope ratios provide a tool to probe hydrological processes, yet their interpretation remains equivocal despite several decades of use. As a result, there is a need for new mechanistic tools that link variations in water isotope ratios to underlying hydrological processes. Here we present a new suite of “process-oriented tags,” which we use to explicitly trace hydrological processes within the isotopically enabled Community Atmosphere Model, version 6 (iCAM6). 
Using these tags, we test the hypotheses that precipitation isotope ratios respond to parcel rainout, variations in atmospheric RT, and preserve information regarding meteorological conditions during evaporation. We present results for a historical simulation from 1980 to 2004, forced with winds from the ERA5 reanalysis. We find strong evidence that precipitation isotope ratios record information about atmospheric rainout and meteorological conditions during evaporation, but little evidence that precipitation isotope ratios vary with water vapor RT. These new tracer methods will enable more robust linkages between observations of isotope ratios in the modern hydrologic cycle or proxies of past terrestrial environments and the environmental processes underlying these observations.   Details about the simulation setup can be found in section 3 of the associated open-source manuscript, "Enhancing understanding of the hydrological cycle via pairing of process‐oriented and isotope ratio tracers." In brief, we conducted two simulations of the atmosphere from 1980-2004 using the isotope-enabled version of the Community Atmosphere Model 6 (iCAM6) at 0.9x1.25° horizontal resolution, and with 30 vertical hybrid layers spanning from the surface to ~3 hPa. In the first simulation, wind and surface pressure fields were "nudged" toward the ERA5 reanalysis dataset by adding a nudging tendency, preventing the model from diverging from observed/reanalysis wind fields. In the second simulation, no additional nudging tendency was included, and the model was allowed to evolve 'freely' with only boundary conditions provided at the top (e.g., incoming solar radiation) and bottom (e.g., observed sea surface temperatures) of the model. In addition to the isotopic variables, our simulation included a suite of 'process-oriented tracers,' which we describe in section 2 of the manuscript. 
These variables are meant to track a property of water associated with evaporation, condensation, or atmospheric transport. Metadata are provided about each of the files below; moreover, since the attached files are NetCDF data, this information is also provided with the data files. NetCDF metadata can be accessed using standard tools (e.g., ncdump). Each file has 4 variables: the tagged quantity and the associated coordinate variables (time, latitude, longitude). The latter three are identical across all files; only the tagged quantity changes. Twelve files are provided for the nudged simulation, and an additional three are provided for the free simulation:

Nudged simulation files:
- iCAM6_nudged_1980-2004_mon_RHevap: Mass-weighted mean evaporation source property: RH (%) with respect to surface temperature.
- iCAM6_nudged_1980-2004_mon_Tevap: Mass-weighted mean evaporation source property: surface temperature (K).
- iCAM6_nudged_1980-2004_mon_Tcond: Mass-weighted mean condensation property: temperature (K).
- iCAM6_nudged_1980-2004_mon_columnQ: Total (vertically integrated) precipitable water (kg/m2). Not a tagged quantity, but necessary to calculate depletion times in section 4.3 (e.g., Figs. 11 and 12).
- iCAM6_nudged_1980-2004_mon_d18O: Precipitation d18O (‰ VSMOW).
- iCAM6_nudged_1980-2004_mon_d18Oevap_0: Mass-weighted mean evaporation source property: d18O of the evaporative flux (i.e., the 'initial' isotope ratio prior to condensation) (‰ VSMOW).
- iCAM6_nudged_1980-2004_mon_dxs: Precipitation deuterium excess (‰ VSMOW). Note that precipitation d2H can be calculated from this file and the precipitation d18O as d2H = d-excess + 8*d18O.
- iCAM6_nudged_1980-2004_mon_dexevap_0: Mass-weighted mean evaporation source property: deuterium excess of the evaporative flux.
- iCAM6_nudged_1980-2004_mon_lnf: Integrated property: ln(f) calculated from the constant-fractionation d18O tracer (see section 3.2).
- iCAM6_nudged_1980-2004_mon_precip: Total precipitation rate (m/s). Note there is an error in the metadata in this file: it is total precipitation, not just convective precipitation.
- iCAM6_nudged_1980-2004_mon_residencetime: Mean atmospheric water residence time (days).
- iCAM6_nudged_1980-2004_mon_transportdistance: Mean atmospheric water transport distance (km).

Free simulation files:
- iCAM6_free_1980-2004_mon_d18O: Precipitation d18O (‰ VSMOW).
- iCAM6_free_1980-2004_mon_dxs: Precipitation deuterium excess (‰ VSMOW). Note that precipitation d2H can be calculated from this file and the precipitation d18O as d2H = d-excess + 8*d18O.
- iCAM6_free_1980-2004_mon_precip: Total precipitation rate (m/s). Note there is an error in the metadata in this file: it is total precipitation, not just convective precipitation.
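Recovering d2H from the dxs and d18O files is simple arithmetic. A minimal sketch, using the standard definition of deuterium excess (d-excess = d2H − 8·d18O, all values in ‰ VSMOW); the function name is ours, not part of the dataset:

```python
def d2H_from_dxs(dxs, d18O):
    """Recover precipitation d2H (per mil, VSMOW) from deuterium excess
    and d18O, using the standard definition d-excess = d2H - 8 * d18O."""
    return dxs + 8.0 * d18O

# Illustrative (not dataset) values: d18O = -10.0 per mil, dxs = 10.0 per mil
d2H = d2H_from_dxs(10.0, -10.0)  # -> -70.0 per mil
```

In practice the same expression would be applied gridpoint-by-gridpoint to the dxs and d18O NetCDF variables.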
  4. This data set for the manuscript entitled "Design of Peptides that Fold and Self-Assemble on Graphite" includes all files needed to run and analyze the simulations described in this manuscript in the molecular dynamics software NAMD, as well as the output of the simulations. The files are organized into directories corresponding to the figures of the main text and supporting information. They include molecular model structure files (NAMD psf or Amber prmtop format), force field parameter files (in CHARMM format), initial atomic coordinates (pdb format), NAMD configuration files, Colvars configuration files, NAMD log files, and NAMD output including restart files (in binary NAMD format) and trajectories in dcd format (downsampled to 10 ns per frame). Analysis is controlled by shell scripts (Bash-compatible) that call VMD Tcl scripts or python scripts. These scripts and their output are also included.

    Version: 2.0

    Changes versus version 1.0 are the addition of the free energy of folding, adsorption, and pairing calculations (Sim_Figure-7) and shifting of the figure numbers to accommodate this addition.


    Conventions Used in These Files
    ===============================

    Structure Files
    ----------------
    - graph_*.psf or sol_*.psf (original NAMD (XPLOR?) format psf file including atom details (type, charge, mass), as well as definitions of bonds, angles, dihedrals, and impropers for each dipeptide.)

    - graph_*.pdb or sol_*.pdb (initial coordinates before equilibration)
    - repart_*.psf (same as the above psf files, but the masses of non-water hydrogen atoms have been repartitioned by VMD script repartitionMass.tcl)
    - freeTop_*.pdb (same as the above pdb files, but the carbons of the lower graphene layer have been placed at a single z value and marked for restraints in NAMD)
    - amber_*.prmtop (combined topology and parameter files for Amber force field simulations)
    - repart_amber_*.prmtop (same as the above prmtop files, but the masses of non-water hydrogen atoms have been repartitioned by ParmEd)

    Force Field Parameters
    ----------------------
    CHARMM format parameter files:
    - par_all36m_prot.prm (CHARMM36m FF for proteins)
    - par_all36_cgenff_no_nbfix.prm (CGenFF v4.4 for graphene) The NBFIX parameters are commented out since they are only needed for aromatic halogens and we use only the CG2R61 type for graphene.
    - toppar_water_ions_prot_cgenff.str (CHARMM water and ions with NBFIX parameters needed for protein and CGenFF included and others commented out)

    Template NAMD Configuration Files
    ---------------------------------
    These contain the most commonly used simulation parameters. They are called by the other NAMD configuration files (which are in the namd/ subdirectory):
    - template_min.namd (minimization)
    - template_eq.namd (NPT equilibration with lower graphene fixed)
    - template_abf.namd (for adaptive biasing force)

    Minimization
    -------------
    - namd/min_*.0.namd

    Equilibration
    -------------
    - namd/eq_*.0.namd

    Adaptive biasing force calculations
    -----------------------------------
    - namd/eabfZRest7_graph_chp1404.0.namd
    - namd/eabfZRest7_graph_chp1404.1.namd (continuation of eabfZRest7_graph_chp1404.0.namd)

    Log Files
    ---------
    For each NAMD configuration file given in the last two sections, there is a log file with the same prefix, which gives the text output of NAMD. For instance, the output of namd/eabfZRest7_graph_chp1404.0.namd is eabfZRest7_graph_chp1404.0.log.

    Simulation Output
    -----------------
    The simulation output files (which match the names of the NAMD configuration files) are in the output/ directory. Files with the extensions .coor, .vel, and .xsc are, respectively, coordinates in NAMD binary format, velocities in NAMD binary format, and extended system information (including cell size) in text format. Files with the extension .dcd give the trajectory of the atomic coordinates over time (and also include system cell information). Due to storage limitations, large DCD files have been omitted or replaced with new DCD files having the prefix stride50_, which include only every 50th frame. The time between frames in these files is 50 * 50000 steps/frame * 4 fs/step = 10 ns. For the NPT runs, the system cell trajectory is also included as output/eq_*.xst.
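The frame-spacing arithmetic above can be checked directly; the values below are taken from the text, and the variable names are ours:

```python
# Frame spacing of the stride50_ DCD files:
# 50 original frames kept per output frame * 50000 steps/frame * 4 fs/step
stride = 50
steps_per_frame = 50_000
fs_per_step = 4
fs_per_ns = 1_000_000

time_per_frame_ns = stride * steps_per_frame * fs_per_step / fs_per_ns
assert time_per_frame_ns == 10.0  # 10 ns per frame, as stated above
```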

    Scripts
    -------
    Files with the .sh extension can be found throughout. These usually provide the highest level control for submission of simulations and analysis. Look to these as a guide to what is happening. If there are scripts with step1_*.sh and step2_*.sh, they are intended to be run in order, with step1_*.sh first.
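The step1_*.sh / step2_*.sh ordering convention can also be applied programmatically when automating runs over many directories. A minimal sketch (the function is ours, not part of the data set; it only orders the names, it does not execute anything):

```python
import re

def ordered_steps(script_names):
    """Sort step scripts so step1_*.sh precedes step2_*.sh, etc."""
    def step_number(name):
        m = re.match(r"step(\d+)_.*\.sh$", name)
        return int(m.group(1)) if m else 0
    return sorted(script_names, key=step_number)

steps = ordered_steps(["step2_analyze.sh", "step1_submit.sh"])
# -> ['step1_submit.sh', 'step2_analyze.sh']
```

The sorted list could then be fed to a job runner that executes each script in turn.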


    CONTENTS
    ========

    The directory contents are as follows. The directories Sim_Figure-1 and Sim_Figure-8 include README.txt files that describe the files and naming conventions used throughout this data set.

    Sim_Figure-1: Simulations of N-acetylated C-amidated amino acids (Ac-X-NHMe) at the graphite–water interface.

    Sim_Figure-2: Simulations of different peptide designs (including acyclic, disulfide cyclized, and N-to-C cyclized) at the graphite–water interface.

    Sim_Figure-3: MM-GBSA calculations of different peptide sequences for a folded conformation and 5 misfolded/unfolded conformations.

    Sim_Figure-4: Simulation of four peptide molecules with the sequence cyc(GTGSGTG-GPGG-GCGTGTG-SGPG) at the graphite–water interface at 370 K.

    Sim_Figure-5: Simulation of four peptide molecules with the sequence cyc(GTGSGTG-GPGG-GCGTGTG-SGPG) at the graphite–water interface at 295 K.

    Sim_Figure-5_replica: Temperature replica exchange molecular dynamics simulations for the peptide cyc(GTGSGTG-GPGG-GCGTGTG-SGPG) with 20 replicas for temperatures from 295 to 454 K.

    Sim_Figure-6: Simulation of the peptide molecule cyc(GTGSGTG-GPGG-GCGTGTG-SGPG) in free solution (no graphite).

    Sim_Figure-7: Free energy calculations for folding, adsorption, and pairing for the peptide CHP1404 (sequence: cyc(GTGSGTG-GPGG-GCGTGTG-SGPG)). For folding, we calculate the PMF as a function of RMSD by replica-exchange umbrella sampling (in the subdirectory Folding_CHP1404_Graphene/). We make the same calculation in solution, which required 3 separate replica-exchange umbrella sampling calculations (in the subdirectory Folding_CHP1404_Solution/). Both PMF-versus-RMSD calculations for the scrambled peptide are in Folding_scram1404/. For adsorption, the calculation of the PMF for the orientational restraints and the calculation of the PMF along z (the distance between the graphene sheet and the center of mass of the peptide) are in Adsorption_CHP1404/ and Adsorption_scram1404/. The actual calculation of the free energy is done by a shell script ("doRestraintEnergyError.sh") in the 1_free_energy/ subsubdirectory. Processing of the PMFs must be done first in the 0_pmf/ subsubdirectory. Finally, files for free energy calculations of pair formation for CHP1404 are found in the Pair/ subdirectory.

    Sim_Figure-8: Simulation of four peptide molecules with the sequence cyc(GTGSGTG-GPGG-GCGTGTG-SGPG) where the peptides are far above the graphene–water interface in the initial configuration.

    Sim_Figure-9: Two replicates of a simulation of nine peptide molecules with the sequence cyc(GTGSGTG-GPGG-GCGTGTG-SGPG) at the graphite–water interface at 370 K.

    Sim_Figure-9_scrambled: Two replicates of a simulation of nine peptide molecules with the control sequence cyc(GGTPTTGGGGGGSGGPSGTGGC) at the graphite–water interface at 370 K.

    Sim_Figure-10: Adaptive biasing for calculation of the free energy of the folded peptide as a function of the angle between its long axis and the zigzag directions of the underlying graphene sheet.

     

    This material is based upon work supported by the US National Science Foundation under grant no. DMR-1945589. A majority of the computing for this project was performed on the Beocat Research Cluster at Kansas State University, which is funded in part by NSF grants CHE-1726332, CNS-1006860, EPS-1006860, and EPS-0919443. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562, through allocation BIO200030. 
  5. Abstract. Antarctic ice shelves are vulnerable to warming ocean temperatures, and some have already begun thinning in response to increased basal melt rates. Sea level is therefore expected to rise due to Antarctic contributions, but uncertainties in its amount and timing remain largely unquantified. In particular, there is substantial uncertainty in future basal melt rates arising from multi-model differences in thermal forcing and how melt rates depend on that thermal forcing. To facilitate uncertainty quantification in sea level rise projections, we build, validate, and demonstrate projections from a computationally efficient statistical emulator of a high-resolution (4 km) Antarctic ice sheet model, the Community Ice Sheet Model version 2.1. The emulator is trained to a large (500-member) ensemble of 200-year-long 4 km resolution transient ice sheet simulations, whereby regional basal melt rates are perturbed by idealized (yet physically informed) trajectories. The main advantage of our emulation approach is that by sampling a wide range of possible basal melt trajectories, the emulator can be used to (1) produce probabilistic sea level rise projections over much larger Monte Carlo ensembles than are possible by direct numerical simulation alone, thereby providing better statistical characterization of uncertainties, and (2) predict the simulated ice sheet response under differing assumptions about basal melt characteristics as new oceanographic studies are published, without having to run additional numerical ice sheet simulations. As a proof of concept, we propagate uncertainties about future basal melt rate trajectories, derived from regional ocean models, to generate probabilistic sea level rise estimates for 100 and 200 years into the future.