


Title: Modeling dataset: Long-term Change in Metabolism Phenology across North-Temperate Lakes, Wisconsin, USA 1979-2019
This dataset includes the model configurations, scripts, and outputs needed to reproduce the results of Ladwig et al. (2021): Long-term Change in Metabolism Phenology across North-Temperate Lakes. The provided scripts process the input data from various sources and recreate the figures from the manuscript. In addition, all output data from the metabolism models of Allequash, Big Muskellunge, Crystal, Fish, Mendota, Monona, Sparkling, and Trout lakes are included.
Award ID(s):
1759865
PAR ID:
10439094
Publisher / Repository:
Environmental Data Initiative
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Scripts, model configurations, and outputs to process the data and recreate the figures from Ladwig, R., Rock, L.A., Dugan, H.A. (-): Impact of salinization on lake stratification and spring mixing. This repository includes the setup and output from the lake model ensemble (GLM, GOTM, Simstrat) run on lakes Mendota and Monona. Scripts to run the models are located under /numerical, and the scripts that process the results for the discussion of the paper are in the top-level directory of the repository. The scripts to derive the theoretical solution are located under /analytical, buoy monitoring data under /fieldmonitoring, and the final figures under /figs_HD.
  2. This dataset stores the data of the article "The effect of Pliocene regional climate changes on silicate weathering: a potential amplifier of Pliocene-Pleistocene cooling" (P. Maffre, J. Chiang & N. Swanson-Hysell, Climate of the Past). This study uses a climate model (GCM) to reproduce an estimate of Pliocene sea surface temperature (SST). The main GCM outputs of this modeling (with a slab ocean model) are stored in "GCM_outputs_for_GEOCLIM/", as are the climatologies from the ERA5 reanalysis. The other GCM outputs used in intermediary steps (coupled ocean-atmosphere and fixed-SST simulations) are stored in "other_GCM_outputs/". The forcing files (Q-flux) and other boundary conditions needed to run the "main" GCM simulations can be found in "other_GCM_outputs/Q-flux_derivation/", along with the scripts used to generate them. Secondly, the study uses the GCM outputs in "GCM_outputs_for_GEOCLIM/" as inputs for the silicate weathering model GEOCLIM-DynSoil-Steady-State (https://github.com/piermafrost/GEOCLIM-dynsoil-steady-state/tree/PEN) to investigate weathering and equilibrium CO2 changes due to Pliocene SST conditions. The results of these simulations are stored in "GEOCLIM-DynSoil-Steady-State_outputs/". The purpose of this dataset is to provide the raw outputs used to draw the conclusions of Maffre et al. (2023) and to allow the experiments to be reproduced by providing the scripts that generate the boundary conditions.
  3. Datasets are often derived by manipulating raw data with statistical software packages. The derivation of a dataset must be recorded in terms of both the raw input and the manipulations applied to it. Statistics packages typically provide limited help in documenting provenance for the resulting derived data. At best, the operations performed by the statistical package are described in a script, and disparate script representations make these operations hard for users to understand. To address these challenges, we created Continuous Capture of Metadata (C2Metadata), a system that captures data transformations in scripts for statistical packages and represents them as metadata in a standard, easy-to-understand format. We do so by devising a Structured Data Transformation Algebra (SDTA), which uses a small set of algebraic operators to express a large fraction of the data manipulations performed in practice. We then implement SDTA, inspired by relational algebra, in a data transformation specification language we call SDTL. In this demonstration, we showcase C2Metadata's capture of data transformations from a pool of sample transformation scripts in at least two languages: SPSS® and Stata® (SAS® and R are under development), for social science data in a large academic repository. We will allow the audience to explore C2Metadata using a web-based interface, visualize the intermediate steps, and trace the provenance and changes of data at different levels for a better understanding of the process.
  4. Structured Data Transformation Language (SDTL) provides structured, machine-actionable representations of the data transformation commands found in statistical analysis software. The Continuous Capture of Metadata for Statistical Data Project (C2Metadata) created SDTL as part of an automated system that captures provenance metadata from data transformation scripts and adds variable derivations to standard metadata files. SDTL also has potential for auditing scripts and for translating scripts between languages. SDTL is expressed in a set of JSON schemas, which are machine actionable and easily serialized to other formats. Statistical software languages have a number of special features that have been carried into SDTL. We explain how SDTL handles differences among statistical languages and complex operations, such as merging files and reshaping data tables from "wide" to "long".
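
The SDTA/SDTL items above describe mapping statistical-package commands to a small set of algebraic operators serialized as JSON. As a rough illustration of that idea only — the operator and field names below are invented for this sketch and are not the actual SDTL JSON schema — a variable-derivation command might be captured as a structured provenance record like this:

```python
import json

def capture_compute(script_line, new_var, expression, source_vars):
    """Represent a variable-derivation command as a structured record.

    A hypothetical, simplified take on the SDTA idea: the original command
    is kept for auditing, while the derived variable, the variables it
    reads, and the normalized expression become machine-readable fields.
    """
    return {
        "$type": "Compute",          # algebraic operator (assumed name)
        "sourceLine": script_line,   # original command, for auditing
        "produces": new_var,         # variable derived by the command
        "consumes": source_vars,     # variables the expression reads
        "expression": expression,    # right-hand side of the derivation
    }

# Example: capturing the Stata command `generate bmi = weight / (height^2)`
record = capture_compute(
    script_line="generate bmi = weight / (height^2)",
    new_var="bmi",
    expression="weight / (height^2)",
    source_vars=["weight", "height"],
)

print(json.dumps(record, indent=2))
```

Because the record is plain JSON, it can be attached to standard metadata files or compared across source languages: an SPSS `COMPUTE` and a Stata `generate` that derive the same variable would map to the same operator, which is what makes cross-language provenance tracing possible.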