Title: Community Workflows to Advance Reproducibility in Hydrologic Modeling: Separating Model‐Agnostic and Model‐Specific Configuration Steps in Applications of Large‐Domain Hydrologic Models
Abstract: Despite the proliferation of computer‐based research on hydrology and water resources, such research is typically poorly reproducible. Published studies have low reproducibility due to incomplete availability of data and computer code, and a lack of documentation of workflow processes. This leads to a lack of transparency and efficiency because existing code can neither be quality controlled nor reused. Given the commonalities between existing process‐based hydrologic models in terms of their required input data and preprocessing steps, open sharing of code can lead to large efficiency gains for the modeling community. Here, we present a model configuration workflow that provides full reproducibility of the resulting model instantiations in a way that separates the model‐agnostic preprocessing of specific data sets from the model‐specific requirements that models impose on their input files. We use this workflow to create large‐domain (global and continental) and local configurations of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model connected to the mizuRoute routing model. These examples show how a relatively complex model setup over a large domain can be organized in a reproducible and structured way that has the potential to accelerate advances in hydrologic modeling for the community as a whole. We provide a tentative blueprint of how community modeling initiatives can be built on top of workflows such as this. We term our workflow the “Community Workflows to Advance Reproducibility in Hydrologic Modeling” (CWARHM; pronounced “swarm”).
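To make the model‐agnostic/model‐specific split concrete, the sketch below shows one way such a two‐stage workflow could be organized in Python. All function, directory, and control‐setting names here are hypothetical illustrations of the idea described in the abstract, not code taken from the CWARHM repository.

```python
# Illustrative sketch only: function, key, and directory names are hypothetical
# and are not taken from the actual CWARHM code base.
from pathlib import Path


def model_agnostic_steps(control: dict) -> Path:
    """Dataset-oriented preprocessing any hydrologic model could reuse,
    e.g., subsetting forcing data and deriving catchment attributes."""
    staging = Path(control["root_path"]) / "shared_data"
    staging.mkdir(parents=True, exist_ok=True)
    # ... download/subset forcing, delineate catchments, remap geospatial fields ...
    return staging


def model_specific_steps(control: dict, staging: Path) -> None:
    """Translate the shared data into the input files one particular model
    chain (here: SUMMA + mizuRoute) expects."""
    run_dir = Path(control["root_path"]) / "summa_mizuroute_setup"
    run_dir.mkdir(parents=True, exist_ok=True)
    # ... write SUMMA attribute/forcing files and mizuRoute topology files ...


if __name__ == "__main__":
    control = {"root_path": "./example_domain"}  # stand-in for a single control file
    shared = model_agnostic_steps(control)
    model_specific_steps(control, shared)
```

Keeping the first stage free of model‐specific assumptions is what allows other models to reuse the same preprocessed data, which is the efficiency gain the abstract argues for.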
Award ID(s):
1835569, 1928369
PAR ID:
10382361
Author(s) / Creator(s):
Publisher / Repository:
DOI PREFIX: 10.1029
Date Published:
Journal Name:
Water Resources Research
Volume:
58
Issue:
11
ISSN:
0043-1397
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Krzhizhanovskaya, Valeria V.; Závodszky, Gábor; Lees, Michael H.; Dongarra, Jack J.; Sloot, Peter M.; Brissos, Sérgio; Teixeira, João (Ed.)
    The HydroFrame project is a community platform designed to facilitate integrated hydrologic modeling across the US. As a part of HydroFrame, we seek to design innovative workflow solutions that create pathways to enable hydrologic analysis for three target user groups: the modeler, the analyzer, and the domain science educator. We present the initial progress on the HydroFrame community platform using an automated Kepler workflow. This workflow performs end-to-end hydrology simulations involving data ingestion, preprocessing, analysis, modeling, and visualization. We demonstrate how different modules of the workflow can be reused and repurposed for the three target user groups. The Kepler workflow ensures complete reproducibility through a built-in provenance framework that collects workflow specific parameters, software versions, and hardware system configuration. In addition, we aim to optimize the utilization of large-scale computational resources to adjust to the needs of all three user groups. Towards this goal, we present a design that leverages provenance data and machine learning techniques to predict performance and forecast failures using an automatic performance collection component of the pipeline. 
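As a rough illustration of the kind of provenance record described above (workflow parameters, software versions, hardware configuration), the sketch below collects comparable metadata with the Python standard library. It is not the Kepler provenance framework itself, and the field names are assumptions.

```python
# Hedged illustration: a generic provenance record similar in spirit to what the
# abstract describes; not the Kepler built-in provenance framework.
import json
import platform
import sys
from datetime import datetime, timezone


def collect_provenance(workflow_params: dict) -> dict:
    """Bundle workflow parameters with software and hardware details."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow_parameters": workflow_params,
        "software": {"python": sys.version},
        "hardware": {
            "machine": platform.machine(),
            "processor": platform.processor(),
            "system": f"{platform.system()} {platform.release()}",
        },
    }


if __name__ == "__main__":
    record = collect_provenance({"domain": "example_basin", "timestep_h": 1})
    print(json.dumps(record, indent=2))
```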
  2. Scientific workflow management systems (WfMS) provide a systematic way to streamline necessary processes in scientific research. The demand for FAIR (Findable, Accessible, Interoperable, and Reusable) workflows is increasing in the scientific community, particularly in GIScience, where data is not just an output but an integral part of iterative advanced processes. Traditional WfMS often lack the capability to ensure geospatial data and process transparency, leading to challenges in reproducibility and replicability of research findings. This paper proposes the conceptualization and development of FAIR-oriented GIScience WfMS, aiming to incorporate the FAIR principles into the entire lifecycle of geospatial data processing and analysis. To enhance the findability and accessibility of workflows, the WfMS utilizes Harvard Dataverse to share all workflow-related digital resources, organized into workflow datasets, nodes, and case studies. Each resource is assigned a unique DOI (Digital Object Identifier), ensuring easy access and discovery. More importantly, the WfMS complies with the Common Workflow Language (CWL) standard to guarantee interoperability and reproducibility of workflows. It also enables the integration of diverse tools and software, supporting complex analyses that require multiple processing steps. This paper demonstrates the prototype of the GIScience WfMS and illustrates two geospatial science case studies, reflecting its flexibility in selecting appropriate techniques for various datasets and research goals. The user-friendly workflow designer makes it accessible to users with different levels of technical expertise, promoting reusable, reproducible, and replicable GIScience studies. 
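To give a flavor of the interoperability the CWL standard provides, the snippet below writes a minimal CWL tool description for a trivial `echo` step (CWL accepts JSON as well as YAML syntax). The tool and file names are illustrative only and are not part of the WfMS described above.

```python
# Minimal CWL CommandLineTool description, serialized as JSON (a valid CWL syntax).
import json

echo_tool = {
    "cwlVersion": "v1.2",
    "class": "CommandLineTool",
    "baseCommand": "echo",
    "inputs": {
        "message": {"type": "string", "inputBinding": {"position": 1}},
    },
    "outputs": [],
}

with open("echo_tool.cwl", "w") as fh:
    json.dump(echo_tool, fh, indent=2)

# A CWL runner such as cwltool could then execute it with:
#   cwltool echo_tool.cwl --message "hello, reproducible workflows"
```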
  3. We describe the design and implementation of DetTrace, a reproducible container abstraction for Linux implemented in user space. All computation that occurs inside a DetTrace container is a pure function of the initial filesystem state of the container. Reproducible containers can be used for a variety of purposes, including replication for fault-tolerance, reproducible software builds, and reproducible data analytics. We use DetTrace to achieve, in an automatic fashion, reproducibility for 12,130 Debian package builds, containing over 800 million lines of code, as well as bioinformatics and machine learning workflows. We show that, while software in each of these domains is initially irreproducible, DetTrace brings reproducibility without requiring any hardware, OS, or application changes. DetTrace's performance is dictated by the frequency of system calls: IO-intensive software builds have an average overhead of 3.49x, while a compute-bound bioinformatics workflow incurs less than 2% overhead.
  4. Abstract: Many have argued that datasets resulting from scientific research should be part of the scholarly record as first-class research products. Data sharing mandates from funding agencies and scientific journal publishers, along with calls from the scientific community to better support transparency and reproducibility of scientific research, have increased demand for tools and support for publishing datasets. Hydrology domain‐specific data publication services have been developed alongside more general-purpose and even commercial data repositories. Prominent among these are the Hydrologic Information System (HIS) and HydroShare repositories developed by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). More broadly, however, multiple organizations have been involved in the practice of data publication in the hydrology domain, each having different roles that have shaped data publication and reuse. Bibliographic and archival approaches to data publication have been advanced, but both have limitations with respect to hydrologic data. Specific recommendations for improving data publication infrastructure, support, and practices to move beyond existing limitations and enable more effective data publication in support of scientific research in the hydrology domain include: improving support for journal article‐based data access and data citation, considering the workflow for data publication, enhancing support for reproducible science, encouraging publication of curated reference data collections, advancing interoperability standards for sharing data and metadata among repositories, developing partnerships with university libraries offering data services, and developing more specific data management plans. While presented in the context of CUAHSI's data repositories and experience, these recommendations are broadly applicable to other domains. This article is categorized under: Science of Water > Methods
  5. Constructing and executing reproducible workflows is fundamental to performing research in a variety of scientific domains. Many of the current commercial and open source solutions for workflow engineering impose constraints—either technical or budgetary—upon researchers, requiring them to use their limited funding on expensive cloud platforms or spend valuable time acquiring knowledge of software systems and processes outside of their domain expertise. Even though many commercial solutions offer free-tier services, they often do not meet the resource and architectural requirements (memory, data storage, compute time, networking, etc.) for researchers to run their workflows effectively at scale. Tapis Workflows abstracts away the complexities of workflow creation and execution behind a web-based API with a simplified workflow model composed of only pipelines and tasks. This paper will detail how Tapis Workflows approaches workflow management by exploring its domain model, the technologies used, application architecture, design patterns, how organizations are leveraging Tapis Workflows to solve unique problems in their scientific workflows, and this project's vision for a simple, open source, extensible, and easily deployable workflow engine.
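The sketch below mocks up a "pipelines and tasks" model of the kind this abstract describes, purely to illustrate the simplified workflow abstraction; the class names, fields, and ordering logic are hypothetical and do not reflect the actual Tapis Workflows schema or API.

```python
# Hypothetical domain-model sketch: a pipeline is just an ordered collection of
# tasks with declared dependencies. Not the Tapis Workflows implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Task:
    id: str
    image: str                      # container image that runs this step
    depends_on: List[str] = field(default_factory=list)


@dataclass
class Pipeline:
    id: str
    tasks: List[Task] = field(default_factory=list)

    def execution_order(self) -> List[str]:
        """Naive topological ordering of tasks by their dependencies."""
        ordered, remaining = [], {t.id: set(t.depends_on) for t in self.tasks}
        while remaining:
            ready = [tid for tid, deps in remaining.items() if deps <= set(ordered)]
            if not ready:
                raise ValueError("cyclic dependencies")
            ordered.extend(ready)
            for tid in ready:
                del remaining[tid]
        return ordered


if __name__ == "__main__":
    pipe = Pipeline("ingest-and-model", [
        Task("ingest", "example/ingest:latest"),
        Task("simulate", "example/model:latest", depends_on=["ingest"]),
    ])
    print(pipe.execution_order())   # ['ingest', 'simulate']
```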