Title: Open Science Expectations for Simulation-Based Research
There is strong agreement across the sciences that replicable workflows are needed for computational modeling. Open and replicable workflows not only strengthen public confidence in the sciences, but also result in more efficient community science. However, the massive size and complexity of geoscience simulation outputs, as well as the large cost to produce and preserve these outputs, present problems related to data storage, preservation, duplication, and replication. The simulation workflows themselves present additional challenges related to usability, understandability, documentation, and citation. These challenges make it difficult for researchers to meet the bewildering variety of data management requirements and recommendations across research funders and scientific journals. This paper introduces initial outcomes and emerging themes from the EarthCube Research Coordination Network project titled “What About Model Data? - Best Practices for Preservation and Replicability,” which is working to develop tools to assist researchers in determining what elements of geoscience modeling research should be preserved and shared to meet evolving community open science expectations. Specifically, the paper offers approaches to address the following key questions:
• How should preservation of model software and outputs differ for projects that are oriented toward knowledge production vs. projects oriented toward data production?
• What components of dynamical geoscience modeling research should be preserved and shared?
• What curation support is needed to enable sharing and preservation for geoscience simulation models and their output?
• What cultural barriers impede geoscience modelers from making progress on these topics?
Award ID(s):
1929757
NSF-PAR ID:
10356857
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Frontiers in Climate
Volume:
3
ISSN:
2624-9553
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    It has become common for researchers to make their data publicly available to meet the data management and accessibility requirements of funding agencies and scientific publishers. However, many researchers face the challenge of determining what data to preserve and share, and where to preserve and share those data. This can be especially challenging for those who run dynamical models, which can produce complex, voluminous outputs, and who may not have considered what outputs need to be preserved and shared as part of the project design. This manuscript presents findings from the NSF EarthCube Research Coordination Network project titled “What About Model Data? Best Practices for Preservation and Replicability” (https://modeldatarcn.github.io/). These findings suggest that if the primary goal of sharing data is to communicate knowledge, most simulation-based research projects only need to preserve and share selected model outputs along with the full simulation experiment workflow. One major result of this project has been the development of a rubric designed to provide guidance for deciding what simulation output needs to be preserved and shared in trusted community repositories to achieve the goal of knowledge communication. This rubric, along with use cases for selected projects, provides scientists with guidance on data accessibility requirements during research planning, allowing for more thoughtful development of data management plans and funding requests. Publishers can also refer to this rubric to set expectations for data accessibility at publication.

     
  2. Adoption of data- and compute-intensive research in the geosciences is hindered by the same social and technological barriers as in other science disciplines - we're humans after all. As a result, many of the new opportunities to advance science in today's rapidly evolving technology landscape remain out of reach for domain geoscientists. Organizations must acknowledge and actively mitigate these intrinsic biases and knowledge gaps in their users and staff. Over the past ten years, CyVerse (www.cyverse.org) has carried out the mission "to design, deploy, and expand a national cyberinfrastructure for life sciences research, and to train scientists in its use." During this time, CyVerse has supported and enabled transdisciplinary collaborations across institutions and communities, overseen many successes, and encountered failures. Our lessons learned in user engagement, both social and technical, are germane to the problems facing the geoscience community today. A key element of overcoming social barriers is to set up an effective education, outreach, and training (EOT) team to drive initial adoption as well as continued use. A strong EOT group can reach new users, particularly those in under-represented communities, reduce power-distance relationships, and mitigate users' uncertainty avoidance toward adopting new technology. Timely user support across the life of a project, based on mutual respect between the developers' and researchers' different skill sets, is critical to successful collaboration. Without support, users become frustrated and abandon research questions whose technical issues require solutions that are 'simple' from a developer's perspective but unknown to the scientist. At CyVerse, we have found there is no one solution that fits all research challenges. Our strategy has been to maintain a system of systems (SoS) where users can choose 'lego blocks' to build a solution that matches their problem. This SoS ideology has allowed CyVerse users to extend and scale workflows without becoming entangled in problems that reduce productivity and slow scientific discovery. Likewise, CyVerse addresses the handling of data through its entire lifecycle, from creation to publication to future reuse, supporting both community-driven big data projects and individual researchers.
  3. The Twitter-Based Knowledge Graph for Researchers project is an effort to construct a knowledge graph of computation-based tasks and corresponding outputs, to be used by subject matter experts, statisticians, and developers. A knowledge graph is a directed graph of knowledge accumulated from a variety of sources. For our application, Subject Matter Experts (SMEs) are experts in their respective non-computer-science fields, but are not necessarily experienced with running heavy computation on datasets. As a result, they find it difficult to generate workflows for their projects involving Twitter data and advanced analysis. Workflow management systems and libraries that facilitate computation are only practical when the users of these systems understand what analysis they need to perform. Our goal is to bridge this gap in understanding. Our queryable knowledge graph will generate a visual workflow for these experts and researchers to achieve their project goals. After meeting with our client, we established two primary deliverables. First, we needed to create an ontology covering the Twitter-related questions an SME might want to answer; an ontology is simply the class structure/schema for the graph. Second, we needed to build a knowledge graph based on this ontology and produce a set of APIs that trigger network algorithms based on the information queried from the graph. In subsequent meetings, we established more specific requirements. Most importantly, the client stressed that users should be able to bring their own data and add it to our knowledge graph; as more research is completed and new technologies are released, it will be important to be able to edit and extend the graph. Next, we must be able to provide metrics about the data itself. These metrics will be useful both for our own work and for future research on graph search problems and search optimization. Additionally, our system should provide users with information about the original domain the algorithms and workflows were run against, so they can choose the best workflow for their data. The project team first conducted a literature review, reading reports from the CS5604 Information Retrieval courses in 2016 and 2017 to extract information related to Twitter data and algorithms. This information was used to construct our raw ontology in Google Sheets, which contained a set of dataset-algorithm-dataset tuples. The raw ontology was then converted into nodes and edges CSV files for building the knowledge graph. After implementing our original solution on a CentOS virtual machine hosted by the Virginia Tech Department of Computer Science, we transitioned our solution to Grakn, an open-source knowledge graph database that supports hypergraph functionality. When finalizing our workflow paths, we noted that some nodes depended on the completion of two or more inputs, representing an "AND" edge. This phenomenon is modeled as a hyperedge in Grakn, motivating our transition from Neo4j to Grakn. Currently, our system supports queries through the console, where a user can type a Graql statement to retrieve information about data in the graph, from relationships to entities to derived rules. The user can also interact with the data via Grakn's data visualizer, Workbase, entering Graql queries to visualize connections within the knowledge graph.
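    As a rough illustration of the dataset-algorithm-dataset structure described above, here is a minimal sketch that converts such tuples into nodes and edges CSV files and loads them back as a directed graph. The tuple values and file names are hypothetical, and networkx stands in for the Grakn/Neo4j databases the project actually used.

```python
# Sketch: convert dataset-algorithm-dataset tuples into nodes/edges CSV
# files and load them as a directed graph. Tuples and file names are
# hypothetical; networkx stands in for the project's graph database.
import csv
import networkx as nx

# Hypothetical tuples in the style of the raw ontology.
tuples = [
    ("raw_tweets", "tokenize", "tokenized_tweets"),
    ("tokenized_tweets", "sentiment_analysis", "sentiment_scores"),
    ("raw_tweets", "build_retweet_network", "retweet_graph"),
]

# Write the node list (every dataset that appears in any tuple).
nodes = sorted({t[0] for t in tuples} | {t[2] for t in tuples})
with open("nodes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "type"])
    writer.writerows([n, "dataset"] for n in nodes)

# Write the edge list; each edge records the algorithm that transforms
# the source dataset into the target dataset.
with open("edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["source", "target", "algorithm"])
    writer.writerows([src, dst, algo] for src, algo, dst in tuples)

# Load the edges back into a directed graph.
g = nx.DiGraph()
with open("edges.csv", newline="") as f:
    for row in csv.DictReader(f):
        g.add_edge(row["source"], row["target"], algorithm=row["algorithm"])

# A workflow for an SME is then a path through the graph.
print(nx.shortest_path(g, "raw_tweets", "sentiment_scores"))
```

    One limitation worth noting: a plain directed graph cannot express the "AND" dependencies mentioned above, which is exactly why the project moved to a hypergraph-capable store like Grakn.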
  4. Cross-cultural research provides invaluable information about the origins of and explanations for cognitive and behavioral diversity. Interest in cross-cultural research is growing, but the field continues to be dominated by WEIRD (Western, Educated, Industrialized, Rich, and Democratic) researchers conducting WEIRD science with WEIRD participants, using WEIRD protocols. To make progress toward improving cognitive and behavioral science, we argue that the field needs (1) data workflows and infrastructures to support long-term, high-quality research that is compliant with open-science frameworks; (2) process and participation standards to ensure research is valid, equitable, participatory, and inclusive; (3) training opportunities and resources to ensure the highest standards of proficiency, ethics, and transparency in data collection and processing. Here we discuss infrastructures for cross-cultural research in the cognitive and behavioral sciences, which we call Cross-Cultural Data Infrastructures (CCDIs). We recommend building global networks of psychologists, anthropologists, demographers, experimental philosophers, educators, and cognitive, learning, and data scientists to distill their procedural and methodological knowledge into a set of community standards. We identify key challenges, including protocol validity, researcher diversity, community inclusion, and lack of detail in reporting quality assurance and quality control (QAQC) workflows. Our objective is to promote dialogue and help consolidate robust solutions by working with a broad research community to improve the efficiency and quality of cross-cultural research.
  5. This is a story about the challenges and opportunities that surfaced while answering a deceptively complex question - where's the data? As faculty and researchers publish articles, datasets, and other research outputs to meet promotion and tenure requirements, federal funding policies, and institutional open access and data sharing policies, many online locations for publishing these materials have developed over time. How can we capture where all of the research generated on an academic campus is shared and preserved? This presentation will discuss how our multi-institution collaboration, the Reality of Academic Data Sharing (RADS) Initiative, sought to answer this question. We programmatically pulled DOIs from DataCite and CrossRef, making the naive assumption that these platforms, the two predominant DOI registration agencies for US data, would present us with a neutral and unbiased view of where data from our affiliated researchers were shared. However, as we dug into the data, we found inconsistencies in the use and completeness of the metadata fields necessary for our questions, as well as differences in how DOIs were assigned across repositories. Additionally, we recognized the systematic and privileged bias introduced by our choice of data sources. Specifically, while DataCite and CrossRef provide easy discovery of research outputs because they aggregate DOIs, they are also costly commercial services. Many repositories that cannot afford such services, or lack the local staffing and knowledge required to use them, are left out of the technology that has recently been labeled “global research infrastructure”. Our presentation will identify the challenges we encountered in conducting this research, specifically around finding, cleaning, and interpreting the data. We will further engage the audience in a discussion around increasing representation in the global research infrastructure to discover and account for more research outputs.
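    As a hedged illustration of the harvesting step described above, the sketch below queries the public CrossRef and DataCite REST APIs for DOIs associated with an affiliation string. The institution name and metadata field choices are illustrative assumptions, not the RADS Initiative's actual code, and a real harvest would need paging, retries, and far more careful affiliation matching.

```python
# Sketch: pull DOIs for one institution from the public CrossRef and
# DataCite REST APIs. The affiliation string is hypothetical, and the
# field choices are illustrative, not the RADS project's actual queries.
import requests

AFFILIATION = "Example State University"  # hypothetical institution

# CrossRef: mostly publication DOIs, via a free-text affiliation query.
crossref = requests.get(
    "https://api.crossref.org/works",
    params={"query.affiliation": AFFILIATION, "rows": 20},
    timeout=30,
)
crossref_dois = [item["DOI"] for item in crossref.json()["message"]["items"]]

# DataCite: mostly dataset DOIs, via a field query against creator
# affiliation metadata (where the registering repository supplied it).
datacite = requests.get(
    "https://api.datacite.org/dois",
    params={
        "query": f'creators.affiliation.name:"{AFFILIATION}"',
        "page[size]": 20,
    },
    timeout=30,
)
datacite_dois = [record["id"] for record in datacite.json()["data"]]

print(len(crossref_dois), "CrossRef DOIs;", len(datacite_dois), "DataCite DOIs")
```

    Note that these affiliation-based queries depend on exactly the metadata fields the presentation found to be inconsistently populated, so counts returned by the two services are not directly comparable.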