

Title: Rolling Deck to Repository: Supporting the marine science community with data management services from academic research expeditions

Direct observations of the oceans acquired on oceanographic research ships operated across the international community support fundamental research into the many disciplines of ocean science and provide essential information for monitoring the health of the oceans. A comprehensive knowledge base is needed to support the responsible stewardship of the oceans with easy access to all data acquired globally. In the United States, the multidisciplinary shipboard sensor data routinely acquired each year on the fleet of coastal, regional, and global-ranging vessels supporting academic marine research are managed by the Rolling Deck to Repository (R2R, rvdata.us) program. With over a decade of operations, the R2R program has developed a robust, routinized system to transform diverse data contributions from different marine data providers into a standardized and comprehensive collection of global-ranging observations of marine atmosphere, ocean, seafloor and subseafloor properties that is openly available to the international research community. In this article we describe the elements and framework of the R2R program and the services provided. To manage all expeditions conducted annually, a fleet-wide approach has been developed using data distributions submitted from marine operators, with a data management workflow designed to maximize automation of data curation. Other design goals are to improve the completeness and consistency of the data and metadata archived, to support data citability, provenance tracking and interoperable data access aligned with FAIR (findable, accessible, interoperable, reusable) recommendations, and to facilitate delivery of data from the fleet for global data syntheses. Findings from a collection-level review of changes in data acquisition practices and quality over the past decade are presented. Lessons learned from R2R operations are also discussed, including the benefits of designing data curation around the routine practices of data providers, approaches for ensuring preservation of a more complete data collection with a high level of FAIRness, and the opportunities for homogenization of datasets from the fleet so that they can support the broadest re-use of data across a diverse user community.

 
Award ID(s):
1949707
NSF-PAR ID:
10471436
Author(s) / Creator(s):
Publisher / Repository:
Frontiers in Marine Science
Date Published:
Journal Name:
Frontiers in Marine Science
Volume:
9
ISSN:
2296-7745
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we outline the need for a coordinated international effort toward the building of an open-access Global Ocean Oxygen Database and ATlas (GO2DAT) complying with the FAIR principles (Findable, Accessible, Interoperable, and Reusable). GO2DAT will combine data from the coastal and open ocean, as measured by the chemical Winkler titration method or by sensors (e.g., optodes, electrodes) from Eulerian and Lagrangian platforms (e.g., ships, moorings, profiling floats, gliders, ships of opportunity, marine mammals, cabled observatories). GO2DAT will further adopt a community-agreed, fully documented metadata format and a consistent quality control (QC) procedure and quality flagging (QF) system. GO2DAT will serve to support the development of advanced data analysis and biogeochemical models for improving our mapping, understanding and forecasting capabilities for ocean O2 changes and deoxygenation trends. It will offer the opportunity to develop quality-controlled data synthesis products with unprecedented spatial (vertical and horizontal) and temporal (sub-seasonal to multi-decadal) resolution. These products will support model assessment, improvement and evaluation as well as the development of climate and ocean health indicators. They will further support the decision-making processes associated with the emerging blue economy, the conservation of marine resources and their associated ecosystem services and the development of management tools required by a diverse community of users (e.g., environmental agencies, aquaculture, and fishing sectors). A better knowledge base of the spatial and temporal variations of marine O2 will improve our understanding of the ocean O2 budget, and allow better quantification of the Earth’s carbon and heat budgets. With the ever-increasing need to protect and sustainably manage ocean services, GO2DAT will allow scientists to fully harness the increasing volumes of O2 data already delivered by the expanding global ocean observing system and enable smooth incorporation of much higher quantities of data from autonomous platforms in the open ocean and coastal areas into comprehensive data products in the years to come. This paper aims at engaging the community (e.g., scientists, data managers, policy makers, service users) toward the development of GO2DAT within the framework of the UN Global Ocean Oxygen Decade (GOOD) program recently endorsed by IOC-UNESCO. A roadmap toward GO2DAT is proposed, highlighting the efforts needed (e.g., in terms of human resources).
  2. The Deep Ocean Observing Strategy (DOOS) is an international, community-driven initiative that facilitates collaboration across disciplines and fields, elevates a diverse cohort of early career researchers into future leaders, and connects scientific advancements to societal needs. DOOS represents a global network of deep-ocean observing, mapping, and modeling experts, focusing community efforts in support of strong science, policy, and planning for sustainable oceans. Its initiatives work to propose deep-sea Essential Ocean Variables; assess technology development; develop shared best practices, standards, and cross-calibration procedures; and transfer knowledge to policy makers and deep-ocean stakeholders. Several of these efforts align with the vision of the UN Ocean Decade to generate the science we need to create the deep ocean we want. DOOS works toward (1) a healthy and resilient deep ocean by informing science-based conservation actions, including optimizing data delivery, creating habitat and ecological maps of critical areas, and developing regional demonstration projects; (2) a predicted deep ocean by strengthening collaborations within the modeling community and determining needs for interdisciplinary modeling and observing system assessment in the deep ocean; (3) an accessible deep ocean by enhancing open access to innovative low-cost sensors and open-source plans, making deep-ocean data Findable, Accessible, Interoperable, and Reusable, and focusing on capacity development in developing countries; and finally (4) an inspiring and engaging deep ocean by translating science to stakeholders/end users and informing policy and management decisions, including in international waters.
  3. The UN Decade of Ocean Science for Sustainable Development (Ocean Decade) challenges marine science to better inform and stimulate social and economic development while conserving marine ecosystems. To achieve these objectives, we must make our diverse methodologies more comparable and interoperable, expand global participation, and foster capacity development in ocean science through a new and coherent approach to best practice development. We present perspectives on this issue gleaned from the ongoing development of the UNESCO Intergovernmental Oceanographic Commission (IOC) Ocean Best Practices System (OBPS). The OBPS is collaborating with individuals and programs around the world to transform the way ocean methodologies are managed, in strong alignment with the outcomes envisioned for the Ocean Decade. However, significant challenges remain, including: (1) the haphazard management of methodologies across their lifecycle, (2) the ambiguous endorsement of what is “best” and when and where one method may be applicable vs. another, and (3) the inconsistent access to methodological knowledge across disciplines and cultures. To help address these challenges, we recommend that sponsors and leaders in ocean science and education promote consistent documentation and convergence of methodologies to: create and improve context-dependent best practices; incorporate contextualized best practices into Ocean Decade Actions; clarify who endorses which method and why; create a global network of complementary ocean practices systems; and ensure broader consistency and flexibility in international capacity development.
  4. Obeid, Iyad; Picone, Joseph; Selesnick, Ivan (Eds.)
    The Neural Engineering Data Consortium (NEDC) is developing a large open source database of high-resolution digital pathology images known as the Temple University Digital Pathology Corpus (TUDP) [1]. Our long-term goal is to release one million images. We expect to release the first 100,000 image corpus by December 2020. The data is being acquired at the Department of Pathology at Temple University Hospital (TUH) using a Leica Biosystems Aperio AT2 scanner [2] and consists entirely of clinical pathology images. More information about the data and the project can be found in Shawki et al. [3]. We currently have a National Science Foundation (NSF) planning grant [4] to explore how best the community can leverage this resource. One goal of this poster presentation is to stimulate community-wide discussions about this project and determine how this valuable resource can best meet the needs of the public. The computing infrastructure required to support this database is extensive [5] and includes two HIPAA-secure computer networks, dual petabyte file servers, and Aperio’s eSlide Manager (eSM) software [6]. We currently have digitized over 50,000 slides from 2,846 patients and 2,942 clinical cases. There is an average of 12.4 slides per patient and 10.5 slides per case with one report per case. The data is organized by tissue type as shown below:

    Filenames:
    tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_0a001_00123456_lvl0001_s000.svs
    tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_00123456.docx

    Explanation:
    tudp: root directory of the corpus
    v1.0.0: version number of the release
    svs: the image data type
    gastro: the type of tissue
    000001: six-digit sequence number used to control directory complexity
    00123456: 8-digit patient MRN
    2015_03_05: the date the specimen was captured
    0s15_12345: the clinical case name
    0s15_12345_0a001_00123456_lvl0001_s000.svs: the actual image filename, consisting of a repeat of the case name, a site code (e.g., 0a001), the type and depth of the cut (e.g., lvl0001) and a token number (e.g., s000)
    0s15_12345_00123456.docx: the filename for the corresponding case report

    We currently recognize fifteen tissue types in the first installment of the corpus. The raw image data is stored in Aperio’s “.svs” format, which is a multi-layered compressed JPEG format [3,7]. Pathology reports containing a summary of how a pathologist interpreted the slide are also provided in a flat text file format. A more complete summary of the demographics of this pilot corpus will be presented at the conference. Another goal of this poster presentation is to share our experiences with the larger community since many of these details have not been adequately documented in scientific publications. There are quite a few obstacles in collecting this data that have slowed down the process and need to be discussed publicly. Our backlog of slides dates back to 1997, meaning there are a lot that need to be sifted through and discarded for peeling or cracking. Additionally, during scanning a slide can get stuck, stalling a scan session for hours, resulting in a significant loss of productivity. Over the past two years, we have accumulated significant experience with how to scan a diverse inventory of slides using the Aperio AT2 high-volume scanner. We have been working closely with the vendor to resolve many problems associated with the use of this scanner for research purposes.
This scanning project began in January of 2018 when the scanner was first installed. The scanning process was slow at first since there was a learning curve with how the scanner worked and how to obtain samples from the hospital. From its start date until May of 2019, ~20,000 slides were scanned. In the past six months, from May to November, we have tripled that number and now hold ~60,000 slides in our database. This dramatic increase in productivity was due to additional undergraduate staff members and an emphasis on efficient workflow. The Aperio AT2 scans 400 slides a day, requiring at least eight hours of scan time. The efficiency of these scans can vary greatly. When our team first started, approximately 5% of slides failed the scanning process due to focal point errors. We have been able to reduce that to 1% through a variety of means: (1) best practices regarding daily and monthly recalibrations, (2) tweaking the software such as the tissue finder parameter settings, and (3) experience with how to clean and prep slides so they scan properly. Nevertheless, this is not a completely automated process, making it very difficult to reach our production targets. With a staff of three undergraduate workers spending a total of 30 hours per week, we find it difficult to scan more than 2,000 slides per week using a single scanner (400 slides per night x 5 nights per week). The main limitation in achieving this level of production is the lack of a completely automated scanning process; it takes a couple of hours to sort, clean and load slides. We have streamlined all other aspects of the workflow required to database the scanned slides so that there are no additional bottlenecks. To bridge the gap between hospital operations and research, we are using Aperio’s eSM software. Our goal is to provide pathologists access to high quality digital images of their patients’ slides. eSM is a secure website that holds the images with their metadata labels, patient report, and path to where the image is located on our file server. Although eSM includes significant infrastructure to import slides into the database using barcodes, TUH does not currently support barcode use. Therefore, we manage the data using a mixture of Python scripts and manual import functions available in eSM. The database and associated tools are based on proprietary formats developed by Aperio, making this another important point of community-wide discussion on how best to disseminate such information. Our near-term goal for the TUDP Corpus is to release 100,000 slides by December 2020. We hope to continue data collection over the next decade until we reach one million slides. We are creating two pilot corpora using the first 50,000 slides we have collected. The first corpus consists of 500 slides with a marker stain and another 500 without it. This set was designed to let people debug their basic deep learning processing flow on these high-resolution images. We discuss our preliminary experiments on this corpus and the challenges in processing these high-resolution images using deep learning in [3]. We are able to achieve a mean sensitivity of 99.0% for slides with pen marks, and 98.9% for slides without marks, using a multistage deep learning algorithm. While this dataset was very useful in initial debugging, we are in the midst of creating a new, more challenging pilot corpus using actual tissue samples annotated by experts. The task will be to detect ductal carcinoma in situ (DCIS) or invasive breast cancer tissue.
There will be approximately 1,000 images per class in this corpus. Based on the number of features annotated, we can train on a two-class problem of DCIS or benign, or increase the difficulty by increasing the classes to include DCIS, benign, stroma, pink tissue, non-neoplastic, etc. Those interested in the corpus or in participating in community-wide discussions should join our listserv, nedc_tuh_dpath@googlegroups.com, to be kept informed of the latest developments in this project. You can learn more from our project website: https://www.isip.piconepress.com/projects/nsf_dpath.
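The directory and filename convention described in the item above packs several identifiers (clinical case name, site code, patient MRN, cut level, token number) into each image filename. As a minimal, hypothetical sketch of how such a name could be split back into those parts, assuming the informal field labels from the abstract rather than any official TUDP schema:

    import re

    # Layout described above, e.g. 0s15_12345_0a001_00123456_lvl0001_s000.svs:
    # (case name)_(site code)_(8-digit MRN)_lvl(cut depth)_s(token).svs
    TUDP_NAME = re.compile(
        r"^(?P<case>[0-9a-z]+_\d+)"    # clinical case name, e.g. 0s15_12345
        r"_(?P<site>[0-9a-z]+)"        # site code, e.g. 0a001
        r"_(?P<mrn>\d{8})"             # 8-digit patient MRN
        r"_lvl(?P<level>\d{4})"        # type and depth of the cut
        r"_s(?P<token>\d{3})\.svs$"    # token number
    )

    def parse_tudp_filename(name):
        """Split a TUDP-style .svs filename into its labeled components."""
        match = TUDP_NAME.match(name)
        if match is None:
            raise ValueError("unrecognized filename: " + name)
        return match.groupdict()

    # Example filename taken from the abstract:
    print(parse_tudp_filename("0s15_12345_0a001_00123456_lvl0001_s000.svs"))
    # {'case': '0s15_12345', 'site': '0a001', 'mrn': '00123456', 'level': '0001', 'token': '000'}

The pattern simply mirrors the field order given in the filename explanation; any variation in the real corpus (for example, different site-code or level widths) would require adjusting it.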
  5. This dataset consists of the Surface Ocean CO2 Atlas Version 2022 (SOCATv2022) data product files. The ocean absorbs one quarter of the global CO2 emissions from human activity. The community-led Surface Ocean CO2 Atlas (www.socat.info) is key for the quantification of ocean CO2 uptake and its variation, now and in the future. SOCAT version 2022 has quality-controlled in situ surface ocean fCO2 (fugacity of CO2) measurements from ships, moorings, and autonomous and drifting surface platforms for the global oceans and coastal seas from 1957 to 2021. The main synthesis and gridded products contain 33.7 million fCO2 values with an estimated accuracy of better than 5 μatm. A further 6.4 million fCO2 sensor data with an estimated accuracy of 5 to 10 μatm are separately available. During quality control, marine scientists assign a flag to each data set, as well as WOCE flags of 2 (good), 3 (questionable) or 4 (bad) to individual fCO2 values. Data sets are assigned flags of A and B for an estimated accuracy of better than 2 μatm, flags of C and D for an accuracy of better than 5 μatm, and a flag of E for an accuracy of better than 10 μatm. Bakker et al. (2016) describe the quality control criteria used in SOCAT versions 3 to 2022. Quality control comments for individual data sets can be accessed via the SOCAT Data Set Viewer (www.socat.info). All data sets where data quality has been deemed acceptable have been made public. The main SOCAT synthesis files and the gridded products contain all data sets with an estimated accuracy of better than 5 μatm (data set flags of A to D) and fCO2 values with a WOCE flag of 2. Access to data sets with an estimated accuracy of 5 to 10 μatm (flag of E) and fCO2 values with flags of 3 and 4 is via additional data products and the Data Set Viewer (Table 8 in Bakker et al., 2016). SOCAT publishes a global gridded product with a 1° longitude by 1° latitude resolution. A second product with a higher resolution of 0.25° longitude by 0.25° latitude is available for the coastal seas. The gridded products contain all data sets with an estimated accuracy of better than 5 μatm (data set flags of A to D) and fCO2 values with a WOCE flag of 2. Gridded products are available monthly, per year and per decade. Two powerful, interactive, online viewers, the Data Set Viewer and the Gridded Data Viewer (www.socat.info), enable investigation of the SOCAT synthesis and gridded data products. SOCAT data products can be downloaded. Matlab code is available for reading these files. Ocean Data View also provides access to the SOCAT data products (www.socat.info). SOCAT data products are discoverable, accessible and citable. The SOCAT Data Use Statement (www.socat.info) asks users to generously acknowledge the contribution of SOCAT scientists by invitation to co-authorship, especially for data providers in regional studies, and/or by reference to relevant scientific articles. The SOCAT website (www.socat.info) provides a single access point for online viewers, downloadable data sets, the Data Use Statement, a list of contributors and an overview of scientific publications on and using SOCAT. Automation of data upload and initial data checks has allowed annual releases of SOCAT from version 4 onwards. SOCAT is used for quantification of ocean CO2 uptake and ocean acidification and for evaluation of climate models and sensor data. SOCAT products have informed the annual Global Carbon Budget since 2013.
The annual SOCAT releases by the SOCAT scientific community are a Voluntary Commitment for United Nations Sustainable Development Goal 14.3 (Reduce Ocean Acidification) (#OceanAction20464). More broadly, the SOCAT releases contribute to UN SDG 13 (Climate Action) and SDG 14 (Life Below Water), and to the UN Decade of Ocean Science for Sustainable Development. Hundreds of peer-reviewed scientific publications and high-impact reports cite SOCAT. The SOCAT community-led synthesis product is a key step in the value chain based on in situ inorganic carbon measurements of the oceans, which provides policy makers with critical information on ocean CO2 uptake in climate negotiations. The need for accurate knowledge of global ocean CO2 uptake and its (future) variation makes sustained funding of in situ surface ocean CO2 observations imperative.
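The flag scheme summarized in the item above determines which records enter the main SOCAT synthesis and gridded products: data sets flagged A through D combined with individual fCO2 values carrying a WOCE flag of 2. A minimal sketch of that selection rule, using illustrative field names rather than the actual column names of the SOCAT synthesis files:

    # Data-set flags and their estimated fCO2 accuracies, per the description above:
    #   A, B: better than 2 uatm; C, D: better than 5 uatm; E: better than 10 uatm.
    # WOCE flags on individual fCO2 values: 2 = good, 3 = questionable, 4 = bad.
    MAIN_PRODUCT_FLAGS = {"A", "B", "C", "D"}  # data sets included in the main products
    GOOD_WOCE = 2

    def in_main_synthesis(record):
        """True if a record meets the main-product selection: data-set flag A-D, WOCE flag 2."""
        return record["dataset_flag"] in MAIN_PRODUCT_FLAGS and record["woce_flag"] == GOOD_WOCE

    records = [
        {"dataset_flag": "B", "woce_flag": 2, "fco2_uatm": 372.1},  # kept
        {"dataset_flag": "E", "woce_flag": 2, "fco2_uatm": 401.5},  # 5-10 uatm accuracy, excluded
        {"dataset_flag": "C", "woce_flag": 3, "fco2_uatm": 355.0},  # questionable value, excluded
    ]
    print([r["fco2_uatm"] for r in records if in_main_synthesis(r)])  # [372.1]

Records excluded by this rule (flag E data sets and fCO2 values with WOCE flags 3 or 4) remain accessible through the additional data products and the Data Set Viewer mentioned above.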