

Title: A compilation of global bio-optical in situ data for ocean-colour satellite applications – version two
Abstract. A global compilation of in situ data is useful to evaluate the quality of ocean-colour satellite data records. Here we describe the data compiled for the validation of the ocean-colour products from the ESA Ocean Colour Climate Change Initiative (OC-CCI). The data were acquired from several sources (including, inter alia, MOBY, BOUSSOLE, AERONET-OC, SeaBASS, NOMAD, MERMAID, AMT, ICES, HOT and GeP&CO) and span the period from 1997 to 2018. Observations of the following variables were compiled: spectral remote-sensing reflectances, concentrations of chlorophyll a, spectral inherent optical properties, spectral diffuse attenuation coefficients and total suspended matter. The data were from multi-project archives acquired via open internet services or from individual projects, acquired directly from data providers. Methodologies were implemented for homogenization, quality control and merging of all data. No changes were made to the original data, other than averaging of observations that were close in time and space, elimination of some points after quality control and conversion to a standard format. The final result is a merged table designed for validation of satellite-derived ocean-colour products and available in text format. Metadata of each in situ measurement (original source, cruise or experiment, principal investigator) were propagated throughout the work and made available in the final table. By making the metadata available, provenance is better documented, and it is also possible to analyse each set of data separately. This paper also describes the changes that were made to the compilation in relation to the previous version (Valente et al., 2016). The compiled data are available at https://doi.org/10.1594/PANGAEA.898188 (Valente et al., 2019).
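The averaging of observations that were close in time and space can be illustrated with a small sketch. The column names, bin sizes (~0.01 degrees, 1 hour), and records below are purely illustrative assumptions; the compilation's actual merging criteria and schema are its own.

```python
import pandas as pd

# Hypothetical in situ records; real column names in the compilation differ.
obs = pd.DataFrame({
    "lat":  [43.367, 43.368, 60.100],
    "lon":  [7.900, 7.901, -20.500],
    "time": pd.to_datetime(["2010-06-01 10:00", "2010-06-01 10:20",
                            "2010-06-01 11:00"]),
    "chl":  [0.21, 0.23, 1.10],
})

# Bin coordinates to ~0.01 degrees and time to 1 hour, then average
# observations falling in the same bin -- one way to merge near-duplicate
# stations (thresholds here are illustrative, not the paper's).
obs["lat_bin"] = obs["lat"].round(2)
obs["lon_bin"] = obs["lon"].round(2)
obs["time_bin"] = obs["time"].dt.floor("1h")

merged = (obs.groupby(["lat_bin", "lon_bin", "time_bin"], as_index=False)
             .agg(chl=("chl", "mean"), n_obs=("chl", "size")))
```

Here the first two records collapse into a single averaged station, while the third remains its own row.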
Award ID(s):
1655686
NSF-PAR ID:
10172764
Date Published:
Journal Name:
Earth System Science Data
Volume:
11
Issue:
3
ISSN:
1866-3516
Page Range / eLocation ID:
1037 to 1068
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract. A global in situ data set for validation of ocean colour products from the ESA Ocean Colour Climate Change Initiative (OC-CCI) is presented. This version of the compilation, starting in 1997, now extends to 2021, which is important for the validation of the most recent satellite optical sensors such as Sentinel-3B OLCI and NOAA-20 VIIRS. The data set comprises in situ observations of the following variables: spectral remote-sensing reflectance, concentration of chlorophyll-a, spectral inherent optical properties, spectral diffuse attenuation coefficient, and total suspended matter. Data were obtained from multi-project archives acquired via open internet services or from individual projects acquired directly from data providers. Methodologies were implemented for homogenization, quality control, and merging of all data. Minimal changes were made to the original data, other than conversion to a standard format, elimination of some points after quality control, and averaging of observations that were close in time and space. The result is a merged table available in text format. Overall, the data set grew to 148 432 rows, with each row representing a unique station in space and time (cf. 136 250 rows in the previous version; Valente et al., 2019). Observations of remote-sensing reflectance increased to 68 641 (cf. 59 781 in the previous version; Valente et al., 2019). There was also a near-tenfold increase in chlorophyll data since 2016. Metadata of each in situ measurement (original source, cruise or experiment, principal investigator) are included in the final table. By making the metadata available, provenance is better documented and it is also possible to analyse each set of data separately. The compiled data are available at https://doi.org/10.1594/PANGAEA.941318 (Valente et al., 2022).
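Because each row of the merged text table carries its provenance metadata, analysing each contributing project separately reduces to a group-by. A minimal pandas sketch, using invented column names (the actual headers in the PANGAEA file differ):

```python
import io
import pandas as pd

# A tiny stand-in for the merged text table; column names are illustrative.
text = """time\tlat\tlon\tchl\tsource
2018-05-01T10:00\t43.4\t7.9\t0.25\tBOUSSOLE
2018-05-02T11:00\t20.8\t-157.2\t0.08\tMOBY
2018-05-03T12:00\t43.4\t7.9\t0.30\tBOUSSOLE
"""

df = pd.read_csv(io.StringIO(text), sep="\t", parse_dates=["time"])

# Provenance travels with every row, so each project can be summarised
# (or validated) on its own.
per_source = df.groupby("source")["chl"].agg(["count", "mean"])
```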
  2. This dataset consists of the Surface Ocean CO2 Atlas Version 2022 (SOCATv2022) data product files. The ocean absorbs one quarter of the global CO2 emissions from human activity. The community-led Surface Ocean CO2 Atlas (www.socat.info) is key for the quantification of ocean CO2 uptake and its variation, now and in the future. SOCAT version 2022 has quality-controlled in situ surface ocean fCO2 (fugacity of CO2) measurements on ships, moorings, autonomous and drifting surface platforms for the global oceans and coastal seas from 1957 to 2021. The main synthesis and gridded products contain 33.7 million fCO2 values with an estimated accuracy of better than 5 μatm. A further 6.4 million fCO2 sensor data with an estimated accuracy of 5 to 10 μatm are separately available. During quality control, marine scientists assign a flag to each data set, as well as WOCE flags of 2 (good), 3 (questionable) or 4 (bad) to individual fCO2 values. Data sets are assigned flags of A and B for an estimated accuracy of better than 2 μatm, flags of C and D for an accuracy of better than 5 μatm and a flag of E for an accuracy of better than 10 μatm. Bakker et al. (2016) describe the quality control criteria used in SOCAT versions 3 to 2022. Quality control comments for individual data sets can be accessed via the SOCAT Data Set Viewer (www.socat.info). All data sets, where data quality has been deemed acceptable, have been made public. The main SOCAT synthesis files and the gridded products contain all data sets with an estimated accuracy of better than 5 µatm (data set flags of A to D) and fCO2 values with a WOCE flag of 2. Access to data sets with an estimated accuracy of 5 to 10 (flag of E) and fCO2 values with flags of 3 and 4 is via additional data products and the Data Set Viewer (Table 8 in Bakker et al., 2016). SOCAT publishes a global gridded product with a 1° longitude by 1° latitude resolution. 
A second product with a higher resolution of 0.25° longitude by 0.25° latitude is available for the coastal seas. The gridded products contain all data sets with an estimated accuracy of better than 5 µatm (data set flags of A to D) and fCO2 values with a WOCE flag of 2. Gridded products are available monthly, per year and per decade. Two powerful, interactive, online viewers, the Data Set Viewer and the Gridded Data Viewer (www.socat.info), enable investigation of the SOCAT synthesis and gridded data products. SOCAT data products can be downloaded. MATLAB code is available for reading these files. Ocean Data View also provides access to the SOCAT data products (www.socat.info). SOCAT data products are discoverable, accessible and citable. The SOCAT Data Use Statement (www.socat.info) asks users to generously acknowledge the contribution of SOCAT scientists by invitation to co-authorship, especially for data providers in regional studies, and/or reference to relevant scientific articles. The SOCAT website (www.socat.info) provides a single access point for online viewers, downloadable data sets, the Data Use Statement, a list of contributors and an overview of scientific publications on and using SOCAT. Automation of data upload and initial data checks allows annual releases of SOCAT from version 4 onwards. SOCAT is used for quantification of ocean CO2 uptake and ocean acidification and for evaluation of climate models and sensor data. SOCAT products have informed the annual Global Carbon Budget since 2013. The annual SOCAT releases by the SOCAT scientific community are a Voluntary Commitment for United Nations Sustainable Development Goal 14.3 (Reduce Ocean Acidification) (#OceanAction20464). More broadly, the SOCAT releases contribute to UN SDG 13 (Climate Action) and SDG 14 (Life Below Water), and to the UN Decade of Ocean Science for Sustainable Development. Hundreds of peer-reviewed scientific publications and high-impact reports cite SOCAT.
The SOCAT community-led synthesis product is a key step in the value chain based on in situ inorganic carbon measurements of the oceans, which provides policy makers with critical information on ocean CO2 uptake in climate negotiations. The need for accurate knowledge of global ocean CO2 uptake and its (future) variation makes sustained funding of in situ surface ocean CO2 observations imperative. 
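The flag scheme described above translates directly into a filter: the main synthesis and gridded products retain data sets flagged A to D and values with a WOCE flag of 2. A hedged sketch, with illustrative column names rather than SOCAT's actual file headers:

```python
import pandas as pd

# Toy records mimicking SOCAT's flagging scheme: a data-set flag (A-E,
# tied to estimated accuracy) and a per-value WOCE flag (2 good,
# 3 questionable, 4 bad). Column names are assumptions for illustration.
rows = pd.DataFrame({
    "fco2_uatm":    [380.1, 395.4, 410.2, 402.7],
    "dataset_flag": ["A", "D", "E", "B"],
    "woce_flag":    [2, 2, 2, 3],
})

# Main product criterion: data-set flags A-D (accuracy better than
# 5 uatm) and WOCE flag 2.
main_product = rows[rows["dataset_flag"].isin(list("ABCD"))
                    & (rows["woce_flag"] == 2)]
```

The flag-E data set and the WOCE-3 value fall out of the main product but remain reachable via the additional products mentioned above.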
  3. Obeid, I. (Ed.)
    The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled “MRI: High Performance Digital Pathology Using Big Data and Machine Learning” [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we will present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high quality annotations of breast tissue. It is well known that state of the art algorithms in machine learning require vast amounts of data. Fields such as speech recognition [3], image recognition [4] and text processing [5] are able to deliver impressive performance with complex deep learning models because they have developed large corpora to support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not have access to such data resources must rely on techniques in which existing models can be adapted to new datasets [6]. A preliminary version of this breast corpus release was tested in a pilot study using a baseline machine learning system, ResNet18 [7], that leverages several open-source Python tools. The pilot corpus was divided into three sets: train, development, and evaluation. Portions of these slides were manually annotated [1] using the nine labels in Table 1 [8] to identify five to ten examples of pathological features on each slide. 
Not every pathological feature is annotated, meaning excluded areas can include focuses particular to these labels that are not used for training. A summary of the number of patches within each label is given in Table 2. To maintain a balanced training set, 1,000 patches of each label were used to train the machine learning model. Throughout all sets, only annotated patches were involved in model development. The performance of this model in identifying all the patches in the evaluation set can be seen in the confusion matrix of classification accuracy in Table 3. The highest performing labels were background, 97% correct identification, and artifact, 76% correct identification. A correlation exists between labels with more than 6,000 development patches and accurate performance on the evaluation set. Additionally, these results indicated a need to further refine the annotation of invasive ductal carcinoma (“indc”), inflammation (“infl”), nonneoplastic features (“nneo”), normal (“norm”) and suspicious (“susp”). This pilot experiment motivated changes to the corpus that will be discussed in detail in this poster presentation. To increase the accuracy of the machine learning model, we modified how we addressed underperforming labels. One common source of error arose with how non-background labels were converted into patches. Large areas of background within other labels were isolated within a patch resulting in connective tissue misrepresenting a non-background label. In response, the annotation overlay margins were revised to exclude benign connective tissue in non-background labels. Corresponding patient reports and supporting immunohistochemical stains further guided annotation reviews. The microscopic diagnoses given by the primary pathologist in these reports detail the pathological findings within each tissue site, but not within each specific slide. 
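The balanced training set (1,000 patches per label) can be sketched as a simple undersampling step. The label names, patch counts, and sampling routine below are assumptions for illustration, not taken from the corpus:

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical patch records: (patch_id, label). Label names and counts
# are invented for this sketch.
patches = [(i, label)
           for label, n in [("bckg", 9000), ("artf", 4000), ("indc", 2500)]
           for i in range(n)]

# Group patch ids by label.
by_label = defaultdict(list)
for patch_id, label in patches:
    by_label[label].append(patch_id)

# Undersample each label to the same size to balance the training set.
PER_LABEL = 1000
balanced = {label: random.sample(ids, min(PER_LABEL, len(ids)))
            for label, ids in by_label.items()}
```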
The microscopic diagnoses informed revisions specifically targeting annotated regions classified as cancerous, ensuring that the labels “indc” and “dcis” were used only in situations where a micropathologist diagnosed it as such. Further differentiation of cancerous and precancerous labels, as well as the location of their focus on a slide, could be accomplished with supplemental immunohistochemically (IHC) stained slides. When distinguishing whether a focus is a nonneoplastic feature versus a cancerous growth, pathologists employ antigen targeting stains to the tissue in question to confirm the diagnosis. For example, a nonneoplastic feature of usual ductal hyperplasia will display diffuse staining for cytokeratin 5 (CK5) and no diffuse staining for estrogen receptor (ER), while a cancerous growth of ductal carcinoma in situ will have negative or focally positive staining for CK5 and diffuse staining for ER [9]. Many tissue samples contain cancerous and non-cancerous features with morphological overlaps that cause variability between annotators. The informative fields IHC slides provide could play an integral role in machine model pathology diagnostics. Following the revisions made on all the annotations, a second experiment was run using ResNet18. Compared to the pilot study, an increase of model prediction accuracy was seen for the labels indc, infl, nneo, norm, and null. This increase is correlated with an increase in annotated area and annotation accuracy. Model performance in identifying the suspicious label decreased by 25% due to the decrease of 57% in the total annotated area described by this label. A summary of the model performance is given in Table 4, which shows the new prediction accuracy and the absolute change in error rate compared to Table 3. The breast tissue subset we are developing includes 3,505 annotated breast pathology slides from 296 patients. The average size of a scanned SVS file is 363 MB. The annotations are stored in an XML format. 
A CSV version of the annotation file is also available which provides a flat, or simple, annotation that is easy for machine learning researchers to access and interface to their systems. Each patient is identified by an anonymized medical reference number. Within each patient’s directory, one or more sessions are identified, also anonymized to the first of the month in which the sample was taken. These sessions are broken into groupings of tissue taken on that date (in this case, breast tissue). A deidentified patient report stored as a flat text file is also available. Within these slides there are a total of 16,971 annotated regions with an average of 4.84 annotations per slide. Among those annotations, 8,035 are non-cancerous (normal, background, null, and artifact), 6,222 are carcinogenic signs (inflammation, nonneoplastic, and suspicious), and 2,714 are cancerous labels (ductal carcinoma in situ and invasive ductal carcinoma). The individual patients are split into three sets: train, development, and evaluation. Of the 74 cancerous patients, 20 were allotted to each of the development and evaluation sets, while the remaining 34 were allotted to train. The remaining 222 patients were split to preserve the overall distribution of labels within the corpus. This was done in the hope of creating control sets for comparable studies. Overall, the development and evaluation sets each have 80 patients, while the training set has 136 patients. In a related component of this project, slides from the Fox Chase Cancer Center (FCCC) Biosample Repository (https://www.foxchase.org/research/facilities/genetic-research-facilities/biosample-repository-facility) are being digitized in addition to slides provided by Temple University Hospital. This data includes 18 different types of tissue including approximately 38.5% urinary tissue and 16.5% gynecological tissue.
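The patient-level partitioning described above (34/20/20 cancerous patients to train/dev/eval, with the remaining 222 patients distributed to give sets of 136/80/80) can be sketched as follows; the shuffling itself is an assumed detail, and splitting by patient rather than by slide keeps any one patient's slides from leaking across sets:

```python
import random

random.seed(0)

# Invented patient identifiers; counts follow the corpus description.
cancerous = [f"C{i:03d}" for i in range(74)]
noncancerous = [f"N{i:03d}" for i in range(222)]

random.shuffle(cancerous)
random.shuffle(noncancerous)

# 34/20/20 cancerous patients to train/dev/eval; the 222 non-cancerous
# patients fill each set up to 136/80/80.
split = {
    "train": cancerous[:34] + noncancerous[:102],
    "dev":   cancerous[34:54] + noncancerous[102:162],
    "eval":  cancerous[54:74] + noncancerous[162:222],
}
```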
These slides and the metadata provided with them are already anonymized and include diagnoses in a spreadsheet with sample and patient ID. We plan to release over 13,000 unannotated slides from the FCCC Corpus simultaneously with v1.0.0 of TUDP. Details of this release will also be discussed in this poster. Few digitally annotated databases of pathology samples like TUDP exist due to the extensive data collection and processing required. The breast corpus subset should be released by November 2021. By December 2021 we should also release the unannotated FCCC data. We are currently annotating urinary tract data as well. We expect to release about 5,600 processed TUH slides in this subset. We have an additional 53,000 unprocessed TUH slides digitized. Corpora of this size will stimulate the development of a new generation of deep learning technology. In clinical settings where resources are limited, an assistive diagnoses model could support pathologists’ workload and even help prioritize suspected cancerous cases. ACKNOWLEDGMENTS This material is supported by the National Science Foundation under grant nos. CNS-1726188 and 1925494. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. REFERENCES [1] N. Shawki et al., “The Temple University Digital Pathology Corpus,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York City, New York, USA: Springer, 2020, pp. 67–104. https://www.springer.com/gp/book/9783030368432. [2] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning.” Major Research Instrumentation (MRI), Division of Computer and Network Systems, Award No. 1726188, January 1, 2018 – December 31, 2021. https://www.isip.piconepress.com/projects/nsf_dpath/.
[3] A. Gulati et al., “Conformer: Convolution-augmented Transformer for Speech Recognition,” in Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2020, pp. 5036–5040. https://doi.org/10.21437/interspeech.2020-3015. [4] C.-J. Wu et al., “Machine Learning at Facebook: Understanding Inference at the Edge,” in Proceedings of the IEEE International Symposium on High Performance Computer Architecture (HPCA), 2019, pp. 331–344. https://ieeexplore.ieee.org/document/8675201. [5] I. Caswell and B. Liang, “Recent Advances in Google Translate,” Google AI Blog: The latest from Google Research, 2020. [Online]. Available: https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html. [Accessed: 01-Aug-2021]. [6] V. Khalkhali, N. Shawki, V. Shah, M. Golmohammadi, I. Obeid, and J. Picone, “Low Latency Real-Time Seizure Detection Using Transfer Deep Learning,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2021, pp. 1–7. https://www.isip.piconepress.com/publications/conference_proceedings/2021/ieee_spmb/eeg_transfer_learning/. [7] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, “MRI: High Performance Digital Pathology Using Big Data and Machine Learning,” Philadelphia, Pennsylvania, USA, 2020. https://www.isip.piconepress.com/publications/reports/2020/nsf/mri_dpath/. [8] I. Hunt, S. Husain, J. Simons, I. Obeid, and J. Picone, “Recent Advances in the Temple University Digital Pathology Corpus,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2019, pp. 1–4. https://ieeexplore.ieee.org/document/9037859. [9] A. P. Martinez, C. Cohen, K. Z. Hanley, and X. (Bill) Li, “Estrogen Receptor and Cytokeratin 5 Are Reliable Markers to Separate Usual Ductal Hyperplasia From Atypical Ductal Hyperplasia and Low-Grade Ductal Carcinoma In Situ,” Arch. Pathol. Lab. Med., vol. 140, no. 7, pp. 686–689, Apr. 2016. https://doi.org/10.5858/arpa.2015-0238-OA.
  4.
    Abstract. Internally consistent, quality-controlled (QC) data products play an important role in promoting regional-to-global research efforts to understand societal vulnerabilities to ocean acidification (OA). However, there are currently no such data products for the coastal ocean, where most of the OA-susceptible commercial and recreational fisheries and aquaculture industries are located. In this collaborative effort, we compiled, quality-controlled, and synthesized 2 decades of discrete measurements of inorganic carbon system parameters, oxygen, and nutrient chemistry data from the North American continental shelves to generate a data product called the Coastal Ocean Data Analysis Product in North America (CODAP-NA). There are few deep-water (> 1500 m) sampling locations in the current data product. As a result, crossover analyses, which rely on comparisons between measurements on different cruises in the stable deep ocean, could not form the basis for cruise-to-cruise adjustments. For this reason, care was taken in the selection of data sets to include in this initial release of CODAP-NA, and only data sets from laboratories with known quality assurance practices were included. New consistency checks and outlier detections were used to QC the data. Future releases of this CODAP-NA product will use this core data product as the basis for cruise-to-cruise comparisons. We worked closely with the investigators who collected and measured these data during the QC process. This version (v2021) of the CODAP-NA is comprised of 3391 oceanographic profiles from 61 research cruises covering all continental shelves of North America, from Alaska to Mexico in the west and from Canada to the Caribbean in the east.
Data for 14 variables (temperature; salinity; dissolved oxygen content; dissolved inorganic carbon content; total alkalinity; pH on total scale; carbonate ion content; fugacity of carbon dioxide; and substance contents of silicate, phosphate, nitrate, nitrite, nitrate plus nitrite, and ammonium) have been subjected to extensive QC. CODAP-NA is available as a merged data product (Excel, CSV, MATLAB, and NetCDF; https://doi.org/10.25921/531n-c230, https://www.ncei.noaa.gov/data/oceans/ncei/ocads/metadata/0219960.html, last access: 15 May 2021) (Jiang et al., 2021a). The original cruise data have also been updated with data providers' consent and summarized in a table with links to NOAA's National Centers for Environmental Information (NCEI) archives (https://www.ncei.noaa.gov/access/ocean-acidification-data-stewardship-oads/synthesis/NAcruises.html).
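The kind of consistency check and outlier detection mentioned above can be illustrated with a robust-statistics sketch. The MAD-based rule, the 3.5 cutoff, and the toy values are assumptions for illustration, not CODAP-NA's exact procedure:

```python
import statistics

def flag_outliers(values, cutoff=3.5):
    """Flag values whose robust (MAD-based) z-score exceeds the cutoff."""
    med = statistics.median(values)
    # Median absolute deviation: a spread estimate insensitive to outliers.
    mad = statistics.median(abs(v - med) for v in values)
    # 0.6745 scales MAD to be comparable to a standard deviation.
    return [0.6745 * abs(v - med) / mad > cutoff for v in values]

# Toy total-alkalinity values (umol/kg) with one inconsistent measurement.
alkalinity = [2300.0, 2302.0, 2299.0, 2301.0, 2500.0, 2298.0]
flags = flag_outliers(alkalinity)
```

A classical mean/standard-deviation test would be pulled toward the bad value itself; the median-based version flags it cleanly, which is why robust checks are common in oceanographic QC.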
  5. Ocean colour is recognised as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS); and spectrally-resolved water-leaving radiances (or remote-sensing reflectances) in the visible domain, and chlorophyll-a concentration are identified as required ECV products. Time series of the products at the global scale and at high spatial resolution, derived from ocean-colour data, are key to studying the dynamics of phytoplankton at seasonal and inter-annual scales; their role in marine biogeochemistry; the global carbon cycle; the modulation of how phytoplankton distribute solar-induced heat in the upper layers of the ocean; and the response of the marine ecosystem to climate variability and change. However, generating a long time series of these products from ocean-colour data is not a trivial task: algorithms that are best suited for climate studies have to be selected from a number that are available for atmospheric correction of the satellite signal and for retrieval of chlorophyll-a concentration; since satellites have a finite life span, data from multiple sensors have to be merged to create a single time series, and any uncorrected inter-sensor biases could introduce artefacts in the series, e.g., different sensors monitor radiances at different wavebands such that producing a consistent time series of reflectances is not straightforward. Another requirement is that the products have to be validated against in situ observations. Furthermore, the uncertainties in the products have to be quantified, ideally on a pixel-by-pixel basis, to facilitate applications and interpretations that are consistent with the quality of the data. 
This paper outlines an approach that was adopted for generating an ocean-colour time series for climate studies, using data from the MERIS (MEdium spectral Resolution Imaging Spectrometer) sensor of the European Space Agency; the SeaWiFS (Sea-viewing Wide-Field-of-view Sensor) and MODIS-Aqua (Moderate-resolution Imaging Spectroradiometer-Aqua) sensors from the National Aeronautics and Space Administration (USA); and VIIRS (Visible and Infrared Imaging Radiometer Suite) from the National Oceanic and Atmospheric Administration (USA). The time series now covers the period from late 1997 to the end of 2018. To ensure that the products meet, as well as possible, the requirements of the user community, marine-ecosystem modellers and remote-sensing scientists were consulted at the outset on their immediate and longer-term requirements as well as on their expectations of ocean-colour data for use in climate research. Taking the user requirements into account, a series of objective criteria were established, against which available algorithms for processing ocean-colour data were evaluated and ranked. The algorithms that performed best with respect to the climate user requirements were selected to process data from the satellite sensors. Remote-sensing reflectance data from MODIS-Aqua, MERIS, and VIIRS were band-shifted to match the wavebands of SeaWiFS. Overlapping data were used to correct for mean biases between sensors at every pixel. The remote-sensing reflectance data derived from the sensors were merged, and the selected in-water algorithm was applied to the merged data to generate maps of chlorophyll concentration, inherent optical properties at SeaWiFS wavelengths, and the diffuse attenuation coefficient at 490 nm. The merged products were validated against in situ observations.
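The per-pixel bias correction over the overlap period can be sketched as removing, at every pixel, the mean difference between a sensor and the reference. Array names, shapes, and the constant synthetic bias are illustrative assumptions, not the processor's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference sensor (e.g. SeaWiFS-band reflectances): 12 overlap months
# of a 4x4 pixel tile. Purely synthetic data.
ref = rng.random((12, 4, 4))
# Second sensor, here given a known constant offset for the sketch.
other = ref + 0.02

# Mean bias per pixel over the overlap period (time is axis 0);
# nanmean tolerates cloud/ice gaps encoded as NaN.
bias = np.nanmean(other - ref, axis=0)        # shape (4, 4)

# Subtracting the per-pixel bias aligns the second sensor with the
# reference before the two records are merged.
corrected = other - bias
```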
The uncertainties established on the basis of comparisons with in situ data were combined with an optical classification of the remote-sensing reflectance data using a fuzzy-logic approach, and were used to generate uncertainties (root mean square difference and bias) for each product at each pixel. 
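One way to realise per-pixel uncertainties from a fuzzy classification, sketched here under assumptions rather than taken from the OC-CCI processor, is a membership-weighted mean of per-class uncertainties:

```python
def pixel_uncertainty(memberships, class_rmsd):
    """Combine per-class uncertainties using fuzzy memberships as weights."""
    total = sum(memberships)
    return sum(m * u for m, u in zip(memberships, class_rmsd)) / total

# Example: a pixel belonging mostly to the second of three optical water
# classes. Membership and per-class RMSD values are illustrative.
memberships = [0.1, 0.7, 0.2]      # fuzzy class memberships
class_rmsd = [0.15, 0.25, 0.40]    # per-class RMSD from in situ comparisons
u = pixel_uncertainty(memberships, class_rmsd)
```

The weighting lets the uncertainty vary smoothly across optical gradients instead of jumping at hard class boundaries; the same formula applies to the bias estimates.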