
Title: Near-real-time MODIS-derived vegetation index data products and online services for CONUS based on NASA LANCE

This paper describes a set of Near-Real-Time (NRT) Vegetation Index (VI) data products for the Conterminous United States (CONUS) based on Moderate Resolution Imaging Spectroradiometer (MODIS) data from Land, Atmosphere Near-real-time Capability for EOS (LANCE), an openly accessible NASA NRT Earth observation data repository. The data set offers a variety of commonly used VIs, including the Normalized Difference Vegetation Index (NDVI), Vegetation Condition Index (VCI), Mean-referenced Vegetation Condition Index (MVCI), Ratio to Median Vegetation Condition Index (RMVCI), and Ratio to previous-year Vegetation Condition Index (RVCI). LANCE enables NRT monitoring of U.S. cropland vegetation conditions within 24 hours of observation. With more than 20 years of observations, this continuous data set enables geospatial time series analysis and change detection in many research fields such as agricultural monitoring, natural resource conservation, environmental modeling, and Earth system science. The complete set of VI data products described in the paper is openly distributed via Web Map Service (WMS) and Web Coverage Service (WCS) as well as the VegScape web application.
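The index family above builds on the standard NDVI and its historically referenced variants. A minimal sketch of two of them, using the textbook definitions of NDVI and VCI (the exact MVCI/RMVCI/RVCI formulations and MODIS band handling in the paper's pipeline are not reproduced here):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def vci(ndvi_now, ndvi_history):
    """Vegetation Condition Index: current NDVI rescaled between the
    historical minimum and maximum for the same location/period (0-100)."""
    lo = np.min(ndvi_history, axis=0)
    hi = np.max(ndvi_history, axis=0)
    return 100.0 * (ndvi_now - lo) / (hi - lo)

# Illustrative values for a single pixel with five historical years
history = np.array([0.2, 0.3, 0.5, 0.4, 0.6])
print(round(ndvi(0.45, 0.15), 4))   # 0.5
print(round(vci(0.5, history), 1))  # 75.0
```

The same functions apply unchanged to full raster arrays, since the arithmetic broadcasts elementwise over numpy arrays.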

Journal Name: Scientific Data (Nature Publishing Group)
Sponsoring Org: National Science Foundation
More Like this
  1. Understanding the past, present, and changing behavior of the climate requires close collaboration among a large number of researchers from many scientific domains. At present, the necessary interdisciplinary collaboration is greatly limited by difficulties in discovering, sharing, and integrating climatic data due to rapidly increasing data sizes. This paper discusses methods and techniques for solving the inter-related problems encountered when transmitting, processing, and serving metadata for heterogeneous Earth System Observation and Modeling (ESOM) data. A cyberinfrastructure-based solution is proposed to enable effective cataloging and two-step search on big climatic datasets by leveraging state-of-the-art web service technologies and crawling the existing data centers. To validate its feasibility, the big dataset served by the UCAR THREDDS Data Server (TDS), which provides petabyte-level ESOM data and updates hundreds of terabytes of data every day, is used as the case-study dataset. A complete workflow is designed to analyze the metadata structure in TDS and create an index for data parameters. A simplified registration model is constructed that defines constant information, delimits secondary information, and exploits spatial and temporal coherence in metadata. The model derives a sampling strategy for a high-performance concurrent web crawler bot, which is used to mirror the essential metadata of the big data archive without overwhelming network and computing resources. The metadata model, crawler, and standards-compliant catalog service form an incremental search cyberinfrastructure, allowing scientists to search big climatic datasets in near real time. The proposed approach has been tested on the UCAR TDS, and the results show that it achieves its design goals, boosting crawling speed by at least 10 times and reducing redundant metadata from 1.85 gigabytes to 2.2 megabytes, a significant step toward making currently non-searchable climate data servers searchable.
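The core of the redundancy reduction described above is the registration model: fields that are constant across a dataset's granules are stored once, and only the varying fields are indexed per granule. A minimal sketch of that idea (field names are illustrative, not TDS's actual catalog schema):

```python
# Sketch of a simplified registration model: factor granule metadata into
# one shared record plus per-granule deltas, so constant fields are not
# repeated for every granule of a dataset.

def register(granules):
    """Split granule metadata dicts into (constant fields, per-granule deltas)."""
    first = granules[0]
    constant = {k: v for k, v in first.items()
                if all(g[k] == v for g in granules)}
    deltas = [{k: v for k, v in g.items() if k not in constant}
              for g in granules]
    return constant, deltas

granules = [
    {"dataset": "gfs_anl", "format": "netCDF4", "time": "2021-01-01", "size": 812},
    {"dataset": "gfs_anl", "format": "netCDF4", "time": "2021-01-02", "size": 815},
    {"dataset": "gfs_anl", "format": "netCDF4", "time": "2021-01-03", "size": 810},
]
constant, deltas = register(granules)
print(constant)   # the shared fields, stored once
print(deltas[0])  # only the per-granule varying fields
```

With millions of granules per dataset, storing the constant block once is what turns gigabytes of repeated metadata into a megabyte-scale index.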
  2. Abstract. The recent availability of freely and openly available satellite remote sensing products has enabled the implementation of global surface water monitoring at a level not previously possible. Here we present a global set of satellite-derived time series of surface water storage variations for lakes and reservoirs for a period that covers the satellite altimetry era. Our goals are to promote the use of satellite-derived products for the study of large inland water bodies and to set the stage for the expected availability of products from the Surface Water and Ocean Topography (SWOT) mission, which will vastly expand the spatial coverage of such products, expected from 2021 on. Our general strategy is to estimate global surface water storage changes (ΔV) in large lakes and reservoirs using a combination of paired water surface elevation (WSE) and water surface area (WSA) extent products. Specifically, we use data produced by multiple satellite altimetry missions (TOPEX/Poseidon, Jason-1, Jason-2, Jason-3, and Envisat) from 1992 on, with surface extent estimated from Terra/Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2000 on. We leverage relationships between elevation and surface area (i.e., hypsometry) to produce estimates of ΔV even during periods when either of the variables was not available. This approach is successful provided that there are strong relationships between the two variables during an overlapping period. Our target is to produce time series of ΔV as well as of WSE and WSA for a set of 347 lakes and reservoirs globally for the 1992–2018 period. The data sets presented and their respective algorithm theoretical basis documents are publicly available and distributed via the Physical Oceanography Distributed Active Archive Center (PO DAAC; last access: 13 May 2020) of NASA's Jet Propulsion Laboratory. Specifically, the WSE data set is available at (Birkett et al., 2019), the WSA data set is available at (and Kumar, 2019), and the ΔV data set is available at (Tortini et al., 2019). The records we describe represent the most complete global surface water time series available from the launch of TOPEX/Poseidon in 1992 (the beginning of the satellite altimetry era) to the near present. The production of long-term, consistent, and calibrated records of surface water cycle variables such as those in the data set presented here is of fundamental importance to baseline future SWOT products.
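The storage-change estimate underlying ΔV amounts to integrating area over elevation. A minimal sketch of that step, assuming directly paired WSE/WSA observations and trapezoidal integration (the actual product uses fitted hypsometric relationships to fill gaps when one variable is missing):

```python
import numpy as np

def delta_v(wse, wsa):
    """Cumulative storage change (km^3) from paired water surface elevation
    (m) and water surface area (km^2) series, integrating A dh with the
    trapezoidal rule relative to the first epoch."""
    h = np.asarray(wse, float) / 1000.0   # m -> km
    a = np.asarray(wsa, float)
    increments = 0.5 * (a[1:] + a[:-1]) * np.diff(h)
    return np.concatenate([[0.0], np.cumsum(increments)])

# Illustrative example: a lake rising 2 m while its area grows 100 -> 120 km^2
dv = delta_v([100.0, 101.0, 102.0], [100.0, 110.0, 120.0])
print(dv[1], dv[2])  # ~0.105 and ~0.22 km^3 relative to the first epoch
```

Because the result is cumulative, its zero point is arbitrary; only differences between epochs are physically meaningful.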
  3. Abstract Background

    No versatile web app exists that allows epidemiologists and managers around the world to comprehensively analyze the impacts of COVID-19 mitigation. The app presented here fills this gap.


    Our web app uses a model that explicitly identifies susceptible, contact, latent, asymptomatic, symptomatic and recovered classes of individuals, and a parallel set of response classes, subject to lower pathogen-contact rates. The user inputs a CSV file of incidence and, if of interest, mortality rate data. A default set of parameters is available that can be overwritten through input or online entry, and a user-selected subset of these can be fitted to the model using maximum-likelihood estimation (MLE). Model fitting and forecasting intervals are specifiable and changes to parameters allow counterfactual and forecasting scenarios. Confidence or credible intervals can be generated using stochastic simulations, based on MLE values, or on an inputted CSV file containing Markov chain Monte Carlo (MCMC) estimates of one or more parameters.
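The class-transition structure described above can be sketched with a much-reduced discrete-time compartment model. The code below is a plain SEIR step, a greatly simplified stand-in for the app's susceptible/contact/latent/asymptomatic/symptomatic/recovered model; the parallel behavioural-response classes, time-varying drivers, and MLE fitting are omitted, and the parameter values are illustrative only:

```python
def seir_step(state, beta, sigma, gamma):
    """One daily step of a discrete-time SEIR model.
    beta: contact/transmission rate, sigma: 1/latency, gamma: 1/infectious period."""
    S, E, I, R = state
    N = S + E + I + R
    new_exposed = beta * S * I / N    # S -> E
    new_infectious = sigma * E        # E -> I
    new_recovered = gamma * I         # I -> R
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

state = (9990.0, 0.0, 10.0, 0.0)   # 10 initial infectious in a population of 10,000
for _ in range(30):
    state = seir_step(state, beta=0.4, sigma=0.2, gamma=0.1)
print(round(state[2]))  # infectious count after 30 days
```

Fitting such a model to an uploaded incidence CSV, as the app does, would then mean choosing the parameters that maximize the likelihood of the observed daily case counts.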


    We illustrate the use of our web app in extracting social distancing, social relaxation, surveillance, or virulence switching functions (i.e., time-varying drivers) from the incidence and mortality rates of COVID-19 epidemics in Israel, South Africa, and England. The Israeli outbreak exhibits four distinct phases: initial outbreak, social distancing, social relaxation, and a second-wave mitigation phase. An MCMC projection of this latter phase suggests the Israeli epidemic will continue to produce into late November an average of around 1500 new cases per day, unless the population practices social-relaxation measures at least 5-fold below the level in August, which itself is 4-fold below the level at the start of July. Our analysis of the relatively late South African outbreak, which became the world's fifth-largest COVID-19 epidemic in July, revealed that the decline through late July and early August was characterised by a social distancing driver operating at more than twice the per-capita applicable-disease-class (pc-adc) rate of the social relaxation driver. Our analysis of the relatively early English outbreak identified a more than 2-fold improvement in surveillance over the course of the epidemic. It also identified a pc-adc social distancing rate in early August that, though nearly four times the pc-adc social relaxation rate, appeared to barely contain a second wave that would break out if social distancing were further relaxed.


    Our web app provides policy makers and health officers who have no epidemiological modelling or computer coding expertise with an invaluable tool for assessing the impacts of different outbreak mitigation policies and measures. This includes an ability to generate an epidemic-suppression or curve-flattening index that measures the intensity with which behavioural responses suppress or flatten the epidemic curve in the region under consideration.

  4. Abstract Background

    Differential correlation networks are increasingly used to delineate changes in interactions among biomolecules. They characterize differences between omics networks under two different conditions, and can be used to delineate mechanisms of disease initiation and progression.


    We present a new R package, , that facilitates the estimation and visualization of differential correlation networks using multiple correlation measures and inference methods. The software is implemented in , and , and is available at . Visualization has been tested for the Chrome and Firefox web browsers. A demo is available at .


    Our software offers considerable flexibility by allowing the user to interact with the visualization and choose from different estimation methods and visualizations. It also allows the user to easily toggle between correlation networks for samples under one condition and differential correlations between samples under two conditions. Moreover, the software facilitates integrative analysis of cross-correlation networks between two omics data sets.
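The basic quantity behind such networks is the change in pairwise correlation between two conditions. A minimal numpy sketch of that computation (the R package itself supports multiple correlation measures and inference methods beyond this Pearson difference):

```python
import numpy as np

def differential_correlation(x1, x2):
    """Difference of Pearson correlation matrices between two conditions.
    x1, x2: (samples x features) arrays for condition 1 and condition 2."""
    return np.corrcoef(x1, rowvar=False) - np.corrcoef(x2, rowvar=False)

rng = np.random.default_rng(0)
n = 200
# Condition 1: features 0 and 1 strongly positively correlated
z = rng.normal(size=n)
cond1 = np.column_stack([z, z + 0.1 * rng.normal(size=n), rng.normal(size=n)])
# Condition 2: all three features independent
cond2 = rng.normal(size=(n, 3))

dc = differential_correlation(cond1, cond2)
print(dc[0, 1])  # large positive: the 0-1 edge changed between conditions
```

Thresholding the entries of `dc` (or, better, testing them for significance, as the package does) yields the edge set of the differential network.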

  5. In many modern enterprises, factory managers monitor their machinery and processes to prevent faults and product defects and to maximize productivity and efficiency. Asset condition, product quality, and system productivity monitoring consume some 40-70% of production costs. Oftentimes, resource constraints have prevented the adoption and implementation of these practices in small businesses. The recent evolution of manufacturing-as-a-service and increased digitalization open opportunities for small and medium-scale companies to adopt smart manufacturing practices and thereby surmount these constraints. Specifically, sensor wrappers that delineate the specifications of sensor integration into manufacturing machinery, together with an appropriate edge-cloud computing and communication architecture, can provide even small businesses with a real-time data pipeline to monitor their manufacturing machines. However, the data in itself is difficult to interpret locally. Additionally, the proprietary standards and products of the various components of a sensor wrapper make it difficult to implement a sensor wrapper schema. In this paper, we report an open-source method to integrate sensors into legacy manufacturing equipment and hardware. We implemented this pipeline with off-the-shelf sensors on a polisher (from Buehler), a shaft grinding machine (from Micromatic), and a hybrid manufacturing machine (from Optomec), and used hardware and software components such as a National Instruments Data Acquisition (NI-DAQ) module to collect and stream live data. We evaluate the performance of the data pipeline as it connects to the Smart Manufacturing Innovation Platform (SMIP), a web-based data ingestion platform that is part of the Clean Energy Smart Manufacturing Innovation Institute (CESMII), a U.S. Department of Energy-sponsored initiative, in terms of data-volume versus latency tradeoffs. We demonstrate a viable implementation of Smart Manufacturing by creating a vendor-agnostic web dashboard that fuses multiple sensors to perform real-time performance analysis with lossless data integrity.
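The data-volume versus latency tradeoff evaluated above can be made concrete with a back-of-the-envelope batching calculation: larger batches amortize per-message overhead (fewer bytes on the wire) at the cost of waiting longer before samples reach the dashboard. The sample rate and overhead figures below are illustrative assumptions, not measurements from the paper's NI-DAQ/SMIP pipeline:

```python
def batch_tradeoff(sample_rate_hz, sample_bytes, header_bytes, batch_size):
    """Return (added latency in seconds, wire bytes per second) when
    batch_size samples are buffered before each message is sent."""
    latency = batch_size / sample_rate_hz            # time to fill one batch
    msgs_per_s = sample_rate_hz / batch_size
    wire_bps = msgs_per_s * (header_bytes + batch_size * sample_bytes)
    return latency, wire_bps

# 1 kHz sensor, 8-byte samples, 200 bytes of per-message protocol overhead
for batch in (1, 10, 100, 1000):
    lat, bps = batch_tradeoff(1000, 8, 200, batch)
    print(f"batch={batch:5d}  latency={lat:7.3f}s  bytes/s={bps:9.0f}")
```

Under these assumptions, sending every sample individually costs 26x the bandwidth of one-second batches, which is the kind of tradeoff a deployment would tune against its dashboard-latency requirement.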