

Title: Multiscale Data Fusion for Surface Soil Moisture Estimation: A Spatial Hierarchical Approach
Abstract

Surface soil moisture (SSM) has been identified as a key climate variable governing hydrologic and atmospheric processes at local, regional, and global scales. The global proliferation of SSM datasets over the past decade holds significant potential for improving our understanding of multiscale SSM dynamics. Three primary issues hinder the fusion of SSM data from disparate instruments: (1) the different spatial resolutions of the instruments, (2) the inherent spatial variability in SSM arising from atmospheric and land surface controls, and (3) measurement errors caused by imperfect retrievals. We present a data fusion scheme that accounts for all three factors using a Bayesian spatial hierarchical model (SHM), combining a geostatistical approach with a hierarchical model. The applicability of the fusion scheme is demonstrated by fusing point, airborne, and satellite data for a watershed exhibiting high spatial variability in Manitoba, Canada. We demonstrate that the proposed scheme is adept at assimilating and predicting SSM distributions across all three scales while accounting for potential measurement errors from imperfect retrievals. Further validation across different hydroclimates, levels of surface heterogeneity, and data platforms is required for wider applicability.
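The measurement-error handling described above can be illustrated with a toy precision-weighted Bayesian update, a minimal sketch that stands in for the full spatial hierarchical model. All numbers (prior, error variances, observation values) are illustrative assumptions, not values from the study, and the spatial (geostatistical) component is omitted entirely.

```python
# Hedged sketch: fuse two noisy SSM observations of the same location,
# each with its own retrieval-error variance, via a conjugate Gaussian
# update. A stand-in for the full Bayesian SHM; all values are hypothetical.
def fuse(obs, err_var, prior_mean=0.25, prior_var=0.05):
    """Posterior mean and variance of true SSM given noisy observations."""
    post_prec = 1.0 / prior_var + sum(1.0 / v for v in err_var)
    post_mean = (prior_mean / prior_var
                 + sum(o / v for o, v in zip(obs, err_var))) / post_prec
    return post_mean, 1.0 / post_prec

# A precise point probe (low error variance) and a coarse satellite
# retrieval (high error variance) of volumetric soil moisture:
mean, var = fuse(obs=[0.30, 0.22], err_var=[0.001, 0.01])
```

The posterior mean is pulled toward the lower-error observation, and the posterior variance is smaller than any single instrument's error variance, which is the basic mechanism that lets the fusion scheme weight disparate platforms by retrieval quality.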

 
NSF-PAR ID:
10375641
Publisher / Repository:
DOI PREFIX: 10.1029
Date Published:
Journal Name:
Water Resources Research
Volume:
55
Issue:
12
ISSN:
0043-1397
Page Range / eLocation ID:
p. 10443-10465
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Atmospheric model systems, such as those used for weather forecast and reanalysis production, often have significant and systematic errors in their representation of the Arctic surface energy budget and its components. The newly available observation data of the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition (2019/2020) enable a range of model analyses and validation in order to advance our understanding of potential model deficiencies. In the present study, we analyze deficiencies in the surface radiative energy budget over Arctic sea ice in the ERA5 global atmospheric reanalysis by comparing against the winter MOSAiC campaign data, as well as a pan-Arctic level-2 MODIS ice surface temperature remote sensing product. We find that ERA5 can simulate the timing of radiatively clear periods, though it is not able to distinguish the two observed radiative Arctic winter states, radiatively clear and opaquely cloudy, in the distribution of the net surface radiative budget. The ERA5 surface temperature over Arctic sea ice has a conditional error, with a positive bias in radiatively clear conditions and a negative bias in opaquely cloudy conditions. The mean surface temperature error is 4°C for radiatively clear situations at MOSAiC and up to 15°C in some parts of the Arctic. The spatial variability of the surface temperature across the four observation sites at MOSAiC is not captured by ERA5 because of its coarse spatial resolution, but it is represented in the level-2 satellite product. A sensitivity analysis of possible error sources, using satellite products of snow depth and sea ice thickness, shows that the positive surface temperature errors during radiatively clear events are, to a large extent, caused by insufficient sea ice thickness and snow depth representation in the reanalysis system.
A positive bias characterizes regions with ice thickness greater than 1.5 m, while the negative bias for thinner ice is partly compensated by the effect of snow.

     
  2. Despite the large efforts made by the ocean modeling community, such as the GODAE (Global Ocean Data Assimilation Experiment), which started in 1997 and was renamed OceanPredict in 2019, the prediction of ocean currents has remained a challenge to the present day, particularly in ocean regions characterized by rapid changes in circulation due to changes in atmospheric forcing or the release of available potential energy through the development of instabilities. Ocean numerical models' useful forecast window is no longer than two days over a given area, even with the best possible initialization. Predictions quickly diverge from the observational field throughout the water column and become unreliable, even though the models can simulate the observed dynamics through other variables such as temperature, salinity, and sea surface height. Numerical methods such as harmonic analysis are used to predict both short- and long-term tidal currents with significant accuracy; however, they are limited to the areas where the tide was measured. In this study, a new approach to ocean current prediction based on deep learning is proposed. This method is evaluated on the measured energetic currents of the Gulf of Mexico circulation, dominated by the Loop Current (LC), at multiple spatial and temporal scales. The approach taken herein consists of dividing the velocity tensor into planes perpendicular to each of the three Cartesian coordinate system directions. A Long Short-Term Memory Recurrent Neural Network, which is best suited to handling long-term dependencies in the data, was thus used to predict the evolution of the velocity field in each plane, along each of the three directions. The predicted tensors, made of the planes perpendicular to each Cartesian direction, revealed that the model's prediction skill was best for the flow field in the planes perpendicular to the direction of prediction.
Furthermore, the fusion of all three predicted tensors significantly increased the overall skill of the flow prediction over the individual models' predictions. The useful forecast period of this new model was greater than 4 days, with a root mean square error of less than 0.05 cm·s⁻¹ and a correlation coefficient of 0.6.
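The plane decomposition described above can be sketched in a few lines of NumPy: a 3-D velocity component is sliced into stacks of planes perpendicular to each Cartesian axis, and the three reconstructed tensors are fused by averaging. The grid shape and the averaging rule are illustrative assumptions; the per-plane LSTM prediction step itself is omitted, so the reconstruction here is exact by construction.

```python
import numpy as np

# Hedged sketch of the decomposition: slice one velocity component on a
# toy (x, y, z) grid into planes perpendicular to each axis. In the full
# method, each stack of planes would feed a separate LSTM (not shown).
u = np.random.rand(8, 6, 4)

planes_x = [u[i, :, :] for i in range(u.shape[0])]  # perpendicular to x
planes_y = [u[:, j, :] for j in range(u.shape[1])]  # perpendicular to y
planes_z = [u[:, :, k] for k in range(u.shape[2])]  # perpendicular to z

# After three independent per-plane predictions, the three reassembled
# tensors can be fused, e.g. by averaging (here, with no prediction step
# applied, the fusion recovers the original tensor exactly):
recon = (np.stack(planes_x, axis=0)
         + np.stack(planes_y, axis=1)
         + np.stack(planes_z, axis=2)) / 3.0
```

In the study's setup, each of the three predicted tensors is most skillful in its own perpendicular planes, so the averaging fusion lets the three directional predictions compensate for one another's weaknesses.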
  3. Abstract. Measurement of light absorption of solar radiation by aerosols is vital for assessing direct aerosol radiative forcing, which affects local and global climate. Low-cost and easy-to-operate filter-based instruments, such as the Particle Soot Absorption Photometer (PSAP), that collect aerosols on a filter and measure light attenuation through the filter are widely used to infer aerosol light absorption. However, filter-based absorption measurements are subject to artifacts that are difficult to quantify. These artifacts are associated with the presence of the filter medium and the complex interactions between the filter fibers and accumulated aerosols. Various correction algorithms have been introduced to correct the filter-based absorption coefficient measurements toward predicting the particle-phase absorption coefficient (Babs). However, the inability of these algorithms to incorporate into their formulations the complex matrix of influencing parameters, such as particle asymmetry parameter, particle size, and particle penetration depth, results in prediction of particle-phase absorption coefficients with relatively low accuracy. The analytical forms of the corrections also lack universal applicability: different corrections are required for rural and urban sites across the world. In this study, we analyzed and compared 3 months of high-time-resolution ambient aerosol absorption data collected synchronously using a three-wavelength photoacoustic absorption spectrometer (PASS) and a PSAP. Both instruments were operated on the same sampling inlet at the Department of Energy's Atmospheric Radiation Measurement program's Southern Great Plains (SGP) user facility in Oklahoma. We implemented the two most commonly used analytical correction algorithms, namely, Virkkula (2010) and the average of Virkkula (2010) and Ogren (2010)–Bond et al. (1999), as well as a random forest regression (RFR) machine learning algorithm to predict Babs values from the PSAP's filter-based measurements. The predicted Babs was compared against the reference Babs measured by the PASS. The RFR algorithm performed the best, yielding the lowest root mean square error of prediction. The algorithm was trained using input datasets from the PSAP (transmission and uncorrected absorption coefficient), a co-located nephelometer (scattering coefficients), and the Aerosol Chemical Speciation Monitor (mass concentration of non-refractory aerosol particles). A revised form of the Virkkula (2010) algorithm suitable for the SGP site has been proposed; however, its performance yields approximately 2-fold errors when compared to the RFR algorithm. To generalize the accuracy and applicability of our proposed RFR algorithm, we trained and tested it on a dataset of laboratory measurements of combustion aerosols. Input variables to the algorithm included the aerosol number size distribution from the Scanning Mobility Particle Sizer, absorption coefficients from the filter-based Tricolor Absorption Photometer, and scattering coefficients from a multiwavelength nephelometer. The RFR algorithm predicted Babs values within 5 % of the reference Babs measured by the multiwavelength PASS during the laboratory experiments. Thus, we show that machine learning approaches offer a promising path to correct for biases in long-term filter-based absorption datasets and accurately quantify their variability and trends needed for robust radiative forcing determination.
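The RFR correction idea described above can be sketched with scikit-learn: a random forest is trained to map filter-based inputs (transmission, uncorrected absorption, co-located scattering) to a reference absorption coefficient. The synthetic data, the transmission-dependent bias, and all variable names below are illustrative assumptions standing in for the study's actual PSAP/PASS dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch: learn a filter-artifact correction with random forest
# regression. The synthetic target mimics a transmission-dependent bias;
# it is not the study's data or its actual bias model.
rng = np.random.default_rng(0)
n = 500
transmission = rng.uniform(0.5, 1.0, n)   # PSAP filter transmission (fraction)
b_abs_raw = rng.uniform(1.0, 20.0, n)     # uncorrected absorption, Mm^-1
b_scat = rng.uniform(5.0, 100.0, n)       # nephelometer scattering, Mm^-1

# Hypothetical "reference" absorption with a transmission-dependent
# multiplicative artifact plus a small scattering cross-sensitivity:
b_abs_ref = b_abs_raw / (1.2 - 0.3 * transmission) + 0.01 * b_scat

X = np.column_stack([transmission, b_abs_raw, b_scat])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, b_abs_ref)
r2 = model.score(X, b_abs_ref)  # in-sample fit; real use needs a held-out split
```

Because the forest learns the correction nonparametrically from co-located inputs, it avoids committing to a single analytical form, which is why (per the abstract) it transfers across sites better than the fixed-form corrections.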
  4. Abstract. Dry deposition is a major sink of tropospheric ozone. Increasing evidence has shown that ozone dry deposition actively links meteorology and hydrology with ozone air quality. However, there is little systematic investigation of the performance of different ozone dry deposition parameterizations at the global scale and how parameterization choice can impact surface ozone simulations. Here, we present the results of the first global, multidecadal modelling and evaluation of ozone dry deposition velocity (vd) using multiple ozone dry deposition parameterizations. We model ozone dry deposition velocities over 1982–2011 using four ozone dry deposition parameterizations that are representative of current approaches in global ozone dry deposition modelling. We use consistent assimilated meteorology, land cover, and satellite-derived leaf area index (LAI) across all four, such that the differences in simulated vd are entirely due to differences in deposition model structures or assumptions about how land types are treated in each. In addition, we use the surface ozone sensitivity to vd predicted by a chemical transport model to estimate the impact of the mean and variability of ozone dry deposition velocity on surface ozone. Our estimated vd values from the four different parameterizations are evaluated against field observations, and while performance varies considerably by land cover type, our results suggest that none of the parameterizations is universally better than the others. Discrepancy in simulated mean vd among the parameterizations is estimated to cause 2 to 5 ppbv of discrepancy in surface ozone in the Northern Hemisphere (NH) and up to 8 ppbv in tropical rainforests in July, and up to 8 ppbv in tropical rainforests and seasonally dry tropical forests in Indochina in December. Parameterization-specific biases based on individual land cover type and hydroclimate are found to be the two main drivers of such discrepancies.
We find statistically significant trends in the multiannual time series of simulated July daytime vd in all parameterizations, driven by warming and drying (southern Amazonia, southern African savannah, and Mongolia) or greening (high latitudes). The trend in July daytime vd is estimated to be 1 % yr⁻¹ and leads to up to 3 ppbv of surface ozone changes over 1982–2011. The interannual coefficient of variation (CV) of July daytime mean vd in the NH is found to be 5 %–15 %, with a spatial distribution that varies with the dry deposition parameterization. Our sensitivity simulations suggest this can contribute between 0.5 and 2 ppbv to interannual variability (IAV) in surface ozone, but all models tend to underestimate interannual CV when compared to long-term ozone flux observations. We also find that IAV in some dry deposition parameterizations is more sensitive to LAI, while in others it is more sensitive to climate. Comparisons with other published estimates of the IAV of background ozone confirm that ozone dry deposition can be an important part of natural surface ozone variability. Our results demonstrate the importance of ozone dry deposition parameterization choice on surface ozone modelling and the impact of IAV of vd on surface ozone, thus making a strong case for further measurement, evaluation, and model–data integration of ozone dry deposition on different spatiotemporal scales.
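The trend-to-ozone arithmetic above can be checked back-of-envelope: a 1 % yr⁻¹ relative trend in vd compounds linearly to roughly 30 % over 1982–2011, which a chemical-transport-model sensitivity then maps to a few ppbv. The sensitivity value below is a hypothetical round number chosen for illustration; the abstract reports only the resulting ozone change, not the sensitivity itself.

```python
# Hedged back-of-envelope check of the stated trend-to-ozone magnitude.
trend_per_yr = 0.01            # 1 % yr^-1 relative trend in v_d
years = 2011 - 1982            # 29-year span
vd_change_frac = trend_per_yr * years   # ~0.29, i.e. ~29 % cumulative change

# Assume (hypothetically) ~10 ppbv of surface ozone response per 100 %
# change in v_d; the result lands near the ~3 ppbv quoted above:
ozone_change_ppbv = vd_change_frac * 10.0
```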
  5. In regions of the world where topography varies significantly with distance, most global climate models (GCMs) have spatial resolutions that are too coarse to accurately simulate key meteorological variables influenced by topography, such as clouds, precipitation, and surface temperatures. One approach to tackling this challenge is to run climate models at sufficiently high resolution over those topographically complex regions, as in the North American Regionally Refined Model (NARRM) subset of the Department of Energy's (DOE) Energy Exascale Earth System Model version 2 (E3SM v2). Although high-resolution simulations are expected to provide unprecedented detail of atmospheric processes, running models at such high resolutions remains computationally expensive compared to lower-resolution models such as the E3SM Low Resolution (LR). Moreover, because regionally refined and high-resolution GCMs are relatively new, there are a limited number of observational datasets and frameworks available for evaluating climate models with regionally varying spatial resolutions. As such, we developed a new framework to quantify the added value of high spatial resolution in simulating precipitation over the contiguous United States (CONUS). To determine its viability, we applied the framework to two model simulations and an observational dataset. We first remapped all the data into Hierarchical Equal-Area Iso-Latitude Pixelization (HEALPix) pixels. HEALPix offers several mathematical properties that enable seamless evaluation of climate models across different spatial resolutions, including its equal-area and partitioning properties. The remapped HEALPix-based data are used to show how the spatial variability of both observed and simulated precipitation changes as resolution increases. This study provides valuable insights into the requirements for achieving accurate simulations of precipitation patterns over the CONUS.
It highlights the importance of allocating sufficient computational resources to run climate models at higher temporal and spatial resolutions to capture spatial patterns effectively. Furthermore, the study demonstrates the effectiveness of the HEALPix framework in evaluating precipitation simulations across different spatial resolutions. This framework offers a viable approach for comparing observed and simulated data when dealing with datasets of varying spatial resolutions. By employing this framework, researchers can extend its usage to other climate variables, datasets, and disciplines that require comparing datasets with different spatial resolutions.
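The equal-area and partitioning properties invoked above follow from the standard HEALPix construction, which this small sketch illustrates with the usual formulas (this is generic HEALPix arithmetic, not the study's actual remapping code; real remapping would typically use a library such as healpy).

```python
import math

# Standard HEALPix resolution arithmetic: for a resolution parameter
# nside, the sphere is partitioned into 12 * nside**2 pixels of equal area.
def healpix_npix(nside: int) -> int:
    return 12 * nside * nside

def healpix_pixel_area_sr(nside: int) -> float:
    """Area of one pixel in steradians (total sphere area is 4*pi)."""
    return 4.0 * math.pi / healpix_npix(nside)

npix = healpix_npix(64)            # 12 * 64**2 = 49152 equal-area pixels
area = healpix_pixel_area_sr(64)

# Partitioning property: doubling nside splits each pixel into four,
# so the pixel area is exactly quartered.
assert math.isclose(healpix_pixel_area_sr(128), area / 4.0)
```

Because every pixel at a given nside has identical area, spatial statistics computed on HEALPix grids need no area weighting, which is what makes comparisons across datasets of differing native resolution seamless.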

     