Multi-spectral satellite images that remotely sense the Earth's surface at regular intervals are often contaminated by cloud occlusion. Remote sensing imagery captured by satellites, drones, and aircraft informs a wide range of applications, including monitoring vegetation health, tracking droughts, and weather forecasting. Researchers studying the Earth's surface are often hindered in gathering reliable observations by contaminated reflectance values, which are sensitive to thin, thick, and cirrus clouds as well as their shadows. In this study, we propose a deep learning network architecture, CloudNet, to alleviate cloud occlusion in remote sensing imagery captured by the Landsat-8 satellite, for both visible and non-visible spectral bands. Our deep neural network model is trained on a distributed storage cluster and leverages historical trends within Landsat-8 imagery while complementing this analysis with high-resolution Sentinel-2 imagery. Our empirical benchmarks profile the efficiency of the CloudNet model across a range of cloud-occluded pixel fractions in the input image. We further compare CloudNet's performance with state-of-the-art deep learning approaches such as SpAGAN and ResNet. We also propose a novel method, dynamic hierarchical transfer learning, to reduce the computational resources required to train the model to the desired accuracy. Our model regenerates features of cloudy images with a high PSNR of 34.28 dB.
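The reported 34.28 dB figure is peak signal-to-noise ratio. As a point of reference, here is a minimal sketch of the standard PSNR computation for reconstructed imagery; the function name and the assumption that images are scaled to [0, max_value] are illustrative, not CloudNet's actual evaluation code.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a cloud-free reference image
    and a model reconstruction, both scaled to [0, max_value]."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```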
Cloud shadow and uneven illumination detection and correction using the U-net architecture in near-surface images of complex forest canopies
Abstract. In humid tropical regions, irregular illumination and cloud shadows complicate near-surface optical remote sensing. Maintaining geographical and spectral consistency can therefore require costly, repeated surveys, which significantly hampers the regular monitoring of forest ecosystems. A novel deep learning correction method is presented here to address this issue in high-resolution canopy images. Our method trains a deep learning model on one or a few well-illuminated, homogeneous reference images augmented with artificially generated cloud shadows. This training enables the model to predict illumination and cloud shadow patterns in any image and ultimately mitigate these effects. Using images captured by multispectral and RGB cameras, we evaluated the method across multiple sensors and conditions, including nadir-view images from two drone-mounted sensors and tower-mounted RGB PhenoCams. The technique effectively corrects uneven illumination in near-infrared and true-color RGB images, including in non-forested areas. This improvement was also evident in more consistent normalized difference vegetation index (NDVI) patterns in areas affected by uneven illumination. We further assessed accuracy and Kappa values on a binary classification task, detecting non-photosynthetic vegetation (NPV) in a mosaic, by comparing corrected RGB images to the originals. Overall accuracy and Kappa were both significantly improved in corrected images, increasing by 2.5% and 1.1%, respectively. Moreover, the method generalizes across sensors and conditions. Further work should focus on refining the technique and exploring its applicability to satellite imagery and beyond.
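The core training trick is augmenting well-illuminated reference images with artificial cloud shadows. A minimal sketch of one plausible way to generate such a training pair is shown below, modeling a shadow as a smooth random attenuation field; the authors' actual generation procedure and parameter choices are not specified in the abstract, so everything here is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_synthetic_shadow(image: np.ndarray, rng: np.random.Generator,
                         max_attenuation: float = 0.6, smoothness: float = 25.0):
    """Darken a well-illuminated image (H, W, C, values in [0, 1]) under a
    random smooth mask, returning (augmented_image, shadow_mask) as a
    training pair for an illumination-correction model."""
    h, w = image.shape[:2]
    noise = rng.standard_normal((h, w))
    mask = gaussian_filter(noise, sigma=smoothness)          # smooth random field
    mask = (mask - mask.min()) / (mask.max() - mask.min())   # normalize to [0, 1]
    attenuation = 1.0 - max_attenuation * mask               # 1.0 = fully lit
    return image * attenuation[..., None], mask
```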
- PAR ID: 10569177
- Publisher / Repository: Copernicus Publications
- Date Published:
- Journal Name: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Volume: XLVIII-3-2024
- ISSN: 2194-9034
- Page Range / eLocation ID: 191 to 196
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Spatial transcriptomics techniques such as STARmap [15] enable the subcellular detection of RNA transcripts within complex tissue sections. The data from these techniques are impacted by optical microscopy limitations, such as shading or vignetting effects from uneven illumination during image capture. Downstream analysis of these sparse, spatially resolved transcripts depends on the correction of these artefacts. This paper introduces a novel non-parametric vignetting correction tool for spatial transcriptomic images, which estimates the illumination field and background using an efficient iterative sliced histogram normalization routine. We show that our method outperforms state-of-the-art shading correction techniques in both illumination and background field estimation and requires fewer input images to perform the estimation adequately. We further demonstrate an important downstream application of our technique, showing that spatial transcriptomic volumes corrected by our method yield higher and more uniform gene-expression spot calling in the rodent hippocampus. Python code and a demo file to reproduce our results are provided in the supplementary material and at this GitHub page: https://github.com/BoveyRao/Non-parametric-vc-for-sparse-st
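The paper's sliced-histogram routine is not reproduced here, but a minimal sketch of the standard multiplicative-additive shading model such methods build on may help orient readers: estimate a per-pixel additive background and multiplicative illumination field from a stack of images, then invert the model. The percentile and normalization choices below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def correct_shading(stack: np.ndarray):
    """Correct vignetting in an image stack (N, H, W) under the standard model
        observed = illumination * signal + background.
    Background is taken as a low per-pixel percentile across the stack;
    illumination is the mean of background-subtracted frames, normalized
    to unit mean (illustrative estimators)."""
    background = np.percentile(stack, 5, axis=0)              # per-pixel dark/background field
    flat = (stack - background).mean(axis=0)                  # raw flat field
    illumination = flat / flat.mean()                         # normalize to mean 1
    corrected = (stack - background) / np.clip(illumination, 1e-6, None)
    return corrected, illumination, background
```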
-
Significance: Laparoscopic surgery presents challenges in localizing oncological margins due to poor contrast between healthy and malignant tissues. Optical properties can uniquely identify various tissue types and disease states with high sensitivity and specificity, making them a promising tool for surgical guidance. Although spatial frequency domain imaging (SFDI) effectively measures quantitative optical properties, its deployment in laparoscopy is challenging due to the constrained imaging environment. Thus, there is a need for compact structured illumination techniques to enable accurate, quantitative endogenous contrast in minimally invasive surgery.
Aim: We introduce a compact, two-camera laparoscope that incorporates both active stereo depth estimation and speckle-illumination SFDI (si-SFDI) to map profile-corrected, pixel-level absorption (μa) and reduced scattering (μs′) optical properties in images of tissues with complex geometries.
Approach: We used multimode fiber-coupled 639 nm laser illumination to generate high-contrast speckle patterns on the object. These patterns were imaged through a modified commercial stereo laparoscope for optical property estimation via si-SFDI. Compared with the original si-SFDI work, which required ≥10 images of randomized speckle patterns for accurate optical property estimation, our approach approximates the DC response using a laser speckle reducer (LSR) and consequently requires only two images. In addition, we demonstrate 3D profilometry using active stereo from low-coherence RGB laser flood illumination. Sample topography was then used to correct for measured intensity variations caused by object height and surface angle differences with respect to a calibration phantom. The low-contrast RGB speckle pattern was blurred using an LSR to approximate incoherent white light illumination. We validated profile-corrected si-SFDI against conventional SFDI in phantoms with simple and complex geometries, as well as in an in vivo human finger constriction time series.
Results: Laparoscopic si-SFDI optical property measurements agreed with conventional SFDI measurements on flat tissue phantoms, with errors of 6.4% for absorption and 5.8% for reduced scattering. Profile correction improved accuracy on phantoms with complex geometries, particularly for absorption, where it reduced the error by 23.7%. The in vivo finger constriction study further validated laparoscopic si-SFDI, showing errors of 8.2% for absorption and 5.8% for reduced scattering relative to conventional SFDI. Moreover, the observed trends in optical properties due to physiological changes were consistent with previous studies.
Conclusions: Our stereo-laparoscopic implementation of si-SFDI provides a simple method to obtain accurate optical property maps through a laparoscope for flat and complex geometries. This has the potential to provide quantitative endogenous contrast for minimally invasive surgical guidance.
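The two-image demodulation described in the Approach section can be sketched roughly as follows: the LSR-blurred image approximates the planar (DC) response, while local speckle contrast in a sliding window serves as a proxy for the AC response at the speckle's spatial frequency content. This is a loose reconstruction from the abstract, not the authors' code; window size, normalization, and the downstream lookup-table inversion are all assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def si_sfdi_demodulation(speckle_img: np.ndarray, lsr_img: np.ndarray, window: int = 15):
    """Two-image si-SFDI demodulation (sketch): DC from the LSR-blurred
    frame, AC from local speckle fluctuation in the coherent frame."""
    dc = lsr_img.astype(np.float64)                               # planar illumination response
    img = speckle_img.astype(np.float64)
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img ** 2, window)
    local_std = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))    # local speckle fluctuation
    ac = local_std                                                # AC amplitude proxy
    # Calibrated (ac, dc) pairs would then feed a lookup table or model
    # inversion to recover mu_a and mu_s' per pixel.
    return ac, dc
```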
-
Greenspaces in communities are critical for mitigating effects of climate change and have important impacts on health. Today, the availability of satellite imagery combined with deep learning methods allows automated greenspace analysis at high resolution. We propose a novel green color augmentation for deep learning model training to better detect and delineate types of greenspace (trees, grass) in satellite imagery. Our method outperforms gold-standard methods based on vegetation indices by 33.1% (accuracy) and 77.7% (intersection-over-union; IoU), and improves over state-of-the-art deep learning-based greenspace segmentation methods by 13.4% (IoU) and 3.11% (accuracy). We apply the method to high-resolution (0.27 m/pixel) satellite images covering Karachi, Pakistan, and the analysis illuminates an important need: Karachi has 4.17 m² of greenspace per capita, which significantly lags World Health Organization recommendations. Moreover, greenspaces in Karachi often lie in areas of economic development (Pearson's correlation of 0.352 between greenspaces and roads, p < 0.001) and correspond to higher land surface temperature in localized areas. Our analysis of greenspace and how it relates to infrastructure and climate is relevant to urban planners, public health and government professionals, and ultimately the public, for improved allocation and development of greenspaces.
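The abstract does not define the green color augmentation itself; one plausible form is a hue and saturation perturbation restricted to green-dominant pixels, so a segmentation model sees a wider range of vegetation tones during training. The sketch below is an assumed formulation with illustrative hue bands and jitter magnitudes, not the authors' implementation.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def green_augment(image: np.ndarray, rng: np.random.Generator,
                  hue_jitter: float = 0.03, sat_scale: tuple = (0.8, 1.2)):
    """Illustrative green-color augmentation for greenspace segmentation:
    perturb hue and saturation of green-dominant pixels in an RGB image
    (H, W, 3, values in [0, 1])."""
    hsv = rgb2hsv(image)
    green = (hsv[..., 0] > 0.17) & (hsv[..., 0] < 0.45)    # rough green hue band
    hsv[..., 0][green] += rng.uniform(-hue_jitter, hue_jitter)
    hsv[..., 1][green] *= rng.uniform(*sat_scale)
    return hsv2rgb(np.clip(hsv, 0.0, 1.0))
```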
-
Phenology is a distinct marker of the impacts of climate change on ecosystems. Accordingly, monitoring the spatiotemporal patterns of vegetation phenology is important to understand the changing Earth system. A wide range of sensors has been used to monitor vegetation phenology, including digital cameras with different viewing geometries mounted on various types of platforms. Sensor perspective, view angle, and resolution can potentially impact estimates of phenology. We compared three different methods of remotely sensing vegetation phenology: an unoccupied aerial vehicle (UAV)-based, downward-facing RGB camera; a below-canopy, upward-facing hemispherical camera with blue (B), green (G), and near-infrared (NIR) bands; and a tower-based RGB PhenoCam positioned at an oblique angle to the canopy. We used these to estimate the spring phenological transition towards canopy closure in a mixed-species temperate forest in central Virginia, USA. Our study had two objectives: (1) to compare the above- and below-canopy inference of canopy greenness (using the green chromatic coordinate and the normalized difference vegetation index) and canopy structural attributes (leaf area and gap fraction) by matching below-canopy hemispherical photos with high-spatial-resolution (0.03 m) UAV imagery, to find the appropriate spatial coverage and resolution for comparison; and (2) to compare how UAV, ground-based, and tower-based imagery performed in estimating the timing of the spring phenological transition. We found that a spatial buffer of 20 m radius for UAV imagery is most closely comparable to below-canopy imagery in this system. Sensors and platforms agree within +/- 5 days on when canopy greenness stabilizes from the spring phenophase into the growing season. We show that pairing UAV imagery with tower-based observation platforms and plot-based observations for phenological studies (e.g., long-term monitoring, existing research networks, and permanent plots) has the potential to scale plot-based forest structural measures via UAV imagery, constrain uncertainty estimates around phenophases, and more robustly assess site heterogeneity.
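Both greenness indices named in this abstract have standard definitions: the green chromatic coordinate is GCC = G / (R + G + B), and NDVI = (NIR - red) / (NIR + red). A minimal reference sketch of their computation over band arrays follows; the function names are illustrative.

```python
import numpy as np

def green_chromatic_coordinate(rgb: np.ndarray) -> np.ndarray:
    """Green chromatic coordinate, GCC = G / (R + G + B), the standard
    canopy-greenness index for RGB PhenoCam and UAV imagery (H, W, 3)."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1)
    return np.divide(rgb[..., 1], total, out=np.zeros_like(total), where=total > 0)

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index from NIR and red bands."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    denom = nir + red
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)
```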