- Award ID(s):
- 2046059
- PAR ID:
- 10418214
- Date Published:
- Journal Name:
- Remote Sensing
- Volume:
- 14
- Issue:
- 22
- ISSN:
- 2072-4292
- Page Range / eLocation ID:
- 5760
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The field of remote sensing has undergone a remarkable shift where vast amounts of imagery are now readily available to researchers. New technologies, such as uncrewed aircraft systems, make it possible for anyone with a moderate budget to gather their own remotely sensed data, and methodological innovations have added flexibility for processing and analyzing data. These changes create both the opportunity and need to reproduce, replicate, and compare remote sensing methods and results across spatial contexts, measurement systems, and computational infrastructures. Reproducing and replicating research is key to understanding the credibility of studies and extending recent advances into new discoveries. However, reproducibility and replicability (R&R) remain issues in remote sensing because many studies cannot be independently recreated and validated. Enhancing the R&R of remote sensing research will require significant time and effort by the research community. However, making remote sensing research reproducible and replicable does not need to be a burden. In this paper, we discuss R&R in the context of remote sensing and link the recent changes in the field to key barriers hindering R&R while discussing how researchers can overcome those barriers. We argue for the development of two research streams in the field: (1) the coordinated execution of organized sequences of forward-looking replications, and (2) the introduction of benchmark datasets that can be used to test the replicability of results and methods.
-
One of the pathways by which the scientific community confirms the validity of a new scientific discovery is by repeating the research that produced it. When a scientific effort fails to independently confirm the computations or results of a previous study, some fear that it may be a symptom of a lack of rigor in science, while others argue that such an observed inconsistency can be an important precursor to new discovery. Concerns about reproducibility and replicability have been expressed in both scientific and popular media. As these concerns came to light, Congress requested that the National Academies of Sciences, Engineering, and Medicine conduct a study to assess the extent of issues related to reproducibility and replicability and to offer recommendations for improving rigor and transparency in scientific research. Reproducibility and Replicability in Science defines reproducibility and replicability and examines the factors that may lead to non-reproducibility and non-replicability in research. Unlike the typical expectation of reproducibility between two computations, expectations about replicability are more nuanced, and in some cases a lack of replicability can aid the process of scientific discovery. This report provides recommendations to researchers, academic institutions, journals, and funders on steps they can take to improve reproducibility and replicability in science.
-
Heavy rains and tropical storms often result in floods, which are expected to increase in frequency and intensity. Flood prediction models and inundation mapping tools provide decision-makers and emergency responders with crucial information to better prepare for these events. However, the performance of models relies on the accuracy and timeliness of data received from in situ gaging stations and remote sensing; each of these data sources has its limitations, especially when it comes to real-time monitoring of floods. This study presents a vision-based framework for measuring water levels and detecting floods using computer vision and deep learning (DL) techniques. The DL models use time-lapse images captured by surveillance cameras during storm events for the semantic segmentation of water extent in images. Three different DL-based approaches, namely PSPNet, TransUNet, and SegFormer, were applied and evaluated for semantic segmentation. The predicted masks are transformed into water level values by intersecting the extracted water edges with a 2D representation of a point cloud generated by an Apple iPhone 13 Pro lidar sensor. The estimated water levels were compared to reference data collected by an ultrasonic sensor. The results showed that SegFormer outperformed the other DL-based approaches, achieving 99.55% and 99.81% for intersection over union (IoU) and accuracy, respectively. Moreover, the highest correlations between the reference data and the vision-based approach reached above 0.98 for both the coefficient of determination (R²) and Nash–Sutcliffe efficiency. This study demonstrates the potential of using surveillance cameras and artificial intelligence for hydrologic monitoring and their integration with existing surveillance infrastructure.
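As an illustration of the evaluation metrics reported in this abstract (not the authors' code), a minimal Python sketch of intersection over union for binary water masks and Nash–Sutcliffe efficiency for water level series might look like the following; the example water level arrays are hypothetical:

```python
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over union between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty
    return np.logical_and(pred, true).sum() / union

def nash_sutcliffe(simulated: np.ndarray, observed: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency of simulated values against observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical usage: compare vision-derived levels against ultrasonic reference readings.
levels_vision = np.array([1.20, 1.35, 1.50, 1.42])
levels_sensor = np.array([1.18, 1.33, 1.52, 1.40])
print(nash_sutcliffe(levels_vision, levels_sensor))
```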
-
The National Institute of Standards and Technology data science evaluation plant identification challenge is a new periodic competition focused on improving and generalizing remote sensing processing methods for forest landscapes. I created a pipeline to perform three remote sensing tasks. First, a marker-controlled watershed segmentation, thresholded by vegetation index and height, was performed to identify individual tree crowns within the canopy height model. Second, remote sensing data for segmented crowns were aligned with ground measurements by choosing the set of pairings that minimized error in position and in crown area as predicted by stem height. Third, species classification was performed by reducing the dataset’s dimensionality through principal component analysis and then constructing a set of maximum likelihood classifiers to estimate species likelihoods for each tree. Of the three algorithms, the classification routine exhibited the strongest relative performance, with the segmentation algorithm performing the least well.
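To make the classification step concrete, the sketch below shows one plausible way to combine principal component analysis with per-species Gaussian maximum likelihood scoring. It is not the competition pipeline itself; the feature matrices and species labels are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import multivariate_normal

# Placeholder crown-level features (rows = segmented crowns) and species labels.
rng = np.random.default_rng(0)
X_train = rng.random((200, 40))
y_train = rng.integers(0, 5, 200)
X_test = rng.random((20, 40))

# Reduce dimensionality before fitting one Gaussian model per species.
pca = PCA(n_components=5)
Z_train = pca.fit_transform(X_train)
Z_test = pca.transform(X_test)

models = {}
for species in np.unique(y_train):
    Z_c = Z_train[y_train == species]
    models[species] = multivariate_normal(mean=Z_c.mean(axis=0),
                                          cov=np.cov(Z_c, rowvar=False))

# Score each test crown under every species model and pick the maximum likelihood class.
log_likelihoods = np.column_stack([models[s].logpdf(Z_test) for s in sorted(models)])
predicted_species = np.array(sorted(models))[np.argmax(log_likelihoods, axis=1)]
print(predicted_species)
```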
-
Scientific data and their analysis, accuracy, completeness, and reproducibility play a vital role in advancing science and engineering. Open Science Chain (OSC) is a cyberinfrastructure platform built using the Hyperledger Fabric (HLF) blockchain technology to address issues related to data reproducibility and accountability in scientific research. OSC preserves the integrity of research datasets and enables different research groups to share datasets along with their integrity information. Additionally, it enables quick verification of the exact datasets that were used for a particular published study and tracks their provenance. In this paper, we describe OSC’s command line utility, which preserves the integrity of research datasets from within the researchers’ environment or from remote systems such as HPC resources or campus clusters used for research. The Python-based command line utility can be seamlessly integrated into research workflows and provides an easy way to preserve the integrity of research data in the OSC blockchain platform.
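For illustration only (the actual OSC utility and its commands are not reproduced here), integrity information of the kind described typically begins with a cryptographic checksum of the dataset. A minimal Python sketch of computing one in a streaming fashion, with a hypothetical file path, might look like this:

```python
import hashlib

def dataset_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of a dataset file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: record the fingerprint before registering the dataset for later verification.
# print(dataset_fingerprint("results/simulation_output.csv"))
```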