Title: COMPARISON OF DIVER-OPERATED UNDERWATER PHOTOGRAMMETRIC SYSTEMS FOR CORAL REEF MONITORING

Abstract. Underwater photogrammetry is a well-established technique for measuring and modelling the subaquatic environment in fields ranging from archaeology to marine ecology. While the acquisition and processing of images have become straightforward for simple tasks, applications requiring relative accuracy better than 1:1000 are still considered challenging. This study focuses on the metric evaluation of different off-the-shelf camera systems for making high-resolution, high-accuracy measurements of coral reefs for monitoring through time, where the variations to be measured are in the range of a few centimeters per year. High-quality and low-cost systems (reflex and mirrorless vs. action cameras, e.g. GoPro) with multiple lenses (prime and zoom), different fields of view (from fisheye to moderate wide angle), pressure housing materials and lens ports (dome and flat) are compared. Tests are repeated at different camera-to-object distances to investigate distance-dependent errors and assess the accuracy of the photogrammetrically derived models. An extensive statistical analysis of the different systems is performed, and comparisons against reference control points measured through a high-precision underwater geodetic network are reported.
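To make the accuracy assessment concrete, the sketch below compares model-derived control point coordinates against reference coordinates from a geodetic network and expresses the 3D RMSE relative to the camera-to-object distance. All coordinates and the distance are invented for illustration, not values from the paper.

```python
import numpy as np

# Reference control point coordinates from the geodetic network (m);
# illustrative values only.
reference = np.array([
    [10.000, 20.000, -5.000],
    [12.500, 18.300, -5.200],
    [11.100, 22.700, -4.800],
])

# The same points as derived from the photogrammetric model (m).
model = np.array([
    [10.002, 19.998, -5.003],
    [12.497, 18.304, -5.198],
    [11.103, 22.696, -4.802],
])

residuals = model - reference
rmse_3d = np.sqrt((residuals**2).sum(axis=1).mean())

# Relative accuracy against the mean camera-to-object distance (m);
# 1:1000 would require rmse_3d <= mean_distance / 1000.
mean_distance = 1.5
print(f"3D RMSE: {rmse_3d*1e3:.1f} mm, relative ~1:{mean_distance/rmse_3d:.0f}")
```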

 
Award ID(s):
1637396
NSF-PAR ID:
10093045
Journal Name:
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume:
XLII-2/W10
ISSN:
2194-9034
Page Range / eLocation ID:
143 to 150
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    The NeonTreeCrowns dataset is a set of individual-level crown estimates for 100 million trees at 37 geographic sites across the United States surveyed by the National Ecological Observatory Network's Airborne Observation Platform. Each rectangular bounding box crown prediction includes height, crown area, and spatial location.

    How can I see the data?

    A web server for browsing the predictions is available at idtrees.org

    Dataset Organization

    The shapefiles.zip contains 11,000 shapefiles, each corresponding to a 1 km² RGB tile from NEON (ID: DP3.30010.001). For example, "2019_SOAP_4_302000_4100000_image.shp" contains the predictions for "2019_SOAP_4_302000_4100000_image.tif", available from the NEON data portal: https://data.neonscience.org/data-products/explore?search=camera. NEON's file naming convention encodes the year of data collection (2019), the four-letter site code (SOAP), the sampling event (4), and the UTM coordinates of the tile's top left corner (302000_4100000). For NEON site abbreviations and UTM zones see https://www.neonscience.org/field-sites/field-sites-map.
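    As an illustration, a minimal parser for this naming convention might look like the following; it is a sketch written against the example above, not an official NEON utility.

    ```python
    def parse_neon_tile(name: str) -> dict:
        """Split e.g. '2019_SOAP_4_302000_4100000_image.shp' into its fields."""
        stem = name.rsplit(".", 1)[0]
        year, site, event, easting, northing, _ = stem.split("_")
        return {
            "year": int(year),              # year of data collection
            "site": site,                   # four-letter NEON site code
            "sampling_event": int(event),
            "utm_easting": int(easting),    # top left corner easting (m)
            "utm_northing": int(northing),  # top left corner northing (m)
        }

    print(parse_neon_tile("2019_SOAP_4_302000_4100000_image.shp"))
    ```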

    The predictions are also available as a single csv per site: all available tiles for that site and year are combined into one file. These data are not projected, but contain the UTM coordinates of each bounding box (left, bottom, right, top). For both file types the following fields are available (a minimal loading sketch follows the list):

    Height: The crown height measured in meters. Crown height is defined as the 99th percentile of all canopy height pixels from a LiDAR canopy height model (ID: DP3.30015.001).

    Area: The crown area in m² of the rectangular bounding box.

    Label: All data in this release are "Tree".

    Score: The confidence score from the DeepForest deep learning algorithm, ranging from 0 (low confidence) to 1 (high confidence).
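    A minimal loading sketch under the field definitions above; the per-site csv filename is hypothetical, and the exact column names in the release may differ in capitalisation.

    ```python
    import pandas as pd

    # Hypothetical per-site csv name; column names follow the field list
    # above, though the release's capitalisation may differ.
    crowns = pd.read_csv("2019_SOAP.csv")

    # Keep confident, tall crowns (thresholds are illustrative).
    subset = crowns[(crowns["Score"] > 0.5) & (crowns["Height"] > 10)]

    # Unprojected UTM bounding-box coordinates: left, bottom, right, top.
    box_width_m = subset["right"] - subset["left"]
    print(len(subset), "crowns; mean box width:", box_width_m.mean(), "m")
    ```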

    How were predictions made?

    The DeepForest algorithm is available as a Python package: https://deepforest.readthedocs.io/. Predictions were overlaid on the LiDAR-derived canopy height model, and predictions with heights less than 3 m were removed.
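    A sketch of this prediction workflow using the DeepForest package; the calls shown exist in recent releases, but the exact API has changed across versions, and the canopy-height overlay is only outlined in comments.

    ```python
    from deepforest import main

    model = main.deepforest()
    model.use_release()  # fetch the prebuilt release model weights

    # Window across a large tile; returns a DataFrame with xmin, ymin,
    # xmax, ymax, label, score for each predicted crown.
    boxes = model.predict_tile(
        raster_path="2019_SOAP_4_302000_4100000_image.tif",
        patch_size=400,
        patch_overlap=0.05,
    )

    # The release pipeline then overlays each box on the LiDAR canopy
    # height model, assigns the 99th-percentile CHM height, and drops
    # boxes under 3 m, e.g.:
    # boxes = boxes[boxes["height"] >= 3]
    ```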

    How were predictions validated?

    Please see

    Weinstein, B. G., Marconi, S., Bohlman, S. A., Zare, A., & White, E. P. (2020). Cross-site learning in deep learning RGB tree crown detection. Ecological Informatics, 56, 101061.

    Weinstein, B., Marconi, S., Aubry-Kientz, M., Vincent, G., Senyondo, H., & White, E. (2020). DeepForest: A Python package for RGB deep learning tree crown delineation. bioRxiv.

    Weinstein, B. G., Marconi, S., Bohlman, S. A., Zare, A., & White, E. P. (2019). Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks. Remote Sensing, 11(11), 1309.

    Were any sites removed?

    Several sites were removed due to poor NEON data quality. GRSM and PUUM both had lower-quality RGB data that made them unsuitable for prediction. NEON surveys are updated annually, and we expect future flights to correct these errors. We removed the GUIL Puerto Rico site due to its very steep topography and poor sun angle during data collection; the DeepForest algorithm performed poorly when predicting crowns in intensely shaded areas with very little sun penetration. We are happy to make these data available upon request.

    Contact

    We welcome questions, ideas and general inquiries. The data can be used for many applications and we look forward to hearing from you. Contact ben.weinstein@weecology.org. 

    Gordon and Betty Moore Foundation: GBMF4563 
  2. Abstract

    Models and observations suggest that particle flux attenuation is lower across the mesopelagic zone of anoxic environments than of oxic environments. Flux attenuation is controlled by microbial metabolism as well as aggregation and disaggregation by zooplankton, all of which shape the relative abundance of differently sized particles. Observing and modeling particle spectra can provide information about the contributions of these processes. We measured particle size spectrum profiles at one station in the oligotrophic Eastern Tropical North Pacific Oxygen Deficient Zone (ETNP ODZ) using an underwater vision profiler (UVP), a high-resolution camera that counts and sizes particles. Measurements were taken at different times of day over the course of a week. Comparing these data to particle flux measurements from sediment traps collected over the same time period allowed us to constrain the particle size-to-flux relationship and to generate highly resolved depth and time estimates of particle flux rates. We found that particle flux attenuated very little throughout the anoxic water column, and at some time points appeared to increase. Comparing our observations to model predictions suggested that particles of all sizes remineralize more slowly in the ODZ than in oxic waters, and that large particles disaggregate into smaller particles, primarily between the base of the photic zone and 500 m. Acoustic measurements of multiple size classes of organisms suggested that many organisms migrated during the day to the region of high particle disaggregation. Our data suggest that diel-migrating organisms both actively transport biomass and disaggregate particles in the ODZ core.
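    As a rough illustration of the size-to-flux relationship being constrained, flux can be parameterized as a power law summed over UVP size bins, with coefficients tuned against sediment-trap flux. The sketch below uses invented counts and hand-picked coefficients rather than the study's fitted values.

    ```python
    import numpy as np

    d = np.array([0.1, 0.2, 0.4, 0.8, 1.6])           # bin midpoint diameters (mm)
    n = np.array([5000.0, 1200.0, 300.0, 70.0, 15.0])  # particles per bin (per L)

    # Power-law flux contribution per bin: F = sum_i n_i * A * d_i**b.
    # In the study, A and b would be constrained so that F matches
    # sediment-trap flux; here they are placeholder values.
    A, b = 1.0e-3, 2.3
    flux = (n * A * d**b).sum()
    print("modelled flux:", flux, "(arbitrary units)")
    ```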

     
  3. Abstract
    Underwater photogrammetry is increasingly being used by marine ecologists because of its ability to produce accurate, spatially detailed, non-destructive measurements of benthic communities, coupled with affordability and ease of use. However, independent quality control, rigorous imaging system set-up, optimal geometry design and a strict modeling of the imaging process are essential to achieving a high degree of measurable accuracy and resolution. If a proper photogrammetric approach that enables the formal description of the propagation of measurement error and modeling uncertainties is not undertaken, statements regarding the statistical significance of the results are limited. In this paper, we tackle these critical topics, based on the experience gained in the Moorea Island Digital Ecosystem Avatar (IDEA) project, where we have developed a rigorous underwater photogrammetric pipeline for coral reef monitoring and change detection. Here, we discuss the need for a permanent, underwater geodetic network, which serves to define a temporally stable reference datum and a check for the time series of photogrammetrically derived three-dimensional (3D) models of the reef structure. We present a methodology to evaluate the suitability of several underwater camera systems for photogrammetric and multi-temporal monitoring purposes and stress the importance of camera network geometry to minimize the deformations of photogrammetrically derived 3D reef models. Finally, we incorporate the measurement and modeling uncertainties of the full photogrammetric process into a simple and flexible framework for detecting statistically significant changes among a time series of models. 
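    A minimal sketch of the kind of per-point significance test such a framework implies: a change between two epochs is flagged only where it exceeds the propagated measurement uncertainty of both models. The threshold, heights, and sigmas below are illustrative, not values from the paper.

    ```python
    import numpy as np

    z_epoch1 = np.array([0.512, 0.498, 0.745])  # model heights, epoch 1 (m)
    z_epoch2 = np.array([0.530, 0.501, 0.700])  # model heights, epoch 2 (m)
    sigma1 = np.array([0.004, 0.004, 0.006])    # propagated 1-sigma, epoch 1 (m)
    sigma2 = np.array([0.005, 0.004, 0.005])    # propagated 1-sigma, epoch 2 (m)

    change = z_epoch2 - z_epoch1
    sigma_change = np.sqrt(sigma1**2 + sigma2**2)        # error propagation
    significant = np.abs(change) > 1.96 * sigma_change   # 95% confidence
    print(np.round(change, 3), significant)
    ```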
  4. Abstract

    Photography with small unmanned aircraft systems (sUAS) offers opportunities for researchers to better understand habitat selection in wildlife, especially for species that select habitat from an aerial perspective (e.g., many bird species). The growing number of commercial sUAS being flown by recreational users represents a potentially valuable source of data for documenting and studying wildlife habitat. We used a commercially available quadcopter sUAS with a visible-spectrum camera to classify habitat for American Kestrels (Falco sparverius; Aves), as well as to evaluate aspects of image processing and postprocessing relevant to a simple habitat analysis using citizen-science photography. We investigated inter-observer repeatability of habitat classification, effectiveness of cross-image classification and Gaussian filtering, and sensitivity to classification resolution. We photographed vegetation around nests from both 25 m and 50 m above takeoff elevation, and analyzed images via maximum likelihood supervised classification. Our results indicate that commercial off-the-shelf sUAS photography can distinguish between grass, herbaceous, woody, bare ground, and human-modified cover classes with good (kappa > 0.6) or strong (kappa > 0.8) accuracy using a 0.25 m² minimum patch size for aggregation. There was inter-subject variability in designating training samples, but high repeatability of supervised classification accuracy. Gaussian filtering reduced classification accuracy, while coarser classification resolution outperformed finer resolution due to "speckling noise." Image self-classification significantly outperformed cross-image classification. Mean classification accuracy metrics (kappa values) across different photo heights differed little, but, importantly, the rank order of images differed noticeably.
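    For reference, Cohen's kappa, the accuracy metric quoted above, can be computed from a classification confusion matrix as follows; the matrix values are invented for illustration.

    ```python
    import numpy as np

    # Rows: reference class, columns: predicted class
    # (grass, herbaceous, woody, bare ground, human-modified).
    cm = np.array([
        [50,  3,  1,  1,  0],
        [ 4, 40,  3,  1,  0],
        [ 1,  2, 45,  0,  1],
        [ 2,  1,  0, 30,  1],
        [ 0,  0,  1,  1, 25],
    ], dtype=float)

    n = cm.sum()
    p_observed = np.trace(cm) / n                                  # agreement
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    kappa = (p_observed - p_expected) / (1 - p_expected)
    print(f"kappa = {kappa:.2f}")  # > 0.8 would be "strong" on the scale above
    ```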

     
  5. Abstract

    Augmented reality (AR) enhances the user’s perception of the real environment by superimposing virtual images generated by computers. These virtual images provide additional visual information that complements the real-world view. AR systems are rapidly gaining popularity in manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, it is crucial for the invisible virtual environment to be precisely aligned with the physical environment so that human users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. In some robotics applications using AR, we observed instances of misalignment in the visual representation within the designated workspace, which can potentially impact the accuracy of the robot’s operations during the task. Building on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple and efficient calibration procedure to reduce misalignment in general video see-through AR systems. To accurately superimpose virtual information onto the real environment, it is necessary to identify the sources and propagation of errors. In this work, we outline the linear transformation and projection of each point from the virtual world space to the virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the head-mounted display (HMD) to the camera, and experiments are conducted to validate the improvement achieved through the calibration process.
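    As a sketch of the transformation chain described, a world point can be mapped through the world-to-HMD pose and the HMD-to-camera offset (the matrix the offline calibration estimates) and then projected through the camera intrinsics. All matrices below are illustrative placeholders, not calibrated values.

    ```python
    import numpy as np

    def project(p_world, T_world_to_hmd, T_hmd_to_camera, K):
        """Map a 3D world point to pixel coordinates via HMD and camera poses."""
        p = np.append(p_world, 1.0)                   # homogeneous world point
        p_cam = T_hmd_to_camera @ T_world_to_hmd @ p  # world -> HMD -> camera
        uvw = K @ p_cam[:3]                           # pinhole projection
        return uvw[:2] / uvw[2]                       # pixel coordinates

    T_world_to_hmd = np.eye(4)                        # placeholder HMD pose
    T_hmd_to_camera = np.eye(4)
    T_hmd_to_camera[:3, 3] = [0.02, 0.0, -0.01]       # the offset calibration estimates
    K = np.array([[800.0,   0.0, 320.0],              # placeholder intrinsics
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    print(project(np.array([0.1, 0.0, 1.0]), T_world_to_hmd, T_hmd_to_camera, K))
    ```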

     