
Title: Automated, high-throughput image calibration for parallel-laser photogrammetry
Parallel-laser photogrammetry is growing in popularity as a way to collect non-invasive body size data from wild mammals. Despite its many appeals, this method requires researchers to hand-measure (i) the pixel distance between the parallel laser spots (inter-laser distance) to produce a scale within the image, and (ii) the pixel distance between the study subject’s body landmarks (inter-landmark distance). This manual effort is time-consuming and introduces human error: a researcher measuring the same image twice will rarely return the same values both times (within-observer error), as is also the case when two researchers measure the same image (between-observer error). Here, we present two independent methods that automate the inter-laser distance measurement of parallel-laser photogrammetry images. One method uses machine learning and image processing techniques in Python, and the other uses image processing techniques in ImageJ. Both methods reduce labor and increase precision without sacrificing accuracy. We first introduce the workflow of the two methods. Then, using two parallel-laser datasets of wild mountain gorilla and wild savannah baboon images, we validate the precision of these two automated methods relative to manual measurements and to each other. We also estimate the reduction of variation in final body size estimates in centimeters when adopting these automated methods, since automated measurement introduces no human error. Finally, we highlight the strengths of each method, suggest best practices for adopting either of them, and propose future directions for the automation of parallel-laser photogrammetry data.
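To make the inter-laser distance step concrete, here is a minimal sketch of how two laser spots can be localized and measured automatically. This is an illustrative approach (relative brightness thresholding plus connected-component centroids), not the authors' released Python or ImageJ pipeline; the function name and threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

def inter_laser_distance(red_channel, rel_threshold=0.9):
    """Estimate the pixel distance between two laser spots.

    red_channel: 2-D float array; the laser spots are assumed to be
    the two largest connected regions near the maximum brightness
    (an illustrative assumption, not the published method).
    """
    mask = red_channel >= rel_threshold * red_channel.max()
    labels, n = ndimage.label(mask)
    if n < 2:
        raise ValueError("fewer than two candidate laser spots found")
    # Rank connected components by area and keep the two largest.
    sizes = ndimage.sum(mask, labels, index=list(range(1, n + 1)))
    top2 = np.argsort(sizes)[-2:] + 1
    # Centroid of each spot, then the Euclidean pixel distance.
    (y1, x1), (y2, x2) = ndimage.center_of_mass(mask, labels, index=top2)
    return float(np.hypot(y2 - y1, x2 - x1))
```

Dividing the known laser spacing (in cm) by this pixel distance yields the per-image scale used to convert inter-landmark pixel distances into body measurements.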
Journal Name:
Mammalian Biology
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Imagery from drones is becoming common in wildlife research and management, but processing data efficiently remains a challenge. We developed a methodology for training a convolutional neural network model on large-scale mosaic imagery to detect and count caribou (Rangifer tarandus), compare model performance with an experienced observer and a group of naïve observers, and discuss the use of aerial imagery and automated methods for large mammal surveys. Combining images taken at 75 m and 120 m above ground level, a faster region-based convolutional neural network (Faster-RCNN) model was trained using annotated imagery with the labels: “adult caribou”, “calf caribou”, and “ghost caribou” (animals moving between images, which appear as blurred individuals after photogrammetry processing). Accuracy, precision, and recall of the model were 80%, 90%, and 88%, respectively. Detections between the model and the experienced observer were highly correlated (Pearson: 0.96–0.99, P value < 0.05). The model was generally more effective than naïve observers in detecting adults, calves, and ghosts at both altitudes. We also discuss the need to improve the consistency of observers’ annotations if manual review will be used to train models accurately. Generalization of automated methods for large mammal detection will be necessary for large-scale studies with diverse platforms, airspace restrictions, and sensor capabilities.
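The three reported scores relate to true/false positives and false negatives in the standard way; a small sketch of that bookkeeping (the function name and example counts are illustrative, not the study's data):

```python
def detection_metrics(tp, fp, fn, tn=0):
    """Standard detection metrics from confusion-matrix counts.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives (tn is often unavailable for open-world detection).
    """
    precision = tp / (tp + fp)            # fraction of detections that are real
    recall = tp / (tp + fn)               # fraction of real animals detected
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy
```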

  2. Iron overload, a complication of repeated blood transfusions, can cause tissue damage and organ failure. The body has no regulatory mechanism to excrete excess iron, so iron overload must be closely monitored to guide therapy and measure treatment response. The concentration of iron in the liver is a reliable marker for total body iron content and is now measured noninvasively with magnetic resonance imaging (MRI). MRI produces a diagnostic image by measuring the signals emitted from the body in the presence of a constant magnetic field and radiofrequency pulses. At each pixel, the signal decay constant, T2*, can be calculated, providing insight about the structure of each tissue. Liver iron content can be quantified based on this T2* value because signal decay accelerates with increasing iron concentration. We developed a method to automatically segment the liver from the MRI image to accurately calculate iron content. Our current algorithm utilizes the active contour model for image segmentation, which iteratively evolves a curve until it reaches an edge or a boundary. We applied this algorithm to each MRI image in addition to a map of pixelwise T2* values, combining basic image processing with imaging physics. One of the limitations of this segmentation model is how it handles noise in the MRI data. Recent advancements in deep learning have enabled researchers to utilize convolutional neural networks to denoise and reconstruct images. We used the Trainable Nonlinear Reaction Diffusion network architecture to denoise the MRI images, allowing for smoother segmentation while preserving fine details.
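The pixelwise T2* map described above comes from fitting a monoexponential decay, S(TE) = S0 · exp(-TE / T2*), to the signal at several echo times. A minimal log-linear least-squares sketch (illustrative; the function name and numerical guards are assumptions, not the authors' implementation):

```python
import numpy as np

def fit_t2star(echo_times, signals):
    """Fit S(TE) = S0 * exp(-TE / T2*) per pixel via log-linear least squares.

    echo_times: sequence of n echo times (e.g. in ms).
    signals: array of shape (n, H, W) with the magnitude image at each echo.
    Returns a (H, W) map of T2* in the same time units.
    """
    te = np.asarray(echo_times, dtype=float)
    # ln S = ln S0 - TE / T2*: linear in TE, solved for all pixels at once.
    logs = np.log(np.maximum(signals, 1e-12)).reshape(len(te), -1)
    A = np.stack([np.ones_like(te), -te], axis=1)
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
    inv_t2 = coef[1]                      # 1 / T2* per pixel
    t2star = np.where(inv_t2 > 0, 1.0 / np.maximum(inv_t2, 1e-12), np.inf)
    return t2star.reshape(signals.shape[1:])
```

Shorter T2* in the fitted map then corresponds to higher liver iron concentration, which is why the map feeds the segmentation and quantification steps.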
  3. Holographic cloud probes provide unprecedented information on cloud particle density, size and position. Each laser shot captures particles within a large volume, where images can be computationally refocused to determine particle size and location. However, processing these holograms with standard methods or machine learning (ML) models requires considerable computational resources, time and occasional human intervention. ML models are trained on simulated holograms obtained from the physical model of the probe since real holograms have no absolute truth labels. Using another processing method to produce labels would be subject to errors that the ML model would subsequently inherit. Models perform well on real holograms only when image corruption is performed on the simulated images during training, thereby mimicking non-ideal conditions in the actual probe. Optimizing image corruption requires a cumbersome manual labeling effort. Here we demonstrate the application of the neural style translation approach to the simulated holograms. With a pre-trained convolutional neural network, the simulated holograms are “stylized” to resemble the real ones obtained from the probe, while at the same time preserving the simulated image “content” (e.g. the particle locations and sizes). With an ML model trained to predict particle locations and shapes on the stylized data sets, we observed comparable performance on both simulated and real holograms, obviating the need to perform manual labeling. The described approach is not specific to holograms and could be applied in other domains for capturing noise and imperfections in observational instruments to make simulated data more like real world observations.

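Neural style translation of the kind described above typically matches second-order feature statistics (Gram matrices) of CNN activations between the "style" images and the generated ones. A minimal sketch of that statistic only (illustrative; the actual network and losses used by the authors are not shown here):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one layer's CNN feature maps.

    features: array of shape (C, H, W). Entry (i, j) is the correlation
    between channels i and j over all spatial positions; matching these
    statistics is what transfers "style" (texture/noise character).
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Normalize by the number of spatial positions so the statistic is
    # comparable across layers with different resolutions.
    return f @ f.T / (h * w)
```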
  4. Increasingly, drone-based photogrammetry has been used to measure size and body condition changes in marine megafauna. A broad range of platforms, sensors, and altimeters are being applied for these purposes, but there is no unified way to predict photogrammetric uncertainty across this methodological spectrum. As such, it is difficult to make robust comparisons across studies, disrupting collaborations amongst researchers using platforms with varying levels of measurement accuracy. Here we built off previous studies quantifying uncertainty and used an experimental approach to train a Bayesian statistical model using a known-sized object floating at the water’s surface to quantify how measurement error scales with altitude for several different drones equipped with different cameras, focal length lenses, and altimeters. We then applied the fitted model to predict the length distributions and estimate age classes of unknown-sized humpback whales Megaptera novaeangliae, as well as to predict the population-level morphological relationship between rostrum to blowhole distance and total body length of Antarctic minke whales Balaenoptera bonaerensis. This statistical framework jointly estimates errors from altitude and length measurements from multiple observations and accounts for altitudes measured with both barometers and laser altimeters while incorporating errors specific to each. This Bayesian model outputs a posterior predictive distribution of measurement uncertainty around length measurements and allows for the construction of highest posterior density intervals to define measurement uncertainty, which allows one to make probabilistic statements and stronger inferences pertaining to morphometric features critical for understanding life history patterns and potential impacts from anthropogenically altered habitats.
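The deterministic geometry underlying such drone measurements is the pinhole-camera relation: one image pixel covers altitude × pixel pitch / focal length on the ground, so altitude error propagates linearly into length error. A sketch of that relation only (the Bayesian error model in the abstract is not reproduced here; names and example values are illustrative):

```python
def ground_sampling_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """Meters on the ground covered by one pixel (pinhole-camera relation)."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def estimate_length_m(pixel_length, altitude_m, focal_length_mm, pixel_pitch_um):
    """Convert a measured pixel length into meters at the water's surface."""
    gsd = ground_sampling_distance(altitude_m, focal_length_mm, pixel_pitch_um)
    return pixel_length * gsd
```

Because the estimate scales directly with altitude, a barometer bias of a few meters shifts every length by the same fraction, which is why the study models barometer and laser-altimeter errors separately.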
  5. Martelli, Pier Luigi (Ed.)
    Abstract Motivation: Accurate prediction of residue-residue distances is important for protein structure prediction. We developed several protein distance predictors based on a deep learning distance prediction method and blindly tested them in the 14th Critical Assessment of Protein Structure Prediction (CASP14). The prediction method uses deep residual neural networks with the channel-wise attention mechanism to classify the distance between every two residues into multiple distance intervals. The input features for the deep learning method include co-evolutionary features as well as other sequence-based features derived from multiple sequence alignments (MSAs). Three alignment methods are used with multiple protein sequence/profile databases to generate MSAs for input feature generation. Based on different configurations and training strategies of the deep learning method, five MULTICOM distance predictors were created to participate in the CASP14 experiment. Results: Benchmarked on 37 hard CASP14 domains, the best performing MULTICOM predictor is ranked 5th out of 30 automated CASP14 distance prediction servers in terms of precision of top L/5 long-range contact predictions (i.e. classifying distances between two residues into two categories: in contact (< 8 Angstrom) and not in contact otherwise) and performs better than the best CASP13 distance prediction method. The best performing MULTICOM predictor is also ranked 6th among automated server predictors in classifying inter-residue distances into 10 distance intervals defined by CASP14 according to the precision of distance classification. The results show that the quality and depth of MSAs depend on alignment methods and sequence databases and have a significant impact on the accuracy of distance prediction. Using larger training datasets and multiple complementary features improves prediction accuracy.
However, the number of effective sequences in MSAs is only a weak indicator of the quality of MSAs and the accuracy of predicted distance maps. In contrast, there is a strong correlation between the accuracy of contact/distance predictions and the average probability of the predicted contacts, which can therefore be more effectively used to estimate the confidence of distance predictions and select predicted distance maps. Availability: The software package, source code, and data of DeepDist2 are freely available at and
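The CASP contact criterion and the top-L/5 evaluation used above can be sketched in a few lines. This is an illustration of the standard definitions (contact = inter-residue distance < 8 Angstrom; long-range = sequence separation ≥ 24); the helper names are hypothetical, not part of DeepDist2.

```python
import numpy as np

def contacts_from_distances(dist_map, cutoff=8.0):
    """Binary contact map from an L x L distance map (Angstrom)."""
    d = np.asarray(dist_map, dtype=float)
    contacts = d < cutoff
    np.fill_diagonal(contacts, False)     # a residue is not its own contact
    return contacts

def top_l_over_5_precision(prob_map, true_contacts, seq_sep=24):
    """Precision of the L/5 highest-probability long-range pairs.

    prob_map: L x L predicted contact probabilities.
    true_contacts: L x L boolean ground-truth contact map.
    """
    L = prob_map.shape[0]
    i, j = np.triu_indices(L, k=seq_sep)  # long-range pairs only
    order = np.argsort(prob_map[i, j])[::-1][: max(L // 5, 1)]
    return float(true_contacts[i[order], j[order]].mean())
```

The second function also illustrates the abstract's closing point: ranking pairs by predicted probability is exactly where the average predicted-contact probability becomes a useful confidence estimate.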