

Title: Automated, high-throughput image calibration for parallel-laser photogrammetry
Parallel-laser photogrammetry is growing in popularity as a way to collect non-invasive body size data from wild mammals. Despite its many appeals, this method requires researchers to hand-measure (i) the pixel distance between the parallel laser spots (inter-laser distance) to produce a scale within the image, and (ii) the pixel distance between the study subject’s body landmarks (inter-landmark distance). This manual effort is time-consuming and introduces human error: a researcher measuring the same image twice will rarely return the same values both times (resulting in within-observer error), as is also the case when two researchers measure the same image (resulting in between-observer error). Here, we present two independent methods that automate the inter-laser distance measurement of parallel-laser photogrammetry images. One method uses machine learning and image processing techniques in Python, and the other uses image processing techniques in ImageJ. Both of these methods reduce labor and increase precision without sacrificing accuracy. We first introduce the workflow of the two methods. Then, using two parallel-laser datasets of wild mountain gorilla and wild savannah baboon images, we validate the precision of these two automated methods relative to manual measurements and to each other. We also estimate the reduction of variation in final body size estimates in centimeters when adopting these automated methods, as these methods have no human error. Finally, we highlight the strengths of each method, suggest best practices for adopting either of them, and propose future directions for the automation of parallel-laser photogrammetry data.
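The scale conversion the abstract describes can be sketched in a few lines: the two laser spots are a known real-world distance apart, so their pixel separation yields a cm-per-pixel scale that converts any inter-landmark pixel distance into centimeters. A minimal sketch, assuming a hypothetical 4 cm laser separation (the function name and default are illustrative, not from the paper):

```python
def pixels_to_cm(inter_landmark_px, inter_laser_px, laser_separation_cm=4.0):
    """Convert a pixel measurement to centimeters using the laser-spot scale.

    The parallel lasers project two spots a fixed real-world distance apart
    (laser_separation_cm), so their pixel separation in the image gives a
    cm-per-pixel scale that applies to any measurement in the same plane.
    """
    scale_cm_per_px = laser_separation_cm / inter_laser_px
    return inter_landmark_px * scale_cm_per_px
```

For example, if the laser spots are 100 px apart and a shoulder-rump measurement spans 500 px, `pixels_to_cm(500, 100)` returns 20.0 cm. Automating the `inter_laser_px` input is exactly the step the two methods in the paper address.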
Award ID(s):
1753651
NSF-PAR ID:
10321226
Journal Name:
Mammalian Biology
ISSN:
1616-5047
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

Imagery from drones is becoming common in wildlife research and management, but processing data efficiently remains a challenge. We developed a methodology for training a convolutional neural network model on large-scale mosaic imagery to detect and count caribou (Rangifer tarandus), compare model performance with an experienced observer and a group of naïve observers, and discuss the use of aerial imagery and automated methods for large mammal surveys. Combining images taken at 75 m and 120 m above ground level, a faster region-based convolutional neural network (Faster-RCNN) model was trained using annotated imagery with the labels: “adult caribou”, “calf caribou”, and “ghost caribou” (animals moving between images, producing blurred individuals during the photogrammetry processing). Accuracy, precision, and recall of the model were 80%, 90%, and 88%, respectively. Detections between the model and experienced observer were highly correlated (Pearson: 0.96–0.99, P value < 0.05). The model was generally more effective in detecting adults, calves, and ghosts than naïve observers at both altitudes. We also discuss the need to improve consistency of observers’ annotations if manual review will be used to train models accurately. Generalization of automated methods for large mammal detections will be necessary for large-scale studies with diverse platforms, airspace restrictions, and sensor capabilities.
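The three reported metrics are standard functions of the confusion counts from matching model detections against reference annotations. A minimal sketch of those definitions (the counts below are illustrative, not the study's data):

```python
def detection_metrics(tp, fp, fn, tn=0):
    """Accuracy, precision, and recall from confusion-matrix counts.

    tp: correct detections, fp: spurious detections,
    fn: missed animals, tn: correctly rejected background (if counted).
    """
    precision = tp / (tp + fp)            # how many detections were real
    recall = tp / (tp + fn)               # how many real animals were found
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall
```

For instance, with 9 true positives, 1 false positive, 1 false negative, and 9 true negatives, all three metrics come out to 0.9.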

     
  2. Abstract

    As camera trapping has become a standard practice in wildlife ecology, developing techniques to extract additional information from images will increase the utility of generated data. Despite rapid advancements in camera trapping practices, methods for estimating animal size or distance from the camera using captured images have not been standardized. Deriving animal sizes directly from images creates opportunities to collect wildlife metrics such as growth rates or changes in body condition. Distances to animals may be used to quantify important aspects of sampling design such as the effective area sampled or distribution of animals in the camera's field‐of‐view.

We present a method of using pixel measurements in an image to estimate animal size or distance from the camera using a conceptual model in photogrammetry known as the ‘pinhole camera model’. We evaluated the performance of this approach both using stationary three‐dimensional animal targets and in a field setting using live captive reindeer (Rangifer tarandus) ranging in size and distance from the camera.

We found that the total mean relative error of estimated animal sizes and distances from the cameras was −3.0% and 3.3% in our simulation and −8.6% and 10.5% in our field setting, respectively. In our simulation, mean relative errors of size and distance estimates did not differ statistically between image settings within camera models, between camera models, or between the measured dimensions used in calculations.

    We provide recommendations for applying the pinhole camera model in a wildlife camera trapping context. Our approach of using the pinhole camera model to estimate animal size or distance from the camera produced robust estimates using a single image while remaining easy to implement and generalizable to different camera trap models and installations, thus enhancing its utility for a variety of camera trap applications and expanding opportunities to use camera trap images in novel ways.

     
  3. Abstract Objectives

    Laboratory studies have yielded important insights into primate locomotor mechanics. Nevertheless, laboratory studies fail to capture the range of ecological and structural variation encountered by free‐ranging primates. We present techniques for collecting kinematic data on wild primates using consumer grade high‐speed cameras and demonstrate novel methods for quantifying metric variation in arboreal substrates.

    Materials and methods

These methods were developed and applied to our research examining platyrrhine substrate use and locomotion at the Tiputini Biodiversity Station, Ecuador. Modified GoPro cameras equipped with varifocal zoom lenses provided high‐resolution footage (1080p; 120 fps) suitable for digitizing gait events. We tested two methods for remotely measuring branch diameter: the parallel laser method and the distance meter photogrammetric method. A forestry‐grade laser rangefinder was used to quantify substrate angle and a force gauge was used to measure substrate compliance. We also introduce GaitKeeper, a graphical user interface for MATLAB, designed for coding quadrupedal gait.

    Results

    Parallel laser and distance meter methods provided accurate estimations of substrate diameter (percent error: 3.1–4.5%). The laser rangefinder yielded accurate estimations of substrate orientation (mean error = 2.5°). Compliance values varied tremendously among substrates but were largely explained by substrate diameter, substrate length, and distance of measurement point from trunk. On average, larger primates used relatively small substrates and traveled higher in the canopy.
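The two accuracy figures reported above are simple comparisons of remote estimates against ground-truth measurements: percent error for diameters and mean absolute error for angles. A minimal sketch of those computations (the values in the usage note are illustrative, not the study's data):

```python
def percent_error(estimated, true_value):
    """Signed percent error of an estimate relative to the true value."""
    return 100.0 * (estimated - true_value) / true_value

def mean_abs_error(estimates, truths):
    """Mean absolute error across paired estimates and ground truths."""
    return sum(abs(e - t) for e, t in zip(estimates, truths)) / len(estimates)
```

For example, a branch estimated at 104.5 mm against a caliper measurement of 100.0 mm has a percent error of 4.5%, the upper end of the range reported for the two diameter methods.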

    Discussion

    Ultimately, these methods will help researchers identify more precisely how primate gait kinematics respond to the complexity of arboreal habitats, furthering our understanding of the adaptive context in which primate quadrupedalism evolved.

     
  4. Abstract Objectives

In many taxa, adverse early‐life environments are associated with reduced growth and smaller body size in adulthood. However, in wild primates, we know very little about whether, where, and to what degree growth trajectories are influenced by early adversity, or which types of early adversity matter most. Here, we use parallel‐laser photogrammetry to assess inter‐individual predictors of three measures of body size (leg length, forearm length, and shoulder‐rump length) in a population of wild female baboons studied since birth.

    Materials and Methods

    Using >2000 photogrammetric measurements of 127 females, we present a cross‐sectional growth curve of wild female baboons (Papio cynocephalus) from juvenescence through adulthood. We then test whether females exposed to several important sources of early‐life adversity—drought, maternal loss, low maternal rank, or a cumulative measure of adversity—were smaller for their age than females who experienced less adversity. Using the “animal model,” we also test whether body size is heritable in this study population.
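The "animal model" is a mixed model that partitions phenotypic variance into components; the heritability and maternal-effect percentages reported in the Results are those components expressed as proportions of the total. A minimal sketch of that final step only (the full model fit requires a pedigree and specialized software; function name and values are illustrative):

```python
def variance_proportions(va, vm, vr):
    """Proportions of phenotypic variance from an animal-model decomposition.

    va: additive genetic variance, vm: maternal-effect variance,
    vr: residual variance. Heritability h2 is va over the total.
    """
    vp = va + vm + vr
    return {"h2": va / vp, "maternal": vm / vp, "residual": vr / vp}
```

For example, variance components of 2.0, 1.0, and 1.0 give a heritability of 50% and a maternal effect of 25%, on the order of the estimates reported below.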

    Results

    Prolonged early‐life drought predicted shorter limbs but not shorter torsos (i.e., shoulder‐rump lengths). Our other measures of early‐life adversity did not predict variation in body size. Heritability estimates for body size measures were 36%–67%. Maternal effects accounted for 13%–17% of the variance in leg and forearm length, but no variance in torso length.

    Discussion

    Our results suggest that baboon limbs, but not torsos, grow plastically in response to maternal effects and energetic early‐life stress. Our results also reveal considerable heritability for all three body size measures in this study population.

     
Fluorescently labeled proteins absorb and emit light, appearing as Gaussian spots in fluorescence imaging. When fluorescent tags are added to cytoskeletal polymers such as microtubules, lines of fluorescence and even non-linear structures result. While much progress has been made in techniques for imaging and microscopy, image analysis is less well developed. Current analysis of fluorescent microtubules uses either manual tools, such as kymographs, or automated software. As a result, our ability to quantify microtubule dynamics and organization from light microscopy remains limited. Despite the development of automated microtubule analysis tools for in vitro studies, analysis of images from cells often depends heavily on manual analysis. One of the main reasons for this disparity is the low signal-to-noise ratio in cells, where background fluorescence is typically higher than in reconstituted systems. Here, we present the Toolkit for Automated Microtubule Tracking (TAMiT), which automatically detects, optimizes, and tracks fluorescent microtubules in living yeast cells with sub-pixel accuracy. Using basic information about microtubule organization, TAMiT detects linear and curved polymers using a geometrical scanning technique. Microtubule image parameters are fit by solving a non-linear least squares optimization problem in MATLAB. We benchmark our software using simulated images and show that it reliably detects microtubules, even at low signal-to-noise ratios. Then, we use TAMiT to measure monopolar spindle microtubule bundle number, length, and lifetime in a large dataset that includes several S. pombe mutants that affect microtubule dynamics and bundling. The results from the automated analysis are consistent with previous work and suggest a direct role for CLASP/Cls1 in bundling spindle microtubules. We also illustrate automated tracking of single curved astral microtubules in S. cerevisiae, with measurement of dynamic instability parameters. The results obtained with our fully-automated software are similar to results using hand-tracked measurements. Therefore, TAMiT can facilitate automated analysis of spindle and microtubule dynamics in yeast cells.
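The sub-pixel localization step the abstract describes — fitting image parameters by non-linear least squares — can be illustrated for the simplest case, a single Gaussian spot. This is a sketch of the general technique in Python with SciPy, not TAMiT's MATLAB implementation; the function names and parameterization are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_2d(params, x, y):
    """Symmetric 2D Gaussian on a constant background."""
    amp, x0, y0, sigma, bg = params
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + bg

def fit_spot(image):
    """Fit a 2D Gaussian to an image patch, returning sub-pixel parameters.

    Starts from a crude guess (peak height, patch center, fixed width,
    minimum as background) and refines it by non-linear least squares.
    """
    y, x = np.indices(image.shape)
    x, y, z = x.ravel(), y.ravel(), image.ravel()
    guess = [z.max() - z.min(), x.mean(), y.mean(), 2.0, z.min()]
    result = least_squares(lambda p: gaussian_2d(p, x, y) - z, guess)
    return result.x  # amp, x0, y0, sigma, background
```

Because the model is continuous in the center coordinates, the fitted `x0, y0` land between pixels, which is the source of the sub-pixel accuracy; extending the model from a spot to a line or curve with endpoints and curvature parameters follows the same fit-residuals pattern.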