Abstract
As camera trapping has become a standard practice in wildlife ecology, developing techniques to extract additional information from images will increase the utility of generated data. Despite rapid advancements in camera trapping practices, methods for estimating animal size or distance from the camera using captured images have not been standardized. Deriving animal sizes directly from images creates opportunities to collect wildlife metrics such as growth rates or changes in body condition. Distances to animals may be used to quantify important aspects of sampling design such as the effective area sampled or the distribution of animals in the camera's field‐of‐view. We present a method of using pixel measurements in an image to estimate animal size or distance from the camera using a conceptual model in photogrammetry known as the ‘pinhole camera model’. We evaluated the performance of this approach both in a simulation using stationary three‐dimensional animal targets and in a field setting using live captive reindeer (Rangifer tarandus) ranging in size and distance from the camera. We found that the total mean relative error of estimated animal sizes or distances from the cameras was −3.0% and 3.3% in our simulation and −8.6% and 10.5% in our field setting, respectively. In our simulation, mean relative error of size or distance estimates was not statistically different between image settings within camera models, between camera models or between the measured dimensions used in calculations. We provide recommendations for applying the pinhole camera model in a wildlife camera trapping context. Our approach of using the pinhole camera model to estimate animal size or distance from the camera produced robust estimates from a single image while remaining easy to implement and generalizable to different camera trap models and installations, thus enhancing its utility for a variety of camera trap applications and expanding opportunities to use camera trap images in novel ways.
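The core relationship is that an object's physical extent, its distance, and its extent in pixels are linked through the camera's focal length. Below is a minimal sketch of that pinhole relationship in Python; the field-of-view, image width, and measurement values are hypothetical, and the paper's own calibration procedure may differ (for instance, it may account for lens distortion, which this sketch ignores).

```python
import math

def focal_length_px(image_width_px, horizontal_fov_deg):
    """Focal length expressed in pixel units, derived from the camera's
    horizontal field of view (the standard pinhole relationship)."""
    return (image_width_px / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)

def distance_from_size(known_size_m, measured_px, f_px):
    """Camera-to-animal distance from a known animal dimension."""
    return known_size_m * f_px / measured_px

def size_from_distance(known_distance_m, measured_px, f_px):
    """Animal dimension from a known camera-to-animal distance."""
    return known_distance_m * measured_px / f_px

# Hypothetical trap: 2688 px wide images, 42 degree horizontal field of view.
f_px = focal_length_px(2688, 42.0)
# A shoulder height spanning 310 px on an animal known to stand 1.1 m tall:
print(f"estimated distance: {distance_from_size(1.1, 310, f_px):.1f} m")
```

Given either a known dimension or a known distance, the same relationship solves for the other quantity, which is what makes a single image sufficient.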
Review of Methods for Animal Videography Using Camera Systems that Automatically Move to Follow the Animal
Synopsis
Digital photography and videography provide rich data for the study of animal behavior and are consequently widely used techniques. For fixed, unmoving cameras, there is a tradeoff between resolution and field of view, and motion blur smears the subject on the sensor during exposure. While these fundamental tradeoffs with stationary cameras can be sidestepped by employing multiple cameras and providing additional illumination, this may not always be desirable. An alternative that overcomes these issues is to direct a high-magnification camera at an animal continually as it moves. Here, we review systems in which automatic tracking is used to maintain an animal in the working volume of a moving optical path. Such methods provide an opportunity to escape the tradeoff between resolution and field of view and to reduce motion blur while still enabling automated image acquisition. We argue that further development will be useful and outline potential innovations that may improve the technology and lead to more widespread use.
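To make the resolution-versus-field-of-view tradeoff concrete, the sketch below (with hypothetical sensor and subject values of our own choosing) computes how many pixels a subject occupies as focal length increases, and how the field of view shrinks in step:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of an ideal pinhole camera."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def pixels_on_target(target_m, distance_m, focal_mm, sensor_mm, sensor_px):
    """Pixels a target subtends on the sensor (thin-lens approximation,
    valid when the distance is much larger than the focal length)."""
    image_mm = (target_m * 1000) * focal_mm / (distance_m * 1000)
    return image_mm * sensor_px / sensor_mm

# A 5 cm subject at 2 m, imaged on a hypothetical 36 mm wide, 4096 px sensor:
for f in (24, 100, 400):
    print(f"f={f:3d} mm: FOV={horizontal_fov_deg(36, f):5.1f} deg, "
          f"{pixels_on_target(0.05, 2, f, 36, 4096):6.1f} px on target")
```

Quadrupling the focal length puts roughly four times as many pixels on the subject but cuts the field of view by about the same factor, which is exactly the tradeoff that a tracking camera escapes by keeping a narrow view centered on the animal.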
- Award ID(s): 2010768
- PAR ID: 10310735
- Date Published:
- Journal Name: Integrative and Comparative Biology
- Volume: 61
- Issue: 3
- ISSN: 1540-7063
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Segmentation of moving objects in dynamic scenes is a key process in scene understanding for navigation tasks. Classical cameras suffer from motion blur in such scenarios, rendering them ineffective. On the contrary, event cameras, because of their high temporal resolution and lack of motion blur, are tailor-made for this problem. We present an approach for monocular multi-motion segmentation, which combines bottom-up feature tracking and top-down motion compensation into a unified pipeline, the first of its kind to our knowledge. Using the events within a time interval, our method segments the scene into multiple motions by splitting and merging. We further speed up our method by using the concept of motion propagation and cluster keyslices. The approach was successfully evaluated on both challenging real-world and synthetic scenarios from the EV-IMO, EED, and MOD datasets and outperformed the state-of-the-art detection rate by 12%, achieving new state-of-the-art average detection rates of 81.06%, 94.2% and 82.35% on the aforementioned datasets. To enable further research and systematic evaluation of multi-motion segmentation, we present and open-source a new dataset/benchmark called MOD++, which includes challenging sequences and extensive data stratification in terms of camera and object motion, velocity magnitudes, direction, and rotational speeds.
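The paper's exact pipeline is not reproduced here, but the top-down motion-compensation idea can be illustrated with a generic contrast-maximization sketch: warp events back to a reference time under a candidate velocity and score how sharp the resulting accumulated image is. The function names and the grid search are our own illustrative choices:

```python
import numpy as np

def warp_events(xs, ys, ts, vx, vy, t_ref=0.0):
    """Warp event coordinates to a reference time under a
    constant-velocity motion model."""
    return xs - vx * (ts - t_ref), ys - vy * (ts - t_ref)

def contrast(xs, ys, shape):
    """Variance of the image of accumulated warped events; a
    well-compensated motion yields sharp edges and a high score."""
    img, _, _ = np.histogram2d(ys, xs, bins=shape,
                               range=[[0, shape[0]], [0, shape[1]]])
    return img.var()

def best_velocity(xs, ys, ts, shape, v_grid):
    """Grid-search the velocity that maximizes contrast for one cluster."""
    scores = {(vx, vy): contrast(*warp_events(xs, ys, ts, vx, vy), shape)
              for vx in v_grid for vy in v_grid}
    return max(scores, key=scores.get)
```

In a multi-motion setting, a step like this would run per event cluster, with the split-and-merge logic deciding how many motion models the scene needs.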
Single-photon avalanche diodes (SPADs) are a rapidly developing image sensing technology with extreme low-light sensitivity and picosecond timing resolution. These unique capabilities have enabled SPADs to be used in applications like LiDAR, non-line-of-sight imaging and fluorescence microscopy that require imaging in photon-starved scenarios. In this work we harness these capabilities for dealing with motion blur in a passive imaging setting under low illumination. Our key insight is that the data captured by a SPAD array camera can be represented as a 3D spatio-temporal tensor of photon detection events, which can be integrated along arbitrary spatio-temporal trajectories with dynamically varying integration windows, depending on scene motion. We propose an algorithm that estimates pixel motion from photon timestamp data and dynamically adapts the integration windows to minimize motion blur. Our simulation results show the applicability of this algorithm to a variety of motion profiles including translation, rotation and local object motion. We also demonstrate the real-world feasibility of our method on data captured using a 32×32 SPAD camera.
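A minimal sketch of the integration idea follows, under the simplifying assumptions of a constant, already-estimated pixel velocity and integer pixel shifts (the paper estimates motion from the timestamps themselves and adapts the window dynamically):

```python
import numpy as np

def integrate_along_trajectory(photon_cube, vx, vy):
    """Sum binary SPAD frames along a linear pixel-motion trajectory so the
    moving subject stays registered instead of smearing.

    photon_cube: (T, H, W) array of per-frame photon detections.
    vx, vy: estimated pixel motion per frame (assumed constant here).
    """
    T, _, _ = photon_cube.shape
    out = np.zeros(photon_cube.shape[1:])
    for t in range(T):
        # Shift frame t back along the motion; np.roll wraps at the image
        # borders, which is acceptable for a sketch but not for real use.
        out += np.roll(photon_cube[t], (-round(vy * t), -round(vx * t)),
                       axis=(0, 1))
    return out / T
```

Summing along the motion trajectory rather than along the time axis is what lets the method trade motion blur for photon count without any active illumination.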
Capturing clear images while a camera is moving fast is integral to the development of mobile robots that can respond quickly and effectively to visual stimuli. This paper proposes to generate camera trajectories, with position and time constraints, that result in higher reconstructed image quality. The degradation of an image captured during motion is known as motion blur. Three main methods exist for mitigating the effects of motion blur: (i) controlling optical parameters, (ii) controlling camera motion, and (iii) image reconstruction. Given control of a camera's motion, trajectories can be generated that result in an expected blur kernel or point-spread function. This work compares the motion blur effects and reconstructed image quality of three trajectories: (i) linear, (ii) polynomial, and (iii) inverse error, where inverse-error trajectories result in Gaussian blur kernels. Residence time analysis provides a basis for characterizing the motion blur effects of the trajectories.
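Residence-time analysis can be sketched directly: the fraction of the exposure the camera spends at each position is the blur kernel, so sampling a trajectory uniformly in time and histogramming positions recovers its point-spread function. The sketch below (our own illustrative construction, not the paper's code) shows why an inverse-error trajectory yields a Gaussian kernel while a constant-velocity one yields a box:

```python
import numpy as np
from scipy.special import erfinv

def blur_kernel(position_fn, n_samples=100_000, bins=64):
    """Residence-time blur kernel: histogram of positions visited when the
    exposure interval is sampled uniformly in time."""
    t = np.linspace(1e-6, 1 - 1e-6, n_samples)   # normalized exposure time
    kernel, _ = np.histogram(position_fn(t), bins=bins, density=True)
    return kernel

box = blur_kernel(lambda t: t - 0.5)              # constant velocity -> box kernel
gauss = blur_kernel(lambda t: erfinv(2 * t - 1))  # inverse error -> Gaussian kernel
```

A Gaussian kernel has no spectral nulls, unlike the sinc spectrum of a box kernel, which is one reason shaping the trajectory can make the subsequent image reconstruction better behaved.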
Advances in neural fields are enabling high-fidelity capture of the shape and appearance of dynamic 3D scenes. However, these capabilities lag behind those offered by conventional representations such as 2D videos because of algorithmic challenges and the lack of large-scale multi-view real-world datasets. We address the dataset limitation with DiVa-360, a real-world 360° dynamic visual dataset that contains synchronized high-resolution and long-duration multi-view video sequences of table-scale scenes captured using a customized low-cost system with 53 cameras. It contains 21 object-centric sequences categorized by different motion types, 25 intricate hand-object interaction sequences, and 8 long-duration sequences, for a total of 17.4M frames. In addition, we provide foreground-background segmentation masks, synchronized audio, and text descriptions. We benchmark state-of-the-art dynamic neural field methods on DiVa-360 and provide insights about existing methods and future challenges in long-duration neural field capture.