Title: Color in motion: Generating 3-dimensional multispectral models to study dynamic visual signals in animals
Analyzing color and pattern in the context of motion is a central and ongoing challenge in the quantification of animal coloration. Many animal signals are spatially and temporally variable, but traditional methods fail to capture this dynamism because they use stationary animals in fixed positions. To investigate dynamic visual displays and to understand the evolutionary forces that shape dynamic colorful signals, we require cross-disciplinary methods that combine measurements of color, pattern, 3-dimensional (3D) shape, and motion. Here, we outline a workflow for producing digital 3D models with objective color information from museum specimens with diffuse colors. The workflow combines multispectral imaging with photogrammetry to produce digital 3D models that contain calibrated ultraviolet (UV) and human-visible (VIS) color information and incorporate pattern and 3D shape. These “3D multispectral models” can subsequently be animated to incorporate both signaler and receiver movement and analyzed in silico using a variety of receiver-specific visual models. This approach—which can be flexibly integrated with other tools and methods—represents a key first step toward analyzing visual signals in motion. We describe several timely applications of this workflow and next steps for multispectral 3D photogrammetry and animation techniques.
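The in silico analysis step rests on the standard quantum-catch calculation that receiver-specific visual models build on. The Python sketch below illustrates the idea; all spectra, receptor classes, and numeric values are hypothetical placeholders, not data or code from the paper.

```python
import numpy as np

# Quantum catch Q_i = integral of R(lambda) * I(lambda) * S_i(lambda) d(lambda)
# for each photoreceptor class i, given per-texel reflectance R recovered
# from a calibrated multispectral (UV + VIS) texture. All inputs here are
# illustrative placeholders.

wavelengths = np.arange(300, 701, 10)  # nm, UV through human-visible

def quantum_catch(reflectance, illuminant, sensitivity):
    """Integrate reflectance x illuminant x sensitivity over wavelength."""
    return np.trapz(reflectance * illuminant * sensitivity, wavelengths)

# Placeholder spectra: a flat illuminant and Gaussian receptor sensitivities
# loosely modeled on a UV-sensitive tetrachromat (peaks are made up).
illuminant = np.ones_like(wavelengths, dtype=float)
receptors = {
    "UVS": np.exp(-((wavelengths - 370) ** 2) / (2 * 30 ** 2)),
    "SWS": np.exp(-((wavelengths - 450) ** 2) / (2 * 30 ** 2)),
    "MWS": np.exp(-((wavelengths - 540) ** 2) / (2 * 30 ** 2)),
    "LWS": np.exp(-((wavelengths - 600) ** 2) / (2 * 30 ** 2)),
}

pixel_reflectance = np.full_like(wavelengths, 0.5, dtype=float)  # one texel
catches = {name: quantum_catch(pixel_reflectance, illuminant, s)
           for name, s in receptors.items()}
print(catches)
```

Running this per texel of an animated 3D multispectral model, with the illuminant and viewing geometry updated per frame, is one way such a pipeline could score a dynamic signal against a specific receiver's visual system.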
Award ID(s):
2029538
PAR ID:
10463300
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Ecology and Evolution
Volume:
10
ISSN:
2296-701X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    This protocol describes the process of phenotyping branching coral using the 3D model editing software MeshLab. MeshLab is a free, straightforward program for analyzing 3D models of corals and is especially useful for its ability to import color from Agisoft Metashape models. This protocol outlines the steps used by the Kenkel lab to noninvasively phenotype Acropora cervicornis colonies for total linear extension (TLE), surface area, volume, and volume of interstitial space. We incorporate Agisoft Metashape markers with our Tomahawk scaling system (see Image Capture Protocol) in our workflow, which is useful for scaling and improves model building. Other scaling objects can be used; however, these markers provide a consistent scale without obstructing the coral during image capture. MeshLab measurements of TLE have been ground-truthed against field measurements of TLE. 3D surface area and volume have not yet been compared to the traditional methods of wax dipping (for surface area) and water displacement (for volume). However, in tests with shapes of known dimensions, such as cubes, MeshLab produced accurate measures of 3D surface area and volume when compared to calculated values. For directions on photographing coral for 3D photogrammetry, see our Image Capture Protocol. For a walkthrough and scripts to run Agisoft Metashape on the command line, see https://github.com/wyattmillion/Coral3DPhotogram. These protocols, while created for branching coral, can be applied to 3D models of any coral morphology, or indeed any object. Our goal is to provide easy-to-use protocols based on accessible software, in the hope of establishing a standardized method for 3D photogrammetry in coral biology. Go to http://www.meshlab.net/#download to download the appropriate version for your operating system. P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, G. Ranzuglia. MeshLab: an Open-Source Mesh Processing Tool. Sixth Eurographics Italian Chapter Conference, pages 129-136, 2008. DOI: dx.doi.org/10.17504/protocols.io.bgbpjsmn
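As a scriptable complement to the interactive MeshLab measurements described in this protocol, the Python sketch below uses the trimesh library (not part of the published protocol; the file name and units are placeholders) to batch surface area and volume from an exported, scaled mesh:

```python
import trimesh

# Hypothetical sketch: measure a scaled coral mesh exported from
# Agisoft Metashape. Units follow whatever scale the model was built at.
mesh = trimesh.load("acropora_colony.ply")

print("surface area:", mesh.area)
if mesh.is_watertight:
    # Volume is only meaningful for a closed mesh.
    print("volume:", mesh.volume)
else:
    print("mesh is not watertight; close holes before measuring volume")
```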
  2. Synopsis Acquiring accurate 3D biological models efficiently and economically is important for morphological data collection and analysis in organismal biology. In recent years, structure-from-motion (SFM) photogrammetry has become increasingly popular in biological research due to its flexibility and relatively low cost. SFM photogrammetry registers 2D images to reconstruct camera positions as the basis for 3D modeling and texturing. However, most studies in organismal biology still rely on commercial software to reconstruct 3D models from photographs, which has impeded adoption of this workflow in our field because of barriers such as cost and affordability. In addition, prior investigations of photogrammetry have not sufficiently assessed the geometric accuracy of the reconstructed models. Consequently, this study has two goals. First, we present an affordable and highly flexible SFM photogrammetry pipeline based on the open-source package OpenDroneMap (ODM) and its user interface, WebODM. Second, we assess the geometric accuracy of the photogrammetric models acquired from the ODM pipeline by comparing them to models acquired via microCT scanning, the de facto method for imaging skeletal material. Our sample comprised 15 Aplodontia rufa (mountain beaver) skulls. Using models derived from microCT scans of the samples as a reference, our results showed that the geometry of the models derived from ODM was sufficiently accurate for gross metric and morphometric analysis: measurement errors were usually at or below 2%, and morphometric analysis captured consistent patterns of shape variation in both modalities. However, subtle but distinct differences between the photogrammetric and microCT-derived 3D models could affect landmark placement, which in turn affected the downstream shape analysis, especially when the variance within a sample was relatively small. At a minimum, we strongly advise against combining 3D models derived from these two modalities for geometric morphometric analysis. Our findings may be indicative of similar issues in other SFM photogrammetry tools, since the underlying pipelines are similar. We recommend that users run a pilot test of geometric accuracy before using photogrammetric models for morphometric analysis. For the research community, we provide detailed guidance on using our pipeline to build 3D models from photographs.
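The pilot test of geometric accuracy that the authors recommend can be as simple as comparing paired linear measurements between modalities. A minimal Python sketch, using placeholder numbers rather than the study's data:

```python
import numpy as np

# Compare the same linear measurements taken on photogrammetric (ODM)
# and microCT models of the same specimens. Values below are made up.
odm_mm = np.array([42.1, 38.7, 40.3])  # e.g., skull lengths from ODM models
ct_mm  = np.array([41.8, 38.9, 40.0])  # the same lengths from microCT

percent_error = 100 * np.abs(odm_mm - ct_mm) / ct_mm
print("per-specimen error (%):", np.round(percent_error, 2))
print("mean error (%):", round(percent_error.mean(), 2))  # flag if well above ~2%
```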
  3.
    This is a protocol for generating images to be used in 3D model building via Agisoft Metashape for coral photogrammetry. It covers underwater, field-based methods and tips for collecting photographs, as well as preprocessing of photos to improve model building. Image capture is the most important part of 3D photogrammetry because the photos taken at this point will be all you have to build models and collect data. As such, you want to ensure you have enough photos to work with in the future, so, in general, more is better. That said, too many blurry or out-of-focus pictures will hamper model building. You can optimize your time in the field by taking enough photos from the appropriate angles; efficiency will come with practice. This is the protocol developed and used by the Kenkel lab to phenotype Acropora cervicornis colonies as part of field operations in the Florida Keys. We incorporate Agisoft Metashape markers in this workflow to scale models and improve model building. The scaling objects used by the Kenkel lab are custom-made, adjustable PVC arrays that include unique markers and bleaching color cards, affectionately called the "Tomahawk". Specs for building a Tomahawk are included in this protocol. Filtering and pre-processing of photos is not always necessary but can be used to salvage 3D models that would otherwise be blurry or incomplete. Here, we describe photo editing in Adobe Lightroom to adjust several characteristics of hundreds of images simultaneously. For a walkthrough and scripts to run Agisoft Metashape on the command line, see https://github.com/wyattmillion/Coral3DPhotogram. For directions on phenotyping coral from 3D models, see our Phenotyping in MeshLab protocol. These protocols, while created for branching coral, can be applied to 3D models of any coral morphology, or indeed any object. Our goal is to provide easy-to-use protocols based on accessible software, in the hope of establishing a standardized method for 3D photogrammetry in coral biology. DOI: dx.doi.org/10.17504/protocols.io.bgdcjs2w
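Beyond batch edits in Lightroom, blurry frames can also be flagged programmatically before model building. A minimal Python sketch using OpenCV's variance-of-Laplacian sharpness score; the directory and threshold are placeholders to tune against your own imagery, and this is not part of the published protocol:

```python
import cv2
from pathlib import Path

# Flag likely-blurry photos before feeding them to Metashape.
# Lower Laplacian variance generally means fewer sharp edges.
PHOTO_DIR = Path("dive_photos")   # placeholder directory
BLUR_THRESHOLD = 100.0            # placeholder; calibrate per camera/site

for photo in sorted(PHOTO_DIR.glob("*.jpg")):
    gray = cv2.imread(str(photo), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue  # skip unreadable files
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"review or drop: {photo.name} (score {sharpness:.1f})")
```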
  4. Abstract Many vision-based indoor localization methods require tedious and comprehensive pre-mapping of built environments. This research proposes a mapping-free approach to estimating indoor camera poses based on a 3D style-transferred building information model (BIM) and photogrammetry techniques. To address the cross-domain gap between virtual 3D models and real-life photographs, a CycleGAN model was developed to transform BIM renderings into photorealistic images. A photogrammetry-based algorithm was developed to estimate camera pose using the visual and spatial information extracted from the style-transferred BIM. The experiments demonstrated the efficacy of CycleGAN in bridging the cross-domain gap, significantly improving performance in image retrieval and feature correspondence detection. With the 3D coordinates retrieved from the BIM, the proposed method achieves near real-time camera pose estimation with an accuracy of 1.38 m and 10.1° in indoor environments.
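The final pose-recovery step described here is a standard perspective-n-point (PnP) problem. A minimal OpenCV sketch, assuming 2D image features have already been matched to 3D coordinates retrieved from the BIM; all coordinates and camera intrinsics below are invented for illustration:

```python
import numpy as np
import cv2

# 3D points from the BIM (meters) matched to 2D detections (pixels).
# Six non-coplanar correspondences, all placeholder values.
object_pts = np.array([[0, 0, 0], [2, 0, 0], [2, 3, 0], [0, 3, 0],
                       [1, 1, 2], [1, 2, 2]], dtype=np.float64)
image_pts = np.array([[320, 410], [540, 405], [545, 180],
                      [315, 185], [420, 300], [425, 240]], dtype=np.float64)

K = np.array([[800, 0, 424],     # assumed camera intrinsics (pixels)
              [0, 800, 300],
              [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)               # assume negligible lens distortion

# Recover camera rotation (Rodrigues vector) and translation.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    print("rotation (Rodrigues):", rvec.ravel())
    print("translation (m):", tvec.ravel())
```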
  5. The success of many restoration projects is never evaluated, despite the availability of conventional ecological assessment methods. There is a need for more flexible, affordable, and efficient methods of evaluation, particularly ones that take advantage of new remote sensing and geospatial technologies. This study explores the use of illustrative small unmanned aerial system (sUAS) products, made using a simple structure-from-motion photogrammetry workflow and coupled with a visual assessment protocol, as an approach for remote evaluation and for archiving ecological condition. Three streams were assessed in the field ("surface assessments") using the Stream Visual Assessment Protocol Version 2 (SVAP2) and later illustrated in sUAS products. A survey of 10 stream experts was conducted to (1) assess the general utility of the sUAS products (high-resolution video, orthomosaics, and 3D models) and (2) test whether the experts could interpret the products and apply the 16 SVAP2 elements remotely. The channel condition, bank condition, riparian area quantity, and canopy cover elements were deemed appropriate for remote assessment, while the riparian area quality, water appearance, fish habitat complexity, and aquatic invertebrate complexity elements were deemed appropriate but with some potential limitations due to product quality and varying site conditions. In general, the survey participants agreed that the illustrative products would be useful in stream ecological assessment and restoration evaluation. Although not a replacement for more quantitative surface assessments when those are required, this remote visual approach is suitable when more general monitoring is satisfactory.