Analyzing color and pattern in the context of motion is a central and ongoing challenge in the quantification of animal coloration. Many animal signals are spatially and temporally variable, but traditional methods fail to capture this dynamism because they use stationary animals in fixed positions. To investigate dynamic visual displays and to understand the evolutionary forces that shape dynamic colorful signals, we require cross-disciplinary methods that combine measurements of color, pattern, 3-dimensional (3D) shape, and motion. Here, we outline a workflow for producing digital 3D models with objective color information from museum specimens with diffuse colors. The workflow combines multispectral imaging with photogrammetry to produce digital 3D models that contain calibrated ultraviolet (UV) and human-visible (VIS) color information and incorporate pattern and 3D shape. These “3D multispectral models” can subsequently be animated to incorporate both signaler and receiver movement and analyzed in silico using a variety of receiver-specific visual models. This approach—which can be flexibly integrated with other tools and methods—represents a key first step toward analyzing visual signals in motion. We describe several timely applications of this workflow and next steps for multispectral 3D photogrammetry and animation techniques.
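The calibration step described here typically normalizes raw camera values against a gray standard of known reflectance photographed in the same scene, so that UV and visible channels become comparable across images. A minimal sketch of that per-channel scaling (the 40% standard and the channel layout are illustrative assumptions, not specifics from this workflow):

```python
def calibrate_reflectance(pixel_values, standard_values, standard_reflectance=0.40):
    """Scale raw linear camera values to reflectance using a gray standard.

    pixel_values / standard_values are per-channel means (e.g. UV, R, G, B)
    from a linear (RAW) image; standard_reflectance is the standard's known
    diffuse reflectance (40% here -- a common choice, assumed for illustration).
    """
    return [standard_reflectance * p / s
            for p, s in zip(pixel_values, standard_values)]

# A patch half as bright as the 40% standard has ~20% reflectance per channel.
print(calibrate_reflectance([100, 50], [200, 100]))  # -> [0.2, 0.2]
```

Applying the same scaling to every pixel of every channel yields the objective color maps that are then projected onto the photogrammetric mesh.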
Applications of Photogrammetric Modeling to Roman Wall Painting: A Case Study in the House of Marcus Lucretius
Across many sites in Italy today, wall paintings face particular dangers of damage and destruction. In Pompeii, many extant fragments are open to the air and accessible to tourists. While efforts are underway to preserve the precious few examples that have come down to us, even newly excavated finds begin to decay from the moment they are exposed. Digital photogrammetry has been used for the documentation, preservation, and reconstruction of archaeological sites, small objects, and sculpture. It is also well suited to the illustration and reconstruction of Roman wall painting and Roman domestic interiors. Unlike traditional photography, photogrammetry can offer three-dimensional (3D) documentation that captures the seams, cracks, and warps in the structure of a wall. In the case of an entire room, it can also preserve the orientation and visual impression of multiple walls in situ. This paper discusses the results of several photogrammetric campaigns recently undertaken to document the material record in the House of Marcus Lucretius at Pompeii (IX, 3, 5.24). In the process, it explores the combination of visual analysis with digital tools and the use of 3D models to represent complex relationships between spaces and objects. To conclude, future avenues for research are discussed, including the creation of an online database that would facilitate visualizing further connections within the material record.
- Award ID(s): 1735095
- PAR ID: 10112226
- Date Published:
- Journal Name: Arts
- Volume: 8
- Issue: 3
- ISSN: 2076-0752
- Page Range / eLocation ID: 89
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Advances built into recent sUASs (drones) offer a compelling possibility for field-based data collection in logistically challenging and GPS-denied environments. sUAS-based photogrammetry generates 3D models of features and landscapes and is used extensively in archaeology as well as other field sciences. Until recently, navigation has been limited by the expertise of the pilot, as objects such as trees, and vertical or complex environments such as cliffs, create significant risks to successful documentation. This article assesses sUAS capability for autonomous obstacle avoidance and 3D flight planning using data collection scenarios carried out at Black Mesa, Oklahoma. Imagery processed using commercial software confirmed that the collected data can build photogrammetric models suitable for general archaeological documentation. The results demonstrate that new capabilities in drones may open up field environments previously considered inaccessible, too risky, or too costly for all but the most expert pilots. Emerging technologies for drone-based photogrammetry, such as the Skydio 2+ considered here, place remote, rugged terrain within reach of many archaeological research units in terms of commercial options and cost.
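The flight-planning side of such surveys reduces to simple geometry: the image footprint follows from the camera's field of view and stand-off distance, and the capture spacing from the desired photo overlap. A minimal sketch of that calculation (the field-of-view and overlap values are illustrative assumptions, not Skydio 2+ specifications):

```python
import math

def capture_spacing(distance_m, fov_deg, overlap=0.8):
    """Distance between exposures for a given forward overlap.

    footprint = 2 * d * tan(FOV / 2); spacing = footprint * (1 - overlap).
    """
    footprint = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return footprint * (1.0 - overlap)

# 5 m from a cliff face, 60-degree horizontal FOV, 80% overlap:
print(round(capture_spacing(5.0, 60.0), 2))  # -> 1.15
```

Higher overlap shrinks the spacing and raises image count, which is the usual trade-off between flight time and reconstruction reliability.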
Low-cost 3D scanners and automatic photogrammetry software have brought the digitization of objects into 3D models to the consumer level. However, existing digitization techniques are either tedious, disruptive to the scanned object, or expensive. We create a novel 3D scanning system using consumer-grade hardware that revolves a camera around the object of interest. Our approach does not disturb the object during capture, allowing us to scan delicate objects that can deform under motion, such as potted plants. Our system consists of a Raspberry Pi camera and computer, a stepper motor, a 3D-printed camera track, and control software. Our 3D scanner allows the user to gather image sets for 3D model reconstruction with photogrammetry software with minimal effort. We scale 3D scanning to objects of varying sizes by designing the scanner with programmatic modeling, allowing the user to change the physical dimensions of the scanner without redrawing each part.
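The control logic for a scanner like this amounts to mapping a desired image count onto stepper positions around one revolution, triggering the camera at each stop. A minimal sketch of that scheduling step (the 200-step motor and the hardware hooks named in the comments are assumptions for illustration, not details from the paper):

```python
def revolve_schedule(n_images, steps_per_rev=200):
    """Map a desired image count onto stepper positions for one revolution.

    Returns (step_index, angle_deg) pairs. On the real scanner, each entry
    would drive the stepper to step_index and then trigger a capture
    (e.g. via RPi.GPIO and a picamera call -- hypothetical wiring).
    """
    stride = steps_per_rev // n_images
    return [(i * stride, i * stride * 360.0 / steps_per_rev)
            for i in range(n_images)]

# 8 images around the object -> one exposure every 45 degrees.
print(revolve_schedule(8)[:3])  # -> [(0, 0.0), (25, 45.0), (50, 90.0)]
```

Keeping the schedule in software is what lets the same controller serve scanners of different physical dimensions: only the parameters change, not the part drawings.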
Watrall, Ethan; Goldstein, Lynne (Eds.) The transition to digital approaches in archaeology includes moving from 2D to 3D images of artifacts. This paper discusses creating 3D images of artifacts in research with students, formally through a course and informally in a 3D lab and during field research. Students participate in an ongoing research project by 3D digital imaging objects and contextualizing them. The benefits of 3D images of artifacts are discussed for research, instruction, and public outreach (including making 3D-printed replicas for teaching and exhibits). In the 3D digital imaging course, students use surface laser scanners to image small objects of the kind encountered in an archaeological excavation, with objects of increasing imaging difficulty over the course of the semester. Midway through the course, each student is assigned an artifact for a project that includes 3D laser scanning and photogrammetry, digital measuring, and research. Students write weekly blog updates on web pages they each create, and learn to measure digital images and manipulate them with other software; open-source software is encouraged when available. Options for viewing 3D images are discussed so students can link 3D scans to their web pages. Students prepare scans for 3D printing in the Digital Imaging and Visualization (DIVA) Lab. The paper concludes with a discussion of research and instruction in the DIVA Lab, the Maya field project that created the need for the DIVA Lab, and the use of 3D technology in research and heritage studies in the Maya area.
Neural rendering is fuelling a unification of learning, 3D geometry, and video understanding that has been waiting for more than two decades. Progress, however, is still hampered by a lack of suitable datasets and benchmarks. To address this gap, we introduce EPIC Fields, an augmentation of EPIC-KITCHENS with 3D camera information. Like other datasets for neural rendering, EPIC Fields removes the complex and expensive step of reconstructing cameras using photogrammetry and allows researchers to focus on modelling problems. We illustrate the challenges of photogrammetry in egocentric videos of dynamic actions and propose innovations to address them. Compared to other neural rendering datasets, EPIC Fields is better tailored to video understanding because it is paired with labelled action segments and the recent VISOR segment annotations. To further motivate the community, we also evaluate three benchmark tasks in neural rendering and dynamic-object segmentation, with strong baselines that showcase what is not possible today. We also highlight the advantage of geometry in semi-supervised video object segmentation on the VISOR annotations. EPIC Fields reconstructs 96% of the videos in EPIC-KITCHENS, registering 19M frames from 99 hours recorded in 45 kitchens, and is available from: http://epic-kitchens.github.io/epic-fields