

Title: Programmatic 3D Printing of a Revolving Camera Track to Automatically Capture Dense Images for 3D Scanning of Objects
Low-cost 3D scanners and automatic photogrammetry software have brought the digitization of objects into 3D models within reach of the consumer. However, existing digitization techniques are either tedious, disruptive to the scanned object, or expensive. We create a novel 3D scanning system from consumer-grade hardware that revolves a camera around the object of interest. Our approach does not disturb the object during capture, which allows us to scan delicate objects that can deform under motion, such as potted plants. Our system consists of a Raspberry Pi camera and computer, a stepper motor, a 3D-printed camera track, and control software. Our 3D scanner lets the user gather image sets for 3D model reconstruction using photogrammetry software with minimal effort. We scale 3D scanning to objects of varying sizes by designing the scanner with programmatic modeling, which allows the user to change its physical dimensions without redrawing each part.
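To make the programmatic-modeling idea concrete, here is a minimal Python sketch (purely illustrative; the paper does not publish this code, and the function name, bed size, and safety margin are our assumptions) of how a single radius parameter can drive the derived track geometry: given a desired track radius and a print-bed size, it computes how many arc segments to print and the chord each segment spans.

    import math

    def plan_track_segments(radius_mm: float, bed_size_mm: float = 200.0):
        """Split a circular track of the given radius into equal arc
        segments whose chord length fits on the print bed."""
        # Smallest segment count whose chord fits the bed, with a 10% margin.
        n = 3
        while 2 * radius_mm * math.sin(math.pi / n) > 0.9 * bed_size_mm:
            n += 1
        arc_angle = 2 * math.pi / n
        return {
            "segments": n,
            "arc_angle_deg": math.degrees(arc_angle),
            "chord_mm": 2 * radius_mm * math.sin(arc_angle / 2),
            "arc_length_mm": radius_mm * arc_angle,
        }

    print(plan_track_segments(radius_mm=300))  # e.g. a track sized for a large plant

Changing radius_mm regenerates every derived dimension, which is the property that lets such a scanner rescale to objects of different sizes without each part being redrawn.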
Award ID(s):
1730183
NSF-PAR ID:
10056162
Author(s) / Creator(s):
Date Published:
Journal Name:
Multimedia Modeling (MMM) 2018
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Synopsis

    Acquiring accurate 3D biological models efficiently and economically is important for morphological data collection and analysis in organismal biology. In recent years, structure-from-motion (SFM) photogrammetry has become increasingly popular in biological research due to its flexibility and relatively low cost. SFM photogrammetry registers 2D images to reconstruct camera positions as the basis for 3D modeling and texturing. However, most studies in organismal biology still rely on commercial software to reconstruct 3D models from photographs, which has impeded the adoption of this workflow in our field owing to blocking issues such as cost and affordability. In addition, prior investigations of photogrammetry did not sufficiently assess the geometric accuracy of the reconstructed models. Consequently, this study has two goals. First, we present an affordable and highly flexible SFM photogrammetry pipeline based on the open-source package OpenDroneMap (ODM) and its user interface WebODM. Second, we assess the geometric accuracy of the photogrammetric models produced by the ODM pipeline by comparing them to models acquired via microCT scanning, the de facto method for imaging skeletons. Our sample comprised 15 Aplodontia rufa (mountain beaver) skulls. Using models derived from microCT scans of the samples as references, our results show that the geometry of the ODM-derived models is sufficiently accurate for gross metric and morphometric analysis: measurement errors are usually around or below 2%, and morphometric analysis captured consistent patterns of shape variation in both modalities. However, subtle but distinct differences between the photogrammetric and microCT-derived 3D models can affect landmark placement, which in turn affects the downstream shape analysis, especially when the variance within a sample is relatively small. At a minimum, we strongly advise against combining 3D models derived from these two modalities for geometric morphometric analysis. Our findings are likely indicative of similar issues in other SFM photogrammetry tools, since the underlying pipelines are similar. We recommend that users run a pilot test of geometric accuracy before using photogrammetric models for morphometric analysis. For the research community, we provide detailed guidance on using our pipeline to build 3D models from photographs.
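    As a concrete reading of the reported accuracy, the minimal sketch below (illustrative only; the paired values are hypothetical, not data from the study) computes the percent-error metric implied above for a linear measurement taken on both modalities of the same specimen.

        def percent_error(photogrammetry_mm: float, microct_mm: float) -> float:
            """Percent error of an ODM-derived measurement relative to
            the microCT reference measurement."""
            return abs(photogrammetry_mm - microct_mm) / microct_mm * 100.0

        # Hypothetical skull-length pairs (ODM model, microCT model) in mm.
        paired = [(42.1, 41.8), (38.7, 38.9), (40.2, 40.4)]
        print([f"{percent_error(p, m):.2f}%" for p, m in paired])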

     
  2.
    This protocol describes the process of phenotyping branching coral using the 3D model editing software MeshLab. MeshLab is a free, straightforward tool for analyzing 3D models of corals and is especially useful for its ability to import color from Agisoft Metashape models. This protocol outlines the steps used by the Kenkel lab to noninvasively phenotype Acropora cervicornis colonies for total linear extension (TLE), surface area, volume, and volume of interstitial space. We incorporate Agisoft Metashape markers with our Tomahawk scaling system (see Image Capture Protocol) in our workflow, which is useful for scaling and improves model building. Other scaling objects can be used; however, these markers provide a consistent scale and do not obstruct the coral during image capture. MeshLab measurements of TLE have been ground-truthed against field measurements of TLE. 3D surface area and volume have not yet been compared to the traditional methods of wax dipping (for surface area) and water displacement (for volume). However, in tests with shapes of known dimensions, such as cubes, MeshLab produced accurate measures of 3D surface area and volume compared to the calculated values (see the sketch below). For directions on photographing coral for 3D photogrammetry, see our Image Capture Protocol. For a walkthrough and scripts to run Agisoft Metashape on the command line, see https://github.com/wyattmillion/Coral3DPhotogram. These protocols, while created for branching coral, can be applied to 3D models of any coral morphology, or indeed any object. Our goal is to provide easy-to-use protocols built on accessible software, in the hopes of creating a standardized method for 3D photogrammetry in coral biology. Go to http://www.meshlab.net/#download to download the appropriate software for your operating system. P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia, "MeshLab: an Open-Source Mesh Processing Tool," Sixth Eurographics Italian Chapter Conference, pp. 129-136, 2008. DOI: dx.doi.org/10.17504/protocols.io.bgbpjsmn
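    The cube test mentioned above can be reproduced outside MeshLab; the sketch below uses the open-source trimesh Python library (our substitution for illustration, not part of the protocol) to compare a box mesh's measured surface area and volume with the closed-form values.

        import trimesh  # pip install trimesh

        edge = 2.0  # cm; a cube of known dimensions
        cube = trimesh.creation.box(extents=[edge, edge, edge])

        expected_area = 6 * edge ** 2  # 24.0 cm^2
        expected_volume = edge ** 3    # 8.0 cm^3

        print(f"area:   measured={cube.area:.3f}  expected={expected_area:.3f}")
        print(f"volume: measured={cube.volume:.3f}  expected={expected_volume:.3f}")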
  3. Neural rendering is fuelling a unification of learning, 3D geometry, and video understanding that has been anticipated for more than two decades. Progress, however, is still hampered by a lack of suitable datasets and benchmarks. To address this gap, we introduce EPIC Fields, an augmentation of EPIC-KITCHENS with 3D camera information. Like other datasets for neural rendering, EPIC Fields removes the complex and expensive step of reconstructing cameras using photogrammetry, allowing researchers to focus on modelling problems. We illustrate the challenges of photogrammetry in egocentric videos of dynamic actions and propose innovations to address them. Compared to other neural rendering datasets, EPIC Fields is better tailored to video understanding because it is paired with labelled action segments and the recent VISOR segment annotations. To further motivate the community, we also evaluate three benchmark tasks in neural rendering and dynamic-object segmentation, with strong baselines that showcase what is not yet possible today. We also highlight the advantage of geometry in semi-supervised video object segmentation on the VISOR annotations. EPIC Fields reconstructs 96% of the videos in EPIC-KITCHENS, registering 19M frames over 99 hours recorded in 45 kitchens, and is available from: http://epic-kitchens.github.io/epic-fields
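    For readers unfamiliar with this kind of annotation, the sketch below illustrates the general shape of per-frame camera data (a world-to-camera unit quaternion plus a translation) and how the camera centre is recovered from it; the record layout is hypothetical, not EPIC Fields' actual schema.

        import numpy as np

        def quat_to_rotmat(qw, qx, qy, qz):
            """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
            return np.array([
                [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
                [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
                [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
            ])

        # Hypothetical frame record: world-to-camera rotation q, translation t.
        frame = {"q": (1.0, 0.0, 0.0, 0.0), "t": (0.1, -0.2, 1.5)}
        R = quat_to_rotmat(*frame["q"])
        t = np.array(frame["t"])
        print(-R.T @ t)  # camera position in world coordinates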
  4. We present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset, EV-IMO, which includes accurate pixel-wise motion masks, egomotion, and ground-truth depth. Our approach is based on an efficient implementation of the SfM learning pipeline using a low-parameter neural network architecture on event data. In addition to camera egomotion and a dense depth map, the network estimates independently moving object segmentation at the pixel level and computes per-object 3D translational velocities of moving objects. We also train a shallow network with just 40k parameters, which is able to compute depth and egomotion. Our EV-IMO dataset features 32 minutes of indoor recording with up to 3 fast-moving objects in the camera's field of view. The objects and the camera are tracked using a VICON motion capture system. By 3D scanning the room and the objects, ground truth for the depth maps and pixel-wise object masks is obtained. We then train and evaluate our learning pipeline on EV-IMO and demonstrate that it is well suited for scene-constrained robotics applications.
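    The per-object translational velocity described above reduces to a centroid displacement over time once pixel-wise masks and back-projected 3D points are available; the toy sketch below (our illustration, not the authors' network or code) makes that computation explicit.

        import numpy as np

        def object_velocity(points_t0, points_t1, mask_t0, mask_t1, dt, obj_id):
            """Mean 3D displacement of one object's points over dt seconds."""
            c0 = points_t0[mask_t0 == obj_id].mean(axis=0)  # centroid at t0
            c1 = points_t1[mask_t1 == obj_id].mean(axis=0)  # centroid at t1
            return (c1 - c0) / dt  # metres per second

        # Hypothetical data: HxWx3 back-projected point maps, HxW integer masks.
        H, W = 4, 4
        pts0 = np.zeros((H, W, 3))
        pts1 = np.full((H, W, 3), 0.05)
        mask = np.ones((H, W), dtype=int)
        print(object_velocity(pts0, pts1, mask, mask, dt=0.1, obj_id=1))  # ~[0.5 0.5 0.5]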
  5. Abstract

    A typical ground investigation for characterizing the geotechnical properties of soil requires sampling soils for laboratory testing. Laboratory X-ray computed tomography (CT) has been used to non-destructively observe soils and characterize their properties using image processing, numerical analysis, or three-dimensional (3D) printing techniques based on the scanned images; however, if it became possible to scan soils in the ground, such characterization could be performed without sampling. In this study, an in-situ X-ray CT scanning system comprising a drilling machine with an integrated CT scanner was developed. A model test was conducted on gravelly soil to verify that the equipment could drill into and scan the soil underground. Moreover, image processing was performed on the acquired 3D CT images to verify the image quality; the particle morphology (particle size and shape characteristics) was compared with results obtained from particle projections captured in two dimensions (2D) by a digital camera. The equipment successfully drilled to a target depth of 800 mm, and the soil was scanned at depths of 700, 750, and 800 mm. The image processing results showed reasonable agreement between the 3D and 2D particle morphology, confirming the feasibility of the in-situ X-ray CT scanning system.
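    As an illustration of the particle-morphology step, the sketch below (our assumption of a typical workflow, not the study's actual code; the voxel size is invented) labels connected particles in a binarized CT volume with scikit-image and reports each particle's equivalent spherical diameter.

        import numpy as np
        from skimage.measure import label, regionprops

        # Synthetic stand-in for a thresholded CT scan: two spherical
        # "particles" in a 40^3 voxel grid.
        zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
        binary = (((zz - 12)**2 + (yy - 12)**2 + (xx - 12)**2) < 6**2) \
               | (((zz - 28)**2 + (yy - 28)**2 + (xx - 28)**2) < 8**2)

        labels = label(binary)  # 3D connected-component labelling
        voxel_mm = 0.1          # assumed isotropic voxel edge length in mm

        for region in regionprops(labels):
            particle_mm3 = region.area * voxel_mm**3  # 'area' = voxel count in 3D
            d_eq = (6 * particle_mm3 / np.pi) ** (1 / 3)  # equivalent sphere diameter
            print(f"particle {region.label}: {d_eq:.2f} mm equivalent diameter")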

     