Title: Method for large-scale structured-light system calibration
We propose a multi-stage calibration method for increasing the overall accuracy of a large-scale structured-light system by leveraging the conventional stereo calibration approach with a pinhole model. We first calibrate the intrinsic parameters at a near distance, and then the extrinsic parameters with a low-cost, large calibration target at the designed measurement distance. Finally, we estimate pixel-wise errors from standard stereo 3D reconstructions and determine the pixel-wise phase-to-coordinate relationships using low-order polynomials. The calibrated pixel-wise polynomial functions can then be used to reconstruct 3D coordinates for a given pixel phase value. We experimentally demonstrated that our proposed method achieves sub-millimeter accuracy within a large volume of 1200 (H) × 800 (V) × 1000 (D) mm³.
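The final per-pixel polynomial stage lends itself to a compact implementation. Below is a minimal sketch in Python/NumPy, assuming unwrapped phase maps and reference coordinates from the standard stereo reconstruction are already available at several calibration poses; the array names, the cubic order, and the fitting routine are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_pixelwise_polynomials(phase_maps, coord_maps, order=3):
    """Fit one low-order polynomial per pixel mapping unwrapped phase to a
    single coordinate axis (repeat per axis for x, y, z).

    phase_maps: (K, H, W) unwrapped phase at K calibration poses
    coord_maps: (K, H, W) reference coordinate (e.g., z) at those poses
    Returns (order + 1, H, W) coefficients, highest degree first.
    """
    K, H, W = phase_maps.shape
    coeffs = np.empty((order + 1, H, W))
    for v in range(H):
        for u in range(W):
            coeffs[:, v, u] = np.polyfit(phase_maps[:, v, u],
                                         coord_maps[:, v, u], order)
    return coeffs

def evaluate_pixelwise(coeffs, phase):
    """Evaluate the per-pixel polynomials for a measured phase map (H, W)."""
    z = np.zeros_like(phase)
    for c in coeffs:  # Horner evaluation, highest-degree coefficient first
        z = z * phase + c
    return z
```

In practice one coefficient set would be fit per output axis, and the per-pixel loop would be vectorized or parallelized for camera-resolution images.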
Award ID(s):
1637961
PAR ID:
10230070
Author(s) / Creator(s):
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Optics Express
Volume:
29
Issue:
11
ISSN:
1094-4087; OPEXFF
Format(s):
Medium: X
Size(s):
Article No. 17316
Sponsoring Org:
National Science Foundation
More Like This
  1. Fromme, Paul; Su, Zhongqing (Ed.)
    Three-dimensional digital image correlation (3D-DIC) has become a strong alternative to traditional contact-based techniques for structural health monitoring. 3D-DIC can extract the full-field displacement of a structure from a set of synchronized stereo images. Before performing 3D-DIC, a complex calibration process must be completed to obtain the stereovision system's extrinsic parameters (i.e., the cameras' relative distance and orientation). The time required for the calibration depends on the dimensions of the targeted structure; for large-scale structures, the calibration may take several hours. Furthermore, every time the cameras' position changes, a new calibration is required to recalculate the extrinsic parameters. The approach proposed in this research determines the 3D-DIC extrinsic parameters using data measured with commercially available sensors: three inertial measurement units (IMUs) together with a laser distance meter compute the relative orientation of and distance between the cameras. In this paper, the sensitivity of the newly developed sensor suite is evaluated by assessing the errors in the measurement of the extrinsic parameters. Analytical simulations performed on a 7.5 × 5.7 m field of view using data retrieved from the sensors show that the proposed approach provides an accuracy of ~10⁻⁶ m and a promising way to reduce the complexity of 3D-DIC calibration.
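    A back-of-the-envelope version of that sensor-based extrinsic estimate is sketched below, assuming each IMU reports its camera's orientation in a shared world frame and the laser meter gives the camera-to-camera distance; the Euler-angle convention and the baseline-direction input are illustrative assumptions, and real use would also need the IMU-to-camera mounting offsets that this sketch ignores.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def extrinsics_from_sensors(rpy_cam1, rpy_cam2, baseline_m, baseline_dir_cam1):
        """Estimate stereo extrinsics from two IMU orientations and a laser distance.

        rpy_cam1, rpy_cam2: (roll, pitch, yaw) in degrees, camera-to-world rotations
        baseline_m:         camera-to-camera distance from the laser meter
        baseline_dir_cam1:  unit vector from camera 1 toward camera 2, in camera-1 axes
        Returns (R_rel, t_rel) mapping camera-1 coordinates into camera-2 coordinates.
        """
        R1 = R.from_euler("xyz", rpy_cam1, degrees=True).as_matrix()
        R2 = R.from_euler("xyz", rpy_cam2, degrees=True).as_matrix()
        R_rel = R2.T @ R1                            # relative orientation, cam1 -> cam2
        t_in_cam1 = baseline_m * np.asarray(baseline_dir_cam1, float)
        t_rel = -R_rel @ t_in_cam1                   # camera-1 origin seen from camera 2
        return R_rel, t_rel
    ```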
  2. This study compares the accuracy of circular and linear fringe projection profilometry in terms of system calibration and 3D reconstruction. We introduce what we believe to be a novel calibration method and 3D reconstruction technique using circular and radial fringe patterns. Our approach is compared with the traditional linear phase-shifting method through several 2 × 2 experimental setups. Results indicate that our 3D reconstruction method outperforms the linear phase-shifting approach, although its calibration efficiency does not show a comparable advantage. Further analysis reveals that sensitivity and estimated phase error contribute to the relative underperformance in calibration. This paper offers insights into the potential and limitations of circular fringe projection profilometry.
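    Both the linear and the circular/radial variants rest on the same per-pixel phase-shifting step; only the projected carrier pattern differs. A minimal sketch of the standard N-step wrapped-phase recovery that such systems share (function and array names are illustrative, and the abstract does not state the shift count used):

    ```python
    import numpy as np

    def wrapped_phase(images):
        """Recover wrapped phase from N equally shifted fringe images.

        images: (N, H, W) intensities I_k = A + B * cos(phi + 2*pi*k/N)
        Returns phase wrapped to (-pi, pi].
        """
        N = images.shape[0]
        k = np.arange(N).reshape(-1, 1, 1)
        num = np.sum(images * np.sin(2 * np.pi * k / N), axis=0)
        den = np.sum(images * np.cos(2 * np.pi * k / N), axis=0)
        return np.arctan2(-num, den)
    ```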
  3. Förster (or fluorescence) resonance energy transfer (FRET) is a quantifiable energy transfer in which a donor fluorophore nonradiatively transfers its excitation energy to an acceptor fluorophore. A change in FRET efficiency indicates a change in the proximity and environment of these fluorophores, which enables the study of intermolecular interactions. Measurement of FRET efficiency using the sensitized emission method requires a donor–acceptor calibrated system. One of these calibration factors, named the G factor, depends on instrument parameters related to the donor and acceptor measurement channels and on the fluorophores' quantum efficiencies; it can be determined in several different ways and allows for conversion of the raw donor and acceptor emission signals to FRET efficiency. However, the calculated value of the G factor from experimental data can fluctuate significantly depending on the chosen experimental method and the size of the sample. In this technical note, we extend the results of Gates et al. (Cytometry Part A 95A (2018) 201–213) by refining the calibration method used to calibrate FRET from image pixel data. Instead of using the pixel histograms of two constructs with high and low FRET efficiency to determine the G factor, we use pixel histogram data from one construct of known efficiency. We validate this method by determining the G factor with the same constructs developed and used by Gates et al. and comparing the results from the two approaches. While the two approaches are theoretically equivalent, we demonstrate that the use of a single construct with known efficiency provides a more precise experimental measurement of the G factor, one that can be attained by collecting a smaller number of images. © 2020 International Society for Advancement of Cytometry
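    In the common three-cube sensitized-emission formulation, E = Fc / (Fc + G·I_DD), where Fc is the bleed-through-corrected FRET signal, so a construct of known E yields a G value at every pixel. A minimal sketch under that formulation follows; the symbols are the generic three-cube convention and may differ in detail from the notation of Gates et al.

    ```python
    import numpy as np

    def g_factor_from_known_construct(I_DD, I_DA, I_AA, d, a, E_known):
        """Estimate G from pixel data of one construct with known FRET efficiency.

        I_DD, I_DA, I_AA: donor, FRET (sensitized), and acceptor channel images
        d, a:             donor and acceptor spectral bleed-through coefficients
        E_known:          independently determined efficiency, 0 < E_known < 1
        """
        Fc = I_DA - d * I_DD - a * I_AA      # bleed-through-corrected FRET signal
        G = Fc * (1.0 - E_known) / (E_known * I_DD)
        return np.median(G)                  # robust summary of the pixel histogram
    ```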
  4. We present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset, EV-IMO, which includes accurate pixel-wise motion masks, egomotion, and ground-truth depth. Our approach is based on an efficient implementation of the SfM learning pipeline using a low-parameter neural network architecture on event data. In addition to camera egomotion and a dense depth map, the network estimates independently moving object segmentation at the pixel level and computes per-object 3D translational velocities of moving objects. We also train a shallow network with just 40k parameters, which is able to compute depth and egomotion. Our EV-IMO dataset features 32 minutes of indoor recording with up to 3 fast-moving objects in the camera's field of view. The objects and the camera are tracked using a VICON motion capture system. By 3D scanning the room and the objects, ground truth for the depth map and pixel-wise object masks is obtained. We then train and evaluate our learning pipeline on EV-IMO and demonstrate that it is well suited for scene-constrained robotics applications.
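    The abstract does not specify the shallow 40k-parameter architecture. Purely to illustrate the scale, here is a tiny PyTorch encoder with dense-depth and 6-DoF pose heads; every architectural choice below is an assumption, not the EV-IMO design.

    ```python
    import torch.nn as nn

    class ShallowDepthEgomotionNet(nn.Module):
        """Illustrative shallow network: dense depth plus a 6-DoF pose vector."""

        def __init__(self, in_ch=4):  # e.g., event count/time-surface channels
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.depth_head = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
            )
            self.pose_head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 6),  # 3 translation + 3 rotation parameters
            )

        def forward(self, x):
            features = self.encoder(x)
            return self.depth_head(features), self.pose_head(features)
    ```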
  5. We propose a boundary-aware multi-task deep-learning-based framework for fast 3D building modeling from a single overhead image. Unlike most existing techniques which rely on multiple images for 3D scene modeling, we seek to model the buildings in the scene from a single overhead image by jointly learning a modified signed distance function (SDF) from the building boundaries, a dense heightmap of the scene, and scene semantics. To jointly train for these tasks, we leverage pixel-wise semantic segmentation and normalized digital surface maps (nDSM) as supervision, in addition to labeled building outlines. At test time, buildings in the scene are automatically modeled in 3D using only an input overhead image. We demonstrate an increase in building modeling performance using a multi-feature network architecture that improves building outline detection by considering network features learned for the other jointly learned tasks. We also introduce a novel mechanism for robustly refining instance-specific building outlines using the learned modified SDF. We verify the effectiveness of our method on multiple large-scale satellite and aerial imagery datasets, where we obtain state-of-the-art performance in the 3D building reconstruction task.
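    A truncated signed distance map of the kind such boundary-aware supervision uses can be derived from a binary building mask with two distance transforms. The sketch below is one plausible construction; the paper's "modified SDF" may be defined differently.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def truncated_sdf(building_mask, truncation=16.0):
        """Signed distance to the building outline from a boolean mask (H, W).

        Positive inside footprints, negative outside, zero on the outline,
        clipped and normalized to [-1, 1] by the truncation radius (pixels).
        """
        inside = distance_transform_edt(building_mask)    # distance to boundary, inside
        outside = distance_transform_edt(~building_mask)  # distance to boundary, outside
        return np.clip((inside - outside) / truncation, -1.0, 1.0)
    ```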