Title: Sensor-aided camera calibration for three-dimensional digital image correlation measurements
Stereovision systems can extract full-field three-dimensional (3D) displacements of structures by processing images collected with two synchronized cameras. To obtain accurate measurements, the cameras must be calibrated to account for lens distortion (i.e., the intrinsic parameters) and to compute the cameras' relative position and orientation (i.e., the extrinsic parameters). Traditionally, calibration is performed by taking photos of a calibration object (e.g., a checkerboard) with the two cameras. Because the calibration object must be similar in size to the targeted structure, measurements on large-scale structures are highly impractical. This research proposes a multi-sensor board with three inertial measurement units and a laser distance meter that computes the extrinsic parameters of a stereovision system and streamlines the calibration procedure. In this paper, the performance of the proposed sensor-based calibration is compared with the accuracy of the traditional image-based calibration procedure. Laboratory experiments show that cameras calibrated with the multi-sensor board measure displacements with 95% accuracy compared to displacements obtained from cameras calibrated with the traditional procedure. The results of this study indicate that the sensor-based approach can increase the applicability of 3D digital image correlation measurements to large-scale structures while reducing the time and complexity of the calibration.
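
As a minimal sketch of the idea (not the paper's implementation), the NumPy function below assembles stereo extrinsics from such sensor readings. It assumes each IMU reports a world-to-camera rotation for the camera it is mounted on, and that the laser distance meter gives the camera-to-camera distance along a known direction; the function name and interface are illustrative assumptions.

    import numpy as np

    def extrinsics_from_sensors(R_w2c1, R_w2c2, baseline_m, baseline_dir_c1):
        # R_w2c1, R_w2c2: 3x3 world-to-camera rotations from IMUs rigidly
        #                 mounted to cameras 1 and 2 (assumed interface).
        # baseline_m:      camera-to-camera distance from the laser meter.
        # baseline_dir_c1: unit vector from camera 1 toward camera 2,
        #                  expressed in camera 1 coordinates.
        R_rel = R_w2c2 @ R_w2c1.T                      # camera 1 -> camera 2
        t_rel = -R_rel @ (baseline_m * baseline_dir_c1)
        return R_rel, t_rel                            # X_c2 = R_rel @ X_c1 + t_rel

With R_rel and t_rel in hand, the usual stereo triangulation pipeline can run without photographing a calibration target.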
Award ID(s): 2018992
PAR ID: 10456045
Author(s) / Creator(s):
Editor(s): Fromme, Paul; Su, Zhongqing
Date Published:
Journal Name: Health Monitoring of Structural and Biological Systems XVII
Volume: 12488
Page Range / eLocation ID: 77
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Fromme, Paul; Su, Zhongqing (Ed.)
    Three-dimensional digital image correlation (3D-DIC) has become a strong alternative to traditional contact-based techniques for structural health monitoring. 3D-DIC can extract the full-field displacement of a structure from a set of synchronized stereo images. Before performing 3D-DIC, a complex calibration process must be completed to obtain the stereovision system's extrinsic parameters (i.e., the cameras' distance and orientation). The time required for the calibration depends on the dimensions of the targeted structure; for large-scale structures, the calibration may take several hours. Furthermore, every time the cameras' position changes, a new calibration is required to recalculate the extrinsic parameters. The approach proposed in this research determines the 3D-DIC extrinsic parameters from data measured with commercially available sensors: three inertial measurement units combined with a laser distance meter compute the relative orientation and distance between the cameras. In this paper, the sensitivity of the newly developed sensor suite is evaluated by assessing the errors in the measurement of the extrinsic parameters. Analytical simulations performed on a 7.5 × 5.7 m field of view using the data retrieved from the sensors show that the proposed approach provides an accuracy of ~10⁻⁶ m and a promising way to reduce the complexity of 3D-DIC calibration.
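    One way to reproduce this kind of sensitivity analysis is a Monte Carlo simulation: perturb the sensor-derived extrinsics with plausible noise and observe the resulting triangulation error. The geometry and noise levels below are assumed for illustration; this is not the authors' simulation.

        import numpy as np

        def small_rotation(rx, ry, rz):
            # First-order (small-angle) rotation matrix, angles in radians.
            return np.array([[1.0, -rz, ry], [rz, 1.0, -rx], [-ry, rx, 1.0]])

        def triangulate(o1, d1, o2, d2):
            # Least-squares midpoint of the rays o1 + s*d1 and o2 + t*d2.
            A = np.stack([d1, -d2], axis=1)
            st, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
            return 0.5 * ((o1 + st[0] * d1) + (o2 + st[1] * d2))

        # Illustrative stereo geometry: 2 m baseline along x, target ~5 m away.
        b_true = np.array([2.0, 0.0, 0.0])
        p_true = np.array([1.0, 0.5, 5.0])
        d1 = p_true / np.linalg.norm(p_true)
        d2 = (p_true - b_true) / np.linalg.norm(p_true - b_true)

        rng = np.random.default_rng(0)
        sigma_ang = np.deg2rad(0.1)  # assumed IMU orientation noise
        sigma_len = 1e-3             # assumed laser distance-meter noise (m)

        errs = []
        for _ in range(2000):
            dR = small_rotation(*rng.normal(0.0, sigma_ang, 3))  # orientation error
            b_hat = b_true * (1.0 + rng.normal(0.0, sigma_len) / np.linalg.norm(b_true))
            errs.append(np.linalg.norm(triangulate(np.zeros(3), d1, b_hat, dR @ d2) - p_true))

        print(f"mean 3D triangulation error: {np.mean(errs):.4f} m")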
  2. We propose a multi-stage calibration method for increasing the overall accuracy of a large-scale structured light system by leveraging the conventional stereo calibration approach with a pinhole model. We first calibrate the intrinsic parameters at a near distance and then the extrinsic parameters with a low-cost, large calibration target at the designed measurement distance. Finally, we estimate pixel-wise errors from standard stereo 3D reconstructions and determine the pixel-wise phase-to-coordinate relationships using low-order polynomials. The calibrated pixel-wise polynomial functions can then be used for 3D reconstruction given a pixel's phase value. We experimentally demonstrate that our proposed method achieves high accuracy over a large volume: sub-millimeter within 1200 (H) × 800 (V) × 1000 (D) mm³.
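    The pixel-wise polynomial step lends itself to a compact sketch. The interface below (array shapes, polynomial order, function names) is assumed for illustration and is not the authors' code; note that np.polyfit needs at least order + 1 calibration poses per pixel.

        import numpy as np

        def fit_pixelwise_polynomials(phases, depths, order=3):
            # phases: (N, H, W) unwrapped phase maps from N calibration poses.
            # depths: (N, H, W) reference z values from standard stereo
            #         reconstruction at those poses.
            # Returns (order + 1, H, W) polynomial coefficients per pixel.
            N, H, W = phases.shape
            coeffs = np.empty((order + 1, H, W))
            for i in range(H):          # plain loops for clarity, not speed
                for j in range(W):
                    coeffs[:, i, j] = np.polyfit(phases[:, i, j],
                                                 depths[:, i, j], order)
            return coeffs

        def phase_to_depth(coeffs, phase_map):
            # Horner evaluation of each pixel's polynomial at its measured
            # phase (np.polyfit returns highest-order coefficients first).
            z = np.zeros_like(phase_map, dtype=float)
            for c in coeffs:
                z = z * phase_map + c
            return z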
  3. We present a method for reconstructing the 3D shape of arbitrary Lambertian objects from measurements by miniature, energy-efficient, low-cost single-photon cameras. These cameras, operating as time-resolved image sensors, illuminate the scene with a very fast pulse of diffuse light and record the shape of that pulse as it returns from the scene at high temporal resolution. We propose to model this image formation process, account for its non-idealities, and adapt neural rendering to reconstruct 3D geometry from a set of spatially distributed sensors with known poses. We show that our approach can successfully recover complex 3D shapes from simulated data. We further demonstrate 3D object reconstruction from real-world captures, using measurements from a commodity proximity sensor. Our work draws a connection between image-based modeling and active range scanning and is a step toward 3D vision with single-photon cameras.
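    As a rough illustration of the time-resolved image formation described above, the toy model below histograms photon arrival times for a single diffuse scene point. All constants (pulse width, bin size, photon counts) and the intensity-falloff model are made-up illustration values, not the paper's model.

        import numpy as np

        C = 2.998e8  # speed of light, m/s

        def transient_histogram(dist_m, albedo, pulse_sigma_s=2e-10, bin_s=1e-10,
                                n_bins=256, n_photons=5000, rng=None):
            # Toy single-photon measurement: signal photons return at ~2*d/c
            # with Gaussian jitter from the pulse width; uniform ambient/dark
            # counts are added on top. Thinning by albedo/d^2 mimics intensity
            # falloff for a diffuse (Lambertian) point.
            if rng is None:
                rng = np.random.default_rng(0)
            t_return = 2.0 * dist_m / C
            p_detect = min(1.0, albedo / max(dist_m, 1e-3) ** 2)
            t_sig = rng.normal(t_return, pulse_sigma_s,
                               rng.binomial(n_photons, p_detect))
            t_bg = rng.uniform(0.0, n_bins * bin_s, n_photons // 20)
            hist, _ = np.histogram(np.concatenate([t_sig, t_bg]),
                                   bins=n_bins, range=(0.0, n_bins * bin_s))
            return hist

        # The peak bin estimates the round-trip time, hence depth d = c*t/2;
        # the paper's approach instead fits 3D geometry to many such
        # histograms from sensors with known poses.
        hist = transient_histogram(0.5, albedo=0.8)
        d_est = C * (np.argmax(hist) + 0.5) * 1e-10 / 2.0  # 1e-10 = default bin_s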
  4. Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors now available on modern smartphones. 
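    For reference, the raw measurements this abstract works from are typically four correlation samples per pixel. The textbook four-phase model below shows how depth and the unambiguous-range limit fall out of them; the modulation frequency and sign conventions are assumptions, not taken from the paper.

        import numpy as np

        C = 2.998e8  # speed of light, m/s

        def cw_tof_depth(b0, b1, b2, b3, f_mod=30e6):
            # Depth from the four raw correlation samples ("buckets") of a
            # continuous-wave ToF pixel, captured at 0/90/180/270 degree
            # phase offsets (textbook 4-phase model).
            phase = np.arctan2(b3 - b1, b0 - b2)   # wrapped phase, (-pi, pi]
            phase = np.mod(phase, 2.0 * np.pi)     # map into [0, 2*pi)
            return C * phase / (4.0 * np.pi * f_mod)

        # The unambiguous range is c / (2 * f_mod): 5 m at 30 MHz. Depths
        # beyond it wrap around -- the "limited unambiguous depth range"
        # the abstract refers to, which modeling raw buckets (rather than
        # vendor depth maps) lets a reconstruction handle explicitly.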