

This content will become publicly available on December 5, 2024

Title: Doppler Time-of-Flight Rendering

We introduce Doppler time-of-flight (D-ToF) rendering, an extension of ToF rendering for dynamic scenes, with applications in simulating D-ToF cameras. D-ToF cameras use high-frequency modulation of illumination and exposure, and measure the Doppler frequency shift to compute the radial velocity of dynamic objects. The time-varying scene geometry and high-frequency modulation functions used in such cameras make it challenging to accurately and efficiently simulate their measurements with existing ToF rendering algorithms. We overcome these challenges in a twofold manner: To achieve accuracy, we derive path integral expressions for D-ToF measurements under global illumination and form unbiased Monte Carlo estimates of these integrals. To achieve efficiency, we develop a tailored time-path sampling technique that combines antithetic time sampling with correlated path sampling. We show experimentally that our sampling technique achieves up to two orders of magnitude lower variance compared to naive time-path sampling. We provide an open-source simulator that serves as a digital twin for D-ToF imaging systems, allowing imaging researchers, for the first time, to investigate the impact of modulation functions, material properties, and global illumination on D-ToF imaging performance.
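The two ingredients above can be illustrated with a minimal sketch (hypothetical names; this is not the paper's estimator). The first helper applies the textbook round-trip Doppler relation, Δf ≈ 2·v·f/c, to recover radial velocity from a measured frequency shift. The second shows the variance-reduction idea behind antithetic time sampling: pairing each sampled exposure time t with its mirror T − t produces negatively correlated contributions for monotone integrands.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def radial_velocity(f_mod, delta_f):
    """Radial velocity from the round-trip Doppler shift of a reflected
    wave: delta_f ~= 2 * v * f_mod / c (textbook relation)."""
    return delta_f * C / (2.0 * f_mod)

def exposure_estimate(f, T, n, seed, antithetic=False):
    """Monte Carlo estimate of (1/T) * integral_0^T f(t) dt.
    With antithetic=True each uniform time t is paired with T - t;
    for monotone f the pair is negatively correlated, reducing variance."""
    t = np.random.default_rng(seed).uniform(0.0, T, n)
    if antithetic:
        return 0.5 * (f(t) + f(T - t)).mean()
    return f(t).mean()

T = 1e-3                 # exposure time (s)
f = lambda t: t / T      # monotone test integrand; true mean is 0.5
naive = [exposure_estimate(f, T, 64, s) for s in range(200)]
anti = [exposure_estimate(f, T, 64, s, antithetic=True) for s in range(200)]
print(np.var(naive) > np.var(anti))  # True: antithetic pairing cuts variance
```

For this linear integrand the antithetic pair cancels exactly; the paper's time-path sampler combines this idea with correlated path sampling in a full path-integral estimator.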

 
Award ID(s): 1900849, 1730147
NSF-PAR ID: 10485844
Author(s) / Creator(s): ; ; ;
Publisher / Repository: ACM
Date Published:
Journal Name: ACM Transactions on Graphics
Volume: 42
Issue: 6
ISSN: 0730-0301
Page Range / eLocation ID: 1 to 18
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Drilling and milling are material removal processes used in everyday conventional production, especially in the high-speed metal cutting industry. Monitoring tool information (wear, dynamic behavior, deformation, etc.) is essential to guarantee successful product fabrication. Many methods monitor cutting tools through cutting force, spindle motor current, vibration, or acoustic emission signals. However, these methods are indirect and sensitive to environmental noise. Here, we study an in-process imaging technique that captures cutting tool information while the tool is cutting metal. Just as machinists judge whether a tool is worn out by eye, a vision system can directly present the condition of the machine tool. We propose a phase-shifted strobo-stereoscopic method (Figure 1) for three-dimensional (3D) imaging. Stroboscopic instruments are usually applied to measure fast-moving objects; the operating principle is as follows: when the frequency of the illumination is synchronized with the motion of the object, the object appears stationary. The motion frequency of the target is derived from the count information of the encoder signals from the rotating work spindle. If a small difference is added to the frequency, the object appears to move or rotate slowly. This effect serves as the source of the phase shifting; with this phase information, the target can be 3D reconstructed over a full 360-degree view. The stereoscopic part uses two CCD cameras, positioned bilaterally symmetric with respect to the target, to capture images. The 3D scene is reconstructed from the locations of the same object points in the left and right images. In the proposed system, an air spindle was used to ensure motion accuracy at drilling/milling speeds.
As shown in Figure 2, two CCDs with 10X objective lenses were installed on a linear rail with rotary stages to capture raw pictures of the machine tool bit for 3D reconstruction. The overall measurement process is summarized in the flow chart (Figure 3). Because the count number of the encoder signals is related to the rotary speed, the input speed (in RPM) was set as the reference signal to control the illumination frequency (f0) of the LED. When the frequency matched the reference signal, both CCDs began gathering pictures. Using the mismatched frequency (Δf), a sequence of images was gathered under the phase-shifted process for a whole-view 3D reconstruction. The study in this paper was based on monitoring the performance of a 3/8'' drilling tool. This paper presents the principle of the phase-shifted strobo-stereoscopic 3D imaging process. The hardware setup is introduced, as well as the 3D imaging algorithm. The reconstructed image analysis under different working speeds is discussed, including the reconstruction resolution. The uncertainty of the imaging process and of the built system is also analyzed. Because the input signal is the working speed, no information from other sources is required, so the proposed method can be applied as an on-machine or even in-process metrology. As a direct 3D machine vision method, it can directly provide machine tool surface and fatigue information. The presented method fills a gap in determining the performance status of machine tools, which further safeguards the fabrication process.
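    The stroboscopic aliasing described above can be sketched in a few lines (a simplified model with hypothetical names, not the authors' implementation): flashing the LED at the spindle frequency plus a small offset Δf makes the tool appear to rotate at only about |Δf| revolutions per second, which supplies the slow phase stepping for whole-view reconstruction.

    ```python
    def apparent_rate(rpm, delta_f):
        """Apparent rotation rate (rev/s) of a spindle under stroboscopic
        illumination. The tool spins at f0 = rpm / 60 rev/s; flashing the
        LED at fs = f0 + delta_f aliases the motion so the tool appears
        frozen (delta_f = 0) or slowly rotating at about |delta_f| rev/s."""
        f0 = rpm / 60.0
        fs = f0 + delta_f
        per_flash = (f0 / fs) % 1.0      # fraction of a turn between flashes
        if per_flash > 0.5:              # wrap to the nearest alias (signed)
            per_flash -= 1.0
        return per_flash * fs

    print(apparent_rate(6000.0, 0.0))    # frequency matched: appears frozen
    print(apparent_rate(6000.0, 0.5))    # appears to creep at ~0.5 rev/s
    ```

    The sign of the result indicates that a positive Δf makes the tool appear to rotate backwards, but the magnitude |Δf| is what sets the phase-step rate.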
  2. A high-speed super-resolution computational imaging technique is introduced on the basis of classical and quantum correlation functions obtained from photon counts collected from quantum emitters illuminated by spatiotemporally structured illumination. The structured illumination is delocalized—allowing the selective excitation of separate groups of emitters as the modulation of the illumination light advances. A recorded set of photon counts contains rich quantum and classical information. By processing photon counts, multiple orders of Glauber correlation functions are extracted. Combinations of the normalized Glauber correlation functions convert photon counts into signals of increasing order that contain increasing spatial frequency information. However, the amount of information above the noise floor drops at higher correlation orders, causing a loss of accessible information in the finer spatial frequency content that is contained in the higher-order signals. We demonstrate an efficient and robust computational imaging algorithm to fuse the spatial frequencies from the low-spatial-frequency range that is available in the classical information with the spatial frequency content in the quantum signals. Because of the overlap of low spatial frequency information, the higher signal-to-noise ratio (SNR) information concentrated in the low spatial frequencies stabilizes the lower SNR at higher spatial frequencies in the higher-order quantum signals. Robust performance of this joint fusion of classical and quantum computational single-pixel imaging is demonstrated with marked increases in spatial frequency content, leading to super-resolution imaging, along with much better mean squared errors in the reconstructed images. 
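    The lowest-order quantum signal mentioned above can be made concrete with a small sketch (illustrative only, not the authors' pipeline): the normalized second-order Glauber correlation g2(0) is estimated from per-window photon counts as <n(n−1)>/<n>², which distinguishes Poissonian (classical) statistics from antibunched emission.

    ```python
    import numpy as np

    def g2_zero(counts):
        """Estimate the normalized second-order Glauber correlation
        g2(0) = <n(n-1)> / <n>^2 from per-detection-window photon counts.
        Coherent (Poissonian) light gives g2 ~ 1; antibunched emission
        from a small group of emitters gives g2 < 1."""
        n = np.asarray(counts, dtype=float)
        return (n * (n - 1.0)).mean() / n.mean() ** 2

    rng = np.random.default_rng(1)
    coherent = rng.poisson(3.0, 100_000)   # Poissonian photon counts
    print(g2_zero(coherent))               # close to 1 for coherent light
    ```

    Higher-order signals are built from higher-order correlation functions in the same spirit, with the SNR dropping as the order grows, which is the trade-off the fusion algorithm addresses.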
  3. Imaging beyond the diffraction limit barrier has attracted wide attention due to the ability to resolve previously hidden image features. Of the various super-resolution microscopy techniques available, a particularly simple method called saturated excitation microscopy (SAX) requires only simple modification of a laser scanning microscope: The illumination beam power is sinusoidally modulated and driven into saturation. SAX images are extracted from the harmonics of the modulation frequency and exhibit improved spatial resolution. Unfortunately, this elegant strategy is hindered by the incursion of shot noise that prevents high-resolution imaging in many realistic scenarios. Here, we demonstrate a technique for super-resolution imaging that we call computational saturated absorption (CSA) in which a joint deconvolution is applied to a set of images with diversity in spatial frequency support among the point spread functions (PSFs) used in the image formation with saturated laser scanning fluorescence microscopy. CSA microscopy allows access to the high spatial frequency diversity in a set of saturated effective PSFs, while avoiding image degradation from shot noise.
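    The harmonic-extraction step at the heart of SAX can be sketched as follows (a toy saturation model, not the paper's photophysics or CSA reconstruction): an unsaturated response to sinusoidal excitation contains only the fundamental, while driving the response into saturation creates the higher harmonics from which the super-resolved images are formed.

    ```python
    import numpy as np

    def harmonic_amplitudes(signal, n_periods, orders):
        """Amplitudes of the requested harmonics of the modulation
        frequency, taken from the DFT of an integer number of periods."""
        spec = np.abs(np.fft.rfft(signal)) / len(signal)
        return [2.0 * spec[k * n_periods] for k in orders]

    n_periods, samples = 16, 4096
    t = np.arange(samples) / samples
    excitation = 0.5 * (1.0 + np.cos(2.0 * np.pi * n_periods * t))

    linear = excitation                                 # unsaturated response
    saturated = excitation / (1.0 + excitation / 0.3)   # saturating response

    lin_h2, lin_h3 = harmonic_amplitudes(linear, n_periods, [2, 3])
    sat_h2, sat_h3 = harmonic_amplitudes(saturated, n_periods, [2, 3])
    print(lin_h2, sat_h2)   # saturation generates the higher harmonics
    ```

    In real measurements these higher harmonics sit closer to the shot-noise floor, which is the degradation CSA's joint deconvolution is designed to avoid.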

  4. Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors that are now available on modern smartphones. 
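    As a rough illustration of the raw measurements such sensors produce (a textbook four-bucket sketch with hypothetical names, not the paper's exact image formation model): a continuous-wave ToF pixel correlates returning light against four phase-shifted references, and phase, hence depth, is decoded from the four buckets.

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light (m/s)

    def simulate_quads(depth, f_mod, amplitude=1.0, offset=2.0):
        """Ideal correlation measurements of a CW-ToF pixel for a target
        at the given depth, sampled at four reference phase offsets."""
        phase = 4.0 * np.pi * f_mod * depth / C
        return [offset + amplitude * np.cos(phase - psi)
                for psi in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

    def depth_from_quads(q0, q90, q180, q270, f_mod):
        """Standard four-bucket decoding: the round-trip modulation phase
        is atan2(q90 - q270, q0 - q180), and depth follows as
        c * phase / (4 * pi * f_mod), modulo the range c / (2 * f_mod)."""
        phase = np.arctan2(q90 - q270, q0 - q180) % (2.0 * np.pi)
        return C * phase / (4.0 * np.pi * f_mod)

    f_mod = 30e6                        # 30 MHz -> ~5 m unambiguous range
    q = simulate_quads(2.5, f_mod)
    print(depth_from_quads(*q, f_mod))  # recovers the simulated 2.5 m depth
    ```

    Modeling the raw buckets rather than the decoded depth map is what lets the method above handle low-reflectance regions, multi-path interference, and phase wrapping beyond the unambiguous range.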