

Title: Comparison of super-resolution and noise reduction for passive single-photon imaging
Single-photon sensitive image sensors have recently gained popularity in passive imaging applications, where the goal is to capture the photon flux (brightness) of different scene points under challenging lighting conditions and scene motion. Recent work has shown that high-speed bursts of single-photon timestamp information captured with a single-photon avalanche diode camera can be used to estimate and correct for scene motion, thereby improving the signal-to-noise ratio and reducing motion blur artifacts. We compare various design choices in the processing pipeline used for noise reduction, motion compensation, and upsampling of single-photon timestamp frames. We consider various pixelwise noise reduction techniques in combination with state-of-the-art deep neural network upscaling algorithms to super-resolve intensity images formed from single-photon timestamp data. We explore the trade space of motion blur and signal noise in scenes with different motion content. Using real data captured with a hardware prototype, we achieved super-resolution reconstruction at frame rates up to 65.8 kHz (the native sampling rate of the sensor) and captured videos of fast-moving objects. The best reconstruction is obtained with the motion compensation approach, which achieves a structural similarity (SSIM) of about 0.67 for fast-moving rigid objects and recovers subpixel detail. These results show the relative superiority of our motion compensation approach over alternatives, none of which exceeds an SSIM of 0.5.
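The intensity images above are formed from binary single-photon detections. A minimal sketch (not the paper's exact pipeline) of the standard maximum-likelihood flux estimator for a stack of binary SPAD frames, assuming Poisson photon arrivals and at most one detection per pixel per frame:

```python
import numpy as np

def estimate_flux(binary_frames):
    """Maximum-likelihood photon flux (photons per frame) from a
    stack of binary SPAD frames, assuming Poisson arrivals and at
    most one detection per pixel per frame."""
    n = binary_frames.shape[0]
    p = np.clip(binary_frames.sum(axis=0) / n, 0.0, 1.0 - 1e-9)
    return -np.log1p(-p)                 # lambda_hat = -ln(1 - p)

# synthetic example: 10,000 binary frames of an 8x8 sensor
rng = np.random.default_rng(0)
true_flux = 0.5                          # photons per frame
detect_prob = 1.0 - np.exp(-true_flux)   # per-frame detection prob.
frames = rng.random((10000, 8, 8)) < detect_prob
lam = estimate_flux(frames)              # close to 0.5 everywhere
```

The log correction undoes the saturation of a binary pixel: at high flux, the fraction of frames with a detection approaches 1 even as the true flux keeps growing.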
Award ID(s):
1846884
NSF-PAR ID:
10499099
Author(s) / Creator(s):
Publisher / Repository:
Journal of Electronic Imaging
Date Published:
Journal Name:
Journal of Electronic Imaging
Volume:
31
Issue:
03
ISSN:
1017-9909
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.

    Non-Line-Of-Sight (NLOS) imaging aims to recover the 3D geometry of objects that are hidden from the direct line of sight. One major challenge with this technique is the weak available multibounce signal, which limits scene size, capture speed, and reconstruction quality. To overcome this obstacle, we introduce a multipixel time-of-flight non-line-of-sight imaging method combining specifically designed Single Photon Avalanche Diode (SPAD) array detectors with a fast reconstruction algorithm that captures and reconstructs live low-latency videos of non-line-of-sight scenes with natural non-retroreflective objects. We develop a model of the signal-to-noise ratio of non-line-of-sight imaging and use it to devise a method that reconstructs the scene such that signal-to-noise ratio, motion blur, angular resolution, and depth resolution are all independent of scene depth, suggesting that reconstruction of very large scenes may be possible.

     
  2.
    Single-photon avalanche diodes (SPADs) are a rapidly developing image sensing technology with extreme low-light sensitivity and picosecond timing resolution. These unique capabilities have enabled SPADs to be used in applications like LiDAR, non-line-of-sight imaging, and fluorescence microscopy that require imaging in photon-starved scenarios. In this work, we harness these capabilities to deal with motion blur in a passive imaging setting under low illumination. Our key insight is that the data captured by a SPAD array camera can be represented as a 3D spatio-temporal tensor of photon detection events, which can be integrated along arbitrary spatio-temporal trajectories with dynamically varying integration windows, depending on scene motion. We propose an algorithm that estimates pixel motion from photon timestamp data and dynamically adapts the integration windows to minimize motion blur. Our simulation results show the applicability of this algorithm to a variety of motion profiles including translation, rotation, and local object motion. We also demonstrate the real-world feasibility of our method on data captured using a 32×32 SPAD camera.
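The spatio-temporal integration idea can be sketched as follows. Here the per-frame pixel shifts stand in for the motion estimate, which in the actual method is derived from the photon timestamp data (a toy sketch, not the authors' implementation):

```python
import numpy as np

def integrate_along_trajectory(tensor, shifts):
    """Sum binary photon detections along a per-frame pixel
    trajectory (dy/dx shifts in pixels) so that a moving scene
    point accumulates into a single output pixel."""
    T, H, W = tensor.shape
    out = np.zeros((H, W))
    for t in range(T):
        dy, dx = shifts[t]
        # undo the motion at frame t before accumulating
        out += np.roll(tensor[t], (-dy, -dx), axis=(0, 1))
    return out

# toy example: a bright pixel translating one column per frame
tensor = np.zeros((4, 5, 5))
for t in range(4):
    tensor[t, 2, t] = 1.0
shifts = [(0, t) for t in range(4)]      # assumed-known motion
img = integrate_along_trajectory(tensor, shifts)
# all four detections pile up at (2, 0) instead of smearing
```

A naive sum over frames would spread the detections over four pixels (motion blur); integrating along the trajectory concentrates them, trading blur for noise reduction.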
    Segmentation of moving objects in dynamic scenes is a key process in scene understanding for navigation tasks. Classical cameras suffer from motion blur in such scenarios, rendering them ineffective. By contrast, event cameras, with their high temporal resolution and lack of motion blur, are tailor-made for this problem. We present an approach for monocular multi-motion segmentation that combines bottom-up feature tracking and top-down motion compensation into a unified pipeline, which to our knowledge is the first of its kind. Using the events within a time interval, our method segments the scene into multiple motions by splitting and merging. We further speed up our method using the concepts of motion propagation and cluster keyslices. The approach was successfully evaluated on both challenging real-world and synthetic scenarios from the EV-IMO, EED, and MOD datasets and outperformed the state-of-the-art detection rate by 12%, achieving new state-of-the-art average detection rates of 81.06%, 94.2%, and 82.35% on the aforementioned datasets. To enable further research and systematic evaluation of multi-motion segmentation, we present and open-source a new dataset/benchmark called MOD++, which includes challenging sequences and extensive data stratification in terms of camera and object motion, velocity magnitudes, direction, and rotational speeds.
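Top-down motion compensation for events relies on the fact that warping events by the correct image motion produces a sharp accumulated image. A minimal contrast-maximization sketch (a standard event-camera technique, not this paper's full pipeline), using synthetic events from an edge moving at an assumed 10 px/s:

```python
import numpy as np

def contrast(events, v, size=32):
    """Warp events (t, x, y) by candidate image velocity v=(vx, vy),
    accumulate into an image, and score it by variance; the true
    motion yields the sharpest (highest-contrast) accumulation."""
    img = np.zeros((size, size))
    for t, x, y in events:
        xw = int(round(x - v[0] * t)) % size
        yw = int(round(y - v[1] * t)) % size
        img[yw, xw] += 1
    return img.var()

# synthetic events: a vertical edge at x = 5 + 10 t (vx = 10 px/s)
events = [(t, 5 + 10 * t, y)
          for t in np.linspace(0.0, 1.0, 50)
          for y in range(32)]

# grid search over candidate horizontal velocities
best = max([0.0, 5.0, 10.0, 15.0],
           key=lambda vx: contrast(events, (vx, 0.0)))
# best recovers the true velocity, 10.0
```

In a multi-motion setting this scoring is applied per cluster of events, so each independently moving object gets its own compensating warp.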
  4.
    Drilling and milling are material removal processes used in everyday conventional production, especially in the high-speed metal cutting industry. Monitoring tool information (wear, dynamic behavior, deformation, etc.) is essential to guarantee successful product fabrication. Many methods have been applied to monitor cutting tools using information from cutting force, spindle motor current, vibration, and acoustic emission. However, those methods are indirect and sensitive to environmental noise. Here, we study an in-process imaging technique that captures cutting tool information during metal cutting. Just as machinists judge whether a tool is worn out by eye, a vision system can directly assess machine tool performance. We propose a phase-shifted strobo-stereoscopic method (Figure 1) for three-dimensional (3D) imaging. Stroboscopic instruments are usually applied to the measurement of fast-moving objects. The operating principle is as follows: when the frequency of the light source illumination is synchronized with the motion of the object, the object appears stationary. The motion frequency of the target is derived from the count information of the encoder signals of the working rotary spindle. If a small offset is added to the frequency, the object appears to move or rotate slowly. This effect serves as the source of the phase shifting; with this phase information, the target can be 3D reconstructed over a full 360-degree view. The stereoscopic technique uses two CCD cameras, placed bilaterally symmetric with respect to the target, to capture images. The 3D scene is reconstructed from the locations of corresponding object points in the left and right images. In the proposed system, an air spindle was used to ensure motion accuracy and drilling/milling speed.
As shown in Figure 2, two CCDs with 10X objective lenses were installed on a linear rail with rotary stages to capture raw pictures of the machine tool bit for 3D reconstruction. The overall measurement process is summarized in the flow chart (Figure 3). Because the count of encoder signals is related to the rotary speed, the input speed (in RPM) was set as the reference signal controlling the frequency (f0) of the LED illumination. When the frequency matched the reference signal, both CCDs began capturing images. With the mismatched frequency (Δf), a sequence of phase-shifted images was gathered for a whole-view 3D reconstruction. The study in this paper is based on monitoring the performance of a 3/8'' drilling tool. This paper presents the principle of the phase-shifted strobo-stereoscopic 3D imaging process. The hardware set-up is introduced, as well as the 3D imaging algorithm. Analysis of the reconstructed images at different working speeds is discussed, including the reconstruction resolution. The uncertainty of the imaging process and of the assembled system is also analyzed. Because the input signal is the working speed, no information from other sources is required. The proposed method can be applied as an on-machine, or even in-process, metrology technique. As a direct 3D machine vision approach, it can directly provide machine tool surface and fatigue information. The presented method fills a gap in determining the performance status of machine tools, further safeguarding the fabrication process.
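The frequency-offset principle can be made concrete with a little arithmetic; the 6000 RPM spindle speed and 0.5 Hz offset below are hypothetical values chosen for illustration, not taken from the paper:

```python
def phase_step_deg(f_spin_rpm, delta_f_hz):
    """Apparent angular step per strobe flash when the LED fires at
    the spindle frequency plus a small offset delta_f."""
    f0 = f_spin_rpm / 60.0               # spindle revolutions per second
    fs = f0 + delta_f_hz                 # strobe flash rate
    # per flash the tool completes f0/fs of a revolution, so it
    # appears to slip by (1 - f0/fs) of a revolution each flash
    return 360.0 * (1.0 - f0 / fs)

step = phase_step_deg(6000, 0.5)         # ~1.79 degrees per flash
flashes_per_rev = 360.0 / step           # flashes for a full 360° view
```

With a matched frequency (Δf = 0) the step is zero and the tool appears frozen; a small Δf slowly sweeps the apparent pose through all phases, supplying the phase-shifted image sequence for whole-view reconstruction.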
    Optical projection tomography (OPT) is a powerful imaging modality for attaining high-resolution absorption and fluorescence imaging in tissue samples and embryos with a diameter of roughly 1 mm. Beyond this 1 mm limit, scattered light becomes the dominant fraction detected, adding significant “blur” to OPT. Time-domain OPT has been used to select early-arriving photons that have taken a more direct route through the tissue, reducing the detection of the scattered photons that cause image-domain blur in these larger samples [1]. In addition, our group recently demonstrated that detection of scattered photons could be further suppressed by running in a “deadtime” regime, where laser repetition rates are chosen such that the deadtime incurred by early-arriving photons acts as a shutter against later-arriving scattered photons [2]. In this regime, far greater early-photon count rates are achievable than with standard early-photon OPT. In this work, another advantage of this enhanced early-photon collection approach is demonstrated: a significant improvement in signal-to-noise ratio. In single-photon counting detectors, the main source of noise is “afterpulsing,” which is essentially leftover charge from a detected photon that spuriously results in a second photon count. When photon arrivals are time-stamped by the time-correlated single photon counting (TCSPC) module, the rate constant governing afterpulsing is slow compared to the time scale of the detected light pulse, so it is observed as a background signal with very little time correlation. This signal is present in all time gates and so adds noise to the detection of early photons.
However, since the afterpulsing signal is proportional to the total rate of photon detection, our enhanced early-photon approach is uniquely able to increase early-photon counts with no appreciable increase in afterpulsing, because the overall count rate does not change: as the rate of early-photon detection goes up, the rate of late-photon detection drops commensurately, yielding no net change in the overall rate of photons detected. This hypothesis was tested on a 4 mm diameter tissue-mimicking phantom (μa = 0.02 mm⁻¹, μs′ = 1 mm⁻¹) by varying the power of a 10 MHz pulsed 780-nm laser with a pulse spread of < 100 fs (Calmar, USA), using an avalanche photodiode (MPD, Picoquant, Germany) and a TCSPC module (HydraHarp, Picoquant, Germany) for light detection. Details of the results are in Fig. 1a; of note, we observed more than a 60-fold improvement in SNR compared to conventional early-photon detection, which would have taken 1000 times longer to achieve the same early-photon count. A demonstration of the achievable resolution is in Fig. 1b, an image of a 4-mm-thick human breast cancer biopsy in which tumor spiculations of less than 100 μm diameter are observable. [1] Fieramonti, L. et al., PLoS ONE (2012). [2] Sinha, L. et al., Optics Letters (2016).
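The afterpulsing argument can be illustrated numerically: because the afterpulsing background scales with the total detection rate, raising the early-photon share at a constant total rate raises the signal without raising the background. A toy shot-noise SNR sketch with hypothetical rates (not the paper's measured values):

```python
def early_photon_snr(early_rate, total_rate, afterpulse_prob,
                     gate_frac, t_acq):
    """Shot-noise SNR of early-photon counts with an afterpulsing
    background proportional to the *total* detection rate
    (illustrative model only)."""
    signal = early_rate * t_acq
    # afterpulses are nearly uncorrelated in time, so only the
    # fraction landing inside the early gate contaminates it
    background = afterpulse_prob * total_rate * t_acq * gate_frac
    return signal / (signal + background) ** 0.5

# conventional early-photon detection: few early counts
conv = early_photon_snr(1e3, 1e6, 0.01, 0.05, 1.0)
# deadtime regime: far more early counts at the same total rate
enh = early_photon_snr(2e5, 1e6, 0.01, 0.05, 1.0)
# enh exceeds conv by more than an order of magnitude
```

The background term is identical in both calls, since it depends only on the total rate; only the signal term grows in the deadtime regime, which is the mechanism behind the reported SNR gain.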