Single-photon avalanche detectors (SPADs) are crucial sensors of light for many fields and applications. However, they are not able to resolve photon number, so typically more complex and more expensive experimental setups or devices must be used to measure the number of photons in a pulse. Here, we present a methodology for performing photon number-state reconstruction with only one SPAD. The methodology, which is cost-effective and easy to implement, uses maximum-likelihood techniques with a detector model whose parameters are measurable. We achieve excellent agreement between known input pulses and their reconstructions for coherent states with up to ≈10 photons and peak input photon rates up to several Mcounts/s. When detector imperfections are small, we maintain good agreement for coherent pulses with peak input photon rates of over 40 Mcounts/s, greater than one photon per detector dead time. For anti-bunched light, the reconstructed and independently measured pulse-averaged values of g(2)(0) are also consistent with one another. Our algorithm is applicable to light pulses whose pulse width and correlation time scales are both at least a few detector dead times. These results, achieved with single commercially available SPADs, provide an inexpensive number-state reconstruction method and expand the capabilities of single-photon detectors.

- PAR ID: 10132112
- Publisher / Repository: Optical Society of America
- Journal Name: Optics Express
- Volume: 28
- Issue: 3
- ISSN: 1094-4087; OPEXFF
- Format(s): Medium: X; Size: Article No. 3660
- Sponsoring Org: National Science Foundation

More Like this
-
We investigate the feasibility and performance of photon-number-resolved photodetection employing single-photon avalanche photodiodes (SPADs) with low dark counts. While the main idea, to split
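The headline abstract's single-SPAD reconstruction rests on maximum-likelihood inversion of a calibrated detector model. As a much-simplified, hypothetical sketch (ignoring the dead-time and time-binning effects the paper's full model handles), one can ML-estimate the mean photon number of a coherent pulse purely from click/no-click statistics; the efficiency `eta`, dark-count probability `p_dark`, and pulse parameters below are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical detector parameters: efficiency and per-pulse dark-count
# probability (both assumed measurable, as in the paper's detector model).
eta = 0.6
p_dark = 0.01
mu_true = 2.5          # mean photon number of the simulated coherent pulses
n_pulses = 200_000

# Simulate: each photon is detected independently with probability eta, so a
# pulse with n photons clicks with probability 1 - (1 - eta)**n; dark counts
# can also fire the detector.
n_photons = rng.poisson(mu_true, n_pulses)
detected = rng.random(n_pulses) < 1.0 - (1.0 - eta) ** n_photons
dark = rng.random(n_pulses) < p_dark
clicks = detected | dark

# Maximum-likelihood estimate of mu from the no-click fraction:
# P(no click) = (1 - p_dark) * exp(-eta * mu), inverted for mu.
f_noclick = 1.0 - clicks.mean()
mu_hat = -np.log(f_noclick / (1.0 - p_dark)) / eta
```

The inversion works because a coherent state hits the detector with Poissonian photon statistics, so the no-click probability depends on the mean photon number alone.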
Megapixel single-photon avalanche diode (SPAD) arrays have been developed recently, opening up the possibility of deploying SPADs as general-purpose passive cameras for photography and computer vision. However, most previous work on SPADs has been limited to monochrome imaging. We propose a computational photography technique that reconstructs high-quality color images from mosaicked binary frames captured by a SPAD array, even for high-dynamic-range (HDR) scenes with complex and rapid motion. Inspired by conventional burst photography approaches, we design algorithms that jointly denoise and demosaick single-photon image sequences. Based on the observation that motion effectively increases the color sample rate, we design a blue-noise pseudorandom RGBW color filter array for SPADs, which is tailored for imaging dark, dynamic scenes. Results on simulated data, as well as real data captured with a fabricated color SPAD hardware prototype, show that the proposed method can reconstruct high-quality images with minimal color artifacts even for challenging low-light, HDR and fast-moving scenes. We hope that this paper, by adding color to computational single-photon imaging, spurs rapid adoption of SPADs for real-world passive imaging applications.
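The binary frames such methods work on saturate softly, since each one-bit pixel only reports whether at least one photon arrived. A minimal sketch of the standard maximum-likelihood inversion of that binary response (not the paper's full joint denoise-demosaick pipeline, and with made-up flux values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-exposure photon rates for a tiny 2x2 "scene" (real SPAD
# arrays are megapixel); values chosen to span dark to near-saturating.
flux = np.array([[0.05, 0.2], [0.8, 2.0]])
n_frames = 50_000

# Each one-bit frame records whether at least one photon arrived during the
# exposure: P(bit = 1) = 1 - exp(-flux).
bits = rng.random((n_frames, 2, 2)) < 1.0 - np.exp(-flux)

# Maximum-likelihood inversion of the binary response recovers the flux,
# undoing the soft saturation of single-photon detection.
p_hat = bits.mean(axis=0)
flux_hat = -np.log1p(-p_hat)
```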
-
Single-photon avalanche diodes (SPADs) are a rapidly developing image sensing technology with extreme low-light sensitivity and picosecond timing resolution. These unique capabilities have enabled SPADs to be used in applications like LiDAR, non-line-of-sight imaging and fluorescence microscopy that require imaging in photon-starved scenarios. In this work we harness these capabilities for dealing with motion blur in a passive imaging setting in low illumination conditions. Our key insight is that the data captured by a SPAD array camera can be represented as a 3D spatio-temporal tensor of photon detection events which can be integrated along arbitrary spatio-temporal trajectories with dynamically varying integration windows, depending on scene motion. We propose an algorithm that estimates pixel motion from photon timestamp data and dynamically adapts the integration windows to minimize motion blur. Our simulation results show the applicability of this algorithm to a variety of motion profiles including translation, rotation and local object motion. We also demonstrate the real-world feasibility of our method on data captured using a 32×32 SPAD camera.
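The key insight, integrating photon detections along motion trajectories instead of over fixed pixels, can be sketched in one dimension. Here the trajectory is assumed known, whereas the paper estimates it from photon timestamps; all rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D version: a bright spot (rate 1.0 photons/frame) moves one pixel per
# frame across a 16-pixel line of SPAD pixels over a 0.02 background.
n_pix, n_frames = 16, 8
rate = np.full((n_frames, n_pix), 0.02)
rate[np.arange(n_frames), np.arange(n_frames)] = 1.0  # spot at pixel t in frame t
frames = rng.random((n_frames, n_pix)) < 1.0 - np.exp(-rate)

# Naive integration smears the spot over 8 pixels; shifting each frame back
# along the (known) trajectory keeps all spot counts in one pixel.
naive = frames.sum(axis=0)
aligned = sum(np.roll(frames[t], -t) for t in range(n_frames))
```

Varying the integration window per trajectory (longer for slow motion, shorter for fast) is the dynamic part of the paper's method and is omitted here.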
-
Single-photon light detection and ranging (LiDAR) techniques use emerging single-photon avalanche diodes (SPADs) to push 3D imaging capabilities to unprecedented ranges. However, it remains challenging to robustly estimate scene depth from the noisy and otherwise corrupted measurements recorded by a SPAD. Here, we propose a deep sensor fusion strategy that combines corrupted SPAD data and a conventional 2D image to estimate the depth of a scene. Our primary contribution is a neural network architecture, SPADnet, that uses a monocular depth estimation algorithm together with a SPAD denoising and sensor fusion strategy. This architecture, together with several techniques in network training, achieves state-of-the-art results for RGB-SPAD fusion with simulated and captured data. Moreover, SPADnet is more computationally efficient than previous RGB-SPAD fusion networks.
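SPADnet itself is a learned network, so as a self-contained stand-in, here is the classical matched-filter depth estimate from a noisy SPAD timing histogram that such learned approaches build on and improve; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-pixel SPAD timing histogram: a Gaussian laser return centred on
# bin 120 of 256 over a uniform background (values chosen for illustration).
n_bins, true_bin = 256, 120
t = np.arange(n_bins)
pulse = np.exp(-0.5 * ((t - n_bins // 2) / 2.0) ** 2)  # known pulse template
rate = 0.05 + 15.0 * np.exp(-0.5 * ((t - true_bin) / 2.0) ** 2)
hist = rng.poisson(rate)

# Matched filter: correlate the mean-subtracted histogram with the pulse
# template and take the best-scoring lag as the depth bin.
score = np.correlate(hist - hist.mean(), pulse - pulse.mean(), mode="same")
depth_bin = int(np.argmax(score))
```

The matched filter degrades badly at low photon counts and high background, which is the regime where fusing in a conventional 2D image, as SPADnet does, pays off.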
-
Active 3D imaging systems have broad applications across disciplines, including biological imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D imaging in practical scenarios where widely varying photon counts are observed.
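Pileup arises because the SPAD records only the first photon of each laser cycle, skewing the histogram toward early bins. A classical (Coates-style) correction, shown here on synthetic data with illustrative rates, renormalises each bin by the cycles still "alive" before inverting; the paper's probabilistic model goes beyond this baseline:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative per-bin photon rates: a return strong enough that pileup
# matters, because only the first photon per cycle is recorded.
n_bins, n_cycles = 64, 200_000
lam = 0.01 + 0.5 * np.exp(-0.5 * ((np.arange(n_bins) - 20) / 3.0) ** 2)

p = 1.0 - np.exp(-lam)                     # per-bin detection probability
hits = rng.random((n_cycles, n_bins)) < p  # all would-be detections
any_hit = hits.any(axis=1)
first = hits.argmax(axis=1)[any_hit]       # only the first one is recorded
h = np.bincount(first, minlength=n_bins)

# Naive inversion ignores pileup and badly underestimates the peak...
naive_hat = -np.log1p(-h / n_cycles)
# ...while the Coates-style correction divides each bin's counts by the
# cycles with no earlier detection before inverting the exponential.
alive = n_cycles - np.concatenate(([0], np.cumsum(h)[:-1]))
lam_hat = -np.log1p(-h / alive)
```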