Title: Low-Light High Dynamic Range Single Frame Image Denoising for Quanta Image Sensors
Imaging low-light high dynamic range (HDR) scenes in a single capture is challenging for conventional sensors when exposure bracketing is not feasible due to application constraints. Advances in sensor technology have narrowed the gap: split-pixel and dual conversion gain (DCG) designs enable single-frame HDR capture, and Quanta Image Sensors (QIS) allow counting individual photons at low light. However, removing shot noise from a single HDR image remains difficult because the noise is spatially varying. To address this issue, we propose a learnable pipeline with a modular design for processing high bit-depth QIS raw images. Compared to existing algorithmic solutions, our approach offers superior reconstruction performance and greater robustness to variations in illuminance and noise.
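The shot noise referred to above follows the Poisson statistics of photon arrivals, so its variance grows with the signal and differs from pixel to pixel across an HDR scene. Below is a minimal sketch of that image-formation model together with the classical Anscombe variance-stabilizing transform often applied before a (learned or classical) denoiser; it is illustrative background, not the paper's pipeline, and every function name and parameter value is an assumption.

```python
# Minimal sketch (not the paper's code): Poisson shot noise in a QIS raw frame
# and the Anscombe variance-stabilizing transform. All names/values are assumed.
import numpy as np

rng = np.random.default_rng(0)

def qis_measure(flux, t_exp=1.0, qe=0.8, full_well=65535):
    """Simulate a high bit-depth QIS raw frame from a photon-flux map.

    flux      : photons/s per jot (2-D array)
    t_exp     : exposure time, seconds
    qe        : quantum efficiency
    full_well : counter clipping level (a 16-bit counter is assumed here)
    """
    mean_counts = qe * flux * t_exp        # expected photo-electrons per jot
    counts = rng.poisson(mean_counts)      # shot noise: Poisson photon arrivals
    return np.clip(counts, 0, full_well)   # saturation at full well

def anscombe(x):
    """Map Poisson counts to approximately unit-variance Gaussian noise."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# HDR test scene: illuminance sweeping four orders of magnitude left to right.
flux = np.logspace(0, 4, 256)[None, :] * np.ones((256, 1))
raw = qis_measure(flux)
stabilized = anscombe(raw)                 # noise level now ~uniform until clipping
```

After stabilization the noise is approximately Gaussian with unit variance wherever the counts are not clipped, which is what lets a single denoiser operate across the whole illuminance range; near full well the model breaks down, one motivation for pipelines that adapt across the range.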
Award ID(s):
2335309
PAR ID:
10658990
Author(s) / Creator(s):
; ;
Publisher / Repository:
International Image Sensor Workshop
Date Published:
Format(s):
Medium: X Other: PDF
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract—Accurately capturing dynamic scenes with wide-ranging motion and light intensity is crucial for many vision applications. However, acquiring high-speed high dynamic range (HDR) video is challenging because the camera's frame rate restricts its dynamic range. Existing methods sacrifice speed to acquire multi-exposure frames, yet misaligned motion across these frames can still complicate HDR fusion algorithms, producing artifacts. Instead of frame-based exposures, we sample the video using individual pixels at varying exposures and phase offsets. Implemented on a monochrome pixel-wise programmable image sensor, our sampling pattern captures fast motion at a high dynamic range. We then transform the pixel-wise outputs into an HDR video using end-to-end learned weights from deep neural networks, achieving high spatiotemporal resolution with minimal motion blur. We demonstrate aliasing-free HDR video acquisition at 1000 FPS, resolving fast motion under low-light conditions and against bright backgrounds, both challenging conditions for conventional cameras. By combining the versatility of pixel-wise sampling patterns with the strength of deep neural networks at decoding complex scenes, our method greatly enhances the vision system's adaptability and performance in dynamic conditions. Index Terms—High-dynamic-range video, high-speed imaging, CMOS image sensors, programmable sensors, deep learning, convolutional neural networks.
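A toy illustration of the pixel-wise sampling idea follows: a small set of exposure lengths is tiled across the array, and each pixel's exposure window is given a phase offset, so at any instant different pixels are at different points in their integration cycle. The actual sampling pattern and the sensor's programming interface are not reproduced here; every name and value below is an assumption.

```python
# Toy pixel-wise exposure pattern: a 2x2 super-pixel tile assigns one of four
# exposure lengths, and random phase offsets stagger the readout times.
# Illustrative only; this is not the paper's sampling pattern.
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8
exposures = np.array([1, 2, 4, 8])        # exposure lengths in frame ticks (assumed)

# Repeat a 2x2 tile of exposure indices across the sensor.
idx = (np.arange(H)[:, None] % 2) * 2 + (np.arange(W)[None, :] % 2)
exp_map = exposures[idx]                  # per-pixel exposure length
phase_map = rng.integers(0, exp_map)      # per-pixel start-time offset

def readout_mask(t):
    """Pixels whose exposure window closes (and is read out) at global tick t."""
    return (t - phase_map) % exp_map == 0

# Short-exposure pixels track bright, fast content; long-exposure pixels
# integrate dark regions; the phase offsets spread the samples over time.
print(readout_mask(5).sum(), "of", H * W, "pixels read out at tick 5")
```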
  2. Reconstruction of high-resolution extreme dynamic range images from a small number of low dynamic range (LDR) images is crucial for many computer vision applications. Current high dynamic range (HDR) cameras based on CMOS image sensor technology rely on multi-exposure bracketing, which suffers from motion artifacts and signal-to-noise ratio (SNR) dip artifacts in extreme dynamic range scenes. Recently, single-photon cameras (SPCs) have been shown to achieve orders-of-magnitude higher dynamic range for passive imaging than conventional CMOS sensors. SPCs are becoming increasingly available commercially, even in some consumer devices. Unfortunately, current SPCs suffer from low spatial resolution. To overcome the limitations of CMOS and SPC sensors, we propose a learning-based CMOS-SPC fusion method to recover high-resolution extreme dynamic range images. We compare the performance of our method against various traditional and state-of-the-art baselines using both synthetic and experimental data. Our method outperforms these baselines in both visual quality and quantitative metrics.
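The dynamic-range advantage of SPCs comes from the standard passive single-photon imaging model: over many short binary frames, the per-frame probability of recording at least one photon saturates only slowly, so the flux can still be inverted at very bright pixels. The sketch below shows that model and its maximum-likelihood inversion; it is generic background, not the paper's fusion method, and the parameters are assumptions.

```python
# Standard passive single-photon imaging model (not the paper's fusion code).
# Frame count and frame duration below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def spc_capture(phi, n_frames=2000, tau=1e-4):
    """Count binary frames with >=1 detection: Binomial(n, 1 - exp(-phi*tau))."""
    p_detect = 1.0 - np.exp(-phi * tau)
    return rng.binomial(n_frames, p_detect)

def flux_mle(y, n_frames=2000, tau=1e-4):
    """Maximum-likelihood flux: invert the per-frame detection probability."""
    p_hat = np.clip(y / n_frames, 0.0, 1.0 - 1e-9)   # guard the log at saturation
    return -np.log1p(-p_hat) / tau

phi = np.logspace(1, 4, 4)          # true flux spanning three decades
y = spc_capture(phi)
print(flux_mle(y).round(1))         # tracks phi across the range (noisy when dark)
```

Because the estimator inverts a probability rather than reading an analog well that fills up, saturation is soft, which is the mechanism behind the extreme dynamic range cited above.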
  3. Megapixel single-photon avalanche diode (SPAD) arrays have been developed recently, opening up the possibility of deploying SPADs as general-purpose passive cameras for photography and computer vision. However, most previous work on SPADs has been limited to monochrome imaging. We propose a computational photography technique that reconstructs high-quality color images from mosaicked binary frames captured by a SPAD array, even for high-dynamic-range (HDR) scenes with complex and rapid motion. Inspired by conventional burst photography approaches, we design algorithms that jointly denoise and demosaick single-photon image sequences. Based on the observation that motion effectively increases the color sample rate, we design a blue-noise pseudorandom RGBW color filter array for SPADs, tailored for imaging dark, dynamic scenes. Results on simulated data, as well as real data captured with a fabricated color SPAD hardware prototype, show that the proposed method can reconstruct high-quality images with minimal color artifacts even for challenging low-light, HDR, and fast-moving scenes. We hope that this paper, by adding color to computational single-photon imaging, spurs rapid adoption of SPADs for real-world passive imaging applications.
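To make the mosaicked binary measurement concrete, the toy sketch below simulates Bernoulli photon detections under a pseudorandom RGBW mask and averages them over a burst. A plain random mask stands in for the paper's blue-noise design, and a naive per-channel inversion stands in for the learned joint denoise-and-demosaick step; all sizes and rates are assumptions.

```python
# Toy mosaicked binary single-photon burst. A plain random RGBW mask replaces
# the paper's blue-noise pattern; values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 64, 64, 400                      # sensor size and burst length (assumed)

# RGBW mask: panchromatic (W) pixels dominate to gather more light in the dark.
cfa = rng.choice(4, size=(H, W), p=[0.15, 0.15, 0.15, 0.55])  # 0:R 1:G 2:B 3:W

flux = np.full((H, W, 3), 0.02)            # mean photons per frame per channel
flux[:, : W // 2, 0] = 0.08                # a redder left half of the scene

# Per-pixel photon rate: R/G/B pixels see one channel; W pixels see the sum.
rate = flux.sum(-1)
for c in range(3):
    rate[cfa == c] = flux[cfa == c, c]

frames = rng.random((T, H, W)) < 1.0 - np.exp(-rate)  # Bernoulli binary frames
freq = frames.mean(0)                                  # temporal detection frequency

# Naive inversion of the Bernoulli rate (the paper instead learns a joint
# denoise + demosaick); red samples exist only at red-filter pixels.
est = -np.log1p(-np.clip(freq, 0.0, 0.999))
red_samples = np.where(cfa == 0, est, np.nan)
```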
  4. Abstract—Strong light–matter interactions in two-dimensional layered materials (2D materials) have attracted the interest of researchers from interdisciplinary fields for more than a decade. A unique phenomenon in some 2D materials is their large exciton binding energies (BEs), which increase the likelihood of exciton survival at room temperature. It is this large BE that mediates the intense light–matter interactions of many 2D materials, particularly in the monolayer limit, where the interplay of excitonic phenomena poses a wealth of opportunities for high-performance optoelectronics and quantum photonics. Within quantum photonics, quantum information science (QIS) is growing rapidly; photons are a promising platform for information processing due to their low-noise properties, excellent modal control, and long-distance propagation. A central element for QIS applications is a single photon emitter (SPE) source, where an ideal on-demand SPE emits exactly one photon at a time into a given spatiotemporal mode. Recently, 2D materials have shown practical appeal for QIS, driven directly by their unique layered crystalline structure. This structural attribute facilitates their integration with optical elements more easily than SPEs in conventional three-dimensional solid-state materials, such as diamond and SiC. In this review article, we discuss recent advances made with 2D materials toward their use as quantum emitters, where the SPE emission properties may be modulated deterministically. The use of unique scanning tunneling microscopy tools for the in-situ generation and characterization of defects is presented, along with theoretical first-principles frameworks and machine-learning approaches to model the structure-property relationship of exciton–defect interactions within the lattice toward SPEs. Given the rapid progress made in this area, SPEs in 2D materials are emerging as promising sources of nonclassical light, well poised to advance quantum photonics in the future.
  5. Time-resolved image sensors that capture light at pico- to nanosecond timescales were once limited to niche applications but are now rapidly becoming mainstream in consumer devices. We propose low-cost and low-power imaging modalities that capture scene information from minimal time-resolved image sensors with as few as one pixel. The key idea is to flood-illuminate large scene patches (or the entire scene) with a pulsed light source and measure the time-resolved reflected light by integrating over the entire illuminated area. The one-dimensional measured temporal waveform, called a transient, encodes both the distances and albedos of all visible scene points, and as such is an aggregate proxy for the scene's 3D geometry. We explore the viability and limitations of transient waveforms for recovering scene information, both on their own and combined with traditional RGB cameras. We show that plane estimation can be performed from a single transient and that, using only a few more, it is possible to recover a depth map of the whole scene. We also show two proof-of-concept hardware prototypes that demonstrate the feasibility of our approach for compact, mobile, and budget-limited applications.
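As a worked example of how a transient encodes geometry, the sketch below bins each visible point's round-trip time of flight 2d/c, weighted by its albedo, into a single temporal histogram; a fronto-parallel plane reappears as one sharp peak whose bin index gives the depth. Pulse shape, inverse-square falloff, and sensor jitter are omitted, and all names and values are assumptions.

```python
# Minimal sketch of the single-pixel transient image-formation model described
# above (not the paper's code). Bin count and bin width are assumed values.
import numpy as np

C = 3e8                                     # speed of light, m/s

def simulate_transient(depths, albedos, n_bins=512, bin_width=1e-10):
    """Aggregate returns from all illuminated points into one temporal histogram.

    Each visible point contributes its albedo at round-trip time 2*d/c; the
    single-pixel waveform is the sum of all such contributions.
    """
    tof = 2.0 * depths / C                  # round-trip times, seconds
    bins = np.clip((tof / bin_width).astype(int), 0, n_bins - 1)
    transient = np.zeros(n_bins)
    np.add.at(transient, bins, albedos)     # accumulate overlapping returns
    return transient

# A fronto-parallel plane at 1.5 m shows up as a single sharp peak:
rng = np.random.default_rng(0)
depths = 1.5 + rng.normal(0.0, 0.002, 10000)
transient = simulate_transient(depths, np.ones(10000))
print(transient.argmax() * 1e-10 * C / 2)   # ~1.5 m, recovered from the peak bin
```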