Light transport encodes all information about how light travels from a light source to an image sensor. Dual photography, an important application of light transport, has been a popular research topic, but it is challenged by long acquisition times, low signal-to-noise ratios, and the storage and processing of a large number of measurements. In this Letter, we propose a novel hardware setup that combines a flying-spot micro-electro-mechanical-system (MEMS) modulated projector with an event camera to implement dual photography for 3D scanning in both line-of-sight (LoS) and non-line-of-sight (NLoS) scenes containing a transparent object. In particular, we achieve depth extraction from the LoS scenes and 3D reconstruction of the object in an NLoS scene using event light transport.
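As an illustrative sketch of the dual-photography principle behind such a setup (not the Letter's actual pipeline): a flying-spot scan lights one projector pixel per time slot, so each event can be attributed to a single column of the light-transport matrix, and transposing that matrix yields the dual image by Helmholtz reciprocity. The event tuple format and the `spot_index_at` timing map below are assumptions for illustration.

```python
# Illustrative sketch only: accumulate a light-transport matrix from a
# flying-spot scan observed by an event camera, then form the dual image
# by transposing it (dual photography). Event format and the
# time-to-projector-pixel mapping `spot_index_at` are assumed, not taken
# from the Letter.
import numpy as np
from scipy.sparse import lil_matrix

def build_transport(events, spot_index_at, cam_shape, n_proj_pixels):
    """events: iterable of (x, y, t, polarity); spot_index_at(t) -> projector pixel index."""
    n_cam = cam_shape[0] * cam_shape[1]
    T = lil_matrix((n_cam, n_proj_pixels), dtype=np.float32)
    for x, y, t, pol in events:
        if pol > 0:                      # count brightness-increase events only
            i = y * cam_shape[1] + x     # flattened camera pixel index
            j = spot_index_at(t)         # projector pixel lit at time t
            T[i, j] += 1.0
    return T.tocsr()

def dual_image(T, cam_illumination, proj_shape):
    """Relight the scene as if the camera were the projector (dual view)."""
    d = T.T @ cam_illumination.ravel()   # T^T swaps the roles of camera and projector
    return d.reshape(proj_shape)
```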
Structured light illumination is an active three-dimensional scanning technique that uses a projector and camera pair to project and capture a series of stripe patterns; however, with a single camera and a single projector, structured light scanning suffers from scan occlusions, multi-path interference, and weak signal reflections. To address these issues, this paper proposes dual-projector scanning using a range of projector/camera arrangements. Unlike previous attempts at dual-projector scanning, the proposed scanner drives both light engines simultaneously, using temporal-frequency multiplexing to computationally decouple the projected patterns. Besides presenting the details of how such a system is built, we also present experimental results demonstrating how multiple projectors can be used to (1) minimize occlusions; (2) achieve higher signal-to-noise ratios by providing twice the brightness of a single projector; (3) reduce the number of component video frames required for a scan; and (4) detect multi-path interference.
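A minimal sketch of the temporal-frequency-multiplexing idea described above, assuming each projector's output is modulated at its own temporal carrier and a per-pixel Fourier transform of the captured frame stack separates the two contributions; the carrier frequencies, frame rate, and function names are placeholders rather than the paper's implementation.

```python
# Minimal sketch of temporal-frequency multiplexing for two projectors.
# Each projector's pattern is flickered at its own carrier frequency; a
# per-pixel temporal Fourier transform of the frame stack recovers each
# projector's contribution. All values below are illustrative only.
import numpy as np

def demultiplex(frames, fps, f1, f2):
    """frames: (T, H, W) stack captured while projector 1 is modulated at
    f1 Hz and projector 2 at f2 Hz. Returns per-projector amplitude images."""
    T = frames.shape[0]
    spectrum = np.fft.rfft(frames, axis=0)      # per-pixel temporal DFT
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    k1 = np.argmin(np.abs(freqs - f1))          # bin nearest each carrier
    k2 = np.argmin(np.abs(freqs - f2))
    img1 = 2.0 / T * np.abs(spectrum[k1])       # projector 1 component
    img2 = 2.0 / T * np.abs(spectrum[k2])       # projector 2 component
    return img1, img2
```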
- NSF-PAR ID: 10131772
- Publisher / Repository: Optical Society of America
- Journal Name: Applied Optics
- Volume: 59
- Issue: 4
- ISSN: 1559-128X; APOPAI
- Size: Article No. 964
- Sponsoring Org: National Science Foundation
More Like this
-
We present a deep learning based solution for separating the direct and global light transport components from a single photograph captured under high frequency structured lighting with a co-axial projector-camera setup. We employ an architecture with one encoder and two decoders that shares information between the encoder and the decoders, as well as between the two decoders, to ensure a consistent decomposition into the two light transport components. Furthermore, our deep learning separation approach does not require binary structured illumination, allowing us to utilize the full resolution capabilities of the projector. Consequently, our deep separation network is able to achieve high fidelity decompositions for lighting-frequency-sensitive features such as subsurface scattering and specular reflections. We evaluate and demonstrate our direct and global separation method on a wide variety of synthetic and captured scenes.
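A hypothetical PyTorch sketch of the one-encoder / two-decoder layout is given below; layer widths, depth, and the skip connection are assumptions, and the cross-decoder information sharing and training losses of the actual network are omitted.

```python
# Hypothetical sketch of a shared encoder feeding two decoders, one per
# light-transport component (direct vs. global). Not the paper's actual
# architecture; sizes and layers are placeholders.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class SeparationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # two decoders, one per light-transport component
        self.dec_direct = conv_block(64 + 32, 32)
        self.dec_global = conv_block(64 + 32, 32)
        self.out_direct = nn.Conv2d(32, 3, 1)
        self.out_global = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)                      # shared encoder features
        e2 = self.enc2(self.pool(e1))
        f = torch.cat([self.up(e2), e1], dim=1)  # skip connection used by both decoders
        direct = self.out_direct(self.dec_direct(f))
        global_ = self.out_global(self.dec_global(f))
        return direct, global_
```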
-
This paper presents an absolute phase unwrapping method for high-speed three-dimensional (3D) shape measurement. This method uses three phase-shifted patterns and one binary random pattern on a single-camera, single-projector structured light system. We calculate the wrapped phase from the phase-shifted images and determine the coarse correspondence through digital image correlation (DIC) between the captured binary random pattern of the object and the pre-captured binary random pattern of a flat surface. We then develop a computational framework that determines the fringe order pixel by pixel using the coarse correspondence information. Since only one additional pattern is used, the proposed method can be used for high-speed 3D shape measurement. Experimental results successfully demonstrated that the proposed method can achieve high-speed and high-quality measurement of complex scenes.
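A brief sketch of the two core computational steps, assuming a standard 2π/3 three-step phase shift and a coarse absolute-phase estimate already derived from the DIC correspondence; these are textbook phase-shifting relations rather than the paper's exact implementation.

```python
# Sketch of (1) wrapped phase from three phase-shifted images and
# (2) per-pixel fringe order from a coarse absolute-phase estimate
# (e.g. obtained via DIC against a reference plane). Variable names and
# the 2*pi/3 shift are assumptions for illustration.
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting with a 2*pi/3 shift between patterns."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def unwrap_with_coarse_phase(phi_wrapped, phi_coarse):
    """phi_coarse: a coarse absolute phase; it only needs to be accurate to within pi."""
    k = np.round((phi_coarse - phi_wrapped) / (2.0 * np.pi))   # fringe order per pixel
    return phi_wrapped + 2.0 * np.pi * k
```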
-
Measurement speed is a critical factor in reducing motion artifacts when capturing dynamic scenes. Phase-shifting methods have the advantage of providing high-accuracy and dense 3D point clouds, but the phase unwrapping process limits the measurement speed. This paper presents an absolute phase unwrapping method that uses only three speckle-embedded phase-shifted patterns for high-speed three-dimensional (3D) shape measurement on a single-camera, single-projector structured light system. The proposed method obtains the wrapped phase of the object from the speckle-embedded three-step phase-shifted patterns. Next, it utilizes the Semi-Global Matching (SGM) algorithm to establish the coarse correspondence between the image of the object with the embedded speckle pattern and the pre-obtained image of a flat surface with the same embedded speckle pattern. Then, a computational framework uses the coarse correspondence information to determine the fringe order pixel by pixel. The experimental results demonstrated that the proposed method can achieve high-speed and high-quality 3D measurements of complex scenes.
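A hedged sketch of how the coarse correspondence step could look with OpenCV's semi-global matcher, with placeholder parameters; the disparity-to-phase conversion assumes a hypothetical fringe pitch and feeds the same fringe-order assignment sketched after the previous abstract.

```python
# Illustrative use of OpenCV's semi-global matching to obtain a coarse
# correspondence between the speckle-embedded capture and a pre-captured
# flat-surface reference. Parameter values and the fringe-pitch conversion
# are placeholders, not the paper's settings.
import cv2
import numpy as np

def coarse_disparity(speckle_object, speckle_reference, num_disp=128, block=5):
    sgm = cv2.StereoSGBM_create(minDisparity=0,
                                numDisparities=num_disp,   # must be a multiple of 16
                                blockSize=block,
                                P1=8 * block * block,
                                P2=32 * block * block)
    # OpenCV returns fixed-point disparities scaled by 16
    disp = sgm.compute(speckle_object, speckle_reference).astype(np.float32) / 16.0
    return disp

def disparity_to_coarse_phase(disp, fringe_pitch_px):
    """Assumed conversion: one fringe period spans `fringe_pitch_px` pixels."""
    return 2.0 * np.pi * disp / fringe_pitch_px
```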
-
The need for high-speed imaging in applications such as biomedicine, surveillance, and consumer electronics has driven new developments in imaging systems. While industrial effort continuously pushes the advance of silicon focal plane array image sensors, imaging through a single-pixel detector has gained significant interest thanks to the development of computational algorithms. Here, we present a new imaging modality, deep compressed imaging via optimized-pattern scanning, which can significantly increase the acquisition speed of a single-detector-based imaging system. We project and scan an illumination pattern across the object and collect the sampling signal with a single-pixel detector. We develop an end-to-end optimized auto-encoder, combining a deep neural network with a compressed sensing algorithm, to optimize the illumination pattern, which allows us to faithfully reconstruct the image from a small number of measurements at a high frame rate. Compared with the conventional switching-mask-based single-pixel camera and point-scanning imaging systems, our method achieves a much higher imaging speed while retaining similar imaging quality. We experimentally validated this imaging modality under both continuous-wave illumination and pulsed light illumination and showed high-quality image reconstructions at a high compressed sampling rate. This new compressed sensing modality could be widely applied in different imaging systems, enabling new applications that require high imaging speeds.
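A toy PyTorch sketch of the end-to-end idea, in which the "encoder" is a set of learnable illumination patterns (a linear measurement operator) and a small decoder network reconstructs the image from the few single-pixel readings; sizes, layers, and training details are illustrative assumptions only.

```python
# Toy sketch of a compressed-sensing auto-encoder: learnable measurement
# patterns simulate the single-pixel readings, and a small decoder
# reconstructs the image. Architecture and sizes are placeholders.
import torch
import torch.nn as nn

class CompressedImager(nn.Module):
    def __init__(self, img_size=32, n_measurements=64):
        super().__init__()
        n_pixels = img_size * img_size
        # learnable illumination patterns, one row per single-pixel measurement
        self.patterns = nn.Parameter(torch.randn(n_measurements, n_pixels) * 0.01)
        self.decoder = nn.Sequential(
            nn.Linear(n_measurements, 512), nn.ReLU(),
            nn.Linear(512, n_pixels))
        self.img_size = img_size

    def forward(self, x):
        flat = x.flatten(1)                  # (batch, n_pixels), single-channel input
        y = flat @ self.patterns.t()         # simulated single-pixel measurements
        recon = self.decoder(y)              # reconstruction from few measurements
        return recon.view(-1, 1, self.img_size, self.img_size)

# Training would minimize e.g. nn.MSELoss()(model(batch), batch) so that the
# patterns and the decoder are optimized jointly, end to end.
```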