We propose a multi-stage calibration method that increases the overall accuracy of a large-scale structured light system by leveraging the conventional stereo calibration approach with a pinhole model. We first calibrate the intrinsic parameters at a near distance and then the extrinsic parameters with a low-cost, large calibration target at the designed measurement distance. Finally, we estimate pixel-wise errors from standard stereo 3D reconstructions and determine the pixel-wise phase-to-coordinate relationships using low-order polynomials. The calibrated pixel-wise polynomial functions can then reconstruct 3D coordinates from a given pixel phase value. We experimentally demonstrated that our proposed method achieves sub-millimeter accuracy over a large volume of 1200 (H) × 800 (V) × 1000 (D) mm³.
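The per-pixel polynomial mapping described above can be sketched as follows. This is a minimal illustration with synthetic calibration data; the grid size, polynomial order, and the linear phase-depth relation are assumptions for demonstration, not values from the paper:

```python
import numpy as np

# Hypothetical calibration data: for each pixel, K samples of
# (unwrapped phase, reference depth from stereo 3D reconstruction).
H, W, K = 4, 4, 5                      # tiny pixel grid for illustration
rng = np.random.default_rng(0)
phases = np.linspace(0.0, 2.0 * np.pi, K) + rng.normal(0, 1e-3, (H, W, K))
depths = 500.0 + 80.0 * phases         # synthetic phase-to-depth relation (mm)

ORDER = 2                              # low-order polynomial per pixel

# Fit one polynomial per pixel: depth(phi) ~ c0*phi^2 + c1*phi + c2.
coeffs = np.empty((H, W, ORDER + 1))
for i in range(H):
    for j in range(W):
        coeffs[i, j] = np.polyfit(phases[i, j], depths[i, j], ORDER)

def reconstruct(phase_map):
    """Map a per-pixel phase map to depth via the fitted polynomials."""
    out = np.empty_like(phase_map)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.polyval(coeffs[i, j], phase_map[i, j])
    return out

test_phase = np.full((H, W), np.pi)
depth_map = reconstruct(test_phase)
```

Because each pixel carries its own coefficients, systematic pixel-wise errors from the stereo model can be absorbed into the fit rather than corrected globally.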
This content will become publicly available on May 5, 2026
Real-Time Per-Pixel Predistortion for Head-Tracked Light Field Displays
The latest light-field displays have improved greatly but continue to be based on the approximate pinhole model. For every frame, our real-time technique evaluates a full optical model and then renders an image predistorted at the sub-pixel level to match the current pixel-to-eye light flow, reducing cross-talk and increasing the viewing angle.
- Award ID(s):
- 2008590
- PAR ID:
- 10568985
- Publisher / Repository:
- SID
- Date Published:
- ISSN:
- 2168-0159
- Format(s):
- Medium: X
- Location:
- San Jose
- Sponsoring Org:
- National Science Foundation
More Like this
-
This paper proposes a representational model for image pairs such as consecutive video frames that are related by local pixel displacements, in the hope that the model may shed light on motion perception in primary visual cortex (V1). The model couples the following two components: (1) the vector representations of local contents of images and (2) the matrix representations of local pixel displacements caused by the relative motions between the agent and the objects in the 3D scene. When the image frame undergoes changes due to local pixel displacements, the vectors are multiplied by the matrices that represent the local displacements. Thus the vector representation is equivariant, as it varies according to the local displacements. Our experiments show that our model can learn Gabor-like filter pairs of quadrature phases. The profiles of the learned filters match those of simple cells in Macaque V1. Moreover, we demonstrate that the model can learn to infer local motions in either a supervised or unsupervised manner. With such a simple model, we achieve competitive results on optical flow estimation.
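The vector-matrix coupling in this abstract can be illustrated with a toy example. Here 2×2 rotation matrices stand in for the learned displacement matrices M(d) (an assumption for illustration; the paper learns these matrices from data), showing the key equivariance property that composing two displacements composes their matrix representations:

```python
import numpy as np

def displacement_matrix(d):
    # Toy matrix representation M(d) of a 1D displacement d: a 2x2
    # rotation block. This stands in for the learned matrices in the
    # paper; any representation must satisfy M(d1 + d2) = M(d2) M(d1).
    c, s = np.cos(d), np.sin(d)
    return np.array([[c, -s], [s, c]])

# Vector representation v of the local image content at one pixel.
v = np.array([1.0, 0.0])

# Equivariance: applying displacement d1 then d2 to the vector equals
# applying the single composed displacement d1 + d2.
d1, d2 = 0.3, 0.5
lhs = displacement_matrix(d2) @ (displacement_matrix(d1) @ v)
rhs = displacement_matrix(d1 + d2) @ v
ok = np.allclose(lhs, rhs)
```

The rotation choice makes the group structure explicit; the learned matrices in the paper play the same algebraic role without being constrained to rotations.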
-
Recently developed graded photonic super-crystals show enhanced light absorption and light extraction efficiency when integrated with a solar cell and an organic light emitting device, respectively. In this paper, we present the holographic fabrication of a graded photonic super-crystal with a rectangular unit super-cell. The spatial light modulator-based pixel-by-pixel phase engineering of the incident laser beam provides a high-resolution phase pattern for interference lithography. This also provides a flexible design for graded photonic super-crystals with different length-to-width ratios of the rectangular unit super-cell. The light extraction efficiency is simulated for the organic light emitting device, where the cathode is patterned with the graded photonic super-crystal. The high extraction efficiency is maintained for different exposure thresholds during the interference lithography. The desired polarization effects are observed for certain exposure thresholds. The extraction efficiency reaches as high as 75% in the glass substrate.
-
The sky exhibits a unique spatial polarization pattern created by scattering unpolarized sunlight. Just as insects use this unique angular pattern to navigate, we use it to map pixels to directions on the sky. That is, we show that the unique polarization pattern encoded in the polarimetric appearance of an object captured under the sky can be decoded to reveal the surface normal at each pixel. We derive a polarimetric reflection model of a diffuse-plus-mirror surface lit by the sun and a clear sky. This model is used to recover the per-pixel surface normal of an object from a single polarimetric image or from multiple polarimetric images captured under the sky at different times of the day. We experimentally evaluate the accuracy of our shape-from-sky method on a number of real objects with different surface compositions. The results clearly show that this passive approach to fine-geometry recovery, which fully leverages the unique illumination made by nature, is a viable option for 3D sensing. With the advent of quad-Bayer polarization chips, we believe the implications of our method span a wide range of domains.
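The quad-Bayer polarization chips mentioned above measure intensity through linear polarizers at 0°, 45°, 90°, and 135° per pixel group. A standard first step for any method built on such data (not the paper's full reflection model) is recovering the Stokes parameters, the degree of linear polarization (DoLP), and the angle of linear polarization (AoLP):

```python
import numpy as np

def stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle intensities."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0/90 degree difference
    s2 = i45 - i135                      # 45/135 degree difference
    return s0, s1, s2

def aolp_dolp(i0, i45, i90, i135):
    s0, s1, s2 = stokes(i0, i45, i90, i135)
    dolp = np.sqrt(s1**2 + s2**2) / s0   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization (rad)
    return aolp, dolp

# Fully polarized light oriented at 0 degrees:
# i0 = 1, i90 = 0, and the 45/135 channels each pass half the power.
aolp, dolp = aolp_dolp(1.0, 0.5, 0.0, 0.5)
```

Per-pixel AoLP/DoLP maps like these are the raw signal that a sky-polarization reflection model decodes into surface normals.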
-
Accurately capturing dynamic scenes with wide-ranging motion and light intensity is crucial for many vision applications. However, acquiring high-speed high dynamic range (HDR) video is challenging because the camera's frame rate restricts its dynamic range. Existing methods sacrifice speed to acquire multi-exposure frames; yet misaligned motion in these frames can still pose complications for HDR fusion algorithms, resulting in artifacts. Instead of frame-based exposures, we sample the video using individual pixels at varying exposures and phase offsets. Implemented on a monochrome pixel-wise programmable image sensor, our sampling pattern captures fast motion at a high dynamic range. We then transform the pixel-wise outputs into an HDR video using end-to-end learned weights from deep neural networks, achieving high spatiotemporal resolution with minimal motion blur. We demonstrate aliasing-free HDR video acquisition at 1000 FPS, resolving fast motion under low-light conditions and against bright backgrounds, both challenging conditions for conventional cameras. By combining the versatility of pixel-wise sampling patterns with the strength of deep neural networks at decoding complex scenes, our method greatly enhances the vision system's adaptability and performance in dynamic conditions. Index Terms: high-dynamic-range video, high-speed imaging, CMOS image sensors, programmable sensors, deep learning, convolutional neural networks.
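The pixel-wise sampling idea can be sketched as a tiled map of exposures and phase offsets. The 2×2 tile, the exposure ratios, and the phase values below are all illustrative assumptions, not the paper's actual programmed pattern:

```python
import numpy as np

# Minimal sketch: tile the sensor with a 2x2 block of relative exposure
# durations, and stagger each pixel's readout phase so that, unlike
# frame-based exposure bracketing, every instant is sampled by some
# pixel at some exposure level.
H, W = 4, 6                              # tiny sensor for illustration
exposures = np.array([[1, 4],
                      [16, 64]])          # relative exposure times per tile
phase = np.array([[0.00, 0.25],
                  [0.50, 0.75]])          # readout phase offsets (frame fraction)

exp_map = np.tile(exposures, (H // 2, W // 2))
phase_map = np.tile(phase, (H // 2, W // 2))
```

A decoder network then fuses neighborhoods of these heterogeneous samples into HDR frames, which is where the end-to-end learned weights come in.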
