

Search for: All records

Creators/Authors contains: "Gao, Liang"


  1. Abstract

    Cameras with extreme speeds are enabling technologies in both fundamental and applied sciences. However, existing ultrafast cameras are incapable of coping with extended three-dimensional scenes and fall short for non-line-of-sight imaging, which requires a long sequence of time-resolved two-dimensional data. Current non-line-of-sight imagers therefore need to perform extensive scanning in the spatial and/or temporal dimension, restricting their use to imaging only static or slowly moving objects. To address these long-standing challenges, we present here ultrafast light field tomography (LIFT), a transient imaging strategy that offers a temporal sequence of over 1000 frames and enables highly efficient light field acquisition, allowing snapshot acquisition of the complete four-dimensional space and time. With LIFT, we demonstrated three-dimensional imaging of light-in-flight phenomena with a <10 picosecond resolution and non-line-of-sight imaging at a 30 Hz video rate. Furthermore, we showed how LIFT can benefit from deep learning for improved and accelerated image formation. LIFT may facilitate broad adoption of time-resolved methods in various disciplines.

     
  3. Wavelength beam-combining of four terahertz (THz) distributed-feedback quantum-cascade lasers (QCLs) is demonstrated using low-cost THz components that include a lens carved out of a plastic ball and a mechanically fabricated blazed grating. Single-lobed beams from predominantly single-mode QCLs radiating peak power in the range of 50-170 mW are overlapped in the far field at frequencies ranging from 3.31 to 3.54 THz. Collinear propagation with a maximum angular deviation of 0.3° is realized for the four beams. The total power efficiency for the focused and beam-combined radiation is as high as 25%. This result could pave the way for future commercialization of beam-combined monolithic THz QCL arrays for multi-spectral THz sensing and spectroscopy at standoff distances.

     
  4. We present high-resolution, high-speed fluorescence lifetime imaging microscopy (FLIM) of live cells based on a compressed sensing scheme. By leveraging the compressibility of biological scenes in a specific domain, we simultaneously record the time-lapse fluorescence decay upon pulsed laser excitation within a large field of view. The resultant system, referred to as compressed FLIM, can acquire a widefield fluorescence lifetime image within a single camera exposure, eliminating motion artifacts and minimizing photobleaching and phototoxicity. The imaging speed, limited only by the readout speed of the camera, is up to 100 Hz. We demonstrated the utility of compressed FLIM in imaging various transient dynamics at the microscopic scale (see the lifetime-fitting sketch after this list).

     
  5. Light field cameras have been employed in myriad applications thanks to their 3D imaging capability. By placing a microlens array in front of a conventional camera, one can measure both the spatial and angular information of incoming light rays and reconstruct a depth map. The unique optical architecture of light field cameras poses new challenges in controlling aberrations and vignetting during the lens design process. The results of our study show that field curvature can be corrected numerically by digital refocusing, whereas vignetting must be minimized because it reduces the depth reconstruction accuracy (see the refocusing sketch after this list). To address this unmet need, we herein present an optical design pipeline for light field cameras and demonstrate its implementation in a light field endoscope.

     
  6. Non-line-of-sight (NLOS) imaging is a light-starved application that suffers from highly noisy measurement data. In order to recover the hidden scene with good contrast, it is crucial for the reconstruction algorithm to be robust against noise and artifacts. We propose here two weighting factors for the filtered backprojection (FBP) reconstruction algorithm in NLOS imaging. The apodization factor modifies the aperture (wall) function to reduce streaking artifacts, and the coherence factor evaluates the spatial coherence of the measured signals for noise suppression (see the weighted backprojection sketch after this list). Both factors are simple to evaluate, and their synergistic effects lead to state-of-the-art reconstruction quality for FBP with noisy data. We demonstrate the effectiveness of the proposed weighting factors on publicly accessible experimental datasets.

     
  7. Wearable near-eye displays for virtual and augmented reality (VR/AR) have seen enormous growth in recent years. While researchers are exploiting a plethora of techniques to create life-like three-dimensional (3D) objects, there is a lack of awareness of the role of human perception in guiding the hardware development. An ultimate VR/AR headset must integrate the display, sensors, and processors in a compact enclosure that people can comfortably wear for a long time while allowing a superior immersion experience and user-friendly human–computer interaction. Compared with other 3D displays, the holographic display has unique advantages in providing natural depth cues and correcting eye aberrations. Therefore, it holds great promise to be the enabling technology for next-generation VR/AR devices. In this review, we survey the recent progress in holographic near-eye displays from the human-centric perspective.

     
  8. Abstract

    Multidimensional photography can capture optical fields beyond the capability of conventional image sensors that measure only the two-dimensional (2D) spatial distribution of light. By mapping a high-dimensional datacube of incident light onto a 2D image sensor, multidimensional photography resolves the scene along with other information dimensions, such as wavelength and time. However, the application of current multidimensional imagers is fundamentally restricted by their static optical architectures and measurement schemes: the mapping relation between the light datacube voxels and the image sensor pixels is fixed. To overcome this limitation, we propose tunable multidimensional photography through active optical mapping. A high-resolution spatial light modulator, referred to as an active optical mapper, permutes and maps the light datacube voxels onto sensor pixels in an arbitrary and programmed manner (see the toy voxel-to-pixel mapping sketch after this list). The resultant system can readily adapt the acquisition scheme to the scene, thereby maximising the measurement flexibility. Through active optical mapping, we demonstrate our approach in two niche implementations: hyperspectral imaging and ultrafast imaging.

     
  9. Compressed ultrafast photography (CUP) is a computational optical imaging technique that can capture transient dynamics at an unprecedented speed. Currently, the image reconstruction of CUP relies on iterative algorithms, which are time-consuming and often yield nonoptimal image quality. To solve this problem, we develop a deep-learning-based method for CUP reconstruction that substantially improves the image quality and reconstruction speed. A key innovation toward efficient deep-learning reconstruction of a large three-dimensional (3D) event datacube (x, y, t) (x, y: spatial coordinates; t: time) is that we decompose the original datacube into massively parallel two-dimensional (2D) imaging subproblems, which are much simpler to solve with a deep neural network (see the decomposition sketch after this list). We validated our approach on simulated and experimental data.

     
  10. We present a foveated rendering method to accelerate the amplitude-only computer-generated hologram (AO-CGH) calculation in a holographic near-eye 3D display. For a given target image, we compute a high-resolution foveal region and a low-resolution peripheral region with a dramatically reduced pixel count. Our technique significantly improves the computation speed of the AO-CGH while maintaining the perceived image quality in the fovea. Moreover, to accommodate changes in the eye gaze angle, we develop an algorithm to laterally shift the foveal image with negligible extra computational cost (see the foveation sketch after this list). Our technique holds great promise for advancing holographic 3D displays in real-time use.

     
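For the compressed FLIM entry (item 4): once a time-resolved decay has been recovered for every pixel, the lifetime map is obtained by fitting those decays. The snippet below is a minimal, generic sketch of that fitting step only (a log-linear fit to a single-exponential model), not the authors' reconstruction code; the array layout, the bin width dt_ns, and the function name lifetime_map are illustrative assumptions.

    import numpy as np

    def lifetime_map(decay_cube, dt_ns=0.05, eps=1e-12):
        """Estimate a fluorescence lifetime per pixel from a (T, Y, X) decay cube.

        Assumes each pixel follows a single-exponential decay I(t) = A*exp(-t/tau);
        a log-linear least-squares fit then gives tau = -1/slope.
        """
        T = decay_cube.shape[0]
        t = np.arange(T) * dt_ns                       # time axis in nanoseconds
        logI = np.log(np.clip(decay_cube, eps, None))  # (T, Y, X), avoid log(0)

        # Closed-form slope of a least-squares line fit along the time axis.
        t_mean = t.mean()
        logI_mean = logI.mean(axis=0)
        slope = ((t[:, None, None] - t_mean) * (logI - logI_mean)).sum(axis=0) \
                / ((t - t_mean) ** 2).sum()
        return -1.0 / np.minimum(slope, -eps)          # lifetime in ns, kept positive

    # Example: a synthetic 2 ns decay sampled over 64 time bins of 50 ps each.
    t = np.arange(64)[:, None, None] * 0.05
    cube = np.exp(-t / 2.0) * np.ones((64, 32, 32))
    print(lifetime_map(cube).mean())                   # ~2.0
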
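Item 5 states that field curvature can be corrected numerically by digital refocusing. The standard shift-and-add form of light field refocusing is sketched below for context; it is a generic illustration under an assumed (U, V, Y, X) sub-aperture layout with integer-pixel shifts, not the paper's design pipeline (a practical implementation would interpolate sub-pixel shifts).

    import numpy as np

    def refocus(lightfield, slope):
        """Shift-and-add digital refocusing.

        lightfield : array of shape (U, V, Y, X) holding sub-aperture images.
        slope      : refocus parameter; each sub-aperture image is shifted in
                     proportion to its (u, v) offset from the aperture centre,
                     then all images are averaged.
        """
        U, V, Y, X = lightfield.shape
        uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((Y, X))
        for u in range(U):
            for v in range(V):
                dy = int(round(slope * (u - uc)))
                dx = int(round(slope * (v - vc)))
                out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)

    # Example: refocus a random 5x5 light field at two different depths.
    lf = np.random.rand(5, 5, 64, 64)
    near, far = refocus(lf, slope=1.0), refocus(lf, slope=-1.0)
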
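Item 6 weights filtered backprojection with an apodization factor over the wall aperture and a coherence factor computed from the measured signals. The sketch below illustrates that idea for a confocal time-of-flight measurement; the Hann apodization window, the specific coherence-factor formula, and the omission of the ramp-filtering step are simplifying assumptions, not the paper's exact definitions.

    import numpy as np

    C = 3e8  # speed of light, m/s

    def weighted_backprojection(transients, wall_pts, voxels, dt):
        """Backprojection with apodization and coherence-factor weighting.

        transients : (N, T) time-resolved signals measured at N wall points
        wall_pts   : (N, 3) wall-point coordinates
        voxels     : (M, 3) hidden-scene voxel coordinates
        dt         : temporal bin width in seconds
        """
        N, T = transients.shape
        # Apodization: taper the wall aperture (here a Hann window over the
        # scan index) to suppress streaking from the aperture edges.
        apod = np.hanning(N)

        recon = np.zeros(len(voxels))
        for m, v in enumerate(voxels):
            # Round-trip time from wall point to voxel and back (confocal case).
            dist = np.linalg.norm(wall_pts - v, axis=1)
            bins = np.clip((2.0 * dist / C / dt).astype(int), 0, T - 1)
            samples = apod * transients[np.arange(N), bins]

            # Coherence factor: near 1 when all wall points agree, near 0 for noise.
            cf = samples.sum() ** 2 / (N * (samples ** 2).sum() + 1e-12)
            recon[m] = cf * samples.sum()
        return recon
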
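Item 8 casts the measurement as a programmable mapping from datacube voxels to sensor pixels. In the simplest linear-algebra picture this is a re-programmable selection/permutation operator; the toy sketch below shows only that picture, with randomly drawn mappings standing in for whatever patterns the actual spatial light modulator would display.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy datacube: 16x16 spatial points with 8 spectral (or temporal) channels.
    cube = rng.random((16, 16, 8))
    n_voxels = cube.size
    n_pixels = 16 * 16            # the 2D sensor has fewer pixels than voxels

    # An "active optical mapper" routes voxels to pixels in a programmed way.
    # Re-drawing the assignment models re-programming the acquisition scheme.
    def program_mapper():
        return rng.choice(n_voxels, size=n_pixels, replace=False)

    mapping = program_mapper()
    measurement = cube.ravel()[mapping].reshape(16, 16)

    # A new mapping changes which voxels are sampled without any change to the
    # optics other than the pattern displayed on the spatial light modulator.
    mapping2 = program_mapper()
    measurement2 = cube.ravel()[mapping2].reshape(16, 16)
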
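Item 9 decomposes the 3D CUP reconstruction into parallel 2D subproblems for a deep network. One natural way to realize this, assuming the temporal shearing acts along a single spatial axis, is to treat every measurement column as an independent (y, t) problem; the sketch shows only this slicing and reassembly, with the 2D solver left as a placeholder callable rather than an actual network.

    import numpy as np

    def split_into_2d_subproblems(streak_image, mask):
        """Split a CUP measurement into per-column (y, t) subproblems.

        streak_image : (Ys, X) time-integrated, temporally sheared measurement
        mask         : (Y, X)  binary encoding mask applied before shearing

        If the temporal shearing acts along y only, column x of the measurement
        is coupled solely to the (y, t) slice of the event datacube at that x,
        so the 3D reconstruction splits into X independent 2D problems.
        """
        _, X = mask.shape
        return [(streak_image[:, x], mask[:, x]) for x in range(X)]

    def reconstruct_cube(subproblems, solve_2d):
        """Solve every (y, t) subproblem (e.g. with a small 2D network) and
        reassemble the slices into a (Y, T, X) datacube."""
        slices = [solve_2d(col, mask_col) for col, mask_col in subproblems]
        return np.stack(slices, axis=-1)
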
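Item 10 assembles a high-resolution foveal crop and a low-resolution peripheral image before hologram synthesis. The sketch below covers only that foveation bookkeeping, not the AO-CGH calculation itself; the crop size, downsampling factor, and gaze handling are illustrative assumptions.

    import numpy as np

    def foveate(target, gaze_yx, fovea=256, down=4):
        """Split a target image into a full-resolution foveal crop around the
        gaze point and a downsampled peripheral image.

        Only the two reduced buffers are handed to the hologram computation, so
        the pixel count to process drops roughly by down**2 outside the fovea.
        """
        H, W = target.shape
        gy, gx = gaze_yx
        y0 = int(np.clip(gy - fovea // 2, 0, H - fovea))
        x0 = int(np.clip(gx - fovea // 2, 0, W - fovea))
        foveal = target[y0:y0 + fovea, x0:x0 + fovea]       # full resolution
        peripheral = target[::down, ::down]                  # coarse resolution
        return foveal, peripheral, (y0, x0)

    # Example: a 2K target; moving the gaze only changes the crop offset, so the
    # foveal hologram can be shifted laterally rather than fully recomputed.
    img = np.random.rand(1080, 2048)
    fov, peri, offset = foveate(img, gaze_yx=(540, 1024))
    print(fov.shape, peri.shape, offset)    # (256, 256) (270, 512) (412, 896)
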