

Search for: All records

Creators/Authors contains: "Zhou, Kevin"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
What is a DOI Number?

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. While distributional reinforcement learning (DistRL) has been empirically effective, the question of when and why it is better than vanilla, non-distributional RL has remained unanswered. This paper explains the benefits of DistRL through the lens of small-loss bounds, which are instance-dependent bounds that scale with the optimal achievable cost. In particular, our bounds converge much faster than those from non-distributional approaches when the optimal cost is small. As a warmup, we propose a distributional contextual bandit (DistCB) algorithm, which we show enjoys small-loss regret bounds and empirically outperforms the state-of-the-art on three real-world tasks. In online RL, we propose a DistRL algorithm that constructs confidence sets using maximum likelihood estimation. We prove that our algorithm enjoys novel small-loss PAC bounds in low-rank MDPs. As part of our analysis, we introduce the ℓ1 distributional eluder dimension, which may be of independent interest. Finally, in offline RL, we show that pessimistic DistRL enjoys small-loss PAC bounds that are novel to the offline setting and are more robust to bad single-policy coverage.
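For intuition, a small-loss (first-order) regret bound for contextual bandits typically takes the following schematic shape (illustrative only; not the paper's exact theorem statement):

```latex
% Schematic small-loss bound (illustrative; not the paper's exact theorem).
% L^* = cumulative loss of the optimal policy, \mathcal{F} = function class.
\mathrm{Reg}(T) \;\lesssim\; \sqrt{L^{*}\,\log|\mathcal{F}|} \;+\; \log|\mathcal{F}|
```

When the optimal cumulative cost L* is much smaller than T, such a bound beats the worst-case √T rate, which is the sense in which small-loss bounds "converge much faster" for benign instances.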
  2. Volumetric fluorescence imaging techniques, such as confocal, multiphoton, light sheet, and light field microscopy, have become indispensable tools across a wide range of cellular, developmental, and neurobiological applications. However, it is difficult to scale such techniques to the large 3D fields of view (FOV), volume rates, and synchronicity requirements for high-resolution 4D imaging of freely behaving organisms. Here, we present reflective Fourier light field computed tomography (ReFLeCT), a high-speed volumetric fluorescence computational imaging technique. ReFLeCT synchronously captures entire tomograms of multiple unrestrained, unanesthetized model organisms across multi-millimeter 3D FOVs at 120 volumes per second. In particular, we applied ReFLeCT to reconstruct 4D videos of fluorescently labeled zebrafish and Drosophila larvae, enabling us to study their heartbeat, fin and tail motion, gaze, jaw motion, and muscle contractions with nearly isotropic 3D resolution while they are freely moving. To our knowledge, ReFLeCT is a novel approach for snapshot tomographic capture and a major advance toward bridging the gap between current volumetric fluorescence microscopy techniques and macroscopic behavioral imaging.
  3. The utility of visible light for 3D printing has increased in recent years owing to its accessibility and reduced materials interactions, such as scattering and absorption/degradation, relative to traditional UV light‐based processes. However, photosystems that react efficiently with visible light often require multiple molecular components and have strong and diverse absorption profiles, increasing the complexity of formulation and printing optimization. Herein, a streamlined method to select and optimize visible light 3D printing conditions is described. First, green light liquid crystal display (LCD) 3D printing using a novel resin is optimized through traditional empirical methods, which involve resin component selection, spectroscopic characterization, time‐intensive 3D printing under several different conditions, and measurements of dimensional accuracy for each printed object. Subsequent analytical quantification of dynamic photon absorption during green light polymerizations unveils relationships to cure depth that enable facile resin and 3D printing optimization using a model that is a modification of the Jacobs equation traditionally used for stereolithographic 3D printing. The approach and model are then validated using a distinct green light‐activated resin for two types of projection‐based 3D printing.
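For reference, the classical Jacobs working-curve equation from stereolithography, which the model above modifies, relates cure depth to exposure dose:

```latex
% Classical Jacobs working-curve equation (the baseline the abstract's model modifies).
% C_d = cure depth, D_p = resin penetration depth,
% E_{\max} = peak exposure dose at the resin surface, E_c = critical dose for gelation.
C_d = D_p \ln\!\left(\frac{E_{\max}}{E_c}\right)
```

Cure depth grows logarithmically with dose, so a plot of C_d against ln(E_max) yields a straight "working curve" whose slope and intercept give D_p and E_c for a given resin.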
  4. We present the Fourier Light field Camera Array Microscope (FL-CAM) for high-throughput, single-snapshot 3D imaging. The FL-CAM substitutes a synchronized array of 48 independent imaging systems for the micro-lens array of a typical light field system.
  5. Frequency-modulated continuous wave (FMCW) light detection and ranging (LiDAR) is an emerging 3D ranging technology that offers high sensitivity and ranging precision. Due to the limited bandwidth of digitizers and the speed limitations of beam steering using mechanical scanners, meter-scale FMCW LiDAR systems typically suffer from a low 3D frame rate, which greatly restricts their applications in real-time imaging of dynamic scenes. In this work, we report a high-speed FMCW-based 3D imaging system, combining a grating for beam steering with a compressed time-frequency analysis approach for depth retrieval. We thoroughly investigate the localization accuracy and precision of our system both theoretically and experimentally. Finally, we demonstrate 3D imaging results of multiple static and moving objects, including a flexing human hand. The demonstrated technique achieves submillimeter localization accuracy over a tens-of-centimeter imaging range with an overall depth voxel acquisition rate of 7.6 MHz, enabling densely sampled 3D imaging at video rate.
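As a minimal sketch of the FMCW ranging principle (hypothetical helper names; this is not the paper's code or its compressed time-frequency method): with a linear chirp of bandwidth B over duration T, a target at range R produces a beat frequency proportional to the round-trip delay, which is then inverted to recover depth.

```python
# Illustrative FMCW ranging sketch (assumed names, not the paper's implementation).
# Linear chirp slope = B / T; round-trip delay tau = 2R / c; beat frequency
# f_b = slope * tau = 2 * B * R / (c * T), inverted to R = c * T * f_b / (2 * B).

C = 299_792_458.0  # speed of light, m/s


def beat_frequency(range_m, bandwidth_hz, chirp_s):
    """Beat frequency produced by a stationary target at range_m (one round trip)."""
    return 2.0 * bandwidth_hz * range_m / (C * chirp_s)


def range_from_beat(f_beat_hz, bandwidth_hz, chirp_s):
    """Invert a measured beat frequency back to target range in meters."""
    return C * chirp_s * f_beat_hz / (2.0 * bandwidth_hz)
```

For example, with a 1 GHz chirp swept over 10 µs, a target 1 m away produces a beat near 667 kHz; the depth resolution of such a system is set by the chirp bandwidth (roughly c / 2B).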
  6. https://arxiv.org/abs/2209.02486 
  7. We present a computational 3D profilometric microscope employing an array of 54 cameras and 3-axis scanning to produce multi-TB datasets per sample. Using stereo and sharpness cues, our self-supervised reconstruction algorithm generates 6-gigapixel reconstructions with micron-scale resolution across >110 cm².
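As an illustrative sketch of a "sharpness cue" (hypothetical helper names; the paper's self-supervised reconstruction is far more involved): a variance-of-Laplacian focus metric is one common way to score which slice of a focal stack is in best focus, since in-focus regions carry more high-frequency detail.

```python
# Illustrative sharpness-cue sketch (assumed names, not the paper's algorithm).
import numpy as np


def laplacian(img):
    """Discrete 5-point Laplacian evaluated on the interior of a 2D array."""
    return (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
            - 4.0 * img[1:-1, 1:-1])


def sharpness(img):
    """Variance of the Laplacian: larger means more high-frequency detail."""
    return float(np.var(laplacian(img)))


def best_focus_index(stack):
    """Pick the slice of a focal stack with the highest sharpness score."""
    return max(range(len(stack)), key=lambda i: sharpness(stack[i]))
```

A uniform (defocused) patch scores zero, while a high-contrast patch scores high, so ranking slices by this metric selects the in-focus depth for each region.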