

Search for: All records

Award ID contains: 2008464


  1. This work provides the design of a multifocal display that can create a dense stack of focal planes in a single shot. We achieve this using a novel computational lens that provides spatial selectivity in its focal length, i.e., the lens appears to have different focal lengths across points on a display behind it. This enables a multifocal display via an appropriate selection of the spatially-varying focal length, thereby avoiding the time-multiplexing techniques associated with traditional focus-tunable lenses. The idea central to this design is a modification of the Lohmann lens, a focus-tunable lens created with two cubic phase plates that translate relative to each other. Using optical relays and a phase spatial light modulator, we replace the physical translation of the cubic plates with an optical one, while simultaneously allowing different pixels on the display to undergo different amounts of translation and, consequently, different focal lengths. We refer to this design as a Split-Lohmann multifocal display. Split-Lohmann displays provide a large étendue as well as high spatial and depth resolutions; the absence of time multiplexing and the extremely light computational footprint for content processing make the design well suited to video and interactive experiences. Using a lab prototype, we show results over a wide range of static, dynamic, and interactive 3D scenes, showcasing high visual quality over a large working range. (A brief sketch of the underlying Lohmann-lens principle appears after this list.)
    Free, publicly-accessible full text available August 1, 2024
  2. Free, publicly-accessible full text available July 26, 2024
  3. Reconstructing and designing media with continuously-varying refractive index fields remains a challenging problem in computer graphics. A core difficulty of this inverse problem is that light travels inside such media along curves rather than straight lines. Existing techniques make strong assumptions about the shape of the ray inside the medium, and thus limit themselves to media where the ray deflection is relatively small. More recently, differentiable rendering techniques have relaxed this limitation by making it possible to differentiably simulate curved light paths. However, the automatic differentiation algorithms underlying these techniques use large amounts of memory, restricting existing differentiable rendering techniques to relatively small media and low spatial resolutions. We present a method for optimizing refractive index fields that both accounts for curved light paths and has a small, constant memory footprint. We use the adjoint state method to derive a set of equations for computing derivatives, with respect to the refractive index field, of optimization objectives that are subject to nonlinear ray tracing constraints. We additionally introduce discretization schemes to numerically evaluate these equations without the need to store nonlinear ray trajectories in memory, significantly reducing the memory requirements of our algorithm. We use our technique to optimize high-resolution refractive index fields for a variety of applications, including creating different types of displays (multiview, lightfield, caustic), designing gradient-index optics, and reconstructing gas flows. (A small sketch of tracing a curved ray through such an index field appears after this list.)
  4. Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be under-constrained, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low-reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves the robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors now available on modern smartphones. (A small sketch of the standard continuous-wave ToF measurement model appears after this list.)
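
The Split-Lohmann design in item 1 builds on the Lohmann (Alvarez-style) lens: two complementary cubic phase plates, shifted laterally against each other, act together as a thin lens whose optical power grows linearly with the shift. The sketch below verifies this symbolically; the cubic profile a*(x^3 + y^3)/3, the shift convention, and the thin-lens matching are common textbook choices and are assumptions here, not the exact parameters of the paper's prototype.

import sympy as sp

x, y, delta, a, k, f = sp.symbols('x y delta a k f', real=True, positive=True)

# One cubic phase plate; the second plate carries the negated profile.
cubic = lambda u, v: a * (u**3 + v**3) / 3

# Shift the two plates by +delta and -delta and add their phases.
combined = sp.expand(cubic(x - delta, y - delta) - cubic(x + delta, y + delta))
print(combined)  # -2*a*delta*x**2 - 2*a*delta*y**2 - 4*a*delta**3/3

# The result is a quadratic phase plus a constant, i.e. a thin lens.
# Matching -2*a*delta*(x^2 + y^2) to the thin-lens phase -k*(x^2 + y^2)/(2*f)
# gives the focal length as a function of the shift delta.
print(sp.solve(sp.Eq(-2 * a * delta, -k / (2 * f)), f))  # [k/(4*a*delta)]

Splitting the plates across an optical relay and replacing the physical shift with a per-pixel optical one is what lets the display assign a different shift, and hence a different focal plane, to each pixel.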
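The memory-efficient optimization in item 3 differentiates through curved light paths. As context, the sketch below integrates the ray equation of geometric optics, d/ds (n dx/ds) = grad n, through a continuously-varying index field while keeping only the current ray state in memory; the parabolic GRIN profile and the fixed-step Euler integrator are illustrative assumptions, and the adjoint-state derivatives that are the paper's contribution are not shown.

import numpy as np

def refractive_index(p):
    # Assumed parabolic GRIN profile: n(x, y) = n0 - 0.5 * g * y**2.
    n0, g = 1.5, 0.2
    return n0 - 0.5 * g * p[1] ** 2

def grad_index(p, eps=1e-5):
    # Central finite-difference gradient of the index field.
    grad = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        grad[i] = (refractive_index(p + d) - refractive_index(p - d)) / (2 * eps)
    return grad

def trace_ray(p0, d0, step=1e-3, n_steps=4000):
    # Integrate d/ds (n dx/ds) = grad n with explicit Euler steps, storing only
    # the current position and scaled direction -- no trajectory history.
    p = np.asarray(p0, dtype=float)
    v = refractive_index(p) * np.asarray(d0, dtype=float)  # v = n * dx/ds
    for _ in range(n_steps):
        p = p + step * v / refractive_index(p)  # dx/ds = v / n
        v = v + step * grad_index(p)            # dv/ds = grad n
    return p, v / refractive_index(p)

end_point, end_dir = trace_ray(p0=[0.0, 0.2], d0=[1.0, 0.0])
print(end_point, end_dir)  # the ray bends toward the higher-index axis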
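Item 4 builds a neural representation on raw continuous-wave ToF measurements rather than decoded depth maps. The sketch below shows the standard four-measurement ("four bucket") homodyne model that such decoding is based on, including the phase wrapping behind the sensor's limited unambiguous depth range; the modulation frequency, albedo, and ambient offset are illustrative assumptions, not the paper's image formation model verbatim.

import numpy as np

C = 3e8        # speed of light (m/s)
F_MOD = 30e6   # assumed modulation frequency (Hz); unambiguous range = C / (2 * F_MOD) = 5 m

def raw_tof_measurements(depth, albedo=0.8, ambient=0.1):
    # Four phase-shifted correlation measurements for a single pixel.
    phase = 4 * np.pi * F_MOD * depth / C                 # round-trip phase
    psis = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    return albedo * np.cos(phase + psis) + ambient

def decode_depth(meas):
    # Recover the (wrapped) phase and convert it back to depth.
    phase = np.mod(np.arctan2(meas[3] - meas[1], meas[0] - meas[2]), 2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)

print(decode_depth(raw_tof_measurements(3.2)))        # ~3.2 m
print(decode_depth(raw_tof_measurements(3.2 + 5.0)))  # also ~3.2 m: phase wraps past 5 m

Modeling these raw measurements directly, rather than the decoded depth, is what lets the paper handle low-reflectance regions, multi-path interference, and depths beyond the unambiguous range.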