

Search for: All records

Award ID contains: 1730147


  1. Abstract

    Ultrasonically sculpted gradient-index optical waveguides enable non-invasive light confinement inside scattering media. The confinement level depends strongly on ultrasound parameters (e.g., amplitude, frequency) and on the medium's optical properties (e.g., extinction coefficient). We develop a physically accurate simulator, and use it to quantify these dependencies for a radially symmetric virtual optical waveguide. Our analysis provides insights for optimizing virtual optical waveguides for given applications. We leverage these insights to configure virtual optical waveguides that improve light confinement fourfold compared to previous configurations at five mean free paths. We show that virtual optical waveguides enhance light throughput by 50% compared to an ideal external lens, in a medium with bladder-like optical properties at one transport mean free path. We corroborate these simulation findings with real experiments: we demonstrate, for the first time, that virtual optical waveguides recycle scattered light, and enhance light throughput by 15% compared to an external lens at five transport mean free paths.

     
    Free, publicly-accessible full text available December 1, 2024
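    The scaling described in this abstract (core size set by ultrasound frequency, confinement set by modulation amplitude) can be illustrated with a minimal sketch. All numbers and the parabolic index model below are illustrative assumptions, not values from the paper: near the axis, the pressure standing wave is taken to raise the refractive index by roughly dn(r) ≈ delta_n·(1 − (k·r)²/4), a parabolic approximation of the Bessel-shaped profile.

    ```python
    import math

    def waveguide_core_and_na(n0, delta_n, f_us, c_us):
        """Rough scaling of an ultrasonically sculpted virtual waveguide.

        Illustrative model (an assumption, not the paper's simulator): the
        standing ultrasound wave induces an index bump
        dn(r) ~ delta_n * (1 - (k_us * r)**2 / 4) near the axis, so the core
        radius scales with the ultrasound wavelength, and the acceptance NA
        with the modulation amplitude.
        """
        k_us = 2 * math.pi * f_us / c_us     # ultrasound wavenumber in the medium
        r_core = 2.0 / k_us                  # radius where the parabolic bump vanishes
        # standard NA formula for a guiding index contrast, ~ sqrt(2 * n0 * delta_n)
        na = math.sqrt((n0 + delta_n) ** 2 - n0 ** 2)
        return r_core, na
    ```

    With water-like parameters (n0 = 1.33, delta_n = 1e-4, 1 MHz ultrasound at 1480 m/s), this gives a core radius of about 0.47 mm and an NA of about 0.016, and it makes the two knobs in the abstract explicit: doubling the frequency halves the core radius, while the NA grows only as the square root of the index modulation.
    
    
    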
  2. Abstract

    We demonstrate in situ non-invasive relay imaging through a medium without inserting physical optical components. We show that a virtual optical graded-index (GRIN) lens can be sculpted in the medium using in situ reconfigurable ultrasonic interference patterns to relay images through the medium. Ultrasonic wave patterns change the local density of the medium to sculpt a graded refractive index pattern normal to the direction of light propagation, which modulates the phase front of light, causing it to focus within the medium and effectively creating a virtual relay lens. We demonstrate the in situ relay imaging and resolving of small features (22 µm) through a turbid medium (optical thickness = 5.7 times the scattering mean free path), which is normally opaque. The focal distance and the numerical aperture of the sculpted optical GRIN lens can be tuned by changing the ultrasonic wave parameters. As an example, we experimentally demonstrate that the axial focal distance can be continuously scanned over a depth of 5.4 mm in the modulated medium and that the numerical aperture can be tuned up to 21.5%. The interaction of ultrasonic waves and light can be mediated through different physical media, including turbid media, such as biological tissue, in which the ultrasonically sculpted GRIN lens can be used for relaying images of the underlying structures through the turbid medium, thus providing a potential alternative to implanting invasive endoscopes.
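    The tunability claimed above (focal distance and NA controlled by the ultrasound parameters) follows the standard ray-matrix result for a GRIN slab. The sketch below is a paraxial illustration, not the paper's model: the gradient constant `g` stands in for the ultrasound-controlled index profile n(r) = n0·(1 − g²r²/2).

    ```python
    import math

    def grin_focal_length(n0, g, length):
        """Effective focal length of a GRIN slab with n(r) = n0*(1 - g**2 * r**2 / 2).

        Standard GRIN ray-transfer-matrix result: f = 1 / (n0 * g * sin(g * L)).
        Here g is a stand-in for the ultrasound-tunable gradient, so raising g
        (e.g. by driving the transducers harder) shortens the focal distance.
        """
        return 1.0 / (n0 * g * math.sin(g * length))

    def grin_na(n0, g, aperture_radius):
        """Paraxial numerical aperture of the same GRIN profile."""
        return n0 * g * aperture_radius
    ```

    Both quantities depend on `g`, which is why a single reconfigurable ultrasonic pattern can scan the focal plane and tune the NA at the same time, as the abstract reports.
    
    
    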

     
  3. Free, publicly-accessible full text available July 13, 2025
  4. Free, publicly-accessible full text available June 18, 2025
  5. Free, publicly-accessible full text available June 18, 2025
  6. We introduce a suite of path sampling methods for differentiable rendering of scene parameters that do not induce visibility-driven discontinuities, such as BRDF parameters. We begin by deriving a path integral formulation for differentiable rendering of such parameters, which we then use to derive methods that importance sample paths according to this formulation. Our methods are analogous to path tracing and path tracing with next event estimation for primal rendering, have linear complexity, and can be implemented efficiently using path replay backpropagation. Our methods readily benefit from differential BRDF sampling routines, and can be further enhanced using multiple importance sampling and a loss-aware pixel-space adaptive sampling procedure tailored to our path integral formulation. We show experimentally that our methods reduce variance in rendered gradients by potentially orders of magnitude, and thus help accelerate inverse rendering optimization of BRDF parameters. 
    Free, publicly-accessible full text available January 1, 2025
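    The key property this abstract relies on — that BRDF-parameter gradients induce no visibility discontinuities, so the same sampled paths can yield both the primal value and its derivative — can be shown in a toy one-bounce Lambertian setting. This is a minimal illustration of the idea behind reusing one path for value and gradient (as in path replay backpropagation), not the paper's estimator; the `incident` radiance function is hypothetical.

    ```python
    import math
    import random

    def estimate_radiance_and_grad(albedo, incident, n_samples=10000, seed=0):
        """Toy differentiable estimator for a single Lambertian bounce.

        Reflected radiance is L = (albedo/pi) * integral(L_i * cos) over the
        hemisphere. With cosine-weighted direction sampling (pdf = cos/pi) the
        per-sample contribution is albedo * L_i, so one sample set gives both
        the primal estimate and its derivative w.r.t. albedo -- no extra rays,
        no discontinuities to differentiate through.
        """
        rng = random.Random(seed)
        value = grad = 0.0
        for _ in range(n_samples):
            # cosine-weighted hemisphere sample: sin(theta) = sqrt(u1)
            u1, u2 = rng.random(), rng.random()
            theta = math.asin(math.sqrt(u1))
            phi = 2 * math.pi * u2
            li = incident(theta, phi)   # hypothetical incoming radiance
            value += albedo * li        # primal contribution
            grad += li                  # d(contribution)/d(albedo)
        return value / n_samples, grad / n_samples
    ```

    Because each sample's contribution is differentiated analytically, the gradient estimate inherits the variance of the primal estimator; the paper's methods go further by importance sampling paths according to the *differential* path integral.
    
    
    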
  7. We introduce Doppler time-of-flight (D-ToF) rendering, an extension of ToF rendering for dynamic scenes, with applications in simulating D-ToF cameras. D-ToF cameras use high-frequency modulation of illumination and exposure, and measure the Doppler frequency shift to compute the radial velocity of dynamic objects. The time-varying scene geometry and high-frequency modulation functions used in such cameras make it challenging to accurately and efficiently simulate their measurements with existing ToF rendering algorithms. We overcome these challenges in a twofold manner: To achieve accuracy, we derive path integral expressions for D-ToF measurements under global illumination and form unbiased Monte Carlo estimates of these integrals. To achieve efficiency, we develop a tailored time-path sampling technique that combines antithetic time sampling with correlated path sampling. We show experimentally that our sampling technique achieves up to two orders of magnitude lower variance compared to naive time-path sampling. We provide an open-source simulator that serves as a digital twin for D-ToF imaging systems, allowing imaging researchers, for the first time, to investigate the impact of modulation functions, material properties, and global illumination on D-ToF imaging performance.

     
    Free, publicly-accessible full text available December 5, 2024
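    The velocity measurement this abstract simulates rests on the two-way Doppler relation for modulated illumination. A one-line sketch of that inversion (standard physics, with illustrative numbers, not values from the paper):

    ```python
    def radial_velocity(doppler_shift_hz, modulation_hz, c=3.0e8):
        """Invert the two-way Doppler relation used by D-ToF cameras.

        For illumination modulated at f_mod, light reflected off a target
        moving with radial speed v returns with its modulation frequency
        shifted by approximately f_d = 2 * v * f_mod / c, so
        v = f_d * c / (2 * f_mod).
        """
        return doppler_shift_hz * c / (2.0 * modulation_hz)
    ```

    For a 100 MHz modulation, a 1.5 m/s target produces a shift of only 1 Hz — which is why accurately simulating the high-frequency modulation and exposure functions matters so much for predicting D-ToF performance.
    
    
    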
  8. Free, publicly-accessible full text available October 1, 2024
  9. This work provides the design of a multifocal display that can create a dense stack of focal planes in a single shot. We achieve this using a novel computational lens that provides spatial selectivity in its focal length, i.e., the lens appears to have different focal lengths across points on a display behind it. This enables a multifocal display via an appropriate selection of the spatially varying focal length, thereby avoiding the time-multiplexing techniques associated with traditional focus-tunable lenses. The idea central to this design is a modification of a Lohmann lens, a focus-tunable lens created with two cubic phase plates that translate relative to each other. Using optical relays and a phase spatial light modulator, we replace the physical translation of the cubic plates with an optical one, while simultaneously allowing different pixels on the display to undergo different amounts of translation and, consequently, different focal lengths. We refer to this design as a Split-Lohmann multifocal display. Split-Lohmann displays provide a large étendue as well as high spatial and depth resolutions; the absence of time multiplexing and the extremely light computational footprint for content processing make it suitable for video and interactive experiences. Using a lab prototype, we show results over a wide range of static, dynamic, and interactive 3D scenes, showcasing high visual quality over a large working range.
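    The Lohmann-lens principle the abstract builds on can be derived in one dimension. The sketch below uses an illustrative phase coefficient and sign convention (not from the paper): two cubic plates with phase ±α·x³, translated by ±δ, sum to 6αδ·x² plus a constant, and matching that against a thin lens's quadratic phase π·x²/(λf) gives a focal length inversely proportional to the translation.

    ```python
    import math

    def lohmann_focal_length(alpha, delta, wavelength):
        """Focal length of a 1-D Lohmann tunable lens (illustrative derivation).

        Two cubic phase plates phi(x) = +/- alpha * x**3, translated by
        +/- delta, combine to 6 * alpha * delta * x**2 plus a piston term.
        Equating this to a thin lens's quadratic phase, pi * x**2 /
        (wavelength * f), gives f = pi / (6 * alpha * delta * wavelength):
        focal power grows linearly with the translation delta. A Split-Lohmann
        display applies that translation optically and per pixel, so each
        display point can sit at its own focal distance.
        """
        return math.pi / (6.0 * alpha * delta * wavelength)
    ```

    The inverse relationship (doubling the translation halves the focal length) is what makes a continuous, spatially selective focal stack possible once the translation itself becomes a per-pixel optical quantity.
    
    
    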

     