This content will become publicly available on August 1, 2026

Title: A Monte Carlo Rendering Framework for Simulating Optical Heterodyne Detection
Optical heterodyne detection (OHD) employs coherent light and optical interference techniques (Fig. 1-(A)) to extract physical parameters, such as velocity or distance, that are encoded in the frequency modulation of the light. With its superior signal-to-noise ratio compared to incoherent detection methods, such as time-of-flight lidar, OHD has become integral to applications requiring high sensitivity, including autonomous navigation, atmospheric sensing, and biomedical velocimetry. However, current simulation tools for OHD focus narrowly on specific applications, relying on domain-specific settings such as restricted reflection functions, scene configurations, or single-bounce assumptions, which limit their applicability. In this work, we introduce a flexible and general framework for spectral-domain simulation of OHD. We demonstrate that the classical radiometry-based path-integral formulation can be adapted and extended to simulate OHD measurements in the spectral domain. This enables us to leverage the rich modeling and sampling capabilities of existing Monte Carlo path tracing techniques. Our formulation shares structural similarities with transient rendering but operates in the spectral domain and accounts for the Doppler effect (Fig. 1-(B)). While simulators for the Doppler effect in incoherent (intensity) detection methods exist, they are largely unsuitable for simulating OHD. We use a microsurface interpretation to show that these two Doppler imaging techniques capture different physical quantities and thus require different simulation frameworks. We validate the correctness and predictive power of our simulation framework by qualitatively comparing the simulations with real-world captured data for three different OHD applications: FMCW lidar, blood flow velocimetry, and wind Doppler lidar (Fig. 1-(C)).
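To make the spectral-domain idea concrete, the following is a minimal, single-bounce Python/NumPy sketch of accumulating Doppler-shifted path contributions into a frequency histogram for a monostatic coherent lidar. It is not the paper's general path-integral formulation: the wavelength, scene (a diffuse plane receding at 5 m/s), throughput weight, and binning are illustrative assumptions only.

# A minimal single-bounce sketch of accumulating Doppler-shifted path
# contributions into a spectral-domain histogram for a monostatic coherent
# lidar. The scene, wavelength, and weights are illustrative assumptions,
# not the paper's general path-integral formulation.
import numpy as np

WAVELENGTH = 1.55e-6          # assumed lidar wavelength [m]
rng = np.random.default_rng(0)

def sample_surface_point():
    """Hypothetical scene: a diffuse plane at z = 10 m receding at 5 m/s."""
    x, y = rng.uniform(-0.01, 0.01, size=2)          # small illuminated spot
    return np.array([x, y, 10.0]), np.array([0.0, 0.0, 5.0])

def doppler_shift(direction, velocity):
    """Two-way (monostatic) Doppler shift magnitude: |f_d| = 2 |v_r| / lambda."""
    return 2.0 * abs(np.dot(velocity, direction)) / WAVELENGTH

def render_spectrum(n_paths=100_000, n_bins=256, f_max=1.5e7):
    """Monte Carlo estimate of received power binned over Doppler frequency."""
    spectrum = np.zeros(n_bins)
    edges = np.linspace(0.0, f_max, n_bins + 1)
    sensor = np.zeros(3)
    for _ in range(n_paths):
        p, v = sample_surface_point()
        d = p - sensor
        r = np.linalg.norm(d)
        d /= r
        f_d = doppler_shift(d, v)
        # Single-bounce radiometric throughput (1/r^2 falloff, albedo 0.5); a
        # full renderer would evaluate the complete path contribution here.
        weight = 0.5 / (r * r)
        i = np.searchsorted(edges, f_d) - 1
        if 0 <= i < n_bins:
            spectrum[i] += weight
    return edges, spectrum / n_paths

edges, spectrum = render_spectrum()
k = int(np.argmax(spectrum))
print(f"estimated Doppler peak near {0.5 * (edges[k] + edges[k + 1]):.3e} Hz")
# Expected peak near 2 * 5 / 1.55e-6, about 6.45e6 Hz, for this toy scene.

For this toy scene the histogram peaks near 6.45 MHz, the two-way shift of a 5 m/s radial velocity at 1.55 µm; the paper's framework generalizes this single-bounce picture to full multi-bounce path integrals and arbitrary reflection models.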
Award ID(s): 2403122, 1844538
PAR ID: 10643522
Author(s) / Creator(s): ; ; ; ; ; ;
Publisher / Repository: Association for Computing Machinery
Date Published:
Journal Name: ACM Transactions on Graphics
Volume: 44
Issue: 4
ISSN: 0730-0301
Page Range / eLocation ID: 1 to 19
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Conventional rendering techniques are primarily designed and optimized for single-frame rendering. In practical applications, such as scene editing and animation rendering, users frequently encounter scenes where only a small portion is modified between consecutive frames. In this paper, we develop a novel approach to incremental re-rendering of scenes with dynamic objects, where only a small part of a scene moves from one frame to the next. We formulate the difference (or residual) in the image between two frames as a (correlated) light-transport integral, which we call the residual path integral. Efficient numerical solution of this integral then involves (1) devising importance sampling strategies to focus on paths with non-zero residual-transport contributions and (2) choosing appropriate mappings between the native path spaces of the two frames. We introduce a set of path importance sampling strategies that trace from the moving object(s), which are the sources of residual energy. We explore path mapping strategies that generalize those from gradient-domain path tracing to our importance sampling techniques, specifically for dynamic scenes. Additionally, our formulation can be applied to material editing as a simpler special case. We demonstrate speed-ups over previous correlated sampling of path differences and over rendering the new frame independently. Our formulation brings new insights into the re-rendering problem and paves the way for devising new types of sampling techniques and path mappings with different trade-offs.
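As a rough illustration of the correlated estimation idea in entry 1, the Python sketch below estimates a residual by evaluating stand-in new-frame and old-frame path contributions on mapped path pairs. The sampler, path mapping, and contribution functions are placeholders, not the paper's strategies.

# Hedged sketch of a correlated residual estimator between two frames.
# The path sampler, path mapping T, and contribution functions below are
# placeholders; the paper's strategies trace from the moving objects and use
# generalized gradient-domain path mappings with the appropriate Jacobians.
import numpy as np

rng = np.random.default_rng(1)

def sample_path_touching_moving_object():
    """Hypothetical sampler: returns an abstract path encoding and its pdf."""
    return rng.uniform(size=3), 1.0          # uniform pdf over the unit cube

def contribution_new(path):
    """Stand-in path contribution f_new(x) in the edited/new frame."""
    return float(np.prod(path))

def map_to_old_frame(path):
    """Path mapping T: identity here; a real mapping follows the moving object
    back to its previous-frame configuration (with a Jacobian factor)."""
    return path, 1.0                          # (mapped path, |Jacobian|)

def contribution_old(path):
    """Stand-in path contribution f_old(T(x)) in the previous frame."""
    return 0.8 * float(np.prod(path))

def estimate_residual(n_samples=50_000):
    total = 0.0
    for _ in range(n_samples):
        x, pdf = sample_path_touching_moving_object()
        y, jac = map_to_old_frame(x)
        # Correlated difference: both frames are evaluated on mapped path
        # pairs, so shared factors cancel and variance concentrates on the
        # actual change between frames.
        total += (contribution_new(x) - contribution_old(y) * jac) / pdf
    return total / n_samples

print(f"estimated residual value: {estimate_residual():.4f}")
# Expected value for these stand-ins: 0.2 * (1/2)**3 = 0.025.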
  2. The Pi Cloud Chamber offers a unique opportunity to study aerosol-cloud microphysics interactions in a steady-state, turbulent environment. In this work, an atmospheric large-eddy simulation (LES) model with spectral bin microphysics is scaled down to simulate these interactions, allowing comparison with experimental results. A simple scalar flux budget model is developed and used to explore the effect of sidewalls on the bulk mixing temperature, water vapor mixing ratio, and supersaturation. The scaled simulation and the simple scalar flux budget model produce comparable bulk mixing scalar values. The LES dynamics results are compared with particle image velocimetry measurements of turbulent kinetic energy, energy dissipation rates, and large-scale oscillation frequencies from the cloud chamber. These simulated results agree quantitatively with the experimental results. Finally, with the bin microphysics included, the LES is able to simulate steady-state cloud conditions and broadening of the cloud droplet size distributions with decreasing droplet number concentration, as observed in the experiments. The results further suggest that collision-coalescence does not contribute significantly to this broadening. This opens a path for further detailed intercomparison of laboratory and simulation results for model validation and exploration of specific physical processes.
  3. This paper presents a Multiplicative Extended Kalman Filter (MEKF) framework using a state-of-the-art velocimeter Light Detection and Ranging (LIDAR) sensor for Terrain Relative Navigation (TRN) applications. The newly developed velocimeter LIDAR is capable of providing simultaneous position, Doppler velocity, and reflectivity measurements for every point in the point cloud. This information, along with pseudo-measurements from point cloud registration techniques, a novel bulk velocity batch state estimation process, and inertial measurement data, is fused within a traditional Kalman filter architecture. Results from extensive emulation robotics experiments performed at Texas A&M's Land, Air, and Space Robotics (LASR) laboratory and from Monte Carlo simulations are presented to evaluate the efficacy of the proposed algorithms.
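For reference, the Python sketch below shows the multiplicative attitude correction at the core of any MEKF. The quaternion convention is an assumption, and the full filter in entry 3 additionally fuses velocimeter-LIDAR Doppler velocities, point-cloud registration pseudo-measurements, and IMU data, none of which is modeled here.

# Minimal sketch of the multiplicative attitude update at the core of an MEKF
# (scalar-last quaternion convention assumed). The full filter described above
# also fuses velocimeter-LIDAR Doppler velocities, point-cloud registration
# pseudo-measurements, and IMU data, none of which is modeled here.
import numpy as np

def quat_mul(q, p):
    """Hamilton product of quaternions in [x, y, z, w] (scalar-last) form."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = p
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def mekf_attitude_update(q_ref, delta_theta):
    """Fold a small-angle attitude error estimate (3-vector, rad) back into the
    reference quaternion multiplicatively, then re-normalize."""
    dq = np.concatenate([0.5 * delta_theta, [1.0]])   # small-angle error quaternion
    q_new = quat_mul(q_ref, dq)
    return q_new / np.linalg.norm(q_new)

# Example: correct an identity attitude by a 0.01 rad error about the z-axis.
q_identity = np.array([0.0, 0.0, 0.0, 1.0])
print(mekf_attitude_update(q_identity, np.array([0.0, 0.0, 0.01])))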
  4. There is a need for long-term observations of cloud and precipitation fall speeds for validating and improving rainfall forecasts from climate models. To this end, the U.S. Department of Energy Atmospheric Radiation Measurement (ARM) user facility Southern Great Plains (SGP) site at Lamont, Oklahoma, hosts five ARM Doppler lidars that can measure cloud and aerosol properties. In particular, the ARM Doppler lidars record Doppler spectra that contain information about the fall speeds of cloud and precipitation particles. However, due to bandwidth and storage constraints, the Doppler spectra are not routinely stored. This calls for the automation of cloud and rain detection in ARM Doppler lidar data so that the spectral data in clouds can be selectively saved and further analyzed. During the ARMing the Edge field experiment, a Waggle node capable of performing machine learning applications in situ was deployed at the ARM SGP site for this purpose. In this paper, we develop and test four algorithms for the Waggle node to automatically classify ARM Doppler lidar data. We demonstrate that supervised learning using a ResNet50-based classifier correctly classifies 97.6% of the clear-air images and 94.7% of the cloudy images, outperforming traditional peak detection methods. We also show that a convolutional autoencoder paired with k-means clustering identifies 10 clusters in the ARM Doppler lidar data. Three clusters correspond to mostly clear conditions with scattered high clouds, and the other seven correspond to cloudy conditions with varying cloud-base heights.
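A hedged Python sketch of the kind of ResNet50-based clear-air vs. cloudy classifier described in entry 4 is shown below; the input rendering, weights, and class labels are assumptions, not the ARM/Waggle implementation.

# Hedged sketch of a ResNet50-based clear-air vs. cloudy image classifier,
# similar in spirit to the supervised approach described above. The input
# rendering, weights, and class labels are assumptions, not the ARM/Waggle code.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int = 2) -> nn.Module:
    """ResNet50 backbone with its final layer replaced for two classes."""
    model = models.resnet50(weights=None)    # or load pretrained ImageNet weights
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_classifier()
# Doppler-lidar time-height images would first be rendered to 3-channel tensors.
dummy_batch = torch.randn(4, 3, 224, 224)
logits = model(dummy_batch)                  # shape [4, 2]: clear vs. cloudy scores
print(logits.shape)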