We present a high-speed underwater optical backscatter communication technique based on acousto-optic light steering. Our approach enables underwater assets to transmit data at rates potentially reaching hundreds of Mbps, vastly outperforming current state-of-the-art optical and underwater backscatter systems, which typically operate at only a few kbps. In our system, a base station illuminates the backscatter device with a pulsed laser and captures the retroreflected signal using an ultrafast photodetector. The backscatter device comprises a retroreflector and a 2 MHz ultrasound transducer. The transducer generates pressure waves that dynamically modulate the refractive index of the surrounding medium, steering the light either toward the photodetector (encoding bit 1) or away from it (encoding bit 0). Using a 3-bit redundancy scheme, our prototype achieves a communication rate of approximately 0.66 Mbps with an energy consumption of ≤ 1 μJ/bit, representing a 60× improvement over prior techniques. We validate its performance through extensive laboratory experiments in which remote underwater assets wirelessly transmit multimedia data to the base station under various environmental conditions.
Free, publicly-accessible full text available December 1, 2026.
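As a concrete illustration, here is a minimal sketch of the 3-bit redundancy scheme described above, written in Python with assumed names and thresholds (it is not the authors' code): each data bit is sent three times and recovered by majority vote, consistent with the reported rate, since a 2 MHz switching rate divided by 3 gives roughly 0.66 Mbps.

```python
import numpy as np

REDUNDANCY = 3  # each data bit is transmitted three times

def encode(bits):
    """Repeat each bit REDUNDANCY times for transmission (steer toward/away)."""
    return np.repeat(np.asarray(bits, dtype=np.uint8), REDUNDANCY)

def decode(detector_samples, threshold=0.5):
    """Threshold the photodetector samples, then majority-vote each group of 3."""
    raw = (np.asarray(detector_samples) > threshold).astype(np.uint8)
    usable = len(raw) // REDUNDANCY * REDUNDANCY
    groups = raw[:usable].reshape(-1, REDUNDANCY)
    return (groups.sum(axis=1) * 2 >= REDUNDANCY).astype(np.uint8)

# A noisy channel corrupts one of the three copies; the vote still recovers it.
tx = encode([1, 0, 1]).astype(float)
tx[1] = 0.0  # one corrupted copy of the first bit
assert decode(tx).tolist() == [1, 0, 1]
```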
-
Optical heterodyne detection (OHD) employs coherent light and optical interference techniques (Fig. 1-(A)) to extract physical parameters, such as velocity or distance, that are encoded in the frequency modulation of the light. With its superior signal-to-noise ratio compared to incoherent detection methods, such as time-of-flight lidar, OHD has become integral to applications requiring high sensitivity, including autonomous navigation, atmospheric sensing, and biomedical velocimetry. However, current simulation tools for OHD focus narrowly on specific applications, relying on domain-specific settings such as restricted reflection functions, scene configurations, or single-bounce assumptions, which limit their applicability. In this work, we introduce a flexible and general framework for spectral-domain simulation of OHD. We demonstrate that the classical radiometry-based path-integral formulation can be adapted and extended to simulate OHD measurements in the spectral domain. This enables us to leverage the rich modeling and sampling capabilities of existing Monte Carlo path tracing techniques. Our formulation shares structural similarities with transient rendering but operates in the spectral domain and accounts for the Doppler effect (Fig. 1-(B)). While simulators for the Doppler effect in incoherent (intensity) detection methods exist, they are largely unsuitable for simulating OHD. We use a microsurface interpretation to show that these two Doppler imaging techniques capture different physical quantities and thus require different simulation frameworks. We validate the correctness and predictive power of our simulation framework by qualitatively comparing the simulations with real-world captured data for three different OHD applications: FMCW lidar, blood flow velocimetry, and wind Doppler lidar (Fig. 1-(C)).
Free, publicly-accessible full text available August 1, 2026.
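To make concrete what a spectral-domain OHD simulator must reproduce, here is a minimal sketch, assuming a triangular-chirp FMCW lidar with illustrative parameter values (none taken from the paper): the beat frequency at the photodetector mixes a range term from the chirp delay with a Doppler term from target motion, and measuring both chirp slopes disentangles them.

```python
C = 3.0e8  # speed of light, m/s

def beat_frequencies(range_m, velocity_mps, bandwidth_hz, chirp_s, wavelength_m):
    """Up- and down-chirp beat frequencies for a target at range_m moving at velocity_mps."""
    f_range = 2.0 * bandwidth_hz * range_m / (C * chirp_s)  # chirp-delay term
    f_doppler = 2.0 * velocity_mps / wavelength_m           # Doppler term
    return f_range - f_doppler, f_range + f_doppler

def invert(f_up, f_down, bandwidth_hz, chirp_s, wavelength_m):
    """Recover (range, velocity) from the two measured beat frequencies."""
    f_range = 0.5 * (f_up + f_down)
    f_doppler = 0.5 * (f_down - f_up)
    return f_range * C * chirp_s / (2.0 * bandwidth_hz), 0.5 * f_doppler * wavelength_m

# Example: 50 m target approaching at 2 m/s, 1 GHz chirp over 100 us at 1550 nm.
f_up, f_dn = beat_frequencies(50.0, 2.0, 1e9, 1e-4, 1550e-9)
print(invert(f_up, f_dn, 1e9, 1e-4, 1550e-9))  # ~ (50.0, 2.0)
```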
-
Event cameras, which feature pixels that independently respond to changes in brightness, are becoming increasingly popular in high-speed applications due to their lower latency, reduced bandwidth requirements, and enhanced dynamic range compared to traditional frame-based cameras. Numerous imaging and vision techniques have leveraged event cameras for high-speed scene understanding by capturing high-framerate, high-dynamic-range videos, primarily utilizing the temporal advantages inherent to event cameras. Additionally, imaging and vision techniques have utilized the light field, a complementary dimension to temporal information, for enhanced scene understanding. In this work, we propose "Event Fields", a new approach that utilizes innovative optical designs for event cameras to capture light fields at high speed. We develop the underlying mathematical framework for Event Fields and introduce two foundational frameworks to capture them practically: spatial multiplexing to capture temporal derivatives and temporal multiplexing to capture angular derivatives. To realize these, we design two complementary optical setups: one using a kaleidoscope for spatial multiplexing and another using a galvanometer for temporal multiplexing. We evaluate the performance of both designs using a custom-built simulator and real hardware prototypes, showcasing their distinct benefits. Our event fields unlock the full advantages of typical light fields, like post-capture refocusing and depth estimation, now supercharged for high-speed and high-dynamic-range scenes. This novel light-sensing paradigm opens doors to new applications in photography, robotics, and AR/VR, and presents fresh challenges in rendering and machine learning.
Free, publicly-accessible full text available June 10, 2026.
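For context, event cameras are typically modeled as firing an event whenever a pixel's log brightness drifts past a contrast threshold since its last event. The sketch below implements that standard model in Python; the function name, threshold value, and reset rule are common conventions assumed here, not details of the paper's custom simulator.

```python
import numpy as np

def events_from_video(frames, timestamps, contrast=0.2, eps=1e-6):
    """frames: (T, H, W) linear intensities -> list of events (t, y, x, polarity)."""
    log_ref = np.log(frames[0] + eps)  # per-pixel reference log-intensity
    events = []
    for t, frame in zip(timestamps[1:], frames[1:]):
        diff = np.log(frame + eps) - log_ref
        for y, x in zip(*np.nonzero(np.abs(diff) >= contrast)):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, y, x, polarity))
            log_ref[y, x] += polarity * contrast  # move reference toward new level
    return events
```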
-
We introduce a structured light system that enables full-frame 3D scanning at speeds of 1000 fps, four times faster than the previous fastest systems. Our key innovation is the use of a custom acousto-optic light scanning device capable of projecting two million light planes per second. Coupling this device with an event camera allows our system to overcome the key bottleneck that prevented previous event-camera-based structured light systems from achieving higher scanning speeds: the limited rate of illumination steering. Unlike these previous systems, ours uses the event camera's full-frame bandwidth, shifting the speed bottleneck from the illumination side to the imaging side. To mitigate this new bottleneck and further increase scanning speed, we introduce adaptive scanning strategies that leverage the event camera's asynchronous operation by selectively illuminating regions of interest, thereby achieving effective scanning speeds an order of magnitude beyond the camera's theoretical limit.
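The depth recovery underlying such a system is ray-plane triangulation: each event is matched, via its timestamp, to the light plane being projected at that instant, and the 3D point is the intersection of the camera ray with that plane. A minimal sketch with assumed calibration values:

```python
import numpy as np

def triangulate(pixel, K_inv, plane_normal, plane_point):
    """Intersect the camera ray through `pixel` with a projected light plane.

    pixel: (u, v) event location; K_inv: inverse camera intrinsics;
    plane_normal, plane_point: the light plane in camera coordinates.
    """
    ray = K_inv @ np.array([pixel[0], pixel[1], 1.0])        # ray direction from camera origin
    t = (plane_normal @ plane_point) / (plane_normal @ ray)  # ray-plane intersection depth
    return t * ray                                           # 3D point in the camera frame

# Illustrative intrinsics and plane; real values come from projector-camera calibration.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
point = triangulate((400, 260), np.linalg.inv(K),
                    np.array([1.0, 0.0, -0.2]), np.array([0.0, 0.0, 0.5]))
```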
-
Differentiable 3D Gaussian splatting (GS) is emerging as a prominent technique in computer vision and graphics for reconstructing 3D scenes. GS represents a scene as a set of 3D Gaussians with varying opacities and employs a computationally efficient splatting operation along with analytical derivatives to compute the 3D Gaussian parameters given scene images captured from various viewpoints. Unfortunately, capturing surround-view (360° viewpoint) images is impossible or impractical in many real-world imaging scenarios, including underwater imaging, rooms inside a building, and autonomous navigation. In these restricted-baseline imaging scenarios, the GS algorithm suffers from a well-known ‘missing cone’ problem, which results in poor reconstruction along the depth axis. In this paper, we demonstrate that using transient data (from sonars) allows us to address the missing cone problem by sampling high-frequency data along the depth axis. We extend the Gaussian splatting algorithms for two commonly used sonars and propose fusion algorithms that simultaneously utilize RGB camera data and sonar data. Through simulations, emulations, and hardware experiments across various imaging scenarios, we show that the proposed fusion algorithms lead to significantly better novel view synthesis (5 dB improvement in PSNR) and 3D geometry reconstruction (60% lower Chamfer distance).
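For reference, the Chamfer distance quoted above is commonly computed as the symmetric mean nearest-neighbor distance between a reconstructed and a ground-truth point cloud; the sketch below assumes that standard formulation (the paper's exact variant may differ).

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    """Mean nearest-neighbor distance from A to B, plus from B to A."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # nearest B-neighbor of each A point
    d_ba, _ = cKDTree(points_a).query(points_b)  # nearest A-neighbor of each B point
    return d_ab.mean() + d_ba.mean()

ground_truth = np.random.rand(1000, 3)
reconstruction = ground_truth + 0.01 * np.random.randn(1000, 3)
print(chamfer_distance(reconstruction, ground_truth))
```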
-
Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring. Treacherous operating conditions, fragile surroundings, and limited navigation control often dictate that submersibles restrict their range of motion and, thus, the baseline over which they can capture measurements. In the context of 3D scene reconstruction, it is well known that smaller baselines make reconstruction more challenging. Our work develops a physics-based multimodal acoustic-optical neural surface reconstruction framework (AONeuS) capable of effectively integrating high-resolution RGB measurements with low-resolution depth-resolved imaging sonar measurements. By fusing these complementary modalities, our framework can reconstruct accurate high-resolution 3D surfaces from measurements captured over heavily restricted baselines. Through extensive simulations and in-lab experiments, we demonstrate that AONeuS dramatically outperforms recent RGB-only and sonar-only surface reconstruction methods based on inverse differentiable rendering.
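At a high level, fusion frameworks of this kind minimize a joint objective over a shared neural surface: a photometric loss on rendered RGB plus a loss on simulated sonar returns. The sketch below is a hedged illustration of that structure only; the renderer functions and weighting are hypothetical placeholders, not AONeuS's actual formulation.

```python
import torch

def fusion_loss(surface, render_rgb, render_sonar,
                rgb_rays, rgb_gt, sonar_bins, sonar_gt, sonar_weight=0.5):
    """Joint RGB + sonar objective over shared surface parameters (illustrative)."""
    rgb_pred = render_rgb(surface, rgb_rays)        # differentiable RGB rendering
    sonar_pred = render_sonar(surface, sonar_bins)  # differentiable sonar rendering
    loss_rgb = torch.mean((rgb_pred - rgb_gt) ** 2)
    loss_sonar = torch.mean((sonar_pred - sonar_gt) ** 2)
    return loss_rgb + sonar_weight * loss_sonar     # gradients flow to both modalities
```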