Frequency-modulated continuous wave (FMCW) light detection and ranging (LiDAR) is an emerging 3D ranging technology that offers high sensitivity and ranging precision. Due to the limited bandwidth of digitizers and the speed limitations of mechanical beam-steering scanners, meter-scale FMCW LiDAR systems typically suffer from a low 3D frame rate, which greatly restricts their applications in real-time imaging of dynamic scenes. In this work, we report a high-speed FMCW-based 3D imaging system that combines a grating for beam steering with a compressed time-frequency analysis approach for depth retrieval. We thoroughly investigate the localization accuracy and precision of our system both theoretically and experimentally. Finally, we demonstrate 3D imaging of multiple static and moving objects, including a flexing human hand. The demonstrated technique achieves submillimeter localization accuracy over a tens-of-centimeter imaging range with an overall depth voxel acquisition rate of 7.6 MHz, enabling densely sampled 3D imaging at video rate.
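As background on the depth retrieval the abstract refers to: in FMCW LiDAR, mixing the delayed return with the outgoing chirp produces a beat frequency proportional to distance. A minimal sketch of that conversion, using illustrative chirp parameters rather than those of the system described above:

```python
# Hedged sketch: FMCW range retrieval from the measured beat frequency.
# The round-trip delay maps the linear chirp (bandwidth B over period T)
# to a beat note f_beat = (B / T) * (2 * d / c), so
# d = f_beat * T * c / (2 * B). B and T here are illustrative values.

C = 299_792_458.0  # speed of light, m/s

def fmcw_depth(f_beat_hz: float, bandwidth_hz: float, chirp_period_s: float) -> float:
    """Convert a measured beat frequency to a one-way distance in meters."""
    return f_beat_hz * chirp_period_s * C / (2.0 * bandwidth_hz)

# Example: a 100 GHz chirp over 10 microseconds; a 10 MHz beat note
# corresponds to a target roughly 15 cm away.
d = fmcw_depth(10e6, 100e9, 10e-6)
print(f"{d * 100:.1f} cm")  # ~15.0 cm
```

A time-frequency analysis of the digitized beat signal (as in the compressed approach mentioned above) amounts to estimating this beat frequency per pixel.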
- Publisher / Repository: Nature Publishing Group
- Journal Name: Nature Communications
- Sponsoring Org: National Science Foundation
More Like this
We present a real-time spectral-scanning frequency-modulated continuous wave (FMCW) 3D imaging and velocimetry system that produces 3D depth maps at 33 Hz, with a 48° × 68° field of view (FOV) and a 32.8-cm depth range. Each depth map consists of 507 × 500 pixels, with 0.095° × 0.14° angular resolution and 2.82-mm depth resolution. The system employs a grating for beam steering and a telescope for angular FOV magnification. Quantitative depth, reflectivity, and axial velocity measurements of a static 3D-printed depth-variation target and a moving robotic arm are demonstrated.
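The quoted angular resolution follows directly from dividing the field of view by the pixel count; a quick consistency check of those figures:

```python
# Sanity check: the field of view divided by the depth-map pixel count
# reproduces the angular resolution quoted for the spectral-scanning system.
fov_deg = (48.0, 68.0)   # horizontal x vertical FOV, degrees
pixels = (507, 500)      # depth-map pixel counts per axis

res_deg = tuple(f / n for f, n in zip(fov_deg, pixels))
print(res_deg)  # ≈ (0.0947, 0.136) degrees per pixel
```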
Frequency-modulated continuous wave laser ranging (FMCW LiDAR) enables distance mapping with simultaneous position and velocity information, is immune to stray light, can achieve long range, operates in the eye-safe region around 1550 nm, and achieves high sensitivity. Despite these advantages, it imposes the dual requirement of a narrow-linewidth, low-noise laser that can also be precisely chirped. Integrated silicon-based lasers, compatible with low-cost, large-volume wafer-scale manufacturing, have experienced major advances and are now employed on a commercial scale in data centers, and impressive progress has produced integrated lasers with ultra-narrow, sub-100-Hz intrinsic linewidth based on optical feedback from photonic circuits. However, these lasers presently lack fast nonthermal tuning, i.e., the frequency agility required for coherent ranging. Here, we demonstrate a hybrid photonic integrated laser that exhibits a very narrow intrinsic linewidth of 25 Hz while offering linear, hysteresis-free, and mode-hop-free tuning beyond 1 GHz with up to megahertz actuation bandwidth, constituting a tuning speed of 1.6 × 10^15 Hz/s. Our approach uses foundry-based technologies: ultralow-loss (1 dB/m) Si3N4 photonic microresonators combined with aluminium nitride (AlN) or lead zirconium titanate (PZT) microelectromechanical systems (MEMS) based stress-optic actuation. Electrically driven low-phase-noise lasing is attained by self-injection locking of an indium phosphide (InP) laser chip and is limited only by fundamental thermo-refractive noise at mid-range offsets. By utilizing difference-drive and apodization of the photonic chip to suppress mechanical vibrations, a flat actuation response up to 10 MHz is achieved.
We leverage this capability to demonstrate a compact coherent LiDAR engine that generates up to 800 kHz FMCW triangular optical chirp signals, requiring neither active linearization nor predistortion compensation, and perform a 10 m optical ranging experiment with a resolution of 12.5 cm. Our results constitute a photonic integrated laser system for scenarios where high compactness, fast frequency actuation, and high spectral purity are required.
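The quoted tuning speed and ranging resolution are consistent with standard FMCW relations: a symmetric triangular chirp sweeping excursion B at repetition rate f_rep has slope B / (1 / (2 f_rep)), and range resolution is c / (2B). A hedged arithmetic check, taking the excursion values as rough inferences from the text rather than reported parameters:

```python
# Hedged check of the figures quoted above using textbook FMCW relations.
C = 299_792_458.0  # speed of light, m/s

def chirp_rate(excursion_hz: float, rep_rate_hz: float) -> float:
    """Tuning speed (Hz/s) of a symmetric triangular chirp: B / (half period)."""
    return excursion_hz * 2.0 * rep_rate_hz

def range_resolution(excursion_hz: float) -> float:
    """FMCW range resolution c / (2B), in meters."""
    return C / (2.0 * excursion_hz)

# A ~1 GHz excursion at an 800 kHz repetition rate reproduces the
# 1.6e15 Hz/s tuning speed, and a ~1.2 GHz excursion the 12.5 cm resolution.
print(f"{chirp_rate(1e9, 800e3):.2e} Hz/s")      # 1.60e+15
print(f"{range_resolution(1.2e9) * 100:.1f} cm")  # ~12.5
```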
Optical phased arrays (OPAs) implemented in integrated photonic circuits could enable a variety of 3D sensing, imaging, illumination, and ranging applications, and their convergence in new lidar technology. However, current integrated OPA approaches do not scale (in control complexity, power consumption, or optical efficiency) to the large aperture sizes needed to support medium- to long-range lidar. We present the serpentine OPA (SOPA), a new OPA concept that addresses these fundamental challenges and enables architectures that scale up to large apertures. The SOPA is based on a serially interconnected array of low-loss grating waveguides and supports fully passive, 2D wavelength-controlled beam steering. A fundamentally space-efficient design that folds the feed network into the aperture also enables scalable tiling of SOPAs into large apertures with a high fill factor. We experimentally demonstrate, to the best of our knowledge, the first SOPA, using a 1450–1650 nm wavelength sweep to produce 16,500 addressable spots in a two-dimensional array. We also demonstrate, for the first time, far-field interference of beams from two separate OPAs on a single silicon photonic chip, as an initial step toward long-range computational imaging lidar based on novel active aperture synthesis schemes.
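Wavelength-controlled steering along a grating waveguide follows the standard grating-coupler phase-matching relation sin θ = n_eff − λ/Λ. A hedged sketch with an assumed effective index and grating pitch, purely illustrative and not the SOPA's actual design values:

```python
import math

# Hedged sketch of wavelength-controlled beam steering from a grating
# waveguide: first-order phase matching gives sin(theta) = n_eff - lam / pitch.
# N_EFF and PITCH_UM are illustrative assumptions, not SOPA parameters.
N_EFF = 1.8      # assumed waveguide effective index
PITCH_UM = 1.0   # assumed grating period, micrometers

def emission_angle_deg(wavelength_um: float) -> float:
    """Far-field emission angle (degrees from surface normal) vs. wavelength."""
    return math.degrees(math.asin(N_EFF - wavelength_um / PITCH_UM))

# Sweeping 1450 -> 1650 nm steers the beam by roughly 12 degrees
# under these assumed parameters.
span = emission_angle_deg(1.45) - emission_angle_deg(1.65)
print(f"{span:.1f} deg")
```

In the SOPA, this wavelength-to-angle mapping provides one steering axis passively, which is why a single wavelength sweep addresses a 2D grid of spots.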
3D sensing is a primitive function that enables imaging with depth information, generally achieved via the time-of-flight (ToF) principle. However, the time-to-digital converters (TDCs) in conventional ToF sensors are usually bulky and complex, and exhibit large delay and power loss. To overcome these issues, a resistive time-of-flight (R-ToF) sensor that measures depth information in the analog domain by mimicking the biological process of spike-timing-dependent plasticity (STDP) is proposed herein. The R-ToF sensors, based on avalanche photodiodes (APDs) integrated with memristive intelligent matter, achieve a scan depth of up to 55 cm (≈89% accuracy and 2.93 cm standard deviation) and low power consumption (0.5 nJ/step) without TDCs. In-depth computing is realized via R-ToF 3D imaging and memristive classification. This R-ToF system opens a new pathway for miniaturized, energy-efficient neuromorphic vision engineering that can be harnessed in light detection and ranging (LiDAR), autonomous vehicles, biomedical in vivo imaging, and augmented/virtual reality.
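The timing scales involved make clear why TDC-free analog readout is attractive: the ToF relation d = c · Δt / 2 means that centimeter-scale depth differences correspond to delays of tens of picoseconds. A short worked check:

```python
# Hedged sketch of the time-of-flight relation the R-ToF sensor evaluates
# in the analog domain: depth d = c * dt / 2 for round-trip delay dt.
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_s: float) -> float:
    """Depth in meters from a round-trip delay in seconds."""
    return C * round_trip_s / 2.0

def round_trip_time(depth_m: float) -> float:
    """Round-trip delay in seconds for a given depth in meters."""
    return 2.0 * depth_m / C

# The 55 cm maximum scan depth corresponds to only a ~3.7 ns round trip,
# so resolving 2.93 cm requires ~200 ps timing precision -- hence the
# appeal of replacing digital TDCs with analog STDP-like timing.
print(f"{round_trip_time(0.55) * 1e9:.2f} ns")  # ~3.67
```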
This study presents an overview and a few case studies to explicate the transformative power of diverse imaging techniques for smart manufacturing, focusing largely on various in-situ and ex-situ imaging methods for monitoring fusion-based metal additive manufacturing (AM) processes such as directed energy deposition (DED), selective laser melting (SLM), and electron beam melting (EBM). In-situ imaging techniques, encompassing high-speed cameras, thermal cameras, and digital cameras, are becoming increasingly affordable and complementary, and are emerging as vital for real-time monitoring, enabling continuous assessment of build quality. For example, high-speed cameras capture dynamic laser-material interaction, swiftly detecting defects, while thermal cameras identify the thermal distribution of the melt pool and potential anomalies. The data gathered from in-situ imaging are then used to extract pertinent features that facilitate effective control of process parameters, thereby optimizing AM processes and minimizing defects. On the other hand, ex-situ imaging techniques play a critical role in comprehensive component analysis. Scanning electron microscopy (SEM), optical microscopy, and 3D profilometry enable detailed characterization of microstructural features, surface roughness, porosity, and dimensional accuracy. Employing a battery of artificial intelligence (AI) algorithms, information from diverse imaging and other multi-modal data sources can be fused to achieve a more comprehensive understanding of a manufacturing process. This integration enables informed decision-making for process optimization and quality assurance, as AI algorithms analyze the combined data to extract relevant insights and patterns. Ultimately, the power of imaging in additive manufacturing lies in its ability to deliver real-time monitoring, precise control, and comprehensive analysis, empowering manufacturers to achieve high levels of precision, reliability, and productivity in the production of components.