Title: NeRF-enabled Analysis-Through-Synthesis for ISAR Imaging of Small Everyday Objects with Sparse and Noisy UWB Radar Data
Inverse Synthetic Aperture Radar (ISAR) imaging of small everyday objects presents a formidable challenge due to their limited Radar Cross-Section (RCS) and the inherent resolution constraints of radar systems. Existing ISAR reconstruction methods, including backprojection (BP), often require complex setups and controlled environments, rendering them impractical for many real-world noisy scenarios. In this paper, we propose a novel Analysis-through-Synthesis (ATS) framework enabled by Neural Radiance Fields (NeRF) for high-resolution coherent ISAR imaging of small objects using sparse and noisy Ultra-Wideband (UWB) radar data with an inexpensive and portable setup. Our end-to-end framework integrates ultra-wideband radar wave propagation, reflection characteristics, and scene priors, enabling efficient 2D scene reconstruction without the need for costly anechoic chambers or complex measurement test beds. With qualitative and quantitative comparisons, we demonstrate that the proposed method outperforms traditional techniques and generates ISAR images of complex scenes with multiple targets and intricate structures in Non-Line-of-Sight (NLOS) and noisy scenarios, particularly with a limited number of views and sparse UWB radar scans. This work represents a significant step towards practical, cost-effective ISAR imaging of small everyday objects, with broad implications for robotics and mobile sensing applications.
Award ID(s):
2326905
PAR ID:
10561206
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Format(s):
Medium: X
Location:
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Sponsoring Org:
National Science Foundation
More Like this
  1. Isaacs, Jason C.; Bishop, Steven S. (Ed.)
    Ultra-wideband (UWB) ground penetrating radar (GPR) is an effective, widely used tool for detecting and mapping buried targets. However, traditional ground penetrating radar systems struggle to resolve and identify congested target configurations and irregularly shaped targets. This is a significant limitation for many municipalities that seek to use GPR to locate and image underground utility pipes. This research investigates the implementation of orbital angular momentum (OAM) control in a UWB GPR, with the goal of addressing these limitations. Control of OAM is a novel technique that leverages an additional degree of freedom offered by spatially structured helical waveforms. This paper examines several free-space and buried target configurations to determine the ability of helical OAM waveforms to improve the detectability and distinguishability of buried objects, including those with symmetric, asymmetric, and chiral geometries. Microwave OAM can be generated using a uniform circular array (UCA) of antennas with phase delays applied according to azimuth angle. Here, a four-channel network analyzer transceiver is connected to a UCA to enable UWB capability. The characteristic phase delays of OAM waveforms are implemented synthetically via signal processing. The viability demonstrated by this method opens design and analysis degrees of freedom for penetrating radar that may help discern challenging targets, such as buried landmines and wires.
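The phase-delay scheme in item 1 (each UCA element fed with an extra phase proportional to its azimuth angle) reduces to a one-line weight computation. The array size and mode number below are illustrative; this is a minimal sketch of the synthetic phase weighting, not the paper's full signal-processing chain:

```python
import numpy as np

# Minimal sketch: synthetic OAM phase weighting for a uniform circular array.
# For OAM mode number l, element n at azimuth theta_n = 2*pi*n/N is fed with
# an extra phase l * theta_n, producing an l-order helical phase front.

def oam_weights(n_elements: int, mode: int) -> np.ndarray:
    """Unit-amplitude complex feed weights for OAM mode `mode` on a UCA."""
    azimuths = 2 * np.pi * np.arange(n_elements) / n_elements
    return np.exp(1j * mode * azimuths)

w = oam_weights(4, 1)  # a four-channel UCA, mode l = +1
print(np.angle(w))     # phases step by pi/2 around the ring (mod 2*pi)
```

In the paper's setup these delays are applied synthetically in post-processing to the four transceiver channels rather than with physical phase shifters, but the weight pattern is the same.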
  2. Non-Line-of-Sight (NLOS) imaging aims at recovering the 3D geometry of objects that are hidden from the direct line of sight. One major challenge with this technique is the weak available multibounce signal, which limits scene size, capture speed, and reconstruction quality. To overcome this obstacle, we introduce a multipixel time-of-flight non-line-of-sight imaging method that combines specifically designed Single Photon Avalanche Diode (SPAD) array detectors with a fast reconstruction algorithm, capturing and reconstructing live low-latency videos of non-line-of-sight scenes with natural non-retroreflective objects. We develop a model of the signal-to-noise ratio of non-line-of-sight imaging and use it to devise a method that reconstructs the scene such that signal-to-noise ratio, motion blur, angular resolution, and depth resolution are all independent of scene depth, suggesting that reconstruction of very large scenes may be possible.
  3. An emerging technology for indoor localization is ultra-wideband (UWB). UWB has been making waves as a system that can be both secure and function as an "indoor GPS". The proliferation of UWB is underway, and soon it will be as ubiquitous as Bluetooth or Wi-Fi. With this in mind, the goal of this research is benchmarking the DWM3000EVB module in an ultra-wideband real-time locating system (RTLS). The UWB RTLS created is a three-anchor, one-tag system that can calculate position at just under 100 Hz with an average accuracy of 5 centimeters.
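A three-anchor, one-tag RTLS like the one in item 3 ultimately solves a trilateration problem on each update. Below is a minimal sketch of a linearized least-squares 2D solve; the anchor layout and ranges are made-up example values, not measurements from the DWM3000EVB system:

```python
import numpy as np

# Hedged sketch of three-anchor 2D trilateration. Subtracting the first
# anchor's circle equation from the others turns |x - a_i|^2 = r_i^2 into a
# linear system 2(a_i - a_0) . x = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2.

def trilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Least-squares tag position from >= 3 anchor positions and ranges."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # example layout (m)
tag = np.array([1.0, 2.0])                                 # ground truth
ranges = np.linalg.norm(anchors - tag, axis=1)             # noiseless ranges
print(trilaterate(anchors, ranges))  # recovers approximately [1. 2.]
```

With noiseless ranges the solve is exact; a real UWB system repeats this at its update rate with noisy two-way-ranging distances, which is where the reported ~5 cm average accuracy comes in.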
  4. Underwater imaging enables nondestructive plankton sampling at frequencies, durations, and resolutions unattainable by traditional methods. These systems necessitate automated processes to identify organisms efficiently. Early underwater image processing used a standard approach: binarizing images to segment targets, then integrating deep learning models for classification. While intuitive, this infrastructure has limitations in handling high concentrations of biotic and abiotic particles, rapid changes in dominant taxa, and highly variable target sizes. To address these challenges, we introduce a new framework that starts with a scene classifier to capture large within-image variation, such as disparities in the layout of particles and dominant taxa. After scene classification, scene-specific Mask regional convolutional neural network (Mask R-CNN) models are trained to separate target objects into different groups. The procedure allows information to be extracted from different image types, while minimizing potential bias for commonly occurring features. Using in situ coastal plankton images, we compared the scene-specific models to a single full Mask R-CNN model encompassing all scene categories. Results showed that the scene-specific approach outperformed the full model, achieving a 20% accuracy improvement on complex noisy images. The full model yielded counts that were up to 78% lower than those enumerated by the scene-specific model for some small-sized plankton groups. We further tested the framework on images from a benthic video camera and an imaging sonar system with good results. The integration of scene classification, which groups similar images together, can improve the accuracy of detection and classification for complex marine biological images.
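The two-stage pipeline in item 4 (a scene classifier followed by scene-specific detectors) amounts to a classify-then-dispatch pattern. The sketch below uses toy stand-in callables in place of the scene classifier and the Mask R-CNN models, purely to show the routing structure:

```python
from typing import Callable, Dict, List

# Hedged sketch of classify-then-dispatch: a scene classifier picks a
# category, then the matching scene-specific detector handles the image.
# Both stages are toy stand-ins; the paper's stages are trained CNNs.

def make_pipeline(
    classify_scene: Callable[[bytes], str],
    detectors: Dict[str, Callable[[bytes], List[str]]],
) -> Callable[[bytes], List[str]]:
    def run(image: bytes) -> List[str]:
        scene = classify_scene(image)
        return detectors[scene](image)  # route to the scene-specific model
    return run

# Toy stand-ins: route "dense" vs. "sparse" inputs (here just byte payloads).
pipeline = make_pipeline(
    classify_scene=lambda img: "dense" if len(img) > 10 else "sparse",
    detectors={
        "dense": lambda img: ["copepod", "detritus"],
        "sparse": lambda img: ["diatom"],
    },
)
print(pipeline(b"short"))  # routed to the sparse-scene detector
```

The payoff described in the abstract comes from each detector only ever seeing images from its own scene category, so the routing layer, not the detectors themselves, carries the within-image-variation burden.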
  5. Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring. Treacherous operating conditions, fragile surroundings, and limited navigation control often dictate that submersibles restrict their range of motion and, thus, the baseline over which they can capture measurements. In the context of 3D scene reconstruction, it is well known that smaller baselines make reconstruction more challenging. Our work develops a physics-based multimodal acoustic-optical neural surface reconstruction framework (AONeuS) capable of effectively integrating high-resolution RGB measurements with low-resolution depth-resolved imaging sonar measurements. By fusing these complementary modalities, our framework can reconstruct accurate high-resolution 3D surfaces from measurements captured over heavily restricted baselines. Through extensive simulations and in-lab experiments, we demonstrate that AONeuS dramatically outperforms recent RGB-only and sonar-only inverse-differentiable-rendering-based surface reconstruction methods.