
This content will become publicly available on August 10, 2024

Title: CoIR: Compressive Implicit Radar
Using millimeter wave (mmWave) signals for imaging has an important advantage: they can penetrate poor environmental conditions such as fog, dust, and smoke that severely degrade optical imaging systems. However, mmWave radars, unlike cameras and LiDARs, suffer from low angular resolution because of their small physical apertures and conventional signal processing techniques. Sparse radar imaging, on the other hand, can increase the aperture size while minimizing power consumption and readout bandwidth. This paper presents CoIR, an analysis-by-synthesis method that leverages the implicit bias of convolutional decoders together with compressed sensing to perform high-accuracy sparse radar imaging. The proposed system is dataset-agnostic and does not require any auxiliary sensors for training or testing. We introduce a sparse array design that allows for a 5.5× reduction in the number of antenna elements compared to conventional MIMO array designs. We demonstrate our system's improved imaging performance over standard mmWave radars and other competitive untrained methods on both simulated and experimental mmWave radar data.
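The compressed-sensing half of this pipeline can be illustrated with a generic sparse-recovery sketch. This is a standard ISTA solver on a random Gaussian measurement matrix, a stand-in for a radar forward model; CoIR's actual measurement operator and its convolutional-decoder prior are not reproduced here, and all sizes below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a sparse scene observed through an undersampled
# linear operator A (a random stand-in for a sparse-array forward model).
n, m, k = 200, 80, 5                      # scene size, measurements, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 3.0
y = A @ x_true                            # noiseless measurements

def ista(A, y, lam=0.05, iters=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L     # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

CoIR replaces the explicit l1 sparsity prior with the implicit bias of an untrained convolutional decoder, but the measurement-consistency idea is the same.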
Journal Name:
IEEE Transactions on Pattern Analysis and Machine Intelligence
Page Range / eLocation ID:
1 to 12
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Emerging autonomous driving systems require reliable perception of 3D surroundings. Unfortunately, current mainstream perception modalities, i.e., camera and Lidar, are vulnerable under challenging lighting and weather conditions. On the other hand, despite their all-weather operation, today's vehicle Radars are limited to location and speed detection. In this paper, we introduce MILLIPOINT, a practical system that advances Radar sensing capability to generate 3D point clouds. The key design principle of MILLIPOINT lies in enabling synthetic aperture radar (SAR) imaging on low-cost commodity vehicle Radars. To this end, MILLIPOINT models the relation between signal variations and Radar movement, enabling self-tracking of the Radar at wavelength-scale precision and thus realizing coherent spatial sampling. Furthermore, MILLIPOINT solves the unique problem of specular reflection by properly focusing on targets with post-imaging processing. It also exploits the Radar's built-in antenna array to estimate the height of reflecting points and eventually generate 3D point clouds. We have implemented MILLIPOINT on a commodity vehicle Radar. Our evaluation results show that MILLIPOINT effectively combats motion errors and specular reflections, and can construct 3D point clouds with much higher density and resolution than existing vehicle Radar solutions.
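    The coherent-sampling idea behind SAR can be sketched with a toy point-target backprojection. The wavelength, aperture, and target geometry below are our own illustrative choices; MILLIPOINT's self-tracking and full processing chain are not reproduced here.

```python
import numpy as np

wavelength = 0.004                      # ~77 GHz automotive radar, metres
positions = np.linspace(0.0, 0.3, 64)   # synthetic aperture along x, metres
target = np.array([0.1, 2.0])           # (x, y) of a point scatterer

# Simulated received phase at each aperture position (two-way path).
ranges = np.hypot(positions - target[0], target[1])
signal = np.exp(-1j * 4 * np.pi * ranges / wavelength)

def backproject(px, py):
    """Phase-compensated coherent sum for a candidate pixel (px, py)."""
    r = np.hypot(positions - px, py)
    return np.abs(np.sum(signal * np.exp(1j * 4 * np.pi * r / wavelength)))

on_target = backproject(0.1, 2.0)       # all 64 terms add in phase
off_target = backproject(-0.05, 2.0)    # phases scramble, sum stays small
```

The contrast between the two sums is what coherent spatial sampling buys: it is only available if the radar's position is known to a fraction of the (millimetre-scale) wavelength, which is why MILLIPOINT's self-tracking matters.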
  2. Robotic geo-fencing and surveillance systems require accurate monitoring of objects if/when they violate perimeter restrictions. In this paper, we seek a solution for depth imaging of such objects of interest at high accuracy (a few tens of cm) over extended ranges (up to 300 meters) from a single vantage point, such as a pole-mounted platform. Unfortunately, the rich literature in depth imaging using camera, lidar, and radar in isolation struggles to meet these tight requirements in real-world conditions. This paper proposes Metamoran, a solution that explores long-range depth imaging of objects of interest by fusing the strengths of two complementary technologies: mmWave radar and camera. Unlike cameras, mmWave radars offer excellent cm-scale depth resolution even at very long ranges. However, their angular resolution is at least 10× worse than that of camera systems. Fusing these two modalities is natural, but in scenes with high clutter and at long ranges, radar reflections are weak and experience spurious artifacts. Metamoran's core contribution is to leverage image segmentation and monocular depth estimation on camera images to help declutter radar and discover true object reflections. We perform a detailed evaluation of Metamoran's depth imaging capabilities in 400 diverse scenarios. Our evaluation shows that Metamoran estimates the depth of static objects up to 90 m away and moving objects up to 305 m away, with a median error of 28 cm, an improvement of 13× over a naive radar+camera baseline and 23× compared to monocular depth estimation.
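    The declutter step can be caricatured as gating the radar's candidate reflections around the camera's coarse monocular depth estimate. The function, parameter names, and numbers below are ours, not Metamoran's API; they only illustrate why a coarse camera depth is enough to pick the true reflection out of clutter.

```python
def select_object_range(radar_peaks, coarse_depth_m, gate_m=10.0):
    """Return the range of the strongest radar peak within +/- gate_m of the
    camera's coarse depth estimate, or None if no peak survives the gate.

    radar_peaks: list of (range_m, power) candidate reflections, assumed to
    already be restricted to the segmented object's angular region.
    """
    gated = [(r, p) for r, p in radar_peaks if abs(r - coarse_depth_m) <= gate_m]
    if not gated:
        return None
    return max(gated, key=lambda rp: rp[1])[0]

# Hypothetical profile: clutter at 12 m and 140 m, true object near 88.5 m.
peaks = [(12.0, 0.2), (88.5, 1.1), (140.0, 0.9)]
print(select_object_range(peaks, coarse_depth_m=95.0))  # → 88.5
```

The radar then refines this gated pick to cm-scale, which is how a modality with poor absolute accuracy (monocular depth) and one with poor angular resolution (radar) complement each other.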
  3. Millimeter wave (mmWave) sensing has recently gained attention for its robustness in challenging environments. When visual sensors such as cameras fail to perform, mmWave radars can provide reliable performance. However, the poor scattering performance and lack of texture at millimeter wavelengths can make it difficult for radars to precisely identify objects in some situations. In this paper, we take inspiration from camera fiducials, which are easily identifiable by a camera, and present R-fiducial tags, which smartly augment the current infrastructure to enable myriad applications with mmWave radars. R-fiducial tags act as fiducials for mmWave sensing, similar to camera fiducials, and can be reliably identified by a mmWave radar. We identify a set of requirements for millimeter wave fiducials and show how R-fiducial meets them all. R-fiducial uses a novel spread-spectrum modulation technique to provide low latency with high reliability. Our evaluations show that R-fiducial can be reliably detected, with a 100% detection rate, up to 25 meters with a 120-degree field of view and a few milliseconds of latency. We also conduct experiments and case studies in adverse and low-visibility conditions to demonstrate the potential of R-fiducial in a variety of applications.
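    The benefit of spread-spectrum modulation for a fiducial can be shown with a generic despreading sketch. The abstract does not disclose R-fiducial's actual code design, so the sequences below are our own random stand-ins: correlating against the known spreading code yields a strong, exact peak, while a mismatched sequence averages toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

code = rng.choice([-1.0, 1.0], size=127)    # the tag's known chip sequence
other = rng.choice([-1.0, 1.0], size=127)   # an unrelated sequence

# Normalized correlation: matched code despreads perfectly, a mismatched
# sequence of the same length stays near zero.
corr_self = float(np.dot(code, code)) / 127
corr_other = abs(float(np.dot(code, other))) / 127
```

This processing gain is what lets a weak tag return be detected reliably and quickly against background reflections.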
  4. Millimeter-Wave (mmWave) radar can enable high-resolution human pose estimation with low cost and computational requirements. However, the mmWave point cloud, the primary input to processing algorithms, is highly sparse and carries significantly less information than alternatives such as video frames. Furthermore, the scarcity of labeled mmWave data impedes the development of machine learning (ML) models that can generalize to unseen scenarios. We propose FUSE, a fast and scalable human pose estimation framework that combines multi-frame representation and meta-learning to address these challenges. Experimental evaluations show that FUSE adapts to unseen scenarios 4× faster than current supervised learning approaches and estimates human joint coordinates with about 7 cm mean absolute error.
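    The multi-frame representation is one of the two ingredients FUSE combines with meta-learning. The abstract does not specify the scheme, so the pooling below is a minimal illustrative sketch of the idea: per-frame radar point clouds are so sparse that stacking a short window of frames yields a denser input than any single frame provides.

```python
def pooled_cloud(frames, window=3):
    """Pool the (x, y, z) points of the last `window` frames into one cloud.

    frames: list of per-frame point lists, oldest first, newest last.
    """
    return [pt for frame in frames[-window:] for pt in frame]

# Hypothetical sparse returns from three consecutive radar frames.
frames = [
    [(0.10, 1.20, 0.90)],                       # frame t-2: 1 point
    [(0.12, 1.19, 0.91), (0.50, 1.00, 1.40)],   # frame t-1: 2 points
    [(0.11, 1.21, 0.89)],                       # frame t:   1 point
]
dense = pooled_cloud(frames)                    # 4 points vs at most 2 per frame
```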
  5. Properties of frozen hydrometeors in clouds remain difficult to sense remotely. Estimates of number concentration, distribution shape, ice particle density, and ice water content are essential for connecting cloud processes to surface precipitation. Progress has been made with dual-frequency radars, but validation has been difficult because of a lack of particle imaging and sizing observations collocated with the radar measurements. Here, data are used from two airborne profiling (up and down) radars, the W-band Wyoming Cloud Radar and the Ka-band Profiling Radar, allowing for Ka–W-band dual-wavelength ratio (DWR) profiles. The aircraft (the University of Wyoming King Air) also carried a suite of in situ cloud and precipitation probes. This arrangement is optimal for relating the "flight-level" DWR (an average from radar gates below and above flight level) to ice particle size distributions measured by in situ optical array probes, as well as to bulk properties such as minimum snow particle density and ice water content. This comparison reveals a strong relationship between DWR and the ice particle median-volume diameter. An optimal range of DWR values ensures the highest retrieval confidence, bounded by the radars' relative calibration and DWR saturation, found here to be about 2.5–7.5 dB. The DWR-defined size distribution shape is used with a Mie scattering model and an experimental mass–diameter relationship to test retrievals of ice particle concentration and ice water content. Comparison with flight-level cloud-probe data indicates good performance, allowing microphysical interpretations for the rest of the vertical radar transects.
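    The dual-wavelength ratio itself is a simple quantity: the difference of the two reflectivities in dB, or equivalently the log of their linear-unit ratio. The helpers below use that standard definition together with the roughly 2.5–7.5 dB high-confidence window reported above; the study's full retrieval chain (scattering model, mass–diameter relationship) is not reproduced.

```python
import math

def dwr_db(z_ka, z_w):
    """Ka–W dual-wavelength ratio in dB from linear reflectivities
    (mm^6 m^-3); equivalently DWR = Z_Ka[dBZ] - Z_W[dBZ]."""
    return 10.0 * math.log10(z_ka / z_w)

def in_retrieval_window(dwr, lo=2.5, hi=7.5):
    """True if DWR falls in the study's high-confidence window, bounded
    below by the radars' relative calibration and above by DWR saturation."""
    return lo <= dwr <= hi

example = dwr_db(100.0, 30.0)   # larger ice particles attenuate/scatter
usable = in_retrieval_window(example)
```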