Title: R-fiducial: Millimeter Wave Radar Fiducials for Sensing Traffic Infrastructure
Millimeter wave (mmWave) sensing has recently gained attention for its robustness in challenging environments. When visual sensors such as cameras fail to perform, mmWave radars can provide reliable performance. However, the poor scattering performance and lack of texture at millimeter wavelengths can make it difficult for radars to precisely identify objects in some situations. In this paper, we take insight from camera fiducials, which are easily identifiable by a camera, and present R-fiducial tags, which smartly augment the current infrastructure to enable myriad applications with mmWave radars. R-fiducial tags act as fiducials for mmWave sensing, similar to camera fiducials, and can be reliably identified by an mmWave radar. We identify a set of requirements for millimeter wave fiducials and show how R-fiducial meets them all. R-fiducial uses a novel spread-spectrum modulation technique to provide low latency with high reliability. Our evaluations show that R-fiducial can be detected with a 100% detection rate up to 25 meters, across a 120-degree field of view, and with a few milliseconds of latency. We also conduct experiments and case studies in adverse and low-visibility conditions to demonstrate the potential of R-fiducial in a variety of applications.
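The abstract names a spread-spectrum modulation technique but does not detail the detection pipeline. As a hedged illustration of the general idea, the sketch below embeds a known pseudo-noise code in noisy radar samples and recovers its position with a matched filter; the code length, amplitude, and offset are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64-chip binary pseudo-noise (PN) code modulated onto the
# tag's reflection; length and amplitude are illustrative, not from the paper.
code = rng.choice([-1.0, 1.0], size=64)

# Simulated slow-time radar samples: noise with the code embedded at an offset.
offset = 100
signal = rng.normal(0.0, 1.0, size=512)
signal[offset:offset + len(code)] += 3.0 * code

# Matched filter: correlate the received samples against the known code and
# take the strongest peak as the tag's position.
corr = np.correlate(signal, code, mode="valid")
detected = int(np.argmax(np.abs(corr)))
print(detected)
```

The benefit of a code the radar already knows is processing gain: correlation integrates over the full code length, so the tag stands out even when each individual sample is below the noise floor.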
Award ID(s):
2225617 2211805 2107613
PAR ID:
10457419
Journal Name:
2023 IEEE 97th Vehicular Technology Conference (VTC2023-Spring)
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Using millimeter wave (mmWave) signals for imaging has an important advantage: they can penetrate poor environmental conditions such as fog, dust, and smoke that severely degrade optical imaging systems. However, mmWave radars, unlike cameras and LiDARs, suffer from low angular resolution because of small physical apertures and conventional signal processing techniques. Sparse radar imaging, on the other hand, can increase the aperture size while minimizing power consumption and readout bandwidth. This paper presents CoIR, an analysis-by-synthesis method that leverages the implicit bias of convolutional decoder networks and compressed sensing to perform high-accuracy sparse radar imaging. The proposed system is dataset-agnostic and does not require any auxiliary sensors for training or testing. We introduce a sparse array design that allows for a 5.5× reduction in the number of antenna elements compared to conventional MIMO array designs. We demonstrate our system's improved imaging performance over standard mmWave radars and other competitive untrained methods on both simulated and experimental mmWave radar data.
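CoIR's own pipeline couples an untrained convolutional decoder with compressed sensing, which is beyond a snippet; the classical sparse-recovery step that such systems build on can be sketched with ISTA (iterative soft-thresholding). The scene size, sparsity level, and random Gaussian forward model below are illustrative assumptions, not CoIR's actual array model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse "scene": n cells, only k reflectors active (all illustrative).
n, m, k = 128, 64, 4
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = rng.uniform(1.0, 2.0, size=k)

# A random Gaussian matrix stands in for the sparse-array forward model.
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
y = A @ x_true

def soft(v, t):
    """Soft-thresholding: the proximal operator of the L1 sparsity prior."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA: alternate a gradient step on ||Ax - y||^2 with soft-thresholding.
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    x = soft(x - step * A.T @ (A @ x - y), step * 0.01)

# The k largest-magnitude cells should coincide with the true reflectors.
recovered = set(np.argsort(np.abs(x))[-k:])
print(recovered == set(support))
```

With far fewer measurements than cells (m = 64 < n = 128), the sparsity prior is what makes the reconstruction well-posed; CoIR replaces the explicit L1 prior with the implicit bias of a convolutional decoder.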
  2. The highly directional nature of millimeter wave (mmWave) beams poses several challenges to using that spectrum to meet the communication needs of immersive applications. In particular, mmWave beams are susceptible to misalignments and blockages caused by user movements. As a result, mmWave channels are vulnerable to large fluctuations in quality, which in turn cause disproportionate degradation in the end-to-end performance of Transmission Control Protocol (TCP) based applications. In this paper, we propose a reinforcement learning (RL) integrated transport-layer plugin, the Millimeter wave based Immersive Agent (MIA), for immersive content delivery over the mmWave link. MIA uses the RL model to predict mmWave link bandwidth based on real-time measurements. MIA then cooperates with TCP's congestion control scheme to adapt the sending rate in accordance with the predicted mmWave bandwidth. To evaluate the effectiveness of MIA, we conduct experiments using an mmWave-augmented immersive testbed and network simulations. The evaluation results show that MIA significantly improves end-to-end immersive performance in both throughput and latency.
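MIA's RL predictor is not specified in the abstract; the transport-layer plumbing it describes (turn a link-bandwidth prediction into a sending-rate cap for the congestion controller) can be sketched with a stand-in EWMA predictor. The alpha, headroom factor, and Mbps samples below are all made-up illustrations, not values from the paper.

```python
class BandwidthPredictor:
    """Stand-in for MIA's RL model: an EWMA over recent link measurements (Mbps)."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.estimate = None

    def update(self, measured_mbps):
        if self.estimate is None:
            self.estimate = float(measured_mbps)
        else:
            self.estimate = (self.alpha * measured_mbps
                             + (1 - self.alpha) * self.estimate)
        return self.estimate

def pacing_rate(predicted_mbps, headroom=0.85):
    """Cap the sending rate slightly below the predicted mmWave bandwidth."""
    return headroom * predicted_mbps

pred = BandwidthPredictor()
rate = 0.0
for sample in [900, 950, 120, 110, 870]:  # Mbps; the dip mimics a beam blockage
    rate = pacing_rate(pred.update(sample))
print(round(rate, 1))
```

Pacing against a prediction rather than waiting for loss is the point: during the blockage-induced dip, the sender backs off before queues build, avoiding the latency spikes that loss-based congestion control would incur.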
  3. Millimeter wave (mmWave) access networks have the potential to meet the high-throughput and low-latency needs of immersive applications. However, due to the highly directional nature of mmWave beams and their susceptibility to beam misalignment and blockage resulting from user movements and rotations, the associated mmWave links are vulnerable to large channel fluctuations. These fluctuations have disproportionately adverse effects on the performance of transport-layer protocols such as the Transmission Control Protocol (TCP). To overcome this challenge, we propose a network-layer solution, the COded Taking And Giving (COTAG) scheme, to sustain low-latency and high-throughput end-to-end TCP performance in dually connected networks. In particular, COTAG creates network-encoded packets at the network gateway and each access point (AP), aiming to adaptively take the spare bandwidth on each link for transmission. Further, if one link's bandwidth drops due to user movements, COTAG actively abandons the transmission opportunity by conditionally dropping packets. Consequently, COTAG adapts to link-quality changes in the mmWave access network and enhances TCP performance without jeopardizing the latency of immersive content delivery. To evaluate the effectiveness of COTAG, we conduct experiments using off-the-shelf APs and network simulations. The evaluation results show that COTAG significantly improves end-to-end TCP performance in both throughput and latency.
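The abstract says COTAG injects network-encoded packets at the gateway and APs but does not specify the code. A minimal XOR-based sketch shows the underlying idea of coded redundancy across two links: a coded packet lets the receiver recover whichever original packet a degraded link drops. The packet contents are made up for illustration.

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings; the simplest network code."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two same-length packets destined for the client over two mmWave links.
p1 = b"immersive frame "
p2 = b"audio chunk data"

# The gateway emits a coded packet alongside the originals.
coded = xor_bytes(p1, p2)

# If the link carrying p2 degrades and p2 is dropped, the client recovers
# it from p1 and the coded packet, without waiting for a retransmission.
recovered = xor_bytes(coded, p1)
print(recovered == p2)
```

Recovering from within-flight redundancy instead of end-to-end retransmission is what keeps TCP's latency stable when one mmWave link momentarily collapses; production schemes typically use random linear codes rather than a single XOR.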
  4. We present CoSense, a system that enables the coexistence of networking and sensing on next-generation millimeter-wave (mmWave) picocells for traffic monitoring and pedestrian safety at intersections in all weather conditions. Although existing wireless-signal-based object detection systems are available, they suffer from limited resolution, and their outputs may not provide sufficient discriminatory information in complex scenes such as traffic intersections. CoSense proposes using 5G picocells, which operate at mmWave frequency bands and provide higher data rates and higher sensing resolution than traditional wireless technology. However, it is difficult to run sensing applications and data transfer simultaneously on mmWave devices due to potential interference, and using special-purpose sensing hardware can prohibit deployment of sensing applications to a large number of existing and future inexpensive mmWave devices. Additionally, mmWave devices are vulnerable to weak reflectivity and specularity challenges, which may result in loss of information about objects and pedestrians. To overcome these challenges, CoSense designs customized deep learning models that not only recover missing information about the target scene but also enable the coexistence of networking and sensing. We evaluate CoSense on diverse data samples captured at traffic intersections and demonstrate that it can detect and locate pedestrians and vehicles, both qualitatively and quantitatively, without significantly affecting networking throughput.
  5. This paper proposes SquiggleMilli, a system that approximates traditional Synthetic Aperture Radar (SAR) imaging on mobile millimeter-wave (mmWave) devices. The system is capable of imaging through obstructions, such as clothing, and under low-visibility conditions. Unlike traditional SAR, which relies on mechanical controllers or rigid bodies, SquiggleMilli is based on the hand-held, fluidic motion of the mmWave device. It enables mmWave imaging in hand-held settings by rethinking existing motion compensation, compressed sensing, and voxel segmentation. Since mmWave imaging suffers from poor resolution due to specularity and weak reflectivity, the reconstructed shapes can be imperceptible to machines and humans. To this end, SquiggleMilli designs a machine learning model to recover the high spatial frequencies of the object, reconstruct an accurate 2D shape, and predict its 3D features and category. We have customized SquiggleMilli for security applications, but the model is adaptable to other applications with limited training samples. We implement SquiggleMilli on off-the-shelf components and demonstrate its performance improvement over traditional SAR, both qualitatively and quantitatively.