- PAR ID: 10457419
- Journal Name: 2023 IEEE 97th Vehicular Technology Conference (VTC2023-Spring)
- Page Range / eLocation ID: 1 to 7
- Sponsoring Org: National Science Foundation
-
Using millimeter wave (mmWave) signals for imaging has an important advantage: they can penetrate poor environmental conditions such as fog, dust, and smoke that severely degrade optical imaging systems. However, unlike cameras and LiDARs, mmWave radars suffer from low angular resolution because of their small physical apertures and conventional signal processing techniques. Sparse radar imaging, on the other hand, can increase the aperture size while minimizing power consumption and readout bandwidth. This paper presents CoIR, an analysis-by-synthesis method that leverages the implicit neural network bias in convolutional decoders and compressed sensing to perform high-accuracy sparse radar imaging. The proposed system is dataset-agnostic and does not require any auxiliary sensors for training or testing. We introduce a sparse array design that allows for a 5.5× reduction in the number of antenna elements compared to conventional MIMO array designs. We demonstrate our system's improved imaging performance over standard mmWave radars and other competitive untrained methods on both simulated and experimental mmWave radar data.
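The analysis-by-synthesis recipe this abstract describes (fit a reconstruction to sparse measurements under a structural prior) can be sketched in a few lines. This is a toy stand-in, not CoIR itself: a Laplacian smoothness penalty replaces the convolutional-decoder prior, random Fourier sampling stands in for the sparse antenna array, and all sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reflectivity map standing in for the radar scene
n = 32
scene = np.zeros((n, n))
scene[10:14, 8:24] = 1.0

# Sparse sensing: observe ~25% of spatial-frequency samples,
# a crude stand-in for a reduced-element sparse array
mask = rng.random((n, n)) < 0.25
y = np.fft.fft2(scene, norm="ortho") * mask

# Analysis-by-synthesis: gradient descent on the image, with a
# smoothness penalty standing in for the implicit bias of an
# untrained convolutional decoder
x = np.zeros((n, n))
lr, lam = 0.5, 0.05
for _ in range(300):
    resid = np.fft.fft2(x, norm="ortho") * mask - y
    grad = np.real(np.fft.ifft2(resid * mask, norm="ortho"))
    lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
           + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
    x -= lr * (grad - lam * lap)

err = np.linalg.norm(x - scene) / np.linalg.norm(scene)
print(f"relative reconstruction error: {err:.3f}")
```

Replacing the smoothness term with the output of an untrained convolutional decoder, and optimizing the decoder weights instead of the pixels, gives the deep-image-prior-style behavior the paper builds on.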
-
The highly directional nature of millimeter wave (mmWave) beams poses several challenges in using that spectrum to meet the communication needs of immersive applications. In particular, mmWave beams are susceptible to misalignments and blockages caused by user movements. As a result, mmWave channels are vulnerable to large fluctuations in quality, which in turn cause disproportionate degradation in the end-to-end performance of Transmission Control Protocol (TCP) based applications. In this paper, we propose a reinforcement learning (RL) integrated transport-layer plugin, Millimeter wave based Immersive Agent (MIA), for immersive content delivery over the mmWave link. MIA uses the RL model to predict mmWave link bandwidth based on real-time measurements. MIA then cooperates with TCP's congestion control scheme to adapt the sending rate in accordance with the predicted mmWave bandwidth. To evaluate the effectiveness of the proposed MIA, we conduct experiments using a mmWave augmented immersive testbed and network simulations. The evaluation results show that MIA significantly improves end-to-end immersive performance in both throughput and latency.
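The plugin's control loop (predict the mmWave link bandwidth, then steer TCP's sending rate toward it) can be sketched as follows. This is an illustrative stand-in, not MIA's code: exponential smoothing replaces the RL predictor, and every name and number below is hypothetical.

```python
def predict_bandwidth(history, alpha=0.3):
    """One-step-ahead link bandwidth estimate (Mbit/s) via exponential
    smoothing -- a stand-in for MIA's RL-based predictor."""
    est = history[0]
    for sample in history[1:]:
        est = alpha * sample + (1 - alpha) * est
    return est

def adapt_cwnd(cwnd_pkts, predicted_bw_mbps, rtt_s, mss_bytes=1500):
    """Cap the congestion window at the predicted bandwidth-delay
    product, so TCP does not overshoot a link that just degraded."""
    bdp_pkts = predicted_bw_mbps * 1e6 / 8 * rtt_s / mss_bytes
    return min(cwnd_pkts, max(1.0, bdp_pkts))

# Example: link quality collapses after a blockage event
samples = [800, 790, 810, 200, 180, 190]   # measured Mbit/s
bw = predict_bandwidth(samples)
cwnd = adapt_cwnd(cwnd_pkts=1200, predicted_bw_mbps=bw, rtt_s=0.02)
print(f"predicted bw ~ {bw:.0f} Mbit/s, capped cwnd ~ {cwnd:.0f} pkts")
```

The point of the predictive step is that the sender reduces its rate as the blockage develops, rather than waiting for loss-based congestion signals after queues have already built up.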
-
Millimeter wave (mmWave) access networks have the potential to meet the high-throughput and low-latency needs of immersive applications. However, due to the highly directional nature of mmWave beams and their susceptibility to beam misalignment and blockage resulting from user movements and rotations, the associated mmWave links are vulnerable to large channel fluctuations. These fluctuations result in disproportionately adverse effects on the performance of transport layer protocols such as Transmission Control Protocol (TCP). To overcome this challenge, we propose a network layer solution, the COded Taking And Giving (COTAG) scheme, to sustain low-latency and high-throughput end-to-end TCP performance in dually connected networks. In particular, COTAG creates network-encoded packets at the network gateway and each access point (AP), aiming to adaptively take the spare bandwidth on each link for transmission. Further, if one link's bandwidth drops due to user movements, COTAG actively abandons the transmission opportunity by conditionally dropping packets. Consequently, COTAG actively adapts to link quality changes in the mmWave access network and enhances TCP performance without jeopardizing the latency of immersive content delivery. To evaluate the effectiveness of the proposed COTAG, we conduct experiments using off-the-shelf APs and network simulations. The evaluation results show that COTAG significantly improves end-to-end TCP performance in both throughput and latency.
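The network-coding idea behind this design (generate redundant coded packets at the gateway, spread them over both links, and tolerate drops on whichever link degrades) can be illustrated with a minimal systematic XOR code. This is a sketch under simplifying assumptions, not COTAG's actual scheme: k source packets plus a single GF(2) parity packet let the receiver decode after losing any one packet on either path.

```python
import numpy as np

def encode(payloads):
    """Systematic XOR code over GF(2): the k source packets plus one
    parity packet (XOR of all k). Any k of the k+1 coded packets,
    arriving over either link, are enough to decode."""
    data = np.array(payloads, dtype=np.uint8)
    k = len(payloads)
    coeffs = np.vstack([np.eye(k, dtype=np.uint8),
                        np.ones((1, k), dtype=np.uint8)])
    parity = np.bitwise_xor.reduce(data, axis=0)
    coded = np.vstack([data, parity])
    return list(zip(coeffs, coded))

def decode(packets, k):
    """Gaussian elimination over GF(2); returns None if rank < k."""
    A = np.array([c for c, _ in packets], dtype=np.uint8)
    B = np.array([p for _, p in packets], dtype=np.uint8)
    for col in range(k):
        pivot = next((r for r in range(col, len(A)) if A[r, col]), None)
        if pivot is None:
            return None
        A[[col, pivot]] = A[[pivot, col]]
        B[[col, pivot]] = B[[pivot, col]]
        for r in range(len(A)):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                B[r] ^= B[col]
    return B[:k]

# Example: packet 2 is lost when one mmWave link stalls
source = [[1, 2], [3, 4], [5, 6], [7, 8]]
coded = encode(source)
received = [p for i, p in enumerate(coded) if i != 2]
recovered = decode(received, k=4)
print(recovered.tolist())  # -> [[1, 2], [3, 4], [5, 6], [7, 8]]
```

Adding more parity packets (or moving to a larger field) tolerates more losses per generation; the "taking and giving" in COTAG is the adaptive decision of how much of this redundancy to push down each link as its quality changes.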
-
We present CoSense, a system that enables coexistence of networking and sensing on next-generation millimeter-wave (mmWave) picocells for traffic monitoring and pedestrian safety at intersections in all weather conditions. Although existing wireless signal-based object detection systems are available, they suffer from limited resolution, and their outputs may not provide sufficient discriminatory information in complex scenes, such as traffic intersections. CoSense proposes using 5G picocells, which operate at mmWave frequency bands and provide higher data rates and higher sensing resolution than traditional wireless technology. However, it is difficult to run sensing applications and data transfer simultaneously on mmWave devices due to potential interference, and using special-purpose sensing hardware can prohibit deployment of sensing applications to a large number of existing and future inexpensive mmWave devices. Additionally, mmWave devices are vulnerable to weak reflectivity and specularity challenges, which may result in loss of information about objects and pedestrians. To overcome these challenges, CoSense designs customized deep learning models that not only can recover missing information about the target scene but also enable coexistence of networking and sensing. We evaluate CoSense on diverse data samples captured at traffic intersections and demonstrate that it can detect and locate pedestrians and vehicles, both qualitatively and quantitatively, without significantly affecting the networking throughput.
-
This paper proposes SquiggleMilli, a system that approximates traditional Synthetic Aperture Radar (SAR) imaging on mobile millimeter-wave (mmWave) devices. The system is capable of imaging through obstructions, such as clothing, and under low visibility conditions. Unlike traditional SAR, which relies on mechanical controllers or rigid bodies, SquiggleMilli is based on the hand-held, fluidic motion of the mmWave device. It enables mmWave imaging in hand-held settings by rethinking existing motion compensation, compressed sensing, and voxel segmentation. Since mmWave imaging suffers from poor resolution due to specularity and weak reflectivity, the reconstructed shapes can be imperceptible to machines and humans. To this end, SquiggleMilli designs a machine learning model to recover the high spatial frequencies in the object, reconstruct an accurate 2D shape, and predict its 3D features and category. We have customized SquiggleMilli for security applications, but the model is adaptable to other applications with limited training samples. We implement SquiggleMilli on off-the-shelf components and demonstrate its performance improvement over traditional SAR qualitatively and quantitatively.
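The SAR-style imaging that SquiggleMilli approximates can be illustrated with a minimal backprojection over a synthetic aperture: echoes collected at each antenna position are coherently summed after undoing the round-trip phase for each candidate voxel. This sketch assumes an ideal linear trajectory, a single point scatterer, and a single-frequency 77 GHz signal; SquiggleMilli's contribution is precisely making this work under fluidic hand motion, which the toy below ignores.

```python
import numpy as np

c = 3e8
fc = 77e9                  # common automotive mmWave carrier
lam = c / fc

# Antenna positions along a 10 cm (here: ideal linear) aperture
xs = np.linspace(-0.05, 0.05, 64)
target = np.array([0.01, 0.30])            # (x, z) point scatterer

# Simulated single-frequency echoes: unit amplitude, phase from
# the round-trip distance to the scatterer
d = np.hypot(xs - target[0], target[1])
echoes = np.exp(-1j * 4 * np.pi * d / lam)

# Backprojection: for each voxel, undo the per-antenna phase and
# coherently sum; the true scatterer location adds in phase
gx = np.linspace(-0.05, 0.05, 41)
gz = np.linspace(0.25, 0.35, 41)
image = np.zeros((len(gz), len(gx)))
for i, z in enumerate(gz):
    for j, x in enumerate(gx):
        dv = np.hypot(xs - x, z)
        image[i, j] = abs(np.sum(echoes * np.exp(1j * 4 * np.pi * dv / lam)))

zi, xi = np.unravel_index(image.argmax(), image.shape)
print(f"peak at x={gx[xi]:.3f} m, z={gz[zi]:.3f} m")
```

In a hand-held setting the antenna positions `xs` are irregular and uncertain, which is why motion compensation, compressed sensing over the sparse trajectory, and the learned high-frequency recovery described above become necessary.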