Title: SquiggleMilli: Approximating SAR Imaging on Mobile Millimeter-Wave Devices
This paper proposes SquiggleMilli, a system that approximates traditional Synthetic Aperture Radar (SAR) imaging on mobile millimeter-wave (mmWave) devices. The system is capable of imaging through obstructions, such as clothing, and under low-visibility conditions. Unlike traditional SAR, which relies on mechanical controllers or rigid bodies, SquiggleMilli is based on the hand-held, fluidic motion of the mmWave device. It enables mmWave imaging in hand-held settings by re-thinking existing motion compensation, compressed sensing, and voxel segmentation. Since mmWave imaging suffers from poor resolution due to specularity and weak reflectivity, the reconstructed shapes could be imperceptible to machines and humans. To this end, SquiggleMilli designs a machine learning model that recovers the high spatial frequencies in the object to reconstruct an accurate 2D shape and predict its 3D features and category. We have customized SquiggleMilli for security applications, but the model is adaptable to other applications with limited training samples. We implement SquiggleMilli on off-the-shelf components and demonstrate its performance improvements over traditional SAR qualitatively and quantitatively.
Award ID(s): 1910853, 2018966
PAR ID: 10296770
Author(s) / Creator(s): ; ; ;
Date Published:
Journal Name: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume: 5
Issue: 3
ISSN: 2474-9567
Page Range / eLocation ID: Article No. 125, pp. 1–26
Format(s): Medium: X
Sponsoring Org: National Science Foundation
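To make the compressed-sensing step in SquiggleMilli's pipeline concrete, below is a minimal sparse-recovery sketch in Python: it recovers a sparse scene vector x from undersampled measurements y = A x via iterative soft-thresholding (ISTA). The measurement matrix, the choice of ISTA as solver, and every parameter here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a compressed-sensing recovery step: a hand-held
# trajectory yields an incomplete aperture, i.e., fewer measurements than
# voxels, so the scene is recovered as a sparse solution of y = A @ x.
# ISTA stands in for whatever solver the paper actually uses.
import numpy as np

def ista(A, y, lam=0.1, n_iters=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2      # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                    # gradient of the data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft-threshold
    return x

# Toy usage: 64 random measurements of a 256-voxel scene with 8 reflectors.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / 8.0
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x_true, lam=0.05)
```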
More Like this
  1. The ubiquity of millimeter-wave (mmWave) technology could bring through-obstruction imaging to portable, mobile systems. Existing through-obstruction imaging systems rely on the Synthetic Aperture Radar (SAR) technique, but emulating the SAR principle on hand-held devices has been challenging. We propose ViSAR, a portable platform that integrates an optical camera and a mmWave radar to emulate the SAR principle and enable through-obstruction 3D imaging. ViSAR synchronizes the devices at the software level and uses the Time Domain Backprojection algorithm to generate vision-augmented mmWave images. We have experimentally evaluated ViSAR by imaging several indoor objects.
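As a rough illustration of the Time Domain Backprojection algorithm the abstract names (not ViSAR's actual code), the sketch below phase-aligns and sums each antenna position's echo at every voxel's round-trip delay; the array shapes, names, and parameters are assumed for illustration.

```python
# Minimal time-domain backprojection sketch: every voxel accumulates the
# range-compressed echo sampled at its round-trip delay, coherently, across
# all antenna positions along the synthetic aperture.
import numpy as np

C = 3e8  # speed of light, m/s

def backproject(echoes, positions, grid, fs, fc):
    """echoes: (n_pos, n_samples) complex range profiles
    positions: (n_pos, 3) antenna locations along the aperture
    grid: (n_voxels, 3) voxel centers
    fs: fast-time sample rate; fc: carrier frequency."""
    image = np.zeros(grid.shape[0], dtype=complex)
    for echo, pos in zip(echoes, positions):
        r = np.linalg.norm(grid - pos, axis=1)           # one-way range per voxel
        tau = 2.0 * r / C                                # round-trip delay
        idx = np.clip((tau * fs).astype(int), 0, echo.size - 1)
        image += echo[idx] * np.exp(2j * np.pi * fc * tau)  # phase-align and sum
    return np.abs(image)
```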
  2. The ubiquity of millimeter-wave (mmWave) technology in 5G-and-beyond devices enables opportunities to bring through-obstruction imaging to hand-held, ad-hoc settings. This imaging technique will require manually scanning the scene to emulate a Synthetic Aperture Radar (SAR) [4] and measuring back-scattered signals. Appropriate signal focusing can reveal hidden items and can be used to detect and classify shapes automatically. Such hidden-object detection and classification could enable multiple applications, such as in-situ security checks without pat-down searches, baggage discrimination without opening the baggage, packaged inventory counting without intrusion, etc.
  3. We propose MiShape, a millimeter-wave (mmWave) wireless-signal-based imaging system that generates high-resolution human silhouettes and predicts the 3D locations of body joints. The system can capture human motion in real time under low-light and low-visibility conditions. Unlike existing vision-based motion capture systems, MiShape is privacy-noninvasive and can generalize to a wide range of at-home motion tracking applications. To overcome the challenges of low resolution, specularity, and aliasing in images from Commercial-Off-The-Shelf (COTS) mmWave systems, MiShape designs deep learning models based on conditional Generative Adversarial Networks and incorporates the rules of human biomechanics. We have customized MiShape for gait monitoring, but the model adapts well to other tracking applications with limited fine-tuning samples. We experimentally evaluate MiShape with real data collected from a COTS mmWave system for 10 volunteers of diverse age, gender, height, and somatotype, performing different poses. Our experimental results demonstrate that MiShape delivers high-resolution silhouettes and accurate body poses on par with an existing vision-based system, and unlocks the potential of mmWave systems, such as 5G home wireless routers, for privacy-noninvasive healthcare applications.
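For flavor, here is a minimal image-to-image generator in the conditional-GAN style the abstract mentions. The topology, layer sizes, and the idea of mapping a low-resolution mmWave heatmap to a silhouette are assumptions for illustration, not MiShape's published model.

```python
# Illustrative-only conditional generator: a tiny encoder-decoder that maps a
# single-channel mmWave heatmap to a per-pixel silhouette probability map.
import torch
import torch.nn as nn

class SilhouetteGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                       # heatmap -> silhouette
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),            # down
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),           # down
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # up
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, heatmap):                         # (B, 1, H, W) in [0, 1]
        return self.net(heatmap)                        # per-pixel probability

# Toy usage with a random 64x64 heatmap batch.
silhouette = SilhouetteGenerator()(torch.rand(2, 1, 64, 64))
```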
  4. mmWave signals form a critical component of 5G and next-generation wireless networks, and are increasingly considered for sensing the environment around us to enable ubiquitous IoT applications. In this context, this paper leverages the properties of mmWave signals for tracking 3D finger motion for interactive IoT applications. While conventional vision-based solutions break down under poor lighting and occlusions, and also raise privacy concerns, mmWave signals work under typical occlusions and non-line-of-sight conditions while being privacy-preserving. In contrast to prior works on mmWave sensing that focus on predefined gesture classification, this work performs continuous 3D finger motion tracking. Towards this end, we first observe via simulations and experiments that the small size of fingers, coupled with specular reflections, does not yield stable mmWave reflections. However, we make an interesting observation that focusing on the forearm instead of the fingers can provide stable reflections for 3D finger motion tracking. The muscles that actuate the fingers extend through the forearm, and their activity manifests as vibrations on the forearm's surface. By analyzing the variation in phases of mmWave signals reflected from the forearm, this paper designs mm4Arm, a system that tracks 3D finger motion. Nontrivial challenges arise from the high-dimensional search space, complex vibration patterns, diversity across users, hardware noise, etc. mm4Arm exploits anatomical constraints on finger motion and fuses them with machine learning architectures based on encoder-decoders and ResNets to enable accurate tracking. A systematic performance evaluation with 10 users demonstrates a median error of 5.73° (location error of 4.07 mm) with robustness to multipath and natural variation in hand position/orientation. The accuracy is also consistent under non-line-of-sight conditions and clothing that might occlude the forearm. mm4Arm runs on smartphones with a latency of 19 ms and low energy overhead.
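The phase-based vibration idea can be sketched compactly: after a per-chirp range FFT, the phase of the range bin containing the forearm tracks sub-wavelength surface motion. This is a hedged illustration of the general FMCW phase-sensing technique, not mm4Arm's code; the function name and all parameters are made up.

```python
# Hypothetical phase-to-vibration sketch for an FMCW mmWave radar.
import numpy as np

def forearm_vibration(chirps, wavelength=3.9e-3):
    """chirps: (n_chirps, n_samples) complex IF samples, one row per chirp.
    wavelength: carrier wavelength (~3.9 mm at 77 GHz, assumed here)."""
    range_profiles = np.fft.fft(chirps, axis=1)           # range FFT per chirp
    bin_idx = np.argmax(np.abs(range_profiles).mean(0))   # strongest (forearm) bin
    phase = np.unwrap(np.angle(range_profiles[:, bin_idx]))
    # Round-trip phase-to-displacement: d = lambda * phi / (4 * pi).
    return wavelength * phase / (4 * np.pi)
```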
  5. In this paper we learn to segment hands and hand-held objects from motion. Our system takes a single RGB image and hand location as input to segment the hand and hand-held object. For learning, we generate responsibility maps that show how well a hand’s motion explains other pixels’ motion in video. We use these responsibility maps as pseudo-labels to train a weakly-supervised neural network using an attention-based similarity loss and contrastive loss. Our system outperforms alternate methods, achieving good performance on the 100DOH, EPIC-KITCHENS, and HO3D datasets. 
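As a loose sketch of how a responsibility map could serve as a pseudo-label (the paper's actual attention-based similarity and contrastive losses differ), the snippet below pulls "explained" pixels toward a shared prototype in embedding space; every name and threshold is an illustrative assumption.

```python
# Illustrative pseudo-label loss: pixels whose motion the hand "explains"
# (high responsibility) are scored against a prototype of those pixels'
# features; the responsibility mask acts as the binary training target.
import torch
import torch.nn.functional as F

def contrastive_pixel_loss(embeddings, responsibility, thresh=0.5, temp=0.1):
    """embeddings: (C, H, W) per-pixel features; responsibility: (H, W) in [0, 1]."""
    feats = F.normalize(embeddings.flatten(1), dim=0)   # (C, H*W), unit-norm pixels
    mask = responsibility.flatten() > thresh            # pseudo foreground pixels
    pos = feats[:, mask].mean(dim=1, keepdim=True)      # "hand" prototype, (C, 1)
    sims = (pos.T @ feats).squeeze(0) / temp            # similarity logits, (H*W,)
    return F.binary_cross_entropy_with_logits(sims, mask.float())
```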