Title: Capricorn: Towards Real-Time Rich Scene Analysis Using RF-Vision Sensor Fusion
Video scene analysis is a well-studied area in which researchers have devoted considerable effort to detecting and classifying people and objects in a scene. However, real-life scenes are more complex: the intrinsic states of objects (e.g., machine operating states or human vital signs) are often overlooked by vision-based scene analysis. Recent work has proposed a radio frequency (RF) sensing technique, wireless vibrometry, which employs wireless signals to sense subtle vibrations from objects and infer their internal states. We envision that combining video scene analysis with wireless vibrometry forms a more comprehensive understanding of the scene, namely "rich scene analysis". However, the RF sensors used in wireless vibrometry provide only time series, and it is challenging to associate these time series with multiple real-world objects. We propose a real-time RF-vision sensor fusion system, Capricorn, that efficiently builds a cross-modal correspondence between visual pixels and RF time series to better understand the complex nature of a scene. The vision sensors in Capricorn model the surrounding environment in 3D and obtain the distances of different objects. In the RF domain, distance is proportional to the signal time-of-flight (ToF), so the ToF can be leveraged to separate the RF time series corresponding to each object. The RF-vision sensor fusion in Capricorn brings multiple benefits. The vision sensors provide environmental context to guide the processing of RF data, which helps select the most appropriate algorithms and models. Meanwhile, the RF sensor yields additional information that is invisible to vision sensors, providing insight into objects' intrinsic states. Our extensive evaluations show that Capricorn monitors the operating status of multiple appliances in real time with over 97% accuracy and recovers vital signs such as respiration from multiple people. A video (https://youtu.be/b-5nav3Fi78) demonstrates the capabilities of Capricorn.
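To make the distance-to-ToF idea above concrete, here is a minimal sketch of how vision-derived object distances could select the corresponding range bins of an FMCW radar and extract one vibration time series per object. This is an illustration under assumed parameters (chirp bandwidth) and hypothetical function names, not Capricorn's actual implementation.

```python
import numpy as np

C = 3e8            # speed of light (m/s)
BANDWIDTH = 4e9    # assumed chirp bandwidth (Hz)

def distance_to_range_bin(distance_m: float) -> int:
    """Map a vision-reported object distance to a range-FFT bin index.
    FMCW range resolution is c / (2 * B)."""
    range_resolution = C / (2 * BANDWIDTH)
    return int(round(distance_m / range_resolution))

def separate_object_series(radar_cube: np.ndarray, distances: dict) -> dict:
    """Extract one vibration time series per tracked object.

    radar_cube: complex IF samples, shape (n_chirps, n_samples_per_chirp).
    distances: {object_id: distance_m} reported by the vision pipeline.
    Returns {object_id: unwrapped phase over slow time}, which carries the
    sub-millimeter vibration signal at that object's range.
    """
    range_profile = np.fft.fft(radar_cube, axis=1)   # range FFT per chirp
    series = {}
    for obj_id, dist in distances.items():
        rbin = distance_to_range_bin(dist)
        rbin = min(max(rbin, 0), range_profile.shape[1] - 1)
        series[obj_id] = np.unwrap(np.angle(range_profile[:, rbin]))
    return series
```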
Award ID(s):
1705135
NSF-PAR ID:
10494742
Author(s) / Creator(s):
; ; ; ; ; ;
Publisher / Repository:
ACM
Date Published:
Journal Name:
Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems
Page Range / eLocation ID:
334 to 348
Format(s):
Medium: X
Location:
Boston, Massachusetts
Sponsoring Org:
National Science Foundation
More Like This
  1. Intelligent systems commonly employ vision sensors such as cameras to analyze a scene. Recent work has proposed a wireless sensing technique, wireless vibrometry, to enrich the scene analysis generated by vision sensors. Wireless vibrometry employs wireless signals to sense subtle vibrations from objects and infer their internal states. However, it is difficult for pure radio frequency (RF) sensing systems to obtain objects' visual appearances (e.g., object types and locations), especially when an object is inactive. Thus, most existing wireless vibrometry systems assume that the number and types of objects in the scene are known. The key to removing these assumptions is to build a connection between wireless sensor time series and vision sensor images. We present Capricorn, a vision-guided wireless vibrometry system. In Capricorn, object type information from the vision sensors guides the wireless vibrometry system to select the most appropriate signal processing pipeline. The object tracking capability in computer vision also helps the wireless system efficiently detect and separate vibrations from multiple objects in real time. 
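As a rough illustration of how object-type information could route each separated vibration series to a different processing routine, the sketch below dispatches on the detected class. The class names, routines, and thresholds are assumptions made for this example, not Capricorn's actual components.

```python
import numpy as np

def appliance_state(vibration: np.ndarray) -> str:
    """Toy classifier: decide on/off from vibration energy (assumed threshold)."""
    return "on" if np.var(vibration) > 1e-3 else "off"

def respiration_rate(vibration: np.ndarray, fs: float = 20.0) -> float:
    """Toy estimator: breaths per minute from the dominant component in an
    assumed 0.1-0.7 Hz breathing band."""
    spectrum = np.abs(np.fft.rfft(vibration - vibration.mean()))
    freqs = np.fft.rfftfreq(len(vibration), d=1.0 / fs)
    band = (freqs > 0.1) & (freqs < 0.7)
    if not band.any():
        return 0.0
    return float(freqs[band][np.argmax(spectrum[band])] * 60.0)

# Detected object class (from the vision detector) -> RF processing pipeline.
PIPELINES = {"appliance": appliance_state, "person": respiration_rate}

def process(object_type: str, vibration: np.ndarray):
    """Run the pipeline selected by the vision-reported object type."""
    handler = PIPELINES.get(object_type)
    return handler(vibration) if handler else None
```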
  2. As augmented and virtual reality (AR/VR) technology matures, a method is desired to represent real-world persons visually and aurally in a virtual scene with high fidelity to craft an immersive and realistic user experience. Current technologies leverage camera and depth sensors to render visual representations of subjects through avatars, and microphone arrays are employed to localize and separate high-quality subject audio through beamforming. However, challenges remain in both realms. In the visual domain, avatars can only map key features (e.g., pose, expression) to a predetermined model, rendering them incapable of capturing the subjects’ full details. Alternatively, high-resolution point clouds can be utilized to represent human subjects. However, such three-dimensional data is computationally expensive to process. In the realm of audio, sound source separation requires prior knowledge of the subjects’ locations. However, it may take unacceptably long for sound source localization algorithms to provide this knowledge, which can still be error-prone, especially with moving objects. These challenges make it difficult for AR systems to produce real-time, high-fidelity representations of human subjects for applications such as AR/VR conferencing that mandate negligible system latency. We present Acuity, a real-time system capable of creating high-fidelity representations of human subjects in a virtual scene both visually and aurally. Acuity isolates subjects from high-resolution input point clouds. It reduces the processing overhead by performing background subtraction at a coarse resolution, then applying the detected bounding boxes to fine-grained point clouds. Meanwhile, Acuity leverages an audiovisual sensor fusion approach to expedite sound source separation. The estimated object location in the visual domain guides the acoustic pipeline to isolate the subjects’ voices without running sound source localization. Our results demonstrate that Acuity can isolate multiple subjects’ high-quality point clouds with a maximum latency of 70 ms and average throughput of over 25 fps, while separating audio in less than 30 ms. We provide the source code of Acuity at: https://github.com/nesl/Acuity. 
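A minimal sketch of the coarse-to-fine idea described above, assuming N x 3 NumPy point clouds rather than Acuity's actual data structures: background subtraction runs on a voxel-downsampled cloud, and the resulting bounding box then crops the full-resolution cloud.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Keep one representative point per voxel of edge length `voxel` (m)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx]

def isolate_subject(full_cloud: np.ndarray, background: np.ndarray,
                    voxel: float = 0.1, dist_thresh: float = 0.15,
                    margin: float = 0.1) -> np.ndarray:
    """Return full-resolution points inside the coarse foreground bounding box."""
    coarse = voxel_downsample(full_cloud, voxel)
    bg = voxel_downsample(background, voxel)
    # Coarse background subtraction: keep points far from every background point.
    dists = np.linalg.norm(coarse[:, None, :] - bg[None, :, :], axis=2).min(axis=1)
    coarse_fg = coarse[dists > dist_thresh]
    if coarse_fg.size == 0:
        return np.empty((0, 3))
    lo = coarse_fg.min(axis=0) - margin
    hi = coarse_fg.max(axis=0) + margin
    mask = np.all((full_cloud >= lo) & (full_cloud <= hi), axis=1)
    return full_cloud[mask]
```

Cropping with a box computed at coarse resolution is what keeps the per-frame cost low: the expensive distance test touches only the downsampled points, while the full-resolution cloud is filtered with a cheap box test.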
  3. Evolution has honed predatory skills in the natural world, where localizing and intercepting fast-moving prey is required. The current generation of robotic systems mimics these biological systems using deep learning. High-speed processing of camera frames with convolutional neural networks (CNNs) (the frame pipeline) quickly becomes resource-limited on constrained aerial edge robots. Even with additional compute, throughput is ultimately capped at the camera's frame rate, and traditional frame-only systems fail to capture the detailed temporal dynamics of the environment. Bio-inspired event cameras and spiking neural networks (SNNs) provide an asynchronous sensor-processor pair (the event pipeline) that captures the continuous temporal detail of the scene at high speed but lags in accuracy. In this work, we propose a target localization system that combines event-camera and SNN-based high-speed target estimation with frame-based camera and CNN-driven reliable object detection, fusing the complementary spatio-temporal strengths of the event and frame pipelines. One of our main contributions is the design of an SNN filter that borrows from the neural mechanism for ego-motion cancellation in houseflies: it fuses vestibular sensing with vision to cancel the activity corresponding to the predator's self-motion. We also integrate the neuro-inspired multi-pipeline processing with the task-optimized multi-neuronal pathway structure found in primates and insects. The system is validated to outperform CNN-only processing using prey-predator drone simulations in realistic 3D virtual environments, and is then demonstrated in a real-world multi-drone setup with emulated event data. Subsequently, we use actual recorded sensory data from a multi-camera and inertial measurement unit (IMU) assembly to show the desired behavior while tolerating realistic noise in the vision and IMU sensors. We analyze the design space to identify optimal parameters for the spiking neurons and CNN models and to check their effect on the performance metrics of the fused system. Finally, we map the throughput-controlling SNN and fusion network onto an edge-compatible Zynq-7000 FPGA to show a potential 264 outputs per second even under constrained resource availability. This work may open new research directions by coupling multiple sensing and processing modalities inspired by discoveries in neuroscience to break fundamental trade-offs in frame-based computer vision. 
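One way to picture the fused operation of the two pipelines is a tracker that accepts high-rate event/SNN position estimates and occasional frame/CNN detections. The sketch below is a conceptual stand-in with assumed interfaces and weights; it is not the paper's SNN filter, ego-motion cancellation, or FPGA mapping.

```python
import numpy as np

class FusedTracker:
    """Blend fast-but-noisy event/SNN estimates with slow-but-reliable CNN detections."""

    def __init__(self, alpha: float = 0.6):
        self.pos = None      # current fused 2D target position (pixels)
        self.alpha = alpha   # assumed weight given to each new event/SNN estimate

    def update_event(self, snn_estimate: np.ndarray) -> np.ndarray:
        """High-rate update from the event-camera + SNN pipeline."""
        est = snn_estimate.astype(float)
        self.pos = est if self.pos is None else self.alpha * est + (1 - self.alpha) * self.pos
        return self.pos

    def update_frame(self, cnn_detection: np.ndarray) -> np.ndarray:
        """Low-rate correction from the frame-camera + CNN detector."""
        self.pos = cnn_detection.astype(float)  # reset drift with the reliable detection
        return self.pos
```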
  4. 1. Description of the objectives and motivation for the contribution to ECE education. The demand for wireless data transmission capacity is increasing rapidly, and this growth is expected to continue due to the ongoing prevalence of cellular phones and new and emerging bandwidth-intensive applications that encompass high-definition video, unmanned aerial systems (UAS), intelligent transportation systems (ITS) including autonomous vehicles, and others. Meanwhile, vital military and public safety applications also depend on access to the radio frequency spectrum. To meet these demands, the US federal government is beginning to move from the proven but inefficient model of exclusive frequency assignments to a more efficient, shared-spectrum approach in some bands of the radio frequency spectrum. A STEM workforce that understands the radio frequency spectrum and the applications that use it is needed to further increase the spectrum efficiency and cost-effectiveness of wireless systems over the next several decades, to meet anticipated and unanticipated increases in wireless data capacity. 2. Relevant background, including literature search examples if appropriate. Cisco Systems' annual survey indicates continued strong growth in demand for wireless data capacity. Meanwhile, undergraduate electrical and computer engineering courses in communication systems, electromagnetics, and networks tend to emphasize mathematical and theoretical fundamentals and higher-layer protocols, with less focus on fundamental concepts that are more specific to radio frequency wireless systems, including the physical and media access control layers of wireless communication systems and networks. An efficient way is needed to introduce basic RF system and spectrum concepts, within courses such as those mentioned above, to undergraduate engineering students who are unable to, or had not planned to, take a full course in radio frequency / microwave engineering or wireless systems and networks. We have developed a series of interactive online modules that introduce concepts fundamental to wireless communications, the radio frequency spectrum, and spectrum sharing, and that seek to present these concepts in context. The modules include interactive, JavaScript-based simulation exercises intended to reinforce the concepts presented in the modules through narrated slide presentations, text, and external links. Additional modules in development will introduce advanced undergraduate and graduate students and STEM professionals to the configuration and programming of adaptive frequency-agile radios and spectrum management systems that can operate efficiently in congested radio frequency environments. Simulation exercises developed for the advanced modules allow both manual and automatic control of simulated radio links in timed, game-like simulations, and some exercises will let students select from among multiple pre-coded controller strategies and optionally edit the code before running the timed simulation. Additionally, we have developed infrastructure for running remote laboratory experiments that can also be embedded within the online modules, including a web-based user interface, an experiment management framework, and software defined radio (SDR) application software that runs in a wireless testbed initially developed for research. Although these experiments rely on limited hardware resources and introduce additional logistical considerations, they provide additional realism that may further challenge and motivate students. 3. Description of any assessment methods used to evaluate the effectiveness of the contribution. Each set of modules is preceded and followed by a survey. Each individual module is preceded by a quiz and followed by another quiz, with pre- and post-quiz questions drawn from the same pool. The pre-surveys allow students to opt in or out of having their survey and quiz results used anonymously in research. 4. Statement of results. The initial modules have been and are being used by three groups of students: (1) students in an undergraduate Introduction to Communication Systems course; (2) an interdisciplinary group of engineering students, including computer science students, who are participating in a related undergraduate research project; and (3) students in a graduate-level communications course that includes both electrical and computer engineers. Analysis of results from the first group of students showed statistically significant increases from pre-quiz to post-quiz for each of four modules on fundamental wireless communication concepts. Results for the other students have not yet been analyzed, but they also appear to show substantial pre-quiz to post-quiz increases in mean scores. 
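For readers unfamiliar with the pre-/post-quiz design, the usual way to check whether per-student gains are statistically significant is a paired-samples t-test. The snippet below uses made-up scores purely to illustrate the computation; it is not the study's data or analysis code.

```python
from scipy import stats

# Hypothetical pre- and post-quiz scores (%) for the same eight students.
pre_scores = [55, 60, 45, 70, 65, 50, 58, 62]
post_scores = [78, 82, 66, 85, 80, 72, 75, 88]

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_gain = sum(post_scores) / len(post_scores) - sum(pre_scores) / len(pre_scores)
print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a statistically significant increase.
```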
  5. Access to high-quality data is an important barrier in the digital analysis of urban settings, including applications within computer vision and urban design. Diverse forms of data collected from sensors in areas of high activity in the urban environment, particularly at street intersections, are valuable resources for researchers interpreting the dynamics between vehicles, pedestrians, and the built environment. In this paper, we present a high-resolution audio, video, and LiDAR dataset of three urban intersections in Brooklyn, New York, totaling almost 8 unique hours. The data were collected with custom Reconfigurable Environmental Intelligence Platform (REIP) sensors that were designed with the ability to accurately synchronize multiple video and audio inputs. The resulting data are novel in that they are inclusively multimodal, multi-angular, high-resolution, and synchronized. We demonstrate four ways the data could be utilized — (1) to discover and locate occluded objects using multiple sensors and modalities, (2) to associate audio events with their respective visual representations using both video and audio modes, (3) to track the amount of each type of object in a scene over time, and (4) to measure pedestrian speed using multiple synchronized camera views. In addition to these use cases, our data are available for other researchers to carry out analyses related to applying machine learning to understanding the urban environment (in which existing datasets may be inadequate), such as pedestrian-vehicle interaction modeling and pedestrian attribute recognition. Such analyses can help inform decisions made in the context of urban sensing and smart cities, including accessibility-aware urban design and Vision Zero initiatives.
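As an illustration of use case (4), pedestrian speed can be estimated once the pedestrian's positions from the synchronized views are projected onto a common ground plane. The sketch below assumes such timestamped ground-plane positions are already available; it is not the dataset's reference code.

```python
import numpy as np

def pedestrian_speed(track):
    """Average speed (m/s) from a list of (timestamp_s, x_m, y_m) samples
    for one pedestrian, observed by the synchronized cameras."""
    track = sorted(track)
    total_dist, total_time = 0.0, 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        total_dist += float(np.hypot(x1 - x0, y1 - y0))
        total_time += t1 - t0
    return total_dist / total_time if total_time > 0 else 0.0

# Example: crossing about 6 m in 4 s -> roughly 1.5 m/s.
print(pedestrian_speed([(0.0, 0.0, 0.0), (2.0, 3.0, 0.5), (4.0, 6.0, 1.0)]))
```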

     