Title: PAWS: A Wearable Acoustic System for Pedestrian Safety
With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Because they are deprived of the auditory cues that would otherwise warn them of danger, they are at a much greater risk of being hit by cars or other vehicles. In this paper, we build a wearable system that uses multi-channel audio sensors embedded in a headset to help detect and locate cars from their honks, engine and tire noises, and warn pedestrians of the imminent danger of approaching cars. We demonstrate that using a segmented architecture and implementation consisting of headset-mounted audio sensors, front-end hardware that performs signal processing and feature extraction, and machine-learning-based classification on a smartphone, we are able to provide early danger detection in real time from up to 60 m away, with near-100% precision in vehicle detection, and alert the user with low latency.
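To make the segmented pipeline concrete, the sketch below mirrors its three stages (capture, feature extraction, classification) in NumPy and scikit-learn. The abstract does not specify the features or the classifier, so the band-energy features, the SVM, and the 0.9 alert threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the segmented detection pipeline described above.
# The band-energy features, SVM classifier, and alert threshold are
# illustrative assumptions; the paper's actual design may differ.
import numpy as np
from sklearn.svm import SVC

def extract_features(frame: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Front-end stage: reduce one audio frame to log band-energy features."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

# Smartphone stage: classify frames as "car approaching" vs. background.
# X_train / y_train would come from labeled street recordings (placeholders here).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 16))            # placeholder features
y_train = rng.integers(0, 2, size=100)          # placeholder labels
clf = SVC(probability=True).fit(X_train, y_train)

frame = rng.normal(size=1024)                   # one 64 ms frame at 16 kHz
p_car = clf.predict_proba(extract_features(frame).reshape(1, -1))[0, 1]
if p_car > 0.9:                                 # alert threshold (assumed)
    print("warning: approaching vehicle detected")
```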
Award ID(s):
1704469 1704899
NSF-PAR ID:
10056958
Author(s) / Creator(s):
Date Published:
Journal Name:
International Conference on Internet-of-Things Design and Implementation
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Because they are deprived of the auditory cues that would otherwise warn them of danger, they are at a much greater risk of being hit by cars or other vehicles. In this paper, we build a wearable system that uses multi-channel audio sensors embedded in a headset to help detect and locate cars from their honks, engine and tire noises, and warn pedestrians of the imminent danger of approaching cars. We demonstrate that using a segmented architecture consisting of headset-mounted audio sensors, a front-end hardware platform that performs signal processing and feature extraction, and machine-learning-based classification on a smartphone, we are able to provide early danger detection in real time from up to 60 m away, and alert the user with low latency and high accuracy. To further reduce the power consumption of the battery-powered wearable headset, we implement a custom-designed integrated circuit that computes delays between multiple channels of audio at nW power consumption. A regression-based method for sound source localization, AvPR, is proposed and used in combination with the IC to improve the granularity and robustness of localization.
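The core operation that the custom IC in (1) offloads is estimating the relative delay between microphone channels; those delays then feed the AvPR localization stage. Below is a minimal NumPy sketch of that delay estimate via cross-correlation. The sample rate, frame length, and synthetic signals are assumptions for illustration, not the IC's actual design.

```python
# Minimal sketch of inter-channel delay estimation by cross-correlation,
# the operation the custom IC performs in hardware. Sample rate, frame
# length, and the synthetic signals below are illustrative assumptions.
import numpy as np

def channel_delay(x: np.ndarray, y: np.ndarray, sr: int) -> float:
    """Estimate the delay (in seconds) of channel y relative to channel x."""
    corr = np.correlate(y, x, mode="full")
    lag = int(np.argmax(corr)) - (len(x) - 1)   # peak position -> sample lag
    return lag / sr

sr = 16000                                      # 16 kHz (assumed)
rng = np.random.default_rng(0)
x = rng.normal(size=1024)                       # reference microphone frame
y = np.roll(x, 8)                               # same sound, 8 samples later
print(channel_delay(x, y, sr))                  # -> 0.0005 s (8 / 16000)
```

In AvPR, a regression model presumably maps such delay features to a source direction instead of inverting the array geometry analytically; that learned mapping is not shown here.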
  2. With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Because they are deprived of the auditory cues that would otherwise warn them of danger, they are at a much greater risk of being hit by cars or other vehicles. In this article, we present PAWS, a smartphone platform that uses a wearable headset embedded with an array of MEMS microphones to help detect, localize, and warn pedestrians of the imminent danger of approaching cars.
  3. Advances in perception for self-driving cars have accelerated in recent years due to the availability of large-scale datasets, typically collected at specific locations and under favorable weather conditions. Yet, to achieve the high safety requirement, these perceptual systems must operate robustly under a wide variety of weather conditions, including snow and rain. In this paper, we present a new dataset to enable robust autonomous driving via a novel data collection process: data is repeatedly recorded along a 15 km route under diverse scene (urban, highway, rural, campus), weather (snow, rain, sun), time (day/night), and traffic conditions (pedestrians, cyclists, and cars). The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS to establish correspondence across routes. The dataset includes road and object annotations using amodal masks to capture partial occlusions and 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects, depth estimation, and 3D object detection. The repeated routes open new research directions in object discovery, continual learning, and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/
  4. Recent statistics reveal an alarming increase in accidents involving pedestrians (especially children) crossing the street. A common philosophy of existing pedestrian detection approaches is that this task should be undertaken by the moving cars themselves. In a sharp departure from this philosophy, we propose to enlist the help of cars parked along the sidewalk to detect and protect crossing pedestrians. In support of this goal, we propose ADOPT: a system for Alerting Drivers to Occluded Pedestrian Traffic. ADOPT lays the theoretical foundations of a system that uses parked cars to: (1) detect the presence of a group of crossing pedestrians, a crossing cohort; (2) predict the time the last member of the cohort takes to clear the street; (3) send alert messages to those approaching cars that may reach the crossing area while pedestrians are still in the street; and (4) show how approaching cars can adjust their speed, given several simultaneous crossing locations. Importantly, in ADOPT all communications occur over very short distances and at very low power. Our extensive simulations using SUMO-generated pedestrian and car traffic have shown the effectiveness of ADOPT in detecting and protecting crossing pedestrians.
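The speed-adjustment step (4) in ADOPT reduces to simple arithmetic once steps (2) and (3) supply, for each active crossing, the car's distance to it and the predicted clearing time. The sketch below shows the most direct interpretation, namely arriving no earlier than each crossing clears; ADOPT's actual rule is not given in the abstract, so the function and its parameters are assumptions.

```python
# Hedged sketch of ADOPT's speed-adjustment step (4): the abstract does
# not give the exact rule, so this uses the simplest interpretation --
# reach every active crossing no earlier than its predicted clearing time.
def max_safe_speed(distances_m, clear_times_s, speed_limit_mps=13.9):
    """Largest speed that reaches every crossing after it has cleared."""
    candidates = [d / t for d, t in zip(distances_m, clear_times_s) if t > 0]
    return min([speed_limit_mps] + candidates)

# Two simultaneous crossings: 80 m away clearing in 6 s, 150 m away in 5 s.
print(max_safe_speed([80, 150], [6, 5]))   # -> 13.33 m/s (~48 km/h)
```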
  5. Access to high-quality data is an important barrier in the digital analysis of urban settings, including applications within computer vision and urban design. Diverse forms of data collected from sensors in areas of high activity in the urban environment, particularly at street intersections, are valuable resources for researchers interpreting the dynamics between vehicles, pedestrians, and the built environment. In this paper, we present a high-resolution audio, video, and LiDAR dataset of three urban intersections in Brooklyn, New York, totaling almost 8 unique hours. The data were collected with custom Reconfigurable Environmental Intelligence Platform (REIP) sensors that were designed with the ability to accurately synchronize multiple video and audio inputs. The resulting data are novel in that they are inclusively multimodal, multi-angular, high-resolution, and synchronized. We demonstrate four ways the data could be utilized: (1) to discover and locate occluded objects using multiple sensors and modalities, (2) to associate audio events with their respective visual representations using both video and audio modes, (3) to track the amount of each type of object in a scene over time, and (4) to measure pedestrian speed using multiple synchronized camera views. In addition to these use cases, our data are available for other researchers to carry out analyses related to applying machine learning to understanding the urban environment (in which existing datasets may be inadequate), such as pedestrian-vehicle interaction modeling and pedestrian attribute recognition. Such analyses can help inform decisions made in the context of urban sensing and smart cities, including accessibility-aware urban design and Vision Zero initiatives.

     
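As a concrete illustration of use case (4) in item 5, the sketch below estimates a pedestrian's speed by projecting detections from a calibrated camera onto the ground plane at two synchronized timestamps. The homography and the pixel detections are invented placeholders; the dataset's actual calibration and tracking pipeline is not described in the abstract.

```python
# Illustrative sketch of pedestrian speed estimation from synchronized
# camera views. The homography H and the pixel detections are invented
# placeholders, not the REIP dataset's actual calibration.
import numpy as np

def to_ground(H: np.ndarray, px: tuple) -> np.ndarray:
    """Project an image point to ground-plane metres via homography H."""
    p = H @ np.array([px[0], px[1], 1.0])
    return p[:2] / p[2]

H = np.eye(3) * 0.01
H[2, 2] = 1.0                                    # placeholder calibration
pos_t0 = to_ground(H, (400, 300))                # detection at t = 0.0 s
pos_t1 = to_ground(H, (430, 300))                # same person at t = 0.5 s
speed = np.linalg.norm(pos_t1 - pos_t0) / 0.5
print(f"{speed:.2f} m/s")                        # -> 0.60 m/s with these numbers
```

With multiple synchronized views, the same projection can be averaged across cameras to reduce detection noise, which is presumably where the dataset's tight audio/video synchronization pays off.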