PAWS: A Wearable Acoustic System for Pedestrian Safety

With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Deprived of the auditory cues that would otherwise warn them of danger, they are at a much greater risk of being hit by cars or other vehicles. In this paper, we build a wearable system that uses multi-channel audio sensors embedded in a headset to detect and locate cars from their honks, engine, and tire noises, and to warn pedestrians of the imminent danger of approaching cars. We demonstrate that a segmented architecture, consisting of headset-mounted audio sensors, a front-end hardware platform that performs signal processing and feature extraction, and machine-learning-based classification on a smartphone, provides early danger detection in real time from up to 60 m away, with near-100% precision on vehicle detection, and alerts the user with low latency. To further reduce the power consumption of the battery-powered wearable headset, we implement a custom-designed integrated circuit that computes delays between multiple channels of audio with nW-level power consumption. A regression-based method for sound source localization, AvPR, is proposed and used in combination with the IC to improve the granularity and robustness of localization.
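As an illustration of the kind of computation an inter-channel delay estimator performs, the delay between two microphone signals can be read off the peak of their cross-correlation, the classic time-difference-of-arrival primitive. The sketch below is illustrative only and is not the paper's actual circuit or algorithm; the function name and the synthetic signals are assumptions for demonstration.

```python
import numpy as np

def channel_delay_samples(sig_a, sig_b):
    """Estimate how many samples sig_b lags sig_a, using the peak
    of the full cross-correlation (classic TDoA primitive)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    # Index len(sig_b) - 1 of the full correlation corresponds to zero lag.
    return (len(sig_b) - 1) - int(np.argmax(corr))

# Synthetic check: the same noise burst arrives 5 samples later on channel B.
rng = np.random.default_rng(0)
burst = rng.standard_normal(256)
chan_a = np.concatenate([burst, np.zeros(5)])
chan_b = np.concatenate([np.zeros(5), burst])
delay = channel_delay_samples(chan_a, chan_b)  # 5 samples
```

Dividing the sample delay by the sampling rate gives the delay in seconds, which, combined across microphone pairs, constrains the direction of the sound source.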
- PAR ID: 10056958
- Journal Name: International Conference on Internet-of-Things Design and Implementation
- Sponsoring Org: National Science Foundation
More Like this
-
With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Deprived of the auditory cues that would otherwise warn them of danger, they are at a much greater risk of being hit by cars or other vehicles. In this article, we present PAWS, a smartphone platform that uses a wearable headset with an embedded array of MEMS microphones to help detect, localize, and warn pedestrians of the imminent dangers of approaching cars.
-
Recent statistics reveal an alarming increase in accidents involving pedestrians (especially children) crossing the street. A common philosophy of existing pedestrian detection approaches is that this task should be undertaken by the moving cars themselves. In sharp departure from this philosophy, we propose to enlist the help of cars parked along the sidewalk to detect and protect crossing pedestrians. In support of this goal, we propose ADOPT: a system for Alerting Drivers to Occluded Pedestrian Traffic. ADOPT lays the theoretical foundations of a system that uses parked cars to: (1) detect the presence of a group of crossing pedestrians (a crossing cohort); (2) predict the time the last member of the cohort takes to clear the street; (3) send alert messages to those approaching cars that may reach the crossing area while pedestrians are still in the street; and (4) show how approaching cars can adjust their speed, given several simultaneous crossing locations. Importantly, in ADOPT all communications occur over very short distances and at very low power. Our extensive simulations using SUMO-generated pedestrian and car traffic have shown the effectiveness of ADOPT in detecting and protecting crossing pedestrians.
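The core alert decision described in steps (2) and (3), warning any car that could reach the crossing before the last cohort member clears it, can be sketched as a simple time-to-arrival comparison. This is a hedged illustration of the idea only; the function name, the safety margin, and the constant-speed assumption are ours, not ADOPT's actual model.

```python
def should_alert(car_distance_m, car_speed_mps, cohort_clear_time_s, margin_s=2.0):
    """Warn an approaching car if it could reach the crossing before the
    last pedestrian in the cohort clears the street, plus a safety margin.
    Assumes constant car speed; all parameter names are illustrative."""
    if car_speed_mps <= 0:
        return False  # a stopped car poses no imminent conflict
    time_to_crossing_s = car_distance_m / car_speed_mps
    return time_to_crossing_s < cohort_clear_time_s + margin_s

# A car 100 m away at 14 m/s arrives in ~7.1 s; with a cohort needing 6 s
# to clear plus a 2 s margin, that car should receive an alert.
alert = should_alert(100.0, 14.0, 6.0)  # True
```

A car far enough away that its arrival time exceeds the clearing time plus the margin would not be alerted, which keeps the short-range, low-power messaging budget focused on cars in actual conflict.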
-
The safety of distracted pedestrians presents a significant public health challenge in the United States and worldwide. According to the Centers for Disease Control and Prevention (CDC), an estimated 6,704 American pedestrians died and over 200,000 were injured in traffic crashes in 2018. This number is increasing annually, and many researchers posit that distraction by smartphones is a primary reason for the rising number of pedestrian injuries and deaths. One strategy to prevent pedestrian injuries and deaths is to use intrusive interruptions that warn distracted pedestrians directly on their smartphones. To this end, we developed StreetBit, a Bluetooth beacon-based mobile application that alerts distracted pedestrians with a visual and/or audio interruption when they are distracted by their smartphones and approaching a potentially dangerous traffic intersection. In this paper, we present the background, architecture, and operations of the StreetBit application.
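A beacon-proximity trigger of the kind StreetBit describes can be approximated with the standard log-distance path-loss model for BLE RSSI. The sketch below is an assumption-laden illustration, not StreetBit's implementation; the transmit-power reference, path-loss exponent, and alert radius are placeholder values that would need per-site calibration.

```python
def within_alert_radius(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0,
                        radius_m=15.0):
    """Estimate distance to a BLE beacon from its received signal strength
    using the log-distance path-loss model, and report whether the
    pedestrian is inside the alert radius. Constants are illustrative."""
    est_distance_m = 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))
    return est_distance_m <= radius_m

near = within_alert_radius(-70.0)  # ~3.5 m from the beacon -> True
far = within_alert_radius(-95.0)   # ~63 m from the beacon -> False
```

In practice RSSI is noisy, so a real trigger would smooth readings over a window before firing the visual or audio interruption.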
-
Access to high-quality data is an important barrier in the digital analysis of urban settings, including applications within computer vision and urban design. Diverse forms of data collected from sensors in areas of high activity in the urban environment, particularly at street intersections, are valuable resources for researchers interpreting the dynamics between vehicles, pedestrians, and the built environment. In this paper, we present a high-resolution audio, video, and LiDAR dataset of three urban intersections in Brooklyn, New York, totaling almost 8 hours of unique data. The data were collected with custom Reconfigurable Environmental Intelligence Platform (REIP) sensors that were designed with the ability to accurately synchronize multiple video and audio inputs. The resulting data are novel in that they are inclusively multimodal, multi-angular, high-resolution, and synchronized. We demonstrate four ways the data could be utilized: (1) to discover and locate occluded objects using multiple sensors and modalities, (2) to associate audio events with their respective visual representations using both video and audio modes, (3) to track the amount of each type of object in a scene over time, and (4) to measure pedestrian speed using multiple synchronized camera views. In addition to these use cases, our data are available for other researchers to carry out analyses related to applying machine learning to understanding the urban environment (in which existing datasets may be inadequate), such as pedestrian-vehicle interaction modeling and pedestrian attribute recognition. Such analyses can help inform decisions made in the context of urban sensing and smart cities, including accessibility-aware urban design and Vision Zero initiatives.
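Use case (4), measuring pedestrian speed from synchronized camera views, reduces to differencing timestamped positions once detections from the views have been triangulated into a common ground plane. The snippet below is a minimal sketch of that final step only, assuming positions are already in metric ground-plane coordinates; the function name and inputs are illustrative.

```python
import math

def pedestrian_speed_mps(p0, p1, t0_s, t1_s):
    """Average ground-plane speed (m/s) between two timestamped positions,
    e.g. the same pedestrian triangulated across synchronized camera views.
    Positions are (x, y) tuples in meters; timestamps in seconds."""
    if t1_s <= t0_s:
        raise ValueError("timestamps must be increasing")
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / (t1_s - t0_s)

# 5 m of ground-plane travel over 4 s, a typical walking pace.
speed = pedestrian_speed_mps((0.0, 0.0), (3.0, 4.0), 10.0, 14.0)  # 1.25 m/s
```

Accurate cross-camera synchronization, which the REIP sensors are designed to provide, is what makes the shared timestamps in this computation meaningful.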