Award ID: 1704469

  1. Glimpse.3D is a body-worn camera that captures, processes, stores, and transmits 3D visual information about a real-world environment using a low-cost camera-based sensor system constrained by limited processing capability, storage, and battery life. The 3D content is viewed on a mobile device such as a smartphone or a virtual reality headset. The system can be used in applications such as capturing and sharing 3D content on social media, training people in different professions, and post-facto analysis of an event. Glimpse.3D uses off-the-shelf hardware and standard computer vision algorithms. Its novelty lies in its ability to optimally control the camera's data acquisition and processing stages to guarantee a desired quality of captured information and battery life. The design of the controller is based on extensive measurements and modeling of the relationships between the linear and angular motion of a body-worn camera, the quality of the generated 3D point clouds, and the battery life of the system. To achieve this, we 1) devise a new metric to quantify the quality of generated 3D point clouds, 2) formulate an optimization problem that finds an optimal trigger point for the camera system, prolonging battery life while maximizing the quality of the captured 3D environment, and 3) make the model adaptive so that the system evolves and its performance improves over time.
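
The trigger-point optimization described in the abstract above lends itself to a small illustration. The following Python sketch shows the general shape of such a motion-aware capture controller; the quality model, energy cost, and thresholds are invented placeholders for illustration, not the paper's fitted models or parameters.

```python
import numpy as np

# Hypothetical sketch of a motion-aware trigger controller. The quality
# and energy models below are illustrative stand-ins, not the paper's
# actual fitted models or parameters.

def predicted_quality(linear_speed, angular_speed):
    """Toy quality model: point-cloud quality falls off with camera
    motion (motion blur, reduced feature-track overlap)."""
    return float(np.exp(-0.8 * linear_speed - 1.5 * angular_speed))

def should_trigger(linear_speed, angular_speed, battery_joules,
                   energy_per_capture=2.0, quality_floor=0.5):
    """Fire a capture/reconstruction cycle only when the predicted
    quality clears a floor and the energy budget allows it."""
    q = predicted_quality(linear_speed, angular_speed)
    return q >= quality_floor and battery_joules >= energy_per_capture

# Example: slow, steady motion with plenty of battery -> capture fires.
print(should_trigger(linear_speed=0.3, angular_speed=0.1,
                     battery_joules=500.0))   # True
```

In the paper's formulation the controller is also adaptive, so placeholder constants like those above would be refined over time from the quality actually observed in generated point clouds.
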
  2. With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Deprived of the auditory cues that would otherwise warn them of danger, they are at a much greater risk of being hit by cars or other vehicles. In this paper, we build a wearable system that uses multi-channel audio sensors embedded in a headset to detect and locate cars from their honks, engine and tire noises, and to warn pedestrians of the imminent danger of approaching cars. We demonstrate that a segmented architecture and implementation, consisting of headset-mounted audio sensors, front-end hardware that performs signal processing and feature extraction, and machine-learning-based classification on a smartphone, provides early danger detection in real time, from distances of up to 60 m, with near-100% precision in vehicle detection, and alerts the user with low latency.
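
To make the segmented architecture concrete, here is a minimal Python sketch of the front-end stage: each multi-channel audio frame is reduced to a compact feature vector (log band energies plus an inter-channel delay cue) that a classifier running on the smartphone could consume. The specific features are assumptions for illustration, not the paper's exact feature set.

```python
import numpy as np

# Illustrative front-end sketch: headset frames in, small feature
# vectors out. Feature choices (log band energies plus an inter-channel
# delay from GCC-PHAT) are assumptions, not the paper's exact features.

def gcc_phat_delay(left, right, fs=44100):
    """Inter-channel time delay via generalized cross-correlation with
    phase transform -- a standard coarse bearing cue."""
    n = len(left) + len(right)
    X = np.fft.rfft(left, n) * np.conj(np.fft.rfft(right, n))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n)
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:        # wrap negative delays
        shift -= n
    return shift / fs

def frame_features(left, right, n_bands=8):
    """Per-frame feature vector: log energies in coarse frequency bands
    of one channel, plus the left/right delay."""
    spec = np.abs(np.fft.rfft(left)) ** 2
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    bands = [spec[a:b].sum() for a, b in zip(edges[:-1], edges[1:])]
    return np.concatenate([np.log(np.asarray(bands) + 1e-9),
                           [gcc_phat_delay(left, right)]])

# Example on a synthetic stereo frame; a trained classifier on the
# phone would map such vectors to {vehicle, no vehicle}.
rng = np.random.default_rng(0)
frame = rng.standard_normal((2, 2048))
print(frame_features(frame[0], frame[1]).shape)   # (9,)
```
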
  3. Mobility tracking of IoT devices in smart city infrastructures such as smart buildings, hospitals, shopping centers, warehouses, smart streets, and outdoor spaces has many applications. Since Bluetooth Low Energy (BLE) is available in almost every IoT device on the market today, a key to localizing and tracking IoT devices is an accurate ranging technique for BLE-enabled devices. This is, however, a challenging feat, as billions of these devices are already in use and, for pragmatic reasons, we cannot propose to modify the IoT device (a BLE peripheral) itself. Furthermore, unlike WiFi ranging, where channel state information (CSI) is readily available and the bandwidth can be increased by stitching the 2.4 GHz and 5 GHz bands together to achieve high-precision ranging, an unmodified BLE peripheral provides only RSSI information over a very limited bandwidth. Accurately ranging a BLE device is therefore far more challenging than ranging with other wireless standards. In this paper, we exploit characteristics of the BLE protocol (e.g., frequency hopping and empty control packet transmissions) and propose a technique to directly estimate the range of a BLE peripheral from a BLE access point by multipath profiling. We discuss the theoretical foundation and conduct experiments showing that the technique achieves a 2.44 m absolute range estimation error on average.
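
The abstract does not spell out the estimator, but one simplified way to picture multipath profiling from hop-by-hop RSSI is sketched below in Python: the mean RSSI logged on each data channel is treated as a magnitude-only sampling of the channel response across frequency and transformed to the delay domain. The channel count, spacing, and magnitude-only transform are illustrative assumptions; the paper's actual estimator may differ.

```python
import numpy as np

# Rough sketch of the multipath-profiling idea, assuming a mean RSSI has
# been logged per BLE data channel as the connection hops (37 channels,
# 2 MHz spacing). This is a simplified illustration, not the paper's
# estimator.

C = 3e8              # speed of light, m/s
SPACING_HZ = 2e6     # BLE data-channel spacing
N_CHANNELS = 37      # BLE data channels (hopping set)

def delay_profile(rssi_dbm):
    """Coarse delay-domain profile from magnitude-only channel samples.
    With ~74 MHz of total span, tap resolution is a few meters, the
    same order as the reported meter-level ranging error."""
    amp = 10 ** (np.asarray(rssi_dbm, dtype=float) / 20.0)  # dB -> amplitude
    taps = np.abs(np.fft.ifft(amp))
    delays = np.arange(len(amp)) / (len(amp) * SPACING_HZ)
    return delays, taps

def estimate_range_m(rssi_dbm):
    """Pick the strongest non-DC tap and convert its delay to meters.
    In this magnitude-only toy the tap reflects path-length structure
    rather than a calibrated absolute range."""
    delays, taps = delay_profile(rssi_dbm)
    k = 1 + int(np.argmax(taps[1:len(taps) // 2]))
    return delays[k] * C

# Example with synthetic RSSI showing a ripple across channels: a
# two-path channel produces frequency-domain ripple whose period
# encodes the path-length difference.
f = np.arange(N_CHANNELS) * SPACING_HZ
rssi = -60 + 6 * np.cos(2 * np.pi * f * 50e-9)   # 50 ns ripple
print(round(estimate_range_m(rssi), 1))
```
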