

Search results (all records) where Creators/Authors contains: "Nirjon, Shahriar"


  1. We propose Zygarde, an energy- and accuracy-aware soft real-time task scheduling framework for batteryless systems that flexibly execute deep learning tasks suitable for running on microcontrollers. The sporadic nature of harvested energy, the resource constraints of the embedded platform, and the computational demand of deep neural networks (DNNs) pose a unique and challenging real-time scheduling problem for which no solutions have been proposed in the literature. We empirically study the problem and model the energy harvesting pattern as well as the trade-off between the accuracy and execution of a DNN. We develop an imprecise-computing-based scheduling algorithm that improves the timeliness of DNN tasks on intermittently powered systems. We evaluate Zygarde using four standard datasets as well as by deploying it in six real-life applications involving audio and camera sensor systems. Results show that Zygarde decreases the execution time by up to 26% and schedules 9%-34% more tasks with up to 21% higher inference accuracy, compared to traditional schedulers such as earliest-deadline-first (EDF).
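The scheduling idea in the Zygarde entry, imprecise computing under deadline and energy constraints, can be pictured with a minimal sketch. The code below is not Zygarde's algorithm; it is a generic EDF-ordered loop that, for each inference task, picks the deepest early-exit variant of a DNN that still fits the deadline and the currently harvested energy budget. The Task and Exit structures and every number in the example are hypothetical.

```python
# Minimal sketch only (not Zygarde): EDF-ordered selection of DNN early exits under
# an energy budget. All fields and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Exit:                 # one truncation point of a DNN
    runtime_ms: float       # estimated execution time
    energy_mj: float        # estimated energy cost
    accuracy: float         # expected inference accuracy

@dataclass
class Task:
    deadline_ms: float
    exits: list             # Exit options, ordered shallow -> deep

def schedule(tasks, energy_budget_mj, now_ms=0.0):
    """Return (task, chosen_exit) pairs in execution order."""
    plan, t, budget = [], now_ms, energy_budget_mj
    for task in sorted(tasks, key=lambda k: k.deadline_ms):   # earliest deadline first
        feasible = [e for e in task.exits
                    if t + e.runtime_ms <= task.deadline_ms and e.energy_mj <= budget]
        if not feasible:
            continue                                          # cannot meet deadline or budget
        best = max(feasible, key=lambda e: e.accuracy)        # deepest exit that still fits
        plan.append((task, best))
        t += best.runtime_ms
        budget -= best.energy_mj
    return plan

# Two tasks, each with a fast/low-accuracy and a slow/high-accuracy exit.
tasks = [Task(50, [Exit(10, 2, 0.70), Exit(30, 6, 0.90)]),
         Task(40, [Exit(12, 3, 0.65), Exit(35, 8, 0.88)])]
print(schedule(tasks, energy_budget_mj=9))
```

In this toy run, the tighter-deadline task claims the high-accuracy exit and most of the energy budget, so the remaining task is dropped rather than finished late, which is the accuracy-versus-timeliness tension the abstract describes.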
  2. The lack of adequate training data is one of the major hurdles in WiFi-based activity recognition systems. In this paper, we propose Wi-Fringe, a WiFi CSI-based, device-free human gesture recognition system that recognizes named gestures, i.e., activities and gestures that have a semantically meaningful name in the English language, as opposed to arbitrary free-form gestures. Given a list of activities (only their names in English text), along with zero or more training examples (WiFi CSI values) per activity, Wi-Fringe is able to detect all activities at runtime. We show for the first time that by utilizing the state-of-the-art semantic representation of English words, learned from datasets such as Wikipedia (e.g., Google's word-to-vector [1]), and verb attributes learned from how a word is defined (e.g., the American Heritage Dictionary), we can enhance the capability of WiFi-based named gesture recognition systems that lack adequate training examples per class. We propose a novel cross-domain knowledge transfer algorithm between radio frequency (RF) and text to lessen the burden on developers and end-users of the tedious task of data collection for all possible activities. To evaluate Wi-Fringe, we collect data from four volunteers in a multi-person apartment and an office building for a total of 20 activities. We empirically quantify the trade-off between the accuracy and the number of unseen activities.
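Wi-Fringe's use of word embeddings to cover classes with few or no CSI examples follows the general zero-shot pattern sketched below. This is not the paper's cross-domain transfer algorithm, and the random vectors stand in for real word2vec embeddings and CSI features: a linear map from CSI features into the embedding space is fit on the seen classes, and a sample is then labeled by its nearest activity-name embedding, which also covers an activity with zero CSI examples.

```python
# Generic zero-shot sketch (not Wi-Fringe's actual transfer algorithm): project a CSI
# feature vector into a word-embedding space with a learned linear map, then label it
# with the nearest activity-name embedding. Toy vectors stand in for real data.
import numpy as np

rng = np.random.default_rng(0)
D_CSI, D_EMB = 64, 8

# Hypothetical name embeddings for activities, including one with no CSI training data.
name_emb = {"walk": rng.normal(size=D_EMB),
            "sit":  rng.normal(size=D_EMB),
            "wave": rng.normal(size=D_EMB)}      # unseen class: no CSI examples

# Seen classes: (CSI feature, name) pairs; fit a linear map CSI -> embedding space.
X = rng.normal(size=(20, D_CSI))
Y = np.stack([name_emb["walk"] if i < 10 else name_emb["sit"] for i in range(20)])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)        # least-squares projection

def classify(csi_feature):
    z = csi_feature @ W                          # project into embedding space
    # cosine similarity against every activity name, seen or unseen
    sims = {name: z @ e / (np.linalg.norm(z) * np.linalg.norm(e) + 1e-9)
            for name, e in name_emb.items()}
    return max(sims, key=sims.get)

print(classify(rng.normal(size=D_CSI)))
```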
  3. In this paper, we present ZenCam, an always-on body camera that exploits readily available information in the encoded video stream from the on-chip firmware to classify the dynamics of the scene. This scene context is further combined with a simple inertial measurement unit (IMU)-based activity-level context of the wearer to optimally control the camera configuration at run-time and keep the device under the desired energy budget. We describe the design and implementation of ZenCam and thoroughly evaluate its performance in real-world scenarios. Our evaluation shows a 29.8-35% reduction in energy consumption and a 48.1-49.5% reduction in storage usage when compared to a standard baseline setting of 1920×1080 at 30 fps, while maintaining competitive or better video quality at minimal computational overhead.
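The run-time control that ZenCam performs, trading video fidelity against an energy budget, has roughly the shape of the toy controller below. It is not ZenCam's policy; the configuration tiers, thresholds, and power figures are invented, and the inputs merely mimic the entry's two context signals (scene dynamics from the encoded stream, activity level from the IMU).

```python
# Toy controller sketch (not ZenCam's policy): pick a camera configuration from scene
# motion and wearer activity level, subject to a power budget. All numbers are made up.
CONFIGS = [  # (width, height, fps, est_power_mw)
    (640,  480,  15, 250),
    (1280, 720,  30, 520),
    (1920, 1080, 30, 900),
]

def choose_config(scene_motion, activity_level, power_budget_mw):
    """scene_motion and activity_level in [0, 1]; return the richest config that fits."""
    demand = max(scene_motion, activity_level)          # how much detail the moment needs
    # map demand to a desired tier, then back off until it fits the power budget
    tier = 0 if demand < 0.3 else 1 if demand < 0.7 else 2
    while tier > 0 and CONFIGS[tier][3] > power_budget_mw:
        tier -= 1
    return CONFIGS[tier]

print(choose_config(scene_motion=0.8, activity_level=0.4, power_budget_mw=600))
# -> (1280, 720, 30, 520): busy scene, but the 1080p tier exceeds the budget
```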
  4. With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Since they are deprived of the auditory cues that would have alerted them to dangers, they are at a much greater risk of being hit by cars or other vehicles. In this paper, we build a wearable system that uses multi-channel audio sensors embedded in a headset to help detect and locate cars from their honks, engine and tire noises, and warn pedestrians of the imminent dangers of approaching cars. We demonstrate that using a segmented architecture consisting of headset-mounted audio sensors, a front-end hardware platform that performs signal processing and feature extraction, and machine learning-based classification on a smartphone, we are able to provide early danger detection in real-time, from up to 60 m away, and alert the user with low latency and high accuracy. To further reduce the power consumption of the battery-powered wearable headset, we implement a custom-designed integrated circuit that computes delays between multiple channels of audio with nanowatt (nW) power consumption. A regression-based method for sound source localization, AvPR, is proposed and used in combination with the IC to improve the granularity and robustness of localization.
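The custom IC mentioned in this entry computes delays between audio channels, the raw quantity behind direction estimation. The snippet below is only the textbook cross-correlation version of that computation on synthetic signals; it is not the AvPR method or the IC's implementation.

```python
# Textbook inter-channel delay estimation via cross-correlation, for illustration only;
# not the AvPR method or the custom IC's algorithm. Signals below are synthetic.
import numpy as np

def channel_delay(sig_a, sig_b, fs):
    """Estimate how many seconds sig_b lags sig_a via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)      # lag in samples
    return lag / fs

fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
honk = np.sin(2 * np.pi * 440 * t) * np.exp(-40 * t)    # toy honk-like tone burst
delay_samples = 12                                      # ~0.25 ms, ~8.6 cm of extra path
mic_a = np.concatenate([honk, np.zeros(delay_samples)])
mic_b = np.concatenate([np.zeros(delay_samples), honk]) # same sound, arriving later
print(channel_delay(mic_a, mic_b, fs))                  # ~ 12 / 48000 s
```

With a known microphone spacing, such a delay constrains the direction the sound arrived from, which is the kind of input a source-localization stage builds on.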
  5. Glimpse.3D is a body-worn camera that captures, processes, stores, and transmits 3D visual information of a real-world environment using a low-cost camera-based sensor system that is constrained by its limited processing capability, storage, and battery life. The 3D content is viewed on a mobile device such as a smartphone or a virtual reality headset. The system can be used in applications such as capturing and sharing 3D content on social media, training people in different professions, and post-facto analysis of an event. Glimpse.3D uses off-the-shelf hardware and standard computer vision algorithms. Its novelty lies in the ability to optimally control the camera data acquisition and processing stages to guarantee a desired quality of captured information and battery life. The design of the controller is based on extensive measurements and modeling of the relationships between the linear and angular motion of a body-worn camera, the quality of the generated 3D point clouds, and the battery life of the system. To achieve this, we 1) devise a new metric to quantify the quality of generated 3D point clouds, 2) formulate an optimization problem to find an optimal trigger point for the camera system that prolongs its battery life while maximizing the quality of the captured 3D environment, and 3) make the model adaptive so that the system evolves and its performance improves over time.
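The optimization Glimpse.3D performs relies on measured models tying camera motion to point-cloud quality and energy use. The toy decision function below only illustrates what goes in and out of such a capture trigger; the quality model, costs, and thresholds are invented and are not the paper's.

```python
# Toy capture-trigger sketch (not Glimpse.3D's models or metric): estimate point-cloud
# quality from the camera's linear/angular motion and capture only when it clears a
# threshold chosen to stretch the battery over the session. All constants are hypothetical.
def predicted_quality(lin_speed_mps, ang_speed_dps):
    """Crude stand-in: faster motion -> more blur and less overlap -> lower quality in [0, 1]."""
    return max(0.0, 1.0 - 0.4 * lin_speed_mps - 0.01 * ang_speed_dps)

def should_capture(lin_speed_mps, ang_speed_dps,
                   battery_j, session_s, capture_cost_j=2.0, min_period_s=5.0):
    quality = predicted_quality(lin_speed_mps, ang_speed_dps)
    max_captures = battery_j / capture_cost_j             # energy-limited budget
    paced_captures = session_s / min_period_s             # pacing plan for the session
    # raise the quality bar when energy is scarce relative to the pacing plan
    threshold = 0.3 if max_captures >= paced_captures else 0.6
    return quality >= threshold

print(should_capture(lin_speed_mps=0.5, ang_speed_dps=10, battery_j=300, session_s=3600))
# -> True: slow, steady motion is worth an energy-expensive capture
```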
  6. With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Since they are deprived of the auditory cues that would have alerted them to dangers, they are at a much greater risk of being hit by cars or other vehicles. In this article, we present PAWS, a smartphone platform that utilizes a wearable embedded headset system fitted with an array of MEMS microphones to help detect, localize, and warn pedestrians of the imminent dangers of approaching cars.
  7. Mobility tracking of IoT devices in smart city infrastructures such as smart buildings, hospitals, shopping centers, warehouses, smart streets, and outdoor spaces has many applications. Since Bluetooth Low Energy (BLE) is available in almost every IoT device on the market nowadays, a key to localizing and tracking IoT devices is to develop an accurate ranging technique for BLE-enabled IoT devices. This is, however, a challenging feat, as billions of these devices are already in use, and for pragmatic reasons we cannot propose to modify the IoT device (a BLE peripheral) itself. Furthermore, unlike WiFi ranging, where the channel state information (CSI) is readily available and the bandwidth can be increased by stitching the 2.4 GHz and 5 GHz bands together to achieve high-precision ranging, an unmodified BLE peripheral provides us with only RSSI information over a very limited bandwidth. Accurately ranging a BLE device is therefore far more challenging than for other wireless standards. In this paper, we exploit characteristics of the BLE protocol (e.g., frequency hopping and empty control packet transmissions) and propose a technique to directly estimate the range of a BLE peripheral from a BLE access point by multipath profiling. We discuss the theoretical foundation and conduct experiments to show that the technique achieves an average absolute range estimation error of 2.44 m.
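As a point of contrast with the multipath-profiling technique this entry proposes, the snippet below shows the naive RSSI baseline it improves on: average the RSSI reported across BLE's hopped data channels (frequency diversity smooths per-channel fading) and invert a log-distance path-loss model. The calibration constants are hypothetical, and this is not the paper's method.

```python
# Naive RSSI-ranging baseline, not the paper's multipath-profiling technique.
# Calibration constants (RSSI at 1 m, path-loss exponent) are hypothetical.
def rssi_range_m(rssi_by_channel_dbm, rssi_at_1m_dbm=-59.0, path_loss_exp=2.0):
    """rssi_by_channel_dbm: {BLE data channel index: RSSI in dBm} gathered as the link hops."""
    avg_rssi = sum(rssi_by_channel_dbm.values()) / len(rssi_by_channel_dbm)
    # log-distance model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d)
    return 10 ** ((rssi_at_1m_dbm - avg_rssi) / (10 * path_loss_exp))

samples = {ch: -70.0 + (ch % 3) for ch in range(37)}   # toy readings on the 37 data channels
print(round(rssi_range_m(samples), 2))                 # ~3.2 m for a ~-69 dBm average
```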
  8. With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Since they are deprived of the auditory cues that would have alerted them to dangers, they are at a much greater risk of being hit by cars or other vehicles. In this paper, we build a wearable system that uses multi-channel audio sensors embedded in a headset to help detect and locate cars from their honks, engine and tire noises, and warn pedestrians of imminent dangers of approaching cars. We demonstrate that using a segmented architecture and implementation consisting of headset-mounted audio sensors, front-end hardware that performs signal processing and feature extraction, and machine learning-based classification on a smartphone, we are able to provide early danger detection in real-time, from up to 60 m away, with near-100% precision in vehicle detection, and alert the user with low latency.