Title: REAM: Resource Efficient Adaptive Monitoring of Community Spaces at the Edge Using Reinforcement Learning
An increasing number of community spaces are being instrumented with heterogeneous IoT sensors and actuators that enable continuous monitoring of the surrounding environments. Data streams generated from the devices are analyzed using a range of analytics operators and transformed into meaningful information for community monitoring applications. To ensure high-quality results, timely monitoring, and application reliability, we argue that these operators must be hosted at edge servers located in close proximity to the community space. In this paper, we present a Resource Efficient Adaptive Monitoring (REAM) framework at the edge that adaptively selects workflows of devices and operators to maintain adequate quality of information for the application at hand while judiciously consuming the limited resources available on edge servers. IoT deployments in community spaces are in a state of continuous flux dictated by the nature of activities and events within the space. Since these spaces are complex and change dynamically, and events can take place under different environmental contexts, developing a one-size-fits-all model that works for all types of spaces is infeasible. The REAM framework utilizes deep reinforcement learning agents that learn by interacting with each individual community space and make decisions based on the state of the environment in each space and other contextual information. We evaluate our framework on two real-world testbeds in Orange County, USA and NTHU, Taiwan. The evaluation results show that community spaces using REAM can achieve > 90% monitoring accuracy while incurring ~50% lower resource consumption costs compared to existing static monitoring and machine-learning-driven approaches.
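To make the adaptive workflow selection concrete, the following is a minimal sketch of reinforcement-learning-driven workflow selection in Python. It uses tabular Q-learning rather than the deep RL agents described in the paper, and the workflow catalogue, context states, and reward weights are all hypothetical placeholders rather than details taken from REAM.

```python
import random
from collections import defaultdict

# Hypothetical workflow catalogue: each entry bundles devices and operators,
# with an assumed resource cost and expected quality of information.
WORKFLOWS = {
    0: {"cost": 0.2, "quality": 0.70},   # sparse sensing, lightweight operators
    1: {"cost": 0.5, "quality": 0.85},   # moderate sensing
    2: {"cost": 0.9, "quality": 0.97},   # dense sensing, heavy analytics
}

def reward(quality, cost, w_quality=1.0, w_cost=0.5):
    """Trade off quality of information against edge-resource consumption."""
    return w_quality * quality - w_cost * cost

class WorkflowAgent:
    """Tabular Q-learning over discretized context states (e.g. occupancy level,
    time of day); REAM itself uses deep RL agents, so this is only illustrative."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions

    def act(self, state):
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[state][a])

    def update(self, state, action, r, next_state):
        td_target = r + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# Toy interaction loop: the "environment" here is a stub for a community space.
agent = WorkflowAgent(n_actions=len(WORKFLOWS))
state = ("low_occupancy", "night")
for _ in range(1000):
    action = agent.act(state)
    wf = WORKFLOWS[action]
    # In a real deployment, quality and cost would be observed from the edge server.
    r = reward(wf["quality"], wf["cost"])
    next_state = random.choice([("low_occupancy", "night"), ("high_occupancy", "day")])
    agent.update(state, action, r, next_state)
    state = next_state
```

The reward mirrors the trade-off stated in the abstract: it rewards quality of information and penalizes edge-resource consumption; in a deployment, both quantities would be measured from the running operators rather than looked up from a fixed table.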
Award ID(s):
1952247
PAR ID:
10311281
Author(s) / Creator(s):
Date Published:
Journal Name:
2020 IEEE International Conference on Smart Computing (SMARTCOMP)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Deep neural networks (DNNs) are being applied in areas such as computer vision, autonomous vehicles, and healthcare. However, DNNs are notorious for their high computational complexity and cannot be executed efficiently on resource-constrained Internet of Things (IoT) devices. Various solutions have been proposed to handle the high computational complexity of DNNs. Offloading computing tasks of DNNs from IoT devices to cloud/edge servers is one of the most popular and promising solutions. While such remote DNN services provided by servers largely reduce computing tasks on IoT devices, it is challenging for IoT devices to inspect whether the quality of the service meets their service-level objectives (SLOs). In this paper, we address this problem and propose a novel approach named QIS (quality inspection sampling) that can efficiently inspect the quality of remote DNN services for IoT devices. To realize QIS, we design a new ID-generation method to generate data (IDs) that can identify the serving DNN models on edge servers. QIS inserts the IDs into the input data stream and implements sampling inspection on SLO violations. The experimental results show that QIS can reliably inspect, with a nearly 100% success rate, the service quality of remote DNN services when the SLO level is 99.9% or lower, at a cost of only up to 0.5% overhead.
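The ID-generation method is specific to that paper; the sketch below only illustrates the sampling-inspection idea with a stubbed remote service, a placeholder ID generator, and made-up sample counts.

```python
import random

def remote_dnn_service(inputs):
    """Stub for the edge server's DNN endpoint; replace with a real RPC/HTTP call.
    It returns the right label ~99.8% of the time to simulate rare SLO violations."""
    return [exp if random.random() < 0.998 else "wrong-label" for (_, exp) in inputs]

def make_id_samples(n):
    """Placeholder for ID generation: inputs whose correct output under the
    agreed-upon model is known to the IoT device in advance."""
    return [(f"id-input-{i}", f"id-label-{i}") for i in range(n)]

def inspect_service(stream, sample_rate=0.05, slo_accuracy=0.999):
    """Mix ID samples into the outgoing stream and check only their results."""
    ids = make_id_samples(int(len(stream) * sample_rate))
    batch = [(x, None) for x in stream] + ids
    random.shuffle(batch)
    results = remote_dnn_service(batch)
    checked = [(inp, out) for inp, out in zip(batch, results) if inp[1] is not None]
    correct = sum(1 for (_, expected), out in checked if out == expected)
    observed = correct / max(len(checked), 1)
    return observed >= slo_accuracy, observed

ok, acc = inspect_service([f"frame-{i}" for i in range(10_000)])
print(f"SLO met: {ok}, observed accuracy on ID samples: {acc:.4f}")
```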
  2. Recent advances in Internet of Things (IoT) technologies have sparked significant interest in developing learning-based sensing applications on embedded edge devices. These efforts, however, are challenged by the complexity of adapting to unforeseen conditions in an open-world environment, mainly because the intensive computational and energy demands exceed the capabilities of edge devices. In this article, we propose OpenSense, an open-world time-series sensing framework for making inferences from time-series sensor data and achieving incremental learning on an embedded edge device with limited resources. The proposed framework is able to achieve two essential tasks, inference and incremental learning, eliminating the necessity for powerful cloud servers. In addition, to secure enough time for incremental learning and reduce energy consumption, we need to schedule sensing activities without missing any events in the environment. Therefore, we propose two dynamic sensor scheduling techniques: 1) a class-level period assignment scheduler that finds an appropriate sensing period for each inferred class and 2) a Q-learning-based scheduler that dynamically determines the sensing interval for each classification moment by learning the patterns of event classes. With this framework, we discuss the design choices made to ensure satisfactory learning performance and efficient resource usage. Experimental results demonstrate the ability of the system to incrementally adapt to unforeseen conditions and to schedule sensing efficiently on a resource-constrained device.
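As a rough illustration of the first scheduling technique, here is a minimal class-level period assignment sketch. The class names, per-class periods, and stub classifier are all hypothetical; the actual assignment policy in OpenSense is learned from data.

```python
import time

# Hypothetical per-class sensing periods in seconds: quiet classes are sensed far
# less often than event-rich ones, freeing time for incremental learning.
CLASS_PERIODS = {"silence": 30.0, "speech": 5.0, "alarm": 1.0}
DEFAULT_PERIOD = 10.0

def sense_window():
    """Stub that would read one window of time-series samples from the sensor."""
    return [0.0] * 160

def classify(window):
    """Stub for the on-device time-series classifier."""
    return "silence"

def class_level_scheduler(steps=3, time_scale=0.01):
    """Class-level period assignment: the inferred class of the current window
    determines how long the device sleeps before the next sensing activity."""
    for _ in range(steps):
        label = classify(sense_window())
        period = CLASS_PERIODS.get(label, DEFAULT_PERIOD)
        print(f"inferred '{label}', next sensing window in {period:.0f}s")
        time.sleep(period * time_scale)  # scaled down so the sketch runs quickly

class_level_scheduler()
```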
  3. IoT devices are increasingly the source of data for machine learning (ML) applications running on edge servers. Data transmissions from devices to servers are often over local wireless networks whose bandwidth is not just limited but, more importantly, variable. Furthermore, in cyber-physical systems interacting with the physical environment, image offloading is also commonly subject to timing constraints. It is, therefore, important to develop an adaptive approach that maximizes the inference performance of ML applications under timing constraints and the resource constraints of IoT devices. In this paper, we use image classification as our target application and propose progressive neural compression (PNC) as an efficient solution to this problem. Although neural compression has been used to compress images for different ML applications, existing solutions often produce fixed-size outputs that are unsuitable for timing-constrained offloading over variable bandwidth. To address this limitation, we train a multi-objective rateless autoencoder that optimizes for multiple compression rates via stochastic taildrop to create a compression solution that produces features ordered according to their importance to inference performance. Features are then transmitted in that order based on available bandwidth, with classification ultimately performed using the (sub)set of features received by the deadline. We demonstrate the benefits of PNC over state-of-the-art neural compression approaches and traditional compression methods on a testbed comprising an IoT device and an edge server connected over a wireless network with varying bandwidth. 
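The core idea of transmitting importance-ordered features until a deadline can be sketched as below. The encoder, feature dimensions, bandwidth, and deadline values are all assumptions for illustration; PNC's actual rateless autoencoder is learned end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image):
    """Stand-in for the rateless autoencoder's encoder: returns a feature vector
    whose leading entries are assumed to matter most for classification."""
    return rng.standard_normal(64)

def stochastic_taildrop(features, rng):
    """Training-time idea from the abstract: keep a random-length prefix so the
    model learns to classify from any truncation point."""
    k = rng.integers(1, len(features) + 1)
    kept = features.copy()
    kept[k:] = 0.0
    return kept

def transmit_until_deadline(features, bytes_per_feature=4, bandwidth_bps=50_000,
                            deadline_s=0.005):
    """Send features in importance order; stop when the deadline budget is spent."""
    budget_bytes = bandwidth_bps / 8 * deadline_s
    n_sent = min(len(features), int(budget_bytes // bytes_per_feature))
    received = np.zeros_like(features)
    received[:n_sent] = features[:n_sent]
    return received, n_sent

features = encode(image=None)
received, n = transmit_until_deadline(features)
print(f"sent {n}/{len(features)} features within the deadline")
```

The classifier at the edge server would then run on `received`, which is simply the transmitted prefix with the untransmitted tail zero-filled, matching the truncations seen during taildrop training.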
  4. Remote health monitoring is a powerful tool to provide preventive care and early intervention for at-risk populations. Such monitoring systems are becoming available nowadays due to recent advancements in Internet-of-Things (IoT) paradigms, enabling ubiquitous monitoring. These systems require a high level of quality in attributes such as availability and accuracy because of the critical conditions of the monitored patients. Deep learning methods are very promising for such health applications, where a considerable amount of data is available, to obtain satisfactory performance. These methods are well suited to cloud servers in a centralized cloud-based IoT system. However, the response time and availability of these systems highly depend on the quality of the Internet connection. On the other hand, smart gateway devices are unable to implement deep learning methods (such as training models) due to their limited computational capacities. In our previous work, we proposed a hierarchical computing architecture (HiCH), where both edge and cloud computing resources were efficiently exploited, allocating the heavy tasks of a conventional machine learning method to the cloud servers and outsourcing the hypothesis function to the edge. Due to this local decision making, the availability of the system was greatly improved. In this paper, we investigate the feasibility of deploying a Convolutional Neural Network (CNN) based classification model, as an example of deep learning methods, in this architecture. The system therefore benefits from the features of both HiCH and the CNN, ensuring high availability and accuracy. We demonstrate real-time health monitoring in a case study on ECG classification and evaluate the performance of the system in terms of response time and accuracy.
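A very rough sketch of this edge/cloud split is shown below: the cloud is assumed to train the CNN, while the gateway only serves the learned hypothesis locally, so classification keeps working when the Internet link is down. The class names, model stub, and sync logic are hypothetical and stand in for the actual HiCH components.

```python
import random

class EdgeGateway:
    """Sketch of a HiCH-style gateway: heavy training stays in the cloud, the
    gateway only runs the downloaded hypothesis so decisions survive outages."""
    def __init__(self):
        self.model_version = 0
        self.classify = lambda beat: "normal"   # placeholder hypothesis

    def try_sync_with_cloud(self, cloud_available):
        """Pull a newer trained model when the link is up; otherwise keep serving."""
        if cloud_available:
            self.model_version += 1
            # In a real system the downloaded CNN weights would replace this stub.
            self.classify = lambda beat: "normal" if sum(beat) < 4 else "arrhythmia"

    def monitor(self, ecg_windows, cloud_available):
        self.try_sync_with_cloud(cloud_available)
        return [self.classify(w) for w in ecg_windows]

gateway = EdgeGateway()
windows = [[random.random() for _ in range(8)] for _ in range(3)]
print(gateway.monitor(windows, cloud_available=False))  # still responds offline
print(gateway.monitor(windows, cloud_available=True))   # refreshed model
```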
  5. With the proliferation of Internet of Things (IoT) devices, real-time stream processing at the edge of the network has gained significant attention. However, edge stream processing systems face substantial challenges due to the heterogeneity and constraints of computational and network resources and the intricacies of multi-tenant application hosting. An optimized placement strategy for the edge application topology becomes crucial to leverage the advantages offered by edge computing and enhance the throughput and end-to-end latency of data streams. This paper presents Beaver, a resource scheduling framework designed to efficiently deploy stream processing topologies across distributed edge nodes. At its core is a novel scheduler that integrates graph partitioning of application topologies with a two-sided matching technique to optimize the placement of stream operators. Beaver aims to achieve optimal performance by minimizing bottlenecks in the network, memory, and CPU resources at the edge. We implemented a prototype of Beaver using Apache Storm and the Kubernetes orchestration engine and evaluated its performance using an open-source real-time IoT benchmark (RIoTBench). Compared to state-of-the-art techniques, experimental evaluations demonstrate at least a 1.6× improvement in the number of tuples processed within a one-second deadline under varying network delay and bandwidth scenarios.
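To give a feel for the two-sided matching step, here is a small deferred-acceptance sketch that assigns operators to edge nodes. The preference lists, operator names, and one-operator-per-node capacity are illustrative assumptions; Beaver derives its preferences from topology partitions and network/CPU/memory profiles.

```python
# Hypothetical preference lists for three stream operators and three edge nodes.
operator_prefs = {
    "parse":  ["node-a", "node-b", "node-c"],
    "filter": ["node-a", "node-c", "node-b"],
    "join":   ["node-b", "node-a", "node-c"],
}
node_prefs = {
    "node-a": ["filter", "parse", "join"],
    "node-b": ["join", "filter", "parse"],
    "node-c": ["parse", "join", "filter"],
}

def two_sided_matching(operator_prefs, node_prefs):
    """Deferred-acceptance matching: operators propose to nodes in preference
    order, and each node keeps its most preferred proposer (capacity of one)."""
    rank = {n: {op: i for i, op in enumerate(p)} for n, p in node_prefs.items()}
    next_choice = {op: 0 for op in operator_prefs}
    placement = {}                      # node -> operator
    free = list(operator_prefs)
    while free:
        op = free.pop()
        node = operator_prefs[op][next_choice[op]]
        next_choice[op] += 1
        current = placement.get(node)
        if current is None:
            placement[node] = op
        elif rank[node][op] < rank[node][current]:
            placement[node] = op
            free.append(current)        # displaced operator proposes again
        else:
            free.append(op)             # rejected; try the next preferred node
    return {op: node for node, op in placement.items()}

print(two_sided_matching(operator_prefs, node_prefs))
```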