Title: Event-Driven Deep Learning for Edge Intelligence (EDL-EI) †
Edge intelligence (EI) has attracted considerable interest because it can reduce latency, increase efficiency, and preserve privacy. More significantly, as the Internet of Things (IoT) has proliferated, billions of portable and embedded devices have been interconnected, producing enormous volumes of data at the network edge. Thus, there is an immediate need to push AI (artificial intelligence) advances into edge networks to realize the full promise of edge data analytics. EI solutions have supported digital technology workloads and applications from the infrastructure level to edge networks; however, many challenges remain because of the heterogeneity of computational capabilities and the dispersion of information sources. We propose a novel event-driven deep-learning framework, called EDL-EI (event-driven deep learning for edge intelligence), built around a novel event model that defines events through correlation analysis across multiple sensors in real-world settings; the framework incorporates multi-sensor fusion techniques, a method for transforming sensor streams into images, and lightweight 2-dimensional convolutional neural network (CNN) models. To demonstrate the feasibility of the EDL-EI framework, we present an IoT-based prototype system that we developed with multiple sensors and edge devices. To verify the proposed framework, we conducted a case study of air-quality scenarios based on benchmark data provided by the USA Environmental Protection Agency for the most polluted cities in South Korea and China. Two deep-learning models achieved predictive accuracies of 97.65% and 97.19% on the cities' air-quality patterns. Furthermore, air-quality changes from 2019 to 2020 were analyzed to assess the effects of the COVID-19 pandemic lockdowns.
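The abstract does not spell out the sensor-to-image transformation or the CNN architecture; the following is a minimal sketch of one plausible pipeline, assuming a fixed-length window of multi-sensor readings is normalized into a single-channel 2-D image and classified by a small CNN. The window size, layer sizes, and six-class output are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code): turn a window of multi-sensor
# readings into a 2-D "image" and classify it with a lightweight CNN.
import numpy as np
import torch
import torch.nn as nn

def window_to_image(window: np.ndarray) -> torch.Tensor:
    """Normalize a (num_sensors, window_len) array per sensor to [0, 1]
    and return it as a single-channel image tensor (1, H, W)."""
    lo = window.min(axis=1, keepdims=True)
    hi = window.max(axis=1, keepdims=True)
    img = (window - lo) / (hi - lo + 1e-8)
    return torch.from_numpy(img).float().unsqueeze(0)

class LightweightCNN(nn.Module):
    """Small 2-D CNN in the spirit of the 'lightweight' models described."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage: 8 sensors sampled over a 64-step window -> class scores.
window = np.random.rand(8, 64)             # placeholder for fused sensor streams
model = LightweightCNN(num_classes=6)      # six air-quality categories (assumption)
scores = model(window_to_image(window).unsqueeze(0))  # add batch dimension
```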
Authors:
; ; ;
Award ID(s):
1951971 1935076 1747751
Publication Date:
NSF-PAR ID:
10294395
Journal Name:
Sensors
Volume:
21
Issue:
18
Page Range or eLocation-ID:
6023
ISSN:
1424-8220
Sponsoring Org:
National Science Foundation
More Like this
  1. The past decade has witnessed the rising dominance of deep learning and artificial intelligence in a wide range of applications. In particular, the ocean of wireless smartphones and IoT devices continues to fuel the tremendous growth of edge/cloud-based machine learning (ML) systems, including image/speech recognition and classification. To overcome the infrastructural barrier of limited network bandwidth in cloud ML, existing solutions have mainly relied on traditional compression codecs such as JPEG that were historically engineered for human end-users instead of ML algorithms. Traditional codecs do not necessarily preserve features important to ML algorithms under limited bandwidth, leading to potentially inferior performance. This work investigates application-driven optimization of programmable commercial codec settings for networked learning tasks such as image classification. Based on the foundation of variational autoencoders (VAEs), we develop an end-to-end networked learning framework by jointly optimizing the codec and classifier without reconstructing images for a given data rate (bandwidth). Compared with the standard JPEG codec, the proposed VAE joint compression and classification framework achieves classification accuracy improvements of over 10% and 4%, respectively, for the CIFAR-10 and ImageNet-1k datasets at a data rate of 0.8 bpp. Our proposed VAE-based models show 65%–99% reductions in encoder size, 1.5×–13.1× improvements in inference speed, and 25%–99% savings in power compared to baseline models. We further show that a simple decoder can reconstruct images with sufficient quality without compromising classification accuracy.
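The abstract describes jointly optimizing a VAE-style encoder and a classifier without reconstructing images. The sketch below only illustrates that idea under simplifying assumptions: a toy encoder, a KL term standing in for the rate/bandwidth budget, and CIFAR-10-sized inputs. It is not the paper's model or training procedure.

```python
# Minimal sketch (assumptions, not the paper's code): train a VAE-style encoder
# and a classifier on the latent code jointly, skipping image reconstruction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.mu(h), self.logvar(h)

def joint_loss(logits, labels, mu, logvar, rate_weight=0.1):
    """Classification loss plus a KL 'rate' term standing in for the bandwidth budget."""
    cls = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return cls + rate_weight * kl

encoder, classifier = Encoder(), nn.Linear(64, 10)        # e.g., CIFAR-10 classes
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
mu, logvar = encoder(x)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
loss = joint_loss(classifier(z), y, mu, logvar)
loss.backward()
```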
  2. Human activity recognition (HAR) from wearable sensor data has become ubiquitous due to the widespread proliferation of IoT and wearable devices. However, recognizing human activity in heterogeneous environments, for example, with sensors of different makes and models, across different persons and their on-body sensor placements, introduces wide-ranging discrepancies in the data distributions and therefore leads to an increased error margin. Transductive transfer learning techniques such as domain adaptation have been quite successful in mitigating the domain discrepancies between the source and target domain distributions without costly target domain data annotations. However, little exploration has been done when multiple distinct source domains are present and the optimum mapping to the target domain from each source is not apparent. In this paper, we propose a deep Multi-Source Adversarial Domain Adaptation (MSADA) framework that opportunistically helps select the most relevant feature representations from multiple source domains and establishes such mappings to the target domain by learning perplexity scores. We showcase that the learned mappings can actually reflect our prior knowledge of the semantic relationships between the domains, indicating that MSADA can be employed as a powerful tool for exploratory activity data analysis. We empirically demonstrate that our proposed multi-source domain adaptation approach achieves a 2% improvement on the OPPORTUNITY dataset (cross-person heterogeneity, 4 ADLs) and a 13% improvement on the DSADS dataset (cross-position heterogeneity, 10 ADLs and sports activities).
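The abstract does not define how the perplexity scores are computed or used; the sketch below only illustrates the general idea of weighting predictions from multiple source-domain classifiers with learned per-source relevance scores. The architectures, feature dimensions, and the adversarial training itself are omitted and assumed.

```python
# Minimal sketch (assumptions): combine several source-domain classifiers with
# learned per-source relevance weights, in the spirit of multi-source weighting.
import torch
import torch.nn as nn

class MultiSourceEnsemble(nn.Module):
    def __init__(self, source_models, num_sources):
        super().__init__()
        self.sources = nn.ModuleList(source_models)
        # Learnable relevance scores, normalized with softmax at inference time.
        self.scores = nn.Parameter(torch.zeros(num_sources))

    def forward(self, x):
        weights = torch.softmax(self.scores, dim=0)           # per-source weights
        preds = torch.stack([m(x) for m in self.sources])     # (S, batch, classes)
        return (weights[:, None, None] * preds).sum(dim=0)    # weighted combination

# Usage: three source-domain classifiers over 64-dim sensor features, 10 activities.
sources = [nn.Linear(64, 10) for _ in range(3)]
model = MultiSourceEnsemble(sources, num_sources=3)
target_logits = model(torch.randn(16, 64))
```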
  3. Mobile edge computing pushes computationally-intensive services closer to the user to provide reduced delay due to physical proximity. This has led many to consider deploying deep learning models on the edge, commonly known as edge intelligence (EI). EI services can have many model implementations that provide different QoS. For instance, one model can perform inference faster than another (thus reducing latency) while achieving lower accuracy when evaluated. In this paper, we study joint service placement and model scheduling of EI services with the goal of maximizing Quality-of-Service (QoS) for end users, where EI services have multiple implementations to serve user requests, each with varying costs and QoS benefits. We cast the problem as an integer linear program and prove that it is NP-hard. We then prove the objective is equivalent to maximizing a monotone increasing, submodular set function and thus can be solved greedily while maintaining a (1 − 1/e)-approximation guarantee. We then propose two greedy algorithms: one that theoretically guarantees this approximation and another that empirically matches its performance with greater efficiency. Finally, we thoroughly evaluate the proposed algorithm for making placement and scheduling decisions in both synthetic and real-world scenarios against the optimal solution and several baselines. In the real-world case, we consider real machine learning models using the ImageNet 2012 dataset for requests. Our numerical experiments empirically show that our more efficient greedy algorithm is able to approximate the optimal solution with a 0.904 approximation ratio on average, while the next closest baseline achieves a 0.607 approximation ratio on average.
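The key algorithmic ingredient named in the abstract is greedy maximization of a monotone submodular set function with a (1 − 1/e) guarantee. The sketch below shows the textbook greedy routine under a simple cardinality budget; it is not the paper's full placement-and-scheduling algorithm, which also involves placement costs and multiple model implementations.

```python
# Minimal sketch (assumptions): standard greedy maximization of a monotone
# submodular set function under a cardinality budget.
from typing import Callable, Hashable, Iterable, Set

def greedy_submodular(candidates: Iterable[Hashable],
                      value: Callable[[Set[Hashable]], float],
                      budget: int) -> Set[Hashable]:
    """Repeatedly add the candidate with the largest marginal gain."""
    chosen: Set[Hashable] = set()
    remaining = set(candidates)
    while remaining and len(chosen) < budget:
        best = max(remaining, key=lambda c: value(chosen | {c}) - value(chosen))
        if value(chosen | {best}) - value(chosen) <= 0:
            break  # no remaining candidate improves the objective
        chosen.add(best)
        remaining.remove(best)
    return chosen

# Toy usage: a coverage-style objective, which is monotone and submodular.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
value = lambda s: len(set().union(*(coverage[c] for c in s))) if s else 0
print(greedy_submodular(coverage.keys(), value, budget=2))  # e.g., {'a', 'c'}
```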
  4. The management of drinking water quality is critical to public health and can benefit from techniques and technologies that support near real-time forecasting of lake and reservoir conditions. The cyberinfrastructure (CI) needed to support forecasting has to overcome multiple challenges: 1) deploying sensors at the reservoir requires the CI to extend to the network's edge and accommodate devices with constrained network connectivity and power; 2) different lakes need different sensor modalities, deployments, and calibrations, so the CI needs to be flexible and customizable to accommodate various deployments; and 3) the CI must be accessible and usable by various stakeholders (water managers, reservoir operators, and researchers) without barriers to entry. This paper describes the CI underlying FLARE (Forecasting Lake And Reservoir Ecosystems), a novel system co-designed in an interdisciplinary manner between CI and domain scientists to address the above challenges. FLARE integrates R packages that implement the core numerical forecasting (including lake process modeling and data assimilation) with containers, overlay virtual networks, object storage, versioned storage, and event-driven Function-as-a-Service (FaaS) serverless execution. It is a flexible forecasting system that can be deployed in different modalities, including the Manual Mode suitable for end-users' personal computers and the Workflow Mode ideal for cloud deployment. The paper reports on experimental data and lessons learned from the operational deployment of FLARE in a drinking water supply (Falling Creek Reservoir in Vinton, Virginia, USA). Experiments with a FLARE deployment quantify its edge-to-cloud virtual network performance and serverless execution in OpenWhisk deployments on both XSEDE-Jetstream and the IBM Cloud Functions FaaS system.
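As an illustration of the event-driven FaaS piece only, here is a minimal OpenWhisk-style Python action that could be triggered when new sensor data lands in object storage and then kicks off a forecast run. The function names, parameters, and defaults are hypothetical placeholders, not FLARE's actual code (FLARE's forecasting core is implemented in R).

```python
# Minimal sketch (assumptions, not FLARE's code): an OpenWhisk-style Python
# action invoked when new sensor data arrives; it hands the object location
# to a forecast-launching routine.
def run_forecast(bucket: str, key: str) -> dict:
    """Hypothetical placeholder for starting the containerized forecasting
    workflow (data assimilation + lake process model) described above."""
    return {"bucket": bucket, "key": key, "status": "forecast started"}

def main(args: dict) -> dict:
    # OpenWhisk Python actions receive event parameters as a dict and
    # must return a JSON-serializable dict.
    bucket = args.get("bucket", "sensor-data")   # hypothetical defaults
    key = args.get("key", "latest.csv")
    return run_forecast(bucket, key)
```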
  5. An increasing number of community spaces are being instrumented with heterogeneous IoT sensors and actuators that enable continuous monitoring of the surrounding environments. Data streams generated from the devices are analyzed using a range of analytics operators and transformed into meaningful information for community monitoring applications. To ensure high-quality results, timely monitoring, and application reliability, we argue that these operators must be hosted at edge servers located in close proximity to the community space. In this paper, we present a Resource Efficient Adaptive Monitoring (REAM) framework at the edge that adaptively selects workflows of devices and operators to maintain adequate quality of information for the application at hand while judiciously consuming the limited resources available on edge servers. IoT deployments in community spaces are in a state of continuous flux dictated by the nature of activities and events within the space. Since these spaces are complex and change dynamically, and events can take place under different environmental contexts, developing a one-size-fits-all model that works for all types of spaces is infeasible. The REAM framework utilizes deep reinforcement learning agents that learn by interacting with each individual community space and make decisions based on the state of the environment in that space and other contextual information. We evaluate our framework on two real-world testbeds in Orange County, USA and at NTHU, Taiwan. The evaluation results show that community spaces using REAM can achieve >90% monitoring accuracy while incurring ~50% lower resource consumption costs compared to existing static monitoring and machine-learning-driven approaches.
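REAM is described as using deep reinforcement learning agents; the sketch below substitutes a much simpler tabular epsilon-greedy Q-learning agent to show the select-a-workflow, observe-reward, update loop. The states, workflow names, and reward shaping are illustrative placeholders, not details from the paper.

```python
# Minimal sketch (assumptions): an epsilon-greedy tabular Q-learning agent that
# picks a monitoring workflow per context, trading information quality against
# resource cost, in the spirit of the adaptive selection described above.
import random
from collections import defaultdict

class WorkflowAgent:
    def __init__(self, workflows, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.workflows = workflows
        self.q = defaultdict(float)          # (state, workflow) -> estimated value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def select(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.workflows)                       # explore
        return max(self.workflows, key=lambda w: self.q[(state, w)])   # exploit

    def update(self, state, workflow, reward, next_state):
        best_next = max(self.q[(next_state, w)] for w in self.workflows)
        target = reward + self.gamma * best_next
        self.q[(state, workflow)] += self.alpha * (target - self.q[(state, workflow)])

# Toy usage: reward = information quality minus a resource-cost penalty (made up).
agent = WorkflowAgent(workflows=["camera_only", "camera_audio", "all_sensors"])
state = "busy_evening"                        # hypothetical context label
chosen = agent.select(state)
agent.update(state, chosen, reward=0.9 - 0.3, next_state="quiet_night")
```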