With the explosion of intelligent and latency-sensitive applications such as AR/VR, remote health, and autonomous driving, mobile edge computing (MEC) has emerged as a promising solution to mitigate the high end-to-end latency of mobile cloud computing (MCC). However, edge servers have significantly less computing capability than the resource-rich central cloud. Therefore, a collaborative cloud-edge-local offloading scheme is necessary to accommodate both computationally intensive and latency-sensitive mobile applications. The coexistence of the central cloud, edge servers, and the mobile device (MD), forming a multi-tiered heterogeneous architecture, makes optimal application deployment very challenging, especially for multi-component applications with component dependencies. This paper addresses the problem of energy- and latency-efficient application offloading in a collaborative cloud-edge-local environment. We formulate a multi-objective mixed integer linear program (MILP) with the goal of minimizing the system-wide energy consumption and application end-to-end latency. An approximation algorithm based on LP relaxation and rounding is proposed to address the time complexity. We demonstrate that our approach outperforms existing strategies in terms of application request acceptance ratio, latency, and system energy consumption.
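The rounding phase of an LP-relaxation-and-rounding approach can be sketched as follows. This is a minimal illustration only, assuming a fractional component-to-tier assignment has already been obtained from the relaxed program; the `frac` data shape, tier names, and capacity rule are hypothetical, not taken from the paper:

```python
def round_placement(frac, edge_capacity):
    """Round a fractional component-to-tier assignment to an integral one.

    frac[i] is a dict mapping each tier ("local", "edge", "cloud") to the
    component's fractional assignment from the relaxed LP. Each component
    is placed on its largest-fraction tier; components that would overflow
    the edge tier spill over to the capacity-rich cloud.
    """
    TIERS = ("local", "edge", "cloud")
    placement = []
    edge_load = 0
    for f in frac:
        tier = max(TIERS, key=lambda t: f[t])
        if tier == "edge" and edge_load >= edge_capacity:
            tier = "cloud"  # edge is full: fall back to the central cloud
        if tier == "edge":
            edge_load += 1
        placement.append(tier)
    return placement
```

The cloud fallback keeps the rounded solution feasible when several components' fractional mass concentrates on a constrained edge server.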
Risk-Aware Application Placement in Mobile Edge Computing Systems: A Learning-based Optimization Approach
In this paper, we address the problem of application placement in MEC systems that takes into account the risk of exceeding the energy budget of the edge servers. We formulate the problem as a chance-constrained program, where the objective is to maximize the total quality of service in the system while keeping the expected risk of exceeding the edge servers' energy budget within an acceptable threshold. We develop a learning-based method to solve the problem that requires very little execution time even for large problem instances. We evaluate the performance of the proposed method by conducting an extensive experimental analysis.
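A chance constraint of this kind can be checked empirically by Monte Carlo sampling. The sketch below is a toy stand-in, not the paper's method: the uniform jitter model, the load numbers, and the function names are all assumptions for illustration:

```python
import random

def risk_of_overload(loads_w, budget_w, n_samples=10000, jitter=0.2, seed=0):
    """Monte Carlo estimate of Pr[total energy draw exceeds the budget]
    when each placed application's draw fluctuates by +/- jitter
    (uniform). Returns the fraction of samples that exceed budget_w."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_samples):
        total = sum(w * (1 + rng.uniform(-jitter, jitter)) for w in loads_w)
        if total > budget_w:
            exceed += 1
    return exceed / n_samples

def admissible(loads_w, budget_w, threshold):
    """A placement satisfies the chance constraint when its estimated
    overload risk stays within the acceptable threshold."""
    return risk_of_overload(loads_w, budget_w) <= threshold
```

A placement whose estimated risk exceeds the threshold would be rejected or revised, which is the feasibility test the chance constraint encodes.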
- Award ID(s): 1724227
- PAR ID: 10288187
- Date Published:
- Journal Name: 2020 IEEE International Conference on Edge Computing (EDGE)
- Page Range / eLocation ID: 83 to 90
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In the resource-rich environment of data centers, most failures can quickly fail over to redundant resources. In contrast, a failure in an edge infrastructure with limited resources might require maintenance personnel to drive to the location in order to fix the problem. The operational cost of these "truck rolls" to locations at the edge infrastructure competes with the operational cost incurred by the extra space and power needed for redundant resources at the edge. Computational storage devices with network interfaces can act as network-attached storage servers and offer a new design point for storage systems at the edge. In this paper we hypothesize that a system consisting of a larger number of such small "embedded" storage nodes provides higher availability due to a larger number of failure domains, while also saving operational cost in terms of space and power. As evidence for our hypothesis, we compared the possibility of data loss between two different types of storage systems: one constructed with general-purpose servers, and the other constructed with embedded storage nodes. Our results show that the storage system constructed with general-purpose servers has a 7 to 20 times higher risk of losing data than the storage system constructed with embedded storage devices. We also compare the two alternatives in terms of power and space using the Media-Based Work Unit (MBWU) that we developed in an earlier paper as a reference point.
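The intuition that more, smaller failure domains lower data-loss risk can be illustrated with a back-of-the-envelope replication model. The per-domain failure probabilities below are made up purely for illustration and are not from the paper; the model assumes independent failures and r-way replication across distinct domains:

```python
def p_loss(p_domain_fail, replicas=3):
    """Probability a replicated object is lost: every one of its replica
    failure domains must fail before repair completes (failures assumed
    independent), so the loss probability is p^r."""
    return p_domain_fail ** replicas

# Illustrative (made-up) per-domain failure probabilities: a complex
# general-purpose server vs. a simple embedded storage node.
server_risk = p_loss(0.05)
embedded_risk = p_loss(0.02)
ratio = server_risk / embedded_risk  # 2.5**3 = 15.625x higher risk
```

Even a modest per-domain reliability gap compounds cubically under 3-way replication, which is consistent in spirit with the 7-to-20x range the paper reports.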
-
Supporting smooth movement of mobile clients is important when offloading services on an edge computing platform. Interruption-free client mobility demands seamless migration of the offloading service to nearby edge servers. However, fast migration of offloading services across edge servers in a WAN environment poses significant challenges to the handoff service design. In this paper, we present a novel service handoff system which seamlessly migrates offloading services to the nearest edge server while the mobile client is moving. Service handoff is achieved via container migration. We identify an important performance problem during Docker container migration. Based on our systematic study of container layer management and image stacking, we propose a migration method which leverages the layered storage system to reduce file system synchronization overhead, without depending on a distributed file system. We implement a prototype system and conduct experiments using real-world product applications. Evaluation results reveal that, compared to state-of-the-art service handoff systems designed for edge computing platforms, our system reduces total service handoff time by 80% (56%) at a network bandwidth of 5 Mbps (20 Mbps).
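The core idea of leveraging layered storage can be sketched as a set difference over image layers: base layers already cached at the target server need not move, so only the missing layers (typically just the thin writable layer holding runtime state) are synchronized. The layer names below are hypothetical examples, not the paper's actual mechanism:

```python
def layers_to_transfer(container_layers, cached_at_target):
    """Layers that must be synchronized during handoff: any layer of the
    running container not already cached at the target edge server.
    With common base images cached everywhere, this is usually only
    the container's writable layer."""
    return [layer for layer in container_layers
            if layer not in cached_at_target]
```

Transferring one thin layer instead of the whole root filesystem is what cuts the synchronization overhead during migration.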
-
Federated learning at edge systems not only mitigates privacy concerns by keeping data localized but also leverages edge computing resources to enable real-time AI inference and decision-making. In a blockchain-based federated learning framework over edge clouds, edge servers acting as clients can contribute private data or computing resources to the overall training or mining task for secure model aggregation. To overcome the impractical assumption that edge servers will voluntarily join training or mining, it is crucial to design an incentive mechanism that motivates edge servers to achieve optimal training and mining outcomes. In this paper, we investigate the incentive mechanism design for a semi-asynchronous blockchain-based federated edge learning system. We model the resource pricing mechanism among edge servers and task publishers as a Stackelberg game and prove the existence and uniqueness of a Nash equilibrium in such a game. We then propose an iterative algorithm based on the Alternating Direction Method of Multipliers (ADMM) to achieve the optimal strategies for each participating edge server. Finally, our simulation results verify the convergence and efficiency of our proposed scheme.
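The leader-follower structure of a Stackelberg pricing game can be illustrated with a minimal quadratic-cost example. The utility functions below are textbook stand-ins chosen for closed-form solvability, not the paper's actual model:

```python
def follower_best_response(price, cost_coeff):
    """Edge server (follower) picks its resource contribution x to
    maximize  price*x - cost_coeff*x**2.  Setting the derivative to
    zero gives the unique best response x* = price / (2*cost_coeff)."""
    return price / (2 * cost_coeff)

def leader_optimal_price(unit_value):
    """Task publisher (leader) anticipates the follower's response and
    maximizes (unit_value - price) * x*(price); substituting x* gives
    utility (unit_value - price)*price / (2*cost_coeff), maximized at
    price = unit_value / 2 (the cost coefficient cancels out)."""
    return unit_value / 2
```

Because each player's optimum is unique given the other's strategy, backward induction yields a unique equilibrium in this toy game, mirroring the existence-and-uniqueness result the paper proves for its richer model.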
-
Recent advances in Internet of Things (IoT) technologies have sparked significant interest toward developing learning-based sensing applications on embedded edge devices. These efforts, however, are being challenged by the complexities of adapting to unforeseen conditions in an open-world environment, mainly due to intensive computational and energy demands exceeding the capabilities of edge devices. In this article, we propose OpenSense, an open-world time-series sensing framework for making inferences from time-series sensor data and achieving incremental learning on an embedded edge device with limited resources. The proposed framework is able to achieve two essential tasks, inference and incremental learning, eliminating the necessity for powerful cloud servers. In addition, to secure enough time for incremental learning and reduce energy consumption, we need to schedule sensing activities without missing any events in the environment. Therefore, we propose two dynamic sensor scheduling techniques: 1) a class-level period assignment scheduler that finds an appropriate sensing period for each inferred class and 2) a Q-learning-based scheduler that dynamically determines the sensing interval for each classification moment by learning the patterns of event classes. With this framework, we discuss the design choices made to ensure satisfactory learning performance and efficient resource usage. Experimental results demonstrate the ability of the system to incrementally adapt to unforeseen conditions and to schedule sensing efficiently on a resource-constrained device.
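A Q-learning-based interval scheduler of the kind described reduces, at its core, to a tabular Q-update where states are inferred event classes and actions are candidate sensing intervals. The sketch below shows one such update step; the interval set, state names, and hyperparameters are assumptions for illustration, not OpenSense's actual values:

```python
INTERVALS = (1, 5, 10)  # hypothetical candidate sensing periods (seconds)

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step for the interval scheduler. The
    reward would encode the trade-off the abstract describes: longer
    intervals save energy, but a missed event is penalized. Returns
    the updated Q-value for (state, action)."""
    best_next = max(q.get((next_state, a), 0.0) for a in INTERVALS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```

At decision time the scheduler would pick the interval with the highest learned Q-value for the currently inferred class, so intervals that repeatedly miss events are driven down by negative rewards.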