-
Users at the edge generate deep inference requests continuously over time. Mobile/edge devices located near users can perform the inference computation locally, e.g., the embedded edge device on an autonomous vehicle. Due to the limited computing resources of a single mobile/edge device, it can be challenging to process users' inference requests at high throughput. An attractive solution is to (partially) offload the computation to a remote device in the network. In this paper, we examine existing inference execution solutions across local and remote devices and propose an adaptive scheduler, BPS, for continuous deep inference on collaborative edge intelligence. By leveraging data parallelism, neurosurgeon-style model partitioning, and reinforcement learning, BPS can boost overall inference performance by up to 8.2× over baseline schedulers. A lightweight compressor, FF, specialized in compressing the intermediate output data of a neurosurgeon-style partition, is proposed and integrated into the BPS scheduler. FF exploits the operating characteristics of convolutional layers and uses efficient approximation algorithms. Compared to existing compression methods, FF achieves up to 86.9% lower accuracy loss and up to 83.6% lower latency overhead.
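A minimal sketch of the neurosurgeon-style split-point search that a scheduler like BPS can build on: profile each layer's local and remote latency and the size of its intermediate output, then pick the split that minimizes end-to-end latency. The layer profiles, input size, and bandwidth below are hypothetical placeholders, not measurements from the paper.

# Sketch: choose the layer at which to split DNN inference between a
# local edge device and a remote server, minimizing end-to-end latency.
# Layer profiles, input size, and bandwidth are illustrative placeholders.

LAYERS = [
    # (name, local_ms, remote_ms, output_kb) -- hypothetical profiles
    ("conv1", 40.0, 2.0, 800.0),
    ("conv2", 60.0, 3.0, 400.0),
    ("conv3", 50.0, 2.5, 200.0),
    ("fc1", 20.0, 1.0, 16.0),
    ("fc2", 8.0, 0.5, 4.0),
]

INPUT_KB = 1500.0  # assumed size of the raw input


def best_split(layers, bandwidth_kbps=5000.0):
    """Return (split_index, latency_ms): layers[:i] run locally and
    layers[i:] run remotely after shipping the intermediate output."""
    best_i, best_ms = None, float("inf")
    for i in range(len(layers) + 1):
        local = sum(l[1] for l in layers[:i])
        remote = sum(l[2] for l in layers[i:])
        if i == len(layers):
            transfer = 0.0  # fully local: nothing to ship
        else:
            out_kb = layers[i - 1][3] if i > 0 else INPUT_KB
            transfer = out_kb / bandwidth_kbps * 1000.0  # ms
        total = local + transfer + remote
        if total < best_ms:
            best_i, best_ms = i, total
    return best_i, best_ms


if __name__ == "__main__":
    idx, ms = best_split(LAYERS)
    print(f"split before layer index {idx}: {ms:.1f} ms end-to-end")

With these made-up numbers the search lands near the end of the network, where intermediate outputs are small; FF-style compression of the shipped tensor would shrink the transfer term further.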
-
Deep neural network (DNN) inference poses unique challenges in serving computational requests due to high request intensity, concurrent multi-user scenarios, and diverse heterogeneous service types. Simultaneously, mobile and edge devices provide users with enhanced computational capabilities, enabling them to utilize local resources for deep inference processing. Moreover, dynamic inference techniques allow content-based selection of computational cost per request. This paper presents Dystri, an innovative framework devised to facilitate dynamic inference on distributed edge infrastructure, thereby accommodating multiple heterogeneous users. Dystri offers broad applicability in practical environments, encompassing heterogeneous device types, DNN-based applications, and dynamic inference techniques, surpassing state-of-the-art (SOTA) approaches. With distributed controllers and a global coordinator, Dystri allows per-request, per-user adjustment of quality-of-service, ensuring instantaneous, flexible, and discrete control. The decoupled workflows in Dystri naturally support user heterogeneity and scalability, addressing crucial aspects overlooked by existing SOTA works. Our evaluation involves three multi-user, heterogeneous DNN inference service platforms deployed on distributed edge infrastructure, encompassing seven DNN applications. Results show Dystri achieves near-zero deadline misses and excels in adapting to varying user numbers and request intensities. Dystri outperforms the baselines with accuracy improvement of up to 95×.
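As a rough illustration of per-request, per-user quality-of-service control, the sketch below has a controller pick the most accurate dynamic-inference variant (early-exit depths are assumed here as the dynamic technique) that still fits each request's deadline. The variant names and profiles are invented for the example and are not Dystri's actual interface.

# Sketch: a per-request controller that selects a dynamic-inference
# variant (e.g., an early-exit depth) to meet each request's deadline.
# Variant latency/accuracy profiles are illustrative placeholders.

VARIANTS = [
    # (name, est_latency_ms, est_accuracy) -- hypothetical profiles
    ("exit-1", 10.0, 0.80),
    ("exit-2", 25.0, 0.88),
    ("exit-3", 60.0, 0.93),
    ("full", 120.0, 0.95),
]


def pick_variant(deadline_ms, queue_delay_ms=0.0):
    """Choose the most accurate variant fitting the remaining budget."""
    budget = deadline_ms - queue_delay_ms
    feasible = [v for v in VARIANTS if v[1] <= budget]
    if not feasible:
        return VARIANTS[0]  # degrade gracefully to the cheapest variant
    return max(feasible, key=lambda v: v[2])


if __name__ == "__main__":
    for deadline in (15.0, 70.0, 200.0):
        name, lat, acc = pick_variant(deadline, queue_delay_ms=5.0)
        print(f"deadline {deadline:5.0f} ms -> {name} ({lat} ms, acc {acc})")

Running such a controller next to each device, with a global coordinator only rebalancing load across devices, is what keeps control per-request and instantaneous.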
-
Convolutional neural networks (CNNs) play an important role in today's mobile and edge computing systems for vision-based tasks like object classification and detection. However, state-of-the-art methods for CNN acceleration are trapped either in limited practical latency speed-up on general computing platforms or in latency speed-up with severe accuracy loss. In this paper, we propose a spatial-based dynamic CNN acceleration framework, NeuLens, for mobile and edge platforms. Specifically, we design a novel dynamic inference mechanism, the assemble region-aware convolution (ARAC) supernet, which peels off as many redundant operations inside CNN models as possible based on spatial redundancy and channel slicing. In the ARAC supernet, the CNN inference flow is split into multiple independent micro-flows, and the computational cost of each can be adjusted autonomously based on its tiled-input content and application requirements. These micro-flows can be loaded into hardware such as GPUs as single models. Consequently, the operation reduction translates well into latency speed-up and is compatible with hardware-level accelerations. Moreover, inference accuracy is well preserved by identifying critical regions in images and processing them at the original resolution with a large micro-flow. Based on our evaluation, NeuLens outperforms baseline methods by up to 58% latency reduction at the same accuracy and by up to 67.9% accuracy improvement under the same latency/memory constraints.
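The region-aware idea can be sketched as follows: tile the input, estimate each tile's criticality, and send critical tiles through a full-resolution "large" micro-flow while the rest take a cheaper path. Pixel variance stands in below for whatever learned criticality signal NeuLens actually uses; the tile size and threshold are arbitrary.

import numpy as np


def tiles(image, th, tw):
    """Split an HxW image into a grid of (th x tw) tiles."""
    h, w = image.shape[:2]
    return [
        ((y, x), image[y:y + th, x:x + tw])
        for y in range(0, h, th)
        for x in range(0, w, tw)
    ]


def route_tiles(image, th=56, tw=56, crit_threshold=500.0):
    """Assign each tile to a large (full-resolution) or small micro-flow.
    Criticality is approximated by pixel variance for this sketch."""
    return [
        (pos, "large-flow" if float(np.var(t)) > crit_threshold else "small-flow")
        for pos, t in tiles(image, th, tw)
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.zeros((224, 224), dtype=np.uint8)          # flat: low variance
    img[112:] = rng.integers(0, 256, size=(112, 224))   # noisy: high variance
    plan = route_tiles(img)
    for pos, flow in plan[:2] + plan[-2:]:
        print(pos, "->", flow)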
-
Federated learning (FL) has recently emerged as a new distributed machine learning paradigm. Although FL can protect the data privacy of participants by keeping their training data on local devices, recent works have raised new privacy concerns, especially when the workers or the parameter server of FL are untrustworthy or malicious. One effective way to solve the problem is to use hierarchical federated learning (HFL), where a few middle-layer aggregators (called group leaders) aggregate local model updates from workers and send group model updates to the parameter server. In this paper, we consider the participant selection problem of HFL in an edge cloud with multiple FL models, where each model needs to select one parameter server, a few group leaders, and a certain number of workers from edge servers to jointly perform HFL. We first formulate this problem as a non-linear integer program, aiming to minimize the total learning cost of all models while satisfying the edge resource constraints. We then design a three-stage algorithm by decoupling the original problem into three sub-problems and solving them iteratively. Simulations with real-world datasets and FL models confirm that our proposed algorithm can efficiently reduce the average total learning cost in the edge cloud compared with existing methods.
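To make the three-stage decoupling concrete, here is a deliberately simplified greedy heuristic: stage 1 picks the parameter server, stage 2 the group leaders, stage 3 the workers, each by per-role cost under per-server capacity limits. The cost table and capacities are invented, and the paper's actual formulation is a non-linear integer program solved iteratively, not this one-shot greedy pass.

# Sketch: three-stage greedy selection for one HFL model in an edge cloud.
# Per-role costs and capacities are illustrative placeholders.

SERVERS = {
    # name: per-role cost and remaining capacity (slots)
    "e1": {"ps": 9.0, "leader": 4.0, "worker": 2.0, "cap": 2},
    "e2": {"ps": 7.0, "leader": 3.0, "worker": 2.5, "cap": 2},
    "e3": {"ps": 8.0, "leader": 5.0, "worker": 1.5, "cap": 3},
    "e4": {"ps": 12.0, "leader": 6.0, "worker": 1.0, "cap": 3},
}


def select(servers, n_leaders=2, n_workers=4):
    cap = {s: v["cap"] for s, v in servers.items()}

    def cheapest(role, k):
        # Fill k slots for `role`, cheapest servers first, within capacity.
        chosen = []
        for s in sorted(servers, key=lambda s: servers[s][role]):
            while cap[s] > 0 and len(chosen) < k:
                cap[s] -= 1
                chosen.append(s)
        return chosen

    ps = cheapest("ps", 1)                    # stage 1: parameter server
    leaders = cheapest("leader", n_leaders)   # stage 2: group leaders
    workers = cheapest("worker", n_workers)   # stage 3: workers
    total = (servers[ps[0]]["ps"]
             + sum(servers[s]["leader"] for s in leaders)
             + sum(servers[s]["worker"] for s in workers))
    return ps, leaders, workers, total


if __name__ == "__main__":
    ps, leaders, workers, cost = select(SERVERS)
    print("PS:", ps, "leaders:", leaders, "workers:", workers, "cost:", cost)

Re-running each stage with the other two held fixed is what turns this one-shot pass into the alternating, iterative style of solution described above.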
-
As edge computing complements the cloud to enable computational services right at the network edge, federated learning (FL) can also benefit from nearby edge computing infrastructure. However, most prior works on federated edge learning (FEL) focus on a single shared global model during federated training in edge systems. In a real edge computing scenario, multiple FL models, owned by different entities and used by different applications, may co-exist. Simultaneously training these models competes for both computing and networking resources in the shared edge system. Therefore, in this work, we consider multi-model federated edge learning, where multiple FEL models are trained in the edge network and edge servers can act as either parameter servers or workers for these FEL models. We formulate a joint participant selection and learning scheduling problem, a non-linear mixed-integer program, aiming to minimize the total cost of all FEL models while satisfying the desired convergence rate of each trained FEL model and the edge resource constraints. We then design several algorithms by decoupling the original problem into two or three sub-problems that can be solved separately and iteratively. Extensive simulations with real-world training datasets and FEL models show that our proposed algorithms can efficiently reduce the average total cost of all FEL models in a multi-model FEL setting compared with existing algorithms.
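A bare-bones sketch of the decoupling loop for the multi-model case: sub-problem (a) is participant selection, where each model greedily recruits the cheapest free edge workers while that lowers its own cost; sub-problem (b) is scheduling, modeled here simply as the order in which models recruit. The per-round overhead, worker costs, and "work units until convergence" proxy are assumptions of this sketch, not the paper's formulation.

import itertools
import math

# Per-round participation cost of each edge worker (illustrative; assumes
# at least as many workers as models).
WORKER_COST = {"e1": 2.0, "e2": 3.0, "e3": 1.5, "e4": 2.5, "e5": 2.2}
ROUND_OVERHEAD = 5.0          # fixed aggregation cost per training round
MODELS = {"A": 20, "B": 12}   # work units needed to reach convergence


def model_cost(work, team):
    """Cost of finishing `work` units with a fixed worker team."""
    rounds = math.ceil(work / len(team))
    return rounds * (ROUND_OVERHEAD + sum(WORKER_COST[w] for w in team))


def assign_workers(order):
    """Sub-problem (a): in the given order, each model grabs the cheapest
    free workers while that lowers its cost, leaving one per later model."""
    free = sorted(WORKER_COST, key=WORKER_COST.get)
    assign = {}
    for i, m in enumerate(order):
        limit = len(free) - (len(order) - i - 1)
        team = [free.pop(0)]
        while len(team) < limit and free and \
                model_cost(MODELS[m], team + [free[0]]) < model_cost(MODELS[m], team):
            team.append(free.pop(0))
        assign[m] = team
    return assign


def solve():
    """Sub-problem (b): search over recruiting orders (the scheduling
    half) and keep the cheapest joint assignment found."""
    best, best_cost = None, float("inf")
    for order in itertools.permutations(MODELS):
        assign = assign_workers(list(order))
        cost = sum(model_cost(MODELS[m], assign[m]) for m in MODELS)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost


if __name__ == "__main__":
    assign, cost = solve()
    print("assignment:", assign, "total cost:", cost)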