

Title: The Promise and Challenges of Computation Deduplication and Reuse at the Network Edge
In edge computing deployments, where devices may be in close proximity to each other, these devices may offload similar computational tasks (i.e., tasks with similar input data for the same edge computing service or for services of the same nature). This results in the execution of duplicate (redundant) computation, which may become a pressing issue for future edge computing environments, since such deployments are envisioned to consist of small-scale data centers at the edge. To tackle this issue, in this paper, we highlight the importance of paradigms for the deduplication and reuse of computation at the network edge. Such paradigms have the potential to significantly reduce the completion times of offloaded tasks, accommodating more users, devices, and tasks with the same volume of deployed edge computing resources; however, they come with their own technical challenges. Finally, we present a multi-layer architecture to enable computation deduplication and reuse at the network edge and discuss open challenges and future research directions.
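As a minimal, hedged sketch of the core idea (names such as ReuseTable and execute_or_reuse are hypothetical and not from the paper), the Python snippet below fingerprints the input of each offloaded task and returns a stored result for a duplicate request instead of re-executing the service:

```python
# Illustrative computation-deduplication cache at an edge server (a sketch,
# not the architecture proposed in the paper). Results of previously executed
# tasks are stored and returned for identical inputs instead of re-executing.
import hashlib
from typing import Any, Callable

class ReuseTable:
    def __init__(self) -> None:
        self._results: dict[str, Any] = {}

    @staticmethod
    def _fingerprint(service: str, data: bytes) -> str:
        # Exact-match fingerprint; catching *similar* (not identical) inputs
        # requires similarity-aware indexing such as LSH.
        return hashlib.sha256(service.encode() + data).hexdigest()

    def execute_or_reuse(self, service: str, data: bytes,
                         run: Callable[[bytes], Any]) -> Any:
        key = self._fingerprint(service, data)
        if key in self._results:      # duplicate task: reuse the stored result
            return self._results[key]
        result = run(data)            # otherwise execute the service and store
        self._results[key] = result
        return result

# Two devices offload the same image to the same annotation service;
# the second request is answered from the reuse table without re-execution.
table = ReuseTable()
annotate = lambda img: f"labels for {len(img)}-byte image"
print(table.execute_or_reuse("annotate", b"\x00" * 1024, annotate))
print(table.execute_or_reuse("annotate", b"\x00" * 1024, annotate))
```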
Award ID(s):
2104700 2306685
NSF-PAR ID:
10322110
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Wireless Communications
ISSN:
1558-0687
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In edge computing use cases (e.g., smart cities), where several users and devices may be in close proximity to each other, computational tasks with similar input data for the same services (e.g., image or video annotation) may be offloaded to the edge. The execution of such tasks often yields the same results (output) and thus duplicate (redundant) computation. Based on this observation, prior work has advocated for "computation reuse", a paradigm where the results of previously executed tasks are stored at the edge and are reused to satisfy incoming tasks with similar input data, instead of executing these incoming tasks from scratch. However, realizing computation reuse in practical edge computing deployments, where services may be offered by multiple (distributed) edge nodes (servers) for scalability and fault tolerance, is still largely unexplored. To tackle this challenge, in this paper, we present Reservoir, a framework to enable pervasive computation reuse at the edge, while imposing marginal overheads on user devices and the operation of the edge network infrastructure. Reservoir takes advantage of Locality Sensitive Hashing (LSH) and runs on top of Named-Data Networking (NDN), extending the NDN architecture for the realization of the computation reuse semantics in the network. Our evaluation demonstrated that Reservoir can reuse computation with almost perfect accuracy, achieving 4.25-21.34x lower task completion times compared to cases without computation reuse.
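    The snippet below is a hedged illustration of an LSH-based reuse lookup in the spirit of the above (a simplified sketch, not Reservoir's implementation or its NDN integration): input feature vectors are hashed with random-hyperplane LSH, and a task whose vector falls into an already-populated bucket reuses the stored result instead of being executed.

```python
# Simplified LSH-based reuse store (illustrative; not Reservoir's code).
# Similar input vectors tend to hash to the same bucket, so their tasks
# can be satisfied with a previously stored result.
import numpy as np

class LSHReuseStore:
    def __init__(self, dim: int, n_planes: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_planes, dim))  # random hyperplanes
        self.buckets: dict[str, object] = {}

    def _bucket(self, features: np.ndarray) -> str:
        # Each hyperplane contributes one sign bit of the bucket key.
        bits = (self.planes @ features) >= 0
        return "".join("1" if b else "0" for b in bits)

    def lookup_or_execute(self, features: np.ndarray, run):
        key = self._bucket(features)
        if key in self.buckets:            # similar task seen before: reuse
            return self.buckets[key], True
        result = run(features)             # execute and store for future reuse
        self.buckets[key] = result
        return result, False

# Two near-identical feature vectors (e.g., consecutive video frames) will
# typically land in the same bucket, so the second task is served by reuse.
store = LSHReuseStore(dim=128)
rng = np.random.default_rng(1)
frame = rng.standard_normal(128)
print(store.lookup_or_execute(frame, lambda f: "annotation A"))
print(store.lookup_or_execute(frame + 1e-3 * rng.standard_normal(128),
                              lambda f: "not executed on a reuse hit"))
```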
  2. Edge computing allows end-user devices to offload heavy computation to nearby edge servers for reduced latency, maximized profit, and/or minimized energy consumption. Data-dependent tasks that analyze locally-acquired sensing data are one of the most common candidates for task offloading in edge computing. As a result, the total latency and network load are affected by the total amount of data transferred from end-user devices to the selected edge servers. Most existing solutions for task allocation in edge computing do not take into consideration that some user tasks may actually operate on the same data items. Making the task allocation algorithm aware of the existing data-sharing characteristics of tasks can help reduce network load at a negligible profit loss by allocating more tasks that share data to the same server. In this paper, we formulate the data sharing-aware task allocation problem, which makes task allocation decisions for maximized profit and minimized network load by taking into account the data-sharing characteristics of tasks. In addition, because the problem is NP-hard, we design the DSTA algorithm, which finds a solution to the problem in polynomial time. We analyze the performance of the proposed algorithm against a state-of-the-art baseline that only maximizes profit. Our extensive analysis shows that DSTA leads to about 8 times lower data load on the network while achieving, on average, a total profit within a factor of 1.03 of the state-of-the-art baseline.
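    Because the abstract does not detail DSTA itself, the following Python sketch is only a hedged, greedy illustration of data sharing-aware allocation: each task is preferentially placed on a server that already holds its input data, so a shared data item is transferred over the network once rather than once per task.

```python
# Greedy data sharing-aware allocation (illustrative only; not the DSTA
# algorithm from the paper). Tasks that operate on the same data item are
# steered to the server that already hosts it.
from collections import defaultdict

def allocate(tasks, servers):
    """tasks: list of (task_id, data_id, data_size); servers: list of ids."""
    placement = {}                    # task_id -> chosen server
    hosted = defaultdict(set)         # server  -> data items already present
    load = defaultdict(int)           # server  -> bytes transferred so far
    for task_id, data_id, size in tasks:
        # Prefer a server that already hosts the data; otherwise pick the
        # server with the least data transferred so far (simple balance rule).
        candidates = [s for s in servers if data_id in hosted[s]]
        server = candidates[0] if candidates else min(servers, key=lambda s: load[s])
        if data_id not in hosted[server]:
            load[server] += size      # the data item is shipped only once
            hosted[server].add(data_id)
        placement[task_id] = server
    return placement, dict(load)

tasks = [("t1", "imgA", 5), ("t2", "imgA", 5), ("t3", "imgB", 3)]
placement, load = allocate(tasks, ["edge1", "edge2"])
print(placement)  # t1 and t2 share imgA and are placed on the same server
print(load)       # imgA is transferred once instead of twice
```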
  3. Due to the proliferation of Internet of Things (IoT) devices and application/user demands that challenge communication and computation, edge computing has emerged as the paradigm to bring computing resources closer to users. In this paper, we present Whispering, an analytical model for the migration of services (service offloading) from the cloud to the edge, in order to minimize the completion time of computational tasks offloaded by user devices and improve the utilization of resources. We also empirically investigate the impact of reusing the results of previously executed tasks for the execution of newly received tasks (computation reuse) and propose an adaptive task offloading scheme between edge and cloud. Our evaluation results show that Whispering achieves up to 35% lower task completion times (and up to 97% lower when coupled with computation reuse) than cases where tasks are executed exclusively at the edge or the cloud.
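    As a simplified, hedged sketch (not the Whispering analytical model itself), the Python snippet below chooses between edge and cloud by comparing expected completion times, letting the edge discount its execution time by an assumed probability of a computation reuse hit:

```python
# Toy edge-vs-cloud offloading decision (illustrative; parameter names and
# the reuse-hit discount are assumptions, not the Whispering model).
def expected_completion(transfer_s, queue_s, exec_s, reuse_hit_prob=0.0):
    # A reuse hit skips execution; the lookup cost is folded into queue_s.
    return transfer_s + queue_s + (1.0 - reuse_hit_prob) * exec_s

def choose_tier(task):
    edge = expected_completion(task["edge_transfer"], task["edge_queue"],
                               task["exec"], task["reuse_prob"])
    cloud = expected_completion(task["cloud_transfer"], task["cloud_queue"],
                                task["exec"] * task["cloud_speedup"])
    return ("edge", edge) if edge <= cloud else ("cloud", cloud)

task = {"edge_transfer": 0.02, "edge_queue": 0.10, "exec": 0.30,
        "reuse_prob": 0.6, "cloud_transfer": 0.15, "cloud_queue": 0.01,
        "cloud_speedup": 0.5}
print(choose_tier(task))  # the edge wins here thanks to reuse and cheap transfer
```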
  4. While cloud computing is the current standard for outsourcing computation, it can be prohibitively expensive for cities and infrastructure operators to deploy services. At the same time, there are underutilized computing resources within cities and local edge-computing deployments. Using these slack resources may enable significantly lower pricing than comparable cloud computing; such resources would incur minimal marginal expenditure since their deployment and operation are mostly sunk costs. However, there are challenges associated with using these resources. First, they are not effectively aggregated or provisioned. Second, there is a lack of trust between customers and suppliers of computing resources, given that they are distinct stakeholders and behave according to their own interests. Third, delays in processing inputs may diminish the value of the applications. To resolve these challenges, we introduce an architecture combining a distributed trusted computing mechanism, such as a blockchain, with an efficient messaging system like Apache Pulsar. Using this architecture, we design a decentralized computation market where customers and suppliers make offers to deploy and host applications. The proposed architecture can be realized using any trusted computing mechanism that supports smart contracts, and any messaging framework with the necessary features. This combination ensures that the market is robust without incurring the input processing delays that limit other blockchain-based solutions. We evaluate the market protocol using game-theoretic analysis to show that deviation from the protocol is discouraged. Finally, we assess the performance of a prototype implementation based on experiments with a streaming computer-vision application. 
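    Purely as an illustration of offer matching (not the paper's market protocol, its blockchain integration, or its game-theoretic mechanism), the sketch below pairs the highest remaining customer bid with the cheapest remaining supplier ask that it covers, in the style of a simple double auction:

```python
# Toy offer matching for a computation market (illustrative assumptions only).
def match_offers(bids, asks):
    """bids: [(customer, max_price)], asks: [(supplier, min_price)]."""
    matches = []
    asks = sorted(asks, key=lambda a: a[1])              # cheapest asks first
    for customer, max_price in sorted(bids, key=lambda b: -b[1]):
        for i, (supplier, min_price) in enumerate(asks):
            if min_price <= max_price:                   # a feasible deal
                matches.append((customer, supplier, (max_price + min_price) / 2))
                asks.pop(i)                              # supplier is now taken
                break
    return matches

print(match_offers(bids=[("cityA", 10.0), ("cityB", 4.0)],
                   asks=[("edge1", 6.0), ("edge2", 3.0)]))
```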
  5. Edge computing is emerging as a new paradigm to allow processing data near the edge of the network, where the data is typically generated and collected. This enables critical computations at the edge in applications such as the Internet of Things (IoT), in which an increasing number of devices (sensors, cameras, health monitoring devices, etc.) collect data that needs to be processed through computationally intensive algorithms with stringent reliability, security, and latency constraints. Our key tool is the theory of coded computation, which advocates mixing data in computationally intensive tasks by employing erasure codes and offloading these tasks to other devices for computation. Coded computation has recently been gaining interest thanks to its higher reliability, smaller delay, and lower communication costs. In this paper, we develop a private and rateless adaptive coded computation (PRAC) algorithm for distributed matrix-vector multiplication by taking into account (1) the privacy requirements of IoT applications and devices, and (2) the heterogeneous and time-varying resources of edge devices. We show that PRAC outperforms known secure coded computing methods when resources are heterogeneous. We provide theoretical guarantees on the performance of PRAC and its comparison to baselines. Moreover, we confirm our theoretical results through simulations and implementations on Android-based smartphones.
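    The sketch below illustrates the generic idea behind coded distributed matrix-vector multiplication (it is not PRAC, which additionally provides privacy and rateless, adaptive encoding): the matrix is split into k row blocks, encoded into n coded blocks with a random linear code, and the product is recovered from any k worker results, so up to n - k stragglers can be ignored.

```python
# Generic coded matrix-vector multiplication with straggler tolerance
# (illustrative; PRAC adds privacy and rateless, adaptive encoding).
import numpy as np

def coded_matvec(A, x, n=5, k=3, straggled=(1, 4), seed=0):
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    assert m % k == 0, "for simplicity, the row count must be divisible by k"
    blocks = np.split(A, k)                   # row blocks A_1, ..., A_k
    G = rng.standard_normal((n, k))           # random linear (MDS-like) code
    coded = [sum(G[j, i] * blocks[i] for i in range(k)) for j in range(n)]

    # Each worker j returns its coded block times x; stragglers never reply.
    results = {j: coded[j] @ x for j in range(n) if j not in straggled}

    # Decode from any k received results.
    done = sorted(results)[:k]
    Y = np.vstack([results[j] for j in done])  # k x (m/k) received products
    B = np.linalg.solve(G[done, :], Y)         # recover the k block products
    return B.reshape(m)

A = np.arange(12.0).reshape(6, 2)
x = np.array([1.0, 2.0])
print(np.allclose(coded_matvec(A, x), A @ x))  # True despite two stragglers
```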