Title: Do we need sophisticated system design for edge-assisted augmented reality?
We revisit the performance of a canonical system design for edge-assisted AR that simply combines off-the-shelf H.264 video encoding with a standard object tracking technique. Our experimental analysis shows that this simple canonical design for edge-assisted object detection can achieve accuracy within 3.07%/1.51% of that of ideal offloading (which assumes infinite network bandwidth and a total network transmission time of a single RTT) under LTE/5G mmWave networks. Our findings suggest that the recent trend toward sophisticated system architecture designs for edge-assisted AR appears unnecessary. We provide insights into why video compression plus on-device object tracking is so effective for edge-assisted object detection, draw implications for edge-assisted AR research, and pose open problems that warrant further investigation into this surprising finding.
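To make the canonical design concrete, here is a minimal Python sketch of the offload-plus-local-tracking loop it describes. The encoder, edge call, and tracker are hypothetical placeholders (encode_h264, offload_to_edge, and track_forward are names invented for illustration, not the authors' code); a real system would use an actual H.264 encoder, a server-side DNN detector, and an off-the-shelf tracker, and would offload asynchronously.

# Minimal sketch of the canonical edge-assisted detection loop described above.
# The encoder, network call, and tracker below are hypothetical stand-ins.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float
    label: str

def encode_h264(frame) -> bytes:
    # Placeholder for off-the-shelf H.264 encoding of the camera frame.
    return b"compressed-frame"

def offload_to_edge(payload: bytes) -> List[Box]:
    # Placeholder for network transfer plus server-side DNN detection.
    return [Box(10, 20, 50, 80, "person")]

def track_forward(boxes: List[Box], frame) -> List[Box]:
    # Placeholder for a standard on-device tracker that carries the last
    # server result forward to the current frame.
    return [Box(b.x + 1.0, b.y, b.w, b.h, b.label) for b in boxes]

def run_pipeline(frames, offload_every: int = 10):
    """Offload one frame per detection interval; track the last result in between."""
    last_detections: Optional[List[Box]] = None
    results = []
    for i, frame in enumerate(frames):
        if i % offload_every == 0:
            # In a real system this call is asynchronous; the result lands a
            # turnaround time later and tracking bridges the gap meanwhile.
            last_detections = offload_to_edge(encode_h264(frame))
        elif last_detections is not None:
            last_detections = track_forward(last_detections, frame)
        results.append(last_detections or [])
    return results

if __name__ == "__main__":
    print(run_pipeline(frames=[object()] * 30)[-1])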
Award ID(s):
2112778
PAR ID:
10342971
Author(s) / Creator(s):
Date Published:
Journal Name:
EdgeSys '22: Proceedings of the 5th International Workshop on Edge Systems, Analytics and Networking
Page Range / eLocation ID:
7 to 12
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Edge-assisted Augmented Reality (AR), which offloads compute-intensive Deep Neural Network (DNN)-based AR tasks to edge servers, faces an important design challenge: how to pick, for each AR task, the DNN model to offload from the many choices proposed. For each AR task, e.g., depth estimation, many DNN-based models have been proposed over time that vary in accuracy and complexity. In general, more accurate models are also more complex; they are larger and have longer inference times. Thus, choosing a larger model for offloading can provide higher accuracy for the offloaded frames but also incurs a longer turnaround time, during which the AR app has to reuse the estimation result from the last offloaded frame, which can lead to lower average accuracy. In this paper, we experimentally study this design tradeoff using depth estimation as a case study. We design an optimal offloading schedule and further consider the impact of factors such as on-device fast tracking, frame downsizing, and available network bandwidth. Our results show that for edge-assisted monocular depth estimation, with proper frame downsizing and fast tracking, the improved accuracy of large models can offset their longer turnaround time and provide higher average estimation accuracy across frames than small models under both LTE and 5G mmWave.
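As a rough illustration of the tradeoff in item 1, the following sketch computes average per-frame accuracy over one offloading period under an assumed linear accuracy decay with staleness. All numbers (base accuracies, turnaround lengths, decay rate) are invented for illustration and are not taken from the paper.

# Back-of-the-envelope sketch of the model-choice tradeoff discussed in item 1.
# A model's result is reused for `turnaround_frames` frames, and accuracy is
# assumed to decay by a fixed amount per frame of staleness under tracking.

def average_accuracy(base_accuracy: float, turnaround_frames: int,
                     decay_per_stale_frame: float) -> float:
    """Mean accuracy over one offloading period of `turnaround_frames` frames."""
    per_frame = [max(0.0, base_accuracy - decay_per_stale_frame * staleness)
                 for staleness in range(turnaround_frames)]
    return sum(per_frame) / len(per_frame)

# Hypothetical operating points: a small fast model vs. a large slow model.
small_model = average_accuracy(base_accuracy=0.70, turnaround_frames=3,
                               decay_per_stale_frame=0.02)
large_model = average_accuracy(base_accuracy=0.85, turnaround_frames=8,
                               decay_per_stale_frame=0.02)

print(f"small/fast model: {small_model:.3f}, large/slow model: {large_model:.3f}")
# With good tracking (small decay), the large model's higher base accuracy can
# outweigh its longer turnaround, matching the qualitative finding above.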
  2. Edge-assisted AR supports high-quality AR on resource-constrained mobile devices by offloading high-rate camera-captured frames to powerful GPU edge servers that perform heavy vision tasks. Since the result of an offloaded frame may not come back within the same frame interval, edge-assisted AR designs resort to local tracking on the last server-returned result to generate a more accurate result for the current frame. In such an offloading-plus-local-tracking paradigm, reducing the staleness of the last server-returned result is critical to improving AR task accuracy. In this paper, we present MPCP, an online offloading scheduling framework that minimizes the staleness of server-returned results in edge-assisted AR by optimally pipelining network transfer of frames to the edge server with Deep Neural Network inference on the edge server. MPCP is based on model predictive control (MPC). Our evaluation results show that MPCP reduces depth estimation error by up to 10.0% compared to several baseline schemes.
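The staleness argument in item 2 can be made concrete with a toy timeline comparison. The transfer and inference times below are assumptions, and the sketch does not implement MPCP's model predictive control, only the sequential-versus-pipelined bookkeeping.

# Toy timeline sketch: how pipelining network transfer with server inference
# reduces the average staleness of the result used by the AR app.

def average_staleness(end_to_end_ms: float, offload_period_ms: float) -> float:
    """Average age of the freshest server result: it is end_to_end_ms old when
    it arrives and ages linearly until the next result lands."""
    return end_to_end_ms + offload_period_ms / 2

transfer_ms, inference_ms = 20.0, 30.0          # assumed per-frame costs
end_to_end = transfer_ms + inference_ms

# Sequential: the next frame is offloaded only after the previous result returns.
sequential = average_staleness(end_to_end, offload_period_ms=end_to_end)
# Pipelined: transfer of the next frame overlaps inference on the current one,
# so offloads can be issued every max(transfer, inference) ms instead.
pipelined = average_staleness(end_to_end,
                              offload_period_ms=max(transfer_ms, inference_ms))

print(f"sequential ~ {sequential:.0f} ms, pipelined ~ {pipelined:.0f} ms")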
  3. With faster wireless networks and server GPUs, offloading high-accuracy but compute-intensive AR tasks implemented as Deep Neural Networks (DNNs) to edge servers offers a promising way to support high-QoE Augmented/Mixed Reality (AR/MR) applications. A cost-effective way for AR app vendors to deploy such edge-assisted AR apps for a large user base is to use commercial Machine-Learning-as-a-Service (MLaaS) deployed at the edge cloud. To maximize cost-effectiveness, such an MLaaS provider faces a key design challenge: how to maximize the number of clients concurrently served by each GPU server in its cluster while meeting per-client AR task accuracy SLAs. This AR offloading inference serving problem differs from generic inference serving or video analytics serving in one fundamental way: because local tracking reuses the last server-returned inference result to derive results for the current frame, the offloading frequency and end-to-end latency of each AR client directly affect its AR task accuracy (for all frames). In this paper, we present ARISE, a framework that optimizes edge server capacity for serving edge-assisted AR clients. Our design exploits the intricate interplay between per-client offloading schedules and batched inference on the server by proactively coordinating the offloading request streams from different AR clients. Our evaluation using a large set of emulated AR clients and a 10-phone testbed shows that ARISE supports 1.7x to 6.9x more clients than various baselines while keeping per-client accuracy within the client-specified accuracy SLAs.
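A very rough capacity sketch in the spirit of item 3: assuming every client offloads once per period, all requests land in a single batch, and batched-inference latency grows linearly with batch size (all of these are illustrative assumptions, not ARISE's actual model), the largest SLA-feasible batch bounds the number of clients one GPU can serve.

# Rough per-GPU capacity estimate under an assumed linear batching cost model.

def batch_latency_ms(batch_size: int, base_ms: float = 12.0,
                     per_item_ms: float = 3.0) -> float:
    """Assumed batched-inference latency: fixed cost plus a per-item cost."""
    return base_ms + per_item_ms * batch_size

def max_clients(offload_period_ms: float, latency_sla_ms: float,
                network_ms: float = 15.0) -> int:
    """Largest batch size whose end-to-end latency stays within the SLA and
    whose inference fits inside one offload period (so batches do not pile up)."""
    best = 0
    for b in range(1, 65):
        end_to_end = network_ms + batch_latency_ms(b)
        if end_to_end <= latency_sla_ms and batch_latency_ms(b) <= offload_period_ms:
            best = b
    return best

# One coordinated batch per 100 ms offload period, 80 ms per-client latency SLA.
print(max_clients(offload_period_ms=100.0, latency_sla_ms=80.0))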
  4. Edge-assisted video analytics is gaining momentum. In this work, we tackle an important problem: compressing video content live-streamed from the device to the edge without sacrificing the accuracy and timeliness of its video analytics. We find that on-device processing can be tuned over a larger configuration space for more video compression, which has been largely overlooked. Inspired by our pilot study, we design VPPlus to compress the video as much as possible while preserving analytical accuracy. VPPlus incorporates two core modules, offline profiling and online adaptation, to generate proper feedback automatically and quickly for tuning on-device processing. We validate the effectiveness and efficiency of VPPlus using five object detection tasks over two popular datasets; VPPlus outperforms state-of-the-art approaches in almost all cases.
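The profile-then-adapt idea in item 4 can be sketched as a lookup over an offline profile. The configuration knobs and the (bitrate, accuracy) numbers below are illustrative inventions that stand in for VPPlus's measured profiles and online feedback loop.

# Sketch of choosing the most compressive on-device config that still meets an
# accuracy target, using an assumed offline profile.

# Offline profile: config -> (relative bitrate, profiled detection accuracy)
PROFILE = {
    ("1080p", "qp=23"): (1.00, 0.92),
    ("720p",  "qp=23"): (0.55, 0.90),
    ("720p",  "qp=30"): (0.35, 0.87),
    ("480p",  "qp=30"): (0.20, 0.78),
}

def pick_config(accuracy_target: float):
    """Most compressive (lowest-bitrate) config whose profiled accuracy meets the target;
    fall back to the most accurate config if none does."""
    feasible = [(bitrate, cfg) for cfg, (bitrate, acc) in PROFILE.items()
                if acc >= accuracy_target]
    return min(feasible)[1] if feasible else max(PROFILE, key=lambda c: PROFILE[c][1])

print(pick_config(accuracy_target=0.85))   # -> ('720p', 'qp=30') under these assumed numbers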
  5. Image sensors with programmable region-of-interest (ROI) readout are a new sensing technology important for energy-efficient embedded computer vision. In particular, ROIs can subsample the number of pixels read out while performing single-object tracking in a video. In this paper, we develop adaptive sampling algorithms that perform joint object tracking and predictive video subsampling. We utilize either mean shift tracking or a neural network for object detection, coupled with a Kalman filter for prediction. We show that our algorithms achieve a mean average precision of 0.70 or higher on a dataset of 20 videos in software. Further, we implement hardware acceleration of mean shift tracking with Kalman filter adaptive subsampling on an FPGA. Hardware results show a 23× improvement in clock cycles and latency compared to baseline methods and achieve 38 FPS real-time performance. This research points to a new domain of hardware-software co-design for adaptive video subsampling in embedded computer vision.
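The prediction component in item 5 can be illustrated with a textbook constant-velocity Kalman filter over the ROI center. This is a generic sketch (the noise covariances are assumed), not the paper's mean shift tracker or its FPGA implementation.

# Minimal constant-velocity Kalman filter for predicting where to place the
# subsampled ROI readout window in the next frame.

import numpy as np

class RoiKalman:
    def __init__(self, x: float, y: float, dt: float = 1.0):
        # State: [x, y, vx, vy]; constant-velocity motion model.
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01     # assumed process noise
        self.R = np.eye(2) * 1.0      # assumed measurement noise

    def predict(self):
        """Predict the next ROI center; used to place the readout window."""
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, zx: float, zy: float):
        """Correct with the tracker's measured object center for the current frame."""
        z = np.array([zx, zy])
        innovation = z - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = RoiKalman(100.0, 80.0)
for cx, cy in [(102, 81), (104, 82), (106, 83)]:   # tracker outputs (e.g., mean shift)
    print("predicted ROI center:", kf.predict())
    kf.update(cx, cy)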