

Title: A quantitative analysis of system bottlenecks in visual SLAM
Visual SLAM systems are concurrent, performance-critical systems that respond to real-time environmental conditions and are frequently deployed on resource-constrained hardware. Previous SLAM frameworks have focused primarily on algorithmic advances, while their systems core has remained largely unchanged. As a result, SLAM systems suffer from performance problems that could be alleviated with improved systems design. In this paper, we present a quantitative analysis of the systems challenges to building consistent, accurate, and robust SLAM systems in the face of concurrency, variable environmental conditions, and resource-constrained hardware. We identify three interconnected systems design challenges (timeliness, concurrency, and context awareness) and clarify their effects on performance.
Award ID(s):
1846320
PAR ID:
10321518
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Proceedings of the 23rd Annual International Workshop on Mobile Computing Systems and Applications
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Since emerging edge applications such as Internet of Things (IoT) analytics and augmented reality have tight latency constraints, hardware AI accelerators have recently been proposed to speed up the deep neural network (DNN) inference run by these applications. Resource-constrained edge servers and accelerators tend to be multiplexed across multiple IoT applications, introducing the potential for performance interference between latency-sensitive workloads. In this article, we design analytic models that capture the performance of DNN inference workloads on shared edge accelerators, such as GPU and edgeTPU, under different multiplexing and concurrency behaviors. After validating our models using extensive experiments, we use them to design various cluster resource management algorithms that intelligently manage multiple applications on edge accelerators while respecting their latency constraints. We implement a prototype of our system in Kubernetes and show that it can host 2.3× more DNN applications in heterogeneous multi-tenant edge clusters with no latency violations compared to traditional knapsack hosting algorithms. (An illustrative placement sketch appears after this list.)
  2. Edge computing is increasingly proposed as a way to reduce the resource consumption of mobile devices running simultaneous localization and mapping (SLAM) algorithms. Most edge-assisted SLAM systems, however, either assume that the communication resources between the mobile device and the edge server are unlimited or rely on heuristics to choose the information transmitted to the edge. This paper presents AdaptSLAM, an edge-assisted visual (V) and visual-inertial (VI) SLAM system that adapts to the available communication and computation resources, based on a theoretically grounded method we developed to select the subset of keyframes (the representative frames) for constructing the best local and global maps on the mobile device and the edge server under resource constraints. We implemented AdaptSLAM to work with the state-of-the-art open-source V- and VI-SLAM framework ORB-SLAM3, and demonstrated that, under constrained network bandwidth, AdaptSLAM reduces the tracking error by 62% compared to the best baseline method. (A toy keyframe-selection sketch appears after this list.)
  3. The global wearable market is anticipated to grow at a considerable rate in the coming years, and communication is a fundamental building block of any wearable device. In communication, encryption is typically implemented with the aid of microcontrollers or in software, approaches that consume power and require complex hardware. Internet of Things (IoT) devices are considered resource-constrained devices that are expected to operate with low computational power and limited resources. At the same time, recent research has shown that IoT devices are highly vulnerable to emerging security threats, which elevates the need for low-power, small-footprint hardware-based security countermeasures. Chaotic encryption is a method of data encryption that uses chaotic systems and nonlinear dynamics to generate secure encryption keys. It aims to provide high-level security by creating keys that are sensitive to initial conditions and difficult to predict, making it challenging for unauthorized parties to intercept and decode encrypted data. Since the discovery of chaotic equations, there have been various encryption applications associated with them. In this paper, we comprehensively analyze physical and encryption attacks on continuous chaotic systems in resource-constrained devices and their potential remedies. To this end, we introduce different categories of attacks on chaotic encryption. Our experiments focus on chaotic systems implemented using Chua's equations, leverage circuit architectures, and provide simulation-based evidence of remedies for different attacks. These remedies block attackers from stealing users' information (e.g., a pulse message) with negligible cost to the power and area of the design. (A small simulation sketch of Chua's equations appears after this list.)
  4. Deep neural networks (DNNs) have substantial computational requirements, which greatly limit their performance in resource-constrained environments. Recently, there have been increasing efforts on optical neural networks and optical-computing-based DNN hardware, which bring significant advantages for deep learning systems in terms of power efficiency, parallelism, and computational speed. Among them, free-space diffractive deep neural networks (D2NNs), based on light diffraction, feature millions of neurons in each layer interconnected with neurons in neighboring layers. However, because of the challenge of implementing reconfigurability, deploying a different DNN algorithm requires re-building and duplicating the physical diffractive system, which significantly degrades hardware efficiency in practical application scenarios. This work therefore proposes a novel hardware-software co-design method that enables first-of-its-kind real-time multi-task learning in D2NNs, automatically recognizing which task is being deployed in real time. Our experimental results demonstrate significant improvements in versatility and hardware efficiency, and also demonstrate and quantify the robustness of the proposed multi-task D2NN architecture under wide noise ranges of all system components. In addition, we propose a domain-specific regularization algorithm for training the proposed multi-task architecture, which can be used to flexibly adjust the desired performance for each task. (A toy weighted multi-task objective appears after this list.)
  5. Dynamically reallocating computing resources to handle bursty workloads is a common practice for web applications (e.g., e-commerce) in clouds. However, our empirical analysis of a standard n-tier benchmark application (RUBBoS) shows that simply scaling an n-tier application by reallocating hardware resources, without quickly adapting soft resources (e.g., server threads, connections), may lead to large response time fluctuations. This is because soft resources control the workload concurrency of component servers in the system: adding or removing hardware resources such as Virtual Machines (VMs) can implicitly change the workload concurrency of dependent servers, causing either under- or over-utilization of the critical hardware resource in the system. To quickly identify the optimal soft resource allocation of each server and stabilize response time fluctuations, we propose a novel Scatter-Concurrency-Throughput (SCT) model based on monitoring each server's real-time concurrency and throughput. We then implement a Concurrency-aware system Scaling (ConScale) framework that integrates the SCT model to quickly adapt the soft resource allocations of key servers during the system scaling process. Our experiments using six realistic bursty workload traces show that ConScale can effectively mitigate the response time fluctuations of the target web application compared to state-of-the-art cloud scaling strategies such as EC2-AutoScaling. (A toy concurrency-tuning sketch appears after this list.)
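
To make the latency-constrained placement idea in item 1 concrete, here is a minimal sketch in Python. It is not the paper's analytic model: it assumes a toy interference model in which time-multiplexed workloads on the same accelerator inflate each other's latency in proportion to the number of co-located workloads, and it packs workloads first-fit under that assumption. All names (Workload, Accelerator, place) and the example numbers are hypothetical.

```python
# Toy latency-aware placement on shared accelerators (illustrative only).
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Workload:
    name: str
    solo_latency_ms: float   # latency when running alone on the device
    slo_ms: float            # latency target

@dataclass
class Accelerator:
    name: str
    hosted: List[Workload] = field(default_factory=list)

def predicted_latency(w: Workload, device: Accelerator) -> float:
    # Assumed model: round-robin time multiplexing, so latency grows with the
    # number of co-located workloads (counting w itself once it is placed).
    return w.solo_latency_ms * (len(device.hosted) + 1)

def place(workloads: List[Workload], devices: List[Accelerator]) -> Dict[str, Optional[str]]:
    placement: Dict[str, Optional[str]] = {}
    for w in sorted(workloads, key=lambda x: x.slo_ms):      # tightest SLO first
        for d in devices:
            # Accept only if w and every already-hosted workload stay within SLO.
            fits = predicted_latency(w, d) <= w.slo_ms and all(
                v.solo_latency_ms * (len(d.hosted) + 1) <= v.slo_ms for v in d.hosted
            )
            if fits:
                d.hosted.append(w)
                placement[w.name] = d.name
                break
        else:
            placement[w.name] = None                          # no feasible device
    return placement

if __name__ == "__main__":
    devs = [Accelerator("gpu-0"), Accelerator("tpu-0")]
    jobs = [Workload("detector", 12.0, 30.0), Workload("classifier", 5.0, 20.0),
            Workload("segmenter", 18.0, 25.0)]
    print(place(jobs, devs))
```

A real system would replace the multiplicative slowdown with validated per-accelerator interference models, which is the part the paper contributes.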
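
The keyframe-budget idea in item 2 can be illustrated with a toy greedy selector; it is not AdaptSLAM's theoretically grounded method. It scores each keyframe by the new landmarks it covers per kilobyte transmitted and stops once the budget is exhausted. The Keyframe fields and the coverage objective are assumptions made purely for illustration.

```python
# Toy keyframe subset selection under a transmission budget (illustrative only).
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Keyframe:
    kf_id: int
    size_kb: float          # cost to transmit this keyframe
    landmarks: Set[int]     # map points observed by this keyframe

def select_keyframes(keyframes: List[Keyframe], budget_kb: float) -> List[int]:
    chosen: List[int] = []
    covered: Set[int] = set()
    remaining = list(keyframes)
    spent = 0.0
    while remaining:
        # Best "new landmarks per KB" ratio among the remaining keyframes.
        best = max(remaining, key=lambda k: len(k.landmarks - covered) / k.size_kb)
        if spent + best.size_kb > budget_kb or not (best.landmarks - covered):
            break
        chosen.append(best.kf_id)
        covered |= best.landmarks
        spent += best.size_kb
        remaining.remove(best)
    return chosen

if __name__ == "__main__":
    kfs = [Keyframe(0, 40, {1, 2, 3}), Keyframe(1, 25, {3, 4}),
           Keyframe(2, 30, {1, 2}), Keyframe(3, 20, {5})]
    print(select_keyframes(kfs, budget_kb=70))   # [1, 2] with this toy data
```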
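
Item 3 builds on Chua's chaotic circuit. The sketch below integrates the standard dimensionless Chua equations with forward Euler and crudely quantizes the trajectory into a keystream for an XOR toy cipher. The parameter values are common textbook choices; none of this reflects the paper's circuit implementation, its attack categories, or its remedies.

```python
# Toy chaotic keystream from Chua's equations (illustrative only, not secure).
def chua_step(x, y, z, dt=1e-3, alpha=15.6, beta=28.0, m0=-1.143, m1=-0.714):
    # Piecewise-linear Chua diode nonlinearity.
    h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    dx = alpha * (y - x - h)
    dy = x - y + z
    dz = -beta * y
    return x + dt * dx, y + dt * dy, z + dt * dz

def keystream(n_bytes, state=(0.7, 0.0, 0.0)):
    x, y, z = state
    out = bytearray()
    for _ in range(n_bytes):
        for _ in range(100):                  # several integration steps per byte
            x, y, z = chua_step(x, y, z)
        out.append(int(abs(x) * 1e6) % 256)   # crude quantization of the state
    return bytes(out)

if __name__ == "__main__":
    msg = b"pulse msg"                        # e.g. the pulse message mentioned above
    cipher = bytes(m ^ k for m, k in zip(msg, keystream(len(msg))))
    print(cipher.hex())
```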
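
Item 4 mentions a domain-specific regularization algorithm that can flexibly adjust the desired performance of each task. As a loose, purely illustrative stand-in (not the paper's algorithm), the snippet below shows a weighted multi-task objective in which per-task coefficients trade performance between tasks; the task names and numbers are invented.

```python
# Toy weighted multi-task objective (illustrative stand-in only).
from typing import Dict

def multitask_loss(per_task_losses: Dict[str, float], weights: Dict[str, float]) -> float:
    # Raising a task's weight prioritizes that task during training.
    return sum(weights[task] * loss for task, loss in per_task_losses.items())

if __name__ == "__main__":
    losses = {"task_a": 0.32, "task_b": 0.55}
    print(multitask_loss(losses, {"task_a": 1.0, "task_b": 2.0}))   # 1.42
```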
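
Finally, the SCT idea in item 5 rests on monitoring each server's real-time concurrency and throughput. The sketch below is a simplified stand-in, not the ConScale implementation: it averages monitored (concurrency, throughput) samples per concurrency level and returns the level beyond which throughput stops improving by a chosen margin. The threshold and the sample data are made up.

```python
# Toy concurrency-setting heuristic from monitored samples (illustrative only).
from collections import defaultdict
from statistics import mean

def optimal_concurrency(samples, gain_threshold=0.05):
    """samples: iterable of (concurrency, throughput_req_per_s) pairs."""
    by_level = defaultdict(list)
    for c, tput in samples:
        by_level[c].append(tput)
    curve = sorted((c, mean(v)) for c, v in by_level.items())
    best_c, best_tput = curve[0]
    for c, tput in curve[1:]:
        # Stop once extra concurrency yields less than the required relative gain.
        if tput < best_tput * (1 + gain_threshold):
            break
        best_c, best_tput = c, tput
    return best_c

if __name__ == "__main__":
    monitored = [(4, 210), (4, 205), (8, 390), (8, 400), (16, 620),
                 (32, 640), (64, 600)]
    print(optimal_concurrency(monitored))   # 16 with this toy data
```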