

Title: Optimized Compression Policy for Flying Ad hoc Networks
Managing energy consumption for computation and communication is a key requirement for flying ad hoc networks (FANETs) to prolong the network lifetime. In many applications, the main role of drones is to collect imagery and relay it to a ground station for further processing and decision making. In this paper, we present a predictive compression policy that maximizes end-to-end image quality penalized by communication and computation costs. The idea is to predict the number of remaining links to the destination for a given routing algorithm and to use this prediction to re-compress image frames at intermediate nodes so that the overall energy consumption is minimized. Numerical results confirm that this method performs within 4% of the global optimum and outperforms current fixed-rate policies by a significant margin.
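The hop-aware re-compression decision the abstract describes can be sketched as a small cost minimization; all cost constants, the linear quality/size model, and the function names below are illustrative assumptions, not the paper's actual parameters or policy.

```python
# Hypothetical sketch of a per-hop re-compression policy: given the
# predicted number of remaining hops to the ground station, choose the
# compression quality minimizing total (computation + relay) energy
# plus a penalty for image-quality loss. Constants are invented.

QUALITIES = [0.5, 0.7, 0.9, 1.0]   # candidate compression qualities
E_COMPRESS = 0.2                   # energy to re-compress one frame (J, assumed)
E_TX_PER_BIT = 1e-7                # transmit energy per bit per hop (J, assumed)
FRAME_BITS = 8_000_000             # raw frame size in bits (assumed)
LAMBDA = 5.0                       # weight on quality loss (assumed)

def compressed_bits(quality):
    # Toy model: compressed size shrinks linearly with quality.
    return FRAME_BITS * quality

def policy(remaining_hops):
    """Choose the quality minimizing energy plus quality penalty."""
    def cost(q):
        energy = E_COMPRESS + remaining_hops * E_TX_PER_BIT * compressed_bits(q)
        quality_loss = LAMBDA * (1.0 - q)
        return energy + quality_loss
    return min(QUALITIES, key=cost)
```

With these toy constants the policy keeps full quality near the destination (`policy(1)` returns `1.0`) and compresses more aggressively when many relay hops remain (`policy(10)` returns `0.5`), which is the qualitative behavior the abstract describes.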
Award ID(s):
1755984
PAR ID:
10133282
Author(s) / Creator(s):
Date Published:
Journal Name:
16th IEEE Annual Consumer Communications & Networking Conference (CCNC)
Page Range / eLocation ID:
1 to 2
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. With the rapid growth of wireless compute-intensive services (such as image recognition, real-time language translation, or other artificial intelligence applications), efficient wireless algorithm design should not only address when and which users should transmit at each time instance (referred to as wireless scheduling) but also determine where the computation should be executed (referred to as offloading decision) with the goal of minimizing both computing latency and energy consumption. Despite the presence of a variety of earlier works on the efficient offloading design in wireless networks, to the best of our knowledge, there does not exist a work on the realistic user-level dynamic model, where each incoming user demands a heavy computation and leaves the system once its computing request is completed. To this end, we formulate a problem of an optimal offloading design in the presence of dynamic compute-intensive applications in wireless networks. Then, we show that there exists a fundamental logarithmic energy-workload tradeoff for any feasible offloading algorithm, and develop an optimal threshold-based offloading algorithm that achieves this fundamental logarithmic bound.
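A threshold-based offloading rule of the kind this abstract proves optimal can be sketched as follows; the energy costs, slotted-time model, and threshold value are assumptions for illustration, not the paper's analysis.

```python
# Illustrative threshold-based offloading: each arriving job is queued
# locally while the backlog is below a threshold, and offloaded to the
# edge server once the backlog reaches it. Energy constants are assumed.

LOCAL_ENERGY = 1.0    # energy per job executed locally (assumed)
OFFLOAD_ENERGY = 3.0  # energy per job offloaded (transmit + remote, assumed)

def simulate(arrivals, threshold, local_rate=1):
    """Return (total_energy, max_backlog) for a slotted arrival sequence.

    arrivals[t] = jobs arriving at slot t; the local processor
    completes `local_rate` jobs per slot.
    """
    backlog, energy, max_backlog = 0, 0.0, 0
    for a in arrivals:
        for _ in range(a):
            if backlog < threshold:
                backlog += 1              # queue it locally
            else:
                energy += OFFLOAD_ENERGY  # ship it to the edge server
        done = min(backlog, local_rate)
        backlog -= done
        energy += done * LOCAL_ENERGY
        max_backlog = max(max_backlog, backlog)
    return energy, max_backlog
```

The tradeoff the abstract formalizes shows up directly: a small threshold pays more offload energy but caps the backlog (and hence delay), while a large threshold saves energy at the cost of unbounded local workload.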
  3. This paper describes a novel framework for executing a network of trained deep neural network (DNN) models on commercial off-the-shelf devices deployed in an IoT environment. The scenario consists of two devices connected by a wireless network: a user-end device (U), which is a low-end, energy- and performance-limited processor, and a cloudlet (C), which is a substantially higher-performance, energy-unconstrained processor. The goal is to distribute the computation of the DNN models between U and C to minimize the energy consumption of U while taking into account the variability in the wireless channel delay and the performance overhead of executing models in parallel. The proposed framework was implemented using an NVIDIA Jetson Nano as U and a Dell workstation with a Titan Xp GPU as C. Experiments demonstrate significant improvements in both the energy consumption of U and the processing delay.
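The core decision in such a framework is picking a split point in the layer chain so that early layers run on U and the rest on C. A minimal sketch, assuming per-layer energy costs and activation sizes are known (the numbers and names below are hypothetical, not measured Jetson/Titan values):

```python
# Choose a split index k so layers [0, k) run on the user device U and
# layers [k, n) on the cloudlet C, minimizing U's energy: the energy U
# spends computing its layers plus the radio energy to send the
# activation tensor that crosses the link at the split.

def best_split(layer_energy_u, activation_bits, tx_energy_per_bit):
    """Return (k, energy) minimizing U's energy over all split points.

    k = 0 offloads everything; k = n runs fully on U.
    activation_bits[k] is the tensor size crossing the link when
    splitting at k (activation_bits[n] = 0: nothing is transmitted).
    """
    n = len(layer_energy_u)
    best_k, best_e = 0, float("inf")
    for k in range(n + 1):
        compute = sum(layer_energy_u[:k])               # layers U executes
        radio = tx_energy_per_bit * activation_bits[k]  # uplink cost at the cut
        total = compute + radio
        if total < best_e:
            best_k, best_e = k, total
    return best_k, best_e
```

Because activations often shrink deeper in the network while per-layer compute cost grows, the minimum typically lands at an interior split rather than at all-local or all-remote execution.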
  4. Autonomous mobile robots (AMRs) have the capability to execute a wide range of tasks with minimal human intervention. However, one of the major limitations of AMRs is their limited battery life, which often results in interruptions to their task execution and the need to reach the nearest charging station. Optimizing energy consumption in AMRs has become a critical challenge in their deployment. Through empirical studies on real AMRs, we have identified a lack of coordination between computation and control as a major source of energy inefficiency. In this paper, we propose a comprehensive energy prediction model that provides real-time energy consumption for each component of the AMR. Additionally, we propose three path models to address the obstacle avoidance problem for AMRs. To evaluate the performance of our energy prediction and path models, we have developed a customized AMR called Donkey, which has the capability for fine-grained (millisecond-level) end-to-end power profiling. Our energy prediction model demonstrated an accuracy of over 90% in our evaluations. Finally, we applied our energy prediction model to obstacle avoidance and guided energy-efficient path selection, resulting in up to a 44.8% reduction in energy consumption compared to the baseline. 
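Energy-guided path selection of the kind the abstract evaluates can be sketched as scoring each candidate obstacle-avoidance path with a predicted energy and picking the cheapest; the per-meter and per-second coefficients here are invented for illustration, not Donkey's calibrated per-component model.

```python
# Toy energy prediction: total energy for a path is locomotion energy
# (proportional to distance) plus compute/perception energy
# (proportional to traversal time). Coefficients are assumed.

MOTION_J_PER_M = 12.0   # locomotion energy per meter (assumed)
COMPUTE_J_PER_S = 6.0   # compute/perception energy per second (assumed)

def predict_energy(length_m, duration_s):
    """Predicted total energy for one path (motion + computation)."""
    return MOTION_J_PER_M * length_m + COMPUTE_J_PER_S * duration_s

def pick_path(paths):
    """paths: dict name -> (length_m, duration_s); return cheapest path."""
    return min(paths, key=lambda p: predict_energy(*paths[p]))
```

Note the tradeoff this exposes: a longer detour can still win if a tighter path forces slow, compute-heavy maneuvering around the obstacle, which is why a model coupling computation and control matters.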
  5. In this paper, we propose a novel Spin-Transfer Torque Magnetic Random-Access Memory (STT-MRAM) array design that can simultaneously serve as non-volatile memory and implement reconfigurable in-memory logic operations without add-on logic circuits on the memory chip. The computed output can simply be read out like a typical MRAM bit-cell through the modified peripheral circuit. Such intrinsic in-memory computation can be used to process data locally and transfer the "cooked" data to the primary processing unit (i.e., CPU or GPU) for complex computation with high precision requirements. It greatly reduces power-hungry, long-distance data communication and further enables extreme parallelism within memory. In this work, we further propose an in-memory edge extraction algorithm as a case study to demonstrate the efficiency of the in-memory preprocessing methodology. The simulation results show that our edge extraction method reduces data communication by as much as 8x for grayscale images, thus greatly reducing system energy consumption. Meanwhile, the F-measure result shows only ~10% degradation compared to conventional edge detection operators such as Prewitt, Sobel, and Roberts.
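The data-reduction effect is easy to see in a software analogue of the preprocessing step: applying Prewitt gradient operators to a grayscale image and thresholding yields a 1-bit-per-pixel edge map in place of 8-bit pixels (the 8x figure above). This pure-Python sketch illustrates only that effect, not the STT-MRAM circuit itself.

```python
# Toy edge extraction with the Prewitt operators: compute horizontal
# and vertical gradients at each interior pixel and keep a pixel in the
# binary edge map if |gx| + |gy| meets a threshold. The edge map is the
# compact "cooked" data that would be sent to the host processor.

PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def edges(img, threshold):
    """Return a binary edge map for a 2-D list of grayscale pixels."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(PREWITT_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(PREWITT_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = 1 if abs(gx) + abs(gy) >= threshold else 0
    return out
```

On a tiny 4x4 image with a vertical step from 0 to 10, every interior pixel sits on the step and is marked as an edge, while the border (where the 3x3 window does not fit) stays zero.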