In this paper, we consider a large-scale heterogeneous mobile edge computing system in which each device's mean task arrival rate, mean service rate, mean energy consumption, and mean offloading latency are drawn from different bounded continuous probability distributions. This heterogeneity reflects diverse compute-intensive applications, mobile devices with different computing capabilities and battery efficiencies, and different types of wireless access networks (e.g., 4G/5G cellular networks, WiFi). We consider a class of distributed threshold-based randomized offloading policies and develop a threshold update algorithm in which each device adjusts its threshold based on its computational load, average offloading latency, average energy consumption, and the edge server's processing time, which in turn depends on the server utilization. We show that a unique Mean-Field Nash Equilibrium (MFNE) always exists in the large-system limit when the task processing times of mobile devices follow an exponential distribution. This is achieved by carefully partitioning the space of mean arrival rates to account for the discrete structure of each device's optimal threshold. Moreover, we show that our proposed threshold update algorithm converges to the MFNE. Finally, we perform simulations that corroborate our theoretical results and demonstrate that our algorithm still performs well in more general setups driven by collected real-world data, outperforming the well-known probabilistic offloading policy.
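To make the policy class concrete, here is a minimal Python sketch of a threshold-based randomized offloading rule together with a best-response threshold update. The cost function, the `server_delay` input, and all parameter names are illustrative assumptions; the paper's exact objective and the MFNE fixed-point analysis are not reproduced here.

```python
import random

class Device:
    """One device under a threshold-based randomized offloading policy (sketch)."""

    def __init__(self, arrival_rate, service_rate, energy_per_task, offload_latency):
        self.lam = arrival_rate         # mean task arrival rate
        self.mu = service_rate          # mean local service rate
        self.energy = energy_per_task   # mean energy cost per offloaded task
        self.latency = offload_latency  # mean wireless offloading latency
        self.n = 1                      # queue-length threshold
        self.p = 0.5                    # offload probability exactly at the threshold

    def offload(self, queue_len):
        """Serve locally below the threshold, offload above it, randomize at it."""
        if queue_len < self.n:
            return False
        if queue_len > self.n:
            return True
        return random.random() < self.p

    def update_threshold(self, server_delay, w_energy=1.0):
        """Best-respond to the current edge-server delay (which grows with
        server utilization) by scanning a small range of candidate thresholds."""
        def cost(n):
            # Toy per-task cost, NOT the paper's exact objective: queueing
            # delay for tasks kept local, plus latency/energy/server time
            # for the share offloaded once the queue reaches n.
            p_off = (self.lam / self.mu) ** n if self.mu > self.lam else 1.0
            local = (1.0 - p_off) * n / (2.0 * max(self.mu - self.lam, 1e-9))
            remote = p_off * (self.latency + server_delay + w_energy * self.energy)
            return local + remote
        self.n = min(range(1, 50), key=cost)
```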
CASTLE over the Air -- Distributed Scheduling for Cellular Data Transmissions (demo)
We present a demonstration of CASTLE (Client-side Adaptive Scheduler That minimizes Load and Energy), a fully distributed scheduling framework that jointly optimizes the spectral efficiency of cellular networks and the battery consumption of smart devices. We focus on scenarios in which many smart devices compete for cellular resources in the same base station: spreading transmissions out over time, so that only a few devices transmit at once, improves both spectral efficiency and battery life. To this end, we devise two novel features in CASTLE. First, our machine learning algorithm explicitly accounts for inter-cell interference to estimate cellular load accurately. Second, we propose a fully distributed scheduling algorithm that coordinates transmissions between clients based on the load level each client estimates locally. Our formulation for minimizing battery consumption at each device leads to an optimized backoff-based algorithm that fits practical environments. Comprehensive experimental results show that CASTLE's load estimation is up to 91% accurate, and that CASTLE achieves higher spectral efficiency with less battery consumption than existing centralized scheduling algorithms as well as a distributed CSMA-like protocol. Furthermore, we develop a lightweight SDK that can expedite the deployment of CASTLE on smart devices, and evaluate it in a commercial LTE network.
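As a rough illustration of the backoff-based design, the sketch below shows a client-side transmit loop that defers whenever the locally estimated load is high. The `estimate_load` callable stands in for CASTLE's ML-based, interference-aware estimator, and the threshold and exponential backoff schedule are assumptions for illustration, not CASTLE's optimized parameters.

```python
import random
import time

def castle_transmit(send, estimate_load, high_load=0.7,
                    base_backoff=0.1, max_backoff=3.2):
    """Transmit when the cell looks lightly loaded; otherwise back off so
    that only a few devices end up transmitting at once."""
    backoff = base_backoff
    while True:
        if estimate_load() < high_load:
            send()  # few competitors: high spectral efficiency, short radio-on time
            return
        # Cell looks busy: defer for a random interval, then re-estimate.
        time.sleep(random.uniform(0.0, backoff))
        backoff = min(2.0 * backoff, max_backoff)
```

Note that coordination in this shape is implicit: each client reacts only to its own local load estimate, so no client-to-client signaling is needed.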
- Award ID(s): 1738097
- PAR ID: 10150575
- Date Published: 2019
- Journal Name: MobiSys '19: Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services
- Page Range / eLocation ID: 673–674
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Improving energy efficiency has become necessary to enable sustainable computational science. At the same time, scientific workflows are key to facilitating distributed computing in virtually all domain sciences. As data and computational requirements increase, I/O-intensive workflows have become prevalent. In this work, we evaluate the ability of two popular energy-aware workflow scheduling algorithms to provide effective schedules for this class of workflow applications, that is, schedules that strike a good compromise between workflow execution time and energy consumption. These two algorithms make decisions based on a widely used power consumption model that simply assumes a linear correlation with CPU usage. Previous work has shown this model to be inaccurate, in particular for modeling the power consumption of I/O-intensive workflow executions, and has proposed a more accurate model. Evaluating the two algorithms under this accurate model, we find that they can underestimate power consumption by up to 360% when making their decisions, which makes it unclear how well they would fare in practice. To evaluate the benefit of using the more accurate power consumption model in practice, we propose a simple scheduling algorithm that relies on it to balance the I/O load across the available compute resources. Experimental results show that this algorithm achieves more desirable compromises between energy consumption and workflow execution time than the two popular algorithms.
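  As a hedged sketch of the two ideas in play, the snippet below contrasts the widely used linear CPU-based power model with a greedy I/O-balancing placement in the spirit of the proposed algorithm. The task and host fields are assumptions for illustration, not an implementation from the paper.

  ```python
  def power_cpu_only(cpu_util, p_idle=90.0, p_max=180.0):
      """The widely used linear model: power scales with CPU usage only.
      Prior work found this underestimates the power drawn by
      I/O-intensive executions, the inaccuracy the abstract refers to."""
      return p_idle + (p_max - p_idle) * cpu_util

  def schedule_io_balanced(tasks, hosts):
      """Greedily place each task on the host with the least accumulated
      I/O load (a sketch of balancing I/O load across resources)."""
      io_load = {h: 0.0 for h in hosts}
      placement = {}
      # Place heavy I/O tasks first so they spread across hosts.
      for task in sorted(tasks, key=lambda t: t["io_volume"], reverse=True):
          host = min(hosts, key=io_load.get)
          placement[task["name"]] = host
          io_load[host] += task["io_volume"]
      return placement
  ```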
- We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements, motivated by the federated learning (FL) framework. We propose a distributed communication-efficient and local differentially private stochastic gradient descent (CLDP-SGD) algorithm and analyze its communication, privacy, and convergence trade-offs. Since each iteration of CLDP-SGD aggregates the client-side local gradients, we develop (optimal) communication-efficient schemes for mean estimation for several ℓp spaces under local differential privacy (LDP). To overcome the performance limitations of LDP, CLDP-SGD takes advantage of the inherent privacy amplification provided by client subsampling and by data subsampling at each selected client (through SGD), as well as the recently developed shuffled model of privacy. For convex loss functions, we prove that the proposed CLDP-SGD algorithm matches the known lower bounds on centralized private ERM while using a finite number of bits per iteration for each client, i.e., effectively getting communication efficiency for “free”. We also provide preliminary experimental results supporting the theory.
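  The following is a minimal one-round sketch of a CLDP-SGD-style update, assuming `model` is a NumPy parameter vector and each client object exposes a hypothetical `local_gradient` method. Gaussian noise stands in for the paper's finite-bit LDP mean-estimation schemes, and the shuffle merely models the anonymizing shuffler.

  ```python
  import numpy as np

  def cldp_sgd_round(model, clients, lr=0.1, clip=1.0, sigma=1.0, frac=0.1):
      """One sketched round: subsample clients, privatize local gradients
      under LDP, shuffle the reports, and take a mean-estimate SGD step."""
      chosen = [c for c in clients if np.random.rand() < frac]  # amplification 1
      reports = []
      for c in chosen:
          g = c.local_gradient(model)  # data subsampling inside (SGD): amplification 2
          g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # bound sensitivity
          reports.append(g + np.random.normal(0.0, sigma, g.shape))  # LDP noise
      np.random.shuffle(reports)  # shuffled model of privacy
      if reports:
          model = model - lr * np.mean(reports, axis=0)  # server-side mean estimate
      return model
  ```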
- Wi-Fi is one of the key wireless technologies for the Internet of Things (IoT) owing to its ubiquity. Low-power operation of commercial Wi-Fi enabled IoT modules (typically powered by replaceable batteries) is critical to achieving a long battery life while maintaining connectivity, thereby reducing the cost and frequency of maintenance. In this work, we focus on the sparse periodic uplink traffic scenario common in IoT. Through extensive experiments with a state-of-the-art Wi-Fi enabled IoT module (Texas Instruments SimpleLink CC3235SF), we study the performance of the power save mechanism (PSM) in the IEEE 802.11 standard and show that, while running thin uplink traffic, the module's battery life drops to ~30% of its battery life on an idle connection, even when IEEE 802.11 PSM is used. Focusing on sparse uplink traffic, a prominent traffic scenario for IoT (e.g., periodic measurements, keep-alive mechanisms), we design a simulation framework for single-user sparse uplink traffic in ns-3, develop a detailed, platform-agnostic, and accurate power consumption model within the framework, and calibrate it to the CC3235SF. Subsequently, we present five potential power optimization strategies (including standard IEEE 802.11 PSM) and use simulation results to analyze the sensitivity of power consumption to specific network characteristics (e.g., round-trip time (RTT) and the relative timing between TCP segment transmissions and beacon receptions), yielding key insights. Finally, we propose a standard-compliant, client-side, cross-layer power-saving optimization algorithm that can be implemented on client IoT modules. We show that the proposed algorithm extends battery life by 24%, 26%, and 31% on average for sparse TCP uplink traffic with 5 TCP segments per second in networks with constant RTT values of 25 ms, 10 ms, and 5 ms, respectively.
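  A minimal sketch of the cross-layer timing idea follows, assuming the client can read the AP's beacon schedule and an RTT estimate. The alignment rule, names, and defaults (102.4 ms beacon interval) are illustrative assumptions, not the paper's exact algorithm.

  ```python
  def next_tx_time(now, last_beacon, beacon_interval=0.1024,
                   rtt=0.025, guard=0.002):
      """Pick an uplink transmit time so the TCP ACK (arriving ~RTT later)
      lands just before a beacon, letting the radio sleep instead of
      idling awake between the segment and the beacon."""
      target = last_beacon + beacon_interval - rtt - guard
      while target < now:            # too late for this interval:
          target += beacon_interval  # slip to the next beacon
      return target
  ```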
- The concept of Industry 4.0 unifies the industrial Internet of Things (IoT), cyber-physical systems, and data-driven business modeling to improve the production efficiency of factories. To ensure high production efficiency, Industry 4.0 requires industrial IoT that is adaptable, scalable, real-time, and reliable. Recent successful industrial wireless standards such as WirelessHART have emerged as a feasible approach to such industrial IoT. For reliable and real-time communication in highly unreliable environments, they adopt a high degree of redundancy. While a high degree of redundancy is crucial to real-time control, under a centralized approach it causes a huge waste of energy, bandwidth, and time, and is therefore less suitable for scalability and for handling network dynamics. To address these challenges, we propose DistributedHART, a distributed real-time scheduling system for WirelessHART networks. The essence of our approach is to adopt local (node-level) scheduling through a time-window allocation among the nodes that allows each node to schedule its transmissions locally and online using a real-time scheduling policy. DistributedHART obviates the need to create and disseminate a central global schedule, thereby significantly reducing resource usage and enhancing scalability. To our knowledge, it is the first distributed real-time multi-channel scheduler for WirelessHART. We have implemented DistributedHART and experimented on a 130-node testbed. Our testbed experiments as well as simulations show at least 85% less energy consumption in DistributedHART compared to the existing centralized approach, while ensuring similar schedulability.
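  As a hedged sketch of the node-local scheduling idea, the snippet below orders one node's pending packets by deadline (an EDF policy) and maps them onto the slots and channels of the node's allotted time window. The window allocation itself and the packet fields are assumed for illustration.

  ```python
  def schedule_window(window_slots, channels, queue):
      """Schedule this node's transmissions online within its own time
      window, with no central global schedule to create or disseminate."""
      pending = sorted(queue, key=lambda p: p["deadline"])  # EDF ordering
      schedule = []
      for slot in window_slots:
          for ch in channels:
              if not pending:
                  return schedule
              schedule.append((slot, ch, pending.pop(0)["id"]))
      return schedule
  ```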