The evolution of cellular networks into dynamic, dense, and heterogeneous networks has introduced new challenges for cell resource optimization, especially in regions with imbalanced traffic load. Numerous load balancing schemes have been proposed to tackle this issue; however, they operate in a reactive manner that limits their ability to meet stringent quality-of-experience demands. To address this challenge, we propose a novel proactive load balancing scheme. Our framework jointly learns users' mobility and demand statistics to proactively cache future content during their stay at lightly loaded cells, which maximizes quality of experience and minimizes load. System-level simulations are performed and compared with state-of-the-art reactive schemes.
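The core idea can be illustrated with a minimal sketch (the class name, the first-order mobility model, and the load threshold are illustrative assumptions, not the paper's actual method): learn each user's handover transitions, predict the next cell, and pre-fetch content while the serving cell is lightly loaded.

```python
from collections import Counter, defaultdict

class ProactiveCacher:
    """Toy proactive caching: predict the next cell from observed
    handover history and pre-cache content while the current cell
    is lightly loaded (illustrative sketch only)."""

    def __init__(self, load_threshold=0.5):
        self.transitions = defaultdict(Counter)  # cell -> Counter of next cells
        self.load_threshold = load_threshold

    def observe_handover(self, src_cell, dst_cell):
        self.transitions[src_cell][dst_cell] += 1

    def predict_next_cell(self, current_cell):
        counts = self.transitions[current_cell]
        return counts.most_common(1)[0][0] if counts else None

    def should_precache(self, current_cell_load):
        # Prefetch only while the serving cell is lightly loaded,
        # shifting work away from the user's future (possibly busy) cell.
        return current_cell_load < self.load_threshold

cacher = ProactiveCacher()
for _ in range(3):
    cacher.observe_handover("cell_A", "cell_B")
cacher.observe_handover("cell_A", "cell_C")
print(cacher.predict_next_cell("cell_A"))  # most frequent next cell
print(cacher.should_precache(0.3))
```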
- PAR ID: 10372744
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Transactions on Emerging Telecommunications Technologies
- Volume: 31
- Issue: 2
- ISSN: 2161-3915
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Increased network-wide energy consumption is a paramount challenge that hinders wide-scale ultra-dense network (UDN) deployments. While several Energy Saving (ES) enhancement schemes have been proposed recently, these schemes share one common tendency: they operate in a reactive mode, i.e., to increase ES, cells are switched ON/OFF reactively in response to changing cell loads. Though significant ES gains have been reported for such ON/OFF schemes, their inherent reactiveness limits their ability to meet the extremely low latency and high QoS expected from future cellular networks, i.e., 5G and beyond. To address this challenge, in this paper we propose a novel user-mobility-prediction-based AUtonomous pROactive eneRgy sAving (AURORA) framework for future UDNs. Instead of passively observing changes in cell loads and then reacting to them, AURORA uses past handover (HO) traces to predict future cell loads. This prediction is then used to proactively schedule small cell sleep cycles. AURORA also incorporates the effect of Cell Individual Offsets (CIOs) for balancing load among cells to ensure QoS while maximizing ES. Extensive system-level simulations leveraging realistic SLAW-model-based mobility traces show that AURORA achieves significant energy reduction without noticeable impact on QoS.
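A minimal sketch of the prediction-then-sleep loop (all function names, the empirical HO transition model, and the sleep threshold are assumptions for illustration; AURORA's actual predictor is more sophisticated):

```python
from collections import Counter, defaultdict

def predict_cell_loads(ho_traces, current_users):
    """Estimate each cell's next-interval load as the expected number of
    users handed into it, using empirical HO transition frequencies.
    Illustrative stand-in for a mobility-prediction step."""
    trans = defaultdict(Counter)
    for src, dst in ho_traces:
        trans[src][dst] += 1
    predicted = Counter({cell: 0.0 for cell in current_users})
    for cell, n_users in current_users.items():
        total = sum(trans[cell].values())
        if total == 0:
            predicted[cell] += n_users  # no HO history: assume users stay
            continue
        for dst, cnt in trans[cell].items():
            predicted[dst] += n_users * cnt / total
    return dict(predicted)

def schedule_sleep(predicted_loads, sleep_threshold=1.0):
    """Proactively put cells with negligible predicted load to sleep."""
    return {c for c, load in predicted_loads.items() if load < sleep_threshold}

traces = [("s1", "macro"), ("s1", "macro"), ("macro", "s2")]
loads = predict_cell_loads(traces, {"s1": 4, "macro": 10})
print(schedule_sleep(loads))  # small cell s1 is predicted to empty out
```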
-
This paper studies how to provision edge computing and network resources for complex microservice-based applications (MSAs) in the face of uncertain and dynamic geo-distributed demands. The complex inter-dependencies between distributed microservice components make load balancing for MSAs extremely challenging, and dynamic geo-distributed demands exacerbate load imbalance and, consequently, congestion and performance loss. In this paper, we develop an edge resource provisioning model that accurately captures the inter-dependencies between microservices and their impact on load balancing across both computation and communication resources. We also propose a robust formulation that employs explicit risk estimation and optimization to hedge against potential worst-case load fluctuations, with a controlled robustness-resource trade-off. Using a data-driven approach, we provide a solution that estimates risk from measurement data of past load geo-distributions. Simulations with real-world datasets validate that our solution provides the robustness crucially needed in MSAs and performs superiorly compared to baselines that neglect either network or inter-dependency constraints.
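The risk-estimation idea can be illustrated with an empirical conditional value-at-risk (CVaR) over historical load samples; whether this paper uses CVaR specifically is an assumption here, but it is a standard way to provision for the worst-case tail rather than the mean, with the quantile level controlling the robustness-resource trade-off:

```python
def empirical_cvar(load_samples, alpha=0.9):
    """Empirical CVaR: the mean of the worst (1 - alpha) fraction of
    historical load observations. Provisioning capacity to this level
    hedges against worst-case fluctuations; alpha tunes the
    robustness-resource trade-off."""
    s = sorted(load_samples)
    k = max(1, int(round((1 - alpha) * len(s))))  # number of tail samples
    tail = s[-k:]
    return sum(tail) / len(tail)

history = [50, 55, 60, 52, 58, 95, 61, 57, 90, 54]
print(empirical_cvar(history, alpha=0.9))  # dominated by the load spikes
print(empirical_cvar(history, alpha=0.5))  # closer to typical demand
```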
-
Abstract: Graphical fluid simulations are CPU-bound. Parallelizing simulations across hundreds of cores in the computing cloud would make them faster, but requires evenly balancing load across nodes. Good load balancing depends either on manual decisions from experts, which are time-consuming and error-prone, or on dynamic approaches that estimate and react to future load, which are non-deterministic and hard to debug.
This paper proposes Birdshot scheduling, an automatic and purely static load balancing algorithm whose performance is close to expert decisions and reactive algorithms without their difficulty or complexity. Birdshot scheduling's key insight is to leverage the high‐latency, high‐throughput, full bisection bandwidth of cloud computing nodes. Birdshot scheduling splits the simulation domain into many micro‐partitions and statically assigns them to nodes randomly. Analytical results show that randomly assigned micro‐partitions balance load with high probability. The high‐throughput network easily handles the increased data transfers from micro‐partitions, and full bisection bandwidth allows random placement with no performance penalty. Overlapping the communications and computations of different micro‐partitions masks latency.
Experiments with particle-level set, SPH, FLIP, and explicit Eulerian methods show that Birdshot scheduling speeds up simulations by a factor of 2-3 and can outperform reactive scheduling algorithms. Birdshot scheduling performs within 21% of state-of-the-art dynamic methods that require running a second, parallel simulation. Unlike speculative algorithms, Birdshot scheduling is purely static: it requires no controller, no runtime data collection, no partition migration, and no support for these operations from the programmer.
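The key probabilistic claim, that many randomly placed micro-partitions balance load far better than a few coarse partitions, is easy to check numerically. The sketch below is illustrative (the skewed exponential cost model and partition counts are assumptions, not Birdshot's actual workload):

```python
import random

def max_over_mean_load(num_partitions, num_nodes, seed=0):
    """Randomly assign partitions with skewed per-partition cost to nodes
    and report max node load / mean node load; 1.0 is perfect balance."""
    rng = random.Random(seed)
    costs = [rng.expovariate(1.0) for _ in range(num_partitions)]  # skewed work
    loads = [0.0] * num_nodes
    for c in costs:
        loads[rng.randrange(num_nodes)] += c
    mean = sum(loads) / num_nodes
    return max(loads) / mean

coarse = max_over_mean_load(num_partitions=64, num_nodes=32)
micro = max_over_mean_load(num_partitions=4096, num_nodes=32)
print(round(coarse, 2), round(micro, 2))  # micro-partitions balance much better
```

With only two partitions per node the unluckiest node is badly overloaded; with ~128 micro-partitions per node the law of large numbers pulls every node close to the mean, which is exactly why random static placement suffices.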
-
5G-and-beyond communication networks must satisfy very low latency standards, high reliability, high-speed user connectivity, stronger security, improved capacity, and better service demands. Meeting such a wide range of KPIs (Key Performance Indicators) requires a smart, intelligent, and programmable solution for TSPs (Telecommunication Service Providers). Resource availability and quality sustainability are challenging parameters in a heterogeneous 5G environment. Programmable Dynamic Network Slicing (PDNS) is a key enabling technology that allows multiple tenants to bring their versatile applications simultaneously over shared physical infrastructure. Emerging technologies such as virtualized Software-Defined Networks (vSDN) and Artificial Intelligence (AI) play a pivotal supporting role in addressing the above constraints. Using the PDNS framework, we propose a novel slice backup algorithm that leverages a Deep Learning (DL) neural network to orchestrate network latency and load efficiently. Our model is trained using the available KPIs, and incoming traffic is analyzed. The proposed solution performs stable load balancing between shared slices, even under extreme conditions (slice unavailability), through intelligent resource allocation. The framework withstands service outages and always selects the most suitable slice as a backup. Our results show latency-aware resource distribution for better network stability.
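The backup-selection step can be sketched as follows; here the DL load forecast is abstracted into a precomputed `predicted_load` field, and all slice names and fields are hypothetical, so this is only a minimal stand-in for the paper's algorithm:

```python
def select_backup_slice(slices, failed_slice):
    """Pick the most suitable backup for a failed slice: among the
    remaining available slices, choose the one with the lowest
    predicted load (the DL forecast is assumed to be precomputed)."""
    candidates = [s for s in slices
                  if s["name"] != failed_slice and s["available"]]
    if not candidates:
        raise RuntimeError("no backup slice available: service outage")
    return min(candidates, key=lambda s: s["predicted_load"])["name"]

slices = [
    {"name": "embb-1",  "available": True,  "predicted_load": 0.7},
    {"name": "urllc-1", "available": True,  "predicted_load": 0.3},
    {"name": "mmtc-1",  "available": False, "predicted_load": 0.1},
]
print(select_backup_slice(slices, failed_slice="embb-2"))
```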
-
We consider the load-balancing design for forwarding incoming flows to access points (APs) in high-density wireless networks with both channel fading and flow-level dynamics, where each incoming flow has a certain amount of service demand and leaves the system once its service request is complete. Efficient load-balancing design is strongly needed to support high-quality wireless connections in high-density areas. In this work, we propose a Joint Load-Balancing and Scheduling (JLBS) algorithm that always forwards an incoming flow to the AP with the smallest workload in the presence of flow-level dynamics, while each AP always serves the flow with the best channel quality. Our analysis reveals that the proposed JLBS algorithm not only achieves maximum system throughput but also minimizes the total system workload in the heavy-traffic regime. Moreover, we observe from both theoretical and simulation results that the mean total workload under the proposed JLBS algorithm does not degrade as the number of APs increases, which is strongly desirable in high-density wireless networks.
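The two JLBS rules stated in the abstract (route to the least-loaded AP; serve the best-channel flow) can be sketched directly; the data structures and the linear service model below are illustrative assumptions, not the paper's system model:

```python
class AccessPoint:
    """Holds flows as [remaining_demand, channel_quality] pairs."""
    def __init__(self):
        self.flows = []

    @property
    def workload(self):
        return sum(demand for demand, _ in self.flows)

def jlbs_admit(aps, flow_demand, channel_quality):
    """Load-balancing rule: forward the incoming flow to the AP with
    the smallest current workload."""
    ap = min(aps, key=lambda a: a.workload)
    ap.flows.append([flow_demand, channel_quality])
    return ap

def serve_one_slot(ap, rate=1.0):
    """Scheduling rule: the AP serves the flow with the best channel
    quality; a flow leaves once its service demand is met."""
    if not ap.flows:
        return
    best = max(ap.flows, key=lambda f: f[1])
    best[0] -= rate * best[1]
    if best[0] <= 0:
        ap.flows.remove(best)

aps = [AccessPoint(), AccessPoint()]
jlbs_admit(aps, flow_demand=5.0, channel_quality=1.0)
jlbs_admit(aps, flow_demand=3.0, channel_quality=2.0)  # routed to the emptier AP
print([ap.workload for ap in aps])
```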