Title: QCell: Self-optimization of Softwarized 5G Networks through Deep Q-learning
With the unprecedented rise in traffic demand and mobile subscribers, real-time fine-grained optimization frameworks are crucial for the future of cellular networks. Indeed, rigid and inflexible infrastructures are incapable of adapting to the massive amounts of data forecast for 5G networks. Network softwarization, i.e., the approach of controlling “everything” via software, endows the network with unprecedented flexibility, allowing it to run optimization and machine learning-based frameworks for flexible adaptation to current network conditions and traffic demand. This work presents QCell, a Deep Q-Network-based optimization framework for softwarized cellular networks. QCell dynamically allocates slicing and scheduling resources to the network base stations, adapting to varying interference conditions and traffic patterns. QCell is prototyped on Colosseum, the world’s largest network emulator, and tested in a variety of network conditions and scenarios. Our experimental results show that using QCell significantly improves users’ throughput (by up to 37.6%) and reduces the size of transmission queues (by up to 11.9%), decreasing service latency.
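To make the Deep Q-Network idea behind QCell concrete, here is a minimal tabular Q-learning sketch. The abstract does not disclose QCell's neural architecture, state encoding, or reward signal, so the states (coarse traffic-load levels) and actions (candidate slicing/scheduling allocations) below are illustrative assumptions, not the paper's actual design.

```python
# Hedged sketch: the tabular Q-learning update that underlies Deep Q-Networks.
# In a DQN the table below is replaced by a neural network; QCell's actual
# state, action, and reward definitions are assumptions here.

N_STATES, N_ACTIONS = 4, 3   # assumed: traffic-load levels x candidate allocations
ALPHA, GAMMA = 0.1, 0.9      # learning rate and discount factor (illustrative)
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def q_update(s, a, reward, s_next):
    """One Bellman backup: move Q(s,a) toward reward + gamma * max_a' Q(s',a')."""
    td_target = reward + GAMMA * max(Q[s_next])
    Q[s][a] += ALPHA * (td_target - Q[s][a])

# Toy interaction loop: allocation 2 yields high throughput in state 0,
# so its Q-value converges toward the reward of 1.0.
for _ in range(200):
    q_update(0, 2, reward=1.0, s_next=1)
```

After enough updates, the agent's greedy policy in state 0 picks allocation 2, mirroring how QCell would steer slicing and scheduling toward high-throughput configurations.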
Journal Name: IEEE Globecom 2021
Sponsoring Org: National Science Foundation
More Like This
  1. Given an urban development plan and the historical traffic observations over the road network, the Conditional Urban Traffic Estimation problem aims to estimate the resulting traffic status prior to the deployment of the plan. This problem is of great importance to urban development and transportation management, yet is very challenging because the plan would change the local travel demands drastically and the new travel demand pattern might be unprecedented in the historical data. To tackle these challenges, we propose a novel Conditional Urban Traffic Generative Adversarial Network (Curb-GAN), which provides traffic estimations in consecutive time slots based on different (unprecedented) travel demands, thus enabling urban planners to accurately evaluate urban plans before deploying them. The proposed Curb-GAN adopts and advances the conditional GAN structure through a few novel ideas: (1) dealing with various travel demands as the "conditions" and generating corresponding traffic estimations, (2) integrating dynamic convolutional layers to capture the local spatial auto-correlations along the underlying road networks, (3) employing a self-attention mechanism to capture the temporal dependencies of the traffic across different time slots. Extensive experiments on two real-world spatio-temporal datasets demonstrate that our Curb-GAN outperforms major baseline methods in estimation accuracy under various conditions and can produce more meaningful estimations.
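The core conditional-GAN mechanism in Curb-GAN can be sketched minimally: the generator consumes a noise vector concatenated with a "condition" (a travel-demand vector) and emits a traffic estimation. The paper's dynamic convolutions and self-attention layers are omitted here; the single random linear layer and all dimensions below are illustrative stand-ins, not the paper's model.

```python
import math
import random

# Hedged sketch of conditional generation: the "condition" (travel demand)
# is concatenated with noise z before entering the generator. Dimensions
# and the single linear layer are assumptions for illustration only.

random.seed(0)
Z_DIM, COND_DIM, OUT_DIM = 8, 4, 16
W = [[random.gauss(0, 0.1) for _ in range(OUT_DIM)]
     for _ in range(Z_DIM + COND_DIM)]

def generate(z, demand_condition):
    """Map (noise, travel demand) -> a bounded synthetic traffic-status vector."""
    x = z + demand_condition   # concatenation: the condition enters here
    return [math.tanh(sum(xi * W[i][j] for i, xi in enumerate(x)))
            for j in range(OUT_DIM)]

z = [random.gauss(0, 1) for _ in range(Z_DIM)]
traffic = generate(z, demand_condition=[1.0, 0.0, 0.0, 0.5])
```

Changing the demand vector while holding z fixed yields different traffic estimations, which is exactly the property that lets a planner evaluate alternative (unprecedented) urban plans.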
  2. The rapid growth of mobile data traffic is straining cellular networks. A natural approach to alleviate cellular network congestion is to use, in addition to the cellular interface, secondary interfaces such as WiFi, dynamic spectrum, and mmWave to aid cellular networks in handling mobile traffic. The fundamental question now becomes: how should traffic be distributed over different interfaces, taking into account different application QoS requirements and the diverse nature of radio interfaces? To this end, we propose the Discounted Rate Utility Maximization (DRUM) framework with interface costs as a means to quantify application preferences in terms of throughput, delay, and cost. The flow rate allocation problem can be formulated as a convex optimization problem. However, solving this problem requires non-causal knowledge of the time-varying capacities of all radio interfaces. To this end, we propose an online predictive algorithm that exploits the predictability of wireless connectivity for a small look-ahead window w. We show that, under some mild conditions, the proposed algorithm achieves a constant competitive ratio independent of the time horizon T. Furthermore, the competitive ratio approaches 1 as the prediction window increases. We also propose another predictive algorithm based on the "Receding Horizon Control" principle from control theory that performs very well in practice. Numerical simulations serve to validate our formulation, by showing that under the DRUM framework: the more delay-tolerant the flow, the less it uses the cellular network, preferring to transmit in high rate bursts over the secondary interfaces. Conversely, delay-sensitive flows consistently transmit irrespective of different interfaces' availability. Simulations also show that the proposed online predictive algorithms have a near-optimal performance compared to the offline prescient solution under all considered scenarios.
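The receding-horizon behavior described above (delay-tolerant flows deferring traffic and bursting over secondary interfaces) can be sketched with a toy rule: with a lookahead window w of predicted WiFi capacities, transmit the backlog only in a slot whose predicted capacity is the best in sight. The actual DRUM framework solves a convex utility maximization with interface costs; the greedy rule and all numbers below are illustrative simplifications.

```python
# Hedged sketch of the receding-horizon idea: defer until the best slot
# within the lookahead window, then transmit in a burst. This is a toy
# stand-in for DRUM's convex-optimization-based predictive algorithm.

def plan(demand_per_slot, cap_wifi, w=3):
    """Greedy receding-horizon plan: send the backlog only in a slot whose
    predicted WiFi capacity is the maximum of its lookahead window."""
    backlog, sent = 0.0, []
    for t in range(len(cap_wifi)):
        backlog += demand_per_slot
        window = cap_wifi[t:t + w]          # predicted capacities in sight
        if cap_wifi[t] == max(window):      # best slot in the window: burst
            tx = min(backlog, cap_wifi[t])
        else:                               # a better slot is coming: defer
            tx = 0.0
        backlog -= tx
        sent.append(tx)
    return sent

tx = plan(demand_per_slot=1.0, cap_wifi=[1.0, 5.0, 2.0, 2.0], w=3)
```

Note the bursty pattern: the flow sends nothing in slot 0 because the window reveals a much better slot ahead, matching the delay-tolerant behavior the abstract describes.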
  3. With unprecedented increases in traffic load in today's wireless networks, design challenges shift from the wireless network itself to the computational support behind the wireless network. In this vein, there is new interest in quantum-compute approaches because of their potential to substantially speed up processing, and so improve network throughput. However, quantum hardware that actually exists today is much more susceptible to computational errors than silicon-based hardware, due to the physical phenomena of decoherence and noise. This paper explores the boundary between the two types of computation (classical-quantum hybrid processing for optimization problems in wireless systems), envisioning how wireless can simultaneously leverage the benefits of both approaches. We explore the feasibility of a hybrid system with a real hardware prototype using one of the most advanced experimentally available techniques today, reverse quantum annealing. Preliminary results on a low-latency, large MIMO system envisioned in the 5G New Radio roadmap are encouraging, showing approximately 2–10× better performance in terms of processing time than prior published results.
  4. Abstract: Radio access network (RAN) in 5G is expected to satisfy the stringent delay requirements of a variety of applications. The packet scheduler plays an important role by allocating spectrum resources to user equipments (UEs) at each transmit time interval (TTI). In this paper, we show that optimal scheduling is a challenging combinatorial optimization problem, which is hard to solve within the channel coherence time with conventional optimization methods. Rule-based scheduling methods, on the other hand, are hard to adapt to the time-varying wireless channel conditions and various data request patterns of UEs. Recently, integrating artificial intelligence (AI) into wireless networks has drawn great interest from both academia and industry. In this paper, we incorporate deep reinforcement learning (DRL) into the design of cellular packet scheduling. A delay-aware cell traffic scheduling algorithm is developed to map the observed system state to a scheduling decision. Due to the huge state space, a recurrent neural network (RNN) is utilized to approximate the optimal action-policy function. Different from conventional rule-based scheduling methods, the proposed scheme can learn from the interactions with the environment and adaptively choose the best scheduling decision at each TTI. Simulation results show that the DRL-based packet scheduling can achieve the lowest average delay compared with several conventional approaches. Meanwhile, the UEs' average queue lengths can also be significantly reduced. The developed method also exhibits great potential in real-time scheduling in delay-sensitive scenarios.
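The per-TTI decision space the abstract describes can be illustrated with a simple delay-aware rule: score each UE by head-of-line delay weighted by channel quality and serve the top scorer. The paper learns this state-to-decision mapping with an RNN-based DRL policy; the hand-crafted score and the field names below are assumptions standing in for the learned policy.

```python
# Hedged sketch of one scheduling decision per TTI. The keys 'hol_delay'
# (head-of-line delay) and 'cqi' (channel quality) are illustrative
# assumptions; the paper replaces this fixed rule with a learned policy.

def schedule_tti(ues):
    """Return the index of the UE to serve this TTI, by delay-weighted
    channel quality."""
    scores = [u["hol_delay"] * u["cqi"] for u in ues]
    return max(range(len(ues)), key=lambda i: scores[i])

ues = [{"hol_delay": 4.0, "cqi": 0.5},
       {"hol_delay": 1.0, "cqi": 0.9},
       {"hol_delay": 3.0, "cqi": 0.8}]
chosen = schedule_tti(ues)   # UE 2: 3.0 * 0.8 = 2.4 is the top score
```

A DRL scheduler generalizes this by learning, from queue and channel observations, when to deviate from any single fixed rule, which is why it can outperform rule-based baselines across varying traffic patterns.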
  5. Offloading cellular traffic to WiFi networks plays an important role in alleviating the increasing burden on cellular networks. However, excessive traffic offloading brings severe packet collisions into a WiFi network due to its contention-based medium access scheme, which significantly reduces the WiFi network’s throughput. In this paper, we propose DAO, a device-to-device (D2D) communications assisted traffic offloading scheme to improve the amount of traffic offloaded from cellular to WiFi in integrated cellular and WiFi networks. Specifically, in an integrated cellular-WiFi network, the cellular network exploits D2D communications in licensed cellular bands to aggregate traffic from cellular users before offloading it to the WiFi network to reduce the number of contending users in WiFi access. The traffic offloading process in DAO is formulated as an optimization problem that jointly takes into account the activations of aggregation nodes (ANs) and the connections between ANs and offloading users to maximize the offloaded traffic while guaranteeing the long-term data rates required by the offloading users. Extensive simulation results reveal the significant performance gain achieved by DAO over the existing schemes.
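The joint AN-activation and user-assignment step in DAO can be sketched with a toy greedy matching: attach each offloading user to the aggregation node with the most spare capacity, or keep the user on cellular if no AN can guarantee its required rate. The paper formulates this jointly as an optimization problem; the greedy rule, capacities, and demands below are illustrative simplifications.

```python
# Hedged sketch of the DAO assignment step: greedily match offloading users
# to aggregation nodes (ANs) subject to each user's required rate. This
# simplification ignores the joint AN-activation optimization in the paper.

def greedy_offload(an_capacity, user_demand):
    """Assign each user to the AN with most spare capacity, or None
    (stay on cellular) if no AN can meet the user's required rate."""
    spare = list(an_capacity)
    assignment = []
    for d in user_demand:
        best = max(range(len(spare)), key=lambda i: spare[i])
        if spare[best] >= d:
            spare[best] -= d
            assignment.append(best)
        else:
            assignment.append(None)   # rate guarantee not met: stay on cellular
    return assignment

result = greedy_offload(an_capacity=[3.0, 2.0], user_demand=[2.0, 2.0, 2.0])
```

The third user stays on cellular because neither AN has enough spare capacity left, reflecting the long-term rate guarantee that bounds how much traffic DAO can offload.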