Millimeter-wave (mmWave) communication is anticipated to provide significant throughput gains in urban scenarios. To this end, network densification is a necessity to meet the high traffic volume generated by smartphones, tablets, and sensory devices while overcoming the large path loss and frequent blockages at mmWave frequencies. These denser networks are created by users deploying small mmWave base stations (BSs) in a plug-and-play fashion. Although this deployment method provides the required density, the resulting amorphous layout of BSs calls for distributed management. To address this difficulty, we propose a self-organizing method to allocate power to mmWave BSs in an ultra-dense network. The proposed method consists of two parts: clustering using fast local clustering and power allocation via Q-learning. The key features of the proposed method are its scalability and self-organizing capability, both of which are essential in 5G. Our simulations demonstrate that the introduced method provides the required quality of service (QoS) for all users, independent of the size of the network.
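The abstract above does not include code; the sketch below illustrates, under stated assumptions, how a tabular Q-learning agent could pick a discrete transmit power level for a cluster of mmWave BSs. The class name `PowerAllocQAgent`, the power levels, and the reward shaping are illustrative choices, not the authors' implementation.

```python
import random
from collections import defaultdict

# Hypothetical sketch: tabular Q-learning for per-cluster power allocation.
# State: an index describing the cluster's current QoS band; action: a discrete
# transmit power level. The reward shaping (QoS satisfaction minus a power
# penalty) is an assumption, not the paper's exact formulation.

POWER_LEVELS = [10, 15, 20, 25, 30]  # dBm, illustrative discretization

class PowerAllocQAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * len(POWER_LEVELS))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_action(self, state):
        # epsilon-greedy exploration over the discrete power levels
        if random.random() < self.epsilon:
            return random.randrange(len(POWER_LEVELS))
        return max(range(len(POWER_LEVELS)), key=lambda a: self.q[state][a])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

def reward(qos_satisfied, tx_power_dbm, power_penalty=0.01):
    # reward QoS satisfaction, lightly penalize transmit power (assumed shaping)
    return (1.0 if qos_satisfied else -1.0) - power_penalty * tx_power_dbm
```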
PC-SSL: Peer-Coordinated Sequential Split Learning for Intelligent Traffic Analysis in mmWave 5G Networks
Fifth Generation (5G) networks operating on mmWave frequency bands are anticipated to provide ultrahigh capacity with low latency to serve mobile users requiring high-end cellular services and emerging metaverse applications. Managing and coordinating the high data rate and throughput among mmWave 5G Base Stations (BSs) is a challenging task that requires intelligent network traffic analysis. While BS coordination has traditionally been treated as a centralized task, this involves higher latency that may adversely impact the users' Quality of Service (QoS). In this paper, we address this issue by considering the need for distributed coordination among BSs to maximize spectral efficiency and improve the data rate provided to their users via embedded AI. We present Peer-Coordinated Sequential Split Learning (PC-SSL), a distributed learning approach in which multiple 5G BSs collaborate to train and update deep learning models without disclosing their associated mobile users' data, i.e., without privacy leakage. Our proposed PC-SSL minimizes the data transmitted between the client BSs and a server by processing data locally on the clients. This results in low latency and low computation overhead when making handoff decisions and performing other networking operations. We evaluate the performance of PC-SSL on the mmWave 5G throughput prediction use case based on a real dataset. The results demonstrate that our proposal outperforms conventional approaches and achieves performance comparable to centralized, vanilla split learning.
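As a rough illustration of the split-learning mechanism described above, the following PyTorch sketch shows one training step in which only the cut-layer activations and their gradients cross the client/server boundary. The layer sizes, cut point, and loss function are assumptions for illustration and do not reflect the paper's actual models.

```python
import torch
import torch.nn as nn

# Illustrative sketch of one split-learning step between a client BS and a
# server; architecture and loss are assumptions, not the paper's design.

client_net = nn.Sequential(nn.Linear(16, 32), nn.ReLU())                    # client-side layers (up to the cut)
server_net = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))  # server-side layers

client_opt = torch.optim.SGD(client_net.parameters(), lr=1e-2)
server_opt = torch.optim.SGD(server_net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def split_train_step(x, y):
    """One split-learning step: only cut-layer activations and their gradients
    cross the client/server boundary, never the raw user data."""
    client_opt.zero_grad()
    server_opt.zero_grad()

    smashed = client_net(x)                                # forward pass on the client BS
    smashed_detached = smashed.detach().requires_grad_()   # "transmitted" activations

    pred = server_net(smashed_detached)                    # server completes the forward pass
    loss = loss_fn(pred, y)
    loss.backward()                                        # server-side backward pass
    server_opt.step()

    smashed.backward(smashed_detached.grad)                # cut-layer gradient returned to the client
    client_opt.step()
    return loss.item()

# In a PC-SSL-style sequence, the client-side weights (client_net.state_dict())
# would then be handed to the next peer BS rather than to a central server.
```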
- Award ID(s):
- 2210252
- PAR ID:
- 10516302
- Publisher / Repository:
- IEEE
- Date Published:
- ISBN:
- 978-1-6654-6483-3
- Page Range / eLocation ID:
- 1 to 6
- Subject(s) / Keyword(s):
- 5G mobile communication; Spectral efficiency; Distributed databases; Quality of service; Throughput; Servers; Proposals; Throughput prediction; mmWave 5G networks; split learning
- Format(s):
- Medium: X
- Location:
- Toronto, ON, Canada
- Sponsoring Org:
- National Science Foundation
More Like this
-
Augmented Reality (AR) has been widely hailed as a representative of the ultra-high-bandwidth, ultra-low-latency apps that will be enabled by 5G networks. While single-user AR can perform AR tasks locally on the mobile device, multi-user AR apps, which allow multiple users to interact within the same physical space, critically rely on the cellular network to support user interactions. However, a recent study showed that multi-user AR apps can experience very high end-to-end latency when running over LTE, rendering user interaction practically infeasible. In this paper, we study whether 5G mmWave, which promises significant bandwidth and latency improvements over LTE, can support multi-user AR, by conducting an in-depth measurement study of the same popular multi-user AR app over both LTE and 5G mmWave. Our measurement and analysis show that: (1) The E2E AR latency over LTE is significantly lower than the values reported in the previous study; however, it still remains too high for practical user interaction. (2) 5G mmWave brings no benefits to multi-user AR apps. (3) While 5G mmWave reduces the latency of the uplink visual data transmission, other components of the AR app are independent of the network technology and account for a significant fraction of the E2E latency. (4) The app drains 66% more network energy, which translates to 28% higher total energy, over 5G mmWave than over LTE.
-
As we progress from 5G to emerging 6G wireless, the spectrum of cellular communication services is set to broaden significantly, encompassing real-time remote healthcare applications and sophisticated smart infrastructure solutions, among others. This expansion brings to the forefront a diverse set of service requirements, underscoring the challenges and complexities inherent in next-generation networks. In the realm of 5G, Enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communications (URLLC) have been pivotal service categories. As we venture into the 6G era, these foundational use cases will evolve and embody additional performance criteria, further diversifying the network service portfolio. This evolution amplifies the necessity for dynamic and efficient resource allocation strategies capable of balancing the diverse service demands. In response to this need, we introduce the Intelligent Dynamic Resource Allocation and Puncturing (IDRAP) framework. Leveraging Deep Reinforcement Learning (DRL), IDRAP balances the bandwidth-intensive requirements of eMBB services against the latency and reliability needs of URLLC users. The performance of IDRAP is evaluated and compared against other resource management solutions, including Intelligent Dynamic Resource Slicing (IDRS), Policy Gradient Actor-Critic Learning (PGACL), System-Wide Tradeoff Scheduling (SWTS), Sum-Log, and Sum-Rate. The results show an improved Service Satisfaction Level (SSL) for eMBB users while maintaining the essential SSL threshold for URLLC services. A minimal sketch of how such a DRL agent could be set up appears below.
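The sketch below shows one possible encoding of a joint eMBB allocation / URLLC puncturing action space and a reward that trades off the two service classes. All quantities, names, and the reward weights are assumptions made for illustration, not the IDRAP design.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an eMBB/URLLC allocation-and-puncturing agent; the
# state layout, action encoding, and reward weights below are assumed.

N_RBS = 20          # resource blocks per scheduling interval (assumed)
MAX_PUNCTURE = 5    # max mini-slots punctured from eMBB for URLLC (assumed)

# Action = (eMBB resource-block share, number of punctured mini-slots)
actions = [(rb, p) for rb in range(N_RBS + 1) for p in range(MAX_PUNCTURE + 1)]

policy_net = nn.Sequential(          # maps a small state vector to action values
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, len(actions)),
)

def reward(embb_ssl, urllc_ssl, urllc_threshold=0.95, weight=2.0):
    """Assumed reward: favor eMBB service satisfaction while heavily
    penalizing URLLC satisfaction below its threshold."""
    penalty = weight * max(0.0, urllc_threshold - urllc_ssl)
    return embb_ssl - penalty

def select_action(state, epsilon=0.1):
    # epsilon-greedy choice over the joint allocation/puncturing action space
    if torch.rand(1).item() < epsilon:
        return actions[torch.randint(len(actions), (1,)).item()]
    with torch.no_grad():
        q_values = policy_net(torch.as_tensor(state, dtype=torch.float32))
    return actions[int(q_values.argmax())]
```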
-
This paper presents mmCPTP, a cross-layer end-to-end protocol for fast delivery of data over mmWave channels associated with emerging 5G services. Recent measurement studies of mmWave channels in urban micro-cellular deployments show considerable fluctuation in received signal strength along with intermittent outages resulting from user mobility. This results in significant impairment of end-to-end data transfer throughput when regular TCP is used to transport data over such mmWave channels. To address this issue, we propose mmCPTP, a novel cross-layer end-to-end data transfer protocol that sets up a transport plug-in at or near the base station and uses feedback from the lower layer (RLC/MAC) to opportunistically pull data at the mobile client without the slow-start and probing delays associated with TCP. The system model and end-to-end protocol architecture are described and compared with TCP and Indirect TCP (I-TCP) in terms of achievable data rate. The proposed mmCPTP protocol is evaluated using NS3 simulation for 5G NR (New Radio), considering a high-speed mobile user scenario. The system is further validated using a proof-of-concept prototype which emulates the high-speed mmWave/NR access link with traffic shaping over Gbps Ethernet. Results show significant performance gains for mmCPTP over TCP and I-TCP (2.5x to 17.2x, depending on the version).
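A highly simplified sketch of the pull-based idea is shown below: the client paces its chunk requests from a lower-layer rate hint instead of probing the path the way TCP does. The chunk size, the feedback interface, and all function names are assumptions for illustration only, not the mmCPTP specification.

```python
# Illustrative sketch of feedback-driven pulling: request only as many chunks
# as the current RLC/MAC rate hint suggests the link can carry, avoiding
# TCP-style slow start and probing. All values and names are assumed.

CHUNK_BYTES = 64 * 1024  # assumed fixed chunk size

def rlc_mac_rate_hint():
    """Placeholder for cross-layer feedback: achievable rate in bytes/s."""
    return 500e6  # illustrative value, roughly a multi-Gbps mmWave link

def chunks_to_pull(total_bytes, interval_s=0.001):
    """How many chunks to request in the next interval, based on the current
    lower-layer rate hint rather than end-to-end probing."""
    budget = rlc_mac_rate_hint() * interval_s
    requested = max(1, int(budget // CHUNK_BYTES))
    remaining = -(-total_bytes // CHUNK_BYTES)  # ceiling division
    return min(requested, remaining)

if __name__ == "__main__":
    print(chunks_to_pull(total_bytes=10 * 1024 * 1024))
```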
-
In this paper, we consider a large-scale heterogeneous mobile edge computing system, where each device's mean computing task arrival rate, mean service rate, mean energy consumption, and mean offloading latency are drawn from different bounded continuous probability distributions to reflect the diverse compute-intensive applications, mobile devices with different computing capabilities and battery efficiencies, and different types of wireless access networks (e.g., 4G/5G cellular networks, WiFi). We consider a class of distributed threshold-based randomized offloading policies and develop a threshold update algorithm for each device based on its computational load, average offloading latency, average energy consumption, and edge server processing time, depending on the server utilization. We show that there always exists a unique Mean-Field Nash Equilibrium (MFNE) in the large-system limit when the task processing times of mobile devices follow an exponential distribution. This is achieved by carefully partitioning the space of mean arrival rates to account for the discrete structure of each device's optimal threshold. Moreover, we show that our proposed threshold update algorithm converges to the MFNE. Finally, we perform simulations to corroborate our theoretical results and demonstrate that our proposed algorithm still performs well in more general setups based on the collected real-world data and outperforms the well-known probabilistic offloading policy.
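To make the policy class concrete, the sketch below shows a threshold-based randomized offloading rule and a simple best-response style threshold adjustment. The cost proxies and update rule are illustrative assumptions, not the paper's algorithm or its mean-field analysis.

```python
import random

# Hypothetical sketch of a threshold-based randomized offloading policy and a
# best-response style threshold update; all cost terms are assumed proxies.

def offload(queue_length, threshold, tie_prob):
    """Offload when the local queue exceeds the threshold; randomize exactly
    at the threshold (the 'randomized' part of the policy)."""
    if queue_length > threshold:
        return True
    if queue_length == threshold:
        return random.random() < tie_prob
    return False

def local_cost(threshold, service_rate, energy_per_task):
    # assumed proxy for local delay plus energy when processing below the threshold
    return threshold / service_rate + energy_per_task * threshold

def offload_cost(offload_latency, server_utilization):
    # assumed proxy for offloading delay, growing with edge-server utilization
    return offload_latency / max(1e-6, 1.0 - server_utilization)

def update_threshold(threshold, service_rate, energy_per_task,
                     offload_latency, server_utilization):
    """Move the threshold one step toward whichever option currently looks cheaper."""
    if local_cost(threshold, service_rate, energy_per_task) > \
       offload_cost(offload_latency, server_utilization):
        return max(0, threshold - 1)   # offloading looks cheaper: offload more often
    return threshold + 1               # local processing looks cheaper: keep more tasks
```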