

This content will become publicly available on July 1, 2026

Title: Distributed Graph-Based Learning for User Association and Beamforming Design in Multi-RIS Multi-Cell Networks
We propose a novel graph neural network (GNN) architecture for jointly optimizing user association, base station (BS) beamforming, and reconfigurable intelligent surface (RIS) phase shifts in a multi-RIS-aided multi-cell network. The proposed architecture represents BSs and users as nodes in a bipartite graph, where nodes of the same type share the same neural networks for generating messages and updating their representations, allowing for distributed implementation. In addition, we integrate composite reflected channel estimation between layers of the GNN structure to significantly reduce the signaling overhead and complexity required for channel estimation in a multi-RIS network. To avoid BS overload, load balancing is regularized during training of the GNN, and we further develop a collision avoidance algorithm to enforce strict load balancing at every BS. Numerical results show that the proposed GNN architecture is significantly more efficient than existing approaches. The results further demonstrate that it scales well with network size and achieves throughput performance approaching that of a centralized traditional optimization algorithm, without requiring estimation of individual RIS-reflected channels and without the need for re-training or fine-tuning.
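As a rough illustration of the shared-parameter bipartite message passing described in the abstract (not the paper's actual architecture — the embedding dimension, weights, and the `gnn_layer` helper are all illustrative assumptions), one layer might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared per-type message/update weights: every BS node uses one set and
# every user node another, which is what permits distributed execution.
D = 8  # embedding dimension (illustrative)
W_msg = {"bs": rng.standard_normal((D, D)) * 0.1,
         "user": rng.standard_normal((D, D)) * 0.1}
W_upd = {"bs": rng.standard_normal((2 * D, D)) * 0.1,
         "user": rng.standard_normal((2 * D, D)) * 0.1}

def gnn_layer(h_bs, h_user, adj):
    """One bipartite message-passing layer.

    h_bs:   (num_bs, D) BS node embeddings
    h_user: (num_user, D) user node embeddings
    adj:    (num_bs, num_user) 0/1 BS-user connectivity
    """
    # Messages in each direction, each node type using its shared network.
    msg_to_bs = adj @ relu(h_user @ W_msg["user"])    # aggregate user messages
    msg_to_user = adj.T @ relu(h_bs @ W_msg["bs"])    # aggregate BS messages
    # Update each node from its own state plus the aggregated messages.
    h_bs_new = relu(np.concatenate([h_bs, msg_to_bs], axis=1) @ W_upd["bs"])
    h_user_new = relu(np.concatenate([h_user, msg_to_user], axis=1) @ W_upd["user"])
    return h_bs_new, h_user_new

num_bs, num_user = 3, 5
h_bs = rng.standard_normal((num_bs, D))
h_user = rng.standard_normal((num_user, D))
adj = (rng.random((num_bs, num_user)) < 0.5).astype(float)
h_bs, h_user = gnn_layer(h_bs, h_user, adj)
print(h_bs.shape, h_user.shape)   # (3, 8) (5, 8)
```

Because the weights are shared per node type rather than per node, the same small networks apply to any number of BSs and users, which is the property behind the scalability claim.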
Award ID(s):
2403285
PAR ID:
10614780
Author(s) / Creator(s):
;
Publisher / Repository:
IEEE Transactions on Wireless Communications
Date Published:
Journal Name:
IEEE Transactions on Wireless Communications
Volume:
24
Issue:
7
ISSN:
1536-1276
Page Range / eLocation ID:
6118 to 6134
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Optimally extracting the advantages available from reconfigurable intelligent surfaces (RISs) in wireless communications systems requires estimation of the channels to and from the RIS. The process of determining these channels is complicated when the RIS is composed of passive elements without any sensing or data processing capabilities, and thus, the channels must be estimated indirectly by a noncolocated device, typically a controlling base station (BS). In this article, we examine channel estimation for passive RIS-based systems from a fundamental viewpoint. We study various possible channel models and the identifiability of the models as a function of the available pilot data and behavior of the RIS during training. In particular, we will consider situations with and without line-of-sight propagation, single-antenna and multi-antenna configurations for the users and BS, correlated and sparse channel models, single-carrier and wideband orthogonal frequency-division multiplexing (OFDM) scenarios, availability of direct links between the users and BS, exploitation of prior information, as well as a number of other special cases. We further conduct simulations of representative algorithms and comparisons of their performance for various channel models using the relevant Cramér-Rao bounds. 
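A toy example in the spirit of the indirect estimation setting above (not an algorithm from the article — the sizes, random unit-modulus pilot design, and noise level are illustrative assumptions): the RIS cycles through known phase patterns while a single-antenna user sends pilots, and the BS solves least squares for the cascaded per-element channel.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 4                      # RIS elements (illustrative)
T = 8                      # pilot slots; T >= N is needed for identifiability
# Unknown cascaded user-RIS-BS channel, one coefficient per RIS element.
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Known unit-modulus RIS phase pattern applied in each pilot slot.
Phi = np.exp(2j * np.pi * rng.random((T, N)))
noise = 0.001 * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
y = Phi @ g + noise        # received pilot observations at the BS

# The BS estimates the cascaded channel it cannot observe directly.
g_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(g_hat.shape)         # (4,)
```

The identifiability questions studied in the article amount to asking when systems like `Phi @ g = y` (and their richer multi-antenna, wideband, or sparse variants) admit a unique solution.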
  2. Graph Neural Networks (GNNs) are based on repeated aggregations of information from nodes’ neighbors in a graph. However, because nodes share many neighbors, a naive implementation performs repeated, inefficient aggregations that incur significant computational overhead. Here we propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN representation technique that explicitly avoids redundancy by managing intermediate aggregation results hierarchically, eliminating repeated computations and unnecessary data transfers in GNN training and inference. HAGs perform the same computations and give the same models/accuracy as traditional GNNs, but in a much shorter time due to optimized computations. To identify redundant computations, we introduce an accurate cost function and use a novel search algorithm to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN representation, increasing end-to-end training throughput by up to 2.8× and reducing the aggregations and data transfers in GNN training by up to 6.3× and 5.6×, respectively, with only 0.1% memory overhead. Overall, our results represent an important advancement in speeding up and scaling up GNNs without any loss in model predictive performance.
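The redundancy that HAGs eliminate can be seen in a tiny hand-worked example (illustrative only; the real system uses a cost function and search algorithm to discover such shared intermediates automatically):

```python
# Two nodes a and b share neighbors {x, y}: a naive GNN sums the shared
# features once per node, while a HAG-style plan computes the partial
# aggregate over {x, y} a single time and reuses it.
features = {"x": 1.0, "y": 2.0, "z": 4.0, "w": 8.0}
neighbors = {"a": ["x", "y", "z"], "b": ["x", "y", "w"]}

# Naive aggregation: every node re-aggregates from scratch.
naive = {v: sum(features[u] for u in nbrs) for v, nbrs in neighbors.items()}

# HAG-style: materialize the shared intermediate over {x, y} once, reuse it.
shared = features["x"] + features["y"]
hag = {"a": shared + features["z"], "b": shared + features["w"]}

assert naive == hag   # identical results, fewer aggregation operations
print(hag)            # {'a': 7.0, 'b': 11.0}
```

With sum as the aggregator the two plans are exactly equivalent, which is why HAGs change training time but not model accuracy.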
  3. In this work, we develop a two-time-scale deep learning approach for beamforming and phase shift (BF-PS) design in time-varying RIS-aided networks. In contrast to most existing works that assume perfect CSI for BF-PS design, we take into account the cost of channel estimation and utilize Long Short-Term Memory (LSTM) networks to design BF-PS from limited samples of estimated channel state information (CSI). An LSTM channel extrapolator is designed first to generate high-resolution estimates of the cascaded BS-RIS-user channel from sampled signals acquired at a slow time scale. Subsequently, the outputs of the channel extrapolator are fed into an LSTM-based two-stage neural network for the joint design of BF-PS at a fast time scale of once per coherence time. To address the critical issue that training overhead increases linearly with the number of RIS elements, we consider various pilot structures and sampling patterns in time and space to evaluate the efficiency and sum-rate performance of the proposed two-time-scale design. Our results show that the proposed design can achieve good spectral efficiency when taking into account the pilot overhead required for training, and that it outperforms a direct BF-PS design that does not employ a channel extrapolator. These results demonstrate the feasibility of applying RIS in time-varying channels with reasonable pilot overhead.
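A minimal stand-in for the two-time-scale idea (linear interpolation replaces the LSTM extrapolator, and the sampling rates are illustrative assumptions, not values from the paper): sparse slow-time-scale CSI samples are extrapolated up to one estimate per coherence interval, which is the rate at which BF-PS decisions are made.

```python
import numpy as np

# Slow-time-scale pilot samples of one cascaded-channel coefficient
# (one pilot every 8th coherence interval; synthetic slowly varying value).
slow_idx = np.arange(0, 64, 8)
slow_samples = np.sin(0.05 * slow_idx)

# Stand-in for the LSTM channel extrapolator: fill in the fast time scale
# (one CSI estimate per coherence interval) from the sparse pilots.
fast_idx = np.arange(64)
fast_csi = np.interp(fast_idx, slow_idx, slow_samples)

print(len(slow_samples), len(fast_csi))  # 8 pilots -> 64 per-slot estimates
```

The pilot-overhead accounting in the paper compares designs at equal numbers of slow-time-scale pilots, since those are what actually consume air-time.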
  4. Graph Neural Networks (GNNs) are a popular machine learning framework for solving various graph processing applications. This framework exploits both the graph topology and the feature vectors of the nodes. One of the important applications of GNNs is the semi-supervised node classification task. The accuracy of node classification using a GNN depends on (i) the number and (ii) the choice of the training nodes. In this article, we demonstrate that increasing the training nodes by selecting nodes from the same class that are spread out across non-contiguous subgraphs can significantly improve the accuracy. We accomplish this by presenting a novel input intervention technique that can be used in conjunction with different GNN classification methods to increase the number of non-contiguous training nodes and thereby improve the accuracy. We also present an output intervention technique to identify misclassified nodes and relabel them with their potentially correct labels. We demonstrate on real-world networks that our proposed methods, both individually and collectively, significantly improve the accuracy in comparison to the baseline GNN algorithms. Both our methods are agnostic to the underlying GNN method: apart from the initial set of training nodes generated by the baseline GNN methods, our techniques do not need any extra knowledge about the classes of the nodes. Thus, our methods are modular and can be used as pre- and post-processing steps with many of the currently available GNN methods to improve their accuracy.
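A small sketch of the input-intervention idea of spreading same-class training nodes across non-contiguous subgraphs (the toy labels, component ids, and the `spread_training_nodes` helper are all illustrative, not from the article):

```python
from collections import defaultdict

# Toy setting: each node has a class label and belongs to a connected
# component of the graph (component ids are assumed precomputed).
labels    = {0: "A", 1: "A", 2: "B", 3: "A", 4: "B", 5: "A"}
component = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}

def spread_training_nodes(cls, k):
    """Pick up to k nodes of class `cls`, preferring distinct components."""
    by_comp = defaultdict(list)
    for v, c in labels.items():
        if c == cls:
            by_comp[component[v]].append(v)
    picked = []
    # Round-robin over components so training nodes land in
    # non-contiguous subgraphs rather than clustering in one.
    while len(picked) < k and any(by_comp.values()):
        for comp in sorted(by_comp):
            if by_comp[comp] and len(picked) < k:
                picked.append(by_comp[comp].pop(0))
    return picked

print(spread_training_nodes("A", 3))  # [0, 3, 5] -- one per component
```

The intuition is that label information propagates only a few hops during GNN aggregation, so training nodes confined to one subgraph leave distant subgraphs of the same class poorly supervised.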
  5. Despite the recent success of Graph Neural Networks (GNNs), training GNNs on large graphs remains challenging. The limited resource capacities of existing servers, the dependency between nodes in a graph, and the privacy concerns arising from centralized storage and model learning have spurred the need for an effective distributed algorithm for GNN training. However, existing distributed GNN training methods impose either excessive communication costs or large memory overheads that hinder their scalability. To overcome these issues, we propose a communication-efficient distributed GNN training technique named Learn Locally, Correct Globally (LLCG). To reduce the communication and memory overhead, each local machine in LLCG first trains a GNN on its local data by ignoring the dependency between nodes on different machines, then sends the locally trained model to the server for periodic model averaging. However, ignoring node dependency can result in significant performance degradation. To remedy this degradation, we propose applying global corrections on the server to refine the locally learned models. We rigorously analyze the convergence of distributed methods with periodic model averaging for training GNNs and show that naively applying periodic model averaging while ignoring the dependency between nodes suffers from an irreducible residual error; this residual error can be eliminated by utilizing the proposed global corrections, yielding a fast convergence rate. Extensive experiments on real-world datasets show that LLCG can significantly improve efficiency without hurting performance. One-sentence Summary: We propose LLCG, a communication-efficient distributed algorithm for training GNNs.
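The local-training-plus-periodic-averaging loop can be sketched as follows (a toy quadratic objective stands in for local GNN training, the global correction step is omitted, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three local machines, each holding one graph partition, train local
# copies of the model parameters (here a flat vector) between sync rounds.
num_machines, dim = 3, 4
targets = [rng.standard_normal(dim) for _ in range(num_machines)]  # toy local optima
local_params = [np.zeros(dim) for _ in range(num_machines)]

for sync_round in range(5):
    # Local phase: each machine trains on its own partition, ignoring
    # the dependency between nodes that live on different machines.
    for m in range(num_machines):
        for _ in range(10):
            local_params[m] -= 0.1 * (local_params[m] - targets[m])
    # Periodic model averaging at the server (LLCG would additionally
    # apply a global correction here to refine the averaged model and
    # remove the residual error the analysis identifies).
    avg = sum(local_params) / num_machines
    local_params = [avg.copy() for _ in range(num_machines)]

# After each sync round every machine holds the same averaged model.
assert all(np.allclose(p, local_params[0]) for p in local_params)
print(local_params[0].shape)   # (4,)
```

In this toy, plain averaging converges to the mean of the local optima rather than the true joint optimum, which mirrors the irreducible residual error that the global correction is designed to eliminate.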