Recent major developments in semantic communication systems stem from the integration of deep learning (DL) techniques. Following the discovery of capacity-achieving codes, the primary motivation for adopting the semantic approach, which retrieves meaning without requiring an exact reconstruction, is its potential to further conserve resources such as bandwidth and power. In this paper, we propose a novel semantic communication framework for textual data over additive white Gaussian noise (AWGN) channels via DL. Our framework leverages the information bottleneck (IB) principle to balance minimizing bit transmission under wireless channel rate constraints with maximizing semantic information retention. Unlike previous works, we integrate the bilingual evaluation understudy (BLEU) sentence similarity score into the training objective to enhance model performance. In particular, inspired by knowledge distillation, we utilize large language models (LLMs) during training to transfer their knowledge of text semantics into our model. Using the IB principle, we train a neural semantic encoder at the transmitter and a neural semantic decoder at the receiver, incorporating into the objective function the rate constraint together with the BLEU score and the knowledge encoded in the soft probabilities produced by the LLM. Through extensive experiments, our proposed framework demonstrates a notable improvement of up to 45% in text semantic similarity compared to state-of-the-art benchmarks operating at the same channel capacity, significantly outperforming traditional communication systems. Moreover, it exhibits robustness to variations in signal-to-noise ratio (SNR) and achieves significant gains across both low and medium SNR regimes. Free, publicly-accessible full text available December 11, 2026.
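A minimal numpy sketch of how such a composite training objective might look. All names here (`ib_distillation_loss`, the weights `beta` and `gamma`) are assumptions for illustration, combining a knowledge-distillation cross-entropy to the LLM's soft probabilities, an IB-style rate penalty, and a BLEU-based reward; the paper's actual model and loss are not reproduced here:

```python
import numpy as np

def ib_distillation_loss(student_logits, llm_soft_probs, rate_bits, rate_budget,
                         bleu_score, beta=0.1, gamma=1.0):
    """Hypothetical composite objective: KD cross-entropy to LLM soft labels,
    an IB-style penalty on rate above the channel budget, and a BLEU reward
    (subtracted, since higher BLEU is better)."""
    # Soft-label cross-entropy (knowledge distillation term).
    log_q = student_logits - np.log(np.sum(np.exp(student_logits),
                                           axis=-1, keepdims=True))
    kd = -np.mean(np.sum(llm_soft_probs * log_q, axis=-1))
    # Rate penalty: only bits above the channel budget are penalized.
    rate_pen = beta * max(rate_bits - rate_budget, 0.0)
    return kd + rate_pen - gamma * bleu_score
```

The rate term is zero whenever the encoder stays within the channel budget, so gradients then come only from the semantic (KD and BLEU) terms.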
-
This work presents an analysis of semantic communication in the context of First-Order Logic (FOL)-based deduction. Specifically, the receiver holds a set of hypotheses about the State of the World (SotW), while the transmitter has incomplete evidence about the true SotW but lacks access to the ground truth. The transmitter aims to communicate limited information to help the receiver identify the hypothesis most consistent with the true SotW. We formulate the objective as approximating the posterior distribution of the transmitter at the receiver. Using Stirling's approximation, this reduces to a constrained, finite-horizon resource allocation problem. Applying the Karush-Kuhn-Tucker conditions yields a truncated water-filling solution. Despite the problem's non-convexity, symmetry and permutation invariance ensure global optimality. Based on this, we design message selection strategies for both single- and multi-round communication, and model the receiver's inference as an m-ary Bayesian hypothesis testing problem. Under the Maximum A Posteriori (MAP) rule, our communication strategy achieves optimal performance within budget constraints. We further analyze convergence rates and validate the theoretical findings through experiments, demonstrating reduced error over random selection and prior methods. Free, publicly-accessible full text available December 11, 2026.
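The truncated water-filling structure of a KKT-derived allocation can be illustrated with a generic numpy sketch. This is the standard water-filling allocator, not the paper's exact formulation; `gains` and `budget` are assumed names:

```python
import numpy as np

def truncated_water_filling(gains, budget):
    """Allocate a total budget across channels with the given gains by
    water-filling; channels whose inverse gain exceeds the water level
    are truncated to zero allocation."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    order = np.argsort(inv)                  # best channels first
    alloc = np.zeros_like(inv)
    for k in range(len(inv), 0, -1):         # try keeping the k best channels
        active = order[:k]
        level = (budget + inv[active].sum()) / k
        if level > inv[active].max():        # all k allocations are positive
            alloc[active] = level - inv[active]
            break
    return alloc
```

For example, with gains `[4.0, 2.0, 0.1]` and a budget of 1, the weakest channel is truncated and the budget is split between the two strong channels.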
-
In this paper, we consider the parallel implementation of an already-trained deep model on multiple processing nodes (a.k.a. workers). Specifically, we investigate how a deep model should be divided into several parallel sub-models, each of which is executed efficiently by a worker. Since latency due to synchronization and data transfer among workers negatively impacts the performance of the parallel implementation, it is desirable to have minimum interdependency among parallel sub-models. To achieve this goal, we propose to rearrange the neurons in the neural network, partition them (without changing the general topology of the neural network), and modify the weights such that the interdependency among sub-models is minimized under the computation and communication constraints of the workers, while minimizing the impact on the performance of the model. We propose RePurpose, a layer-wise model restructuring and pruning technique that guarantees the performance of the overall parallelized model. To efficiently apply RePurpose, we propose an approach based on L0 optimization and the Munkres assignment algorithm. We show that, compared to existing methods, RePurpose significantly improves the efficiency of distributed inference via parallel implementation, both in terms of communication and computational complexity.
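As a toy illustration of the assignment step, the sketch below brute-forces the optimal neuron-to-slot permutation for a small hypothetical cost matrix, where `cost[i, j]` is assumed to measure the interdependency incurred by placing neuron `i` in slot `j`. In practice the Munkres (Hungarian) algorithm, e.g. `scipy.optimize.linear_sum_assignment`, solves the same problem in polynomial time:

```python
import numpy as np
from itertools import permutations

def best_neuron_assignment(cost):
    """Brute-force stand-in for the Munkres assignment: find the
    neuron-to-slot permutation minimizing total assignment cost.
    Only practical for tiny matrices; illustrative of the objective."""
    n = cost.shape[0]
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i, perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost
```

The brute-force search checks all n! permutations, which is exactly the combinatorial problem the Munkres algorithm avoids.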
-
Analog Joint Source-Channel Coding for Distributed Functional Compression using Deep Neural Networks
In this paper, we study Joint Source-Channel Coding (JSCC) for distributed analog functional compression over both Gaussian Multiple Access Channel (MAC) and AWGN channels. Notably, we propose a deep neural network based solution for learning encoders and decoders. We propose three methods of increasing performance: the first frames the problem as an autoencoder; the second incorporates the power constraint in the objective via a Lagrange multiplier; the third derives the objective from the information bottleneck principle. We show that all proposed methods are variational approximations to upper bounds on the indirect rate-distortion problem's minimization objective. Further, we show that the third method is the variational approximation of a tighter upper bound than the other two. Finally, we present empirical performance results for image classification, comparing with existing work and showcasing the performance improvement yielded by the proposed methods.
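The second method's power-constrained objective can be sketched as a reconstruction loss plus a Lagrange-multiplier term on average transmit power. All names here are assumptions, and `lam` is shown fixed for simplicity (in practice the multiplier would be tuned or updated during training):

```python
import numpy as np

def jscc_lagrangian_loss(x, x_hat, channel_input, power_budget, lam=0.5):
    """Sketch of a Lagrangian JSCC objective: reconstruction MSE plus a
    multiplier-weighted penalty on average transmit power exceeding the
    budget (assumed formulation, not the paper's exact loss)."""
    mse = np.mean((x - x_hat) ** 2)          # distortion term
    avg_power = np.mean(channel_input ** 2)  # average per-symbol power
    return mse + lam * (avg_power - power_budget)
```

When the encoder's output power matches the budget, the penalty term vanishes and the objective reduces to plain distortion.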
-
In this paper, we consider federated learning in wireless edge networks. Transmitting stochastic gradients (SG) or a deep model's parameters over a limited-bandwidth wireless channel can incur large training latency and excessive power consumption. Hence, data compression is often used to reduce the communication overhead. However, efficient communication requires the compression algorithm to satisfy the constraints imposed by the communication medium and take advantage of its characteristics, such as over-the-air computations inherent in wireless multiple-access channels (MAC), unreliable transmission and idle nodes in the edge network, limited transmission power, and preserving the privacy of data. To achieve these goals, we propose a novel framework based on Random Linear Coding (RLC) and develop efficient power management and channel usage techniques to manage the trade-offs between power consumption, communication bit-rate, and convergence rate of federated learning over wireless MAC. We show that the proposed encoding/decoding results in an unbiased compression of SG, hence guaranteeing the convergence of the training algorithm without requiring error feedback. Finally, through simulations, we show the superior performance of the proposed method over other existing techniques.
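A minimal numpy sketch of unbiased random linear coding of a gradient vector, assuming i.i.d. standard Gaussian coding matrices; since E[AᵀA] = m·I for an m×d matrix A with unit-variance entries, decoding by AᵀY/m is unbiased. This is illustrative only; the paper's scheme additionally handles power management and over-the-air aggregation over the MAC:

```python
import numpy as np

def rlc_compress(g, m, rng):
    """Project gradient g (dim d) down to m coded symbols y = A g with an
    i.i.d. Gaussian matrix A, then decode as A^T y / m. The estimate is
    noisy for m < d but unbiased, so averaging over rounds recovers g."""
    A = rng.standard_normal((m, len(g)))
    return A.T @ (A @ g) / m
```

Averaging many independent encodings empirically recovers the original gradient, which is the property that lets training converge without error feedback.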
-
In this paper, we introduce a framework for Joint Source-Channel Coding of distributed Gaussian sources over a multiple access AWGN channel. Prior works that have studied this either rely strongly on intuition to design encoders and decoders or require knowledge of the complete joint distribution of all the distributed sources; our system overcomes these limitations. We model our system as a Variational Autoencoder and leverage the insight provided by this connection to propose a crucial regularization mechanism for learning. This allows us to beat the state of the art by improving the signal reconstruction quality by almost 1 dB for certain configurations. The end-to-end learned system is also robust to channel condition variations of ±5 dB, with a drop in signal reconstruction quality of at most 1 dB. Finally, we propose a novel lower bound on the optimal distortion in signal reconstruction and empirically showcase the tightness of the bound in comparison with the existing bound.
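For context, the classic single-source benchmark is easy to state: for a unit-bandwidth Gaussian source over a matched AWGN channel, the optimal distortion is D = σ²/(1 + SNR). The paper's novel lower bound for the distributed setting is not reproduced here; this sketch only computes the standard single-source reference point:

```python
def opta_distortion(sigma2, snr):
    """Classic optimum-performance-theoretically-attainable (OPTA)
    distortion for a single Gaussian source of variance sigma2 sent over
    a bandwidth-matched AWGN channel at the given linear SNR."""
    return sigma2 / (1.0 + snr)
```

For example, a unit-variance source at an SNR of 9 (about 9.5 dB) has an OPTA distortion of 0.1, i.e. 10 dB of reconstruction SNR.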
