Title: Analog Joint Source-Channel Coding for Distributed Functional Compression using Deep Neural Networks
In this paper, we study Joint Source-Channel Coding (JSCC) for distributed analog functional compression over both the Gaussian Multiple Access Channel (MAC) and AWGN channels. Notably, we propose a solution based on deep neural networks for learning the encoders and decoders. We propose three methods of increasing performance: the first frames the problem as an autoencoder; the second incorporates the power constraint into the objective via a Lagrange multiplier; the third derives the objective from the information bottleneck principle. We show that all three methods are variational approximations of upper bounds on the indirect rate-distortion problem's minimization objective, and that the third method approximates a tighter upper bound than the other two. Finally, we report empirical results for image classification, comparing against existing work and showcasing the performance improvement yielded by the proposed methods.
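To make the second method more concrete, below is a minimal, hypothetical PyTorch sketch of an analog JSCC pipeline with a Lagrangian power penalty: an encoder maps the source to analog channel symbols, AWGN is added, a decoder predicts the class label, and the average transmit power is penalized whenever it exceeds a budget. All module names, dimensions, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnalogJSCC(nn.Module):
    """Toy encoder/decoder pair communicating over an AWGN channel."""
    def __init__(self, in_dim, channel_dim, num_classes, noise_std=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, channel_dim))
        self.decoder = nn.Sequential(nn.Linear(channel_dim, 256), nn.ReLU(),
                                     nn.Linear(256, num_classes))
        self.noise_std = noise_std

    def forward(self, x):
        z = self.encoder(x)                           # analog channel symbols
        y = z + self.noise_std * torch.randn_like(z)  # AWGN channel
        return self.decoder(y), z

def lagrangian_loss(logits, labels, symbols, power_budget=1.0, lam=0.1):
    """Task loss plus a Lagrangian penalty on excess average transmit power."""
    task = F.cross_entropy(logits, labels)
    avg_power = symbols.pow(2).mean()                 # average per-symbol power
    return task + lam * torch.clamp(avg_power - power_budget, min=0.0)
```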
Award ID(s):
2003002
NSF-PAR ID:
10293685
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
IEEE International Symposium on Information Theory
Page Range / eLocation ID:
2429 to 2434
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Given earth imagery with spectral features on a terrain surface, this paper studies surface segmentation based on both explanatory features and surface topology. The problem is important in many spatial and spatiotemporal applications such as flood extent mapping in hydrology. The problem is uniquely challenging for several reasons: first, the size of earth imagery on a terrain surface is often much larger than the input of popular deep convolutional neural networks; second, there exists topological structure dependency between pixel classes on the surface, and such dependency can follow an unknown and non-linear distribution; third, there are often limited training labels. Existing methods for earth imagery segmentation often divide the imagery into patches and consider the elevation as an additional feature channel. These methods do not fully incorporate the spatial topological structural constraint within and across surface patches and thus often show poor results, especially when training labels are limited. Existing methods on semi-supervised and unsupervised learning for earth imagery often focus on learning representation without explicitly incorporating surface topology. In contrast, we propose a novel framework that explicitly models the topological skeleton of a terrain surface with a contour tree from computational topology, which is guided by the physical constraint (e.g., water flow direction on terrains). Our framework consists of two neural networks: a convolutional neural network (CNN) to learn spatial contextual features on a 2D image grid, and a graph neural network (GNN) to learn the statistical distribution of physics-guided spatial topological dependency on the contour tree. The two models are co-trained via variational EM. Evaluations on the real-world flood mapping datasets show that the proposed models outperform baseline methods in classification accuracy, especially when training labels are limited. 
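As a rough illustration of the two-branch architecture described in the abstract above, the snippet below pairs a small convolutional feature extractor with a hand-rolled mean-aggregation message-passing layer over a contour-tree adjacency matrix. The variational EM co-training loop is omitted, and all names, shapes, and layer sizes are illustrative assumptions rather than the authors' design.

```python
import torch
import torch.nn as nn

class PixelCNN(nn.Module):
    """Small CNN producing per-pixel spatial-context features."""
    def __init__(self, in_channels, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_channels, feat_dim, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU())

    def forward(self, img):                  # img: (batch, channels, H, W)
        return self.net(img)                 # per-pixel feature maps

class TreeGNNLayer(nn.Module):
    """One mean-aggregation message-passing step over a contour-tree adjacency."""
    def __init__(self, feat_dim):
        super().__init__()
        self.lin = nn.Linear(feat_dim, feat_dim)

    def forward(self, node_feats, adj):      # node_feats: (N, feat), adj: (N, N)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        msgs = adj @ node_feats / deg        # average over contour-tree neighbours
        return torch.relu(self.lin(msgs) + node_feats)
```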
  2. Information bottleneck (IB) is a technique for extracting information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed “bottleneck” random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently has been considered for only two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily-distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound for mutual information. We describe how to implement our method using neural networks. We then show that it achieves better performance than the recently-proposed “variational IB” method on several real-world datasets. 
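For reference, here is a minimal sketch of the "variational IB" objective that this abstract uses as its baseline: X is encoded into a stochastic bottleneck Z via the reparameterization trick, and the task loss is traded off against a Gaussian KL term that upper-bounds I(X; Z). This is a generic illustration of that baseline, not the non-parametric bound proposed in the abstract; names and dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIB(nn.Module):
    """Stochastic bottleneck: the encoder outputs a diagonal Gaussian over Z."""
    def __init__(self, in_dim, z_dim, num_classes):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * z_dim)   # mean and log-variance
        self.dec = nn.Linear(z_dim, num_classes)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    """Cross-entropy plus beta-weighted KL(q(z|x) || N(0, I))."""
    task = F.cross_entropy(logits, labels)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()
    return task + beta * kl
```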
  3. Adverse event detection is critical for many real-world applications, including the timely identification of product defects, disasters, and major socio-political incidents. In the health context, adverse drug events account for countless hospitalizations and deaths annually. Since users often begin their information seeking and reporting with online searches, examination of search query logs has emerged as an important detection channel. However, search context - including query intent and heterogeneity in user behaviors - is extremely important for extracting information from search queries, and yet the challenge of measuring and analyzing these aspects has precluded their use in prior studies. We propose DeepSAVE, a novel deep learning framework for detecting adverse events based on user search query logs. DeepSAVE uses an enriched variational autoencoder encompassing a novel query embedding and user modeling module that work in concert to address the context challenge associated with search-based detection of adverse events. Evaluation results on three large real-world event datasets show that DeepSAVE outperforms existing detection methods as well as comparison deep learning autoencoders. Ablation analysis reveals that each component of DeepSAVE contributes significantly to its overall performance. Collectively, the results demonstrate the viability of the proposed architecture for detecting adverse events from search query logs.
  4. Generalized linear mixed models are commonly used to describe relationships between correlated responses and covariates in medical research. In this paper, we propose a simple and easily implementable regularized estimation approach to select both fixed and random effects in generalized linear mixed models. Specifically, we propose to construct and optimize the objective functions using the confidence distributions of model parameters, as opposed to the observed data likelihood functions, to perform effect selection. Two estimation methods are developed: the first uses the joint confidence distribution of model parameters to perform simultaneous fixed- and random-effect selection; the second uses the marginal confidence distributions of model parameters to select fixed and random effects separately. With a proper choice of regularization parameters in the adaptive LASSO framework, we show the consistency and oracle properties of the proposed regularized estimators. Simulation studies have been conducted to assess the performance of the proposed estimators and demonstrate their computational efficiency. Our method has also been applied to two longitudinal cancer studies to identify demographic and clinical factors associated with patient health outcomes after cancer therapies.
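Since the abstract leans on the adaptive LASSO framework, a generic two-step adaptive-LASSO recipe is sketched below for orientation: a pilot estimate defines per-coefficient weights, and a weighted L1 penalty then shrinks small effects exactly to zero. This is the standard construction on a toy linear model, not the confidence-distribution objective proposed in the paper; the data and tuning values are made up.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))   # only three real effects
y = X @ beta_true + rng.standard_normal(n)

# Step 1: a pilot (ridge) fit defines adaptive weights w_j = 1 / |beta_j| (gamma = 1).
pilot = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / (np.abs(pilot) + 1e-8)

# Step 2: rescale columns, fit an ordinary LASSO, and map coefficients back,
# which is equivalent to penalizing sum_j w_j * |beta_j|.
fit = Lasso(alpha=0.1).fit(X / w, y)
beta_hat = fit.coef_ / w
print(np.round(beta_hat, 2))   # coefficients for the null effects shrink to zero
```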
  5. In this study, we propose a scalable batch sampling scheme for the optimization of simulation models with spatially varying noise. The proposed scheme has two primary advantages: (i) reduced simulation cost by recommending batches of samples at carefully selected spatial locations, and (ii) improved scalability by actively considering replication at previously observed sampling locations. Replication improves the scalability of the proposed sampling scheme because the computational cost of adaptive sampling schemes grows cubically with the number of unique sampling locations. Our main consideration for the allocation of computational resources is the minimization of the uncertainty in the optimal design. We analytically derive the relationship between the “exploration versus replication” decision and the posterior variance of the spatial random process used to approximate the simulation model’s mean response. Leveraging this reformulation in a novel objective-driven adaptive sampling scheme, we show that we can identify batches of samples that minimize the prediction uncertainty only in the regions of the design space expected to contain the global optimum. Finally, the proposed sampling scheme adopts a modified preposterior analysis that uses a zeroth-order interpolation of the spatially varying simulation noise to identify sampling batches. Through the optimization of three numerical test functions and one engineering problem, we demonstrate (i) the efficacy of the proposed sampling scheme on a wide array of stochastic functions, (ii) the superior performance of the proposed method on all test functions compared to existing methods, (iii) the empirical validity of using a zeroth-order approximation for the allocation of sampling batches, and (iv) its applicability to molecular dynamics simulations by optimizing the performance of an organic photovoltaic cell as a function of its processing settings.
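To give a feel for the exploration-versus-replication trade-off described above, the toy sketch below fits a Gaussian process with an explicit noise term to noisy simulation output and inspects the posterior predictive standard deviation over candidate locations; high posterior uncertainty at an unvisited location argues for exploration, while residual noise at an already-sampled location argues for replication. This is a generic illustration, not the paper's preposterior analysis; the test function, kernel, and all constants are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 1))                     # observed design locations
y = np.sin(6.0 * X[:, 0]) + 0.2 * rng.standard_normal(20)   # noisy simulation responses

# RBF captures the smooth mean response; WhiteKernel absorbs the simulation noise.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(noise_level=0.05),
                              normalize_y=True).fit(X, y)

candidates = np.linspace(0.0, 1.0, 101)[:, None]             # unvisited candidate locations
mean, std = gp.predict(candidates, return_std=True)

explore_at = candidates[np.argmax(std), 0]                    # most uncertain new location
print(f"highest posterior uncertainty at x = {explore_at:.2f}; "
      f"replicating an existing design point instead would mainly refine the noise estimate there")
```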