-
Covariance matrix reconstruction has been the most widely used guiding objective in gridless direction-of-arrival (DoA) estimation for sparse linear arrays. Many semidefinite programming (SDP)-based methods fall under this category. Although deep learning-based approaches enable the construction of more sophisticated objective functions, most methods still rely on covariance matrix reconstruction. In this paper, we propose new loss functions that are invariant to the scaling of the matrices and provide a comparative study of losses with varying degrees of invariance. The proposed loss functions are formulated based on the scale-invariant signal-to-distortion ratio between the target matrix and the Gram matrix of the prediction. Numerical results show that a scale-invariant loss outperforms its non-invariant counterpart but is inferior to the recently proposed subspace loss, which is invariant to changes of basis. These results provide evidence that designing loss functions with greater degrees of invariance is advantageous in deep learning-based gridless DoA estimation.
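For concreteness, here is a minimal NumPy sketch of how a scale-invariant SDR loss between a target matrix and the Gram matrix of a prediction might be computed. The function name and conventions are illustrative, not the paper's exact formulation; in practice this would live in an autodiff framework, but the NumPy version shows the math.

```python
import numpy as np

def si_sdr_loss(target, pred):
    """Hypothetical sketch: negative scale-invariant SDR between a target
    covariance matrix and the Gram matrix of the prediction.

    target: (N, N) Hermitian target covariance matrix
    pred:   (N, M) prediction whose Gram matrix approximates the target
    """
    gram = pred @ pred.conj().T          # Gram matrix of the prediction
    t = target.ravel()
    g = gram.ravel()
    # Optimal scaling that projects the estimate onto the target,
    # removing the scale ambiguity between the two matrices
    alpha = np.vdot(t, g) / np.vdot(t, t)
    s_target = alpha * t                 # scaled target (signal component)
    distortion = g - s_target
    si_sdr = 10.0 * np.log10(np.real(np.vdot(s_target, s_target)) /
                             np.real(np.vdot(distortion, distortion)))
    return -si_sdr                       # higher SDR means lower loss
```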
-
This paper introduces new and practically relevant non-Gaussian priors for the Sparse Bayesian Learning (SBL) framework applied to the Multiple Measurement Vector (MMV) problem. We extend the Gaussian Scale Mixture (GSM) framework to model prior distributions for row vectors, exploring the use of shared and different hyperparameters across measurements. We propose Expectation Maximization (EM)-based algorithms to estimate the parameters of the prior density along with the hyperparameters. To promote sparsity more effectively in a non-Gaussian setting, we show the importance of learning the parameters of the mixing density. Such an approach effectively utilizes the common-support notion in the MMV problem and promotes sparsity without explicitly imposing a sparsity-promoting prior, indicating the methods' robustness to model mismatches. Numerical simulations are provided to compare the proposed approaches with the existing SBL algorithm for the MMV problem.
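As background, a sketch of the standard Gaussian-prior MMV-SBL EM updates that such approaches build on is given below. This is the baseline only; the paper's GSM extension additionally learns the mixing-density parameters, which is not shown here.

```python
import numpy as np

def sbl_mmv_em(Y, A, sigma2, n_iter=100):
    """Baseline EM updates for MMV Sparse Bayesian Learning (a sketch of
    the standard Gaussian-prior algorithm, not the paper's GSM extension).

    Y: (N, L) measurements, A: (N, M) dictionary, sigma2: noise variance.
    Returns gamma (M,) row-variance hyperparameters and posterior mean X.
    """
    N, L = Y.shape
    M = A.shape[1]
    gamma = np.ones(M)
    for _ in range(n_iter):
        Gamma = np.diag(gamma)
        # Measurement covariance shared by all snapshots
        C = sigma2 * np.eye(N) + A @ Gamma @ A.conj().T
        Cinv = np.linalg.inv(C)
        X = Gamma @ A.conj().T @ Cinv @ Y        # posterior mean, (M, L)
        # Diagonal of the per-column posterior covariance
        Sigma_diag = gamma - np.einsum(
            'ij,ji->i', Gamma @ A.conj().T @ Cinv, A @ Gamma).real
        # M-step: update row variances from posterior statistics
        gamma = np.sum(np.abs(X) ** 2, axis=1) / L + Sigma_diag
    return gamma, X
```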
-
We present a general Bernoulli Gaussian scale mixture based approach for modeling priors that can represent a large class of random signals. For inference, we introduce belief propagation (BP) to multi-snapshot signal recovery based on the minimum mean square error estimation criterion. Our method relies on intra-snapshot messages that update the signal vector for each snapshot and inter-snapshot messages that share probabilistic information related to the common sparsity structure across snapshots. Despite the very general model, our BP method can efficiently compute accurate approximations of marginal posterior PDFs. Preliminary numerical results illustrate the superior convergence rate and improved performance of the proposed method compared to approaches based on sparse Bayesian learning (SBL).
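As a toy illustration of the inter-snapshot message idea, one common way BP handles a support variable shared across snapshots is to combine extrinsic evidence in a leave-one-out fashion. The sketch below is a hypothetical scalar simplification; the paper's BP operates on full message PDFs, not scalar log-likelihood ratios.

```python
import numpy as np

def inter_snapshot_messages(llr_intra, prior_llr=0.0):
    """Toy sketch: each snapshot produces an extrinsic log-likelihood ratio
    (LLR) for every entry being active; a shared Bernoulli support belief
    sums this evidence across snapshots, and each snapshot receives the
    total minus its own contribution (leave-one-out extrinsic message).

    llr_intra: (M, L) per-entry, per-snapshot extrinsic LLRs.
    Returns (M, L) inter-snapshot messages back to each snapshot.
    """
    total = prior_llr + llr_intra.sum(axis=1, keepdims=True)
    return total - llr_intra
```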
-
Sparse Bayesian Learning (SBL) is a popular sparse signal recovery method, and various algorithms exist under the SBL paradigm. In this paper, we introduce a novel re-parameterization that allows the iterations of existing algorithms to be viewed as special cases of a unified and general mapping function. Furthermore, the re-parameterization enables an interesting beamforming interpretation that lends insight into all the considered algorithms. Utilizing the abstraction afforded by the general mapping viewpoint, we introduce a novel neural network architecture for learning improved iterative update rules under the SBL framework. The modular design of the architecture makes the model independent of the size of the measurement matrix and provides a unique opportunity to test generalization across different measurement matrices. We show that the network, when trained on a particular parameterized dictionary, generalizes in ways hitherto not possible: across measurement matrices of different types and dimensions, and across numbers of snapshots. Our numerical results showcase the generalization capability of the network in terms of mean square error and probability of support recovery across sparsity levels, signal-to-noise ratios, numbers of snapshots, and multiple measurement matrices of different sizes.
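A minimal sketch of how such size-independence can be achieved: a learned per-index update cell maps a few scalar statistics to a new hyperparameter, so the same weights apply for any dictionary size. The architecture and feature choices below are hypothetical, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GammaUpdateCell(nn.Module):
    """Hypothetical learned SBL update rule: each hyperparameter gamma_i is
    updated from a small set of per-index statistics, so the module is
    independent of the measurement matrix dimensions."""

    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())  # keep gamma positive

    def forward(self, feats):
        # feats: (M, n_features) per-index statistics, e.g. posterior mean
        # power, posterior variance, and the current gamma (log-scaled)
        return self.mlp(feats).squeeze(-1)        # new gamma, shape (M,)
```

Because the cell acts index-wise, the same trained weights can be applied to dictionaries of any size, which is what enables testing generalization across measurement matrices.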
-
Localizing more sources than sensors with a sparse linear array (SLA) has long relied on minimizing a distance between two covariance matrices, and recent algorithms often utilize semidefinite programming (SDP). Although deep neural network (DNN)-based methods offer new alternatives, they still depend on covariance matrix fitting. In this paper, we develop a novel methodology that estimates the co-array subspaces from a sample covariance for SLAs. Our methodology trains a DNN to learn signal and noise subspace representations that are invariant to the selection of bases. To learn such representations, we propose loss functions that gauge the separation between the desired and the estimated subspaces. In particular, we propose losses that measure the length of the shortest path between subspaces viewed on a union of Grassmannians, and prove that it is possible for a DNN to approximate signal subspaces. The learning of subspaces of different dimensions is accelerated by a new batch sampling strategy called consistent rank sampling. The methodology is robust to array imperfections because of its geometry-agnostic and data-driven nature. In addition, we propose a fully end-to-end gridless approach that directly learns angles, to study the possibility of bypassing subspace methods. Numerical results show that learning such subspace representations is more beneficial than learning covariances or angles. It outperforms conventional SDP-based methods such as the sparse and parametric approach (SPA) and existing DNN-based covariance reconstruction methods for a wide range of signal-to-noise ratios (SNRs), snapshots, and source numbers, for both perfect and imperfect arrays.
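One standard basis-invariant ingredient such a loss can build on is the geodesic distance between two subspaces on the Grassmannian, computed from principal angles. The sketch below shows this classical formula; the paper's loss over a union of Grassmannians may differ in detail.

```python
import numpy as np

def grassmann_distance(U1, U2):
    """Geodesic distance between two k-dimensional subspaces of C^N on the
    Grassmannian (a sketch of a basis-invariant subspace distance).

    U1, U2: (N, k) matrices with orthonormal columns spanning the subspaces.
    Returns sqrt of the sum of squared principal angles.
    """
    s = np.linalg.svd(U1.conj().T @ U2, compute_uv=False)
    s = np.clip(s, -1.0, 1.0)   # guard against round-off outside [-1, 1]
    theta = np.arccos(s)        # principal angles between the subspaces
    return np.sqrt(np.sum(theta ** 2))
```

The singular values of U1^H U2 depend only on the column spans, not on the particular orthonormal bases chosen, which is what makes the distance invariant to the selection of bases.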
-
We examine the problem of uplink cell-free access point (AP) placement in the context of optimal throughput. In this regard, we formulate two main placement problems, namely the sum rate and minimum rate maximization problems, and discuss the challenges associated with solving the underlying optimization problems with the help of some simple scenarios. As a practical solution to the AP placement problem, we suggest a vector quantization (VQ) approach. The suitability of the VQ approach to cell-free AP placement is investigated by examining three VQ-based solutions. First, the standard VQ approach, that is, the Lloyd algorithm with the squared-error distortion function, is described. Second, the tree-structured VQ (TSVQ), which performs successive partitioning of the distribution space, is applied. Third, a probability density function optimized VQ (PDFVQ) procedure, enabling efficient, low-complexity, and scalable placement, is outlined; it is aimed at a massive distributed multiple-input multiple-output scenario. While the VQ-based solutions do not explicitly solve the cell-free AP placement problems, numerical experiments show that their sum and minimum rate performances are good and that they offer a good starting point for gradient-based optimization methods. Among the VQ solutions, PDFVQ, with its distinct advantages, offers a good trade-off between sum and minimum rates.
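A minimal sketch of the first solution, the Lloyd algorithm with squared-error distortion, applied to AP placement: APs play the role of codevectors and sampled user locations the role of source samples. Variable names and the user-location sampling are illustrative.

```python
import numpy as np

def lloyd_ap_placement(user_locs, n_aps, n_iter=50, seed=None):
    """Lloyd algorithm (squared-error distortion) for AP placement: APs are
    codevectors quantizing the user-location distribution (a sketch).

    user_locs: (n_users, 2) samples of user positions.
    Returns (n_aps, 2) AP positions.
    """
    rng = np.random.default_rng(seed)
    # Initialize APs at randomly chosen user locations
    aps = user_locs[rng.choice(len(user_locs), n_aps, replace=False)].astype(float)
    for _ in range(n_iter):
        # Nearest-AP partition of the user samples (squared-error distortion)
        d2 = ((user_locs[:, None, :] - aps[None, :, :]) ** 2).sum(axis=-1)
        assign = d2.argmin(axis=1)
        # Centroid condition: move each AP to the mean of its region
        for k in range(n_aps):
            members = user_locs[assign == k]
            if len(members) > 0:
                aps[k] = members.mean(axis=0)
    return aps
```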