Localizing more sources than sensors with a sparse linear array (SLA) has long relied on minimizing a distance between two covariance matrices, and recent algorithms often rely on semidefinite programming (SDP). Although deep neural network (DNN)-based methods offer new alternatives, they still depend on covariance matrix fitting. In this paper, we develop a novel methodology that estimates the co-array subspaces from a sample covariance for SLAs. Our methodology trains a DNN to learn signal and noise subspace representations that are invariant to the selection of bases. To learn such representations, we propose loss functions that gauge the separation between the desired and the estimated subspaces. In particular, we propose losses that measure the length of the shortest path between subspaces viewed on a union of Grassmannians, and prove that it is possible for a DNN to approximate signal subspaces. The computation of learning subspaces of different dimensions is accelerated by a new batch-sampling strategy called consistent rank sampling. The methodology is robust to array imperfections because it is geometry-agnostic and data-driven. In addition, we propose a fully end-to-end gridless approach that directly learns angles, to study the possibility of bypassing subspace methods. Numerical results show that learning such subspace representations is more beneficial than learning covariances or angles: the proposed approach outperforms conventional SDP-based methods such as the sparse and parametric approach (SPA), as well as existing DNN-based covariance reconstruction methods, over a wide range of signal-to-noise ratios (SNRs), snapshot counts, and source numbers, for both perfect and imperfect arrays.
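The basis-invariant subspace loss described above can be illustrated with a minimal sketch: the geodesic distance between two subspaces on a Grassmannian is the norm of their principal angles, which are computable from an SVD. The function name and details here are illustrative, not the paper's implementation.

```python
import numpy as np

def subspace_distance(A, B):
    """Geodesic distance on the Grassmannian between the column
    spaces of A and B.  Invariant to the choice of basis (and hence
    to column scaling), which is the key property of a subspace loss."""
    # Orthonormalize the spanning sets.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^H Qb are the cosines of the principal angles.
    s = np.linalg.svd(Qa.conj().T @ Qb, compute_uv=False)
    theta = np.arccos(np.clip(s, 0.0, 1.0))
    # The geodesic length is the Euclidean norm of the principal angles.
    return np.linalg.norm(theta)
```

Because only the spans matter, two bases of the same subspace yield distance zero, which is exactly the invariance the loss is designed to provide.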
This content will become publicly available on April 6, 2026
A Comparative Study of Invariance-Aware Loss Functions for Deep Learning-based Gridless Direction-of-Arrival Estimation
Covariance matrix reconstruction has been the most widely used guiding objective in gridless direction-of-arrival (DoA) estimation for sparse linear arrays. Many semidefinite programming (SDP)-based methods fall under this category. Although deep learning-based approaches enable the construction of more sophisticated objective functions, most methods still rely on covariance matrix reconstruction. In this paper, we propose new loss functions that are invariant to the scaling of the matrices and provide a comparative study of losses with varying degrees of invariance. The proposed loss functions are formulated based on the scale-invariant signal-to-distortion ratio between the target matrix and the Gram matrix of the prediction. Numerical results show that a scale-invariant loss outperforms its non-invariant counterpart but is inferior to the recently proposed subspace loss that is invariant to the change of basis. These results provide evidence that designing loss functions with greater degrees of invariance is advantageous in deep learning-based gridless DoA estimation.
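A scale-invariant SDR between a target matrix and the Gram matrix of a prediction, as described above, can be sketched as follows. The vectorization and the sign of the loss are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def si_sdr_loss(T, P):
    """Negative scale-invariant SDR between a target matrix T and the
    Gram matrix P P^H of a prediction P, computed on the vectorized
    matrices.  Assumes the reconstruction is imperfect (nonzero residual)."""
    G = P @ P.conj().T                       # Gram matrix of the prediction
    t = T.reshape(-1)
    g = G.reshape(-1)
    # Optimal scaling of the target onto the estimate removes any
    # global scale mismatch, making the loss scale-invariant.
    alpha = np.vdot(t, g) / np.vdot(t, t)
    target = alpha * t
    noise = g - target
    sdr = 10.0 * np.log10(np.real(np.vdot(target, target))
                          / np.real(np.vdot(noise, noise)))
    return -sdr                              # minimize the negative SDR
```

Scaling either the target or the prediction leaves the loss unchanged, which is the invariance property the comparison in the paper is about.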
- PAR ID: 10599160
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 979-8-3503-6874-1
- Page Range / eLocation ID: 1 to 5
- Subject(s) / Keyword(s): direction of arrival; sparse linear arrays; array processing; deep learning; neural networks
- Format(s): Medium: X
- Location: Hyderabad, India
- Sponsoring Org: National Science Foundation
More Like this
-
Parameter estimation from noisy, one-bit quantized data has become an important topic in signal processing, as it offers low cost and low implementation complexity. Meanwhile, Direction-of-Arrival (DoA) estimation using Sparse Linear Arrays (SLAs) has recently gained considerable interest in array processing due to their attractive capability of providing enhanced degrees of freedom. In this paper, the problem of DoA estimation from one-bit measurements received by an SLA is considered, and a novel framework for solving this problem is proposed. The proposed approach first estimates the received-signal covariance matrix by minimizing a constrained weighted least-squares criterion. MUSIC is then applied to the spatially smoothed version of the estimated covariance matrix to find the DoAs of interest. Several numerical results demonstrate the superiority of the proposed approach over its counterpart in the literature.
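The second stage of the pipeline above (MUSIC applied to an estimated covariance) can be sketched for a half-wavelength uniform linear array; in the paper, the covariance would come from the one-bit weighted least-squares stage and spatial smoothing, whereas here an ideal covariance is assumed for illustration.

```python
import numpy as np

def music_spectrum(R, num_sources, grid_deg):
    """Illustrative MUSIC pseudo-spectrum for a half-wavelength ULA,
    given a (possibly spatially smoothed) covariance estimate R."""
    M = R.shape[0]
    w, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = V[:, : M - num_sources]        # noise-subspace eigenvectors
    spectrum = []
    for theta in grid_deg:
        # Steering vector of a half-wavelength-spaced ULA.
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta)))
        # Peaks occur where the steering vector is orthogonal
        # to the noise subspace.
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / denom)
    return np.array(spectrum)
```

DoA estimates are then read off as the locations of the largest peaks of the returned pseudo-spectrum.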
-
In this paper, we revisit the framework of maximum likelihood estimation (MLE) for parametric models, with the aim of estimating the parameter of interest in a gridless manner. The approach has inherent connections to the sparse Bayesian learning (SBL) formulation and naturally leads to the problem of structured matrix recovery (SMR). We therefore pose the parameter estimation problem as an SMR problem and recover the parameter of interest by appealing to the Carathéodory-Fejér result on the decomposition of positive semi-definite Toeplitz matrices. We propose an iterative algorithm to estimate the structured covariance matrix, in which each iteration solves a semi-definite program. We numerically compare the performance with other gridless schemes in the literature and demonstrate the superior performance of the proposed technique.
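The SDP iterations themselves require a solver, but the structure being enforced (a positive semi-definite Toeplitz covariance, per the Carathéodory-Fejér result) can be sketched with a simple alternating-projection heuristic. This is a stand-in for illustration, not the paper's algorithm.

```python
import numpy as np

def nearest_psd_toeplitz(M, iters=50):
    """Heuristic structured-matrix-recovery sketch: alternate between
    projecting onto Hermitian Toeplitz matrices and onto the PSD cone."""
    X = M.astype(complex)
    n = X.shape[0]
    idx = np.subtract.outer(np.arange(n), np.arange(n))   # i - j
    for _ in range(iters):
        # Projection onto Hermitian Toeplitz: average each diagonal,
        # pairing the k-th sub- and super-diagonals conjugately.
        c = np.array([(np.mean(np.diag(X, -k))
                       + np.conj(np.mean(np.diag(X, k)))) / 2
                      for k in range(n)])
        X = np.where(idx >= 0, c[np.abs(idx)], np.conj(c[np.abs(idx)]))
        # Projection onto the PSD cone: clip negative eigenvalues.
        w, V = np.linalg.eigh((X + X.conj().T) / 2)
        X = (V * np.clip(w, 0.0, None)) @ V.conj().T
    return X
```

For inputs close to a positive definite Toeplitz matrix, the iteration settles on a matrix that satisfies both constraints, which is the feasible set the SDP in the paper searches over.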
-
Gridless direction-of-arrival (DOA) estimation with multiple frequencies can be applied to acoustic source localization problems. We formulate this as an atomic norm minimization (ANM) problem and derive an equivalent regularization-free semi-definite program (SDP), thereby avoiding regularization bias. The DOA is retrieved using a Vandermonde decomposition of the Toeplitz matrix obtained from the solution of the SDP. We also propose a fast SDP program to handle non-uniform array and frequency spacing. For non-uniform spacings, the Toeplitz structure does not exist, but the DOA can still be retrieved via an irregular Vandermonde decomposition (IVD), whose existence we guarantee theoretically. We extend ANM to the multiple-measurement-vector (MMV) case and derive its equivalent regularization-free SDP. Using multiple frequencies and the MMV model, we can resolve more sources than the number of physical sensors of a uniform linear array. Numerical results demonstrate that the regularization-free framework is robust to noise and aliasing and overcomes the regularization bias.
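The retrieval step above — recovering frequencies from the Vandermonde decomposition of a rank-deficient PSD Toeplitz matrix — can be sketched via the shift invariance of the signal subspace (an ESPRIT-style computation). This is an illustrative sketch for the uniform case, not the paper's irregular Vandermonde decomposition (IVD).

```python
import numpy as np

def vandermonde_frequencies(T, r):
    """Recover the r frequencies of a rank-r PSD Toeplitz matrix
    T = A diag(p) A^H, where A is a Vandermonde matrix with columns
    exp(2j*pi*f_k*n), using the shift invariance of its range space."""
    _, V = np.linalg.eigh(T)
    Us = V[:, -r:]                       # dominant (signal) eigenvectors
    # Shift invariance: Us[1:] = Us[:-1] @ Phi, where the eigenvalues
    # of Phi are exp(2j*pi*f_k).
    Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    f = np.angle(np.linalg.eigvals(Phi)) / (2.0 * np.pi)
    return np.sort(f)
```

In the noiseless rank-r case this recovers the frequencies exactly, which is the content of the classical Vandermonde (Carathéodory) decomposition.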
-
We consider the problem of estimating the difference between two Gaussian graphical models (GGMs) that are known to have similar structure. The GGM structure is encoded in its precision (inverse covariance) matrix. In many applications, one is interested in estimating the difference between two precision matrices to characterize underlying changes in the conditional dependencies of two data sets. Most existing methods for differential graph estimation are based on a lasso-penalized loss function. In this paper, we analyze a log-sum-penalized D-trace loss function approach for differential graph learning. An alternating direction method of multipliers (ADMM) algorithm is presented to optimize the objective function. A theoretical analysis establishing consistency of estimation in high-dimensional settings is provided. We illustrate our approach with a numerical example in which the log-sum-penalized D-trace loss significantly outperforms the lasso-penalized D-trace loss as well as the smoothly clipped absolute deviation (SCAD)-penalized D-trace loss.
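The objective being optimized can be sketched as the differential D-trace loss plus a log-sum penalty. The scaling constants and sign conventions below follow one common form of the D-trace loss and are assumptions; the paper's exact formulation and ADMM solver are not reproduced here.

```python
import numpy as np

def dtrace_logsum(Delta, S1, S2, lam=0.1, eps=1e-3):
    """Differential D-trace loss with a log-sum penalty.
    Delta: candidate difference of the two precision matrices.
    S1, S2: sample covariance matrices of the two data sets.
    lam, eps: illustrative penalty weight and smoothing constant."""
    # Convex quadratic D-trace term; with this convention its
    # unpenalized minimum is attained at a true precision difference.
    loss = 0.5 * np.trace(S1 @ Delta @ S2 @ Delta) \
           - np.trace(Delta @ (S1 - S2))
    # Log-sum penalty: a nonconvex sparsity surrogate that penalizes
    # large entries less than the lasso does.
    pen = lam * np.sum(np.log1p(np.abs(Delta) / eps))
    return loss + pen
```

With `lam = 0` the objective is a convex quadratic in `Delta`, which is why each ADMM subproblem in such schemes remains tractable despite the nonconvex penalty.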
