Title: On ESPRIT with Multiple Coprime-Invariances
In conventional ESPRIT, a single translational invariance in a sensor array is used to obtain high-resolution direction-of-arrival (DOA) estimation. However, when the invariance is greater than the classical sensor spacing λ/2, spatial frequency ambiguity may occur. In this paper, we propose to use multiple setwise coprime invariances to resolve this ambiguity. While special cases of this were known in the literature, our algorithm is more general in that it handles any number of invariances and can perfectly recover any number of DOAs (limited only by the number of sensors) when infinite snapshots are available. We also demonstrate through simulation that our algorithm works well in the practical setting where only finitely many snapshots are available.
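The ambiguity-resolution idea can be illustrated with a small numeric toy. This is a hedged sketch, not the paper's algorithm: assume an invariance of m half-wavelength units lets ESPRIT recover only the wrapped phase (m·ω) mod 2π of a spatial frequency ω, so each invariance leaves a grid of candidate frequencies; with coprime invariances the grids intersect in a single point. All names and parameter values below are illustrative.

```python
import numpy as np

# Toy sketch: each invariance m yields only phase = (m * omega) mod 2pi,
# i.e. a grid of candidate frequencies spaced 2pi/m apart; coprime
# invariances m1, m2 share exactly one candidate, the true omega.

def candidates(phase, m):
    """All omega in [-pi, pi) consistent with phase = (m*omega) mod 2pi."""
    k = np.arange(-m, m + 1)
    c = (phase + 2 * np.pi * k) / m
    return c[(c >= -np.pi) & (c < np.pi)]

def resolve(phase1, m1, phase2, m2):
    """Pick the pair of candidates that (nearly) coincides."""
    c1, c2 = candidates(phase1, m1), candidates(phase2, m2)
    d = np.abs(c1[:, None] - c2[None, :])
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return 0.5 * (c1[i] + c2[j])

omega = 1.9                                  # true spatial frequency
m1, m2 = 3, 4                                # coprime invariances
wrap = lambda x: np.angle(np.exp(1j * x))    # wrap phase into [-pi, pi)
est = resolve(wrap(m1 * omega), m1, wrap(m2 * omega), m2)
print(est)  # recovers 1.9
```

With noisy phase estimates the two grids no longer intersect exactly, which is why the sketch takes the closest pair rather than an exact match.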
Award ID(s):
1712633
PAR ID:
10148059
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Asilomar Conference on Signals, Systems, and Computers
Page Range / eLocation ID:
148 to 152
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Invariance (defined in a general sense) has been one of the most effective priors for representation learning. Direct factorization of parametric models is feasible only for a small range of invariances, while regularization approaches, despite improved generality, lead to nonconvex optimization. In this work, we develop a convex representation learning algorithm for a variety of generalized invariances that can be modeled as semi-norms. Novel Euclidean embeddings are introduced for kernel representers in a semi-inner-product space, and approximation bounds are established. This allows invariant representations to be learned efficiently and effectively as confirmed in our experiments, along with accurate predictions. 
  2.
    We show that the Invariant Risk Minimization (IRM) formulation of Arjovsky et al. (2019) can fail to capture “natural” invariances, at least when used in its practical “linear” form, and even on very simple problems which directly follow the motivating examples for IRM. This can lead to worse generalization on new environments, even when compared to unconstrained ERM. The issue stems from a significant gap between the linear variant (as in their concrete method IRMv1) and the full non-linear IRM formulation. Additionally, even when capturing the “right” invariances, we show that it is possible for IRM to learn a sub-optimal predictor, due to the loss function not being invariant across environments. The issues arise even when measuring invariance on the population distributions, but are exacerbated by the fact that IRM is extremely fragile to sampling. 
  3. Blind sensor calibration for spectrum estimation is the problem of simultaneously estimating the unknown sensor calibration parameters and the parameters of interest of the impinging signals from snapshots of measurements obtained from an array of sensors. In this paper, we consider the blind phase and gain calibration (BPGC) problem for direction-of-arrival estimation with multiple snapshots of measurements obtained from a uniform array of sensors, where each sensor is perturbed by an unknown gain and phase parameter. Due to the unknown sensor and signal parameters, BPGC is a highly nonlinear problem. Assuming that the sources are uncorrelated, the covariance matrix of the measurements in a perfectly calibrated array is a Toeplitz matrix. Leveraging this fact, we first convert the nonlinear problem into a linear problem in a certain rank-one positive semidefinite matrix, and then propose a non-convex optimization approach that finds the factor of the rank-one matrix under a unit-norm constraint to avoid trivial solutions. Numerical experiments demonstrate that our proposed non-convex optimization approach provides better or competitive recovery performance compared to existing methods in the literature, without requiring any tuning parameters.
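A tiny numerical illustration (with hypothetical array parameters, not the paper's optimization method) of the Toeplitz fact the abstract leverages: for uncorrelated sources on a perfectly calibrated uniform linear array the covariance is Toeplitz, while unknown per-sensor gains and phases destroy that structure.

```python
import numpy as np

# Illustrative parameters only. Calibrated covariance: R = A P A^H is
# Toeplitz for a half-wavelength ULA with uncorrelated sources; unknown
# gains/phases g perturb it to R_pert = diag(g) R diag(g)^H.

rng = np.random.default_rng(0)
n = 6
doas = np.array([0.3, -0.8])        # source angles (radians)
powers = np.array([1.0, 0.5])       # uncorrelated source powers

k = np.arange(n)[:, None]
A = np.exp(1j * np.pi * k * np.sin(doas)[None, :])   # ULA steering matrix
R = A @ np.diag(powers) @ A.conj().T                 # calibrated covariance

g = rng.normal(1, 0.1, n) * np.exp(1j * rng.uniform(-0.2, 0.2, n))
R_pert = np.diag(g) @ R @ np.diag(g).conj().T        # perturbed covariance

def toeplitz_err(M):
    """Max deviation of any diagonal from its mean (0 iff Toeplitz)."""
    m = len(M)
    return max(np.max(np.abs(np.diag(M, d) - np.diag(M, d).mean()))
               for d in range(1 - m, m))

print(toeplitz_err(R))       # essentially zero
print(toeplitz_err(R_pert))  # clearly nonzero
```

The structured-to-unstructured gap is exactly what makes the calibration parameters identifiable from the measured covariance in the first place.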
  4. We give an $$\widetilde{O}(\sqrt{n})$$-space single-pass 0.483-approximation streaming algorithm for estimating the maximum directed cut size (Max-DICUT) in a directed graph on n vertices. This improves over an $$O(\log n)$$-space $4/9 < 0.45$ approximation algorithm due to Chou, Golovnev, and Velusamy (FOCS 2020), which was known to be optimal for $$o(\sqrt{n})$$-space algorithms. Max-DICUT is a special case of a constraint satisfaction problem (CSP). In this broader context, we give the first CSP for which $$\widetilde{O}(\sqrt{n})$$-space algorithms can provably outperform $$o(\sqrt{n})$$-space algorithms. The key technical contribution of our work is the development of the notions of a first-order snapshot of a (directed) graph and of estimates of such snapshots. These snapshots can be used to simulate certain (non-streaming) Max-DICUT algorithms, including the “oblivious” algorithms introduced by Feige and Jozeph (Algorithmica, 2015), who showed that one such algorithm achieves a 0.483-approximation. Previous work of the authors (SODA 2023) studied the restricted case of bounded-degree graphs and observed that in this setting it is straightforward to estimate the snapshot with $$\ell_{1}$$ errors, which suffices to simulate oblivious algorithms. But for unbounded-degree graphs, even defining an achievable and sufficient notion of estimation is subtle. We describe a new notion of snapshot estimation, prove its sufficiency using careful smoothing techniques, and then develop an algorithm which sketches such an estimate via a delicate process of intertwined vertex- and edge-subsampling. Prior to our work, the only streaming algorithms for any CSP on general instances were based on generalizations of the $$O(\log n)$$-space algorithm for Max-DICUT, and can roughly be characterized as based on “zeroth-order” snapshots. Our work thus opens the possibility of a new class of algorithms for approximating CSPs by demonstrating that more sophisticated snapshots can outperform cruder ones in the case of Max-DICUT.
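As a loose illustration of the “oblivious” algorithms being simulated: each vertex joins one side of the cut independently, with a probability that depends only on its normalized bias (out-degree minus in-degree). The bias-to-probability rule below is an arbitrary illustrative choice, not the optimal rule from Feige and Jozeph's paper, and all names are hypothetical.

```python
import random

# Sketch of an oblivious Max-DICUT heuristic: assignment depends only on
# per-vertex bias, never on the global graph structure. Side S collects
# vertices that cut edges *out*; the cut counts edges from S to its complement.

def oblivious_dicut(edges, p_of_bias=lambda b: 0.5 + 0.4 * b, seed=0):
    rnd = random.Random(seed)
    out_deg, in_deg = {}, {}
    for u, v in edges:
        out_deg[u] = out_deg.get(u, 0) + 1
        in_deg[v] = in_deg.get(v, 0) + 1
    side = {}
    for v in sorted(set(out_deg) | set(in_deg)):
        o, i = out_deg.get(v, 0), in_deg.get(v, 0)
        bias = (o - i) / (o + i)              # in [-1, 1]
        side[v] = rnd.random() < p_of_bias(bias)   # True means v is in S
    return sum(1 for u, v in edges if side[u] and not side[v])

edges = [(0, 1), (0, 2), (3, 1), (3, 2)]  # sources 0 and 3, sinks 1 and 2
print(oblivious_dicut(edges))  # cuts all 4 edges with this seed
```

Because the decision for each vertex needs only its bias, a streaming algorithm that can estimate the distribution of biases (a snapshot, in the abstract's terminology) can estimate the cut value such a rule would achieve.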
  5. Adrian Weller (Ed.)
    A wide range of machine learning applications, such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization, involve learning invariant representations of the data that aim to achieve two competing goals: (a) maximize information or accuracy with respect to a target response, and (b) maximize invariance or independence with respect to a set of protected features (e.g., for fairness or privacy). Despite their wide applicability, theoretical understanding of the optimal tradeoffs between accuracy and invariance achievable by invariant representations is still severely lacking. In this paper, we provide an information-theoretic analysis of such tradeoffs under both classification and regression settings. More precisely, we provide a geometric characterization of the accuracy and invariance achievable by any representation of the data; we term this feasible region the information plane. We provide an inner bound for this feasible region in the classification case and an exact characterization in the regression case, which allows us to either bound or exactly characterize the Pareto optimal frontier between accuracy and invariance. Although our contributions are mainly theoretical, a key practical application of our results is in certifying the potential sub-optimality of any given representation learning algorithm for either classification or regression tasks. Our results shed new light on the fundamental interplay between accuracy and invariance, and may be useful in guiding the design of future representation learning algorithms.