Low-Complexity Blind Parameter Estimation in Wireless Systems with Noisy Sparse Signals
Baseband processing algorithms often require knowledge of the noise power, signal power, or signal-to-noise ratio (SNR). In practice, these parameters are typically unknown and must be estimated. Furthermore, the mean-square error (MSE) is a desirable metric to be minimized in a variety of estimation and signal recovery algorithms. However, the MSE cannot directly be used as it depends on the true signal that is generally unknown to the estimator. In this paper, we propose novel blind estimators for the average noise power, average receive signal power, SNR, and MSE. The proposed estimators can be computed at low complexity and solely rely on the large-dimensional and sparse nature of the processed data. Our estimators can be used (i) to quickly track some of the key system parameters while avoiding additional pilot overhead, (ii) to design low-complexity nonparametric algorithms that require such quantities, and (iii) to accelerate more sophisticated estimation or recovery algorithms. We conduct a theoretical analysis of the proposed estimators for a Bernoulli complex Gaussian (BCG) prior, and we demonstrate their efficacy via synthetic experiments. We also provide three application examples that deviate from the BCG prior in millimeter-wave multi-antenna and cell-free wireless systems for which we develop nonparametric denoising algorithms that improve channel-estimation accuracy with a performance comparable to denoisers that assume perfect knowledge of the system parameters.
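The paper's actual estimators are not reproduced in this record, but the core idea can be sketched: in a large sparse vector most entries are noise-only, so order statistics of the per-entry powers reveal the noise floor, and the receive power and SNR follow from it. The toy BCG-style model, the trimmed-mean construction, and every constant below (N, p, q, target SNR) are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sparse-signal model in the spirit of a Bernoulli complex
# Gaussian (BCG) prior: each of N entries is active with probability p,
# and the additive noise is CN(0, N0). All constants are made up.
N, p, N0 = 4096, 0.05, 1.0
snr_true = 10.0                                   # target linear SNR
sig_var = snr_true * N0 / p                       # per-active-entry variance
active = rng.random(N) < p
x = np.zeros(N, dtype=complex)
x[active] = np.sqrt(sig_var / 2) * (rng.standard_normal(active.sum())
                                    + 1j * rng.standard_normal(active.sum()))
w = np.sqrt(N0 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = x + w

# Blind noise-power estimate: average the smallest fraction q of the
# per-entry powers (assumed mostly noise-only), then undo the bias of
# trimming the tail of an exponential distribution with the factor c.
q = 0.85
k = int(q * N)
powers = np.sort(np.abs(y) ** 2)
c = 1.0 + (1.0 - q) / q * np.log(1.0 - q)         # E[trimmed mean] / N0
n0_hat = powers[:k].mean() / c

rx_power = np.mean(np.abs(y) ** 2)                # average receive power
sig_power = max(rx_power - n0_hat, 0.0)           # blind signal-power estimate
snr_hat = sig_power / n0_hat                      # blind SNR estimate
```

The estimate is slightly biased upward because the trimmed set still contains a few signal-bearing entries when the sparsity level is unknown; the paper's estimators address this more carefully.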
- PAR ID: 10490321
- Publisher / Repository: IEEE Transactions on Wireless Communications
- Date Published:
- Journal Name: IEEE Transactions on Wireless Communications
- Volume: 22
- Issue: 10
- ISSN: 1536-1276
- Page Range / eLocation ID: 7055 to 7071
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
This paper proposes an iterative method of estimating power system forced oscillation (FO) amplitude, frequency, phase, and start/stop times from measured data. It combines three algorithms with favorable asymptotic statistical properties: a periodogram-based iterative frequency estimator, a Discrete-Time Fourier Transform (DTFT)-based method of estimating amplitude and phase, and a changepoint detection (CPD) method for estimating the FO start and stop samples. Each of these has been shown in the literature to be an approximate maximum likelihood estimator (MLE), meaning that for a large enough sample size or signal-to-noise ratio (SNR), it can be unbiased and reach the Cramér-Rao Lower Bound in variance. The proposed method is shown through Monte Carlo simulations of a low-order model of the Western Electricity Coordinating Council (WECC) power system to achieve statistical efficiency for low SNR values. The proposed method is validated with data measured from the January 11, 2019 US Eastern Interconnection (EI) FO event. It is shown to accurately extract the FO parameters and remove electromechanical mode meter bias, even with a time-varying FO amplitude.
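The periodogram-then-DTFT idea behind the first two estimators can be illustrated on a single synthetic tone: take the FFT peak as a coarse frequency, refine it by maximizing the DTFT magnitude on a fine grid, then read amplitude and phase from the DTFT value at the refined frequency. The sample rate, tone parameters, and grid refinement below are made-up illustration values, not the paper's algorithm or its event data.

```python
import numpy as np

# Illustrative single-tone "forced oscillation": made-up parameters.
fs = 30.0                                        # sample rate (Hz)
t = np.arange(0.0, 20.0, 1.0 / fs)               # 20 s measurement window
f0, a0, ph0 = 0.37, 1.5, 0.8                     # true frequency/amplitude/phase
rng = np.random.default_rng(1)
x = a0 * np.cos(2 * np.pi * f0 * t + ph0) + 0.1 * rng.standard_normal(t.size)

n = t.size
# Step 1: coarse frequency from the periodogram (FFT magnitude) peak.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
kpk = 1 + np.argmax(np.abs(X[1:]))               # skip the DC bin
f_coarse = freqs[kpk]

# Step 2: refine by maximizing the DTFT magnitude on a fine grid around
# the coarse peak (a simple stand-in for iterative refinement).
grid = np.linspace(f_coarse - fs / n, f_coarse + fs / n, 401)
dtft = np.exp(-2j * np.pi * np.outer(grid, t)) @ x
f_hat = grid[np.argmax(np.abs(dtft))]

# Step 3: amplitude and phase from the DTFT value at the refined frequency.
z = 2.0 / n * np.sum(x * np.exp(-2j * np.pi * f_hat * t))
a_hat, ph_hat = np.abs(z), np.angle(z)
```

The start/stop-time (changepoint) stage of the paper's method is not sketched here.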
-
Since its development, the minimax framework has been one of the cornerstones of theoretical statistics, and has contributed to the popularity of many well-known estimators, such as the regularized M-estimators for high-dimensional problems. In this paper, we first show, through the example of the sparse Gaussian sequence model, that theoretical results under the classical minimax framework are insufficient for explaining empirical observations. In particular, both hard and soft thresholding estimators are (asymptotically) minimax; however, in practice they often exhibit sub-optimal performance at various signal-to-noise ratio (SNR) levels. The first contribution of this paper is to demonstrate that this issue can be resolved if the signal-to-noise ratio is taken into account in the construction of the parameter space. We call the resulting minimax framework the signal-to-noise ratio aware minimaxity. The second contribution of this paper is to showcase how one can use higher-order asymptotics to obtain accurate approximations of the SNR-aware minimax risk and discover minimax estimators. The theoretical findings obtained from this refined minimax framework provide new insights and practical guidance for the estimation of sparse signals.
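The hard and soft thresholding estimators mentioned above are easy to state in the sparse Gaussian sequence model. The sketch below uses made-up dimensions, a made-up signal level, and the classical universal threshold sqrt(2 log n), not the SNR-aware choices the paper develops; at this fairly high per-coordinate SNR, hard thresholding typically attains lower MSE than soft.

```python
import numpy as np

# Sparse Gaussian sequence model: y_i = theta_i + z_i with z_i ~ N(0, 1).
# Dimensions, sparsity, and signal level mu are illustrative only.
rng = np.random.default_rng(2)
n, k, mu = 1000, 20, 5.0
theta = np.zeros(n)
theta[:k] = mu
y = theta + rng.standard_normal(n)

lam = np.sqrt(2 * np.log(n))                 # classical universal threshold

# Hard thresholding keeps large observations unchanged; soft thresholding
# shrinks everything toward zero by lam, which biases large coordinates.
hard = np.where(np.abs(y) > lam, y, 0.0)
soft = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

mse_hard = np.mean((hard - theta) ** 2)
mse_soft = np.mean((soft - theta) ** 2)
```

At lower signal levels the comparison can reverse, which is exactly the SNR dependence the abstract argues the classical minimax framework fails to capture.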
-
The problem of benign overfitting asks whether it is possible for a model to perfectly fit noisy training data and still generalize well. We study benign overfitting in two-layer leaky ReLU networks trained with the hinge loss on a binary classification task. We consider input data that can be decomposed into the sum of a common signal and a random noise component, which lie on subspaces orthogonal to one another. We characterize conditions on the signal-to-noise ratio (SNR) of the model parameters giving rise to benign versus non-benign (or harmful) overfitting: in particular, if the SNR is high then benign overfitting occurs, and conversely, if the SNR is low then harmful overfitting occurs. We attribute both benign and non-benign overfitting to an approximate margin maximization property and show that leaky ReLU networks trained on the hinge loss with gradient descent (GD) satisfy this property. In contrast to prior work, we do not require the training data to be nearly orthogonal. Notably, for input dimension d and training sample size n, while results in prior work require d = ω(n² log n), here we require only d = ω(n).
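The training setup this abstract studies can be sketched on toy data: labeled points built as a common signal plus noise confined to an orthogonal subspace, fed to a two-layer leaky ReLU network trained on the hinge loss with plain gradient descent. All dimensions, scales, and the fixed second layer below are illustrative assumptions and do not match the paper's regime or constants.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "signal + orthogonal noise" binary data: each point is y * mu plus
# noise restricted to the remaining coordinates, so the signal and noise
# components lie on orthogonal subspaces.
d, n_train = 200, 20
mu = np.zeros(d)
mu[0] = 4.0                                    # signal direction, made-up scale
y = rng.choice([-1.0, 1.0], size=n_train)
noise = rng.standard_normal((n_train, d)) / np.sqrt(d)
noise[:, 0] = 0.0                              # keep noise orthogonal to mu
X = y[:, None] * mu[None, :] + noise

# Two-layer leaky ReLU network f(x) = a . leaky(W x), second layer fixed
# at +-1/m, first layer trained on the hinge loss by gradient descent.
m, alpha, lr = 16, 0.1, 0.05
W = 0.01 * rng.standard_normal((m, d))
a = np.where(np.arange(m) % 2 == 0, 1.0, -1.0) / m

def leaky(z):  return np.where(z > 0, z, alpha * z)
def dleaky(z): return np.where(z > 0, 1.0, alpha)
def f(X):      return leaky(X @ W.T) @ a

for _ in range(500):
    pre = X @ W.T                              # (n_train, m) preactivations
    margins = y * (leaky(pre) @ a)
    active = (margins < 1).astype(float)       # hinge loss: max(0, 1 - y f(x))
    # Gradient of the averaged hinge loss with respect to W.
    g = -(active * y)[:, None] * dleaky(pre) * a[None, :]
    W -= lr * (g.T @ X) / n_train

train_acc = np.mean(np.sign(f(X)) == y)
```

With this strong signal the network fits the training set exactly, which is the "perfect fit" premise of the benign-overfitting question; the abstract's analysis concerns when such interpolation still generalizes.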