In this work, we propose a novel approach for high-accuracy user localization by merging tools from millimeter wave (mmWave) imaging and communications. The key idea of the proposed solution is to leverage mmWave imaging to construct a high-resolution 3D image of the line-of-sight (LOS) and non-line-of-sight (NLOS) objects in the environment at one antenna array. Then, uplink pilot signaling with the user is used to estimate the angle-of-arrival (AoA) and time-of-arrival (ToA) of the dominant channel paths. By projecting the AoA and ToA information onto the 3D mmWave images of the environment, the proposed solution can locate the user with sub-centimeter accuracy. This approach has several advantages. First, it allows accurate simultaneous localization and mapping (SLAM) from a single standpoint, i.e., using only one antenna array. Second, it does not require any prior knowledge of the surrounding environment. Third, it can locate NLOS users, even if their signals experience more than one reflection and without requiring an antenna array at the user. The approach is evaluated using a hardware setup, and its ability to provide sub-centimeter localization accuracy is demonstrated.
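For the simplest LOS case, the projection step reduces to placing the user on the ray that leaves the array at the estimated angles, at the range implied by the ToA. A minimal sketch of that geometry (the function name and coordinate conventions are assumptions for illustration, not taken from the paper):

```python
import math

C = 3e8  # speed of light (m/s)

def locate_from_aoa_toa(azimuth_rad, elevation_rad, toa_s):
    """Project a LOS AoA/ToA estimate onto a 3D point.

    For a direct path, the one-way path length is c * ToA, and the
    user lies on the ray leaving the array at the estimated angles.
    """
    r = C * toa_s  # LOS path length
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)
```

For NLOS paths, the same ray is instead traced through the reflections recorded in the 3D mmWave image rather than extended in a straight line.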
Joint Channel Estimation and Localization for Cooperative Millimeter Wave Systems
Localization is a key application of the promising millimeter wave (mmWave) technology. In this paper, we investigate joint channel estimation and localization for a cooperative mmWave system with several receivers. Due to the strong line-of-sight path common to mmWave channels, one can localize the user by exploiting the signal's angle-of-arrival (AoA). Leveraging a variational Bayesian approach, we obtain soft information about the AoA for each receiver. We then use the soft AoA information and geometrical constraints to localize the user and further improve the channel estimation performance. Numerical results show that the proposed algorithm achieves centimeter-level localization accuracy for an outdoor scene. In addition, the proposed algorithm provides 1-3 dB of channel estimation gain by exploiting the correlation among the receiver channels, depending on the availability of prior information about the path loss model.
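The geometric part of this pipeline can be illustrated with a hard-decision simplification: given one AoA per receiver (rather than the soft AoA information the paper uses), the user's 2D position is the least-squares intersection of the bearing lines. A sketch under those assumptions (all names hypothetical):

```python
import numpy as np

def localize_from_aoas(rx_positions, aoas_rad):
    """Least-squares 2D localization from per-receiver AoA estimates.

    Each receiver at position p_i observes the user along bearing
    theta_i, so the user position u satisfies n_i . (u - p_i) = 0,
    where n_i is the unit normal to the bearing direction.
    """
    A, b = [], []
    for (px, py), th in zip(rx_positions, aoas_rad):
        n = np.array([-np.sin(th), np.cos(th)])  # normal to the bearing
        A.append(n)
        b.append(n @ np.array([px, py]))
    u, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return u
```

With soft AoA information, each row would instead be weighted by the confidence of the corresponding AoA estimate, which is where the variational Bayesian posterior enters.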
- Journal Name: Proc. 2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)
- Page Range or eLocation-ID: 1 to 5
- Sponsoring Org: National Science Foundation
More Like this
Bayesian Iterative Channel Estimation and Turbo Equalization for Multiple-Input–Multiple-Output Underwater Acoustic Communications
This article investigates a robust receiver scheme for single-carrier, multiple-input–multiple-output (MIMO) underwater acoustic (UWA) communications, which uses the sparse Bayesian learning algorithm for iterative channel estimation embedded in turbo equalization (TEQ). We derive a block-wise sparse Bayesian learning framework modeling the spatial correlation of the MIMO UWA channels, where a more robust expectation–maximization algorithm is proposed for updating the joint estimates of channel impulse response, residual noise, and channel covariance matrix. By exploiting the spatially correlated sparsity of MIMO UWA channels and the second-order a priori channel statistics from the training sequence, the proposed Bayesian channel estimator enjoys not only relatively low complexity but also more stable control of the hyperparameters that determine the channel sparsity and recovery accuracy. Moreover, this article proposes a low-complexity space-time soft decision feedback equalizer (ST-SDFE) with successive soft interference cancellation. Evaluated by the undersea 2008 Surface Processes and Acoustic Communications Experiment, the improved sparse Bayesian learning channel estimation algorithm outperforms the conventional Bayesian algorithms in terms of robustness and complexity, while enjoying better estimation accuracy than the orthogonal matching pursuit and the improved proportionate normalized least mean squares algorithms. We have also verified that the proposed ST-SDFE TEQ significantly outperforms …
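A minimal, non-block version of sparse Bayesian learning can illustrate the underlying idea; the paper's block-wise, spatially correlated model is considerably more involved. The classic EM iterations for y = A h + n with independent per-tap prior variances gamma look like this (a simplified sketch, not the paper's algorithm):

```python
import numpy as np

def sbl_channel_estimate(A, y, sigma2, n_iter=50):
    """Basic sparse Bayesian learning for the linear model y = A h + n.

    Alternates between the Gaussian posterior over the channel taps h
    and EM updates of the per-tap prior variances gamma; taps whose
    gamma shrinks toward zero are effectively pruned.
    """
    n = A.shape[1]
    gamma = np.ones(n)
    mu = np.zeros(n)
    for _ in range(n_iter):
        # Posterior over the taps given the current hyperparameters
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / sigma2
        # EM update of the prior variances (floored for stability)
        gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-12)
    return mu
```

The block-wise variant in the paper replaces the diagonal prior with per-block covariance matrices that couple the taps across receive elements, which is what captures the spatial correlation of the MIMO UWA channel.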
Voice assistants such as Amazon Echo (Alexa) and Google Home use microphone arrays to estimate the angle of arrival (AoA) of the human voice. This paper focuses on adding user localization as a new capability to voice assistants. For any voice command, we desire Alexa to be able to localize the user inside the home. The core challenge is twofold: (1) accurately estimating the AoAs of multipath echoes without knowledge of the source signal, and (2) tracing back these AoAs to reverse triangulate the user's location. We develop VoLoc, a system that proposes an iterative align-and-cancel algorithm for improved multipath AoA estimation, followed by an error-minimization technique to estimate the geometry of a nearby wall reflection. The AoAs and geometric parameters of the nearby wall are then fused to reveal the user's location. Under modest assumptions, we report localization accuracy of 0.44 m across different rooms, clutter, and user/microphone locations. VoLoc runs in near real-time but needs to hear around 15 voice commands before becoming operational.
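The reverse-triangulation idea can be sketched in 2D: once the nearby wall is located, the echo appears to arrive from the source's mirror image across the wall, which gives a second bearing from the same microphone. A simplified sketch of that geometry (VoLoc's actual joint estimation is more elaborate; names and conventions are assumptions):

```python
import math

def reverse_triangulate(theta_direct, theta_echo, wall_x):
    """Fuse direct-path and wall-echo AoAs into a 2D source position.

    Microphone at the origin, wall along the line x = wall_x, both
    angles measured from the +x axis.  The echo bearing points at the
    source's mirror image across the wall; intersecting it with the
    direct-path ray (law of sines on the mic / source / image-source
    triangle) gives the range along the direct path.
    """
    t = 2 * wall_x * math.sin(theta_echo) / math.sin(theta_direct + theta_echo)
    return (t * math.cos(theta_direct), t * math.sin(theta_direct))
```

In practice both bearings and the wall parameter carry estimation error, which is why VoLoc folds them into a joint error-minimization rather than a closed-form intersection.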
The ability for a smart speaker to localize a user based on his/her voice opens the door to many new applications. In this paper, we present a novel system, MAVL, to localize human voice. It consists of three major components: (i) We first develop a novel multi-resolution analysis to estimate the Angle-of-Arrival (AoA) of time-varying low-frequency coherent voice signals coming from multiple propagation paths; (ii) We then automatically estimate the room structure by emitting acoustic signals and developing an improved 3D MUSIC algorithm; (iii) We finally re-trace the paths using the estimated AoA and room structure to localize the voice. We implement a prototype system using a single speaker and a uniform circular microphone array. Our results show that it achieves median errors of 1.49° and 3.33° for the top two AoA estimates and achieves median localization errors of 0.31 m in line-of-sight (LoS) cases and 0.47 m in non-line-of-sight (NLoS) cases.
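The AoA step above builds on MUSIC, which can be sketched in its standard narrowband form for a uniform circular array; this plain version is not the improved 3D MUSIC variant the paper develops, and all names and parameters are assumptions:

```python
import numpy as np

def uca_steering(theta, n_mics, radius_wl=0.5):
    """Plane-wave steering vector of a uniform circular array
    (radius given in wavelengths, source in the array plane)."""
    phi = 2 * np.pi * np.arange(n_mics) / n_mics  # microphone azimuths
    return np.exp(2j * np.pi * radius_wl * np.cos(theta - phi))

def music_spectrum(snapshots, n_sources, grid):
    """Narrowband MUSIC pseudospectrum over azimuth for a UCA.

    snapshots: (n_mics, n_snapshots) complex baseband samples.
    Peaks of the returned spectrum indicate source directions.
    """
    n_mics = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = V[:, : n_mics - n_sources]        # noise subspace
    return np.array([
        1.0 / np.linalg.norm(En.conj().T @ uca_steering(th, n_mics)) ** 2
        for th in grid
    ])
```

Wideband, coherent voice signals violate the narrowband and uncorrelated-source assumptions made here, which is precisely what motivates the paper's multi-resolution analysis.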