Title: Machine Learning Based MIMO Equalizer for High Frequency (HF) Communications
The use of multiple-input multiple-output (MIMO) systems to increase channel capacity has received growing attention in radio communications, but comparatively little study has been devoted to MIMO in the high-frequency (HF) band. HF MIMO matters because it enables long-distance international communication at lower power consumption than many alternative approaches. The inter-symbol interference caused by selective fading across the multiple received signals, together with the randomness of ionospheric conditions, calls for a novel solution. This research introduces two machine learning approaches that adaptively apply equalization algorithms to counter fading and optimize the equalization parameters. The novelty of our approach is twofold: it allows a software-defined radio to switch equalization algorithms at run-time as conditions change, and it further optimizes the selected algorithm using two machine-learning methods. The first proposed cognitive engine, which utilizes a genetic algorithm, demonstrates the validity and advantage of using a cognitive engine to select optimal equalization parameters at the receiver under the impairments of the HF band; it serves as the baseline for our second. The second cognitive engine, the adaptive manipulator, optimizes not only by selecting equalization parameters but also by continually changing the equalization algorithm itself. Finally, we compare the performance of the proposed cognitive engine models with state-of-the-art algorithms.
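To make the genetic-algorithm idea concrete, below is a minimal sketch of GA-driven parameter selection for a receive equalizer. The two tuned parameters (tap count and adaptation step size), their ranges, and the evaluate_ber fitness function are illustrative assumptions, not the paper's implementation, which would score candidates by BER measured over the live HF channel.

```python
import random

# Search space for two hypothetical equalizer parameters (illustrative ranges).
TAPS = (4, 64)       # number of equalizer taps
MU = (1e-4, 1e-1)    # adaptation step size

def evaluate_ber(taps, mu):
    # Stand-in fitness surface; a real cognitive engine would run the
    # equalizer over a pilot block on the current HF channel and measure BER.
    return abs(taps - 24) / 60 + abs(mu - 0.01)

def random_individual():
    return (random.randint(*TAPS), random.uniform(*MU))

def mutate(ind):
    taps, mu = ind
    if random.random() < 0.3:
        taps = min(max(taps + random.randint(-4, 4), TAPS[0]), TAPS[1])
    if random.random() < 0.3:
        mu = min(max(mu * random.uniform(0.5, 2.0), MU[0]), MU[1])
    return taps, mu

def crossover(a, b):
    # Exchange one parameter between the two parents.
    return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

population = [random_individual() for _ in range(20)]
for _ in range(30):
    population.sort(key=lambda ind: evaluate_ber(*ind))
    parents = population[:10]  # truncation selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = min(population, key=lambda ind: evaluate_ber(*ind))
print("selected taps=%d, step size=%.4g" % best)
```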
Award ID(s):
1852199
NSF-PAR ID:
10215212
Date Published:
Journal Name:
2020 International Joint Conference on Neural Networks, IJCNN 2020
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Massive multi-user (MU) multiple-input multiple-output (MIMO) promises significant gains in spectral efficiency compared to traditional, small-scale MIMO technology. Linear equalization algorithms, such as zero forcing (ZF) or minimum mean-square error (MMSE)-based methods, typically rely on centralized processing at the base station (BS), which results in (i) excessively high interconnect and chip input/output data rates, and (ii) high computational complexity. In this paper, we investigate the achievable rates of decentralized equalization that mitigates both of these issues. We consider two distinct BS architectures that partition the antenna array into clusters, each associated with independent radio-frequency chains and signal processing hardware, and the results of each cluster are fused in a feedforward network. For both architectures, we consider ZF, MMSE, and a novel, non-linear equalization algorithm that builds upon approximate message passing (AMP), and we theoretically analyze the achievable rates of these methods. Our results demonstrate that decentralized equalization with our AMP-based methods incurs no or only a negligible loss in terms of achievable rates compared to that of centralized solutions. 
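As an illustration of the feedforward fusion idea, the following is a minimal sketch of decentralized MMSE equalization: each antenna cluster computes partial statistics from its slice of the channel, and a central node fuses the sums. All dimensions, the noise level, and the QPSK setup are illustrative assumptions; the paper's AMP-based variant is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
U, B, C = 8, 64, 4                    # users, BS antennas, clusters (B % C == 0)
H = (rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))) / np.sqrt(2)
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=U) / np.sqrt(2)
n0 = 0.01                             # noise variance
noise = np.sqrt(n0 / 2) * (rng.standard_normal(B) + 1j * rng.standard_normal(B))
y = H @ x + noise

# Each cluster c locally computes its partial Gram matrix H_c^H H_c and
# matched-filter output H_c^H y_c from its own antennas only.
grams, filters = [], []
for Hc, yc in zip(np.split(H, C), np.split(y, C)):
    grams.append(Hc.conj().T @ Hc)
    filters.append(Hc.conj().T @ yc)

# Feedforward fusion: sum the partial results, then solve the MMSE system once.
G = sum(grams) + n0 * np.eye(U)       # H^H H + N0 I (unit symbol energy)
x_hat = np.linalg.solve(G, sum(filters))
print("max symbol error:", np.max(np.abs(x_hat - x)))
```

Because the Gram matrix and matched filter decompose into per-cluster sums, this fused result is mathematically identical to centralized MMSE while only U x U statistics cross the interconnect.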
  2.
    Massive multi-user (MU) multiple-input multiple-output (MIMO) provides high spectral efficiency by means of spatial multiplexing and fine-grained beamforming. However, conventional base-station (BS) architectures for systems with hundreds of antennas that rely on centralized baseband processing inevitably suffer from (i) excessive interconnect data rates between radio-frequency circuitry and processing fabrics, and (ii) prohibitive complexity at the centralized baseband processor. Recently, decentralized baseband processing (DBP) architectures and algorithms have been proposed, which mitigate the interconnect bandwidth and complexity bottlenecks. This paper systematically explores the design trade-offs between error-rate performance, computational complexity, and data transfer latency of DBP architectures under different system configurations and channel conditions. Considering architecture, algorithm, and numerical precision aspects, we provide practical guidelines to select the DBP architecture and algorithm that are able to realize the full benefits of massive MU-MIMO in the uplink and downlink. 
  3. This article investigates a robust receiver scheme for single-carrier multiple-input multiple-output (MIMO) underwater acoustic (UWA) communications, which uses a sparse Bayesian learning algorithm for iterative channel estimation embedded in turbo equalization (TEQ). We derive a block-wise sparse Bayesian learning framework modeling the spatial correlation of the MIMO UWA channels, where a more robust expectation-maximization algorithm is proposed for updating the joint estimates of the channel impulse response, residual noise, and channel covariance matrix. By exploiting the spatially correlated sparsity of MIMO UWA channels and the second-order a priori channel statistics from the training sequence, the proposed Bayesian channel estimator enjoys not only relatively low complexity but also more stable control of the hyperparameters that determine channel sparsity and recovery accuracy. Moreover, this article proposes a low-complexity space-time soft decision feedback equalizer (ST-SDFE) with successive soft interference cancellation. Evaluated using data from the undersea 2008 Surface Processes and Acoustic Communications Experiment, the improved sparse Bayesian learning channel estimation algorithm outperforms conventional Bayesian algorithms in terms of robustness and complexity, while achieving better estimation accuracy than the orthogonal matching pursuit and improved proportionate normalized least mean squares algorithms. We have also verified that the proposed ST-SDFE TEQ significantly outperforms the low-complexity minimum mean square error TEQ in terms of bit error rate and error propagation.
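A minimal sketch of the sparse Bayesian learning (EM) iteration underlying this kind of channel estimator is shown below, in a simplified real-valued form that omits the block-wise spatial-correlation prior and the noise-variance update described in the article; all dimensions and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, L = 40, 100                          # pilot observations, channel taps
A = rng.standard_normal((M, L)) / np.sqrt(M)   # pilot measurement matrix
h_true = np.zeros(L)
h_true[[5, 30, 62]] = [1.0, -0.7, 0.4]  # a few active multipath taps
sigma2 = 1e-3                           # assumed known noise variance
y = A @ h_true + np.sqrt(sigma2) * rng.standard_normal(M)

gamma = np.ones(L)                      # per-tap prior variances (hyperparameters)
for _ in range(100):
    # E-step: Gaussian posterior of h under the current hyperparameters.
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2
    # M-step: re-estimate the variances; taps whose gamma shrinks toward
    # zero are effectively pruned, which enforces sparsity.
    gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-12)

support = gamma > 1e-2 * gamma.max()
print("estimated support:", np.where(support)[0])
print("estimated taps:   ", np.round(mu[support], 2))
```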
  4. With increasing needs for fast and reliable communication between devices, wireless communication techniques are rapidly evolving to meet such needs. Multiple-input multiple-output (MIMO) systems are one of the key techniques that utilize multiple antennas for high-throughput and reliable communication. However, increasing the number of antennas also adds to the complexity of channel estimation, which is essential to accurately decode the transmitted data. Therefore, the development of accurate and efficient channel estimation methods is necessary. We report the performance of machine learning-based channel estimation approaches to enhance channel estimation performance in high-noise environments. More specifically, the bit error rate (BER) performance of 2 × 2 and 4 × 4 MIMO communication systems with space-time block coding (STBC) and two neural network-based channel estimation algorithms is analyzed. Most significantly, the results demonstrate that a generalized regression neural network (GRNN) model matches the BER of known-channel communication for 4 × 4 MIMO with 8-bit pilots when trained in a specific signal-to-noise ratio (SNR) regime. Moreover, up to 9 dB improvement in SNR for a target BER is observed compared to least squares (LS) channel estimation, especially when the model is trained in the low-SNR regime. A deep artificial neural network (Deep ANN) model shows worse BER performance than LS in all tested environments. These preliminary results present an opportunity for achieving better channel estimation performance through GRNNs and highlight further research topics for deployment in the wild.
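For intuition, below is a minimal GRNN sketch (a Nadaraya-Watson kernel regressor), framed here as refining noisy least-squares channel estimates; the synthetic training data, noise level, and kernel bandwidth are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    # Gaussian kernel weight between the query and every stored training input.
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma**2))
    # GRNN output: kernel-weighted average of the stored targets.
    return w @ Y_train / np.sum(w)

# Training pairs: noisy LS-style estimates of flattened 2x2 channels -> true channels.
H_true = rng.standard_normal((500, 4))
X_train = H_true + 0.3 * rng.standard_normal((500, 4))
Y_train = H_true

# Refine a fresh noisy estimate of a channel seen at training time.
x_query = Y_train[0] + 0.3 * rng.standard_normal(4)
print("GRNN estimate:", np.round(grnn_predict(X_train, Y_train, x_query), 2))
print("true channel: ", np.round(Y_train[0], 2))
```

Because the GRNN stores its training set and interpolates among it, its accuracy is tied to the SNR regime the training pairs were generated in, consistent with the regime-dependent results reported above.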
  5. Obeid, I.; Selesnik, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors, ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process.

    Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment: performance metrics such as error rates should be identical, and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and a GPU should produce identical results; and (3) a job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is difficult to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster.

    Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research depend on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, comparing two algorithms becomes a challenging problem, since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining a historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations.
The overall impact of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since it requires multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]; large-scale experiments are simply not feasible without them. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, an element of randomness is added to the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA's cuDNN implementation provides algorithms that increase performance and help the model train more quickly, but they are non-deterministic [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance across the experiment, because it indicates how our model is performing per experiment and whether the changes we make are effective. In this poster, we discuss a variety of issues related to reproducibility and introduce ways we mitigate their effects. For example, TensorFlow uses a random number generator (RNG) that is not seeded by default; the RNG determines the initialization point and how certain functions execute. The solution is to seed all the necessary components before training the model. This forces TensorFlow to use the same initialization point and fixes how certain layers behave (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment: other variables can affect the outcome, such as training on GPUs, allowing multi-threading on CPUs, using certain layers, etc. To mitigate our problems with reproducibility, we first make sure the data is processed in the same order during training; we save the data ordering from the last experiment and use it to ensure the newer experiment follows the same order. If we allow the data to be shuffled, performance can change because of how the model was exposed to the data. We also specify the float data type to be 32-bit, since Python defaults to 64-bit. We avoid 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise, even though technically it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
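As a concrete illustration of the seeding steps described above, a minimal sketch follows, using TF 2.x API names (the poster's experiments span TensorFlow 1.9-1.13, where the graph-level call is tf.set_random_seed); even with all of these set, cuDNN kernels can remain non-deterministic.

```python
import os
import random

import numpy as np
import tensorflow as tf

SEED = 1337  # illustrative seed value

os.environ["PYTHONHASHSEED"] = str(SEED)   # fix Python hash randomization
os.environ["TF_DETERMINISTIC_OPS"] = "1"   # request deterministic GPU kernels (TF >= 2.1)

random.seed(SEED)          # Python RNG (e.g., data shuffling)
np.random.seed(SEED)       # NumPy RNG (e.g., weight-initialization helpers)
tf.random.set_seed(SEED)   # TensorFlow global/op-level RNG

# Keep computations in 32-bit floats, as discussed above.
tf.keras.backend.set_floatx("float32")
```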