Title: Neural Network DPD via Backpropagation through a Neural Network Model of the PA
We demonstrate digital predistortion (DPD) using a novel neural-network (NN) method to combat the nonlinearities in power amplifiers (PAs), which limit the power efficiency of mobile devices, increase the error vector magnitude, and cause inadequate spectral containment. DPD is commonly performed with polynomial-based methods that use an indirect-learning architecture (ILA), which can be computationally intensive, especially for mobile devices, and overly sensitive to noise. Our NN approach avoids the problems associated with ILAs by first training an NN to model the PA and then training a predistorter by backpropagating through the PA NN model. The NN DPD effectively learns the unique PA distortions, which may not fit a polynomial-based model easily, and hence may offer a favorable tradeoff between computational overhead and DPD performance. We demonstrate the performance of our NN method on two different power amplifier systems and investigate the complexity tradeoffs.
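As a rough illustration of the two-stage training the abstract describes, the following PyTorch sketch first fits an NN model of the PA from input/output samples and then trains a predistorter by backpropagating through the frozen PA model. The layer sizes, memory depth, target gain, and placeholder data are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

M = 4              # memory depth: past I/Q samples fed to each network (assumed)
TARGET_GAIN = 1.0  # desired end-to-end linear gain of the DPD -> PA cascade

def make_mlp(in_dim, out_dim, hidden=32):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                         nn.Linear(hidden, hidden), nn.Tanh(),
                         nn.Linear(hidden, out_dim))

pa_model = make_mlp(2 * M, 2)  # M past (I, Q) pairs -> PA output (I, Q)
dpd = make_mlp(2 * M, 2)       # predistorter with the same structure

def frames(x):
    # (N, 2) I/Q stream -> (N - M + 1, 2M) sliding windows of M samples
    return torch.stack([x[i:i + M].reshape(-1) for i in range(len(x) - M + 1)])

# Placeholders for measured PA input/output records (real captures go here).
x_meas, y_meas = torch.randn(1024, 2), torch.randn(1024, 2)

# Stage 1: fit the NN model of the PA from measured samples.
opt = torch.optim.Adam(pa_model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(pa_model(frames(x_meas)), y_meas[M - 1:])
    loss.backward()
    opt.step()

# Stage 2: freeze the PA model and train the DPD by backpropagating through
# it, driving the cascade DPD -> PA toward an ideal linear amplifier.
for p in pa_model.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(dpd.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    u = dpd(frames(x_meas))                # predistorted samples
    y_hat = pa_model(frames(u))            # PA-model response to them
    loss = nn.functional.mse_loss(y_hat, TARGET_GAIN * x_meas[2 * (M - 1):])
    loss.backward()                        # gradients flow through pa_model
    opt.step()
```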
Award ID(s): 1717218
NSF-PAR ID: 10202730
Journal Name: 2019 IEEE 53rd Asilomar Conference on Signals, Systems and Computers
Page Range / eLocation ID: 358 to 362
Sponsoring Org: National Science Foundation
More Like this
  1. The primary source of nonlinear distortion in wireless transmitters is the power amplifier (PA). Conventional digital predistortion (DPD) schemes use high-order polynomials to accurately approximate and compensate for the nonlinearity of the PA. This is not practical for scaling to tens or hundreds of PAs in massive multiple-input multiple-output (MIMO) systems. A massive MIMO system has more than one candidate precoding matrix because of its excess degrees of freedom (DoFs), and each precoding matrix requires a different DPD polynomial order to compensate for the PA nonlinearity. This paper proposes a low-order DPD method that exploits the massive DoFs of next-generation front ends. We propose a novel indirect-learning structure that adapts to the channel and PA distortion iteratively by cascading adaptive zero-forcing precoding and DPD. Our solution uses a 3rd-order polynomial to achieve the same performance as conventional DPD using an 11th-order polynomial for a 100×10 massive MIMO configuration. Experimental results show a 70% reduction in computational complexity, enabling ultra-low-latency communications.
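As a toy illustration of why polynomial order matters here, the sketch below fits a memoryless odd-order polynomial post-inverse by least squares in the spirit of indirect learning. The toy PA model, the unit-gain assumption, and the omission of the paper's cascaded zero-forcing/DPD adaptation are all simplifications.

```python
import numpy as np

def poly_basis(x, order):
    # odd-order baseband terms: x, x|x|^2, x|x|^4, ...
    return np.stack([x * np.abs(x) ** (2 * k)
                     for k in range((order + 1) // 2)], axis=1)

def fit_postinverse(pa_out, pa_in, order):
    # indirect learning: regress the PA input on basis functions of its
    # output (unit linear PA gain assumed for simplicity), then copy the
    # result in front of the PA as the predistorter
    B = poly_basis(pa_out, order)
    coeffs, *_ = np.linalg.lstsq(B, pa_in, rcond=None)
    return coeffs

pa = lambda v: v - 0.1 * v * np.abs(v) ** 2   # toy memoryless 3rd-order PA
rng = np.random.default_rng(0)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)

coeffs = fit_postinverse(pa(x), x, order=3)
x_dpd = poly_basis(x, 3) @ coeffs             # predistorted drive signal
print("residual MSE:", np.mean(np.abs(pa(x_dpd) - x) ** 2))
```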
  2. Compression and efficient storage of neural network (NN) parameters is critical for applications that run on resource-constrained devices. Despite the significant progress in NN model compression, there has been considerably less investigation of the actual physical storage of NN parameters. Conventionally, model compression and physical storage are decoupled, as digital storage media with error-correcting codes (ECCs) provide robust, error-free storage. However, this decoupled approach is inefficient: it ignores the overparameterization present in most NNs and forces the memory device to allocate the same amount of resources to every bit of information regardless of its importance. In this work, we investigate analog memory devices as an alternative to digital media: one that naturally provides a way to add more protection for significant bits, unlike its digital counterpart, but that is noisy and may compromise the stored model's performance if used naively. We develop a variety of robust coding strategies for NN weight storage on analog devices and propose an approach to jointly optimize model compression and memory resource allocation. We then demonstrate the efficacy of our approach on models trained on the MNIST, CIFAR-10, and ImageNet datasets for existing compression techniques. Compared to conventional error-free digital storage, our method reduces the memory footprint by up to one order of magnitude without significantly compromising the stored model's accuracy.
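A toy numerical sketch of the unequal-protection idea described above: quantized weight bits are written to noisy analog cells under a fixed total write-power budget, and skewing power toward significant bits lowers the reconstruction error relative to a uniform allocation. All constants are illustrative assumptions, not the paper's coding scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
B, n = 8, 100_000
w = np.clip(rng.normal(0, 0.25, n), -1, 1)           # stand-in NN weights
q = np.round((w + 1) / 2 * (2**B - 1)).astype(int)   # B-bit quantization
bits = (q[:, None] >> np.arange(B)[::-1]) & 1        # bit planes, MSB first

def read_mse(power_per_bit, noise_std=0.3):
    # write each bit as +/- sqrt(power), read back through Gaussian noise
    tx = (2 * bits - 1) * np.sqrt(power_per_bit)
    rx = tx + rng.normal(0, noise_std, bits.shape)
    q_hat = ((rx > 0).astype(int) * (1 << np.arange(B)[::-1])).sum(axis=1)
    w_hat = q_hat / (2**B - 1) * 2 - 1
    return np.mean((w - w_hat) ** 2)

uniform = np.full(B, 1.0)                 # equal write power for every bit
graded = 2.0 ** -np.arange(B)             # favor significant bits...
graded *= B / graded.sum()                # ...under the same total budget
print("uniform:", read_mse(uniform), "graded:", read_mse(graded))
```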

     
  3. In the last decade, much work in atmospheric science has focused on spatial verification (SV) methods for gridded prediction, which overcome serious disadvantages of pixelwise verification. However, neural networks (NNs) in atmospheric science are almost always trained to optimize pixelwise loss functions, even when ultimately assessed with SV methods. This establishes a disconnect between model verification during versus after training. To address this issue, we develop spatially enhanced loss functions (SELFs) and demonstrate their use for a real-world problem: predicting the occurrence of thunderstorms (henceforth, "convection") with NNs. In each SELF we use either a neighborhood filter, which highlights convection at scales larger than a threshold, or a spectral filter (employing Fourier or wavelet decomposition), which is more flexible and highlights convection at scales between two thresholds. We use these filters to spatially enhance common verification scores, such as the Brier score. We train each NN with a different SELF and compare their performance at many scales of convection, from discrete storm cells to tropical cyclones. Among our many findings are that (i) for a low or high risk threshold, the ideal SELF focuses on small or large scales, respectively; (ii) models trained with a pixelwise loss function perform surprisingly well; and (iii) nevertheless, models trained with a spectral filter produce much better-calibrated probabilities than a pixelwise model. We provide a general guide to using SELFs, including technical challenges and the final Python code, as well as a demonstration of their use for the convection problem. To our knowledge, this is the most in-depth guide to SELFs in the geosciences.

    Significance Statement

    Gridded predictions, in which a quantity is predicted at every pixel in space, should be verified with spatially aware methods rather than pixel by pixel. Neural networks (NNs), which are often used for gridded prediction, are trained to minimize an error value called the loss function. NN loss functions in atmospheric science are almost always pixelwise, which causes the predictions to miss rare events and contain unrealistic spatial patterns. We use spatial filters to enhance NN loss functions, and we test our novel spatially enhanced loss functions (SELFs) on thunderstorm prediction. We find that different SELFs work better for different scales (i.e., different-sized thunderstorm complexes) and that spectral filters, one of the two filter types, produce unexpectedly well-calibrated thunderstorm probabilities.
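One plausible instantiation of a neighborhood-filter SELF, sketched in PyTorch: predictions and targets are smoothed with a uniform box filter before computing the Brier score, so errors are penalized at a chosen spatial scale rather than pixel by pixel. The box filter and kernel size are illustrative assumptions rather than the paper's exact filters.

```python
import torch
import torch.nn.functional as F

def neighborhood_brier(pred, target, kernel_px=9):
    # pred, target: (batch, 1, H, W) probability and binary convection fields;
    # smooth both with a uniform box filter so the Brier score compares them
    # at the neighborhood scale instead of pixel by pixel
    k = torch.ones(1, 1, kernel_px, kernel_px) / kernel_px**2
    pred_s = F.conv2d(pred, k, padding=kernel_px // 2)
    targ_s = F.conv2d(target, k, padding=kernel_px // 2)
    return torch.mean((pred_s - targ_s) ** 2)

# usage: loss = neighborhood_brier(model(inputs), convection_mask)
```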

     
  4. Assistive robotic devices are a particularly promising field of application for neural networks (NNs) due to the need for personalization and hard-to-model human-machine interaction dynamics. However, NN-based estimators and controllers may produce potentially unsafe outputs on previously unseen data points. In this paper, we introduce an algorithm for updating NN control policies to satisfy a given set of formal safety constraints while also optimizing the original loss function. Given a set of mixed-integer linear constraints, we define the NN repair problem as a Mixed-Integer Quadratic Program (MIQP). In extensive experiments, we demonstrate the efficacy of our repair method in generating safe policies for a lower-leg prosthesis.
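A stylized sketch of the MIQP formulation for a single ReLU layer, assuming CVXPY with a MIQP-capable solver: the first-layer weights are decision variables, each ReLU is encoded with a binary indicator and big-M constraints, and the objective keeps the repaired weights close to the originals subject to a safety bound over probe inputs. Dimensions, the big-M constant, and the safety bound are hypothetical.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid, n_probe, bigM = 3, 4, 10, 100.0
W1_orig = rng.normal(size=(d_hid, d_in))  # first-layer weights to repair
b1 = rng.normal(size=d_hid)               # frozen bias
w2 = rng.normal(size=d_hid)               # frozen output layer
X = rng.normal(size=(n_probe, d_in))      # probe inputs (assumed given)

W1 = cp.Variable((d_hid, d_in))           # repaired weights (decision vars)
cons = []
for x in X:
    pre = W1 @ x + b1                     # pre-activations, affine in W1
    h = cp.Variable(d_hid)                # ReLU outputs
    z = cp.Variable(d_hid, boolean=True)  # ReLU on/off indicators
    # big-M encoding of h = max(pre, 0), valid when |pre| <= bigM
    cons += [h >= 0, h >= pre, h <= pre + bigM * (1 - z), h <= bigM * z]
    cons += [h @ w2 <= 1.0]               # example safety bound on the output

# stay as close as possible to the original weights (quadratic objective)
prob = cp.Problem(cp.Minimize(cp.sum_squares(W1 - W1_orig)), cons)
prob.solve()  # requires a MIQP-capable solver (e.g., SCIP or GUROBI)
print(W1.value)
```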
  5. Advances in visual perceptual tasks have been mainly driven by the amount, and types, of annotations in large-scale datasets. Researchers have focused on fully supervised settings, training models with offline, epoch-based schemes. Despite the evident advancements, the limitations and cost of manually annotated datasets have hindered further development for event perceptual tasks, such as detection and localization of objects and events in videos. The problem is more apparent in zoological applications due to the scarcity of annotations and the length of videos; most videos are at most ten minutes long. Inspired by cognitive theories, we present a self-supervised perceptual prediction framework to tackle the problem of temporal event segmentation by building a stable representation of event-related objects. The approach is simple but effective. We rely on LSTM predictions of high-level features computed by a standard deep-learning backbone. For spatial segmentation, the stable representation of the object is used by an attention mechanism to filter the input features before the prediction step. The self-learned attention maps effectively localize the object as a side effect of perceptual prediction. We demonstrate our approach on long videos from continuous wildlife monitoring, spanning multiple days at 25 FPS. We aim to facilitate automated ethogramming by detecting and localizing events without the need for labels. Our approach is trained online on streaming input and requires only a single pass through the video, with no separate training set. Given the lack of long and realistic datasets (ones that include real-world challenges), we introduce a new wildlife video dataset, nest monitoring of the Kagu (a flightless bird from New Caledonia), to benchmark our approach. Our dataset features video from 10 days (over 23 million frames) of continuous monitoring of the Kagu in its natural habitat. We annotate every frame with bounding boxes and event labels. Additionally, each frame is annotated with time-of-day and illumination conditions. We will make the dataset, which is the first of its kind, and the code available to the research community. We find that the approach significantly outperforms other self-supervised baselines, both traditional (e.g., Optical Flow, Background Subtraction) and NN-based (e.g., PA-DPC, DINO, iBOT), and performs on par with supervised boundary-detection approaches (i.e., PC). At a recall rate of 80%, our best-performing model detects one false positive activity every 50 min of training. On average, we at least double the performance of self-supervised approaches for spatial segmentation. Additionally, we show that our approach is robust to various environmental conditions (e.g., moving shadows). We also benchmark the framework on other datasets (i.e., Kinetics-GEBD, TAPOS) from different domains to demonstrate its generalizability. The data and code are available on our project page: https://aix.eng.usf.edu/research_automated_ethogramming.html
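A minimal sketch of the perceptual-prediction loop in PyTorch: an LSTM predicts the next frame's backbone features online, and spikes in prediction error flag candidate event boundaries. The stand-in encoder, feature sizes, fixed threshold, and placeholder frames are illustrative assumptions; the attention-based spatial localization is omitted.

```python
import torch
import torch.nn as nn

feat_dim, hid = 512, 256
backbone = nn.Sequential(nn.Conv2d(3, 16, 5, 4), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                         nn.Linear(16 * 4 * 4, feat_dim))  # stand-in encoder
lstm = nn.LSTMCell(feat_dim, hid)
head = nn.Linear(hid, feat_dim)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()),
                       lr=1e-4)

h = c = torch.zeros(1, hid)
for t, frame in enumerate(torch.randn(100, 1, 3, 64, 64)):  # placeholder stream
    feat = backbone(frame).detach()       # current frame's features
    pred = head(h)                        # prediction made from the past
    err = nn.functional.mse_loss(pred, feat)
    if t > 0:
        opt.zero_grad()
        err.backward()                    # online update, single pass
        opt.step()
        if err.item() > 1.5:              # fixed threshold (illustrative)
            print(f"possible event boundary at frame {t}")
    h, c = lstm(feat, (h.detach(), c.detach()))  # truncate BPTT for streaming
```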

     