

Title: A Memristor-Based Optimization Framework for Artificial Intelligence Applications
Memristors have recently received significant attention as device-level components for building a novel generation of computing systems. These devices have many promising features, such as non-volatility, low power consumption, high density, and excellent scalability. The ability to control and modify biasing voltages at memristor terminals makes them promising candidates to efficiently perform matrix-vector multiplications and solve systems of linear equations. In this article, we discuss how networks of memristors arranged in crossbar arrays can be used for efficiently solving optimization and machine learning problems. We introduce a new memristor-based optimization framework that combines the computational merits of memristor crossbars with the advantages of an operator splitting method, the alternating direction method of multipliers (ADMM). Here, ADMM helps in splitting a complex optimization problem into subproblems that involve the solution of systems of linear equations. The strength of this framework is shown by applying it to linear programming, quadratic programming, and sparse optimization. In addition to ADMM, implementation of a customized power iteration method for eigenvalue/eigenvector computation using memristor crossbars is discussed. The memristor-based power iteration method can further be applied to principal component analysis. The use of memristor crossbars yields a significant speed-up in computation, and thus, we believe, has the potential to advance optimization and machine learning research in artificial intelligence.
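As a concrete illustration of how the power-iteration step mentioned above maps onto crossbar hardware, the following NumPy sketch treats the matrix-vector product as the quantity a programmed crossbar would return in a single analog read. The function names (e.g., crossbar_matvec) and the software simulation are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of power iteration with the matrix-vector product factored out
# as the operation a memristor crossbar would perform in one analog read.
import numpy as np

def crossbar_matvec(A, x):
    """Stand-in for the analog crossbar read: currents = conductances @ voltages."""
    return A @ x

def power_iteration(A, num_iters=100, tol=1e-8):
    """Dominant eigenvalue/eigenvector of a symmetric matrix A."""
    n = A.shape[0]
    v = np.random.default_rng(0).standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = crossbar_matvec(A, v)                 # crossbar-accelerated step
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ crossbar_matvec(A, v_new)  # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            lam, v = lam_new, v_new
            break
        lam, v = lam_new, v_new
    return lam, v

# Example: leading principal component of a toy covariance matrix (the PCA use case).
X = np.random.default_rng(1).standard_normal((200, 5))
C = np.cov(X, rowvar=False)
eigval, eigvec = power_iteration(C)
```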
Award ID(s):
1637559
NSF-PAR ID:
10062544
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
IEEE circuits and systems magazine
Volume:
18
Issue:
1
ISSN:
1531-636X
Page Range / eLocation ID:
29-44
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we propose a new approach for robust compressive sensing (CS) using memristor crossbars constructed from recently invented memristor devices. The exciting features of a memristor crossbar, such as high density, low power, and great scalability, make it a promising candidate for performing large-scale matrix operations. To apply memristor crossbars to solve a robust CS problem, the alternating direction method of multipliers (ADMM) is employed to split the original problem into subproblems that involve the solution of systems of linear equations. A system of linear equations can then be solved using memristor crossbars with astonishing O(1) time complexity. We also study the impact of hardware variations on the memristor crossbar based CS solver from both theoretical and practical points of view. The resulting overall complexity is O(n), which achieves an O(n^2.5) speed-up compared to the state-of-the-art software approach. Numerical results are provided to illustrate the effectiveness of the proposed CS solver.
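The snippet below is a minimal sketch of the ADMM pattern described above, using a standard l1-regularized least-squares problem as a stand-in for the paper's robust CS formulation: each iteration reduces to one linear-system solve (the step a memristor crossbar would perform in O(1) time) plus cheap elementwise updates. All names and parameter values are illustrative assumptions.

```python
# Illustrative ADMM loop for l1-regularized least squares (LASSO).
# The x-update is the linear-system solve that a crossbar would accelerate;
# the z- and u-updates are elementwise.
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, num_iters=200):
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Atb = A.T @ b
    M = A.T @ A + rho * np.eye(n)   # in hardware, this solve maps to a crossbar read
    for _ in range(num_iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))   # crossbar-accelerated step
        z = soft_threshold(x + u, lam / rho)          # proximal (shrinkage) step
        u = u + x - z                                 # dual update
    return z

# Example: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b)
```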
  2. A memristor crossbar, which is constructed with memristor devices, has the unique ability to change and memorize the state of each of its memristor elements. It also has other highly desirable features, such as high density, low-power operation, and excellent scalability. Hence, memristor crossbar technology can potentially be utilized to develop low-complexity, highly scalable solution frameworks for a large class of convex optimization problems, which involve extensive matrix operations and have critical applications in multiple disciplines. As a first attempt in this direction, this paper proposes a novel memristor crossbar-based framework for solving two important convex optimization problems, namely second-order cone programming (SOCP) and homogeneous quadratically constrained quadratic programming (QCQP). The alternating direction method of multipliers (ADMM) is adopted to split the SOCP and homogeneous QCQP problems into subproblems that involve the solution of linear systems, which can be solved effectively using the memristor crossbar with O(1) time complexity. The proposed algorithm is an iterative procedure that runs for a constant number of iterations. Therefore, the algorithms for solving SOCP and homogeneous QCQP problems have pseudo-O(N) complexity, a significant reduction compared to state-of-the-art software solvers (O(N^3.5) to O(N^4)).
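As a sketch of the cone-specific ADMM subproblem that sits alongside the crossbar-accelerated linear solve in such a splitting, the snippet below implements the closed-form Euclidean projection onto the second-order cone. It is one illustrative ingredient, not the paper's full SOCP/QCQP solver, and the names are assumptions.

```python
# Euclidean projection onto the second-order cone {(t, x) : ||x||_2 <= t}.
# In an ADMM splitting of an SOCP, one block update is a linear system
# (mapped to the crossbar) and another is this closed-form projection.
import numpy as np

def project_soc(t, x):
    """Project the point (t, x) onto the second-order cone."""
    nx = np.linalg.norm(x)
    if nx <= t:                  # already inside the cone
        return t, x
    if nx <= -t:                 # inside the polar cone: projects to the origin
        return 0.0, np.zeros_like(x)
    alpha = (t + nx) / 2.0       # otherwise project onto the cone boundary
    return alpha, alpha * (x / nx)

# Example: project a point that lies outside the cone.
t_proj, x_proj = project_soc(0.5, np.array([1.0, 2.0]))
```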
  3. The high computation and memory storage demands of large deep neural network (DNN) models pose intensive challenges to the conventional von Neumann architecture, incurring substantial data movement in the memory hierarchy. The memristor crossbar array has emerged as a promising solution to mitigate these challenges and enable low-power acceleration of DNNs. Memristor-based weight pruning and weight quantization have been investigated separately and proven effective in reducing area and power consumption compared to the original DNN model. However, there has been no systematic investigation of memristor-based neuromorphic computing (NC) systems that considers both weight pruning and weight quantization. In this paper, we propose a unified and systematic memristor-based framework that considers both structured weight pruning and weight quantization by incorporating the alternating direction method of multipliers (ADMM) into DNN training. We consider hardware constraints such as crossbar block pruning, conductance range, and the mismatch between weight values and real devices, to achieve high accuracy, low power, and a small area footprint. Our framework consists of three main steps: memristor-based ADMM-regularized optimization, masked mapping, and retraining. Experimental results show that the proposed framework achieves a 29.81× (20.88×) weight compression ratio, with 98.38% (96.96%) power and 98.29% (97.47%) area reduction on the VGG-16 (ResNet-18) network, with only 0.5% (0.76%) accuracy loss compared to the original DNN models. We share our models at the anonymous link http://bit.ly/2Jp5LHJ.
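The sketch below illustrates, under assumed names and a simplified column-level pruning granularity (the paper prunes at crossbar-block granularity), the kind of projections an ADMM Z-update uses for structured pruning and for quantization to discrete conductance levels. It is not the paper's implementation.

```python
# Illustrative projections used in ADMM-regularized training:
# Z = Proj(W + U) onto the pruning or quantization constraint set,
# followed by the dual update U = U + W - Z, while W is trained by SGD
# on loss(W) + (rho/2)*||W - Z + U||^2.
import numpy as np

def project_column_sparse(W, num_keep):
    """Keep the num_keep columns of W with largest l2 norm; zero the rest."""
    norms = np.linalg.norm(W, axis=0)
    keep = np.argsort(norms)[-num_keep:]
    mask = np.zeros(W.shape[1], dtype=bool)
    mask[keep] = True
    return W * mask[np.newaxis, :]

def project_quantized(W, levels):
    """Map each weight to the nearest value in a discrete conductance level set."""
    levels = np.asarray(levels)
    idx = np.abs(W[..., np.newaxis] - levels).argmin(axis=-1)
    return levels[idx]

# Example Z-updates on a toy weight matrix.
W = np.random.default_rng(0).standard_normal((8, 16))
Z_pruned = project_column_sparse(W, num_keep=4)
Z_quant = project_quantized(W, levels=np.linspace(-1, 1, 9))
```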
  4. Abstract

    Recent studies of resistive switching devices with hexagonal boron nitride (h-BN) as the switching layer have shown the potential of two-dimensional (2D) materials for memory and neuromorphic computing applications. The use of 2D materials allows the resistive switching layer thickness to be scaled to sub-nanometer dimensions, enabling devices to operate with low switching voltages and high programming speeds, offering large improvements in efficiency and performance as well as ultra-dense integration. These characteristics are of interest for the implementation of neuromorphic computing and machine learning hardware based on memristor crossbars. However, existing demonstrations of h-BN memristors focus on the switching properties of single isolated devices and pay little attention to fundamental machine learning functions. This paper demonstrates the hardware implementation of dot product operations, a basic analog function ubiquitous in machine learning, using h-BN memristor arrays. Moreover, we demonstrate the hardware implementation of a linear regression algorithm on h-BN memristor arrays.

     
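The following simulation-level sketch shows the crossbar dot product described above (column currents as conductance-weighted sums of row voltages) and a linear regression fit that uses it for the forward pass. The differential-pair mapping of signed weights to conductances and all names are common conventions assumed here for illustration, not details taken from the paper.

```python
# Simulated crossbar dot product: I_j = sum_i G_ij * V_i (Ohm's law plus
# Kirchhoff's current law), with signed weights stored as a differential
# pair of conductances.
import numpy as np

G_MAX = 1e-4  # assumed maximum device conductance (siemens), illustrative

def weights_to_conductances(w):
    """Differential mapping: w is represented by (G_pos - G_neg) in device range."""
    scale = G_MAX / max(np.max(np.abs(w)), 1e-12)
    g_pos = np.where(w > 0, w, 0.0) * scale
    g_neg = np.where(w < 0, -w, 0.0) * scale
    return g_pos, g_neg, scale

def crossbar_dot(w, v):
    """Dot product w.v read out as the difference of two column currents."""
    g_pos, g_neg, scale = weights_to_conductances(w)
    return (g_pos @ v - g_neg @ v) / scale

# Linear regression with the crossbar supplying the forward dot product.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.05 * rng.standard_normal(100)
w = np.zeros(3)
for _ in range(500):
    preds = np.array([crossbar_dot(w, x) for x in X])
    grad = X.T @ (preds - y) / len(y)
    w -= 0.1 * grad   # gradient descent on the squared error
```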
  5. Abstract

    This paper aims to develop distributed algorithms for nonconvex optimization problems with complicated constraints associated with a network. The network can be a physical one, such as an electric power network, where the constraints are nonlinear power flow equations, or an abstract one that represents constraint couplings between the decision variables of different agents. Despite the recent development of distributed algorithms for nonconvex programs, highly complicated constraints still pose a significant challenge in theory and practice. We first identify some difficulties with existing algorithms based on the alternating direction method of multipliers (ADMM) for dealing with such problems. We then propose a reformulation that enables us to design a two-level algorithm, which embeds a specially structured three-block ADMM at the inner level within an augmented Lagrangian method framework. Furthermore, we prove the global and local convergence as well as the iteration complexity of this new scheme for general nonconvex constrained programs, and show that our analysis can be extended to handle more complicated multi-block inner-level problems. Finally, we demonstrate computationally that the new scheme provides convergent and parallelizable algorithms for various nonconvex applications, and is able to complement the performance of state-of-the-art distributed algorithms in practice by achieving faster convergence in optimality gap, in feasibility, or both.

     
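To make the two-level structure concrete, the stylized sketch below runs an outer augmented Lagrangian loop that drives a slack variable to zero while an inner three-block ADMM handles a reformulated coupling constraint. The toy quadratic objectives, the specific reformulation, and all names are assumptions chosen so the block updates have closed forms; the paper addresses general nonconvex constrained programs.

```python
# Two-level sketch: outer augmented Lagrangian on the slack constraint z = 0,
# inner three-block ADMM on the reformulated constraint x1 + x2 + z = b,
# for the toy problem min 0.5||x1 - c1||^2 + 0.5||x2 - c2||^2 s.t. x1 + x2 = b.
import numpy as np

def two_level_solve(c1, c2, b, beta=1.0, rho=2.0, outer_iters=20, inner_iters=50):
    n = len(b)
    x1, x2, z = np.zeros(n), np.zeros(n), np.zeros(n)
    y = np.zeros(n)      # inner ADMM multiplier for x1 + x2 + z = b
    lam = np.zeros(n)    # outer ALM multiplier for z = 0
    for _ in range(outer_iters):
        for _ in range(inner_iters):   # three-block ADMM sweep over (x1, x2, z)
            x1 = (c1 - y - rho * (x2 + z - b)) / (1.0 + rho)
            x2 = (c2 - y - rho * (x1 + z - b)) / (1.0 + rho)
            z = -(lam + y + rho * (x1 + x2 - b)) / (beta + rho)
            y = y + rho * (x1 + x2 + z - b)      # inner dual update
        lam = lam + beta * z                     # outer multiplier update
        beta *= 2.0                              # tighten the penalty on the slack
    return x1, x2

# Toy instance with a coupling constraint between two agents' variables.
x1, x2 = two_level_solve(np.array([1.0, 2.0]),
                         np.array([3.0, -1.0]),
                         np.array([2.0, 2.0]))
```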