 Award ID(s):
 1637559
 Publication Date:
 NSFPAR ID:
 10110068
 Journal Name:
 2017 22nd Asia and South Pacific Design Automation Conference (ASPDAC)
 Page Range or eLocationID:
 788 to 793
 Sponsoring Org:
 National Science Foundation
More Like this

Memristors have recently received significant attention as device-level components for building a novel generation of computing systems. These devices have many promising features, such as nonvolatility, low power consumption, high density, and excellent scalability. The ability to control and modify biasing voltages at memristor terminals makes them promising candidates to efficiently perform matrix-vector multiplications and solve systems of linear equations. In this article, we discuss how networks of memristors arranged in crossbar arrays can be used for efficiently solving optimization and machine learning problems. We introduce a new memristor-based optimization framework that combines the computational merits of memristor crossbars with the advantages of an operator splitting method, the alternating direction method of multipliers (ADMM). Here, ADMM helps in splitting a complex optimization problem into subproblems that involve the solution of systems of linear equations. The strength of this framework is shown by applying it to linear programming, quadratic programming, and sparse optimization. In addition to ADMM, implementation of a customized power iteration method for eigenvalue/eigenvector computation using memristor crossbars is discussed. The memristor-based power iteration method can further be applied to principal component analysis. The use of memristor crossbars yields a significant speedup in computation, and thus, we believe, has …
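The power iteration the abstract mentions maps naturally onto a crossbar: each matrix-vector product is one analog pass through the array. A minimal sketch, with NumPy standing in for the crossbar (the function name, matrix, and iteration count are illustrative, not from the paper):

```python
import numpy as np

def power_iteration(A, iters=100):
    """Dominant eigenvalue/eigenvector of a symmetric matrix A."""
    v = np.zeros(A.shape[0])
    v[0] = 1.0                      # any start vector not orthogonal to
                                    # the dominant eigenvector works
    for _ in range(iters):
        w = A @ v                   # one "crossbar" matrix-vector pass
        v = w / np.linalg.norm(w)   # normalization, done digitally
    return v @ A @ v, v             # Rayleigh quotient and eigenvector

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 3 and 1
lam, v = power_iteration(A)         # lam converges to 3
```

Applied to a data covariance matrix, the same loop yields the top principal component, which is how the abstract connects power iteration to PCA.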

In this paper, we propose a new approach for robust compressive sensing (CS) using memristor crossbars that are constructed from recently invented memristor devices. The exciting features of a memristor crossbar, such as high density, low power, and great scalability, make it a promising candidate for performing large-scale matrix operations. To apply memristor crossbars to solve a robust CS problem, the alternating direction method of multipliers (ADMM) is employed to split the original problem into subproblems that involve the solution of systems of linear equations. A system of linear equations can then be solved using memristor crossbars with astonishing O(1) time complexity. We also study the impact of hardware variations on the memristor crossbar-based CS solver from both theoretical and practical points of view. The resulting overall complexity is given by O(n), which achieves an O(n^2.5) speedup compared to the state-of-the-art software approach. Numerical results are provided to illustrate the effectiveness of the proposed CS solver.
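The splitting described here can be sketched for a standard sparse-recovery formulation, the lasso: min ½‖Ax − b‖² + λ‖x‖₁. The x-update solves a fixed linear system, which is the step a crossbar programmed once with the system matrix would perform in analog; NumPy stands in for it below, and λ, ρ, and the test problem are our illustrative choices:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=10.0, iters=500):
    n = A.shape[1]
    M = A.T @ A + rho * np.eye(n)   # programmed into the crossbar once
    Atb = A.T @ b
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # "crossbar" solve
        z = soft_threshold(x + u, lam / rho)         # sparsifying step
        u = u + x - z                                # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))   # 30 measurements, 10 unknowns
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -2.0    # 2-sparse signal
b = A @ x_true
x_hat = admm_lasso(A, b)            # recovers x_true up to a small l1 bias
```

Only the linear solve touches the (simulated) crossbar; the soft-threshold and dual steps are cheap elementwise digital operations, which is what makes the O(1) analog solve dominate the complexity accounting.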

We present alfonso, an open-source Matlab package for solving conic optimization problems over nonsymmetric convex cones. The implementation is based on the authors’ corrected analysis of a method of Skajaa and Ye. It enables optimization over any convex cone as long as a logarithmically homogeneous self-concordant barrier is available for the cone or its dual. This includes many nonsymmetric cones, for example, hyperbolicity cones and their duals (such as sum-of-squares cones), semidefinite and second-order cone representable cones, power cones, and the exponential cone. Besides enabling the solution of problems that cannot be cast as optimization problems over a symmetric cone, algorithms for nonsymmetric conic optimization also offer performance advantages for problems whose symmetric cone programming representation requires a large number of auxiliary variables or has a special structure that can be exploited in the barrier computation. The worst-case iteration complexity of alfonso is the best known for nonsymmetric cone optimization: [Formula: see text] iterations to reach an ε-optimal solution, where ν is the barrier parameter of the barrier function used in the optimization. Alfonso can be interfaced with a Matlab function (supplied by the user) that computes the Hessian of a barrier function for the cone. A simplified interface is …
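A minimal sketch of the kind of barrier oracle such a solver consumes (written in Python for illustration, not alfonso's actual Matlab interface), shown for the simplest cone, the nonnegative orthant, whose standard logarithmically homogeneous self-concordant barrier is F(x) = −Σᵢ log xᵢ with barrier parameter ν = n:

```python
import numpy as np

def barrier(x):
    """Value, gradient, and Hessian of F(x) = -sum(log(x)) on x > 0."""
    return -np.sum(np.log(x)), -1.0 / x, np.diag(1.0 / x**2)

x = np.array([1.0, 2.0, 4.0])
F, g, H = barrier(x)                # F = -log(8), g = [-1, -1/2, -1/4]

# Logarithmic homogeneity, the property the method relies on:
# F(t*x) = F(x) - nu*log(t) for t > 0, with nu = len(x) = 3 here.
t = 3.0
Ft, _, _ = barrier(t * x)           # equals F - 3*log(3)
```

For richer cones (exponential, power, sum-of-squares) only this oracle changes; the interior-point machinery stays the same, which is what makes the user-supplied-Hessian design practical.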

The high computation and memory storage demands of large deep neural network (DNN) models pose intensive challenges to the conventional von Neumann architecture, incurring substantial data movement in the memory hierarchy. The memristor crossbar array has emerged as a promising solution to mitigate these challenges and enable low-power acceleration of DNNs. Memristor-based weight pruning and weight quantization have been separately investigated and proven effective in reducing area and power consumption compared to the original DNN model. However, there has been no systematic investigation of memristor-based neuromorphic computing (NC) systems considering both weight pruning and weight quantization. In this paper, we propose a unified and systematic memristor-based framework considering both structured weight pruning and weight quantization by incorporating the alternating direction method of multipliers (ADMM) into DNN training. We consider hardware constraints such as crossbar block pruning, conductance range, and mismatch between weight values and real devices, to achieve high accuracy, low power, and a small area footprint. Our framework consists of three main steps, i.e., memristor-based ADMM-regularized optimization, masked mapping, and retraining. Experimental results show that our proposed framework achieves a 29.81× (20.88×) weight compression ratio, with 98.38% (96.96%) and 98.29% (97.47%) power and area reduction on the VGG-16 (ResNet-18) network …
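The two projections that an ADMM-regularized training loop of this kind alternates with gradient steps can be sketched as follows; the column-wise pruning criterion and the quantization levels are illustrative stand-ins, not the paper's exact hardware-derived choices:

```python
import numpy as np

def prune_columns(W, k):
    """Structured pruning: keep only the k columns of W with largest l2 norm."""
    keep = np.argsort(np.linalg.norm(W, axis=0))[-k:]
    mask = np.zeros_like(W)
    mask[:, keep] = 1.0
    return W * mask

def quantize(W, levels):
    """Project every weight onto the nearest representable conductance level."""
    levels = np.asarray(levels)
    idx = np.argmin(np.abs(W[..., None] - levels), axis=-1)
    return levels[idx]

W = np.array([[0.9, 0.1, -1.2],
              [0.4, 0.0,  0.6]])
Wp = prune_columns(W, 2)                        # zeroes the weakest column
Wq = quantize(Wp, [-1.0, -0.5, 0.0, 0.5, 1.0])  # snaps to 5 levels
```

Structured (column/block) pruning, rather than unstructured, is what lets whole crossbar blocks be removed in hardware, and quantizing to a small set of levels matches the limited conductance states a real memristor cell can hold.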

In this paper, we study multistage stochastic mixed-integer nonlinear programs (MS-MINLP). This general class of problems encompasses, as important special cases, multistage stochastic convex optimization with non-Lipschitzian value functions and multistage stochastic mixed-integer linear optimization. We develop stochastic dual dynamic programming (SDDP) type algorithms with nested decomposition, deterministic sampling, and stochastic sampling. The key ingredient is a new type of cuts based on generalized conjugacy. Several interesting classes of MS-MINLP are identified, where the new algorithms are guaranteed to obtain the global optimum without the assumption of complete recourse. This significantly generalizes the classic SDDP algorithms. We also characterize the iteration complexity of the proposed algorithms. In particular, for a $(T+1)$-stage stochastic MINLP satisfying $L$-exact Lipschitz regularization with $d$-dimensional state spaces, to obtain an $\varepsilon$-optimal root node solution, we prove that the number of iterations of the proposed deterministic sampling algorithm is upper bounded by $\mathcal{O}((\tfrac{2LT}{\varepsilon})^d)$, and is lower bounded by $\mathcal{O}((\tfrac{LT}{4\varepsilon})^d)$ for the general case or by $\mathcal{O}((\tfrac{LT}{8\varepsilon})^{d/2-1})$ for the convex case. This shows that the obtained complexity bounds are rather sharp. It also reveals that the iteration complexity depends polynomially on the number of stages. We further show that the iteration complexity depends linearly on $T$ if all the state spaces are finite sets, or if …