


Title: Solving Quadratic Unconstrained Binary Optimization with Collaborative Spiking Neural Networks
The Quadratic Unconstrained Binary Optimization (QUBO) problem has become an attractive and valuable optimization formulation because a variety of other combinatorial optimization problems, such as Graph/Number Partitioning, Max-Cut, SAT, Vertex Coloring, and TSP, can easily be recast as QUBO instances. Some of these problems are NP-hard and widely applied in industry and scientific research. Meanwhile, QUBO has been discovered to be compatible with two emerging computing paradigms, neuromorphic computing and quantum computing, which hold tremendous potential to speed up future optimization solvers. In this paper, we propose a novel neuromorphic computing paradigm that employs multiple collaborative spiking neural networks (SNNs) to solve QUBO problems. Each SNN conducts a local stochastic gradient descent search and periodically shares its global best solutions to perform a meta-heuristic search for optima. We simulate our model and compare it to a single-SNN solver and a multi-SNN solver without collaboration. Through tests on benchmark problems, the proposed method is demonstrated to be more efficient and effective in searching for QUBO optima: it exhibits roughly 10× and 15–20× speedups over the multi-SNN solver without collaboration and the single-SNN solver, respectively.
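To make the problem setting concrete, the following Python sketch evaluates the QUBO objective E(x) = xᵀQx and runs a toy version of a collaborative stochastic local search, with several searchers periodically adopting the best solution found so far. This is a minimal illustration of the search scheme described above, not the authors' spiking implementation; all function names and parameters are illustrative.

```python
import numpy as np

def qubo_energy(Q, x):
    """QUBO objective E(x) = x^T Q x for a binary vector x in {0, 1}^n."""
    return x @ Q @ x

def collaborative_search(Q, n_searchers=4, steps=2000, share_every=100, seed=0):
    """Toy collaborative search: each searcher proposes single-bit flips
    (mostly greedy, with rare uphill moves), and every `share_every` steps
    the worst searcher adopts the global best solution."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    states = rng.integers(0, 2, size=(n_searchers, n))
    energies = np.array([qubo_energy(Q, s) for s in states], dtype=float)
    best_x, best_e = states[energies.argmin()].copy(), energies.min()
    for t in range(steps):
        for k in range(n_searchers):
            cand = states[k].copy()
            cand[rng.integers(n)] ^= 1                  # flip one random bit
            e = qubo_energy(Q, cand)
            if e < energies[k] or rng.random() < 0.02:  # rare uphill moves keep it stochastic
                states[k], energies[k] = cand, e
                if e < best_e:
                    best_x, best_e = cand.copy(), e
        if (t + 1) % share_every == 0:                  # collaboration step
            worst = energies.argmax()
            states[worst], energies[worst] = best_x.copy(), best_e
    return best_x, best_e

# Example: a random symmetric 20-variable instance.
rng = np.random.default_rng(1)
Q = rng.normal(size=(20, 20))
Q = (Q + Q.T) / 2
x_best, e_best = collaborative_search(Q)
```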
Award ID(s):
2153440
NSF-PAR ID:
10403834
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 IEEE International Conference on Rebooting Computing (ICRC)
Page Range / eLocation ID:
84 to 88
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Spike train classification is an important problem in many areas such as healthcare and mobile sensing, where each spike train is a high-dimensional time series of binary values. Conventional research on spike train classification mainly focuses on developing Spiking Neural Networks (SNNs) under resource-sufficient settings (e.g., on GPU servers), where the neurons of the SNNs are usually densely connected in each layer. However, in many real-world applications, we often need to deploy SNN models on resource-constrained platforms (e.g., mobile devices) to analyze high-dimensional spike train data, and the high resource requirements of densely-connected SNNs can make them hard to deploy. In this paper, we study the problem of energy-efficient SNNs with sparsely-connected neurons. We propose an SNN model with sparse spatio-temporal coding. Our solution is based on the re-parameterization of weights in an SNN and the application of sparsity regularization during optimization. We compare our work with state-of-the-art SNNs and demonstrate that our sparse SNNs achieve significantly better computational efficiency on both neuromorphic and standard datasets with comparable classification accuracy. Furthermore, compared with densely-connected SNNs, we show through extensive experiments that our method generalizes better on small datasets.
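As a rough sketch of the general recipe this abstract describes (weight re-parameterization plus sparsity regularization), the PyTorch fragment below soft-thresholds the raw weights so that an added penalty can drive many connections exactly to zero. The class name, threshold value, and penalty weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ReparamSparseLinear(nn.Module):
    """Linear layer whose effective weight is sign(w) * relu(|w| - threshold),
    so small raw weights map to exactly-zero connections."""
    def __init__(self, in_features, out_features, threshold=0.05):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.threshold = threshold

    def effective_weight(self):
        w = self.weight
        return torch.sign(w) * torch.relu(w.abs() - self.threshold)

    def forward(self, x):
        return x @ self.effective_weight().t()

def sparsity_penalty(layer, l1_weight=1e-4):
    """Regularizer added to the task loss to encourage sparse connectivity."""
    return l1_weight * layer.effective_weight().abs().sum()

# Usage sketch: loss = task_loss(model(x), y) + sparsity_penalty(layer)
```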
  2. Abstract In recent work, we provide computational arguments for expanding the class of proper cones recognized by conic optimization solvers, to permit simpler, smaller, more natural conic formulations. We define an exotic cone as a proper cone for which we can implement a small set of tractable (i.e. fast, numerically stable, analytic) oracles for a logarithmically homogeneous self-concordant barrier for the cone or for its dual cone. Our extensible, open-source conic interior point solver, Hypatia, allows modeling and solving any conic problem over a Cartesian product of exotic cones. In this paper, we introduce Hypatia's interior point algorithm, which generalizes that of Skajaa and Ye (Math. Program. 150(2):391–422, 2015) by handling exotic cones without tractable primal oracles. To improve iteration count and solve time in practice, we propose four enhancements to the interior point stepping procedure of Skajaa and Ye: (1) loosening the central path proximity conditions, (2) adjusting the directions using a third order directional derivative barrier oracle, (3) performing a backtracking search on a curve, and (4) combining the prediction and centering directions. We implement 23 useful exotic cones in Hypatia. We summarize the complexity of computing oracles for these cones and show that our new third order oracle is not a bottleneck. From 37 applied examples, we generate a diverse benchmark set of 379 problems. Our computational testing shows that each stepping enhancement improves Hypatia's iteration count and solve time. Altogether, the enhancements reduce the geometric means of iteration count and solve time by over 80% and 70% respectively.
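For readers unfamiliar with the barrier terminology above, the standard textbook definition of a ν-logarithmically homogeneous self-concordant barrier F for a proper cone K is as follows (this is general background, not a Hypatia-specific property):

```latex
F(tx) = F(x) - \nu \log t
\quad \text{and} \quad
\big|\nabla^3 F(x)[h,h,h]\big| \le 2\big(\nabla^2 F(x)[h,h]\big)^{3/2}
\qquad \text{for all } x \in \operatorname{int} K,\; t > 0,\; h.
```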
  3. Abstract

    Adaptive ‘life-long’ learning at the edge and during online task performance is an aspirational goal of artificial intelligence research. Neuromorphic hardware implementing spiking neural networks (SNNs) is particularly attractive in this regard, as its real-time, event-based, local computing paradigm makes it suitable for edge implementations and fast learning. However, the long and iterative learning that characterizes state-of-the-art SNN training is incompatible with the physical nature and real-time operation of neuromorphic hardware. Bi-level learning, such as meta-learning, is increasingly used in deep learning to overcome these limitations. In this work, we demonstrate gradient-based meta-learning in SNNs using the surrogate gradient method, which approximates the spiking threshold function for gradient estimation. Because surrogate gradients can be made twice differentiable, well-established and effective second-order gradient meta-learning methods such as model-agnostic meta-learning (MAML) can be used. We show that SNNs meta-trained using MAML perform comparably to conventional artificial neural networks meta-trained with MAML on event-based meta-datasets. Furthermore, we demonstrate the specific advantages that accrue from meta-learning: fast learning without the requirement of high-precision weights or gradients, training-to-learn with quantization, and mitigation of the effects of approximate synaptic plasticity rules. Our results emphasize how meta-learning techniques can become instrumental for deploying neuromorphic learning technologies on real-world problems.
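A minimal PyTorch sketch of the surrogate gradient trick this abstract builds on: the forward pass keeps the hard spiking threshold, while the backward pass substitutes a smooth surrogate derivative. The fast-sigmoid surrogate used here is one common choice, assumed for illustration; because the backward pass is built from differentiable operations, double backward (as second-order methods like MAML require) works.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient in the
    backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).to(v.dtype)  # spike where membrane potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative: d(spike)/dv ~ 1 / (1 + |v|)^2.
        return grad_output / (1.0 + v.abs()) ** 2

spike = SurrogateSpike.apply
# Usage sketch: s = spike(membrane_potential - threshold)
```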
  4. Abstract

    Metaheuristic algorithms such as simulated annealing (SA) are often implemented for optimization in combinatorial problems, especially discrete problems. SA employs a stochastic search in which high-energy transitions (“hill-climbing”) are allowed with a temperature-dependent probability so the search can escape local optima. Ising spin glass systems have properties such as spin disorder and “frustration” and provide a discrete combinatorial problem with a large number of metastable states and ground-state degeneracy. In this work, subthreshold Boltzmann transport is exploited in complementary 2D field-effect transistors (p-type WSe2 and n-type MoS2) integrated with an analog, nonvolatile, and programmable floating-gate memory stack to develop the in-memory computing primitives necessary for energy- and area-efficient hardware acceleration of SA for Ising spin systems. Search acceleration of >800× is demonstrated for 4 × 4 ferromagnetic, antiferromagnetic, and spin glass systems using SA, compared to an exhaustive brute-force search, at a minuscule total energy expenditure of ≈120 nJ. Hardware-realistic numerical simulations further highlight the astounding benefits of SA in accelerating the search for larger spin lattices.
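For reference, the software baseline being accelerated here is plain Metropolis-style simulated annealing on an Ising system. The Python sketch below anneals a small spin lattice under a geometric cooling schedule; the coupling matrix, schedule, and sizes are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

def ising_energy(J, s):
    """E(s) = -1/2 * sum_ij J_ij s_i s_j for spins s_i in {-1, +1}.
    Assumes J is symmetric with zero diagonal."""
    return -0.5 * s @ J @ s

def simulated_annealing(J, steps=5000, t_start=5.0, t_end=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    e = ising_energy(J, s)
    for k in range(steps):
        temp = t_start * (t_end / t_start) ** (k / steps)  # geometric cooling
        i = rng.integers(n)
        delta = 2.0 * s[i] * (J[i] @ s)  # energy change from flipping spin i
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if delta <= 0 or rng.random() < np.exp(-delta / temp):
            s[i] = -s[i]
            e += delta
    return s, e
```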