Title: Parallelized Multi-Agent Bayesian Optimization in Lava
As parameter-space dimensionality continues to grow, search and optimization algorithms should support distributed parameter evaluations to reduce cumulative runtime. Intel’s neuromorphic optimization library, Lava-Optimization, was introduced as an abstract optimization system compatible with the neuromorphic systems developed in the broader Lava software framework. In this work, we introduce Lava Multi-Agent Optimization (LMAO), which natively supports distributed parameter evaluations that communicate with a central Bayesian optimization system. LMAO provides an abstract framework for deploying distributed optimization and search algorithms within the Lava software framework. Moreover, LMAO introduces support for random and grid search, along with process connections across multiple levels of mathematical precision. We evaluate the algorithmic performance of LMAO on a traditional non-convex optimization problem, a fixed-precision transductive spiking graph neural network for citation graph classification, and a neuromorphic satellite scheduling problem. Our results highlight LMAO’s efficient scaling to multiple processes, reducing cumulative runtime and lowering the likelihood of converging to local optima.
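To make the batched-evaluation pattern concrete, the sketch below pairs a central ask/tell Bayesian optimizer with a pool of worker processes that evaluate proposed parameter sets in parallel. It is a minimal illustration of the workflow described in the abstract, not the LMAO or Lava API: scikit-optimize's Optimizer and Python's multiprocessing pool stand in for the Lava optimization solver and its distributed processes, and the Ackley objective is only an example non-convex test function.

```python
# Minimal sketch of the centralized-optimizer / distributed-evaluator pattern.
# scikit-optimize and a process pool stand in for LMAO's Lava processes; the
# names here are illustrative, not the actual LMAO API.
from multiprocessing import Pool

import numpy as np
from skopt import Optimizer


def ackley(x):
    """Non-convex test objective (Ackley function) evaluated by a worker."""
    x = np.asarray(x)
    a, b, c = 20.0, 0.2, 2.0 * np.pi
    return (-a * np.exp(-b * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(c * x))) + a + np.e)


if __name__ == "__main__":
    n_agents = 4                                          # parallel evaluators
    optimizer = Optimizer(dimensions=[(-5.0, 5.0)] * 2)   # central BO system

    with Pool(processes=n_agents) as pool:
        for _ in range(10):                               # 10 batched rounds
            candidates = optimizer.ask(n_points=n_agents)  # propose a batch
            losses = pool.map(ackley, candidates)          # evaluate in parallel
            optimizer.tell(candidates, losses)             # update surrogate

    best_loss, best_x = min(zip(optimizer.yi, optimizer.Xi))
    print("best loss %.4f at %s" % (best_loss, best_x))
```

Because each round proposes one candidate per agent, the wall-clock cost of a round is bounded by the slowest single evaluation rather than the sum of all evaluations, which is the scaling benefit the abstract refers to.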
Award ID(s):
2319619
PAR ID:
10638831
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Page Range / eLocation ID:
295 to 298
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Recent research applies soft computing techniques to fit software reliability growth models. However, runtime performance and the distribution of the distance from an optimal solution over multiple runs must be explicitly considered to justify the practical utility of these approaches, promote comparison, and support reproducible research. This paper presents a meta-optimization framework to design stable and efficient multi-phase algorithms for fitting software reliability growth models. The approach combines initial parameter estimation techniques from statistical algorithms, the global search properties of soft computing, and the rapid convergence of numerical methods. Designs that exhibit the best balance between runtime performance and accuracy are identified. The approach is illustrated through nonhomogeneous Poisson process and covariate software reliability growth models, including a cross-validation step on data sets not used to identify designs. The results indicate that the nonhomogeneous Poisson process model considered is too simple to benefit from soft computing, since it incurs additional runtime with no corresponding gain in accuracy. However, a multi-phase design for the covariate software reliability growth model, consisting of the bat algorithm followed by a numerical method, achieves better performance and converges more consistently than a numerical method alone. The proposed approach supports higher-dimensional covariate software reliability growth model fitting suitable for implementation in a tool.
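A minimal sketch of the multi-phase idea follows: a global soft-computing-style search supplies a starting estimate, and a fast numerical method refines it. SciPy's differential evolution stands in for the metaheuristic phase (the paper's designs use algorithms such as the bat algorithm), the Goel-Okumoto NHPP mean value function serves as a simple example model, and the failure-time data are hypothetical.

```python
# Two-phase fit sketch: global metaheuristic pass, then local numerical
# refinement, illustrated on a Goel-Okumoto NHPP software reliability model.
import numpy as np
from scipy.optimize import differential_evolution, minimize

# failure times (hypothetical data, in test hours)
t = np.array([3.0, 7.5, 12.1, 20.4, 33.0, 51.2, 70.8, 98.5])


def neg_log_likelihood(params):
    """Negative log-likelihood of the Goel-Okumoto model m(t) = a(1 - e^{-bt})."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # NHPP log-likelihood: sum(log lambda(t_i)) - m(T), with lambda(t) = a*b*e^{-bt}
    lam = a * b * np.exp(-b * t)
    return -(np.sum(np.log(lam)) - a * (1.0 - np.exp(-b * t[-1])))


# Phase 1: global search over a broad box (soft-computing stand-in)
coarse = differential_evolution(neg_log_likelihood,
                                bounds=[(1.0, 500.0), (1e-4, 1.0)], seed=0)

# Phase 2: rapid local convergence from the phase-1 estimate
refined = minimize(neg_log_likelihood, coarse.x, method="Nelder-Mead")
print("a, b =", refined.x)
```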
  2. We present a lazy incremental search algorithm, Lifelong-GLS (L-GLS), along with its bounded suboptimal version, Bounded L-GLS (B-LGLS) that combine the search efficiency of incremental search algorithms with the evaluation efficiency of lazy search algorithms for fast replanning in problem domains where edge evaluations are more expensive than vertex expansions. The proposed algorithms generalize Lifelong Planning A* (LPA*) and its bounded suboptimal version, Truncated LPA* (TLPA*), within the Generalized Lazy Search (GLS) framework, so as to restrict expensive edge evaluations only to the current shortest subpath when the cost-to-come inconsistencies are propagated during repair. We also present dynamic versions of the L-GLS and B-LGLS algorithms, called Generalized D* (GD*) and Bounded Generalized D* (B-GD*), respectively, for efficient replanning with non-stationary queries, designed specifically for navigation of mobile robots. We prove that the proposed algorithms are complete and correct in finding a solution that is guaranteed not to exceed the optimal solution cost by a user-chosen factor. Our numerical and experimental results support the claim that the proposed integration of the incremental and lazy search frameworks can help find solutions faster compared to the regular incremental or regular lazy search algorithms when the underlying graph representation changes often. 
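The core lazy-search trade-off can be illustrated without the full GLS machinery: plan over optimistic edge costs, run the expensive evaluation only on the edges of the candidate shortest path, and replan when an edge turns out to be invalid. The toy sketch below assumes a networkx graph and a placeholder expensive_edge_check; it is not the L-GLS/B-LGLS or GD* algorithms, which additionally reuse search effort across repairs.

```python
# Toy illustration of the lazy-search idea underlying GLS-style planners:
# plan optimistically, evaluate only edges on the candidate path (the costly
# step), and replan if an edge proves invalid.
import networkx as nx


def expensive_edge_check(u, v):
    """Stand-in for costly collision checking; here a single blocked edge."""
    return (u, v) != ("s", "g") and (v, u) != ("s", "g")


def lazy_shortest_path(graph, source, target):
    g = graph.copy()
    while True:
        path = nx.shortest_path(g, source, target, weight="weight")
        blocked = [(u, v) for u, v in zip(path, path[1:])
                   if not expensive_edge_check(u, v)]
        if not blocked:
            return path                      # all edges on the path are valid
        g.remove_edges_from(blocked)         # repair the graph and replan


G = nx.Graph()
G.add_weighted_edges_from([("s", "g", 1.0), ("s", "a", 1.0), ("a", "g", 1.0)])
print(lazy_shortest_path(G, "s", "g"))       # ['s', 'a', 'g'] after one repair
```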
  3. This paper introduces a novel concept of intelligent partitioning for group-based distributed optimization (DO) algorithms applied to optimal power flow (OPF) problems. Radial partitioning of the graph of a network is introduced as a systematic way to split a large-scale problem into more tractable sub-problems, which can potentially be solved efficiently with methods such as convex relaxations. Spectral parameter selection is introduced for group-based DO as a crucial hyper-parameter selection step in DO. A software package DiCARP is created, which is implemented in Python using the Pyomo optimization package. Through several numerical examples, we compare the proposed group-based algorithm to component-based approaches, evaluate our radial partitioning method against other partitioning strategies, and assess adaptive parameter selection in comparison to non-adaptive methods. The results highlight the critical role of effective partitioning and parameter selection in solving large-scale network optimization problems. 
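As a rough illustration of the structural step such group-based methods rely on, the sketch below performs a spectral bisection of a small network graph using the Fiedler vector of its Laplacian. It is a generic partitioning example, not DiCARP's radial partitioning or its spectral parameter selection, and the example network is hypothetical.

```python
# Minimal sketch of spectral bisection of a network graph into two groups,
# the kind of structural partitioning a group-based distributed OPF solver
# relies on. Generic illustration only, not the DiCARP implementation.
import numpy as np
import networkx as nx

# small example network: two loosely coupled clusters of buses
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 0),          # cluster A
                  (3, 4), (4, 5), (5, 3),          # cluster B
                  (2, 3)])                          # single tie line

L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                             # second-smallest eigenvector

groups = {n: int(fiedler[i] >= 0) for i, n in enumerate(G.nodes())}
print(groups)   # buses 0-2 and 3-5 land in separate groups
```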
  4. Abstract Decomposition‐based solution algorithms for optimization problems depend on the underlying latent block structure of the problem. Methods for detecting this structure are currently lacking. In this article, we propose stochastic blockmodeling (SBM) as a systematic framework for learning the underlying block structure in generic optimization problems. SBM is a generative graph model in which nodes belong to some blocks and the interconnections among the nodes are stochastically dependent on their block affiliations. Hence, through parametric statistical inference, the interconnection patterns underlying optimization problems can be estimated. For benchmark optimization problems, we show that SBM can reveal the underlying block structure and that the estimated blocks can be used as the basis for decomposition‐based solution algorithms which can reach an optimum or bound estimates in reduced computational time. Finally, we present a general software platform for automated block structure detection and decomposition‐based solution following distributed and hierarchical optimization approaches. 
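The sketch below illustrates block detection on an optimization problem's variable-interaction graph, built from a hypothetical constraint-variable incidence matrix. Spectral clustering is used as a simple stand-in for full stochastic blockmodel inference, so it only hints at the statistical estimation the article describes.

```python
# Minimal sketch of recovering latent block structure from an optimization
# problem's constraint-variable incidence matrix. Spectral clustering stands
# in for SBM inference; the problem matrix below is hypothetical.
import numpy as np
from sklearn.cluster import SpectralClustering

# constraint-variable incidence matrix: two variable blocks, one coupling row
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 0]])

# variables are linked when they appear together in some constraint
adjacency = (A.T @ A > 0).astype(float)
np.fill_diagonal(adjacency, 0.0)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(adjacency)
print(labels)   # e.g. [0 0 1 1]: variable blocks usable for decomposition
```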
  5. The Quadratic Unconstrained Binary Optimization (QUBO) problem has become an attractive and valuable optimization formulation because it readily transforms into a variety of other combinatorial optimization problems such as Graph/Number Partition, Max-Cut, SAT, Vertex Coloring, TSP, etc. Some of these problems are NP-hard and widely applied in industry and scientific research. Meanwhile, QUBO has been found to be compatible with two emerging computing paradigms, neuromorphic computing and quantum computing, with tremendous potential to speed up future optimization solvers. In this paper, we propose a novel neuromorphic computing paradigm that employs multiple collaborative spiking neural networks (SNNs) to solve QUBO problems. Each SNN conducts a local stochastic gradient descent search and periodically shares the global best solutions to perform a meta-heuristic search for optima. We simulate our model and compare it to a single-SNN solver and a multi-SNN solver without collaboration. Through tests on benchmark problems, the proposed method is shown to be more efficient and effective in searching for QUBO optima: it achieves roughly a 10x speedup over the multi-SNN solver without collaboration and a 15-20x speedup over the single-SNN solver.
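The collaboration pattern can be sketched without spiking dynamics: several stochastic local-search workers attack the same QUBO independently and periodically restart near the best solution any of them has found. In the sketch below, plain bit-flip descent stands in for the SNN-based search and the QUBO instance is a small random matrix; this illustrates the sharing scheme, not the proposed neuromorphic solver.

```python
# Toy illustration of collaborative QUBO search: independent local-search
# workers that periodically adopt (a perturbed copy of) the global best.
import numpy as np

rng = np.random.default_rng(0)
n = 20
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2.0                       # symmetric QUBO matrix


def energy(x):
    return float(x @ Q @ x)


def local_search_step(x):
    """Flip one random bit and keep the move if it lowers the energy."""
    y = x.copy()
    i = rng.integers(n)
    y[i] ^= 1
    return y if energy(y) < energy(x) else x


n_workers, epochs, steps = 4, 10, 50
states = [rng.integers(0, 2, size=n) for _ in range(n_workers)]

for _ in range(epochs):
    for _ in range(steps):                # independent local search
        states = [local_search_step(x) for x in states]
    best = min(states, key=energy)        # periodically share the global best
    # restart each worker near the shared best; random flips keep diversity
    states = [np.where(rng.random(n) < 0.1, 1 - best, best) for _ in states]

print("best energy:", energy(best))
```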