Title: Distributed Multi-Agent Bayesian Optimization for Unknown Design Space Exploration
In multi-agent Bayesian optimization for Design Space Exploration (DSE), identifying a communication network among agents to share useful design information for enhanced cooperation and performance, while considering the trade-off between connectivity and cost, poses a significant challenge. To address this challenge, we develop a distributed multi-agent Bayesian optimization (DMABO) framework and study how communication network structures/connectivity and the resulting cost impact the performance of a team of agents when finding the global optimum. Specifically, we utilize Lloyd’s algorithm to partition the design space, assigning distinct regions to individual agents for exploration in the distributed multi-agent system (MAS). Based on this partitioning, we generate communication networks among agents using two models: 1) a range-limited model in which communication is constrained by neighborhood information; and 2) a range-free model without neighborhood constraints. We introduce network density as a metric to quantify communication costs. We then generate communication networks of gradually increasing density to assess the impact of communication costs on the performance of the MAS in DSE. The experimental results show that the communication network based on the range-limited model can significantly improve performance without incurring high communication costs. This indicates that increasing the density of a communication network does not necessarily improve MAS performance in DSE. Furthermore, the results indicate that communication benefits team performance only if it occurs between specific agents whose search regions are critically relevant to the location of the global optimum. The proposed DMABO framework and the insights obtained can help identify the best trade-off between communication structure and cost for MAS in unknown design space exploration.
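For illustration, here is a minimal, hedged sketch (not the paper's implementation) of the ingredients named above: Lloyd's algorithm for partitioning a box-bounded design space among agents, a range-limited communication network built from the resulting region centroids, and the network-density metric. The function names, the sampling-based approximation of the Voronoi cells, and all parameters are assumptions made for this sketch.

import numpy as np

def lloyd_partition(bounds, n_agents, n_samples=2000, n_iters=50, seed=0):
    """Partition a box-bounded design space among agents via Lloyd's algorithm.

    bounds has shape (d, 2); returns one centroid per agent, and each agent
    is assigned the Voronoi cell around its centroid as its search region.
    """
    rng = np.random.default_rng(seed)
    d = bounds.shape[0]
    # Dense random samples stand in for the continuous design space.
    pts = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_samples, d))
    centroids = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_agents, d))
    for _ in range(n_iters):
        # Assign every sample to its nearest centroid (approximate Voronoi cells).
        labels = np.argmin(
            np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2), axis=1)
        # Move each centroid to the mean of its cell.
        for k in range(n_agents):
            if np.any(labels == k):
                centroids[k] = pts[labels == k].mean(axis=0)
    return centroids

def range_limited_network(centroids, comm_range):
    """Range-limited model: connect agents whose region centroids lie within comm_range."""
    dists = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    adjacency = (dists <= comm_range).astype(int)
    np.fill_diagonal(adjacency, 0)
    return adjacency

def network_density(adjacency):
    """Communication-cost metric: fraction of all possible agent-to-agent links that exist."""
    n = adjacency.shape[0]
    return adjacency[np.triu_indices(n, k=1)].sum() / (n * (n - 1) / 2)

A range-free variant could, for example, keep adding links between arbitrary pairs of agents, regardless of distance, until a target network density is reached, which is how density can be increased gradually in the experiments described above.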
Award ID(s):
2419423 2321463
PAR ID:
10630938
Publisher / Repository:
American Society of Mechanical Engineers
ISBN:
978-0-7918-8837-7
Page Range / eLocation ID:
DETC2024-143377
Format(s):
Medium: X
Location:
Washington, DC, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Effective coordination of design teams must account for the influence of costs incurred while searching for the best design solutions. This article introduces a cost-aware multi-agent system (MAS), a theoretical model to (1) explain how individuals in a team should search, assuming that they are all rational utility-maximizing decision-makers, and (2) study the impact of cost on the search performance of both individual agents and the system. First, we develop a new multi-agent Bayesian optimization framework accounting for information exchange among agents to support their decisions on where to sample in search. Second, we employ a reinforcement learning approach based on the multi-agent deep deterministic policy gradient for training MAS to identify where agents cannot sample due to design constraints. Third, we propose a new cost-aware stopping criterion for each agent to determine when the costs of search outweigh its potential gains. Our results indicate that cost has a more significant impact on MAS communication in complex design problems than in simple ones. For example, when searching in complex design spaces, some agents could initially have low performance gains, thus stopping prematurely due to negative payoffs, even if those agents could perform better in the later stage of the search. Therefore, global-local communication becomes more critical in such situations for the entire system to converge. The proposed model can serve as a benchmark for empirical studies to quantitatively gauge how humans would rationally make design decisions in a team.
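As a rough illustration of a cost-aware stopping rule like the one described above (not necessarily the article's exact criterion), the sketch below assumes a Gaussian-process surrogate and stops an agent once the best expected improvement over its current optimum no longer covers the cost of one more sample; all names, the acquisition choice, and the cost model are assumptions.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Expected improvement (minimization) for candidates with GP posterior mean mu and std sigma."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def should_stop(mu, sigma, best, sample_cost):
    """Cost-aware stop: quit searching when the largest expected gain is below the sampling cost."""
    return float(np.max(expected_improvement(mu, sigma, best))) < sample_cost

An agent applying such a rule too early in a complex design space is the premature-stopping behavior the abstract describes; information shared by other agents can keep its expected gains above the sampling cost.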
  2. In distributed optimization schemes consisting of a group of agents connected to a central coordinator, the optimization algorithm often involves the agents solving private local sub-problems and exchanging data frequently with the coordinator to solve the global distributed problem. In those cases, the query-response mechanism usually causes excessive communication costs to the system, necessitating communication reduction in scenarios where communication is costly. Integrating Gaussian processes (GP) as a learning component to the Alternating Direction Method of Multipliers (ADMM) has proven effective in learning each agent’s local proximal operator to reduce the required communication exchange. A key element for integrating GP into the ADMM algorithm is the querying mechanism upon which the coordinator decides when communication with an agent is required. In this paper, we formulate a general querying decision framework as an optimization problem that balances reducing the communication cost and decreasing the prediction error. Under this framework, we propose a joint query strategy that takes into account the joint statistics of the query and ADMM variables and the total communication cost of all agents in the presence of uncertainty caused by the GP regression. In addition, we derive three different decision mechanisms that simplify the general framework by making the communication decision for each agent individually. We integrate multiple measures to quantify the trade-off between the communication cost reduction and the optimization solution’s accuracy/optimality. The proposed methods can achieve significant communication reduction and good optimization solution accuracy for distributed optimization, as demonstrated by extensive simulations of a distributed sharing problem. 
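As a rough sketch of the individual (per-agent) decision mechanisms mentioned above, the rule below queries an agent for an exact local update only when the GP's predictive uncertainty about that agent's proximal step, weighted by an error penalty, exceeds the agent's communication cost; the weighting and interface are assumptions, not the paper's formulation.

import numpy as np

def query_decisions(pred_std, comm_costs, error_weight=1.0):
    """Per-agent query rule for GP-assisted ADMM.

    pred_std[i]  : GP predictive standard deviation of agent i's local update
    comm_costs[i]: cost of querying agent i for the exact update
    Returns a boolean mask: True means the coordinator queries that agent;
    False means it uses the GP prediction and skips the exchange.
    """
    pred_std = np.asarray(pred_std, dtype=float)
    comm_costs = np.asarray(comm_costs, dtype=float)
    return error_weight * pred_std > comm_costs

The joint strategy described above would instead trade off the total communication cost of all agents against their joint prediction statistics rather than deciding agent by agent.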
  3. This paper studies the distributed feedback optimization problem for linear multi-agent systems without precise knowledge of local costs and agent dynamics. The proposed solution is based on a hierarchical approach that uses upper-level coordinators to adjust reference signals toward the global optimum and lower-level controllers to regulate agents’ outputs toward the reference signals. In the absence of precise information on local gradients and agent dynamics, an extremum-seeking mechanism is used to enforce a gradient descent optimization strategy, and an adaptive dynamic programming approach is taken to synthesize an internal-model-based optimal tracking controller. The whole procedure relies only on measurements of local costs and input-state data along agents’ trajectories. Moreover, under appropriate conditions, the closed-loop signals are bounded and the output of the agents exponentially converges to a small neighborhood of the desired extremum. A numerical example is provided to validate the efficacy of the proposed method.
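A minimal single-parameter sketch of the extremum-seeking idea described above: a sinusoidal dither probes the unknown local cost, demodulating the measured cost estimates the gradient, and the reference signal is stepped downhill. The high-pass/low-pass filters, the multivariable case, and the lower-level adaptive-dynamic-programming tracking controller are omitted, and all gains are illustrative assumptions.

import numpy as np

def extremum_seeking_step(theta, measure_cost, t, dt=0.01, amp=0.1, omega=10.0, gain=0.5):
    """One extremum-seeking update of a scalar reference signal theta.

    measure_cost(theta) returns a (possibly noisy) measurement of the local cost;
    no model of the cost function or the agent dynamics is required.
    """
    dither = amp * np.sin(omega * t)
    y = measure_cost(theta + dither)             # probe the cost around the current reference
    grad_estimate = y * np.sin(omega * t) / amp  # demodulate to approximate the local gradient
    return theta - gain * grad_estimate * dt     # gradient-descent step toward the extremum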
  4. Intelligent utilization of resources and improved mission performance in an autonomous agent require consideration of cyber and physical resources. The allocation of these resources becomes more complex when the system expands from one agent to multiple agents, and the control shifts from centralized to decentralized. Consensus is a distributed algorithm that lets multiple agents agree on a shared value, but typically does not leverage mobility. We propose a coupled consensus control strategy that co-regulates computation, communication frequency, and connectivity of the agents to achieve faster convergence times at lower communication rates and computational costs. In this strategy, agents move towards a common location to increase connectivity. Simultaneously, the communication frequency is increased when the shared state error between an agent and its connected neighbors is high. When the shared state converges (i.e., consensus is reached), the agents withdraw to their initial positions and the communication frequency is decreased. Convergence properties are demonstrated under the proposed co-regulated control algorithm. We evaluated the proposed approach through a new set of cyber-physical, multi-agent metrics and demonstrated it in a simulation of unmanned aircraft systems measuring temperatures at multiple sites. The results demonstrate that, compared with fixed-rate and event-triggered consensus algorithms, our co-regulation scheme can achieve improved performance with fewer resources, while maintaining high reactivity to changes in the environment and system.
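As a rough sketch of the state/communication coupling described above (the mobility and computation co-regulation are omitted), each agent below runs a standard consensus update and shortens its communication period when its disagreement with connected neighbors is large, relaxing it back as consensus is reached; the step size and gains are illustrative assumptions.

import numpy as np

def coregulated_consensus_step(x, adjacency, base_period, alpha=0.2, k_comm=1.0):
    """One consensus step with co-regulated communication periods.

    x         : current shared-state estimate of each agent
    adjacency : symmetric 0/1 connectivity matrix
    Returns the updated states and each agent's next communication period.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    x_next = x.copy()
    periods = np.empty(n)
    for i in range(n):
        neighbors = np.flatnonzero(adjacency[i])
        errors = x[neighbors] - x[i]
        # Standard consensus update (alpha must be small enough for stability).
        x_next[i] = x[i] + alpha * errors.sum()
        # Communicate more often while local disagreement is high, less as it vanishes.
        periods[i] = base_period / (1.0 + k_comm * np.abs(errors).sum())
    return x_next, periods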
  5. Diversity in behaviors is instrumental for robust team performance in many multiagent tasks that require agents to coordinate. Unfortunately, exhaustive search through the agents’ behavior spaces is often intractable. This paper introduces Behavior Exploration for Heterogeneous Teams (BEHT), a multi-level learning framework that enables agents to progressively explore regions of the behavior space that promote team coordination on diverse goals. By combining diversity search to maximize agent-specific rewards and evolutionary optimization to maximize the team-based fitness, our method effectively filters regions of the behavior space that are conducive to agent coordination. We demonstrate the diverse behaviors and synergies that our method allows agents to learn on a multiagent exploration problem.
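A minimal sketch of the general recipe described above (not the BEHT implementation): candidate agent policies are ranked by team fitness plus a novelty bonus measured against an archive of previously seen behavior descriptors, so selection favors behaviors that are both useful to the team and different from what has already been tried; the names, novelty measure, and weighting are illustrative assumptions.

import numpy as np

def behavior_novelty(descriptor, archive, k=5):
    """Novelty of a behavior descriptor: mean distance to its k nearest neighbors in the archive."""
    if len(archive) == 0:
        return float("inf")  # everything is novel before any behavior has been recorded
    dists = np.sort(np.linalg.norm(np.asarray(archive) - descriptor, axis=1))
    return float(dists[:k].mean())

def select_parents(policies, descriptors, team_fitness, archive, novelty_weight=0.5, n_parents=10):
    """Keep the policies that score best on team fitness plus weighted behavioral novelty."""
    scores = [team_fitness[i] + novelty_weight * behavior_novelty(descriptors[i], archive)
              for i in range(len(policies))]
    order = np.argsort(scores)[::-1]  # descending score
    return [policies[i] for i in order[:n_parents]]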