Title: Asynchronous Multi-Information Source Bayesian Optimization
Abstract: Resource management in engineering design seeks to allocate resources optimally while maximizing the performance metrics of the final design. Bayesian optimization (BO) is an efficient design framework that judiciously allocates resources through heuristic-based searches, aiming to identify the optimal design region with minimal experiments. After recommending a series of experiments or tasks, the framework waits for their completion to augment its knowledge repository and then guides its decisions toward the most favorable next steps. However, under time constraints or other resource challenges, bottlenecks can hinder traditional BO's ability to assimilate knowledge and allocate resources efficiently. In this work, we introduce an asynchronous learning framework designed to utilize the idle periods between experiments. The framework allocates resources adaptively, capitalizing on lower-fidelity experiments to gather comprehensive insight about the target objective function. This approach lets the search progress uninhibited by the outcomes of pending experiments, since it provisionally relies on their predicted results as stand-ins for the actual outcomes. We first address a basic problem, contrasting the efficacy of asynchronous learning against traditional synchronous multi-fidelity BO, and then apply the method to a practical challenge: optimizing a specific mechanical characteristic of a dual-phase steel.
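To make the asynchronous idea concrete, here is a minimal Python sketch of the loop the abstract describes: while a slow high-fidelity experiment is still running, its pending outcome is temporarily replaced by the surrogate's prediction so that cheaper low-fidelity queries can continue. The Gaussian-process surrogate, expected-improvement acquisition, toy objective functions, and latency model are illustrative assumptions, not details from the paper.

```python
# Sketch: asynchronous multi-fidelity BO with predicted stand-ins for pending jobs.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def f_hf(x):
    # Expensive high-fidelity objective (toy stand-in for a real experiment).
    return np.sin(3 * x) + 0.1 * x ** 2

def f_lf(x):
    # Cheap, biased low-fidelity approximation of the same quantity.
    return np.sin(3 * x) + 0.3

def expected_improvement(gp, X_cand, y_best):
    mu, sd = gp.predict(X_cand.reshape(-1, 1), return_std=True)
    sd = np.maximum(sd, 1e-9)
    z = (y_best - mu) / sd                       # minimization convention
    return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

X = list(rng.uniform(-2, 2, 3))                  # completed design points
y = [f_hf(x) for x in X]                         # and their observed values
X_cand = np.linspace(-2, 2, 201)
pending = []                                     # [design point, rounds until the HF job finishes]

for t in range(15):
    gp = GaussianProcessRegressor(normalize_y=True, alpha=1e-6).fit(
        np.array(X).reshape(-1, 1), np.array(y))

    # Pending HF experiments contribute their *predicted* outcomes as stand-ins,
    # so the search keeps moving instead of waiting for them to finish.
    X_aug, y_aug = list(X), list(y)
    if pending:
        xs_pending = np.array([x for x, _ in pending]).reshape(-1, 1)
        X_aug += [x for x, _ in pending]
        y_aug += list(gp.predict(xs_pending))
        gp = GaussianProcessRegressor(normalize_y=True, alpha=1e-6).fit(
            np.array(X_aug).reshape(-1, 1), np.array(y_aug))

    ei = expected_improvement(gp, X_cand, min(y_aug))
    x_next = float(X_cand[np.argmax(ei)])

    if t % 5 == 0:
        pending.append([x_next, 3])              # launch a slow HF experiment (3 rounds of latency)
    else:
        X.append(x_next)                         # query the cheap LF source immediately
        y.append(f_lf(x_next))

    # Replace stand-ins with real data once HF jobs complete.
    for job in pending:
        job[1] -= 1
    finished = [job for job in pending if job[1] <= 0]
    pending = [job for job in pending if job[1] > 0]
    for x_done, _ in finished:
        X.append(x_done)
        y.append(f_hf(x_done))

print("best observed value:", min(y))
```

The key difference from a synchronous loop is that the acquisition is optimized against the augmented data set (real plus predicted outcomes), so idle time between high-fidelity results is spent on low-fidelity queries.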
Award ID(s): 2119103
PAR ID: 10542581
Author(s) / Creator(s): ; ;
Publisher / Repository: ASME
Date Published:
Journal Name: Journal of Mechanical Design
Volume: 146
Issue: 10
ISSN: 1050-0472
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract: Bayesian optimization (BO) is a sequential optimization strategy that is increasingly employed in a wide range of areas, including materials design. In real-world applications, acquiring high-fidelity (HF) data through physical experiments or HF simulations is the major cost component of BO. To alleviate this bottleneck, multi-fidelity (MF) methods are increasingly used to forgo sole reliance on expensive HF data and to reduce sampling costs by querying inexpensive low-fidelity (LF) sources whose data are correlated with HF samples. Existing multi-fidelity BO (MFBO) methods operate under two assumptions: (1) the LF sources are globally (rather than locally) correlated with the HF source, and (2) all data sources share the same noise process. These assumptions dramatically reduce the performance of MFBO when LF sources are only locally correlated with the HF source or when the noise variance varies across the data sources. To dispense with these incorrect assumptions, we propose an MF emulation method that (1) learns a noise model for each data source, and (2) enables BO to leverage highly biased LF sources that are only locally correlated with the HF source. We illustrate the performance of our method through analytical examples and engineering problems in materials design.
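A toy sketch of the source-specific noise idea: each data source gets its own Gaussian process whose WhiteKernel noise variance is fitted from that source's data alone. The kernels, data, and source names are illustrative assumptions; this is not the paper's MF emulator.

```python
# Per-source noise models: one GP per source, each with its own fitted WhiteKernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40).reshape(-1, 1)

sources = {
    "HF": np.sin(6 * x).ravel() + 0.02 * rng.standard_normal(40),   # low noise, expensive
    "LF": np.sin(6 * x).ravel() + 0.30 * rng.standard_normal(40),   # high noise, cheap
}

models = {}
for name, y in sources.items():
    kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.1)
    models[name] = GaussianProcessRegressor(kernel=kernel).fit(x, y)
    fitted_noise = models[name].kernel_.k2.noise_level    # per-source noise variance
    print(f"{name}: fitted noise variance {fitted_noise:.3f}")
```

Because the noise level is estimated separately for each source, a noisy LF source no longer contaminates the noise estimate used for the HF data.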
  2. Federated learning at edge systems not only mitigates privacy concerns by keeping data localized but also leverages edge computing resources to enable real-time AI inference and decision-making. In a blockchain-based federated learning framework over edge clouds, edge servers as clients can contribute private data or computing resources to the overall training or mining task for secure model aggregation. To overcome the impractical assumption that edge servers will voluntarily join training or mining, it is crucial to design an incentive mechanism that motivates edge servers to achieve optimal training and mining outcomes. In this paper, we investigate the incentive mechanism design for a semi-asynchronous blockchain-based federated edge learning system. We model the resource pricing mechanism among edge servers and task publishers as a Stackelberg game and prove the existence and uniqueness of a Nash equilibrium in such a game. We then propose an iterative algorithm based on the Alternating Direction Method of Multipliers (ADMM) to achieve the optimal strategies for each participating edge server. Finally, our simulation results verify the convergence and efficiency of our proposed scheme. 
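For intuition about the leader-follower structure, here is an illustrative toy (not the paper's utility model or its ADMM algorithm): a task publisher announces a per-unit reward, each edge server best-responds with a closed-form contribution, and the publisher searches its one-dimensional strategy for the Stackelberg optimum. All utility functions and parameters below are assumptions for illustration.

```python
# Toy Stackelberg pricing game: leader sets reward r, followers best-respond.
import numpy as np

costs = np.array([1.0, 1.5, 2.0])          # per-server quadratic cost coefficients a_i

def follower_response(r):
    # Each server maximizes r * c_i - a_i * c_i**2, giving c_i = r / (2 * a_i).
    return r / (2.0 * costs)

def leader_utility(r, value=3.0):
    c = follower_response(r)
    # Concave valuation of total contribution minus the payments made.
    return value * np.sum(np.sqrt(c)) - r * np.sum(c)

# Leader anticipates the followers' responses and optimizes its own strategy.
grid = np.linspace(0.01, 5.0, 500)
r_star = grid[np.argmax([leader_utility(r) for r in grid])]
print("leader reward:", round(float(r_star), 3),
      "follower contributions:", follower_response(r_star))
```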
  3. Analog circuit design requires substantial human expertise and involvement, which is a significant roadblock to design productivity. Bayesian Optimization (BO), a popular machine-learning-based optimization strategy, has been leveraged to automate analog design given its applicability across various circuit topologies and technologies. Traditional BO methods employ black-box Gaussian process surrogate models and optimized labeled data queries to find solutions by trading off between exploration and exploitation. However, the search for the optimal design solution in BO can be expensive from both a computational and a data-usage point of view, particularly for high-dimensional optimization problems. This paper presents ADO-LLM, the first work integrating large language models (LLMs) with Bayesian Optimization for analog design optimization. ADO-LLM leverages the LLM's ability to infuse domain knowledge and rapidly generate viable design points, remedying BO's inefficiency in finding high-value design areas under the limited design-space coverage of BO's probabilistic surrogate model. Meanwhile, the design points sampled and evaluated in the iterative BO process provide quality demonstrations for the LLM to generate high-quality design points while leveraging its infused broad design knowledge. Furthermore, the diversity brought by BO's exploration enriches the LLM's contextual understanding and allows it to search the design space more broadly, preventing repetitive and redundant suggestions. We evaluate the proposed framework on two different types of analog circuits and demonstrate notable improvements in design efficiency and effectiveness.
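A rough sketch of such an alternating LLM/BO proposal loop, with the LLM call and circuit simulator replaced by placeholders: `llm_propose` and `simulate_circuit` are hypothetical names, and the alternation schedule and acquisition function are assumptions, not the ADO-LLM implementation.

```python
# Hybrid proposal loop: alternate LLM-style suggestions with BO acquisition.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def simulate_circuit(x):
    # Placeholder for a SPICE evaluation of a normalized design point x.
    return -float(np.sum((x - 0.3) ** 2))            # pretend figure of merit, higher is better

def llm_propose(history):
    # Placeholder for prompting an LLM with the best demonstrations so far
    # and parsing a suggested design point out of its reply.
    best_x, _ = max(history, key=lambda h: h[1])
    return np.clip(best_x + 0.05 * rng.standard_normal(best_x.shape), 0.0, 1.0)

def bo_propose(history, n_cand=256):
    X = np.array([h[0] for h in history])
    y = np.array([h[1] for h in history])
    gp = GaussianProcessRegressor(normalize_y=True, alpha=1e-6).fit(X, y)
    cand = rng.random((n_cand, X.shape[1]))
    mu, sd = gp.predict(cand, return_std=True)
    z = (mu - y.max()) / np.maximum(sd, 1e-9)
    ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    return cand[int(np.argmax(ei))]

history = [(x, simulate_circuit(x)) for x in rng.random((3, 4))]
for t in range(10):
    x = llm_propose(history) if t % 2 == 0 else bo_propose(history)
    history.append((x, simulate_circuit(x)))
print("best figure of merit:", max(h[1] for h in history))
```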
  4. We study critical systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing. These systems often support communities disproportionately affected by systemic racial, gender, or other injustices, so it is crucial to design these systems with fairness considerations in mind. To address this problem, we propose a framework for evaluating fairness in contextual resource allocation systems that is inspired by fairness metrics in machine learning. This framework can be applied to evaluate the fairness properties of a historical policy, as well as to impose constraints in the design of new (counterfactual) allocation policies. Our work culminates with a set of incompatibility results that investigate the interplay between the different fairness metrics we propose. Notably, we demonstrate that: 1) fairness in allocation and fairness in outcomes are usually incompatible; 2) policies that prioritize based on a vulnerability score will usually result in unequal outcomes across groups, even if the score is perfectly calibrated; 3) policies using contextual information beyond what is needed to characterize baseline risk and treatment effects can be fairer in their outcomes than those using just baseline risk and treatment effects; and 4) policies using group status in addition to baseline risk and treatment effects are as fair as possible given all available information. Our framework can help guide the discussion among stakeholders in deciding which fairness metrics to impose when allocating scarce resources. 
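As a concrete (entirely made-up) illustration of two of the notions contrasted above, the snippet below computes allocation rates and adverse-outcome rates by group from a tiny synthetic policy log; equalizing one of these quantities across groups generally does not equalize the other, which is the flavor of incompatibility result the abstract describes.

```python
# Toy comparison of allocation fairness vs. outcome fairness across two groups.
import numpy as np

group   = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected-group membership
treated = np.array([1, 0, 1, 0, 1, 1, 0, 0])   # received the scarce resource?
adverse = np.array([0, 1, 0, 1, 0, 0, 1, 0])   # adverse outcome afterwards?

for g in (0, 1):
    mask = group == g
    alloc_rate = treated[mask].mean()
    adverse_rate = adverse[mask].mean()
    print(f"group {g}: allocation rate {alloc_rate:.2f}, adverse-outcome rate {adverse_rate:.2f}")
```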
  5. Federated Learning (FL) enables edge devices or clients to collaboratively train machine learning (ML) models without sharing their private data. Much of the existing work in FL focuses on efficiently learning a model for a single task. In this paper, we study simultaneous training of multiple FL models using a common set of clients. The few existing simultaneous training methods employ synchronous aggregation of client updates, which can cause significant delays because large models and/or slow clients can bottleneck the aggregation. On the other hand, a naive asynchronous aggregation is adversely affected by stale client updates. We propose FedAST, a buffered asynchronous federated simultaneous training algorithm that overcomes bottlenecks from slow models and adaptively allocates client resources across heterogeneous tasks. We provide theoretical convergence guarantees of FedAST for smooth non-convex objective functions. Extensive experiments over multiple real-world datasets demonstrate that our proposed method outperforms existing simultaneous FL approaches, achieving up to 46.0% reduction in time to train multiple tasks to completion. 
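A minimal sketch of buffered asynchronous aggregation across several simultaneously trained models, with a simple staleness discount. The buffer size, weighting, and update rule are assumptions for illustration; this is not the FedAST algorithm or its adaptive client allocation.

```python
# Buffered asynchronous aggregation for several concurrently trained models.
import numpy as np

class BufferedAsyncServer:
    def __init__(self, model_dims, k=4, lr=1.0):
        self.models = {t: np.zeros(d) for t, d in model_dims.items()}
        self.version = {t: 0 for t in model_dims}      # current round per task
        self.buffers = {t: [] for t in model_dims}
        self.k, self.lr = k, lr

    def receive(self, task, delta, client_version):
        staleness = self.version[task] - client_version
        weight = 1.0 / (1.0 + staleness)               # down-weight stale updates
        self.buffers[task].append(weight * delta)
        if len(self.buffers[task]) >= self.k:          # aggregate as soon as K arrive
            self.models[task] += self.lr * np.mean(self.buffers[task], axis=0)
            self.buffers[task].clear()
            self.version[task] += 1

server = BufferedAsyncServer({"taskA": 10, "taskB": 10}, k=2)
rng = np.random.default_rng(0)
for step in range(8):                                  # clients report at arbitrary times
    task = "taskA" if step % 3 else "taskB"
    server.receive(task, rng.standard_normal(10),
                   client_version=max(0, server.version[task] - 1))
print({t: round(float(np.linalg.norm(m)), 3) for t, m in server.models.items()})
```

Because each task's buffer is flushed independently, a slow model or client delays only its own aggregation rather than blocking every task, which is the bottleneck the abstract attributes to synchronous simultaneous training.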